Merge branch 'develop' into madlittlemods/wait_for_multi_writer_stream_token

This commit is contained in:
Eric Eastwood
2026-05-13 12:20:55 -05:00
65 changed files with 1246 additions and 208 deletions
+40 -2
@@ -1,9 +1,47 @@
# Synapse 1.153.0rc2 (2026-05-13)
## Bugfixes
- Correctly handle arbitrary precision integers in `unsigned` field of events. The bug was introduced in 1.153.0rc1. ([\#19769](https://github.com/element-hq/synapse/issues/19769))
# Synapse 1.153.0rc1 (2026-05-08)
## Features
- Make ACLs apply to EDUs per [MSC4163](https://github.com/matrix-org/matrix-spec-proposals/pull/4163). ([\#18475](https://github.com/element-hq/synapse/issues/18475))
- Stabilize [MSC3266: Room summary API](https://github.com/matrix-org/matrix-spec-proposals/pull/3266), removing the experimental config flag `msc3266_enabled`. Contributed by @dasha-uwu. ([\#19720](https://github.com/element-hq/synapse/issues/19720))
- Partial [MSC4311](https://github.com/matrix-org/matrix-spec-proposals/pull/4311) implementation: `m.room.create` is now a required part of stripped `invite_state`/`knock_state`. Contributed by @FrenchGithubUser @Famedly. ([\#19722](https://github.com/element-hq/synapse/issues/19722))
- Expose `tombstoned` and `replacement_room` in room details on admin API endpoint `GET /_synapse/admin/v1/rooms/<room_id>`. Contributed by Noah Markert. ([\#19737](https://github.com/element-hq/synapse/issues/19737))
## Bugfixes
- Allow self-requested user erasure (upon account deactivation) to succeed even if Synapse has disabled profile changes. Contributed by Famedly. ([\#19398](https://github.com/element-hq/synapse/issues/19398))
- Fix Synapse not backfilling new history when attempting to use a pagination token near a backward extremity. ([\#19611](https://github.com/element-hq/synapse/issues/19611))
- Have [MSC4186: Simplified Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) return a new response immediately if a room subscription has changed and produced a new response. ([\#19714](https://github.com/element-hq/synapse/issues/19714))
- Fix a bug where, when upgrading a room to room version 12, the power level event in the old room was temporarily mutated to remove the power of the user performing the upgrade. ([\#19727](https://github.com/element-hq/synapse/issues/19727))
- Fix packaging for Fedora and EPEL, broken by an unnecessary bump of the `authlib` minimum version requirement in `pyproject.toml`. Contributed by Oleg Girko. ([\#19742](https://github.com/element-hq/synapse/issues/19742))
## Improved Documentation
- Add warning about known problems when configuring `use_frozen_dicts`. ([\#19711](https://github.com/element-hq/synapse/issues/19711))
## Internal Changes
- Port `Event.signatures` field to Rust. ([\#19706](https://github.com/element-hq/synapse/issues/19706))
- Port `Event.unsigned` field to Rust. ([\#19708](https://github.com/element-hq/synapse/issues/19708))
- Add a Rust canonical JSON serializer. ([\#19739](https://github.com/element-hq/synapse/issues/19739), [\#19763](https://github.com/element-hq/synapse/issues/19763))
- Configure Dependabot to only update Python dependencies in the lockfile, unless widening upper bounds. ([\#19743](https://github.com/element-hq/synapse/issues/19743))
- Reduce `WORKER_LOCK_MAX_RETRY_INTERVAL` to 5 seconds to reduce idle time after the lock is released. ([\#19755](https://github.com/element-hq/synapse/issues/19755))
- Force keyword-only arguments for `Duration` so time units have to be specified. ([\#19756](https://github.com/element-hq/synapse/issues/19756))
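The two `WorkerLock` changes above (capping the timeout interval, then reducing `WORKER_LOCK_MAX_RETRY_INTERVAL` to 5 seconds) amount to a capped retry backoff. A minimal sketch of that pattern follows; the constant values and the helper function are hypothetical illustrations, not Synapse's actual implementation:

```python
import random

# Hypothetical constants mirroring the changelog: retries back off
# exponentially but are capped, so a worker never idles for long
# after the lock is released.
WORKER_LOCK_INITIAL_RETRY_INTERVAL = 0.1  # seconds
WORKER_LOCK_MAX_RETRY_INTERVAL = 5.0      # seconds (reduced from 60)

def next_retry_interval(attempt: int) -> float:
    """Exponential backoff with jitter, capped at the maximum interval."""
    interval = WORKER_LOCK_INITIAL_RETRY_INTERVAL * (2 ** attempt)
    interval = min(interval, WORKER_LOCK_MAX_RETRY_INTERVAL)
    # Jitter avoids many workers retrying in lockstep.
    return interval * random.uniform(0.9, 1.0)
```

With the cap in place, even a worker that has been retrying for a long time wakes up within a few seconds of the lock becoming free.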
# Synapse 1.152.1 (2026-05-07)
## Security Fixes
- Prevent CPU starvation (Denial of Service) under worker lock contention, additionally capping the `WorkerLock` timeout interval to a maximum of 60 seconds. Contributed by Famedly. ([\#19394](https://github.com/element-hq/synapse/issues/19394), ELEMENTSEC-2026-1706, [GHSA-8q93-326v-3m7g](https://github.com/element-hq/synapse/security/advisories/GHSA-8q93-326v-3m7g), CVE pending)
- Prevent pagination ending when a page is full of rejected events. (ELEMENTSEC-2025-1636, [GHSA-6qf2-7x63-mm6v](https://github.com/element-hq/synapse/security/advisories/GHSA-6qf2-7x63-mm6v), CVE pending)
- Prevent CPU starvation (Denial of Service) under worker lock contention, additionally capping the `WorkerLock` timeout interval to a maximum of 60 seconds. Contributed by Famedly. ([\#19394](https://github.com/element-hq/synapse/issues/19394), ELEMENTSEC-2026-1706, [GHSA-8q93-326v-3m7g](https://github.com/element-hq/synapse/security/advisories/GHSA-8q93-326v-3m7g), CVE-2026-45078)
- Prevent pagination ending when a page is full of rejected events. (ELEMENTSEC-2025-1636, [GHSA-6qf2-7x63-mm6v](https://github.com/element-hq/synapse/security/advisories/GHSA-6qf2-7x63-mm6v), CVE-2026-45076)
# Synapse 1.152.0 (2026-04-28)
+15 -44
@@ -29,12 +29,6 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
[[package]]
name = "autocfg"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
[[package]]
name = "base64"
version = "0.22.1"
@@ -646,12 +640,6 @@ dependencies = [
"hashbrown",
]
[[package]]
name = "indoc"
version = "2.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4c7245a08504955605670dbf141fceab975f15ca21570696aebe9d2e71576bd"
[[package]]
name = "ipnet"
version = "2.11.0"
@@ -735,15 +723,6 @@ version = "2.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"
[[package]]
name = "memoffset"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "488016bfae457b036d996092f6cb448677611ce4449e970ceaf42695203f218a"
dependencies = [
"autocfg",
]
[[package]]
name = "mime"
version = "0.3.17"
@@ -821,37 +800,34 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.27.2"
version = "0.28.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab53c047fcd1a1d2a8820fe84f05d6be69e9526be40cb03b73f86b6b03e6d87d"
checksum = "91fd8e38a3b50ed1167fb981cd6fd60147e091784c427b8f7183a7ee32c31c12"
dependencies = [
"anyhow",
"bytes",
"indoc",
"libc",
"memoffset",
"once_cell",
"portable-atomic",
"pyo3-build-config",
"pyo3-ffi",
"pyo3-macros",
"unindent",
]
[[package]]
name = "pyo3-build-config"
version = "0.27.2"
version = "0.28.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b455933107de8642b4487ed26d912c2d899dec6114884214a0b3bb3be9261ea6"
checksum = "e368e7ddfdeb98c9bca7f8383be1648fd84ab466bf2bc015e94008db6d35611e"
dependencies = [
"target-lexicon",
]
[[package]]
name = "pyo3-ffi"
version = "0.27.2"
version = "0.28.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c85c9cbfaddf651b1221594209aed57e9e5cff63c4d11d1feead529b872a089"
checksum = "7f29e10af80b1f7ccaf7f69eace800a03ecd13e883acfacc1e5d0988605f651e"
dependencies = [
"libc",
"pyo3-build-config",
@@ -870,9 +846,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.27.2"
version = "0.28.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a5b10c9bf9888125d917fb4d2ca2d25c8df94c7ab5a52e13313a07e050a3b02"
checksum = "df6e520eff47c45997d2fc7dd8214b25dd1310918bbb2642156ef66a67f29813"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -882,9 +858,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.27.2"
version = "0.28.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03b51720d314836e53327f5871d4c0cfb4fb37cc2c4a11cc71907a86342c40f9"
checksum = "c4cdc218d835738f81c2338f822078af45b4afdf8b2e33cbb5916f108b813acb"
dependencies = [
"heck",
"proc-macro2",
@@ -895,12 +871,13 @@ dependencies = [
[[package]]
name = "pythonize"
version = "0.27.0"
version = "0.28.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3a8f29db331e28c332c63496cfcbb822aca3d7320bc08b655d7fd0c29c50ede"
checksum = "0b79f670c9626c8b651c0581011b57b6ba6970bb69faf01a7c4c0cfc81c43f95"
dependencies = [
"pyo3",
"serde",
"serde_json",
]
[[package]]
@@ -1391,9 +1368,9 @@ dependencies = [
[[package]]
name = "target-lexicon"
version = "0.13.2"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e502f78cdbb8ba4718f566c418c52bc729126ffd16baee5baa718cf25dd5a69a"
checksum = "adb6935a6f5c20170eeceb1a3835a49e12e19d792f6dd344ccc76a985ca5a6ca"
[[package]]
name = "thiserror"
@@ -1569,12 +1546,6 @@ version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
[[package]]
name = "unindent"
version = "0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7264e107f553ccae879d21fbea1d6724ac785e8c3bfc762137959b5802826ef3"
[[package]]
name = "untrusted"
version = "0.9.0"
-1
@@ -1 +0,0 @@
Make ACLs apply to EDUs per [MSC4163](https://github.com/matrix-org/matrix-spec-proposals/pull/4163).
-1
@@ -1 +0,0 @@
Capped the `WorkerLock` timeout interval to a maximum of 60 seconds to prevent excessively long timeouts. Contributed by Famedly.
-1
@@ -1 +0,0 @@
Allow user-requested erasure to succeed even if Synapse has disabled profile changes. Contributed by Famedly.
-1
@@ -1 +0,0 @@
Fix Synapse not backfilling new history when attempting to use a pagination token near a backward extremity.
-1
@@ -1 +0,0 @@
Port `Event.signatures` field to Rust.
-1
@@ -1 +0,0 @@
Port `Event.unsigned` field to Rust.
-1
@@ -1 +0,0 @@
Add warning about known problems when configuring `use_frozen_dicts`.
-1
@@ -1 +0,0 @@
Have SSS return a new response immediately if a room subscription has changed and produced a new response.
+3
@@ -0,0 +1,3 @@
Add support for "MSC4452 Preview URL capabilities API" which exposes a `io.element.msc4452.preview_url` capability.
If `experimental_features.msc4452_enabled` is `true`, the `/_matrix/(client/v1/media|media/v3)/preview_url` endpoint
now responds with a 403 status code when the capability is disabled.
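Under the behaviour described in this fragment, the gating logic amounts to: experimental flag on plus capability disabled results in a 403. A hypothetical sketch of that decision (the helper name and the `{"enabled": ...}` capability shape are illustrative assumptions, not the actual Synapse code):

```python
# Capability identifier from MSC4452, as stated above.
PREVIEW_URL_CAPABILITY = "io.element.msc4452.preview_url"

def preview_url_status(msc4452_enabled: bool, capabilities: dict) -> int:
    """Return the HTTP status a /preview_url request would receive.

    `capabilities` maps capability names to dicts like {"enabled": bool};
    an absent capability is assumed enabled.
    """
    if not msc4452_enabled:
        return 200  # feature flag off: endpoint behaves as before
    cap = capabilities.get(PREVIEW_URL_CAPABILITY, {"enabled": True})
    return 200 if cap.get("enabled", True) else 403
```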
-1
@@ -1 +0,0 @@
Stabilize MSC3266, removing the experimental config flag `msc3266_enabled`. Add support for stable room summary endpoints. Contributed by @dasha-uwu.
-1
@@ -1 +0,0 @@
Partial [MSC4311](https://github.com/matrix-org/matrix-spec-proposals/pull/4311) implementation: `m.room.create` is now a required part of stripped `invite_state`/`knock_state`. Contributed by @FrenchGithubUser @Famedly.
+1
@@ -0,0 +1 @@
Port `Event.content` field to Rust.
-1
@@ -1 +0,0 @@
Fix a bug where, when upgrading a room to v12, the power level event in the old room was mutated to remove the power of the user performing the upgrade.
-1
@@ -1 +0,0 @@
Expose `tombstoned` and `replacement_room` in room details on admin API endpoint `GET /_synapse/admin/v1/rooms/<room_id>`. Contributed by Noah Markert.
-1
@@ -1 +0,0 @@
Add a Rust canonical JSON serializer.
-1
@@ -1 +0,0 @@
Fix packaging for Fedora and EPEL, broken by an unnecessary bump of the `authlib` minimum version requirement in `pyproject.toml`. Contributed by Oleg Girko.
-1
@@ -1 +0,0 @@
Configure Dependabot to only update Python dependencies in the lockfile, unless widening upper bounds.
-1
@@ -1 +0,0 @@
Reduce `WORKER_LOCK_MAX_RETRY_INTERVAL` to 5 seconds to reduce idle time after the lock is released.
-1
@@ -1 +0,0 @@
Force keyword-only args for `Duration` (prevent footgun) so people have to specify which time unit they want to use.
+1
@@ -0,0 +1 @@
Update `WorkerLock` tests to better stress the `WORKER_LOCK_MAX_RETRY_INTERVAL`.
+12
@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.153.0~rc2) stable; urgency=medium
* New Synapse release 1.153.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 13 May 2026 12:00:39 +0100
matrix-synapse-py3 (1.153.0~rc1) stable; urgency=medium
* New Synapse release 1.153.0rc1.
-- Synapse Packaging team <packages@matrix.org> Fri, 08 May 2026 13:05:08 +0100
matrix-synapse-py3 (1.152.1) stable; urgency=medium
* New Synapse release 1.152.1.
+1 -1
@@ -1,6 +1,6 @@
[project]
name = "matrix-synapse"
version = "1.152.1"
version = "1.153.0rc2"
description = "Homeserver for the Matrix decentralised comms protocol"
readme = "README.rst"
authors = [
+10 -4
@@ -30,7 +30,7 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.27.2", features = [
pyo3 = { version = "0.28.3", features = [
"macros",
"anyhow",
"abi3",
@@ -39,12 +39,18 @@ pyo3 = { version = "0.27.2", features = [
# https://docs.rs/pyo3/latest/pyo3/bytes/index.html
"bytes",
] }
pyo3-log = "0.13.1"
pythonize = "0.27.0"
pyo3-log = "0.13.3"
pythonize = { version = "0.28.0", features = ["arbitrary_precision"] }
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive", "rc"] }
serde_json = { version = "1.0.85", features = ["raw_value"] }
serde_json = { version = "1.0.85", features = [
"raw_value",
# We need to be able to parse arbitrary precision numbers, as some numbers
# in the database may be out of range of i64 (as Python uses arbitrary
# precision integers).
"arbitrary_precision",
] }
ulid = "1.1.2"
icu_segmenter = "2.0.0"
reqwest = { version = "0.12.15", default-features = false, features = [
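The `arbitrary_precision` comment in the hunk above is about matching Python semantics: Python integers have no fixed width, so values stored by Synapse can fall outside serde_json's default i64 range. A quick illustration in plain Python (nothing Synapse-specific) of why the feature is needed on the Rust side:

```python
import json

# Python ints are arbitrary precision, so JSON round-trips values that
# are far beyond i64's maximum of 2**63 - 1 without any loss.
big = 10**38
assert big > 2**63 - 1

encoded = json.dumps({"value": big})
assert json.loads(encoded)["value"] == big
```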
+1 -1
@@ -47,7 +47,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
}
#[derive(Debug, Clone)]
#[pyclass(frozen)]
#[pyclass(frozen, skip_from_py_object)]
pub struct ServerAclEvaluator {
allow_ip_literals: bool,
allow: Vec<Regex>,
+119 -15
@@ -29,7 +29,7 @@ use std::{
io::{self, Write},
};
use serde::ser::SerializeMap;
use serde::{ser::SerializeMap, Serializer as _};
use serde::{
ser::{Error as _, SerializeStruct},
Serialize,
@@ -37,7 +37,7 @@ use serde::{
use serde_json::{
ser::{Formatter, Serializer},
value::RawValue,
Value,
Number, Value,
};
/// The minimum integer that can be used in canonical JSON.
@@ -46,6 +46,12 @@ pub const MIN_VALID_INTEGER: i64 = -(2i64.pow(53)) + 1;
/// The maximum integer that can be used in canonical JSON.
pub const MAX_VALID_INTEGER: i64 = (2i64.pow(53)) - 1;
/// A token used by `serde_json` to identify its internal `Number` type when the
/// `arbitrary_precision` feature is enabled. This is a copy from serde_json's
/// internal `TOKEN` for `Number`, which unfortunately isn't exported by the
/// crate.
const SERDE_JSON_NUMBER_TOKEN: &str = "$serde_json::private::Number";
/// Options to control how strict JSON canonicalization is.
#[derive(Clone, Debug)]
pub struct CanonicalizationOptions {
@@ -236,7 +242,7 @@ where
type SerializeMap = CanonicalSerializeMap<'a, W>;
type SerializeStruct = CanonicalSerializeMap<'a, W>;
type SerializeStruct = CanonicalSerializeStruct<'a, W>;
type SerializeStructVariant =
<&'a mut Serializer<W, CanonicalFormatter> as serde::Serializer>::SerializeStructVariant;
@@ -426,7 +432,7 @@ where
fn serialize_struct(
self,
name: &'static str,
_len: usize,
len: usize,
) -> Result<Self::SerializeStruct, Self::Error> {
// We want to disallow `RawValue` as we don't know if its contents is
// canonical JSON.
@@ -436,10 +442,7 @@ where
if name == "$serde_json::private::RawValue" {
return Err(Self::Error::custom("`RawValue` is not supported"));
}
Ok(CanonicalSerializeMap::new(
&mut self.inner,
self.options.clone(),
))
CanonicalSerializeStruct::new(name, len, &mut self.inner, self.options.clone())
}
fn serialize_struct_variant(
@@ -554,7 +557,42 @@ where
}
}
impl<'a, W> SerializeStruct for CanonicalSerializeMap<'a, W>
/// A helper type for [`CanonicalSerializer`] that serializes structs in
/// lexicographic order.
#[doc(hidden)]
pub struct CanonicalSerializeStruct<'a, W: Write> {
name: &'static str,
// We buffer up the key and serialized value for each field we see.
// The BTreeMap will then serialize in lexicographic order.
map: BTreeMap<&'static str, Box<RawValue>>,
options: CanonicalizationOptions,
// The serializer to use to write the sorted map to.
struct_serializer:
<&'a mut Serializer<W, CanonicalFormatter> as serde::Serializer>::SerializeStruct,
}
impl<'a, W> CanonicalSerializeStruct<'a, W>
where
W: Write,
{
fn new(
name: &'static str,
len: usize,
ser: &'a mut Serializer<W, CanonicalFormatter>,
options: CanonicalizationOptions,
) -> Result<Self, serde_json::Error> {
let struct_serializer = ser.serialize_struct(name, len)?;
Ok(Self {
name,
map: BTreeMap::new(),
options,
struct_serializer,
})
}
}
impl<'a, W> SerializeStruct for CanonicalSerializeStruct<'a, W>
where
W: Write,
{
@@ -566,20 +604,69 @@ where
where
T: Serialize + ?Sized,
{
let key_string = key.to_string();
// Check if this is the special case of `SERDE_JSON_NUMBER_TOKEN`,
// which is used when serializing numbers with the `arbitrary_precision`
// feature. If so, we can just serialize it directly without
// canonicalizing it first, as `serde_json` will have already serialized
// it in a canonical way.
if key == SERDE_JSON_NUMBER_TOKEN && self.name == SERDE_JSON_NUMBER_TOKEN {
if self.options.enforce_int_range {
// We need to check that the number is in the valid range, as
// `serde_json` won't have done this for us as we're using the
// `arbitrary_precision` feature.
// The value here will be something that serializes to a JSON
// string containing the number, so we first serialize it to a
// Value and pull the string out, then parse it as a `Number`.
let serde_val = serde_json::to_value(value)?;
let serde_json::Value::String(number_str) = serde_val else {
return Err(serde_json::Error::custom("invalid number"));
};
let number: Number = number_str
.parse()
.map_err(|_| serde_json::Error::custom("invalid number"))?;
// Now check that the number is an integer in the valid range.
if let Some(int) = number.as_i64() {
assert_integer_in_range(int)?;
} else {
// Can't be cast to an i64, so it must be out of range.
return Err(serde_json::Error::custom("integer out of range"));
}
}
self.struct_serializer.serialize_field(key, value)?;
return Ok(());
}
// We serialize the value canonically, then store it as a `RawValue` in
// the buffer map.
let value_string = to_string_canonical(value, self.options.clone())?;
self.map
.insert(key_string, RawValue::from_string(value_string)?);
self.map.insert(key, RawValue::from_string(value_string)?);
Ok(())
}
fn end(self) -> Result<Self::Ok, Self::Error> {
self.map.serialize(self.ser)?;
fn end(mut self) -> Result<Self::Ok, Self::Error> {
if self.name == SERDE_JSON_NUMBER_TOKEN {
// Map must be empty in this case, as `SERDE_JSON_NUMBER_TOKEN`
// only has one field and we've handled it in `serialize_field`.
if !self.map.is_empty() {
return Err(Self::Error::custom(format!(
"unexpected fields in `{}`",
SERDE_JSON_NUMBER_TOKEN
)));
}
}
for (key, value) in self.map {
self.struct_serializer.serialize_field(key, &value)?;
}
SerializeStruct::end(self.struct_serializer)?;
Ok(())
}
@@ -737,6 +824,23 @@ mod tests {
assert!(to_string_canonical(&-(2i128.pow(60)), CanonicalizationOptions::strict()).is_err());
}
#[test]
fn bigints() {
// Create a `serde_json::Number` that is too big to be represented as an
// i64, but can be represented as a string.
let bigint_string = "10000000000000000000000000000000000000";
let value: serde_json::Number = bigint_string.parse().unwrap();
// This should work with relaxed option.
assert_eq!(
to_string_canonical(&value, CanonicalizationOptions::relaxed()).unwrap(),
bigint_string
);
// But should fail with strict option, as it's out of range.
assert!(to_string_canonical(&value, CanonicalizationOptions::strict()).is_err());
}
#[test]
fn backwards_compatibility() {
assert_eq!(
@@ -824,7 +928,7 @@ mod tests {
// Serialize the keys in the reverse order.
for (c, _) in ascii_order.iter().rev() {
map_serializer.serialize_entry(c.into(), &1).unwrap();
map_serializer.serialize_entry(c, &1).unwrap();
}
SerializeMap::end(map_serializer).unwrap();
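The canonicalization rules exercised by the tests above (lexicographically sorted keys, no insignificant whitespace, and integers limited to ±(2^53 − 1) in strict mode) can be approximated in a few lines of Python. This is an illustrative sketch under those stated rules, not Synapse's actual implementation:

```python
import json

# Same bounds as MIN_VALID_INTEGER / MAX_VALID_INTEGER in the Rust code.
MAX_VALID_INTEGER = 2**53 - 1
MIN_VALID_INTEGER = -(2**53) + 1

def to_canonical_json(value, enforce_int_range: bool = True) -> str:
    """Sorted keys, compact separators; optionally enforce the int range."""
    if enforce_int_range:
        def check(v):
            if isinstance(v, bool):
                return  # bools are ints in Python, but not JSON integers
            if isinstance(v, int) and not MIN_VALID_INTEGER <= v <= MAX_VALID_INTEGER:
                raise ValueError(f"integer out of range: {v}")
            if isinstance(v, dict):
                for x in v.values():
                    check(x)
            elif isinstance(v, list):
                for x in v:
                    check(x)
        check(value)
    return json.dumps(value, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
```

As in the `bigints` test above, a number like 10**38 serializes under the relaxed option but is rejected when the range is enforced.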
+1 -1
@@ -476,7 +476,7 @@ impl EventInternalMetadataInner {
}
}
#[pyclass(frozen)]
#[pyclass(frozen, skip_from_py_object)]
#[derive(Clone)]
pub struct EventInternalMetadata {
inner: Arc<RwLock<EventInternalMetadataInner>>,
+488
@@ -0,0 +1,488 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2026 Element Creations Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*
*/
use std::{collections::BTreeMap, sync::Arc};
use pyo3::{
exceptions::{PyKeyError, PyTypeError},
pyclass, pymethods,
types::{
PyAnyMethods, PyIterator, PyList, PyListMethods, PyMapping, PySet, PySetMethods, PyTuple,
},
Bound, IntoPyObject, IntoPyObjectExt, Py, PyAny, PyResult, Python,
};
use pythonize::{depythonize, pythonize};
use serde::{Deserialize, Serialize};
/// A generic class for representing immutable JSON objects.
///
/// This is used for representing the `content` field of an event.
///
/// The basic architecture here is to optimize for two things:
/// 1. Fast access of top-level keys (e.g. `event.content["key"]`)
/// 2. Pure Rust implementation.
#[derive(Serialize, Deserialize, Clone, Default)]
#[pyclass(mapping, frozen)]
#[serde(transparent)]
pub struct JsonObject {
object: Arc<BTreeMap<Box<str>, serde_json::Value>>,
}
#[pymethods]
impl JsonObject {
#[new]
#[pyo3(signature = (content = None))]
fn new<'a, 'py>(content: Option<&'a Bound<'py, PyAny>>) -> PyResult<Self> {
let Some(content) = content else {
// If no content is provided, default to an empty object.
return Ok(Self::default());
};
if let Ok(content) = content.cast::<JsonObject>() {
// If the content is already a JsonObject, we can just clone the
// underlying map (this is safe as the object is immutable).
return Ok(JsonObject {
object: content.get().object.clone(),
});
}
let Ok(content) = content.cast::<PyMapping>() else {
return Err(PyTypeError::new_err("'content' must be a mapping"));
};
// Use pythonize to try and convert from a mapping.
let content = depythonize(content)?;
Ok(Self {
object: Arc::new(content),
})
}
fn __len__(&self) -> usize {
self.object.len()
}
fn __contains__(&self, key: &Bound<'_, PyAny>) -> bool {
// Match dict semantics: a non-string key is simply "not in" the
// mapping, rather than raising TypeError.
let Ok(key_str) = key.extract::<&str>() else {
return false;
};
self.object.contains_key(key_str)
}
fn __getitem__<'py>(
&self,
py: Python<'py>,
key: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PyAny>> {
// We only ever store string keys, so any non-string lookup is a miss.
// Raise KeyError (not TypeError) to match dict's behaviour.
let Ok(key_str) = key.extract::<&str>() else {
return Err(PyKeyError::new_err(key.unbind()));
};
let Some(value) = self.object.get(key_str) else {
return Err(PyKeyError::new_err(key.unbind()));
};
Ok(pythonize(py, value)?)
}
fn __iter__<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyIterator>> {
// The easiest way to get an iterator over the keys is to create a
// temporary list and call `iter()` on it. This is not the most
// efficient approach, but is much less boilerplate than implementing a
// custom iterator type. Since the keys are typically small in number
// this should be fine in practice.
let list = PyList::new(py, self.object.keys().map(Box::as_ref))?;
PyIterator::from_object(&list)
}
// The view classes below each hold a `JsonObject` clone. This is cheap
// because the underlying map is behind an `Arc`, and lets the view outlive
// the originating object (matching dict_keys/values/items semantics in
// Python, which also keep the dict alive).
fn keys(&self) -> JsonObjectKeysView {
JsonObjectKeysView {
object: self.clone(),
}
}
fn values(&self) -> JsonObjectValuesView {
JsonObjectValuesView {
object: self.clone(),
}
}
fn items(&self) -> JsonObjectItemsView {
JsonObjectItemsView {
object: self.clone(),
}
}
#[pyo3(signature = (key, default=None))]
fn get<'py>(
&self,
py: Python<'py>,
key: Bound<'_, PyAny>,
default: Option<Bound<'py, PyAny>>,
) -> PyResult<Bound<'py, PyAny>> {
// Non-string keys can never match, so treat them as a miss and return
// the caller-supplied default rather than raising.
let Ok(key_str) = key.extract::<&str>() else {
return Ok(default.into_pyobject(py)?);
};
match self.object.get(key_str) {
Some(value) => Ok(pythonize(py, value)?),
None => Ok(default.into_pyobject(py)?),
}
}
fn __eq__(&self, other: Bound<'_, PyAny>) -> bool {
// We support equality against any Python mapping (e.g. plain dicts),
// so callers can swap a JsonObject in without rewriting comparisons.
let Ok(mapping) = other.cast::<PyMapping>() else {
return false;
};
let Ok(other_len) = mapping.len() else {
return false;
};
if other_len != self.object.len() {
return false;
}
// We know the "other" is a mapping with the same number of fields as
// us. So we can convert it into a JsonObject and compare the underlying
// maps.
let Ok(other_dict) = depythonize(&other) else {
return false;
};
*self.object == other_dict
}
// Since we implement comparisons with other types, we need to disable
// hashing to avoid violating the invariant that equal objects must have the
// same hash.
//
// Alternatively, we could only allow comparisons with other JsonObjects and
// allow hashing, but a) its nicer to be able to compare with arbitrary
// mappings and b) we don't really need hashing for these objects.
#[classattr]
const __hash__: Option<Py<PyAny>> = None;
fn __str__(&self) -> String {
serde_json::to_string(&self.object).expect("Value should be serializable")
}
fn __repr__(&self) -> String {
format!("JsonObject({})", self.__str__())
}
}
/// Helper class returned by `JsonObject.keys()` to act as a view into the keys
/// of the object.
///
/// This needs to both be iterable *and* operate like a set.
#[pyclass(frozen)]
#[derive(Clone)]
pub struct JsonObjectKeysView {
object: JsonObject,
}
#[pymethods]
impl JsonObjectKeysView {
fn __iter__<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyIterator>> {
// Create the iterator by making a temporary python list of the keys and
// calling `iter()` on it.
let list = PyList::new(py, self.object.object.keys().map(Box::as_ref))?;
PyIterator::from_object(&list)
}
fn __len__(&self) -> usize {
self.object.__len__()
}
fn __contains__(&self, key: &Bound<'_, PyAny>) -> bool {
self.object.__contains__(key)
}
fn __eq__(&self, other: Bound<'_, PyAny>) -> bool {
let other_len = match other.len() {
Ok(len) => len,
Err(_) => return false,
};
if self.object.__len__() != other_len {
return false;
}
for key in self.object.object.keys() {
if !matches!(other.contains(key.as_ref()), Ok(true)) {
return false;
}
}
true
}
// The set operators below match the behaviour of `dict.keys()` in Python:
// they accept any object that supports `__contains__` (for `&`) or is
// iterable (for `|`, `-`, `^`), not just sets. Each returns a fresh
// `PySet` so the caller gets a normal mutable Python set back.
//
// The `__r*__` variants are reflected operators, called by Python when
// the left-hand operand doesn't know how to combine with us. Since these
// operations are commutative for sets (or symmetric in the case of `^`),
// they just delegate. The asymmetric ops (`-`) need a separate impl.
fn __and__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
// Iterate our (typically small) key set and probe `other`, which may
// be any container supporting `__contains__`.
let mut result = Vec::new();
for key in self.object.object.keys() {
if matches!(other.contains(key.as_ref()), Ok(true)) {
result.push(key.as_ref());
}
}
PySet::new(py, &result)
}
fn __rand__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
self.__and__(py, other)
}
fn __or__<'py>(&self, py: Python<'py>, other: Bound<'_, PyAny>) -> PyResult<Bound<'py, PySet>> {
// Union needs to enumerate both sides, so the right operand must be
// iterable (a bare `__contains__` is not enough).
let Ok(other_iter) = other.try_iter() else {
return Err(PyTypeError::new_err("Right operand must be iterable"));
};
let result = PySet::new(py, self.object.object.keys().map(Box::as_ref))?;
// PySet handles dedup, so we can blindly add every element from the
// other iterable.
for item in other_iter {
let item = item?;
result.add(item)?;
}
Ok(result)
}
fn __ror__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
self.__or__(py, other)
}
fn __sub__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
// `self - other`: keep our keys that are not in `other`. Only `other`
// needs to support `__contains__` here.
let mut result = Vec::new();
for key in self.object.object.keys() {
if matches!(other.contains(key.as_ref()), Ok(true)) {
continue;
}
result.push(key.as_ref());
}
PySet::new(py, &result)
}
fn __rsub__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
// `other - self`: we need to enumerate `other`, so it must be
// iterable. Not symmetric with `__sub__`, hence a separate impl.
let Ok(other_iter) = other.try_iter() else {
return Err(PyTypeError::new_err("Left operand must be iterable"));
};
let result = PySet::empty(py)?;
for item in other_iter {
let item = item?;
if self.object.__contains__(&item) {
continue;
}
result.add(item)?;
}
Ok(result)
}
fn __xor__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
// Symmetric difference: elements in exactly one side. Implemented as
// two filtered passes — one over our keys, one over `other`.
let Ok(other_iter) = other.try_iter() else {
return Err(PyTypeError::new_err("Right operand must be iterable"));
};
let result = PySet::empty(py)?;
for key in self.object.object.keys() {
if matches!(other.contains(key.as_ref()), Ok(true)) {
continue;
}
result.add(key.as_ref())?;
}
for item in other_iter {
let item = item?;
if self.object.__contains__(&item) {
continue;
}
result.add(item)?;
}
Ok(result)
}
fn __rxor__<'py>(
&self,
py: Python<'py>,
other: Bound<'_, PyAny>,
) -> PyResult<Bound<'py, PySet>> {
self.__xor__(py, other)
}
fn isdisjoint(&self, other: Bound<'_, PyAny>) -> bool {
for key in self.object.object.keys() {
if matches!(other.contains(key.as_ref()), Ok(true)) {
return false;
}
}
true
}
}
/// Helper class returned by `JsonObject.values()` to act as a view into the
/// values of the object.
#[pyclass(frozen)]
#[derive(Clone)]
pub struct JsonObjectValuesView {
object: JsonObject,
}
#[pymethods]
impl JsonObjectValuesView {
fn __iter__<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyIterator>> {
// Create the iterator by making a temporary python list of the keys and
// calling `iter()` on it.
let list = PyList::empty(py);
for v in self.object.object.values() {
let py_value = pythonize(py, v)?.into_bound_py_any(py)?;
list.append(py_value)?;
}
PyIterator::from_object(&list)
}
fn __len__(&self) -> usize {
self.object.__len__()
}
fn __contains__(&self, other: Bound<'_, PyAny>) -> bool {
// We compare by JSON equality rather than Python identity: convert
// the candidate into a `serde_json::Value` once and scan our values.
// Anything that fails to depythonize cannot match by definition.
let other_value: serde_json::Value = match depythonize(&other) {
Ok(v) => v,
Err(_) => return false,
};
self.object.object.values().any(|v| *v == other_value)
}
}
/// Helper class returned by `JsonObject.items()` to act as a view into the
/// items of the object.
///
/// Technically this should be a set-like view according to Python semantics,
/// unless the values are unhashable. Since the values are immutable we could
/// support it, but it's more work and nobody seems to actually use the set
/// operations on `dict_items` in practice.
#[pyclass(frozen)]
#[derive(Clone)]
pub struct JsonObjectItemsView {
object: JsonObject,
}
#[pymethods]
impl JsonObjectItemsView {
fn __iter__<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyIterator>> {
// Create the iterator by making a temporary python list of the items and
// calling `iter()` on it.
let list = PyList::empty(py);
for (k, v) in self.object.object.iter() {
let py_key = k.as_ref().into_bound_py_any(py)?;
let py_value = pythonize(py, v)?.into_bound_py_any(py)?;
let item = PyTuple::new(py, [py_key, py_value])?;
list.append(item)?;
}
PyIterator::from_object(&list)
}
fn __len__(&self) -> usize {
self.object.__len__()
}
fn __contains__(&self, other: Bound<'_, PyAny>) -> bool {
// `(key, value) in items` — only a 2-tuple can possibly match. We
// look the key up directly (avoiding a full scan) and then compare
// the stored value against `value` using JSON equality.
let Ok((key, value)) = other.extract::<(Bound<'_, PyAny>, Bound<'_, PyAny>)>() else {
return false;
};
let Ok(key_str) = key.extract::<&str>() else {
return false;
};
let Some(stored) = self.object.object.get(key_str) else {
return false;
};
let other_value: serde_json::Value = match depythonize(&value) {
Ok(v) => v,
Err(_) => return false,
};
*stored == other_value
}
}
+11 -1
@@ -21,21 +21,31 @@
//! Classes for representing Events.
use pyo3::{
types::{PyAnyMethods, PyModule, PyModuleMethods},
types::{PyAnyMethods, PyMapping, PyModule, PyModuleMethods},
wrap_pyfunction, Bound, PyResult, Python,
};
pub mod filter;
mod internal_metadata;
mod json_object;
pub mod signatures;
pub mod unsigned;
use json_object::JsonObject;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
// Register the `JsonObject` class as a `Mapping` so that `isinstance` works.
PyMapping::register::<JsonObject>(py)?;
let child_module = PyModule::new(py, "events")?;
child_module.add_class::<internal_metadata::EventInternalMetadata>()?;
child_module.add_class::<signatures::Signatures>()?;
child_module.add_class::<unsigned::Unsigned>()?;
child_module.add_class::<JsonObject>()?;
child_module.add_class::<json_object::JsonObjectKeysView>()?;
child_module.add_class::<json_object::JsonObjectValuesView>()?;
child_module.add_class::<json_object::JsonObjectItemsView>()?;
child_module.add_function(wrap_pyfunction!(filter::event_visible_to_server_py, m)?)?;
m.add_submodule(&child_module)?;
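`PyMapping::register` has the same effect as registering a virtual subclass on `collections.abc.Mapping` from the Python side; a stdlib-only sketch with a hypothetical stand-in class:

```python
from collections.abc import Mapping


class FakeJsonObject:
    """Hypothetical stand-in; deliberately implements no Mapping methods."""


# Before registration, isinstance checks fail.
assert not isinstance(FakeJsonObject(), Mapping)

# Registering makes the class a *virtual* subclass: isinstance/issubclass
# now pass without the class inheriting any implementation.
Mapping.register(FakeJsonObject)
assert isinstance(FakeJsonObject(), Mapping)
```

Note that registration only affects `isinstance`/`issubclass`; the registered class still has to provide its own mapping behaviour, as the Rust `JsonObject` does.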
+16 -12
@@ -23,6 +23,7 @@ use pyo3::{
};
use pythonize::{depythonize, pythonize};
use serde::{Deserialize, Serialize};
use serde_json::Number;
#[pyclass(frozen, skip_from_py_object)]
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
@@ -36,7 +37,7 @@ pub struct Unsigned {
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
struct PersistedUnsignedFields {
#[serde(skip_serializing_if = "Option::is_none")]
age_ts: Option<i64>,
age_ts: Option<Number>,
#[serde(skip_serializing_if = "Option::is_none")]
replaces_state: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
@@ -129,11 +130,14 @@ impl Unsigned {
let unsigned = self.py_read()?;
match field {
UnsignedField::AgeTs => Ok(unsigned
.persisted_fields
.age_ts
.ok_or_else(|| PyKeyError::new_err("age_ts"))?
.into_bound_py_any(py)?),
UnsignedField::AgeTs => {
let age_ts = &unsigned
.persisted_fields
.age_ts
.as_ref()
.ok_or_else(|| PyKeyError::new_err("age_ts"))?;
Ok(pythonize(py, age_ts)?)
}
UnsignedField::ReplacesState => Ok((unsigned.persisted_fields.replaces_state)
.as_ref()
.ok_or_else(|| PyKeyError::new_err("replaces_state"))?
@@ -203,7 +207,7 @@ impl Unsigned {
let mut unsigned = self.py_write()?;
match field {
UnsignedField::AgeTs => unsigned.persisted_fields.age_ts = Some(value.extract()?),
UnsignedField::AgeTs => unsigned.persisted_fields.age_ts = Some(depythonize(&value)?),
UnsignedField::ReplacesState => {
unsigned.persisted_fields.replaces_state = Some(value.extract()?)
}
@@ -339,7 +343,7 @@ mod tests {
#[test]
fn test_persisted_fields_serialize_populated() {
let fields = PersistedUnsignedFields {
age_ts: Some(1234),
age_ts: Some(1234.into()),
replaces_state: Some("$prev:example.com".to_string()),
invite_room_state: Some(vec![json!({"type": "m.room.name"})]),
knock_room_state: Some(vec![json!({"type": "m.room.topic"})]),
@@ -360,7 +364,7 @@ mod tests {
fn test_unsigned_inner_flattens_persisted_fields() {
let inner = UnsignedInner {
persisted_fields: PersistedUnsignedFields {
age_ts: Some(99),
age_ts: Some(99.into()),
..Default::default()
},
prev_content: Some(Box::new(json!({"body": "hi"}))),
@@ -382,7 +386,7 @@ mod tests {
fn test_unsigned_inner_roundtrip() {
let original = UnsignedInner {
persisted_fields: PersistedUnsignedFields {
age_ts: Some(10),
age_ts: Some(10.into()),
replaces_state: Some("$state:example.com".to_string()),
invite_room_state: None,
knock_room_state: None,
@@ -394,7 +398,7 @@ mod tests {
let json = serde_json::to_string(&original).unwrap();
let roundtripped: UnsignedInner = serde_json::from_str(&json).unwrap();
assert_eq!(roundtripped.persisted_fields.age_ts, Some(10));
assert_eq!(roundtripped.persisted_fields.age_ts, Some(10.into()));
assert_eq!(
roundtripped.persisted_fields.replaces_state.as_deref(),
Some("$state:example.com")
@@ -423,7 +427,7 @@ mod tests {
});
let unsigned: Unsigned = serde_json::from_value(json).unwrap();
let inner = unsigned.inner.read().unwrap();
assert_eq!(inner.persisted_fields.age_ts, Some(5));
assert_eq!(inner.persisted_fields.age_ts, Some(5.into()));
assert_eq!(inner.prev_sender.as_deref(), Some("@bob:example.com"));
}
}
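The move from `i64` to `serde_json::Number` appears to be what the rc2 fix for arbitrary-precision integers in `unsigned` relies on: Python integers are unbounded, so a field like `age_ts` can legitimately carry values outside the 64-bit range. A plain-Python illustration of the range involved:

```python
import json

I64_MAX = 2**63 - 1  # largest value the old i64 field could hold

# Python ints are arbitrary precision, and the stdlib json module
# round-trips them losslessly even far beyond the i64 range.
big = 2**70
assert big > I64_MAX
assert json.loads(json.dumps({"age_ts": big}))["age_ts"] == big
```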
+3 -3
@@ -104,7 +104,7 @@ fn get_base_rule_ids() -> HashSet<&'static str> {
/// A single push rule for a user.
#[derive(Debug, Clone)]
#[pyclass(frozen)]
#[pyclass(frozen, from_py_object)]
pub struct PushRule {
/// A unique ID for this rule
pub rule_id: Cow<'static, str>,
@@ -462,7 +462,7 @@ pub struct RelatedEventMatchTypeCondition {
/// The collection of push rules for a user.
#[derive(Debug, Clone, Default)]
#[pyclass(frozen)]
#[pyclass(frozen, from_py_object)]
pub struct PushRules {
/// Custom push rules that override a base rule.
overridden_base_rules: HashMap<Cow<'static, str>, PushRule>,
@@ -549,7 +549,7 @@ impl PushRules {
/// A wrapper around `PushRules` that checks the enabled state of rules and
/// filters out disabled experimental rules.
#[derive(Debug, Clone, Default)]
#[pyclass(frozen)]
#[pyclass(frozen, skip_from_py_object)]
pub struct FilteredPushRules {
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
+8 -5
@@ -93,7 +93,7 @@ impl PushRuleRoomFlag {
}
/// An object which describes the unique attributes of a room version.
#[pyclass(frozen, eq, hash, get_all)]
#[pyclass(frozen, eq, hash, get_all, skip_from_py_object)]
#[derive(Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct RoomVersion {
/// The identifier for this version.
@@ -622,7 +622,7 @@ const ROOM_VERSION_MSC4242V12: RoomVersion = RoomVersion {
/// Note: room versions can be added to this mapping at startup (allowing
/// support for experimental room versions to be behind experimental feature
/// flags).
#[pyclass(frozen, mapping)]
#[pyclass(frozen, mapping, skip_from_py_object)]
#[derive(Clone)]
pub struct KnownRoomVersionsMapping {
// Note we use a Vec here to ensure that the order of keys is
@@ -637,19 +637,22 @@ pub struct KnownRoomVersionsMapping {
impl KnownRoomVersionsMapping {
/// Add a new room version to the mapping, indicating that this instance
/// supports it.
fn add_room_version(&self, version: RoomVersion) -> PyResult<()> {
fn add_room_version(&self, version: Bound<'_, RoomVersion>) -> PyResult<()> {
let mut versions = self
.versions
.write()
.map_err(|_| PyRuntimeError::new_err("KnownRoomVersionsMapping lock poisoned"))?;
if versions.iter().any(|v| v.identifier == version.identifier) {
if versions
.iter()
.any(|v| v.identifier == version.get().identifier)
{
// We already have this room version, so we don't add it again (as
// otherwise we'd end up with duplicates).
return Ok(());
}
versions.push(version);
versions.push(*version.get());
Ok(())
}
+1 -1
@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.152/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.153/synapse-config.schema.json
type: object
properties:
modules:
+8 -1
@@ -65,7 +65,8 @@ try:
except ImportError:
pass
# Teach canonicaljson how to serialise immutabledicts.
# Teach canonicaljson how to serialise immutabledicts and JsonObjects.
try:
from canonicaljson import register_preserialisation_callback
from immutabledict import immutabledict
@@ -79,6 +80,12 @@ try:
return dict(d)
register_preserialisation_callback(immutabledict, _immutabledict_cb)
# Teach canonicaljson how to serialise JsonObjects, which just means
# converting them to dicts.
from synapse.synapse_rust.events import JsonObject # noqa: E402
register_preserialisation_callback(JsonObject, lambda obj: dict(obj))
except ImportError:
pass
+4
@@ -613,3 +613,7 @@ class ExperimentalConfig(Config):
# Tracked in: https://github.com/element-hq/synapse/issues/19691
# Note that this is only applicable to legacy auth, not MAS integration (OAuth 2.0).
self.msc4450_enabled: bool = experimental.get("msc4450_enabled", False)
# MSC4455: Preview URL capability
# Tracked in: https://github.com/element-hq/synapse/issues/19719
self.msc4452_enabled: bool = experimental.get("msc4452_enabled", False)
+2 -1
@@ -242,7 +242,8 @@ class ContentRepositoryConfig(Config):
self.thumbnail_requirements = parse_thumbnail_requirements(
config.get("thumbnail_sizes", DEFAULT_THUMBNAIL_SIZES)
)
self.url_preview_enabled = config.get("url_preview_enabled", False)
self.url_preview_enabled = bool(config.get("url_preview_enabled", False))
if self.url_preview_enabled:
check_requirements("url-preview")
+30 -23
@@ -44,9 +44,15 @@ from synapse.api.constants import (
StickyEvent,
)
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.synapse_rust.events import EventInternalMetadata, Signatures, Unsigned
from synapse.synapse_rust.events import (
EventInternalMetadata,
JsonObject,
Signatures,
Unsigned,
)
from synapse.types import (
JsonDict,
JsonMapping,
StateKey,
StrCollection,
)
@@ -206,17 +212,29 @@ class EventBase(metaclass=abc.ABCMeta):
):
assert room_version.event_format == self.format_version
if "content" in event_dict:
event_dict["content"] = JsonObject(event_dict["content"])
# We intern these strings because they turn up a lot (especially when
# caching).
event_dict = intern_dict(event_dict)
if USE_FROZEN_DICTS:
frozen_dict = freeze(event_dict)
else:
frozen_dict = event_dict
self.room_version = room_version
self.signatures = Signatures(signatures)
self.unsigned = Unsigned(unsigned)
self.rejected_reason = rejected_reason
self._dict = event_dict
self._dict = frozen_dict
self.internal_metadata = EventInternalMetadata(internal_metadata_dict)
depth: DictProperty[int] = DictProperty("depth")
content: DictProperty[JsonDict] = DictProperty("content")
content: DictProperty[JsonMapping] = DictProperty("content")
hashes: DictProperty[dict[str, str]] = DictProperty("hashes")
origin_server_ts: DictProperty[int] = DictProperty("origin_server_ts")
sender: DictProperty[str] = DictProperty("sender")
@@ -259,7 +277,14 @@ class EventBase(metaclass=abc.ABCMeta):
def get_dict(self) -> JsonDict:
"""Convert the event to a dictionary suitable for serialisation."""
d = dict(self._dict)
if "content" in d:
# Convert the content (which is a JsonObject) back to a dict. JSON
# serialisation should handle JsonObjects fine, but for sanity's
# sake we want `get_dict()` and `get_pdu_json()` to return plain
# dicts.
d["content"] = dict(self.content)
d.update(
{
"signatures": self.signatures.as_dict(),
@@ -419,19 +444,10 @@ class FrozenEvent(EventBase):
unsigned = event_dict.pop("unsigned", {})
# We intern these strings because they turn up a lot (especially when
# caching).
event_dict = intern_dict(event_dict)
if USE_FROZEN_DICTS:
frozen_dict = freeze(event_dict)
else:
frozen_dict = event_dict
self._event_id = event_dict["event_id"]
super().__init__(
frozen_dict,
event_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
@@ -473,19 +489,10 @@ class FrozenEventV2(EventBase):
unsigned = event_dict.pop("unsigned", {})
# We intern these strings because they turn up a lot (especially when
# caching).
event_dict = intern_dict(event_dict)
if USE_FROZEN_DICTS:
frozen_dict = freeze(event_dict)
else:
frozen_dict = event_dict
self._event_id: str | None = None
super().__init__(
frozen_dict,
event_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
+1 -1
@@ -1032,7 +1032,7 @@ def strip_event(event: EventBase) -> JsonDict:
return {
"type": event.type,
"state_key": event.state_key,
"content": event.content,
"content": dict(event.content),
"sender": event.sender,
}
+2 -2
@@ -42,7 +42,7 @@ from synapse.events.utils import (
)
from synapse.http.servlet import validate_json_object
from synapse.storage.controllers.state import server_acl_evaluator_from_event
from synapse.types import EventID, JsonDict, RoomID, StrCollection, UserID
from synapse.types import EventID, JsonDict, JsonMapping, RoomID, StrCollection, UserID
from synapse.types.rest import RequestBodyModel
@@ -245,7 +245,7 @@ class EventValidator:
self._ensure_state_event(event)
def _ensure_strings(self, d: JsonDict, keys: StrCollection) -> None:
def _ensure_strings(self, d: JsonMapping, keys: StrCollection) -> None:
for s in keys:
if s not in d:
raise SynapseError(400, "'%s' not in content" % (s,))
+5 -4
@@ -706,12 +706,12 @@ class RoomCreationHandler:
spam_check = await self._spam_checker_module_callbacks.user_may_create_room(
user_id,
{
"creation_content": creation_content,
"creation_content": dict(creation_content),
"initial_state": [
{
"type": state_key[0],
"state_key": state_key[1],
"content": event_content,
"content": dict(event_content),
}
for state_key, event_content in initial_state.items()
],
@@ -1437,7 +1437,7 @@ class RoomCreationHandler:
room_config: JsonDict,
invite_list: list[str],
initial_state: MutableStateMap,
creation_content: JsonDict,
creation_content: JsonMapping,
room_alias: RoomAlias | None = None,
power_level_content_override: JsonDict | None = None,
creator_join_profile: JsonDict | None = None,
@@ -1508,7 +1508,7 @@ class RoomCreationHandler:
async def create_event(
etype: str,
content: JsonDict,
content: JsonMapping,
for_batch: bool,
**kwargs: Any,
) -> tuple[EventBase, synapse.events.snapshot.UnpersistedEventContextBase]:
@@ -1561,6 +1561,7 @@ class RoomCreationHandler:
if creation_event_with_context is None:
# MSC2175 removes the creator field from the create event.
if not room_version.implicit_room_creator:
creation_content = dict(creation_content)
creation_content["creator"] = creator_id
creation_event, unpersisted_creation_context = await create_event(
EventTypes.Create, creation_content, False
+2 -2
@@ -31,7 +31,7 @@ from typing import (
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import JsonDict
from synapse.types import JsonMapping
from synapse.util.duration import Duration
from synapse.util.events import get_plain_text_topic_from_event_content
@@ -195,7 +195,7 @@ class StatsHandler:
)
continue
event_content: JsonDict = {}
event_content: JsonMapping = {}
if delta.event_id is not None:
event = await self.store.get_event(delta.event_id, allow_none=True)
+4 -4
@@ -61,10 +61,10 @@ The maximum wait time before retrying to acquire the lock.
Better to retry more quickly than have workers wait around. 5 seconds is still a
reasonable gap in time to not overwhelm the CPU/Database.
This matters most in cross-worker scenarios. When locks are on the same worker, when the
lock holder releases, we signal to other locks (with the same name/key) that they
should try reacquiring the lock immediately. But locks on other workers only re-check
based on their retry `_timeout_interval`.
This matters most when locks go stale. Normally, when the lock holder releases, we
signal to other locks (with the same name/key) that they should try reacquiring the
lock immediately; but stale locks are never released, and are instead forcefully
reaped behind the scenes.
"""
WORKER_LOCK_EXCESSIVE_WAITING_WARN_DURATION = Duration(minutes=10)
+2 -2
@@ -51,7 +51,7 @@ from synapse.storage.databases.main.roommember import EventIdMembership
from synapse.storage.invite_rule import InviteRule
from synapse.storage.roommember import ProfileInfo
from synapse.synapse_rust.push import FilteredPushRules, PushRuleEvaluator
from synapse.types import JsonValue
from synapse.types import JsonMapping, JsonValue
from synapse.types.state import StateFilter
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import gather_results
@@ -231,7 +231,7 @@ class BulkPushRuleEvaluator:
event: EventBase,
context: EventContext,
event_id_to_event: Mapping[str, EventBase],
) -> tuple[dict, int | None]:
) -> tuple[JsonMapping, int | None]:
"""
Given an event and an event context, get the power level event relevant to the event
and the power level of the sender of the event.
+5
@@ -77,6 +77,11 @@ class CapabilitiesRestServlet(RestServlet):
}
}
if self.config.experimental.msc4452_enabled:
response["capabilities"]["io.element.msc4452.preview_url"] = {
"enabled": self.config.media.url_preview_enabled,
}
if self.config.experimental.msc3720_enabled:
response["capabilities"]["org.matrix.msc3720.account_status"] = {
"enabled": True,
+14 -6
@@ -23,7 +23,12 @@
import logging
import re
from synapse.api.errors import Codes, cs_error
from synapse.api.errors import (
Codes,
SynapseError,
UnrecognizedRequestError,
cs_error,
)
from synapse.http.server import (
HttpServer,
respond_with_json,
@@ -79,11 +84,17 @@ class PreviewURLServlet(RestServlet):
self.clock = hs.get_clock()
self.media_repo = media_repo
self.media_storage = media_storage
assert self.media_repo.url_previewer is not None
self.url_previewer = self.media_repo.url_previewer
self.can_respond_403 = hs.config.experimental.msc4452_enabled
async def on_GET(self, request: SynapseRequest) -> None:
requester = await self.auth.get_user_by_req(request)
if self.url_previewer is None:
# If we have no url_previewer then it has been disabled by the server.
if self.can_respond_403:
raise SynapseError(403, "URL Previews are disabled", Codes.FORBIDDEN)
else:
raise UnrecognizedRequestError(code=404)
url = parse_string(request, "url", required=True)
ts = parse_integer(request, "ts")
if ts is None:
@@ -299,10 +310,7 @@ class DownloadResource(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
media_repo = hs.get_media_repository()
if hs.config.media.url_preview_enabled:
PreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
http_server
)
PreviewURLServlet(hs, media_repo, media_repo.media_storage).register(http_server)
MediaConfigResource(hs).register(http_server)
ThumbnailResource(hs, media_repo, media_repo.media_storage).register(http_server)
DownloadResource(hs, media_repo).register(http_server)
@@ -106,8 +106,7 @@ class MediaRepositoryResource(JsonResource):
ThumbnailResource(hs, media_repo, media_repo.media_storage).register(
http_server
)
if hs.config.media.url_preview_enabled:
PreviewUrlResource(hs, media_repo, media_repo.media_storage).register(
http_server
)
PreviewUrlResource(hs, media_repo, media_repo.media_storage).register(
http_server
)
MediaConfigResource(hs).register(http_server)
+8 -2
@@ -23,6 +23,7 @@
import re
from typing import TYPE_CHECKING
from synapse.api.errors import Codes, SynapseError, UnrecognizedRequestError
from synapse.http.server import respond_with_json_bytes
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
@@ -65,12 +66,17 @@ class PreviewUrlResource(RestServlet):
self.clock = hs.get_clock()
self.media_repo = media_repo
self.media_storage = media_storage
assert self.media_repo.url_previewer is not None
self.url_previewer = self.media_repo.url_previewer
self.can_respond_403 = hs.config.experimental.msc4452_enabled
async def on_GET(self, request: SynapseRequest) -> None:
# XXX: if get_user_by_req fails, what should we do in an async render?
requester = await self.auth.get_user_by_req(request)
if self.url_previewer is None:
# If we have no url_previewer then it has been disabled by the server.
if self.can_respond_403:
raise SynapseError(403, "URL Previews are disabled", Codes.FORBIDDEN)
else:
raise UnrecognizedRequestError(code=404)
url = parse_string(request, "url", required=True)
ts = parse_integer(request, "ts", default=self.clock.time_msec())
og = await self.url_previewer.preview(url, requester.user, ts)
+8 -6
@@ -53,9 +53,9 @@ logger = logging.getLogger(__name__)
_RENEWAL_INTERVAL = Duration(seconds=30)
# How long before an acquired lock times out.
_LOCK_TIMEOUT_MS = 2 * 60 * 1000
_LOCK_TIMEOUT = Duration(minutes=2)
_LOCK_REAP_INTERVAL = Duration(milliseconds=_LOCK_TIMEOUT_MS / 10.0)
_LOCK_REAP_INTERVAL = Duration(milliseconds=_LOCK_TIMEOUT.as_millis() / 10.0)
class LockStore(SQLBaseStore):
@@ -63,7 +63,7 @@ class LockStore(SQLBaseStore):
Locks are identified by a name and key. A lock is acquired by inserting into
the `worker_locks` table if a) there is no existing row for the name/key or
b) the existing row has a `last_renewed_ts` older than `_LOCK_TIMEOUT_MS`.
b) the existing row has a `last_renewed_ts` older than `_LOCK_TIMEOUT`.
When a lock is taken out the instance inserts a random `token`, the instance
that holds that token holds the lock until it drops (or times out).
@@ -182,7 +182,7 @@ class LockStore(SQLBaseStore):
self._instance_name,
token,
now,
now - _LOCK_TIMEOUT_MS,
now - _LOCK_TIMEOUT.as_millis(),
),
)
@@ -340,7 +340,9 @@ class LockStore(SQLBaseStore):
"""
def reap_stale_read_write_locks_txn(txn: LoggingTransaction) -> None:
txn.execute(delete_sql, (self.clock.time_msec() - _LOCK_TIMEOUT_MS,))
txn.execute(
delete_sql, (self.clock.time_msec() - _LOCK_TIMEOUT.as_millis(),)
)
if txn.rowcount:
logger.info("Reaped %d stale locks", txn.rowcount)
@@ -489,7 +491,7 @@ class Lock:
)
return (
last_renewed_ts is not None
and self._clock.time_msec() - _LOCK_TIMEOUT_MS < last_renewed_ts
and self._clock.time_msec() - _LOCK_TIMEOUT.as_millis() < last_renewed_ts
)
async def __aenter__(self) -> None:
+10 -1
@@ -10,7 +10,7 @@
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
from typing import Any, Mapping
from typing import Any, Iterator, Mapping
from synapse.types import JsonDict, JsonMapping
@@ -213,3 +213,12 @@ class Unsigned:
def for_event(self) -> JsonDict: ...
"""Return a dict of all unsigned fields, including those only kept in
memory, suitable for inclusion in an event."""
class JsonObject(Mapping[str, Any]):
"""Immutable JSON object mapping."""
def __init__(self, content_dict: JsonMapping | None = None): ...
def __len__(self) -> int: ...
def __getitem__(self, key: str) -> Any: ...
def __iter__(self) -> Iterator[str]: ...
def __eq__(self, other: object) -> bool: ...
+2 -2
@@ -17,7 +17,7 @@ from typing import Any
from pydantic import Field, StrictStr, ValidationError, field_validator
from synapse.types import JsonDict
from synapse.types import JsonMapping
from synapse.util.pydantic_models import ParseModel
from synapse.util.stringutils import random_string
@@ -103,7 +103,7 @@ class TopicContent(ParseModel):
return None
def get_plain_text_topic_from_event_content(content: JsonDict) -> str | None:
def get_plain_text_topic_from_event_content(content: JsonMapping) -> str | None:
"""
Given the `content` of an `m.room.topic` event, returns the plain-text topic
representation. Prefers pulling plain-text from the newer `m.topic` field if
+5
@@ -23,6 +23,8 @@ from typing import Any
from immutabledict import immutabledict
from synapse.synapse_rust.events import JsonObject
def freeze(o: Any) -> Any:
if isinstance(o, dict):
@@ -31,6 +33,9 @@ def freeze(o: Any) -> Any:
if isinstance(o, immutabledict):
return o
if isinstance(o, JsonObject):
return o
if isinstance(o, (bytes, str)):
return o
+18 -12
@@ -20,24 +20,30 @@ from typing import (
from immutabledict import immutabledict
from synapse.synapse_rust.events import JsonObject
def _reject_invalid_json(val: Any) -> None:
"""Do not allow Infinity, -Infinity, or NaN values in JSON."""
raise ValueError("Invalid JSON value: '%s'" % val)
def _handle_immutabledict(obj: Any) -> dict[Any, Any]:
"""Helper for json_encoder. Makes immutabledicts serializable by returning
the underlying dict
def _handle_extra_mappings(obj: Any) -> dict[Any, Any]:
"""Helper for json_encoder. Makes immutabledicts and JsonObjects
serializable
"""
if type(obj) is immutabledict:
# fishing the protected dict out of the object is a bit nasty,
# but we don't really want the overhead of copying the dict.
try:
# Safety: we catch the AttributeError immediately below.
return obj._dict
except AttributeError:
# If all else fails, resort to making a copy of the immutabledict
match obj:
case immutabledict():
# fishing the protected dict out of the object is a bit nasty,
# but we don't really want the overhead of copying the dict.
try:
# Safety: we catch the AttributeError immediately below.
return obj._dict
except AttributeError:
# If all else fails, resort to making a copy of the immutabledict
return dict(obj)
case JsonObject():
# Just convert to a dict.
return dict(obj)
raise TypeError(
"Object of type %s is not JSON serializable" % obj.__class__.__name__
@@ -49,7 +55,7 @@ def _handle_immutabledict(obj: Any) -> dict[Any, Any]:
# * produces valid JSON (no NaNs etc)
# * reduces redundant whitespace
json_encoder = json.JSONEncoder(
allow_nan=False, separators=(",", ":"), default=_handle_immutabledict
allow_nan=False, separators=(",", ":"), default=_handle_extra_mappings
)
# Create a custom decoder to reject Python extensions to JSON.
+4 -3
@@ -62,6 +62,7 @@ class EventSigningTestCase(unittest.TestCase):
"origin_server_ts": 1000000,
"signatures": {},
"type": "X",
"content": {},
"unsigned": {"age_ts": 1000000},
}
@@ -74,7 +75,7 @@ class EventSigningTestCase(unittest.TestCase):
self.assertTrue(hasattr(event, "hashes"))
self.assertIn("sha256", event.hashes)
self.assertEqual(
event.hashes["sha256"], "A6Nco6sqoy18PPfPDVdYvoowfc0PVBk9g9OiyT3ncRM"
event.hashes["sha256"], "mq4QfPPpC+QsBd6eqfVsmJIEz8uvMSVK0+AU67PLESk"
)
self.assertTrue(hasattr(event, "signatures"))
@@ -82,8 +83,8 @@ class EventSigningTestCase(unittest.TestCase):
self.assertIn(KEY_NAME, event.signatures["domain"])
self.assertEqual(
event.signatures[HOSTNAME][KEY_NAME],
"PBc48yDVszWB9TRaB/+CZC1B+pDAC10F8zll006j+NN"
"fe4PEMWcVuLaG63LFTK9e4rwJE8iLZMPtCKhDTXhpAQ",
"18rGIkd4JJXxw9m+1j3BtN+TmqmLip4VHvFbyXLngpB"
"LXOqbxlQViQABRzep2cODQ2aa5FnFgz+Llt2P03WiAw",
)
def test_sign_message(self) -> None:
+64 -9
@@ -25,8 +25,13 @@ import platform
from twisted.internet import defer
from twisted.internet.testing import MemoryReactor
from synapse.handlers.worker_lock import WORKER_LOCK_MAX_RETRY_INTERVAL
from synapse.server import HomeServer
from synapse.storage.databases.main.lock import _RENEWAL_INTERVAL
from synapse.storage.databases.main.lock import (
_LOCK_REAP_INTERVAL,
_LOCK_TIMEOUT,
_RENEWAL_INTERVAL,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
@@ -83,7 +88,7 @@ class WorkerLockTestCase(unittest.HomeserverTestCase):
# Note: We use `_pump_by` instead of `pump`/`advance` as the `Lock` has an
# internal background looping call that runs every 30 seconds
# (`_RENEWAL_INTERVAL`) to renew the `Lock` and push its "drop timeout" value
# further out by 2 minutes (`_LOCK_TIMEOUT_MS`). The `Lock` will prematurely
# further out by 2 minutes (`_LOCK_TIMEOUT`). The `Lock` will prematurely
# drop if this renewal is not allowed to run, which sours the test.
# self.pump(amount=Duration(hours=1))
self._pump_by(amount=Duration(hours=1), by=_RENEWAL_INTERVAL)
@@ -91,9 +96,34 @@ class WorkerLockTestCase(unittest.HomeserverTestCase):
# Make sure we haven't acquired the `lock2` yet (`lock1` still holds it)
self.assertNoResult(d2)
# Release the first lock (`lock1`). The second lock(`lock2`) should be
# automatically acquired by the `pump()` inside `get_success()`
self.get_success(lock1.__aexit__(None, None, None))
# Drop the lock without releasing it. If we just normally released the lock
# (`self.get_success(lock1.__aexit__(None, None, None))`), the
# `add_lock_released_callback`/`notify_lock_released` cycle would signal that we
# should re-acquire the lock right away (on the next reactor tick). And we want
# to avoid that as the point of this test is to stress the retry timeout
# interval and `WORKER_LOCK_MAX_RETRY_INTERVAL`.
del lock1
# Wait for `lock1` to go stale (it won't be renewed anymore because we deleted
# it just above)
self._pump_by(
amount=_LOCK_TIMEOUT,
by=_RENEWAL_INTERVAL,
)
# Wait just enough time so `lock1` is reaped (found stale and forcefully drops
# the lock it is holding)
self._pump_by(
amount=_LOCK_REAP_INTERVAL,
by=_RENEWAL_INTERVAL,
)
# Wait just enough time so `lock2` tries re-acquiring the lock. Should be no
# longer than our `WORKER_LOCK_MAX_RETRY_INTERVAL`.
self._pump_by(
amount=WORKER_LOCK_MAX_RETRY_INTERVAL,
by=_RENEWAL_INTERVAL,
)
# We should now have the lock
self.successResultOf(d2)
@@ -219,7 +249,7 @@ class WorkerLockWorkersTestCase(BaseMultiWorkerStreamTestCase):
# Note: We use `_pump_by` instead of `pump`/`advance` as the `Lock` has an
# internal background looping call that runs every 30 seconds
# (`_RENEWAL_INTERVAL`) to renew the `Lock` and push its "drop timeout" value
# further out by 2 minutes (`_LOCK_TIMEOUT_MS`). The `Lock` will prematurely
# further out by 2 minutes (`_LOCK_TIMEOUT`). The `Lock` will prematurely
# drop if this renewal is not allowed to run, which sours the test.
# self.pump(amount=Duration(hours=1))
self._pump_by(amount=Duration(hours=1), by=_RENEWAL_INTERVAL)
@@ -227,9 +257,34 @@ class WorkerLockWorkersTestCase(BaseMultiWorkerStreamTestCase):
# Make sure we haven't acquired the `lock2` yet (`lock1` still holds it)
self.assertNoResult(d2)
# Release the first lock (`lock1`). The second lock(`lock2`) should be
# automatically acquired by the `pump()` inside `get_success()`
self.get_success(lock1.__aexit__(None, None, None))
# Drop the lock without releasing it. If we just normally released the lock
# (`self.get_success(lock1.__aexit__(None, None, None))`), the
# `add_lock_released_callback`/`notify_lock_released` cycle would signal that we
# should re-acquire the lock right away (on the next reactor tick). And we want
# to avoid that as the point of this test is to stress the retry timeout
# interval and `WORKER_LOCK_MAX_RETRY_INTERVAL`.
del lock1
# Wait for `lock1` to go stale (it won't be renewed anymore because we deleted
# it just above)
self._pump_by(
amount=_LOCK_TIMEOUT,
by=_RENEWAL_INTERVAL,
)
# Wait just enough time so `lock1` is reaped (found stale and forcefully drops
# the lock it is holding)
self._pump_by(
amount=_LOCK_REAP_INTERVAL,
by=_RENEWAL_INTERVAL,
)
# Wait just enough time so `lock2` tries re-acquiring the lock. Should be no
# longer than our `WORKER_LOCK_MAX_RETRY_INTERVAL`.
self._pump_by(
amount=WORKER_LOCK_MAX_RETRY_INTERVAL,
by=_RENEWAL_INTERVAL,
)
# We should now have the lock
self.successResultOf(d2)
+2 -2
@@ -265,7 +265,7 @@ class ModuleApiTestCase(BaseModuleApiTestCase):
self.assertEqual(event.type, "m.room.message")
self.assertEqual(event.room_id, room_id)
self.assertFalse(hasattr(event, "state_key"))
self.assertDictEqual(event.content, content)
self.assertDictEqual(dict(event.content), content)
expected_requester = create_requester(
user_id, authenticated_entity=self.hs.hostname
@@ -301,7 +301,7 @@ class ModuleApiTestCase(BaseModuleApiTestCase):
self.assertEqual(event.type, "m.room.power_levels")
self.assertEqual(event.room_id, room_id)
self.assertEqual(event.state_key, "")
self.assertDictEqual(event.content, content)
self.assertDictEqual(dict(event.content), content)
# Check that the event was sent
self.event_creation_handler.create_and_send_nonmember_event.assert_called_with(
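The `dict(event.content)` wrapping above is needed because `assertDictEqual` insists both arguments are real `dict`s, even when a mapping compares equal to one. A minimal stdlib illustration (using `MappingProxyType` as a stand-in for an immutable content mapping):

```python
import unittest
from types import MappingProxyType

class Demo(unittest.TestCase):
    def runTest(self) -> None:
        content = MappingProxyType({"body": "hi"})
        # Equality with a plain dict still holds...
        assert content == {"body": "hi"}
        # ...but assertDictEqual rejects non-dict mappings outright:
        with self.assertRaises(AssertionError):
            self.assertDictEqual(content, {"body": "hi"})  # type: ignore[arg-type]
        # Converting to a plain dict first, as the diff does, works.
        self.assertDictEqual(dict(content), {"body": "hi"})

result = unittest.TestResult()
Demo().run(result)
assert result.wasSuccessful()
```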
+42 -1
@@ -28,7 +28,12 @@ from synapse.server import HomeServer
from synapse.util.clock import Clock
from tests import unittest
from tests.unittest import override_config
from tests.unittest import override_config, skip_unless
try:
import lxml
except ImportError:
lxml = None # type: ignore[assignment]
class CapabilitiesTestCase(unittest.HomeserverTestCase):
@@ -276,3 +281,39 @@ class CapabilitiesTestCase(unittest.HomeserverTestCase):
self.assertFalse(
capabilities["org.matrix.msc4267.forget_forced_upon_leave"]["enabled"]
)
@override_config(
{
"url_preview_enabled": False,
"experimental_features": {"msc4452_enabled": True},
}
)
def test_url_previews_disabled(self) -> None:
access_token = self.get_success(
self.auth_handler.create_access_token_for_user_id(
self.user, device_id=None, valid_until_ms=None
)
)
channel = self.make_request("GET", self.url, access_token=access_token)
capabilities = channel.json_body["capabilities"]
self.assertEqual(channel.code, HTTPStatus.OK)
self.assertFalse(capabilities["io.element.msc4452.preview_url"]["enabled"])
@skip_unless(lxml is not None, "Requires lxml")
@override_config(
{
"url_preview_enabled": True,
"url_preview_ip_range_blacklist": ["127.0.0.1"],
"experimental_features": {"msc4452_enabled": True},
},
)
def test_url_previews_enabled(self) -> None:
access_token = self.get_success(
self.auth_handler.create_access_token_for_user_id(
self.user, device_id=None, valid_until_ms=None
)
)
channel = self.make_request("GET", self.url, access_token=access_token)
capabilities = channel.json_body["capabilities"]
self.assertEqual(channel.code, HTTPStatus.OK)
self.assertTrue(capabilities["io.element.msc4452.preview_url"]["enabled"])
+54
@@ -1573,6 +1573,60 @@ class URLPreviewTests(unittest.HomeserverTestCase):
self.assertEqual(channel.code, 403, channel.result)
# We test this here because this endpoint must still work
# even if lxml is not installed.
class URLPreviewDisabledTests(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
media.register_servlets,
]
def prepare(
self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
) -> None:
self.register_user("user", "password")
self.tok = self.login("user", "password")
@override_config(
{
"url_preview_enabled": False,
"experimental_features": {"msc4452_enabled": False},
}
)
def test_disabled_previews(self) -> None:
"""Tests that disabling URL previews gives back a sane response."""
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=" + quote("http://example.com"),
access_token=self.tok,
)
self.assertEqual(channel.code, 404, channel.result)
self.assertEqual(
channel.json_body,
{"errcode": "M_UNRECOGNIZED", "error": "Unrecognized request"},
)
@override_config(
{
"url_preview_enabled": False,
"experimental_features": {"msc4452_enabled": True},
}
)
def test_disabled_previews_with_msc4452(self) -> None:
"""Tests that disabling URL previews gives back a sane response."""
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=" + quote("http://example.com"),
access_token=self.tok,
)
self.assertEqual(channel.code, 403, channel.result)
self.assertEqual(
channel.json_body,
{"errcode": "M_FORBIDDEN", "error": "URL Previews are disabled"},
)
class MediaConfigTest(unittest.HomeserverTestCase):
servlets = [
media.register_servlets,
+2 -2
@@ -59,7 +59,7 @@ from synapse.rest.client import (
sync,
)
from synapse.server import HomeServer
from synapse.types import JsonDict, RoomAlias, UserID, create_requester
from synapse.types import JsonDict, JsonMapping, RoomAlias, UserID, create_requester
from synapse.util.clock import Clock
from synapse.util.stringutils import random_string
@@ -1859,7 +1859,7 @@ class RoomMessagesTestCase(RoomBase):
mock_return_value: str | bool | Codes | tuple[Codes, JsonDict] | bool = (
"NOT_SPAM"
)
mock_content: JsonDict | None = None
mock_content: JsonMapping | None = None
async def check_event_for_spam(
self,
+2 -1
@@ -101,6 +101,7 @@ from synapse.storage.engines import BaseDatabaseEngine, create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor, JsonDict
from synapse.util.clock import Clock
from synapse.util.json import json_encoder
from tests.utils import (
LEAVE_DB,
@@ -422,7 +423,7 @@ def make_request(
path = b"/" + path
if isinstance(content, dict):
content = json.dumps(content).encode("utf8")
content = json_encoder.encode(content).encode("utf8")
if isinstance(content, str):
content = content.encode("utf8")
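Swapping `json.dumps` for Synapse's `json_encoder` here matters because request bodies may now contain immutable mapping types that stdlib `json` refuses to serialise. A minimal stdlib sketch (not Synapse's actual encoder) of an encoder that accepts any `Mapping`:

```python
import json
from collections.abc import Mapping
from types import MappingProxyType

class MappingEncoder(json.JSONEncoder):
    """Serialise any Mapping the way a plain dict would be serialised."""

    def default(self, o: object) -> object:
        if isinstance(o, Mapping):
            return dict(o)
        return super().default(o)

encoder = MappingEncoder(separators=(",", ":"))

# A read-only mapping that plain json.dumps would reject with a TypeError:
payload = MappingProxyType({"a": 1})
assert encoder.encode({"content": payload}) == '{"content":{"a":1}}'
```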
+6 -6
@@ -26,7 +26,7 @@ from twisted.internet.defer import Deferred
from twisted.internet.testing import MemoryReactor
from synapse.server import HomeServer
from synapse.storage.databases.main.lock import _LOCK_TIMEOUT_MS, _RENEWAL_INTERVAL
from synapse.storage.databases.main.lock import _LOCK_TIMEOUT, _RENEWAL_INTERVAL
from synapse.util.clock import Clock
from tests import unittest
@@ -117,7 +117,7 @@ class LockTestCase(unittest.HomeserverTestCase):
self.get_success(lock.__aenter__())
# Wait for ages with the lock, we should not be able to get the lock.
self.reactor.advance(5 * _LOCK_TIMEOUT_MS / 1000)
self.reactor.advance(5 * _LOCK_TIMEOUT.as_secs())
lock2 = self.get_success(self.store.try_acquire_lock("name", "key"))
self.assertIsNone(lock2)
@@ -138,7 +138,7 @@ class LockTestCase(unittest.HomeserverTestCase):
lock._looping_call.stop()
# Wait for the lock to timeout.
self.reactor.advance(2 * _LOCK_TIMEOUT_MS / 1000)
self.reactor.advance(2 * _LOCK_TIMEOUT.as_secs())
lock2 = self.get_success(self.store.try_acquire_lock("name", "key"))
self.assertIsNotNone(lock2)
@@ -154,7 +154,7 @@ class LockTestCase(unittest.HomeserverTestCase):
del lock
# Wait for the lock to timeout.
self.reactor.advance(2 * _LOCK_TIMEOUT_MS / 1000)
self.reactor.advance(2 * _LOCK_TIMEOUT.as_secs())
lock2 = self.get_success(self.store.try_acquire_lock("name", "key"))
self.assertIsNotNone(lock2)
@@ -402,7 +402,7 @@ class ReadWriteLockTestCase(unittest.HomeserverTestCase):
lock._looping_call.stop()
# Wait for the lock to timeout.
self.reactor.advance(2 * _LOCK_TIMEOUT_MS / 1000)
self.reactor.advance(2 * _LOCK_TIMEOUT.as_secs())
lock2 = self.get_success(
self.store.try_acquire_read_write_lock("name", "key", write=True)
@@ -422,7 +422,7 @@ class ReadWriteLockTestCase(unittest.HomeserverTestCase):
del lock
# Wait for the lock to timeout.
self.reactor.advance(2 * _LOCK_TIMEOUT_MS / 1000)
self.reactor.advance(2 * _LOCK_TIMEOUT.as_secs())
lock2 = self.get_success(
self.store.try_acquire_read_write_lock("name", "key", write=True)
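The `_LOCK_TIMEOUT_MS` to `_LOCK_TIMEOUT` rename above reflects a move from raw millisecond integers (with manual `/ 1000` conversions at each call site) to a `Duration` value type with an `as_secs()` accessor. A hypothetical pure-Python sketch of such a type (not Synapse's actual implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Duration:
    """A time span stored in milliseconds, exposed via explicit unit methods."""

    _ms: int

    @classmethod
    def from_secs(cls, secs: float) -> "Duration":
        return cls(int(secs * 1000))

    def as_secs(self) -> float:
        return self._ms / 1000

    def __mul__(self, factor: int) -> "Duration":
        return Duration(self._ms * factor)

LOCK_TIMEOUT = Duration.from_secs(120)
assert LOCK_TIMEOUT.as_secs() == 120.0
assert (LOCK_TIMEOUT * 2).as_secs() == 240.0
```

Keeping the unit inside the type makes call sites like `self.reactor.advance(2 * _LOCK_TIMEOUT.as_secs())` self-documenting, where `2 * _LOCK_TIMEOUT_MS / 1000` silently relied on every caller remembering the conversion.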
+149
@@ -0,0 +1,149 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2026 Element Creations Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
from synapse.synapse_rust.events import JsonObject
from synapse.util import MutableOverlayMapping
from tests import unittest
class JsonObjectMappingTestCase(unittest.TestCase):
def test_new_and_basic_mapping_behavior(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
self.assertEqual(len(obj), 2)
self.assertTrue("a" in obj)
self.assertTrue("b" in obj)
self.assertFalse("c" in obj)
self.assertFalse(123 in obj) # type: ignore[comparison-overlap]
def test_getitem_and_key_errors(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
self.assertEqual(obj["a"], 1)
with self.assertRaises(KeyError):
_ = obj["missing"]
with self.assertRaises(KeyError):
_ = obj[10] # type: ignore[index]
def test_iter_keys_values_items(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
iterator = iter(obj)
first = next(iterator)
second = next(iterator)
self.assertCountEqual((first, second), ("a", "b"))
with self.assertRaises(StopIteration):
next(iterator)
self.assertCountEqual(list(obj.keys()), ["a", "b"])
self.assertCountEqual(list(obj.values()), [1, 2])
self.assertCountEqual(list(obj.items()), [("a", 1), ("b", 2)])
def test_keys_set_like_behavior(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
# Test 'and' operator.
self.assertEqual(obj.keys() & {"a"}, {"a"})
self.assertEqual({"a"} & obj.keys(), {"a"})
self.assertEqual(obj.keys() & {"c"}, set())
self.assertEqual({"c"} & obj.keys(), set())
# Test 'or' operator.
self.assertEqual(obj.keys() | {"a"}, {"a", "b"})
self.assertEqual({"a"} | obj.keys(), {"a", "b"})
self.assertEqual(obj.keys() | {"c"}, {"a", "b", "c"})
self.assertEqual({"c"} | obj.keys(), {"a", "b", "c"})
# Test 'xor' operator.
self.assertEqual(obj.keys() ^ {"a"}, {"b"})
self.assertEqual({"a"} ^ obj.keys(), {"b"})
self.assertEqual(obj.keys() ^ {"c"}, {"a", "b", "c"})
self.assertEqual({"c"} ^ obj.keys(), {"a", "b", "c"})
# Test 'sub' operator.
self.assertEqual(obj.keys() - {"a"}, {"b"})
self.assertEqual({"a"} - obj.keys(), set())
self.assertEqual(obj.keys() - {"c"}, {"a", "b"})
self.assertEqual({"c"} - obj.keys(), {"c"})
def test_values_view(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
values = obj.values()
self.assertEqual(len(values), 2)
self.assertCountEqual(list(values), [1, 2])
self.assertIn(1, values)
self.assertIn(2, values)
self.assertNotIn(3, values)
self.assertNotIn("a", values)
self.assertNotIn(object(), values)
# Iterating twice should yield the same values.
self.assertCountEqual(list(values), [1, 2])
def test_items_view(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
items = obj.items()
self.assertEqual(len(items), 2)
self.assertCountEqual(list(items), [("a", 1), ("b", 2)])
self.assertIn(("a", 1), items)
self.assertIn(("b", 2), items)
self.assertNotIn(("a", 2), items)
self.assertNotIn(("c", 1), items)
self.assertNotIn("a", items)
self.assertNotIn(("a", 1, "extra"), items)
# Iterating twice should yield the same items.
self.assertCountEqual(list(items), [("a", 1), ("b", 2)])
def test_get(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
self.assertEqual(obj.get("a"), 1)
self.assertEqual(obj.get("missing", "fallback"), "fallback")
self.assertEqual(obj.get(5, "fallback"), "fallback") # type: ignore[call-overload]
def test_eq(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
self.assertEqual(obj, {"a": 1, "b": 2})
self.assertNotEqual(obj, {"a": 1})
self.assertNotEqual(obj, ["a", "b"])
def test_str_and_repr(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
self.assertEqual(str(obj), r'{"a":1,"b":2}')
self.assertEqual(repr(obj), r'JsonObject({"a":1,"b":2})')
def test_json_object_constructor(self) -> None:
obj = JsonObject({"a": 1, "b": 2})
# Passing in an existing JsonObject should work.
obj2 = JsonObject(obj)
self.assertEqual(obj2, {"a": 1, "b": 2})
# Other mapping types should also work.
obj3 = JsonObject(MutableOverlayMapping({"a": 1, "b": 2}))
self.assertEqual(obj3, {"a": 1, "b": 2})
# Test that passing a non-mapping raises a TypeError.
with self.assertRaises(TypeError):
JsonObject(123) # type: ignore[arg-type]
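The view and set-operator behaviour these tests exercise is what Python's `Mapping` ABC provides for free. A hypothetical pure-Python analogue of a read-only JSON object wrapper (not the Rust-backed `JsonObject` itself), where three methods yield `keys()`/`values()`/`items()` views, set-like key operations, `in`, and `==`:

```python
from collections.abc import Mapping

class ReadOnlyJson(Mapping):
    """Read-only mapping: subclassing Mapping supplies the rest of the API."""

    def __init__(self, data: Mapping) -> None:
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

obj = ReadOnlyJson({"a": 1, "b": 2})
assert obj.keys() & {"a", "c"} == {"a"}      # set-like key views
assert obj == {"a": 1, "b": 2}               # equality with plain dicts
assert list(obj.items()) == [("a", 1), ("b", 2)]
```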
+52
@@ -0,0 +1,52 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2026 Element Creations Ltd.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
from synapse.synapse_rust.events import Unsigned
from tests import unittest
class UnsignedTestCase(unittest.TestCase):
def test_prev_content(self) -> None:
"""Test that the prev_content field is correctly exposed as a JsonObject."""
unsigned = Unsigned({"prev_content": {"key1": "value1", "key2": 42}})
self.assert_dict(unsigned["prev_content"], {"key1": "value1", "key2": 42})
self.assert_dict(
unsigned.for_event(), {"prev_content": {"key1": "value1", "key2": 42}}
)
def test_large_age_ts(self) -> None:
"""Test that we can handle integers larger than 2^128, which is larger
than the maximum rust native integer size."""
large_int = 2**200
unsigned = Unsigned({"age_ts": large_int})
self.assertEqual(unsigned["age_ts"], large_int)
self.assert_dict(unsigned.for_event(), {"age_ts": large_int})
def test_large_integer_in_prev_content(self) -> None:
"""Test that we can handle integers larger than 2^128 in the
prev_content field, which is a JsonObject and thus can contain arbitrary
JSON."""
large_int = 2**200
unsigned = Unsigned({"prev_content": {"some_field": large_int}})
self.assertEqual(unsigned["prev_content"]["some_field"], large_int)
self.assert_dict(
unsigned.for_event(), {"prev_content": {"some_field": large_int}}
)
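A minimal stdlib illustration (plain Python, not Synapse's Rust bindings) of the arbitrary-precision issue these tests cover: Python ints have no upper bound, so `2**200` overflows even Rust's widest native type (`u128`), and any bridge into native code must fall back to a big-integer representation:

```python
import json

large_int = 2**200
assert large_int > 2**128 - 1  # too big for u128, let alone i64

# Python's own JSON machinery round-trips it losslessly:
payload = json.loads(json.dumps({"age_ts": large_int}))
assert payload["age_ts"] == large_int
```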
+3 -2
@@ -35,7 +35,7 @@ from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.state import StateHandler, StateResolutionHandler, _make_state_cache_entry
from synapse.types import MutableStateMap, StateMap
from synapse.types import JsonDict, MutableStateMap, StateMap
from synapse.types.state import StateFilter
from synapse.util.macaroons import MacaroonGenerator
@@ -67,13 +67,14 @@ def create_event(
else:
name = "<%s, %s>" % (type, event_id)
d = {
d: JsonDict = {
"event_id": event_id,
"type": type,
"sender": "@user_id:example.com",
"room_id": "!room_id:example.com",
"depth": depth,
"prev_events": prev_events or [],
"content": {},
}
if state_key is not None:
+2 -2
@@ -24,7 +24,6 @@ Utilities for running the unit tests
"""
import base64
import json
import sys
import warnings
from binascii import unhexlify
@@ -41,6 +40,7 @@ from twisted.web.http_headers import Headers
from twisted.web.iweb import IResponse
from synapse.types import JsonSerializable
from synapse.util.json import json_encoder
if TYPE_CHECKING:
from sys import UnraisableHookArgs
@@ -127,7 +127,7 @@ class FakeResponse: # type: ignore[misc]
@classmethod
def json(cls, *, code: int = 200, payload: JsonSerializable) -> "FakeResponse":
headers = Headers({"Content-Type": ["application/json"]})
body = json.dumps(payload).encode("utf-8")
body = json_encoder.encode(payload).encode("utf-8")
return cls(code=code, body=body, headers=headers)