Compare commits

60 Commits

Author SHA1 Message Date
Jade Ellis
b8e476626f docs: Add links to matrix guides 2026-02-11 18:25:11 +00:00
Jade Ellis
4e55e1ea90 docs: Add note about checking the contents of configuration 2026-02-11 16:56:07 +00:00
ginger
f5f3108d5f chore: Formatting 2026-02-10 22:56:11 +00:00
chri-k
d1e1ee6156 fix: always treat server_user as an admin 2026-02-10 22:56:11 +00:00
Omar Pakker
ae16a45515 chore: Add towncrier news fragment 2026-02-10 23:07:38 +01:00
Omar Pakker
077bda23a6 feat(admin): Add resolver cache flush command
This command allows an admin to flush a specific server
from the resolver caches or flush the whole cache.
2026-02-10 23:07:32 +01:00
Renovate Bot
a2bf0c1223 chore(deps): update pre-commit hook crate-ci/typos to v1.43.4 2026-02-10 05:02:40 +00:00
Ginger
b9b1ff87f2 chore: Formatting fixes 2026-02-10 02:29:11 +00:00
Ginger
3c0146d437 feat: Implement a migration to fix busted local invites 2026-02-10 02:29:11 +00:00
Ginger
7485d4aa91 fix: Properly set stripped state for local invites 2026-02-10 02:29:11 +00:00
Jade Ellis
39bdb4c5a2 chore: Announcement for v0.5.4 2026-02-09 20:48:47 +00:00
Renovate Bot
55fb3b8848 chore(deps): update pre-commit hook crate-ci/typos to v1.43.3 2026-02-09 15:26:52 +00:00
timedout
19146166c0 chore: Linkify pull requests in CHANGELOG.md 2026-02-08 17:49:53 +00:00
timedout
f47027006f chore: Bump cargo lock 2026-02-08 17:45:51 +00:00
timedout
b7a8f71e14 chore: Bump version 2026-02-08 17:41:53 +00:00
timedout
c7378d15ab chore: Update changelog 2026-02-08 17:41:30 +00:00
timedout
7beeab270e fix: Add failing spell check string to typos
This isn't the proper fix but whatever it makes CI pass
2026-02-08 17:25:09 +00:00
Julian Anderson
6a812b7776 chore: Add news fragment 2026-02-08 17:25:09 +00:00
Julian Anderson
b1f4bbe89e docs(deploying/fedora): Remove seemingly nonexistent/impossible Fedora install method 2026-02-08 17:25:09 +00:00
Julian Anderson
6701f88bf9 docs(deploying/fedora): Fix URLs for known working install methods, add EL caveat, correct GPG key info 2026-02-08 17:25:09 +00:00
Jade Ellis
62b9e8227b docs: Explain enabling backtraces at runtime 2026-02-08 17:23:09 +00:00
Jade Ellis
7369b58d91 feat: Try log original server error 2026-02-08 17:23:09 +00:00
Jade Ellis
f6df44b13f feat: Try log panics before unwinds to catch backtraces 2026-02-08 17:23:09 +00:00
timedout
f243b383cb style: Fix typo in validate_remote_member_event_stub 2026-02-08 15:37:40 +00:00
timedout
e0b7d03018 fix: Perform additional membership validation on remote knocks too 2026-02-08 15:34:07 +00:00
timedout
184ae2ebb9 fix: Apply validation to make_join process 2026-02-06 18:15:39 +00:00
timedout
0ea0d09b97 fix: Don't fail open when a PDU doesn't have a short state hash 2026-02-06 18:09:09 +00:00
timedout
6763952ce4 chore: Bump ruwuma 2026-02-06 17:52:48 +00:00
Renovate Bot
e2da8301df chore(deps): update pre-commit hook crate-ci/typos to v1.43.2 2026-02-06 16:49:57 +00:00
April Grimoire
296a4b92d6 fix: Resolve unnecessary serialization issue
Fixes #1335
2026-02-06 07:52:19 +00:00
timedout
00c054d356 fix: Get_missing_events returns the same event N times 2026-02-05 21:28:21 +00:00
Renovate Bot
2558ec0c2a chore(deps): update rust-patch-updates 2026-02-05 14:06:42 +00:00
timedout
56bc3c184e feat: Enable running complement manually 2026-02-04 18:06:53 +00:00
Renovate Bot
5c1b90b463 chore(deps): update dependency cargo-bins/cargo-binstall to v1.17.4 2026-02-04 16:05:32 +00:00
Renovate Bot
0dbb774559 chore(deps): update dependency @rspress/plugin-sitemap to v2.0.2 2026-02-04 16:04:56 +00:00
Renovate Bot
16e0566c84 chore(deps): update dependency @rspress/plugin-client-redirects to v2.0.2 2026-02-04 16:02:09 +00:00
Renovate Bot
489b6e4ecb chore(deps): update pre-commit hook crate-ci/typos to v1.43.1 2026-02-04 15:58:34 +00:00
Renovate Bot
e71f75a58c chore(deps): update dependency @rspress/core to v2.0.2 2026-02-04 05:04:11 +00:00
timedout
082ed5b70c feat: Use info level logs for residency check failures 2026-02-03 20:09:41 +00:00
timedout
76fe8c4cdc chore: Add news fragment 2026-02-03 20:09:41 +00:00
timedout
c4a9f7a6d1 perf: Don't handle expensive requests for rooms we aren't in
Mostly borrowed from dendrite:

https://github.com/element-hq/dendrite/blob/a042861/federationapi/routing/routing.go#L601
2026-02-03 20:09:41 +00:00
timedout
a047199fb4 perf: Don't handle PDUs for rooms we aren't in 2026-02-03 20:09:41 +00:00
Renovate Bot
411c9da743 chore(deps): update rust-patch-updates 2026-02-02 01:34:58 +00:00
Renovate Bot
fb54f2058c chore(deps): update dependency @rspress/plugin-client-redirects to v2.0.1 2026-02-01 05:03:41 +00:00
ginger
358273226c chore: Update FUNDING.yml 2026-01-31 01:13:15 +00:00
timedout
fd9bbb08ed fix: Restore admin room announcement for deactivations 2026-01-30 05:11:30 +00:00
timedout
53184cd2fc chore: Add news fragment 2026-01-30 05:11:30 +00:00
timedout
25f7d80a8c fix: Clippy lint 2026-01-30 05:11:30 +00:00
timedout
02fa0ba0b8 perf: Optimise account deactivation process 2026-01-30 05:11:30 +00:00
ginger
572b228f40 Update homeserver list 2026-01-29 23:35:07 +00:00
Renovate Bot
b0a61e38da chore(deps): update pre-commit hook crate-ci/typos to v1.42.3 2026-01-29 15:49:54 +00:00
Renovate Bot
401dff20eb chore(deps): update dependency cargo-bins/cargo-binstall to v1.17.3 2026-01-29 15:49:32 +00:00
Ginger
f2a50e8f62 fix(docs): Remove rspress-plugin-preview 2026-01-29 10:41:46 -05:00
Ginger
36e80b0af4 fix(docs): Add stub type definition for docs CSS 2026-01-29 10:36:44 -05:00
Ginger
c9a4c546e2 chore(deps): Update to rspress 2.0.0 2026-01-29 10:35:24 -05:00
Ginger
da8b60b4ce fix(docs): Add redirect from old community page 2026-01-26 21:42:50 -05:00
Ginger
89afaa94ac feat(docs): Move community pages into subdir, add partnered homeservers page 2026-01-26 21:32:05 -05:00
Ginger
2b5563cee3 fix(docs): Remove busted link in nav 2026-01-26 20:55:12 -05:00
Ginger
6cb9d50383 chore: News fragment 2026-01-21 12:27:13 -05:00
Ginger
77c0f6e0c6 fix: Add a code path for clients trying to use fallback auth 2026-01-21 12:27:13 -05:00
101 changed files with 1477 additions and 3430 deletions

.github/FUNDING.yml (vendored)

@@ -1,4 +1,4 @@
github: [JadedBlueEyes, nexy7574]
github: [JadedBlueEyes, nexy7574, gingershaped]
custom:
- https://ko-fi.com/nexy7574
- https://ko-fi.com/JadedBlueEyes


@@ -23,7 +23,7 @@ repos:
- id: check-added-large-files
- repo: https://github.com/crate-ci/typos
rev: v1.41.0
rev: v1.43.4
hooks:
- id: typos
- id: typos


@@ -6,14 +6,13 @@ extend-exclude = ["*.csr", "*.lock", "pnpm-lock.yaml"]
extend-ignore-re = [
"(?Rm)^.*(#|//|<!--)\\s*spellchecker:disable-line(\\s*-->)$", # Ignore a line by making it trail with a `spellchecker:disable-line` comment
"^[0-9a-f]{7,}$", # Commit hashes
"4BA7",
# some heuristics for base64 strings
"[A-Za-z0-9+=]{72,}",
"([A-Za-z0-9+=]|\\\\\\s\\*){72,}",
"[0-9+][A-Za-z0-9+]{30,}[a-z0-9+]",
"\\$[A-Z0-9+][A-Za-z0-9+]{6,}[a-z0-9+]",
"\\b[a-z0-9+/=][A-Za-z0-9+/=]{7,}[a-z0-9+/=][A-Z]\\b",
# In the renovate config
".ontainer"
]


@@ -1,58 +1,98 @@
# Continuwuity v0.5.4 (2026-02-08)
## Features
- The announcement checker will now announce errors it encounters in the first run to the admin room, plus a few other
misc improvements. Contributed by @Jade ([#1288](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1288))
- Drastically improved the performance and reliability of account deactivations. Contributed by @nex ([#1314](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1314))
- Refuse to process requests for and events in rooms that we no longer have any local users in (reduces state resets
and improves performance). Contributed by @nex ([#1316](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1316))
- Added server-specific admin API routes to ban and unban rooms, for use with moderation bots. Contributed by @nex
([#1301](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1301))
## Bugfixes
- Fix the generated configuration containing uncommented optional sections. Contributed by @Jade ([#1290](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1290))
- Fixed specification non-compliance when handling remote media errors. Contributed by @nex ([#1298](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1298))
- UIAA requests which check for out-of-band success (sent by matrix-js-sdk) will no longer create unhelpful errors in
the logs. Contributed by @ginger ([#1305](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1305))
- Use exists instead of contains to save writing to a buffer in `src/service/users/mod.rs`: `is_login_disabled`.
Contributed
by @aprilgrimoire. ([#1340](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1340))
- Fixed backtraces being swallowed during panics. Contributed by @jade ([#1337](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1337))
- Fixed a potential vulnerability that could allow an evil remote server to return malicious events during the room join
and knock process. Contributed by @nex, reported by violet & [mat](https://matdoes.dev).
- Fixed a race condition that could result in outlier PDUs being incorrectly marked as visible to a remote server.
Contributed by @nex, reported by violet & [mat](https://matdoes.dev).
- ACLs are no longer case-sensitive. Contributed by @nex, reported by [vel](matrix:u/vel:nhjkl.com?action=chat).
## Docs
- Fixed Fedora install instructions. Contributed by @julian45 ([#1342](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1342))
# Continuwuity 0.5.3 (2026-01-12)
## Features
- Improve the display of nested configuration with the `!admin server show-config` command. Contributed by @Jade (#1279)
- Improve the display of nested configuration with the `!admin server show-config` command. Contributed by @Jade ([#1279](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1279))
## Bugfixes
- Fixed `M_BAD_JSON` error when sending invites to other servers or when providing joins. Contributed by @nex (#1286)
- Fixed `M_BAD_JSON` error when sending invites to other servers or when providing joins. Contributed by @nex ([#1286](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1286))
## Docs
- Improve admin command documentation generation. Contributed by @ginger (#1280)
- Improve admin command documentation generation. Contributed by @ginger ([#1280](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1280))
## Misc
- Improve timeout-related code for federation and URL previews. Contributed by @Jade (#1278)
- Improve timeout-related code for federation and URL previews. Contributed by @Jade ([#1278](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1278))
# Continuwuity 0.5.2 (2026-01-09)
## Features
- Added support for issuing additional registration tokens, stored in the database, which supplement the existing registration token hardcoded in the config file. These tokens may optionally expire after a certain number of uses or after a certain amount of time has passed. Additionally, the `registration_token_file` configuration option is superseded by this feature and **has been removed**. Use the new `!admin token` command family to manage registration tokens. Contributed by @ginger (#783).
- Implemented a configuration defined admin list independent of the admin room. Contributed by @Terryiscool160. (#1253)
- Added support for invite and join anti-spam via Draupnir and Meowlnir, similar to that of synapse-http-antispam. Contributed by @nex. (#1263)
- Implemented account locking functionality, to complement user suspension. Contributed by @nex. (#1266)
- Added admin command to forcefully log out all of a user's existing sessions. Contributed by @nex. (#1271)
- Implemented toggling the ability for an account to log in without mutating any of its data. Contributed by @nex. (#1272)
- Add support for custom room create event timestamps, to allow generating custom prefixes in hashed room IDs. Contributed by @nex. (#1277)
- Certain potentially dangerous admin commands are now restricted to only be usable in the admin room and server console. Contributed by @ginger.
- Added support for issuing additional registration tokens, stored in the database, which supplement the existing
registration token hardcoded in the config file. These tokens may optionally expire after a certain number of uses or
after a certain amount of time has passed. Additionally, the `registration_token_file` configuration option is
superseded by this feature and **has been removed**. Use the new `!admin token` command family to manage registration
tokens. Contributed by @ginger (#783).
- Implemented a configuration defined admin list independent of the admin room. Contributed by @Terryiscool160. ([#1253](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1253))
- Added support for invite and join anti-spam via Draupnir and Meowlnir, similar to that of synapse-http-antispam.
Contributed by @nex. ([#1263](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1263))
- Implemented account locking functionality, to complement user suspension. Contributed by @nex. ([#1266](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1266))
- Added admin command to forcefully log out all of a user's existing sessions. Contributed by @nex. ([#1271](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1271))
- Implemented toggling the ability for an account to log in without mutating any of its data. Contributed by @nex. (
[#1272](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1272))
- Add support for custom room create event timestamps, to allow generating custom prefixes in hashed room IDs.
Contributed by @nex. ([#1277](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1277))
- Certain potentially dangerous admin commands are now restricted to only be usable in the admin room and server
console. Contributed by @ginger.
## Bugfixes
- Fixed unreliable room summary fetching and improved error messages. Contributed by @nex. (#1257)
- Client requested timeout parameter is now applied to e2ee key lookups and claims. Related federation requests are now also concurrent. Contributed by @nex. (#1261)
- Fixed the whoami endpoint returning HTTP 404 instead of HTTP 403, which confused some appservices. Contributed by @nex. (#1276)
- Fixed unreliable room summary fetching and improved error messages. Contributed by @nex. ([#1257](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1257))
- Client requested timeout parameter is now applied to e2ee key lookups and claims. Related federation requests are now
also concurrent. Contributed by @nex. ([#1261](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1261))
- Fixed the whoami endpoint returning HTTP 404 instead of HTTP 403, which confused some appservices. Contributed by
@nex. ([#1276](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1276))
## Misc
- The `console` feature is now enabled by default, allowing the server console to be used for running admin commands directly. To automatically open the console on startup, set the `admin_console_automatic` config option to `true`. Contributed by @ginger.
- The `console` feature is now enabled by default, allowing the server console to be used for running admin commands
directly. To automatically open the console on startup, set the `admin_console_automatic` config option to `true`.
Contributed by @ginger.
- We now (finally) document our container image mirrors. Contributed by @Jade
# Continuwuity 0.5.0 (2025-12-30)
**This release contains a CRITICAL vulnerability patch, and you must update as soon as possible**
## Features
- Enabled the OTLP exporter in default builds, and allow configuring the exporter protocol. (@Jade). (#1251)
- Enabled the OTLP exporter in default builds, and allow configuring the exporter protocol. (@Jade). ([#1251](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1251))
## Bug Fixes
- Don't allow admin room upgrades, as this can break the admin room (@timedout) (#1245)
- Fix invalid creators in power levels during upgrade to v12 (@timedout) (#1245)
- Don't allow admin room upgrades, as this can break the admin room (@timedout) ([#1245](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1245))
- Fix invalid creators in power levels during upgrade to v12 (@timedout) ([#1245](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1245))

Cargo.lock (generated): file diff suppressed because it is too large.


@@ -12,7 +12,7 @@ license = "Apache-2.0"
# See also `rust-toolchain.toml`
readme = "README.md"
repository = "https://forgejo.ellis.link/continuwuation/continuwuity"
version = "0.5.3"
version = "0.5.4"
[workspace.metadata.crane]
name = "conduwuit"
@@ -158,7 +158,7 @@ features = ["raw_value"]
# Used for appservice registration files
[workspace.dependencies.serde-saphyr]
version = "0.0.14"
version = "0.0.17"
# Used to load forbidden room/user regex from config
[workspace.dependencies.serde_regex]
@@ -342,7 +342,7 @@ version = "0.1.2"
# Used for matrix spec type definitions and helpers
[workspace.dependencies.ruma]
git = "https://forgejo.ellis.link/continuwuation/ruwuma"
rev = "85d00fb5746cba23904234b4fd3c838dcf141541"
rev = "458d52bdc7f9a07c497be94a1420ebd3d87d7b2b"
features = [
"compat",
"rand",
@@ -548,10 +548,6 @@ features = ["sync", "tls-rustls", "rustls-provider"]
[workspace.dependencies.resolv-conf]
version = "0.7.5"
# Used by stitched ordering
[workspace.dependencies.indexmap]
version = "2.13.0"
#
# Patches
#


@@ -57,9 +57,10 @@ ### What are the project's goals?
### Can I try it out?
Check out the [documentation](https://continuwuity.org) for installation instructions.
Check out the [documentation](https://continuwuity.org) for installation instructions, or join one of these vetted public homeservers running Continuwuity to get a feel for things!
There are currently no open registration Continuwuity instances available.
- https://continuwuity.rocks -- A public demo server operated by the Continuwuity Team.
- https://federated.nexus -- Federated Nexus is a community resource hosting multiple FOSS (especially federated) services, including Matrix and Forgejo.
### What are we working on?


@@ -2,11 +2,7 @@
set -euo pipefail
# Path to Complement's source code
#
# The `COMPLEMENT_SRC` environment variable is set in the Nix dev shell, which
# points to a store path containing the Complement source code. It's likely you
# want to just pass that as the first argument to use it here.
# The root path where complement is available.
COMPLEMENT_SRC="${COMPLEMENT_SRC:-$1}"
# A `.jsonl` file to write test logs to
@@ -15,7 +11,10 @@ LOG_FILE="${2:-complement_test_logs.jsonl}"
# A `.jsonl` file to write test results to
RESULTS_FILE="${3:-complement_test_results.jsonl}"
COMPLEMENT_BASE_IMAGE="${COMPLEMENT_BASE_IMAGE:-complement-conduwuit:main}"
# The base docker image to use for complement tests
# You can build the default with `docker build -t continuwuity:complement -f ./docker/complement.Dockerfile .`
# after running `cargo build`. Only the debug binary is used.
COMPLEMENT_BASE_IMAGE="${COMPLEMENT_BASE_IMAGE:-continuwuity:complement}"
# Complement tests that are skipped due to flakiness/reliability issues or we don't implement such features and won't for a long time
SKIPPED_COMPLEMENT_TESTS='TestPartialStateJoin.*|TestRoomDeleteAlias/Parallel/Regular_users_can_add_and_delete_aliases_when_m.*|TestRoomDeleteAlias/Parallel/Can_delete_canonical_alias|TestUnbanViaInvite.*|TestRoomState/Parallel/GET_/publicRooms_lists.*"|TestRoomDeleteAlias/Parallel/Users_with_sufficient_power-level_can_delete_other.*'
@@ -34,25 +33,6 @@ toplevel="$(git rev-parse --show-toplevel)"
pushd "$toplevel" > /dev/null
if [ ! -f "complement_oci_image.tar.gz" ]; then
echo "building complement conduwuit image"
# if using macOS, use linux-complement
#bin/nix-build-and-cache just .#linux-complement
bin/nix-build-and-cache just .#complement
#nix build -L .#complement
echo "complement conduwuit image tar.gz built at \"result\""
echo "loading into docker"
docker load < result
popd > /dev/null
else
echo "skipping building a complement conduwuit image as complement_oci_image.tar.gz was already found, loading this"
docker load < complement_oci_image.tar.gz
popd > /dev/null
fi
echo ""
echo "running go test with:"
@@ -72,24 +52,16 @@ env \
set -o pipefail
# Post-process the results into an easy-to-compare format, sorted by Test name for reproducible results
cat "$LOG_FILE" | jq -s -c 'sort_by(.Test)[]' | jq -c '
jq -s -c 'sort_by(.Test)[]' < "$LOG_FILE" | jq -c '
select(
(.Action == "pass" or .Action == "fail" or .Action == "skip")
and .Test != null
) | {Action: .Action, Test: .Test}
' > "$RESULTS_FILE"
#if command -v gotestfmt &> /dev/null; then
# echo "using gotestfmt on $LOG_FILE"
# grep '{"Time":' "$LOG_FILE" | gotestfmt > "complement_test_logs_gotestfmt.log"
#fi
echo ""
echo ""
echo "complement logs saved at $LOG_FILE"
echo "complement results saved at $RESULTS_FILE"
#if command -v gotestfmt &> /dev/null; then
# echo "complement logs in gotestfmt pretty format outputted at complement_test_logs_gotestfmt.log (use an editor/terminal/pager that interprets ANSI colours and UTF-8 emojis)"
#fi
echo ""
echo ""
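The jq post-processing step in the updated script above can be tried on a few hand-written `go test -json` lines (the sample input here is invented for illustration): events are sorted by test name, then only `pass`/`fail`/`skip` actions with a non-null `Test` field survive, reduced to `{Action, Test}` pairs.

```shell
printf '%s\n' \
  '{"Time":"t1","Action":"output","Test":"TestB","Output":"ok"}' \
  '{"Time":"t2","Action":"pass","Test":"TestB"}' \
  '{"Time":"t3","Action":"fail","Test":"TestA"}' \
  | jq -s -c 'sort_by(.Test)[]' | jq -c '
      select(
          (.Action == "pass" or .Action == "fail" or .Action == "skip")
          and .Test != null
      ) | {Action: .Action, Test: .Test}'
# {"Action":"fail","Test":"TestA"}
# {"Action":"pass","Test":"TestB"}
```

Note that the `output` event for `TestB` is dropped entirely, and sorting by test name makes the results file order reproducible across runs.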


@@ -0,0 +1 @@
Fixed invites sent to other users in the same homeserver not being properly sent down sync. Users with missing or broken invites should clear their client caches after updating to make them appear.


@@ -1 +0,0 @@
The announcement checker will now announce errors it encounters in the first run to the admin room, plus a few other misc improvements. Contributed by @Jade


@@ -1 +0,0 @@
Fix the generated configuration containing uncommented optional sections. Contributed by @Jade


@@ -1 +0,0 @@
Fixed specification non-compliance when handling remote media errors. Contributed by @nex.

changelog.d/1349.feature (new file)

@@ -0,0 +1 @@
Introduce a resolver command to allow flushing a server from the cache or to flush the complete cache. Contributed by @Omar007


@@ -0,0 +1,67 @@
#!/usr/bin/env bash
set -xe
# If we have no $SERVER_NAME set, abort
if [ -z "$SERVER_NAME" ]; then
echo "SERVER_NAME is not set, aborting"
exit 1
fi
# If /complement/ca/ca.crt or /complement/ca/ca.key are missing, abort
if [ ! -f /complement/ca/ca.crt ] || [ ! -f /complement/ca/ca.key ]; then
echo "/complement/ca/ca.crt or /complement/ca/ca.key is missing, aborting"
exit 1
fi
# Add the root cert to the local trust store
echo 'Installing Complement CA certificate to local trust store'
cp /complement/ca/ca.crt /usr/local/share/ca-certificates/complement-ca.crt
update-ca-certificates
# Sign a certificate for our $SERVER_NAME
echo "Generating and signing certificate for $SERVER_NAME"
openssl genrsa -out "/$SERVER_NAME.key" 2048
echo "Generating CSR for $SERVER_NAME"
openssl req -new -sha256 \
-key "/$SERVER_NAME.key" \
-out "/$SERVER_NAME.csr" \
-subj "/C=US/ST=CA/O=Continuwuity, Inc./CN=$SERVER_NAME"\
-addext "subjectAltName=DNS:$SERVER_NAME"
openssl req -in "$SERVER_NAME.csr" -noout -text
echo "Signing certificate for $SERVER_NAME with Complement CA"
cat <<EOF > ./cert.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment, nonRepudiation
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.docker.internal
DNS.2 = hs1
DNS.3 = hs2
DNS.4 = hs3
DNS.5 = hs4
DNS.6 = $SERVER_NAME
IP.1 = 127.0.0.1
EOF
openssl x509 \
-req \
-in "/$SERVER_NAME.csr" \
-CA /complement/ca/ca.crt \
-CAkey /complement/ca/ca.key \
-CAcreateserial \
-out "/$SERVER_NAME.crt" \
-days 1 \
-sha256 \
-extfile ./cert.ext
# Tell continuwuity where to find the certs
export CONTINUWUITY_TLS__KEY="/$SERVER_NAME.key"
export CONTINUWUITY_TLS__CERTS="/$SERVER_NAME.crt"
# And who it is
export CONTINUWUITY_SERVER_NAME="$SERVER_NAME"
echo "Starting Continuwuity with SERVER_NAME=$SERVER_NAME"
# Start continuwuity
/usr/local/bin/conduwuit --config /etc/continuwuity/config.toml
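The sign-then-trust flow of the entrypoint above can be sketched as a self-contained demo (throwaway CA in a temp dir; the names `demo-ca` and `hs1` are invented stand-ins for the Complement CA and `$SERVER_NAME`): generate a CA, create a server key and CSR with a SAN, sign the CSR, and confirm the result verifies against the CA.

```shell
set -e
tmp="$(mktemp -d)"
cd "$tmp"
# 1. Throwaway CA, stand-in for /complement/ca/ca.{crt,key}
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -keyout ca.key -out ca.crt -subj "/CN=demo-ca"
# 2. Server key + CSR with a subjectAltName, as the entrypoint does for $SERVER_NAME
openssl req -new -newkey rsa:2048 -nodes \
    -keyout hs1.key -out hs1.csr \
    -subj "/CN=hs1" -addext "subjectAltName=DNS:hs1"
# 3. Sign the CSR with the throwaway CA
openssl x509 -req -in hs1.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out hs1.crt -days 1 -sha256
# 4. The chain should check out against the CA
openssl verify -CAfile ca.crt hs1.crt
```

On success `openssl verify` reports the certificate as OK, which is the same property the Complement containers rely on after `update-ca-certificates` installs the CA into the trust store.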


@@ -0,0 +1,53 @@
# ============================================= #
# Complement pre-filled configuration file #
#
# DANGER: THIS FILE FORCES INSECURE VALUES. #
# DO NOT USE OUTSIDE THE TEST SUITE ENV! #
# ============================================= #
[global]
address = "0.0.0.0"
allow_device_name_federation = true
allow_guest_registration = true
allow_public_room_directory_over_federation = true
allow_public_room_directory_without_auth = true
allow_registration = true
database_path = "/database"
log = "trace,h2=debug,hyper=debug"
port = [8008, 8448]
trusted_servers = []
only_query_trusted_key_servers = false
query_trusted_key_servers_first = false
query_trusted_key_servers_first_on_join = false
yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse = true
ip_range_denylist = []
url_preview_domain_contains_allowlist = ["*"]
url_preview_domain_explicit_denylist = ["*"]
media_compat_file_link = false
media_startup_check = true
prune_missing_media = true
log_colors = true
admin_room_notices = false
allow_check_for_updates = false
intentionally_unknown_config_option_for_testing = true
rocksdb_log_level = "info"
rocksdb_max_log_files = 1
rocksdb_recovery_mode = 0
rocksdb_paranoid_file_checks = true
log_guest_registrations = false
allow_legacy_media = true
startup_netburst = true
startup_netburst_keep = -1
allow_invalid_tls_certificates_yes_i_know_what_the_fuck_i_am_doing_with_this_and_i_know_this_is_insecure = true
dns_timeout = 60
dns_attempts = 20
request_conn_timeout = 60
request_timeout = 120
well_known_conn_timeout = 60
well_known_timeout = 60
federation_idle_timeout = 300
sender_timeout = 300
sender_idle_timeout = 300
sender_retry_backoff_limit = 300
[global.tls]
dual_protocol = true


@@ -48,7 +48,7 @@ EOF
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.16.7
ENV BINSTALL_VERSION=1.17.4
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree


@@ -0,0 +1,11 @@
FROM ubuntu:latest
EXPOSE 8008
EXPOSE 8448
RUN apt-get update && apt-get install -y ca-certificates liburing2 && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /etc/continuwuity /var/lib/continuwuity
COPY docker/complement-entrypoint.sh /usr/local/bin/complement-entrypoint.sh
COPY docker/complement.config.toml /etc/continuwuity/config.toml
COPY target/debug/conduwuit /usr/local/bin/conduwuit
RUN chmod +x /usr/local/bin/conduwuit /usr/local/bin/complement-entrypoint.sh
#HEALTHCHECK --interval=30s --timeout=5s CMD curl --fail http://localhost:8008/_continuwuity/server_version || exit 1
ENTRYPOINT ["/usr/local/bin/complement-entrypoint.sh"]


@@ -18,7 +18,7 @@ RUN --mount=type=cache,target=/etc/apk/cache apk add \
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.16.7
ENV BINSTALL_VERSION=1.17.4
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree


@@ -34,6 +34,14 @@
"name": "troubleshooting",
"label": "Troubleshooting"
},
"security",
{
"type": "dir-section-header",
"name": "community",
"label": "Community",
"collapsible": true,
"collapsed": false
},
{
"type": "divider"
},
@@ -63,7 +71,5 @@
},
{
"type": "divider"
},
"community",
"security"
}
]


@@ -19,16 +19,21 @@
{
"text": "Admin Command Reference",
"link": "/reference/admin/"
},
{
"text": "Server Reference",
"link": "/reference/server"
}
]
},
{
"text": "Community",
"link": "/community"
"items": [
{
"text": "Community Guidelines",
"link": "/community/guidelines"
},
{
"text": "Become a Partnered Homeserver!",
"link": "/community/ops-guidelines"
}
]
},
{
"text": "Security",

docs/community/_meta.json (new file)

@@ -0,0 +1,12 @@
[
{
"type": "file",
"name": "guidelines",
"label": "Community Guidelines"
},
{
"type": "file",
"name": "ops-guidelines",
"label": "Partnered Homeserver Guidelines"
}
]


@@ -0,0 +1,32 @@
# Partnered Homeserver Operator Requirements
> _So you want to be an officially sanctioned public Continuwuity homeserver operator?_
Thank you for your interest in the project! There's a few things we need from you first to make sure your homeserver meets our quality standards and that you are prepared to handle the additional workload introduced by operating a public chat service.
## Stuff you must have
if you don't do these things we will tell you to go away
- Your homeserver must be running an up-to-date version of Continuwuity
- You must have a CAPTCHA, external registration system, or apply-to-join system that provides one-time-use invite codes (we do not accept fully open nor static token registration)
- Your homeserver must have support details listed in [`/.well-known/matrix/support`](https://spec.matrix.org/v1.17/client-server-api/#getwell-knownmatrixsupport)
- Your rules and guidelines must align with [the project's own code of conduct](guidelines).
- You must be reasonably responsive (i.e. don't leave us hanging for a week if we alert you to an issue on your server)
- Your homeserver's community rooms (if any) must be protected by a moderation bot subscribed to policy lists like the Community Moderation Effort (you can get one from https://asgard.chat if you don't want to run your own)
## Stuff we encourage you to have
not strictly required but we will consider your request more strongly if you have it
- You should have automated moderation tooling that can automatically suspend abusive users on your homeserver who are added to policy lists
- You should have multiple server administrators (increased bus factor)
- You should have a terms of service and privacy policy prominently available
## Stuff you get
- Prominent listing in our README!
- A gold star sticker
- Access to a low noise room for more direct communication with maintainers and collaboration with fellow operators
- Read-only access to the continuwuity internal ban list
- Early notice of upcoming releases
## Sound good?
To get started, ping a team member in [our main chatroom](https://matrix.to/#/#continuwuity:continuwuity.org) and ask to be added to the list.


@@ -1,17 +1,18 @@
# RPM Installation Guide
Continuwuity is available as RPM packages for Fedora, RHEL, and compatible distributions.
Continuwuity is available as RPM packages for Fedora and compatible distributions.
We do not currently have infrastructure to build RPMs for RHEL and compatible distributions, but this is a work in progress.
The RPM packaging files are maintained in the `fedora/` directory:
- `continuwuity.spec.rpkg` - RPM spec file using rpkg macros for building from git
- `continuwuity.service` - Systemd service file for the server
- `RPM-GPG-KEY-continuwuity.asc` - GPG public key for verifying signed packages
RPM packages built by CI are signed with our GPG key (Ed25519, ID: `5E0FF73F411AAFCA`).
RPM packages built by CI are signed with our GPG key (RSA, ID: `6595 E8DB 9191 D39A 46D6 A514 4BA7 F590 DF0B AA1D`). # spellchecker:disable-line
```bash
# Import the signing key
sudo rpm --import https://forgejo.ellis.link/continuwuation/continuwuity/raw/branch/main/fedora/RPM-GPG-KEY-continuwuity.asc
sudo rpm --import https://forgejo.ellis.link/api/packages/continuwuation/rpm/repository.key
# Verify a downloaded package
rpm --checksig continuwuity-*.rpm
@@ -23,7 +24,7 @@ ## Installation methods
```bash
# Add the repository and install
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable/continuwuation.repo
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable.repo
sudo dnf install continuwuity
```
@@ -31,7 +32,7 @@ # Add the repository and install
```bash
# Add the dev repository and install
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev/continuwuation.repo
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev.repo
sudo dnf install continuwuity
```
@@ -39,23 +40,10 @@ # Add the dev repository and install
```bash
# Branch names are sanitized (slashes become hyphens, lowercase only)
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/tom-new-feature/continuwuation.repo
sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/tom-new-feature.repo
sudo dnf install continuwuity
```
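As a sketch, the sanitization rule described in the comment above (slashes become hyphens, lowercase only — an assumption based on that comment, not the exact server-side implementation) can be reproduced in shell:

```shell
# Reproduce the assumed branch-name sanitization: slashes -> hyphens, lowercase
branch="Tom/New-Feature"
sanitized=$(printf '%s' "$branch" | tr '/' '-' | tr '[:upper:]' '[:lower:]')
echo "$sanitized"   # -> tom-new-feature
```

Use the sanitized form in the repository URL, e.g. `.../rpm/tom-new-feature.repo`.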
**Direct installation** without adding repository
```bash
# Latest stable release
sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable/continuwuity
# Latest development build
sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev/continuwuity
# Specific feature branch
sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/branch-name/continuwuity
```
**Manual repository configuration** (alternative method)
```bash
@@ -65,7 +53,7 @@ # Specific feature branch
baseurl=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable
enabled=1
gpgcheck=1
gpgkey=https://forgejo.ellis.link/continuwuation/continuwuity/raw/branch/main/fedora/RPM-GPG-KEY-continuwuity.asc
gpgkey=https://forgejo.ellis.link/api/packages/continuwuation/rpm/repository.key
EOF
sudo dnf install continuwuity


@@ -19,6 +19,16 @@
src: /assets/logo.svg
alt: continuwuity logo
beforeFeatures:
- title: Matrix for Discord users
details: New to Matrix? Learn how Matrix compares to Discord
link: https://joinmatrix.org/guide/matrix-vs-discord/
buttonText: Find Out the Difference
- title: How Matrix Works
details: Learn how Matrix works under the hood, and what that means
link: https://matrix.org/docs/matrix-concepts/elements-of-matrix/
buttonText: Read the Guide
features:
- title: 🚀 High Performance
details: Built with Rust for exceptional speed and efficiency. Designed to run smoothly even on modest hardware.


@@ -6,10 +6,10 @@
"message": "Welcome to Continuwuity! Important announcements about the project will appear here."
},
{
"id": 8,
"id": 9,
"mention_room": false,
"date": "2026-01-12",
"message": "Hey everyone!\n\nJust letting you know we've released [v0.5.3](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.3) - this one is a bit of a hotfix for an issue with inviting and allowing others to join rooms.\n\nIf you appreceate the round-the-clock work we've been doing to keep your servers secure over this holiday period, we'd really appreciate your support - you can sponsor individuals on our team using the 'sponsor' button at the top of [our GitHub repository](https://github.com/continuwuity/continuwuity). If you can't do that, even a star helps - spreading the word and advocating for our project helps keep it going.\n\nHave a lovely rest of your year \\\n[Jade \\(she/her\\)](https://matrix.to/#/%40jade%3Aellis.link) \n🩵"
"date": "2026-02-09",
"message": "Yesterday we released [v0.5.4](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.4). Bugfixes, performance improvements and more moderation features! There's also a security fix, so please update as soon as possible. Don't forget to join [our announcements channel](https://matrix.to/#/!jIdNjSM5X-V5JVx2h2kAhUZIIQ08GyzPL55NFZAH1vM/%2489TY9CqRg4-ff1MGo3Ulc5r5X4pakfdzT-99RD8Docc?via=ellis.link&via=explodie.org&via=matrix.org) to get important information sooner <3 "
}
]
}


@@ -112,6 +112,19 @@ ### `!admin query resolver overrides-cache`
Query the overrides cache
### `!admin query resolver flush-cache`
Flush a given server from the resolver caches or flush them completely
* Examples:
* Flush a specific server:
`!admin query resolver flush-cache matrix.example.com`
* Flush all resolver caches completely:
`!admin query resolver flush-cache --all`
## `!admin query pusher`
pusher service


@@ -20,6 +20,16 @@ ### Lost access to admin room
## General potential issues
### Configuration not working as expected
Sometimes a mistake in your configuration means settings don't get
passed to Continuwuity correctly.
This is particularly easy to do with environment variables.
To check what configuration Continuwuity actually sees, you can
use the `!admin server show-config` command in your admin room.
Beware that this prints out any secrets in your configuration,
so you might want to delete the result afterwards!
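A minimal sketch of how an environment-variable mistake stays silent (the variable names here are hypothetical, chosen for illustration, not Continuwuity's actual option names):

```shell
# A typo in an environment variable name is silently ignored:
export CONTINUWUITY_ALOW_REGISTRATION=true   # note the typo: "ALOW"
# The server would look up the correctly spelled name and find nothing,
# so the option quietly keeps its default value:
value="${CONTINUWUITY_ALLOW_REGISTRATION:-unset}"
echo "$value"   # -> unset
```

This is exactly the kind of mismatch that `!admin server show-config` makes visible.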
### Potential DNS issues when using Docker
Docker's DNS setup for containers in a non-default network intercepts queries to
@@ -139,7 +149,7 @@ ### Database corruption
## Debugging
Note that users should not really be debugging things. If you find yourself
Note that users should not really need to debug things. If you find yourself
debugging and find the issue, please let us know what you found and/or how we can fix it.
Various debug commands can be found in `!admin debug`.
@@ -178,6 +188,31 @@ ### Pinging servers
and simply fetches a string on a static JSON endpoint. It is very low cost both
bandwidth and computationally.
### Enabling backtraces for errors
Continuwuity can capture backtraces (stack traces) for errors to help diagnose
issues. Backtraces show the exact sequence of function calls that led to an
error, which is invaluable for debugging.
To enable backtraces, set the `RUST_BACKTRACE` environment variable before starting Continuwuity:
```bash
# For both panics and errors
RUST_BACKTRACE=1 ./conduwuit
```
For systemd deployments, add this to your service file:
```ini
[Service]
Environment="RUST_BACKTRACE=1"
```
Backtrace capture has a performance cost. Avoid leaving it on.
You can also enable it only for panics by setting
`RUST_BACKTRACE=1` and `RUST_LIB_BACKTRACE=0`.
### Allocator memory stats
When using jemalloc with jemallocator's `stats` feature (`--enable-stats`), you

package-lock.json (generated): diff suppressed because it is too large


@@ -22,10 +22,9 @@
"license": "ISC",
"type": "commonjs",
"devDependencies": {
"@rspress/core": "^2.0.0-rc.1",
"@rspress/plugin-client-redirects": "^2.0.0-alpha.12",
"@rspress/plugin-preview": "^2.0.0-beta.35",
"@rspress/plugin-sitemap": "^2.0.0-beta.23",
"@rspress/core": "^2.0.0",
"@rspress/plugin-client-redirects": "^2.0.0",
"@rspress/plugin-sitemap": "^2.0.0",
"typescript": "^5.9.3"
}
}


@@ -1,5 +1,4 @@
import { defineConfig } from '@rspress/core';
import { pluginPreview } from '@rspress/plugin-preview';
import { pluginSitemap } from '@rspress/plugin-sitemap';
import { pluginClientRedirects } from '@rspress/plugin-client-redirects';
@@ -41,7 +40,7 @@ export default defineConfig({
},
},
plugins: [pluginPreview(), pluginSitemap({
plugins: [pluginSitemap({
siteUrl: 'https://continuwuity.org', // TODO: Set automatically in build pipeline
}),
pluginClientRedirects({
@@ -54,6 +53,9 @@ export default defineConfig({
}, {
from: '/server_reference',
to: '/reference/server'
}, {
from: '/community$',
to: '/community/guidelines'
}
]
})],


@@ -1,5 +1,5 @@
use clap::Subcommand;
use conduwuit::{Result, utils::time};
use conduwuit::{Err, Result, utils::time};
use futures::StreamExt;
use ruma::OwnedServerName;
@@ -7,6 +7,7 @@
#[admin_command_dispatch]
#[derive(Debug, Subcommand)]
#[allow(clippy::enum_variant_names)]
/// Resolver service and caches
pub enum ResolverCommand {
/// Query the destinations cache
@@ -18,6 +19,14 @@ pub enum ResolverCommand {
OverridesCache {
name: Option<String>,
},
/// Flush a specific server from the resolver caches or everything
FlushCache {
name: Option<OwnedServerName>,
#[arg(short, long)]
all: bool,
},
}
#[admin_command]
@@ -69,3 +78,18 @@ async fn overrides_cache(&self, server_name: Option<String>) -> Result {
Ok(())
}
#[admin_command]
async fn flush_cache(&self, name: Option<OwnedServerName>, all: bool) -> Result {
if all {
self.services.resolver.cache.clear().await;
writeln!(self, "Resolver caches cleared!").await
} else if let Some(name) = name {
self.services.resolver.cache.del_destination(&name);
self.services.resolver.cache.del_override(&name);
self.write_str(&format!("Cleared {name} from resolver caches!"))
.await
} else {
Err!("Missing name. Supply a name or use --all to flush the whole cache.")
}
}


@@ -3,10 +3,7 @@
fmt::Write as _,
};
use api::client::{
full_user_deactivate, join_room_by_id_helper, leave_all_rooms, leave_room, remote_leave_room,
update_avatar_url, update_displayname,
};
use api::client::{full_user_deactivate, join_room_by_id_helper, leave_room, remote_leave_room};
use conduwuit::{
Err, Result, debug, debug_warn, error, info, is_equal_to,
matrix::{Event, pdu::PduBuilder},
@@ -227,9 +224,6 @@ pub(super) async fn deactivate(&self, no_leave_rooms: bool, user_id: String) ->
full_user_deactivate(self.services, &user_id, &all_joined_rooms)
.boxed()
.await?;
update_displayname(self.services, &user_id, None, &all_joined_rooms).await;
update_avatar_url(self.services, &user_id, None, None, &all_joined_rooms).await;
leave_all_rooms(self.services, &user_id).await;
}
self.write_str(&format!("User {user_id} has been deactivated"))
@@ -406,10 +400,6 @@ pub(super) async fn deactivate_all(&self, no_leave_rooms: bool, force: bool) ->
full_user_deactivate(self.services, &user_id, &all_joined_rooms)
.boxed()
.await?;
update_displayname(self.services, &user_id, None, &all_joined_rooms).await;
update_avatar_url(self.services, &user_id, None, None, &all_joined_rooms)
.await;
leave_all_rooms(self.services, &user_id).await;
}
},
}


@@ -26,6 +26,7 @@
events::{
GlobalAccountDataEventType, StateEventType,
room::{
member::{MembershipState, RoomMemberEventContent},
message::RoomMessageEventContent,
power_levels::{RoomPowerLevels, RoomPowerLevelsEventContent},
},
@@ -815,9 +816,6 @@ pub(crate) async fn deactivate_route(
.collect()
.await;
super::update_displayname(&services, sender_user, None, &all_joined_rooms).await;
super::update_avatar_url(&services, sender_user, None, None, &all_joined_rooms).await;
full_user_deactivate(&services, sender_user, &all_joined_rooms)
.boxed()
.await?;
@@ -907,9 +905,6 @@ pub async fn full_user_deactivate(
) -> Result<()> {
services.users.deactivate_account(user_id).await.ok();
super::update_displayname(services, user_id, None, all_joined_rooms).await;
super::update_avatar_url(services, user_id, None, None, all_joined_rooms).await;
services
.users
.all_profile_keys(user_id)
@@ -918,9 +913,11 @@ pub async fn full_user_deactivate(
})
.await;
for room_id in all_joined_rooms {
let state_lock = services.rooms.state.mutex.lock(room_id).await;
// TODO: Rescind all user invites
let mut pdu_queue: Vec<(PduBuilder, &OwnedRoomId)> = Vec::new();
for room_id in all_joined_rooms {
let room_power_levels = services
.rooms
.state_accessor
@@ -948,30 +945,33 @@ pub async fn full_user_deactivate(
if user_can_demote_self {
let mut power_levels_content = room_power_levels.unwrap_or_default();
power_levels_content.users.remove(user_id);
// ignore errors so deactivation doesn't fail
match services
.rooms
.timeline
.build_and_append_pdu(
PduBuilder::state(String::new(), &power_levels_content),
user_id,
Some(room_id),
&state_lock,
)
.await
{
| Err(e) => {
warn!(%room_id, %user_id, "Failed to demote user's own power level: {e}");
},
| _ => {
info!("Demoted {user_id} in {room_id} as part of account deactivation");
},
}
let pl_evt = PduBuilder::state(String::new(), &power_levels_content);
pdu_queue.push((pl_evt, room_id));
}
// Leave the room
pdu_queue.push((
PduBuilder::state(user_id.to_string(), &RoomMemberEventContent {
avatar_url: None,
blurhash: None,
membership: MembershipState::Leave,
displayname: None,
join_authorized_via_users_server: None,
reason: None,
is_direct: None,
third_party_invite: None,
redact_events: None,
}),
room_id,
));
// TODO: Redact all messages sent by the user in the room
}
super::leave_all_rooms(services, user_id).boxed().await;
super::update_all_rooms(services, pdu_queue, user_id).await;
for room_id in all_joined_rooms {
services.rooms.state_cache.forget(room_id, user_id);
}
Ok(())
}


@@ -15,6 +15,7 @@
utils::{
self, shuffle,
stream::{IterStream, ReadyExt},
to_canonical_object,
},
warn,
};
@@ -1012,40 +1013,55 @@ async fn make_join_request(
trace!("make_join response: {:?}", make_join_response);
make_join_counter = make_join_counter.saturating_add(1);
if let Err(ref e) = make_join_response {
if matches!(
e.kind(),
ErrorKind::IncompatibleRoomVersion { .. } | ErrorKind::UnsupportedRoomVersion
) {
incompatible_room_version_count =
incompatible_room_version_count.saturating_add(1);
}
match make_join_response {
| Ok(response) => {
info!("Received make_join response from {remote_server}");
if let Err(e) = validate_remote_member_event_stub(
&MembershipState::Join,
sender_user,
room_id,
&to_canonical_object(&response.event)?,
) {
warn!("make_join response from {remote_server} failed validation: {e}");
continue;
}
make_join_response_and_server = Ok((response, remote_server.clone()));
break;
},
| Err(e) => {
info!("make_join request to {remote_server} failed: {e}");
if matches!(
e.kind(),
ErrorKind::IncompatibleRoomVersion { .. } | ErrorKind::UnsupportedRoomVersion
) {
incompatible_room_version_count =
incompatible_room_version_count.saturating_add(1);
}
if incompatible_room_version_count > 15 {
info!(
"15 servers have responded with M_INCOMPATIBLE_ROOM_VERSION or \
M_UNSUPPORTED_ROOM_VERSION, assuming that conduwuit does not support the \
room version {room_id}: {e}"
);
make_join_response_and_server =
Err!(BadServerResponse("Room version is not supported by Conduwuit"));
return make_join_response_and_server;
}
if incompatible_room_version_count > 15 {
info!(
"15 servers have responded with M_INCOMPATIBLE_ROOM_VERSION or \
M_UNSUPPORTED_ROOM_VERSION, assuming that conduwuit does not support \
the room version {room_id}: {e}"
);
make_join_response_and_server =
Err!(BadServerResponse("Room version is not supported by Conduwuit"));
return make_join_response_and_server;
}
if make_join_counter > 40 {
warn!(
"40 servers failed to provide valid make_join response, assuming no server \
can assist in joining."
);
make_join_response_and_server =
Err!(BadServerResponse("No server available to assist in joining."));
if make_join_counter > 40 {
warn!(
"40 servers failed to provide valid make_join response, assuming no \
server can assist in joining."
);
make_join_response_and_server =
Err!(BadServerResponse("No server available to assist in joining."));
return make_join_response_and_server;
}
return make_join_response_and_server;
}
},
}
make_join_response_and_server = make_join_response.map(|r| (r, remote_server.clone()));
if make_join_response_and_server.is_ok() {
break;
}


@@ -10,7 +10,7 @@
},
result::FlatOk,
trace,
utils::{self, shuffle, stream::IterStream},
utils::{self, shuffle, stream::IterStream, to_canonical_object},
warn,
};
use futures::{FutureExt, StreamExt};
@@ -741,6 +741,17 @@ async fn make_knock_request(
trace!("make_knock response: {make_knock_response:?}");
make_knock_counter = make_knock_counter.saturating_add(1);
if let Ok(r) = &make_knock_response {
if let Err(e) = validate_remote_member_event_stub(
&MembershipState::Knock,
sender_user,
room_id,
&to_canonical_object(&r.event)?,
) {
warn!("make_knock response from {remote_server} failed validation: {e}");
continue;
}
}
make_knock_response_and_server = make_knock_response.map(|r| (r, remote_server.clone()));


@@ -231,7 +231,7 @@ pub(crate) fn validate_remote_member_event_stub(
};
if event_membership != &membership.as_str() {
return Err!(BadServerResponse(
"Remote server returned member event with incorrect room_id"
"Remote server returned member event with incorrect membership type"
));
}


@@ -2,7 +2,7 @@
use axum::extract::State;
use conduwuit::{
Event, PduCount, Result,
Err, Event, PduCount, Result, info,
result::LogErr,
utils::{IterStream, ReadyExt, stream::TryTools},
};
@@ -34,6 +34,18 @@ pub(crate) async fn get_backfill_route(
}
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve backfill for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let limit = body
.limit


@@ -1,5 +1,5 @@
use axum::extract::State;
use conduwuit::{Result, err};
use conduwuit::{Err, Result, err, info};
use ruma::{MilliSecondsSinceUnixEpoch, RoomId, api::federation::event::get_event};
use super::AccessCheck;
@@ -38,6 +38,19 @@ pub(crate) async fn get_event_route(
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
Ok(get_event::v1::Response {
origin: services.globals.server_name().to_owned(),
origin_server_ts: MilliSecondsSinceUnixEpoch::now(),


@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};
use axum::extract::State;
use conduwuit::{Error, Result, utils::stream::ReadyExt};
use conduwuit::{Err, Error, Result, info, utils::stream::ReadyExt};
use futures::StreamExt;
use ruma::{
RoomId,
@@ -29,6 +29,19 @@ pub(crate) async fn get_event_authorization_route(
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let event = services
.rooms
.timeline


@@ -1,5 +1,5 @@
use axum::extract::State;
use conduwuit::{Result, debug, debug_error, utils::to_canonical_object};
use conduwuit::{Err, Result, debug, debug_error, info, utils::to_canonical_object};
use ruma::api::federation::event::get_missing_events;
use super::AccessCheck;
@@ -26,6 +26,19 @@ pub(crate) async fn get_missing_events_route(
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let limit = body
.limit
.try_into()
@@ -66,12 +79,12 @@ pub(crate) async fn get_missing_events_route(
continue;
}
i = i.saturating_add(1);
let Ok(event) = to_canonical_object(&pdu) else {
debug_error!(
body.origin = body.origin.as_ref().map(tracing::field::display),
"Failed to convert PDU in database to canonical JSON: {pdu:?}"
);
i = i.saturating_add(1);
continue;
};


@@ -1,6 +1,6 @@
use axum::extract::State;
use conduwuit::{
Err, Result,
Err, Result, info,
utils::stream::{BroadbandExt, IterStream},
};
use conduwuit_service::rooms::spaces::{
@@ -23,6 +23,19 @@ pub(crate) async fn get_hierarchy_route(
return Err!(Request(NotFound("Room does not exist.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let room_id = &body.room_id;
let suggested_only = body.suggested_only;
let ref identifier = Identifier::ServerName(body.origin());


@@ -30,6 +30,18 @@ pub(crate) async fn create_join_event_template_route(
if !services.rooms.metadata.exists(&body.room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_join for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
if body.user_id.server_name() != body.origin() {
return Err!(Request(BadJson("Not allowed to join on behalf of another server/user.")));


@@ -1,6 +1,6 @@
use RoomVersionId::*;
use axum::extract::State;
use conduwuit::{Err, Error, Result, debug_warn, matrix::pdu::PduBuilder, warn};
use conduwuit::{Err, Error, Result, debug_warn, info, matrix::pdu::PduBuilder, warn};
use ruma::{
RoomVersionId,
api::{client::error::ErrorKind, federation::knock::create_knock_event_template},
@@ -20,6 +20,18 @@ pub(crate) async fn create_knock_event_template_route(
if !services.rooms.metadata.exists(&body.room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_knock for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
if body.user_id.server_name() != body.origin() {
return Err!(Request(BadJson("Not allowed to knock on behalf of another server/user.")));


@@ -1,5 +1,5 @@
use axum::extract::State;
use conduwuit::{Err, Result, matrix::pdu::PduBuilder};
use conduwuit::{Err, Result, info, matrix::pdu::PduBuilder};
use ruma::{
api::federation::membership::prepare_leave_event,
events::room::member::{MembershipState, RoomMemberEventContent},
@@ -20,6 +20,19 @@ pub(crate) async fn create_leave_event_template_route(
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_leave for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
if body.user_id.server_name() != body.origin() {
return Err!(Request(Forbidden(
"Not allowed to leave on behalf of another server/user."


@@ -36,6 +36,18 @@ async fn create_join_event(
if !services.rooms.metadata.exists(room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = origin.as_str(),
"Refusing to serve send_join for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
// ACL check origin server
services


@@ -1,6 +1,6 @@
use axum::extract::State;
use conduwuit::{
Err, Result, err,
Err, Result, err, info,
matrix::{event::gen_event_id_canonical_json, pdu::PduEvent},
warn,
};
@@ -54,6 +54,19 @@ pub(crate) async fn create_knock_event_v1_route(
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve send_knock for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
// ACL check origin server
services
.rooms


@@ -1,7 +1,7 @@
#![allow(deprecated)]
use axum::extract::State;
use conduwuit::{Err, Result, err, matrix::event::gen_event_id_canonical_json};
use conduwuit::{Err, Result, err, info, matrix::event::gen_event_id_canonical_json};
use conduwuit_service::Services;
use futures::FutureExt;
use ruma::{
@@ -50,6 +50,19 @@ async fn create_leave_event(
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = origin.as_str(),
"Refusing to serve backfill for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
// ACL check origin
services
.rooms


@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};
use axum::extract::State;
use conduwuit::{Result, at, err, utils::IterStream};
use conduwuit::{Err, Result, at, err, info, utils::IterStream};
use futures::{FutureExt, StreamExt, TryStreamExt};
use ruma::{OwnedEventId, api::federation::event::get_room_state};
@@ -24,6 +24,19 @@ pub(crate) async fn get_room_state_route(
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let shortstatehash = services
.rooms
.state_accessor


@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};
use axum::extract::State;
use conduwuit::{Result, at, err};
use conduwuit::{Err, Result, at, err, info};
use futures::{StreamExt, TryStreamExt};
use ruma::{OwnedEventId, api::federation::event::get_room_state_ids};
@@ -25,6 +25,19 @@ pub(crate) async fn get_room_state_ids_route(
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let shortstatehash = services
.rooms
.state_accessor


@@ -1,6 +1,6 @@
use conduwuit::{Err, Result, implement, is_false};
use conduwuit_service::Services;
use futures::{FutureExt, StreamExt, future::OptionFuture, join};
use futures::{FutureExt, future::OptionFuture, join};
use ruma::{EventId, RoomId, ServerName};
pub(super) struct AccessCheck<'a> {
@@ -31,15 +31,6 @@ pub(super) async fn check(&self) -> Result {
.state_cache
.server_in_room(self.origin, self.room_id);
// if any user on our homeserver is trying to knock this room, we'll need to
// acknowledge bans or leaves
let user_is_knocking = self
.services
.rooms
.state_cache
.room_members_knocked(self.room_id)
.count();
let server_can_see: OptionFuture<_> = self
.event_id
.map(|event_id| {
@@ -51,14 +42,14 @@ pub(super) async fn check(&self) -> Result {
})
.into();
let (world_readable, server_in_room, server_can_see, acl_check, user_is_knocking) =
join!(world_readable, server_in_room, server_can_see, acl_check, user_is_knocking);
let (world_readable, server_in_room, server_can_see, acl_check) =
join!(world_readable, server_in_room, server_can_see, acl_check);
if !acl_check {
return Err!(Request(Forbidden("Server access denied.")));
}
if !world_readable && !server_in_room && user_is_knocking == 0 {
if !world_readable && !server_in_room {
return Err!(Request(Forbidden("Server is not in room.")));
}


@@ -14,6 +14,25 @@
impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
let status = self.status_code();
if status.is_server_error() {
error!(
error = %self,
error_debug = ?self,
kind = ?self.kind(),
status = %status,
"Server error"
);
} else if status.is_client_error() {
use crate::debug_error;
debug_error!(
error = %self,
kind = ?self.kind(),
status = %status,
"Client error"
);
}
let response: UiaaResponse = self.into();
response
.try_into_http_response::<BytesMut>()


@@ -7,6 +7,7 @@
mod clap;
mod logging;
mod mods;
mod panic;
mod restart;
mod runtime;
mod sentry;
@@ -19,6 +20,8 @@
pub use crate::clap::Args;
pub fn run() -> Result<()> {
panic::init();
let args = clap::parse();
run_with_args(&args)
}

src/main/panic.rs (new file, 34 lines)

@@ -0,0 +1,34 @@
use std::{backtrace::Backtrace, panic};
/// Initialize the panic hook to capture backtraces at the point of panic.
/// This is needed to capture the backtrace before the unwind destroys it.
pub(crate) fn init() {
let default_hook = panic::take_hook();
panic::set_hook(Box::new(move |info| {
let backtrace = Backtrace::force_capture();
let location_str = info.location().map_or_else(String::new, |loc| {
format!(" at {}:{}:{}", loc.file(), loc.line(), loc.column())
});
let message = if let Some(s) = info.payload().downcast_ref::<&str>() {
(*s).to_owned()
} else if let Some(s) = info.payload().downcast_ref::<String>() {
s.clone()
} else {
"Box<dyn Any>".to_owned()
};
let thread_name = std::thread::current()
.name()
.map_or_else(|| "<unnamed>".to_owned(), ToOwned::to_owned);
eprintln!(
"\nthread '{thread_name}' panicked{location_str}: \
{message}\n\nBacktrace:\n{backtrace}"
);
default_hook(info);
}));
}


@@ -118,7 +118,6 @@ webpage.optional = true
blurhash.workspace = true
blurhash.optional = true
recaptcha-verify = { version = "0.1.5", default-features = false }
indexmap.workspace = true
[target.'cfg(all(unix, target_os = "linux"))'.dependencies]
sd-notify.workspace = true


@@ -406,6 +406,10 @@ pub async fn get_admins(&self) -> Vec<OwnedUserId> {
/// Checks whether a given user is an admin of this server
pub async fn user_is_admin(&self, user_id: &UserId) -> bool {
if self.services.globals.server_user == user_id {
return true;
}
if self
.services
.server


@@ -1,7 +1,7 @@
use std::{cmp, collections::HashMap};
use std::{cmp, collections::HashMap, future::ready};
use conduwuit::{
Err, Pdu, Result, debug, debug_info, debug_warn, error, info,
Err, Event, Pdu, Result, debug, debug_info, debug_warn, error, info,
result::NotFound,
utils::{
IterStream, ReadyExt,
@@ -15,8 +15,9 @@
use ruma::{
OwnedRoomId, OwnedUserId, RoomId, UserId,
events::{
GlobalAccountDataEventType, StateEventType, push_rules::PushRulesEvent,
room::member::MembershipState,
AnyStrippedStateEvent, GlobalAccountDataEventType, StateEventType,
push_rules::PushRulesEvent,
room::member::{MembershipState, RoomMemberEventContent},
},
push::Ruleset,
serde::Raw,
@@ -162,6 +163,14 @@ async fn migrate(services: &Services) -> Result<()> {
populate_userroomid_leftstate_table(services).await?;
}
if db["global"]
.get(FIXED_LOCAL_INVITE_STATE_MARKER)
.await
.is_not_found()
{
fix_local_invite_state(services).await?;
}
assert_eq!(
services.globals.db.database_version().await,
DATABASE_VERSION,
@@ -721,3 +730,46 @@ async fn populate_userroomid_leftstate_table(services: &Services) -> Result {
db.db.sort()?;
Ok(())
}
const FIXED_LOCAL_INVITE_STATE_MARKER: &str = "fix_local_invite_state";
async fn fix_local_invite_state(services: &Services) -> Result {
// Clean up the effects of !1249 by caching stripped state for invites
type KeyVal<'a> = (Key<'a>, Raw<Vec<AnyStrippedStateEvent>>);
type Key<'a> = (&'a UserId, &'a RoomId);
let db = &services.db;
let cork = db.cork_and_sync();
let userroomid_invitestate = services.db["userroomid_invitestate"].clone();
// for each user invited to a room
let fixed = userroomid_invitestate.stream()
// if they're a local user on this homeserver
.try_filter(|((user_id, _), _): &KeyVal<'_>| ready(services.globals.user_is_local(user_id)))
.and_then(async |((user_id, room_id), stripped_state): KeyVal<'_>| Ok::<_, conduwuit::Error>((user_id.to_owned(), room_id.to_owned(), stripped_state.deserialize()?)))
.try_fold(0_usize, async |mut fixed, (user_id, room_id, stripped_state)| {
// and their invite state is None
if stripped_state.is_empty()
// and they are actually invited to the room
&& let Ok(membership_event) = services.rooms.state_accessor.room_state_get(&room_id, &StateEventType::RoomMember, user_id.as_str()).await
&& membership_event.get_content::<RoomMemberEventContent>().is_ok_and(|content| content.membership == MembershipState::Invite)
// and the invite was sent by a local user
&& services.globals.user_is_local(&membership_event.sender) {
// build and save stripped state for their invite in the database
let stripped_state = services.rooms.state.summary_stripped(&membership_event, &room_id).await;
userroomid_invitestate.put((&user_id, &room_id), Json(stripped_state));
fixed = fixed.saturating_add(1);
}
Ok(fixed)
})
.await?;
drop(cork);
info!(?fixed, "Fixed local invite state cache entries.");
db["global"].insert(FIXED_LOCAL_INVITE_STATE_MARKER, []);
db.db.sort()?;
Ok(())
}
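The migration above follows a marker-gated pattern: check for a key in the `global` table, run the fix exactly once, then persist the marker only after the fix completes, so a crash mid-migration re-runs it on the next boot. A minimal sketch of that pattern, using a plain `HashMap` in place of the database (`run_migration` and `MARKER` are illustrative names, not from the codebase):

```rust
use std::collections::HashMap;

const MARKER: &str = "fix_local_invite_state";

/// Run `fix` at most once, recording completion under `MARKER`.
/// `global` stands in for the real `db["global"]` table.
fn run_migration(global: &mut HashMap<String, Vec<u8>>, fix: impl FnOnce()) -> bool {
    if global.contains_key(MARKER) {
        return false; // already applied on a previous startup
    }
    fix();
    // Write the marker only after the fix succeeds, so an interrupted
    // migration is retried on the next startup.
    global.insert(MARKER.to_owned(), Vec::new());
    true
}

fn main() {
    let mut global = HashMap::new();
    let mut runs = 0;
    assert!(run_migration(&mut global, || runs += 1));
    assert!(!run_migration(&mut global, || runs += 1));
    assert_eq!(runs, 1);
}
```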


@@ -4,8 +4,8 @@
};
use conduwuit::{
-Err, Event, Result, debug::INFO_SPAN_LEVEL, defer, err, implement, utils::stream::IterStream,
-warn,
+Err, Event, Result, debug::INFO_SPAN_LEVEL, defer, err, implement, info,
+utils::stream::IterStream, warn,
};
use futures::{
FutureExt, TryFutureExt, TryStreamExt,
@@ -70,7 +70,7 @@ pub async fn handle_incoming_pdu<'a>(
return Err!(Request(TooLarge("PDU is too large")));
}
-// 1.1 Check the server is in the room
+// 1.1 Check we even know about the room
let meta_exists = self.services.metadata.exists(room_id).map(Ok);
// 1.2 Check if the room is disabled
@@ -114,6 +114,19 @@ pub async fn handle_incoming_pdu<'a>(
return Err!(Request(Forbidden("Federation of this room is disabled by this server.")));
}
if !self
.services
.state_cache
.server_in_room(self.services.globals.server_name(), room_id)
.await
{
info!(
%origin,
"Dropping inbound PDU for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}
let (incoming_pdu, val) = self
.handle_outlier_pdu(origin, create_event, event_id, room_id, value, false)
.await?;


@@ -1,7 +1,7 @@
use std::borrow::Borrow;
use conduwuit::{
-Result, err, implement,
+Pdu, Result, err, implement,
matrix::{Event, StateKey},
};
use futures::{Stream, StreamExt, TryFutureExt};
@@ -84,7 +84,7 @@ pub async fn room_state_get(
room_id: &RoomId,
event_type: &StateEventType,
state_key: &str,
-) -> Result<impl Event> {
+) -> Result<Pdu> {
self.services
.state
.get_room_shortstatehash(room_id)


@@ -1,4 +1,4 @@
-use conduwuit::{implement, utils::stream::ReadyExt};
+use conduwuit::{implement, utils::stream::ReadyExt, warn};
use futures::StreamExt;
use ruma::{
EventId, RoomId, ServerName,
@@ -19,7 +19,12 @@ pub async fn server_can_see_event(
event_id: &EventId,
) -> bool {
let Ok(shortstatehash) = self.pdu_shortstatehash(event_id).await else {
-return true;
+warn!(
+"Unable to visibility check event {} in room {} for server {}: shortstatehash not \
+found",
+event_id, room_id, origin
+);
+return false;
};
let history_visibility = self
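This change makes the visibility check fail closed: when an event's state snapshot (`shortstatehash`) cannot be found, the server now denies visibility instead of allowing it. The shape of that fix can be sketched in isolation; `StateLookup` and `server_can_see` are illustrative stand-ins, not the server's actual API:

```rust
/// Outcome of looking up the state snapshot for an event.
enum StateLookup {
    Found { world_readable: bool },
    Missing,
}

/// Fail-closed visibility check: if we cannot determine the history
/// visibility at the event, deny rather than allow.
fn server_can_see(lookup: StateLookup) -> bool {
    match lookup {
        StateLookup::Found { world_readable } => world_readable,
        // Before the fix this arm effectively returned `true`, leaking
        // events whose state snapshot was never stored.
        StateLookup::Missing => false,
    }
}

fn main() {
    assert!(server_can_see(StateLookup::Found { world_readable: true }));
    assert!(!server_can_see(StateLookup::Found { world_readable: false }));
    assert!(!server_can_see(StateLookup::Missing));
}
```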


@@ -30,6 +30,7 @@ struct Services {
config: Dep<config::Service>,
globals: Dep<globals::Service>,
metadata: Dep<rooms::metadata::Service>,
state: Dep<rooms::state::Service>,
state_accessor: Dep<rooms::state_accessor::Service>,
users: Dep<users::Service>,
}
@@ -64,6 +65,7 @@ fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {
config: args.depend::<config::Service>("config"),
globals: args.depend::<globals::Service>("globals"),
metadata: args.depend::<rooms::metadata::Service>("rooms::metadata"),
state: args.depend::<rooms::state::Service>("rooms::state"),
state_accessor: args
.depend::<rooms::state_accessor::Service>("rooms::state_accessor"),
users: args.depend::<users::Service>("users"),


@@ -118,10 +118,8 @@ pub async fn update_membership(
self.mark_as_joined(user_id, room_id);
},
| MembershipState::Invite => {
-// TODO: make sure that passing None for `last_state` is correct behavior.
-// the call from `append_pdu` used to use `services.state.summary_stripped`
-// to fill that parameter.
-self.mark_as_invited(user_id, room_id, pdu.sender(), None, None)
+let last_state = self.services.state.summary_stripped(pdu, room_id).await;
+self.mark_as_invited(user_id, room_id, pdu.sender(), Some(last_state), None)
.await?;
},
| MembershipState::Leave | MembershipState::Ban => {


@@ -233,6 +233,16 @@ pub async fn try_auth(
| AuthData::Dummy(_) => {
uiaainfo.completed.push(AuthType::Dummy);
},
| AuthData::FallbackAcknowledgement(_) => {
// The client is checking if authentication has succeeded out-of-band. This is
// possible if the client is using "fallback auth" (see spec section
// 4.9.1.4), which we don't support (and probably never will, because it's a
// disgusting hack).
// Return early to tell the client that no, authentication did not succeed while
// it wasn't looking.
return Ok((false, uiaainfo));
},
| k => error!("type not supported: {:?}", k),
}


@@ -304,7 +304,11 @@ pub fn disable_login(&self, user_id: &UserId) {
pub fn enable_login(&self, user_id: &UserId) { self.db.userid_logindisabled.remove(user_id); }
pub async fn is_login_disabled(&self, user_id: &UserId) -> bool {
-self.db.userid_logindisabled.contains(user_id).await
+self.db
+.userid_logindisabled
+.exists(user_id.as_str())
+.await
+.is_ok()
}
/// Check if account is active, infallible


@@ -1,19 +0,0 @@
[package]
name = "stitcher"
description = "An implementation of stitched ordering (https://codeberg.org/andybalaam/stitched-order)"
edition.workspace = true
license.workspace = true
readme.workspace = true
repository.workspace = true
version.workspace = true
[lib]
path = "mod.rs"
[dependencies]
indexmap.workspace = true
itertools.workspace = true
[dev-dependencies]
peg = "0.8.5"
rustyline = { version = "17.0.2", default-features = false }


@@ -1,141 +0,0 @@
use std::collections::HashSet;
use indexmap::IndexSet;
use itertools::Itertools;
use super::{Batch, Gap, OrderKey, StitchedItem, StitcherBackend};
/// Updates to a gap in the stitched order.
#[derive(Debug)]
pub struct GapUpdate<'id, K: OrderKey> {
/// The opaque key of the gap to update.
pub key: K,
/// The new contents of the gap. If this is empty, the gap should be
/// deleted.
pub gap: Gap,
/// New items to insert after the gap. These items _should not_ be
/// synchronized to clients.
pub inserted_items: Vec<StitchedItem<'id>>,
}
/// Updates to the stitched order.
#[derive(Debug)]
pub struct OrderUpdates<'id, K: OrderKey> {
/// Updates to individual gaps. The items inserted by these updates _should
/// not_ be synchronized to clients.
pub gap_updates: Vec<GapUpdate<'id, K>>,
/// New items to append to the end of the order. These items _should_ be
/// synchronized to clients.
pub new_items: Vec<StitchedItem<'id>>,
// The subset of events in the batch which got slotted into an existing gap. This is tracked
// for unit testing and may eventually be sent to clients.
pub events_added_to_gaps: HashSet<&'id str>,
}
/// The stitcher, which implements the stitched ordering algorithm.
/// Its primary method is [`Stitcher::stitch`].
pub struct Stitcher<'backend, B: StitcherBackend> {
backend: &'backend B,
}
impl<B: StitcherBackend> Stitcher<'_, B> {
/// Create a new [`Stitcher`] given a [`StitcherBackend`].
pub fn new(backend: &B) -> Stitcher<'_, B> { Stitcher { backend } }
/// Given a [`Batch`], compute the [`OrderUpdates`] which should be made to
/// the stitched order to incorporate that batch. It is the responsibility
/// of the caller to apply the updates.
pub fn stitch<'id>(&self, batch: &Batch<'id>) -> OrderUpdates<'id, B::Key> {
let mut gap_updates = Vec::new();
let mut events_added_to_gaps: HashSet<&'id str> = HashSet::new();
// Events in the batch which haven't been fitted into a gap or appended to the
// end yet.
let mut remaining_events: IndexSet<_> = batch.events().collect();
// 1: Find existing gaps which include IDs of events in `batch`
let matching_gaps = self.backend.find_matching_gaps(batch.events());
// Repeat steps 2-9 for each matching gap
for (key, mut gap) in matching_gaps {
// 2. Find events in `batch` which are mentioned in `gap`
let matching_events = remaining_events.iter().filter(|id| gap.contains(**id));
// Extend `events_added_to_gaps` with the matching events, which are destined to
// be slotted into gaps.
events_added_to_gaps.extend(matching_events.clone());
// 3. Create the to-insert list from the predecessor sets of each matching event
let events_to_insert: Vec<_> = matching_events
.filter_map(|event| batch.predecessors(event))
.flat_map(|predecessors| predecessors.predecessor_set.iter())
.filter(|event| remaining_events.contains(*event))
.copied()
.collect();
// 4. Remove the events in the to-insert list from `remaining_events` so they
// aren't processed again
remaining_events.retain(|event| !events_to_insert.contains(event));
// 5 and 6
let inserted_items = self.sort_events_and_create_gaps(batch, events_to_insert);
// 8. Update gap
gap.retain(|id| !batch.contains(id));
// 7 and 9. Append to-insert list and delete gap if empty
// The actual work of mutating the order is handled by the callee,
// we just record an update to make.
gap_updates.push(GapUpdate { key: key.clone(), gap, inserted_items });
}
// 10. Append remaining events and gaps
let new_items = self.sort_events_and_create_gaps(batch, remaining_events);
OrderUpdates {
gap_updates,
new_items,
events_added_to_gaps,
}
}
fn sort_events_and_create_gaps<'id>(
&self,
batch: &Batch<'id>,
events_to_insert: impl IntoIterator<Item = &'id str>,
) -> Vec<StitchedItem<'id>> {
// 5. Sort the to-insert list with DAG;received order
let events_to_insert = events_to_insert
.into_iter()
.sorted_by(batch.compare_by_dag_received())
.collect_vec();
// allocate 1.5x the size of the to-insert list
let items_capacity = events_to_insert
.capacity()
.saturating_add(events_to_insert.capacity().div_euclid(2));
let mut items = Vec::with_capacity(items_capacity);
for event in events_to_insert {
let missing_prev_events: HashSet<String> = batch
.predecessors(event)
.expect("events in to_insert should be in batch")
.prev_events
.iter()
.filter(|prev_event| {
!(batch.contains(prev_event) || self.backend.event_exists(prev_event))
})
.map(|id| String::from(*id))
.collect();
if !missing_prev_events.is_empty() {
items.push(StitchedItem::Gap(missing_prev_events));
}
items.push(StitchedItem::Event(event));
}
items
}
}
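The gap-creation loop in `sort_events_and_create_gaps` above emits a `Gap` of still-missing `prev_events` immediately before the event that references them. A standalone sketch of that step, with `Item` standing in for `StitchedItem` (the input is assumed to be already sorted in DAG;received order):

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum Item {
    Event(&'static str),
    Gap(Vec<&'static str>),
}

/// For each event, insert a gap for any prev_events that are neither in
/// the batch itself nor already known to the backend.
fn place_with_gaps(
    ordered: &[(&'static str, Vec<&'static str>)],
    known: &HashSet<&str>,
) -> Vec<Item> {
    let batch: HashSet<&str> = ordered.iter().map(|(id, _)| *id).collect();
    let mut items = Vec::new();
    for (id, prevs) in ordered {
        let missing: Vec<&'static str> = prevs
            .iter()
            .copied()
            .filter(|p| !batch.contains(*p) && !known.contains(*p))
            .collect();
        if !missing.is_empty() {
            items.push(Item::Gap(missing));
        }
        items.push(Item::Event(*id));
    }
    items
}

fn main() {
    // B7 --> B6, B8 --> B7: B6 is unknown, so a gap precedes B7.
    let known = HashSet::new();
    let order = place_with_gaps(&[("B7", vec!["B6"]), ("B8", vec!["B7"])], &known);
    assert_eq!(
        order,
        vec![Item::Gap(vec!["B6"]), Item::Event("B7"), Item::Event("B8")]
    );
}
```

This mirrors the `recovering_after_netsplit` test case below, where `-B6*` appears immediately before `B7*`.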


@@ -1,88 +0,0 @@
use std::collections::HashSet;
use rustyline::{DefaultEditor, Result, error::ReadlineError};
use stitcher::{Batch, EventEdges, Stitcher, memory_backend::MemoryStitcherBackend};
const BANNER: &str = "
stitched ordering test repl
- append an event by typing its name: `A`
- to add prev events, type an arrow and then space-separated event names: `A --> B C D`
- to add multiple events at once, separate them with commas
- use `/reset` to clear the ordering
Ctrl-D to exit, Ctrl-C to clear input
"
.trim_ascii();
enum Command<'line> {
AppendEvents(EventEdges<'line>),
ResetOrder,
}
peg::parser! {
// partially copied from the test case parser
grammar command_parser() for str {
/// Parse whitespace.
rule _ -> () = quiet! { $([' '])* {} }
/// Parse an event ID.
rule event_id() -> &'input str
= quiet! { id:$([char if char.is_ascii_alphanumeric() || ['_', '-'].contains(&char)]+) { id } }
/ expected!("an event ID containing only [a-zA-Z0-9_-]")
/// Parse an event and its prev events.
rule event() -> (&'input str, HashSet<&'input str>)
= id:event_id() prev_events:(_ "-->" _ id:(event_id() ++ _) { id })? {
(id, prev_events.into_iter().flatten().collect())
}
pub rule command() -> Command<'input> =
"/reset" { Command::ResetOrder }
/ events:event() ++ (_ "," _) { Command::AppendEvents(events.into_iter().collect()) }
}
}
fn main() -> Result<()> {
let mut backend = MemoryStitcherBackend::default();
let mut reader = DefaultEditor::new()?;
println!("{BANNER}");
loop {
match reader.readline("> ") {
| Ok(line) => match command_parser::command(&line) {
| Ok(Command::AppendEvents(events)) => {
let batch = Batch::from_edges(&events);
let stitcher = Stitcher::new(&backend);
let updates = stitcher.stitch(&batch);
for update in &updates.gap_updates {
println!("update to gap {}:", update.key);
println!(" new gap contents: {:?}", update.gap);
println!(" inserted items: {:?}", update.inserted_items);
}
println!("events added to gaps: {:?}", &updates.events_added_to_gaps);
println!();
println!("items to sync: {:?}", &updates.new_items);
backend.extend(updates);
println!("order: {backend:?}");
},
| Ok(Command::ResetOrder) => {
backend.clear();
println!("order cleared.");
},
| Err(parse_error) => {
println!("parse error!! {parse_error}");
},
},
| Err(ReadlineError::Interrupted) => {
println!("interrupt");
},
| Err(ReadlineError::Eof) => {
println!("goodbye :3");
break Ok(());
},
| Err(err) => break Err(err),
}
}
}


@@ -1,130 +0,0 @@
use std::{
fmt::Debug,
sync::atomic::{AtomicU64, Ordering},
};
use crate::{Gap, OrderUpdates, StitchedItem, StitcherBackend};
/// A version of [`StitchedItem`] which owns event IDs.
#[derive(Debug)]
enum MemoryStitcherItem {
Event(String),
Gap(Gap),
}
impl From<StitchedItem<'_>> for MemoryStitcherItem {
fn from(value: StitchedItem) -> Self {
match value {
| StitchedItem::Event(id) => MemoryStitcherItem::Event(id.to_string()),
| StitchedItem::Gap(gap) => MemoryStitcherItem::Gap(gap),
}
}
}
impl<'id> From<&'id MemoryStitcherItem> for StitchedItem<'id> {
fn from(value: &'id MemoryStitcherItem) -> Self {
match value {
| MemoryStitcherItem::Event(id) => StitchedItem::Event(id),
| MemoryStitcherItem::Gap(gap) => StitchedItem::Gap(gap.clone()),
}
}
}
/// A stitcher backend which holds a stitched ordering in RAM.
#[derive(Default)]
pub struct MemoryStitcherBackend {
items: Vec<(u64, MemoryStitcherItem)>,
counter: AtomicU64,
}
impl MemoryStitcherBackend {
fn next_id(&self) -> u64 { self.counter.fetch_add(1, Ordering::Relaxed) }
/// Extend this ordering with new updates.
pub fn extend(&mut self, results: OrderUpdates<'_, <Self as StitcherBackend>::Key>) {
for update in results.gap_updates {
let Some(gap_index) = self.items.iter().position(|(key, _)| *key == update.key)
else {
panic!("bad update key {}", update.key);
};
let insertion_index = if update.gap.is_empty() {
self.items.remove(gap_index);
gap_index
} else {
match self.items.get_mut(gap_index) {
| Some((_, MemoryStitcherItem::Gap(gap))) => {
*gap = update.gap;
},
| Some((key, other)) => {
panic!("expected item with key {key} to be a gap, it was {other:?}");
},
| None => unreachable!("we just checked that this index is valid"),
}
gap_index.checked_add(1).expect(
"should never allocate usize::MAX ids. what kind of test are you running",
)
};
let to_insert: Vec<_> = update
.inserted_items
.into_iter()
.map(|item| (self.next_id(), item.into()))
.collect();
self.items
.splice(insertion_index..insertion_index, to_insert.into_iter())
.for_each(drop);
}
let new_items: Vec<_> = results
.new_items
.into_iter()
.map(|item| (self.next_id(), item.into()))
.collect();
self.items.extend(new_items);
}
/// Iterate over the items in this ordering.
pub fn iter(&self) -> impl Iterator<Item = StitchedItem<'_>> {
self.items.iter().map(|(_, item)| item.into())
}
/// Clear this ordering.
pub fn clear(&mut self) { self.items.clear(); }
}
impl StitcherBackend for MemoryStitcherBackend {
type Key = u64;
fn find_matching_gaps<'a>(
&'a self,
events: impl Iterator<Item = &'a str>,
) -> impl Iterator<Item = (Self::Key, Gap)> {
// nobody cares about test suite performance right
let mut gaps = vec![];
for event in events {
for (key, item) in &self.items {
if let MemoryStitcherItem::Gap(gap) = item
&& gap.contains(event)
{
gaps.push((*key, gap.clone()));
}
}
}
gaps.into_iter()
}
fn event_exists<'a>(&'a self, event: &'a str) -> bool {
self.items
.iter()
.any(|item| matches!(&item.1, MemoryStitcherItem::Event(id) if event == id))
}
}
impl Debug for MemoryStitcherBackend {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_list().entries(self.iter()).finish()
}
}
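The index arithmetic in `extend` above (reuse the gap's slot when the gap is deleted, otherwise insert directly after it) is easy to get off by one. A minimal sketch of just that splice, using a plain `Vec<String>` with illustrative item names:

```rust
/// Splice `inserted` into `order` around the gap at `gap_index`.
/// If `gap_emptied`, the gap itself is removed and the new items take its
/// slot; otherwise they are inserted directly after the (shrunken) gap.
fn splice_into_gap(
    order: &mut Vec<String>,
    gap_index: usize,
    gap_emptied: bool,
    inserted: Vec<String>,
) {
    let insertion_index = if gap_emptied {
        order.remove(gap_index);
        gap_index
    } else {
        gap_index + 1
    };
    // An empty range means pure insertion; consuming the Splice iterator
    // (as the code above does with `.for_each(drop)`) completes it.
    order
        .splice(insertion_index..insertion_index, inserted)
        .for_each(drop);
}

fn main() {
    let mut order = vec!["A".to_string(), "<gap>".to_string(), "D".to_string()];
    splice_into_gap(&mut order, 1, true, vec!["B".to_string(), "C".to_string()]);
    assert_eq!(order, ["A", "B", "C", "D"]);
}
```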


@@ -1,160 +0,0 @@
use std::{cmp::Ordering, collections::HashSet};
use indexmap::IndexMap;
pub mod algorithm;
pub mod memory_backend;
#[cfg(test)]
mod test;
pub use algorithm::*;
/// A gap in the stitched order.
pub type Gap = HashSet<String>;
/// An item in the stitched order.
#[derive(Debug)]
pub enum StitchedItem<'id> {
/// A single event.
Event(&'id str),
/// A gap representing one or more missing events.
Gap(Gap),
}
/// An opaque key returned by a [`StitcherBackend`] to identify an item in its
/// order.
pub trait OrderKey: Eq + Clone {}
impl<T: Eq + Clone> OrderKey for T {}
/// A trait providing read-only access to an existing stitched order.
pub trait StitcherBackend {
type Key: OrderKey;
/// Return all gaps containing one or more events listed in `events`.
fn find_matching_gaps<'a>(
&'a self,
events: impl Iterator<Item = &'a str>,
) -> impl Iterator<Item = (Self::Key, Gap)>;
/// Return whether an event exists in the stitched order.
fn event_exists<'a>(&'a self, event: &'a str) -> bool;
}
/// An ordered map from an event ID to its `prev_events`.
pub type EventEdges<'id> = IndexMap<&'id str, HashSet<&'id str>>;
/// Information about the `prev_events` of an event.
/// This struct does not store the ID of the event itself.
#[derive(Debug)]
struct EventPredecessors<'id> {
/// The `prev_events` of the event.
pub prev_events: HashSet<&'id str>,
/// The predecessor set of the event. This is derived from, and a superset
/// of, [`EventPredecessors::prev_events`]. See
/// [`Batch::find_predecessor_set`] for details. It is cached in this
/// struct for performance.
pub predecessor_set: HashSet<&'id str>,
}
/// A batch of events to be inserted into the stitched order.
#[derive(Debug)]
pub struct Batch<'id> {
events: IndexMap<&'id str, EventPredecessors<'id>>,
}
impl<'id> Batch<'id> {
/// Create a new [`Batch`] from an [`EventEdges`].
pub fn from_edges<'edges>(edges: &EventEdges<'edges>) -> Batch<'edges> {
let mut events = IndexMap::new();
for (event, prev_events) in edges {
let predecessor_set = Self::find_predecessor_set(event, edges);
events.insert(*event, EventPredecessors {
prev_events: prev_events.clone(),
predecessor_set,
});
}
Batch { events }
}
/// Build the predecessor set of `event` using `edges`. The predecessor set
/// is a subgraph of the room's DAG which may be thought of as a tree
/// rooted at `event` containing _only_ events which are included in
/// `edges`. It is represented as a set and not a proper tree structure for
/// efficiency.
fn find_predecessor_set<'a>(event: &'a str, edges: &EventEdges<'a>) -> HashSet<&'a str> {
// The predecessor set which we are building.
let mut predecessor_set = HashSet::new();
// The queue of events to check for membership in `edges`.
let mut events_to_check = vec![event];
// Events which we have already checked and do not need to revisit.
let mut events_already_checked = HashSet::new();
while let Some(event) = events_to_check.pop() {
// Don't add this event to the queue again.
events_already_checked.insert(event);
// If this event is in `edges`, add it to the predecessor set.
if let Some(children) = edges.get(event) {
predecessor_set.insert(event);
// Also add all its `prev_events` to the queue. It's fine if some of them don't
// exist in `edges` because they'll just be discarded when they're popped
// off the queue.
events_to_check.extend(
children
.iter()
.filter(|event| !events_already_checked.contains(*event)),
);
}
}
predecessor_set
}
/// Iterate over all the events contained in this batch.
fn events(&self) -> impl Iterator<Item = &'id str> { self.events.keys().copied() }
/// Check whether an event exists in this batch.
fn contains(&self, event: &'id str) -> bool { self.events.contains_key(event) }
/// Return the predecessors of an event, if it exists in this batch.
fn predecessors(&self, event: &str) -> Option<&EventPredecessors<'id>> {
self.events.get(event)
}
/// Compare two events by DAG;received order.
///
/// If either event is in the other's predecessor set it comes first,
/// otherwise they are sorted by which comes first in the batch.
fn compare_by_dag_received(&self) -> impl FnMut(&&'id str, &&'id str) -> Ordering {
|a, b| {
if self
.predecessors(a)
.is_some_and(|it| it.predecessor_set.contains(b))
{
Ordering::Greater
} else if self
.predecessors(b)
.is_some_and(|it| it.predecessor_set.contains(a))
{
Ordering::Less
} else {
let a_index = self
.events
.get_index_of(a)
.expect("a should be in this batch");
let b_index = self
.events
.get_index_of(b)
.expect("b should be in this batch");
a_index.cmp(&b_index)
}
}
}
}
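`find_predecessor_set` above is a plain worklist traversal over `prev_events` edges, keeping only events present in the batch. A standalone sketch of that walk (`predecessor_set` is an illustrative name):

```rust
use std::collections::{HashMap, HashSet};

/// Walk `prev_events` edges from `root`, keeping only events that appear
/// in `edges` (i.e. are in the batch). Mirrors the worklist loop above:
/// unknown events are popped and discarded rather than followed.
fn predecessor_set<'a>(
    root: &'a str,
    edges: &HashMap<&'a str, Vec<&'a str>>,
) -> HashSet<&'a str> {
    let mut set = HashSet::new();
    let mut queue = vec![root];
    let mut seen = HashSet::new();
    while let Some(event) = queue.pop() {
        seen.insert(event);
        if let Some(prevs) = edges.get(event) {
            set.insert(event);
            queue.extend(prevs.iter().copied().filter(|p| !seen.contains(*p)));
        }
    }
    set
}

fn main() {
    // C --> B --> A, where A's own prev (Z) is outside the batch.
    let mut edges = HashMap::new();
    edges.insert("A", vec!["Z"]);
    edges.insert("B", vec!["A"]);
    edges.insert("C", vec!["B"]);
    assert_eq!(predecessor_set("C", &edges), HashSet::from(["A", "B", "C"]));
}
```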


@@ -1,102 +0,0 @@
use itertools::Itertools;
use super::{algorithm::*, *};
use crate::memory_backend::MemoryStitcherBackend;
mod parser;
fn run_testcase(testcase: parser::TestCase<'_>) {
let mut backend = MemoryStitcherBackend::default();
for (index, phase) in testcase.into_iter().enumerate() {
let stitcher = Stitcher::new(&backend);
let batch = Batch::from_edges(&phase.batch);
let updates = stitcher.stitch(&batch);
println!();
println!("===== phase {index}");
for update in &updates.gap_updates {
println!("update to gap {}:", update.key);
println!(" new gap contents: {:?}", update.gap);
println!(" inserted items: {:?}", update.inserted_items);
}
println!("expected new items: {:?}", &phase.order.new_items);
println!(" actual new items: {:?}", &updates.new_items);
for (expected, actual) in phase
.order
.new_items
.iter()
.zip_eq(updates.new_items.iter())
{
assert_eq!(
expected, actual,
"bad new item, expected {expected:?} but got {actual:?}"
);
}
if let Some(updated_gaps) = phase.updated_gaps {
println!("expected events added to gaps: {updated_gaps:?}");
println!(" actual events added to gaps: {:?}", updates.events_added_to_gaps);
assert_eq!(
updated_gaps, updates.events_added_to_gaps,
"incorrect events added to gaps"
);
}
backend.extend(updates);
println!("extended ordering: {:?}", backend);
for (expected, ref actual) in phase.order.iter().zip_eq(backend.iter()) {
assert_eq!(
expected, actual,
"bad item in order, expected {expected:?} but got {actual:?}",
);
}
}
}
macro_rules! testcase {
($index:literal : $id:ident) => {
#[test]
fn $id() {
let testcase = parser::parse(include_str!(concat!(
"./testcases/",
$index,
"-",
stringify!($id),
".stitched"
)));
run_testcase(testcase);
}
};
}
testcase!("001": receiving_new_events);
testcase!("002": recovering_after_netsplit);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item_multiple);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item);
testcase!("zzz": chains_are_reordered_using_prev_events);
testcase!("zzz": empty_then_simple_chain);
testcase!("zzz": empty_then_two_chains_interleaved);
testcase!("zzz": empty_then_two_chains);
testcase!("zzz": filling_in_a_gap_with_a_batch_containing_gaps);
testcase!("zzz": gaps_appear_before_events_referring_to_them_received_order);
testcase!("zzz": gaps_appear_before_events_referring_to_them);
testcase!("zzz": if_prev_events_determine_order_they_override_received);
testcase!("zzz": insert_into_first_of_several_gaps);
testcase!("zzz": insert_into_last_of_several_gaps);
testcase!("zzz": insert_into_middle_of_several_gaps);
testcase!("zzz": linked_events_are_split_across_gaps);
testcase!("zzz": linked_events_in_a_diamond_are_split_across_gaps);
testcase!("zzz": middle_of_batch_matches_gap_and_end_of_batch_matches_end);
testcase!("zzz": middle_of_batch_matches_gap);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_first_has_more);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_with_more);
testcase!("zzz": multiple_missing_prev_events_turn_into_a_single_gap);
testcase!("zzz": partially_filling_a_gap_leaves_it_before_new_nodes);
testcase!("zzz": partially_filling_a_gap_with_two_events);
testcase!("zzz": received_order_wins_within_a_subgroup_if_no_prev_event_chain);
testcase!("zzz": subgroups_are_processed_in_first_received_order);


@@ -1,140 +0,0 @@
use std::collections::HashSet;
use indexmap::IndexMap;
use super::StitchedItem;
pub(super) type TestEventId<'id> = &'id str;
pub(super) type TestGap<'id> = HashSet<TestEventId<'id>>;
#[derive(Debug)]
pub(super) enum TestStitchedItem<'id> {
Event(TestEventId<'id>),
Gap(TestGap<'id>),
}
impl PartialEq<StitchedItem<'_>> for TestStitchedItem<'_> {
fn eq(&self, other: &StitchedItem<'_>) -> bool {
match (self, other) {
| (TestStitchedItem::Event(lhs), StitchedItem::Event(rhs)) => lhs == rhs,
| (TestStitchedItem::Gap(lhs), StitchedItem::Gap(rhs)) =>
lhs.iter().all(|id| rhs.contains(*id)),
| _ => false,
}
}
}
pub(super) type TestCase<'id> = Vec<Phase<'id>>;
pub(super) struct Phase<'id> {
pub batch: Batch<'id>,
pub order: Order<'id>,
pub updated_gaps: Option<HashSet<TestEventId<'id>>>,
}
pub(super) type Batch<'id> = IndexMap<TestEventId<'id>, HashSet<TestEventId<'id>>>;
pub(super) struct Order<'id> {
pub inserted_items: Vec<TestStitchedItem<'id>>,
pub new_items: Vec<TestStitchedItem<'id>>,
}
impl<'id> Order<'id> {
pub(super) fn iter(&self) -> impl Iterator<Item = &TestStitchedItem<'id>> {
self.inserted_items.iter().chain(self.new_items.iter())
}
}
peg::parser! {
grammar testcase() for str {
/// Parse whitespace.
rule _ -> () = quiet! { $([' '])* {} }
/// Parse empty lines and comments.
rule newline() -> () = quiet! { (("#" [^'\n']*)? "\n")+ {} }
/// Parse an "event ID" in a test case, which may only consist of ASCII letters and numbers.
rule event_id() -> TestEventId<'input>
= quiet! { id:$([char if char.is_ascii_alphanumeric()]+) { id } }
/ expected!("event id")
/// Parse a gap in the order section.
rule gap() -> TestGap<'input>
= "-" events:event_id() ++ "," { events.into_iter().collect() }
/// Parse either an event id or a gap.
rule stitched_item() -> TestStitchedItem<'input> =
id:event_id() { TestStitchedItem::Event(id) }
/ gap:gap() { TestStitchedItem::Gap(gap) }
/// Parse an event line in the batch section, mapping an event name to zero or one prev events.
/// The prev events are merged together by [`batch()`].
rule batch_event() -> (TestEventId<'input>, Option<TestEventId<'input>>)
= id:event_id() prev:(_ "-->" _ prev:event_id() { prev })? { (id, prev) }
/// Parse the batch section of a phase.
rule batch() -> Batch<'input>
= events:batch_event() ++ newline() {
/*
Repeated event lines need to be merged together. For example,
A --> B
A --> C
represents a _single_ event `A` with two prev events, `B` and `C`.
*/
events.into_iter()
.fold(IndexMap::new(), |mut batch: Batch<'_>, (id, prev_event)| {
// Find the prev events set of this event in the batch.
// If it doesn't exist, make a new empty one.
let mut prev_events = batch.entry(id).or_default();
// If this event line defines a prev event to add, insert it into the set.
if let Some(prev_event) = prev_event {
prev_events.insert(prev_event);
}
batch
})
}
rule order() -> Order<'input> =
items:(item:stitched_item() new:"*"? { (item, new.is_some()) }) ** newline()
{
let (mut inserted_items, mut new_items) = (vec![], vec![]);
for (item, new) in items {
if new {
new_items.push(item);
} else {
inserted_items.push(item);
}
}
Order {
inserted_items,
new_items,
}
}
rule updated_gaps() -> HashSet<TestEventId<'input>> =
events:event_id() ++ newline() { events.into_iter().collect() }
rule phase() -> Phase<'input> =
"=== when we receive these events ==="
newline() batch:batch()
newline() "=== then we arrange into this order ==="
newline() order:order()
updated_gaps:(
newline() "=== and we notify about these gaps ==="
newline() updated_gaps:updated_gaps() { updated_gaps }
)?
{ Phase { batch, order, updated_gaps } }
pub rule testcase() -> TestCase<'input> = phase() ++ newline()
}
}
pub(super) fn parse<'input>(input: &'input str) -> TestCase<'input> {
testcase::testcase(input.trim_ascii_end()).expect("parse error")
}


@@ -1,22 +0,0 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A*
B*
C*
=== when we receive these events ===
# When it receives new ones:
D --> C
E --> D
=== then we arrange into this order ===
# Then it simply appends them at the end of the order:
A
B
C
D*
E*


@@ -1,46 +0,0 @@
=== when we receive these events ===
A1
A2 --> A1
A3 --> A2
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A1*
A2*
A3*
=== when we receive these events ===
# And after a netsplit the server receives some unrelated events, which refer to
# some unknown event, because the server didn't receive all of them:
B7 --> B6
B8 --> B7
B9 --> B8
=== then we arrange into this order ===
# Then these events are new, and we add a gap to show something is missing:
A1
A2
A3
-B6*
B7*
B8*
B9*
=== when we receive these events ===
# Then if we backfill and receive more of those events later:
B4 --> B3
B5 --> B4
B6 --> B5
=== then we arrange into this order ===
# They are slotted into the gap, and a new gap is created to represent the
# still-missing events:
A1
A2
A3
-B3
B4
B5
B6
B7
B8
B9
=== and we notify about these gaps ===
B6


@@ -1,30 +0,0 @@
=== when we receive these events ===
D --> C
=== then we arrange into this order ===
# We may see situations that are ambiguous about whether an event is new or
# belongs in a gap, because it is a predecessor of a gap event and also has a
# new event as its predecessor. This is a rare case where either outcome could be
# valid. If the initial order is this:
-C*
D*
=== when we receive these events ===
# And then we receive B
B --> A
=== then we arrange into this order ===
# Which is new because it's unrelated to everything else
-C
D
-A*
B*
=== when we receive these events ===
# And later it turns out that C refers back to B
C --> B
=== then we arrange into this order ===
# Then we place C into the early gap even though it is after B, so arguably
# should be the newest
C
D
-A
B
=== and we notify about these gaps ===
C


@@ -1,28 +0,0 @@
=== when we receive these events ===
# An ambiguous situation can occur when we have multiple gaps that both might
# accept an event. This should be relatively rare.
A --> G1
B --> A
C --> G2
=== then we arrange into this order ===
-G1*
A*
B*
-G2*
C*
=== when we receive these events ===
# When we receive F, which is a predecessor of both G1 and G2
F
G1 --> F
G2 --> F
=== then we arrange into this order ===
# Then F appears in the earlier gap, but arguably it should appear later.
F
G1
A
B
G2
C
=== and we notify about these gaps ===
G1
G2


@@ -1,10 +0,0 @@
=== when we receive these events ===
# Even though we see C first, it is re-ordered because we must obey prev_events
# so A comes first.
C --> A
A
B --> A
=== then we arrange into this order ===
A*
C*
B*


@@ -1,8 +0,0 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
A*
B*
C*


@@ -1,18 +0,0 @@
=== when we receive these events ===
# A chain ABC
A
B --> A
C --> B
# And a separate chain XYZ
X --> W
Y --> X
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
B*
C*
-W*
X*
Y*
Z*


@@ -1,18 +0,0 @@
=== when we receive these events ===
# Same as empty_then_two_chains except for received order
# A chain ABC, and a separate chain XYZ, but interleaved
A
X --> W
B --> A
Y --> X
C --> B
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
-W*
X*
B*
Y*
C*
Z*


@@ -1,33 +0,0 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When we fill one with something that also refers to non-existent events
C --> X
C --> Y
G --> C
G --> Z
=== then we arrange into this order ===
# Then we fill in the gap (C) and make new gaps too (X+Y and Z)
-A
B
-X,Y
C
D
-E
F
-Z*
G*
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C


@@ -1,13 +0,0 @@
=== when we receive these events ===
# Several events refer to missing events and the events are unrelated
C --> Y
C --> Z
A --> X
B
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-Y,Z*
C*
-X*
A*
B*


@@ -1,14 +0,0 @@
=== when we receive these events ===
# Several events refer to missing events and the events are related
C --> Y
C --> Z
C --> B
A --> X
B --> A
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-X*
A*
B*
-Y,Z*
C*


@@ -1,15 +0,0 @@
=== when we receive these events ===
# The relationships determine the order here, so they override received order
F --> E
C --> B
D --> C
E --> D
B --> A
A
=== then we arrange into this order ===
A*
B*
C*
D*
E*
F*


@@ -1,27 +0,0 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When the first of them is filled in
A
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
A
B
-C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
A


@@ -1,27 +0,0 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When the last gap is filled in
E
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
-C
D
E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
E


@@ -1,27 +0,0 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When a middle one is filled in
C
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C


@@ -1,29 +0,0 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2
D --> X4
=== then we arrange into this order ===
-X2*
B*
-X4*
D*
=== when we receive these events ===
# And linked events that fill those in and are newer
X1
X2 --> X1
X3 --> X2
X4 --> X3
X5 --> X4
=== then we arrange into this order ===
# Then the gaps are filled and new events appear at the front
X1
X2
B
X3
X4
D
X5*
=== and we notify about these gaps ===
X2
X4


@@ -1,31 +0,0 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2a
D --> X3
=== then we arrange into this order ===
-X2a*
B*
-X3*
D*
=== when we receive these events ===
# When we receive a diamond that touches gaps and some new events
X1
X2a --> X1
X2b --> X1
X3 --> X2a
X3 --> X2b
X4 --> X3
=== then we arrange into this order ===
# Then matching events and direct predecessors fit into the gaps
# and other stuff is new
X1
X2a
B
X2b
X3
D
X4*
=== and we notify about these gaps ===
X2a
X3


@@ -1,25 +0,0 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*
=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap
C1
C2 --> C1
C3 --> C2
=== then we arrange into this order ===
# Then we slot the matching events into the gap
# and the later events are new
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2


@@ -1,26 +0,0 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*
=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap, and the last
# event linked to a recent event
C1
C2 --> C1
C3 --> C2
C3 --> B2
=== then we arrange into this order ===
# Then we slot the entire batch into the gap
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2


@@ -1,26 +0,0 @@
=== when we receive these events ===
# If multiple events all refer to the same missing event:
A --> X
B --> X
C --> X
=== then we arrange into this order ===
# Then we insert gaps before all of them. This avoids the need to search the
# entire existing order whenever we create a new gap.
-X*
A*
-X*
B*
-X*
C*
=== when we receive these events ===
# The ambiguity is resolved when the missing event arrives:
X
=== then we arrange into this order ===
# We choose the earliest gap, and all the relevant gaps are removed (which does
# mean we need to search the existing order).
X
A
B
C
=== and we notify about these gaps ===
X
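The "earliest gap wins" resolution in this fixture can be sketched self-containedly: when a previously missing event arrives, slot it in after the first gap naming it, strip its ID from every gap, and drop gaps that become empty. Names here (`Slot`, `fill`) are illustrative assumptions.

```rust
// Sketch only: a timeline position is either a gap (missing event IDs) or
// a real event.
#[derive(Debug, PartialEq)]
enum Slot {
    Gap(Vec<String>),
    Event(String),
}

fn fill(order: &mut Vec<Slot>, arrived: &str) {
    // Find the earliest gap that names the arrived event.
    let pos = order.iter().position(|s| match s {
        Slot::Gap(ids) => ids.iter().any(|id| id == arrived),
        Slot::Event(_) => false,
    });
    if let Some(i) = pos {
        // Insert the event immediately after that gap.
        order.insert(i + 1, Slot::Event(arrived.to_string()));
    }
    // Shrink every gap that names the event, then drop emptied gaps,
    // which does require a pass over the whole existing order.
    for slot in order.iter_mut() {
        if let Slot::Gap(ids) = slot {
            ids.retain(|id| id != arrived);
        }
    }
    order.retain(|s| !matches!(s, Slot::Gap(ids) if ids.is_empty()));
}
```

Starting from the fixture's arrangement (a gap naming X before each of A, B, and C), filling X leaves X, A, B, C with no gaps, as the fixture expects; a gap that still names other missing IDs would instead survive in shrunken form.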


@@ -1,29 +0,0 @@
=== when we receive these events ===
# Several events refer to the same missing event, but the first refers to
# others too
A --> X
A --> Y
A --> Z
B --> X
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X,Y,Z*
A*
-X*
B*
-X*
C*
=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
-Y,Z
X
A
B
C
=== and we notify about these gaps ===
X


@@ -1,28 +0,0 @@
=== when we receive these events ===
# Several events refer to the same missing event, but one refers to others too
A --> X
B --> X
B --> Y
B --> Z
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X*
A*
-X,Y,Z*
B*
-X*
C*
=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
X
A
-Y,Z
B
C
=== and we notify about these gaps ===
X


@@ -1,9 +0,0 @@
=== when we receive these events ===
# A refers to multiple missing things
A --> X
A --> Y
A --> Z
=== then we arrange into this order ===
# But we only make one gap, with multiple IDs in it
-X,Y,Z*
A*


@@ -1,23 +0,0 @@
=== when we receive these events ===
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
# Given a gap that lists several nodes:
A*
-B,C,D,E*
F*
=== when we receive these events ===
# When we provide one of the missing events:
C
=== then we arrange into this order ===
# Then it is inserted after the gap, and the gap is shrunk:
A
-B,D,E
C
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C


@@ -1,27 +0,0 @@
=== when we receive these events ===
# Given an event references multiple missing events
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
A*
-B,C,D,E*
F*
=== when we receive these events ===
# When we provide some of the missing events
C
E
=== then we arrange into this order ===
# Then we insert them after the gap and shrink the list of events in the gap
A
-B,D
C
E
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
E


@@ -1,16 +0,0 @@
=== when we receive these events ===
# Everything is after A, but there is no prev_event chain between the others, so
# we use received order.
A
F --> A
C --> A
D --> A
E --> A
B --> A
=== then we arrange into this order ===
A*
F*
C*
D*
E*
B*


@@ -1,16 +0,0 @@
=== when we receive these events ===
# We preserve the received order where it does not conflict with the prev_events
A
X --> W
Y --> X
Z --> Y
B --> A
C --> B
=== then we arrange into this order ===
A*
-W*
X*
Y*
Z*
B*
C*

theme/css.d.ts (vendored, new file)

@@ -0,0 +1 @@
declare module "*.css"


@@ -105,3 +105,68 @@ body:not(.notTopArrived) header.rp-nav {
.rspress-logo {
height: 32px;
}
/* pre-hero */
.custom-section {
padding: 4rem 1.5rem;
background: var(--rp-c-bg);
}
.custom-cards {
display: flex;
gap: 2rem;
max-width: 800px;
margin: 0 auto;
justify-content: center;
flex-wrap: wrap;
}
.custom-card {
padding: 2rem;
border: 1px solid var(--rp-c-divider-light);
border-radius: 12px;
background: var(--rp-c-bg-soft);
text-decoration: none;
color: var(--rp-c-text-1);
transition: all 0.3s ease;
display: flex;
flex-direction: column;
flex: 1;
min-width: 280px;
max-width: 350px;
}
.custom-card:hover {
border-color: var(--rp-c-brand);
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
transform: translateY(-2px);
}
.custom-card h3 {
margin: 0 0 1rem 0;
font-size: 1.25rem;
font-weight: 600;
color: var(--rp-c-text-0);
}
.custom-card p {
margin: 0 0 1.5rem 0;
color: var(--rp-c-text-2);
line-height: 1.6;
flex: 1;
}
.custom-card-button {
display: inline-block;
padding: 0.5rem 1.5rem;
background: var(--rp-c-brand);
color: white;
border-radius: 6px;
font-weight: 500;
text-align: center;
transition: background 0.2s ease;
}
.custom-card:hover .custom-card-button {
background: var(--rp-c-brand-light);
}

Some files were not shown because too many files have changed in this diff.