Mirror of https://forgejo.ellis.link/continuwuation/continuwuity/ (synced 2026-04-02 20:56:12 +00:00)

Compare commits: jade/docs-... ... ginger/sti... — 10 commits
Commits:

- a91ef01a7f
- 4a6295e374
- 4da955438e
- 9210ee2b42
- b60af7c282
- d2d580c76c
- c507ff6687
- 70c62158d0
- b242bad6d2
- a1ad9f0144
.github/FUNDING.yml (vendored) — 2 lines changed

@@ -1,4 +1,4 @@
-github: [JadedBlueEyes, nexy7574, gingershaped]
+github: [JadedBlueEyes, nexy7574]
 custom:
   - https://ko-fi.com/nexy7574
   - https://ko-fi.com/JadedBlueEyes
@@ -23,7 +23,7 @@ repos:
   - id: check-added-large-files

 - repo: https://github.com/crate-ci/typos
-  rev: v1.43.4
+  rev: v1.41.0
   hooks:
     - id: typos
@@ -6,13 +6,14 @@ extend-exclude = ["*.csr", "*.lock", "pnpm-lock.yaml"]
 extend-ignore-re = [
   "(?Rm)^.*(#|//|<!--)\\s*spellchecker:disable-line(\\s*-->)$", # Ignore a line by making it trail with a `spellchecker:disable-line` comment
   "^[0-9a-f]{7,}$", # Commit hashes
-  "4BA7",

   # some heuristics for base64 strings
-  "[A-Za-z0-9+=]{72,}",
+  "([A-Za-z0-9+=]|\\\\\\s\\*){72,}",
+  "[0-9+][A-Za-z0-9+]{30,}[a-z0-9+]",
+  "\\$[A-Z0-9+][A-Za-z0-9+]{6,}[a-z0-9+]",
+  "\\b[a-z0-9+/=][A-Za-z0-9+/=]{7,}[a-z0-9+/=][A-Z]\\b",

   # In the renovate config
   ".ontainer"
 ]
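These ignore patterns are ordinary regular expressions, so they are easy to sanity-check outside the spellchecker. A quick sketch using Python's `re` (close enough to the Rust regex dialect for this pattern; the 72-character base64 rule is used as the example):

```python
import re

# One of the base64 heuristics from the ignore list: a run of 72 or more
# characters from the base64 alphabet is treated as encoded data rather
# than prose, so the spellchecker skips it.
BASE64_RUN = re.compile(r"[A-Za-z0-9+=]{72,}")

blob = "QWxhZGRpbjpvcGVuIHNlc2FtZQ==" * 3  # 84 base64-alphabet characters
prose = "This sentence should still be spellchecked."

assert BASE64_RUN.search(blob) is not None  # long base64 run matches
assert BASE64_RUN.search(prose) is None     # ordinary prose does not
```

Whitespace and punctuation break the run, which is why normal sentences never reach the 72-character threshold.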
CHANGELOG.md — 86 lines changed
@@ -1,98 +1,58 @@
-# Continuwuity v0.5.4 (2026-02-08)
-
-## Features
-
-- The announcement checker will now announce errors it encounters in the first run to the admin room, plus a few other
-  misc improvements. Contributed by @Jade ([#1288](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1288))
-- Drastically improved the performance and reliability of account deactivations. Contributed by @nex ([#1314](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1314))
-- Refuse to process requests for and events in rooms that we no longer have any local users in (reduces state resets
-  and improves performance). Contributed by @nex ([#1316](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1316))
-- Added server-specific admin API routes to ban and unban rooms, for use with moderation bots. Contributed by @nex
-  ([#1301](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1301))
-
-## Bugfixes
-
-- Fix the generated configuration containing uncommented optional sections. Contributed by @Jade ([#1290](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1290))
-- Fixed specification non-compliance when handling remote media errors. Contributed by @nex ([#1298](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1298))
-- UIAA requests which check for out-of-band success (sent by matrix-js-sdk) will no longer create unhelpful errors in
-  the logs. Contributed by @ginger ([#1305](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1305))
-- Use exists instead of contains to save writing to a buffer in `src/service/users/mod.rs`: `is_login_disabled`.
-  Contributed by @aprilgrimoire. ([#1340](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1340))
-- Fixed backtraces being swallowed during panics. Contributed by @jade ([#1337](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1337))
-- Fixed a potential vulnerability that could allow an evil remote server to return malicious events during the room join
-  and knock process. Contributed by @nex, reported by violet & [mat](https://matdoes.dev).
-- Fixed a race condition that could result in outlier PDUs being incorrectly marked as visible to a remote server.
-  Contributed by @nex, reported by violet & [mat](https://matdoes.dev).
-- ACLs are no longer case-sensitive. Contributed by @nex, reported by [vel](matrix:u/vel:nhjkl.com?action=chat).
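The ACL fix above boils down to normalising case before comparison. A minimal illustrative sketch (hypothetical helper, not Continuwuity's actual implementation; real `m.room.server_acl` rules are glob patterns, simplified here to exact matches):

```python
# Hypothetical sketch of case-insensitive server ACL matching. Real ACL
# entries are glob patterns; exact matching keeps the example short.
def acl_denies(denied: list[str], server_name: str) -> bool:
    # Compare server names case-insensitively, so a ban on
    # "evil.example" also catches "EVIL.example".
    needle = server_name.lower()
    return any(entry.lower() == needle for entry in denied)

assert acl_denies(["evil.example"], "EVIL.example") is True
assert acl_denies(["evil.example"], "good.example") is False
```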
-
-## Docs
-
-- Fixed Fedora install instructions. Contributed by @julian45 ([#1342](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1342))
-
 # Continuwuity 0.5.3 (2026-01-12)

 ## Features

-- Improve the display of nested configuration with the `!admin server show-config` command. Contributed by @Jade ([#1279](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1279))
+- Improve the display of nested configuration with the `!admin server show-config` command. Contributed by @Jade (#1279)

 ## Bugfixes

-- Fixed `M_BAD_JSON` error when sending invites to other servers or when providing joins. Contributed by @nex ([#1286](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1286))
+- Fixed `M_BAD_JSON` error when sending invites to other servers or when providing joins. Contributed by @nex (#1286)

 ## Docs

-- Improve admin command documentation generation. Contributed by @ginger ([#1280](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1280))
+- Improve admin command documentation generation. Contributed by @ginger (#1280)

 ## Misc

-- Improve timeout-related code for federation and URL previews. Contributed by @Jade ([#1278](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1278))
+- Improve timeout-related code for federation and URL previews. Contributed by @Jade (#1278)

 # Continuwuity 0.5.2 (2026-01-09)

 ## Features

-- Added support for issuing additional registration tokens, stored in the database, which supplement the existing
-  registration token hardcoded in the config file. These tokens may optionally expire after a certain number of uses or
-  after a certain amount of time has passed. Additionally, the `registration_token_file` configuration option is
-  superseded by this feature and **has been removed**. Use the new `!admin token` command family to manage registration
-  tokens. Contributed by @ginger (#783).
-- Implemented a configuration defined admin list independent of the admin room. Contributed by @Terryiscool160. ([#1253](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1253))
-- Added support for invite and join anti-spam via Draupnir and Meowlnir, similar to that of synapse-http-antispam.
-  Contributed by @nex. ([#1263](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1263))
-- Implemented account locking functionality, to complement user suspension. Contributed by @nex. ([#1266](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1266))
-- Added admin command to forcefully log out all of a user's existing sessions. Contributed by @nex. ([#1271](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1271))
-- Implemented toggling the ability for an account to log in without mutating any of its data. Contributed by @nex. (
-  [#1272](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1272))
-- Add support for custom room create event timestamps, to allow generating custom prefixes in hashed room IDs.
-  Contributed by @nex. ([#1277](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1277))
-- Certain potentially dangerous admin commands are now restricted to only be usable in the admin room and server
-  console. Contributed by @ginger.
+- Added support for issuing additional registration tokens, stored in the database, which supplement the existing registration token hardcoded in the config file. These tokens may optionally expire after a certain number of uses or after a certain amount of time has passed. Additionally, the `registration_token_file` configuration option is superseded by this feature and **has been removed**. Use the new `!admin token` command family to manage registration tokens. Contributed by @ginger (#783).
+- Implemented a configuration defined admin list independent of the admin room. Contributed by @Terryiscool160. (#1253)
+- Added support for invite and join anti-spam via Draupnir and Meowlnir, similar to that of synapse-http-antispam. Contributed by @nex. (#1263)
+- Implemented account locking functionality, to complement user suspension. Contributed by @nex. (#1266)
+- Added admin command to forcefully log out all of a user's existing sessions. Contributed by @nex. (#1271)
+- Implemented toggling the ability for an account to log in without mutating any of its data. Contributed by @nex. (#1272)
+- Add support for custom room create event timestamps, to allow generating custom prefixes in hashed room IDs. Contributed by @nex. (#1277)
+- Certain potentially dangerous admin commands are now restricted to only be usable in the admin room and server console. Contributed by @ginger.

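The expiry rules described for the database-backed registration tokens (a token can run out of uses or pass a deadline) can be sketched as follows; the class and field names here are invented for illustration, not Continuwuity's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model only: a registration token that may expire after a
# fixed number of uses and/or after a deadline. Field names are hypothetical.
@dataclass
class RegistrationToken:
    token: str
    uses_left: Optional[int] = None     # None means unlimited uses
    expires_at: Optional[float] = None  # None means no time limit

    def redeem(self, now: float) -> bool:
        """Consume one use if the token is still valid."""
        if self.expires_at is not None and now >= self.expires_at:
            return False
        if self.uses_left is not None:
            if self.uses_left <= 0:
                return False
            self.uses_left -= 1
        return True

tok = RegistrationToken("s3cret", uses_left=1, expires_at=100.0)
assert tok.redeem(now=0.0) is True    # first use succeeds
assert tok.redeem(now=0.0) is False   # use limit reached
```

A token with neither limit set behaves like the static config-file token it supplements.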
 ## Bugfixes

-- Fixed unreliable room summary fetching and improved error messages. Contributed by @nex. ([#1257](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1257))
-- Client requested timeout parameter is now applied to e2ee key lookups and claims. Related federation requests are now
-  also concurrent. Contributed by @nex. ([#1261](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1261))
-- Fixed the whoami endpoint returning HTTP 404 instead of HTTP 403, which confused some appservices. Contributed by
-  @nex. ([#1276](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1276))
+- Fixed unreliable room summary fetching and improved error messages. Contributed by @nex. (#1257)
+- Client requested timeout parameter is now applied to e2ee key lookups and claims. Related federation requests are now also concurrent. Contributed by @nex. (#1261)
+- Fixed the whoami endpoint returning HTTP 404 instead of HTTP 403, which confused some appservices. Contributed by @nex. (#1276)

 ## Misc

-- The `console` feature is now enabled by default, allowing the server console to be used for running admin commands
-  directly. To automatically open the console on startup, set the `admin_console_automatic` config option to `true`.
-  Contributed by @ginger.
+- The `console` feature is now enabled by default, allowing the server console to be used for running admin commands directly. To automatically open the console on startup, set the `admin_console_automatic` config option to `true`. Contributed by @ginger.
 - We now (finally) document our container image mirrors. Contributed by @Jade

 # Continuwuity 0.5.0 (2025-12-30)

 **This release contains a CRITICAL vulnerability patch, and you must update as soon as possible**

 ## Features

-- Enabled the OTLP exporter in default builds, and allow configuring the exporter protocol. (@Jade). ([#1251](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1251))
+- Enabled the OTLP exporter in default builds, and allow configuring the exporter protocol. (@Jade). (#1251)

 ## Bug Fixes

-- Don't allow admin room upgrades, as this can break the admin room (@timedout) ([#1245](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1245))
-- Fix invalid creators in power levels during upgrade to v12 (@timedout) ([#1245](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1245))
+- Don't allow admin room upgrades, as this can break the admin room (@timedout) (#1245)
+- Fix invalid creators in power levels during upgrade to v12 (@timedout) (#1245)
Cargo.lock (generated) — 535 lines changed. File diff suppressed because it is too large.
Cargo.toml — 10 lines changed
@@ -12,7 +12,7 @@ license = "Apache-2.0"
 # See also `rust-toolchain.toml`
 readme = "README.md"
 repository = "https://forgejo.ellis.link/continuwuation/continuwuity"
-version = "0.5.4"
+version = "0.5.3"

 [workspace.metadata.crane]
 name = "conduwuit"
@@ -158,7 +158,7 @@ features = ["raw_value"]

 # Used for appservice registration files
 [workspace.dependencies.serde-saphyr]
-version = "0.0.17"
+version = "0.0.14"

 # Used to load forbidden room/user regex from config
 [workspace.dependencies.serde_regex]
@@ -342,7 +342,7 @@ version = "0.1.2"
 # Used for matrix spec type definitions and helpers
 [workspace.dependencies.ruma]
 git = "https://forgejo.ellis.link/continuwuation/ruwuma"
-rev = "458d52bdc7f9a07c497be94a1420ebd3d87d7b2b"
+rev = "85d00fb5746cba23904234b4fd3c838dcf141541"
 features = [
     "compat",
     "rand",
@@ -548,6 +548,10 @@ features = ["sync", "tls-rustls", "rustls-provider"]
 [workspace.dependencies.resolv-conf]
 version = "0.7.5"

+# Used by stitched ordering
+[workspace.dependencies.indexmap]
+version = "2.13.0"
+
 #
 # Patches
 #
@@ -57,10 +57,9 @@ ### What are the project's goals?

 ### Can I try it out?

-Check out the [documentation](https://continuwuity.org) for installation instructions, or join one of these vetted public homeservers running Continuwuity to get a feel for things!
+Check out the [documentation](https://continuwuity.org) for installation instructions.

-- https://continuwuity.rocks -- A public demo server operated by the Continuwuity Team.
-- https://federated.nexus -- Federated Nexus is a community resource hosting multiple FOSS (especially federated) services, including Matrix and Forgejo.
+There are currently no open registration Continuwuity instances available.

 ### What are we working on?
@@ -2,7 +2,11 @@

 set -euo pipefail

-# The root path where complement is available.
+# Path to Complement's source code
+#
+# The `COMPLEMENT_SRC` environment variable is set in the Nix dev shell, which
+# points to a store path containing the Complement source code. It's likely you
+# want to just pass that as the first argument to use it here.
 COMPLEMENT_SRC="${COMPLEMENT_SRC:-$1}"

 # A `.jsonl` file to write test logs to
@@ -11,10 +15,7 @@ LOG_FILE="${2:-complement_test_logs.jsonl}"
 # A `.jsonl` file to write test results to
 RESULTS_FILE="${3:-complement_test_results.jsonl}"

-# The base docker image to use for complement tests
-# You can build the default with `docker build -t continuwuity:complement -f ./docker/complement.Dockerfile .`
-# after running `cargo build`. Only the debug binary is used.
-COMPLEMENT_BASE_IMAGE="${COMPLEMENT_BASE_IMAGE:-continuwuity:complement}"
+COMPLEMENT_BASE_IMAGE="${COMPLEMENT_BASE_IMAGE:-complement-conduwuit:main}"

 # Complement tests that are skipped due to flakiness/reliability issues or we don't implement such features and won't for a long time
 SKIPPED_COMPLEMENT_TESTS='TestPartialStateJoin.*|TestRoomDeleteAlias/Parallel/Regular_users_can_add_and_delete_aliases_when_m.*|TestRoomDeleteAlias/Parallel/Can_delete_canonical_alias|TestUnbanViaInvite.*|TestRoomState/Parallel/GET_/publicRooms_lists.*"|TestRoomDeleteAlias/Parallel/Users_with_sufficient_power-level_can_delete_other.*'
@@ -33,6 +34,25 @@ toplevel="$(git rev-parse --show-toplevel)"

 pushd "$toplevel" > /dev/null

+if [ ! -f "complement_oci_image.tar.gz" ]; then
+    echo "building complement conduwuit image"
+
+    # if using macOS, use linux-complement
+    #bin/nix-build-and-cache just .#linux-complement
+    bin/nix-build-and-cache just .#complement
+    #nix build -L .#complement
+
+    echo "complement conduwuit image tar.gz built at \"result\""
+
+    echo "loading into docker"
+    docker load < result
+    popd > /dev/null
+else
+    echo "skipping building a complement conduwuit image as complement_oci_image.tar.gz was already found, loading this"
+
+    docker load < complement_oci_image.tar.gz
+    popd > /dev/null
+fi

 echo ""
 echo "running go test with:"
@@ -52,16 +72,24 @@ env \
 set -o pipefail

 # Post-process the results into an easy-to-compare format, sorted by Test name for reproducible results
-jq -s -c 'sort_by(.Test)[]' < "$LOG_FILE" | jq -c '
+cat "$LOG_FILE" | jq -s -c 'sort_by(.Test)[]' | jq -c '
   select(
     (.Action == "pass" or .Action == "fail" or .Action == "skip")
     and .Test != null
   ) | {Action: .Action, Test: .Test}
 ' > "$RESULTS_FILE"
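For readers who want to post-process an existing log without jq, the sort-and-filter step above is roughly equivalent to this Python sketch (assuming `go test -json`-style records, one JSON object per line):

```python
import json

# Each line of the .jsonl log is one `go test -json` record. The jq pipeline
# sorts records by test name, keeps only terminal actions, and projects the
# two fields used for comparison.
log = "\n".join(json.dumps(r) for r in [
    {"Action": "run", "Test": "TestB"},
    {"Action": "pass", "Test": "TestB"},
    {"Action": "fail", "Test": "TestA"},
    {"Action": "output"},  # package-level record with no Test field
])

records = [json.loads(line) for line in log.splitlines()]
results = [
    {"Action": r["Action"], "Test": r["Test"]}
    for r in sorted(records, key=lambda r: r.get("Test") or "")
    if r.get("Action") in ("pass", "fail", "skip") and r.get("Test") is not None
]

assert results == [
    {"Action": "fail", "Test": "TestA"},
    {"Action": "pass", "Test": "TestB"},
]
```

Sorting before filtering keeps the output stable across runs, which is the point of the "reproducible results" comment in the script.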

+#if command -v gotestfmt &> /dev/null; then
+#  echo "using gotestfmt on $LOG_FILE"
+#  grep '{"Time":' "$LOG_FILE" | gotestfmt > "complement_test_logs_gotestfmt.log"
+#fi

 echo ""
 echo ""
 echo "complement logs saved at $LOG_FILE"
 echo "complement results saved at $RESULTS_FILE"
+#if command -v gotestfmt &> /dev/null; then
+#  echo "complement logs in gotestfmt pretty format outputted at complement_test_logs_gotestfmt.log (use an editor/terminal/pager that interprets ANSI colours and UTF-8 emojis)"
+#fi
 echo ""
 echo ""
@@ -1 +0,0 @@
-Fixed invites sent to other users in the same homeserver not being properly sent down sync. Users with missing or broken invites should clear their client caches after updating to make them appear.
changelog.d/1288.feature.md (new file, 1 line)

@@ -0,0 +1 @@
+The announcement checker will now announce errors it encounters in the first run to the admin room, plus a few other misc improvements. Contributed by @Jade

changelog.d/1290.bugfix.md (new file, 1 line)

@@ -0,0 +1 @@
+Fix the generated configuration containing uncommented optional sections. Contributed by @Jade

changelog.d/1298.bugfix (new file, 1 line)

@@ -0,0 +1 @@
+Fixed specification non-compliance when handling remote media errors. Contributed by @nex.
@@ -1 +0,0 @@
-Introduce a resolver command to allow flushing a server from the cache or to flush the complete cache. Contributed by @Omar007
@@ -1,67 +0,0 @@
-#!/usr/bin/env bash
-set -xe
-
-# If we have no $SERVER_NAME set, abort
-if [ -z "$SERVER_NAME" ]; then
-    echo "SERVER_NAME is not set, aborting"
-    exit 1
-fi
-
-# If /complement/ca/ca.crt or /complement/ca/ca.key are missing, abort
-if [ ! -f /complement/ca/ca.crt ] || [ ! -f /complement/ca/ca.key ]; then
-    echo "/complement/ca/ca.crt or /complement/ca/ca.key is missing, aborting"
-    exit 1
-fi
-
-# Add the root cert to the local trust store
-echo 'Installing Complement CA certificate to local trust store'
-cp /complement/ca/ca.crt /usr/local/share/ca-certificates/complement-ca.crt
-update-ca-certificates
-
-# Sign a certificate for our $SERVER_NAME
-echo "Generating and signing certificate for $SERVER_NAME"
-openssl genrsa -out "/$SERVER_NAME.key" 2048
-
-echo "Generating CSR for $SERVER_NAME"
-openssl req -new -sha256 \
-    -key "/$SERVER_NAME.key" \
-    -out "/$SERVER_NAME.csr" \
-    -subj "/C=US/ST=CA/O=Continuwuity, Inc./CN=$SERVER_NAME" \
-    -addext "subjectAltName=DNS:$SERVER_NAME"
-openssl req -in "/$SERVER_NAME.csr" -noout -text
-
-echo "Signing certificate for $SERVER_NAME with Complement CA"
-cat <<EOF > ./cert.ext
-authorityKeyIdentifier=keyid,issuer
-basicConstraints = CA:FALSE
-keyUsage = digitalSignature, keyEncipherment, dataEncipherment, nonRepudiation
-extendedKeyUsage = serverAuth
-subjectAltName = @alt_names
-[alt_names]
-DNS.1 = *.docker.internal
-DNS.2 = hs1
-DNS.3 = hs2
-DNS.4 = hs3
-DNS.5 = hs4
-DNS.6 = $SERVER_NAME
-IP.1 = 127.0.0.1
-EOF
-openssl x509 \
-    -req \
-    -in "/$SERVER_NAME.csr" \
-    -CA /complement/ca/ca.crt \
-    -CAkey /complement/ca/ca.key \
-    -CAcreateserial \
-    -out "/$SERVER_NAME.crt" \
-    -days 1 \
-    -sha256 \
-    -extfile ./cert.ext
-
-# Tell continuwuity where to find the certs
-export CONTINUWUITY_TLS__KEY="/$SERVER_NAME.key"
-export CONTINUWUITY_TLS__CERTS="/$SERVER_NAME.crt"
-# And who it is
-export CONTINUWUITY_SERVER_NAME="$SERVER_NAME"
-
-echo "Starting Continuwuity with SERVER_NAME=$SERVER_NAME"
-# Start continuwuity
-/usr/local/bin/conduwuit --config /etc/continuwuity/config.toml
@@ -1,53 +0,0 @@
-# ============================================= #
-#   Complement pre-filled configuration file    #
-#                                               #
-# DANGER: THIS FILE FORCES INSECURE VALUES.     #
-# DO NOT USE OUTSIDE THE TEST SUITE ENV!        #
-# ============================================= #
-[global]
-address = "0.0.0.0"
-allow_device_name_federation = true
-allow_guest_registration = true
-allow_public_room_directory_over_federation = true
-allow_public_room_directory_without_auth = true
-allow_registration = true
-database_path = "/database"
-log = "trace,h2=debug,hyper=debug"
-port = [8008, 8448]
-trusted_servers = []
-only_query_trusted_key_servers = false
-query_trusted_key_servers_first = false
-query_trusted_key_servers_first_on_join = false
-yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse = true
-ip_range_denylist = []
-url_preview_domain_contains_allowlist = ["*"]
-url_preview_domain_explicit_denylist = ["*"]
-media_compat_file_link = false
-media_startup_check = true
-prune_missing_media = true
-log_colors = true
-admin_room_notices = false
-allow_check_for_updates = false
-intentionally_unknown_config_option_for_testing = true
-rocksdb_log_level = "info"
-rocksdb_max_log_files = 1
-rocksdb_recovery_mode = 0
-rocksdb_paranoid_file_checks = true
-log_guest_registrations = false
-allow_legacy_media = true
-startup_netburst = true
-startup_netburst_keep = -1
-allow_invalid_tls_certificates_yes_i_know_what_the_fuck_i_am_doing_with_this_and_i_know_this_is_insecure = true
-dns_timeout = 60
-dns_attempts = 20
-request_conn_timeout = 60
-request_timeout = 120
-well_known_conn_timeout = 60
-well_known_timeout = 60
-federation_idle_timeout = 300
-sender_timeout = 300
-sender_idle_timeout = 300
-sender_retry_backoff_limit = 300
-
-[global.tls]
-dual_protocol = true
@@ -48,7 +48,7 @@ EOF

 # Developer tool versions
 # renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
-ENV BINSTALL_VERSION=1.17.4
+ENV BINSTALL_VERSION=1.16.7
 # renovate: datasource=github-releases depName=psastras/sbom-rs
 ENV CARGO_SBOM_VERSION=0.9.1
 # renovate: datasource=crate depName=lddtree
@@ -1,11 +0,0 @@
-FROM ubuntu:latest
-EXPOSE 8008
-EXPOSE 8448
-RUN apt-get update && apt-get install -y ca-certificates liburing2 && rm -rf /var/lib/apt/lists/*
-RUN mkdir -p /etc/continuwuity /var/lib/continuwuity
-COPY docker/complement-entrypoint.sh /usr/local/bin/complement-entrypoint.sh
-COPY docker/complement.config.toml /etc/continuwuity/config.toml
-COPY target/debug/conduwuit /usr/local/bin/conduwuit
-RUN chmod +x /usr/local/bin/conduwuit /usr/local/bin/complement-entrypoint.sh
-#HEALTHCHECK --interval=30s --timeout=5s CMD curl --fail http://localhost:8008/_continuwuity/server_version || exit 1
-ENTRYPOINT ["/usr/local/bin/complement-entrypoint.sh"]
@@ -18,7 +18,7 @@ RUN --mount=type=cache,target=/etc/apk/cache apk add \

 # Developer tool versions
 # renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
-ENV BINSTALL_VERSION=1.17.4
+ENV BINSTALL_VERSION=1.16.7
 # renovate: datasource=github-releases depName=psastras/sbom-rs
 ENV CARGO_SBOM_VERSION=0.9.1
 # renovate: datasource=crate depName=lddtree
@@ -34,14 +34,6 @@
   "name": "troubleshooting",
   "label": "Troubleshooting"
 },
-{
-  "type": "dir-section-header",
-  "name": "community",
-  "label": "Community",
-  "collapsible": true,
-  "collapsed": false
-},
+"security",
 {
   "type": "divider"
 },
@@ -71,5 +63,7 @@
 },
 {
   "type": "divider"
-}
+},
+"community",
+"security"
 ]
@@ -19,21 +19,16 @@
 {
   "text": "Admin Command Reference",
   "link": "/reference/admin/"
 },
 {
   "text": "Server Reference",
   "link": "/reference/server"
 }
 ]
 },
 {
   "text": "Community",
-  "items": [
-    {
-      "text": "Community Guidelines",
-      "link": "/community/guidelines"
-    },
-    {
-      "text": "Become a Partnered Homeserver!",
-      "link": "/community/ops-guidelines"
-    }
-  ]
+  "link": "/community"
 },
 {
   "text": "Security",
@@ -1,12 +0,0 @@
-[
-  {
-    "type": "file",
-    "name": "guidelines",
-    "label": "Community Guidelines"
-  },
-  {
-    "type": "file",
-    "name": "ops-guidelines",
-    "label": "Partnered Homeserver Guidelines"
-  }
-]
@@ -1,32 +0,0 @@
-# Partnered Homeserver Operator Requirements
-> _So you want to be an officially sanctioned public Continuwuity homeserver operator?_
-
-Thank you for your interest in the project! There's a few things we need from you first to make sure your homeserver meets our quality standards and that you are prepared to handle the additional workload introduced by operating a public chat service.
-
-## Stuff you must have
-if you don't do these things we will tell you to go away
-
-- Your homeserver must be running an up-to-date version of Continuwuity
-- You must have a CAPTCHA, external registration system, or apply-to-join system that provides one-time-use invite codes (we do not accept fully open nor static token registration)
-- Your homeserver must have support details listed in [`/.well-known/matrix/support`](https://spec.matrix.org/v1.17/client-server-api/#getwell-knownmatrixsupport)
-- Your rules and guidelines must align with [the project's own code of conduct](guidelines).
-- You must be reasonably responsive (i.e. don't leave us hanging for a week if we alert you to an issue on your server)
-- Your homeserver's community rooms (if any) must be protected by a moderation bot subscribed to policy lists like the Community Moderation Effort (you can get one from https://asgard.chat if you don't want to run your own)
-
-## Stuff we encourage you to have
-not strictly required but we will consider your request more strongly if you have it
-
-- You should have automated moderation tooling that can automatically suspend abusive users on your homeserver who are added to policy lists
-- You should have multiple server administrators (increased bus factor)
-- You should have a terms of service and privacy policy prominently available
-
-## Stuff you get
-
-- Prominent listing in our README!
-- A gold star sticker
-- Access to a low noise room for more direct communication with maintainers and collaboration with fellow operators
-- Read-only access to the continuwuity internal ban list
-- Early notice of upcoming releases
-
-## Sound good?
-To get started, ping a team member in [our main chatroom](https://matrix.to/#/#continuwuity:continuwuity.org) and ask to be added to the list.
@@ -1,18 +1,17 @@
 # RPM Installation Guide

-Continuwuity is available as RPM packages for Fedora and compatible distributions.
-We do not currently have infrastructure to build RPMs for RHEL and compatible distributions, but this is a work in progress.
+Continuwuity is available as RPM packages for Fedora, RHEL, and compatible distributions.

 The RPM packaging files are maintained in the `fedora/` directory:
 - `continuwuity.spec.rpkg` - RPM spec file using rpkg macros for building from git
 - `continuwuity.service` - Systemd service file for the server
 - `RPM-GPG-KEY-continuwuity.asc` - GPG public key for verifying signed packages

-RPM packages built by CI are signed with our GPG key (RSA, ID: `6595 E8DB 9191 D39A 46D6 A514 4BA7 F590 DF0B AA1D`). # spellchecker:disable-line
+RPM packages built by CI are signed with our GPG key (Ed25519, ID: `5E0FF73F411AAFCA`).

 ```bash
 # Import the signing key
-sudo rpm --import https://forgejo.ellis.link/api/packages/continuwuation/rpm/repository.key
+sudo rpm --import https://forgejo.ellis.link/continuwuation/continuwuity/raw/branch/main/fedora/RPM-GPG-KEY-continuwuity.asc

 # Verify a downloaded package
 rpm --checksig continuwuity-*.rpm
@@ -24,7 +23,7 @@ ## Installation methods

 ```bash
 # Add the repository and install
-sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable.repo
+sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable/continuwuation.repo
 sudo dnf install continuwuity
 ```

@@ -32,7 +31,7 @@ # Add the repository and install

 ```bash
 # Add the dev repository and install
-sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev.repo
+sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev/continuwuation.repo
 sudo dnf install continuwuity
 ```

@@ -40,10 +39,23 @@ # Add the dev repository and install

 ```bash
 # Branch names are sanitized (slashes become hyphens, lowercase only)
-sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/tom-new-feature.repo
+sudo dnf config-manager addrepo --from-repofile=https://forgejo.ellis.link/api/packages/continuwuation/rpm/tom-new-feature/continuwuation.repo
 sudo dnf install continuwuity
 ```

+**Direct installation** without adding repository
+
+```bash
+# Latest stable release
+sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable/continuwuity
+
+# Latest development build
+sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/dev/continuwuity
+
+# Specific feature branch
+sudo dnf install https://forgejo.ellis.link/api/packages/continuwuation/rpm/branch-name/continuwuity
+```
+
 **Manual repository configuration** (alternative method)

 ```bash
@@ -53,7 +65,7 @@ # Branch names are sanitized (slashes become hyphens, lowercase only)
 baseurl=https://forgejo.ellis.link/api/packages/continuwuation/rpm/stable
 enabled=1
 gpgcheck=1
-gpgkey=https://forgejo.ellis.link/api/packages/continuwuation/rpm/repository.key
+gpgkey=https://forgejo.ellis.link/continuwuation/continuwuity/raw/branch/main/fedora/RPM-GPG-KEY-continuwuity.asc
 EOF

 sudo dnf install continuwuity
@@ -19,16 +19,6 @@
src: /assets/logo.svg
alt: continuwuity logo

beforeFeatures:
- title: Matrix for Discord users
details: New to Matrix? Learn how Matrix compares to Discord
link: https://joinmatrix.org/guide/matrix-vs-discord/
buttonText: Find Out the Difference
- title: How Matrix Works
details: Learn how Matrix works under the hood, and what that means
link: https://matrix.org/docs/matrix-concepts/elements-of-matrix/
buttonText: Read the Guide

features:
- title: 🚀 High Performance
details: Built with Rust for exceptional speed and efficiency. Designed to run smoothly even on modest hardware.

|
||||
"message": "Welcome to Continuwuity! Important announcements about the project will appear here."
|
||||
},
|
||||
{
|
||||
"id": 9,
|
||||
"id": 8,
|
||||
"mention_room": false,
|
||||
"date": "2026-02-09",
|
||||
"message": "Yesterday we released [v0.5.4](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.4). Bugfixes, performance improvements and more moderation features! There's also a security fix, so please update as soon as possible. Don't forget to join [our announcements channel](https://matrix.to/#/!jIdNjSM5X-V5JVx2h2kAhUZIIQ08GyzPL55NFZAH1vM/%2489TY9CqRg4-ff1MGo3Ulc5r5X4pakfdzT-99RD8Docc?via=ellis.link&via=explodie.org&via=matrix.org) to get important information sooner <3 "
|
||||
"date": "2026-01-12",
|
||||
"message": "Hey everyone!\n\nJust letting you know we've released [v0.5.3](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.3) - this one is a bit of a hotfix for an issue with inviting and allowing others to join rooms.\n\nIf you appreceate the round-the-clock work we've been doing to keep your servers secure over this holiday period, we'd really appreciate your support - you can sponsor individuals on our team using the 'sponsor' button at the top of [our GitHub repository](https://github.com/continuwuity/continuwuity). If you can't do that, even a star helps - spreading the word and advocating for our project helps keep it going.\n\nHave a lovely rest of your year \\\n[Jade \\(she/her\\)](https://matrix.to/#/%40jade%3Aellis.link) \n🩵"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -112,19 +112,6 @@ ### `!admin query resolver overrides-cache`
|
||||
|
||||
Query the overrides cache
|
||||
|
||||
### `!admin query resolver flush-cache`
|
||||
|
||||
Flush a given server from the resolver caches or flush them completely
|
||||
|
||||
* Examples:
|
||||
* Flush a specific server:
|
||||
|
||||
`!admin query resolver flush-cache matrix.example.com`
|
||||
|
||||
* Flush all resolver caches completely:
|
||||
|
||||
`!admin query resolver flush-cache --all`
|
||||
|
||||
## `!admin query pusher`
|
||||
|
||||
pusher service
|
||||
|
||||
@@ -20,16 +20,6 @@ ### Lost access to admin room

## General potential issues

### Configuration not working as expected

Sometimes you can make a mistake in your configuration that
means things don't get passed to Continuwuity correctly.
This is particularly easy to do with environment variables.
To check what configuration Continuwuity actually sees, you can
use the `!admin server show-config` command in your admin room.
Beware that this prints out any secrets in your configuration,
so you might want to delete the result afterwards!

### Potential DNS issues when using Docker

Docker's DNS setup for containers in a non-default network intercepts queries to
@@ -149,7 +139,7 @@ ### Database corruption

## Debugging

Note that users should not really need to debug things. If you find yourself
Note that users should not really be debugging things. If you find yourself
debugging and find the issue, please let us know and/or how we can fix it.
Various debug commands can be found in `!admin debug`.

@@ -188,31 +178,6 @@ ### Pinging servers
and simply fetches a string on a static JSON endpoint. It is very low cost both
bandwidth and computationally.

### Enabling backtraces for errors

Continuwuity can capture backtraces (stack traces) for errors to help diagnose
issues. Backtraces show the exact sequence of function calls that led to an
error, which is invaluable for debugging.

To enable backtraces, set the `RUST_BACKTRACE` environment variable before starting Continuwuity:

```bash
# For both panics and errors
RUST_BACKTRACE=1 ./conduwuit

```

For systemd deployments, add this to your service file:

```ini
[Service]
Environment="RUST_BACKTRACE=1"
```

Backtrace capture has a performance cost. Avoid leaving it on.
You can also enable it only for panics by setting
`RUST_BACKTRACE=1` and `RUST_LIB_BACKTRACE=0`.

### Allocator memory stats

When using jemalloc with jemallocator's `stats` feature (`--enable-stats`), you

package-lock.json (generated, 1862 lines)
File diff suppressed because it is too large
@@ -22,9 +22,10 @@
"license": "ISC",
"type": "commonjs",
"devDependencies": {
"@rspress/core": "^2.0.0",
"@rspress/plugin-client-redirects": "^2.0.0",
"@rspress/plugin-sitemap": "^2.0.0",
"@rspress/core": "^2.0.0-rc.1",
"@rspress/plugin-client-redirects": "^2.0.0-alpha.12",
"@rspress/plugin-preview": "^2.0.0-beta.35",
"@rspress/plugin-sitemap": "^2.0.0-beta.23",
"typescript": "^5.9.3"
}
}

@@ -1,4 +1,5 @@
import { defineConfig } from '@rspress/core';
import { pluginPreview } from '@rspress/plugin-preview';
import { pluginSitemap } from '@rspress/plugin-sitemap';
import { pluginClientRedirects } from '@rspress/plugin-client-redirects';

@@ -40,7 +41,7 @@ export default defineConfig({
},
},

plugins: [pluginSitemap({
plugins: [pluginPreview(), pluginSitemap({
siteUrl: 'https://continuwuity.org', // TODO: Set automatically in build pipeline
}),
pluginClientRedirects({
@@ -53,9 +54,6 @@ export default defineConfig({
}, {
from: '/server_reference',
to: '/reference/server'
}, {
from: '/community$',
to: '/community/guidelines'
}
]
})],

@@ -1,5 +1,5 @@
use clap::Subcommand;
use conduwuit::{Err, Result, utils::time};
use conduwuit::{Result, utils::time};
use futures::StreamExt;
use ruma::OwnedServerName;

@@ -7,7 +7,6 @@

#[admin_command_dispatch]
#[derive(Debug, Subcommand)]
#[allow(clippy::enum_variant_names)]
/// Resolver service and caches
pub enum ResolverCommand {
/// Query the destinations cache
@@ -19,14 +18,6 @@ pub enum ResolverCommand {
OverridesCache {
name: Option<String>,
},

/// Flush a specific server from the resolver caches or everything
FlushCache {
name: Option<OwnedServerName>,

#[arg(short, long)]
all: bool,
},
}

#[admin_command]
@@ -78,18 +69,3 @@ async fn overrides_cache(&self, server_name: Option<String>) -> Result {

Ok(())
}

#[admin_command]
async fn flush_cache(&self, name: Option<OwnedServerName>, all: bool) -> Result {
if all {
self.services.resolver.cache.clear().await;
writeln!(self, "Resolver caches cleared!").await
} else if let Some(name) = name {
self.services.resolver.cache.del_destination(&name);
self.services.resolver.cache.del_override(&name);
self.write_str(&format!("Cleared {name} from resolver caches!"))
.await
} else {
Err!("Missing name. Supply a name or use --all to flush the whole cache.")
}
}

@@ -3,7 +3,10 @@
fmt::Write as _,
};

use api::client::{full_user_deactivate, join_room_by_id_helper, leave_room, remote_leave_room};
use api::client::{
full_user_deactivate, join_room_by_id_helper, leave_all_rooms, leave_room, remote_leave_room,
update_avatar_url, update_displayname,
};
use conduwuit::{
Err, Result, debug, debug_warn, error, info, is_equal_to,
matrix::{Event, pdu::PduBuilder},
@@ -224,6 +227,9 @@ pub(super) async fn deactivate(&self, no_leave_rooms: bool, user_id: String) ->
full_user_deactivate(self.services, &user_id, &all_joined_rooms)
.boxed()
.await?;
update_displayname(self.services, &user_id, None, &all_joined_rooms).await;
update_avatar_url(self.services, &user_id, None, None, &all_joined_rooms).await;
leave_all_rooms(self.services, &user_id).await;
}

self.write_str(&format!("User {user_id} has been deactivated"))
@@ -400,6 +406,10 @@ pub(super) async fn deactivate_all(&self, no_leave_rooms: bool, force: bool) ->
full_user_deactivate(self.services, &user_id, &all_joined_rooms)
.boxed()
.await?;
update_displayname(self.services, &user_id, None, &all_joined_rooms).await;
update_avatar_url(self.services, &user_id, None, None, &all_joined_rooms)
.await;
leave_all_rooms(self.services, &user_id).await;
}
},
}

@@ -26,7 +26,6 @@
events::{
GlobalAccountDataEventType, StateEventType,
room::{
member::{MembershipState, RoomMemberEventContent},
message::RoomMessageEventContent,
power_levels::{RoomPowerLevels, RoomPowerLevelsEventContent},
},
@@ -816,6 +815,9 @@ pub(crate) async fn deactivate_route(
.collect()
.await;

super::update_displayname(&services, sender_user, None, &all_joined_rooms).await;
super::update_avatar_url(&services, sender_user, None, None, &all_joined_rooms).await;

full_user_deactivate(&services, sender_user, &all_joined_rooms)
.boxed()
.await?;
@@ -905,6 +907,9 @@ pub async fn full_user_deactivate(
) -> Result<()> {
services.users.deactivate_account(user_id).await.ok();

super::update_displayname(services, user_id, None, all_joined_rooms).await;
super::update_avatar_url(services, user_id, None, None, all_joined_rooms).await;

services
.users
.all_profile_keys(user_id)
@@ -913,11 +918,9 @@ pub async fn full_user_deactivate(
})
.await;

// TODO: Rescind all user invites

let mut pdu_queue: Vec<(PduBuilder, &OwnedRoomId)> = Vec::new();

for room_id in all_joined_rooms {
let state_lock = services.rooms.state.mutex.lock(room_id).await;

let room_power_levels = services
.rooms
.state_accessor
@@ -945,33 +948,30 @@ pub async fn full_user_deactivate(
if user_can_demote_self {
let mut power_levels_content = room_power_levels.unwrap_or_default();
power_levels_content.users.remove(user_id);
let pl_evt = PduBuilder::state(String::new(), &power_levels_content);
pdu_queue.push((pl_evt, room_id));

// ignore errors so deactivation doesn't fail
match services
.rooms
.timeline
.build_and_append_pdu(
PduBuilder::state(String::new(), &power_levels_content),
user_id,
Some(room_id),
&state_lock,
)
.await
{
| Err(e) => {
warn!(%room_id, %user_id, "Failed to demote user's own power level: {e}");
},
| _ => {
info!("Demoted {user_id} in {room_id} as part of account deactivation");
},
}
}

// Leave the room
pdu_queue.push((
PduBuilder::state(user_id.to_string(), &RoomMemberEventContent {
avatar_url: None,
blurhash: None,
membership: MembershipState::Leave,
displayname: None,
join_authorized_via_users_server: None,
reason: None,
is_direct: None,
third_party_invite: None,
redact_events: None,
}),
room_id,
));

// TODO: Redact all messages sent by the user in the room
}

super::update_all_rooms(services, pdu_queue, user_id).await;
for room_id in all_joined_rooms {
services.rooms.state_cache.forget(room_id, user_id);
}
super::leave_all_rooms(services, user_id).boxed().await;

Ok(())
}

@@ -15,7 +15,6 @@
utils::{
self, shuffle,
stream::{IterStream, ReadyExt},
to_canonical_object,
},
warn,
};
@@ -1013,55 +1012,40 @@ async fn make_join_request(
trace!("make_join response: {:?}", make_join_response);
make_join_counter = make_join_counter.saturating_add(1);

match make_join_response {
| Ok(response) => {
info!("Received make_join response from {remote_server}");
if let Err(e) = validate_remote_member_event_stub(
&MembershipState::Join,
sender_user,
room_id,
&to_canonical_object(&response.event)?,
) {
warn!("make_join response from {remote_server} failed validation: {e}");
continue;
}
make_join_response_and_server = Ok((response, remote_server.clone()));
break;
},
| Err(e) => {
info!("make_join request to {remote_server} failed: {e}");
if matches!(
e.kind(),
ErrorKind::IncompatibleRoomVersion { .. } | ErrorKind::UnsupportedRoomVersion
) {
incompatible_room_version_count =
incompatible_room_version_count.saturating_add(1);
}
if let Err(ref e) = make_join_response {
if matches!(
e.kind(),
ErrorKind::IncompatibleRoomVersion { .. } | ErrorKind::UnsupportedRoomVersion
) {
incompatible_room_version_count =
incompatible_room_version_count.saturating_add(1);
}

if incompatible_room_version_count > 15 {
info!(
"15 servers have responded with M_INCOMPATIBLE_ROOM_VERSION or \
M_UNSUPPORTED_ROOM_VERSION, assuming that conduwuit does not support \
the room version {room_id}: {e}"
);
make_join_response_and_server =
Err!(BadServerResponse("Room version is not supported by Conduwuit"));
return make_join_response_and_server;
}
if incompatible_room_version_count > 15 {
info!(
"15 servers have responded with M_INCOMPATIBLE_ROOM_VERSION or \
M_UNSUPPORTED_ROOM_VERSION, assuming that conduwuit does not support the \
room version {room_id}: {e}"
);
make_join_response_and_server =
Err!(BadServerResponse("Room version is not supported by Conduwuit"));
return make_join_response_and_server;
}

if make_join_counter > 40 {
warn!(
"40 servers failed to provide valid make_join response, assuming no \
server can assist in joining."
);
make_join_response_and_server =
Err!(BadServerResponse("No server available to assist in joining."));
if make_join_counter > 40 {
warn!(
"40 servers failed to provide valid make_join response, assuming no server \
can assist in joining."
);
make_join_response_and_server =
Err!(BadServerResponse("No server available to assist in joining."));

return make_join_response_and_server;
}
},
return make_join_response_and_server;
}
}

make_join_response_and_server = make_join_response.map(|r| (r, remote_server.clone()));

if make_join_response_and_server.is_ok() {
break;
}

@@ -10,7 +10,7 @@
},
result::FlatOk,
trace,
utils::{self, shuffle, stream::IterStream, to_canonical_object},
utils::{self, shuffle, stream::IterStream},
warn,
};
use futures::{FutureExt, StreamExt};
@@ -741,17 +741,6 @@ async fn make_knock_request(

trace!("make_knock response: {make_knock_response:?}");
make_knock_counter = make_knock_counter.saturating_add(1);
if let Ok(r) = &make_knock_response {
if let Err(e) = validate_remote_member_event_stub(
&MembershipState::Knock,
sender_user,
room_id,
&to_canonical_object(&r.event)?,
) {
warn!("make_knock response from {remote_server} failed validation: {e}");
continue;
}
}

make_knock_response_and_server = make_knock_response.map(|r| (r, remote_server.clone()));

@@ -231,7 +231,7 @@ pub(crate) fn validate_remote_member_event_stub(
};
if event_membership != &membership.as_str() {
return Err!(BadServerResponse(
"Remote server returned member event with incorrect membership type"
"Remote server returned member event with incorrect room_id"
));
}

@@ -2,7 +2,7 @@

use axum::extract::State;
use conduwuit::{
Err, Event, PduCount, Result, info,
Event, PduCount, Result,
result::LogErr,
utils::{IterStream, ReadyExt, stream::TryTools},
};
@@ -34,18 +34,6 @@ pub(crate) async fn get_backfill_route(
}
.check()
.await?;
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve backfill for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

let limit = body
.limit

@@ -1,5 +1,5 @@
use axum::extract::State;
use conduwuit::{Err, Result, err, info};
use conduwuit::{Result, err};
use ruma::{MilliSecondsSinceUnixEpoch, RoomId, api::federation::event::get_event};

use super::AccessCheck;
@@ -38,19 +38,6 @@ pub(crate) async fn get_event_route(
.check()
.await?;

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

Ok(get_event::v1::Response {
origin: services.globals.server_name().to_owned(),
origin_server_ts: MilliSecondsSinceUnixEpoch::now(),

@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};

use axum::extract::State;
use conduwuit::{Err, Error, Result, info, utils::stream::ReadyExt};
use conduwuit::{Error, Result, utils::stream::ReadyExt};
use futures::StreamExt;
use ruma::{
RoomId,
@@ -29,19 +29,6 @@ pub(crate) async fn get_event_authorization_route(
.check()
.await?;

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

let event = services
.rooms
.timeline

|
||||
use axum::extract::State;
|
||||
use conduwuit::{Err, Result, debug, debug_error, info, utils::to_canonical_object};
|
||||
use conduwuit::{Result, debug, debug_error, utils::to_canonical_object};
|
||||
use ruma::api::federation::event::get_missing_events;
|
||||
|
||||
use super::AccessCheck;
|
||||
@@ -26,19 +26,6 @@ pub(crate) async fn get_missing_events_route(
|
||||
.check()
|
||||
.await?;
|
||||
|
||||
if !services
|
||||
.rooms
|
||||
.state_cache
|
||||
.server_in_room(services.globals.server_name(), &body.room_id)
|
||||
.await
|
||||
{
|
||||
info!(
|
||||
origin = body.origin().as_str(),
|
||||
"Refusing to serve state for room we aren't participating in"
|
||||
);
|
||||
return Err!(Request(NotFound("This server is not participating in that room.")));
|
||||
}
|
||||
|
||||
let limit = body
|
||||
.limit
|
||||
.try_into()
|
||||
@@ -79,12 +66,12 @@ pub(crate) async fn get_missing_events_route(
|
||||
continue;
|
||||
}
|
||||
|
||||
i = i.saturating_add(1);
|
||||
let Ok(event) = to_canonical_object(&pdu) else {
|
||||
debug_error!(
|
||||
body.origin = body.origin.as_ref().map(tracing::field::display),
|
||||
"Failed to convert PDU in database to canonical JSON: {pdu:?}"
|
||||
);
|
||||
i = i.saturating_add(1);
|
||||
continue;
|
||||
};
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
use axum::extract::State;
use conduwuit::{
Err, Result, info,
Err, Result,
utils::stream::{BroadbandExt, IterStream},
};
use conduwuit_service::rooms::spaces::{
@@ -23,19 +23,6 @@ pub(crate) async fn get_hierarchy_route(
return Err!(Request(NotFound("Room does not exist.")));
}

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

let room_id = &body.room_id;
let suggested_only = body.suggested_only;
let ref identifier = Identifier::ServerName(body.origin());

@@ -30,18 +30,6 @@ pub(crate) async fn create_join_event_template_route(
if !services.rooms.metadata.exists(&body.room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_join for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

if body.user_id.server_name() != body.origin() {
return Err!(Request(BadJson("Not allowed to join on behalf of another server/user.")));

@@ -1,6 +1,6 @@
use RoomVersionId::*;
use axum::extract::State;
use conduwuit::{Err, Error, Result, debug_warn, info, matrix::pdu::PduBuilder, warn};
use conduwuit::{Err, Error, Result, debug_warn, matrix::pdu::PduBuilder, warn};
use ruma::{
RoomVersionId,
api::{client::error::ErrorKind, federation::knock::create_knock_event_template},
@@ -20,18 +20,6 @@ pub(crate) async fn create_knock_event_template_route(
if !services.rooms.metadata.exists(&body.room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_knock for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

if body.user_id.server_name() != body.origin() {
return Err!(Request(BadJson("Not allowed to knock on behalf of another server/user.")));

@@ -1,5 +1,5 @@
use axum::extract::State;
use conduwuit::{Err, Result, info, matrix::pdu::PduBuilder};
use conduwuit::{Err, Result, matrix::pdu::PduBuilder};
use ruma::{
api::federation::membership::prepare_leave_event,
events::room::member::{MembershipState, RoomMemberEventContent},
@@ -20,19 +20,6 @@ pub(crate) async fn create_leave_event_template_route(
return Err!(Request(NotFound("Room is unknown to this server.")));
}

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve make_leave for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

if body.user_id.server_name() != body.origin() {
return Err!(Request(Forbidden(
"Not allowed to leave on behalf of another server/user."

@@ -36,18 +36,6 @@ async fn create_join_event(
if !services.rooms.metadata.exists(room_id).await {
return Err!(Request(NotFound("Room is unknown to this server.")));
}
if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = origin.as_str(),
"Refusing to serve send_join for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

// ACL check origin server
services

@@ -1,6 +1,6 @@
use axum::extract::State;
use conduwuit::{
Err, Result, err, info,
Err, Result, err,
matrix::{event::gen_event_id_canonical_json, pdu::PduEvent},
warn,
};
@@ -54,19 +54,6 @@ pub(crate) async fn create_knock_event_v1_route(
return Err!(Request(NotFound("Room is unknown to this server.")));
}

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve send_knock for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

// ACL check origin server
services
.rooms

@@ -1,7 +1,7 @@
#![allow(deprecated)]

use axum::extract::State;
use conduwuit::{Err, Result, err, info, matrix::event::gen_event_id_canonical_json};
use conduwuit::{Err, Result, err, matrix::event::gen_event_id_canonical_json};
use conduwuit_service::Services;
use futures::FutureExt;
use ruma::{
@@ -50,19 +50,6 @@ async fn create_leave_event(
return Err!(Request(NotFound("Room is unknown to this server.")));
}

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
info!(
origin = origin.as_str(),
"Refusing to serve backfill for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

// ACL check origin
services
.rooms

@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};

use axum::extract::State;
use conduwuit::{Err, Result, at, err, info, utils::IterStream};
use conduwuit::{Result, at, err, utils::IterStream};
use futures::{FutureExt, StreamExt, TryStreamExt};
use ruma::{OwnedEventId, api::federation::event::get_room_state};

@@ -24,19 -24,6 @@ pub(crate) async fn get_room_state_route(
.check()
.await?;

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

let shortstatehash = services
.rooms
.state_accessor

@@ -1,7 +1,7 @@
use std::{borrow::Borrow, iter::once};

use axum::extract::State;
use conduwuit::{Err, Result, at, err, info};
use conduwuit::{Result, at, err};
use futures::{StreamExt, TryStreamExt};
use ruma::{OwnedEventId, api::federation::event::get_room_state_ids};

@@ -25,19 +25,6 @@ pub(crate) async fn get_room_state_ids_route(
.check()
.await?;

if !services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), &body.room_id)
.await
{
info!(
origin = body.origin().as_str(),
"Refusing to serve state for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));
}

let shortstatehash = services
.rooms
.state_accessor

@@ -1,6 +1,6 @@
use conduwuit::{Err, Result, implement, is_false};
use conduwuit_service::Services;
use futures::{FutureExt, future::OptionFuture, join};
use futures::{FutureExt, StreamExt, future::OptionFuture, join};
use ruma::{EventId, RoomId, ServerName};

pub(super) struct AccessCheck<'a> {
@@ -31,6 +31,15 @@ pub(super) async fn check(&self) -> Result {
.state_cache
.server_in_room(self.origin, self.room_id);

// if any user on our homeserver is trying to knock this room, we'll need to
// acknowledge bans or leaves
let user_is_knocking = self
.services
.rooms
.state_cache
.room_members_knocked(self.room_id)
.count();

let server_can_see: OptionFuture<_> = self
.event_id
.map(|event_id| {
@@ -42,14 +51,14 @@ pub(super) async fn check(&self) -> Result {
})
.into();

let (world_readable, server_in_room, server_can_see, acl_check) =
join!(world_readable, server_in_room, server_can_see, acl_check);
let (world_readable, server_in_room, server_can_see, acl_check, user_is_knocking) =
join!(world_readable, server_in_room, server_can_see, acl_check, user_is_knocking);

if !acl_check {
return Err!(Request(Forbidden("Server access denied.")));
}

if !world_readable && !server_in_room {
if !world_readable && !server_in_room && user_is_knocking == 0 {
return Err!(Request(Forbidden("Server is not in room.")));
}

@@ -14,25 +14,6 @@

impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
let status = self.status_code();
if status.is_server_error() {
error!(
error = %self,
error_debug = ?self,
kind = ?self.kind(),
status = %status,
"Server error"
);
} else if status.is_client_error() {
use crate::debug_error;
debug_error!(
error = %self,
kind = ?self.kind(),
status = %status,
"Client error"
);
}

let response: UiaaResponse = self.into();
response
.try_into_http_response::<BytesMut>()

@@ -7,7 +7,6 @@

mod clap;
mod logging;
mod mods;
mod panic;
mod restart;
mod runtime;
mod sentry;

@@ -20,8 +19,6 @@

pub use crate::clap::Args;

pub fn run() -> Result<()> {
    panic::init();

    let args = clap::parse();
    run_with_args(&args)
}
@@ -1,34 +0,0 @@

use std::{backtrace::Backtrace, panic};

/// Initialize the panic hook to capture backtraces at the point of panic.
/// This is needed to capture the backtrace before the unwind destroys it.
pub(crate) fn init() {
    let default_hook = panic::take_hook();

    panic::set_hook(Box::new(move |info| {
        let backtrace = Backtrace::force_capture();

        let location_str = info.location().map_or_else(String::new, |loc| {
            format!(" at {}:{}:{}", loc.file(), loc.line(), loc.column())
        });

        let message = if let Some(s) = info.payload().downcast_ref::<&str>() {
            (*s).to_owned()
        } else if let Some(s) = info.payload().downcast_ref::<String>() {
            s.clone()
        } else {
            "Box<dyn Any>".to_owned()
        };

        let thread_name = std::thread::current()
            .name()
            .map_or_else(|| "<unnamed>".to_owned(), ToOwned::to_owned);

        eprintln!(
            "\nthread '{thread_name}' panicked{location_str}: \
             {message}\n\nBacktrace:\n{backtrace}"
        );

        default_hook(info);
    }));
}
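The deleted module above chains a custom hook in front of the default one so the backtrace is captured before unwinding destroys the stack. The same pattern can be exercised standalone; this is a minimal sketch, and `HOOK_RAN` is an illustrative flag that is not part of the codebase:

```rust
use std::{
    backtrace::Backtrace,
    panic,
    sync::atomic::{AtomicBool, Ordering},
};

// Illustrative flag (not from the codebase): records that the hook fired.
static HOOK_RAN: AtomicBool = AtomicBool::new(false);

fn main() {
    // Chain onto the default hook, as the deleted module does.
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        // Capture here, before the unwind tears the stack down.
        let backtrace = Backtrace::force_capture();
        HOOK_RAN.store(true, Ordering::SeqCst);
        eprintln!("panicked: {info}\n{backtrace}");
        default_hook(info);
    }));

    // Trigger a panic and confirm the hook observed it.
    let _ = panic::catch_unwind(|| panic!("boom"));
    assert!(HOOK_RAN.load(Ordering::SeqCst));
}
```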
@@ -118,6 +118,7 @@ webpage.optional = true

blurhash.workspace = true
blurhash.optional = true
recaptcha-verify = { version = "0.1.5", default-features = false }
indexmap.workspace = true

[target.'cfg(all(unix, target_os = "linux"))'.dependencies]
sd-notify.workspace = true
@@ -406,10 +406,6 @@ pub async fn get_admins(&self) -> Vec<OwnedUserId> {

/// Checks whether a given user is an admin of this server
pub async fn user_is_admin(&self, user_id: &UserId) -> bool {
    if self.services.globals.server_user == user_id {
        return true;
    }

    if self
        .services
        .server
@@ -1,7 +1,7 @@

use std::{cmp, collections::HashMap, future::ready};
use std::{cmp, collections::HashMap};

use conduwuit::{
    Err, Event, Pdu, Result, debug, debug_info, debug_warn, error, info,
    Err, Pdu, Result, debug, debug_info, debug_warn, error, info,
    result::NotFound,
    utils::{
        IterStream, ReadyExt,

@@ -15,9 +15,8 @@

use ruma::{
    OwnedRoomId, OwnedUserId, RoomId, UserId,
    events::{
        AnyStrippedStateEvent, GlobalAccountDataEventType, StateEventType,
        push_rules::PushRulesEvent,
        room::member::{MembershipState, RoomMemberEventContent},
        GlobalAccountDataEventType, StateEventType, push_rules::PushRulesEvent,
        room::member::MembershipState,
    },
    push::Ruleset,
    serde::Raw,

@@ -163,14 +162,6 @@ async fn migrate(services: &Services) -> Result<()> {

        populate_userroomid_leftstate_table(services).await?;
    }

    if db["global"]
        .get(FIXED_LOCAL_INVITE_STATE_MARKER)
        .await
        .is_not_found()
    {
        fix_local_invite_state(services).await?;
    }

    assert_eq!(
        services.globals.db.database_version().await,
        DATABASE_VERSION,

@@ -730,46 +721,3 @@ async fn populate_userroomid_leftstate_table(services: &Services) -> Result {

    db.db.sort()?;
    Ok(())
}

const FIXED_LOCAL_INVITE_STATE_MARKER: &str = "fix_local_invite_state";
async fn fix_local_invite_state(services: &Services) -> Result {
    // Clean up the effects of !1249 by caching stripped state for invites

    type KeyVal<'a> = (Key<'a>, Raw<Vec<AnyStrippedStateEvent>>);
    type Key<'a> = (&'a UserId, &'a RoomId);

    let db = &services.db;
    let cork = db.cork_and_sync();
    let userroomid_invitestate = services.db["userroomid_invitestate"].clone();

    // for each user invited to a room
    let fixed = userroomid_invitestate.stream()
        // if they're a local user on this homeserver
        .try_filter(|((user_id, _), _): &KeyVal<'_>| ready(services.globals.user_is_local(user_id)))
        .and_then(async |((user_id, room_id), stripped_state): KeyVal<'_>| Ok::<_, conduwuit::Error>((user_id.to_owned(), room_id.to_owned(), stripped_state.deserialize()?)))
        .try_fold(0_usize, async |mut fixed, (user_id, room_id, stripped_state)| {
            // and their invite state is None
            if stripped_state.is_empty()
                // and they are actually invited to the room
                && let Ok(membership_event) = services.rooms.state_accessor.room_state_get(&room_id, &StateEventType::RoomMember, user_id.as_str()).await
                && membership_event.get_content::<RoomMemberEventContent>().is_ok_and(|content| content.membership == MembershipState::Invite)
                // and the invite was sent by a local user
                && services.globals.user_is_local(&membership_event.sender) {

                // build and save stripped state for their invite in the database
                let stripped_state = services.rooms.state.summary_stripped(&membership_event, &room_id).await;
                userroomid_invitestate.put((&user_id, &room_id), Json(stripped_state));
                fixed = fixed.saturating_add(1);
            }

            Ok(fixed)
        })
        .await?;

    drop(cork);
    info!(?fixed, "Fixed local invite state cache entries.");

    db["global"].insert(FIXED_LOCAL_INVITE_STATE_MARKER, []);
    db.db.sort()?;
    Ok(())
}
@@ -4,8 +4,8 @@

};

use conduwuit::{
    Err, Event, Result, debug::INFO_SPAN_LEVEL, defer, err, implement, info,
    utils::stream::IterStream, warn,
    Err, Event, Result, debug::INFO_SPAN_LEVEL, defer, err, implement, utils::stream::IterStream,
    warn,
};
use futures::{
    FutureExt, TryFutureExt, TryStreamExt,

@@ -70,7 +70,7 @@ pub async fn handle_incoming_pdu<'a>(

        return Err!(Request(TooLarge("PDU is too large")));
    }

    // 1.1 Check we even know about the room
    // 1.1 Check the server is in the room
    let meta_exists = self.services.metadata.exists(room_id).map(Ok);

    // 1.2 Check if the room is disabled

@@ -114,19 +114,6 @@ pub async fn handle_incoming_pdu<'a>(

        return Err!(Request(Forbidden("Federation of this room is disabled by this server.")));
    }

    if !self
        .services
        .state_cache
        .server_in_room(self.services.globals.server_name(), room_id)
        .await
    {
        info!(
            %origin,
            "Dropping inbound PDU for room we aren't participating in"
        );
        return Err!(Request(NotFound("This server is not participating in that room.")));
    }

    let (incoming_pdu, val) = self
        .handle_outlier_pdu(origin, create_event, event_id, room_id, value, false)
        .await?;
@@ -1,7 +1,7 @@

use std::borrow::Borrow;

use conduwuit::{
    Pdu, Result, err, implement,
    Result, err, implement,
    matrix::{Event, StateKey},
};
use futures::{Stream, StreamExt, TryFutureExt};

@@ -84,7 +84,7 @@ pub async fn room_state_get(

    room_id: &RoomId,
    event_type: &StateEventType,
    state_key: &str,
) -> Result<Pdu> {
) -> Result<impl Event> {
    self.services
        .state
        .get_room_shortstatehash(room_id)
@@ -1,4 +1,4 @@

use conduwuit::{implement, utils::stream::ReadyExt, warn};
use conduwuit::{implement, utils::stream::ReadyExt};
use futures::StreamExt;
use ruma::{
    EventId, RoomId, ServerName,

@@ -19,12 +19,7 @@ pub async fn server_can_see_event(

    event_id: &EventId,
) -> bool {
    let Ok(shortstatehash) = self.pdu_shortstatehash(event_id).await else {
        warn!(
            "Unable to visibility check event {} in room {} for server {}: shortstatehash not \
             found",
            event_id, room_id, origin
        );
        return false;
        return true;
    };

    let history_visibility = self
@@ -30,7 +30,6 @@ struct Services {

    config: Dep<config::Service>,
    globals: Dep<globals::Service>,
    metadata: Dep<rooms::metadata::Service>,
    state: Dep<rooms::state::Service>,
    state_accessor: Dep<rooms::state_accessor::Service>,
    users: Dep<users::Service>,
}

@@ -65,7 +64,6 @@ fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {

    config: args.depend::<config::Service>("config"),
    globals: args.depend::<globals::Service>("globals"),
    metadata: args.depend::<rooms::metadata::Service>("rooms::metadata"),
    state: args.depend::<rooms::state::Service>("rooms::state"),
    state_accessor: args
        .depend::<rooms::state_accessor::Service>("rooms::state_accessor"),
    users: args.depend::<users::Service>("users"),
@@ -118,8 +118,10 @@ pub async fn update_membership(

        self.mark_as_joined(user_id, room_id);
    },
    | MembershipState::Invite => {
        let last_state = self.services.state.summary_stripped(pdu, room_id).await;
        self.mark_as_invited(user_id, room_id, pdu.sender(), Some(last_state), None)
        // TODO: make sure that passing None for `last_state` is correct behavior.
        // the call from `append_pdu` used to use `services.state.summary_stripped`
        // to fill that parameter.
        self.mark_as_invited(user_id, room_id, pdu.sender(), None, None)
            .await?;
    },
    | MembershipState::Leave | MembershipState::Ban => {
@@ -233,16 +233,6 @@ pub async fn try_auth(

    | AuthData::Dummy(_) => {
        uiaainfo.completed.push(AuthType::Dummy);
    },
    | AuthData::FallbackAcknowledgement(_) => {
        // The client is checking if authentication has succeeded out-of-band. This is
        // possible if the client is using "fallback auth" (see spec section
        // 4.9.1.4), which we don't support (and probably never will, because it's a
        // disgusting hack).

        // Return early to tell the client that no, authentication did not succeed while
        // it wasn't looking.
        return Ok((false, uiaainfo));
    },
    | k => error!("type not supported: {:?}", k),
}
@@ -304,11 +304,7 @@ pub fn disable_login(&self, user_id: &UserId) {

pub fn enable_login(&self, user_id: &UserId) { self.db.userid_logindisabled.remove(user_id); }

pub async fn is_login_disabled(&self, user_id: &UserId) -> bool {
    self.db
        .userid_logindisabled
        .exists(user_id.as_str())
        .await
        .is_ok()
    self.db.userid_logindisabled.contains(user_id).await
}

/// Check if account is active, infallible
19  src/stitcher/Cargo.toml  Normal file

@@ -0,0 +1,19 @@

[package]
name = "stitcher"
description = "An implementation of stitched ordering (https://codeberg.org/andybalaam/stitched-order)"
edition.workspace = true
license.workspace = true
readme.workspace = true
repository.workspace = true
version.workspace = true

[lib]
path = "mod.rs"

[dependencies]
indexmap.workspace = true
itertools.workspace = true

[dev-dependencies]
peg = "0.8.5"
rustyline = { version = "17.0.2", default-features = false }
141  src/stitcher/algorithm.rs  Normal file

@@ -0,0 +1,141 @@

use std::collections::HashSet;

use indexmap::IndexSet;
use itertools::Itertools;

use super::{Batch, Gap, OrderKey, StitchedItem, StitcherBackend};

/// Updates to a gap in the stitched order.
#[derive(Debug)]
pub struct GapUpdate<'id, K: OrderKey> {
    /// The opaque key of the gap to update.
    pub key: K,
    /// The new contents of the gap. If this is empty, the gap should be
    /// deleted.
    pub gap: Gap,
    /// New items to insert after the gap. These items _should not_ be
    /// synchronized to clients.
    pub inserted_items: Vec<StitchedItem<'id>>,
}

/// Updates to the stitched order.
#[derive(Debug)]
pub struct OrderUpdates<'id, K: OrderKey> {
    /// Updates to individual gaps. The items inserted by these updates _should
    /// not_ be synchronized to clients.
    pub gap_updates: Vec<GapUpdate<'id, K>>,
    /// New items to append to the end of the order. These items _should_ be
    /// synchronized to clients.
    pub new_items: Vec<StitchedItem<'id>>,
    // The subset of events in the batch which got slotted into an existing gap. This is tracked
    // for unit testing and may eventually be sent to clients.
    pub events_added_to_gaps: HashSet<&'id str>,
}

/// The stitcher, which implements the stitched ordering algorithm.
/// Its primary method is [`Stitcher::stitch`].
pub struct Stitcher<'backend, B: StitcherBackend> {
    backend: &'backend B,
}

impl<B: StitcherBackend> Stitcher<'_, B> {
    /// Create a new [`Stitcher`] given a [`StitcherBackend`].
    pub fn new(backend: &B) -> Stitcher<'_, B> { Stitcher { backend } }

    /// Given a [`Batch`], compute the [`OrderUpdates`] which should be made to
    /// the stitched order to incorporate that batch. It is the responsibility
    /// of the caller to apply the updates.
    pub fn stitch<'id>(&self, batch: &Batch<'id>) -> OrderUpdates<'id, B::Key> {
        let mut gap_updates = Vec::new();
        let mut events_added_to_gaps: HashSet<&'id str> = HashSet::new();

        // Events in the batch which haven't been fitted into a gap or appended to the
        // end yet.
        let mut remaining_events: IndexSet<_> = batch.events().collect();

        // 1: Find existing gaps which include IDs of events in `batch`
        let matching_gaps = self.backend.find_matching_gaps(batch.events());

        // Repeat steps 2-9 for each matching gap
        for (key, mut gap) in matching_gaps {
            // 2. Find events in `batch` which are mentioned in `gap`
            let matching_events = remaining_events.iter().filter(|id| gap.contains(**id));

            // Extend `events_added_to_gaps` with the matching events, which are destined to
            // be slotted into gaps.
            events_added_to_gaps.extend(matching_events.clone());

            // 3. Create the to-insert list from the predecessor sets of each matching event
            let events_to_insert: Vec<_> = matching_events
                .filter_map(|event| batch.predecessors(event))
                .flat_map(|predecessors| predecessors.predecessor_set.iter())
                .filter(|event| remaining_events.contains(*event))
                .copied()
                .collect();

            // 4. Remove the events in the to-insert list from `remaining_events` so they
            // aren't processed again
            remaining_events.retain(|event| !events_to_insert.contains(event));

            // 5 and 6
            let inserted_items = self.sort_events_and_create_gaps(batch, events_to_insert);

            // 8. Update gap
            gap.retain(|id| !batch.contains(id));

            // 7 and 9. Append to-insert list and delete gap if empty
            // The actual work of mutating the order is handled by the caller,
            // we just record an update to make.
            gap_updates.push(GapUpdate { key: key.clone(), gap, inserted_items });
        }

        // 10. Append remaining events and gaps
        let new_items = self.sort_events_and_create_gaps(batch, remaining_events);

        OrderUpdates {
            gap_updates,
            new_items,
            events_added_to_gaps,
        }
    }

    fn sort_events_and_create_gaps<'id>(
        &self,
        batch: &Batch<'id>,
        events_to_insert: impl IntoIterator<Item = &'id str>,
    ) -> Vec<StitchedItem<'id>> {
        // 5. Sort the to-insert list with DAG;received order
        let events_to_insert = events_to_insert
            .into_iter()
            .sorted_by(batch.compare_by_dag_received())
            .collect_vec();

        // allocate 1.5x the size of the to-insert list
        let items_capacity = events_to_insert
            .capacity()
            .saturating_add(events_to_insert.capacity().div_euclid(2));

        let mut items = Vec::with_capacity(items_capacity);

        for event in events_to_insert {
            let missing_prev_events: HashSet<String> = batch
                .predecessors(event)
                .expect("events in to_insert should be in batch")
                .prev_events
                .iter()
                .filter(|prev_event| {
                    !(batch.contains(prev_event) || self.backend.event_exists(prev_event))
                })
                .map(|id| String::from(*id))
                .collect();

            if !missing_prev_events.is_empty() {
                items.push(StitchedItem::Gap(missing_prev_events));
            }

            items.push(StitchedItem::Event(event));
        }

        items
    }
}
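The gap-creation rule in `sort_events_and_create_gaps` is: before appending an event, emit a `Gap` holding any of its `prev_events` that are neither in the current batch nor already present in the order. A condensed standalone sketch of just that rule; `Item`, `items_for`, and `known` are illustrative names, not the crate's API:

```rust
use std::collections::HashSet;

#[derive(Debug)]
enum Item {
    Event(&'static str),
    Gap(HashSet<&'static str>),
}

// Emit a Gap before any event whose prev_events are not otherwise known,
// mirroring the "create gaps" step of the stitch algorithm.
fn items_for(
    events: &[(&'static str, Vec<&'static str>)],
    known: &HashSet<&'static str>,
) -> Vec<Item> {
    let batch: HashSet<_> = events.iter().map(|(id, _)| *id).collect();
    let mut out = Vec::new();
    for (id, prevs) in events {
        let missing: HashSet<_> = prevs
            .iter()
            .filter(|p| !batch.contains(*p) && !known.contains(*p))
            .copied()
            .collect();
        if !missing.is_empty() {
            out.push(Item::Gap(missing));
        }
        out.push(Item::Event(id));
    }
    out
}

fn main() {
    let known = HashSet::from(["A"]);
    // B's prev event A is already known, so no gap; D's prev event C is not.
    let items = items_for(&[("B", vec!["A"]), ("D", vec!["C"])], &known);
    assert_eq!(items.len(), 3);
    assert!(matches!(items[0], Item::Event("B")));
    assert!(matches!(items[1], Item::Gap(_)));
    assert!(matches!(items[2], Item::Event("D")));
}
```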
88  src/stitcher/examples/repl.rs  Normal file

@@ -0,0 +1,88 @@

use std::collections::HashSet;

use rustyline::{DefaultEditor, Result, error::ReadlineError};
use stitcher::{Batch, EventEdges, Stitcher, memory_backend::MemoryStitcherBackend};

const BANNER: &str = "
stitched ordering test repl
- append an event by typing its name: `A`
- to add prev events, type an arrow and then space-separated event names: `A --> B C D`
- to add multiple events at once, separate them with commas
- use `/reset` to clear the ordering
Ctrl-D to exit, Ctrl-C to clear input
"
.trim_ascii();

enum Command<'line> {
    AppendEvents(EventEdges<'line>),
    ResetOrder,
}

peg::parser! {
    // partially copied from the test case parser
    grammar command_parser() for str {
        /// Parse whitespace.
        rule _ -> () = quiet! { $([' '])* {} }

        /// Parse an event ID.
        rule event_id() -> &'input str
            = quiet! { id:$([char if char.is_ascii_alphanumeric() || ['_', '-'].contains(&char)]+) { id } }
            / expected!("an event ID containing only [a-zA-Z0-9_-]")

        /// Parse an event and its prev events.
        rule event() -> (&'input str, HashSet<&'input str>)
            = id:event_id() prev_events:(_ "-->" _ id:(event_id() ++ _) { id })? {
                (id, prev_events.into_iter().flatten().collect())
            }

        pub rule command() -> Command<'input> =
            "/reset" { Command::ResetOrder }
            / events:event() ++ (_ "," _) { Command::AppendEvents(events.into_iter().collect()) }
    }
}

fn main() -> Result<()> {
    let mut backend = MemoryStitcherBackend::default();
    let mut reader = DefaultEditor::new()?;

    println!("{BANNER}");

    loop {
        match reader.readline("> ") {
            | Ok(line) => match command_parser::command(&line) {
                | Ok(Command::AppendEvents(events)) => {
                    let batch = Batch::from_edges(&events);
                    let stitcher = Stitcher::new(&backend);
                    let updates = stitcher.stitch(&batch);

                    for update in &updates.gap_updates {
                        println!("update to gap {}:", update.key);
                        println!(" new gap contents: {:?}", update.gap);
                        println!(" inserted items: {:?}", update.inserted_items);
                    }

                    println!("events added to gaps: {:?}", &updates.events_added_to_gaps);
                    println!();
                    println!("items to sync: {:?}", &updates.new_items);
                    backend.extend(updates);
                    println!("order: {backend:?}");
                },
                | Ok(Command::ResetOrder) => {
                    backend.clear();
                    println!("order cleared.");
                },
                | Err(parse_error) => {
                    println!("parse error!! {parse_error}");
                },
            },
            | Err(ReadlineError::Interrupted) => {
                println!("interrupt");
            },
            | Err(ReadlineError::Eof) => {
                println!("goodbye :3");
                break Ok(());
            },
            | Err(err) => break Err(err),
        }
    }
}
130  src/stitcher/memory_backend.rs  Normal file

@@ -0,0 +1,130 @@

use std::{
    fmt::Debug,
    sync::atomic::{AtomicU64, Ordering},
};

use crate::{Gap, OrderUpdates, StitchedItem, StitcherBackend};

/// A version of [`StitchedItem`] which owns event IDs.
#[derive(Debug)]
enum MemoryStitcherItem {
    Event(String),
    Gap(Gap),
}

impl From<StitchedItem<'_>> for MemoryStitcherItem {
    fn from(value: StitchedItem) -> Self {
        match value {
            | StitchedItem::Event(id) => MemoryStitcherItem::Event(id.to_string()),
            | StitchedItem::Gap(gap) => MemoryStitcherItem::Gap(gap),
        }
    }
}

impl<'id> From<&'id MemoryStitcherItem> for StitchedItem<'id> {
    fn from(value: &'id MemoryStitcherItem) -> Self {
        match value {
            | MemoryStitcherItem::Event(id) => StitchedItem::Event(id),
            | MemoryStitcherItem::Gap(gap) => StitchedItem::Gap(gap.clone()),
        }
    }
}

/// A stitcher backend which holds a stitched ordering in RAM.
#[derive(Default)]
pub struct MemoryStitcherBackend {
    items: Vec<(u64, MemoryStitcherItem)>,
    counter: AtomicU64,
}

impl MemoryStitcherBackend {
    fn next_id(&self) -> u64 { self.counter.fetch_add(1, Ordering::Relaxed) }

    /// Extend this ordering with new updates.
    pub fn extend(&mut self, results: OrderUpdates<'_, <Self as StitcherBackend>::Key>) {
        for update in results.gap_updates {
            let Some(gap_index) = self.items.iter().position(|(key, _)| *key == update.key)
            else {
                panic!("bad update key {}", update.key);
            };

            let insertion_index = if update.gap.is_empty() {
                self.items.remove(gap_index);
                gap_index
            } else {
                match self.items.get_mut(gap_index) {
                    | Some((_, MemoryStitcherItem::Gap(gap))) => {
                        *gap = update.gap;
                    },
                    | Some((key, other)) => {
                        panic!("expected item with key {key} to be a gap, it was {other:?}");
                    },
                    | None => unreachable!("we just checked that this index is valid"),
                }
                gap_index.checked_add(1).expect(
                    "should never allocate usize::MAX ids. what kind of test are you running",
                )
            };

            let to_insert: Vec<_> = update
                .inserted_items
                .into_iter()
                .map(|item| (self.next_id(), item.into()))
                .collect();
            self.items
                .splice(insertion_index..insertion_index, to_insert.into_iter())
                .for_each(drop);
        }

        let new_items: Vec<_> = results
            .new_items
            .into_iter()
            .map(|item| (self.next_id(), item.into()))
            .collect();
        self.items.extend(new_items);
    }

    /// Iterate over the items in this ordering.
    pub fn iter(&self) -> impl Iterator<Item = StitchedItem<'_>> {
        self.items.iter().map(|(_, item)| item.into())
    }

    /// Clear this ordering.
    pub fn clear(&mut self) { self.items.clear(); }
}

impl StitcherBackend for MemoryStitcherBackend {
    type Key = u64;

    fn find_matching_gaps<'a>(
        &'a self,
        events: impl Iterator<Item = &'a str>,
    ) -> impl Iterator<Item = (Self::Key, Gap)> {
        // nobody cares about test suite performance right
        let mut gaps = vec![];

        for event in events {
            for (key, item) in &self.items {
                if let MemoryStitcherItem::Gap(gap) = item
                    && gap.contains(event)
                {
                    gaps.push((*key, gap.clone()));
                }
            }
        }

        gaps.into_iter()
    }

    fn event_exists<'a>(&'a self, event: &'a str) -> bool {
        self.items
            .iter()
            .any(|item| matches!(&item.1, MemoryStitcherItem::Event(id) if event == id))
    }
}

impl Debug for MemoryStitcherBackend {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_list().entries(self.iter()).finish()
    }
}
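`extend` applies a `GapUpdate` by either rewriting the gap's contents in place or, when the updated gap is empty, removing it and splicing the resolved items into its slot. The `Vec` mechanics in isolation, as a sketch over a toy order of `&str` rather than the backend's keyed pairs:

```rust
fn main() {
    // A stitched order with one unresolved gap between A and E.
    let mut order = vec!["A", "<gap>", "E"];
    let gap_index = order.iter().position(|it| *it == "<gap>").unwrap();

    // The gap resolved to nothing left missing: remove the placeholder and
    // splice the newly inserted items into the position it occupied.
    order.remove(gap_index);
    order.splice(gap_index..gap_index, ["B", "C", "D"]);

    assert_eq!(order, ["A", "B", "C", "D", "E"]);
}
```

An empty `x..x` range makes `splice` a pure insertion at index `x`, which is why the backend only has to compute one `insertion_index` for both the remove-gap and keep-gap cases.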
160  src/stitcher/mod.rs  Normal file

@@ -0,0 +1,160 @@

use std::{cmp::Ordering, collections::HashSet};

use indexmap::IndexMap;

pub mod algorithm;
pub mod memory_backend;
#[cfg(test)]
mod test;

pub use algorithm::*;

/// A gap in the stitched order.
pub type Gap = HashSet<String>;

/// An item in the stitched order.
#[derive(Debug)]
pub enum StitchedItem<'id> {
    /// A single event.
    Event(&'id str),
    /// A gap representing one or more missing events.
    Gap(Gap),
}

/// An opaque key returned by a [`StitcherBackend`] to identify an item in its
/// order.
pub trait OrderKey: Eq + Clone {}

impl<T: Eq + Clone> OrderKey for T {}

/// A trait providing read-only access to an existing stitched order.
pub trait StitcherBackend {
    type Key: OrderKey;

    /// Return all gaps containing one or more events listed in `events`.
    fn find_matching_gaps<'a>(
        &'a self,
        events: impl Iterator<Item = &'a str>,
    ) -> impl Iterator<Item = (Self::Key, Gap)>;

    /// Return whether an event exists in the stitched order.
    fn event_exists<'a>(&'a self, event: &'a str) -> bool;
}

/// An ordered map from an event ID to its `prev_events`.
pub type EventEdges<'id> = IndexMap<&'id str, HashSet<&'id str>>;

/// Information about the `prev_events` of an event.
/// This struct does not store the ID of the event itself.
#[derive(Debug)]
struct EventPredecessors<'id> {
    /// The `prev_events` of the event.
    pub prev_events: HashSet<&'id str>,
    /// The predecessor set of the event. This is derived from, and a superset
    /// of, [`EventPredecessors::prev_events`]. See
    /// [`Batch::find_predecessor_set`] for details. It is cached in this
    /// struct for performance.
    pub predecessor_set: HashSet<&'id str>,
}

/// A batch of events to be inserted into the stitched order.
#[derive(Debug)]
pub struct Batch<'id> {
    events: IndexMap<&'id str, EventPredecessors<'id>>,
}

impl<'id> Batch<'id> {
    /// Create a new [`Batch`] from an [`EventEdges`].
    pub fn from_edges<'edges>(edges: &EventEdges<'edges>) -> Batch<'edges> {
        let mut events = IndexMap::new();

        for (event, prev_events) in edges {
            let predecessor_set = Self::find_predecessor_set(event, edges);

            events.insert(*event, EventPredecessors {
                prev_events: prev_events.clone(),
                predecessor_set,
            });
        }

        Batch { events }
    }

    /// Build the predecessor set of `event` using `edges`. The predecessor set
    /// is a subgraph of the room's DAG which may be thought of as a tree
    /// rooted at `event` containing _only_ events which are included in
    /// `edges`. It is represented as a set and not a proper tree structure for
    /// efficiency.
    fn find_predecessor_set<'a>(event: &'a str, edges: &EventEdges<'a>) -> HashSet<&'a str> {
        // The predecessor set which we are building.
        let mut predecessor_set = HashSet::new();

        // The queue of events to check for membership in `edges`.
        let mut events_to_check = vec![event];
        // Events which we have already checked and do not need to revisit.
        let mut events_already_checked = HashSet::new();

        while let Some(event) = events_to_check.pop() {
            // Don't add this event to the queue again.
            events_already_checked.insert(event);

            // If this event is in `edges`, add it to the predecessor set.
            if let Some(children) = edges.get(event) {
                predecessor_set.insert(event);

                // Also add all its `prev_events` to the queue. It's fine if some of them don't
                // exist in `edges` because they'll just be discarded when they're popped
                // off the queue.
                events_to_check.extend(
                    children
                        .iter()
                        .filter(|event| !events_already_checked.contains(*event)),
                );
            }
        }

        predecessor_set
    }

    /// Iterate over all the events contained in this batch.
    fn events(&self) -> impl Iterator<Item = &'id str> { self.events.keys().copied() }

    /// Check whether an event exists in this batch.
    fn contains(&self, event: &'id str) -> bool { self.events.contains_key(event) }

    /// Return the predecessors of an event, if it exists in this batch.
    fn predecessors(&self, event: &str) -> Option<&EventPredecessors<'id>> {
        self.events.get(event)
    }

    /// Compare two events by DAG;received order.
    ///
    /// If either event is in the other's predecessor set it comes first,
    /// otherwise they are sorted by which comes first in the batch.
    fn compare_by_dag_received(&self) -> impl FnMut(&&'id str, &&'id str) -> Ordering {
        |a, b| {
            if self
                .predecessors(a)
                .is_some_and(|it| it.predecessor_set.contains(b))
            {
                Ordering::Greater
            } else if self
                .predecessors(b)
                .is_some_and(|it| it.predecessor_set.contains(a))
            {
                Ordering::Less
            } else {
                let a_index = self
                    .events
                    .get_index_of(a)
                    .expect("a should be in this batch");
                let b_index = self
                    .events
                    .get_index_of(b)
                    .expect("b should be in this batch");

                a_index.cmp(&b_index)
            }
        }
    }
}
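`find_predecessor_set` above is a depth-first walk over `prev_events` edges that keeps only events actually present in the batch. The same traversal can be sketched standalone over a plain `HashMap` of edges; `predecessor_set` here is an illustrative free function, not the crate's method:

```rust
use std::collections::{HashMap, HashSet};

/// Walk `prev_events` edges from `root`, collecting only events that are
/// themselves present in `edges` (i.e. in the batch); unknown prev events
/// are visited but discarded, mirroring Batch::find_predecessor_set.
fn predecessor_set<'a>(
    root: &'a str,
    edges: &HashMap<&'a str, Vec<&'a str>>,
) -> HashSet<&'a str> {
    let mut set = HashSet::new();
    let mut queue = vec![root];
    let mut seen = HashSet::new();

    while let Some(event) = queue.pop() {
        // `insert` returns false if the event was already visited.
        if !seen.insert(event) {
            continue;
        }
        if let Some(prevs) = edges.get(event) {
            set.insert(event);
            queue.extend(prevs.iter().copied().filter(|e| !seen.contains(*e)));
        }
    }

    set
}

fn main() {
    // C --> B --> A, with one prev event that is not in the batch at all.
    let edges = HashMap::from([
        ("A", vec![]),
        ("B", vec!["A"]),
        ("C", vec!["B", "missing"]),
    ]);

    let set = predecessor_set("C", &edges);
    assert!(set.contains("C") && set.contains("B") && set.contains("A"));
    assert!(!set.contains("missing"));
}
```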
102
src/stitcher/test/mod.rs
Normal file
102
src/stitcher/test/mod.rs
Normal file
@@ -0,0 +1,102 @@
use itertools::Itertools;

use super::{algorithm::*, *};
use crate::memory_backend::MemoryStitcherBackend;

mod parser;

fn run_testcase(testcase: parser::TestCase<'_>) {
	let mut backend = MemoryStitcherBackend::default();

	for (index, phase) in testcase.into_iter().enumerate() {
		let stitcher = Stitcher::new(&backend);
		let batch = Batch::from_edges(&phase.batch);
		let updates = stitcher.stitch(&batch);

		println!();
		println!("===== phase {index}");
		for update in &updates.gap_updates {
			println!("update to gap {}:", update.key);
			println!("  new gap contents: {:?}", update.gap);
			println!("  inserted items: {:?}", update.inserted_items);
		}

		println!("expected new items: {:?}", &phase.order.new_items);
		println!("  actual new items: {:?}", &updates.new_items);
		for (expected, actual) in phase
			.order
			.new_items
			.iter()
			.zip_eq(updates.new_items.iter())
		{
			assert_eq!(
				expected, actual,
				"bad new item, expected {expected:?} but got {actual:?}"
			);
		}

		if let Some(updated_gaps) = phase.updated_gaps {
			println!("expected events added to gaps: {updated_gaps:?}");
			println!("  actual events added to gaps: {:?}", updates.events_added_to_gaps);
			assert_eq!(
				updated_gaps, updates.events_added_to_gaps,
				"incorrect events added to gaps"
			);
		}

		backend.extend(updates);
		println!("extended ordering: {:?}", backend);

		for (expected, ref actual) in phase.order.iter().zip_eq(backend.iter()) {
			assert_eq!(
				expected, actual,
				"bad item in order, expected {expected:?} but got {actual:?}",
			);
		}
	}
}

macro_rules! testcase {
	($index:literal : $id:ident) => {
		#[test]
		fn $id() {
			let testcase = parser::parse(include_str!(concat!(
				"./testcases/",
				$index,
				"-",
				stringify!($id),
				".stitched"
			)));

			run_testcase(testcase);
		}
	};
}

testcase!("001": receiving_new_events);
testcase!("002": recovering_after_netsplit);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item_multiple);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item);
testcase!("zzz": chains_are_reordered_using_prev_events);
testcase!("zzz": empty_then_simple_chain);
testcase!("zzz": empty_then_two_chains_interleaved);
testcase!("zzz": empty_then_two_chains);
testcase!("zzz": filling_in_a_gap_with_a_batch_containing_gaps);
testcase!("zzz": gaps_appear_before_events_referring_to_them_received_order);
testcase!("zzz": gaps_appear_before_events_referring_to_them);
testcase!("zzz": if_prev_events_determine_order_they_override_received);
testcase!("zzz": insert_into_first_of_several_gaps);
testcase!("zzz": insert_into_last_of_several_gaps);
testcase!("zzz": insert_into_middle_of_several_gaps);
testcase!("zzz": linked_events_are_split_across_gaps);
testcase!("zzz": linked_events_in_a_diamond_are_split_across_gaps);
testcase!("zzz": middle_of_batch_matches_gap_and_end_of_batch_matches_end);
testcase!("zzz": middle_of_batch_matches_gap);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_first_has_more);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_with_more);
testcase!("zzz": multiple_missing_prev_events_turn_into_a_single_gap);
testcase!("zzz": partially_filling_a_gap_leaves_it_before_new_nodes);
testcase!("zzz": partially_filling_a_gap_with_two_events);
testcase!("zzz": received_order_wins_within_a_subgroup_if_no_prev_event_chain);
testcase!("zzz": subgroups_are_processed_in_first_received_order);
140	src/stitcher/test/parser.rs	Normal file
@@ -0,0 +1,140 @@
use std::collections::HashSet;

use indexmap::IndexMap;

use super::StitchedItem;

pub(super) type TestEventId<'id> = &'id str;

pub(super) type TestGap<'id> = HashSet<TestEventId<'id>>;

#[derive(Debug)]
pub(super) enum TestStitchedItem<'id> {
	Event(TestEventId<'id>),
	Gap(TestGap<'id>),
}

impl PartialEq<StitchedItem<'_>> for TestStitchedItem<'_> {
	fn eq(&self, other: &StitchedItem<'_>) -> bool {
		match (self, other) {
			| (TestStitchedItem::Event(lhs), StitchedItem::Event(rhs)) => lhs == rhs,
			| (TestStitchedItem::Gap(lhs), StitchedItem::Gap(rhs)) =>
				lhs.iter().all(|id| rhs.contains(*id)),
			| _ => false,
		}
	}
}

pub(super) type TestCase<'id> = Vec<Phase<'id>>;

pub(super) struct Phase<'id> {
	pub batch: Batch<'id>,
	pub order: Order<'id>,
	pub updated_gaps: Option<HashSet<TestEventId<'id>>>,
}

pub(super) type Batch<'id> = IndexMap<TestEventId<'id>, HashSet<TestEventId<'id>>>;

pub(super) struct Order<'id> {
	pub inserted_items: Vec<TestStitchedItem<'id>>,
	pub new_items: Vec<TestStitchedItem<'id>>,
}

impl<'id> Order<'id> {
	pub(super) fn iter(&self) -> impl Iterator<Item = &TestStitchedItem<'id>> {
		self.inserted_items.iter().chain(self.new_items.iter())
	}
}

peg::parser! {
	grammar testcase() for str {
		/// Parse whitespace.
		rule _ -> () = quiet! { $([' '])* {} }

		/// Parse empty lines and comments.
		rule newline() -> () = quiet! { (("#" [^'\n']*)? "\n")+ {} }

		/// Parse an "event ID" in a test case, which may only consist of ASCII letters and numbers.
		rule event_id() -> TestEventId<'input>
			= quiet! { id:$([char if char.is_ascii_alphanumeric()]+) { id } }
			/ expected!("event id")

		/// Parse a gap in the order section.
		rule gap() -> TestGap<'input>
			= "-" events:event_id() ++ "," { events.into_iter().collect() }

		/// Parse either an event id or a gap.
		rule stitched_item() -> TestStitchedItem<'input> =
			id:event_id() { TestStitchedItem::Event(id) }
			/ gap:gap() { TestStitchedItem::Gap(gap) }

		/// Parse an event line in the batch section, mapping an event name to zero or one prev events.
		/// The prev events are merged together by [`batch()`].
		rule batch_event() -> (TestEventId<'input>, Option<TestEventId<'input>>)
			= id:event_id() prev:(_ "-->" _ prev:event_id() { prev })? { (id, prev) }

		/// Parse the batch section of a phase.
		rule batch() -> Batch<'input>
			= events:batch_event() ++ newline() {
				/*
					Repeated event lines need to be merged together. For example,

						A --> B
						A --> C

					represents a _single_ event `A` with two prev events, `B` and `C`.
				*/
				events.into_iter()
					.fold(IndexMap::new(), |mut batch: Batch<'_>, (id, prev_event)| {
						// Find the prev events set of this event in the batch.
						// If it doesn't exist, make a new empty one.
						let prev_events = batch.entry(id).or_default();
						// If this event line defines a prev event to add, insert it into the set.
						if let Some(prev_event) = prev_event {
							prev_events.insert(prev_event);
						}

						batch
					})
			}

		rule order() -> Order<'input> =
			items:(item:stitched_item() new:"*"? { (item, new.is_some()) }) ** newline()
			{
				let (mut inserted_items, mut new_items) = (vec![], vec![]);

				for (item, new) in items {
					if new {
						new_items.push(item);
					} else {
						inserted_items.push(item);
					}
				}

				Order {
					inserted_items,
					new_items,
				}
			}

		rule updated_gaps() -> HashSet<TestEventId<'input>> =
			events:event_id() ++ newline() { events.into_iter().collect() }

		rule phase() -> Phase<'input> =
			"=== when we receive these events ==="
			newline() batch:batch()
			newline() "=== then we arrange into this order ==="
			newline() order:order()
			updated_gaps:(
				newline() "=== and we notify about these gaps ==="
				newline() updated_gaps:updated_gaps() { updated_gaps }
			)?
			{ Phase { batch, order, updated_gaps } }

		pub rule testcase() -> TestCase<'input> = phase() ++ newline()
	}
}

pub(super) fn parse<'input>(input: &'input str) -> TestCase<'input> {
	testcase::testcase(input.trim_ascii_end()).expect("parse error")
}
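The merge step inside the `batch()` rule can be illustrated on its own. The sketch below uses invented names (`merge_batch`, `BTreeMap` instead of the crate's `IndexMap`) to show the fold: repeated `A --> B` / `A --> C` lines accumulate into a single prev-event set for event `A`.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical standalone sketch of the batch-section merge rule:
// each parsed line is (event, optional prev event), and repeated
// lines for the same event merge into one prev-event set.
fn merge_batch<'a>(
    lines: &[(&'a str, Option<&'a str>)],
) -> BTreeMap<&'a str, BTreeSet<&'a str>> {
    lines.iter().fold(BTreeMap::new(), |mut batch, &(id, prev)| {
        // Find (or create) this event's prev-event set.
        let prevs = batch.entry(id).or_default();
        // If this line names a prev event, add it to the set.
        if let Some(p) = prev {
            prevs.insert(p);
        }
        batch
    })
}

fn main() {
    // "A --> B" and "A --> C" describe a single event A with two prev events;
    // "D" on its own describes an event with none.
    let batch = merge_batch(&[("A", Some("B")), ("A", Some("C")), ("D", None)]);
    assert_eq!(batch["A"], BTreeSet::from(["B", "C"]));
    assert!(batch["D"].is_empty());
}
```

The real parser does the same fold over an `IndexMap`, which additionally preserves the order in which events were first seen.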
@@ -0,0 +1,22 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A*
B*
C*

=== when we receive these events ===
# When it receives new ones:
D --> C
E --> D

=== then we arrange into this order ===
# Then it simply appends them at the end of the order:
A
B
C
D*
E*
@@ -0,0 +1,46 @@
=== when we receive these events ===
A1
A2 --> A1
A3 --> A2
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A1*
A2*
A3*

=== when we receive these events ===
# And after a netsplit the server receives some unrelated events, which refer to
# some unknown event, because the server didn't receive all of them:
B7 --> B6
B8 --> B7
B9 --> B8

=== then we arrange into this order ===
# Then these events are new, and we add a gap to show something is missing:
A1
A2
A3
-B6*
B7*
B8*
B9*
=== when we receive these events ===
# Then if we backfill and receive more of those events later:
B4 --> B3
B5 --> B4
B6 --> B5
=== then we arrange into this order ===
# They are slotted into the gap, and a new gap is created to represent the
# still-missing events:
A1
A2
A3
-B3
B4
B5
B6
B7
B8
B9
=== and we notify about these gaps ===
B6
@@ -0,0 +1,30 @@
=== when we receive these events ===
D --> C
=== then we arrange into this order ===
# We may see situations that are ambiguous about whether an event is new or
# belongs in a gap, because it is a predecessor of a gap event and also has a
# new event as its predecessor. This is a rare case where either outcome could
# be valid. If the initial order is this:
-C*
D*
=== when we receive these events ===
# And then we receive B
B --> A
=== then we arrange into this order ===
# Which is new because it's unrelated to everything else
-C
D
-A*
B*
=== when we receive these events ===
# And later it turns out that C refers back to B
C --> B
=== then we arrange into this order ===
# Then we place C into the early gap even though it is after B, so arguably
# should be the newest
C
D
-A
B
=== and we notify about these gaps ===
C
@@ -0,0 +1,28 @@
=== when we receive these events ===
# An ambiguous situation can occur when we have multiple gaps that might both
# accept an event. This should be relatively rare.
A --> G1
B --> A
C --> G2
=== then we arrange into this order ===
-G1*
A*
B*
-G2*
C*
=== when we receive these events ===
# When we receive F, which is a predecessor of both G1 and G2
F
G1 --> F
G2 --> F
=== then we arrange into this order ===
# Then F appears in the earlier gap, but arguably it should appear later.
F
G1
A
B
G2
C
=== and we notify about these gaps ===
G1
G2
@@ -0,0 +1,10 @@
=== when we receive these events ===
# Even though we see C first, it is re-ordered because we must obey prev_events
# so A comes first.
C --> A
A
B --> A
=== then we arrange into this order ===
A*
C*
B*
@@ -0,0 +1,8 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
A*
B*
C*
@@ -0,0 +1,18 @@
=== when we receive these events ===
# A chain ABC
A
B --> A
C --> B
# And a separate chain XYZ
X --> W
Y --> X
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
B*
C*
-W*
X*
Y*
Z*
@@ -0,0 +1,18 @@
=== when we receive these events ===
# Same as empty_then_two_chains except for received order
# A chain ABC, and a separate chain XYZ, but interleaved
A
X --> W
B --> A
Y --> X
C --> B
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
-W*
X*
B*
Y*
C*
Z*
@@ -0,0 +1,33 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*

=== when we receive these events ===
# When we fill one with something that also refers to non-existent events
C --> X
C --> Y
G --> C
G --> Z
=== then we arrange into this order ===
# Then we fill in the gap (C) and make new gaps too (X+Y and Z)
-A
B
-X,Y
C
D
-E
F
-Z*
G*
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
@@ -0,0 +1,13 @@
=== when we receive these events ===
# Several events refer to missing events and the events are unrelated
C --> Y
C --> Z
A --> X
B
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-Y,Z*
C*
-X*
A*
B*
@@ -0,0 +1,14 @@
=== when we receive these events ===
# Several events refer to missing events and the events are related
C --> Y
C --> Z
C --> B
A --> X
B --> A
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-X*
A*
B*
-Y,Z*
C*
@@ -0,0 +1,15 @@
=== when we receive these events ===
# The relationships determine the order here, so they override received order
F --> E
C --> B
D --> C
E --> D
B --> A
A
=== then we arrange into this order ===
A*
B*
C*
D*
E*
F*
@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*

=== when we receive these events ===
# When the first of them is filled in
A
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
A
B
-C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
A
@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*

=== when we receive these events ===
# When the last gap is filled in
E
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
-C
D
E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
E
@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*

=== when we receive these events ===
# When a middle one is filled in
C
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C
@@ -0,0 +1,29 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2
D --> X4
=== then we arrange into this order ===
-X2*
B*
-X4*
D*

=== when we receive these events ===
# And linked events that fill those in and are newer
X1
X2 --> X1
X3 --> X2
X4 --> X3
X5 --> X4
=== then we arrange into this order ===
# Then the gaps are filled and new events appear at the front
X1
X2
B
X3
X4
D
X5*
=== and we notify about these gaps ===
X2
X4
@@ -0,0 +1,31 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2a
D --> X3
=== then we arrange into this order ===
-X2a*
B*
-X3*
D*

=== when we receive these events ===
# When we receive a diamond that touches gaps and some new events
X1
X2a --> X1
X2b --> X1
X3 --> X2a
X3 --> X2b
X4 --> X3
=== then we arrange into this order ===
# Then matching events and direct predecessors fit into the gaps
# and other stuff is new
X1
X2a
B
X2b
X3
D
X4*
=== and we notify about these gaps ===
X2a
X3
@@ -0,0 +1,25 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*

=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap
C1
C2 --> C1
C3 --> C2
=== then we arrange into this order ===
# Then we slot the matching events into the gap
# and the later events are new
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2
@@ -0,0 +1,26 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*

=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap, and the last
# event linked to a recent event
C1
C2 --> C1
C3 --> C2
C3 --> B2
=== then we arrange into this order ===
# Then we slot the entire batch into the gap
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2
@@ -0,0 +1,26 @@
=== when we receive these events ===
# If multiple events all refer to the same missing event:
A --> X
B --> X
C --> X
=== then we arrange into this order ===
# Then we insert gaps before all of them. This avoids the need to search the
# entire existing order whenever we create a new gap.
-X*
A*
-X*
B*
-X*
C*
=== when we receive these events ===
# The ambiguity is resolved when the missing event arrives:
X
=== then we arrange into this order ===
# We choose the earliest gap, and all the relevant gaps are removed (which does
# mean we need to search the existing order).
X
A
B
C
=== and we notify about these gaps ===
X
@@ -0,0 +1,29 @@
=== when we receive these events ===
# Several events refer to the same missing event, but the first refers to
# others too
A --> X
A --> Y
A --> Z
B --> X
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X,Y,Z*
A*
-X*
B*
-X*
C*

=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
-Y,Z
X
A
B
C
=== and we notify about these gaps ===
X
@@ -0,0 +1,28 @@
=== when we receive these events ===
# Several events refer to the same missing event, but one refers to others too
A --> X
B --> X
B --> Y
B --> Z
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X*
A*
-X,Y,Z*
B*
-X*
C*

=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
X
A
-Y,Z
B
C
=== and we notify about these gaps ===
X
@@ -0,0 +1,9 @@
=== when we receive these events ===
# A refers to multiple missing things
A --> X
A --> Y
A --> Z
=== then we arrange into this order ===
# But we only make one gap, with multiple IDs in it
-X,Y,Z*
A*
@@ -0,0 +1,23 @@
=== when we receive these events ===
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
# Given a gap that lists several nodes:
A*
-B,C,D,E*
F*
=== when we receive these events ===
# When we provide one of the missing events:
C
=== then we arrange into this order ===
# Then it is inserted after the gap, and the gap is shrunk:
A
-B,D,E
C
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given an event references multiple missing events
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
A*
-B,C,D,E*
F*

=== when we receive these events ===
# When we provide some of the missing events
C
E
=== then we arrange into this order ===
# Then we insert them after the gap and shrink the list of events in the gap
A
-B,D
C
E
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
E
@@ -0,0 +1,16 @@
=== when we receive these events ===
# Everything is after A, but there is no prev_event chain between the others, so
# we use received order.
A
F --> A
C --> A
D --> A
E --> A
B --> A
=== then we arrange into this order ===
A*
F*
C*
D*
E*
B*
@@ -0,0 +1,16 @@
=== when we receive these events ===
# We preserve the received order where it does not conflict with the prev_events
A
X --> W
Y --> X
Z --> Y
B --> A
C --> B
=== then we arrange into this order ===
A*
-W*
X*
Y*
Z*
B*
C*
1	theme/css.d.ts	vendored
@@ -1 +0,0 @@
declare module "*.css"
@@ -105,68 +105,3 @@ body:not(.notTopArrived) header.rp-nav {
|
||||
.rspress-logo {
|
||||
height: 32px;
|
||||
}
|
||||
|
||||
/* pre-hero */
|
||||
.custom-section {
|
||||
padding: 4rem 1.5rem;
|
||||
background: var(--rp-c-bg);
|
||||
}
|
||||
|
||||
.custom-cards {
|
||||
display: flex;
|
||||
gap: 2rem;
|
||||
max-width: 800px;
|
||||
margin: 0 auto;
|
||||
justify-content: center;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.custom-card {
|
||||
padding: 2rem;
|
||||
border: 1px solid var(--rp-c-divider-light);
|
||||
border-radius: 12px;
|
||||
background: var(--rp-c-bg-soft);
|
||||
text-decoration: none;
|
||||
color: var(--rp-c-text-1);
|
||||
transition: all 0.3s ease;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
flex: 1;
|
||||
min-width: 280px;
|
||||
max-width: 350px;
|
||||
}
|
||||
|
||||
.custom-card:hover {
|
||||
border-color: var(--rp-c-brand);
|
||||
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
|
||||
transform: translateY(-2px);
|
||||
}
|
||||
|
||||
.custom-card h3 {
|
||||
margin: 0 0 1rem 0;
|
||||
font-size: 1.25rem;
|
||||
font-weight: 600;
|
||||
color: var(--rp-c-text-0);
|
||||
}
|
||||
|
||||
.custom-card p {
|
||||
margin: 0 0 1.5rem 0;
|
||||
color: var(--rp-c-text-2);
|
||||
line-height: 1.6;
|
||||
flex: 1;
|
||||
}
|
||||
|
||||
.custom-card-button {
|
||||
display: inline-block;
|
||||
padding: 0.5rem 1.5rem;
|
||||
background: var(--rp-c-brand);
|
||||
color: white;
|
||||
border-radius: 6px;
|
||||
font-weight: 500;
|
||||
text-align: center;
|
||||
transition: background 0.2s ease;
|
||||
}
|
||||
|
||||
.custom-card:hover .custom-card-button {
|
||||
background: var(--rp-c-brand-light);
|
||||
}
|
||||
|
||||
Some files were not shown because too many files have changed in this diff.