Compare commits

..

30 Commits

Author SHA1 Message Date
Ginger
a91ef01a7f feat(stitched): Minor improvements to the repl 2026-01-22 15:43:59 -05:00
Ginger
4a6295e374 feat(stitched): Implement a simple REPL for testing stitched ordering 2026-01-22 14:55:08 -05:00
Ginger
4da955438e refactor(stitched): Move stitcher into its own crate 2026-01-22 13:44:25 -05:00
Ginger
9210ee2b42 chore(stitched): Add doc comments 2026-01-22 13:08:17 -05:00
Ginger
b60af7c282 feat(stitched): Test for updated gaps 2026-01-22 12:58:09 -05:00
Ginger
d2d580c76c fix(stitched): Remove redundant event tracking code 2026-01-22 12:33:48 -05:00
Ginger
c507ff6687 chore: Clippy fixes 2026-01-22 12:28:22 -05:00
Ginger
70c62158d0 feat(stitched): Fix bugs in stitcher and correct two testcases 2026-01-22 12:13:13 -05:00
Ginger
b242bad6d2 feat(stitched): Implement test harness using tests from codeberg 2026-01-21 12:54:36 -05:00
Ginger
a1ad9f0144 feat(stitched): Initial algorithm implementation 2026-01-21 12:54:36 -05:00
Jade Ellis
c85e710760 fix: Add option to mark certain config sections as optional
Fixes #1290
2026-01-20 17:36:22 +00:00
Renovate Bot
59346fc766 chore(deps): update pre-commit hook crate-ci/committed to v1.1.10 2026-01-20 16:25:19 +00:00
Renovate Bot
9c5e735888 chore(deps): update dependency cargo-bins/cargo-binstall to v1.16.7 2026-01-20 16:24:46 +00:00
Ginger
fe74e82318 chore: Formatting 2026-01-20 10:00:26 -05:00
K900
cb79a3b9d7 refactor(treewide): get rid of compile time build environment introspection
It's cursed and not very useful. Still a few uses of ctor left, but oh well.
2026-01-19 19:44:28 +00:00
timedout
ebc8df1c4d feat: Add endpoints required for API-based takedowns and room bans 2026-01-18 18:47:15 +00:00
nex
b667a963cf chore: Fixup typos 2026-01-18 15:22:14 +00:00
timedout
5a6b909b37 fix: Remove homebrewed error mangling for correctness 2026-01-18 15:22:14 +00:00
timedout
dba9cf0ad2 chore: Add news fragment 2026-01-18 15:22:14 +00:00
timedout
287ddd9bc5 fix: Only fall back to legacy media when response is M_UNRECOGNIZED
https://spec.matrix.org/v1.17/server-server-api/#content-repository
Previously we would fall back for ALL auth media errors.
2026-01-18 15:22:14 +00:00
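The narrowed fallback this commit describes can be sketched as follows. This is an illustrative standalone snippet under stated assumptions: the enum, the variant names, and `should_fall_back` are hypothetical, not Continuwuity's actual types or API.

```rust
/// Hypothetical stand-in for a Matrix error code returned by a remote server.
#[derive(Debug, PartialEq)]
enum MatrixErrorCode {
    /// M_UNRECOGNIZED: the remote does not know the endpoint at all,
    /// i.e. it likely predates authenticated media.
    Unrecognized,
    /// M_FORBIDDEN: the request was understood but refused.
    Forbidden,
    /// M_TOO_LARGE: the media exceeds a size limit.
    TooLarge,
}

/// Per the spec link in the commit message, only fall back to the legacy
/// (unauthenticated) media endpoints when the remote reports M_UNRECOGNIZED.
/// Any other error is a real failure and must be surfaced, not papered over
/// by retrying the legacy endpoint.
fn should_fall_back(code: &MatrixErrorCode) -> bool {
    matches!(code, MatrixErrorCode::Unrecognized)
}
```

The point of the fix is exactly this predicate: before, every auth-media error triggered the legacy retry; after, only the one code that means "endpoint unknown" does.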
Jason Volk
79a278b9e8 Fix verification loss; workaround Nheko-Reborn/nheko#1908 (closes #146)
Signed-off-by: Jason Volk <jason@zemos.net>
2026-01-18 14:41:01 +00:00
Ginger
6c5d658ef2 fix: Fix explosions with new tracing 2026-01-15 09:28:26 -05:00
Renovate Bot
70c43abca8 chore(deps): update rust-patch-updates 2026-01-15 09:28:26 -05:00
Renovate Bot
6a9b47c52e chore(deps): update rust-patch-updates 2026-01-15 05:03:40 +00:00
Ginger
c042de96f8 chore(deps): Update rspress to 2.0.0-rc.5 2026-01-14 09:35:20 -05:00
Jade Ellis
7a6acd1c82 chore: Changelog 2026-01-13 20:29:30 +00:00
Jade Ellis
d260c4fcc2 style: Fix yo unused variables 2026-01-13 20:29:30 +00:00
Jade Ellis
fa15de9764 feat: Admin announce improvements
- Check announcements on first start
- Print out any fetch errors on first start in the admin room
- Randomly jitter the next check
2026-01-13 20:29:30 +00:00
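The "randomly jitter the next check" bullet can be sketched like this; `next_check_delay`, the bound values, and taking the random draw as a caller-supplied parameter are all assumptions for illustration, not the project's actual code.

```rust
use std::time::Duration;

/// Compute the delay until the next announcement check, spreading a random
/// jitter on top of the base interval so a fleet of servers does not poll
/// the announcements feed in lockstep.
///
/// `unit` is a value in [0.0, 1.0), e.g. drawn from an RNG by the caller.
fn next_check_delay(base: Duration, max_jitter: Duration, unit: f64) -> Duration {
    // Add between zero and `max_jitter` on top of the base interval.
    base + Duration::from_secs_f64(max_jitter.as_secs_f64() * unit)
}
```

Jittering periodic work like this is a standard way to avoid a thundering herd against a shared upstream.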
Jade Ellis
e6c7a4ae60 docs: Changelog 2026-01-13 00:05:20 +00:00
Jade Ellis
5bed4ad81d chore: Admin announcement 2026-01-13 00:01:28 +00:00
88 changed files with 2307 additions and 843 deletions

View File

@@ -31,7 +31,7 @@ repos:
stages: [commit-msg]
- repo: https://github.com/crate-ci/committed
-rev: v1.1.9
+rev: v1.1.10
hooks:
- id: committed

View File

@@ -16,7 +16,7 @@ ## Docs
## Misc
-- Improve timeout-related code for federation and URL previews. Contributed by @Jade
+- Improve timeout-related code for federation and URL previews. Contributed by @Jade (#1278)
# Continuwuity 0.5.2 (2026-01-09)

Cargo.lock (generated, 1020 lines changed)

File diff suppressed because it is too large

View File

@@ -158,7 +158,7 @@ features = ["raw_value"]
# Used for appservice registration files
[workspace.dependencies.serde-saphyr]
-version = "0.0.10"
+version = "0.0.14"
# Used to load forbidden room/user regex from config
[workspace.dependencies.serde_regex]
@@ -342,7 +342,7 @@ version = "0.1.2"
# Used for matrix spec type definitions and helpers
[workspace.dependencies.ruma]
git = "https://forgejo.ellis.link/continuwuation/ruwuma"
-rev = "f9e74cb206cfa45cf5f17d39282253b43a15fcd5"
+rev = "85d00fb5746cba23904234b4fd3c838dcf141541"
features = [
"compat",
"rand",
@@ -548,6 +548,10 @@ features = ["sync", "tls-rustls", "rustls-provider"]
[workspace.dependencies.resolv-conf]
version = "0.7.5"
# Used by stitched ordering
[workspace.dependencies.indexmap]
version = "2.13.0"
#
# Patches
#

View File

@@ -0,0 +1 @@
The announcement checker will now announce errors it encounters in the first run to the admin room, plus a few other misc improvements. Contributed by @Jade

View File

@@ -0,0 +1 @@
Fix the generated configuration containing uncommented optional sections. Contributed by @Jade

changelog.d/1298.bugfix (new file, 1 line)
View File

@@ -0,0 +1 @@
Fixed specification non-compliance when handling remote media errors. Contributed by @nex.

View File

@@ -1926,9 +1926,9 @@
#
#admin_filter = ""
-[global.antispam]
+#[global.antispam]
-[global.antispam.meowlnir]
+#[global.antispam.meowlnir]
# The base URL on which to contact Meowlnir (before /_meowlnir/antispam).
#
@@ -1953,7 +1953,7 @@
#
#check_all_joins = false
-[global.antispam.draupnir]
+#[global.antispam.draupnir]
# The base URL on which to contact Draupnir (before /api/).
#

View File

@@ -48,7 +48,7 @@ EOF
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
-ENV BINSTALL_VERSION=1.16.6
+ENV BINSTALL_VERSION=1.16.7
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree

View File

@@ -18,7 +18,7 @@ RUN --mount=type=cache,target=/etc/apk/cache apk add \
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
-ENV BINSTALL_VERSION=1.16.6
+ENV BINSTALL_VERSION=1.16.7
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree

View File

@@ -6,10 +6,10 @@
"message": "Welcome to Continuwuity! Important announcements about the project will appear here."
},
{
-"id": 7,
-"mention_room": true,
-"date": "2025-12-30",
-"message": "Continuwuity v0.5.1 has been released. **The release contains a fix for the critical vulnerability [GHSA-m5p2-vccg-8c9v](https://github.com/continuwuity/continuwuity/security/advisories/GHSA-m5p2-vccg-8c9v) (embargoed) affecting all Conduit-derived servers. Update as soon as possible.**\n\nThis has been *actively exploited* to attempt account takeover and forge events bricking the Continuwuity rooms. The new space is accessible at [Continuwuity (room list)](https://matrix.to/#/!8cR4g-i9ucof69E4JHNg9LbPVkGprHb3SzcrGBDDJgk?via=continuwuity.org&via=starstruck.systems&via=gingershaped.computer)\n"
+"id": 8,
+"mention_room": false,
+"date": "2026-01-12",
+"message": "Hey everyone!\n\nJust letting you know we've released [v0.5.3](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.3) - this one is a bit of a hotfix for an issue with inviting and allowing others to join rooms.\n\nIf you appreciate the round-the-clock work we've been doing to keep your servers secure over this holiday period, we'd really appreciate your support - you can sponsor individuals on our team using the 'sponsor' button at the top of [our GitHub repository](https://github.com/continuwuity/continuwuity). If you can't do that, even a star helps - spreading the word and advocating for our project helps keep it going.\n\nHave a lovely rest of your year \\\n[Jade \\(she/her\\)](https://matrix.to/#/%40jade%3Aellis.link) \n🩵"
}
]
}

View File

@@ -118,10 +118,6 @@ ## `!admin debug time`
Print the current time
## `!admin debug list-dependencies`
List dependencies
## `!admin debug database-stats`
Get database statistics

View File

@@ -16,10 +16,6 @@ ## `!admin server reload-config`
Reload configuration values
## `!admin server list-features`
List the features built into the server
## `!admin server memory-usage`
Print database memory usage statistics

package-lock.json (generated, 30 lines changed)
View File

@@ -713,9 +713,9 @@
}
},
"node_modules/@remix-run/router": {
-"version": "1.23.1",
-"resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.1.tgz",
-"integrity": "sha512-vDbaOzF7yT2Qs4vO6XV1MHcJv+3dgR1sT+l3B8xxOVhUC336prMvqrvsLL/9Dnw2xr6Qhz4J0dmS0llNAbnUmQ==",
+"version": "1.23.2",
+"resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.2.tgz",
+"integrity": "sha512-Ic6m2U/rMjTkhERIa/0ZtXJP17QUi2CbWE7cqx4J58M8aA3QTfW+2UlQ4psvTX9IO1RfNVhK3pcpdjej7L+t2w==",
"dev": true,
"license": "MIT",
"engines": {
@@ -3244,9 +3244,9 @@
}
},
"node_modules/mdast-util-to-hast": {
-"version": "13.2.0",
-"resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-13.2.0.tgz",
-"integrity": "sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFMeko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA==",
+"version": "13.2.1",
+"resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-13.2.1.tgz",
+"integrity": "sha512-cctsq2wp5vTsLIcaymblUriiTcZd0CwWtCbLvrOzYCDZoWyMNV8sZ7krj09FSnsiJi3WVsHLM4k6Dq/yaPyCXA==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -4285,13 +4285,13 @@
}
},
"node_modules/react-router": {
-"version": "6.30.2",
-"resolved": "https://registry.npmjs.org/react-router/-/react-router-6.30.2.tgz",
-"integrity": "sha512-H2Bm38Zu1bm8KUE5NVWRMzuIyAV8p/JrOaBJAwVmp37AXG72+CZJlEBw6pdn9i5TBgLMhNDgijS4ZlblpHyWTA==",
+"version": "6.30.3",
+"resolved": "https://registry.npmjs.org/react-router/-/react-router-6.30.3.tgz",
+"integrity": "sha512-XRnlbKMTmktBkjCLE8/XcZFlnHvr2Ltdr1eJX4idL55/9BbORzyZEaIkBFDhFGCEWBBItsVrDxwx3gnisMitdw==",
"dev": true,
"license": "MIT",
"dependencies": {
-"@remix-run/router": "1.23.1"
+"@remix-run/router": "1.23.2"
},
"engines": {
"node": ">=14.0.0"
@@ -4301,14 +4301,14 @@
}
},
"node_modules/react-router-dom": {
-"version": "6.30.2",
-"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.30.2.tgz",
-"integrity": "sha512-l2OwHn3UUnEVUqc6/1VMmR1cvZryZ3j3NzapC2eUXO1dB0sYp5mvwdjiXhpUbRb21eFow3qSxpP8Yv6oAU824Q==",
+"version": "6.30.3",
+"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.30.3.tgz",
+"integrity": "sha512-pxPcv1AczD4vso7G4Z3TKcvlxK7g7TNt3/FNGMhfqyntocvYKj+GCatfigGDjbLozC4baguJ0ReCigoDJXb0ag==",
"dev": true,
"license": "MIT",
"dependencies": {
-"@remix-run/router": "1.23.1",
-"react-router": "6.30.2"
+"@remix-run/router": "1.23.2",
+"react-router": "6.30.3"
},
"engines": {
"node": ">=14.0.0"

View File

@@ -87,7 +87,6 @@ serde-saphyr.workspace = true
tokio.workspace = true
tracing-subscriber.workspace = true
tracing.workspace = true
-ctor.workspace = true
[lints]
workspace = true

View File

@@ -819,32 +819,6 @@ pub(super) async fn time(&self) -> Result {
self.write_str(&now).await
}
#[admin_command]
pub(super) async fn list_dependencies(&self, names: bool) -> Result {
if names {
let out = info::cargo::dependencies_names().join(" ");
return self.write_str(&out).await;
}
let mut out = String::new();
let deps = info::cargo::dependencies();
writeln!(out, "| name | version | features |")?;
writeln!(out, "| ---- | ------- | -------- |")?;
for (name, dep) in deps {
let version = dep.try_req().unwrap_or("*");
let feats = dep.req_features();
let feats = if !feats.is_empty() {
feats.join(" ")
} else {
String::new()
};
writeln!(out, "| {name} | {version} | {feats} |")?;
}
self.write_str(&out).await
}
#[admin_command]
pub(super) async fn database_stats(
&self,

View File

@@ -206,12 +206,6 @@ pub enum DebugCommand {
/// Print the current time
Time,
/// List dependencies
ListDependencies {
#[arg(short, long)]
names: bool,
},
/// Get database statistics
DatabaseStats {
property: Option<String>,

View File

@@ -30,11 +30,8 @@
pub(crate) const PAGE_SIZE: usize = 100;
use ctor::{ctor, dtor};
conduwuit::mod_ctor! {}
conduwuit::mod_dtor! {}
conduwuit::rustc_flags_capture! {}
pub use crate::admin::AdminCommand;

View File

@@ -1,7 +1,7 @@
-use std::{fmt::Write, path::PathBuf, sync::Arc};
+use std::{path::PathBuf, sync::Arc};
use conduwuit::{
-Err, Result, info,
+Err, Result,
utils::{stream::IterStream, time},
warn,
};
@@ -59,34 +59,6 @@ pub(super) async fn reload_config(&self, path: Option<PathBuf>) -> Result {
.await
}
#[admin_command]
pub(super) async fn list_features(&self, available: bool, enabled: bool, comma: bool) -> Result {
let delim = if comma { "," } else { " " };
if enabled && !available {
let features = info::rustc::features().join(delim);
let out = format!("`\n{features}\n`");
return self.write_str(&out).await;
}
if available && !enabled {
let features = info::cargo::features().join(delim);
let out = format!("`\n{features}\n`");
return self.write_str(&out).await;
}
let mut features = String::new();
let enabled = info::rustc::features();
let available = info::cargo::features();
for feature in available {
let active = enabled.contains(&feature.as_str());
let emoji = if active { "" } else { "" };
let remark = if active { "[enabled]" } else { "" };
writeln!(features, "{emoji} {feature} {remark}")?;
}
self.write_str(&features).await
}
#[admin_command]
pub(super) async fn memory_usage(&self) -> Result {
let services_usage = self.services.memory_usage().await?;

View File

@@ -21,18 +21,6 @@ pub enum ServerCommand {
path: Option<PathBuf>,
},
/// List the features built into the server
ListFeatures {
#[arg(short, long)]
available: bool,
#[arg(short, long)]
enabled: bool,
#[arg(short, long)]
comma: bool,
},
/// Print database memory usage statistics
MemoryUsage,

View File

@@ -91,7 +91,6 @@ serde.workspace = true
sha1.workspace = true
tokio.workspace = true
tracing.workspace = true
-ctor.workspace = true
[lints]
workspace = true

src/api/admin/mod.rs (new file, 1 line)
View File

@@ -0,0 +1 @@
pub mod rooms;

src/api/admin/rooms/ban.rs (new file, 132 lines)
View File

@@ -0,0 +1,132 @@
use axum::extract::State;
use conduwuit::{Err, Result, info, utils::ReadyExt, warn};
use futures::{FutureExt, StreamExt};
use ruma::{
OwnedRoomAliasId, continuwuity_admin_api::rooms,
events::room::message::RoomMessageEventContent,
};
use crate::{Ruma, client::leave_room};
/// # `PUT /_continuwuity/admin/rooms/{roomID}/ban`
///
/// Bans or unbans a room.
pub(crate) async fn ban_room(
State(services): State<crate::State>,
body: Ruma<rooms::ban::v1::Request>,
) -> Result<rooms::ban::v1::Response> {
let sender_user = body.sender_user();
if !services.users.is_admin(sender_user).await {
return Err!(Request(Forbidden("Only server administrators can use this endpoint")));
}
if body.banned {
// Don't ban again if already banned
if services.rooms.metadata.is_banned(&body.room_id).await {
return Err!(Request(InvalidParam("Room is already banned")));
}
info!(%sender_user, "Banning room {}", body.room_id);
services
.admin
.notice(&format!("{sender_user} banned {} (ban in progress)", body.room_id))
.await;
let mut users = services
.rooms
.state_cache
.room_members(&body.room_id)
.map(ToOwned::to_owned)
.ready_filter(|user| services.globals.user_is_local(user))
.boxed();
let mut evicted = Vec::new();
let mut failed_evicted = Vec::new();
while let Some(ref user_id) = users.next().await {
info!("Evicting user {} from room {}", user_id, body.room_id);
match leave_room(&services, user_id, &body.room_id, None)
.boxed()
.await
{
| Ok(()) => {
services.rooms.state_cache.forget(&body.room_id, user_id);
evicted.push(user_id.clone());
},
| Err(e) => {
warn!("Failed to evict user {} from room {}: {}", user_id, body.room_id, e);
failed_evicted.push(user_id.clone());
},
}
}
let aliases: Vec<OwnedRoomAliasId> = services
.rooms
.alias
.local_aliases_for_room(&body.room_id)
.map(ToOwned::to_owned)
.collect::<Vec<_>>()
.await;
for alias in &aliases {
info!("Removing alias {} for banned room {}", alias, body.room_id);
services
.rooms
.alias
.remove_alias(alias, &services.globals.server_user)
.await?;
}
services.rooms.directory.set_not_public(&body.room_id); // remove from the room directory
services.rooms.metadata.ban_room(&body.room_id, true); // prevent further joins
services.rooms.metadata.disable_room(&body.room_id, true); // disable federation
services
.admin
.notice(&format!(
"Finished banning {}: Removed {} users ({} failed) and {} aliases",
body.room_id,
evicted.len(),
failed_evicted.len(),
aliases.len()
))
.await;
if !evicted.is_empty() || !failed_evicted.is_empty() || !aliases.is_empty() {
let msg = services
.admin
.text_or_file(RoomMessageEventContent::text_markdown(format!(
"Removed users:\n{}\n\nFailed to remove users:\n{}\n\nRemoved aliases: {}",
evicted
.iter()
.map(|u| u.as_str())
.collect::<Vec<_>>()
.join("\n"),
failed_evicted
.iter()
.map(|u| u.as_str())
.collect::<Vec<_>>()
.join("\n"),
aliases
.iter()
.map(|a| a.as_str())
.collect::<Vec<_>>()
.join(", "),
)))
.await;
services.admin.send_message(msg).await.ok();
}
Ok(rooms::ban::v1::Response::new(evicted, failed_evicted, aliases))
} else {
// Don't unban if not banned
if !services.rooms.metadata.is_banned(&body.room_id).await {
return Err!(Request(InvalidParam("Room is not banned")));
}
info!(%sender_user, "Unbanning room {}", body.room_id);
services.rooms.metadata.disable_room(&body.room_id, false);
services.rooms.metadata.ban_room(&body.room_id, false);
services
.admin
.notice(&format!("{sender_user} unbanned {}", body.room_id))
.await;
Ok(rooms::ban::v1::Response::new(Vec::new(), Vec::new(), Vec::new()))
}
}

View File

@@ -0,0 +1,35 @@
use axum::extract::State;
use conduwuit::{Err, Result};
use futures::StreamExt;
use ruma::{OwnedRoomId, continuwuity_admin_api::rooms};
use crate::Ruma;
/// # `GET /_continuwuity/admin/rooms/list`
///
/// Lists all rooms known to this server, excluding banned ones.
pub(crate) async fn list_rooms(
State(services): State<crate::State>,
body: Ruma<rooms::list::v1::Request>,
) -> Result<rooms::list::v1::Response> {
let sender_user = body.sender_user();
if !services.users.is_admin(sender_user).await {
return Err!(Request(Forbidden("Only server administrators can use this endpoint")));
}
let mut rooms: Vec<OwnedRoomId> = services
.rooms
.metadata
.iter_ids()
.filter_map(|room_id| async move {
if !services.rooms.metadata.is_banned(room_id).await {
Some(room_id.to_owned())
} else {
None
}
})
.collect()
.await;
rooms.sort();
Ok(rooms::list::v1::Response::new(rooms))
}

View File

@@ -0,0 +1,2 @@
pub mod ban;
pub mod list;

View File

@@ -91,8 +91,11 @@ pub(crate) async fn upload_keys_route(
.users
.get_device_keys(sender_user, sender_device)
.await
+.and_then(|keys| keys.deserialize().map_err(Into::into))
{
-if existing_keys.json().get() == device_keys.json().get() {
+// NOTE: also serves as a workaround for a nheko bug which omits cross-signing
+// NOTE: signatures when re-uploading the same DeviceKeys.
+if existing_keys.keys == deser_device_keys.keys {
debug!(
%sender_user,
%sender_device,

View File

@@ -371,11 +371,3 @@ pub(crate) async fn is_ignored_invite(
.invite_filter_level(&sender_user, recipient_user)
.await == FilterLevel::Ignore
}
#[cfg_attr(debug_assertions, ctor::ctor)]
fn _is_sorted() {
debug_assert!(
IGNORED_MESSAGE_TYPES.is_sorted(),
"IGNORED_MESSAGE_TYPES must be sorted by the developer"
);
}

View File

@@ -1,12 +1,13 @@
#![type_length_limit = "16384"] //TODO: reduce me
#![allow(clippy::toplevel_ref_arg)]
-extern crate conduwuit_core as conduwuit;
-extern crate conduwuit_service as service;
pub mod client;
pub mod router;
pub mod server;
+extern crate conduwuit_core as conduwuit;
+extern crate conduwuit_service as service;
+pub mod admin;
pub(crate) use self::router::{Ruma, RumaResponse, State};

View File

@@ -17,7 +17,7 @@
use self::handler::RouterExt;
pub(super) use self::{args::Args as Ruma, response::RumaResponse};
-use crate::{client, server};
+use crate::{admin, client, server};
pub fn build(router: Router<State>, server: &Server) -> Router<State> {
let config = &server.config;
@@ -187,7 +187,9 @@ pub fn build(router: Router<State>, server: &Server) -> Router<State> {
.route("/_conduwuit/server_version", get(client::conduwuit_server_version))
.route("/_continuwuity/server_version", get(client::conduwuit_server_version))
.ruma_route(&client::room_initial_sync_route)
-.route("/client/server.json", get(client::syncv3_client_server_json));
+.route("/client/server.json", get(client::syncv3_client_server_json))
+.ruma_route(&admin::rooms::ban::ban_room)
+.ruma_route(&admin::rooms::list::list_rooms);
if config.allow_federation {
router = router

View File

@@ -13,8 +13,7 @@
use conduwuit_service::Services;
use futures::{FutureExt, StreamExt, TryStreamExt};
use ruma::{
-CanonicalJsonValue, OwnedEventId, OwnedRoomId, OwnedServerName, OwnedUserId, RoomId,
-ServerName,
+CanonicalJsonValue, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, ServerName,
api::federation::membership::create_join_event,
events::{
StateEventType,

View File

@@ -6,7 +6,7 @@
};
use futures::FutureExt;
use ruma::{
-OwnedServerName, OwnedUserId,
+OwnedUserId,
RoomVersionId::*,
api::federation::knock::send_knock,
events::{

View File

@@ -47,7 +47,7 @@
const NAME_MAX: usize = 128;
const KEY_SEGS: usize = 8;
-#[crate::ctor]
+#[ctor::ctor]
fn _static_initialization() {
acq_epoch().expect("pre-initialization of jemalloc failed");
acq_epoch().expect("pre-initialization of jemalloc failed");

View File

@@ -2261,7 +2261,11 @@ struct ListeningAddr {
}
#[derive(Clone, Debug, Deserialize)]
-#[config_example_generator(filename = "conduwuit-example.toml", section = "global.antispam")]
+#[config_example_generator(
+filename = "conduwuit-example.toml",
+section = "global.antispam",
+optional = "true"
+)]
pub struct Antispam {
/// display: nested
pub meowlnir: Option<MeowlnirConfig>,
@@ -2272,7 +2276,8 @@ pub struct Antispam {
#[derive(Clone, Debug, Deserialize)]
#[config_example_generator(
filename = "conduwuit-example.toml",
-section = "global.antispam.meowlnir"
+section = "global.antispam.meowlnir",
+optional = "true"
)]
pub struct MeowlnirConfig {
/// The base URL on which to contact Meowlnir (before /_meowlnir/antispam).
@@ -2301,7 +2306,8 @@ pub struct MeowlnirConfig {
#[derive(Clone, Debug, Deserialize)]
#[config_example_generator(
filename = "conduwuit-example.toml",
-section = "global.antispam.draupnir"
+section = "global.antispam.draupnir",
+optional = "true"
)]
pub struct DraupnirConfig {
/// The base URL on which to contact Draupnir (before /api/).

View File

@@ -62,7 +62,7 @@ macro_rules! debug_info {
pub static DEBUGGER: LazyLock<bool> =
LazyLock::new(|| env::var("_").unwrap_or_default().ends_with("gdb"));
-#[cfg_attr(debug_assertions, crate::ctor)]
+#[cfg_attr(debug_assertions, ctor::ctor)]
#[cfg_attr(not(debug_assertions), allow(dead_code))]
fn set_panic_trap() {
if !*DEBUGGER {

View File

@@ -115,7 +115,7 @@ macro_rules! err {
macro_rules! err_log {
($out:ident, $level:ident, $($fields:tt)+) => {{
use $crate::tracing::{
-callsite, callsite2, metadata, valueset, Callsite,
+callsite, callsite2, metadata, valueset_all, Callsite,
Level,
};
@@ -133,7 +133,7 @@ macro_rules! err_log {
fields: $($fields)+,
};
-($crate::error::visit)(&mut $out, LEVEL, &__CALLSITE, &mut valueset!(__CALLSITE.metadata().fields(), $($fields)+));
+($crate::error::visit)(&mut $out, LEVEL, &__CALLSITE, &mut valueset_all!(__CALLSITE.metadata().fields(), $($fields)+));
($out).into()
}}
}

View File

@@ -1,95 +0,0 @@
//! Information about the build related to Cargo. This is a frontend interface
//! informed by proc-macros that capture raw information at build time which is
//! further processed at runtime either during static initialization or as
//! necessary.
use std::sync::OnceLock;
use cargo_toml::{DepsSet, Manifest};
use conduwuit_macros::cargo_manifest;
use crate::Result;
// Raw captures of the cargo manifest for each crate. This is provided by a
// proc-macro at build time since the source directory and the cargo toml's may
// not be present during execution.
#[cargo_manifest]
const WORKSPACE_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "macros")]
const MACROS_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "core")]
const CORE_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "database")]
const DATABASE_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "service")]
const SERVICE_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "admin")]
const ADMIN_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "router")]
const ROUTER_MANIFEST: &'static str = ();
#[cargo_manifest(crate = "main")]
const MAIN_MANIFEST: &'static str = ();
/// Processed list of features across all project crates. This is generated from
/// the data in the MANIFEST strings and contains all possible project features.
/// For *enabled* features see the info::rustc module instead.
static FEATURES: OnceLock<Vec<String>> = OnceLock::new();
/// Processed list of dependencies. This is generated from the data captured in
/// the MANIFEST.
static DEPENDENCIES: OnceLock<DepsSet> = OnceLock::new();
#[must_use]
pub fn dependencies_names() -> Vec<&'static str> {
dependencies().keys().map(String::as_str).collect()
}
pub fn dependencies() -> &'static DepsSet {
DEPENDENCIES.get_or_init(|| {
init_dependencies().unwrap_or_else(|e| panic!("Failed to initialize dependencies: {e}"))
})
}
/// List of all possible features for the project. For *enabled* features in
/// this build see the companion function in info::rustc.
pub fn features() -> &'static Vec<String> {
FEATURES.get_or_init(|| {
init_features().unwrap_or_else(|e| panic!("Failed initialize features: {e}"))
})
}
fn init_features() -> Result<Vec<String>> {
let mut features = Vec::new();
append_features(&mut features, WORKSPACE_MANIFEST)?;
append_features(&mut features, MACROS_MANIFEST)?;
append_features(&mut features, CORE_MANIFEST)?;
append_features(&mut features, DATABASE_MANIFEST)?;
append_features(&mut features, SERVICE_MANIFEST)?;
append_features(&mut features, ADMIN_MANIFEST)?;
append_features(&mut features, ROUTER_MANIFEST)?;
append_features(&mut features, MAIN_MANIFEST)?;
features.sort();
features.dedup();
Ok(features)
}
fn append_features(features: &mut Vec<String>, manifest: &str) -> Result<()> {
let manifest = Manifest::from_str(manifest)?;
features.extend(manifest.features.keys().cloned());
Ok(())
}
fn init_dependencies() -> Result<DepsSet> {
let manifest = Manifest::from_str(WORKSPACE_MANIFEST)?;
let deps_set = manifest
.workspace
.as_ref()
.expect("manifest has workspace section")
.dependencies
.clone();
Ok(deps_set)
}

View File

@@ -1,12 +1,5 @@
//! Information about the project. This module contains version, build, system,
//! etc information which can be queried by admins or used by developers.
pub mod cargo;
pub mod room_version;
pub mod rustc;
pub mod version;
pub use conduwuit_macros::rustc_flags_capture;
pub const MODULE_ROOT: &str = const_str::split!(std::module_path!(), "::")[0];
pub const CRATE_PREFIX: &str = const_str::split!(MODULE_ROOT, '_')[0];

View File

@@ -1,54 +0,0 @@
//! Information about the build related to rustc. This is a frontend interface
//! informed by proc-macros at build time. Since the project is split into
//! several crates, lower-level information is supplied from each crate during
//! static initialization.
use std::{collections::BTreeMap, sync::OnceLock};
use crate::utils::exchange;
/// Raw capture of rustc flags used to build each crate in the project. Informed
/// by rustc_flags_capture macro (one in each crate's mod.rs). This is
/// done during static initialization which is why it's mutex-protected and pub.
/// Should not be written to by anything other than our macro.
///
/// We specifically use a std mutex here because parking_lot cannot be used
/// after thread local storage is destroyed on MacOS.
pub static FLAGS: std::sync::Mutex<BTreeMap<&str, &[&str]>> =
std::sync::Mutex::new(BTreeMap::new());
/// Processed list of enabled features across all project crates. This is
/// generated from the data in FLAGS.
static FEATURES: OnceLock<Vec<&'static str>> = OnceLock::new();
/// List of features enabled for the project.
pub fn features() -> &'static Vec<&'static str> { FEATURES.get_or_init(init_features) }
fn init_features() -> Vec<&'static str> {
let mut features = Vec::new();
FLAGS
.lock()
.expect("locked")
.iter()
.for_each(|(_, flags)| append_features(&mut features, flags));
features.sort_unstable();
features.dedup();
features
}
fn append_features(features: &mut Vec<&'static str>, flags: &[&'static str]) {
let mut next_is_cfg = false;
for flag in flags {
let is_cfg = *flag == "--cfg";
let is_feature = flag.starts_with("feature=");
if exchange(&mut next_is_cfg, is_cfg) && is_feature {
if let Some(feature) = flag
.split_once('=')
.map(|(_, feature)| feature.trim_matches('"'))
{
features.push(feature);
}
}
}
}

View File

@@ -22,7 +22,7 @@
pub use config::Config;
pub use error::Error;
pub use info::{
-rustc_flags_capture, version,
+version,
version::{name, version},
};
pub use matrix::{
@@ -30,12 +30,10 @@
};
pub use parking_lot::{Mutex as SyncMutex, RwLock as SyncRwLock};
pub use server::Server;
-pub use utils::{ctor, dtor, implement, result, result::Result};
+pub use utils::{implement, result, result::Result};
pub use crate as conduwuit_core;
-rustc_flags_capture! {}
#[cfg(any(not(conduwuit_mods), not(feature = "conduwuit_mods")))]
pub mod mods {
#[macro_export]

View File

@@ -22,7 +22,6 @@
pub mod with_lock;
pub use ::conduwuit_macros::implement;
-pub use ::ctor::{ctor, dtor};
pub use self::{
arrayvec::ArrayVecExt,

View File

@@ -64,7 +64,6 @@ serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tracing.workspace = true
-ctor.workspace = true
[lints]
workspace = true

View File

@@ -3,11 +3,8 @@
extern crate conduwuit_core as conduwuit;
extern crate rust_rocksdb as rocksdb;
use ctor::{ctor, dtor};
conduwuit::mod_ctor! {}
conduwuit::mod_dtor! {}
conduwuit::rustc_flags_capture! {}
#[cfg(test)]
mod benches;

View File

@@ -1,47 +0,0 @@
use std::{fs::read_to_string, path::PathBuf};
use proc_macro::{Span, TokenStream};
use quote::quote;
use syn::{Error, ItemConst, Meta};
use crate::{Result, utils};
pub(super) fn manifest(item: ItemConst, args: &[Meta]) -> Result<TokenStream> {
let member = utils::get_named_string(args, "crate");
let path = manifest_path(member.as_deref())?;
let manifest = read_to_string(&path).unwrap_or_default();
let val = manifest.as_str();
let name = item.ident;
let ret = quote! {
const #name: &'static str = #val;
};
Ok(ret.into())
}
#[allow(clippy::option_env_unwrap)]
fn manifest_path(member: Option<&str>) -> Result<PathBuf> {
let Some(path) = option_env!("CARGO_MANIFEST_DIR") else {
return Err(Error::new(
Span::call_site().into(),
"missing CARGO_MANIFEST_DIR in environment",
));
};
let mut path: PathBuf = path.into();
// conduwuit/src/macros/ -> conduwuit/src/
path.pop();
if let Some(member) = member {
// conduwuit/$member/Cargo.toml
path.push(member);
} else {
// conduwuit/src/ -> conduwuit/
path.pop();
}
path.push("Cargo.toml");
Ok(path)
}

View File

@@ -73,7 +73,13 @@ fn generate_example(input: &ItemStruct, args: &[Meta], write: bool) -> Result<To
.expect("written to config file");
}
-file.write_fmt(format_args!("\n[{section}]\n"))
+let optional = settings.get("optional").is_some_and(|v| v == "true");
+let section_header = if optional {
+format!("\n#[{section}]\n")
+} else {
+format!("\n[{section}]\n")
+};
+file.write_fmt(format_args!("{section_header}"))
.expect("written to config file");
}

View File

@@ -1,15 +1,13 @@
mod admin;
-mod cargo;
mod config;
mod debug;
mod implement;
mod refutable;
-mod rustc;
mod utils;
use proc_macro::TokenStream;
use syn::{
-Error, Item, ItemConst, ItemEnum, ItemFn, ItemStruct, Meta,
+Error, Item, ItemEnum, ItemFn, ItemStruct, Meta,
parse::{Parse, Parser},
parse_macro_input,
};
@@ -26,19 +24,11 @@ pub fn admin_command_dispatch(args: TokenStream, input: TokenStream) -> TokenStr
attribute_macro::<ItemEnum, _>(args, input, admin::command_dispatch)
}
-#[proc_macro_attribute]
-pub fn cargo_manifest(args: TokenStream, input: TokenStream) -> TokenStream {
-attribute_macro::<ItemConst, _>(args, input, cargo::manifest)
-}
#[proc_macro_attribute]
pub fn recursion_depth(args: TokenStream, input: TokenStream) -> TokenStream {
attribute_macro::<Item, _>(args, input, debug::recursion_depth)
}
#[proc_macro]
pub fn rustc_flags_capture(args: TokenStream) -> TokenStream { rustc::flags_capture(args) }
#[proc_macro_attribute]
pub fn refutable(args: TokenStream, input: TokenStream) -> TokenStream {
attribute_macro::<ItemFn, _>(args, input, refutable::refutable)

@@ -1,29 +0,0 @@
use proc_macro::TokenStream;
use quote::quote;
pub(super) fn flags_capture(args: TokenStream) -> TokenStream {
let cargo_crate_name = std::env::var("CARGO_CRATE_NAME");
let crate_name = match cargo_crate_name.as_ref() {
| Err(_) => return args,
| Ok(crate_name) => crate_name.trim_start_matches("conduwuit_"),
};
let flag = std::env::args().collect::<Vec<_>>();
let flag_len = flag.len();
let ret = quote! {
pub static RUSTC_FLAGS: [&str; #flag_len] = [#( #flag ),*];
#[ctor]
fn _set_rustc_flags() {
conduwuit_core::info::rustc::FLAGS.lock().expect("locked").insert(#crate_name, &RUSTC_FLAGS);
}
// static strings have to be yanked on module unload
#[dtor]
fn _unset_rustc_flags() {
conduwuit_core::info::rustc::FLAGS.lock().expect("locked").remove(#crate_name);
}
};
ret.into()
}

@@ -207,7 +207,6 @@ clap.workspace = true
console-subscriber.optional = true
console-subscriber.workspace = true
const-str.workspace = true
ctor.workspace = true
log.workspace = true
opentelemetry.optional = true
opentelemetry.workspace = true

@@ -2,7 +2,7 @@
use std::sync::{Arc, atomic::Ordering};
use conduwuit_core::{debug_info, error, rustc_flags_capture};
use conduwuit_core::{debug_info, error};
mod clap;
mod logging;
@@ -13,12 +13,8 @@
mod server;
mod signal;
use ctor::{ctor, dtor};
use server::Server;
rustc_flags_capture! {}
pub use conduwuit_core::{Error, Result};
use server::Server;
pub use crate::clap::Args;

@@ -122,7 +122,6 @@ tokio.workspace = true
tower.workspace = true
tower-http.workspace = true
tracing.workspace = true
ctor.workspace = true
[target.'cfg(all(unix, target_os = "linux"))'.dependencies]
sd-notify.workspace = true

@@ -12,12 +12,10 @@
use conduwuit::{Error, Result, Server};
use conduwuit_service::Services;
use ctor::{ctor, dtor};
use futures::{Future, FutureExt, TryFutureExt};
conduwuit::mod_ctor! {}
conduwuit::mod_dtor! {}
conduwuit::rustc_flags_capture! {}
#[unsafe(no_mangle)]
pub extern "Rust" fn start(

@@ -118,7 +118,7 @@ webpage.optional = true
blurhash.workspace = true
blurhash.optional = true
recaptcha-verify = { version = "0.1.5", default-features = false }
ctor.workspace = true
indexmap.workspace = true
[target.'cfg(all(unix, target_os = "linux"))'.dependencies]
sd-notify.workspace = true

@@ -18,8 +18,9 @@
use std::{sync::Arc, time::Duration};
use async_trait::async_trait;
use conduwuit::{Result, Server, debug, info, warn};
use conduwuit::{Result, Server, debug, error, info, warn};
use database::{Deserialized, Map};
use rand::Rng;
use ruma::events::{Mentions, room::message::RoomMessageEventContent};
use serde::Deserialize;
use tokio::{
@@ -86,9 +87,27 @@ async fn worker(self: Arc<Self>) -> Result<()> {
return Ok(());
}
// Run the first check immediately and send errors to admin room
if let Err(e) = self.check().await {
error!(?e, "Failed to check for announcements on startup");
self.services
.admin
.send_message(RoomMessageEventContent::text_plain(format!(
"Failed to check for announcements on startup: {e}"
)))
.await
.ok();
}
let first_check_jitter = {
let mut rng = rand::thread_rng();
let jitter_percent = rng.gen_range(-50.0..=10.0);
self.interval.mul_f64(1.0 + jitter_percent / 100.0)
};
let mut i = interval(self.interval);
i.set_missed_tick_behavior(MissedTickBehavior::Delay);
i.reset_after(self.interval);
i.reset_after(first_check_jitter);
loop {
tokio::select! {
() = self.interrupt.notified() => break,

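The jitter above is plain `Duration::mul_f64` with a percentage drawn from `-50.0..=10.0`; a std-only sketch with fixed percentages standing in for `rand` (the helper name `jittered` is for illustration only):

```rust
use std::time::Duration;

/// Scale an interval by a jitter percentage: -50.0 halves it, 10.0
/// lengthens it by a tenth (mirrors `interval.mul_f64(1.0 + p / 100.0)`).
fn jittered(interval: Duration, jitter_percent: f64) -> Duration {
    interval.mul_f64(1.0 + jitter_percent / 100.0)
}

fn main() {
    let interval = Duration::from_secs(7200);
    // The bottom of the sampled range halves the first-check delay.
    assert_eq!(jittered(interval, -50.0), Duration::from_secs(3600));
    // The top of the range (+10%) lengthens it slightly.
    assert!((jittered(interval, 10.0).as_secs_f64() - 7920.0).abs() < 1e-6);
}
```

Sampling mostly negative jitter biases the first announcement check to run earlier than the configured interval rather than later.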
@@ -56,9 +56,18 @@ pub async fn fetch_remote_content(
let result = self
.fetch_content_authenticated(mxc, user, server, timeout_ms)
.await;
.await
.inspect_err(|error| {
debug_warn!(
%mxc,
?user,
?server,
?error,
"Authenticated fetch of remote content failed"
);
});
if let Err(Error::Request(NotFound, ..)) = &result {
if let Err(Error::Request(Unrecognized, ..)) = &result {
return self
.fetch_content_unauthenticated(mxc, user, server, timeout_ms)
.await;
@@ -87,7 +96,7 @@ async fn fetch_thumbnail_authenticated(
timeout_ms,
};
let Response { content, .. } = self.federation_request(mxc, user, server, request).await?;
let Response { content, .. } = self.federation_request(mxc, server, request).await?;
match content {
| FileOrLocation::File(content) =>
@@ -111,7 +120,7 @@ async fn fetch_content_authenticated(
timeout_ms,
};
let Response { content, .. } = self.federation_request(mxc, user, server, request).await?;
let Response { content, .. } = self.federation_request(mxc, server, request).await?;
match content {
| FileOrLocation::File(content) => self.handle_content_file(mxc, user, content).await,
@@ -145,7 +154,7 @@ async fn fetch_thumbnail_unauthenticated(
let Response {
file, content_type, content_disposition, ..
} = self.federation_request(mxc, user, server, request).await?;
} = self.federation_request(mxc, server, request).await?;
let content = Content { file, content_type, content_disposition };
@@ -173,7 +182,7 @@ async fn fetch_content_unauthenticated(
let Response {
file, content_type, content_disposition, ..
} = self.federation_request(mxc, user, server, request).await?;
} = self.federation_request(mxc, server, request).await?;
let content = Content { file, content_type, content_disposition };
@@ -296,7 +305,6 @@ async fn location_request(&self, location: &str) -> Result<FileMeta> {
async fn federation_request<Request>(
&self,
mxc: &Mxc<'_>,
user: Option<&UserId>,
server: Option<&ServerName>,
request: Request,
) -> Result<Request::IncomingResponse>
@@ -307,40 +315,6 @@ async fn federation_request<Request>(
.sending
.send_federation_request(server.unwrap_or(mxc.server_name), request)
.await
.map_err(|error| handle_federation_error(mxc, user, server, error))
}
// Handles and adjusts the error for the caller to determine if they should
// request the fallback endpoint or give up.
fn handle_federation_error(
mxc: &Mxc<'_>,
user: Option<&UserId>,
server: Option<&ServerName>,
error: Error,
) -> Error {
let fallback = || {
err!(Request(NotFound(
debug_error!(%mxc, user = user.map(tracing::field::display), server = server.map(tracing::field::display), ?error, "Remote media not found")
)))
};
// Matrix server responses for fallback always taken.
if error.kind() == NotFound || error.kind() == Unrecognized {
return fallback();
}
// If we get these from any middleware we'll try the other endpoint rather than
// giving up too early.
if error.status_code().is_redirection()
|| error.status_code().is_client_error()
|| error.status_code().is_server_error()
{
return fallback();
}
// Reached for 5xx errors. This is where we don't fallback given the likelihood
// the other endpoint will also be a 5xx and we're wasting time.
error
}
#[implement(super::Service)]

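With the homebrewed error mangling removed, the fallback decision above collapses to a single check: only an `M_UNRECOGNIZED` response triggers the legacy endpoint. A minimal sketch with a stand-in error type (hypothetical; the real code matches ruma's error kinds):

```rust
/// Stand-in for the relevant request error kinds (hypothetical,
/// only to illustrate the new fallback rule from this diff).
#[derive(Debug, PartialEq)]
enum ErrorKind {
    Unrecognized,
    NotFound,
    Forbidden,
}

/// After this change, the legacy (unauthenticated) media endpoint is
/// tried only when the remote answered M_UNRECOGNIZED, i.e. it does
/// not implement the authenticated endpoint at all.
fn should_fall_back(result: &Result<(), ErrorKind>) -> bool {
    matches!(result, Err(ErrorKind::Unrecognized))
}

fn main() {
    assert!(should_fall_back(&Err(ErrorKind::Unrecognized)));
    // A NotFound from the authenticated endpoint is now final.
    assert!(!should_fall_back(&Err(ErrorKind::NotFound)));
    assert!(!should_fall_back(&Err(ErrorKind::Forbidden)));
    assert!(!should_fall_back(&Ok(())));
}
```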
@@ -34,11 +34,9 @@
pub mod uiaa;
pub mod users;
use ctor::{ctor, dtor};
pub(crate) use service::{Args, Dep, Service};
pub use crate::services::Services;
conduwuit::mod_ctor! {}
conduwuit::mod_dtor! {}
conduwuit::rustc_flags_capture! {}

src/stitcher/Cargo.toml (new file)

@@ -0,0 +1,19 @@
[package]
name = "stitcher"
description = "An implementation of stitched ordering (https://codeberg.org/andybalaam/stitched-order)"
edition.workspace = true
license.workspace = true
readme.workspace = true
repository.workspace = true
version.workspace = true
[lib]
path = "mod.rs"
[dependencies]
indexmap.workspace = true
itertools.workspace = true
[dev-dependencies]
peg = "0.8.5"
rustyline = { version = "17.0.2", default-features = false }

src/stitcher/algorithm.rs (new file)

@@ -0,0 +1,141 @@
use std::collections::HashSet;
use indexmap::IndexSet;
use itertools::Itertools;
use super::{Batch, Gap, OrderKey, StitchedItem, StitcherBackend};
/// Updates to a gap in the stitched order.
#[derive(Debug)]
pub struct GapUpdate<'id, K: OrderKey> {
/// The opaque key of the gap to update.
pub key: K,
/// The new contents of the gap. If this is empty, the gap should be
/// deleted.
pub gap: Gap,
/// New items to insert after the gap. These items _should not_ be
/// synchronized to clients.
pub inserted_items: Vec<StitchedItem<'id>>,
}
/// Updates to the stitched order.
#[derive(Debug)]
pub struct OrderUpdates<'id, K: OrderKey> {
/// Updates to individual gaps. The items inserted by these updates _should
/// not_ be synchronized to clients.
pub gap_updates: Vec<GapUpdate<'id, K>>,
/// New items to append to the end of the order. These items _should_ be
/// synchronized to clients.
pub new_items: Vec<StitchedItem<'id>>,
// The subset of events in the batch which got slotted into an existing gap. This is tracked
// for unit testing and may eventually be sent to clients.
pub events_added_to_gaps: HashSet<&'id str>,
}
/// The stitcher, which implements the stitched ordering algorithm.
/// Its primary method is [`Stitcher::stitch`].
pub struct Stitcher<'backend, B: StitcherBackend> {
backend: &'backend B,
}
impl<B: StitcherBackend> Stitcher<'_, B> {
/// Create a new [`Stitcher`] given a [`StitcherBackend`].
pub fn new(backend: &B) -> Stitcher<'_, B> { Stitcher { backend } }
/// Given a [`Batch`], compute the [`OrderUpdates`] which should be made to
/// the stitched order to incorporate that batch. It is the responsibility
/// of the caller to apply the updates.
pub fn stitch<'id>(&self, batch: &Batch<'id>) -> OrderUpdates<'id, B::Key> {
let mut gap_updates = Vec::new();
let mut events_added_to_gaps: HashSet<&'id str> = HashSet::new();
// Events in the batch which haven't been fitted into a gap or appended to the
// end yet.
let mut remaining_events: IndexSet<_> = batch.events().collect();
// 1: Find existing gaps which include IDs of events in `batch`
let matching_gaps = self.backend.find_matching_gaps(batch.events());
// Repeat steps 2-9 for each matching gap
for (key, mut gap) in matching_gaps {
// 2. Find events in `batch` which are mentioned in `gap`
let matching_events = remaining_events.iter().filter(|id| gap.contains(**id));
// Extend `events_added_to_gaps` with the matching events, which are destined to
// be slotted into gaps.
events_added_to_gaps.extend(matching_events.clone());
// 3. Create the to-insert list from the predecessor sets of each matching event
let events_to_insert: Vec<_> = matching_events
.filter_map(|event| batch.predecessors(event))
.flat_map(|predecessors| predecessors.predecessor_set.iter())
.filter(|event| remaining_events.contains(*event))
.copied()
.collect();
// 4. Remove the events in the to-insert list from `remaining_events` so they
// aren't processed again
remaining_events.retain(|event| !events_to_insert.contains(event));
// 5 and 6
let inserted_items = self.sort_events_and_create_gaps(batch, events_to_insert);
// 8. Update gap
gap.retain(|id| !batch.contains(id));
// 7 and 9. Append to-insert list and delete gap if empty
// The actual work of mutating the order is handled by the callee,
// we just record an update to make.
gap_updates.push(GapUpdate { key: key.clone(), gap, inserted_items });
}
// 10. Append remaining events and gaps
let new_items = self.sort_events_and_create_gaps(batch, remaining_events);
OrderUpdates {
gap_updates,
new_items,
events_added_to_gaps,
}
}
fn sort_events_and_create_gaps<'id>(
&self,
batch: &Batch<'id>,
events_to_insert: impl IntoIterator<Item = &'id str>,
) -> Vec<StitchedItem<'id>> {
// 5. Sort the to-insert list with DAG;received order
let events_to_insert = events_to_insert
.into_iter()
.sorted_by(batch.compare_by_dag_received())
.collect_vec();
// allocate 1.5x the size of the to-insert list
let items_capacity = events_to_insert
.capacity()
.saturating_add(events_to_insert.capacity().div_euclid(2));
let mut items = Vec::with_capacity(items_capacity);
for event in events_to_insert {
let missing_prev_events: HashSet<String> = batch
.predecessors(event)
.expect("events in to_insert should be in batch")
.prev_events
.iter()
.filter(|prev_event| {
!(batch.contains(prev_event) || self.backend.event_exists(prev_event))
})
.map(|id| String::from(*id))
.collect();
if !missing_prev_events.is_empty() {
items.push(StitchedItem::Gap(missing_prev_events));
}
items.push(StitchedItem::Event(event));
}
items
}
}

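Steps 5 and 6 above (sort, then create gaps for unknown `prev_events`) can be sketched with std collections only; `create_gaps` and its parameters are hypothetical simplifications of `sort_events_and_create_gaps`:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Debug, PartialEq)]
enum Item {
    Event(&'static str),
    Gap(HashSet<&'static str>),
}

/// For each event (assumed already in DAG;received order), emit a Gap
/// of its unknown prev_events immediately before the event itself.
/// `prev_events` plays the role of the batch, `known` the backend.
fn create_gaps(
    ordered: &[&'static str],
    prev_events: &HashMap<&'static str, Vec<&'static str>>,
    known: &HashSet<&'static str>,
) -> Vec<Item> {
    let mut items = Vec::new();
    for &event in ordered {
        let missing: HashSet<_> = prev_events[event]
            .iter()
            .copied()
            // A prev event is missing if it is neither in the batch
            // nor already present in the stitched order.
            .filter(|p| !prev_events.contains_key(p) && !known.contains(p))
            .collect();
        if !missing.is_empty() {
            items.push(Item::Gap(missing));
        }
        items.push(Item::Event(event));
    }
    items
}

fn main() {
    // X's prev event W is unknown, so a gap -W appears before X.
    let prev = HashMap::from([("A", vec![]), ("X", vec!["W"])]);
    let items = create_gaps(&["A", "X"], &prev, &HashSet::new());
    assert_eq!(items[0], Item::Event("A"));
    assert_eq!(items[1], Item::Gap(HashSet::from(["W"])));
    assert_eq!(items[2], Item::Event("X"));
}
```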
@@ -0,0 +1,88 @@
use std::collections::HashSet;
use rustyline::{DefaultEditor, Result, error::ReadlineError};
use stitcher::{Batch, EventEdges, Stitcher, memory_backend::MemoryStitcherBackend};
const BANNER: &str = "
stitched ordering test repl
- append an event by typing its name: `A`
- to add prev events, type an arrow and then space-separated event names: `A --> B C D`
- to add multiple events at once, separate them with commas
- use `/reset` to clear the ordering
Ctrl-D to exit, Ctrl-C to clear input
"
.trim_ascii();
enum Command<'line> {
AppendEvents(EventEdges<'line>),
ResetOrder,
}
peg::parser! {
// partially copied from the test case parser
grammar command_parser() for str {
/// Parse whitespace.
rule _ -> () = quiet! { $([' '])* {} }
/// Parse an event ID.
rule event_id() -> &'input str
= quiet! { id:$([char if char.is_ascii_alphanumeric() || ['_', '-'].contains(&char)]+) { id } }
/ expected!("an event ID containing only [a-zA-Z0-9_-]")
/// Parse an event and its prev events.
rule event() -> (&'input str, HashSet<&'input str>)
= id:event_id() prev_events:(_ "-->" _ id:(event_id() ++ _) { id })? {
(id, prev_events.into_iter().flatten().collect())
}
pub rule command() -> Command<'input> =
"/reset" { Command::ResetOrder }
/ events:event() ++ (_ "," _) { Command::AppendEvents(events.into_iter().collect()) }
}
}
fn main() -> Result<()> {
let mut backend = MemoryStitcherBackend::default();
let mut reader = DefaultEditor::new()?;
println!("{BANNER}");
loop {
match reader.readline("> ") {
| Ok(line) => match command_parser::command(&line) {
| Ok(Command::AppendEvents(events)) => {
let batch = Batch::from_edges(&events);
let stitcher = Stitcher::new(&backend);
let updates = stitcher.stitch(&batch);
for update in &updates.gap_updates {
println!("update to gap {}:", update.key);
println!(" new gap contents: {:?}", update.gap);
println!(" inserted items: {:?}", update.inserted_items);
}
println!("events added to gaps: {:?}", &updates.events_added_to_gaps);
println!();
println!("items to sync: {:?}", &updates.new_items);
backend.extend(updates);
println!("order: {backend:?}");
},
| Ok(Command::ResetOrder) => {
backend.clear();
println!("order cleared.");
},
| Err(parse_error) => {
println!("parse error!! {parse_error}");
},
},
| Err(ReadlineError::Interrupted) => {
println!("interrupt");
},
| Err(ReadlineError::Eof) => {
println!("goodbye :3");
break Ok(());
},
| Err(err) => break Err(err),
}
}
}

@@ -0,0 +1,130 @@
use std::{
fmt::Debug,
sync::atomic::{AtomicU64, Ordering},
};
use crate::{Gap, OrderUpdates, StitchedItem, StitcherBackend};
/// A version of [`StitchedItem`] which owns event IDs.
#[derive(Debug)]
enum MemoryStitcherItem {
Event(String),
Gap(Gap),
}
impl From<StitchedItem<'_>> for MemoryStitcherItem {
fn from(value: StitchedItem) -> Self {
match value {
| StitchedItem::Event(id) => MemoryStitcherItem::Event(id.to_string()),
| StitchedItem::Gap(gap) => MemoryStitcherItem::Gap(gap),
}
}
}
impl<'id> From<&'id MemoryStitcherItem> for StitchedItem<'id> {
fn from(value: &'id MemoryStitcherItem) -> Self {
match value {
| MemoryStitcherItem::Event(id) => StitchedItem::Event(id),
| MemoryStitcherItem::Gap(gap) => StitchedItem::Gap(gap.clone()),
}
}
}
/// A stitcher backend which holds a stitched ordering in RAM.
#[derive(Default)]
pub struct MemoryStitcherBackend {
items: Vec<(u64, MemoryStitcherItem)>,
counter: AtomicU64,
}
impl MemoryStitcherBackend {
fn next_id(&self) -> u64 { self.counter.fetch_add(1, Ordering::Relaxed) }
/// Extend this ordering with new updates.
pub fn extend(&mut self, results: OrderUpdates<'_, <Self as StitcherBackend>::Key>) {
for update in results.gap_updates {
let Some(gap_index) = self.items.iter().position(|(key, _)| *key == update.key)
else {
panic!("bad update key {}", update.key);
};
let insertion_index = if update.gap.is_empty() {
self.items.remove(gap_index);
gap_index
} else {
match self.items.get_mut(gap_index) {
| Some((_, MemoryStitcherItem::Gap(gap))) => {
*gap = update.gap;
},
| Some((key, other)) => {
panic!("expected item with key {key} to be a gap, it was {other:?}");
},
| None => unreachable!("we just checked that this index is valid"),
}
gap_index.checked_add(1).expect(
"should never allocate usize::MAX ids. what kind of test are you running",
)
};
let to_insert: Vec<_> = update
.inserted_items
.into_iter()
.map(|item| (self.next_id(), item.into()))
.collect();
self.items
.splice(insertion_index..insertion_index, to_insert.into_iter())
.for_each(drop);
}
let new_items: Vec<_> = results
.new_items
.into_iter()
.map(|item| (self.next_id(), item.into()))
.collect();
self.items.extend(new_items);
}
/// Iterate over the items in this ordering.
pub fn iter(&self) -> impl Iterator<Item = StitchedItem<'_>> {
self.items.iter().map(|(_, item)| item.into())
}
/// Clear this ordering.
pub fn clear(&mut self) { self.items.clear(); }
}
impl StitcherBackend for MemoryStitcherBackend {
type Key = u64;
fn find_matching_gaps<'a>(
&'a self,
events: impl Iterator<Item = &'a str>,
) -> impl Iterator<Item = (Self::Key, Gap)> {
// nobody cares about test suite performance right
let mut gaps = vec![];
for event in events {
for (key, item) in &self.items {
if let MemoryStitcherItem::Gap(gap) = item
&& gap.contains(event)
{
gaps.push((*key, gap.clone()));
}
}
}
gaps.into_iter()
}
fn event_exists<'a>(&'a self, event: &'a str) -> bool {
self.items
.iter()
.any(|item| matches!(&item.1, MemoryStitcherItem::Event(id) if event == id))
}
}
impl Debug for MemoryStitcherBackend {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_list().entries(self.iter()).finish()
}
}

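`extend` above leans on `Vec::splice` with an empty target range, which inserts items at an index without replacing anything; a minimal illustration of the idiom (`fill_gap` is a hypothetical helper):

```rust
/// Insert `events` into `order` at index `at` without removing
/// anything, the same `splice(i..i, _)` idiom used by
/// `MemoryStitcherBackend::extend` (hypothetical helper).
fn fill_gap(order: &mut Vec<&'static str>, at: usize, events: &[&'static str]) {
    // An empty target range means nothing is replaced; `for_each(drop)`
    // exhausts the (empty) iterator of removed elements.
    order.splice(at..at, events.iter().copied()).for_each(drop);
}

fn main() {
    // Order "A, <gap>, D": filling the gap at index 1 recovers B and C.
    let mut order = vec!["A", "D"];
    fill_gap(&mut order, 1, &["B", "C"]);
    assert_eq!(order, ["A", "B", "C", "D"]);
}
```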
src/stitcher/mod.rs (new file)

@@ -0,0 +1,160 @@
use std::{cmp::Ordering, collections::HashSet};
use indexmap::IndexMap;
pub mod algorithm;
pub mod memory_backend;
#[cfg(test)]
mod test;
pub use algorithm::*;
/// A gap in the stitched order.
pub type Gap = HashSet<String>;
/// An item in the stitched order.
#[derive(Debug)]
pub enum StitchedItem<'id> {
/// A single event.
Event(&'id str),
/// A gap representing one or more missing events.
Gap(Gap),
}
/// An opaque key returned by a [`StitcherBackend`] to identify an item in its
/// order.
pub trait OrderKey: Eq + Clone {}
impl<T: Eq + Clone> OrderKey for T {}
/// A trait providing read-only access to an existing stitched order.
pub trait StitcherBackend {
type Key: OrderKey;
/// Return all gaps containing one or more events listed in `events`.
fn find_matching_gaps<'a>(
&'a self,
events: impl Iterator<Item = &'a str>,
) -> impl Iterator<Item = (Self::Key, Gap)>;
/// Return whether an event exists in the stitched order.
fn event_exists<'a>(&'a self, event: &'a str) -> bool;
}
/// An ordered map from an event ID to its `prev_events`.
pub type EventEdges<'id> = IndexMap<&'id str, HashSet<&'id str>>;
/// Information about the `prev_events` of an event.
/// This struct does not store the ID of the event itself.
#[derive(Debug)]
struct EventPredecessors<'id> {
/// The `prev_events` of the event.
pub prev_events: HashSet<&'id str>,
/// The predecessor set of the event. This is derived from, and a superset
/// of, [`EventPredecessors::prev_events`]. See
/// [`Batch::find_predecessor_set`] for details. It is cached in this
/// struct for performance.
pub predecessor_set: HashSet<&'id str>,
}
/// A batch of events to be inserted into the stitched order.
#[derive(Debug)]
pub struct Batch<'id> {
events: IndexMap<&'id str, EventPredecessors<'id>>,
}
impl<'id> Batch<'id> {
/// Create a new [`Batch`] from an [`EventEdges`].
pub fn from_edges<'edges>(edges: &EventEdges<'edges>) -> Batch<'edges> {
let mut events = IndexMap::new();
for (event, prev_events) in edges {
let predecessor_set = Self::find_predecessor_set(event, edges);
events.insert(*event, EventPredecessors {
prev_events: prev_events.clone(),
predecessor_set,
});
}
Batch { events }
}
/// Build the predecessor set of `event` using `edges`. The predecessor set
/// is a subgraph of the room's DAG which may be thought of as a tree
/// rooted at `event` containing _only_ events which are included in
/// `edges`. It is represented as a set and not a proper tree structure for
/// efficiency.
fn find_predecessor_set<'a>(event: &'a str, edges: &EventEdges<'a>) -> HashSet<&'a str> {
// The predecessor set which we are building.
let mut predecessor_set = HashSet::new();
// The queue of events to check for membership in `edges`.
let mut events_to_check = vec![event];
// Events which we have already checked and do not need to revisit.
let mut events_already_checked = HashSet::new();
while let Some(event) = events_to_check.pop() {
// Don't add this event to the queue again.
events_already_checked.insert(event);
// If this event is in `edges`, add it to the predecessor set.
if let Some(prev_events) = edges.get(event) {
predecessor_set.insert(event);
// Also add all its `prev_events` to the queue. It's fine if some of them don't
// exist in `edges` because they'll just be discarded when they're popped
// off the queue.
events_to_check.extend(
prev_events
.iter()
.filter(|event| !events_already_checked.contains(*event)),
);
}
}
predecessor_set
}
/// Iterate over all the events contained in this batch.
fn events(&self) -> impl Iterator<Item = &'id str> { self.events.keys().copied() }
/// Check whether an event exists in this batch.
fn contains(&self, event: &'id str) -> bool { self.events.contains_key(event) }
/// Return the predecessors of an event, if it exists in this batch.
fn predecessors(&self, event: &str) -> Option<&EventPredecessors<'id>> {
self.events.get(event)
}
/// Compare two events by DAG;received order.
///
/// If either event is in the other's predecessor set it comes first,
/// otherwise they are sorted by which comes first in the batch.
fn compare_by_dag_received(&self) -> impl FnMut(&&'id str, &&'id str) -> Ordering {
|a, b| {
if self
.predecessors(a)
.is_some_and(|it| it.predecessor_set.contains(b))
{
Ordering::Greater
} else if self
.predecessors(b)
.is_some_and(|it| it.predecessor_set.contains(a))
{
Ordering::Less
} else {
let a_index = self
.events
.get_index_of(a)
.expect("a should be in this batch");
let b_index = self
.events
.get_index_of(b)
.expect("b should be in this batch");
a_index.cmp(&b_index)
}
}
}
}

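`Batch::find_predecessor_set` above is a worklist walk backwards over `prev_events`, keeping only events present in the batch; a self-contained sketch using std maps (`predecessor_set` is a hypothetical stand-in):

```rust
use std::collections::{HashMap, HashSet};

/// Walk prev_events backwards from `event`, collecting every reachable
/// event that is itself present in `edges` (mirrors the shape of
/// `Batch::find_predecessor_set`; simplified types for illustration).
fn predecessor_set<'a>(
    event: &'a str,
    edges: &HashMap<&'a str, Vec<&'a str>>,
) -> HashSet<&'a str> {
    let mut set = HashSet::new();
    let mut queue = vec![event];
    let mut seen = HashSet::new();
    while let Some(event) = queue.pop() {
        seen.insert(event);
        // Events absent from `edges` are simply discarded here.
        if let Some(prev) = edges.get(event) {
            set.insert(event);
            queue.extend(prev.iter().copied().filter(|e| !seen.contains(*e)));
        }
    }
    set
}

fn main() {
    // C --> B --> A, where A's own prev event Z is not in the batch.
    let edges = HashMap::from([("A", vec!["Z"]), ("B", vec!["A"]), ("C", vec!["B"])]);
    assert_eq!(predecessor_set("C", &edges), HashSet::from(["A", "B", "C"]));
}
```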
src/stitcher/test/mod.rs (new file)

@@ -0,0 +1,102 @@
use itertools::Itertools;
use super::{algorithm::*, *};
use crate::memory_backend::MemoryStitcherBackend;
mod parser;
fn run_testcase(testcase: parser::TestCase<'_>) {
let mut backend = MemoryStitcherBackend::default();
for (index, phase) in testcase.into_iter().enumerate() {
let stitcher = Stitcher::new(&backend);
let batch = Batch::from_edges(&phase.batch);
let updates = stitcher.stitch(&batch);
println!();
println!("===== phase {index}");
for update in &updates.gap_updates {
println!("update to gap {}:", update.key);
println!(" new gap contents: {:?}", update.gap);
println!(" inserted items: {:?}", update.inserted_items);
}
println!("expected new items: {:?}", &phase.order.new_items);
println!(" actual new items: {:?}", &updates.new_items);
for (expected, actual) in phase
.order
.new_items
.iter()
.zip_eq(updates.new_items.iter())
{
assert_eq!(
expected, actual,
"bad new item, expected {expected:?} but got {actual:?}"
);
}
if let Some(updated_gaps) = phase.updated_gaps {
println!("expected events added to gaps: {updated_gaps:?}");
println!(" actual events added to gaps: {:?}", updates.events_added_to_gaps);
assert_eq!(
updated_gaps, updates.events_added_to_gaps,
"incorrect events added to gaps"
);
}
backend.extend(updates);
println!("extended ordering: {:?}", backend);
for (expected, ref actual) in phase.order.iter().zip_eq(backend.iter()) {
assert_eq!(
expected, actual,
"bad item in order, expected {expected:?} but got {actual:?}",
);
}
}
}
macro_rules! testcase {
($index:literal : $id:ident) => {
#[test]
fn $id() {
let testcase = parser::parse(include_str!(concat!(
"./testcases/",
$index,
"-",
stringify!($id),
".stitched"
)));
run_testcase(testcase);
}
};
}
testcase!("001": receiving_new_events);
testcase!("002": recovering_after_netsplit);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item_multiple);
testcase!("zzz": being_before_a_gap_item_beats_being_after_an_existing_item);
testcase!("zzz": chains_are_reordered_using_prev_events);
testcase!("zzz": empty_then_simple_chain);
testcase!("zzz": empty_then_two_chains_interleaved);
testcase!("zzz": empty_then_two_chains);
testcase!("zzz": filling_in_a_gap_with_a_batch_containing_gaps);
testcase!("zzz": gaps_appear_before_events_referring_to_them_received_order);
testcase!("zzz": gaps_appear_before_events_referring_to_them);
testcase!("zzz": if_prev_events_determine_order_they_override_received);
testcase!("zzz": insert_into_first_of_several_gaps);
testcase!("zzz": insert_into_last_of_several_gaps);
testcase!("zzz": insert_into_middle_of_several_gaps);
testcase!("zzz": linked_events_are_split_across_gaps);
testcase!("zzz": linked_events_in_a_diamond_are_split_across_gaps);
testcase!("zzz": middle_of_batch_matches_gap_and_end_of_batch_matches_end);
testcase!("zzz": middle_of_batch_matches_gap);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_first_has_more);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event);
testcase!("zzz": multiple_events_referring_to_the_same_missing_event_with_more);
testcase!("zzz": multiple_missing_prev_events_turn_into_a_single_gap);
testcase!("zzz": partially_filling_a_gap_leaves_it_before_new_nodes);
testcase!("zzz": partially_filling_a_gap_with_two_events);
testcase!("zzz": received_order_wins_within_a_subgroup_if_no_prev_event_chain);
testcase!("zzz": subgroups_are_processed_in_first_received_order);

src/stitcher/test/parser.rs (new file)

@@ -0,0 +1,140 @@
use std::collections::HashSet;
use indexmap::IndexMap;
use super::StitchedItem;
pub(super) type TestEventId<'id> = &'id str;
pub(super) type TestGap<'id> = HashSet<TestEventId<'id>>;
#[derive(Debug)]
pub(super) enum TestStitchedItem<'id> {
Event(TestEventId<'id>),
Gap(TestGap<'id>),
}
impl PartialEq<StitchedItem<'_>> for TestStitchedItem<'_> {
fn eq(&self, other: &StitchedItem<'_>) -> bool {
match (self, other) {
| (TestStitchedItem::Event(lhs), StitchedItem::Event(rhs)) => lhs == rhs,
| (TestStitchedItem::Gap(lhs), StitchedItem::Gap(rhs)) =>
lhs.iter().all(|id| rhs.contains(*id)),
| _ => false,
}
}
}
pub(super) type TestCase<'id> = Vec<Phase<'id>>;
pub(super) struct Phase<'id> {
pub batch: Batch<'id>,
pub order: Order<'id>,
pub updated_gaps: Option<HashSet<TestEventId<'id>>>,
}
pub(super) type Batch<'id> = IndexMap<TestEventId<'id>, HashSet<TestEventId<'id>>>;
pub(super) struct Order<'id> {
pub inserted_items: Vec<TestStitchedItem<'id>>,
pub new_items: Vec<TestStitchedItem<'id>>,
}
impl<'id> Order<'id> {
pub(super) fn iter(&self) -> impl Iterator<Item = &TestStitchedItem<'id>> {
self.inserted_items.iter().chain(self.new_items.iter())
}
}
peg::parser! {
grammar testcase() for str {
/// Parse whitespace.
rule _ -> () = quiet! { $([' '])* {} }
/// Parse empty lines and comments.
rule newline() -> () = quiet! { (("#" [^'\n']*)? "\n")+ {} }
/// Parse an "event ID" in a test case, which may only consist of ASCII letters and numbers.
rule event_id() -> TestEventId<'input>
= quiet! { id:$([char if char.is_ascii_alphanumeric()]+) { id } }
/ expected!("event id")
/// Parse a gap in the order section.
rule gap() -> TestGap<'input>
= "-" events:event_id() ++ "," { events.into_iter().collect() }
/// Parse either an event id or a gap.
rule stitched_item() -> TestStitchedItem<'input> =
id:event_id() { TestStitchedItem::Event(id) }
/ gap:gap() { TestStitchedItem::Gap(gap) }
/// Parse an event line in the batch section, mapping an event name to zero or one prev events.
/// The prev events are merged together by [`batch()`].
rule batch_event() -> (TestEventId<'input>, Option<TestEventId<'input>>)
= id:event_id() prev:(_ "-->" _ prev:event_id() { prev })? { (id, prev) }
/// Parse the batch section of a phase.
rule batch() -> Batch<'input>
= events:batch_event() ++ newline() {
/*
Repeated event lines need to be merged together. For example,
A --> B
A --> C
represents a _single_ event `A` with two prev events, `B` and `C`.
*/
events.into_iter()
.fold(IndexMap::new(), |mut batch: Batch<'_>, (id, prev_event)| {
// Find the prev events set of this event in the batch.
// If it doesn't exist, make a new empty one.
let prev_events = batch.entry(id).or_default();
// If this event line defines a prev event to add, insert it into the set.
if let Some(prev_event) = prev_event {
prev_events.insert(prev_event);
}
batch
})
}
rule order() -> Order<'input> =
items:(item:stitched_item() new:"*"? { (item, new.is_some()) }) ** newline()
{
let (mut inserted_items, mut new_items) = (vec![], vec![]);
for (item, new) in items {
if new {
new_items.push(item);
} else {
inserted_items.push(item);
}
}
Order {
inserted_items,
new_items,
}
}
rule updated_gaps() -> HashSet<TestEventId<'input>> =
events:event_id() ++ newline() { events.into_iter().collect() }
rule phase() -> Phase<'input> =
"=== when we receive these events ==="
newline() batch:batch()
newline() "=== then we arrange into this order ==="
newline() order:order()
updated_gaps:(
newline() "=== and we notify about these gaps ==="
newline() updated_gaps:updated_gaps() { updated_gaps }
)?
{ Phase { batch, order, updated_gaps } }
pub rule testcase() -> TestCase<'input> = phase() ++ newline()
}
}
pub(super) fn parse<'input>(input: &'input str) -> TestCase<'input> {
testcase::testcase(input.trim_ascii_end()).expect("parse error")
}

@@ -0,0 +1,22 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A*
B*
C*
=== when we receive these events ===
# When it receives new ones:
D --> C
E --> D
=== then we arrange into this order ===
# Then it simply appends them at the end of the order:
A
B
C
D*
E*

@@ -0,0 +1,46 @@
=== when we receive these events ===
A1
A2 --> A1
A3 --> A2
=== then we arrange into this order ===
# Given the server has some existing events in this order:
A1*
A2*
A3*
=== when we receive these events ===
# And after a netsplit the server receives some unrelated events, which refer to
# some unknown event, because the server didn't receive all of them:
B7 --> B6
B8 --> B7
B9 --> B8
=== then we arrange into this order ===
# Then these events are new, and we add a gap to show something is missing:
A1
A2
A3
-B6*
B7*
B8*
B9*
=== when we receive these events ===
# Then if we backfill and receive more of those events later:
B4 --> B3
B5 --> B4
B6 --> B5
=== then we arrange into this order ===
# They are slotted into the gap, and a new gap is created to represent the
# still-missing events:
A1
A2
A3
-B3
B4
B5
B6
B7
B8
B9
=== and we notify about these gaps ===
B6

@@ -0,0 +1,30 @@
=== when we receive these events ===
D --> C
=== then we arrange into this order ===
# We may see situations that are ambiguous about whether an event is new or
# belongs in a gap, because it is a predecessor of a gap event and also has a
# new event as its predecessor. This is a rare case where either outcome could be
# valid. If the initial order is this:
-C*
D*
=== when we receive these events ===
# And then we receive B
B --> A
=== then we arrange into this order ===
# Which is new because it's unrelated to everything else
-C
D
-A*
B*
=== when we receive these events ===
# And later it turns out that C refers back to B
C --> B
=== then we arrange into this order ===
# Then we place C into the earlier gap even though it comes after B, so
# arguably it should be the newest
C
D
-A
B
=== and we notify about these gaps ===
C

@@ -0,0 +1,28 @@
=== when we receive these events ===
# An ambiguous situation can occur when we have multiple gaps that might each
# accept an event. This should be relatively rare.
A --> G1
B --> A
C --> G2
=== then we arrange into this order ===
-G1*
A*
B*
-G2*
C*
=== when we receive these events ===
# When we receive F, which is a predecessor of both G1 and G2
F
G1 --> F
G2 --> F
=== then we arrange into this order ===
# Then F appears in the earlier gap, but arguably it should appear later.
F
G1
A
B
G2
C
=== and we notify about these gaps ===
G1
G2

@@ -0,0 +1,10 @@
=== when we receive these events ===
# Even though we see C first, it is re-ordered because we must obey prev_events,
# so A comes first.
C --> A
A
B --> A
=== then we arrange into this order ===
A*
C*
B*

@@ -0,0 +1,8 @@
=== when we receive these events ===
A
B --> A
C --> B
=== then we arrange into this order ===
A*
B*
C*

@@ -0,0 +1,18 @@
=== when we receive these events ===
# A chain ABC
A
B --> A
C --> B
# And a separate chain XYZ
X --> W
Y --> X
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
B*
C*
-W*
X*
Y*
Z*

@@ -0,0 +1,18 @@
=== when we receive these events ===
# Same as empty_then_two_chains except for received order
# A chain ABC, and a separate chain XYZ, but interleaved
A
X --> W
B --> A
Y --> X
C --> B
Z --> Y
=== then we arrange into this order ===
# Should produce them in order with a gap
A*
-W*
X*
B*
Y*
C*
Z*

@@ -0,0 +1,33 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When we fill one with something that also refers to non-existent events
C --> X
C --> Y
G --> C
G --> Z
=== then we arrange into this order ===
# Then we fill in the gap (C) and make new gaps too (X,Y and Z)
-A
B
-X,Y
C
D
-E
F
-Z*
G*
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C

@@ -0,0 +1,13 @@
=== when we receive these events ===
# Several events refer to missing events and the events are unrelated
C --> Y
C --> Z
A --> X
B
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-Y,Z*
C*
-X*
A*
B*

@@ -0,0 +1,14 @@
=== when we receive these events ===
# Several events refer to missing events and the events are related
C --> Y
C --> Z
C --> B
A --> X
B --> A
=== then we arrange into this order ===
# The gaps appear immediately before the events referring to them
-X*
A*
B*
-Y,Z*
C*

@@ -0,0 +1,15 @@
=== when we receive these events ===
# The relationships determine the order here, so they override received order
F --> E
C --> B
D --> C
E --> D
B --> A
A
=== then we arrange into this order ===
A*
B*
C*
D*
E*
F*
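The rule this fixture exercises — prev_events win, and received order only breaks ties — can be sketched as a stable topological pass. This is an illustrative sketch, not the stitcher's implementation; it assumes the prev_events graph is acyclic and that predecessors outside the batch are simply ignored:

```rust
use std::collections::HashSet;

// Sketch of "relationships override received order": repeatedly scan the
// batch in received order and emit any event whose known predecessors have
// already been emitted. Illustrative only; assumes no prev_events cycles.
fn arrange<'a>(received: &[(&'a str, Vec<&'a str>)]) -> Vec<&'a str> {
    let known: HashSet<&str> = received.iter().map(|(id, _)| *id).collect();
    let mut placed: HashSet<&str> = HashSet::new();
    let mut out = Vec::new();
    while out.len() < received.len() {
        for (id, prevs) in received {
            let ready = prevs
                .iter()
                .all(|p| placed.contains(p) || !known.contains(p));
            if !placed.contains(id) && ready {
                placed.insert(*id);
                out.push(*id);
            }
        }
    }
    out
}
```

On the batch above (F, C, D, E, B, A with their prev_events) this produces A, B, C, D, E, F; on a batch where several events share one predecessor, the scan preserves received order among them.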

@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When the first of them is filled in
A
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
A
B
-C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
A

@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When the last gap is filled in
E
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
-C
D
E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
E

@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given 3 gaps exist
B --> A
D --> C
F --> E
=== then we arrange into this order ===
-A*
B*
-C*
D*
-E*
F*
=== when we receive these events ===
# When a middle one is filled in
C
=== then we arrange into this order ===
# Then we slot it into the gap, not at the end
-A
B
C
D
-E
F
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C

@@ -0,0 +1,29 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2
D --> X4
=== then we arrange into this order ===
-X2*
B*
-X4*
D*
=== when we receive these events ===
# And linked events that fill those in and are newer
X1
X2 --> X1
X3 --> X2
X4 --> X3
X5 --> X4
=== then we arrange into this order ===
# Then the gaps are filled, the chain extends out to the front, and X5 is
# appended as new
X1
X2
B
X3
X4
D
X5*
=== and we notify about these gaps ===
X2
X4

@@ -0,0 +1,31 @@
=== when we receive these events ===
# Given a couple of gaps
B --> X2a
D --> X3
=== then we arrange into this order ===
-X2a*
B*
-X3*
D*
=== when we receive these events ===
# When we receive a diamond that touches gaps and some new events
X1
X2a --> X1
X2b --> X1
X3 --> X2a
X3 --> X2b
X4 --> X3
=== then we arrange into this order ===
# Then matching events and direct predecessors fit into the gaps
# and other stuff is new
X1
X2a
B
X2b
X3
D
X4*
=== and we notify about these gaps ===
X2a
X3

@@ -0,0 +1,25 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*
=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap
C1
C2 --> C1
C3 --> C2
=== then we arrange into this order ===
# Then we slot the matching events into the gap
# and the later events are new
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2

@@ -0,0 +1,26 @@
=== when we receive these events ===
# Given a gap before all the Bs
B1 --> C2
B2 --> B1
=== then we arrange into this order ===
-C2*
B1*
B2*
=== when we receive these events ===
# When a batch arrives with a not-last event matching the gap, and the last
# event linked to a recent event
C1
C2 --> C1
C3 --> C2
C3 --> B2
=== then we arrange into this order ===
# Then we slot the entire batch into the gap
C1
C2
B1
B2
C3*
=== and we notify about these gaps ===
# And we notify about the gap being filled in
C2

@@ -0,0 +1,26 @@
=== when we receive these events ===
# If multiple events all refer to the same missing event:
A --> X
B --> X
C --> X
=== then we arrange into this order ===
# Then we insert gaps before all of them. This avoids the need to search the
# entire existing order whenever we create a new gap.
-X*
A*
-X*
B*
-X*
C*
=== when we receive these events ===
# The ambiguity is resolved when the missing event arrives:
X
=== then we arrange into this order ===
# We choose the earliest gap, and all the relevant gaps are removed (which does
# mean we need to search the existing order).
X
A
B
C
=== and we notify about these gaps ===
X
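The resolution step above — the earliest gap wins, and every other gap entry for the arriving event disappears — can be sketched over a flat order. The `Entry` type and `resolve` function here are illustrative stand-ins, not the stitcher's real data structures:

```rust
// Sketch: when missing event `id` arrives, slot it after the earliest gap
// that lists it, shrink every gap that mentions it, and drop gaps that
// become empty. Illustrative types only.
#[derive(Debug, PartialEq)]
enum Entry {
    Gap(Vec<String>),
    Event(String),
}

fn resolve(order: Vec<Entry>, id: &str) -> Vec<Entry> {
    let mut out = Vec::new();
    let mut filled = false;
    for entry in order {
        match entry {
            Entry::Gap(ids) if ids.iter().any(|g| g == id) => {
                let rest: Vec<String> =
                    ids.into_iter().filter(|g| g != id).collect();
                if !rest.is_empty() {
                    out.push(Entry::Gap(rest)); // shrunk gap keeps its place
                }
                if !filled {
                    out.push(Entry::Event(id.to_string())); // earliest gap wins
                    filled = true;
                }
            }
            other => out.push(other),
        }
    }
    out
}
```

This single forward pass is why resolving duplicates needs a search of the existing order, while creating the duplicate gaps in the first place does not.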

@@ -0,0 +1,29 @@
=== when we receive these events ===
# Several events refer to the same missing event, but the first refers to
# others too
A --> X
A --> Y
A --> Z
B --> X
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X,Y,Z*
A*
-X*
B*
-X*
C*
=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
-Y,Z
X
A
B
C
=== and we notify about these gaps ===
X

@@ -0,0 +1,28 @@
=== when we receive these events ===
# Several events refer to the same missing event, but one refers to others too
A --> X
B --> X
B --> Y
B --> Z
C --> X
=== then we arrange into this order ===
# We end up with multiple gaps
-X*
A*
-X,Y,Z*
B*
-X*
C*
=== when we receive these events ===
# When we receive the missing item
X
=== then we arrange into this order ===
# It goes into the earliest slot, and the non-empty gap remains
X
A
-Y,Z
B
C
=== and we notify about these gaps ===
X

@@ -0,0 +1,9 @@
=== when we receive these events ===
# A refers to multiple missing things
A --> X
A --> Y
A --> Z
=== then we arrange into this order ===
# But we only make one gap, with multiple IDs in it
-X,Y,Z*
A*

@@ -0,0 +1,23 @@
=== when we receive these events ===
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
# Given a gap that lists several nodes:
A*
-B,C,D,E*
F*
=== when we receive these events ===
# When we provide one of the missing events:
C
=== then we arrange into this order ===
# Then it is inserted after the gap, and the gap is shrunk:
A
-B,D,E
C
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
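The expected-order lines in these fixtures use a compact notation: a leading `-` marks a gap, commas separate the gap's still-missing event IDs, and a trailing `*` appears to flag entries the latest batch appended (an assumption read from the fixtures, not a documented rule). A small illustrative decoder:

```rust
// Sketch of decoding one order line, e.g. "-B,D,E" (a gap) or "F*" (a
// starred event). The Line type is illustrative, not the test harness's.
#[derive(Debug, PartialEq)]
enum Line<'a> {
    Gap { ids: Vec<&'a str>, starred: bool },
    Event { id: &'a str, starred: bool },
}

fn decode(line: &str) -> Line<'_> {
    let (body, starred) = match line.strip_suffix('*') {
        Some(rest) => (rest, true),
        None => (line, false),
    };
    match body.strip_prefix('-') {
        Some(ids) => Line::Gap { ids: ids.split(',').collect(), starred },
        None => Line::Event { id: body, starred },
    }
}
```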

@@ -0,0 +1,27 @@
=== when we receive these events ===
# Given an event references multiple missing events
A
F --> B
F --> C
F --> D
F --> E
=== then we arrange into this order ===
A*
-B,C,D,E*
F*
=== when we receive these events ===
# When we provide some of the missing events
C
E
=== then we arrange into this order ===
# Then we insert them after the gap and shrink the list of events in the gap
A
-B,D
C
E
F
=== and we notify about these gaps ===
# And we notify about the gap that was updated
C
E

@@ -0,0 +1,16 @@
=== when we receive these events ===
# Everything is after A, but there is no prev_event chain between the others, so
# we use received order.
A
F --> A
C --> A
D --> A
E --> A
B --> A
=== then we arrange into this order ===
A*
F*
C*
D*
E*
B*

@@ -0,0 +1,16 @@
=== when we receive these events ===
# We preserve the received order where it does not conflict with the prev_events
A
X --> W
Y --> X
Z --> Y
B --> A
C --> B
=== then we arrange into this order ===
A*
-W*
X*
Y*
Z*
B*
C*