Compare commits


88 Commits

Author SHA1 Message Date
Jade Ellis
c988c2b387 chore: Release 2026-03-16 16:48:53 +00:00
theS1LV3R
3121229707 docs: Update docker documentation to add /sbin/conduwuit to examples
These will likely have to be updated when !1485 goes through.

Fixes: !1529
2026-03-15 00:21:37 +00:00
Shane Jaroch
ff85145ee8 fix: missing logic inversion for acquired keys (should speed up room joins) 2026-03-13 20:54:38 -04:00
lveneris
f61d1a11e0 chore: set correct commit types for all renovate PRs 2026-03-09 21:51:21 +00:00
lveneris
11ba8979ff chore: batch non-major non-zerover cargo renovate PRs 2026-03-09 21:51:21 +00:00
Ginger
f6956ccf12 fix: Nuke all remaining references to MSC3575 in docs and code 2026-03-09 17:11:19 +00:00
Kimiblock Moe
977a5ac8c1 Enable the reloading of systemd credentials
systemd v260 introduced a new option, RefreshOnReload, which, when set to true, automatically reloads all confext and credential files. This should eliminate the need for a full restart to pick up a changed configuration.
2026-03-09 16:08:47 +00:00
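The commit above relies on systemd's credential machinery; a minimal sketch of what a unit drop-in using it might look like (the unit layout, credential name, and path here are assumptions — only the RefreshOnReload option and its v260+ behaviour come from the commit message):

```ini
# Hypothetical drop-in sketch: load the config as a systemd credential and
# let `systemctl reload` re-read it without a full service restart.
[Service]
LoadCredential=config.toml:/etc/continuwuity/config.toml
RefreshOnReload=yes
```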
timedout
906c3df953 style: Reduce migration warning verbosity to info
They aren't actually warning of anything
2026-03-09 13:30:24 +00:00
timedout
33e5fdc16f style: Reduce verbosity of fix_corrupt_msc4133_fields 2026-03-09 13:30:24 +00:00
timedout
77ac17855a fix: Don't fail on invalid stripped state entries during migration 2026-03-09 13:30:24 +00:00
timedout
65ffcd2884 perf: Insert missed migration markers into fresh databases 2026-03-09 13:30:24 +00:00
timedout
7ec88bdbfe feat: Make noise about migrations and make errors more informative 2026-03-09 13:30:24 +00:00
Ginger
da3fac8cb4 fix: Use more robust check for max_request_size 2026-03-09 13:27:39 +00:00
Trash Panda
3366113939 fix: Retrieve content_type and video width/height 2026-03-09 13:27:39 +00:00
Trash Panda
9039784f41 fix: Clippy lints 2026-03-09 13:27:39 +00:00
Trash Panda
7f165e5bbe fix: Refactor and block media downloads larger than max_request_size 2026-03-09 13:27:39 +00:00
Trash Panda
c97111e3ca fix: Update example config 2026-03-09 13:27:39 +00:00
Trash Panda
e8746760fa feat(url-preview): Optionally download audio/video files for url preview requests 2026-03-09 13:27:39 +00:00
Katie Kloss
9dbd75e740 docs: Update FreeBSD instructions 2026-03-09 13:26:57 +00:00
Renovate Bot
85b2fd91b9 chore(deps): update rust crate serde-saphyr to 0.0.21 2026-03-09 13:26:23 +00:00
Renovate Bot
6420c218a9 chore(deps): update node-patch-updates to v2.0.5 2026-03-09 12:59:58 +00:00
Renovate Bot
ec9402a328 chore(deps): update github-actions-non-major 2026-03-09 12:32:58 +00:00
Renovate Bot
d01f06a5c2 chore(deps): lock file maintenance 2026-03-09 12:32:42 +00:00
Renovate Bot
aee51b3b0d chore(deps): update docker/setup-buildx-action action to v4 2026-03-08 14:52:50 +00:00
Renovate Bot
afcbccd9dd chore(deps): update ghcr.io/renovatebot/renovate docker tag to v43 2026-03-08 13:10:56 +00:00
Renovate Bot
02448000f9 chore(deps): update dependency cargo-bins/cargo-binstall to v1.17.7 2026-03-08 12:43:37 +00:00
Renovate Bot
6af8918aa8 chore(deps): update docker/login-action action to v4 2026-03-08 12:43:26 +00:00
Renovate Bot
08f83cc438 chore(deps): update docker/build-push-action action to v7 2026-03-08 12:43:04 +00:00
Renovate Bot
a0468db121 chore(deps): update docker/metadata-action action to v6 2026-03-08 05:03:55 +00:00
Tom Foster
4f23d566ed docs(docker): Restructure deployment guide and add env var reference
Add Quick Run section with complete getting-started workflow including
admin user creation via --execute flag. Consolidate Docker Compose to
treat reverse proxy as essential with Traefik/Caddy/nginx examples.

Move detailed image building to development guide, keeping deployment
docs focused on using pre-built images.

Create environment variables reference with practical examples and
context. Clarify built-in TLS is for testing only; production should
use reverse proxies.
2026-03-07 18:28:47 +00:00
Ginger
dac619b5f8 fix: Lower "timeline for newly joined room is empty" to debug_warn
Reviewed-by: nex <me@nexy7574.co.uk>
2026-03-07 11:56:15 -05:00
stratself
fdc9cc8074 docs: small refactor of the troubleshooting page
* rename "Continuwuity and Matrix issues" to just "Continuwuity issues"
* move "Config not applying" subsection to C10y issues section
* rename "General potential issues" to just "DNS issues" - this section
  will be elaborated later in a DNS tuning page
2026-03-06 16:35:11 +00:00
timedout
40b1dabcca chore: Add news fragment 2026-03-06 14:32:13 +00:00
timedout
94c5af40cf fix: Automatically remove corrupted appservice registrations 2026-03-06 14:21:04 +00:00
Renovate Bot
36a3144757 chore(deps): update rust crate tokio to v1.50.0 2026-03-05 13:33:32 +00:00
Trash Panda
220b61c589 docs: Update prefligit references to prek 2026-03-05 13:32:22 +00:00
Ginger
38e93cde3e chore: News fragment 2026-03-04 12:51:59 -05:00
Ginger
7e501cdb09 fix: Fix left rooms always being sent on initial sync 2026-03-04 12:51:54 -05:00
Shane Jaroch
da182c162d fix(registration): discrepancy between 401 response and 500 log statement 2026-03-04 16:18:38 +00:00
aviac
9a3f7f4af7 feat(nix): always enable liburing in all builds by default 2026-03-04 15:58:15 +00:00
Skyler Mäntysaari
5ce1f682f6 docs: Update the actual doc page 2026-03-04 15:37:06 +00:00
Skyler Mäntysaari
5feb08dff2 docs: Update delete-past-remote-media example with correct flag syntax
It's not just a single `-` but rather `--`.
2026-03-04 15:37:06 +00:00
Ginger
1e527c1075 chore: Update example config 2026-03-04 10:24:16 -05:00
Trash Panda
c6943ae683 fix(pre-commit): Use default clippy toolchain to avoid cache thrashing 2026-03-04 15:10:48 +00:00
Trash Panda
8932dacdc4 fix(pre-commit): Remove unnecessary test expression 2026-03-04 15:10:48 +00:00
Trash Panda
0be3d850ac fix: Lessen complexity of test expression 2026-03-04 15:10:48 +00:00
Trash Panda
57e7cf7057 fix: Prevent clippy from running on changes that don't include rust code 2026-03-04 15:10:48 +00:00
Trash Panda
1005585ccb fix: Remove erroneous addition of pre-push stage to default_stages 2026-03-04 15:10:48 +00:00
Trash Panda
1188566dbd fix: Typo in always_run 2026-03-04 15:10:48 +00:00
Trash Panda
0058212757 chore: Add pre-push hook to run clippy 2026-03-04 15:10:48 +00:00
stratself
dbf8fd3320 docs: Add Delegation page (#1414)
Reviewed-on: https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1414
Reviewed-by: Jade Ellis <jade@ellis.link>
Reviewed-by: Jacob Taylor <aranjedeath@noreply.forgejo.ellis.link>
Co-authored-by: stratself <stratself@proton.me>
Co-committed-by: stratself <stratself@proton.me>
2026-03-04 15:10:00 +00:00
Ginger
ce295b079e chore: News fragment 2026-03-04 15:06:26 +00:00
Ben Botwin
5eb74bc1dd feat: Readded support for reading registration tokens from a file
Co-authored-by: Ginger <ginger@gingershaped.computer>
2026-03-04 15:06:26 +00:00
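The re-added option appears in the example-config diff further down; a sketch of how it might be set (the file path is a placeholder — per the config documentation, the file holds one token per line and is read once at startup):

```toml
# Tokens in this file do not supersede tokens from other sources.
registration_token_file = "/etc/continuwuity/registration_tokens"
```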
Niklas Wojtkowiak
da561ab792 fix(rooms): prevent removing admin room alias
Only the server user can now remove the #admins alias, matching the
existing check for setting the alias. This prevents users from
accidentally breaking the admin room functionality.

fixes #1408
2026-03-04 15:05:24 +00:00
Niklas Wojtkowiak
80c9bb4796 fix(rooms): prevent removing admin room alias
Only the server user can now remove the #admins alias, matching the
existing check for setting the alias. This prevents users from
accidentally breaking the admin room functionality.

fixes #1408
2026-03-04 15:05:24 +00:00
Renovate Bot
22a47d1e59 chore(deps): update pre-commit hook crate-ci/committed to v1.1.11 2026-03-04 15:05:03 +00:00
Ginger
83883a002c fix(complement): Fix complement conflicting with first-run
- Disabled first-run mode when running Complement tests
- Updated logging config under complement to be a bit less verbose
- Changed test result and log output locations
2026-03-04 15:04:37 +00:00
31a05b9c
8dd4b71e0e fix: make dropped PDU warning less useless 2026-03-04 14:58:01 +00:00
lveneris
6fe3b1563c docs: update caddy docker compose example 2026-03-04 14:57:39 +00:00
lveneris
44d3825c8e docs(config): merge backwards compatibility descriptions 2026-03-04 14:57:27 +00:00
lveneris
d6c5484c3a docs(config): use CONTINUWUITY_ environment prefix 2026-03-04 14:57:27 +00:00
Renovate Bot
1fd6056f3f chore(deps): update dependency cargo-bins/cargo-binstall to v1.17.6 2026-03-04 14:37:37 +00:00
Renovate Bot
525a0ae52b chore(deps): update node-patch-updates to v2.0.4 2026-03-04 14:35:14 +00:00
Jade Ellis
60210754d9 chore: Admin announcement 2026-03-04 09:13:41 +00:00
Renovate Bot
08dd787083 chore(deps): update pre-commit hook crate-ci/typos to v1.44.0 2026-03-04 05:03:04 +00:00
Jade Ellis
2c7233812b chore: Release 2026-03-04 00:32:43 +00:00
timedout
d725e98220 fix(ci): Special case ubuntu-latest 2026-03-03 23:07:55 +00:00
Jade Ellis
0226ca1e83 chore: Changelog for 0.5.6 2026-03-03 21:55:05 +00:00
nex
1695b6d19e fix(ci): Revert llvm-project#153385 workaround
LLVM was removed from the runner image, so this workaround (and dodgy clang manual pkg selection) is no longer necessary

Signed-off-by: Ellis Git <forgejo@mail.ellis.link>
2026-03-03 21:53:04 +00:00
Jade Ellis
c40cc3b236 chore: Release 2026-03-03 20:59:08 +00:00
Jade Ellis
754959e80d fix: Don't process admin escape commands for local users from federation
Reviewed-By: timedout <git@nexy7574.co.uk>
2026-03-03 19:55:50 +00:00
timedout
37888fb670 fix: Limit body read size of remote requests (CWE-409)
Reviewed-By: Jade Ellis <jade@ellis.link>
2026-03-03 19:54:34 +00:00
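The general mitigation pattern behind this fix — cap how many bytes are accepted from a remote response body, regardless of what the peer streams or claims — can be sketched generically in shell (an illustration of the technique, not the actual Rust implementation):

```shell
# Bounded body read (CWE-409 mitigation sketch): stop at MAX_BODY bytes
# even if the sender produces more. Here a 20-byte "response" is capped.
MAX_BODY=10
body="$(printf '%020d' 0 | head -c "$MAX_BODY")"
echo "${#body}"   # 10, not 20: the reader stops at the cap
```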
Jade Ellis
7207398a9e docs: Changelog 2026-03-03 19:39:54 +00:00
Jason Volk
1a7bda209b feat: Implement Dehydrated Devices MSC3814
Co-authored-by: Jade Ellis <jade@ellis.link>
Signed-off-by: Jason Volk <jason@zemos.net>
2026-03-03 19:39:53 +00:00
Autumn Ashton
7e1950b3d2 fix(docker): Fix building a docker container with dev profile
In Rust, the dev profile uses "debug" as the name of the output folder.
2026-03-03 19:31:04 +00:00
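The fix boils down to one special case, sketched here: cargo's `dev` profile writes its artifacts to `target/debug`, while other profiles use a directory matching their own name.

```shell
# Map a cargo profile name to its output directory under target/.
# "dev" is the one special case - cargo writes it to "debug".
profile_dir() {
  if [ "$1" = "dev" ]; then echo "debug"; else echo "$1"; fi
}
profile_dir dev      # -> debug
profile_dir release  # -> release
```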
timedout
b507898c62 fix: Bump ruwuma again 2026-03-03 18:10:28 +00:00
nexy7574
f4af67575e fix: Bump ruwuma to resolve duplicate state error 2026-03-03 06:01:02 +00:00
timedout
6adb99397e feat: Remove MSC4010 support 2026-02-27 17:03:19 +00:00
Renovate Bot
8ce83a8a14 chore(deps): update rust crate axum-extra to 0.12.0 2026-02-25 17:16:35 +00:00
Niklas Wojtkowiak
052c4dfa21 fix(sync): don't override sliding sync v5 list range start to zero 2026-02-24 13:59:33 +00:00
lynxize
a43dee1728 fix: Don't show successful media deletion as an error
Fixes !admin media delete --mxc <url> responding with an error message
when the media was deleted successfully.
2026-02-23 22:02:34 -07:00
Niklas Wojtkowiak
763d9b3de8 fixup! fix(api): restore backwards compatibility for RTC foci config 2026-02-23 18:10:25 -05:00
Niklas Wojtkowiak
1e6d95583c chore(deps): update ruwuma revision 2026-02-23 23:01:15 +00:00
Niklas Wojtkowiak
8a254a33cc fix(api): restore backwards compatibility for RTC foci config 2026-02-23 23:01:15 +00:00
Niklas Wojtkowiak
c97dd54766 chore(changelog): add news fragment for #1442 2026-02-23 23:01:15 +00:00
Niklas Wojtkowiak
8ddb7c70c0 feat(api): implement MSC4143 RTC transports discovery endpoint
Add dedicated `GET /_matrix/client/v1/rtc/transports` and `GET /_matrix/client/unstable/org.matrix.msc4143/rtc/transports` endpoints for MatrixRTC focus discovery (MSC4143), replacing the deprecated well-known approach.

Move RTC foci configuration from `[global.well_known]` into a new `[global.matrix_rtc]` config section with a `foci` field. Remove `rtc_foci` from the `.well-known/matrix/client` response. Update LiveKit setup documentation accordingly.

Closes #1431
2026-02-23 23:01:15 +00:00
Niklas Wojtkowiak
cb9786466b chore(changelog): add news fragment for #1441 2026-02-23 17:59:13 +00:00
Niklas Wojtkowiak
18d2662b01 fix(config): remove allow_public_room_directory_without_auth 2026-02-23 17:59:13 +00:00
102 changed files with 2393 additions and 814 deletions


@@ -44,7 +44,7 @@ runs:
- name: Login to builtin registry
if: ${{ env.BUILTIN_REGISTRY_ENABLED == 'true' }}
-uses: docker/login-action@v3
+uses: docker/login-action@v4
with:
registry: ${{ env.BUILTIN_REGISTRY }}
username: ${{ inputs.registry_user }}
@@ -52,7 +52,7 @@ runs:
- name: Set up Docker Buildx
if: ${{ env.BUILTIN_REGISTRY_ENABLED == 'true' }}
-uses: docker/setup-buildx-action@v3
+uses: docker/setup-buildx-action@v4
with:
# Use persistent BuildKit if BUILDKIT_ENDPOINT is set (e.g. tcp://buildkit:8125)
driver: ${{ env.BUILDKIT_ENDPOINT != '' && 'remote' || 'docker-container' }}
@@ -61,7 +61,7 @@ runs:
- name: Extract metadata (tags) for Docker
if: ${{ env.BUILTIN_REGISTRY_ENABLED == 'true' }}
id: meta
-uses: docker/metadata-action@v5
+uses: docker/metadata-action@v6
with:
flavor: |
latest=auto


@@ -67,7 +67,7 @@ runs:
uses: ./.forgejo/actions/rust-toolchain
- name: Set up Docker Buildx
-uses: docker/setup-buildx-action@v3
+uses: docker/setup-buildx-action@v4
with:
# Use persistent BuildKit if BUILDKIT_ENDPOINT is set (e.g. tcp://buildkit:8125)
driver: ${{ env.BUILDKIT_ENDPOINT != '' && 'remote' || 'docker-container' }}
@@ -79,7 +79,7 @@ runs:
- name: Login to builtin registry
if: ${{ env.BUILTIN_REGISTRY_ENABLED == 'true' }}
-uses: docker/login-action@v3
+uses: docker/login-action@v4
with:
registry: ${{ env.BUILTIN_REGISTRY }}
username: ${{ inputs.registry_user }}
@@ -87,7 +87,7 @@ runs:
- name: Extract metadata (labels, annotations) for Docker
id: meta
-uses: docker/metadata-action@v5
+uses: docker/metadata-action@v6
with:
images: ${{ inputs.images }}
# default labels & annotations: https://github.com/docker/metadata-action/blob/master/src/meta.ts#L509
@@ -152,7 +152,7 @@ runs:
- name: inject cache into docker
if: ${{ env.BUILDKIT_ENDPOINT == '' }}
-uses: https://github.com/reproducible-containers/buildkit-cache-dance@v3.3.0
+uses: https://github.com/reproducible-containers/buildkit-cache-dance@v3.3.2
with:
cache-map: |
{


@@ -30,22 +30,22 @@ jobs:
echo "version=$VERSION" >> $GITHUB_OUTPUT
echo "distribution=$DISTRIBUTION" >> $GITHUB_OUTPUT
echo "Debian distribution: $DISTRIBUTION ($VERSION)"
-- name: Work around llvm-project#153385
-id: llvm-workaround
-run: |
-if [ -f /usr/share/apt/default-sequoia.config ]; then
-echo "Applying workaround for llvm-project#153385"
-mkdir -p /etc/crypto-policies/back-ends/
-cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config
-sed -i 's/\(sha1\.second_preimage_resistance = \)2026-02-01/\12026-06-01/' /etc/crypto-policies/back-ends/apt-sequoia.config
-else
-echo "No workaround needed for llvm-project#153385"
-fi
+#- name: Work around llvm-project#153385
+# id: llvm-workaround
+# run: |
+# if [ -f /usr/share/apt/default-sequoia.config ]; then
+# echo "Applying workaround for llvm-project#153385"
+# mkdir -p /etc/crypto-policies/back-ends/
+# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config
+# sed -i 's/\(sha1\.second_preimage_resistance = \)2026-02-01/\12026-06-01/' /etc/crypto-policies/back-ends/apt-sequoia.config
+# else
+# echo "No workaround needed for llvm-project#153385"
+# fi
- name: Pick compatible clang version
id: clang-version
run: |
# both latest need to use clang-23, but oldstable and previous can just use clang
-if [[ "${{ matrix.container }}" == "ubuntu-latest" || "${{ matrix.container }}" == "debian-latest" ]]; then
+if [[ "${{ matrix.container }}" == "ubuntu-latest" ]]; then
echo "Using clang-23 package for ${{ matrix.container }}"
echo "version=clang-23" >> $GITHUB_OUTPUT
else


@@ -59,7 +59,7 @@ jobs:
registry_password: ${{ secrets.BUILTIN_REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
- name: Build and push Docker image by digest
id: build
-uses: docker/build-push-action@v6
+uses: docker/build-push-action@v7
with:
context: .
file: "docker/Dockerfile"
@@ -146,7 +146,7 @@ jobs:
registry_password: ${{ secrets.BUILTIN_REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
- name: Build and push max-perf Docker image by digest
id: build
-uses: docker/build-push-action@v6
+uses: docker/build-push-action@v7
with:
context: .
file: "docker/Dockerfile"


@@ -43,7 +43,7 @@ jobs:
name: Renovate
runs-on: ubuntu-latest
container:
-image: ghcr.io/renovatebot/renovate:42.70.2@sha256:3c2ac1b94fa92ef2fa4d1a0493f2c3ba564454720a32fdbcac2db2846ff1ee47
+image: ghcr.io/renovatebot/renovate:43.59.4@sha256:f951508dea1e7d71cbe6deca298ab0a05488e7631229304813f630cc06010892
options: --tmpfs /tmp:exec
steps:
- name: Checkout


@@ -23,7 +23,7 @@ jobs:
persist-credentials: true
token: ${{ secrets.FORGEJO_TOKEN }}
-- uses: https://github.com/cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
+- uses: https://github.com/cachix/install-nix-action@19effe9fe722874e6d46dd7182e4b8b7a43c4a99 # v31.10.0
with:
nix_path: nixpkgs=channel:nixos-unstable


@@ -1,5 +1,6 @@
default_install_hook_types:
- pre-commit
+- pre-push
- commit-msg
default_stages:
- pre-commit
@@ -23,7 +24,7 @@ repos:
- id: check-added-large-files
- repo: https://github.com/crate-ci/typos
-rev: v1.43.5
+rev: v1.44.0
hooks:
- id: typos
- id: typos
@@ -31,7 +32,7 @@ repos:
stages: [commit-msg]
- repo: https://github.com/crate-ci/committed
-rev: v1.1.10
+rev: v1.1.11
hooks:
- id: committed
@@ -45,3 +46,14 @@ repos:
pass_filenames: false
stages:
- pre-commit
+- repo: local
+hooks:
+- id: cargo-clippy
+name: cargo clippy
+entry: cargo clippy -- -D warnings
+language: system
+pass_filenames: false
+types: [rust]
+stages:
+- pre-push


@@ -1,3 +1,32 @@
+# Continuwuity 0.5.6 (2026-03-03)
+## Security
+- Admin escape commands received over federation will never be executed, as this is never valid in a genuine situation. Contributed by @Jade.
+- Fixed data amplification vulnerability (CWE-409) that affected configurations with server-side compression enabled (non-default). Contributed by @nex.
+## Features
+- Outgoing presence is now disabled by default, and the config option documentation has been adjusted to more accurately represent the weight of presence, typing indicators, and read receipts. Contributed by @nex. ([#1399](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1399))
+- Improved the concurrency handling of federation transactions, vastly improving performance and reliability by more accurately handling inbound transactions and reducing the amount of repeated wasted work. Contributed by @nex and @Jade. ([#1428](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1428))
+- Added [MSC3202](https://github.com/matrix-org/matrix-spec-proposals/pull/3202) Device masquerading (not all of MSC3202). This should fix issues with enabling [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) for some Mautrix bridges. Contributed by @Jade ([#1435](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1435))
+- Added [MSC3814](https://github.com/matrix-org/matrix-spec-proposals/pull/3814) Dehydrated Devices - you can now decrypt messages sent while all devices were logged out. ([#1436](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1436))
+- Implement [MSC4143](https://github.com/matrix-org/matrix-spec-proposals/pull/4143) MatrixRTC transport discovery endpoint. Move RTC foci configuration from `[global.well_known]` to a new `[global.matrix_rtc]` section with a `foci` field. Contributed by @0xnim ([#1442](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1442))
+- Updated `list-backups` admin command to output one backup per line. ([#1394](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1394))
+- Improved URL preview fetching with a more compatible user agent for sites like YouTube Music. Added `!admin media delete-url-preview <url>` command to clear cached URL previews that were stuck and broken. ([#1434](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1434))
+## Bugfixes
+- Removed room alias lookups over federation, which were neither spec-compliant nor functional. Contributed by @nex ([#1393](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1393))
+- Removed ability to set rocksdb as read only. Doing so would cause unintentional and buggy behaviour. Contributed by @Terryiscool160. ([#1418](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1418))
+- Fixed a startup crash in the sender service if we can't detect the number of CPU cores, even if the `sender_workers` config option is set correctly. Contributed by @katie. ([#1421](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1421))
+- Removed the `allow_public_room_directory_without_auth` config option. Contributed by @0xnim. ([#1441](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1441))
+- Fixed sliding sync v5 list ranges always starting from 0, causing extra rooms to be unnecessarily processed and returned. Contributed by @0xnim ([#1445](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1445))
+- Fixed a bug that (repairably) caused a room split between continuwuity and non-continuwuity servers when the room had both `m.room.policy` and `org.matrix.msc4284.policy` in its room state. Contributed by @nex ([#1481](https://forgejo.ellis.link/continuwuation/continuwuity/pulls/1481))
+- Fixed `!admin media delete --mxc <url>` responding with an error message when the media was deleted successfully. Contributed by @lynxize.
+- Fixed spurious 404 media errors in the logs. Contributed by @benbot.
+- Fixed spurious warnings about backfill being needed via federation for non-federated rooms. Contributed by @kraem.
# Continuwuity v0.5.5 (2026-02-15)
## Features


@@ -22,22 +22,18 @@ ### Pre-commit Checks
- Validating YAML, JSON, and TOML files
- Checking for merge conflicts
-You can run these checks locally by installing [prefligit](https://github.com/j178/prefligit):
+You can run these checks locally by installing [prek](https://github.com/j178/prek):
```bash
# Requires UV: https://docs.astral.sh/uv/getting-started/installation/
# Mac/linux: curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows: powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
-# Install prefligit using cargo-binstall
-cargo binstall prefligit
+# Install prek using cargo-binstall
+cargo binstall prek
# Install git hooks to run checks automatically
-prefligit install
+prek install
# Run all checks
-prefligit --all-files
+prek --all-files
```
Alternatively, you can use [pre-commit](https://pre-commit.com/):
@@ -54,7 +50,7 @@ # Run all checks manually
pre-commit run --all-files
```
-These same checks are run in CI via the prefligit-checks workflow to ensure consistency. These must pass before the PR is merged.
+These same checks are run in CI via the prek-checks workflow to ensure consistency. These must pass before the PR is merged.
### Running tests locally

Cargo.lock generated (533 lines changed; diff too large to display)

@@ -12,7 +12,7 @@ license = "Apache-2.0"
# See also `rust-toolchain.toml`
readme = "README.md"
repository = "https://forgejo.ellis.link/continuwuation/continuwuity"
-version = "0.5.5"
+version = "0.5.7-alpha.1"
[workspace.metadata.crane]
name = "conduwuit"
@@ -97,7 +97,7 @@ features = [
]
[workspace.dependencies.axum-extra]
-version = "0.10.1"
+version = "0.12.0"
default-features = false
features = ["typed-header", "tracing"]
@@ -144,6 +144,7 @@ features = [
"socks",
"hickory-dns",
"http2",
+"stream",
]
[workspace.dependencies.serde]
@@ -158,7 +159,7 @@ features = ["raw_value"]
# Used for appservice registration files
[workspace.dependencies.serde-saphyr]
-version = "0.0.19"
+version = "0.0.21"
# Used to load forbidden room/user regex from config
[workspace.dependencies.serde_regex]
@@ -343,7 +344,7 @@ version = "0.1.2"
[workspace.dependencies.ruma]
git = "https://forgejo.ellis.link/continuwuation/ruwuma"
#branch = "conduwuit-changes"
-rev = "e087ff15888156942ca2ffe6097d1b4c3fd27628"
+rev = "bb12ed288a31a23aa11b10ba0fad22b7f985eb88"
features = [
"compat",
"rand",
@@ -363,6 +364,7 @@ features = [
"unstable-msc2870",
"unstable-msc3026",
"unstable-msc3061",
+"unstable-msc3814",
"unstable-msc3245",
"unstable-msc3266",
"unstable-msc3381", # polls
@@ -381,6 +383,7 @@ features = [
"unstable-pdu",
"unstable-msc4155",
"unstable-msc4143", # livekit well_known response
+"unstable-msc4284"
]
[workspace.dependencies.rust-rocksdb]


@@ -6,10 +6,10 @@ set -euo pipefail
COMPLEMENT_SRC="${COMPLEMENT_SRC:-$1}"
# A `.jsonl` file to write test logs to
-LOG_FILE="${2:-complement_test_logs.jsonl}"
+LOG_FILE="${2:-tests/test_results/complement/test_logs.jsonl}"
# A `.jsonl` file to write test results to
-RESULTS_FILE="${3:-complement_test_results.jsonl}"
+RESULTS_FILE="${3:-tests/test_results/complement/test_results.jsonl}"
# The base docker image to use for complement tests
# You can build the default with `docker build -t continuwuity:complement -f ./docker/complement.Dockerfile .`


@@ -0,0 +1 @@
+Stopped left rooms from being unconditionally sent on initial sync, hopefully fixing spurious appearances of left rooms in some clients (and making sync faster as a bonus). Contributed by @ginger

changelog.d/1265.bugfix (new file)

@@ -0,0 +1 @@
+Fixed corrupted appservice registrations causing the server to enter a crash loop. Contributed by @nex.


@@ -0,0 +1 @@
+Re-added support for reading registration tokens from a file. Contributed by @ginger and @benbot.


@@ -1 +0,0 @@
-Removed non-compliant nor functional room alias lookups over federation. Contributed by @nex


@@ -1 +0,0 @@
-Outgoing presence is now disabled by default, and the config option documentation has been adjusted to more accurately represent the weight of presence, typing indicators, and read receipts. Contributed by @nex.


@@ -1 +0,0 @@
-Removed ability to set rocksdb as read only. Doing so would cause unintentional and buggy behaviour. Contributed by @Terryiscool160.


@@ -1 +0,0 @@
-Fixed a startup crash in the sender service if we can't detect the number of CPU cores, even if the `sender_workers' config option is set correctly. Contributed by @katie.


@@ -1 +0,0 @@
-Improved the concurrency handling of federation transactions, vastly improving performance and reliability by more accurately handling inbound transactions and reducing the amount of repeated wasted work. Contributed by @nex and @Jade.


@@ -1 +0,0 @@
-Added MSC3202 Device masquerading (not all of MSC3202). This should fix issues with enabling MSC4190 for some Mautrix bridges. Contributed by @Jade

changelog.d/1448.bugfix (new file)

@@ -0,0 +1 @@
+Prevent removing the admin room alias (`#admins`) to avoid accidentally breaking admin room functionality. Contributed by @0xnim


@@ -1 +0,0 @@
-Updated `list-backups` admin command to output one backup per line.


@@ -1 +0,0 @@
-Improved URL preview fetching with a more compatible user agent for sites like YouTube Music. Added `!admin media delete-url-preview <url>` command to clear cached URL previews that were stuck and broken.


@@ -15,6 +15,18 @@ disallowed-macros = [
{ path = "log::trace", reason = "use conduwuit_core::trace" },
]
-disallowed-methods = [
-{ path = "tokio::spawn", reason = "use and pass conduuwit_core::server::Server::runtime() to spawn from" },
-]
+[[disallowed-methods]]
+path = "tokio::spawn"
+reason = "use and pass conduwuit_core::server::Server::runtime() to spawn from"
+[[disallowed-methods]]
+path = "reqwest::Response::bytes"
+reason = "bytes is unsafe, use limit_read via the conduwuit_core::utils::LimitReadExt trait instead"
+[[disallowed-methods]]
+path = "reqwest::Response::text"
+reason = "text is unsafe, use limit_read_text via the conduwuit_core::utils::LimitReadExt trait instead"
+[[disallowed-methods]]
+path = "reqwest::Response::json"
+reason = "json is unsafe, use limit_read_text via the conduwuit_core::utils::LimitReadExt trait instead"


@@ -9,10 +9,9 @@ address = "0.0.0.0"
allow_device_name_federation = true
allow_guest_registration = true
allow_public_room_directory_over_federation = true
-allow_public_room_directory_without_auth = true
allow_registration = true
database_path = "/database"
-log = "trace,h2=debug,hyper=debug"
+log = "trace,h2=debug,hyper=debug,conduwuit_database=warn,conduwuit_service::manager=info,conduwuit_api::router=error,conduwuit_router=error,tower_http=error"
port = [8008, 8448]
trusted_servers = []
only_query_trusted_key_servers = false
@@ -25,7 +24,7 @@ url_preview_domain_explicit_denylist = ["*"]
media_compat_file_link = false
media_startup_check = true
prune_missing_media = true
-log_colors = true
+log_colors = false
admin_room_notices = false
allow_check_for_updates = false
intentionally_unknown_config_option_for_testing = true
@@ -48,6 +47,7 @@ federation_idle_timeout = 300
sender_timeout = 300
sender_idle_timeout = 300
sender_retry_backoff_limit = 300
+force_disable_first_run_mode = true
[global.tls]
dual_protocol = true


@@ -476,18 +476,25 @@
#yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse = false
# A static registration token that new users will have to provide when
-# creating an account. If unset and `allow_registration` is true,
-# you must set
-# `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse`
-# to true to allow open registration without any conditions.
-#
-# If you do not want to set a static token, the `!admin token` commands
-# may also be used to manage registration tokens.
+# creating an account. This token does not supersede tokens from other
+# sources, such as the `!admin token` command or the
+# `registration_token_file` configuration option.
#
# example: "o&^uCtes4HPf0Vu@F20jQeeWE7"
#
#registration_token =
+# A path to a file containing static registration tokens, one per line.
+# Tokens in this file do not supersede tokens from other sources, such as
+# the `!admin token` command or the `registration_token` configuration
+# option.
+#
+# The file will be read once, when Continuwuity starts. It is not
+# currently reread when the server configuration is reloaded. If the file
+# cannot be read, Continuwuity will fail to start.
+#
+#registration_token_file =
# The public site key for reCaptcha. If this is provided, reCaptcha
# becomes required during registration. If both captcha *and*
# registration token are enabled, both will be required during
@@ -546,12 +553,6 @@
#
#allow_public_room_directory_over_federation = false
-# Set this to true to allow your server's public room directory to be
-# queried without client authentication (access token) through the Client
-# APIs. Set this to false to protect against /publicRooms spiders.
-#
-#allow_public_room_directory_without_auth = false
# Allow guests/unauthenticated users to access TURN credentials.
#
# This is the equivalent of Synapse's `turn_allow_guests` config option.
@@ -1504,6 +1505,11 @@
#
#url_preview_user_agent = "continuwuity/<version> (bot; +https://continuwuity.org)"
+# Determines whether audio and video files will be downloaded for URL
+# previews.
+#
+#url_preview_allow_audio_video = false
# List of forbidden room aliases and room IDs as strings of regex
# patterns.
#
@@ -1850,14 +1856,13 @@
#
#support_mxid =
# A list of MatrixRTC foci URLs which will be served as part of the
# MSC4143 client endpoint at /.well-known/matrix/client. If you're
# setting up livekit, you'd want something like:
# rtc_focus_server_urls = [
# { type = "livekit", livekit_service_url = "https://livekit.example.com" },
# ]
# **DEPRECATED**: Use `[global.matrix_rtc].foci` instead.
#
# To disable, set this to be an empty vector (`[]`).
# A list of MatrixRTC foci URLs which will be served as part of the
# MSC4143 client endpoint at /.well-known/matrix/client.
#
# This option is deprecated and will be removed in a future release.
# Please migrate to the new `[global.matrix_rtc]` config section.
#
#rtc_focus_server_urls = []
@@ -1879,6 +1884,23 @@
#
#blurhash_max_raw_size = 33554432
+[global.matrix_rtc]
+# A list of MatrixRTC foci (transports) which will be served via the
+# MSC4143 RTC transports endpoint at
+# `/_matrix/client/v1/rtc/transports`. If you're setting up livekit,
+# you'd want something like:
+# ```toml
+# [global.matrix_rtc]
+# foci = [
+# { type = "livekit", livekit_service_url = "https://livekit.example.com" },
+# ]
+# ```
+#
+# To disable, set this to an empty list (`[]`).
+#
+#foci = []
[global.ldap]
# Whether to enable LDAP login.

View File

@@ -48,7 +48,7 @@ EOF
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.17.5
ENV BINSTALL_VERSION=1.17.7
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree
@@ -180,6 +180,11 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
export RUSTFLAGS="${RUSTFLAGS}"
fi
RUST_PROFILE_DIR="${RUST_PROFILE}"
if [[ "${RUST_PROFILE}" == "dev" ]]; then
RUST_PROFILE_DIR="debug"
fi
TARGET_DIR=($(cargo metadata --no-deps --format-version 1 | \
jq -r ".target_directory"))
mkdir /out/sbin
@@ -191,8 +196,8 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
jq -r ".packages[] | select(.name == \"$PACKAGE\") | .targets[] | select( .kind | map(. == \"bin\") | any ) | .name"))
for BINARY in "${BINARIES[@]}"; do
echo $BINARY
xx-verify $TARGET_DIR/$(xx-cargo --print-target-triple)/${RUST_PROFILE}/$BINARY
cp $TARGET_DIR/$(xx-cargo --print-target-triple)/${RUST_PROFILE}/$BINARY /out/sbin/$BINARY
xx-verify $TARGET_DIR/$(xx-cargo --print-target-triple)/${RUST_PROFILE_DIR}/$BINARY
cp $TARGET_DIR/$(xx-cargo --print-target-triple)/${RUST_PROFILE_DIR}/$BINARY /out/sbin/$BINARY
done
EOF

View File

@@ -18,7 +18,7 @@ RUN --mount=type=cache,target=/etc/apk/cache apk add \
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.17.5
ENV BINSTALL_VERSION=1.17.7
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree

View File

@@ -34,6 +34,11 @@
"name": "troubleshooting",
"label": "Troubleshooting"
},
{
"type": "dir",
"name": "advanced",
"label": "Advanced"
},
"security",
{
"type": "dir-section-header",

View File

@@ -2,7 +2,7 @@
{
"text": "Guide",
"link": "/introduction",
"activeMatch": "^/(introduction|configuration|deploying|calls|appservices|maintenance|troubleshooting)"
"activeMatch": "^/(introduction|configuration|deploying|calls|appservices|maintenance|troubleshooting|advanced)"
},
{
"text": "Development",

7
docs/advanced/_meta.json Normal file
View File

@@ -0,0 +1,7 @@
[
{
"type": "file",
"name": "delegation",
"label": "Delegation / split-domain"
}
]

View File

@@ -0,0 +1,206 @@
# Delegation/split-domain deployment
Matrix allows clients and servers to discover a homeserver's "true" destination via **`.well-known` delegation**. This is especially useful if you would like to:
- Serve Continuwuity on a subdomain while having only the base domain for your usernames
- Use a port other than `:8448` for server-to-server connections
This guide will show you how to have `@user:example.com` usernames while serving Continuwuity on `https://matrix.example.com`. It assumes you are using port 443 for both client-to-server connections and server-to-server federation.
## Configuration
First, ensure you have set up A/AAAA records for `matrix.example.com` and `example.com` pointing to your IP.
Then, ensure that the `server_name` field matches your intended username suffix. If this is not the case, you **MUST** wipe the database directory and reinstall Continuwuity with your desired `server_name`.
Next, add the following fields to the `[global.well_known]` section of your config file:
```toml
[global.well_known]
client = "https://matrix.example.com"
# port number MUST be specified
server = "matrix.example.com:443"
# (optional) customize your support contacts
#support_page =
#support_role = "m.role.admin"
#support_email =
#support_mxid = "@user:example.com"
```
Alternatively, if you are using Docker, you can set the `CONTINUWUITY_WELL_KNOWN` environment variable as shown below:
```yaml
services:
continuwuity:
...
environment:
CONTINUWUITY_WELL_KNOWN: |
{
client=https://matrix.example.com,
server=matrix.example.com:443
}
```
## Serving with a reverse proxy
After completing the steps above, Continuwuity will serve these three JSON files:
- `/.well-known/matrix/client`: for Client-Server discovery
- `/.well-known/matrix/server`: for Server-Server (federation) discovery
- `/.well-known/matrix/support`: admin contact details (strongly recommended)
To enable full discovery, you will need to reverse proxy these paths from the base domain back to Continuwuity.
<details>
<summary>For Caddy</summary>
```
matrix.example.com:443 {
reverse_proxy 127.0.0.1:8008
}
example.com:443 {
reverse_proxy /.well-known/matrix* 127.0.0.1:8008
}
```
</details>
<details>
<summary>For Traefik (via Docker labels)</summary>
```
services:
continuwuity:
...
labels:
- "traefik.enable=true"
- "traefik.http.routers.continuwuity.rule=(Host(`matrix.example.com`) || (Host(`example.com`) && PathPrefix(`/.well-known/matrix`)))"
- "traefik.http.routers.continuwuity.service=continuwuity"
- "traefik.http.services.continuwuity.loadbalancer.server.port=8008"
```
</details>
Restart Continuwuity and your reverse proxy. Once that's done, visit these routes and check that the responses match the examples below:
<details open>
<summary>`https://example.com/.well-known/matrix/server`</summary>
```json
{
"m.server": "matrix.example.com:443"
}
```
</details>
<details open>
<summary>`https://example.com/.well-known/matrix/client`</summary>
```json
{
"m.homeserver": {
"base_url": "https://matrix.example.com/"
}
}
```
</details>
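To compare the live responses against the examples above, `curl` works well. The domain below is the placeholder used throughout this guide; substitute your own base domain:

```shell
# These should return the JSON documents shown above.
curl -s https://example.com/.well-known/matrix/server
curl -s https://example.com/.well-known/matrix/client
```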
## Troubleshooting
### Cannot log in with web clients
Make sure there is an `Access-Control-Allow-Origin: *` header in your `/.well-known/matrix/client` path. While Continuwuity serves this header by default, it may be dropped by reverse proxies or other middlewares.
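If you suspect the header is being dropped, you can inspect it directly. This assumes your base domain is `example.com`, as in the rest of this guide:

```shell
# Print only the response headers; a healthy setup includes
# "access-control-allow-origin: *" in the output.
curl -s -D - -o /dev/null https://example.com/.well-known/matrix/client \
  | grep -i 'access-control-allow-origin'
```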
---
## Using SRV records (not recommended)
:::warning
The following methods are **not recommended** due to increased complexity with little benefit. If you have already set up `.well-known` delegation as above, you can safely skip this part.
:::
The following methods use SRV DNS records and apply only to federation traffic. They are included for completeness.
<details>
<summary>Using only SRV records</summary>
If you can't serve `/.well-known/matrix/server` on :443 for some reason, you can set up an SRV record (via your DNS provider) as below:
- Service and name: `_matrix-fed._tcp.example.com.`
- Priority: `10` (can be any number)
- Weight: `10` (can be any number)
- Port: `443`
- Target: `matrix.example.com.`
On the target's IP at port 443, you must configure a valid route and cert for your server name, `example.com`. This method therefore only redirects traffic to the right IP/port combination; it cannot delegate your federation to a different domain.
</details>
<details>
<summary>Using SRV records + .well-known</summary>
You can also set up `/.well-known/matrix/server` with a delegated domain but no ports:
```toml
[global.well_known]
server = "matrix.example.com"
```
Then, set up an SRV record (via your DNS provider) to announce the port number as below:
- Service and name: `_matrix-fed._tcp.matrix.example.com.`
- Priority: `10` (can be any number)
- Weight: `10` (can be any number)
- Port: `443`
- Target: `matrix.example.com.`
On the target's IP at port 443, you'll need to provide a valid route and cert for `matrix.example.com`. This achieves the same result as pure `.well-known` delegation, albeit with more moving parts.
</details>
<details>
<summary>Using SRV records as a fallback for .well-known delegation</summary>
Assume your delegation is as below:
```toml
[global.well_known]
server = "example.com:443"
```
If your Continuwuity instance becomes temporarily unreachable, other servers will not be able to fetch your `/.well-known/matrix/server` file and will fall back to `server_name:8448`. This incorrect result can be cached for a long time and would hinder re-federation once your server comes back online.
If you want other servers to default to port :443 even while your server is offline, you can set up an SRV record (via your DNS provider) as follows:
- Service and name: `_matrix-fed._tcp.example.com.`
- Priority: `10` (can be any number)
- Weight: `10` (can be any number)
- Port: `443`
- Target: `example.com.`
On the target's IP at port 443, you'll need to provide a valid route and cert for `example.com`.
</details>
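Whichever SRV variant you choose, you can confirm the record has propagated with `dig`. The name below matches the fallback example; adjust it for the other variants:

```shell
# Query the federation SRV record; the answer lists
# priority, weight, port, and target in that order.
dig +short SRV _matrix-fed._tcp.example.com
```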
---
## Related Documentation
See the following Matrix Specs for full details on client/server resolution mechanisms:
- [Server-to-Server resolution](https://spec.matrix.org/v1.17/server-server-api/#resolving-server-names) (see this for more information on SRV records)
- [Client-to-Server resolution](https://spec.matrix.org/v1.17/client-server-api/#server-discovery)
- [MSC1929: Homeserver Admin Contact and Support page](https://github.com/matrix-org/matrix-spec-proposals/pull/1929)

View File

@@ -78,47 +78,19 @@ #### Firewall hints
### 3. Telling clients where to find LiveKit
To tell clients where to find LiveKit, you need to add the address of your `lk-jwt-service` to your client .well-known file. To do so, in the config section `global.well-known`, add (or modify) the option `rtc_focus_server_urls`.
To tell clients where to find LiveKit, you need to add the address of your `lk-jwt-service` to the `[global.matrix_rtc]` config section using the `foci` option.
The variable should be a list of servers serving as MatrixRTC endpoints to serve in the well-known file to the client.
The variable should be a list of servers serving as MatrixRTC endpoints. Clients discover these via the `/_matrix/client/v1/rtc/transports` endpoint (MSC4143).
```toml
rtc_focus_server_urls = [
[global.matrix_rtc]
foci = [
{ type = "livekit", livekit_service_url = "https://livekit.example.com" },
]
```
Remember to replace the URL with the address you are deploying your instance of lk-jwt-service to.
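To confirm what clients will see, the transports endpoint can be queried directly. This sketch assumes your homeserver is reachable at `matrix.example.com` and that `$TOKEN` holds a valid client access token:

```shell
# Query the MatrixRTC transports endpoint as a client would;
# $TOKEN is assumed to hold a valid access token.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://matrix.example.com/_matrix/client/v1/rtc/transports"
```

The response should list the same foci you configured above.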
#### Serving .well-known manually
If you don't let Continuwuity serve your `.well-known` files, you need to add the following lines to your `.well-known/matrix/client` file, remembering to replace the URL with your own `lk-jwt-service` deployment:
```json
"org.matrix.msc4143.rtc_foci": [
{
"type": "livekit",
"livekit_service_url": "https://livekit.example.com"
}
]
```
The final file should look something like this:
```json
{
"m.homeserver": {
"base_url":"https://matrix.example.com"
},
"org.matrix.msc4143.rtc_foci": [
{
"type": "livekit",
"livekit_service_url": "https://livekit.example.com"
}
]
}
```
### 4. Configure your Reverse Proxy
Reverse proxies can be configured in many different ways, so we can't provide a step-by-step guide for this.

View File

@@ -13,8 +13,9 @@ ## Basics
The config file to use can be specified on the commandline when running
Continuwuity by specifying the `-c`, `--config` flag. Alternatively, you can use
the environment variable `CONDUWUIT_CONFIG` to specify the config file to used.
Conduit's environment variables are supported for backwards compatibility.
the environment variable `CONTINUWUITY_CONFIG` to specify the config file to be
used; see [the section on environment variables](#environment-variables) for
more information.
## Option commandline flag
@@ -52,13 +53,15 @@ ## Environment variables
All of the settings that are found in the config file can be specified by using
environment variables. The environment variable names should be all caps and
prefixed with `CONDUWUIT_`.
prefixed with `CONTINUWUITY_`.
For example, if the setting you are changing is `max_request_size`, then the
environment variable to set is `CONDUWUIT_MAX_REQUEST_SIZE`.
environment variable to set is `CONTINUWUITY_MAX_REQUEST_SIZE`.
To modify config options not in the `[global]` context such as
`[global.well_known]`, use the `__` suffix split: `CONDUWUIT_WELL_KNOWN__SERVER`
`[global.well_known]`, use the `__` suffix split:
`CONTINUWUITY_WELL_KNOWN__SERVER`
Conduit's environment variables are supported for backwards compatibility (e.g.
Conduit and conduwuit's environment variables are also supported for backwards
compatibility, via the `CONDUIT_` and `CONDUWUIT_` prefixes respectively (e.g.
`CONDUIT_SERVER_NAME`).
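As a sketch of how the prefixes and the `__` split combine (the values here are placeholders):

```shell
# Top-level option: max_request_size -> CONTINUWUITY_MAX_REQUEST_SIZE
export CONTINUWUITY_MAX_REQUEST_SIZE=20000000
# Nested option: [global.well_known] server -> CONTINUWUITY_WELL_KNOWN__SERVER
export CONTINUWUITY_WELL_KNOWN__SERVER=matrix.example.com:443
# Legacy prefixes are still accepted for backwards compatibility
export CONDUIT_SERVER_NAME=example.com
```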

View File

@@ -6,6 +6,7 @@ services:
### then you are ready to go.
image: forgejo.ellis.link/continuwuation/continuwuity:latest
restart: unless-stopped
command: /sbin/conduwuit
volumes:
- db:/var/lib/continuwuity
#- ./continuwuity.toml:/etc/continuwuity.toml

View File

@@ -16,14 +16,14 @@ services:
restart: unless-stopped
labels:
caddy: example.com
caddy.0_respond: /.well-known/matrix/server {"m.server":"matrix.example.com:443"}
caddy.1_respond: /.well-known/matrix/client {"m.server":{"base_url":"https://matrix.example.com"},"m.homeserver":{"base_url":"https://matrix.example.com"},"org.matrix.msc3575.proxy":{"url":"https://matrix.example.com"}}
caddy.reverse_proxy: /.well-known/matrix/* homeserver:6167
homeserver:
### If you already built the Continuwuity image with 'docker build' or want to use a registry image,
### then you are ready to go.
image: forgejo.ellis.link/continuwuation/continuwuity:latest
restart: unless-stopped
command: /sbin/conduwuit
volumes:
- db:/var/lib/continuwuity
- /etc/resolv.conf:/etc/resolv.conf:ro # Use the host's DNS resolver rather than Docker's.
@@ -42,6 +42,10 @@ services:
#CONTINUWUITY_LOG: warn,state_res=warn
CONTINUWUITY_ADDRESS: 0.0.0.0
#CONTINUWUITY_CONFIG: '/etc/continuwuity.toml' # Uncomment if you mapped config toml above
# Required for .well-known delegation - edit these according to your chosen domain
CONTINUWUITY_WELL_KNOWN__CLIENT: https://matrix.example.com
CONTINUWUITY_WELL_KNOWN__SERVER: matrix.example.com:443
networks:
- caddy
labels:

View File

@@ -6,6 +6,7 @@ services:
### then you are ready to go.
image: forgejo.ellis.link/continuwuation/continuwuity:latest
restart: unless-stopped
command: /sbin/conduwuit
volumes:
- db:/var/lib/continuwuity
- /etc/resolv.conf:/etc/resolv.conf:ro # Use the host's DNS resolver rather than Docker's.

View File

@@ -6,6 +6,7 @@ services:
### then you are ready to go.
image: forgejo.ellis.link/continuwuation/continuwuity:latest
restart: unless-stopped
command: /sbin/conduwuit
ports:
- 8448:6167
volumes:

View File

@@ -78,7 +78,7 @@ #### 2. Start the server with initial admin user
-e CONTINUWUITY_ALLOW_REGISTRATION="false" \
--name continuwuity \
forgejo.ellis.link/continuwuation/continuwuity:latest \
--execute "users create-user admin"
/sbin/conduwuit --execute "users create-user admin"
```
Replace `matrix.example.com` with your actual server name and `admin` with
@@ -141,7 +141,7 @@ #### Creating Your First Admin User
services:
continuwuity:
image: forgejo.ellis.link/continuwuation/continuwuity:latest
command: --execute "users create-user admin"
command: /sbin/conduwuit --execute "users create-user admin"
# ... rest of configuration
```

View File

@@ -1,7 +1,7 @@
# Continuwuity for FreeBSD
Continuwuity currently does not provide FreeBSD builds or FreeBSD packaging. However, Continuwuity does build and work on FreeBSD using the system-provided RocksDB.
Continuwuity doesn't provide official FreeBSD packages; however, a community-maintained set of packages is available on [Forgejo](https://forgejo.ellis.link/katie/continuwuity-bsd). Note that these are provided as standalone packages and are not part of a FreeBSD package repository (yet), so updates need to be downloaded and installed manually.
Contributions to get Continuwuity packaged for FreeBSD are welcome.
Please see the installation instructions in that repository. Direct any questions to its issue tracker or to [@katie:kat5.dev](https://matrix.to/#/@katie:kat5.dev).
Please join our [Continuwuity BSD](https://matrix.to/#/%23bsd:continuwuity.org) community room.
For general BSD support, please join our [Continuwuity BSD](https://matrix.to/#/%23bsd:continuwuity.org) community room.

View File

@@ -39,6 +39,7 @@ # Continuwuity for Kubernetes
- name: continuwuity
# use a sha hash <3
image: forgejo.ellis.link/continuwuation/continuwuity:latest
command: ["/sbin/conduwuit"]
imagePullPolicy: IfNotPresent
ports:
- name: http

View File

@@ -6,10 +6,10 @@
"message": "Welcome to Continuwuity! Important announcements about the project will appear here."
},
{
"id": 9,
"id": 10,
"mention_room": false,
"date": "2026-02-09",
"message": "Yesterday we released [v0.5.4](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.4). Bugfixes, performance improvements and more moderation features! There's also a security fix, so please update as soon as possible. Don't forget to join [our announcements channel](https://matrix.to/#/!jIdNjSM5X-V5JVx2h2kAhUZIIQ08GyzPL55NFZAH1vM/%2489TY9CqRg4-ff1MGo3Ulc5r5X4pakfdzT-99RD8Docc?via=ellis.link&via=explodie.org&via=matrix.org) to get important information sooner <3 "
"date": "2026-03-03",
"message": "We've just released [v0.5.6](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.6), which contains a few security improvements - plus significant reliability and performance improvements. Please update as soon as possible. \n\nWe released [v0.5.5](https://forgejo.ellis.link/continuwuation/continuwuity/releases/tag/v0.5.5) two weeks ago, but it skipped your admin room straight to [our announcements channel](https://matrix.to/#/!jIdNjSM5X-V5JVx2h2kAhUZIIQ08GyzPL55NFZAH1vM?via=ellis.link&via=gingershaped.computer&via=matrix.org). Make sure you're there to get important information as soon as we announce it! [Our space](https://matrix.to/#/!8cR4g-i9ucof69E4JHNg9LbPVkGprHb3SzcrGBDDJgk?via=continuwuity.org&via=ellis.link&via=matrix.org) has also gained a bunch of new and interesting rooms - be there or be square."
}
]
}

View File

@@ -1 +1 @@
{"m.homeserver":{"base_url": "https://matrix.continuwuity.org"},"org.matrix.msc3575.proxy":{"url": "https://matrix.continuwuity.org"},"org.matrix.msc4143.rtc_foci":[{"type":"livekit","livekit_service_url":"https://livekit.ellis.link"}]}
{"m.homeserver":{"base_url": "https://matrix.continuwuity.org"},"org.matrix.msc4143.rtc_foci":[{"type":"livekit","livekit_service_url":"https://livekit.ellis.link"}]}

View File

@@ -27,7 +27,7 @@ ## `!admin media delete-past-remote-media`
* Delete all remote and local media from 3 days ago, up until now:
`!admin media delete-past-remote-media -a 3d
-yes-i-want-to-delete-local-media`
--yes-i-want-to-delete-local-media`
## `!admin media delete-all-from-user`

View File

@@ -6,7 +6,7 @@ # Troubleshooting Continuwuity
Please check that your issues are not due to problems with your Docker setup.
:::
## Continuwuity and Matrix issues
## Continuwuity issues
### Slow joins to rooms
@@ -23,6 +23,16 @@ ### Slow joins to rooms
the bug caused your homeserver to forget to tell your client. **To fix this, clear your client's cache.** Both Element and Cinny
have a button to clear their cache in the "About" section of their settings.
### Configuration not working as expected
Sometimes you can make a mistake in your configuration that
means things don't get passed to Continuwuity correctly.
This is particularly easy to do with environment variables.
To check what configuration Continuwuity actually sees, you can
use the `!admin server show-config` command in your admin room.
Beware that this prints out any secrets in your configuration,
so you might want to delete the result afterwards!
### Lost access to admin room
You can reinvite yourself to the admin room through the following methods:
@@ -33,17 +43,7 @@ ### Lost access to admin room
- Or specify the `emergency_password` config option to allow you to temporarily
log into the server account (`@conduit`) from a web client
## General potential issues
### Configuration not working as expected
Sometimes you can make a mistake in your configuration that
means things don't get passed to Continuwuity correctly.
This is particularly easy to do with environment variables.
To check what configuration Continuwuity actually sees, you can
use the `!admin server show-config` command in your admin room.
Beware that this prints out any secrets in your configuration,
so you might want to delete the result afterwards!
## DNS issues
### Potential DNS issues when using Docker

54
flake.lock generated
View File

@@ -3,11 +3,11 @@
"advisory-db": {
"flake": false,
"locked": {
"lastModified": 1766324728,
"narHash": "sha256-9C+WyE5U3y5w4WQXxmb0ylRyMMsPyzxielWXSHrcDpE=",
"lastModified": 1772776993,
"narHash": "sha256-CpBa+UpogN0Xn1gMmgqQrzKGee+E8TCkgHar8/w6CRk=",
"owner": "rustsec",
"repo": "advisory-db",
"rev": "c88b88c62bda077be8aa621d4e89d8701e39cb5d",
"rev": "b3472341e37cbd4b8c27b052b2abb34792f4d3c4",
"type": "github"
},
"original": {
@@ -18,11 +18,11 @@
},
"crane": {
"locked": {
"lastModified": 1766194365,
"narHash": "sha256-4AFsUZ0kl6MXSm4BaQgItD0VGlEKR3iq7gIaL7TjBvc=",
"lastModified": 1772560058,
"narHash": "sha256-NuVKdMBJldwUXgghYpzIWJdfeB7ccsu1CC7B+NfSoZ8=",
"owner": "ipetkov",
"repo": "crane",
"rev": "7d8ec2c71771937ab99790b45e6d9b93d15d9379",
"rev": "db590d9286ed5ce22017541e36132eab4e8b3045",
"type": "github"
},
"original": {
@@ -39,11 +39,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1766299592,
"narHash": "sha256-7u+q5hexu2eAxL2VjhskHvaUKg+GexmelIR2ve9Nbb4=",
"lastModified": 1772953398,
"narHash": "sha256-fTTHCaEvPLzWyZFxPud/G9HM3pNYmW/64Kj58hdH4+k=",
"owner": "nix-community",
"repo": "fenix",
"rev": "381579dee168d5ced412e2990e9637ecc7cf1c5d",
"rev": "fc4863887d98fd879cf5f11af1d23d44d9bdd8ae",
"type": "github"
},
"original": {
@@ -55,11 +55,11 @@
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1765121682,
"narHash": "sha256-4VBOP18BFeiPkyhy9o4ssBNQEvfvv1kXkasAYd0+rrA=",
"lastModified": 1767039857,
"narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "65f23138d8d09a92e30f1e5c87611b23ef451bf3",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
@@ -74,11 +74,11 @@
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1765835352,
"narHash": "sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw=",
"lastModified": 1772408722,
"narHash": "sha256-rHuJtdcOjK7rAHpHphUb1iCvgkU3GpfvicLMwwnfMT0=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "a34fae9c08a15ad73f295041fec82323541400a9",
"rev": "f20dc5d9b8027381c474144ecabc9034d6a839a3",
"type": "github"
},
"original": {
@@ -89,11 +89,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1766070988,
"narHash": "sha256-G/WVghka6c4bAzMhTwT2vjLccg/awmHkdKSd2JrycLc=",
"lastModified": 1772773019,
"narHash": "sha256-E1bxHxNKfDoQUuvriG71+f+s/NT0qWkImXsYZNFFfCs=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "c6245e83d836d0433170a16eb185cefe0572f8b8",
"rev": "aca4d95fce4914b3892661bcb80b8087293536c6",
"type": "github"
},
"original": {
@@ -105,11 +105,11 @@
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1765674936,
"narHash": "sha256-k00uTP4JNfmejrCLJOwdObYC9jHRrr/5M/a/8L2EIdo=",
"lastModified": 1772328832,
"narHash": "sha256-e+/T/pmEkLP6BHhYjx6GmwP5ivonQQn0bJdH9YrRB+Q=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "2075416fcb47225d9b68ac469a5c4801a9c4dd85",
"rev": "c185c7a5e5dd8f9add5b2f8ebeff00888b070742",
"type": "github"
},
"original": {
@@ -132,11 +132,11 @@
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1766253897,
"narHash": "sha256-ChK07B1aOlJ4QzWXpJo+y8IGAxp1V9yQ2YloJ+RgHRw=",
"lastModified": 1772877513,
"narHash": "sha256-RcRGv2Bng5I9y75XwFX7oK2l6mLH1dtbTTG9U8qun0c=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "765b7bdb432b3740f2d564afccfae831d5a972e4",
"rev": "a1b86d600f88be98643e5dd61d6ed26eda17c09e",
"type": "github"
},
"original": {
@@ -153,11 +153,11 @@
]
},
"locked": {
"lastModified": 1766000401,
"narHash": "sha256-+cqN4PJz9y0JQXfAK5J1drd0U05D5fcAGhzhfVrDlsI=",
"lastModified": 1772660329,
"narHash": "sha256-IjU1FxYqm+VDe5qIOxoW+pISBlGvVApRjiw/Y/ttJzY=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "42d96e75aa56a3f70cab7e7dc4a32868db28e8fd",
"rev": "3710e0e1218041bbad640352a0440114b1e10428",
"type": "github"
},
"original": {

View File

@@ -12,7 +12,6 @@
rocksdbAllFeatures = self'.packages.rocksdb.override {
enableJemalloc = true;
enableLiburing = true;
};
commonAttrs = (uwulib.build.commonAttrs { }) // {

View File

@@ -27,7 +27,6 @@
commonAttrsArgs.profile = "release";
rocksdb = self'.packages.rocksdb.override {
enableJemalloc = true;
enableLiburing = true;
};
features = {
enabledFeatures = "all";

View File

@@ -7,7 +7,6 @@
rust-jemalloc-sys-unprefixed,
enableJemalloc ? false,
enableLiburing ? false,
fetchFromGitea,
@@ -32,7 +31,7 @@ in
# for some reason enableLiburing in nixpkgs rocksdb is default true
# which breaks Darwin entirely
enableLiburing = enableLiburing && notDarwin;
enableLiburing = notDarwin;
}).overrideAttrs
(old: {
src = fetchFromGitea {
@@ -74,7 +73,7 @@ in
"USE_RTTI"
]);
enableLiburing = enableLiburing && notDarwin;
enableLiburing = notDarwin;
# outputs has "tools" which we don't need or use
outputs = [ "out" ];

View File

@@ -11,7 +11,6 @@
uwulib = inputs.self.uwulib.init pkgs;
rocksdbAllFeatures = self'.packages.rocksdb.override {
enableJemalloc = true;
enableLiburing = true;
};
in
{

571
package-lock.json generated
View File

@@ -119,15 +119,14 @@
}
},
"node_modules/@rsbuild/core": {
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rsbuild/core/-/core-2.0.0-beta.3.tgz",
"integrity": "sha512-dfH+Pt2GuF3rWOWGsf5XOhn3Zarvr4DoHwoI1arAsCGvpzoeud3DNGmWPy13tngj0r/YvQRcPTRBCRV4RP5CMw==",
"version": "2.0.0-beta.6",
"resolved": "https://registry.npmjs.org/@rsbuild/core/-/core-2.0.0-beta.6.tgz",
"integrity": "sha512-DUBhUzvzj6xlGUAHTTipFskSuZmVEuTX7lGU+ToPuo8n3bsQrWn/UBOEQAd45g66k7QfXadoZ/v7eodQErpvGQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@rspack/core": "2.0.0-beta.0",
"@swc/helpers": "^0.5.18",
"jiti": "^2.6.1"
"@rspack/core": "2.0.0-beta.3",
"@swc/helpers": "^0.5.19"
},
"bin": {
"rsbuild": "bin/rsbuild.js"
@@ -159,28 +158,28 @@
}
},
"node_modules/@rspack/binding": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding/-/binding-2.0.0-beta.0.tgz",
"integrity": "sha512-L6PPqhwZWC2vzwdhBItNPXw+7V4sq+MBDRXLdd8NMqaJSCB5iKdJIbpbEQucST9Nn7V28IYoQTXs6+ol5vWUBA==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding/-/binding-2.0.0-beta.3.tgz",
"integrity": "sha512-GSj+d8AlLs1oElhYq32vIN/eAsxWG9jy0EiNgSxWTt5Gdamv87kcvsV4jwfWIjlltdnBIJgey2RnU+hDZlTAvw==",
"dev": true,
"license": "MIT",
"optionalDependencies": {
"@rspack/binding-darwin-arm64": "2.0.0-beta.0",
"@rspack/binding-darwin-x64": "2.0.0-beta.0",
"@rspack/binding-linux-arm64-gnu": "2.0.0-beta.0",
"@rspack/binding-linux-arm64-musl": "2.0.0-beta.0",
"@rspack/binding-linux-x64-gnu": "2.0.0-beta.0",
"@rspack/binding-linux-x64-musl": "2.0.0-beta.0",
"@rspack/binding-wasm32-wasi": "2.0.0-beta.0",
"@rspack/binding-win32-arm64-msvc": "2.0.0-beta.0",
"@rspack/binding-win32-ia32-msvc": "2.0.0-beta.0",
"@rspack/binding-win32-x64-msvc": "2.0.0-beta.0"
"@rspack/binding-darwin-arm64": "2.0.0-beta.3",
"@rspack/binding-darwin-x64": "2.0.0-beta.3",
"@rspack/binding-linux-arm64-gnu": "2.0.0-beta.3",
"@rspack/binding-linux-arm64-musl": "2.0.0-beta.3",
"@rspack/binding-linux-x64-gnu": "2.0.0-beta.3",
"@rspack/binding-linux-x64-musl": "2.0.0-beta.3",
"@rspack/binding-wasm32-wasi": "2.0.0-beta.3",
"@rspack/binding-win32-arm64-msvc": "2.0.0-beta.3",
"@rspack/binding-win32-ia32-msvc": "2.0.0-beta.3",
"@rspack/binding-win32-x64-msvc": "2.0.0-beta.3"
}
},
"node_modules/@rspack/binding-darwin-arm64": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-darwin-arm64/-/binding-darwin-arm64-2.0.0-beta.0.tgz",
"integrity": "sha512-PPx1+SPEROSvDKmBuCbsE7W9tk07ajPosyvyuafv2wbBI6PW2rNcz62uzpIFS+FTgwwZ5u/06WXRtlD2xW9bKg==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-darwin-arm64/-/binding-darwin-arm64-2.0.0-beta.3.tgz",
"integrity": "sha512-QebSomLWlCbFsC0sfDuGqLJtkgyrnr38vrCepWukaAXIY4ANy5QB49LDKdLpVv6bKlC95MpnW37NvSNWY5GMYA==",
"cpu": [
"arm64"
],
@@ -192,9 +191,9 @@
]
},
"node_modules/@rspack/binding-darwin-x64": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-darwin-x64/-/binding-darwin-x64-2.0.0-beta.0.tgz",
"integrity": "sha512-GucsfjrSKBZ9cuOTXmHWxeY2wPmaNyvGNxTyzttjRcfwqOWz8r+ku6PCsMSXUqxZRYWW1L9mvtTdlDrzTYJZ0w==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-darwin-x64/-/binding-darwin-x64-2.0.0-beta.3.tgz",
"integrity": "sha512-EysmBq+sz+Ph0bu0gXpU1uuZG9gXgjqY+w3MJel+ieTFyQO3L/R56V32McgssMbheJbYcviDDn7Tz4D+lTvdJA==",
"cpu": [
"x64"
],
@@ -206,9 +205,9 @@
]
},
"node_modules/@rspack/binding-linux-arm64-gnu": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-2.0.0-beta.0.tgz",
"integrity": "sha512-nTtYtklRZD4sb2RIFCF9YS8tZ/MjpqIBKVS3YIvdXcfHUdVfmQHTZGtwEuZGg6AxTC5L1hcvkYmTXCG0ok7auw==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-2.0.0-beta.3.tgz",
"integrity": "sha512-iFPj4TQZKewnqWPfTbyk3F8QCBI/Edv7TVSRIPBHRnCM0lvYZl/8IZlUzXSamLvrtDpouF0nUzht/fktoWOhAg==",
"cpu": [
"arm64"
],
@@ -220,9 +219,9 @@
]
},
"node_modules/@rspack/binding-linux-arm64-musl": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-arm64-musl/-/binding-linux-arm64-musl-2.0.0-beta.0.tgz",
"integrity": "sha512-S2fshx0Rf7/XYwoMLaqFsVg4y+VAfHzubrczy8AW5xIs6UNC3eRLVTgShLerUPtF6SG+v6NQxQ9JI3vOo2qPOA==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-arm64-musl/-/binding-linux-arm64-musl-2.0.0-beta.3.tgz",
"integrity": "sha512-355mygfCNb0eF/y4HgtJcd0i9csNTG4Z15PCCplIkSAKJpFpkORM2xJb50BqsbhVafYl6AHoBlGWAo9iIzUb/w==",
"cpu": [
"arm64"
],
@@ -234,9 +233,9 @@
]
},
"node_modules/@rspack/binding-linux-x64-gnu": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-x64-gnu/-/binding-linux-x64-gnu-2.0.0-beta.0.tgz",
"integrity": "sha512-yx5Fk1gl7lfkvqcjolNLCNeduIs6C2alMsQ/kZ1pLeP5MPquVOYNqs6EcDPIp+fUjo3lZYtnJBiZKK+QosbzYg==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-x64-gnu/-/binding-linux-x64-gnu-2.0.0-beta.3.tgz",
"integrity": "sha512-U8a+bcP/tkMyiwiO9XfeRYYO20YPGiZNxWWt7FEsdmRuRAl6M+EmWaJllJFQtKH+GG8IN93pNoVPMvARjLoJOQ==",
"cpu": [
"x64"
],
@@ -248,9 +247,9 @@
]
},
"node_modules/@rspack/binding-linux-x64-musl": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-x64-musl/-/binding-linux-x64-musl-2.0.0-beta.0.tgz",
"integrity": "sha512-sBX4b2W0PgehlAVT224k0Q6GaH6t9HP+hBNDrbX/g6d0hfxZN56gm5NfOTOD1Rien4v7OBEejJ3/uFbm1WjwYQ==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-linux-x64-musl/-/binding-linux-x64-musl-2.0.0-beta.3.tgz",
"integrity": "sha512-g81rqkaqDFRTID2VrHBYeM+xZe8yWov7IcryTrl9RGXXr61s+6Tu/mWyM378PuHOCyMNu7G3blVaSjLvKauG6Q==",
"cpu": [
"x64"
],
@@ -262,9 +261,9 @@
]
},
"node_modules/@rspack/binding-wasm32-wasi": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-wasm32-wasi/-/binding-wasm32-wasi-2.0.0-beta.0.tgz",
"integrity": "sha512-o6OatnNvb4kCzXbCaomhENGaCsO3naIyAqqErew90HeAwa1lfY3NhRfDLeIyuANQ+xqFl34/R7n8q3ZDx3nd4Q==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-wasm32-wasi/-/binding-wasm32-wasi-2.0.0-beta.3.tgz",
"integrity": "sha512-tzGd8H2oj5F3oR/Hxp+J68zVU/nG+9ndH2KK3/RieVjNAiVNHCR0/ZU9D47s6fnmvWOqAQ1qO8gnVoVLopC4YA==",
"cpu": [
"wasm32"
],
@@ -276,9 +275,9 @@
}
},
"node_modules/@rspack/binding-win32-arm64-msvc": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-2.0.0-beta.0.tgz",
"integrity": "sha512-neCzVllXzIqM8p8qKb89qV7wyk233gC/V9VrHIKbGeQjAEzpBsk5GOWlFbq5DDL6tivQ+uzYaTrZWm9tb2qxXg==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-2.0.0-beta.3.tgz",
"integrity": "sha512-TZZRSWa34sm5WyoQHwnyBjLJ4w3fcWRYA9ybYjSVWjUU6tVGdMiHiZp+WexUpIETvChLXU1JENNmBg/U7wvZEA==",
"cpu": [
"arm64"
],
@@ -290,9 +289,9 @@
]
},
"node_modules/@rspack/binding-win32-ia32-msvc": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-2.0.0-beta.0.tgz",
"integrity": "sha512-/f0n2eO+DxMKQm9IebeMQJITx8M/+RvY/i8d3sAQZBgR53izn8y7EcDlidXpr24/2DvkLbiub8IyCKPlhLB+1A==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-2.0.0-beta.3.tgz",
"integrity": "sha512-VFnfdbJhyl6gNW1VzTyd1ZrHCboHPR7vrOalEsulQRqVNbtDkjm1sqLHtDcLmhTEv0a9r4lli8uubWDwmel8KQ==",
"cpu": [
"ia32"
],
@@ -304,9 +303,9 @@
]
},
"node_modules/@rspack/binding-win32-x64-msvc": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-x64-msvc/-/binding-win32-x64-msvc-2.0.0-beta.0.tgz",
"integrity": "sha512-dx4zgiAT88EQE7kEUpr7Z9EZAwLnO5FhzWzvd/cDK4bkqYsx+rTklgf/c0EYPBeroXCxlGiMsuC9wHAFNK7sFw==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/binding-win32-x64-msvc/-/binding-win32-x64-msvc-2.0.0-beta.3.tgz",
"integrity": "sha512-rwZ6Y3b3oqPj+ZDPPRxr3136HUPKDSlPQa4v7bBOPLDlrFDFOynMIEqDUUi5+8lPaUQ8WWR0aJK4cgcTTT0Siw==",
"cpu": [
"x64"
],
@@ -318,20 +317,19 @@
]
},
"node_modules/@rspack/core": {
"version": "2.0.0-beta.0",
"resolved": "https://registry.npmjs.org/@rspack/core/-/core-2.0.0-beta.0.tgz",
"integrity": "sha512-aEqlQQjiXixT5i9S4DFtiAap8ZjF6pOgfY2ALHOizins/QqWyB8dyLxSoXdzt7JixmKcFmHkbL9XahO28BlVUA==",
"version": "2.0.0-beta.3",
"resolved": "https://registry.npmjs.org/@rspack/core/-/core-2.0.0-beta.3.tgz",
"integrity": "sha512-VuLteRIesuyFFTXZaciUY0lwDZiwMc7JcpE8guvjArztDhtpVvlaOcLlVBp/Yza8c/Tk8Dxwe1ARzFL7xG1/0w==",
"dev": true,
"license": "MIT",
"dependencies": {
"@rspack/binding": "2.0.0-beta.0",
"@rspack/lite-tapable": "1.1.0"
"@rspack/binding": "2.0.0-beta.3"
},
"engines": {
"node": "^20.19.0 || >=22.12.0"
},
"peerDependencies": {
"@module-federation/runtime-tools": ">=0.22.0",
"@module-federation/runtime-tools": "^0.24.1 || ^2.0.0",
"@swc/helpers": ">=0.5.1"
},
"peerDependenciesMeta": {
@@ -343,17 +341,10 @@
}
}
},
"node_modules/@rspack/lite-tapable": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/@rspack/lite-tapable/-/lite-tapable-1.1.0.tgz",
"integrity": "sha512-E2B0JhYFmVAwdDiG14+DW0Di4Ze4Jg10Pc4/lILUrd5DRCaklduz2OvJ5HYQ6G+hd+WTzqQb3QnDNfK4yvAFYw==",
"dev": true,
"license": "MIT"
},
"node_modules/@rspack/plugin-react-refresh": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/@rspack/plugin-react-refresh/-/plugin-react-refresh-1.6.0.tgz",
"integrity": "sha512-OO53gkrte/Ty4iRXxxM6lkwPGxsSsupFKdrPFnjwFIYrPvFLjeolAl5cTx+FzO5hYygJiGnw7iEKTmD+ptxqDA==",
"version": "1.6.1",
"resolved": "https://registry.npmjs.org/@rspack/plugin-react-refresh/-/plugin-react-refresh-1.6.1.tgz",
"integrity": "sha512-eqqW5645VG3CzGzFgNg5HqNdHVXY+567PGjtDhhrM8t67caxmsSzRmT5qfoEIfBcGgFkH9vEg7kzXwmCYQdQDw==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -371,22 +362,22 @@
}
},
"node_modules/@rspress/core": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/@rspress/core/-/core-2.0.3.tgz",
"integrity": "sha512-a+JJFiALqMxGJBqR38/lkN6tas42UF4jRIhu6RilC/3DdqpfqR8j6jjQFOmqoNKo6ZGXW2W+i1Pscn6drvoG3w==",
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@rspress/core/-/core-2.0.5.tgz",
"integrity": "sha512-2ezGmANmIrWmhsUrvlRb9Df4xsun1BDgEertDc890aQqtKcNrbu+TBRsOoO+E/N6ioavun7JGGe1wWjvxubCHw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@mdx-js/mdx": "^3.1.1",
"@mdx-js/react": "^3.1.1",
"@rsbuild/core": "2.0.0-beta.3",
"@rsbuild/core": "2.0.0-beta.6",
"@rsbuild/plugin-react": "~1.4.5",
"@rspress/shared": "2.0.3",
"@shikijs/rehype": "^3.21.0",
"@rspress/shared": "2.0.5",
"@shikijs/rehype": "^4.0.1",
"@types/unist": "^3.0.3",
"@unhead/react": "^2.1.4",
"@unhead/react": "^2.1.9",
"body-scroll-lock": "4.0.0-beta.0",
"cac": "^6.7.14",
"cac": "^7.0.0",
"chokidar": "^3.6.0",
"clsx": "2.1.1",
"copy-to-clipboard": "^3.3.3",
@@ -404,15 +395,18 @@
"react-dom": "^19.2.4",
"react-lazy-with-preload": "^2.2.1",
"react-reconciler": "0.33.0",
"react-router-dom": "^7.13.0",
"react-render-to-markdown": "19.0.1",
"react-router-dom": "^7.13.1",
"rehype-external-links": "^3.0.0",
"rehype-raw": "^7.0.0",
"remark-cjk-friendly": "^2.0.1",
"remark-cjk-friendly-gfm-strikethrough": "^2.0.1",
"remark-gfm": "^4.0.1",
"remark-mdx": "^3.1.1",
"remark-parse": "^11.0.0",
"remark-stringify": "^11.0.0",
"scroll-into-view-if-needed": "^3.1.0",
"shiki": "^3.21.0",
"shiki": "^4.0.1",
"tinyglobby": "^0.2.15",
"tinypool": "^1.1.1",
"unified": "^11.0.5",
@@ -428,125 +422,162 @@
}
},
"node_modules/@rspress/plugin-client-redirects": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/@rspress/plugin-client-redirects/-/plugin-client-redirects-2.0.3.tgz",
"integrity": "sha512-9+SoAbfoxM6OCRWx8jWHHi2zwJDcNaej/URx0CWZk8tvQ618yJW5mXJydknlac62399eYh/F7C3w8TZM3ORGVA==",
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@rspress/plugin-client-redirects/-/plugin-client-redirects-2.0.5.tgz",
"integrity": "sha512-sxwWzwHPefSPWUyV6/AA/hBlQUeNFntL8dBQi/vZCQiZHM6ShvKbqa3s5Xu2yI7DeFKHH3jb0VGbjufu8M3Ypw==",
"dev": true,
"license": "MIT",
"engines": {
"node": "^20.19.0 || >=22.12.0"
},
"peerDependencies": {
"@rspress/core": "^2.0.3"
"@rspress/core": "^2.0.5"
}
},
"node_modules/@rspress/plugin-sitemap": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/@rspress/plugin-sitemap/-/plugin-sitemap-2.0.3.tgz",
"integrity": "sha512-SKa7YEAdkUqya2YjMKbakg3kcYMkXgXhTQdDsHd+QlJWN8j8cDPiCcctMZu8iIPeKZlb+hTJkTWvh27LSIKdOA==",
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@rspress/plugin-sitemap/-/plugin-sitemap-2.0.5.tgz",
"integrity": "sha512-wBxKL8sNd3bkKFxlFtB1xJ7jCtSRDL6pfVvWsmTIbTNDPCtefd1nmiMBIDMLOR8EflwuStIz3bMQXdWpbC7ahA==",
"dev": true,
"license": "MIT",
"engines": {
"node": "^20.19.0 || >=22.12.0"
},
"peerDependencies": {
"@rspress/core": "^2.0.3"
"@rspress/core": "^2.0.5"
}
},
"node_modules/@rspress/shared": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/@rspress/shared/-/shared-2.0.3.tgz",
"integrity": "sha512-yI9G4P165fSsmm6QoYTUrdgUis1aFnDh04GcM4SQIpL3itvEZhGtItgoeGkX9EWbnEjhriwI8mTqDDJIp+vrGA==",
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/@rspress/shared/-/shared-2.0.5.tgz",
"integrity": "sha512-Wdhh+VjU8zJWoVLhv9KJTRAZQ4X2V/Z81Lo2D0hQsa0Kj5F3EaxlMt5/dhX7DoflqNuZPZk/e7CSUB+gO/Umlg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@rsbuild/core": "2.0.0-beta.3",
"@shikijs/rehype": "^3.21.0",
"@rsbuild/core": "2.0.0-beta.6",
"@shikijs/rehype": "^4.0.1",
"gray-matter": "4.0.3",
"lodash-es": "^4.17.23",
"unified": "^11.0.5"
}
},
"node_modules/@shikijs/core": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/core/-/core-3.22.0.tgz",
"integrity": "sha512-iAlTtSDDbJiRpvgL5ugKEATDtHdUVkqgHDm/gbD2ZS9c88mx7G1zSYjjOxp5Qa0eaW0MAQosFRmJSk354PRoQA==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/core/-/core-4.0.2.tgz",
"integrity": "sha512-hxT0YF4ExEqB8G/qFdtJvpmHXBYJ2lWW7qTHDarVkIudPFE6iCIrqdgWxGn5s+ppkGXI0aEGlibI0PAyzP3zlw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0",
"@shikijs/primitive": "4.0.2",
"@shikijs/types": "4.0.2",
"@shikijs/vscode-textmate": "^10.0.2",
"@types/hast": "^3.0.4",
"hast-util-to-html": "^9.0.5"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/engine-javascript": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/engine-javascript/-/engine-javascript-3.22.0.tgz",
"integrity": "sha512-jdKhfgW9CRtj3Tor0L7+yPwdG3CgP7W+ZEqSsojrMzCjD1e0IxIbwUMDDpYlVBlC08TACg4puwFGkZfLS+56Tw==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/engine-javascript/-/engine-javascript-4.0.2.tgz",
"integrity": "sha512-7PW0Nm49DcoUIQEXlJhNNBHyoGMjalRETTCcjMqEaMoJRLljy1Bi/EGV3/qLBgLKQejdspiiYuHGQW6dX94Nag==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0",
"@shikijs/types": "4.0.2",
"@shikijs/vscode-textmate": "^10.0.2",
"oniguruma-to-es": "^4.3.4"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/engine-oniguruma": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/engine-oniguruma/-/engine-oniguruma-3.22.0.tgz",
"integrity": "sha512-DyXsOG0vGtNtl7ygvabHd7Mt5EY8gCNqR9Y7Lpbbd/PbJvgWrqaKzH1JW6H6qFkuUa8aCxoiYVv8/YfFljiQxA==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/engine-oniguruma/-/engine-oniguruma-4.0.2.tgz",
"integrity": "sha512-UpCB9Y2sUKlS9z8juFSKz7ZtysmeXCgnRF0dlhXBkmQnek7lAToPte8DkxmEYGNTMii72zU/lyXiCB6StuZeJg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0",
"@shikijs/types": "4.0.2",
"@shikijs/vscode-textmate": "^10.0.2"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/langs": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/langs/-/langs-3.22.0.tgz",
"integrity": "sha512-x/42TfhWmp6H00T6uwVrdTJGKgNdFbrEdhaDwSR5fd5zhQ1Q46bHq9EO61SCEWJR0HY7z2HNDMaBZp8JRmKiIA==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/langs/-/langs-4.0.2.tgz",
"integrity": "sha512-KaXby5dvoeuZzN0rYQiPMjFoUrz4hgwIE+D6Du9owcHcl6/g16/yT5BQxSW5cGt2MZBz6Hl0YuRqf12omRfUUg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0"
"@shikijs/types": "4.0.2"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/primitive": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/primitive/-/primitive-4.0.2.tgz",
"integrity": "sha512-M6UMPrSa3fN5ayeJwFVl9qWofl273wtK1VG8ySDZ1mQBfhCpdd8nEx7nPZ/tk7k+TYcpqBZzj/AnwxT9lO+HJw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "4.0.2",
"@shikijs/vscode-textmate": "^10.0.2",
"@types/hast": "^3.0.4"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/rehype": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/rehype/-/rehype-3.22.0.tgz",
"integrity": "sha512-69b2VPc6XBy/VmAJlpBU5By+bJSBdE2nvgRCZXav7zujbrjXuT0F60DIrjKuutjPqNufuizE+E8tIZr2Yn8Z+g==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/rehype/-/rehype-4.0.2.tgz",
"integrity": "sha512-cmPlKLD8JeojasNFoY64162ScpEdEdQUMuVodPCrv1nx1z3bjmGwoKWDruQWa/ejSznImlaeB0Ty6Q3zPaVQAA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0",
"@shikijs/types": "4.0.2",
"@types/hast": "^3.0.4",
"hast-util-to-string": "^3.0.1",
"shiki": "3.22.0",
"shiki": "4.0.2",
"unified": "^11.0.5",
"unist-util-visit": "^5.1.0"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/themes": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/themes/-/themes-3.22.0.tgz",
"integrity": "sha512-o+tlOKqsr6FE4+mYJG08tfCFDS+3CG20HbldXeVoyP+cYSUxDhrFf3GPjE60U55iOkkjbpY2uC3It/eeja35/g==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/themes/-/themes-4.0.2.tgz",
"integrity": "sha512-mjCafwt8lJJaVSsQvNVrJumbnnj1RI8jbUKrPKgE6E3OvQKxnuRoBaYC51H4IGHePsGN/QtALglWBU7DoKDFnA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/types": "3.22.0"
"@shikijs/types": "4.0.2"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/types": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/@shikijs/types/-/types-3.22.0.tgz",
"integrity": "sha512-491iAekgKDBFE67z70Ok5a8KBMsQ2IJwOWw3us/7ffQkIBCyOQfm/aNwVMBUriP02QshIfgHCBSIYAl3u2eWjg==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@shikijs/types/-/types-4.0.2.tgz",
"integrity": "sha512-qzbeRooUTPnLE+sHD/Z8DStmaDgnbbc/pMrU203950aRqjX/6AFHeDYT+j00y2lPdz0ywJKx7o/7qnqTivtlXg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/vscode-textmate": "^10.0.2",
"@types/hast": "^3.0.4"
},
"engines": {
"node": ">=20"
}
},
"node_modules/@shikijs/vscode-textmate": {
@@ -557,9 +588,9 @@
"license": "MIT"
},
"node_modules/@swc/helpers": {
"version": "0.5.18",
"resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.18.tgz",
"integrity": "sha512-TXTnIcNJQEKwThMMqBXsZ4VGAza6bvN4pa41Rkqoio6QBKMvo+5lexeTMScGCIxtzgQJzElcvIltani+adC5PQ==",
"version": "0.5.19",
"resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.19.tgz",
"integrity": "sha512-QamiFeIK3txNjgUTNppE6MiG3p7TdninpZu0E0PbqVh1a9FNLT2FRhisaa4NcaX52XVhA5l7Pk58Ft7Sqi/2sA==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
@@ -639,9 +670,9 @@
"license": "MIT"
},
"node_modules/@types/react": {
"version": "19.2.6",
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.6.tgz",
"integrity": "sha512-p/jUvulfgU7oKtj6Xpk8cA2Y1xKTtICGpJYeJXz2YVO2UcvjQgeRMLDGfDeqeRW2Ta+0QNFwcc8X3GH8SxZz6w==",
"version": "19.2.14",
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz",
"integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==",
"dev": true,
"license": "MIT",
"peer": true,
@@ -664,13 +695,13 @@
"license": "ISC"
},
"node_modules/@unhead/react": {
"version": "2.1.4",
"resolved": "https://registry.npmjs.org/@unhead/react/-/react-2.1.4.tgz",
"integrity": "sha512-3DzMi5nJkUyLVfQF/q78smCvcSy84TTYgTwXVz5s3AjUcLyHro5Z7bLWriwk1dn5+YRfEsec8aPkLCMi5VjMZg==",
"version": "2.1.10",
"resolved": "https://registry.npmjs.org/@unhead/react/-/react-2.1.10.tgz",
"integrity": "sha512-z9IzzkaCI1GyiBwVRMt4dGc2mOvsj9drbAdXGMy6DWpu9FwTR37ZTmAi7UeCVyIkpVdIaNalz7vkbvGG8afFng==",
"dev": true,
"license": "MIT",
"dependencies": {
"unhead": "2.1.4"
"unhead": "2.1.10"
},
"funding": {
"url": "https://github.com/sponsors/harlan-zw"
@@ -680,9 +711,9 @@
}
},
"node_modules/acorn": {
"version": "8.15.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"version": "8.16.0",
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.16.0.tgz",
"integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==",
"dev": true,
"license": "MIT",
"bin": {
@@ -781,13 +812,13 @@
}
},
"node_modules/cac": {
"version": "6.7.14",
"resolved": "https://registry.npmjs.org/cac/-/cac-6.7.14.tgz",
"integrity": "sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==",
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/cac/-/cac-7.0.0.tgz",
"integrity": "sha512-tixWYgm5ZoOD+3g6UTea91eow5z6AAHaho3g0V9CNSNb45gM8SmflpAc+GRd1InC4AqN/07Unrgp56Y94N9hJQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8"
"node": ">=20.19.0"
}
},
"node_modules/ccount": {
@@ -960,9 +991,9 @@
}
},
"node_modules/decode-named-character-reference": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/decode-named-character-reference/-/decode-named-character-reference-1.2.0.tgz",
"integrity": "sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q==",
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/decode-named-character-reference/-/decode-named-character-reference-1.3.0.tgz",
"integrity": "sha512-GtpQYB283KrPp6nRw50q3U9/VfOutZOe103qlN7BPP6Ad27xYnOIWv4lPzo8HCAL+mMZofJ9KEy30fq6MfaK6Q==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -997,6 +1028,19 @@
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/entities": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz",
"integrity": "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==",
"dev": true,
"license": "BSD-2-Clause",
"engines": {
"node": ">=0.12"
},
"funding": {
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/error-stack-parser": {
"version": "2.1.4",
"resolved": "https://registry.npmjs.org/error-stack-parser/-/error-stack-parser-2.1.4.tgz",
@@ -1243,6 +1287,19 @@
"node": "^8.16.0 || ^10.6.0 || >=11.0.0"
}
},
"node_modules/get-east-asian-width": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/get-east-asian-width/-/get-east-asian-width-1.5.0.tgz",
"integrity": "sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/github-slugger": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/github-slugger/-/github-slugger-2.0.0.tgz",
@@ -1450,16 +1507,16 @@
}
},
"node_modules/hast-util-to-parse5": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/hast-util-to-parse5/-/hast-util-to-parse5-8.0.0.tgz",
"integrity": "sha512-3KKrV5ZVI8if87DVSi1vDeByYrkGzg4mEfeu4alwgmmIeARiBLKCZS2uw5Gb6nU9x9Yufyj3iudm6i7nl52PFw==",
"version": "8.0.1",
"resolved": "https://registry.npmjs.org/hast-util-to-parse5/-/hast-util-to-parse5-8.0.1.tgz",
"integrity": "sha512-MlWT6Pjt4CG9lFCjiz4BH7l9wmrMkfkJYCxFwKQic8+RTZgWPuWxwAfjJElsXkex7DJjfSJsQIt931ilUgmwdA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/hast": "^3.0.0",
"comma-separated-tokens": "^2.0.0",
"devlop": "^1.0.0",
"property-information": "^6.0.0",
"property-information": "^7.0.0",
"space-separated-tokens": "^2.0.0",
"web-namespaces": "^2.0.0",
"zwitch": "^2.0.0"
@@ -1469,17 +1526,6 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-to-parse5/node_modules/property-information": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/property-information/-/property-information-6.5.0.tgz",
"integrity": "sha512-PgTgs/BlvHxOu8QuEN7wi5A0OmXaBcHpmCSTehcs6Uuu9IkDIEo13Hy7n898RHfrQ49vKCoGeWZSaAK01nwVig==",
"dev": true,
"license": "MIT",
"funding": {
"type": "github",
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/hast-util-to-string": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/hast-util-to-string/-/hast-util-to-string-3.0.1.tgz",
@@ -1562,9 +1608,9 @@
}
},
"node_modules/inline-style-parser": {
"version": "0.2.6",
"resolved": "https://registry.npmjs.org/inline-style-parser/-/inline-style-parser-0.2.6.tgz",
"integrity": "sha512-gtGXVaBdl5mAes3rPcMedEBm12ibjt1kDMFfheul1wUAOVEJW60voNdMVzVkfLN06O7ZaD/rxhfKgtlgtTbMjg==",
"version": "0.2.7",
"resolved": "https://registry.npmjs.org/inline-style-parser/-/inline-style-parser-0.2.7.tgz",
"integrity": "sha512-Nb2ctOyNR8DqQoR0OwRG95uNWIC0C1lCgf5Naz5H6Ji72KZ8OcFZLz2P5sNgwlyoJ8Yif11oMuYs5pBQa86csA==",
"dev": true,
"license": "MIT"
},
@@ -1698,16 +1744,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/jiti": {
"version": "2.6.1",
"resolved": "https://registry.npmjs.org/jiti/-/jiti-2.6.1.tgz",
"integrity": "sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==",
"dev": true,
"license": "MIT",
"bin": {
"jiti": "lib/jiti-cli.mjs"
}
},
"node_modules/js-yaml": {
"version": "3.14.2",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.2.tgz",
@@ -1792,9 +1828,9 @@
}
},
"node_modules/mdast-util-from-markdown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/mdast-util-from-markdown/-/mdast-util-from-markdown-2.0.2.tgz",
"integrity": "sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA==",
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/mdast-util-from-markdown/-/mdast-util-from-markdown-2.0.3.tgz",
"integrity": "sha512-W4mAWTvSlKvf8L6J+VN9yLSqQ9AOAAvHuoDAmPkz4dHf553m5gVj2ejadHJhoJmcmxEnOv6Pa8XJhpxE93kb8Q==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -2155,6 +2191,80 @@
"micromark-util-types": "^2.0.0"
}
},
"node_modules/micromark-extension-cjk-friendly": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/micromark-extension-cjk-friendly/-/micromark-extension-cjk-friendly-2.0.1.tgz",
"integrity": "sha512-OkzoYVTL1ChbvQ8Cc1ayTIz7paFQz8iS9oIYmewncweUSwmWR+hkJF9spJ1lxB90XldJl26A1F4IkPOKS3bDXw==",
"dev": true,
"license": "MIT",
"dependencies": {
"devlop": "^1.1.0",
"micromark-extension-cjk-friendly-util": "3.0.1",
"micromark-util-chunked": "^2.0.1",
"micromark-util-resolve-all": "^2.0.1",
"micromark-util-symbol": "^2.0.1"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"micromark": "^4.0.0",
"micromark-util-types": "^2.0.0"
},
"peerDependenciesMeta": {
"micromark-util-types": {
"optional": true
}
}
},
"node_modules/micromark-extension-cjk-friendly-gfm-strikethrough": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/micromark-extension-cjk-friendly-gfm-strikethrough/-/micromark-extension-cjk-friendly-gfm-strikethrough-2.0.1.tgz",
"integrity": "sha512-wVC0zwjJNqQeX+bb07YTPu/CvSAyCTafyYb7sMhX1r62/Lw5M/df3JyYaANyp8g15c1ypJRFSsookTqA1IDsUg==",
"dev": true,
"license": "MIT",
"dependencies": {
"devlop": "^1.1.0",
"get-east-asian-width": "^1.4.0",
"micromark-extension-cjk-friendly-util": "3.0.1",
"micromark-util-character": "^2.1.1",
"micromark-util-chunked": "^2.0.1",
"micromark-util-resolve-all": "^2.0.1",
"micromark-util-symbol": "^2.0.1"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"micromark": "^4.0.0",
"micromark-util-types": "^2.0.0"
},
"peerDependenciesMeta": {
"micromark-util-types": {
"optional": true
}
}
},
"node_modules/micromark-extension-cjk-friendly-util": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/micromark-extension-cjk-friendly-util/-/micromark-extension-cjk-friendly-util-3.0.1.tgz",
"integrity": "sha512-GcbXqTTHOsiZHyF753oIddP/J2eH8j9zpyQPhkof6B2JNxfEJabnQqxbCgzJNuNes0Y2jTNJ3LiYPSXr6eJA8w==",
"dev": true,
"license": "MIT",
"dependencies": {
"get-east-asian-width": "^1.4.0",
"micromark-util-character": "^2.1.1",
"micromark-util-symbol": "^2.0.1"
},
"engines": {
"node": ">=18"
},
"peerDependenciesMeta": {
"micromark-util-types": {
"optional": true
}
}
},
"node_modules/micromark-extension-gfm": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/micromark-extension-gfm/-/micromark-extension-gfm-3.0.0.tgz",
@@ -2919,19 +3029,6 @@
"url": "https://github.com/inikulin/parse5?sponsor=1"
}
},
"node_modules/parse5/node_modules/entities": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz",
"integrity": "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==",
"dev": true,
"license": "BSD-2-Clause",
"engines": {
"node": ">=0.12"
},
"funding": {
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/picocolors": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
@@ -3019,10 +3116,23 @@
"node": ">=0.10.0"
}
},
"node_modules/react-render-to-markdown": {
"version": "19.0.1",
"resolved": "https://registry.npmjs.org/react-render-to-markdown/-/react-render-to-markdown-19.0.1.tgz",
"integrity": "sha512-BPv48o+ubcu2JyUDIktvJXFqLIZqR7hA4mvGu1eFIofz9fogT2me9UvXwRvqvGs9jEtNaJkxZIUKUX0oiK4hDA==",
"dev": true,
"license": "MIT",
"dependencies": {
"react-reconciler": "0.33.0"
},
"peerDependencies": {
"react": ">=19"
}
},
"node_modules/react-router": {
"version": "7.13.0",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-7.13.0.tgz",
"integrity": "sha512-PZgus8ETambRT17BUm/LL8lX3Of+oiLaPuVTRH3l1eLvSPpKO3AvhAEb5N7ihAFZQrYDqkvvWfFh9p0z9VsjLw==",
"version": "7.13.1",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-7.13.1.tgz",
"integrity": "sha512-td+xP4X2/6BJvZoX6xw++A2DdEi++YypA69bJUV5oVvqf6/9/9nNlD70YO1e9d3MyamJEBQFEzk6mbfDYbqrSA==",
"dev": true,
"license": "MIT",
"dependencies": {
@@ -3043,13 +3153,13 @@
}
},
"node_modules/react-router-dom": {
"version": "7.13.0",
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-7.13.0.tgz",
"integrity": "sha512-5CO/l5Yahi2SKC6rGZ+HDEjpjkGaG/ncEP7eWFTvFxbHP8yeeI0PxTDjimtpXYlR3b3i9/WIL4VJttPrESIf2g==",
"version": "7.13.1",
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-7.13.1.tgz",
"integrity": "sha512-UJnV3Rxc5TgUPJt2KJpo1Jpy0OKQr0AjgbZzBFjaPJcFOb2Y8jA5H3LT8HUJAiRLlWrEXWHbF1Z4SCZaQjWDHw==",
"dev": true,
"license": "MIT",
"dependencies": {
"react-router": "7.13.0"
"react-router": "7.13.1"
},
"engines": {
"node": ">=20.0.0"
@@ -3221,6 +3331,50 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/remark-cjk-friendly": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/remark-cjk-friendly/-/remark-cjk-friendly-2.0.1.tgz",
"integrity": "sha512-6WwkoQyZf/4j5k53zdFYrR8Ca+UVn992jXdLUSBDZR4eBpFhKyVxmA4gUHra/5fesjGIxrDhHesNr/sVoiiysA==",
"dev": true,
"license": "MIT",
"dependencies": {
"micromark-extension-cjk-friendly": "2.0.1"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"@types/mdast": "^4.0.0",
"unified": "^11.0.0"
},
"peerDependenciesMeta": {
"@types/mdast": {
"optional": true
}
}
},
"node_modules/remark-cjk-friendly-gfm-strikethrough": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/remark-cjk-friendly-gfm-strikethrough/-/remark-cjk-friendly-gfm-strikethrough-2.0.1.tgz",
"integrity": "sha512-pWKj25O2eLXIL1aBupayl1fKhco+Brw8qWUWJPVB9EBzbQNd7nGLj0nLmJpggWsGLR5j5y40PIdjxby9IEYTuA==",
"dev": true,
"license": "MIT",
"dependencies": {
"micromark-extension-cjk-friendly-gfm-strikethrough": "2.0.1"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"@types/mdast": "^4.0.0",
"unified": "^11.0.0"
},
"peerDependenciesMeta": {
"@types/mdast": {
"optional": true
}
}
},
"node_modules/remark-gfm": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/remark-gfm/-/remark-gfm-4.0.1.tgz",
@@ -3345,20 +3499,23 @@
"license": "MIT"
},
"node_modules/shiki": {
"version": "3.22.0",
"resolved": "https://registry.npmjs.org/shiki/-/shiki-3.22.0.tgz",
"integrity": "sha512-LBnhsoYEe0Eou4e1VgJACes+O6S6QC0w71fCSp5Oya79inkwkm15gQ1UF6VtQ8j/taMDh79hAB49WUk8ALQW3g==",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/shiki/-/shiki-4.0.2.tgz",
"integrity": "sha512-eAVKTMedR5ckPo4xne/PjYQYrU3qx78gtJZ+sHlXEg5IHhhoQhMfZVzetTYuaJS0L2Ef3AcCRzCHV8T0WI6nIQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@shikijs/core": "3.22.0",
"@shikijs/engine-javascript": "3.22.0",
"@shikijs/engine-oniguruma": "3.22.0",
"@shikijs/langs": "3.22.0",
"@shikijs/themes": "3.22.0",
"@shikijs/types": "3.22.0",
"@shikijs/core": "4.0.2",
"@shikijs/engine-javascript": "4.0.2",
"@shikijs/engine-oniguruma": "4.0.2",
"@shikijs/langs": "4.0.2",
"@shikijs/themes": "4.0.2",
"@shikijs/types": "4.0.2",
"@shikijs/vscode-textmate": "^10.0.2",
"@types/hast": "^3.0.4"
},
"engines": {
"node": ">=20"
}
},
"node_modules/source-map": {
@@ -3422,23 +3579,23 @@
}
},
"node_modules/style-to-js": {
"version": "1.1.19",
"resolved": "https://registry.npmjs.org/style-to-js/-/style-to-js-1.1.19.tgz",
"integrity": "sha512-Ev+SgeqiNGT1ufsXyVC5RrJRXdrkRJ1Gol9Qw7Pb72YCKJXrBvP0ckZhBeVSrw2m06DJpei2528uIpjMb4TsoQ==",
"version": "1.1.21",
"resolved": "https://registry.npmjs.org/style-to-js/-/style-to-js-1.1.21.tgz",
"integrity": "sha512-RjQetxJrrUJLQPHbLku6U/ocGtzyjbJMP9lCNK7Ag0CNh690nSH8woqWH9u16nMjYBAok+i7JO1NP2pOy8IsPQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"style-to-object": "1.0.12"
"style-to-object": "1.0.14"
}
},
"node_modules/style-to-object": {
"version": "1.0.12",
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-1.0.12.tgz",
"integrity": "sha512-ddJqYnoT4t97QvN2C95bCgt+m7AAgXjVnkk/jxAfmp7EAB8nnqqZYEbMd3em7/vEomDb2LAQKAy1RFfv41mdNw==",
"version": "1.0.14",
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-1.0.14.tgz",
"integrity": "sha512-LIN7rULI0jBscWQYaSswptyderlarFkjQ+t79nzty8tcIAceVomEVlLzH5VP4Cmsv6MtKhs7qaAiwlcp+Mgaxw==",
"dev": true,
"license": "MIT",
"dependencies": {
"inline-style-parser": "0.2.6"
"inline-style-parser": "0.2.7"
}
},
"node_modules/tinyglobby": {
@@ -3563,9 +3720,9 @@
}
},
"node_modules/unhead": {
"version": "2.1.4",
"resolved": "https://registry.npmjs.org/unhead/-/unhead-2.1.4.tgz",
"integrity": "sha512-+5091sJqtNNmgfQ07zJOgUnMIMKzVKAWjeMlSrTdSGPB6JSozhpjUKuMfWEoLxlMAfhIvgOU8Me0XJvmMA/0fA==",
"version": "2.1.10",
"resolved": "https://registry.npmjs.org/unhead/-/unhead-2.1.10.tgz",
"integrity": "sha512-We8l9uNF8zz6U8lfQaVG70+R/QBfQx1oPIgXin4BtZnK2IQpz6yazQ0qjMNVBDw2ADgF2ea58BtvSK+XX5AS7g==",
"dev": true,
"license": "MIT",
"dependencies": {

View File

@@ -18,6 +18,7 @@ Environment="CONTINUWUITY_DATABASE_PATH=%S/conduwuit"
Environment="CONTINUWUITY_CONFIG_RELOAD_SIGNAL=true"
LoadCredential=conduwuit.toml:/etc/conduwuit/conduwuit.toml
RefreshOnReload=yes
ExecStart=/usr/bin/conduwuit --config ${CREDENTIALS_DIRECTORY}/conduwuit.toml

View File

@@ -1,6 +1,6 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": ["config:recommended", "replacements:all"],
"extends": ["config:recommended", "replacements:all", ":semanticCommitTypeAll(chore)"],
"dependencyDashboard": true,
"osvVulnerabilityAlerts": true,
"lockFileMaintenance": {
@@ -36,10 +36,18 @@
},
"packageRules": [
{
"description": "Batch patch-level Rust dependency updates",
"description": "Batch minor and patch Rust dependency updates",
"matchManagers": ["cargo"],
"matchUpdateTypes": ["minor", "patch"],
"matchCurrentVersion": ">=1.0.0",
"groupName": "rust-non-major"
},
{
"description": "Batch patch-level zerover Rust dependency updates",
"matchManagers": ["cargo"],
"matchUpdateTypes": ["patch"],
"groupName": "rust-patch-updates"
"matchCurrentVersion": ">=0.1.0,<1.0.0",
"groupName": "rust-zerover-patch-updates"
},
{
"description": "Limit concurrent Cargo PRs",

View File

@@ -1,6 +1,6 @@
use std::fmt::Write;
use conduwuit::{Err, Result};
use conduwuit::{Err, Result, utils::response::LimitReadExt};
use futures::StreamExt;
use ruma::{OwnedRoomId, OwnedServerName, OwnedUserId};
@@ -55,7 +55,15 @@ pub(super) async fn fetch_support_well_known(&self, server_name: OwnedServerName
.send()
.await?;
let text = response.text().await?;
let text = response
.limit_read_text(
self.services
.config
.max_request_size
.try_into()
.expect("u64 fits into usize"),
)
.await?;
if text.is_empty() {
return Err!("Response text/body is empty.");

View File

@@ -29,7 +29,9 @@ pub(super) async fn delete(
.delete(&mxc.as_str().try_into()?)
.await?;
return Err!("Deleted the MXC from our database and on our filesystem.",);
return self
.write_str("Deleted the MXC from our database and on our filesystem.")
.await;
}
if let Some(event_id) = event_id {

View File

@@ -40,7 +40,7 @@ pub enum MediaCommand {
/// * Delete all remote and local media from 3 days ago, up until now:
///
/// `!admin media delete-past-remote-media -a 3d
///-yes-i-want-to-delete-local-media`
///--yes-i-want-to-delete-local-media`
#[command(verbatim_doc_comment)]
DeletePastRemoteMedia {
/// The relative time (e.g. 30s, 5m, 7d) from now within which to

View File

@@ -9,7 +9,7 @@
},
events::{
AnyGlobalAccountDataEventContent, AnyRoomAccountDataEventContent,
GlobalAccountDataEventType, RoomAccountDataEventType,
RoomAccountDataEventType,
},
serde::Raw,
};
@@ -126,12 +126,6 @@ async fn set_account_data(
)));
}
if event_type_s == GlobalAccountDataEventType::PushRules.to_cow_str() {
return Err!(Request(BadJson(
"This endpoint cannot be used for setting/configuring push rules."
)));
}
let data: serde_json::Value = serde_json::from_str(data.get())
.map_err(|e| err!(Request(BadJson(warn!("Invalid JSON provided: {e}")))))?;

View File

@@ -0,0 +1,121 @@
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
use conduwuit::{Err, Result, at};
use futures::StreamExt;
use ruma::api::client::dehydrated_device::{
delete_dehydrated_device::unstable as delete_dehydrated_device,
get_dehydrated_device::unstable as get_dehydrated_device, get_events::unstable as get_events,
put_dehydrated_device::unstable as put_dehydrated_device,
};
use crate::Ruma;
const MAX_BATCH_EVENTS: usize = 50;
/// # `PUT /_matrix/client/../dehydrated_device`
///
/// Creates or overwrites the user's dehydrated device.
#[tracing::instrument(skip_all, fields(%client))]
pub(crate) async fn put_dehydrated_device_route(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<put_dehydrated_device::Request>,
) -> Result<put_dehydrated_device::Response> {
let sender_user = body
.sender_user
.as_deref()
.expect("AccessToken authentication required");
let device_id = body.body.device_id.clone();
services
.users
.set_dehydrated_device(sender_user, body.body)
.await?;
Ok(put_dehydrated_device::Response { device_id })
}
/// # `DELETE /_matrix/client/../dehydrated_device`
///
/// Deletes the user's dehydrated device without replacement.
#[tracing::instrument(skip_all, fields(%client))]
pub(crate) async fn delete_dehydrated_device_route(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<delete_dehydrated_device::Request>,
) -> Result<delete_dehydrated_device::Response> {
let sender_user = body.sender_user();
let device_id = services.users.get_dehydrated_device_id(sender_user).await?;
services.users.remove_device(sender_user, &device_id).await;
Ok(delete_dehydrated_device::Response { device_id })
}
/// # `GET /_matrix/client/../dehydrated_device`
///
/// Gets the user's dehydrated device
#[tracing::instrument(skip_all, fields(%client))]
pub(crate) async fn get_dehydrated_device_route(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_dehydrated_device::Request>,
) -> Result<get_dehydrated_device::Response> {
let sender_user = body.sender_user();
let device = services.users.get_dehydrated_device(sender_user).await?;
Ok(get_dehydrated_device::Response {
device_id: device.device_id,
device_data: device.device_data,
})
}
/// # `GET /_matrix/client/../dehydrated_device/{device_id}/events`
///
/// Paginates the events of the dehydrated device.
#[tracing::instrument(skip_all, fields(%client))]
pub(crate) async fn get_dehydrated_events_route(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_events::Request>,
) -> Result<get_events::Response> {
let sender_user = body.sender_user();
let device_id = &body.body.device_id;
let existing_id = services.users.get_dehydrated_device_id(sender_user).await;
if existing_id.as_ref().is_err()
|| existing_id
.as_ref()
.is_ok_and(|existing_id| existing_id != device_id)
{
return Err!(Request(Forbidden("Not the dehydrated device_id.")));
}
let since: Option<u64> = body
.body
.next_batch
.as_deref()
.map(str::parse)
.transpose()?;
let mut next_batch: Option<u64> = None;
let events = services
.users
.get_to_device_events(sender_user, device_id, since, None)
.take(MAX_BATCH_EVENTS)
.inspect(|&(count, _)| {
next_batch.replace(count);
})
.map(at!(1))
.collect()
.await;
Ok(get_events::Response {
events,
next_batch: next_batch.as_ref().map(ToString::to_string),
})
}
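The batching scheme in `get_dehydrated_events_route` can be sketched independently of the server's types: hand out at most `MAX_BATCH_EVENTS` events newer than `since`, and remember the counter of the last event returned as the next batch token. A minimal stand-in (the `(count, event)` tuple shape and the `paginate` helper are illustrative, not the real service API):

```rust
const MAX_BATCH_EVENTS: usize = 50;

/// Illustrative pagination over `(count, event)` pairs, mirroring the
/// `take(MAX_BATCH_EVENTS)` + `inspect` pattern in the route above.
fn paginate(events: &[(u64, String)], since: Option<u64>) -> (Vec<String>, Option<u64>) {
    let mut next_batch = None;
    let mut page = Vec::new();
    for (count, event) in events {
        // skip events the client has already acknowledged via `since`
        if since.map_or(false, |s| *count <= s) {
            continue;
        }
        if page.len() >= MAX_BATCH_EVENTS {
            break;
        }
        // remember the counter of the last event we hand out
        next_batch = Some(*count);
        page.push(event.clone());
    }
    (page, next_batch)
}
```

A client would then pass `next_batch` back as `since` on the following request to resume where it left off.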


@@ -6,6 +6,7 @@
pub(super) mod backup;
pub(super) mod capabilities;
pub(super) mod context;
pub(super) mod dehydrated_device;
pub(super) mod device;
pub(super) mod directory;
pub(super) mod filter;
@@ -49,6 +50,7 @@
pub(super) use backup::*;
pub(super) use capabilities::*;
pub(super) use context::*;
pub(super) use dehydrated_device::*;
pub(super) use device::*;
pub(super) use directory::*;
pub(super) use filter::*;


@@ -270,7 +270,7 @@ async fn build_state_and_timeline(
// joined since the last sync, that being the syncing user's join event. if
// it's empty something is wrong.
if joined_since_last_sync && timeline.pdus.is_empty() {
-warn!("timeline for newly joined room is empty");
+debug_warn!("timeline for newly joined room is empty");
}
let (summary, device_list_updates) = try_join(


@@ -1,5 +1,5 @@
use conduwuit::{
-Event, PduCount, PduEvent, Result, at, debug_warn,
+Event, PduEvent, Result, at, debug_warn,
pdu::EventHash,
trace,
utils::{self, IterStream, future::ReadyEqExt, stream::WidebandExt as _},
@@ -68,9 +68,13 @@ pub(super) async fn load_left_room(
return Ok(None);
}
-// return early if this is an incremental sync, and we've already synced this
-// leave to the user, and `include_leave` isn't set on the filter.
-if !filter.room.include_leave && last_sync_end_count >= Some(left_count) {
+// return early if:
+// - this is an initial sync and the room filter doesn't include leaves, or
+// - this is an incremental sync, and we've already synced the leave, and the
+//   room filter doesn't include leaves
+if last_sync_end_count.is_none_or(|last_sync_end_count| last_sync_end_count >= left_count)
+	&& !filter.room.include_leave
+{
return Ok(None);
}
@@ -195,27 +199,13 @@ async fn build_left_state_and_timeline(
leave_shortstatehash: ShortStateHash,
prev_membership_event: PduEvent,
) -> Result<(TimelinePdus, Vec<PduEvent>)> {
-let SyncContext {
-syncing_user,
-last_sync_end_count,
-filter,
-..
-} = sync_context;
+let SyncContext { syncing_user, filter, .. } = sync_context;
-let timeline_start_count = if let Some(last_sync_end_count) = last_sync_end_count {
-// for incremental syncs, start the timeline after `since`
-PduCount::Normal(last_sync_end_count)
-} else {
-// for initial syncs, start the timeline after the previous membership
-// event. we don't want to include the membership event itself
-// because clients get confused when they see a `join`
-// membership event in a `leave` room.
-services
-.rooms
-.timeline
-.get_pdu_count(&prev_membership_event.event_id)
-.await?
-};
+let timeline_start_count = services
+.rooms
+.timeline
+.get_pdu_count(&prev_membership_event.event_id)
+.await?;
// end the timeline at the user's leave event
let timeline_end_count = services


@@ -11,7 +11,7 @@
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
use conduwuit::{
-Result, extract_variant,
+Result, at, extract_variant,
utils::{
ReadyExt, TryFutureExtExt,
stream::{BroadbandExt, Tools, WidebandExt},
@@ -297,12 +297,18 @@ pub(crate) async fn build_sync_events(
.rooms
.state_cache
.rooms_left(syncing_user)
-.broad_filter_map(|(room_id, leave_pdu)| {
-load_left_room(services, context, room_id.clone(), leave_pdu)
-.map_ok(move |left_room| (room_id, left_room))
-.ok()
+.broad_filter_map(|(room_id, leave_pdu)| async {
+let left_room = load_left_room(services, context, room_id.clone(), leave_pdu).await;
+match left_room {
+| Ok(Some(left_room)) => Some((room_id, left_room)),
+| Ok(None) => None,
+| Err(err) => {
+warn!(?err, %room_id, "error loading left room");
+None
+},
+}
})
-.ready_filter_map(|(room_id, left_room)| left_room.map(|left_room| (room_id, left_room)))
.collect();
let invited_rooms = services
@@ -385,6 +391,7 @@ pub(crate) async fn build_sync_events(
last_sync_end_count,
Some(current_count),
)
.map(at!(1))
.collect::<Vec<_>>();
let device_one_time_keys_count = services


@@ -336,7 +336,9 @@ async fn handle_lists<'a, Rooms, AllRooms>(
let ranges = list.ranges.clone();
for mut range in ranges {
-range.0 = uint!(0);
+range.0 = range
+.0
+.min(UInt::try_from(active_rooms.len()).unwrap_or(UInt::MAX));
-range.1 = range.1.checked_add(uint!(1)).unwrap_or(range.1);
+range.1 = range
+.1
@@ -1027,6 +1029,7 @@ async fn collect_to_device(
events: services
.users
.get_to_device_events(sender_user, sender_device, None, Some(next_batch))
.map(at!(1))
.collect()
.await,
})
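The hunk above replaces the hard reset `range.0 = uint!(0)` with clamping against the number of active rooms (the tail of the `range.1` expression is truncated in this view). A plain-`u64` sketch of that bounds logic — the real code uses ruma's `UInt`:

```rust
/// Clamp an inclusive sliding-sync window `(start, end)` to the number of
/// active rooms, as the bounds checks above do with `UInt` arithmetic.
fn clamp_range(range: (u64, u64), active_rooms: u64) -> (u64, u64) {
    let start = range.0.min(active_rooms);
    // the range is inclusive, so widen the end by one before clamping,
    // guarding against overflow with checked_add
    let end = range.1.checked_add(1).unwrap_or(range.1).min(active_rooms);
    (start, end)
}
```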


@@ -50,6 +50,7 @@ pub(crate) async fn get_supported_versions_route(
("org.matrix.msc2836".to_owned(), true), /* threading/threads (https://github.com/matrix-org/matrix-spec-proposals/pull/2836) */
("org.matrix.msc2946".to_owned(), true), /* spaces/hierarchy summaries (https://github.com/matrix-org/matrix-spec-proposals/pull/2946) */
("org.matrix.msc3026.busy_presence".to_owned(), true), /* busy presence status (https://github.com/matrix-org/matrix-spec-proposals/pull/3026) */
("org.matrix.msc3814".to_owned(), true), /* dehydrated devices */
("org.matrix.msc3827".to_owned(), true), /* filtering of /publicRooms by room type (https://github.com/matrix-org/matrix-spec-proposals/pull/3827) */
("org.matrix.msc3952_intentional_mentions".to_owned(), true), /* intentional mentions (https://github.com/matrix-org/matrix-spec-proposals/pull/3952) */
("org.matrix.msc3916.stable".to_owned(), true), /* authenticated media (https://github.com/matrix-org/matrix-spec-proposals/pull/3916) */


@@ -27,10 +27,32 @@ pub(crate) async fn well_known_client(
identity_server: None,
sliding_sync_proxy: Some(SlidingSyncProxyInfo { url: client_url }),
tile_server: None,
-rtc_foci: services.config.well_known.rtc_focus_server_urls.clone(),
+rtc_foci: services
+.config
+.matrix_rtc
+.effective_foci(&services.config.well_known.rtc_focus_server_urls)
+.to_vec(),
})
}
/// # `GET /_matrix/client/v1/rtc/transports`
/// # `GET /_matrix/client/unstable/org.matrix.msc4143/rtc/transports`
///
/// Returns the list of MatrixRTC foci (transports) configured for this
/// homeserver, implementing MSC4143.
pub(crate) async fn get_rtc_transports(
State(services): State<crate::State>,
_body: Ruma<ruma::api::client::discovery::get_rtc_transports::Request>,
) -> Result<ruma::api::client::discovery::get_rtc_transports::Response> {
Ok(ruma::api::client::discovery::get_rtc_transports::Response::new(
services
.config
.matrix_rtc
.effective_foci(&services.config.well_known.rtc_focus_server_urls)
.to_vec(),
))
}
/// # `GET /.well-known/matrix/support`
///
/// Server support contact and support page of a homeserver's domain.


@@ -160,6 +160,10 @@ pub fn build(router: Router<State>, server: &Server) -> Router<State> {
.ruma_route(&client::update_device_route)
.ruma_route(&client::delete_device_route)
.ruma_route(&client::delete_devices_route)
.ruma_route(&client::put_dehydrated_device_route)
.ruma_route(&client::delete_dehydrated_device_route)
.ruma_route(&client::get_dehydrated_device_route)
.ruma_route(&client::get_dehydrated_events_route)
.ruma_route(&client::get_tags_route)
.ruma_route(&client::update_tag_route)
.ruma_route(&client::delete_tag_route)
@@ -184,6 +188,7 @@ pub fn build(router: Router<State>, server: &Server) -> Router<State> {
.ruma_route(&client::put_suspended_status)
.ruma_route(&client::well_known_support)
.ruma_route(&client::well_known_client)
.ruma_route(&client::get_rtc_transports)
.route("/_conduwuit/server_version", get(client::conduwuit_server_version))
.route("/_continuwuity/server_version", get(client::conduwuit_server_version))
.ruma_route(&client::room_initial_sync_route)


@@ -67,23 +67,17 @@ pub(super) async fn auth(
if metadata.authentication == AuthScheme::None {
match metadata {
| &get_public_rooms::v3::Request::METADATA => {
-if !services
-.server
-.config
-.allow_public_room_directory_without_auth
-{
-match token {
-| Token::Appservice(_) | Token::User(_) => {
-// we should have validated the token above
-// already
-},
-| Token::None | Token::Invalid => {
-return Err(Error::BadRequest(
-ErrorKind::MissingToken,
-"Missing or invalid access token.",
-));
-},
-}
+match token {
+| Token::Appservice(_) | Token::User(_) => {
+// we should have validated the token above
+// already
+},
+| Token::None | Token::Invalid => {
+return Err(Error::BadRequest(
+ErrorKind::MissingToken,
+"Missing or invalid access token.",
+));
+},
+}
},
| &get_profile::v3::Request::METADATA


@@ -174,6 +174,7 @@ pub fn check(config: &Config) -> Result {
if config.allow_registration
&& config.yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse
&& config.registration_token.is_none()
&& config.registration_token_file.is_none()
{
warn!(
"Open registration is enabled via setting \


@@ -609,19 +609,25 @@ pub struct Config {
pub yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse: bool,
/// A static registration token that new users will have to provide when
-/// creating an account. If unset and `allow_registration` is true,
-/// you must set
-/// `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse`
-/// to true to allow open registration without any conditions.
-///
-/// If you do not want to set a static token, the `!admin token` commands
-/// may also be used to manage registration tokens.
+/// creating an account. This token does not supersede tokens from other
+/// sources, such as the `!admin token` command or the
+/// `registration_token_file` configuration option.
///
/// example: "o&^uCtes4HPf0Vu@F20jQeeWE7"
///
/// display: sensitive
pub registration_token: Option<String>,
/// A path to a file containing static registration tokens, one per line.
/// Tokens in this file do not supersede tokens from other sources, such as
/// the `!admin token` command or the `registration_token` configuration
/// option.
///
/// The file will be read once, when Continuwuity starts. It is not
/// currently reread when the server configuration is reloaded. If the file
/// cannot be read, Continuwuity will fail to start.
pub registration_token_file: Option<PathBuf>,
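Taken together, the two static-token options might appear in a server config as follows. This is a hypothetical excerpt: the file path is an example, and the token value is the placeholder from the docs above.

```toml
[global]
allow_registration = true

# a single static token; does not supersede other token sources
registration_token = "o&^uCtes4HPf0Vu@F20jQeeWE7"

# additional static tokens, one per line; read once at startup
registration_token_file = "/etc/continuwuity/registration_tokens"
```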
/// The public site key for reCaptcha. If this is provided, reCaptcha
/// becomes required during registration. If both captcha *and*
/// registration token are enabled, both will be required during
@@ -678,12 +684,6 @@ pub struct Config {
#[serde(default)]
pub allow_public_room_directory_over_federation: bool,
/// Set this to true to allow your server's public room directory to be
/// queried without client authentication (access token) through the Client
/// APIs. Set this to false to protect against /publicRooms spiders.
#[serde(default)]
pub allow_public_room_directory_without_auth: bool,
/// Allow guests/unauthenticated users to access TURN credentials.
///
/// This is the equivalent of Synapse's `turn_allow_guests` config option.
@@ -1735,6 +1735,11 @@ pub struct Config {
/// default: "continuwuity/<version> (bot; +https://continuwuity.org)"
pub url_preview_user_agent: Option<String>,
/// Determines whether audio and video files will be downloaded for URL
/// previews.
#[serde(default)]
pub url_preview_allow_audio_video: bool,
/// List of forbidden room aliases and room IDs as strings of regex
/// patterns.
///
@@ -2074,6 +2079,16 @@ pub struct Config {
pub allow_invalid_tls_certificates_yes_i_know_what_the_fuck_i_am_doing_with_this_and_i_know_this_is_insecure:
bool,
/// Forcibly disables first-run mode.
///
/// This is intended to be used for Complement testing to allow the test
/// suite to register users, because first-run mode interferes with open
/// registration.
///
/// display: hidden
#[serde(default)]
pub force_disable_first_run_mode: bool,
/// display: nested
#[serde(default)]
pub ldap: LdapConfig,
@@ -2086,6 +2101,12 @@ pub struct Config {
/// display: nested
#[serde(default)]
pub blurhashing: BlurhashConfig,
/// Configuration for MatrixRTC (MSC4143) transport discovery.
/// display: nested
#[serde(default)]
pub matrix_rtc: MatrixRtcConfig,
#[serde(flatten)]
#[allow(clippy::zero_sized_map_values)]
// this is a catchall, the map shouldn't be zero at runtime
@@ -2151,17 +2172,16 @@ pub struct WellKnownConfig {
/// listed.
pub support_mxid: Option<OwnedUserId>,
-/// A list of MatrixRTC foci URLs which will be served as part of the
-/// MSC4143 client endpoint at /.well-known/matrix/client. If you're
-/// setting up livekit, you'd want something like:
-/// rtc_focus_server_urls = [
-///     { type = "livekit", livekit_service_url = "https://livekit.example.com" },
-/// ]
+/// **DEPRECATED**: Use `[global.matrix_rtc].foci` instead.
///
-/// To disable, set this to be an empty vector (`[]`).
+/// A list of MatrixRTC foci URLs which will be served as part of the
+/// MSC4143 client endpoint at /.well-known/matrix/client.
///
+/// This option is deprecated and will be removed in a future release.
+/// Please migrate to the new `[global.matrix_rtc]` config section.
///
/// default: []
-#[serde(default = "default_rtc_focus_urls")]
+#[serde(default)]
pub rtc_focus_server_urls: Vec<RtcFocusInfo>,
}
@@ -2190,6 +2210,43 @@ pub struct BlurhashConfig {
pub blurhash_max_raw_size: u64,
}
#[derive(Clone, Debug, Deserialize, Default)]
#[config_example_generator(filename = "conduwuit-example.toml", section = "global.matrix_rtc")]
pub struct MatrixRtcConfig {
/// A list of MatrixRTC foci (transports) which will be served via the
/// MSC4143 RTC transports endpoint at
/// `/_matrix/client/v1/rtc/transports`. If you're setting up livekit,
/// you'd want something like:
/// ```toml
/// [global.matrix_rtc]
/// foci = [
/// { type = "livekit", livekit_service_url = "https://livekit.example.com" },
/// ]
/// ```
///
/// To disable, set this to an empty list (`[]`).
///
/// default: []
#[serde(default)]
pub foci: Vec<RtcFocusInfo>,
}
impl MatrixRtcConfig {
/// Returns the effective foci, falling back to the deprecated
/// `rtc_focus_server_urls` if the new config is empty.
#[must_use]
pub fn effective_foci<'a>(
&'a self,
deprecated_foci: &'a [RtcFocusInfo],
) -> &'a [RtcFocusInfo] {
if !self.foci.is_empty() {
&self.foci
} else {
deprecated_foci
}
}
}
#[derive(Clone, Debug, Default, Deserialize)]
#[config_example_generator(filename = "conduwuit-example.toml", section = "global.ldap")]
pub struct LdapConfig {
@@ -2383,6 +2440,7 @@ pub struct DraupnirConfig {
"well_known_support_email",
"well_known_support_mxid",
"registration_token_file",
"well_known.rtc_focus_server_urls",
];
impl Config {
@@ -2666,9 +2724,6 @@ fn default_rocksdb_stats_level() -> u8 { 1 }
#[inline]
pub fn default_default_room_version() -> RoomVersionId { RoomVersionId::V11 }
-#[must_use]
-pub fn default_rtc_focus_urls() -> Vec<RtcFocusInfo> { vec![] }
fn default_ip_range_denylist() -> Vec<String> {
vec![
"127.0.0.0/8".to_owned(),


@@ -191,6 +191,7 @@ pub fn status_code(&self) -> http::StatusCode {
| Self::Reqwest(error) => error.status().unwrap_or(StatusCode::INTERNAL_SERVER_ERROR),
| Self::Conflict(_) => StatusCode::CONFLICT,
| Self::Io(error) => response::io_error_code(error.kind()),
| Self::Uiaa(_) => StatusCode::UNAUTHORIZED,
| _ => StatusCode::INTERNAL_SERVER_ERROR,
}
}


@@ -11,6 +11,7 @@
pub mod math;
pub mod mutex_map;
pub mod rand;
pub mod response;
pub mod result;
pub mod set;
pub mod stream;


@@ -0,0 +1,51 @@
use futures::StreamExt;
use num_traits::ToPrimitive;
use crate::Err;
/// Reads the response body while enforcing a maximum size limit to prevent
/// memory exhaustion.
pub async fn limit_read(response: reqwest::Response, max_size: u64) -> crate::Result<Vec<u8>> {
if response.content_length().is_some_and(|len| len > max_size) {
return Err!(BadServerResponse("Response too large"));
}
let mut data = Vec::new();
let mut reader = response.bytes_stream();
while let Some(chunk) = reader.next().await {
let chunk = chunk?;
data.extend_from_slice(&chunk);
if data.len() > max_size.to_usize().expect("max_size must fit in usize") {
return Err!(BadServerResponse("Response too large"));
}
}
Ok(data)
}
/// Reads the response body as text while enforcing a maximum size limit to
/// prevent memory exhaustion.
pub async fn limit_read_text(
response: reqwest::Response,
max_size: u64,
) -> crate::Result<String> {
let text = String::from_utf8(limit_read(response, max_size).await?)?;
Ok(text)
}
#[allow(async_fn_in_trait)]
pub trait LimitReadExt {
async fn limit_read(self, max_size: u64) -> crate::Result<Vec<u8>>;
async fn limit_read_text(self, max_size: u64) -> crate::Result<String>;
}
impl LimitReadExt for reqwest::Response {
async fn limit_read(self, max_size: u64) -> crate::Result<Vec<u8>> {
limit_read(self, max_size).await
}
async fn limit_read_text(self, max_size: u64) -> crate::Result<String> {
limit_read_text(self, max_size).await
}
}
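The key property of `limit_read` above is that it aborts as soon as the accumulated body exceeds the cap, instead of buffering the whole response first. That streaming check can be sketched over any chunk source; this stand-in uses a plain iterator instead of `reqwest`'s byte stream:

```rust
/// Accumulate chunks, failing fast once `max_size` is exceeded,
/// like `limit_read` does over `response.bytes_stream()`.
fn limit_read_chunks<I>(chunks: I, max_size: usize) -> Result<Vec<u8>, &'static str>
where
    I: IntoIterator<Item = Vec<u8>>,
{
    let mut data = Vec::new();
    for chunk in chunks {
        data.extend_from_slice(&chunk);
        // checked after every chunk, so memory use never exceeds
        // max_size plus one chunk
        if data.len() > max_size {
            return Err("Response too large");
        }
    }
    Ok(data)
}
```

The real function additionally rejects up front when the `Content-Length` header already exceeds the limit, avoiding any read at all.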


@@ -362,6 +362,10 @@ pub(super) fn open_list(db: &Arc<Engine>, maps: &[Descriptor]) -> Result<Maps> {
name: "userid_blurhash",
..descriptor::RANDOM_SMALL
},
Descriptor {
name: "userid_dehydrateddevice",
..descriptor::RANDOM_SMALL
},
Descriptor {
name: "userid_devicelistversion",
..descriptor::RANDOM_SMALL


@@ -530,7 +530,12 @@ async fn handle_response_error(
Ok(())
}
-pub async fn is_admin_command<E>(&self, event: &E, body: &str) -> Option<InvocationSource>
+pub async fn is_admin_command<E>(
+&self,
+event: &E,
+body: &str,
+sent_locally: bool,
+) -> Option<InvocationSource>
where
E: Event + Send + Sync,
{
@@ -580,6 +585,15 @@ pub async fn is_admin_command<E>(&self, event: &E, body: &str) -> Option<Invocat
return None;
}
// Escaped commands must be sent locally (via client API), not via federation
if !sent_locally {
conduwuit::warn!(
"Ignoring escaped admin command from {} that arrived via federation",
event.sender()
);
return None;
}
// Looks good
Some(InvocationSource::EscapedCommand)
}


@@ -18,7 +18,7 @@
use std::{sync::Arc, time::Duration};
use async_trait::async_trait;
-use conduwuit::{Result, Server, debug, error, warn};
+use conduwuit::{Result, Server, debug, error, utils::response::LimitReadExt, warn};
use database::{Deserialized, Map};
use ruma::events::{Mentions, room::message::RoomMessageEventContent};
use serde::Deserialize;
@@ -137,7 +137,7 @@ async fn check(&self) -> Result<()> {
.get(CHECK_FOR_ANNOUNCEMENTS_URL)
.send()
.await?
-.text()
+.limit_read_text(1024 * 1024)
.await?;
let response = serde_json::from_str::<CheckForAnnouncementsResponse>(&response)?;


@@ -272,7 +272,10 @@ pub async fn get_db_registration(&self, id: &str) -> Result<Registration> {
.get(id)
.await
.and_then(|ref bytes| serde_saphyr::from_slice(bytes).map_err(Into::into))
-.map_err(|e| err!(Database("Invalid appservice {id:?} registration: {e:?}")))
+.map_err(|e| {
+self.db.id_appserviceregistrations.remove(id);
+err!(Database("Invalid appservice {id:?} registration: {e:?}. Removed."))
+})
}
pub fn read(&self) -> impl Future<Output = RwLockReadGuard<'_, Registrations>> + Send {


@@ -19,10 +19,9 @@ impl Service {
/// Get the registration token set in the config file, if it exists.
#[must_use]
pub fn get_config_file_token(&self) -> Option<ValidToken> {
-self.registration_token.clone().map(|token| ValidToken {
-token,
-source: ValidTokenSource::ConfigFile,
-})
+self.registration_token
+.clone()
+.map(|token| ValidToken { token, source: ValidTokenSource::Config })
}
}


@@ -2,8 +2,8 @@
use bytes::Bytes;
use conduwuit::{
-Err, Error, Result, debug, debug::INFO_SPAN_LEVEL, debug_error, debug_warn, err,
-error::inspect_debug_log, implement, trace,
+Err, Error, Result, debug, debug::INFO_SPAN_LEVEL, debug_error, debug_warn, err, implement,
+trace, utils::response::LimitReadExt,
};
use http::{HeaderValue, header::AUTHORIZATION};
use ipaddress::IPAddress;
@@ -133,7 +133,22 @@ async fn handle_response<T>(
where
T: OutgoingRequest + Send,
{
-let response = into_http_response(dest, actual, method, url, response).await?;
+const HUGE_ENDPOINTS: [&str; 2] =
+["/_matrix/federation/v2/send_join/", "/_matrix/federation/v2/state/"];
+let size_limit: u64 = if HUGE_ENDPOINTS.iter().any(|e| url.path().starts_with(e)) {
+// Some federation endpoints can return huge response bodies, so we'll bump the
+// limit for those endpoints specifically.
+self.services
+.server
+.config
+.max_request_size
+.saturating_mul(10)
+} else {
+self.services.server.config.max_request_size
+}
+.try_into()
+.expect("size_limit (usize) should fit within a u64");
+let response = into_http_response(dest, actual, method, url, response, size_limit).await?;
T::IncomingResponse::try_from_http_response(response)
.map_err(|e| err!(BadServerResponse("Server returned bad 200 response: {e:?}")))
@@ -145,6 +160,7 @@ async fn into_http_response(
method: &Method,
url: &Url,
mut response: Response,
max_size: u64,
) -> Result<http::Response<Bytes>> {
let status = response.status();
trace!(
@@ -167,14 +183,14 @@ async fn into_http_response(
);
trace!("Waiting for response body...");
-let body = response
-.bytes()
-.await
-.inspect_err(inspect_debug_log)
-.unwrap_or_else(|_| Vec::new().into());
let http_response = http_response_builder
-.body(body)
+.body(
+response
+.limit_read(max_size)
+.await
+.unwrap_or_default()
+.into(),
+)
.expect("reqwest body is valid http body");
debug!("Got {status:?} for {method} {url}");
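The limit selection in `handle_response` above is a simple prefix check. Pulled out of the client as a pure function over the request path and the configured cap, it looks roughly like this:

```rust
/// Federation endpoints known to return very large bodies get a
/// ten-fold response-size cap, as in the client code above.
const HUGE_ENDPOINTS: [&str; 2] =
    ["/_matrix/federation/v2/send_join/", "/_matrix/federation/v2/state/"];

fn size_limit(path: &str, max_request_size: u64) -> u64 {
    if HUGE_ENDPOINTS.iter().any(|e| path.starts_with(e)) {
        // saturating_mul avoids overflow for absurdly large configured caps
        max_request_size.saturating_mul(10)
    } else {
        max_request_size
    }
}
```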


@@ -67,15 +67,17 @@ fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {
fn name(&self) -> &str { crate::service::make_name(std::module_path!()) }
async fn worker(self: Arc<Self>) -> Result {
-// first run mode will be enabled if there are no local users
-let is_first_run = self
-.services
-.users
-.list_local_users()
-.ready_filter(|user| *user != self.services.globals.server_user)
-.next()
-.await
-.is_none();
+// first run mode will be enabled if there are no local users, provided it's not
+// forcibly disabled for Complement tests
+let is_first_run = !self.services.config.force_disable_first_run_mode
+&& self
+.services
+.users
+.list_local_users()
+.ready_filter(|user| *user != self.services.globals.server_user)
+.next()
+.await
+.is_none();
self.first_run_marker
.set(if is_first_run {


@@ -142,6 +142,10 @@ pub fn url_preview_check_root_domain(&self) -> bool {
self.server.config.url_preview_check_root_domain
}
pub fn url_preview_allow_audio_video(&self) -> bool {
self.server.config.url_preview_allow_audio_video
}
pub fn forbidden_alias_names(&self) -> &RegexSet { &self.server.config.forbidden_alias_names }
pub fn forbidden_usernames(&self) -> &RegexSet { &self.server.config.forbidden_usernames }


@@ -207,6 +207,28 @@ pub(super) fn set_url_preview(
value.extend_from_slice(&data.image_width.unwrap_or(0).to_be_bytes());
value.push(0xFF);
value.extend_from_slice(&data.image_height.unwrap_or(0).to_be_bytes());
value.push(0xFF);
value.extend_from_slice(
data.video
.as_ref()
.map(String::as_bytes)
.unwrap_or_default(),
);
value.push(0xFF);
value.extend_from_slice(&data.video_size.unwrap_or(0).to_be_bytes());
value.push(0xFF);
value.extend_from_slice(&data.video_width.unwrap_or(0).to_be_bytes());
value.push(0xFF);
value.extend_from_slice(&data.video_height.unwrap_or(0).to_be_bytes());
value.push(0xFF);
value.extend_from_slice(
data.audio
.as_ref()
.map(String::as_bytes)
.unwrap_or_default(),
);
value.push(0xFF);
value.extend_from_slice(&data.audio_size.unwrap_or(0).to_be_bytes());
self.url_previews.insert(url.as_bytes(), &value);
@@ -267,6 +289,48 @@ pub(super) async fn get_url_preview(&self, url: &str) -> Result<UrlPreviewData>
| Some(0) => None,
| x => x,
};
let video = match values
.next()
.and_then(|b| String::from_utf8(b.to_vec()).ok())
{
| Some(s) if s.is_empty() => None,
| x => x,
};
let video_size = match values
.next()
.map(|b| usize::from_be_bytes(b.try_into().unwrap_or_default()))
{
| Some(0) => None,
| x => x,
};
let video_width = match values
.next()
.map(|b| u32::from_be_bytes(b.try_into().unwrap_or_default()))
{
| Some(0) => None,
| x => x,
};
let video_height = match values
.next()
.map(|b| u32::from_be_bytes(b.try_into().unwrap_or_default()))
{
| Some(0) => None,
| x => x,
};
let audio = match values
.next()
.and_then(|b| String::from_utf8(b.to_vec()).ok())
{
| Some(s) if s.is_empty() => None,
| x => x,
};
let audio_size = match values
.next()
.map(|b| usize::from_be_bytes(b.try_into().unwrap_or_default()))
{
| Some(0) => None,
| x => x,
};
Ok(UrlPreviewData {
title,
@@ -275,6 +339,12 @@ pub(super) async fn get_url_preview(&self, url: &str) -> Result<UrlPreviewData>
image_size,
image_width,
image_height,
video,
video_size,
video_width,
video_height,
audio,
audio_size,
})
}
}
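The row format used by `set_url_preview` and `get_url_preview` above stores fields as 0xFF-separated byte runs: 0xFF never occurs in valid UTF-8, so it is a safe delimiter, and empty runs decode to `None`. A toy round-trip of the string fields (ignoring the fixed-width numeric fields, which the real code writes as big-endian bytes):

```rust
/// Join string fields with a 0xFF separator, as `set_url_preview` does.
fn encode(fields: &[&str]) -> Vec<u8> {
    let mut value = Vec::new();
    for (i, field) in fields.iter().enumerate() {
        if i > 0 {
            value.push(0xFF);
        }
        value.extend_from_slice(field.as_bytes());
    }
    value
}

/// Split on 0xFF and map empty or non-UTF-8 runs to `None`,
/// as `get_url_preview` does when reading a row back.
fn decode(value: &[u8]) -> Vec<Option<String>> {
    value
        .split(|&b| b == 0xFF)
        .map(|b| String::from_utf8(b.to_vec()).ok().filter(|s| !s.is_empty()))
        .collect()
}
```

The empty-run-to-`None` convention is why the real decoder also maps a stored `0` back to `None` for the numeric fields.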


@@ -7,9 +7,11 @@
use std::time::SystemTime;
-use conduwuit::{Err, Result, debug, err};
+use conduwuit::{Err, Result, debug, err, utils::response::LimitReadExt};
use conduwuit_core::implement;
use ipaddress::IPAddress;
#[cfg(feature = "url_preview")]
use ruma::OwnedMxcUri;
use serde::Serialize;
use url::Url;
@@ -29,6 +31,18 @@ pub struct UrlPreviewData {
pub image_width: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "og:image:height"))]
pub image_height: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "og:video"))]
pub video: Option<String>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "matrix:video:size"))]
pub video_size: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "og:video:width"))]
pub video_width: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "og:video:height"))]
pub video_height: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "og:audio"))]
pub audio: Option<String>,
#[serde(skip_serializing_if = "Option::is_none", rename(serialize = "matrix:audio:size"))]
pub audio_size: Option<usize>,
}
#[implement(Service)]
@@ -96,7 +110,9 @@ async fn request_url_preview(&self, url: &Url) -> Result<UrlPreviewData> {
let data = match content_type {
| html if html.starts_with("text/html") => self.download_html(url.as_str()).await?,
-| img if img.starts_with("image/") => self.download_image(url.as_str()).await?,
+| img if img.starts_with("image/") => self.download_image(url.as_str(), None).await?,
+| video if video.starts_with("video/") => self.download_video(url.as_str(), None).await?,
+| audio if audio.starts_with("audio/") => self.download_audio(url.as_str(), None).await?,
| _ => return Err!(Request(Unknown("Unsupported Content-Type"))),
};
@@ -107,13 +123,34 @@ async fn request_url_preview(&self, url: &Url) -> Result<UrlPreviewData> {
#[cfg(feature = "url_preview")]
#[implement(Service)]
-pub async fn download_image(&self, url: &str) -> Result<UrlPreviewData> {
+pub async fn download_image(
+&self,
+url: &str,
+preview_data: Option<UrlPreviewData>,
+) -> Result<UrlPreviewData> {
use conduwuit::utils::random_string;
use image::ImageReader;
use ruma::Mxc;
-let image = self.services.client.url_preview.get(url).send().await?;
-let image = image.bytes().await?;
+let mut preview_data = preview_data.unwrap_or_default();
+let image = self
+.services
+.client
+.url_preview
+.get(url)
+.send()
+.await?
+.limit_read(
+self.services
+.server
+.config
+.max_request_size
+.try_into()
+.expect("u64 should fit in usize"),
+)
+.await?;
let mxc = Mxc {
server_name: self.services.globals.server_name(),
media_id: &random_string(super::MXC_LENGTH),
@@ -121,27 +158,125 @@ pub async fn download_image(&self, url: &str) -> Result<UrlPreviewData> {
self.create(&mxc, None, None, None, &image).await?;
-let cursor = std::io::Cursor::new(&image);
-let (width, height) = match ImageReader::new(cursor).with_guessed_format() {
-| Err(_) => (None, None),
-| Ok(reader) => match reader.into_dimensions() {
-| Ok((width, height)) => (Some(width), Some(height)),
-},
+preview_data.image = Some(mxc.to_string());
+if preview_data.image_height.is_none() || preview_data.image_width.is_none() {
+let cursor = std::io::Cursor::new(&image);
+let (width, height) = match ImageReader::new(cursor).with_guessed_format() {
+| Err(_) => (None, None),
+| Ok(reader) => match reader.into_dimensions() {
+| Err(_) => (None, None),
+| Ok((width, height)) => (Some(width), Some(height)),
+},
+};
+preview_data.image_width = width;
+preview_data.image_height = height;
+}
+Ok(preview_data)
}
#[cfg(feature = "url_preview")]
#[implement(Service)]
pub async fn download_video(
&self,
url: &str,
preview_data: Option<UrlPreviewData>,
) -> Result<UrlPreviewData> {
let mut preview_data = preview_data.unwrap_or_default();
if self.services.globals.url_preview_allow_audio_video() {
let (url, size) = self.download_media(url).await?;
preview_data.video = Some(url.to_string());
preview_data.video_size = Some(size);
}
Ok(preview_data)
}
#[cfg(feature = "url_preview")]
#[implement(Service)]
pub async fn download_audio(
&self,
url: &str,
preview_data: Option<UrlPreviewData>,
) -> Result<UrlPreviewData> {
let mut preview_data = preview_data.unwrap_or_default();
if self.services.globals.url_preview_allow_audio_video() {
let (url, size) = self.download_media(url).await?;
preview_data.audio = Some(url.to_string());
preview_data.audio_size = Some(size);
}
Ok(preview_data)
}
#[cfg(feature = "url_preview")]
#[implement(Service)]
pub async fn download_media(&self, url: &str) -> Result<(OwnedMxcUri, usize)> {
use conduwuit::utils::random_string;
use http::header::CONTENT_TYPE;
use ruma::Mxc;
let response = self.services.client.url_preview.get(url).send().await?;
let content_type = response.headers().get(CONTENT_TYPE).cloned();
let media = response
.limit_read(
self.services
.server
.config
.max_request_size
.try_into()
.expect("u64 should fit in usize"),
)
.await?;
let mxc = Mxc {
server_name: self.services.globals.server_name(),
media_id: &random_string(super::MXC_LENGTH),
};
let content_type = content_type.and_then(|v| v.to_str().map(ToOwned::to_owned).ok());
self.create(&mxc, None, None, content_type.as_deref(), &media)
.await?;
Ok((OwnedMxcUri::from(mxc.to_string()), media.len()))
}
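The repeated `limit_read(max_request_size)` calls in this diff all bound how much of a remote response is buffered. `limit_read` is an internal extension trait; as a rough std-only sketch of the same idea (names here are illustrative, not the project's API):

```rust
use std::io::{Cursor, Read};

/// Read at most `limit` bytes from `r`; error if the stream has more.
/// Illustrative analogue of a `limit_read`-style cap, not the project's API.
fn read_capped<R: Read>(mut r: R, limit: usize) -> std::io::Result<Vec<u8>> {
	let mut buf = Vec::new();
	// take() stops reading at limit + 1 bytes so oversize input is detectable
	r.by_ref().take(limit as u64 + 1).read_to_end(&mut buf)?;
	if buf.len() > limit {
		return Err(std::io::Error::new(
			std::io::ErrorKind::InvalidData,
			"response exceeds configured maximum size",
		));
	}
	Ok(buf)
}

fn main() {
	assert_eq!(read_capped(Cursor::new(b"hello".to_vec()), 8).unwrap(), b"hello".to_vec());
	assert!(read_capped(Cursor::new(vec![0u8; 16]), 8).is_err());
	println!("ok");
}
```

Reading one byte past the limit, rather than exactly the limit, is what lets the helper distinguish "exactly at the cap" from "too large".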
#[cfg(not(feature = "url_preview"))]
#[implement(Service)]
pub async fn download_image(
	&self,
	_url: &str,
	_preview_data: Option<UrlPreviewData>,
) -> Result<UrlPreviewData> {
Err!(FeatureDisabled("url_preview"))
}
#[cfg(not(feature = "url_preview"))]
#[implement(Service)]
pub async fn download_video(
&self,
_url: &str,
_preview_data: Option<UrlPreviewData>,
) -> Result<UrlPreviewData> {
Err!(FeatureDisabled("url_preview"))
}
#[cfg(not(feature = "url_preview"))]
#[implement(Service)]
pub async fn download_audio(
&self,
_url: &str,
_preview_data: Option<UrlPreviewData>,
) -> Result<UrlPreviewData> {
Err!(FeatureDisabled("url_preview"))
}
#[cfg(not(feature = "url_preview"))]
#[implement(Service)]
pub async fn download_media(&self, _url: &str) -> Result<(OwnedMxcUri, usize)> {
Err!(FeatureDisabled("url_preview"))
}
@@ -151,39 +286,46 @@ async fn download_html(&self, url: &str) -> Result<UrlPreviewData> {
use webpage::HTML;
let client = &self.services.client.url_preview;
let mut response = client.get(url).send().await?;
let mut bytes: Vec<u8> = Vec::new();
while let Some(chunk) = response.chunk().await? {
bytes.extend_from_slice(&chunk);
if bytes.len() > self.services.globals.url_preview_max_spider_size() {
debug!(
"Response body from URL {} exceeds url_preview_max_spider_size ({}), not \
processing the rest of the response body and assuming our necessary data is in \
this range.",
url,
self.services.globals.url_preview_max_spider_size()
);
break;
}
}
let body = String::from_utf8_lossy(&bytes);
let Ok(html) = HTML::from_string(body.to_string(), Some(url.to_owned())) else {
let body = client
.get(url)
.send()
.await?
.limit_read_text(
self.services
.server
.config
.max_request_size
.try_into()
.expect("u64 should fit in usize"),
)
.await?;
let Ok(html) = HTML::from_string(body.clone(), Some(url.to_owned())) else {
return Err!(Request(Unknown("Failed to parse HTML")));
};
let mut data = match html.opengraph.images.first() {
| None => UrlPreviewData::default(),
| Some(obj) => self.download_image(&obj.url).await?,
};
let mut preview_data = UrlPreviewData::default();
if let Some(obj) = html.opengraph.images.first() {
preview_data = self.download_image(&obj.url, Some(preview_data)).await?;
}
if let Some(obj) = html.opengraph.videos.first() {
preview_data = self.download_video(&obj.url, Some(preview_data)).await?;
preview_data.video_width = obj.properties.get("width").and_then(|v| v.parse().ok());
preview_data.video_height = obj.properties.get("height").and_then(|v| v.parse().ok());
}
if let Some(obj) = html.opengraph.audios.first() {
preview_data = self.download_audio(&obj.url, Some(preview_data)).await?;
}
let props = html.opengraph.properties;
/* use OpenGraph title/description, but fall back to HTML if not available */
data.title = props.get("title").cloned().or(html.title);
data.description = props.get("description").cloned().or(html.description);
preview_data.title = props.get("title").cloned().or(html.title);
preview_data.description = props.get("description").cloned().or(html.description);
Ok(data)
Ok(preview_data)
}
#[cfg(not(feature = "url_preview"))]

@@ -2,7 +2,7 @@
use conduwuit::{
Err, Error, Result, debug_warn, err, implement,
utils::content_disposition::make_content_disposition,
utils::{content_disposition::make_content_disposition, response::LimitReadExt},
};
use http::header::{CONTENT_DISPOSITION, CONTENT_TYPE, HeaderValue};
use ruma::{
@@ -286,10 +286,15 @@ async fn location_request(&self, location: &str) -> Result<FileMeta> {
.and_then(Result::ok);
response
.bytes()
.limit_read(
self.services
.server
.config
.max_request_size
.try_into()
.expect("u64 should fit in usize"),
)
.await
.map(Vec::from)
.map_err(Into::into)
.map(|content| FileMeta {
content: Some(content),
content_type: content_type.clone(),

@@ -1,8 +1,9 @@
use std::{cmp, collections::HashMap, future::ready};
use conduwuit::{
Err, Event, Pdu, Result, debug, debug_info, debug_warn, error, info,
Err, Event, Pdu, Result, debug, debug_info, debug_warn, err, error, info,
result::NotFound,
trace,
utils::{
IterStream, ReadyExt,
stream::{TryExpect, TryIgnore},
@@ -57,6 +58,7 @@ pub(crate) async fn migrations(services: &Services) -> Result<()> {
}
async fn fresh(services: &Services) -> Result<()> {
info!("Creating new fresh database");
let db = &services.db;
services.globals.db.bump_database_version(DATABASE_VERSION);
@@ -66,11 +68,18 @@ async fn fresh(services: &Services) -> Result<()> {
db["global"].insert(b"retroactively_fix_bad_data_from_roomuserid_joined", []);
db["global"].insert(b"fix_referencedevents_missing_sep", []);
db["global"].insert(b"fix_readreceiptid_readreceipt_duplicates", []);
db["global"].insert(b"fix_corrupt_msc4133_fields", []);
db["global"].insert(b"populate_userroomid_leftstate_table", []);
db["global"].insert(b"fix_local_invite_state", []);
// Create the admin room and server user on first run
crate::admin::create_admin_room(services).boxed().await?;
info!("Creating admin room and server user");
crate::admin::create_admin_room(services)
.boxed()
.await
.inspect_err(|e| error!("Failed to create admin room during db init: {e}"))?;
warn!("Created new RocksDB database with version {DATABASE_VERSION}");
info!("Created new database with version {DATABASE_VERSION}");
Ok(())
}
@@ -88,19 +97,33 @@ async fn migrate(services: &Services) -> Result<()> {
}
if services.globals.db.database_version().await < 12 {
db_lt_12(services).await?;
db_lt_12(services)
.await
.map_err(|e| err!("Failed to run v12 migrations: {e}"))?;
}
// This migration can be reused as-is anytime the server-default rules are
// updated.
if services.globals.db.database_version().await < 13 {
db_lt_13(services).await?;
db_lt_13(services)
.await
.map_err(|e| err!("Failed to run v13 migrations: {e}"))?;
}
if db["global"].get(b"feat_sha256_media").await.is_not_found() {
media::migrations::migrate_sha256_media(services).await?;
media::migrations::migrate_sha256_media(services)
.await
.map_err(|e| err!("Failed to run SHA256 media migration: {e}"))?;
} else if config.media_startup_check {
media::migrations::checkup_sha256_media(services).await?;
info!("Starting media startup integrity check.");
let now = std::time::Instant::now();
media::migrations::checkup_sha256_media(services)
.await
.map_err(|e| err!("Failed to verify media integrity: {e}"))?;
info!(
"Finished media startup integrity check in {} seconds.",
now.elapsed().as_secs_f32()
);
}
if db["global"]
@@ -108,7 +131,12 @@ async fn migrate(services: &Services) -> Result<()> {
.await
.is_not_found()
{
fix_bad_double_separator_in_state_cache(services).await?;
info!("Running migration 'fix_bad_double_separator_in_state_cache'");
fix_bad_double_separator_in_state_cache(services)
.await
.map_err(|e| {
err!("Failed to run 'fix_bad_double_separator_in_state_cache' migration: {e}")
})?;
}
if db["global"]
@@ -116,7 +144,15 @@ async fn migrate(services: &Services) -> Result<()> {
.await
.is_not_found()
{
retroactively_fix_bad_data_from_roomuserid_joined(services).await?;
info!("Running migration 'retroactively_fix_bad_data_from_roomuserid_joined'");
retroactively_fix_bad_data_from_roomuserid_joined(services)
.await
.map_err(|e| {
err!(
"Failed to run 'retroactively_fix_bad_data_from_roomuserid_joined' \
migration: {e}"
)
})?;
}
if db["global"]
@@ -125,7 +161,12 @@ async fn migrate(services: &Services) -> Result<()> {
.is_not_found()
|| services.globals.db.database_version().await < 17
{
fix_referencedevents_missing_sep(services).await?;
info!("Running migration 'fix_referencedevents_missing_sep'");
fix_referencedevents_missing_sep(services)
.await
.map_err(|e| {
err!("Failed to run 'fix_referencedevents_missing_sep' migration: {e}")
})?;
}
if db["global"]
@@ -134,7 +175,12 @@ async fn migrate(services: &Services) -> Result<()> {
.is_not_found()
|| services.globals.db.database_version().await < 17
{
fix_readreceiptid_readreceipt_duplicates(services).await?;
info!("Running migration 'fix_readreceiptid_readreceipt_duplicates'");
fix_readreceiptid_readreceipt_duplicates(services)
.await
.map_err(|e| {
err!("Failed to run 'fix_readreceiptid_readreceipt_duplicates' migration: {e}")
})?;
}
if services.globals.db.database_version().await < 17 {
@@ -147,7 +193,10 @@ async fn migrate(services: &Services) -> Result<()> {
.await
.is_not_found()
{
fix_corrupt_msc4133_fields(services).await?;
info!("Running migration 'fix_corrupt_msc4133_fields'");
fix_corrupt_msc4133_fields(services)
.await
.map_err(|e| err!("Failed to run 'fix_corrupt_msc4133_fields' migration: {e}"))?;
}
if services.globals.db.database_version().await < 18 {
@@ -160,7 +209,12 @@ async fn migrate(services: &Services) -> Result<()> {
.await
.is_not_found()
{
populate_userroomid_leftstate_table(services).await?;
info!("Running migration 'populate_userroomid_leftstate_table'");
populate_userroomid_leftstate_table(services)
.await
.map_err(|e| {
err!("Failed to run 'populate_userroomid_leftstate_table' migration: {e}")
})?;
}
if db["global"]
@@ -168,14 +222,17 @@ async fn migrate(services: &Services) -> Result<()> {
.await
.is_not_found()
{
fix_local_invite_state(services).await?;
info!("Running migration 'fix_local_invite_state'");
fix_local_invite_state(services)
.await
.map_err(|e| err!("Failed to run 'fix_local_invite_state' migration: {e}"))?;
}
assert_eq!(
services.globals.db.database_version().await,
DATABASE_VERSION,
"Failed asserting local database version {} is equal to known latest conduwuit database \
version {}",
"Failed asserting local database version {} is equal to known latest continuwuity \
database version {}",
services.globals.db.database_version().await,
DATABASE_VERSION,
);
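Each migration in the chain above is wrapped with `map_err(|e| err!("Failed to run '…' migration: {e}"))` so the failing step is named in the error. A minimal sketch of that context-wrapping pattern, with plain `String`s standing in for the project's `err!` macro and error type:

```rust
/// Attach the migration's name to any underlying error, mirroring the
/// `map_err` wrapping used in `migrate()` above (illustrative, not the
/// project's error type).
fn with_migration_context(name: &str, result: Result<(), String>) -> Result<(), String> {
	result.map_err(|e| format!("Failed to run '{name}' migration: {e}"))
}

fn main() {
	assert!(with_migration_context("fix_local_invite_state", Ok(())).is_ok());
	let e = with_migration_context("fix_local_invite_state", Err("disk full".into())).unwrap_err();
	assert_eq!(e, "Failed to run 'fix_local_invite_state' migration: disk full");
	println!("ok");
}
```

The success path passes through untouched; only the error variant is rewritten, which is why `map_err` is the right combinator here rather than `map`.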
@@ -370,7 +427,7 @@ async fn db_lt_13(services: &Services) -> Result<()> {
}
async fn fix_bad_double_separator_in_state_cache(services: &Services) -> Result<()> {
warn!("Fixing bad double separator in state_cache roomuserid_joined");
info!("Fixing bad double separator in state_cache roomuserid_joined");
let db = &services.db;
let roomuserid_joined = &db["roomuserid_joined"];
@@ -414,7 +471,7 @@ async fn fix_bad_double_separator_in_state_cache(services: &Services) -> Result<
}
async fn retroactively_fix_bad_data_from_roomuserid_joined(services: &Services) -> Result<()> {
warn!("Retroactively fixing bad data from broken roomuserid_joined");
info!("Retroactively fixing bad data from broken roomuserid_joined");
let db = &services.db;
let _cork = db.cork_and_sync();
@@ -504,7 +561,7 @@ async fn retroactively_fix_bad_data_from_roomuserid_joined(services: &Services)
}
async fn fix_referencedevents_missing_sep(services: &Services) -> Result {
warn!("Fixing missing record separator between room_id and event_id in referencedevents");
info!("Fixing missing record separator between room_id and event_id in referencedevents");
let db = &services.db;
let cork = db.cork_and_sync();
@@ -552,7 +609,7 @@ async fn fix_readreceiptid_readreceipt_duplicates(services: &Services) -> Result
type ArrayId = ArrayString<MAX_BYTES>;
type Key<'a> = (&'a RoomId, u64, &'a UserId);
warn!("Fixing undeleted entries in readreceiptid_readreceipt...");
info!("Fixing undeleted entries in readreceiptid_readreceipt...");
let db = &services.db;
let cork = db.cork_and_sync();
@@ -606,7 +663,7 @@ async fn fix_corrupt_msc4133_fields(services: &Services) -> Result {
use serde_json::{Value, from_slice};
type KeyVal<'a> = ((OwnedUserId, String), &'a [u8]);
warn!("Fixing corrupted `us.cloke.msc4175.tz` fields...");
info!("Fixing corrupted `us.cloke.msc4175.tz` fields...");
let db = &services.db;
let cork = db.cork_and_sync();
@@ -746,7 +803,18 @@ async fn fix_local_invite_state(services: &Services) -> Result {
let fixed = userroomid_invitestate.stream()
// if they're a local user on this homeserver
.try_filter(|((user_id, _), _): &KeyVal<'_>| ready(services.globals.user_is_local(user_id)))
.and_then(async |((user_id, room_id), stripped_state): KeyVal<'_>| Ok::<_, conduwuit::Error>((user_id.to_owned(), room_id.to_owned(), stripped_state.deserialize()?)))
.and_then(async |((user_id, room_id), stripped_state): KeyVal<'_>| Ok::<_, conduwuit::Error>((
	user_id.to_owned(),
	room_id.to_owned(),
	stripped_state.deserialize().unwrap_or_else(|e| {
		trace!("Failed to deserialize: {:?}", stripped_state.json());
		warn!(
			%user_id,
			%room_id,
			"Failed to deserialize stripped state for invite, removing from db: {e}"
		);
		userroomid_invitestate.del((user_id, room_id));
		vec![]
	}),
)))
.try_fold(0_usize, async |mut fixed, (user_id, room_id, stripped_state)| {
// and their invite state is None
if stripped_state.is_empty()

@@ -1,6 +1,7 @@
use std::{fmt::Debug, mem, sync::Arc};
use bytes::BytesMut;
use conduwuit::utils::response::LimitReadExt;
use conduwuit_core::{
Err, Event, Result, debug_warn, err, trace,
utils::{stream::TryIgnore, string_from_bytes},
@@ -30,7 +31,7 @@
uint,
};
use crate::{Dep, client, globals, rooms, sending, users};
use crate::{Dep, client, config, globals, rooms, sending, users};
pub struct Service {
db: Data,
@@ -39,6 +40,7 @@ pub struct Service {
struct Services {
globals: Dep<globals::Service>,
config: Dep<config::Service>,
client: Dep<client::Service>,
state_accessor: Dep<rooms::state_accessor::Service>,
state_cache: Dep<rooms::state_cache::Service>,
@@ -61,6 +63,7 @@ fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {
services: Services {
globals: args.depend::<globals::Service>("globals"),
client: args.depend::<client::Service>("client"),
config: args.depend::<config::Service>("config"),
state_accessor: args
.depend::<rooms::state_accessor::Service>("rooms::state_accessor"),
state_cache: args.depend::<rooms::state_cache::Service>("rooms::state_cache"),
@@ -245,7 +248,15 @@ pub async fn send_request<T>(&self, dest: &str, request: T) -> Result<T::Incomin
.expect("http::response::Builder is usable"),
);
let body = response.bytes().await?;
let body = response
.limit_read(
self.services
.config
.max_request_size
.try_into()
.expect("u64 should fit in usize"),
)
.await?;
if !status.is_success() {
debug_warn!("Push gateway response body: {:?}", string_from_bytes(&body));

View File

@@ -2,7 +2,7 @@
use std::{future::ready, pin::Pin, sync::Arc};
use conduwuit::{Err, Result, utils};
use conduwuit::{Err, Result, err, utils};
use data::Data;
pub use data::{DatabaseTokenInfo, TokenExpires};
use futures::{
@@ -18,6 +18,9 @@
pub struct Service {
db: Data,
services: Services,
/// The registration tokens which were read from the registration token
/// file, if one is configured.
registration_tokens_from_file: Vec<String>,
}
struct Services {
@@ -45,34 +48,54 @@ fn eq(&self, other: &str) -> bool { self.token == other }
/// The source of a valid database token.
#[derive(Debug)]
pub enum ValidTokenSource {
/// The static token set in the homeserver's config file, which is
/// always valid.
ConfigFile,
/// The static token set in the homeserver's config file.
Config,
/// A database token which has been checked to be valid.
Database(DatabaseTokenInfo),
/// The single-use token which may be used to create the homeserver's first
/// account.
FirstAccount,
/// A registration token read from the registration token file set in the
/// config.
TokenFile,
}
impl std::fmt::Display for ValidTokenSource {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
| Self::ConfigFile => write!(f, "Token defined in config."),
| Self::Config => write!(f, "Static token set in the server configuration."),
| Self::Database(info) => info.fmt(f),
| Self::FirstAccount => write!(f, "Initial setup token."),
| Self::TokenFile => write!(f, "Static token set in the registration token file."),
}
}
}
impl crate::Service for Service {
fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {
let registration_tokens_from_file = args.server.config.registration_token_file
.clone()
// If the token file option was set, read the path it points to
.map(std::fs::read_to_string)
.transpose()
.map_err(|err| err!("Failed to read registration token file: {err}"))
.map(|tokens| {
if let Some(tokens) = tokens {
// If the token file option was set, return the file's lines as tokens
tokens.lines().map(ToOwned::to_owned).collect()
} else {
// Otherwise, if the option wasn't set, return no tokens
vec![]
}
})?;
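The file read above turns each line of the registration token file into one token via `lines().map(ToOwned::to_owned).collect()`. That mapping in isolation (a sketch; whether blank lines should count as valid tokens is left open here, as in the original):

```rust
/// One registration token per line, mirroring the
/// `lines().map(ToOwned::to_owned).collect()` mapping above.
fn tokens_from_file(contents: &str) -> Vec<String> {
	contents.lines().map(ToOwned::to_owned).collect()
}

fn main() {
	// str::lines does not emit a trailing empty string after a final newline
	assert_eq!(tokens_from_file("alpha\nbeta\n"), vec!["alpha", "beta"]);
	println!("ok");
}
```

Note that `str::lines` splits on `\n` and strips a trailing `\r`, so files with CRLF endings also parse cleanly, but an interior blank line would become an empty-string token.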
Ok(Arc::new(Self {
db: Data::new(args.db),
services: Services {
config: args.depend::<config::Service>("config"),
firstrun: args.depend::<firstrun::Service>("firstrun"),
},
registration_tokens_from_file,
}))
}
@@ -97,12 +120,23 @@ pub fn issue_token(
(token, info)
}
/// Get all the "special" registration tokens that aren't defined in the
/// Get all the static registration tokens that aren't defined in the
/// database.
fn iterate_static_tokens(&self) -> impl Iterator<Item = ValidToken> {
// This does not include the first-account token, because it's special:
// no other registration tokens are valid when it is set.
self.services.config.get_config_file_token().into_iter()
// This does not include the first-account token, because it has special
// behavior: no other registration tokens are valid when it is set.
self.services
.config
.get_config_file_token()
.into_iter()
.chain(
self.registration_tokens_from_file
.iter()
.map(|token_string| ValidToken {
token: token_string.clone(),
source: ValidTokenSource::TokenFile,
}),
)
}
/// Validate a registration token.
@@ -158,7 +192,7 @@ pub fn mark_token_as_used(&self, ValidToken { token, source }: ValidToken) {
/// revoked.
pub fn revoke_token(&self, ValidToken { token, source }: ValidToken) -> Result {
match source {
| ValidTokenSource::ConfigFile => {
| ValidTokenSource::Config => {
Err!(
"The token set in the config file cannot be revoked. Edit the config file \
to change it."
@@ -171,6 +205,12 @@ pub fn revoke_token(&self, ValidToken { token, source }: ValidToken) -> Result {
| ValidTokenSource::FirstAccount => {
Err!("The initial setup token cannot be revoked.")
},
| ValidTokenSource::TokenFile => {
Err!(
"Tokens set in the registration token file cannot be revoked. Edit the \
registration token file and restart Continuwuity to change them."
)
},
}
}

@@ -1,4 +1,6 @@
use conduwuit::{Result, debug, debug_error, debug_info, debug_warn, implement, trace};
use conduwuit::{
Result, debug, debug_error, debug_info, implement, trace, utils::response::LimitReadExt,
};
#[implement(super::Service)]
#[tracing::instrument(name = "well-known", level = "debug", skip(self, dest))]
@@ -24,12 +26,8 @@ pub(super) async fn request_well_known(&self, dest: &str) -> Result<Option<Strin
return Ok(None);
}
let text = response.text().await?;
let text = response.limit_read_text(8192).await?;
trace!("response text: {text:?}");
if text.len() >= 12288 {
debug_warn!("response contains junk");
return Ok(None);
}
let body: serde_json::Value = serde_json::from_str(&text).unwrap_or_default();

@@ -94,6 +94,12 @@ pub fn set_alias(
#[tracing::instrument(skip(self))]
pub async fn remove_alias(&self, alias: &RoomAliasId, user_id: &UserId) -> Result<()> {
if alias == self.services.globals.admin_alias
&& user_id != self.services.globals.server_user
{
return Err!(Request(Forbidden("Only the server user can remove this alias")));
}
if !self.user_can_remove_alias(alias, user_id).await? {
return Err!(Request(Forbidden("User is not permitted to remove this alias.")));
}

@@ -63,7 +63,9 @@ pub(super) async fn fetch_state<Pdu>(
},
| hash_map::Entry::Occupied(_) => {
return Err!(Database(
"State event's type and state_key combination exists multiple times.",
"State event's type and state_key combination exists multiple times: {}, {}",
pdu.kind(),
state_key
));
},
}

@@ -134,7 +134,7 @@ pub async fn handle_incoming_pdu<'a>(
);
return Err!(Request(TooLarge("PDU is too large")));
}
trace!("processing incoming pdu from {origin} for room {room_id} with event id {event_id}");
trace!("processing incoming PDU from {origin} for room {room_id} with event id {event_id}");
// 1.1 Check we even know about the room
let meta_exists = self.services.metadata.exists(room_id).map(Ok);
@@ -164,7 +164,7 @@ pub async fn handle_incoming_pdu<'a>(
sender_acl_check.map(|o| o.unwrap_or(Ok(()))),
)
.await
.inspect_err(|e| debug_error!("failed to handle incoming PDU: {e}"))?;
.inspect_err(|e| debug_error!(%origin, "failed to handle incoming PDU {event_id}: {e}"))?;
if is_disabled {
return Err!(Request(Forbidden("Federation of this room is disabled by this server.")));
@@ -195,6 +195,7 @@ pub async fn handle_incoming_pdu<'a>(
}
info!(
%origin,
%room_id,
"Dropping inbound PDU for room we aren't participating in"
);
return Err!(Request(NotFound("This server is not participating in that room.")));

@@ -162,7 +162,9 @@ pub(super) async fn handle_outlier_pdu<'a, Pdu>(
},
| hash_map::Entry::Occupied(_) => {
return Err!(Request(InvalidParam(
"Auth event's type and state_key combination exists multiple times.",
"Auth event's type and state_key combination exists multiple times: {}, {}",
auth_event.kind,
auth_event.state_key().unwrap_or("")
)));
},
}

@@ -72,6 +72,26 @@ pub async fn append_incoming_pdu<'a, Leaves>(
.append_pdu(pdu, pdu_json, new_room_leaves, state_lock, room_id)
.await?;
// Process admin commands for federation events
if *pdu.kind() == TimelineEventType::RoomMessage {
let content: ExtractBody = pdu.get_content()?;
if let Some(body) = content.body {
if let Some(source) = self
.services
.admin
.is_admin_command(pdu, &body, false)
.await
{
self.services.admin.command_with_sender(
body,
Some(pdu.event_id().into()),
source,
pdu.sender.clone().into(),
)?;
}
}
}
Ok(Some(pdu_id))
}
@@ -334,15 +354,6 @@ pub async fn append_pdu<'a, Leaves>(
let content: ExtractBody = pdu.get_content()?;
if let Some(body) = content.body {
self.services.search.index_pdu(shortroomid, &pdu_id, &body);
if let Some(source) = self.services.admin.is_admin_command(pdu, &body).await {
self.services.admin.command_with_sender(
body,
Some((pdu.event_id()).into()),
source,
pdu.sender.clone().into(),
)?;
}
}
},
| _ => {},

@@ -18,7 +18,7 @@
},
};
use super::RoomMutexGuard;
use super::{ExtractBody, RoomMutexGuard};
/// Creates a new persisted data unit and adds it to a room. This function
/// takes a roomid_mutex_state, meaning that only this function is able to
@@ -126,6 +126,26 @@ pub async fn build_and_append_pdu(
.boxed()
.await?;
// Process admin commands for locally sent events
if *pdu.kind() == TimelineEventType::RoomMessage {
let content: ExtractBody = pdu.get_content()?;
if let Some(body) = content.body {
if let Some(source) = self
.services
.admin
.is_admin_command(&pdu, &body, true)
.await
{
self.services.admin.command_with_sender(
body,
Some(pdu.event_id().into()),
source,
pdu.sender.clone().into(),
)?;
}
}
}
// We set the room state after inserting the pdu, so that we never have a moment
// in time where events in the current room state do not exist
trace!("Setting room state for room {room_id}");
@@ -167,6 +187,8 @@ pub async fn build_and_append_pdu(
Ok(pdu.event_id().to_owned())
}
/// Assert invariants about the admin room, to prevent (for example) all admins
/// from leaving or being banned from the room
#[implement(super::Service)]
#[tracing::instrument(skip_all, level = "debug")]
async fn check_pdu_for_admin_room<Pdu>(&self, pdu: &Pdu, sender: &UserId) -> Result

@@ -1,7 +1,7 @@
use std::{fmt::Debug, mem};
use bytes::BytesMut;
use conduwuit::{Err, Result, debug_error, err, utils, warn};
use conduwuit::{Err, Result, debug_error, err, utils, utils::response::LimitReadExt, warn};
use reqwest::Client;
use ruma::api::{IncomingResponse, MatrixVersion, OutgoingRequest, SendAccessToken};
@@ -38,7 +38,7 @@ pub(crate) async fn send_antispam_request<T>(
.expect("http::response::Builder is usable"),
);
let body = response.bytes().await?; // TODO: handle timeout
let body = response.limit_read(65535).await?; // TODO: handle timeout
if !status.is_success() {
debug_error!("Antispam response bytes: {:?}", utils::string_from_bytes(&body));

@@ -1,7 +1,9 @@
use std::{fmt::Debug, mem};
use bytes::BytesMut;
use conduwuit::{Err, Result, debug_error, err, implement, trace, utils, warn};
use conduwuit::{
Err, Result, debug_error, err, implement, trace, utils, utils::response::LimitReadExt, warn,
};
use ruma::api::{
IncomingResponse, MatrixVersion, OutgoingRequest, SendAccessToken, appservice::Registration,
};
@@ -77,7 +79,15 @@ pub async fn send_appservice_request<T>(
.expect("http::response::Builder is usable"),
);
let body = response.bytes().await?;
let body = response
.limit_read(
self.server
.config
.max_request_size
.try_into()
.expect("u64 should fit in usize"),
)
.await?;
if !status.is_success() {
debug_error!("Appservice response bytes: {:?}", utils::string_from_bytes(&body));

@@ -10,7 +10,7 @@
use base64::{Engine as _, engine::general_purpose::URL_SAFE_NO_PAD};
use conduwuit_core::{
Error, Event, Result, debug, err, error,
Error, Event, Result, at, debug, err, error,
result::LogErr,
trace,
utils::{
@@ -175,7 +175,7 @@ async fn handle_response_ok<'a>(
if !new_events.is_empty() {
self.db.mark_as_active(new_events.iter());
let new_events_vec = new_events.into_iter().map(|(_, event)| event).collect();
let new_events_vec = new_events.into_iter().map(at!(1)).collect();
futures.push(self.send_events(dest.clone(), new_events_vec));
} else {
statuses.remove(dest);

@@ -228,7 +228,7 @@ async fn acquire_notary_result(&self, missing: &mut Batch, server_keys: ServerSi
self.add_signing_keys(server_keys.clone()).await;
if let Some(key_ids) = missing.get_mut(server) {
key_ids.retain(|key_id| key_exists(&server_keys, key_id));
key_ids.retain(|key_id| !key_exists(&server_keys, key_id));
if key_ids.is_empty() {
missing.remove(server);
}

@@ -1,7 +1,7 @@
use std::{any::Any, collections::BTreeMap, sync::Arc};
use conduwuit::{
Result, Server, SyncRwLock, debug, debug_info, info, trace, utils::stream::IterStream,
Result, Server, SyncRwLock, debug, debug_info, error, info, trace, utils::stream::IterStream,
};
use database::Database;
use futures::{Stream, StreamExt, TryStreamExt};
@@ -125,10 +125,12 @@ macro_rules! build {
}
pub async fn start(self: &Arc<Self>) -> Result<Arc<Self>> {
debug_info!("Starting services...");
info!("Starting services...");
self.admin.set_services(Some(Arc::clone(self)).as_ref());
super::migrations::migrations(self).await?;
super::migrations::migrations(self)
.await
.inspect_err(|e| error!("Migrations failed: {e}"))?;
self.manager
.lock()
.await
@@ -147,7 +149,7 @@ pub async fn start(self: &Arc<Self>) -> Result<Arc<Self>> {
.await;
}
debug_info!("Services startup complete.");
info!("Services startup complete.");
Ok(Arc::clone(self))
}

@@ -0,0 +1,149 @@
use conduwuit::{Err, Result, implement, trace};
use conduwuit_database::{Deserialized, Json};
use ruma::{
DeviceId, OwnedDeviceId, UserId,
api::client::dehydrated_device::{
DehydratedDeviceData, put_dehydrated_device::unstable::Request,
},
serde::Raw,
};
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct DehydratedDevice {
/// Unique ID of the device.
pub device_id: OwnedDeviceId,
/// Contains serialized and encrypted private data.
pub device_data: Raw<DehydratedDeviceData>,
}
/// Creates or recreates the user's dehydrated device.
#[implement(super::Service)]
#[tracing::instrument(
level = "info",
skip_all,
fields(
%user_id,
device_id = %request.device_id,
display_name = ?request.initial_device_display_name,
)
)]
pub async fn set_dehydrated_device(&self, user_id: &UserId, request: Request) -> Result {
assert!(
self.exists(user_id).await,
"Tried to create dehydrated device for non-existent user"
);
let existing_id = self.get_dehydrated_device_id(user_id).await;
if existing_id.is_err()
&& self
.get_device_metadata(user_id, &request.device_id)
.await
.is_ok()
{
return Err!("A hydrated device already exists with that ID.");
}
if let Ok(existing_id) = existing_id {
self.remove_device(user_id, &existing_id).await;
}
self.create_device(
user_id,
&request.device_id,
"",
request.initial_device_display_name.clone(),
None,
)
.await?;
trace!(device_data = ?request.device_data);
self.db.userid_dehydrateddevice.raw_put(
user_id,
Json(&DehydratedDevice {
device_id: request.device_id.clone(),
device_data: request.device_data,
}),
);
trace!(device_keys = ?request.device_keys);
self.add_device_keys(user_id, &request.device_id, &request.device_keys)
.await;
trace!(one_time_keys = ?request.one_time_keys);
for (one_time_key_key, one_time_key_value) in &request.one_time_keys {
self.add_one_time_key(user_id, &request.device_id, one_time_key_key, one_time_key_value)
.await?;
}
Ok(())
}
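`set_dehydrated_device` above enforces at-most-one dehydrated device per user, replacing any existing one. The replace semantics can be modeled with a plain map (hypothetical in-memory sketch, not the service's actual storage):

```rust
use std::collections::HashMap;

/// Each user has at most one dehydrated device; setting a new one replaces
/// any existing entry and yields the replaced device id, if any.
/// Hypothetical model of the semantics above, not the service's storage.
fn set_dehydrated(store: &mut HashMap<String, String>, user: &str, device: &str) -> Option<String> {
	// HashMap::insert returns the previous value for the key, which is
	// exactly the "old device to remove" in the real implementation
	store.insert(user.to_owned(), device.to_owned())
}

fn main() {
	let mut store = HashMap::new();
	assert_eq!(set_dehydrated(&mut store, "@a:example.org", "OLD"), None);
	assert_eq!(set_dehydrated(&mut store, "@a:example.org", "NEW"), Some("OLD".to_owned()));
	println!("ok");
}
```

In the service itself the replaced id drives a `remove_device` call before the new device is created, which is the step this toy model elides.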
/// Removes a user's dehydrated device.
///
/// Calling this directly removes the dehydrated data but leaks the
/// frontend device record. It is therefore invoked via the regular
/// device-removal interface so that neither is left behind.
///
/// If a device_id is given and it does not match the user's dehydrated
/// device, this is a no-op, and an Err is returned to indicate that.
/// Otherwise, returns the device_id of the removed dehydrated device.
#[implement(super::Service)]
#[tracing::instrument(
level = "debug",
skip_all,
fields(
%user_id,
device_id = ?maybe_device_id,
)
)]
pub(super) async fn remove_dehydrated_device(
&self,
user_id: &UserId,
maybe_device_id: Option<&DeviceId>,
) -> Result<OwnedDeviceId> {
let Ok(device_id) = self.get_dehydrated_device_id(user_id).await else {
return Err!(Request(NotFound("No dehydrated device for this user.")));
};
if let Some(maybe_device_id) = maybe_device_id {
if maybe_device_id != device_id {
return Err!(Request(NotFound("Not the user's dehydrated device.")));
}
}
self.db.userid_dehydrateddevice.remove(user_id);
Ok(device_id)
}
/// Get the device_id of the user's dehydrated device.
#[implement(super::Service)]
#[tracing::instrument(
level = "debug",
skip_all,
fields(%user_id)
)]
pub async fn get_dehydrated_device_id(&self, user_id: &UserId) -> Result<OwnedDeviceId> {
self.get_dehydrated_device(user_id)
.await
.map(|device| device.device_id)
}
/// Get the dehydrated device private data
#[implement(super::Service)]
#[tracing::instrument(
level = "debug",
skip_all,
fields(%user_id),
ret,
)]
pub async fn get_dehydrated_device(&self, user_id: &UserId) -> Result<DehydratedDevice> {
self.db
.userid_dehydrateddevice
.get(user_id)
.await
.deserialized()
}

Some files were not shown because too many files have changed in this diff.