Merge branch 'master' into rcv-services

This commit is contained in:
Evgeny Poberezkin
2026-03-03 21:16:46 +00:00
168 changed files with 17223 additions and 320 deletions
+8
@@ -39,9 +39,17 @@ jobs:
type=semver,pattern=v{{major}}.{{minor}}
type=semver,pattern=v{{major}}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
uses: simplex-chat/docker-build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
build-args: |
APP=${{ matrix.app }}
+1
@@ -11,3 +11,4 @@ cabal.project.local~
.hpc/
*.tix
.coverage
@@ -0,0 +1,53 @@
# XFTPClientAgent Pattern
## TOC
1. Executive Summary
2. Changes: client.ts
3. Changes: agent.ts
4. Changes: test/browser.test.ts
5. Verification
## Executive Summary
Add `XFTPClientAgent` — a per-server connection pool matching the Haskell pattern. The agent caches `XFTPClient` instances by server URL. All orchestration functions (`uploadFile`, `downloadFile`, `deleteFile`) take `agent` as first parameter and use `getXFTPServerClient(agent, server)` instead of calling `connectXFTP` directly. Connections stay open on success; the caller creates and closes the agent.
`connectXFTP` and `closeXFTP` stay exported (used by `XFTPWebTests.hs` Haskell tests). The `browserClients` hack, per-function `connections: Map`, and `getOrConnect` are deleted.
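The agent pattern above can be sketched in TypeScript. The `XFTPClient` type and `connectStub` are simplified stand-ins for the real `client.ts` types (the real pool would cache the result of `connectXFTP`), so this is a shape sketch, not the actual implementation:

```typescript
// Hypothetical sketch of the per-server connection pool.
// XFTPClient and connectStub stand in for the real client.ts types.
type XFTPClient = {server: string; closed: boolean}

interface XFTPClientAgent {
  clients: Map<string, XFTPClient> // keyed by server URL
}

function newXFTPAgent(): XFTPClientAgent {
  return {clients: new Map()}
}

// stand-in for connectXFTP
function connectStub(server: string): XFTPClient {
  return {server, closed: false}
}

function getXFTPServerClient(agent: XFTPClientAgent, server: string): XFTPClient {
  let c = agent.clients.get(server)
  if (!c) {
    c = connectStub(server)
    agent.clients.set(server, c) // connection stays open on success
  }
  return c
}

function closeXFTPAgent(agent: XFTPClientAgent): void {
  for (const c of agent.clients.values()) c.closed = true
  agent.clients.clear()
}
```

The caller creates the agent, passes it to all orchestration calls, and closes it in `finally` — matching the Haskell agent's lifecycle.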
## Changes: client.ts
**Add** after types section: `XFTPClientAgent` interface, `newXFTPAgent`, `getXFTPServerClient`, `closeXFTPServerClient`, `closeXFTPAgent`.
**Delete**: `browserClients` Map and all `isNode` browser-cache checks in `connectXFTP` and `closeXFTP`.
**Revert `closeXFTP`** to unconditional `c.transport.close()` (browser transport.close() is already a no-op).
`connectXFTP` stays exported (backward compat) but becomes a raw low-level function — no caching.
## Changes: agent.ts
**Imports**: replace `connectXFTP`/`closeXFTP` with `getXFTPServerClient`/`closeXFTPAgent` etc.
**Re-export** from agent.ts: `newXFTPAgent`, `closeXFTPAgent`, `XFTPClientAgent`.
**`uploadFile`**: add `agent: XFTPClientAgent` as first param. Replace `connectXFTP` → `getXFTPServerClient`. Remove `finally { closeXFTP }`. Pass `agent` to `uploadRedirectDescription`.
**`uploadRedirectDescription`**: change from `(client, server, innerFd)` to `(agent, server, innerFd)`. Get client via `getXFTPServerClient`.
**`downloadFile`**: add `agent` param. Delete local `connections: Map`. Replace `getOrConnect` → `getXFTPServerClient`. Remove finally cleanup. Pass `agent` to `downloadWithRedirect`.
**`downloadWithRedirect`**: add `agent` param. Same replacements. Remove try/catch cleanup. Recursive call passes `agent`.
**`deleteFile`**: add `agent` param. Same pattern.
**Delete**: `getOrConnect` function entirely.
## Changes: test/browser.test.ts
Create agent before operations, pass to upload/download, close in finally.
## Verification
1. `npx vitest --run` — browser round-trip test passes
2. No remaining `browserClients`, `getOrConnect`, or per-function `connections: Map` locals
3. `connectXFTP` and `closeXFTP` still exported (XFTPWebTests.hs compat)
4. All orchestration functions take `agent` as first param
+26 -1
@@ -2,7 +2,17 @@
This file provides guidance on coding style and approaches and on building the code.
## Code Style and Formatting
## Code Security
When designing code and planning implementations:
- Apply adversarial thinking, and consider what may happen if one of the communicating parties is malicious.
- Formulate an explicit threat model for each change - who can do which undesirable things and under which circumstances.
## Code Quality Standards
Haskell client and server code serves as system specification, not just implementation — we use type-driven design to reflect the business domain in types. Quality, conciseness, and clarity of Haskell code are critical.
## Code Style, Formatting and Approaches
The project uses **fourmolu** for Haskell code formatting. Configuration is in `fourmolu.yaml`.
@@ -36,6 +46,21 @@ Some files that use CPP language extension cannot be formatted as a whole, so in
- Do not add comments like "wire format encoding" (Encoding class is always wire format) or "check if X" when the function name already says that
- Assume a competent Haskell reader
**Diff and refactoring:**
- Avoid unnecessary changes and code movements
- Never do refactoring unless it substantially reduces cost of solving the current problem, including the cost of refactoring
- Aim to minimize the code changes - do what is minimally required to solve users' problems
**Document and code structure:**
- **Never move existing code or sections around** - add new content at appropriate locations without reorganizing existing structure.
- When adding new sections to documents, continue the existing numbering scheme.
- Minimize diff size - prefer small, targeted changes over reorganization.
**Code analysis and review:**
- Trace data flows end-to-end: from origin, through storage/parameters, to consumption. Flag values that are discarded and reconstructed from partial data (e.g. extracted from a URI missing original fields) — this is usually a bug.
- Read implementations of called functions, not just signatures — if duplication involves a called function, check whether decomposing it resolves the duplication.
- Do not save time on analysis. Read every function in the data flow even when the interface seems clear — wrong assumptions about internals are the main source of missed bugs.
### Haskell Extensions
- `StrictData` enabled by default
- Use STM for safe concurrency
+1
@@ -17,6 +17,7 @@ Please discuss the problem you want to solve and your detailed implementation pl
These files can be used with LLM prompts, e.g. if you use Claude Code you can create a CLAUDE.md file in the project root importing content from these files:
```markdown
@README.md
@contributing/PROJECT.md
@contributing/CODE.md
```
-23
@@ -1,23 +0,0 @@
common:
corrId - random BS, used as CbNonce
entityId - p2r tlsUniq
# setup
s->p: "proxy", uri, auth?
# unless connected
p->r: "p_handshake"
p<-r: "r_key", tls-signed dh pub
s<-r: "r_key", tls-signed dh pub # reply entityId contains tlsUniq
# working
s ; generate random dh priv, make shared secret
s->p: s2r("forward", random dh pub, SEND command blob)
p->r: p2r("forward", random dh pub, s2r("forward", ...)))
r->c@ "msg", ...
p<-r: p2r("r_res", s2r("ok" / "error", error))
s<-p@ s2r("ok" / "error", error)
# expired
p<-r@ p2r("error", "key expired")
s<-p@ "error", "key expired"
s ; reconnect
File diff suppressed because it is too large
@@ -0,0 +1,154 @@
# XFTP Server: SNI, CORS, and Web Support
Implementation details for Phase 3 of `rfcs/2026-01-30-send-file-page.md` (sections 6.1-6.4).
## 1. Overview
The XFTP server is extended to support web browser clients by:
1. **SNI-based TLS certificate switching** — Present a CA-issued web certificate (e.g., Let's Encrypt) to browsers, while continuing to present the self-signed XFTP identity certificate to native clients.
2. **CORS headers** — Add CORS response headers on SNI connections so browsers allow cross-origin XFTP requests.
3. **Configuration** — `[WEB]` INI section for HTTPS cert/key paths; opt-in (commented out by default).
Web handshake (challenge-response identity proof, §6.3 of parent RFC) is not yet implemented and will be added separately.
## 2. SNI Certificate Switching
### 2.1 Reusing the SMP Pattern
The SMP server already implements SNI-based certificate switching via `TLSServerCredential` and `runTransportServerState_` (see `rfcs/2024-09-15-shared-port.md`). The XFTP server applies the same pattern with one key difference: both native and web XFTP clients use HTTP/2 transport, whereas SMP switches between raw SMP protocol and HTTP entirely.
### 2.2 Approach
When `httpServerCreds` is configured, the XFTP server bypasses `runHTTP2Server` and uses `runTransportServerState_` directly to obtain the per-connection `sniUsed` flag. It then sets up HTTP/2 manually on each TLS connection using `withHTTP2` (same internals as `runHTTP2ServerWith_`). The `sniUsed` flag is captured in the closure and shared by all HTTP/2 requests on that connection.
When `httpServerCreds` is absent, the existing `runHTTP2Server` path is unchanged.
```
Native client (no SNI) ──TLS──> XFTP identity cert ──HTTP/2──> processRequest (no CORS)
Browser client (SNI) ──TLS──> Web CA cert ──HTTP/2──> processRequest (+ CORS)
```
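The routing above reduces to a per-connection credential choice. This is a minimal sketch of that decision, not the Haskell `runTransportServerState_` code — `Credential` and `selectCredential` are illustrative names, and `sniUsed` mirrors the per-connection flag described in §2.2:

```typescript
// Illustrative sketch: which certificate a connection gets, and the
// sniUsed flag that later gates CORS. Names are hypothetical.
type Credential = {name: string}

function selectCredential(
  sniHostname: string | undefined,     // browsers send SNI; native clients do not
  webCreds: Credential | undefined,    // present only when [WEB] is configured
  identityCreds: Credential            // self-signed XFTP identity cert
): {cred: Credential; sniUsed: boolean} {
  if (sniHostname !== undefined && webCreds !== undefined)
    return {cred: webCreds, sniUsed: true}
  return {cred: identityCreds, sniUsed: false}
}
```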
### 2.3 Certificate Chain
The web certificate file (e.g., `web.crt`) must contain the full chain: leaf certificate followed by the signing CA certificate. `loadServerCredential` uses `T.credentialLoadX509Chain` which reads all PEM blocks from the file.
The client validates the chain by comparing `idCert` fingerprint (the CA cert, second in the 2-cert chain) against the known `keyHash`. This is the same validation as for XFTP identity certificates — the CA that signed the web cert must match the XFTP server's identity.
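The client-side chain check can be sketched as follows. This is a simplified illustration (`validChain` and `caFingerprint` are hypothetical helper names; real validation parses the PEM/DER chain), assuming the fingerprint is the base64url SHA-256 of the CA certificate bytes:

```typescript
// Illustrative sketch of validating a 2-cert web chain against keyHash.
// Helper names are hypothetical; certDer values stand in for DER-encoded certs.
import {createHash} from "node:crypto"

function caFingerprint(idCertDer: Uint8Array): string {
  return createHash("sha256").update(idCertDer).digest("base64url")
}

function validChain(chain: Uint8Array[], keyHash: string): boolean {
  if (chain.length !== 2) return false      // leaf cert followed by signing CA
  return caFingerprint(chain[1]) === keyHash // idCert is second in the chain
}
```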
## 3. CORS Support
### 3.1 Design
CORS headers are only added when both conditions are true:
- `addCORSHeaders` is `True` in `TransportServerConfig` (set in XFTP `Main.hs`)
- `sniUsed` is `True` for the current TLS connection
This ensures native clients never see CORS headers.
### 3.2 Response Headers
All POST responses on SNI connections include:
```
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: *
```
### 3.3 OPTIONS Preflight
OPTIONS requests are intercepted at the HTTP/2 dispatch level, before `processRequest`. This is necessary because `processRequest` rejects bodies that don't match `xftpBlockSize`.
Preflight response:
```
HTTP/2 200
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: *
Access-Control-Max-Age: 86400
```
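The dispatch logic in §3.1–3.3 can be sketched as one pure function — preflight is answered before `processRequest` ever sees the (non-`xftpBlockSize`) OPTIONS body, and POST responses pick up CORS headers only when both gate conditions hold. Types and names here are illustrative, not the server's actual code:

```typescript
// Illustrative sketch of the HTTP/2 dispatch-level CORS logic.
type Response = {status: number; headers: [string, string][]}

const corsResponseHeaders: [string, string][] = [
  ["Access-Control-Allow-Origin", "*"],
  ["Access-Control-Expose-Headers", "*"],
]
const corsPreflightHeaders: [string, string][] = [
  ["Access-Control-Allow-Origin", "*"],
  ["Access-Control-Allow-Methods", "POST, OPTIONS"],
  ["Access-Control-Allow-Headers", "*"],
  ["Access-Control-Max-Age", "86400"],
]

function dispatch(method: string, sniUsed: boolean, addCORSHeaders: boolean): Response {
  const cors = sniUsed && addCORSHeaders // both conditions must hold
  if (method === "OPTIONS" && cors)
    return {status: 200, headers: corsPreflightHeaders} // answered before processRequest
  // POST falls through to processRequest; CORS headers added on the way out
  return {status: 200, headers: cors ? corsResponseHeaders : []}
}
```

Native (non-SNI) connections take the final branch with `cors === false`, so they never see CORS headers.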
### 3.4 Security
`Access-Control-Allow-Origin: *` is safe because:
- All XFTP commands require Ed25519 authentication (per-chunk keys from file description).
- No cookies or browser credentials are involved.
- File content is end-to-end encrypted.
## 4. Configuration
### 4.1 INI Template
```ini
[WEB]
# cert: /etc/opt/simplex-xftp/web.crt
# key: /etc/opt/simplex-xftp/web.key
```
Commented out by default — web support is opt-in.
### 4.2 Behavior
- `[WEB]` section not configured: silently ignored, server operates normally for native clients only.
- `[WEB]` section configured with valid cert/key paths: SNI + CORS enabled.
- `[WEB]` section configured with missing cert files: warning + continue (non-fatal, unlike SMP where it is fatal).
## 5. Files Modified
### 5.1 `src/Simplex/Messaging/Transport/Server.hs`
Added `addCORSHeaders :: Bool` field to `TransportServerConfig`. Updated `mkTransportServerConfig` to accept the new parameter. All existing SMP call sites pass `False`.
### 5.2 `src/Simplex/Messaging/Transport/HTTP2/Server.hs`
- Extracted `expireInactiveClient` from `runHTTP2ServerWith_`'s `where` clause to a module-level function.
- Parameterized `runHTTP2ServerWith_`: setup type changed from `((TLS p -> IO ()) -> a)` to `(((Bool, TLS p) -> IO ()) -> a)`, callback from `HTTP2ServerFunc` to `Bool -> HTTP2ServerFunc`. The `Bool` is the per-connection `sniUsed` flag, threaded through `H.run` to the callback.
- Extended `runHTTP2Server` with `Maybe T.Credential` parameter for SNI web certificate. Its setup uses `runTransportServerState_` with `TLSServerCredential`, which naturally provides `(sniUsed, tls)` pairs matching the new `runHTTP2ServerWith_` setup type.
- Adapted `runHTTP2ServerWith` (client-side HTTP/2, no SNI): wraps its setup to inject `(False, tls)` and its callback with `const`.
- Updated `getHTTP2Server` (test helper) to pass `Nothing` for httpCreds.
### 5.3 `src/Simplex/FileTransfer/Server/Env.hs`
- Added `httpCredentials :: Maybe ServerCredentials` to `XFTPServerConfig`.
- Added `httpServerCreds :: Maybe T.Credential` to `XFTPEnv`.
- `newXFTPServerEnv` loads HTTP credentials when configured.
### 5.4 `src/Simplex/FileTransfer/Server/Main.hs`
- Added `[WEB]` section to INI template.
- Added `httpCredentials` parsing from INI `[WEB]` section (`cert` and `key` fields).
- Set `addCORSHeaders = isJust httpCredentials_` in transport config (conditional on web cert presence).
### 5.5 `src/Simplex/FileTransfer/Server.hs`
Core server changes:
- `runServer` calls `runHTTP2Server` with `httpCreds_` and a `\sniUsed -> handleRequest (sniUsed && addCORSHeaders transportConfig)` callback. TLS params are `defaultSupportedParamsHTTPS` when web creds present, `defaultSupportedParams` otherwise. SNI routing, HTTP/2 setup, and client expiration are handled inside `runHTTP2Server`.
- `XFTPTransportRequest` carries `addCORS :: Bool` field, threaded through to `sendXFTPResponse`.
- `sendXFTPResponse` conditionally includes CORS headers based on `addCORS`.
- OPTIONS requests on SNI connections return CORS preflight headers before reaching `processRequest`.
- Helper functions: `corsHeaders` (response headers), `corsPreflightHeaders` (preflight headers).
### 5.6 `tests/XFTPClient.hs`
- Added `httpCredentials = Nothing` to `testXFTPServerConfig`.
- Added `testXFTPServerConfigSNI` with web cert config and `addCORSHeaders = True`.
- Added `withXFTPServerSNI` helper.
### 5.7 `tests/XFTPServerTests.hs`
Added SNI and CORS tests as a subsection within `xftpServerTests` (6 tests):
1. **SNI cert selection** — Connect with SNI + `h2` ALPN, verify RSA web certificate is presented.
2. **Non-SNI cert selection** — Connect without SNI + `xftp/1` ALPN, verify Ed448 XFTP certificate is presented.
3. **CORS headers** — SNI POST request includes `Access-Control-Allow-Origin: *` and `Access-Control-Expose-Headers: *`.
4. **OPTIONS preflight** — SNI OPTIONS request returns all CORS preflight headers.
5. **No CORS without SNI** — Non-SNI POST request has no CORS headers.
6. **File chunk delivery** — Full XFTP file chunk upload/download through SNI-enabled server verifying no regression.
## 6. Remaining Work
- **Web handshake** (§6.3 of parent RFC): Challenge-response identity proof for SNI connections. The server detects web clients via the `sniUsed` flag and expects a 32-byte challenge in the first POST body (non-empty, unlike standard handshake). Response includes full cert chain + signature over `(challenge ++ sessionId)`.
- **Static page serving** (§6.5 of parent RFC): Optional serving of the web page HTML/JS bundle on GET requests.
@@ -0,0 +1,246 @@
# Web Handshake — Challenge-Response Identity Proof
RFC §6.3: Server proves XFTP identity to web clients independently of TLS CA infrastructure.
## 1. Protocol
**Standard handshake** (unchanged):
```
Client → empty POST → Server
Server → padded {vRange, sessionId, authPubKey, Nothing} → Client
Client → padded {version, keyHash, Nothing} → Server
Server → empty → Client
```
**Web handshake** (SNI connection, non-empty hello):
```
Client → padded {32 random bytes} → Server
Server → padded {vRange, sessionId, authPubKey, Just sigBytes} → Client
sigBytes = signatureBytes(sign(identityLeafKey, challenge <> sessionId))
Client validates:
1. chainIdCaCerts(authPubKey.certChain) → CCValid {leafCert, idCert}
2. SHA-256(idCert) == keyHash (server identity)
3. verify(leafCert.pubKey, sigBytes, challenge <> sessionId) (challenge-response)
4. verify(leafCert.pubKey, signedPubKey.signature, signedPubKey.objectDer) (DH key auth)
Client → padded {version, keyHash, Just challenge} → Server
Server verifies: echoed challenge == stored challenge from step 1
Server → empty → Client
```
**Detection**: `sniUsed` per-connection flag. Non-empty hello allowed only when `sniUsed`. Empty hello with SNI → standard handshake.
**Why both steps 3 and 4**: Native clients verify `signedPubKey` using the TLS peer certificate (`serverKey` from `getServerVerifyKey`), which is the XFTP identity cert in non-SNI connections — TLS provides this binding. Web clients cannot access TLS peer certificate data (browser API limitation; TLS presents the web CA cert but provides no API to extract it). So web clients must verify at the application layer using `authPubKey.certChain`, which always contains the XFTP identity chain regardless of which cert TLS used. Step 3 proves the server holds its identity key *right now* (freshness via random challenge). Step 4 proves the DH session key was signed by the identity key holder (prevents MITM key substitution). Together they give web clients the same assurance native clients get from TLS, except channel binding for commands.
## 2. Type Changes — `src/Simplex/FileTransfer/Transport.hs`
### `XFTPServerHandshake` (line 114)
Add field: `webIdentityProof :: Maybe ByteString` — raw Ed448 signature bytes (114 bytes), or `Nothing` for standard handshake. No record needed — the cert chain is already in `authPubKey.certChain`.
### `Encoding XFTPServerHandshake` (line 136)
- `smpEncode`: append `smpEncode webIdentityProof`
- `smpP`: `Tail compat`, if non-empty `eitherToMaybe $ smpDecode compat`
Backward compat: old clients ignore via `Tail _compat`; new client + old server → empty compat → `Nothing`.
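The `Tail compat` idea can be illustrated with a byte-level sketch: the optional field rides after the fixed part, old decoders treat everything past the fixed part as an opaque tail they ignore, and an empty tail decodes to absence. This is a simplification (the real Haskell encoding is field-by-field via `smpEncode`, not a raw split at a fixed offset):

```typescript
// Simplified illustration of the Tail-based backward-compat pattern.
function encodeHandshake(fixed: Uint8Array, webField?: Uint8Array): Uint8Array {
  const tail = webField ?? new Uint8Array(0) // absent field -> empty tail
  const out = new Uint8Array(fixed.length + tail.length)
  out.set(fixed)
  out.set(tail, fixed.length)
  return out
}

function decodeHandshake(buf: Uint8Array, fixedLen: number): {fixed: Uint8Array; webField?: Uint8Array} {
  const fixed = buf.slice(0, fixedLen)
  const tail = buf.slice(fixedLen) // old decoders stop here and ignore the tail
  return tail.length === 0 ? {fixed} : {fixed, webField: tail}
}
```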
### `XFTPClientHandshake` (line 121)
Add field: `webChallenge :: Maybe ByteString`
### `Encoding XFTPClientHandshake` (line 128)
Same `Tail compat` pattern as server handshake.
### Export list
Both types use `(..)` export — new fields auto-exported.
## 3. Server Changes — `src/Simplex/FileTransfer/Server.hs`
### `XFTPTransportRequest` (line 88)
Add field: `sniUsed :: SNICredentialUsed` (`Bool` from `Transport.Server`). Add import.
### `Handshake` (line 117)
`HandshakeSent C.PrivateKeyX25519` → `HandshakeSent C.PrivateKeyX25519 (Maybe ByteString)` — stores 32-byte web challenge or `Nothing`.
### `runServer` handler (lines 145–161)
- Pass `sniUsed` into request construction (line 154)
- SNI-first routing: when `sniUsed`, always route to `xftpServerHandshakeV1` (web ALPN `h2` would otherwise fall to `_` catch-all)
### `xftpServerHandshakeV1` (line 162)
- Destructure `sniUsed` from request
- Match `HandshakeSent pk challenge_` → `processClientHandshake pk challenge_`
### `processHello` (line 171)
- Branch `(sniUsed, B.null bodyHead)`:
- `(_, True)` → standard: `challenge_ = Nothing`
- `(True, False)` → web: unpad, verify 32 bytes, `challenge_ = Just`
- `(False, False)` → `throwE HANDSHAKE`
- Store: `HandshakeSent pk challenge_`
- Compute: `webIdentityProof = C.signatureBytes . C.sign serverSignKey . (<> sessionId) <$> challenge_`
- Construct `XFTPServerHandshake` with `webIdentityProof`
### `processClientHandshake` (line 183)
- Accept `challenge_` parameter
- Decode `webChallenge` from `XFTPClientHandshake`
- Add: `unless (challenge_ == webChallenge) $ throwE HANDSHAKE`
(standard: both `Nothing` → passes)
## 4. Native Client — `src/Simplex/FileTransfer/Client.hs`
### `xftpClientHandshakeV1` (line 142)
Add `webChallenge = Nothing` in `sendClientHandshake` call.
No other changes — parser handles new fields via `Tail`, native client ignores `webIdentityProof`.
## 5. TypeScript Changes (DONE except Ed448)
Sections 5.1 and 5.2 are implemented. Section 5.3 needs Ed448 support.
## 10. Ed448 Support via `@noble/curves`
**Problem**: Production servers use Ed448 certificates (default). `identity.ts` only supports Ed25519 via libsodium. libsodium has no Ed448 support and never will.
**Solution**: Add `@noble/curves` dependency for Ed448 verification only. All other crypto stays with libsodium.
### 10.1 `xftp-web/package.json` — Add dependency
```json
"dependencies": {
"libsodium-wrappers-sumo": "^0.7.13",
"@noble/curves": "^1.9.7"
}
```
Use v1.x (supports both CJS and ESM). v2.x is ESM-only with `.js` extension requirement.
### 10.2 `xftp-web/src/crypto/keys.ts` — Ed448 DER constants and decode
Add Ed448 SPKI DER prefix (12 bytes, same prefix length as Ed25519):
```
30 43 30 05 06 03 2b 65 71 03 3a 00
```
| Property | Ed25519 | Ed448 |
|----------|---------|-------|
| OID | `2b 65 70` | `2b 65 71` |
| SPKI prefix | `30 2a ...` | `30 43 ...` |
| Raw key size | 32 bytes | 57 bytes |
| SPKI total | 44 bytes | 69 bytes |
| Signature size | 64 bytes | 114 bytes |
New functions:
- `decodePubKeyEd448(der: Uint8Array): Uint8Array` — 69 bytes → 57 bytes raw
- `encodePubKeyEd448(raw: Uint8Array): Uint8Array` — 57 bytes → 69 bytes DER
- `verifyEd448(publicKey: Uint8Array, sig: Uint8Array, msg: Uint8Array): boolean` — uses `ed448.verify(sig, msg, publicKey)` from `@noble/curves/ed448`
Note: `@noble/curves` parameter order is `(signature, message, publicKey)`, not `(publicKey, signature, message)`.
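The DER helpers are pure byte operations around the 12-byte prefix above, so they can be sketched directly. This follows the function names proposed in this section; `verifyEd448` is omitted here since it just delegates to `@noble/curves`:

```typescript
// Ed448 SPKI DER prefix from the table above (12 bytes): 69-byte SPKI = prefix + 57-byte raw key.
const ED448_SPKI_PREFIX = new Uint8Array([
  0x30, 0x43, 0x30, 0x05, 0x06, 0x03, 0x2b, 0x65, 0x71, 0x03, 0x3a, 0x00,
])

// 69-byte SPKI DER -> 57-byte raw public key
function decodePubKeyEd448(der: Uint8Array): Uint8Array {
  if (der.length !== 69) throw new Error("bad Ed448 SPKI length")
  for (let i = 0; i < ED448_SPKI_PREFIX.length; i++)
    if (der[i] !== ED448_SPKI_PREFIX[i]) throw new Error("bad Ed448 SPKI prefix")
  return der.slice(12)
}

// 57-byte raw public key -> 69-byte SPKI DER
function encodePubKeyEd448(raw: Uint8Array): Uint8Array {
  if (raw.length !== 57) throw new Error("bad Ed448 raw key length")
  const out = new Uint8Array(69)
  out.set(ED448_SPKI_PREFIX)
  out.set(raw, 12)
  return out
}
```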
### 10.3 `xftp-web/src/crypto/identity.ts` — Algorithm-agnostic verification
Replace `extractCertEd25519Key` + hardcoded Ed25519 `verify` with algorithm detection:
1. `extractCertPublicKeyInfo(certDer)` → SPKI DER (already exists, works for any algorithm)
2. Detect algorithm from SPKI: byte at offset 8 is `0x70` (Ed25519) or `0x71` (Ed448)
3. Extract raw key with appropriate decoder
4. Verify signatures with appropriate function
```typescript
type CertKeyAlgorithm = 'ed25519' | 'ed448'
function detectKeyAlgorithm(spki: Uint8Array): CertKeyAlgorithm {
if (spki.length === 44 && spki[8] === 0x70) return 'ed25519'
if (spki.length === 69 && spki[8] === 0x71) return 'ed448'
throw new Error("unsupported certificate key algorithm")
}
```
`verifyIdentityProof` changes:
- Extract SPKI from leaf cert
- Detect algorithm → choose `decodePubKeyEd25519`/`decodePubKeyEd448` and `verify`/`verifyEd448`
- Both challenge signature and DH key signature use the same leaf key + algorithm
Remove `extractCertEd25519Key` (replaced by generic path). Keep `extractCertPublicKeyInfo` (already generic).
### 10.4 `xftp-web/src/protocol/handshake.ts` — Comment update
`SignedKey.signature` comment: "raw Ed25519 signature bytes (64 bytes)" → "raw signature bytes (Ed25519: 64, Ed448: 114)"
### 10.5 Tests — `tests/XFTPWebTests.hs`
**Integration test**: Switch from `withXFTPServerEd25519SNI` (Ed25519 fixtures) to `withXFTPServerSNI` (default Ed448 fixtures). Update fingerprint source from `tests/fixtures/ed25519/ca.crt` to `tests/fixtures/ca.crt`.
Optionally add a second integration test with Ed25519 to cover both paths, or rely on existing unit tests for Ed25519 coverage.
### 10.6 Implementation order
1. `npm install @noble/curves` in `xftp-web/`
2. `keys.ts` — Ed448 constants, decode, encode, verifyEd448
3. `identity.ts` — algorithm detection, generic verification
4. `handshake.ts` — comment fix
5. `XFTPWebTests.hs` — switch integration test to Ed448
6. Build TS + run all tests
## 6. Haskell Integration Test — `tests/XFTPServerTests.hs`
Add `testWebHandshake` to "XFTP SNI and CORS" describe block.
1. `withXFTPServerSNI` — server with web credentials
2. Connect with SNI + `h2` ALPN
3. Send padded 32-byte challenge
4. Decode `XFTPServerHandshake`, assert `webIdentityProof` is `Just`
5. `chainIdCaCerts` on `authPubKey.certChain` → `CCValid {leafCert, idCert}`
6. Verify `SHA-256(idCert) == keyHash`
7. Extract `leafCert` public key, verify challenge signature
8. Verify `signedPubKey` signature using `leafCert` key (DH key auth)
9. Send `XFTPClientHandshake` with `webChallenge = Just challenge`
10. Assert empty response
Imports: `XFTPServerHandshake (..)`, `XFTPClientHandshake (..)`, `ChainCertificates (..)`, `chainIdCaCerts`.
## 7. TS Tests — `tests/XFTPWebTests.hs`
### Unit tests
- **`decodeServerHandshake` with proof**: Haskell-encode with `Just sigBytes`, TS-decode, verify bytes match.
- **`encodeClientHandshake` with challenge**: TS-encode, compare with Haskell-encoded.
- **`chainIdCaCerts`**: 2/3/4-cert chains return correct positions.
- **`caFingerprint` (fixed)**: matches `sha256(idCert)` for 2 and 3-cert chains.
### Integration test
Node.js inline script against `withXFTPServerSNI`:
1. Connect with SNI via `http2.connect`
2. Send padded challenge, decode `XFTPServerHandshake` with TS
3. `verifyIdentityProof` — full chain validation + challenge sig + DH key sig
4. Send client handshake with echoed challenge
5. Assert empty response
## 8. Implementation Order
1. `Transport.hs` — `Maybe` fields + encoding instances
2. `Server.hs` — `sniUsed`, challenge in `Handshake`, `processHello`, `processClientHandshake`, SNI routing
3. `Client.hs` — `webChallenge = Nothing`
4. Build: `cabal build --ghc-options -O0`
5. Run existing SNI/CORS tests
6. `XFTPServerTests.hs` — `testWebHandshake`
7. `handshake.ts` — types, decoding, `chainIdCaCerts`, fix `caFingerprint`
8. `crypto/identity.ts` — Node.js verification functions
9. `XFTPWebTests.hs` — unit + integration tests
10. Build TS + run all tests
## 9. Verification
```bash
cd xftp-web && npm install && npm run build && cd ..
cabal test --ghc-options=-O0 --test-option='--match=/XFTP/XFTP server/XFTP SNI and CORS/' --test-show-details=streaming
cabal test --ghc-options=-O0 --test-option='--match=/XFTP Web Client/' --test-show-details=streaming
```
@@ -0,0 +1,208 @@
# Plan: Browser ↔ Haskell File Transfer Tests
## Table of Contents
1. Goal
2. Current State
3. Implementation
4. Success Criteria
5. Files
6. Order
## 1. Goal
Run browser upload/download tests in headless Chromium via Vitest, proving fetch-based transport works in real browser environment.
## 2. Current State
- `client.ts`: Transport abstraction done — http2 for Node, fetch for browser ✓
- `agent.ts`: Uses `node:crypto` (randomBytes) and `node:zlib` (deflateRawSync/inflateRawSync) — **won't run in browser**
- `XFTPWebTests.hs`: Cross-language tests exist (Haskell calls TS via Node.js) ✓
## 3. Implementation
### 3.1 Make agent.ts isomorphic
| Current (Node.js only) | Isomorphic replacement |
|------------------------|------------------------|
| `import crypto from "node:crypto"` | Remove import |
| `import zlib from "node:zlib"` | `import pako from "pako"` |
| `crypto.randomBytes(32)` | `crypto.getRandomValues(new Uint8Array(32))` |
| `zlib.deflateRawSync(buf)` | `pako.deflateRaw(buf)` |
| `zlib.inflateRawSync(buf)` | `pako.inflateRaw(buf)` |
Note: `crypto.getRandomValues` is available in both browser and Node.js (via `globalThis.crypto`).
### 3.2 Vitest browser mode setup
`package.json` additions:
```json
"devDependencies": {
"vitest": "^3.0.0",
"@vitest/browser": "^3.0.0",
"playwright": "^1.50.0",
"@types/pako": "^2.0.3"
},
"dependencies": {
"pako": "^2.1.0"
}
```
`vitest.config.ts`:
```typescript
import {defineConfig} from 'vitest/config'
import {readFileSync} from 'fs'
import {createHash} from 'crypto'
// Compute fingerprint from ca.crt (same as Haskell's loadFileFingerprint)
const caCert = readFileSync('../tests/fixtures/ca.crt')
const fingerprint = createHash('sha256').update(caCert).digest('base64url')
const serverAddr = `xftp://${fingerprint}@localhost:7000`
export default defineConfig({
define: {
'import.meta.env.XFTP_SERVER': JSON.stringify(serverAddr)
},
test: {
browser: {
enabled: true,
provider: 'playwright',
instances: [{browser: 'chromium'}],
headless: true,
providerOptions: {
launch: {ignoreHTTPSErrors: true}
}
},
globalSetup: './test/globalSetup.ts'
}
})
```
### 3.3 Server startup
`test/globalSetup.ts`:
```typescript
import {spawn, ChildProcess} from 'child_process'
import {resolve, join} from 'path'
import {mkdtempSync, writeFileSync, copyFileSync} from 'fs'
import {tmpdir} from 'os'
let server: ChildProcess | null = null
export async function setup() {
const fixtures = resolve(__dirname, '../../tests/fixtures')
// Create temp directories
const cfgDir = mkdtempSync(join(tmpdir(), 'xftp-cfg-'))
const logDir = mkdtempSync(join(tmpdir(), 'xftp-log-'))
const filesDir = mkdtempSync(join(tmpdir(), 'xftp-files-'))
// Copy certificates to cfgDir (xftp-server expects ca.crt, server.key, server.crt there)
copyFileSync(join(fixtures, 'ca.crt'), join(cfgDir, 'ca.crt'))
copyFileSync(join(fixtures, 'server.key'), join(cfgDir, 'server.key'))
copyFileSync(join(fixtures, 'server.crt'), join(cfgDir, 'server.crt'))
// Write INI config file
const iniContent = `[STORE_LOG]
enable: off
[TRANSPORT]
host: localhost
port: 7000
[FILES]
path: ${filesDir}
[WEB]
cert: ${join(fixtures, 'web.crt')}
key: ${join(fixtures, 'web.key')}
`
writeFileSync(join(cfgDir, 'file-server.ini'), iniContent)
// Spawn xftp-server with env vars
server = spawn('cabal', ['exec', 'xftp-server', '--', 'start'], {
env: {
...process.env,
XFTP_SERVER_CFG_PATH: cfgDir,
XFTP_SERVER_LOG_PATH: logDir
},
stdio: ['ignore', 'pipe', 'pipe']
})
// Wait for "Listening on port 7000..."
await waitForServerReady(server)
}
export async function teardown() {
server?.kill('SIGTERM')
await new Promise(r => setTimeout(r, 500))
}
function waitForServerReady(proc: ChildProcess): Promise<void> {
return new Promise((resolve, reject) => {
const timeout = setTimeout(() => reject(new Error('Server start timeout')), 15000)
proc.stdout?.on('data', (data: Buffer) => {
if (data.toString().includes('Listening on port')) {
clearTimeout(timeout)
resolve()
}
})
proc.stderr?.on('data', (data: Buffer) => {
console.error('[xftp-server]', data.toString())
})
proc.on('error', reject)
proc.on('exit', (code) => {
clearTimeout(timeout)
if (code !== 0) reject(new Error(`Server exited with code ${code}`))
})
})
}
```
Server env vars (from `apps/xftp-server/Main.hs` + `getEnvPath`):
- `XFTP_SERVER_CFG_PATH` — directory containing `file-server.ini` and certs (`ca.crt`, `server.key`, `server.crt`)
- `XFTP_SERVER_LOG_PATH` — directory for logs
### 3.4 Browser test
`test/browser.test.ts`:
```typescript
import {test, expect} from 'vitest'
import {encryptFileForUpload, uploadFile, downloadFile} from '../src/agent.js'
import {parseXFTPServer} from '../src/protocol/address.js'
const server = parseXFTPServer(import.meta.env.XFTP_SERVER)
test('browser upload + download round-trip', async () => {
const data = new Uint8Array(50000)
crypto.getRandomValues(data)
const encrypted = encryptFileForUpload(data, 'test.bin')
const {rcvDescription} = await uploadFile(server, encrypted)
const {content} = await downloadFile(rcvDescription)
expect(content).toEqual(data)
})
```
## 4. Success Criteria
1. `npm run build` — agent.ts compiles without `node:` imports
2. `cabal test --test-option='--match=/XFTP Web Client/'` — existing Node.js tests still pass
3. `npm run test:browser` — browser round-trip test passes in headless Chromium
## 5. Files to Create/Modify
**Modify:**
- `xftp-web/package.json` — add vitest, @vitest/browser, playwright, pako, @types/pako
- `xftp-web/src/agent.ts` — replace node:crypto, node:zlib with isomorphic alternatives
**Create:**
- `xftp-web/vitest.config.ts` — browser mode config
- `xftp-web/test/globalSetup.ts` — xftp-server lifecycle
- `xftp-web/test/browser.test.ts` — browser round-trip test
## 6. Order of Implementation
1. **Add pako dependency** — `npm install pako @types/pako`
2. **Make agent.ts isomorphic** — replace node:crypto, node:zlib
3. **Verify Node.js tests pass** — `cabal test --test-option='--match=/XFTP Web Client/'`
4. **Set up Vitest** — add devDeps, create vitest.config.ts
5. **Create globalSetup.ts** — write INI config, spawn xftp-server
6. **Write browser test** — upload + download round-trip
7. **Verify browser test passes** — `npm run test:browser`
@@ -0,0 +1,920 @@
# Browser Transport & Web Worker Architecture
## TOC
1. Executive Summary
2. Transport: fetch() API
3. Architecture: Environment Abstraction
4. Web Worker Implementation
5. OPFS Implementation
6. Implementation Plan
7. Testing Strategy
## 1. Executive Summary
Adapt `client.ts` from `node:http2` to `fetch()` API for isomorphic Node.js/browser support. Add environment abstraction layer so the same upload/download pipeline works with or without Web Workers and with or without OPFS. In browsers, crypto runs in a Web Worker to keep UI responsive; in Node.js tests, crypto runs directly.
**Key architectural constraint:** Existing crypto functions (`encryptFile`, `decryptChunks`, etc.) remain unchanged. The abstraction layer wraps them, choosing execution context (direct vs Worker) and storage (memory vs OPFS) based on environment.
**Scope:**
- Replace `node:http2` with `fetch()` in `client.ts`
- Add `CryptoBackend` abstraction with three implementations
- Create Web Worker that calls existing crypto functions
- Add OPFS storage for large files in browser
**Out of scope:** Web page UI (Phase 5 in main RFC).
## 2. Transport: fetch() API
### 2.1 Current State
`client.ts` uses `node:http2`:
```typescript
import http2 from "node:http2"
const session = http2.connect(url)
const stream = session.request({':method': 'POST', ':path': '/'})
stream.write(commandBlock)
stream.end(chunkData)
```
### 2.2 Target State
Isomorphic `fetch()` (Node.js 18+ and browsers):
```typescript
const response = await fetch(url, {
method: 'POST',
body: concatStreams(commandBlock, chunkData),
duplex: 'half', // Required for streaming request body
})
const reader = response.body!.getReader()
```
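`concatStreams` in the snippet above is not a platform API. A minimal sketch of such a helper (hypothetical, assuming in-memory parts) that yields the fixed-size command block followed by the chunk data as one `ReadableStream` body:

```typescript
// Hypothetical helper: emit each byte array in order as one streaming body
function concatStreams(...parts: Uint8Array[]): ReadableStream<Uint8Array> {
  let i = 0
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i < parts.length) controller.enqueue(parts[i++])
      else controller.close()
    },
  })
}
```

With `duplex: 'half'`, fetch() can start sending as soon as chunks are enqueued; a variant that pulls from a `File` reader instead of in-memory arrays follows the same shape.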
### 2.3 Key Differences
| Aspect | node:http2 | fetch() |
|--------|-----------|---------|
| Session management | Explicit `http2.connect()` / `session.close()` | Per-request (HTTP/2 connection reuse is automatic) |
| Streaming upload | `stream.write()` chunks | `ReadableStream` body + `duplex: 'half'` |
| Streaming download | `stream.on('data')` | `response.body.getReader()` |
| Connection pooling | Manual | Automatic per origin |
### 2.4 API Changes
```typescript
// Before (node:http2)
export interface XFTPClient {
session: http2.ClientHttp2Session
thParams: THParams
server: XFTPServer
}
// After (fetch)
export interface XFTPClient {
baseUrl: string // "https://host:port"
thParams: THParams
server: XFTPServer
}
```
`connectXFTP()` performs handshake via fetch, returns `XFTPClient` with `baseUrl`.
Subsequent commands use `fetch(client.baseUrl, ...)`.
### 2.5 Handshake via fetch()
**TLS session binding:** Multiple fetch() requests to the same origin reuse the HTTP/2 connection, which means they share the same TLS session. The server's `sessionId` (derived from TLS channel binding) remains consistent across the handshake round-trips and subsequent commands.
```typescript
async function connectXFTP(server: XFTPServer): Promise<XFTPClient> {
const baseUrl = `https://${server.host}:${server.port}`
// Round-trip 1: challenge → server handshake + identity proof
const challenge = crypto.getRandomValues(new Uint8Array(32))
const req1 = pad(encodeWebClientHello(challenge), xftpBlockSize)
const resp1 = await fetch(baseUrl, {method: 'POST', body: req1})
const reader = resp1.body!.getReader()
const serverBlock = await readExactly(reader, xftpBlockSize)
const serverHs = decodeServerHandshake(unPad(serverBlock))
const proofBody = await readRemaining(reader)
verifyIdentityProof(server.keyHash, challenge, serverHs.sessionId, proofBody)
// Round-trip 2: client handshake → server ack
const clientHs = encodeClientHandshake({xftpVersion: 3, keyHash: server.keyHash})
const req2 = pad(clientHs, xftpBlockSize)
await fetch(baseUrl, {method: 'POST', body: req2})
return {baseUrl, thParams: {sessionId: serverHs.sessionId, ...}, server}
}
```
### 2.6 Command Execution
```typescript
async function sendXFTPCommand(
client: XFTPClient,
key: Uint8Array,
entityId: Uint8Array,
cmd: Uint8Array,
chunkData?: Uint8Array
): Promise<{response: Uint8Array, body?: ReadableStream}> {
const block = xftpEncodeAuthTransmission(client.thParams, key, entityId, cmd)
const reqBody = chunkData
? concatBytes(block, chunkData)
: block
const resp = await fetch(client.baseUrl, {
method: 'POST',
body: reqBody,
duplex: 'half',
})
const reader = resp.body!.getReader()
const responseBlock = await readExactly(reader, xftpBlockSize)
const parsed = xftpDecodeTransmission(responseBlock)
// For FGET: remaining body is encrypted chunk
const hasMore = await peekReader(reader)
return {
response: parsed,
body: hasMore ? wrapAsStream(reader) : undefined
}
}
```
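`peekReader`/`wrapAsStream` above paper over a Streams constraint: a reader cannot peek without consuming. A hedged alternative (hypothetical `peekBody` helper, not from the source) reads one chunk eagerly and, if the stream has not ended, re-emits it at the front of a fresh stream:

```typescript
// Returns undefined when the response has no body left; otherwise returns
// a stream that replays the already-consumed chunk before the rest
async function peekBody(
  reader: ReadableStreamDefaultReader<Uint8Array>
): Promise<ReadableStream<Uint8Array> | undefined> {
  const first = await reader.read()
  if (first.done) return undefined
  return new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(first.value)
    },
    async pull(controller) {
      const {done, value} = await reader.read()
      if (done) controller.close()
      else controller.enqueue(value)
    },
  })
}
```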
## 3. Architecture: Environment Abstraction
### 3.1 Core Principle
**Existing crypto functions remain unchanged.** The functions `encryptFile()`, `decryptChunks()`, `sha512()`, etc. in `crypto/file.ts` and `crypto/digest.ts` are pure computation — they take input bytes and produce output bytes. They have no knowledge of Workers, OPFS, or execution context.
The abstraction layer sits between `agent.ts` (upload/download orchestration) and these crypto functions:
```
┌─────────────────────────────────────────────────────────────────────┐
│ agent.ts (upload/download orchestration) │
│ - Unchanged logic: encrypt → chunk → upload → build description │
│ - Calls CryptoBackend interface, not crypto functions directly │
├─────────────────────────────────────────────────────────────────────┤
│ CryptoBackend interface (env.ts) │
│ - Abstract interface for encrypt/decrypt/readChunk/writeChunk │
│ - Factory function selects implementation based on environment │
├──────────────┬──────────────────────┬───────────────────────────────┤
│ DirectMemory │ WorkerMemory │ WorkerOPFS │
│ Backend │ Backend │ Backend │
│ (Node.js) │ (Browser, ≤50MB) │ (Browser, >50MB) │
├──────────────┼──────────────────────┼───────────────────────────────┤
│ Calls crypto │ Posts to Worker, │ Posts to Worker, │
│ functions │ Worker calls crypto │ Worker calls crypto, │
│ directly │ functions, returns │ streams through OPFS │
│ │ via postMessage │ │
├──────────────┴──────────────────────┴───────────────────────────────┤
│ crypto/file.ts, crypto/digest.ts (unchanged) │
│ - encryptFile(), decryptChunks(), sha512(), etc. │
│ - Pure functions, no environment dependencies │
└─────────────────────────────────────────────────────────────────────┘
```
### 3.2 CryptoBackend Interface
```typescript
// env.ts
export interface CryptoBackend {
// Encrypt file, store result (in memory or OPFS depending on backend)
encrypt(
data: Uint8Array,
fileName: string,
onProgress?: (done: number, total: number) => void
): Promise<EncryptResult>
// Decrypt from stored encrypted data
decrypt(
key: Uint8Array,
nonce: Uint8Array,
size: number,
onProgress?: (done: number, total: number) => void
): Promise<DecryptResult>
// Read chunk from stored encrypted data (for upload)
readChunk(offset: number, size: number): Promise<Uint8Array>
// Write chunk to storage (for download, before decrypt)
writeChunk(data: Uint8Array, offset: number): Promise<void>
// Clean up temporary storage
cleanup(): Promise<void>
}
export interface EncryptResult {
digest: Uint8Array // SHA-512 of encrypted data
key: Uint8Array // Generated encryption key
nonce: Uint8Array // Generated nonce
chunkSizes: number[] // Chunk sizes for upload
totalSize: number // Total encrypted size
}
export interface DecryptResult {
header: FileHeader // Extracted file header (fileName, etc.)
content: Uint8Array // Decrypted file content
}
```
### 3.3 Backend Implementations
**DirectMemoryBackend** (Node.js):
```typescript
class DirectMemoryBackend implements CryptoBackend {
private encryptedData: Uint8Array | null = null
  async encrypt(data: Uint8Array, fileName: string, onProgress?: (done: number, total: number) => void): Promise<EncryptResult> {
const key = randomBytes(32)
const nonce = randomBytes(24)
// Call existing crypto function directly
this.encryptedData = encryptFile(data, fileName, key, nonce, onProgress)
const digest = sha512(this.encryptedData)
const chunkSizes = prepareChunkSizes(this.encryptedData.length)
return { digest, key, nonce, chunkSizes, totalSize: this.encryptedData.length }
}
  async decrypt(key: Uint8Array, nonce: Uint8Array, size: number, onProgress?: (done: number, total: number) => void): Promise<DecryptResult> {
// Call existing crypto function directly
return decryptChunks([this.encryptedData!], key, nonce, size, onProgress)
}
async readChunk(offset: number, size: number): Promise<Uint8Array> {
return this.encryptedData!.slice(offset, offset + size)
}
  async writeChunk(data: Uint8Array, offset: number): Promise<void> {
    const needed = offset + data.length
    if (!this.encryptedData) {
      this.encryptedData = new Uint8Array(needed)
    } else if (this.encryptedData.length < needed) {
      // Grow the buffer: chunks may arrive out of order, so a later chunk
      // can extend past the current allocation
      const grown = new Uint8Array(needed)
      grown.set(this.encryptedData)
      this.encryptedData = grown
    }
    this.encryptedData.set(data, offset)
  }
async cleanup(): Promise<void> {
this.encryptedData = null
}
}
```
**WorkerMemoryBackend** and **WorkerOPFSBackend** are similar but post messages to a Web Worker instead of calling crypto directly. The Worker then calls the same `encryptFile()`, `decryptChunks()` functions. See §4 for Worker implementation details.
### 3.4 Factory Function
```typescript
// env.ts
export function createCryptoBackend(fileSize: number): CryptoBackend {
const hasWorker = typeof Worker !== 'undefined'
  // `typeof navigator` guard: optional chaining alone still throws a
  // ReferenceError in Node.js where `navigator` is not defined at all
  const hasOPFS = typeof navigator !== 'undefined' && typeof navigator.storage?.getDirectory === 'function'
const isLargeFile = fileSize > 50 * 1024 * 1024
if (hasWorker && hasOPFS && isLargeFile) {
return new WorkerOPFSBackend() // Browser + large file
} else if (hasWorker) {
return new WorkerMemoryBackend() // Browser + small file
} else {
return new DirectMemoryBackend() // Node.js
}
}
```
### 3.5 Usage in agent.ts
```typescript
// agent.ts - upload orchestration (simplified)
export async function uploadFile(
server: XFTPServer,
fileData: Uint8Array,
fileName: string,
onProgress?: ProgressCallback
): Promise<string> {
// Create backend based on environment
const backend = createCryptoBackend(fileData.length)
try {
// Encrypt (runs in Worker in browser, directly in Node)
const enc = await backend.encrypt(fileData, fileName, onProgress)
// Upload chunks (same code regardless of backend)
const client = await connectXFTP(server)
const sentChunks = []
let offset = 0
for (const size of enc.chunkSizes) {
const chunk = await backend.readChunk(offset, size)
const sent = await uploadChunk(client, chunk, enc.digest)
sentChunks.push(sent)
offset += size
}
// Build description and URI
const fd = buildFileDescription(enc, sentChunks)
return encodeFileDescriptionURI(fd)
} finally {
await backend.cleanup()
}
}
```
The key point: `uploadFile()` logic is identical regardless of whether crypto runs in a Worker or directly. The `CryptoBackend` abstraction hides that detail.
### 3.6 Why This Matters for Testing
- **Layer 1 tests** (per-function): Call `encryptFile()`, `decryptChunks()` directly via Node — unchanged
- **Layer 2 tests** (full flow): Call `uploadFile()`, `downloadFile()` in Node — uses `DirectMemoryBackend`, same code path as browser except for Worker
- **Layer 3 tests** (browser): Call `uploadFile()`, `downloadFile()` in Playwright — uses `WorkerMemoryBackend` or `WorkerOPFSBackend`
All three layers exercise the same crypto functions. The only difference is execution context.
## 4. Web Worker Implementation
### 4.1 Why Web Worker
File encryption (XSalsa20-Poly1305) is sequential and CPU-bound:
- 100 MB file ≈ 1-2 seconds of continuous computation
- Running on main thread blocks UI (no progress updates, frozen page)
- Chunking into async microtasks adds complexity and still causes jank
Web Worker runs crypto in parallel thread. Main thread stays responsive.
### 4.2 Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Main Thread │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ UI (upload/ │ │ Progress │ │ Network (fetch) │ │
│ │ download) │ │ display │ │ │ │
│ └──────┬──────┘ └──────▲──────┘ └──────────▲──────────┘ │
│ │ │ │ │
│ │ postMessage │ progress │ encrypted │
│ ▼ │ events │ chunks │
├─────────────────────────────────────────────────────────────┤
│ Web Worker │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ Crypto Pipeline ││
│ │ - encryptFile() with progress callbacks ││
│ │ - decryptChunks() with progress callbacks ││
│ │ - OPFS read/write for temp storage ││
│ └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘
```
### 4.3 Message Protocol
**Main → Worker:**
```typescript
type WorkerRequest =
// Encrypt file, store result in OPFS (large) or memory (small)
| {type: 'encrypt', file: File, fileName: string, useOPFS: boolean}
// Read encrypted chunk from OPFS for upload
| {type: 'readChunk', offset: number, size: number}
// Write downloaded chunk to OPFS for later decryption
| {type: 'writeChunk', data: ArrayBuffer, offset: number}
// Decrypt from OPFS or provided chunks
| {type: 'decrypt', key: Uint8Array, nonce: Uint8Array, size: number, chunks?: ArrayBuffer[]}
// Delete OPFS temp files
| {type: 'cleanup'}
| {type: 'cancel'}
```
**Worker → Main:**
```typescript
type WorkerResponse =
| {type: 'progress', phase: 'encrypt' | 'decrypt', done: number, total: number}
// For OPFS: encData is empty, data lives in OPFS temp file
| {type: 'encrypted', encData: ArrayBuffer | null, digest: Uint8Array, key: Uint8Array, nonce: Uint8Array, chunkSizes: number[]}
| {type: 'chunk', data: ArrayBuffer} // Response to readChunk
| {type: 'chunkWritten'} // Response to writeChunk
| {type: 'decrypted', header: FileHeader, content: ArrayBuffer}
| {type: 'cleaned'} // Response to cleanup
| {type: 'error', message: string}
```
### 4.4 Worker Implementation
```typescript
// crypto.worker.ts
import {encryptFile, encryptFileStreaming, decryptChunks, decryptFromOPFS} from './crypto/file.js'
import {sha512} from './crypto/digest.js'
import {prepareChunkSizes} from './protocol/chunks.js'
let opfsHandle: FileSystemSyncAccessHandle | null = null
self.onmessage = async (e: MessageEvent<WorkerRequest>) => {
const req = e.data
if (req.type === 'encrypt') {
const key = crypto.getRandomValues(new Uint8Array(32))
const nonce = crypto.getRandomValues(new Uint8Array(24))
if (req.useOPFS) {
// Large file: stream through OPFS to avoid memory pressure
const root = await navigator.storage.getDirectory()
const fileHandle = await root.getFileHandle('encrypted-temp', {create: true})
opfsHandle = await fileHandle.createSyncAccessHandle()
// Stream encrypt: read 64KB from File, encrypt, write to OPFS
const digest = await encryptFileStreaming(
req.file,
req.fileName,
key,
nonce,
opfsHandle,
(done, total) => self.postMessage({type: 'progress', phase: 'encrypt', done, total})
)
const encSize = opfsHandle.getSize()
const chunkSizes = prepareChunkSizes(encSize)
self.postMessage({
type: 'encrypted',
encData: null, // Data in OPFS, not memory
digest, key, nonce, chunkSizes
})
} else {
// Small file: in-memory is fine
const source = new Uint8Array(await req.file.arrayBuffer())
const encData = encryptFile(source, req.fileName, key, nonce, (done, total) => {
self.postMessage({type: 'progress', phase: 'encrypt', done, total})
})
const digest = sha512(encData)
const chunkSizes = prepareChunkSizes(encData.length)
self.postMessage({
type: 'encrypted',
encData: encData.buffer,
digest, key, nonce, chunkSizes
}, [encData.buffer])
}
}
if (req.type === 'readChunk') {
// Read chunk from OPFS for upload
const chunk = new Uint8Array(req.size)
opfsHandle!.read(chunk, {at: req.offset})
self.postMessage({type: 'chunk', data: chunk.buffer}, [chunk.buffer])
}
if (req.type === 'writeChunk') {
// Write downloaded chunk to OPFS
if (!opfsHandle) {
const root = await navigator.storage.getDirectory()
const fileHandle = await root.getFileHandle('download-temp', {create: true})
opfsHandle = await fileHandle.createSyncAccessHandle()
}
opfsHandle.write(new Uint8Array(req.data), {at: req.offset})
self.postMessage({type: 'chunkWritten'})
}
if (req.type === 'decrypt') {
let result
if (req.chunks) {
// Small file: chunks provided in memory
const chunks = req.chunks.map(b => new Uint8Array(b))
result = decryptChunks(chunks, req.key, req.nonce, req.size, (done, total) => {
self.postMessage({type: 'progress', phase: 'decrypt', done, total})
})
} else {
// Large file: read from OPFS
result = decryptFromOPFS(opfsHandle!, req.key, req.nonce, req.size, (done, total) => {
self.postMessage({type: 'progress', phase: 'decrypt', done, total})
})
}
self.postMessage({
type: 'decrypted',
header: result.header,
content: result.content.buffer
}, [result.content.buffer])
}
if (req.type === 'cleanup') {
if (opfsHandle) {
opfsHandle.close()
opfsHandle = null
}
const root = await navigator.storage.getDirectory()
try { await root.removeEntry('encrypted-temp') } catch {}
try { await root.removeEntry('download-temp') } catch {}
self.postMessage({type: 'cleaned'})
}
}
```
### 4.5 Main Thread Wrapper
```typescript
// crypto-worker.ts (main thread)
export class CryptoWorker {
private worker: Worker
private pending: Map<string, {resolve: Function, reject: Function}> = new Map()
private onProgress?: (done: number, total: number) => void
constructor() {
this.worker = new Worker(new URL('./crypto.worker.js', import.meta.url), {type: 'module'})
this.worker.onmessage = (e) => this.handleMessage(e.data)
}
async encrypt(file: File, onProgress?: (done: number, total: number) => void): Promise<EncryptedFileInfo> {
const useOPFS = file.size > 50 * 1024 * 1024 // 50 MB threshold
return new Promise((resolve, reject) => {
this.pending.set('encrypt', {resolve, reject})
this.onProgress = onProgress
this.worker.postMessage({type: 'encrypt', file, fileName: file.name, useOPFS})
})
}
async decrypt(
chunks: Uint8Array[],
key: Uint8Array,
nonce: Uint8Array,
size: number,
onProgress?: (done: number, total: number) => void
): Promise<DownloadResult> {
return new Promise((resolve, reject) => {
this.pending.set('decrypt', {resolve, reject})
this.onProgress = onProgress
this.worker.postMessage({
type: 'decrypt',
chunks: chunks.map(c => c.buffer),
key, nonce, size
}, chunks.map(c => c.buffer))
})
}
private handleMessage(msg: WorkerResponse) {
if (msg.type === 'progress') {
this.onProgress?.(msg.done, msg.total)
} else if (msg.type === 'encrypted') {
this.pending.get('encrypt')?.resolve({
encData: msg.encData ? new Uint8Array(msg.encData) : null, // null when using OPFS
digest: msg.digest,
key: msg.key,
nonce: msg.nonce,
chunkSizes: msg.chunkSizes
})
} else if (msg.type === 'decrypted') {
this.pending.get('decrypt')?.resolve({
header: msg.header,
content: new Uint8Array(msg.content)
})
} else if (msg.type === 'error') {
// Reject all pending
for (const p of this.pending.values()) p.reject(new Error(msg.message))
}
}
}
```
## 5. OPFS Implementation
### 5.1 Purpose
For files approaching 100 MB, holding encrypted data in memory while uploading creates memory pressure. OPFS provides temporary file storage:
- Write encrypted data to OPFS as it's generated
- Read chunks from OPFS for upload
- Delete after upload completes
### 5.2 When to Use
- Files > 50 MB: Use OPFS
- Files ≤ 50 MB: In-memory (simpler, no OPFS overhead)
Threshold is configurable.
### 5.3 OPFS API
```typescript
// In Web Worker (synchronous API for performance)
const root = await navigator.storage.getDirectory()
const fileHandle = await root.getFileHandle('encrypted-temp', {create: true})
const accessHandle = await fileHandle.createSyncAccessHandle()
// Write encrypted chunks as they're generated
accessHandle.write(encryptedChunk, {at: offset})
// Read chunk for upload
const chunk = new Uint8Array(chunkSize)
accessHandle.read(chunk, {at: chunkOffset})
// Cleanup
accessHandle.close()
await root.removeEntry('encrypted-temp')
```
### 5.4 Upload Flow with OPFS
```
1. Main: user drops file
2. Main → Worker: {type: 'encrypt', file}
3. Worker:
- Create OPFS temp file
- Encrypt 64KB at a time, write to OPFS
- Post progress every 64KB
- Compute digest
- Return {digest, key, nonce, chunkSizes} (data stays in OPFS)
4. Main: for each chunk:
- Main → Worker: {type: 'readChunk', offset, size}
- Worker: read from OPFS, return chunk
- Main: upload chunk via fetch()
5. Main → Worker: {type: 'cleanup'}
6. Worker: delete OPFS temp file
```
### 5.5 Download Flow with OPFS
```
1. Main: parse URL, get FileDescription
2. Main: for each chunk:
- Download via fetch()
- Main → Worker: {type: 'writeChunk', data, offset}
- Worker: write to OPFS temp file
3. Main → Worker: {type: 'decrypt', key, nonce, size}
4. Worker:
- Read from OPFS
- Decrypt, verify auth tag
- Return {header, content}
5. Main: trigger browser download
6. Main → Worker: {type: 'cleanup'}
```
## 6. Implementation Plan
### 6.1 Phase A: fetch() Transport
**Goal:** Replace `node:http2` with `fetch()` in `client.ts`. All existing Node.js tests pass.
1. Rewrite `connectXFTP()` to use fetch() for handshake
2. Rewrite `sendXFTPCommand()` to use fetch()
3. Update `createXFTPChunk`, `uploadXFTPChunk`, `downloadXFTPChunk`, etc.
4. Remove `node:http2` import
5. Run existing Haskell integration tests — must pass
**Files:** `client.ts`
### 6.2 Phase B: Environment Abstraction + Web Worker
**Goal:** Add `CryptoBackend` abstraction (§3) so the same code works in Node (direct) and browser (Worker).
1. Create `env.ts` with `CryptoBackend` interface and `createCryptoBackend()` factory (as specified in §3)
2. Implement `DirectMemoryBackend` for Node.js
3. Create `crypto.worker.ts` that imports and calls existing crypto functions
4. Implement `WorkerMemoryBackend` for browser
5. Update `agent.ts` to use `createCryptoBackend()` instead of direct crypto calls
6. Existing tests pass (now using `DirectMemoryBackend`)
**Files:** `env.ts`, `crypto.worker.ts`, `agent.ts`
### 6.3 Phase C: OPFS Backend
**Goal:** Large files (>50 MB) use OPFS for temp storage in browser.
1. Implement `WorkerOPFSBackend` — uses OPFS sync API in worker
2. Add OPFS helpers in worker: read/write to temp file
3. Factory function now returns `WorkerOPFSBackend` for large files
4. Same `agent.ts` code works — only backend implementation differs
**Files:** `env.ts`, `crypto.worker.ts`
### 6.4 Phase D: Browser Testing
**Goal:** Verify everything works in real browsers.
1. Create minimal test HTML page
2. Test upload flow in Chrome, Firefox, Safari
3. Test download flow
4. Test progress reporting
5. Test cancellation
6. Test error handling (network failure, invalid file)
## 7. Testing Strategy
### 7.1 Test Layers
The `CryptoBackend` abstraction (§3) enables testing at multiple levels without code duplication:
```
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: Browser Integration (Playwright) │
│ - Web Worker message passing │
│ - OPFS read/write │
│ - Progress UI updates │
│ - Real browser fetch() with CORS │
├─────────────────────────────────────────────────────────────────┤
│ Layer 2: Full Flow (Haskell-driven, Node.js) │
│ - fetch() transport against real xftp-server │
│ - Upload: encrypt → chunk → upload → build description │
│ - Download: parse → download → verify → decrypt │
│ - Cross-language: TS upload ↔ Haskell download (and vice versa) │
├─────────────────────────────────────────────────────────────────┤
│ Layer 1: Per-Function (Haskell-driven, Node.js) │
│ - 172 existing tests │
│ - Byte-identical output vs Haskell functions │
└─────────────────────────────────────────────────────────────────┘
```
### 7.2 Layer 1: Per-Function Tests (Existing)
Existing Haskell-driven tests in `XFTPWebTests.hs`. Each test calls a TypeScript function via Node and compares output with Haskell.
```bash
cabal test --ghc-options -O0 --test-option='--match=/XFTP Web Client/'
```
All 172 tests must pass. No changes needed for browser transport work.
### 7.3 Layer 2: Full Flow Tests (Node.js + fetch)
Haskell-driven integration tests using Node.js native fetch(). These test the complete upload/download flow without Worker/OPFS.
```haskell
-- XFTPWebTests.hs (extends existing test file)
it "fetch transport: upload and download round-trip" $ do
withXFTPServer testXFTPServerConfigSNI $ \server -> do
-- TypeScript uploads via fetch(), returns URI
uri <- jsOut $ callTS "src/agent" "uploadFileTest" serverAddrHex <> testFileHex
-- TypeScript downloads via fetch()
content <- jsOut $ callTS "src/agent" "downloadFileTest" uriHex
content `shouldBe` testFileContent
it "fetch transport: TS upload, Haskell download" $ do
withXFTPServer testXFTPServerConfigSNI $ \server -> do
uri <- jsOut $ callTS "src/agent" "uploadFileTest" serverAddrHex <> testFileHex
-- Haskell agent downloads using existing xftp CLI pattern
outPath <- withAgent 1 agentCfg initAgentServers testDB $ \a -> do
rfId <- xftpReceiveFile' a 1 uri Nothing
waitRfDone a
content <- B.readFile outPath
content `shouldBe` testFileContent
```
**What this tests:**
- fetch() handshake (challenge-response, TLS session binding)
- fetch() command execution (FNEW, FPUT, FGET, FACK)
- Streaming request/response bodies
- Full encrypt → upload → download → decrypt flow
**What this doesn't test:**
- Web Worker message passing
- OPFS storage
- Browser-specific fetch() behavior (CORS preflight, etc.)
### 7.4 Layer 3: Browser Integration Tests (Playwright)
Playwright tests run in real browsers, testing browser-specific functionality.
**Test infrastructure:**
```
xftp-web/
├── test/
│ ├── browser.test.ts # Playwright test file
│ └── test-server.ts # Spawns xftp-server for tests
└── test-page/
├── index.html # Minimal test UI
└── test-harness.ts # Exposes test functions to window
```
**Running browser tests:**
```bash
cd xftp-web
npm run test:browser # Spawns xftp-server, runs Playwright
```
**Test cases:**
```typescript
// test/browser.test.ts
import { test, expect } from '@playwright/test'
import { spawn, ChildProcess } from 'child_process'
let serverProcess: ChildProcess
test.beforeAll(async () => {
// Spawn xftp-server with SNI cert for browser TLS
serverProcess = spawn('xftp-server', ['start', '-c', 'test-config.ini'])
await waitForServer()
})
test.afterAll(async () => {
serverProcess.kill()
})
test('small file upload/download (in-memory)', async ({ page }) => {
await page.goto('/test-page/')
const result = await page.evaluate(async () => {
const data = new Uint8Array(1024 * 1024) // 1 MB
crypto.getRandomValues(data)
const file = new File([data], 'small.bin')
const uri = await window.xftp.uploadFile(file)
const downloaded = await window.xftp.downloadFile(uri)
return {
uploadedSize: data.length,
downloadedSize: downloaded.length,
match: arraysEqual(data, downloaded),
usedOPFS: window.xftp.lastUploadUsedOPFS
}
})
expect(result.match).toBe(true)
expect(result.usedOPFS).toBe(false) // Small file, no OPFS
})
test('large file upload/download (OPFS)', async ({ page }) => {
await page.goto('/test-page/')
const result = await page.evaluate(async () => {
const data = new Uint8Array(60 * 1024 * 1024) // 60 MB
crypto.getRandomValues(data)
const file = new File([data], 'large.bin')
const uri = await window.xftp.uploadFile(file)
const downloaded = await window.xftp.downloadFile(uri)
return {
match: arraysEqual(data, downloaded),
usedOPFS: window.xftp.lastUploadUsedOPFS
}
})
expect(result.match).toBe(true)
expect(result.usedOPFS).toBe(true) // Large file, used OPFS
})
test('progress events fire during upload', async ({ page }) => {
await page.goto('/test-page/')
const progressEvents = await page.evaluate(async () => {
const events: number[] = []
const data = new Uint8Array(10 * 1024 * 1024) // 10 MB
const file = new File([data], 'progress.bin')
await window.xftp.uploadFile(file, (done, total) => {
events.push(done / total)
})
return events
})
expect(progressEvents.length).toBeGreaterThan(1)
expect(progressEvents[progressEvents.length - 1]).toBe(1) // 100% at end
})
test('Web Worker keeps UI responsive', async ({ page }) => {
await page.goto('/test-page/')
// Start upload and measure main thread responsiveness
const result = await page.evaluate(async () => {
const data = new Uint8Array(50 * 1024 * 1024) // 50 MB
const file = new File([data], 'responsive.bin')
let frameCount = 0
let uploadDone = false
// Count animation frames during upload
function countFrames() {
frameCount++
if (!uploadDone) requestAnimationFrame(countFrames)
}
requestAnimationFrame(countFrames)
const start = performance.now()
await window.xftp.uploadFile(file)
uploadDone = true
const elapsed = performance.now() - start
// If main thread was blocked, frameCount would be very low
const expectedFrames = (elapsed / 1000) * 30 // ~30 fps minimum
return { frameCount, expectedFrames, elapsed }
})
// Should maintain reasonable frame rate (Worker offloaded crypto)
expect(result.frameCount).toBeGreaterThan(result.expectedFrames * 0.5)
})
```
### 7.5 Cross-Browser Matrix
| Browser | fetch streaming | Web Worker | OPFS sync | Status |
|---------|----------------|------------|-----------|--------|
| Chrome 105+ | ✓ | ✓ | ✓ | Primary target |
| Firefox 111+ | ✓ | ✓ | ✓ | Supported |
| Safari 16.4+ | ✓ | ✓ | ✓ | Supported |
| Edge 105+ | ✓ | ✓ | ✓ | Supported (Chromium) |
Playwright tests run against Chrome by default. CI can run against all browsers.
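A hypothetical `playwright.config.ts` sketch for this matrix (project names match the `--project=firefox,webkit` flag used in §7.6):

```typescript
// playwright.config.ts — illustrative sketch, not the committed config
import { defineConfig, devices } from '@playwright/test'

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } }, // default run
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
})
```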
### 7.6 Test Execution Summary
| Phase | Test Layer | Command | What's Verified |
|-------|-----------|---------|-----------------|
| A | Layer 1 + 2 | `cabal test --test-option='--match=/XFTP Web Client/'` | fetch() transport, full flow |
| B | Layer 3 | `npm run test:browser` | Worker message passing, progress |
| C | Layer 3 | `npm run test:browser` | OPFS storage for large files |
| D | Layer 3 | `npm run test:browser -- --project=firefox,webkit` | Cross-browser |
@@ -0,0 +1,772 @@
# Send File Web Page — Implementation Plan
## TOC
1. Executive Summary
2. Architecture
3. CryptoBackend & Web Worker
4. Server Configuration
5. Page Structure & UI
6. Upload Flow
7. Download Flow
8. Build & Dev Setup
9. agent.ts Changes
10. Testing
11. Files
12. Implementation Order
## 1. Executive Summary
Build a static web page for browser-based XFTP file transfer (Phase 5 of master RFC). The page supports upload (drag-drop → encrypt → upload → shareable link) and download (open link → download → decrypt → save). Crypto runs in a Web Worker; large files use OPFS temp storage.
Two build variants:
- **Local**: single test server at `localhost:7000` (development/testing)
- **Production**: 12 preset XFTP servers (6 SimpleX + 6 Flux)
Uses Vite for bundling (already a dependency via vitest). No CSS framework — plain CSS per RFC spec.
## 2. Architecture
```
xftp-web/
├── src/ # Library (existing, targeted changes)
│ ├── agent.ts # Modified: uploadFile readChunk, downloadFileRaw
│ ├── client.ts # Modified: downloadXFTPChunkRaw
│ ├── crypto/ # Unchanged
│ ├── download.ts # Unchanged
│ └── protocol/
│ └── description.ts # Fix: SHA-256 → SHA-512 comment on digest field
├── web/ # Web page (new)
│ ├── index.html # Entry point (CSP meta tag)
│ ├── main.ts # Router + sodium.ready init
│ ├── upload.ts # Upload UI + orchestration
│ ├── download.ts # Download UI + orchestration
│ ├── progress.ts # Circular progress canvas component
│ ├── servers.ts # Server list (build-time configured, imports servers.json)
│ ├── servers.json # Preset server addresses (shared with vite.config.ts)
│ ├── crypto-backend.ts # CryptoBackend interface + WorkerBackend
│ ├── crypto.worker.ts # Web Worker: encrypt/decrypt/OPFS
│ └── style.css # Minimal styling
├── vite.config.ts # Page build config (new)
├── tsconfig.web.json # IDE/CI type-check for web/ (new)
├── tsconfig.worker.json # IDE/CI type-check for worker (new)
├── playwright.config.ts # Page E2E test config (new)
├── vitest.config.ts # Test config (existing)
├── .gitignore # Existing (add dist-web/)
└── test/ # Tests (existing + new page test)
```
Data flow:
```
┌───────────────────────────────────────────┐
│ Main Thread │
│ │
│ Upload: upload.ts ──► agent.ts ──► fetch()│
│ Download: download.ts ► agent.ts ► fetch()│
│ │ │
│ postMessage HTTP/2 │
│ ▼ ▼
│ ┌─────────────────┐ ┌──────────┐│
│ │ Web Worker │ │ XFTP ││
│ │ crypto.worker.ts │ │ Server ││
│ │ ┌─────────────┐ │ └──────────┘│
│ │ │ OPFS temp │ │ │
│ │ └─────────────┘ │ │
│ └─────────────────┘ │
└───────────────────────────────────────────┘
```
Both upload and download use `agent.ts` for orchestration (connection pooling, parallel chunk transfers, redirect handling). Upload uses a `readChunk` callback for Worker data access. Download uses an `onRawChunk` callback to route raw encrypted chunks to the Worker for decryption (see §7.2). ACK is the caller's responsibility — `downloadFileRaw` returns the resolved `FileDescription` without ACKing, so the caller can verify integrity before acknowledging.
## 3. CryptoBackend & Web Worker
### 3.1 Interface
```typescript
// crypto-backend.ts
export interface CryptoBackend {
// Upload: encrypt file, store encrypted data in OPFS
encrypt(data: Uint8Array, fileName: string,
onProgress?: (done: number, total: number) => void
): Promise<EncryptResult>
// Upload: read encrypted chunk from OPFS (called by agent.ts via readChunk callback)
readChunk(offset: number, size: number): Promise<Uint8Array>
// Download: transit-decrypt raw chunk and store in OPFS
decryptAndStoreChunk(
dhSecret: Uint8Array, nonce: Uint8Array,
body: Uint8Array, digest: Uint8Array, chunkNo: number
): Promise<void>
// Download: verify digest + file-level decrypt all stored chunks
// Only needs size/digest/key/nonce — not the full FileDescription (avoids sending private keys to Worker)
verifyAndDecrypt(params: {size: number, digest: Uint8Array, key: Uint8Array, nonce: Uint8Array}
): Promise<{header: FileHeader, content: Uint8Array}>
cleanup(): Promise<void>
}
// Structurally identical to EncryptedFileMetadata from agent.ts (§9.1).
// Kept separate to avoid crypto-backend.ts importing from agent.ts
// (which would pull in node:http2 via client.ts, breaking Worker bundling).
// TypeScript structural typing makes them assignment-compatible.
export interface EncryptResult {
digest: Uint8Array
key: Uint8Array
nonce: Uint8Array
chunkSizes: number[]
}
```
### 3.2 Factory
```typescript
export function createCryptoBackend(): CryptoBackend {
if (typeof Worker === 'undefined') {
throw new Error('Web Workers required — update your browser')
}
return new WorkerBackend()
}
```
The Worker always uses OPFS for temp storage (single code path — no memory/disk branching). OPFS I/O overhead is negligible relative to crypto and network time. Each Worker session creates a unique directory in OPFS root named `session-<Date.now()>-<crypto.randomUUID()>`, containing `upload.bin` and `download.bin` as needed. `cleanup()` deletes the entire session directory. On Worker startup (before processing messages), sweep OPFS root and delete any `session-*` directories whose embedded timestamp (parsed from the name) is older than 1 hour — this handles stale files from crashed tabs. The OPFS API does not expose directory timestamps, so the name-encoded timestamp is the only reliable mechanism. This prevents cross-tab collisions and unbounded OPFS growth.
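The staleness check can be isolated as a pure function on the directory name — a minimal sketch, assuming the `session-<Date.now()>-<crypto.randomUUID()>` naming scheme above (`isStaleSession` is an illustrative name, not an existing function):

```typescript
// Illustrative helper: decide whether an OPFS session directory is stale.
// Matches the session-<timestamp>-<uuid> naming scheme described above.
const SESSION_RE = /^session-(\d+)-[0-9a-f-]+$/

export function isStaleSession(name: string, nowMs: number, maxAgeMs = 60 * 60 * 1000): boolean {
  const m = SESSION_RE.exec(name)
  if (!m) return false // not a session directory — leave it alone
  return nowMs - Number(m[1]) > maxAgeMs
}

// Sweep sketch (would run in the Worker before handling messages):
// const root = await navigator.storage.getDirectory()
// for await (const [name, handle] of root.entries()) {
//   if (handle.kind === "directory" && isStaleSession(name, Date.now())) {
//     await root.removeEntry(name, {recursive: true})
//   }
// }
```

Keeping the predicate separate from the OPFS iteration makes the timestamp parsing trivially unit-testable without a browser.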
### 3.3 Worker message protocol
Every request carries a numeric `id`. Responses carry the same `id`. WorkerBackend maintains a `Map<number, {resolve, reject}>` to match responses to pending promises.
Main → Worker (fields marked `†` are Transferable — arrive as `ArrayBuffer` in Worker, must be wrapped with `new Uint8Array(...)` before use):
- `{id: number, type: 'encrypt', data†: ArrayBuffer, fileName: string}` — encrypt file, store in OPFS
- `{id: number, type: 'readChunk', offset: number, size: number}` — read encrypted chunk from OPFS
- `{id: number, type: 'decryptAndStoreChunk', dhSecret: Uint8Array, nonce: Uint8Array, body†: ArrayBuffer, chunkDigest: Uint8Array, chunkNo: number}` — transit-decrypt + store in OPFS. `chunkDigest` is the per-chunk SHA-256 digest (verified by `decryptReceivedChunk`). Distinct from the file-level SHA-512 digest in `verifyAndDecrypt`.
- `{id: number, type: 'verifyAndDecrypt', size: number, digest: Uint8Array, key: Uint8Array, nonce: Uint8Array}` — verify digest + file-level decrypt all chunks. Only the four fields needed for verification/decryption are sent — not the full `FileDescription`, which contains private replica keys that the Worker doesn't need.
- `{id: number, type: 'cleanup'}` — delete OPFS temp files
Worker → Main (fields marked `†` are Transferable):
- `{id: number, type: 'progress', done: number, total: number}` — encryption/decryption progress (fire-and-forget, no promise)
- `{id: number, type: 'encrypted', digest: Uint8Array, key: Uint8Array, nonce: Uint8Array, chunkSizes: number[]}` — all fields structured-cloned (not transferred)
- `{id: number, type: 'chunk', data†: ArrayBuffer}` — readChunk response
- `{id: number, type: 'stored'}` — decryptAndStore acknowledgment
- `{id: number, type: 'decrypted', header: FileHeader, content†: ArrayBuffer}` — verifyAndDecrypt response
- `{id: number, type: 'cleaned'}`
- `{id: number, type: 'error', message: string}` — rejects the pending promise for this `id`
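The `id`-matching mechanism can be sketched as a small class — a simplified illustration of the pattern, not the real `WorkerBackend` (`RequestMatcher` and the `post` parameter are hypothetical names; in practice `post` would wrap `worker.postMessage` and `handleMessage` would be wired to `worker.onmessage`):

```typescript
// Minimal sketch of request/response matching over a message channel.
type Pending = {resolve: (v: any) => void; reject: (e: Error) => void}

export class RequestMatcher {
  private nextId = 1
  private pending = new Map<number, Pending>()

  constructor(private post: (msg: any) => void) {}

  // Send a request; the returned promise settles when a response with the same id arrives.
  request<T>(msg: object): Promise<T> {
    const id = this.nextId++
    const p = new Promise<T>((resolve, reject) => this.pending.set(id, {resolve, reject}))
    this.post({id, ...msg})
    return p
  }

  // Called for every Worker → Main message.
  handleMessage(msg: {id: number; type: string; [k: string]: any}): void {
    if (msg.type === "progress") return // fire-and-forget, no pending promise
    const p = this.pending.get(msg.id)
    if (!p) return
    this.pending.delete(msg.id)
    if (msg.type === "error") p.reject(new Error(msg.message))
    else p.resolve(msg)
  }
}
```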
All messages carrying large `ArrayBuffer` payloads use `postMessage(msg, [transferables])` to transfer ownership instead of structured-clone copying. Only `ArrayBuffer` can be transferred — `Uint8Array`, `number[]`, and other types are always structured-cloned. This applies to: `encrypt` request (`data`), `readChunk` response (`data`), `decryptAndStoreChunk` request (`body`), and `verifyAndDecrypt` response (`content`). The `WorkerBackend` implementation must ensure the transferred `ArrayBuffer` covers the full `Uint8Array` — if `byteOffset !== 0` or `byteLength !== buffer.byteLength`, slice first: `data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength)`. This is required for `decryptAndStore` request bodies: `sendXFTPCommand` returns `body = fullResp.subarray(XFTP_BLOCK_SIZE)`, which has `byteOffset = XFTP_BLOCK_SIZE`. Other payloads are full-buffer views (§6 step 3 creates `new Uint8Array(await file.arrayBuffer())`; Worker responses allocate fresh buffers) but `WorkerBackend` should guard unconditionally.
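The unconditional guard can be a small helper — a sketch of the slicing rule above (`toTransferable` is an illustrative name):

```typescript
// Return an ArrayBuffer covering exactly the bytes of `data`, suitable for
// postMessage transfer. Full-buffer views are passed through without copying;
// partial views (e.g. body = fullResp.subarray(XFTP_BLOCK_SIZE)) are sliced first.
export function toTransferable(data: Uint8Array): ArrayBuffer {
  if (data.byteOffset === 0 && data.byteLength === data.buffer.byteLength) {
    return data.buffer as ArrayBuffer // full view — transfer as-is
  }
  return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer
}
```

Usage would be `worker.postMessage({...msg, body}, [body])` where `body = toTransferable(chunk)` — transferring a partial view's whole underlying buffer would otherwise leak unrelated bytes (and detach the rest of the response).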
### 3.4 Worker internals
**Imports:** The Worker imports directly from `libsodium-wrappers-sumo` (for `await sodium.ready`), `src/crypto/file.js` (`encryptFile`, `encodeFileHeader`, `decryptChunks`), `src/crypto/digest.js` (`sha512`), `src/protocol/chunks.js` (`prepareChunkSizes`, `fileSizeLen`, `authTagSize`), `src/protocol/encoding.js` (`concatBytes`), and `src/download.js` (`decryptReceivedChunk`). `download.js` directly imports `src/protocol/client.js` (for `decryptTransportChunk`). These transitively pull in `src/crypto/secretbox.js`, `src/crypto/keys.js`, and `src/crypto/padding.js`. None of these import `src/agent.ts` or `src/client.ts` — those pull in `node:http2` via dynamic import which would break Worker bundling. Vite tree-shakes the transitive deps automatically. Note: `download.js` → `protocol/client.js` → `crypto/keys.js` transitively pulls in `@noble/curves` (~50-80KB). This is unavoidable since `decryptTransportChunk` needs `dh` from `keys.js`. If Worker bundle size becomes a concern, `decryptReceivedChunk` could be refactored out of `download.js` into a separate module that doesn't import `protocol/client.js`.
**ArrayBuffer → Uint8Array conversion:** All Transferable fields arrive in the Worker as `ArrayBuffer`. The Worker's message handler must wrap them before passing to library functions: `new Uint8Array(msg.data)` for encrypt, `new Uint8Array(msg.body)` for decryptAndStore. Non-transferred fields (`dhSecret`, `nonce`, `digest`, `chunkSizes`) arrive as their original types (`Uint8Array` / `number[]`) via structured clone.
The Worker's encrypt handler calls the same functions as `encryptFileForUpload` in agent.ts (key/nonce generation → `encryptFile` → `sha512` → `prepareChunkSizes`). This is not reimplementation — it's calling the same library functions from a different entry point.
**Libsodium init:** Both the Worker and the main thread must `await sodium.ready` before calling any crypto functions that use libsodium. The Worker does this once on startup before processing messages. The main thread needs it before `connectXFTP` (which uses libsodium via `verifyIdentityProof`) and before `downloadXFTPChunkRaw` (which uses libsodium via `generateX25519KeyPair` + `dh`). In practice, `main.ts` calls `await sodium.ready` at page load, before any XFTP calls.
Encrypt (mirrors `encryptFileForUpload` in agent.ts):
1. Generate key (32B) + nonce (24B) via `crypto.getRandomValues`
2. `fileHdr = encodeFileHeader({fileName, fileExtra: null})`
3. `fileSize = BigInt(fileHdr.length + source.length)`
4. `payloadSize = Number(fileSize) + fileSizeLen + authTagSize`
5. `chunkSizes = prepareChunkSizes(payloadSize)`
6. `encSize = BigInt(chunkSizes.reduce((a, b) => a + b, 0))`
7. `encData = encryptFile(source, fileHdr, key, nonce, fileSize, encSize)`
8. `digest = sha512(encData)` — note: the `digest` field comment in `FileDescription` in `description.ts` says "SHA-256" but the actual hash is SHA-512 everywhere (`sha512` in agent.ts and download.ts). Fix the comment during implementation.
9. Open OPFS upload file via `createSyncAccessHandle`, write `encData`, flush, close handle. Null out `encData` reference.
10. Reopen the same OPFS file with `createSyncAccessHandle` as a persistent read handle (stored on the Worker module scope). This handle is used by all subsequent `readChunk` calls and closed on `cleanup`.
11. Post back `{digest, key, nonce, chunkSizes}` (no encData transfer — data stays in OPFS)
readChunk:
- Use the persistent read handle: `handle.read(buf, {at: offset})` → return slice as transferable ArrayBuffer. OPFS allows only one `FileSystemSyncAccessHandle` per file; the persistent handle avoids per-call open/close overhead.
decryptAndStoreChunk (removes transport encryption only — stored data is still file-level encrypted):
1. `decryptReceivedChunk(dhSecret, nonce, new Uint8Array(body), chunkDigest)` → transit-decrypted chunk data (still file-level encrypted — only the transport layer is removed). Argument order matches signature `(dhSecret, cbNonce, encData, expectedDigest)` from download.ts. `body` arrives as `ArrayBuffer` via Transferable and must be wrapped; `dhSecret`, `nonce`, `chunkDigest` arrive as `Uint8Array` via structured clone.
2. On first call, open the OPFS download temp file via `createSyncAccessHandle` and store as a persistent write handle. Record `{chunkNo, size: decrypted.length}` in an in-memory `chunkMeta: Map<number, {offset: number, size: number}>` — offset is the running sum of sizes for chunks stored so far (chunks may arrive out of order with `concurrency > 1`, so offset is assigned as `currentFileOffset`, then `currentFileOffset += size`)
3. Write decrypted chunk to the persistent handle at the recorded offset
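The offset bookkeeping in steps 2–3 can be sketched as a small class — an illustration of the arrival-order assignment, not the actual Worker code (`ChunkLayout` is a hypothetical name):

```typescript
// Assigns each chunk a contiguous write offset in arrival order. Chunks may
// arrive out of order with concurrency > 1; chunkNo ordering is restored at
// read time by sorting the recorded metadata.
export class ChunkLayout {
  private nextOffset = 0
  readonly meta = new Map<number, {offset: number; size: number}>()

  // Record a chunk's size; returns the OPFS write position for it.
  record(chunkNo: number, size: number): number {
    const at = this.nextOffset
    this.meta.set(chunkNo, {offset: at, size})
    this.nextOffset += size
    return at
  }

  // Read order for verifyAndDecrypt: entries sorted by chunkNo.
  ordered(): {chunkNo: number; offset: number; size: number}[] {
    return [...this.meta.entries()]
      .sort(([a], [b]) => a - b)
      .map(([chunkNo, m]) => ({chunkNo, ...m}))
  }
}
```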
verifyAndDecrypt (mirrors size/digest checks in agent.ts `downloadFile`):
1. Close the persistent download write handle (flush first), then reopen as a read handle. Read each chunk from OPFS into a `Uint8Array[]` array, ordered by `chunkNo`: for each entry in `chunkMeta` sorted by `chunkNo`, `handle.read(buf, {at: offset})` with the recorded offset and size
2. Concatenate for verification: `combined = concatBytes(...chunks)`
3. Verify total size: `combined.length === params.size`
4. Verify SHA-512 digest: `sha512(combined)` matches `params.digest`
5. Decrypt: `decryptChunks(BigInt(params.size), chunks, params.key, params.nonce)` — `params.size` is the encrypted file size (`fd.size` = `sum(chunkSizes)` = `decryptChunks`' first param `encSize`). Called directly instead of via `processDownloadedFile` (which expects a full `FileDescription`). Pass the original `chunks` array (not `combined`), as `decryptChunks` handles concatenation internally.
6. Delete OPFS download temp file
7. Return `{header, content}` via transferable ArrayBuffer
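Steps 3–4 reduce to a size check plus a byte-wise digest comparison — a minimal sketch with the hash function injected (`verifySizeAndDigest` is an illustrative name; the Worker would pass the library's `sha512`):

```typescript
// Verify total size and digest before file-level decryption; throws on mismatch
// so corrupt data is never returned to the caller.
export function verifySizeAndDigest(
  combined: Uint8Array,
  expectedSize: number,
  expectedDigest: Uint8Array,
  sha512: (data: Uint8Array) => Uint8Array
): void {
  if (combined.length !== expectedSize) {
    throw new Error(`size mismatch: got ${combined.length}, expected ${expectedSize}`)
  }
  const actual = sha512(combined)
  const ok = actual.length === expectedDigest.length && actual.every((b, i) => b === expectedDigest[i])
  if (!ok) throw new Error("digest mismatch")
}
```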
### 3.5 Browser requirements
The page requires a modern browser with Web Worker and OPFS support:
- Chrome 102+, Firefox 114+, Safari 15.2+ (Workers + OPFS + ES module Workers — Firefox added module Worker support in 114)
- If Worker or OPFS is unavailable, the page shows an error message rather than falling back silently.
No `DirectBackend` is needed — the page is browser-only, and tests run in vitest browser mode (real Chromium). The existing library tests (`test/browser.test.ts`) test the crypto/upload/download pipeline directly without Workers.
## 4. Server Configuration
### 4.1 Server lists
`web/servers.json` — single source of truth for preset server addresses (imported by both `servers.ts` and `vite.config.ts`):
```json
{
"simplex": [
"xftp://da1aH3nOT-9G8lV7bWamhxpDYdJ1xmW7j3JpGaDR5Ug=@xftp1.simplex.im",
"xftp://5vog2Imy1ExJB_7zDZrkV1KDWi96jYFyy9CL6fndBVw=@xftp2.simplex.im",
"xftp://PYa32DdYNFWi0uZZOprWQoQpIk5qyjRJ3EF7bVpbsn8=@xftp3.simplex.im",
"xftp://k_GgQl40UZVV0Y4BX9ZTyMVqX5ZewcLW0waQIl7AYDE=@xftp4.simplex.im",
"xftp://-bIo6o8wuVc4wpZkZD3tH-rCeYaeER_0lz1ffQcSJDs=@xftp5.simplex.im",
"xftp://6nSvtY9pJn6PXWTAIMNl95E1Kk1vD7FM2TeOA64CFLg=@xftp6.simplex.im"
],
"flux": [
"xftp://92Sctlc09vHl_nAqF2min88zKyjdYJ9mgxRCJns5K2U=@xftp1.simplexonflux.com",
"xftp://YBXy4f5zU1CEhnbbCzVWTNVNsaETcAGmYqGNxHntiE8=@xftp2.simplexonflux.com",
"xftp://ARQO74ZSvv2OrulRF3CdgwPz_AMy27r0phtLSq5b664=@xftp3.simplexonflux.com",
"xftp://ub2jmAa9U0uQCy90O-fSUNaYCj6sdhl49Jh3VpNXP58=@xftp4.simplexonflux.com",
"xftp://Rh19D5e4Eez37DEE9hAlXDB3gZa1BdFYJTPgJWPO9OI=@xftp5.simplexonflux.com",
"xftp://0AznwoyfX8Od9T_acp1QeeKtxUi676IBIiQjXVwbdyU=@xftp6.simplexonflux.com"
]
}
```
`web/servers.ts`:
```typescript
import {parseXFTPServer, type XFTPServer} from '../src/protocol/address.js'
import presets from './servers.json'
declare const __XFTP_SERVERS__: string[]
const serverAddresses: string[] = typeof __XFTP_SERVERS__ !== 'undefined'
? __XFTP_SERVERS__
: [...presets.simplex, ...presets.flux]
export function getServers(): XFTPServer[] {
return serverAddresses.map(parseXFTPServer)
}
export function pickRandomServer(servers: XFTPServer[]): XFTPServer {
return servers[Math.floor(Math.random() * servers.length)]
}
```
### 4.2 Build-time injection
`vite.config.ts` defines `__XFTP_SERVERS__`:
- `mode === 'local'`: `["xftp://<test-fingerprint>@localhost:7000"]`
- `mode === 'production'`: not defined → falls through to hardcoded list
### 4.3 Assumption
Production XFTP servers must have `[WEB]` section configured with a CA-signed certificate for browser TLS. Without this, browsers will reject the self-signed XFTP identity cert. The local test server uses `tests/fixtures/` certs which Chromium accepts via `ignoreHTTPSErrors`.
## 5. Page Structure & UI
### 5.1 Routing
`main.ts` checks `window.location.hash` once on page load:
- Hash present → download mode
- Hash absent → upload mode
No `hashchange` listener — the shareable link opens in a new tab. Simple page-load routing.
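The routing rule fits in one pure function — a sketch with an illustrative name (`pageMode` is not an existing export):

```typescript
// location.hash is "" when absent, or "#<fragment>" when present.
// A non-empty fragment means the page was opened from a shareable file link.
export function pageMode(hash: string): "upload" | "download" {
  return hash.length > 1 ? "download" : "upload"
}

// main.ts would call this once: render(pageMode(window.location.hash))
```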
### 5.2 Upload UI states
1. **Landing**: Drag-drop zone centered, file picker button, size limit note
2. **Uploading**: Circular progress (canvas), percentage, cancel button
3. **Complete**: Shareable link (input + copy button), "Install SimpleX" CTA
4. **Error**: Error message + retry button. On server-unreachable, auto-retry with exponential backoff (1s, 2s, 4s, up to 3 attempts) before showing the error state.
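The auto-retry policy can be sketched as a generic wrapper — assumptions flagged: `withRetry`, `isRetryable`, and the injectable `sleep` are illustrative names, and the real code would classify server-unreachable errors in `isRetryable`:

```typescript
// Retry with exponential backoff: delays of base*2^i (1s, 2s, 4s by default),
// giving up after `attempts` tries or on the first non-retryable error.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
  isRetryable: (e: unknown) => boolean = () => true,
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms))
): Promise<T> {
  let lastErr: unknown = undefined
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (e) {
      lastErr = e
      if (!isRetryable(e) || i === attempts - 1) break
      await sleep(baseDelayMs * 2 ** i) // 1000, 2000, 4000, ...
    }
  }
  throw lastErr
}
```

Injecting `sleep` keeps the backoff schedule testable without real timers.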
### 5.3 Download UI states
1. **Ready**: Approximate file size displayed (encrypted size from `fd.size` or `fd.redirect.size` — see §7 step 2; file name is unavailable — it's inside the encrypted content), download button
2. **Downloading**: Circular progress, percentage
3. **Complete**: Browser save dialog triggered automatically
4. **Error**: Error message (expired, corrupted, unreachable)
### 5.4 Security summary (RFC §7.4)
Both upload-complete and download-ready states display a brief non-technical security summary:
- Files are encrypted in the browser before upload — the server never sees file contents.
- The link contains the decryption key in the hash fragment, which the browser never sends to any server.
- For maximum security, use the SimpleX app.
### 5.5 File expiry
Display on upload-complete state: "Files are typically available for 48 hours." This is an approximation — actual expiry depends on each XFTP server's `[STORE_LOG]` retention configuration. The 48-hour figure matches the current preset server defaults.
### 5.6 Styling
Plain CSS, no framework. White background, centered content, responsive. Circular progress via `<canvas>` (arc drawing, percentage text in center).
File size limit: 100MB. Displayed on upload page.
### 5.7 CSP
`index.html` includes a `<meta>` Content-Security-Policy tag with a build-time placeholder:
```html
<meta http-equiv="Content-Security-Policy"
content="default-src 'self'; worker-src 'self' blob:; style-src 'self' 'unsafe-inline'; connect-src __CSP_CONNECT_SRC__;">
```
Vite's `transformIndexHtml` hook (in `vite.config.ts`) replaces `__CSP_CONNECT_SRC__` at build time with origins derived from the server list:
- Local mode: `https://localhost:7000`
- Production: `https://xftp1.simplex.im:443 https://xftp2.simplex.im:443 ...` (all 12 servers)
## 6. Upload Flow
`web/upload.ts`:
1. User drops/picks file → `File` object
2. Validate `file.size <= 100 * 1024 * 1024` — show error if exceeded
3. Read file: `new Uint8Array(await file.arrayBuffer())` — note: after `backend.encrypt()` transfers the buffer to the Worker, `fileData` is detached (zero-length). Peak memory is ~2× file size (main thread holds original until transfer, Worker holds encrypted copy before OPFS write). Acceptable for the 100MB limit; do not raise the limit without considering memory implications.
4. Create `CryptoBackend` via factory
5. Create `XFTPClientAgent`
6. `backend.encrypt(fileData, file.name, onProgress)` → `EncryptResult`
- Encryption progress shown on canvas (Worker posts progress messages)
7. Pick one random server from configured list (V1: all chunks to same server)
8. Call `uploadFile(agent, server, metadata, {onProgress, readChunk: (off, sz) => backend.readChunk(off, sz)})`:
- `metadata` = `{digest, key, nonce, chunkSizes}` from EncryptResult
- Network progress shown on canvas
- Returns `{rcvDescription, sndDescription, uri}`
9. Construct full URL: `window.location.origin + window.location.pathname + '#' + uri`
10. Display link, copy button
11. Cleanup: `backend.cleanup()`, `closeXFTPAgent(agent)`
**Cancel:** User can abort via cancel button. Sets an `AbortController` signal that:
- Sends `{type: 'cleanup'}` to Worker
- Closes the XFTPClientAgent (drops HTTP/2 connections)
- Resets UI to landing state
## 7. Download Flow
`web/download.ts`:
1. Parse `window.location.hash.slice(1)` → `decodeDescriptionURI(fragment)` → `FileDescription`
2. Display file size (`fd.size` bytes, formatted human-readable). Note: `fd.size` is the encrypted size (slightly larger than plaintext due to padding + auth tag). The plaintext size is not available until decryption — display it as an approximate file size. If `fd.redirect !== null`, size comes from `fd.redirect.size` (which is the inner encrypted size).
3. User clicks "Download"
4. Create `CryptoBackend` and `XFTPClientAgent`
5. Call `downloadFileRaw(agent, fd, onRawChunk, {onProgress, concurrency: 3})`:
- `onRawChunk` forwards each raw chunk to the Worker: `backend.decryptAndStoreChunk(raw.dhSecret, raw.nonce, raw.body, raw.digest, raw.chunkNo)`
- `downloadFileRaw` handles redirect resolution internally (see §7.1), parallel downloads, and connection pooling
- Returns the resolved `FileDescription` (inner fd for redirect case, original fd otherwise)
6. `backend.verifyAndDecrypt({size: resolvedFd.size, digest: resolvedFd.digest, key: resolvedFd.key, nonce: resolvedFd.nonce})` → `{header, content}`
- Verifies size + SHA-512 digest + file-level decryption inside Worker. Only the four needed fields are sent — private replica keys stay on the main thread.
7. ACK: `ackFileChunks(agent, resolvedFd)` — best-effort, after verification succeeds
8. Sanitize `header.fileName` before use: strip path separators (`/`, `\`), replace null/control characters (U+0000-U+001F, U+007F), strip Unicode bidi override characters (U+202A-U+202E, U+2066-U+2069 — prevents `doc.pdf.exe` appearing as `doc.exe.pdf`), limit length to 255 chars. The filename is user-controlled (set by the uploader) and arrives via decrypted content. Then trigger browser save: `new Blob([content])` → `<a download="${sanitizedName}">` click
9. Cleanup: `backend.cleanup()`, `closeXFTPAgent(agent)`
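The sanitization rules in step 8 can be sketched as one pure function — an illustrative implementation (replacing stripped characters with `_` and the empty-name fallback `"download"` are choices made here, not specified above):

```typescript
// Sanitize an uploader-controlled filename before using it in <a download>.
export function sanitizeFileName(name: string): string {
  const cleaned = name
    .replace(/[/\\]/g, "_")                        // path separators
    .replace(/[\u0000-\u001f\u007f]/g, "_")        // null + control characters
    .replace(/[\u202a-\u202e\u2066-\u2069]/g, "")  // bidi overrides (RTLO spoofing)
    .slice(0, 255)                                 // length limit
  return cleaned || "download"                     // never return an empty name
}
```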
### 7.1 Redirect handling
Handled inside `downloadFileRaw` in agent.ts — the web page doesn't see it. When `fd.redirect !== null`:
1. Download redirect chunks via `downloadXFTPChunkRaw` (parallel, same as regular chunks)
2. Transit-decrypt + verify + file-level decrypt on main thread (redirect data is always small — a few KB of YAML, so main thread decryption is fine)
3. Parse YAML → inner `FileDescription`, validate against `fd.redirect.{size, digest}`
4. ACK redirect chunks (best-effort)
5. Continue downloading inner description's chunks, calling `onRawChunk` for each
### 7.2 Architecture note: download refactoring
Both upload and download use `agent.ts` for orchestration. The key difference is where the crypto/network split happens:
- **Upload**: agent.ts reads encrypted chunks from the Worker via `readChunk` callback, sends them over the network.
- **Download**: agent.ts receives raw encrypted responses from the network via `downloadXFTPChunkRaw` (DH key exchange + network only, no decryption), passes them to the web page via `onRawChunk` callback, which routes them to the Worker for transit decryption.
This split keeps all expensive crypto off the main thread. Transit decryption uses a custom JS Salsa20 implementation (`xorKeystream` in secretbox.ts) that would block the UI for ~50-200ms on a 4MB chunk. File-level decryption (`decryptChunks`) is similarly expensive. Both happen in the Worker.
The cheap operations stay on the main thread: DH key exchange (`generateX25519KeyPair` + `dh` — ~1ms via libsodium WASM), XFTP command encoding/decoding, connection management.
## 8. Build & Dev Setup
### 8.1 vite.config.ts (new, separate from vitest.config.ts)
```typescript
import {defineConfig, type Plugin} from 'vite'
import {readFileSync} from 'fs'
import {createHash} from 'crypto'
import presets from './web/servers.json'
function parseHost(addr: string): string {
const m = addr.match(/@(.+)$/)
if (!m) throw new Error('bad server address: ' + addr)
const host = m[1].split(',')[0]
return host.includes(':') ? host : host + ':443'
}
function cspPlugin(servers: string[]): Plugin {
const origins = servers.map(s => 'https://' + parseHost(s)).join(' ')
return {
name: 'csp-connect-src',
transformIndexHtml: {
order: 'pre',
handler(html, ctx) {
if (ctx.server) {
// Dev mode: remove CSP meta tag entirely — Vite HMR needs inline scripts
return html.replace(/<meta\s[^>]*?Content-Security-Policy[\s\S]*?>/i, '')
}
return html.replace('__CSP_CONNECT_SRC__', origins)
}
}
}
}
export default defineConfig(({mode}) => {
const define: Record<string, string> = {}
let servers: string[]
if (mode === 'local') {
const pem = readFileSync('../tests/fixtures/ca.crt', 'utf-8')
const der = Buffer.from(pem.replace(/-----[^-]+-----/g, '').replace(/\s/g, ''), 'base64')
const fp = createHash('sha256').update(der).digest('base64')
.replace(/\+/g, '-').replace(/\//g, '_')
servers = [`xftp://${fp}@localhost:7000`]
define['__XFTP_SERVERS__'] = JSON.stringify(servers)
} else {
servers = [...presets.simplex, ...presets.flux]
}
return {
root: 'web',
build: {outDir: '../dist-web'},
define,
worker: {format: 'es'},
plugins: [cspPlugin(servers)],
}
})
```
### 8.2 package.json scripts
```json
"dev": "vite --mode local",
"build:local": "vite build --mode local",
"build:prod": "vite build --mode production",
"preview": "vite preview",
"check:web": "tsc -p tsconfig.web.json --noEmit && tsc -p tsconfig.worker.json --noEmit"
```
Note: `check:web` type-checks `src/` twice (once per config) — acceptable for this small library.
Add `vite` as an explicit devDependency (`^6.0.0` — matching the version vitest 3.x depends on transitively). Relying on transitive resolution is fragile across package managers.
### 8.3 TypeScript configuration
The existing `tsconfig.json` has `rootDir: "src"` and `include: ["src/**/*.ts"]` — this is for library compilation only (output to `dist/`). Vite handles `web/` TypeScript compilation independently via esbuild, so the main tsconfig is unchanged. `web/*.ts` files import from `../src/*.js` using relative paths.
Add two tsconfigs for `web/` type-checking — split by environment to avoid type pollution between DOM and WebWorker globals:
`tsconfig.web.json` — main-thread files (DOM globals: `document`, `window`, etc.):
```json
{
"extends": "./tsconfig.json",
"compilerOptions": {
"rootDir": ".",
"noEmit": true,
"types": [],
"moduleResolution": "bundler",
"lib": ["ES2022", "DOM"]
},
"include": ["web/**/*.ts", "src/**/*.ts"],
"exclude": ["web/crypto.worker.ts"]
}
```
`tsconfig.worker.json` — Worker file (`self`, `FileSystemSyncAccessHandle`, etc.):
```json
{
"extends": "./tsconfig.json",
"compilerOptions": {
"rootDir": ".",
"noEmit": true,
"types": [],
"moduleResolution": "bundler",
"lib": ["ES2022", "WebWorker"]
},
"include": ["web/crypto.worker.ts", "src/**/*.ts"]
}
```
Both configs set `"types": []` to prevent auto-inclusion of `@types/node`, which would otherwise pollute the DOM/WebWorker environments with Node.js globals (`process`, `Buffer`, etc.), and `"moduleResolution": "bundler"` for Vite-compatible resolution (JSON imports, `.js` extension mapping). The base config's `"moduleResolution": "node"` would cause false type errors on `import ... from './servers.json'`. Excluding `@types/node` means `src/client.ts`'s dynamic `import("node:http2")` will produce a type error in these configs. This is acceptable — `src/client.ts` provides `createNodeTransport` which is never used in browser code (Vite tree-shakes it out), and full `src/` type-checking is handled by the base `tsconfig.json`. If the error is distracting, add `src/client.ts` to both configs' `exclude` arrays.
Both extend the library tsconfig (inheriting `strict`, `module`, etc.) and include `src/**/*.ts` so imports from `../src/*.js` resolve. `"noEmit": true` means they're only used for type-checking — Vite handles actual compilation. The inherited `"exclude": ["node_modules", "dist", "test"]` intentionally excludes `test/` — test files are type-checked by their own vitest/playwright configs, not by `check:web`.
### 8.4 Dev workflow
`npm run dev` → Vite dev server at `localhost:5173`, configured for local test server. Start `xftp-server` on port 7000 separately (or via the existing globalSetup).
Note: The CSP meta tag's `default-src 'self'` blocks Vite's injected HMR inline scripts in dev mode. The `cspPlugin` handles this by removing the entire CSP `<meta>` tag in serve mode (dev server), so HMR works without restrictions. Production builds always have the correct CSP.
## 9. Library Changes (agent.ts + client.ts)
Changes to support the web page: upload `readChunk` callback, download `onRawChunk` callback with parallel chunk downloads.
### 9.1 Type changes
Split the existing `EncryptedFileInfo` (which currently has `encData`, `digest`, `key`, `nonce`, `chunkSizes` as direct fields) into a metadata-only base and an extension:
```typescript
// Metadata-only variant (no encData — data lives in Worker/OPFS)
export interface EncryptedFileMetadata {
digest: Uint8Array
key: Uint8Array
nonce: Uint8Array
chunkSizes: number[]
}
// Full variant (existing, extends metadata with data)
export interface EncryptedFileInfo extends EncryptedFileMetadata {
encData: Uint8Array
}
```
### 9.2 uploadFile signature change
Replace positional optional params with an options bag. Add optional `readChunk`. When provided, `encrypted.encData` is not accessed.
```typescript
export interface UploadOptions {
onProgress?: (uploaded: number, total: number) => void
redirectThreshold?: number
readChunk?: (offset: number, size: number) => Promise<Uint8Array>
}
export async function uploadFile(
agent: XFTPClientAgent,
server: XFTPServer,
encrypted: EncryptedFileMetadata,
options?: UploadOptions
): Promise<UploadResult>
```
Inside `uploadFile`:
- Chunk read: if `options?.readChunk` is provided, use it. Otherwise, verify `'encData' in encrypted` at runtime (throws `"uploadFile: readChunk required when encData is absent"` if missing), then use `(off, sz) => Promise.resolve((encrypted as EncryptedFileInfo).encData.subarray(off, off + sz))`. This guards against calling `uploadFile` with `EncryptedFileMetadata` but no `readChunk`. For each chunk, call `readChunk(offset, size)` once and use the returned `Uint8Array` for both `getChunkDigest(chunkData)` and `uploadXFTPChunk(..., chunkData)` — do not call `readChunk` twice per chunk.
- Progress total: `const total = encrypted.chunkSizes.reduce((a, b) => a + b, 0)` — replaces `encrypted.encData.length` (line 129) since `EncryptedFileMetadata` has no `encData`. The values are identical: `encData.length === sum(chunkSizes)`.
- `buildDescription` parameter type: change from `EncryptedFileInfo` to `EncryptedFileMetadata` — it only accesses `chunkSizes`, `digest`, `key`, `nonce` (not `encData`).
`uploadRedirectDescription` (internal) is unchanged — redirect descriptions are always small and created in-memory by `encryptFileForUpload`.
### 9.3 Backward compatibility
The signature change from positional params `(agent, server, encrypted, onProgress?, redirectThreshold?)` to `(agent, server, encrypted, options?)` is a breaking change for callers that pass `onProgress` or `redirectThreshold`. In practice, the only callers are the browser test (which passes no options — no change needed) and the web page (new code). `EncryptedFileInfo` extends `EncryptedFileMetadata`, so existing callers that pass `EncryptedFileInfo` work without change.
### 9.4 client.ts: downloadXFTPChunkRaw
Split `downloadXFTPChunk` at the network/crypto boundary. The new function does DH key exchange and network I/O but skips transit decryption:
```typescript
export interface RawChunkResponse {
dhSecret: Uint8Array
nonce: Uint8Array
body: Uint8Array
}
export async function downloadXFTPChunkRaw(
c: XFTPClient, rpKey: Uint8Array, fId: Uint8Array
): Promise<RawChunkResponse> {
const {publicKey, privateKey} = generateX25519KeyPair()
const cmd = encodeFGET(encodePubKeyX25519(publicKey))
const {response, body} = await sendXFTPCommand(c, rpKey, fId, cmd)
if (response.type !== "FRFile") throw new Error("unexpected response: " + response.type)
const dhSecret = dh(response.rcvDhKey, privateKey)
return {dhSecret, nonce: response.nonce, body}
}
```
`RawChunkResponse` contains only what client.ts produces (DH secret, nonce, encrypted body). The chunk metadata (`chunkNo`, `digest`) is added by agent.ts when constructing `RawDownloadedChunk` (see §9.5).
The existing `downloadXFTPChunk` is refactored to call `downloadXFTPChunkRaw` + `decryptReceivedChunk`:
```typescript
export async function downloadXFTPChunk(
c: XFTPClient, rpKey: Uint8Array, fId: Uint8Array, digest?: Uint8Array
): Promise<Uint8Array> {
const {dhSecret, nonce, body} = await downloadXFTPChunkRaw(c, rpKey, fId)
return decryptReceivedChunk(dhSecret, nonce, body, digest ?? null)
}
```
### 9.5 agent.ts: downloadFileRaw, ackFileChunks, RawDownloadedChunk
New type combining client.ts's `RawChunkResponse` with chunk metadata from agent.ts:
```typescript
export interface RawDownloadedChunk {
chunkNo: number
dhSecret: Uint8Array
nonce: Uint8Array
body: Uint8Array
digest: Uint8Array
}
```
New function providing download orchestration with a raw chunk callback. Handles connection pooling, parallel downloads, redirect resolution, and progress. Does **not** ACK — the caller ACKs after verification.
```typescript
export interface DownloadRawOptions {
onProgress?: (downloaded: number, total: number) => void
concurrency?: number // max parallel chunk downloads, default 1
}
export async function downloadFileRaw(
agent: XFTPClientAgent,
fd: FileDescription,
onRawChunk: (chunk: RawDownloadedChunk) => Promise<void>,
options?: DownloadRawOptions
): Promise<FileDescription>
```
Returns the resolved `FileDescription` — for redirect files this is the inner fd, for non-redirect files this is the original fd. The caller uses this for verification and ACK.
Internal structure:
1. Validate `fd` via `validateFileDescription` (may double-validate if caller already validated via `decodeDescriptionURI` — harmless)
2. If `fd.redirect !== null`: resolve redirect on main thread (redirect data is small):
a. Download redirect chunks via `downloadXFTPChunk` (not raw — main thread decryption is fine for a few KB)
b. Verify size + digest, `processDownloadedFile` → YAML bytes
c. Parse inner `FileDescription`, validate against `fd.redirect.{size, digest}`
d. ACK redirect chunks (best-effort — redirect chunks are small and separate from the file chunks)
e. Replace `fd` with inner description
3. Pre-connect: call `getXFTPServerClient(agent, server)` for each unique server before launching concurrent workers. This ensures the client connection exists in the agent's map, avoiding a race condition where multiple concurrent workers all see the client as missing and each call `connectXFTP` independently (leaking all but the last connection). Known limitation: if a connection drops mid-download and multiple workers attempt reconnection simultaneously, the same TOCTOU race reappears. This is a pre-existing issue in `getXFTPServerClient`; a proper fix (per-key connection promise) is out of scope for this plan but should be tracked for follow-up.
4. Download file chunks in parallel (concurrency-limited via sliding window):
- Create a queue of chunk indices `[0, 1, ..., N-1]`. Launch `min(concurrency, N)` async workers, each pulling the next index from the queue until empty. Each worker loops: pull index → derive key → `getXFTPServerClient` → `downloadXFTPChunkRaw` → `await onRawChunk(...)` → update progress → next index. `await Promise.all(workers)` to wait for completion.
- For each chunk: derive key (`decodePrivKeyEd25519` → `ed25519KeyPairFromSeed`), get client (`getXFTPServerClient`), call `downloadXFTPChunkRaw`, `await onRawChunk(...)` with result + `chunkNo` + `chunk.digest`
- Each concurrency slot awaits its `onRawChunk` before starting the next download on that slot. With `concurrency > 1`, multiple `onRawChunk` calls may be in-flight concurrently (one per slot). The Worker handles this correctly — messages are queued and processed sequentially.
- Update progress after each chunk: `downloaded += chunk.chunkSize; onProgress?.(downloaded, resolvedFd.size)` — both values use encrypted sizes for consistency
5. Return the resolved `fd`
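The sliding-window scheme from step 4 can be sketched in isolation. This is a minimal illustration of the queue/worker loop only; the real workers also derive keys, fetch clients via the agent, and report progress:

```typescript
// Minimal concurrency-limited worker pool over chunk indices, as in step 4:
// min(concurrency, n) workers each pull the next index from a shared queue
// until it is empty; a slot starts its next item only after its handler
// resolves. Single-threaded JS makes the shared queue.shift() safe.
async function forEachLimit(
  n: number, // number of chunks
  concurrency: number,
  handler: (index: number) => Promise<void>
): Promise<void> {
  const queue = Array.from({length: n}, (_, i) => i)
  const worker = async () => {
    for (let i = queue.shift(); i !== undefined; i = queue.shift()) {
      await handler(i) // downloadXFTPChunkRaw + onRawChunk would live here
    }
  }
  const workers = Array.from({length: Math.min(concurrency, n)}, worker)
  await Promise.all(workers)
}
```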
New helper for ACKing after verification:
```typescript
export async function ackFileChunks(
agent: XFTPClientAgent, fd: FileDescription
): Promise<void> {
for (const chunk of fd.chunks) {
const replica = chunk.replicas[0]
if (!replica) continue
try {
const client = await getXFTPServerClient(agent, parseXFTPServer(replica.server))
const seed = decodePrivKeyEd25519(replica.replicaKey)
const kp = ed25519KeyPairFromSeed(seed)
await ackXFTPChunk(client, kp.privateKey, replica.replicaId)
} catch (_) {}
}
}
```
The existing `downloadFile` is refactored to use `downloadFileRaw` internally:
```typescript
export async function downloadFile(
agent: XFTPClientAgent,
fd: FileDescription,
onProgress?: (downloaded: number, total: number) => void
): Promise<DownloadResult> {
const chunks: Uint8Array[] = []
const resolvedFd = await downloadFileRaw(agent, fd, async (raw) => {
chunks[raw.chunkNo - 1] = decryptReceivedChunk(
raw.dhSecret, raw.nonce, raw.body, raw.digest
)
}, {onProgress})
// verify + file-level decrypt using resolvedFd (inner fd for redirect case)
const combined = chunks.length === 1 ? chunks[0] : concatBytes(...chunks)
if (combined.length !== resolvedFd.size) throw new Error("downloadFile: file size mismatch")
const digest = sha512(combined)
if (!digestEqual(digest, resolvedFd.digest)) throw new Error("downloadFile: file digest mismatch")
// processDownloadedFile re-concatenates chunks internally — this mirrors the
// existing downloadFile pattern (verify on concatenated data, then pass chunks
// array to decryptChunks which concatenates again). Acceptable overhead for
// correctness: verification must happen on transit-decrypted data before
// file-level decryption transforms it.
const result = processDownloadedFile(resolvedFd, chunks)
await ackFileChunks(agent, resolvedFd)
return result
}
```
Existing callers retain serial behavior (`concurrency` defaults to 1). The web page opts into parallelism by passing `concurrency: 3`. The browser test (`test/browser.test.ts`) continues to work unchanged. The chunks array is initialized empty (`[]`) and populated by sparse index assignment (`chunks[raw.chunkNo - 1] = ...`), so it correctly handles both redirect and non-redirect cases regardless of the outer fd's chunk count. `digestEqual` is an existing module-private helper in agent.ts (line 327) that performs constant-time byte comparison.
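For reference, a constant-time byte comparison along the lines described for `digestEqual` looks like this (a generic sketch, not the actual agent.ts code):

```typescript
// Constant-time byte-array equality: always scans the full length instead of
// returning at the first mismatch, so timing does not leak the mismatch index.
// (The early length check leaks only the lengths, which are public here.)
function constantTimeEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false
  let diff = 0
  for (let i = 0; i < a.length; i++) diff |= a[i] ^ b[i]
  return diff === 0
}
```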
### 9.6 Backward compatibility (download)
`downloadFile` signature is unchanged — existing callers are unaffected. The refactoring adds `downloadFileRaw`, `ackFileChunks`, and `RawDownloadedChunk` as new exports from agent.ts, and `downloadXFTPChunkRaw` + `RawChunkResponse` as new exports from client.ts.
## 10. Testing
### 10.1 Existing tests (unchanged)
- `npm run test:browser` — vitest browser round-trip (library-level)
- `cabal test --test-option='--match=/XFTP Web Client/'` — Haskell per-function tests
### 10.2 New: page E2E test
Add `test/page.spec.ts` using `@playwright/test` (not vitest browser mode — vitest tests run IN the browser and can't control page navigation; Playwright tests run in Node.js and control the browser). Add `@playwright/test` as a devDependency.
Add `playwright.config.ts` at the project root (`xftp-web/`):
- `webServer: { command: 'vite build --mode local && vite preview', url: 'http://localhost:4173', reuseExistingServer: !process.env.CI }` — the `url` property tells Playwright to wait until the preview server is ready before running tests
- `use.ignoreHTTPSErrors: true` (test server uses self-signed cert)
- `use.launchOptions: { args: ['--ignore-certificate-errors'] }` — required because Playwright's `ignoreHTTPSErrors` only affects page navigation, not `fetch()` calls from in-page JavaScript. Without this flag, the page's `createBrowserTransport` fetch to `https://localhost:7000` would fail TLS validation.
- `globalSetup`: `'./test/globalSetup.ts'` (starts xftp-server, shared with vitest)
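Assembled from the bullets above, the config might look like this (a sketch; values mirror the bullets and may need adjustment during implementation):

```typescript
// xftp-web/playwright.config.ts — sketch assembled from the bullets above
import {defineConfig} from '@playwright/test'

export default defineConfig({
  globalSetup: './test/globalSetup.ts', // starts xftp-server (shared with vitest)
  use: {
    ignoreHTTPSErrors: true, // page navigation to self-signed-cert server
    // in-page fetch() calls are not covered by ignoreHTTPSErrors:
    launchOptions: {args: ['--ignore-certificate-errors']},
  },
  webServer: {
    command: 'vite build --mode local && vite preview',
    url: 'http://localhost:4173', // wait until the preview server is ready
    reuseExistingServer: !process.env.CI,
  },
})
```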
```typescript
import {test, expect} from '@playwright/test'
const PAGE_URL = 'http://localhost:4173' // vite preview URL (matches webServer.url in playwright.config.ts)
test('page upload + download round-trip', async ({page}) => {
  await page.goto(PAGE_URL)
  // Set file input via page.setInputFiles()
  // Wait for upload link to appear: page.waitForSelector('[data-testid="share-link"]')
  // Extract hash from link text
  // Navigate to PAGE_URL + '#' + hash
  // Wait for download complete state
  // Verify file was offered for save (check download event)
})
```
Add script: `"test:page": "playwright test test/page.spec.ts"`
This tests the real bundle including Worker loading, OPFS, and CSP. The existing `test/browser.test.ts` continues to test the library-level pipeline (vitest browser mode, no Workers).
### 10.3 Manual testing
`npm run dev` → open `localhost:5173` in browser → drag file → get link → open link in new tab → download. Requires xftp-server running on port 7000 (local mode).
## 11. Files
**Create:**
- `xftp-web/web/index.html` — page entry point (includes CSP meta tag)
- `xftp-web/web/main.ts` — router + libsodium init
- `xftp-web/web/upload.ts` — upload UI + orchestration
- `xftp-web/web/download.ts` — download UI + orchestration
- `xftp-web/web/progress.ts` — circular progress canvas component
- `xftp-web/web/servers.json` — preset server addresses (shared by servers.ts and vite.config.ts)
- `xftp-web/web/servers.ts` — server configuration (imports servers.json)
- `xftp-web/web/crypto-backend.ts` — CryptoBackend interface + WorkerBackend + factory
- `xftp-web/web/crypto.worker.ts` — Web Worker implementation
- `xftp-web/web/style.css` — styles
- `xftp-web/vite.config.ts` — page build config (CSP generation, server list)
- `xftp-web/tsconfig.web.json` — IDE/CI type-checking for `web/` main-thread files (DOM)
- `xftp-web/tsconfig.worker.json` — IDE/CI type-checking for `web/crypto.worker.ts` (WebWorker)
- `xftp-web/playwright.config.ts` — Playwright E2E test config (webServer, globalSetup)
- `xftp-web/test/page.spec.ts` — page E2E test (Playwright)
**Modify:**
- `xftp-web/src/agent.ts` — add `EncryptedFileMetadata` type, `uploadFile` options bag with `readChunk`, `downloadFileRaw` with `onRawChunk` callback + parallel downloads, `ackFileChunks`, `RawDownloadedChunk` type, refactor `downloadFile` on top of `downloadFileRaw`, add `import {decryptReceivedChunk} from "./download.js"` (needed by refactored `downloadFile`)
- `xftp-web/src/client.ts` — add `downloadXFTPChunkRaw`, `RawChunkResponse` type, refactor `downloadXFTPChunk` to use raw variant
- `xftp-web/package.json` — add dev/build/check:web/test:page scripts, add `vite` + `@playwright/test` devDeps
- `xftp-web/src/protocol/description.ts` — fix stale "SHA-256" comment on `FileDescription.digest` to "SHA-512"
- `xftp-web/.gitignore` — add `dist-web/`
## 12. Implementation Order
1. **Library refactoring**: `client.ts`: add `downloadXFTPChunkRaw`; `agent.ts`: add `downloadFileRaw` + parallel downloads, `uploadFile` options bag with `readChunk`; refactor existing `downloadFile` on top of `downloadFileRaw`. Run existing tests to verify no regressions.
2. **Vite config + HTML shell**: `vite.config.ts`, `index.html`, `main.ts`; verify dev server works
3. **Server config**: `servers.ts` with both local and production server lists
4. **CryptoBackend + Worker**: interface, WorkerBackend, Worker implementation, OPFS logic
5. **Upload flow**: `upload.ts` with drag-drop, encrypt via Worker, upload via agent, show link
6. **Download flow**: `download.ts` with URL parsing, download via agent `downloadFileRaw`, Worker decrypt, browser save
7. **Progress component**: `progress.ts` canvas drawing
8. **Styling**: `style.css`
9. **Testing**: page E2E test, manual browser verification
10. **Build scripts**: `build:local`, `build:prod` in package.json
**Re-export** from agent.ts: `newXFTPAgent`, `closeXFTPAgent`, `XFTPClientAgent`.
**`uploadFile`**: add `agent: XFTPClientAgent` as first param. Replace `connectXFTP` → `getXFTPServerClient`. Remove `finally { closeXFTP }`. Pass `agent` to `uploadRedirectDescription`.
**`uploadRedirectDescription`**: change from `(client, server, innerFd)` to `(agent, server, innerFd)`. Get client via `getXFTPServerClient`.
**`downloadFile`**: add `agent` param. Delete local `connections: Map`. Replace `getOrConnect` → `getXFTPServerClient`. Remove finally cleanup. Pass `agent` to `downloadWithRedirect`.
**`downloadWithRedirect`**: add `agent` param. Same replacements. Remove try/catch cleanup. Recursive call passes `agent`.
**`deleteFile`**: add `agent` param. Same pattern.
**Delete**: `getOrConnect` function entirely.
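A minimal sketch of the agent shape these changes assume (hypothetical and generic over the client type; caching the in-flight promise is an illustrative extra that also avoids the concurrent-connect race, and is not something this plan requires):

```typescript
// Hypothetical sketch of the agent pattern: a map of cached clients keyed by
// server URL. Caching the *promise* rather than the resolved client means
// concurrent callers for the same server share one in-flight connect.
// `connect` stands in for connectXFTP; `Client` for XFTPClient.
export interface XFTPClientAgent<Client> {
  clients: Map<string, Promise<Client>>
  connect: (server: string) => Promise<Client>
}

export function newXFTPAgent<Client>(
  connect: (server: string) => Promise<Client>
): XFTPClientAgent<Client> {
  return {clients: new Map(), connect}
}

export function getXFTPServerClient<Client>(
  a: XFTPClientAgent<Client>,
  server: string
): Promise<Client> {
  let p = a.clients.get(server)
  if (!p) {
    // store the promise before awaiting, so racing callers find it in the map
    p = a.connect(server).catch((e) => {
      a.clients.delete(server) // don't cache failed connections
      throw e
    })
    a.clients.set(server, p)
  }
  return p
}
```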
## Changes: test/browser.test.ts
Create agent before operations, pass to upload/download, close in finally.
## Verification
1. `npx vitest --run` — browser round-trip test passes
2. No remaining `browserClients`, `getOrConnect`, or per-function `connections: Map` locals
3. `connectXFTP` and `closeXFTP` still exported (XFTPWebTests.hs compat)
4. All orchestration functions take `agent` as first param
@@ -0,0 +1,859 @@
# XFTP Web Page E2E Tests Plan
## Table of Contents
1. [Executive Summary](#1-executive-summary)
2. [Test Infrastructure](#2-test-infrastructure)
3. [Test Infrastructure - Page Objects](#3-test-infrastructure---page-objects)
4. [Upload Flow Tests](#4-upload-flow-tests)
5. [Download Flow Tests](#5-download-flow-tests)
6. [Edge Cases](#6-edge-cases)
7. [Implementation Order](#7-implementation-order)
8. [Test Utilities](#8-test-utilities)
---
## 1. Executive Summary
This document specifies comprehensive Playwright E2E tests for the XFTP web page. The existing test (`page.spec.ts`) performs a basic upload/download round-trip. This plan extends coverage to:
- **Upload flow**: File selection (picker + drag-drop), validation, progress, cancellation, link sharing, error handling
- **Download flow**: Invalid link handling, download button, progress, file save, error states
- **Edge cases**: Boundary file sizes, special characters, network failures, multi-chunk files with redirect, UI information display
**Key constraints**:
- Tests run against a local XFTP server (started via `globalSetup.ts`)
- Server port is dynamic (read from `/tmp/xftp-test-server.port`)
- Browser uses `--ignore-certificate-errors` for self-signed certs
- OPFS and Web Workers are required (Chromium supports both)
**Test file location**: `/code/simplexmq/xftp-web/test/page.spec.ts`
**Architecture**: Tests use the Page Object Model pattern to encapsulate UI interactions, making tests read as domain-specific scenarios rather than raw Playwright API calls.
---
## 2. Test Infrastructure
### 2.1 Current Setup
```
xftp-web/
├── playwright.config.ts # Playwright config (webServer, globalSetup)
├── test/
│ ├── globalSetup.ts # Starts xftp-server, writes port to PORT_FILE
│ ├── page.spec.ts # E2E tests (to be extended)
│ └── pages/ # Page Objects (new)
│       ├── UploadPage.ts
│       └── DownloadPage.ts
```
### 2.2 Prerequisites
- `globalSetup.ts` starts the XFTP server and writes port to `PORT_FILE`
- Tests must read the port dynamically: `readFileSync(PORT_FILE, 'utf-8').trim()`
- Vite builds and serves the page at `http://localhost:4173`
---
## 3. Test Infrastructure - Page Objects
Page Objects encapsulate page-specific selectors and actions, providing a clean API for tests. This follows the standard Page Object Model pattern used in simplex-chat and most professional test suites.
### 3.1 UploadPage
```typescript
// test/pages/UploadPage.ts
import {Page, Locator, expect} from '@playwright/test'
export class UploadPage {
readonly page: Page
readonly dropZone: Locator
readonly fileInput: Locator
readonly progressStage: Locator
readonly progressCanvas: Locator
readonly statusText: Locator
readonly cancelButton: Locator
readonly completeStage: Locator
readonly shareLink: Locator
readonly copyButton: Locator
readonly errorStage: Locator
readonly errorMessage: Locator
readonly retryButton: Locator
readonly expiryNote: Locator
readonly securityNote: Locator
constructor(page: Page) {
this.page = page
this.dropZone = page.locator('#drop-zone')
this.fileInput = page.locator('#file-input')
this.progressStage = page.locator('#upload-progress')
this.progressCanvas = page.locator('#progress-container canvas')
this.statusText = page.locator('#upload-status')
this.cancelButton = page.locator('#cancel-btn')
this.completeStage = page.locator('#upload-complete')
this.shareLink = page.locator('[data-testid="share-link"]')
this.copyButton = page.locator('#copy-btn')
this.errorStage = page.locator('#upload-error')
this.errorMessage = page.locator('#error-msg')
this.retryButton = page.locator('#retry-btn')
this.expiryNote = page.locator('.expiry')
this.securityNote = page.locator('.security-note')
}
async goto() {
await this.page.goto('http://localhost:4173')
}
async selectFile(name: string, content: Buffer, mimeType = 'application/octet-stream') {
await this.fileInput.setInputFiles({name, mimeType, buffer: content})
}
async selectTextFile(name: string, content: string) {
await this.selectFile(name, Buffer.from(content, 'utf-8'), 'text/plain')
}
async selectLargeFile(name: string, sizeBytes: number) {
// Create large file in browser to avoid memory issues in test process
await this.page.evaluate(({name, size}) => {
const input = document.getElementById('file-input') as HTMLInputElement
const buffer = new ArrayBuffer(size)
new Uint8Array(buffer).fill(0x55)
const file = new File([buffer], name, {type: 'application/octet-stream'})
const dt = new DataTransfer()
dt.items.add(file)
input.files = dt.files
input.dispatchEvent(new Event('change', {bubbles: true}))
}, {name, size: sizeBytes})
}
async dragDropFile(name: string, content: Buffer) {
// Drag-drop uses same file input handler internally
await this.selectFile(name, content)
}
async waitForEncrypting(timeout = 10_000) {
await expect(this.statusText).toContainText('Encrypting', {timeout})
}
async waitForUploading(timeout = 30_000) {
await expect(this.statusText).toContainText('Uploading', {timeout})
}
async waitForShareLink(timeout = 60_000): Promise<string> {
await expect(this.shareLink).toBeVisible({timeout})
return await this.shareLink.inputValue()
}
async clickCopy() {
await this.copyButton.click()
await expect(this.copyButton).toContainText('Copied!')
}
async clickCancel() {
await this.cancelButton.click()
}
async clickRetry() {
await this.retryButton.click()
}
async expectError(messagePattern: string | RegExp) {
await expect(this.errorStage).toBeVisible()
await expect(this.errorMessage).toContainText(messagePattern)
}
async expectDropZoneVisible() {
await expect(this.dropZone).toBeVisible()
}
async expectProgressVisible() {
await expect(this.progressStage).toBeVisible()
await expect(this.progressCanvas).toBeVisible()
}
async expectCompleteWithExpiry() {
await expect(this.completeStage).toBeVisible()
await expect(this.expiryNote).toContainText('48 hours')
}
async expectSecurityNote() {
await expect(this.securityNote).toBeVisible()
await expect(this.securityNote).toContainText('encrypted')
}
getHashFromLink(url: string): string {
return new URL(url).hash
}
}
```
### 3.2 DownloadPage
```typescript
// test/pages/DownloadPage.ts
import {Page, Locator, expect, Download} from '@playwright/test'
export class DownloadPage {
readonly page: Page
readonly readyStage: Locator
readonly downloadButton: Locator
readonly progressStage: Locator
readonly progressCanvas: Locator
readonly statusText: Locator
readonly errorStage: Locator
readonly errorMessage: Locator
readonly retryButton: Locator
readonly securityNote: Locator
constructor(page: Page) {
this.page = page
this.readyStage = page.locator('#dl-ready')
this.downloadButton = page.locator('#dl-btn')
this.progressStage = page.locator('#dl-progress')
this.progressCanvas = page.locator('#dl-progress-container canvas')
this.statusText = page.locator('#dl-status')
this.errorStage = page.locator('#dl-error')
this.errorMessage = page.locator('#dl-error-msg')
this.retryButton = page.locator('#dl-retry-btn')
this.securityNote = page.locator('.security-note')
}
async goto(hash: string) {
await this.page.goto(`http://localhost:4173${hash}`)
}
async gotoWithLink(fullUrl: string) {
const hash = new URL(fullUrl).hash
await this.goto(hash)
}
async expectFileReady() {
await expect(this.readyStage).toBeVisible()
await expect(this.downloadButton).toBeVisible()
}
async expectFileSizeDisplayed() {
await expect(this.readyStage).toContainText(/\d+(?:\.\d+)?\s*(?:KB|MB|B)/)
}
async clickDownload(): Promise<Download> {
const downloadPromise = this.page.waitForEvent('download')
await this.downloadButton.click()
return downloadPromise
}
async waitForDownloading(timeout = 30_000) {
await expect(this.statusText).toContainText('Downloading', {timeout})
}
async waitForDecrypting(timeout = 30_000) {
await expect(this.statusText).toContainText('Decrypting', {timeout})
}
async expectProgressVisible() {
await expect(this.progressStage).toBeVisible()
await expect(this.progressCanvas).toBeVisible()
}
async expectInitialError(messagePattern: string | RegExp) {
// For malformed links - error shown in card without #dl-error stage
await expect(this.page.locator('.card .error')).toBeVisible()
await expect(this.page.locator('.card .error')).toContainText(messagePattern)
}
async expectRuntimeError(messagePattern: string | RegExp) {
// For runtime download errors - uses #dl-error stage
await expect(this.errorStage).toBeVisible()
await expect(this.errorMessage).toContainText(messagePattern)
}
async expectSecurityNote() {
await expect(this.securityNote).toBeVisible()
await expect(this.securityNote).toContainText('encrypted')
}
}
```
### 3.3 Test Fixtures
```typescript
// test/fixtures.ts
import {test as base} from '@playwright/test'
import {UploadPage} from './pages/UploadPage'
import {DownloadPage} from './pages/DownloadPage'
import {readFileSync} from 'fs'
// Extend Playwright test with page objects
export const test = base.extend<{
uploadPage: UploadPage
downloadPage: DownloadPage
}>({
uploadPage: async ({page}, use) => {
const uploadPage = new UploadPage(page)
await uploadPage.goto()
await use(uploadPage)
},
downloadPage: async ({page}, use) => {
await use(new DownloadPage(page))
},
})
export {expect} from '@playwright/test'
// Test data helpers
export function createTestContent(size: number, fill = 0x41): Buffer {
return Buffer.alloc(size, fill)
}
export function createTextContent(text: string): Buffer {
return Buffer.from(text, 'utf-8')
}
export function uniqueFileName(base: string, ext = 'txt'): string {
return `${base}-${Date.now()}.${ext}`
}
```
---
## 4. Upload Flow Tests
### 4.1 File Selection - File Picker Button
**Test ID**: `upload-file-picker`
```typescript
test('upload via file picker button', async ({uploadPage}) => {
await uploadPage.expectDropZoneVisible()
await uploadPage.selectTextFile('picker-test.txt', 'test content ' + Date.now())
await uploadPage.waitForEncrypting()
await uploadPage.waitForUploading()
const link = await uploadPage.waitForShareLink()
expect(link).toMatch(/^http:\/\/localhost:\d+\/#/)
})
```
### 4.2 File Selection - Drag and Drop
**Test ID**: `upload-drag-drop`
```typescript
test('upload via drag and drop', async ({uploadPage}) => {
await uploadPage.dragDropFile('dragdrop-test.txt', createTextContent('drag drop test'))
await uploadPage.expectProgressVisible()
const link = await uploadPage.waitForShareLink()
expect(link).toContain('#')
})
```
### 4.3 File Size Validation - Too Large
**Test ID**: `upload-file-too-large`
```typescript
test('upload rejects file over 100MB', async ({uploadPage}) => {
await uploadPage.selectLargeFile('large.bin', 100 * 1024 * 1024 + 1)
await uploadPage.expectError('too large')
await uploadPage.expectError('100 MB')
})
```
### 4.4 File Size Validation - Empty File
**Test ID**: `upload-file-empty`
```typescript
test('upload rejects empty file', async ({uploadPage}) => {
await uploadPage.selectFile('empty.txt', Buffer.alloc(0))
await uploadPage.expectError('empty')
})
```
### 4.5 Progress Display
**Test ID**: `upload-progress-display`
```typescript
test('upload shows progress during encryption and upload', async ({uploadPage}) => {
await uploadPage.selectFile('progress-test.bin', createTestContent(500 * 1024))
await uploadPage.expectProgressVisible()
await uploadPage.waitForEncrypting()
await uploadPage.waitForUploading()
await uploadPage.waitForShareLink()
})
```
### 4.6 Cancel Button
**Test ID**: `upload-cancel`
```typescript
test('cancel button aborts upload and returns to landing', async ({uploadPage}) => {
await uploadPage.selectFile('cancel-test.bin', createTestContent(1024 * 1024))
await uploadPage.expectProgressVisible()
await uploadPage.clickCancel()
await uploadPage.expectDropZoneVisible()
await expect(uploadPage.shareLink).toBeHidden()
})
```
### 4.7 Share Link Display and Copy
**Test ID**: `upload-share-link-copy`
```typescript
test('share link copy button works', async ({uploadPage, context}) => {
await context.grantPermissions(['clipboard-read', 'clipboard-write'])
await uploadPage.selectTextFile('copy-test.txt', 'copy test content')
const link = await uploadPage.waitForShareLink()
await uploadPage.clickCopy()
// Verify clipboard (may fail in headless)
try {
const clipboardText = await uploadPage.page.evaluate(() => navigator.clipboard.readText())
expect(clipboardText).toBe(link)
} catch {
// Clipboard API may not be available
}
})
```
### 4.8 Error Handling and Retry
**Test ID**: `upload-error-retry`
```typescript
test('error state shows retry button', async ({uploadPage}) => {
await uploadPage.selectFile('error-test.txt', Buffer.alloc(0))
await uploadPage.expectError('empty')
await expect(uploadPage.retryButton).toBeVisible()
})
```
---
## 5. Download Flow Tests
### 5.1 Invalid Link Handling - Malformed Hash
**Test ID**: `download-invalid-hash-malformed`
```typescript
test('download shows error for malformed hash', async ({downloadPage}) => {
await downloadPage.goto('#not-valid-base64!!!')
await downloadPage.expectInitialError(/[Ii]nvalid|corrupted/)
await expect(downloadPage.downloadButton).not.toBeVisible()
})
```
### 5.2 Invalid Link Handling - Valid Base64 but Invalid Structure
**Test ID**: `download-invalid-hash-structure`
```typescript
test('download shows error for invalid structure', async ({downloadPage}) => {
await downloadPage.goto('#AAAA')
await downloadPage.expectInitialError(/[Ii]nvalid|corrupted/)
})
```
### 5.3 Download Button Click
**Test ID**: `download-button-click`
```typescript
test('download button initiates download', async ({uploadPage, downloadPage}) => {
// Upload first
await uploadPage.selectTextFile('dl-btn-test.txt', 'download test content')
const link = await uploadPage.waitForShareLink()
// Navigate to download
await downloadPage.gotoWithLink(link)
await downloadPage.expectFileReady()
// Click download
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe('dl-btn-test.txt')
})
```
### 5.4 Progress Display
**Test ID**: `download-progress-display`
```typescript
test('download shows progress', async ({uploadPage, downloadPage}) => {
await uploadPage.selectFile('dl-progress.bin', createTestContent(500 * 1024))
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const downloadPromise = downloadPage.clickDownload()
await downloadPage.expectProgressVisible()
await downloadPage.waitForDownloading()
await downloadPromise
})
```
### 5.5 File Save Verification
**Test ID**: `download-file-save`
```typescript
test('downloaded file content matches upload', async ({uploadPage, downloadPage}) => {
const content = 'verification content ' + Date.now()
const fileName = 'verify.txt'
await uploadPage.selectTextFile(fileName, content)
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe(fileName)
const path = await download.path()
if (path) {
const downloadedContent = (await import('fs')).readFileSync(path, 'utf-8')
expect(downloadedContent).toBe(content)
}
})
```
---
## 6. Edge Cases
### 6.1 Very Small Files
**Test ID**: `edge-small-file`
```typescript
test('upload and download 1-byte file', async ({uploadPage, downloadPage}) => {
await uploadPage.selectFile('tiny.bin', Buffer.from([0x42]))
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe('tiny.bin')
const path = await download.path()
if (path) {
const content = (await import('fs')).readFileSync(path)
expect(content.length).toBe(1)
expect(content[0]).toBe(0x42)
}
})
```
### 6.2 Files Near 100MB Limit
**Test ID**: `edge-near-limit`
```typescript
test('upload file at exactly 100MB', async ({uploadPage}) => {
  test.slow() // must be called inside the test body to mark this test slow
await uploadPage.selectLargeFile('exactly-100mb.bin', 100 * 1024 * 1024)
// Should succeed (not show error)
await expect(uploadPage.errorStage).toBeHidden({timeout: 5000})
await uploadPage.expectProgressVisible()
// Wait for completion (may take a while)
await uploadPage.waitForShareLink(300_000)
})
```
### 6.3 Special Characters in Filename
**Test ID**: `edge-special-chars-filename`
```typescript
test('upload and download file with unicode filename', async ({uploadPage, downloadPage}) => {
const fileName = 'test-\u4e2d\u6587-\u0420\u0443\u0441\u0441\u043a\u0438\u0439.txt'
await uploadPage.selectTextFile(fileName, 'unicode filename test')
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe(fileName)
})
test('upload and download file with spaces', async ({uploadPage, downloadPage}) => {
const fileName = 'my document (final) v2.txt'
await uploadPage.selectTextFile(fileName, 'spaces test')
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe(fileName)
})
test('filename with path separators is sanitized', async ({uploadPage, downloadPage}) => {
await uploadPage.selectTextFile('../../../etc/passwd', 'path traversal test')
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).not.toContain('/')
expect(download.suggestedFilename()).not.toContain('\\')
})
```
### 6.4 Network Errors (Mocked)
**Test ID**: `edge-network-error`
```typescript
test('upload handles network error gracefully', async ({uploadPage}) => {
// Intercept and abort POST requests
  await uploadPage.page.route('**/localhost:*/**', route => {
if (route.request().method() === 'POST') {
route.abort('failed')
} else {
route.continue()
}
})
await uploadPage.selectTextFile('network-error.txt', 'network error test')
await uploadPage.expectError(/.+/) // Any error message
})
```
### 6.5 Binary File Content Integrity
**Test ID**: `edge-binary-content`
```typescript
test('binary file with all byte values', async ({uploadPage, downloadPage}) => {
// Create buffer with all 256 byte values
const buffer = Buffer.alloc(256)
for (let i = 0; i < 256; i++) buffer[i] = i
await uploadPage.selectFile('all-bytes.bin', buffer)
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
const path = await download.path()
if (path) {
const content = (await import('fs')).readFileSync(path)
expect(content.length).toBe(256)
for (let i = 0; i < 256; i++) {
expect(content[i]).toBe(i)
}
}
})
```
### 6.6 Multiple Concurrent Downloads
**Test ID**: `edge-concurrent-downloads`
```typescript
test('concurrent downloads from same link', async ({browser}) => {
const context = await browser.newContext({ignoreHTTPSErrors: true})
const page1 = await context.newPage()
const upload = new UploadPage(page1)
await upload.goto()
await upload.selectTextFile('concurrent.txt', 'concurrent download test')
const link = await upload.waitForShareLink()
const hash = upload.getHashFromLink(link)
// Open two tabs and download concurrently
const page2 = await context.newPage()
const page3 = await context.newPage()
const dl2 = new DownloadPage(page2)
const dl3 = new DownloadPage(page3)
await dl2.goto(hash)
await dl3.goto(hash)
const [download2, download3] = await Promise.all([
dl2.clickDownload(),
dl3.clickDownload()
])
expect(download2.suggestedFilename()).toBe('concurrent.txt')
expect(download3.suggestedFilename()).toBe('concurrent.txt')
await context.close()
})
```
### 6.7 Redirect File Handling (Multi-chunk)
**Test ID**: `edge-redirect-file`
```typescript
test('upload and download multi-chunk file with redirect', async ({uploadPage, downloadPage}) => {
  test.slow() // must be called inside the test body to mark this test slow
// Use ~5MB file to get multiple chunks
await uploadPage.selectLargeFile('multi-chunk.bin', 5 * 1024 * 1024)
const link = await uploadPage.waitForShareLink(120_000)
await downloadPage.gotoWithLink(link)
const download = await downloadPage.clickDownload()
expect(download.suggestedFilename()).toBe('multi-chunk.bin')
const path = await download.path()
if (path) {
const stat = (await import('fs')).statSync(path)
expect(stat.size).toBe(5 * 1024 * 1024)
}
})
```
### 6.8 UI Information Display
**Test ID**: `edge-ui-info`
```typescript
test('upload complete shows expiry and security note', async ({uploadPage}) => {
await uploadPage.selectTextFile('ui-test.txt', 'ui test')
await uploadPage.waitForShareLink()
await uploadPage.expectCompleteWithExpiry()
await uploadPage.expectSecurityNote()
})
test('download page shows file size and security note', async ({uploadPage, downloadPage}) => {
await uploadPage.selectFile('size-test.bin', createTestContent(1024))
const link = await uploadPage.waitForShareLink()
await downloadPage.gotoWithLink(link)
await downloadPage.expectFileSizeDisplayed()
await downloadPage.expectSecurityNote()
})
```
---
## 7. Implementation Order
### Phase 1: Core Infrastructure (Priority: High)
1. Create `test/pages/UploadPage.ts` with Page Object
2. Create `test/pages/DownloadPage.ts` with Page Object
3. Create `test/fixtures.ts` with extended test function
4. Refactor existing test to use Page Objects
### Phase 2: Core Happy Path (Priority: High)
5. `upload-file-picker` - Basic upload via file picker
6. `download-button-click` - Basic download
7. `download-file-save` - Content verification
### Phase 3: Validation (Priority: High)
8. `upload-file-too-large` - Size validation
9. `upload-file-empty` - Empty file validation
10. `download-invalid-hash-malformed` - Invalid link handling
11. `download-invalid-hash-structure` - Invalid structure handling
### Phase 4: Progress and Cancel (Priority: Medium)
12. `upload-progress-display` - Progress visibility
13. `upload-cancel` - Cancel functionality
14. `download-progress-display` - Download progress
### Phase 5: Link Sharing (Priority: Medium)
15. `upload-share-link-copy` - Copy button functionality
16. `upload-drag-drop` - Drag-drop upload
### Phase 6: Edge Cases (Priority: Low)
17. `edge-small-file` - 1-byte file
18. `edge-special-chars-filename` - Unicode/special characters
19. `edge-binary-content` - Binary content integrity
20. `edge-near-limit` - 100MB file (slow test)
21. `edge-network-error` - Network error handling
### Phase 7: Error Recovery and Advanced (Priority: Low)
22. `upload-error-retry` - Retry after error
23. `edge-concurrent-downloads` - Concurrent access
24. `edge-redirect-file` - Multi-chunk file with redirect (slow)
25. `edge-ui-info` - Expiry message, security notes
---
## 8. Test Utilities
### 8.1 Shared Test Setup
```typescript
// test/page.spec.ts
import {test, expect, createTestContent, createTextContent, uniqueFileName} from './fixtures'
test.describe('Upload Flow', () => {
test('upload via file picker', async ({uploadPage}) => {
// Tests use uploadPage fixture which navigates automatically
})
})
test.describe('Download Flow', () => {
test('download works', async ({uploadPage, downloadPage}) => {
// Both pages available via fixtures
})
})
test.describe('Edge Cases', () => {
// Edge case tests
})
```
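The setup above imports `createTestContent`, `createTextContent`, and `uniqueFileName` from `fixtures.ts` but never shows them. A minimal sketch of what these helpers could look like — only the names come from the import line; the implementations are assumptions:

```typescript
// Hypothetical fixtures.ts helpers (exported from fixtures.ts in practice).
// Deterministic byte pattern — lets download tests verify content integrity.
function createTestContent(size: number): Uint8Array {
  const buf = new Uint8Array(size)
  for (let i = 0; i < size; i++) buf[i] = i % 256
  return buf
}

// UTF-8 encode a text payload for small upload tests.
function createTextContent(text: string): Uint8Array {
  return new TextEncoder().encode(text)
}

// Append a timestamp before the extension so parallel test runs don't collide.
function uniqueFileName(base: string): string {
  const stamp = Date.now().toString(36)
  const dot = base.lastIndexOf(".")
  return dot === -1 ? `${base}-${stamp}` : `${base.slice(0, dot)}-${stamp}${base.slice(dot)}`
}
```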
### 8.2 File Structure
```
xftp-web/test/
├── fixtures.ts # Playwright fixtures with page objects
├── pages/
│ ├── UploadPage.ts # Upload page object
│ └── DownloadPage.ts # Download page object
├── page.spec.ts # All E2E tests
└── globalSetup.ts # Server startup (existing)
```
---
## Appendix: Test Matrix
| Test ID | Category | Priority | Estimated Time | Dependencies |
|---------|----------|----------|----------------|--------------|
| upload-file-picker | Upload | High | 30s | - |
| upload-drag-drop | Upload | Medium | 30s | - |
| upload-file-too-large | Upload | High | 5s | - |
| upload-file-empty | Upload | High | 5s | - |
| upload-progress-display | Upload | Medium | 45s | - |
| upload-cancel | Upload | Medium | 30s | - |
| upload-share-link-copy | Upload | Medium | 30s | - |
| upload-error-retry | Upload | Low | 30s | - |
| download-invalid-hash-malformed | Download | High | 5s | - |
| download-invalid-hash-structure | Download | High | 5s | - |
| download-button-click | Download | High | 45s | upload |
| download-progress-display | Download | Medium | 60s | upload |
| download-file-save | Download | High | 45s | upload |
| edge-small-file | Edge | Low | 30s | - |
| edge-near-limit | Edge | Low | 300s | - |
| edge-special-chars-filename | Edge | Low | 30s | - |
| edge-network-error | Edge | Low | 45s | - |
| edge-binary-content | Edge | Low | 30s | - |
| edge-concurrent-downloads | Edge | Low | 60s | upload |
| edge-redirect-file | Edge | Low | 120s | - |
| edge-ui-info | Edge | Low | 60s | upload |
**Total estimated time**: ~10 minutes excluding the slow 100MB and 5MB tests (~17 minutes with them included)
---
# XFTP Web Hello Header — Session Re-handshake for Browser Connection Reuse
## 1. Problem Statement
Browser HTTP/2 connection pooling reuses TLS connections across page navigations (same origin = same connection pool). The XFTP server maintains per-TLS-connection session state in `TMap SessionId Handshake` keyed by `tlsUniq tls`. When a browser navigates from the upload page to the download page (or reloads), the new page sends a fresh ClientHello on the reused HTTP/2 connection. The server is already in `HandshakeAccepted` state for that connection, so it routes the request to `processRequest`, which expects a 16384-byte command block but receives a 34-byte ClientHello → `ERR BLOCK`.
**Root cause**: The server cannot distinguish a ClientHello from a command on an already-handshaked connection because both arrive on the same HTTP/2 connection (same `tlsUniq`), and there is no content-level discriminator (ClientHello is unpadded, but the server never gets to parse it — the size check in `processRequest` rejects it first).
**Browser limitation**: `fetch()` provides zero control over HTTP/2 connection pooling. There is no browser API to force a new connection or detect connection reuse before a request is sent.
## 2. Solution Summary
Add an HTTP header `xftp-web-hello` to web ClientHello requests. When the server sees this header on an already-handshaked connection (`HandshakeAccepted` state), it re-runs `processHello` **reusing the existing session keys** (same X25519 key pair from the original handshake). The client then completes the normal handshake flow (sends ClientHandshake, receives ack) and proceeds with commands.
Key properties:
- Server reuses existing `serverPrivKey` — no new key material generated on re-handshake, so `thAuth` remains consistent with any in-flight commands on concurrent HTTP/2 streams.
- Header is only checked when `sniUsed` is true (web/browser connections). Native XFTP clients are unaffected.
- CORS preflight already allows all headers (`Access-Control-Allow-Headers: *`).
- Web clients always send this header on ClientHello — it's harmless on first connection (`Nothing` state) and enables re-handshake on reused connections (`HandshakeAccepted` state).
## 3. Detailed Technical Design
### 3.1 Server change: parameterize `processHello` (`src/Simplex/FileTransfer/Server.hs`)
The entire server change is parameterizing the existing `processHello` with `Maybe C.PrivateKeyX25519`. Zero new functions.
#### Current code (lines 165-191):
```haskell
xftpServerHandshakeV1 chain serverSignKey sessions
XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, reqBody = HTTP2Body {bodyHead}, sendResponse, sniUsed, addCORS} = do
s <- atomically $ TM.lookup sessionId sessions
r <- runExceptT $ case s of
Nothing -> processHello
Just (HandshakeSent pk) -> processClientHandshake pk
Just (HandshakeAccepted thParams) -> pure $ Just thParams
either sendError pure r
where
processHello = do
challenge_ <-
if
| B.null bodyHead -> pure Nothing
| sniUsed -> do
XFTPClientHello {webChallenge} <- liftHS $ smpDecode bodyHead
pure webChallenge
| otherwise -> throwE HANDSHAKE
(k, pk) <- atomically . C.generateKeyPair =<< asks random
atomically $ TM.insert sessionId (HandshakeSent pk) sessions
-- ...build and send ServerHandshake...
pure Nothing
```
#### After (diff is ~10 lines):
```haskell
xftpServerHandshakeV1 chain serverSignKey sessions
XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, request, reqBody = HTTP2Body {bodyHead}, sendResponse, sniUsed, addCORS} = do
-- ^^^^^^^ bind request
s <- atomically $ TM.lookup sessionId sessions
r <- runExceptT $ case s of
Nothing -> processHello Nothing
Just (HandshakeSent pk) -> processClientHandshake pk
Just (HandshakeAccepted thParams)
| webHello -> processHello (serverPrivKey <$> thAuth thParams)
| otherwise -> pure $ Just thParams
either sendError pure r
where
webHello = sniUsed && any (\(t, _) -> tokenKey t == "xftp-web-hello") (fst $ H.requestHeaders request)
processHello pk_ = do
challenge_ <-
if
| B.null bodyHead -> pure Nothing
| sniUsed -> do
XFTPClientHello {webChallenge} <- liftHS $ smpDecode bodyHead
pure webChallenge
| otherwise -> throwE HANDSHAKE
(k, pk) <- maybe
(atomically . C.generateKeyPair =<< asks random)
(\pk -> pure (C.publicKey pk, pk))
pk_
atomically $ TM.insert sessionId (HandshakeSent pk) sessions
-- ...rest unchanged...
pure Nothing
```
#### What changes:
1. **Bind `request`** in the `XFTPTransportRequest` pattern (+1 field)
2. **Add `webHello`** binding in `where` clause (1 line) — checks header only when `sniUsed`
3. **Add `pk_` parameter** to `processHello` (change signature)
4. **Replace key generation** with `maybe` that generates fresh keys when `pk_ = Nothing`, or derives public from existing private when `pk_ = Just pk` (3 lines replace 1 line)
5. **Add guard** in `HandshakeAccepted` branch (2 lines replace 1 line)
6. **Call site** `Nothing -> processHello Nothing` (+1 word)
7. **One import** added: `Network.HPACK.Token (tokenKey)`
#### Imports to add:
```haskell
import Network.HPACK.Token (tokenKey)
```
`OverloadedStrings` (already enabled in Server.hs) provides the `IsString` instance for `CI ByteString`, so `tokenKey t == "xftp-web-hello"` works without importing `Data.CaseInsensitive`. Verified on Hackage: `requestHeaders :: Request -> HeaderTable`, `tokenKey :: Token -> CI ByteString`.
### 3.2 Re-handshake flow
When `webHello` is true in `HandshakeAccepted` state:
1. `processHello (serverPrivKey <$> thAuth thParams)` is called with `Just pk` (existing private key)
2. `(k, pk) <- pure (C.publicKey pk, pk)` — reuses same key pair, no generation
3. `TM.insert sessionId (HandshakeSent pk) sessions` — transitions state back to `HandshakeSent` with same `pk`
4. Server sends `ServerHandshake` response (same format as initial handshake)
5. Client sends `ClientHandshake` on next stream → enters `Just (HandshakeSent pk) -> processClientHandshake pk` → normal flow
6. `processClientHandshake` stores `HandshakeAccepted thParams` with same `serverPrivKey = pk`
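The transitions above can be modeled as a small state machine. This TypeScript sketch is exposition only (the real implementation is the Haskell in 3.1); it captures the key invariant that re-handshake transitions `HandshakeAccepted` back to `HandshakeSent` with the SAME private key:

```typescript
// Exposition-only model of the server's per-connection session state.
// The private key is modeled as a string; "undefined" means no state (new connection).
type Handshake =
  | {tag: "HandshakeSent"; pk: string}
  | {tag: "HandshakeAccepted"; pk: string}

function onWebHello(s: Handshake | undefined): Handshake {
  if (s === undefined) {
    // New connection: a fresh key pair is generated.
    return {tag: "HandshakeSent", pk: "fresh-" + Math.random().toString(36).slice(2)}
  }
  if (s.tag === "HandshakeAccepted") {
    // Re-handshake on a reused connection: same key, back to HandshakeSent.
    return {tag: "HandshakeSent", pk: s.pk}
  }
  // HandshakeSent + another hello: state unchanged in this simplified model.
  return s
}
```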
### 3.3 Web client change (`xftp-web/src/client.ts`)
Add optional `headers?` parameter to `Transport.post()`, thread it through `fetch()` and `session.request()`, and pass `{"xftp-web-hello": "1"}` in the ClientHello call in `connectXFTP`.
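A minimal sketch of the threading, assuming the `Transport` shape described in this document (names are illustrative, not the actual `client.ts` API):

```typescript
// Assumed Transport shape with the new optional headers parameter.
interface Transport {
  post(body: Uint8Array, headers?: Record<string, string>): Promise<Uint8Array>
  close(): void
}

// Only the ClientHello call in connectXFTP passes this; all commands omit headers.
const HELLO_HEADERS = {"xftp-web-hello": "1"}

// Recording mock to illustrate that headers flow through post() unchanged.
function recordingTransport(log: Array<Record<string, string> | undefined>): Transport {
  return {
    post(_body, headers) {
      log.push(headers) // recorded synchronously before the promise resolves
      return Promise.resolve(new Uint8Array())
    },
    close() {},
  }
}
```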
### 3.4 What does NOT change
- **CORS**: Already has `Access-Control-Allow-Headers: *` (Server.hs:106).
- **Native Haskell client**: Uses `[]` headers. No header = existing behavior.
- **Protocol wire format**: ClientHello, ServerHandshake, ClientHandshake, commands — all unchanged.
- **`processRequest`**, **`processClientHandshake`**, **`sendError`**, **`encodeXftp`** — unchanged.
### 3.5 Haskell test (`tests/XFTPServerTests.hs`)
Add `testWebReHandshake` next to the existing `testWebHandshake` (line 504). It reuses the same SNI + HTTP/2 setup pattern, performs a full handshake, then sends a second ClientHello with the `xftp-web-hello` header on the same connection and verifies the server responds with a valid ServerHandshake (same `sessionId`), then completes the second handshake.
```haskell
-- Register in xftpServerTests (after line 86):
it "should re-handshake on same connection with xftp-web-hello header" testWebReHandshake
-- Test (after testWebHandshake):
testWebReHandshake :: Expectation
testWebReHandshake =
withXFTPServerSNI $ \_ -> do
Fingerprint fp <- loadFileFingerprint "tests/fixtures/ca.crt"
let keyHash = C.KeyHash fp
cfg = defaultTransportClientConfig {clientALPN = Just ["h2"], useSNI = True}
runTLSTransportClient defaultSupportedParamsHTTPS Nothing cfg Nothing "localhost" xftpTestPort (Just keyHash) $ \(tls :: TLS 'TClient) -> do
let h2cfg = HC.defaultHTTP2ClientConfig {HC.bodyHeadSize = 65536}
h2 <- either (error . show) pure =<< HC.attachHTTP2Client h2cfg (THDomainName "localhost") xftpTestPort mempty 65536 tls
g <- C.newRandom
-- First handshake (same as testWebHandshake)
challenge1 <- atomically $ C.randomBytes 32 g
let helloReq1 = H2.requestBuilder "POST" "/" [] $ byteString (smpEncode (XFTPClientHello {webChallenge = Just challenge1}))
resp1 <- either (error . show) pure =<< HC.sendRequest h2 helloReq1 (Just 5000000)
shs1 <- either error pure $ smpDecode =<< C.unPad (bodyHead (HC.respBody resp1))
let XFTPServerHandshake {sessionId = sid1} = shs1
clientHsPadded <- either (error . show) pure $ C.pad (smpEncode (XFTPClientHandshake {xftpVersion = VersionXFTP 1, keyHash})) xftpBlockSize
resp1b <- either (error . show) pure =<< HC.sendRequest h2 (H2.requestBuilder "POST" "/" [] $ byteString clientHsPadded) (Just 5000000)
B.length (bodyHead (HC.respBody resp1b)) `shouldBe` 0
-- Second handshake on same connection with xftp-web-hello header
challenge2 <- atomically $ C.randomBytes 32 g
let helloReq2 = H2.requestBuilder "POST" "/" [("xftp-web-hello", "1")] $ byteString (smpEncode (XFTPClientHello {webChallenge = Just challenge2}))
resp2 <- either (error . show) pure =<< HC.sendRequest h2 helloReq2 (Just 5000000)
shs2 <- either error pure $ smpDecode =<< C.unPad (bodyHead (HC.respBody resp2))
let XFTPServerHandshake {sessionId = sid2} = shs2
sid2 `shouldBe` sid1 -- same TLS connection → same sessionId
-- Complete second handshake
resp2b <- either (error . show) pure =<< HC.sendRequest h2 (H2.requestBuilder "POST" "/" [] $ byteString clientHsPadded) (Just 5000000)
B.length (bodyHead (HC.respBody resp2b)) `shouldBe` 0
```
The only difference from `testWebHandshake` is that `helloReq2` passes `[("xftp-web-hello", "1")]` as headers instead of `[]`. The test verifies:
1. Server responds with `ServerHandshake` (not `ERR BLOCK`)
2. Same `sessionId` (same TLS connection)
3. Second `ClientHandshake` completes with empty ACK
## 4. Implementation Plan
### Step 1: Server — parameterize `processHello`
Apply the diff from Section 3.1 to `src/Simplex/FileTransfer/Server.hs`.
### Step 2: Test — add `testWebReHandshake`
Add the test from Section 3.5 to `tests/XFTPServerTests.hs`.
### Step 3: Client — add `xftp-web-hello` header
Add optional `headers?` to `Transport.post()`, pass `{"xftp-web-hello": "1"}` on ClientHello in `connectXFTP`.
### Step 4: Test
Run Haskell tests (`cabal test`) and E2E Playwright tests (`npx playwright test` in `xftp-web/`).
## 5. Race Condition Analysis
### Single-tab navigation (the common case)
1. Upload page completes, all fetch() requests finish
2. Browser navigates to download page (or reloads)
3. All upload-page fetches are aborted on page unload
4. Download page sends ClientHello with `xftp-web-hello` header
5. Server is in `HandshakeAccepted` → `processHello (Just pk)` → `HandshakeSent pk` (same key)
6. No concurrent streams → no race
**Safe.**
### Multi-tab (edge case)
Tab A (upload) and Tab B (download) share the same HTTP/2 connection.
1. Tab A has active command streams (e.g., FPUT upload in progress)
2. Tab B sends ClientHello with header
3. Server reads `HandshakeAccepted` atomically for both streams
4. Tab A's stream already has its `thParams` snapshot → proceeds with `processRequest` using old `thParams`
5. Tab B's stream triggers `processHello (Just pk)` → stores `HandshakeSent pk` (same pk!)
6. Tab A's in-progress FPUT continues with snapshot `thParams` → completes normally (same `serverPrivKey`)
7. Tab A's NEXT command reads `HandshakeSent` from TMap → enters `processClientHandshake` → fails (command body ≠ ClientHandshake format) → HANDSHAKE error
**Tab A's in-flight commands succeed. Tab A's subsequent commands fail with HANDSHAKE error.** This is the inherent multi-tab problem — unavoidable with per-connection session state and HTTP/2 connection sharing. The failure is clean (HANDSHAKE error, not silent corruption).
## 6. Security Considerations
- **No new key material**: Re-handshake reuses existing `serverPrivKey`. No opportunity for key confusion or downgrade.
- **Identity re-verification**: Server re-signs the web challenge with its long-term signing key. Client verifies identity again.
- **Header cannot escalate privileges**: The header only triggers re-handshake (which the server was already capable of doing on first connection). It does not bypass any authentication.
- **Timing**: Re-handshake takes the same code path as initial handshake, so timing side-channels are unchanged.
---
# XFTP Web Error Handling and Connection Resilience
## 1. Problem Statement
The XFTP web client is fundamentally fragile: any transient error (browser opening a new HTTP/2 connection, network hiccup, server restart) causes an unrecoverable failure with a cryptic error message. There is no retry logic, no fetch timeout, no error categorization, and the upload uses a single server instead of distributing chunks across preset servers. This makes the app frustrating — it works most of the time but fails unpredictably, which is worse than being completely broken.
### Confirmed root cause (from diagnostic logs)
When the browser opens a new HTTP/2 connection mid-operation, the new connection has a different TLS SessionId with no handshake state in the server's `TMap SessionId Handshake`. The server's `Nothing` branch in `xftpServerHandshakeV1` (Server.hs:169) unconditionally calls `processHello`, which tries to decode the command body as `XFTPClientHello`, fails, and sends a raw padded "HANDSHAKE" error string. The client cannot parse this as a proper transmission (first byte 'H' = 72 is read as batch count), producing `"expected batch count 1, got 72"`.
Server log confirming the SessionId change:
```
DEBUG dispatch: Accepted+command sessId="ZSo1GGETgIvjbB7CWHbvGPpbMjx_b2IlC1eTI6aKfqc="
...20 successful commands...
DEBUG dispatch: Nothing sessId="mJC7Sck9xxW5UsXoPGoUWduuHghSVgf6CnD6ZC6SBhU=" webHello=False
```
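The misread error message is easy to reproduce: the padded error string starts with ASCII `'H'`, which the client interprets as the batch count:

```typescript
// The server's raw error response begins with the bytes of "HANDSHAKE".
// The client reads the first byte of a response block as the batch count,
// so 'H' (ASCII 72) becomes "expected batch count 1, got 72".
const errorBlock = new TextEncoder().encode("HANDSHAKE")
const misreadBatchCount = errorBlock[0] // 72, ASCII 'H'
```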
### Why re-handshake is required (cannot be made optional)
1. **SessionId is baked into signed command data.** `encodeAuthTransmission` signs `concat(encode(sessionId), tInner)` with Ed25519. Server's `tDecodeServer` (Protocol.hs:2242) verifies `sessId == sessionId`. New connection = different sessionId = signature mismatch.
2. **Server generates per-session DH keys.** `processHello` creates fresh X25519 keypair stored in `HandshakeSent`. For SMP browser clients (future), `verifyCmdAuth` (Protocol.hs:1322) requires the matching `serverPrivKey` from `thAuth`.
3. **This applies to both XFTP and future SMP browser clients** — the session management approach is the same.
### Why multiple preset servers cannot work
Upload (`agent.ts:105-157`) takes a single `server: XFTPServer` parameter and uploads ALL chunks to it. `web/upload.ts:133` calls `pickRandomServer(servers)` which selects ONE random server from all presets. The multi-server preset configuration is pointless — only one server is ever used per upload. The design intent (RFC section 11.6: "upload in parallel to 8 randomly selected servers") is not implemented. This must be fixed in Phase 2 (section 3.7).
## 2. Solution Summary
### Phase 1: Error handling and connection resilience
1. **Server: strict dispatch for allowed protocol combinations** — reject all invalid combinations
2. **Client: automatic retry with re-handshake** on SESSION/HANDSHAKE errors
3. **Client: fetch timeout** with configurable duration
4. **UI: error categorization and retry** — auto-retry temporary, human-readable permanent
5. **Client: connection state with Promise-based lock and per-server queues** — `ServerConnection` with `client: Promise<XFTPClient>` + `queue: Promise<void>`
6. **Client: fix cache key** — include keyHash
### Phase 2: Multi-server upload (after Phase 1)
7. **Multi-server upload with server selection and failover** — distribute chunks across servers, retry FNEW on different server if one fails
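As a sketch of the Phase 2 direction, chunk distribution across preset servers could look like this (round-robin shown for simplicity; the `assignServers` name is hypothetical and random selection per the RFC is equally valid — the actual design belongs to section 3.7):

```typescript
// Hypothetical chunk-to-server assignment: spreads chunks round-robin so that
// no single preset server receives the whole upload.
function assignServers<T>(chunks: T[], servers: string[]): Map<string, T[]> {
  const byServer = new Map<string, T[]>()
  chunks.forEach((chunk, i) => {
    const srv = servers[i % servers.length]
    if (!byServer.has(srv)) byServer.set(srv, [])
    byServer.get(srv)!.push(chunk)
  })
  return byServer
}
```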
## 3. Detailed Technical Design
### 3.1 Server: strict dispatch for allowed protocol combinations
**Principle:** Everything not explicitly done by existing Haskell/TS clients is prohibited. It is better to fail on impossible combinations than to be permissive — permissiveness complicates debugging and creates attack vectors via unexpected behaviors.
**Allowed behaviors by client type:**
| Client | SNI | webHello header | Hello body | When |
|--------|-----|----------------|------------|------|
| Haskell | No | No | Empty | New connection only |
| Web | Yes | Yes | Non-empty (XFTPClientHello) | New OR existing connection |
**Minimal surgical change.** The existing dispatch (Server.hs:169-189) already correctly handles `HandshakeSent` and `HandshakeAccepted` — their guards cover all valid and invalid combinations. The ONLY missing case is `Nothing` + web client sending a command on a stale session.
`processHello` (Server.hs:194-217) already internally routes: `B.null bodyHead` → Haskell hello, `sniUsed` → web hello decode, else → HANDSHAKE. For stale web sessions, it currently tries to decode a command body as `XFTPClientHello`, fails, and throws HANDSHAKE. The fix: detect this case BEFORE calling processHello and throw SESSION instead, so the client knows to re-handshake (not that its hello was malformed).
**Change: add one guard to `Nothing` branch, remove debug logging.**
```haskell
-- Before (1 line):
Nothing -> processHello Nothing
-- After (3 lines):
Nothing
| sniUsed && not webHello -> throwE SESSION -- web command on stale session
| otherwise -> processHello Nothing -- normal hello (web or Haskell)
```
`throwE SESSION` is caught by `either sendError pure r` (line 190). `sendError` pads `smpEncode SESSION` = `"SESSION"` (Transport.hs:298) to `xftpBlockSize`. The client's padded error detection (section 3.2) catches this as a retriable error and triggers re-handshake. SESSION is a valid `XFTPErrorType` constructor (Transport.hs:225) — no new helpers needed.
**All other branches remain unchanged.** `HandshakeSent` guards (`webHello` → processHello, `otherwise` → processClientHandshake with body size check inside) are correct. `HandshakeAccepted` guards (`webHello`, `webHandshake`, `otherwise` → command) are correct.
### 3.2 Client: automatic retry with re-handshake
**Location:** `sendXFTPCommand` in `client.ts`
**Design:** Retry loop inside `sendXFTPCommand`. Maximum 3 attempts. On retriable error, close old client, re-handshake, retry.
**Error classification:**
| Error | Type | Retriable? | Human-readable message |
|-------|------|-----------|----------------------|
| Padded "HANDSHAKE" | Temporary | Yes (auto) | "Connection interrupted, reconnecting..." |
| Padded "SESSION" | Temporary | Yes (auto) | "Session expired, reconnecting..." |
| `FRErr SESSION` | Temporary | Yes (auto) | "Session expired, reconnecting..." |
| `FRErr HANDSHAKE` | Temporary | Yes (auto) | "Connection interrupted, reconnecting..." |
| `fetch()` TypeError | Temporary | Yes (auto) | "Network error, retrying..." |
| AbortError (timeout) | Temporary | Yes (auto) | "Server timeout, retrying..." |
| `FRErr AUTH` | Permanent | No | "File is invalid, expired, or has been removed" |
| `FRErr NO_FILE` | Permanent | No | "File not found — it may have expired" |
| `FRErr SIZE` | Permanent | No | "File size exceeds server limit" |
| `FRErr QUOTA` | Permanent | No | "Server storage quota exceeded" |
| `FRErr BLOCKED` | Permanent | No | "File has been blocked by server" |
| `FRErr DIGEST` | Permanent | No | "File integrity check failed" |
| `FRErr INTERNAL` | Permanent | No | "Server internal error" |
| `CMD *` | Permanent | No | "Protocol error" |
**Retry behavior:**
- Auto-retry up to 3 times for temporary errors, transparent to user
- After 3 failures: show human-readable error with diagnosis, offer manual retry button
- Permanent errors: show human-readable error immediately, NO manual retry button (user can reload page)
**Implementation:**
```typescript
async function sendXFTPCommand(
agent: XFTPClientAgent,
server: XFTPServer,
privateKey: Uint8Array,
entityId: Uint8Array,
cmdBytes: Uint8Array,
chunkData?: Uint8Array,
maxRetries: number = 3
): Promise<{response: FileResponse, body: Uint8Array}> {
let clientP = getXFTPServerClient(agent, server)
let client = await clientP
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await sendXFTPCommandOnce(client, privateKey, entityId, cmdBytes, chunkData)
} catch (e) {
if (!isRetriable(e)) {
// Permanent error (AUTH, NO_FILE, etc.) — connection is fine, don't touch it
throw categorizeError(e)
}
if (attempt === maxRetries) {
// Retriable error exhausted — connection is bad, remove stale promise
removeStaleConnection(agent, server, clientP)
throw categorizeError(e)
}
clientP = reconnectClient(agent, server)
client = await clientP
}
}
throw new Error("unreachable")
}
```
**`sendXFTPCommandOnce`** — renamed from current `sendXFTPCommand`. Two changes:
1. **Padded error detection** (before `decodeTransmission`):
```typescript
// After getting respBlock, before decodeTransmission:
const raw = blockUnpad(respBlock)
if (raw.length < 20) {
const text = new TextDecoder().decode(raw)
if (/^[A-Z_]+$/.test(text)) {
throw new XFTPRetriableError(text) // "HANDSHAKE" or "SESSION"
}
}
```
2. **FRErr classification** (replaces current unconditional throw):
```typescript
// After decodeResponse, instead of throw new Error("Server error: " + err.type):
if (response.type === "FRErr") {
const err = response.err
if (err.type === "SESSION" || err.type === "HANDSHAKE") {
throw new XFTPRetriableError(err.type)
}
throw new XFTPPermanentError(err.type, humanReadableMessage(err))
}
```
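The retry loop above references `XFTPRetriableError`, `XFTPPermanentError`, `isRetriable`, `categorizeError`, and `humanReadableMessage` without defining them. A hedged sketch consistent with the classification table — the exact shapes and messages are assumptions, not the final API:

```typescript
// Temporary errors: auto-retry; permanent errors: surface immediately.
class XFTPRetriableError extends Error {
  constructor(public errType: string) { super(errType) }
}
class XFTPPermanentError extends Error {
  constructor(public errType: string, message: string) { super(message) }
}

function isRetriable(e: unknown): boolean {
  if (e instanceof XFTPRetriableError) return true
  if (e instanceof XFTPPermanentError) return false
  // fetch() network failures throw TypeError; timeouts abort with AbortError.
  return e instanceof TypeError || (e instanceof Error && e.name === "AbortError")
}

// Messages per the classification table above.
function humanReadableMessage(errType: string): string {
  const messages: Record<string, string> = {
    AUTH: "File is invalid, expired, or has been removed",
    NO_FILE: "File not found — it may have expired",
    SIZE: "File size exceeds server limit",
    QUOTA: "Server storage quota exceeded",
    BLOCKED: "File has been blocked by server",
    DIGEST: "File integrity check failed",
    INTERNAL: "Server internal error",
  }
  return messages[errType] ?? "Protocol error"
}

// Normalize raw fetch/abort errors into the typed hierarchy.
function categorizeError(e: unknown): Error {
  if (e instanceof XFTPRetriableError || e instanceof XFTPPermanentError) return e
  if (e instanceof TypeError) return new XFTPRetriableError("NETWORK")
  if (e instanceof Error && e.name === "AbortError") return new XFTPRetriableError("TIMEOUT")
  return e instanceof Error ? e : new Error(String(e))
}
```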
### 3.3 Client: fetch timeout
**Location:** `createBrowserTransport` and `createNodeTransport` in `client.ts`
**Design:** `AbortController` with configurable timeout on every `fetch()`.
```typescript
interface TransportConfig {
timeoutMs: number // default 30000, lower for tests
}
function createBrowserTransport(baseUrl: string, config: TransportConfig): Transport {
return {
async post(body: Uint8Array, headers?: Record<string, string>): Promise<Uint8Array> {
const controller = new AbortController()
const timer = setTimeout(() => controller.abort(), config.timeoutMs)
try {
const resp = await fetch(effectiveUrl, {
method: "POST", headers, body,
signal: controller.signal
})
if (!resp.ok) throw new Error(`Server request failed: ${resp.status}`)
return new Uint8Array(await resp.arrayBuffer())
} finally {
clearTimeout(timer)
}
},
close() {}
}
}
```
For Node.js transport, use `setTimeout` on the HTTP/2 request stream.
Default: 30s for production, 5s for tests. Threaded through `connectXFTP` → `createTransport`.
### 3.4 UI: error categorization and retry
**Behavior (Option D):**
- **Temporary errors:** Auto-retry loop (3 attempts). After 3 failures, show human-readable diagnosis with manual retry button. Diagnosis examples: "Server timeout — the server may be temporarily unavailable", "Connection interrupted — your network may be unstable".
- **Permanent errors:** Show human-readable error immediately, NO retry button. User can reload page if they want to retry. Examples: "File is invalid, expired, or has been removed" (AUTH), "File not found" (NO_FILE).
**Current UI retry buttons:**
- `upload.ts:73-75` — retry calls `startUpload(pendingFile)` from scratch
- `download.ts:60` — retry calls `startDownload()` from scratch
**Improvement:** Track uploaded/downloaded chunk indices. On manual retry, skip completed chunks:
```typescript
// Upload: track which chunks completed
const completedChunks: Set<number> = new Set()
for (let i = 0; i < specs.length; i++) {
if (completedChunks.has(i)) continue
// ... create + upload chunk
completedChunks.add(i)
}
// Download: already naturally resumable — each chunk is independent
```
### 3.5 Client: connection state with Promise-based lock and per-server queues
**Design:** Each server gets a `ServerConnection` record containing a `Promise<XFTPClient>` (the connection lock) and a `Promise<void>` (the sequential command queue). The `XFTPClientAgent` maps server keys to these records.
The promise IS the lock — every consumer awaits the same promise. When reconnect is needed, the promise is replaced atomically.
```typescript
interface ServerConnection {
client: Promise<XFTPClient> // resolves to connected client; replaced on reconnect
queue: Promise<void> // tail of sequential command chain
}
interface XFTPClientAgent {
connections: Map<string, ServerConnection>
}
function newXFTPAgent(): XFTPClientAgent {
return {connections: new Map()}
}
```
**Connection lifecycle — `getXFTPServerClient` and `reconnectClient`:**
```typescript
function getXFTPServerClient(agent: XFTPClientAgent, server: XFTPServer): Promise<XFTPClient> {
const key = formatXFTPServer(server)
let conn = agent.connections.get(key)
if (!conn) {
const p = connectXFTP(server)
conn = {client: p, queue: Promise.resolve()}
agent.connections.set(key, conn)
// On connection failure, remove from map so next call retries
p.catch(() => {
const cur = agent.connections.get(key)
if (cur && cur.client === p) agent.connections.delete(key)
})
}
return conn.client
}
function reconnectClient(agent: XFTPClientAgent, server: XFTPServer): Promise<XFTPClient> {
const key = formatXFTPServer(server)
const old = agent.connections.get(key)
// Close old client (fire-and-forget)
old?.client.then(c => c.transport.close(), () => {})
// Replace with new connection promise — all concurrent callers will await this
// Queue survives reconnect — pending operations stay ordered
const p = connectXFTP(server)
const conn: ServerConnection = {client: p, queue: old?.queue ?? Promise.resolve()}
agent.connections.set(key, conn)
p.catch(() => {
const cur = agent.connections.get(key)
if (cur && cur.client === p) agent.connections.delete(key)
})
return p
}
function closeXFTPServerClient(agent: XFTPClientAgent, server: XFTPServer): void {
const key = formatXFTPServer(server)
const conn = agent.connections.get(key)
if (conn) {
agent.connections.delete(key)
conn.client.then(c => c.transport.close(), () => {})
}
}
function closeXFTPAgent(agent: XFTPClientAgent): void {
for (const conn of agent.connections.values()) {
conn.client.then(c => c.transport.close(), () => {})
}
agent.connections.clear()
}
```
**Precise semantics:**
1. `getXFTPServerClient(agent, server)` — returns existing `conn.client` promise if present, otherwise creates a new `ServerConnection` with fresh connection and empty queue
2. When error detected, first caller calls `reconnectClient` which replaces `conn.client` with a new connection promise. The queue is preserved across reconnect.
3. All concurrent callers awaiting the OLD promise receive the error
4. They then call `getXFTPServerClient` which returns the NEW promise
5. If reconnection fails, auto-cleanup (`p.catch(() => delete)`) removes the entry so the next caller starts fresh
**Stale error cleanup rule:** When a caller exhausts retries for a retriable error, it removes the failed entry from the map (only if no concurrent caller has already replaced it via `reconnectClient`). This prevents the next caller from receiving a stale rejected promise. Permanent errors (AUTH, NO_FILE, etc.) do NOT remove the connection — the transport is fine, only the command failed.
```typescript
function removeStaleConnection(
agent: XFTPClientAgent, server: XFTPServer, failedP: Promise<XFTPClient>
): void {
const key = formatXFTPServer(server)
const conn = agent.connections.get(key)
// Only remove if current promise is the one that failed — not if already replaced by reconnect
if (conn && conn.client === failedP) {
agent.connections.delete(key)
failedP.then(c => c.transport.close(), () => {})
}
}
```
**Per-server sequential queue:** `queue` is a `Promise<void>` — the tail of the sequential operation chain. Each new operation `.then()`s onto it. It's `void` because callers hold their own typed promises; the queue only tracks completion order:
```typescript
async function enqueueCommand<T>(
agent: XFTPClientAgent,
server: XFTPServer,
fn: () => Promise<T> // no client param — fn uses command wrappers (agent+server)
): Promise<T> {
const key = formatXFTPServer(server)
// Ensure connection exists (with auto-cleanup on failure)
await getXFTPServerClient(agent, server)
const conn = agent.connections.get(key)! // guaranteed to exist after getXFTPServerClient
// Chain onto the queue — fn runs after previous operation completes
let resolve_: (v: T) => void, reject_: (e: any) => void
const result = new Promise<T>((res, rej) => { resolve_ = res; reject_ = rej })
conn.queue = conn.queue.then(
() => fn().then(resolve_!, reject_!),
() => fn().then(resolve_!, reject_!)
).then(() => {}, () => {}) // swallow errors in the chain
return result
}
```
Commands to the same server execute one at a time via the queue. Commands to different servers execute concurrently because each has its own queue. `enqueueCommand` provides sequencing; `sendXFTPCommand` (called inside `fn` via command wrappers) provides retry. They compose as: `enqueueCommand` sequences calls to wrappers that internally use `sendXFTPCommand`.
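The queue-tail pattern can be demonstrated standalone (a simplified sketch without the agent/server plumbing; `Queue` and `enqueue` are stand-in names):

```typescript
// Simplified queue-tail sequencing: each operation chains onto the previous
// one's completion, and the tail swallows errors so the chain never stalls.
interface Queue {
  tail: Promise<void>
}

function enqueue<T>(q: Queue, fn: () => Promise<T>): Promise<T> {
  // Run fn after the previous operation settles (success or failure)
  const result = q.tail.then(fn, fn)
  // Advance the tail; swallow errors so later operations still run
  q.tail = result.then(() => undefined, () => undefined)
  return result
}
```

Operations enqueued on the same `Queue` run strictly in order; a rejection propagates to that operation's caller without blocking later operations.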
**Download change:** Group chunks by server, process each server's chunks sequentially, servers in parallel. Uses `for` loop for per-server sequencing (same pattern as Stage 2 upload). `enqueueCommand` is available for cases where different callers target the same server.
```typescript
const byServer = new Map<string, FileChunk[]>()
for (const chunk of resolvedFd.chunks) {
const srv = chunk.replicas[0]?.server ?? ""
if (!byServer.has(srv)) byServer.set(srv, [])
byServer.get(srv)!.push(chunk)
}
await Promise.all([...byServer.entries()].map(async ([srv, chunks]) => {
const server = parseXFTPServer(srv)
for (const chunk of chunks) {
const seed = decodePrivKeyEd25519(chunk.replicas[0].replicaKey)
const kp = ed25519KeyPairFromSeed(seed)
const raw = await downloadXFTPChunkRaw(agent, server, kp.privateKey, chunk.replicas[0].replicaId)
await onRawChunk({chunkNo: chunk.chunkNo, dhSecret: raw.dhSecret, nonce: raw.nonce, body: raw.body, digest: chunk.digest})
downloaded += chunk.chunkSize
onProgress?.(downloaded, resolvedFd.size)
}
}))
```
### 3.6 Fix cache key
**Bug:** `getXFTPServerClient` (client.ts:110) uses `"https://" + server.host + ":" + server.port` as cache key, ignoring `keyHash`. Two servers with same host:port but different keyHash share a cached connection, bypassing identity verification.
**Fix:** Use `formatXFTPServer(server)` as cache key (includes keyHash). Already available in `protocol/address.ts:52-54`.
```typescript
// Before:
const key = "https://" + server.host + ":" + server.port
// After:
const key = formatXFTPServer(server)
```
Note: With the redesign in 3.5, the cache key fix is inherent — the `connections` Map uses `formatXFTPServer(server)` everywhere.
### 3.7 Phase 2: Multi-server upload with server selection and failover
**Problem:** Current upload (`agent.ts:105-157`) takes a single `server: XFTPServer` and uploads ALL chunks to it. The 12 preset servers (6 SimpleX + 6 Flux) are pointless — only one is ever used.
**Design goal:** Distribute chunks across servers. Retry FNEW on a different server if one fails. Once working servers are found, prefer them (heuristic: a server that has already succeeded is unlikely to fail mid-upload; a broken server is most likely broken from the start due to maintenance/downtime).
**Reference implementation:** Haskell `Agent.hs:457-486` (`createChunk` / `createWithNextSrv`) + `Client.hs:2335-2385` (`getNextServer_` / `withNextSrv`).
#### Haskell algorithm summary
Two-stage architecture:
1. **Allocate stage (serial per file in Haskell):** For each chunk, call FNEW on a randomly-selected server. If FNEW fails, pick a different server and retry. Track tried hosts to avoid retrying the same server. After all chunks are assigned to servers, spawn one upload worker per server.
2. **Upload stage (parallel per server):** Each server worker uploads its assigned chunks sequentially (FPUT). On FPUT failure, retry on the same server with backoff (because the chunk replica already exists on that server). No server failover for FPUT.
Server selection constraints (hierarchical, `getNextServer_` Client.hs:2335-2350):
1. Prefer servers from unused operators (operator diversity)
2. Prefer servers with unused hosts (host diversity)
3. Random pick from the most-constrained candidate set
4. If all exhausted, reset tried set and start over
#### Web client adaptation
The web client doesn't have operators or a database. Simplified algorithm with two stages:
**Stage 1 — Allocate:** Create chunk records on servers (FNEW). Unlike Haskell, where this stage is serial, web FNEW runs concurrently within a concurrency limit. FNEW is a small command — concurrent FNEW on the same connection is not a problem, and concurrent FNEW across servers improves upload startup time.
**Stage 2 — Upload:** Upload chunk data (FPUT). Parallel across servers, sequential per server (reuses per-server queues from 3.5). FPUT retries on the same server with backoff — no server rotation because the chunk replica already exists on that server. Stage 2 reads chunk data by offset (via `readChunk`), so `SentChunk` must be extended with `chunkOffset: number` (from ChunkSpec).
```typescript
interface UploadState {
untriedServers: XFTPServer[] // servers not yet attempted — initially all servers
workingServers: XFTPServer[] // servers that succeeded FNEW
}
const MAX_FNEW_ATTEMPTS = 5 // per chunk: try up to 5 different servers
async function uploadFile(
agent: XFTPClientAgent,
allServers: XFTPServer[],
encrypted: EncryptedFileMetadata,
options?: UploadOptions
): Promise<UploadResult> {
const state: UploadState = {untriedServers: [...allServers], workingServers: []}
const specs = prepareChunkSpecs(encrypted.chunkSizes)
const concurrency = options?.concurrency ?? 4
// Stage 1: Allocate — concurrent FNEW within concurrency limit
const sentChunks: SentChunk[] = new Array(specs.length)
const queue = specs.map((spec, i) => ({spec, chunkNo: i + 1, index: i}))
let idx = 0
async function allocateWorker() {
while (idx < queue.length) {
const item = queue[idx++]
const {server, chunk} = await createChunkWithFailover(
agent, allServers, state, concurrency, item.spec, item.chunkNo
)
sentChunks[item.index] = chunk
}
}
const allocateWorkers = Array.from(
{length: Math.min(concurrency, queue.length)},
() => allocateWorker()
)
await Promise.all(allocateWorkers)
// Stage 2: Upload — parallel across servers, sequential per server
// readChunk reads from the encrypted file by offset (same as Phase 1 uploadFile)
let uploaded = 0
const total = encrypted.chunkSizes.reduce((a, b) => a + b, 0)
const byServer = groupBy(sentChunks, c => formatXFTPServer(c.server))
await Promise.all([...byServer.entries()].map(async ([srvKey, chunks]) => {
for (const chunk of chunks) {
const chunkData = await readChunk(chunk.chunkOffset, chunk.chunkSize)
await uploadXFTPChunk(agent, chunk.server, chunk.senderKey, chunk.senderId, chunkData)
uploaded += chunk.chunkSize
options?.onProgress?.(uploaded, total)
}
}))
return buildDescriptions(encrypted, sentChunks)
}
```
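Stage 2 uses a `groupBy` helper that isn't defined above; a minimal sketch of the assumed shape:

```typescript
// Group items into a Map keyed by the given function, preserving item order.
function groupBy<T>(items: readonly T[], key: (item: T) => string): Map<string, T[]> {
  const groups = new Map<string, T[]>()
  for (const item of items) {
    const k = key(item)
    const group = groups.get(k)
    if (group) group.push(item)
    else groups.set(k, [item])
  }
  return groups
}
```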
**`createChunkWithFailover`** — server selection with per-chunk retry limit:
```typescript
async function createChunkWithFailover(
agent: XFTPClientAgent,
allServers: XFTPServer[],
state: UploadState,
concurrency: number,
spec: ChunkSpec,
chunkNo: number
): Promise<{server: XFTPServer, chunk: SentChunk}> {
const maxAttempts = Math.min(allServers.length, MAX_FNEW_ATTEMPTS)
for (let attempt = 0; attempt < maxAttempts; attempt++) {
const server = pickServer(allServers, state, concurrency)
try {
const chunk = await createAndPrepareChunk(agent, server, spec, chunkNo)
// Success — add to working set (if not already there)
if (!state.workingServers.some(s => formatXFTPServer(s) === formatXFTPServer(server))) {
state.workingServers.push(server)
}
return {server, chunk}
} catch (e) {
// Remove from working if it was there
state.workingServers = state.workingServers.filter(
s => formatXFTPServer(s) !== formatXFTPServer(server)
)
if (attempt === maxAttempts - 1) throw e
}
}
throw new Error("unreachable")
}
```
**`pickServer`** — two-list selection:
```typescript
function pickServer(
allServers: XFTPServer[],
state: UploadState,
concurrency: number
): XFTPServer {
// Once enough working servers found, only use those
if (state.workingServers.length >= concurrency) {
return randomPick(state.workingServers)
}
// Still exploring — pick from untried
if (state.untriedServers.length > 0) {
const idx = Math.floor(Math.random() * state.untriedServers.length)
return state.untriedServers.splice(idx, 1)[0] // remove from untried
}
// All tried — reset untried to non-working servers and retry
state.untriedServers = allServers.filter(
s => !state.workingServers.some(w => formatXFTPServer(w) === formatXFTPServer(s))
)
if (state.untriedServers.length > 0) {
const idx = Math.floor(Math.random() * state.untriedServers.length)
return state.untriedServers.splice(idx, 1)[0]
}
// Every server is working — pick any working
return randomPick(state.workingServers)
}
```
**Algorithm:** Two lists — `untriedServers` (initially all) and `workingServers` (initially empty). When `workingServers.length < concurrency`, pick from `untriedServers` (removing on pick). On FNEW success, add to `workingServers`. On FNEW failure, server is already removed from `untriedServers`; remove from `workingServers` if present. When `untriedServers` is empty, reset it to all non-working servers. Once `workingServers.length >= concurrency`, pick randomly only from `workingServers`.
**Termination condition:** Each chunk tries at most `min(serverCount, 5)` different servers. If all attempts fail, the chunk fails and the upload fails with the last error. Rationale: if 5 out of 12 servers are down, something systemic is wrong and continuing is unlikely to help. Timeouts count as failures — the timed-out server is removed from working and a different server is picked next.
**Key differences from Haskell:**
- No operator concept — just host diversity via random selection
- No database — state tracked in-memory during upload
- FNEW runs concurrently (Haskell is serial) — improves startup time
- FNEW is cheap and retried with server rotation; FPUT retries on same server
**Download changes (also Phase 2):** Default concurrency should be 4 (matching Haskell). Download already groups by server in 3.5. If `replicas[0]` download fails, try `replicas[1]`, `replicas[2]`, etc. (fallback across replicas).
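A possible shape for the replica fallback (a sketch: the replica and download types are placeholders, not the project's actual signatures):

```typescript
// Try each replica in order; return the first successful download,
// or rethrow the last error if every replica fails.
async function downloadWithReplicaFallback<R, T>(
  replicas: readonly R[],
  download: (replica: R) => Promise<T>
): Promise<T> {
  let lastErr: unknown = new Error("no replicas available")
  for (const replica of replicas) {
    try {
      return await download(replica)
    } catch (e) {
      lastErr = e // fall through to the next replica
    }
  }
  throw lastErr
}
```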
## 4. Implementation Plan
### Phase 1: Error handling and connection resilience
Steps are ordered by dependency and should be implemented one by one.
#### Step 1: Fix cache key (3.6)
- Change cache key to `formatXFTPServer(server)` in `getXFTPServerClient` and `closeXFTPServerClient`
- Add import for `formatXFTPServer`
- Run existing tests to verify no regression
#### Step 2: Typed error detection for padded server errors (3.2 client-side)
- Add `XFTPRetriableError` class
- In `sendXFTPCommand`, detect padded error strings before `decodeTransmission`
- Classify `FRErr` responses as retriable or permanent with human-readable messages
- Run existing tests
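One possible shape for the typed errors and the classifier (a sketch: the class fields and the `isRetriable` heuristics are assumptions, chosen to be consistent with T1 in the testing plan):

```typescript
// Retriable: transport/session-level failures worth a reconnect + retry.
class XFTPRetriableError extends Error {
  constructor(public errType: string) {
    super(`retriable XFTP error: ${errType}`)
    this.name = "XFTPRetriableError"
  }
}

// Permanent: the command itself failed; the transport is fine.
class XFTPPermanentError extends Error {
  constructor(public errType: string, message: string) {
    super(message)
    this.name = "XFTPPermanentError"
  }
}

function isRetriable(e: unknown): boolean {
  if (e instanceof XFTPRetriableError) return true // SESSION, HANDSHAKE
  if (e instanceof TypeError) return true // network failure from fetch()
  if (e instanceof Error && e.name === "AbortError") return true // fetch timeout
  return false
}
```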
#### Step 3: Fetch timeout (3.3)
- Add `TransportConfig` with `timeoutMs`
- Thread config through `createTransport` → `connectXFTP` → command wrappers
- Add `AbortController` to browser `fetch()` and `setTimeout` to Node.js HTTP/2
- Add vitest test: timeout triggers after configured duration
- Run existing tests
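The abort mechanics can be sketched as a small helper (`timeoutSignal` is a hypothetical name, not the project's API):

```typescript
// Create an AbortSignal that fires after timeoutMs, plus a cancel function
// to clear the timer once the request completes.
function timeoutSignal(timeoutMs: number): {signal: AbortSignal; cancel: () => void} {
  const ctrl = new AbortController()
  const timer = setTimeout(() => ctrl.abort(), timeoutMs)
  return {signal: ctrl.signal, cancel: () => clearTimeout(timer)}
}

// Assumed usage in the browser transport:
//   const {signal, cancel} = timeoutSignal(config.timeoutMs)
//   try { return await fetch(url, {method: "POST", body, signal}) }
//   finally { cancel() }
```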
#### Step 4: Connection state with Promise-based lock and per-server queues (3.5)
- Introduce `ServerConnection` record: `{client: Promise<XFTPClient>, queue: Promise<void>}`
- Replace `XFTPClientAgent.clients: Map<string, XFTPClient>` with `connections: Map<string, ServerConnection>`
- Implement `reconnectClient` — replaces `conn.client` with new promise, preserves queue
- Implement `enqueueCommand` — chains operation onto server's queue
- Implement `removeStaleConnection` — removes entry only if current promise is the failed one
- Auto-cleanup: `p.catch(() => delete)` removes failed connections so next caller starts fresh
- Adapt `closeXFTPServerClient` and `closeXFTPAgent`
- Add vitest tests:
- Concurrent calls to same server produce single connection
- Failed promise is cleaned up, next caller gets fresh connection
#### Step 5: Automatic retry in sendXFTPCommand (3.2)
- Add retry loop with reconnect
- Change `sendXFTPCommand` signature: takes `agent + server` instead of `client`; export it (needed by tests and by agent.ts callers)
- Rename current `sendXFTPCommand` → `sendXFTPCommandOnce` (private); add padded error detection + FRErr classification (throw `XFTPRetriableError` for SESSION/HANDSHAKE, `XFTPPermanentError` for AUTH/NO_FILE/etc.)
- All command wrappers (`createXFTPChunk`, `uploadXFTPChunk`, etc.) pass agent + server
- Update agent.ts call sites: remove `getXFTPServerClient` calls before command wrappers (in `uploadFile`, `uploadRedirectDescription`, `downloadFileRaw`, `resolveRedirect`, `deleteFile`)
- Max 3 retries for retriable errors, immediate throw for permanent
- On retriable error: call `reconnectClient` and retry. On retriable error exhausted: call `removeStaleConnection` to clean up. On permanent error: throw immediately without touching connection
- Add vitest tests:
- Server started with delay → first attempt fails, retry succeeds
- 3 retries exhausted → error propagates with human-readable message
- Non-retriable error (AUTH) → no retry, immediate failure
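The retry loop can be sketched generically (a hedged sketch: `reconnect`/`cleanup` are callbacks standing in for `reconnectClient`/`removeStaleConnection`; the count of three total attempts matches T8 in the testing plan):

```typescript
// Run op up to maxAttempts times; reconnect between retriable failures,
// clean up after exhaustion, and fail fast on permanent errors.
async function retryCommand<T>(
  op: () => Promise<T>,
  isRetriable: (e: unknown) => boolean,
  reconnect: () => void, // stands in for reconnectClient
  cleanup: () => void, // stands in for removeStaleConnection
  maxAttempts = 3
): Promise<T> {
  let lastErr: unknown
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op()
    } catch (e) {
      if (!isRetriable(e)) throw e // permanent: no retry, connection untouched
      lastErr = e
      if (attempt < maxAttempts) reconnect() // replace conn.client, keep queue
    }
  }
  cleanup() // retries exhausted: remove stale entry so next caller starts fresh
  throw lastErr
}
```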
#### Step 6: Server-side stale session handling (3.1)
- Add one guard to `Nothing` branch: `sniUsed && not webHello -> throwE SESSION`
- Remove debug `hPutStrLn stderr` lines (all 6 occurrences in dispatch)
- All other branches unchanged
- Run Haskell tests + Playwright tests
#### Step 7: Download with per-server grouping
- Modify `downloadFileRaw` to group chunks by server, sequential within each server (`for` loop), parallel across servers (`Promise.all`)
- Add vitest test: concurrent downloads from different servers run in parallel
#### Step 8: UI error improvements (3.4)
- Temporary errors: auto-retry loop (3 attempts), then show human-readable diagnosis + manual retry button
- Permanent errors: show human-readable error, NO retry button
- Manual retry resumes from last successful chunk (not full restart)
#### Step 9: Remove debug logging
- Remove all `console.log('[DEBUG ...]')` and `hPutStrLn stderr "DEBUG ..."` lines
- Keep `console.error('[XFTP] ...')` error logging
### Phase 2: Multi-server upload
Implement after Phase 1 is complete and tested.
#### Step 10: Multi-server upload with failover (3.7)
- Extend `SentChunk` with `chunkOffset: number` (from ChunkSpec) and `server: XFTPServer` (assigned during allocate) — Stage 2 reads data by offset and groups chunks by server
- Change `uploadFile` signature: takes `allServers: XFTPServer[]` instead of single `server`
- Implement `UploadState` with `untriedServers` and `workingServers`
- Implement `createChunkWithFailover` and `pickServer`: two-list selection (untried → working once enough found), max `min(serverCount, 5)` attempts per chunk
- Allocate stage: concurrent FNEW within concurrency limit (default 4)
- Upload stage: parallel across servers, sequential per server (reuse queue from Step 7)
- Update `web/upload.ts`: pass `getServers()` instead of `pickRandomServer(getServers())`
- Update description building: each chunk references its actual server
- Add vitest tests:
- File split across N servers (verify different servers in description)
- One server down → chunks redistributed to others
- All servers down → error after exhausting 5 attempts per chunk
#### Step 11: Download concurrency and replica fallback
- Change default download concurrency from 1 to 4
- If `replicas[0]` download fails, try `replicas[1]`, `replicas[2]`, etc.
- Uses per-server queues from Step 7
## 5. Testing Plan
### Principle
Prefer low-level vitest tests over Playwright E2E. Each new function gets one focused test. Pure functions tested without mocks; connection management tested with mock `connectXFTP`; server behavior tested with real server. Total: 13 tests across 4 files.
Test files A-C run in browser context (`@vitest/browser` with Chromium headless), configured in `vitest.config.ts`. Test file D (integration) requires a separate Node.js vitest config since it uses `node:http2`. Existing `globalSetup.ts` provides a real XFTP server for integration tests.
### Test file A: `test/errors.test.ts` — pure, no server
Tests error classification and padded error detection (Steps 2, 5).
**T1. `isRetriable` classifies errors correctly**
```typescript
// Retriable:
expect(isRetriable(new XFTPRetriableError("SESSION"))).toBe(true)
expect(isRetriable(new XFTPRetriableError("HANDSHAKE"))).toBe(true)
expect(isRetriable(new TypeError("fetch failed"))).toBe(true) // network error
expect(isRetriable(Object.assign(new Error(), {name: "AbortError"}))).toBe(true) // timeout
// Not retriable:
expect(isRetriable(new XFTPPermanentError("AUTH", "..."))).toBe(false)
expect(isRetriable(new XFTPPermanentError("NO_FILE", "..."))).toBe(false)
expect(isRetriable(new XFTPPermanentError("INTERNAL", "..."))).toBe(false)
```
**T2. `categorizeError` produces human-readable messages**
```typescript
// categorizeError receives thrown errors (from sendXFTPCommandOnce or transport)
const e = categorizeError(new XFTPPermanentError("AUTH", "File is invalid, expired, or has been removed"))
expect(e.message).toContain("expired")
// Verify every permanent error type maps to a non-empty human-readable message
for (const errType of ["AUTH", "NO_FILE", "SIZE", "QUOTA", "BLOCKED", "DIGEST", "INTERNAL"]) {
expect(humanReadableMessage({type: errType}).length).toBeGreaterThan(0)
}
// Retriable errors also get human-readable messages after exhaustion
const re = categorizeError(new XFTPRetriableError("SESSION"))
expect(re.message).toContain("expired") // "Session expired, reconnecting..."
```
**T3. Padded error detection extracts error string from padded block**
```typescript
import {blockPad, blockUnpad} from '../src/protocol/transmission.js'
// Simulate server sending padded "SESSION"
const padded = blockPad(new TextEncoder().encode("SESSION"))
const raw = blockUnpad(padded)
expect(raw.length).toBeLessThan(20)
expect(new TextDecoder().decode(raw)).toBe("SESSION")
// Normal transmission block (batch count + large-encoded data) is NOT a short string
const sessionId = new Uint8Array(32) // dummy
const normalBlock = encodeTransmission(sessionId, new Uint8Array(0), new Uint8Array(0), encodePING())
const normalRaw = blockUnpad(normalBlock)
expect(normalRaw.length).toBeGreaterThan(20) // not mistaken for padded error
```
### Test file B: `test/connection.test.ts` — mock connectXFTP, no server
Tests connection management functions (Steps 4, 5). Uses `vi.mock` to replace `connectXFTP` with a controllable promise factory.
**T4. `getXFTPServerClient` coalesces concurrent calls**
```typescript
// Mock connectXFTP to return a deferred promise
const {promise, resolve} = promiseWithResolvers<XFTPClient>()
vi.mocked(connectXFTP).mockReturnValueOnce(promise)
const agent = newXFTPAgent()
const p1 = getXFTPServerClient(agent, server)
const p2 = getXFTPServerClient(agent, server)
expect(p1).toBe(p2) // same promise, single connection
resolve(mockClient)
expect(await p1).toBe(mockClient)
```
**T5. `getXFTPServerClient` auto-cleans failed connections**
```typescript
vi.mocked(connectXFTP).mockReturnValueOnce(Promise.reject(new Error("down")))
const agent = newXFTPAgent()
const p1 = getXFTPServerClient(agent, server)
await expect(p1).rejects.toThrow("down")
// After microtask, entry is removed
await new Promise(r => setTimeout(r, 0))
expect(agent.connections.has(formatXFTPServer(server))).toBe(false)
// Next call creates fresh connection
vi.mocked(connectXFTP).mockReturnValueOnce(Promise.resolve(mockClient))
const p2 = getXFTPServerClient(agent, server)
expect(p2).not.toBe(p1)
```
**T6. `removeStaleConnection` respects promise identity**
```typescript
const agent = newXFTPAgent()
const p1 = Promise.resolve(mockClient)
agent.connections.set(key, {client: p1, queue: Promise.resolve()})
// Replace with reconnect
const p2 = Promise.resolve(mockClient2)
agent.connections.set(key, {client: p2, queue: Promise.resolve()})
// removeStaleConnection with old promise does NOT remove new entry
removeStaleConnection(agent, server, p1)
expect(agent.connections.has(key)).toBe(true)
expect(agent.connections.get(key)!.client).toBe(p2)
// removeStaleConnection with current promise removes it
removeStaleConnection(agent, server, p2)
expect(agent.connections.has(key)).toBe(false)
```
**T7. `reconnectClient` replaces promise but preserves queue**
```typescript
const agent = newXFTPAgent()
const origQueue = Promise.resolve()
agent.connections.set(key, {client: Promise.resolve(mockClient), queue: origQueue})
vi.mocked(connectXFTP).mockReturnValueOnce(Promise.resolve(mockClient2))
reconnectClient(agent, server)
const conn = agent.connections.get(key)!
expect(await conn.client).toBe(mockClient2) // new client
expect(conn.queue).toBe(origQueue) // queue preserved
```
**T8. Retry loop: retriable error triggers reconnect, permanent error does not**
Mock approach: `vi.mock('../src/client.js')` to mock `connectXFTP` (exported). `reconnectClient` is not exported — its behavior is controlled indirectly via `connectXFTP` mock (it calls `connectXFTP` internally). Verify retry count via `connectXFTP` call count. Note: vitest module mocking may need adjustment depending on ESM transform behavior — if intra-module calls bypass the mock, extract `connectXFTP` to a separate module or use dependency injection for testing.
```typescript
// Script: first connectXFTP returns client whose post throws retriable,
// second connectXFTP (from reconnect) returns client whose post succeeds
vi.mocked(connectXFTP)
.mockResolvedValueOnce({
...mockClient,
transport: { post: async () => { throw new XFTPRetriableError("SESSION") }, close: () => {} }
})
.mockResolvedValueOnce({
...mockClient,
transport: { post: async () => okResponseBlock, close: () => {} }
})
const agent = newXFTPAgent()
const result = await sendXFTPCommand(agent, server, dummyKey, dummyId, encodePING())
expect(result.response.type).toBe("FROk")
expect(vi.mocked(connectXFTP)).toHaveBeenCalledTimes(2) // initial + 1 reconnect
// Reset — all 3 retries exhausted: connectXFTP called 3 times (initial + 2 reconnects)
vi.mocked(connectXFTP).mockClear()
vi.mocked(connectXFTP).mockResolvedValue({
...mockClient,
transport: { post: async () => { throw new XFTPRetriableError("SESSION") }, close: () => {} }
})
const agent2 = newXFTPAgent()
await expect(sendXFTPCommand(agent2, server, dummyKey, dummyId, encodePING()))
.rejects.toThrow(/reconnecting|expired/)
expect(vi.mocked(connectXFTP)).toHaveBeenCalledTimes(3) // initial + 2 reconnects
// Reset — permanent error: connectXFTP called once (initial only, no reconnect)
vi.mocked(connectXFTP).mockClear()
vi.mocked(connectXFTP).mockResolvedValue({
...mockClient,
transport: { post: async () => authErrorBlock, close: () => {} }
})
const agent3 = newXFTPAgent()
await expect(sendXFTPCommand(agent3, server, dummyKey, dummyId, encodePING()))
.rejects.toThrow(/expired/)
expect(vi.mocked(connectXFTP)).toHaveBeenCalledTimes(1) // initial only, no reconnect
```
### Test file C: `test/server-selection.test.ts` — pure, no server
Tests `pickServer` state machine (Step 10). Determinism: seed `Math.random`, or test invariants rather than specific picks.
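One way to seed the picks (an assumption: the plan only says to seed `Math.random`) is to stub it with a tiny deterministic PRNG such as mulberry32:

```typescript
// mulberry32: a small deterministic PRNG returning floats in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed >>> 0
  return () => {
    a = (a + 0x6d2b79f5) | 0
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// In a test: vi.spyOn(Math, "random").mockImplementation(mulberry32(42))
```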
**T9. `pickServer` picks from untried when working < concurrency**
```typescript
const servers = [s1, s2, s3, s4, s5]
const state: UploadState = {untriedServers: [...servers], workingServers: []}
const picked = pickServer(servers, state, 4)
// picked is from untried, and was removed from untried
expect(state.untriedServers.length).toBe(4)
expect(state.untriedServers).not.toContainEqual(picked)
```
**T10. `pickServer` picks only from working when working >= concurrency**
```typescript
const state: UploadState = {
untriedServers: [s5], // still has untried
workingServers: [s1, s2, s3, s4]
}
const picked = pickServer(servers, state, 4)
// Must pick from working, NOT from untried
expect([s1, s2, s3, s4]).toContainEqual(picked)
expect(state.untriedServers.length).toBe(1) // untried unchanged
```
**T11. `pickServer` resets untried when exhausted**
```typescript
const state: UploadState = {
untriedServers: [], // all tried
workingServers: [s1, s2] // only 2 working, concurrency=4
}
const picked = pickServer(servers, state, 4)
// Should have reset untried to non-working servers and picked from them
expect([s3, s4, s5]).toContainEqual(picked)
expect(state.untriedServers.length).toBe(2) // 3 non-working minus 1 picked
```
### Test file D: `test/integration.test.ts` — real server, Node.js mode
Requires a separate vitest config with `browser: {enabled: false}` since these tests use `node:http2` directly. Alternatively, add `test/vitest.node.config.ts` that includes only `test/integration.test.ts` and runs in Node.js.
**T12. Stale session returns padded SESSION error (requires Step 6)**
```typescript
import http2 from 'node:http2'
// Connect and handshake normally via the client
const client = await connectXFTP(server)
// Create a raw HTTP/2 session (new TLS SessionId, no handshake state on server)
const session = http2.connect(client.baseUrl, {rejectUnauthorized: false})
// Build a dummy command block using the old client's sessionId.
// Content doesn't matter — server detects stale session before parsing command.
const dummyKey = new Uint8Array(64) // Ed25519 private key (dummy)
const dummyId = new Uint8Array(24) // entity ID (dummy)
const cmdBlock = encodeAuthTransmission(client.sessionId, new Uint8Array(0), dummyId, encodePING(), dummyKey)
const resp = await new Promise<Uint8Array>((resolve, reject) => {
const req = session.request({":method": "POST", ":path": "/"})
const chunks: Buffer[] = []
req.on("data", (c: Buffer) => chunks.push(c))
req.on("end", () => resolve(new Uint8Array(Buffer.concat(chunks))))
req.on("error", reject)
req.end(Buffer.from(cmdBlock))
})
// Server should return padded "SESSION" (not crash, not "HANDSHAKE")
const raw = blockUnpad(resp.subarray(0, XFTP_BLOCK_SIZE))
expect(new TextDecoder().decode(raw)).toBe("SESSION")
session.close()
closeXFTP(client)
```
**T13. Fetch timeout fires within configured duration**
```typescript
// connectXFTP with 1ms timeout — handshake requires multiple round trips,
// so even on localhost it will exceed 1ms and trigger abort
await expect(
connectXFTP(server, {timeoutMs: 1})
).rejects.toThrow(/abort|timeout/i)
```
### What existing tests already cover (no new tests needed)
| Behavior | Covered by |
|----------|-----------|
| Cache key fix (Step 1) | Existing round-trip test — uses `formatXFTPServer` after refactor |
| Basic upload/download | 24 Playwright tests + 1 vitest browser test |
| File size limits, unicode filenames | Playwright edge case tests |
| Server startup/teardown | `globalSetup.ts` / `globalTeardown.ts` |
| Handshake + identity verification | `connectXFTP` in existing round-trip test |
### Test ordering
Tests must be added alongside their implementation step:
- **Step 2**: Add T1, T2, T3 (test/errors.test.ts)
- **Step 3**: Add T13 (test/integration.test.ts) — requires Node.js vitest config
- **Step 4**: Add T4, T5, T6, T7 (test/connection.test.ts)
- **Step 5**: Add T8 (test/connection.test.ts)
- **Step 6**: Add T12 (test/integration.test.ts) — requires server change + Node.js vitest config
- **Step 10**: Add T9, T10, T11 (test/server-selection.test.ts)
## 6. Context for Implementation Sessions
### Files to re-read on session start
**TypeScript (xftp-web/src/):**
- `client.ts` — `XFTPClient`, `XFTPClientAgent`, `getXFTPServerClient`, `closeXFTPServerClient`, `connectXFTP`, `sendXFTPCommand`, `createBrowserTransport`, `createNodeTransport`, all command wrappers
- `agent.ts` — `uploadFile`, `downloadFileRaw`, `downloadFile`, `resolveRedirect`, `encryptFileForUpload`
- `protocol/transmission.ts` — `encodeAuthTransmission`, `decodeTransmission`, `blockPad`, `blockUnpad`
- `protocol/commands.ts` — `XFTPErrorType`, `FileResponse`, `decodeResponse`, `decodeXFTPError`
- `protocol/handshake.ts` — `decodeServerHandshake` (padded error detection heuristic)
- `protocol/address.ts` — `XFTPServer`, `parseXFTPServer`, `formatXFTPServer`
- `web/upload.ts` — UI error handling, retry button
- `web/download.ts` — UI error handling, retry button
- `web/servers.ts` — `getServers`, `pickRandomServer`
**TypeScript (xftp-web/test/):**
- `browser.test.ts` — vitest Node.js test template (uses real Haskell server)
- `globalSetup.ts` — server startup, config generation, port file
- `page.spec.ts` — Playwright page tests
**Haskell (reference for multi-server):**
- `src/Simplex/FileTransfer/Agent.hs` — `createChunk` (lines 457-486, allocate stage), `runXFTPSndPrepareWorker` (lines 391-430, serial allocate in Haskell), `runXFTPSndWorker` (lines 494-548, per-server upload worker)
- `src/Simplex/Messaging/Agent/Client.hs` — `getNextServer_` (lines 2335-2350), `withNextSrv` (lines 2366-2385), `pickServer` (lines 2309-2314)
**Haskell (server):**
- `src/Simplex/FileTransfer/Server.hs` — `xftpServerHandshakeV1` (lines 165-244), `processRequest` (lines 403-435)
- `src/Simplex/Messaging/Protocol.hs` — `tDecodeServer` (lines 2239-2265) — sessionId verification at line 2242
### Key design constraints
1. `tDecodeServer` (Protocol.hs:2242) verifies `sessId == sessionId` — commands signed with old sessionId WILL fail on new connection
2. Server generates per-session DH key in `processHello` (Server.hs:207) — cannot be shared across sessions
3. `fetch()` provides zero control over HTTP/2 connection reuse — browser decides
4. `xftp-web-hello` header is only checked in dispatch (Server.hs:192), NOT inside `processHello`
5. Handshake-phase errors are raw padded strings; command-phase errors are proper ERR transmissions
6. Ed25519 signature verification (`TASignature` path, Protocol.hs:1314) does NOT use `thAuth` — but SMP will
7. Reconnect must re-handshake to get new sessionId AND new server DH key
8. The new `throwE SESSION` guard (Step 6) sends a raw padded "SESSION" string — no sessionId framing. Client detects this via padded error heuristic (section 3.2), not via sessionId mismatch
9. FNEW is cheap (creates chunk record on server) — retry with different server on failure
10. FPUT retries on same server (chunk replica already exists there) — close connection + backoff
## 7. Plan Maintenance
This plan must be updated as implementation proceeds:
- Mark completed steps with date
- Record any deviations from the plan with rationale
- Add new issues discovered during implementation
- Update file references if code moves
@@ -0,0 +1,327 @@
# CLI-Web Link Compatibility
## Problem
CLI and web clients are isolated: CLI outputs `.xftp` description files, web outputs
`https://host/#<encoded>` links. A file uploaded via one cannot be downloaded via the other.
## Solution Summary
Make CLI produce and consume web-compatible links so that:
- CLI `send` always outputs a web link (in addition to `.xftp` files)
- CLI `recv` accepts a web link URL as input (alternative to `.xftp` file path)
- Browser can download files uploaded by CLI and vice versa
The web page host is derived from the XFTP server address - the server that hosts the file
also hosts the download page. Making XFTP servers actually serve the web page is a separate
concern (not covered here), but the link format anticipates it.
The YAML file description format is already identical between CLI and web.
The only gap is the URI encoding layer: DEFLATE-raw compression + base64url + URL structure.
## Current State
### Web link format
```
https://<xftp-server-host>/#<base64url(deflateRaw(YAML))>
```
Encoding chain (agent.ts:64-68):
1. `encodeFileDescription(fd)` -> YAML string
2. `TextEncoder.encode(yaml)` -> bytes
3. `pako.deflateRaw(bytes)` -> compressed
4. `base64urlEncode(compressed)` -> URI fragment (no `#`)
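A minimal sketch of the base64url step (step 4): a URL-safe alphabet with padding stripped. This is an illustration, not the project's actual helper:

```typescript
// Encode bytes as base64url: standard base64 with +/ replaced by -_ and no padding.
function base64urlEncode(data: Uint8Array): string {
  let bin = ""
  for (const b of data) bin += String.fromCharCode(b)
  // btoa is available in browsers and in Node.js 16+
  return btoa(bin).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "")
}
```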
For multi-chunk files exceeding ~400 chars in URI, a redirect description is uploaded:
the real file description is encrypted, uploaded as a separate XFTP file, and a smaller
"redirect" description (pointing to it) is put in the URI.
### CLI file format
```
xftp send FILE -> writes rcv1.xftp (raw YAML), snd.xftp.private
xftp recv FILE.xftp -> reads raw YAML from file
```
No URI support. No compression. No redirect descriptions.
### Existing Haskell `FileDescriptionURI`
`Description.hs:243-266` defines a `simplex:/file#/?desc=<URL-encoded raw YAML>` format.
This is the SimpleX Chat app format - NOT the web page format. It uses URL-encoded raw YAML
(no DEFLATE compression), and has a different URL structure.
## Detailed Tech Design
### 1. File Header (Filename) Compatibility
The filename is carried **inside the encrypted file data**, not in the file description YAML.
Both CLI and web use the same `FileHeader` structure and binary encoding - full interop.
#### FileHeader type
Haskell (`Types.hs:36-46`):
```haskell
data FileHeader = FileHeader { fileName :: Text, fileExtra :: Maybe Text }
instance Encoding FileHeader where
smpEncode FileHeader {fileName, fileExtra} = smpEncode (fileName, fileExtra)
```
TypeScript (`crypto/file.ts:11-24`):
```typescript
interface FileHeader { fileName: string; fileExtra: string | null }
function encodeFileHeader(hdr: FileHeader): Uint8Array {
return concatBytes(encodeString(hdr.fileName), encodeMaybe(encodeString, hdr.fileExtra))
}
```
Both produce identical binary: `[1-byte UTF-8 length][fileName bytes]['0']` (for null fileExtra).
Max filename: 255 UTF-8 bytes (1-byte length prefix).
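The binary layout above can be sketched as a standalone helper (hypothetical, not the library's `encodeFileHeader`; `'0'` is the `smpEncode` tag for `Nothing`):

```typescript
// Encode a FileHeader with null fileExtra: [1-byte length][fileName UTF-8]['0'].
// Standalone sketch of the layout described above.
function encodeHeaderSketch(fileName: string): Uint8Array {
  const name = new TextEncoder().encode(fileName)
  if (name.length > 255) throw new Error("fileName exceeds 255 UTF-8 bytes")
  const out = new Uint8Array(1 + name.length + 1)
  out[0] = name.length          // 1-byte length prefix
  out.set(name, 1)              // fileName bytes
  out[1 + name.length] = 0x30   // ASCII '0' = Nothing (no fileExtra)
  return out
}
```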
#### Encrypted file structure
Both CLI and web produce the same encrypted stream:
```
XSalsa20-Poly1305 encrypted:
[8-byte Int64 fileSize] [FileHeader] [file content] ['#' padding]
+ [16-byte auth tag]
Where fileSize = len(FileHeader) + len(file content)
```
The 8-byte length prefix and padding are handled identically:
- Haskell: `Crypto.hs:43-56` (`encryptFile`) / `Crypto.hs:81-87` (`decryptFirstChunk`)
- TypeScript: `crypto/file.ts:51-70` (`encryptFile`) / `crypto/file.ts:81-94` (`decryptChunks`)
On decryption, `unPadLazy`/`splitLen` strips the 8-byte length prefix, then `parseFileHeader`
extracts the filename from the remaining decrypted bytes (up to 1024 bytes examined, both sides).
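The plaintext layout before encryption can be sketched as follows (a sketch only; it assumes the 8-byte `Int64` is big-endian, as in the SMP encoding, and uses `'#'` as the padding byte per the structure above):

```typescript
// Build the plaintext block: [8-byte BE fileSize][header][content]['#' padding].
// fileSize = header.length + content.length; paddedLen is the target padded size.
function buildPlaintext(header: Uint8Array, content: Uint8Array, paddedLen: number): Uint8Array {
  const fileSize = header.length + content.length
  if (8 + fileSize > paddedLen) throw new Error("content does not fit in padded size")
  const out = new Uint8Array(paddedLen).fill(0x23) // '#' padding
  new DataView(out.buffer).setBigUint64(0, BigInt(fileSize)) // big-endian by default
  out.set(header, 8)
  out.set(content, 8 + header.length)
  return out
}
```

On decryption the inverse applies: read the 8-byte prefix, take that many bytes, discard the rest as padding.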
#### CLI upload: sets real filename (ok)
`Client/Main.hs:246-247,273`:
```haskell
let (_, fileNameStr) = splitFileName filePath
fileName = T.pack fileNameStr
...
fileHdr = smpEncode FileHeader {fileName, fileExtra = Nothing}
```
Extracts the actual filename from the path and embeds it in the encrypted header.
#### CLI download: uses filename from header (ok)
`Crypto.hs:62-66` (single chunk) / `Crypto.hs:72-74` (multi-chunk):
```haskell
(FileHeader {fileName}, rest) <- parseFileHeader decryptedContent
destFile <- withExceptT FTCEFileIOError $ getDestFile fileName
```
`Client/Main.hs:435-441` (`getFilePath`):
- If output dir specified: saves to `<dir>/<fileName>`
- If no dir: saves to `~/Downloads/<fileName>`
The filename from the decrypted header determines the output file name.
#### Web upload: sets real filename (ok)
`upload.ts:121` -> `agent.ts:86`:
```typescript
const fileHdr = encodeFileHeader({fileName, fileExtra: null})
```
Where `fileName` comes from `file.name` (browser File API).
#### Web download: uses filename from header (ok)
`download.ts:97,102`:
```typescript
const fileName = sanitizeFileName(header.fileName)
a.download = encodeURIComponent(fileName)
```
The web client additionally sanitizes the filename (strips path separators, control chars,
bidi overrides, limits to 255 chars).
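An illustrative sketch of that sanitization (hypothetical, not the actual `sanitizeFileName` implementation — exact character classes may differ):

```typescript
// Replace path separators, strip control characters and bidi override/isolate
// characters, truncate to 255 characters. Illustrative only.
function sanitizeSketch(name: string): string {
  return name
    .replace(/[\/\\]/g, "_")                      // path separators
    .replace(/[\u0000-\u001f\u007f]/g, "")        // control chars
    .replace(/[\u202a-\u202e\u2066-\u2069]/g, "") // bidi overrides/isolates
    .slice(0, 255)
}
```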
#### Web redirect description: empty filename (correct)
`agent.ts:193`: `encryptFileForUpload(yamlBytes, "")` - redirect descriptions use empty filename
because they are internal artifacts, not user files. This is handled correctly on both sides:
the redirect content is decrypted and parsed as YAML, not saved as a file.
#### Cross-client interop: fully compatible (ok)
| Scenario | Filename flow | Status |
|----------|--------------|--------|
| CLI upload -> CLI download | `splitFileName` -> header -> `getDestFile` | Works |
| Web upload -> Web download | `File.name` -> header -> `sanitizeFileName` | Works |
| CLI upload -> Web download | `splitFileName` -> header -> `sanitizeFileName` | **Compatible** |
| Web upload -> CLI download | `File.name` -> header -> `getDestFile` | **Compatible** |
The binary encoding is identical (smpEncode). No changes needed for filename interop.
The CLI should consider adding filename sanitization similar to the web client for safety.
### 2. Web Link Host Derivation
The web page URL domain comes from the XFTP server address, not from a CLI flag:
- **Non-redirected description**: use the server host of the first chunk's first replica.
E.g., `xftp://abc=@xftp1.simplex.im` -> `https://xftp1.simplex.im/#<encoded>`
- **Redirected description**: use the server host of the redirect chunk (the outer description's
chunk that stores the encrypted inner description).
The server address format is `xftp://<keyhash>@<host>[,<host2>,...][:<port>]`.
The web link uses `https://<host>` (port 443 implied).
This means the CLI does not need a `--web-url` flag - the server address fully determines
the link. The XFTP server serving the web page is a separate deployment concern.
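The derivation can be sketched as (a hypothetical helper; it takes the first host when several are listed and drops the port, since the web link implies 443):

```typescript
// Parse "xftp://<keyhash>@<host>[,<host2>,...][:<port>]" into the web link base.
function webHostFromServer(server: string): string {
  const at = server.indexOf("@")
  if (!server.startsWith("xftp://") || at < 0) throw new Error("not an XFTP server address")
  const hostPart = server.slice(at + 1)
  const firstHost = hostPart.split(",")[0].split(":")[0] // first host, port stripped
  return "https://" + firstHost
}
```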
### 3. Web URI Encoding/Decoding in Haskell
Add two functions (new module or in `Description.hs`):
```haskell
-- Encode file description as web URI fragment (no leading #)
encodeWebURI :: FileDescription 'FRecipient -> ByteString
-- 1. Y.encode . encodeFileDescription -> YAML bytes
-- 2. deflateRaw (raw DEFLATE, no zlib/gzip header) via zlib package
-- 3. base64url encode (with padding, matching Data.ByteString.Base64.URL)
-- Decode web URI fragment (no leading #) to file description
decodeWebURI :: ByteString -> Either String (ValidFileDescription 'FRecipient)
-- 1. base64url decode
-- 2. inflateRaw (raw DEFLATE decompress)
-- 3. Y.decodeEither' -> YAMLFileDescription -> FileDescription
-- 4. validateFileDescription
-- Build full web link from file description
-- Extracts server host from first chunk replica (or redirect chunk)
fileWebLink :: FileDescription 'FRecipient -> (String, ByteString)
-- Returns (webHost, uriFragment)
-- Caller assembles: "https://" <> webHost <> "/#" <> uriFragment
```
**Dependency**: Add `zlib` to `simplexmq.cabal` (for raw DEFLATE).
The codebase already has `zstd` for message compression - `zlib` is standard and small.
The `zlib` Haskell package provides `Codec.Compression.Zlib.Raw` for raw DEFLATE
(no header/trailer), matching `pako.deflateRaw()` / `pako.inflateRaw()`.
### 4. Redirect Description Support
The CLI currently does NOT create redirect descriptions. For single-server single-recipient
uploads, most file descriptions fit in a reasonable URI even for multi-chunk files. But for
large files (many chunks × long server hostnames), the URI can exceed practical limits.
**Approach**: Match the web client threshold.
- After encoding the URI, if `length > 400` and chunks > 1, upload a redirect description.
- The redirect upload uses the same XFTP upload flow: encrypt YAML -> upload as file -> create
outer description pointing to it.
- This matches `agent.ts:152-155` exactly.
- The redirect chunk's server becomes the web link host.
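The threshold rule above reduces to a small predicate (numbers assumed from the web client behavior described here: 400-character fragment limit, multi-chunk descriptions only):

```typescript
// Decide whether to upload a redirect description before building the link.
function needsRedirect(fragment: string, chunkCount: number): boolean {
  return fragment.length > 400 && chunkCount > 1
}
```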
For CLI download from a redirect URI, the existing `cliReceiveFile` needs extension:
- After decoding the file description, check `redirect` field.
- If present: download and decrypt the redirect chunks first to get the inner description,
then download the actual file using the inner description.
- The web client already does this (`resolveRedirect` in agent.ts:320-346).
### 5. CLI Command Changes
#### `xftp send` - always output web link
```
xftp send FILE [DIR] [-n COUNT] [-s SERVERS]
```
- Upload file as usual
- Generate web link: `https://<server-host>/#<encodeWebURI(rcvDescription)>`
- If URI exceeds threshold, upload redirect description first
- Print web link to stdout (in addition to `.xftp` file paths)
- Only generates link for the first recipient (web links are single-recipient)
**Output change**:
```
Sender file description: ./file.xftp/snd.xftp.private
Pass file descriptions to the recipient(s):
./file.xftp/rcv1.xftp
Web link:
https://xftp1.simplex.im/#eJy0VduO2zYQ...
```
#### `xftp recv` - accept URL as input
```
xftp recv <FILE.xftp | URL> [DIR]
```
- If input starts with `http://` or `https://`, extract hash fragment after `#`
- Decode: base64url -> inflateRaw -> YAML -> FileDescription
- Resolve redirect if present
- Download and decrypt as usual
The URL should be quoted on the command line (`"https://...#..."`): although `#` starts a shell
comment only at the beginning of a word, quoting protects against shells and configurations that
treat `#` or other URL characters specially.
Implementation: modify `receiveP` parser to accept URL, add `decodeWebURI` path in
`cliReceiveFile` alongside existing `getFileDescription'`.
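Distinguishing a link from a file path and extracting the fragment can be sketched as (hypothetical helper using the standard `URL` parser):

```typescript
// Return the URI fragment (without "#") if the input is a web link,
// else null (the input is then treated as an .xftp file path).
function linkFragment(input: string): string | null {
  if (!input.startsWith("http://") && !input.startsWith("https://")) return null
  const hash = new URL(input).hash
  return hash.startsWith("#") ? hash.slice(1) : null
}
```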
### 6. YAML Format Compatibility
Already identical. The web `description.ts` explicitly matches Haskell `Data.Yaml` output:
- Same field names (alphabetical key order)
- Same base64url encoding for binary fields (with `=` padding)
- Same server replica colon-delimited format: `chunkNo:replicaId:replicaKey[:digest][:chunkSize]`
- Same size encoding (`kb`/`mb`/`gb` suffixes)
- Same redirect structure
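The size encoding can be sketched roughly as follows — this is an assumption inferred from the suffix list above (the authoritative rule lives in `description.ts` and the Haskell encoder):

```typescript
// Render a byte count with the largest suffix that divides it exactly,
// falling back to a bare number of bytes. Assumed behavior, for illustration.
const KB = 1024, MB = KB * 1024, GB = MB * 1024
function renderSize(bytes: number): string {
  if (bytes >= GB && bytes % GB === 0) return bytes / GB + "gb"
  if (bytes >= MB && bytes % MB === 0) return bytes / MB + "mb"
  if (bytes >= KB && bytes % KB === 0) return bytes / KB + "kb"
  return String(bytes)
}
```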
**Verification**: The Playwright test suite already tests upload->download round-trips.
Adding a cross-client test (CLI upload -> web download, or web upload -> CLI download) would
validate interop end-to-end.
### 7. Server Compatibility
No server changes needed. Both clients use the same XFTP protocol (FGET, FPUT, FNEW, FACK, FDEL).
The web client adds `xftp-web-hello: 1` header for the hello handshake, but the actual file
operations are identical wire-format.
The only consideration: CLI uses native HTTP/2 (via `http2` Haskell package), web uses
browser `fetch()` API over HTTP/2. Both produce identical XFTP protocol frames.
**Note**: Making XFTP servers actually serve the web download page at `https://<host>/` is a
separate deployment/infrastructure task. This plan only establishes the link format convention
so that links are ready to work once servers serve the page.
## Implementation Plan
### Phase 1: Web URI codec in Haskell
1. Add `zlib` dependency to `simplexmq.cabal`
2. Add `encodeWebURI` / `decodeWebURI` / `fileWebLink` to `Simplex.FileTransfer.Description`
(or a new `Simplex.FileTransfer.Description.WebURI` module)
3. `fileWebLink` extracts host from first chunk's first replica server address
4. Add unit tests: encode a known FileDescription, verify output matches web client encoding
5. Add round-trip test: encode -> decode -> compare
### Phase 2: CLI `recv` accepts URL
1. Modify `ReceiveOptions` to accept `Either FilePath WebURL` for `fileDescription`
2. In `cliReceiveFile`: if URL, extract fragment after `#`, call `decodeWebURI`
3. Add redirect resolution: if `redirect /= Nothing`, download redirect chunks,
decrypt, parse inner description, then proceed with download
4. Test: upload via web page -> copy link -> `xftp recv <link>`
### Phase 3: CLI `send` outputs web link
1. After upload, call `fileWebLink` to get (host, fragment)
2. If fragment exceeds threshold, upload redirect description first, rebuild link
3. Print `https://<host>/#<fragment>` to stdout
4. Test: `xftp send FILE` -> open link in browser -> download
### Phase 4: Cross-client integration test
1. Add test: CLI send -> extract link from stdout -> Playwright browser download -> verify
2. Add test: Playwright browser upload -> extract link -> CLI recv -> verify
3. These can be shell-script or Haskell test-suite tests that spawn both clients
@@ -0,0 +1,415 @@
# Fix subQ deadlock: blocking writeTBQueue inside connLock
## Problem
Users report that message reception silently and permanently stops across all connections, with no error alerts. The app appears functional but no messages arrive. Recovery requires restart.
Root cause: a deadlock between worker threads holding `connLock` and the `agentSubscriber` (sole `subQ` reader).
### The deadlock mechanism
`subQ` (`TBQueue ATransmission`, capacity 4096 on mobile / 1024 on desktop) is the single pipeline between the agent layer and the chat layer. The `agentSubscriber` thread (`Commands.hs:4373`) is its **sole reader**.
Three code sites hold `connLock` and call blocking `writeTBQueue subQ` without a fullness check. When `subQ` is full, these block while holding the lock. If `agentSubscriber` simultaneously needs the same `connLock` (via `sendMessagesB_` → `withConnLocks`), it blocks too — creating a circular wait:
- **Worker**: holds `connLock(X)`, waits for `subQ` space (needs `agentSubscriber` to read)
- **agentSubscriber**: sole `subQ` reader, waits for `connLock(X)` (needs worker to release)
- **Result**: permanent silent deadlock — no exception, no alert, all connections blocked
### Confirmed deadlock scenarios
**Scenario 1**: Delivery worker during queue rotation test
```
Delivery worker: agentSubscriber (sole subQ reader):
withConnLock(X) [2187] readTBQueue subQ → processAgentMessageConn
...DB operations... → sendPendingGroupMessages (on CON/SENT/QCONT)
notify → writeTBQueue subQ [2238] → batchSendConnMessages → deliverMessagesB
[BLOCKED — subQ full] → withAgent sendMessagesB [synchronous]
→ sendMessagesB_ → withConnLocks({..X..}) [1708]
[BLOCKED — connLock(X) held]
```
**Scenario 2**: Async command worker during message ACK with notification
```
Async cmd worker: agentSubscriber (sole subQ reader):
tryWithLock "ICAck" [1930→1824] readTBQueue subQ → processAgentMessageConn
→ withConnLock(X) → sendPendingGroupMessages
→ ack → ackQueueMessage [1899] → sendMessagesB_ → withConnLocks({..X..})
→ sendMsgNtf [2381] [BLOCKED — connLock(X) held]
→ writeTBQueue subQ [2386]
[BLOCKED — subQ full]
```
**Scenario 3**: Synchronous `ackMessage'` API (same mechanism as Scenario 2 but from external API caller)
```
ackMessage' caller: agentSubscriber (sole subQ reader):
withConnLock(X) [2254] → sendMessagesB_ → withConnLocks({..X..})
→ ack → ackQueueMessage [2267] [BLOCKED — connLock(X) held]
→ sendMsgNtf [2381]
→ writeTBQueue subQ [2386]
[BLOCKED — subQ full]
```
### ConnId overlap verified
No guard prevents a connection undergoing queue rotation (AM_QTEST_) or ACK processing from being included in `sendMessagesB_`'s batch. During these operations, the connection has `connStatus == ConnReady`, passing all filters in `memberSendAction`.
### Cascade amplification
Once any single deadlock triggers, `subQ` never drains. ALL other threads that attempt `writeTBQueue subQ` block progressively — their locks are held forever too. The entire threading system freezes within seconds.
### Affected code sites (blocking `writeTBQueue subQ` inside `connLock`)
| Site | File | Lock line | Write line | Events written |
|------|------|-----------|------------|----------------|
| `runSmpQueueMsgDelivery::notify` | Agent.hs | 2187 | 2238 | SWITCH SPCompleted, ERR INTERNAL |
| `runSmpQueueMsgDelivery::internalErr/notifyDel` | Agent.hs | 2187 | 2238 (via notifyDel→notify) | ERR INTERNAL + delMsg |
| `ackQueueMessage::sendMsgNtf` | Agent.hs | 2254 or 1930 | 2386 | MSGNTF |
### Safe patterns that already exist in the codebase
1. **`isFullTBQueue` + pending TVar** (used at `runCommandProcessing` lines 1782-1784/1937, and `runProcessSMP` lines 3027-3029/3216):
```haskell
-- Before processing (e.g. line 1782):
pending <- newTVarIO []
-- During processing — safe notify (e.g. line 1937):
notify cmd =
let t = (corrId, connId, AEvt (sAEntity @e) cmd)
in atomically $ ifM (isFullTBQueue subQ) (modifyTVar' pendingCmds (t :)) (writeTBQueue subQ t)
-- After processing — flush (e.g. line 1784):
mapM_ (atomically . writeTBQueue subQ) . reverse =<< readTVarIO pending
```
2. **`nonBlockingWriteTBQueue`** (used at Client.hs:789, NtfSubSupervisor.hs:507):
```haskell
nonBlockingWriteTBQueue q x = do
sent <- atomically $ tryWriteTBQueue q x
unless sent $ void $ forkIO $ atomically $ writeTBQueue q x
```
Note: `nonBlockingWriteTBQueue` does NOT preserve ordering — the spawned background thread may complete out of order relative to subsequent direct writes from the same calling thread.
### Exhaustive proof: no other deadlock scenarios exist
All 15 `withConnLock` sites in Agent.hs were analyzed. Only 3 write to `subQ`:
| withConnLock site | Writes subQ? | Safe? |
|-------------------|-------------|-------|
| switchConnectionAsync' (899) | No | ✓ |
| setConnShortLinkAsync' (995) | No | ✓ |
| setConnShortLink' (1031) | No | ✓ |
| deleteConnShortLink' (1075) | No | ✓ |
| allowConnection' (1407) | No | ✓ |
| acceptContact' (1417) | No | ✓ |
| sendMessagesB_ (1708, `withConnLocks`) | No | ✓ |
| tryWithLock/runSmpCommand (1930) | Yes (1937) | ✓ — `isFullTBQueue` check |
| tryMoveableWithLock/runSmpCommand (1931) | Yes (1937) | ✓ — `isFullTBQueue` check |
| **runSmpQueueMsgDelivery AM_QTEST_ (2187)** | **Yes (2238)** | **✗ — DEADLOCK** |
| **ackMessage' (2254)** | **Yes (2386)** | **✗ — DEADLOCK** |
| switchConnection' (2298) | No | ✓ |
| abortConnectionSwitch' (2328) | No | ✓ |
| synchronizeRatchet' (2351) | No | ✓ |
| suspendConnection' (2390) | No | ✓ |
| **processSMP (3037)** | Yes (3216) | ✓ — `isFullTBQueue` check |
Note: `processSMP` (line 3037) holds `connLock` and its local `notify` (line 3216) writes to `subQ`, but it uses the safe `isFullTBQueue` pattern. Its `ack` (line 3196) uses `enqueueCmd` (DB-only), NOT `ackQueueMessage`. The actual `ackQueueMessage` runs later from the async command worker via ICAck/ICAckDel.
Other lock pairs checked — no circular dependencies:
- `connLock × DB MVar`: DB never acquires connLock
- `entityLock × connLock`: consistent ordering (entity first in chat, conn in agent)
- `connLock(X) × connLock(Y)`: single agentSubscriber thread, one `withConnLocks` at a time
---
## Deadlock call graph: agentSubscriber → connLock
All deadlock paths require `agentSubscriber` to synchronously acquire `connLock`. Exhaustive analysis shows that **every such path converges on a single agent function**: `sendMessagesB_` → `withConnLocks` (Agent.hs:1708). No other agent API function called synchronously from the agentSubscriber acquires connLock.
Verified (FACT): `ackMessageAsync` → `enqueueCommand` only (no connLock). `toggleConnectionNtfs` → no lock. `deleteConnectionAsync` → `deleteLock` not `connLock`. `joinConnectionAsync` → `withInvLock` not `connLock`.
Also verified (FACT): `Lock = TMVar Text` (Lock.hs:24) is **non-reentrant** — double acquisition on the same thread deadlocks.
### All 22 trigger paths
Every path goes through `deliverMessage`/`deliverMessages`/`deliverMessagesB` → `withAgent sendMessagesB` → `sendMessagesB_` → `withConnLocks`:
| # | Trigger | Chat function | ConnIds locked | Risk |
|---|---------|--------------|----------------|------|
| 1 | Group CON (Invitee) | `introduceToAll` → broadcast XGrpMemNew | **ALL member connIds** | **HIGHEST** |
| 2 | Group MSG XGrpLinkAcpt | `introduceToRemaining` → broadcast | **ALL member connIds** | **HIGHEST** |
| 3 | Group CON (Invitee) | `sendIntroductions` → batch intros to new member | new member connId | Medium |
| 4 | Group CON (Invitee) | `sendHistory` → batch to new member | new member connId | Medium |
| 5 | Group CON | `sendPendingGroupMessages` | member connId | Medium |
| 6 | Group SENT | `sendPendingGroupMessages` | member connId | Medium |
| 7 | Group QCONT | `sendPendingGroupMessages` | member connId | Medium |
| 8 | Group CON (PendingReview) | `introduceToModerators` → to moderators | moderator connIds | Medium |
| 9 | Group CON (PreMember) | `sendXGrpMemCon` → to host | host connId | Low |
| 10 | Group CON (PreMember) | `probeMatchingMemberContact` → probes + hashes | member + N matching connIds | Medium |
| 11 | Direct CON | `probeMatchingMembers` → probes + hashes | contact + N matching connIds | Medium |
| 12 | Direct JOINED | `sendAutoReply` | contact connId | Low |
| 13 | Group JOINED | `sendGroupAutoReply` | member connId | Low |
| 14 | Group INV | `sendXGrpMemInv` → to host | host connId | Low |
| 15 | Group INV (legacy) | `sendGrpInvitation` → to contact | contact connId | Low |
| 16 | Group MSG XGrpMemInv | `xGrpMemInv` → `sendGroupMemberMessage` | re-member connId | Low |
| 17 | Group MSG XGrpMemDel | `forwardToMember` | deleted member connId | Low |
| 18 | Group MSG XGrpLinkMem | `probeMatchingMemberContact` | member + N matching connIds | Medium |
| 19 | Group MSG (dup relay) | `saveGroupRcvMsg` error → `sendDirectMemberMessage` | forwarder connId | Low |
| 20 | SFDONE | `sendFileDescriptions` → to recipients | recipient connIds | Medium |
| 21 | Group MSG XGrpLinkAcpt | `sendHistory` → to accepted member | accepted member connId | Medium |
| 22 | Direct MSG (autoAccept) | `autoAcceptFile` → inline accept reply | contact connId | Low (test-only config) |
### Key observations
1. **Single bottleneck**: All 22 paths converge on `sendMessagesB_` → `withConnLocks` (Agent.hs:1708). The deadlock is between this lock acquisition and any worker thread holding `connLock` + blocking on `writeTBQueue subQ`.
2. **Highest-risk paths** (#1, #2): Broadcasting to ALL group members in `introduceToAll` / `introduceToRemaining` acquires `withConnLocks` on ALL member connIds in a single batch. For large groups, this holds the agentSubscriber thread for a long time, during which subQ fills, which causes worker threads holding connLock on any of those connIds to deadlock.
3. **Medium-risk paths** (#5-7): `sendPendingGroupMessages` fires on every CON/SENT/QCONT. These are frequent and lock the member's connId, which is the SAME connId that a delivery worker or ACK worker may hold while writing to subQ.
---
## Analysis: `withConnLocks` in `sendMessagesB_`
### FACT: the lock protects ratchet encryption state
`sendMessagesB_` (Agent.hs:1708-1713) acquires `withConnLocks` and executes:
1. **`getConn_`** — reads connection metadata, send queues from DB
2. **`setConnPQSupport`** — updates PQ encryption flag per connection
3. **`enqueueMessagesB`** → `enqueueMessageB` → `storeSentMsg_` which calls:
- **`updateSndIds`** (AgentStore.hs:899) — increments `internalSndId` (sequential send counter)
- **`agentRatchetEncryptHeader`** (Agent.hs:3698) — reads current ratchet via `getRatchetForUpdate`, encrypts message header via `rcEncryptHeader`, writes advanced ratchet state via `updateRatchet`
- **`createSndMsg`** + **`createSndMsgDelivery`** — inserts message and delivery records
All operations run within `unsafeWithStore` → `withTransaction` (single DB transaction per batch).
### FACT: the lock CANNOT be removed
Without `withConnLocks`, concurrent `sendMessagesB_` calls targeting the same connection would:
- Read the same ratchet state, both encrypt, one overwrite the other → **ratchet desync** (unrecoverable)
- Get duplicate `internalSndId` values → **message ID collision**
- Race on `setConnPQSupport` → **PQ state inconsistency**
The lock serializes ALL operations on the connection's encryption state. Removing it would introduce data corruption.
Note: `sendMessage` (singular, line 530) uses the same `sendMessagesB_` function — there is no lock-free send path.
### Eliminated strategies
- **Strategy C (remove lock)**: The lock protects ratchet encryption. Removing it causes unrecoverable ratchet desync. Eliminated.
- **Strategy A (async dispatch)**: All 22 chat-layer callers use `deliverMessagesB` return values (delivery IDs, PQ state) synchronously. `forkIO` loses results. Eliminated.
- **Strategy W (isFullTBQueue + pending TVar)**: The existing pattern (lines 1937, 3216) buffers events in a local TVar and flushes after lock release. Between lock release and flush, another thread can acquire the same connLock and write events to subQ — reordering events within the same connection. This trades a visible deadlock for invisible ordering bugs. Eliminated.
- **Strategy O (per-connection overflow queues)**: Bounded overflow queues with "drop when full" were analyzed. Drop consequences are unacceptable at 5 of 6 write sites — CONF, INFO, CON cause permanent connection failure after ACK; INV loses connection invitations; SENT/MERR leave messages stuck forever. Unbounded overflow defeats backpressure. Eliminated.
---
## Solution: move subQ writes outside connLock
### Root cause
The `writeTBQueue subQ` calls at the 3 deadlock sites are inside `connLock` by accident of code structure, not necessity. `connLock` protects ratchet encryption state and DB consistency. The `notify` calls write informational events to `subQ` — they do not modify any state that `connLock` protects.
Moving the writes outside the lock scope eliminates the deadlock: blocking `writeTBQueue subQ` without holding `connLock` is safe — agentSubscriber is free to acquire the lock, process events, and drain `subQ`.
### Why reordering doesn't matter at these sites
The chat layer handlers for the 3 deadlock site events do NOT advance the ratchet:
| Event | Chat handler | Calls sendMessagesB_? |
|-------|-------------|----------------------|
| SWITCH SPCompleted | Creates internal chat item, updates UI | **No** |
| ERR INTERNAL | Logs error to view | **No** |
| MSGNTF | `toView CEvtNtfMessage` → empty output | **No** |
Events that DO trigger ratchet advances (CON, SENT, QCONT → `sendPendingGroupMessages` → `sendMessagesB_`) are all already written OUTSIDE `connLock` in the current code.
Ratchet state lives in the DB, not in subQ events. agentSubscriber processes events sequentially regardless of arrival order. The SENT-before-SWITCH race already exists in the current code (new queue worker writes SENT outside connLock while old queue worker writes SWITCH inside connLock).
### Fix: Site 1 — `runSmpQueueMsgDelivery` AM_QTEST_ (line 2187)
Restructure `withConnLock` to return the event, write outside.
**Current code** (Agent.hs:2187-2214):
```haskell
AM_QTEST_ -> withConnLock c connId "runSmpQueueMsgDelivery AM_QTEST_" $ do
withStore' c $ \db -> setSndQueueStatus db sq Active
SomeConn _ conn <- withStore c (`getConn` connId)
case conn of
DuplexConnection cData' rqs sqs -> do
let addr = qAddress sq
case findQ addr sqs of
Just SndQueue {dbReplaceQueueId = Just replacedId, primary} ->
case removeQP (\sq' -> dbQId sq' == replacedId && not (sameQueue addr sq')) sqs of
Nothing -> internalErr msgId "sent QTEST: queue not found in connection"
Just (sq', sq'' : sqs') -> do
checkSQSwchStatus sq' SSSendingQTEST
atomically $ TM.delete (qAddress sq') $ smpDeliveryWorkers c
withStore' c $ \db -> do
when primary $ setSndQueuePrimary db connId sq
deletePendingMsgs db connId sq'
deleteConnSndQueue db connId sq'
let sqs'' = sq'' :| sqs'
conn' = DuplexConnection cData' rqs sqs''
cStats <- connectionStats c conn'
notify $ SWITCH QDSnd SPCompleted cStats -- DEADLOCK
_ -> internalErr msgId "sent QTEST: ..." -- DEADLOCK (via notifyDel → notify)
_ -> internalErr msgId "sent QTEST: ..." -- DEADLOCK
_ -> internalErr msgId "QTEST sent not in duplex ..." -- DEADLOCK
```
**New code:**
```haskell
AM_QTEST_ -> do
evt_ <- withConnLock c connId "runSmpQueueMsgDelivery AM_QTEST_" $ do
withStore' c $ \db -> setSndQueueStatus db sq Active
SomeConn _ conn <- withStore c (`getConn` connId)
case conn of
DuplexConnection cData' rqs sqs -> do
let addr = qAddress sq
case findQ addr sqs of
Just SndQueue {dbReplaceQueueId = Just replacedId, primary} ->
case removeQP (\sq' -> dbQId sq' == replacedId && not (sameQueue addr sq')) sqs of
Nothing -> pure $ Left "sent QTEST: queue not found in connection"
Just (sq', sq'' : sqs') -> do
checkSQSwchStatus sq' SSSendingQTEST
atomically $ TM.delete (qAddress sq') $ smpDeliveryWorkers c
withStore' c $ \db -> do
when primary $ setSndQueuePrimary db connId sq
deletePendingMsgs db connId sq'
deleteConnSndQueue db connId sq'
let sqs'' = sq'' :| sqs'
conn' = DuplexConnection cData' rqs sqs''
cStats <- connectionStats c conn'
pure $ Right $ SWITCH QDSnd SPCompleted cStats
_ -> pure $ Left "sent QTEST: there is only one queue in connection"
_ -> pure $ Left "sent QTEST: queue not in connection or not replacing another queue"
_ -> pure $ Left "QTEST sent not in duplex connection"
-- subQ write is now OUTSIDE connLock — blocking writeTBQueue is safe
case evt_ of
Right evt -> notify evt
Left err -> internalErr msgId err
```
All DB operations remain inside the lock. Only `notify`/`internalErr` (which write to subQ) move outside. `internalErr` calls `notifyDel` = `notify >> delMsg` — both `notify` (subQ write) and `delMsg` (`deleteSndMsgDelivery`, keyed on unique msgId) are safe outside the lock. The existing double-delete pattern (`delMsg` inside `internalErr` + `delMsgKeep` at line 2216) is preserved.
### Fix: Sites 2 & 3 — `ackQueueMessage::sendMsgNtf` (line 2386)
Change `ackQueueMessage` to return the MSGNTF event instead of writing it to subQ. Callers write to subQ after releasing connLock.
**Current code** (Agent.hs:2371-2386):
```haskell
ackQueueMessage :: AgentClient -> RcvQueue -> SMP.MsgId -> AM ()
ackQueueMessage c rq@RcvQueue {userId, connId, server} srvMsgId = do
atomically $ incSMPServerStat c userId server ackAttempts
tryAllErrors (sendAck c rq srvMsgId) >>= \case
Right _ -> sendMsgNtf ackMsgs
Left (SMP _ SMP.NO_MSG) -> sendMsgNtf ackNoMsgErrs
Left e -> ...
where
sendMsgNtf stat = do
atomically $ incSMPServerStat c userId server stat
whenM (liftIO $ hasGetLock c rq) $ do
atomically $ releaseGetLock c rq
brokerTs_ <- eitherToMaybe <$> tryAllErrors (withStore c $ \db -> getRcvMsgBrokerTs db connId srvMsgId)
atomically $ writeTBQueue (subQ c) ("", connId, AEvt SAEConn $ MSGNTF srvMsgId brokerTs_)
```
**New code** — return `Maybe ATransmission` instead of writing:
```haskell
ackQueueMessage :: AgentClient -> RcvQueue -> SMP.MsgId -> AM (Maybe ATransmission)
ackQueueMessage c rq@RcvQueue {userId, connId, server} srvMsgId = do
atomically $ incSMPServerStat c userId server ackAttempts
tryAllErrors (sendAck c rq srvMsgId) >>= \case
Right _ -> sendMsgNtf ackMsgs
Left (SMP _ SMP.NO_MSG) -> sendMsgNtf ackNoMsgErrs
Left e -> ... >> pure Nothing
where
sendMsgNtf stat = do
atomically $ incSMPServerStat c userId server stat
ifM (liftIO $ hasGetLock c rq)
(do atomically $ releaseGetLock c rq
brokerTs_ <- eitherToMaybe <$> tryAllErrors (withStore c $ \db -> getRcvMsgBrokerTs db connId srvMsgId)
pure $ Just ("", connId, AEvt SAEConn $ MSGNTF srvMsgId brokerTs_))
(pure Nothing)
```
**Caller 1: `ackMessage'`** (Agent.hs:2253-2267) — return event from `withConnLock`, write after:
```haskell
ackMessage' c connId msgId rcptInfo_ = do
t_ <- withConnLock c connId "ackMessage" $ do
SomeConn _ conn <- withStore c (`getConn` connId)
case conn of
DuplexConnection {} -> do
t_ <- ack
sendRcpt conn
del
pure t_
RcvConnection {} -> do
t_ <- ack
del
pure t_
SndConnection {} -> throwE $ CONN SIMPLEX "ackMessage"
ContactConnection {} -> throwE $ CMD PROHIBITED "ackMessage: ContactConnection"
NewConnection _ -> throwE $ CMD PROHIBITED "ackMessage: NewConnection"
-- subQ write is OUTSIDE connLock
case t_ of
Just t -> atomically $ writeTBQueue (subQ c) t
Nothing -> pure ()
```
**Caller 2: `ICAck` / `ICAckDel`** (Agent.hs:1823-1824) — inline `tryWithLock` as `tryCommand` + `withConnLock`, write subQ between the two scopes:
`tryWithLock name = tryCommand . withConnLock c connId name` — by inlining, the subQ write can be placed outside `withConnLock` but inside `tryCommand` (retaining retry/error handling).
```haskell
ICAck rId srvMsgId -> withServer $ \srv ->
tryCommand $ do
t_ <- withConnLock c connId "ICAck" $ ack srv rId srvMsgId
-- subQ write is OUTSIDE connLock — cannot deadlock with agentSubscriber
forM_ t_ $ atomically . writeTBQueue subQ
ICAckDel rId srvMsgId msgId -> withServer $ \srv ->
tryCommand $ do
t_ <- withConnLock c connId "ICAckDel" $ do
t_ <- ack srv rId srvMsgId
withStore' c (\db -> deleteMsg db connId msgId)
pure t_
-- subQ write is OUTSIDE connLock — cannot deadlock with agentSubscriber
forM_ t_ $ atomically . writeTBQueue subQ
```
Where `ack` now returns `AM (Maybe ATransmission)`:
```haskell
ack srv rId srvMsgId = do
rq <- withStore c $ \db -> getRcvQueue db connId srv rId
ackQueueMessage c rq srvMsgId
```
All subQ writes for MSGNTF are now outside connLock. FIFO ordering is preserved — no `nonBlockingWriteTBQueue`, no forked threads. The same thread that held the lock writes to subQ sequentially after releasing it.
### Race analysis
Window between connLock release and subQ write at Site 1:
| Thread | Can acquire connLock(X)? | Writes subQ? | Consequence |
|--------|-------------------------|-------------|-------------|
| agentSubscriber via sendMessagesB_ | Yes | **No** (encrypts only) | No race |
| processSMP for connId X | Yes | Yes (pending flush) | MSG before SWITCH — cosmetic |
| runCommandProcessing for connId X | Yes | Yes (pending flush) | Command response before SWITCH — cosmetic |
| New queue delivery worker | No (SENT outside lock) | Yes | SENT before SWITCH — cosmetic, **already exists in current code** |
All of these races affect only cosmetic UI ordering; none affects ratchet state, protocol correctness, or message delivery.
### Summary of changes
| File | Change | Lines affected |
|------|--------|---------------|
| Agent.hs | Restructure AM_QTEST_ to return event from `withConnLock`, write outside | ~2187-2214 |
| Agent.hs | Change `ackQueueMessage` return type to `AM (Maybe ATransmission)`, return event instead of writing | ~2371-2386 |
| Agent.hs | `ackMessage'`: return event from `withConnLock`, write outside | ~2253-2267 |
| Agent.hs | `ICAck`/`ICAckDel`: inline `tryCommand` + `withConnLock`, write subQ between scopes | ~1823-1824 |
| Agent.hs | `ack` helper: propagate new return type | ~1899-1901 |
No new data structures. No new modules. No changes to other write sites (1937, 3216 — already safe). ~25 lines changed total.
+150
View File
@@ -0,0 +1,150 @@
# SimpleX Network Protocol Specifications — Governance and Evolution (draft)
## Why this document exists
SimpleX Network protocol specifications must evolve as the network grows. This document defines how specifications change, who governs those changes, and how the history of changes is preserved.
### Lessons from the web: why ratcheted governance matters
The web's governance history demonstrates both the necessity of consortium governance and the dangers of getting the transition wrong.
[Tim Berners-Lee invented the web in 1989, releasing it publicly in 1991](https://home.cern/science/computing/birth-web/short-history-web). [Netscape took over in 1994](https://en.wikipedia.org/wiki/Netscape_Navigator), driving rapid innovation as a single company — SSL, cookies, JavaScript, and the features that made the web commercially viable. In 1994, [W3C was founded](https://www.w3.org/about/history/) as a consortium hosted across multiple independent institutions (MIT in the US, INRIA/ERCIM in Europe, Keio University in Japan, later Beihang University in China) to govern web standards.
The transition from company-led innovation to consortium governance was abrupt rather than gradual. Netscape's decline (accelerated by the [browser wars](https://en.wikipedia.org/wiki/Browser_wars) and [AOL acquisition](https://cybercultural.com/p/1999-the-fall-of-netscape-and-the-rise-of-mozilla/)) transferred control to a standards body that prioritized process over progress. The result was [a lost decade of web stagnation](https://eev.ee/blog/2020/02/01/old-css-new-css/): CSS 2.0 shipped in 1998; CSS 2.1 didn't reach Candidate Recommendation until 2004 and wasn't finalized until 2011. W3C pursued XHTML and rejected proposed enhancements to HTML, until frustrated engineers from Apple, Mozilla, and Opera formed [WHATWG in 2004](https://en.wikipedia.org/wiki/WHATWG) to build HTML5 outside W3C's process. The abrupt governance transition, without a mechanism to balance community guarantees against the imperative to continue evolving the product at pace, dramatically slowed web evolution at the time it was needed most.
Then in 2023, [W3C restructured from a multi-host consortium into a single 501(c)(3) nonprofit entity](https://www.w3.org/press-releases/2023/w3c-le-launched/) — W3C Inc, incorporated in the US. The previous structure distributed governance across four independent university hosts in different countries, making capture by any single entity structurally difficult. The new structure concentrates governance in a single legal entity with a board of directors. While presented as modernization, this effectively ended the decentralized consortium model that had protected web standards for nearly three decades.
### The governance double ratchet
SimpleX follows the same Netscape-to-consortium evolution path, but with two ratchets designed to prevent both failure modes — stagnation from premature governance transfer, and capture from governance centralization:
- **Licensing ratchet**: all contributed IP is licensed under AGPLv3 (software) and Creative Commons (documentation), perpetually and irrevocably. What is licensed cannot be unlicensed. If a Party transfers Licensed IP, the licensing obligations transfer with it.
- **Governance ratchet**: power can be given to the SimpleX Network Consortium, but never taken back. The Consortium Agreement requires majority decision of all Governing Parties for changes to the agreement itself, IP policy, and admission or removal of parties.
The web's history shows why a ratcheted transition is needed. It allows the company to continue driving rapid product innovation (as Netscape did for the web) while incrementally and irreversibly transferring governance to the consortium, avoiding both the abrupt handover that stalled web evolution and the centralization that later undermined it.
### Specification governance via the Consortium Agreement
The SimpleX Network Consortium Agreement (being deployed in 2026) establishes two levels of intellectual property governance: **Licensed IP** (all contributed protocol specifications, software, and documentation, licensed perpetually and irrevocably) and **Core IP** (the subset essential to the network, requiring consortium governance to change). The distinction between these levels and how they map to the RFC process is described in [Standard vs Core specifications](#standard-vs-core-specifications) below.
## Specification change process: protocol specifications and RFCs
Protocol knowledge lives in two places:
### `protocol/` — Consolidated specifications
Each file is a complete, self-contained description of a protocol as it exists today. Like consolidated legislation in the UK legal system: the full current law in one document, not a patchwork of amendments.
Consolidated specifications are maintained on every code change that affects protocol behavior. With LLMs, the cost of maintaining consolidated documents collapses — reworking prose to incorporate a new RFC is now inexpensive relative to the value of a single authoritative document per protocol.
Implementers read `protocol/`. They should never need to reconstruct current behavior from a base spec plus a chain of RFCs.
### `rfcs/` — Protocol evolution commits
Each RFC describes a single change to a protocol specification. RFCs are the atomic unit of protocol evolution — analogous to commits in version control, or amending acts in legislation.
An RFC is not part of the protocol specification. It becomes part of the specification only when embedded into the consolidated `protocol/` document. The RFC itself remains as a permanent historical record of what changed, when, and why.
## RFC lifecycle
```
                  ┌——> done/ ——> standard/
draft (root) ——>──┤
                  └——> rejected/
```
### Draft — `rfcs/*.md`
A proposal for a protocol change. Not yet implemented. Active proposals live in the `rfcs/` root directory.
Named by proposal date: `YYYY-MM-DD-topic.md`.
A draft may be rejected if the proposal is considered but not accepted for implementation.
### Done — `rfcs/done/`
Implemented in code. The protocol change described by this RFC exists in the codebase, but the RFC has not yet been verified against the actual implementation (code may have diverged from the proposal during implementation).
### Standard — `rfcs/standard/`
Verified against the actual implementation and synchronized with code. The RFC accurately describes what was implemented. This is a permanent historical record — standard RFCs are never modified or removed.
On promotion to standard, the RFC is:
1. Renamed from proposal date to standardization date: `YYYY-MM-DD-topic.md` (new date, same topic slug)
2. Updated with a document history header capturing the full lifecycle
3. Embedded into the corresponding `protocol/` consolidated specification
The `protocol/` document references embedded RFCs by name (e.g., "Private message routing added by RFC 2023-09-12-second-relays, standardized 2026-XX-XX"), similar to UK legislation citing the amending act for each clause.
Protocol version numbers make it clear which RFCs are included in which protocol revision — no separate tracking is needed.
### Rejected — `rfcs/rejected/`
Draft proposals that were considered but not accepted for implementation. Only drafts move to rejected — once an RFC is implemented (done/), it proceeds to standard/ after verification. Preserved as a historical record of design decisions.
### Document history header
Every RFC in `standard/` carries a history header:
```
---
Proposed: YYYY-MM-DD
Implemented: YYYY-MM-DD
Standardized: YYYY-MM-DD
Protocol: simplex-messaging v9 (or whichever protocol this amends)
---
```
## Governance
SimpleX Network follows the Netscape-to-W3C evolution path, with ratcheted rather than abrupt transitions:
| Phase | Period | Governance | Development process |
|-------|--------|-----------|-------------------|
| Protocol invented | 2020 | Two people | Prototype developed |
| SimpleX Chat Ltd | 2022 | One company | Product-first: code leads, specs follow |
| SimpleX Network Consortium | 2026 | Agreement of SimpleX Chat Ltd and non-profit entities | Product-first for standard; standards-first for core |
| Decentralized governance | Future | TBD (DAO research ongoing) | Standards-first |
### Current: product-first development
SimpleX protocols currently follow a product-first development process: requirements drive code, code drives specification. RFCs are written as design proposals before implementation, but implementation details are figured out in code. Consolidated protocol specifications in `protocol/` are then amended to match the implementation.
This process is governed by SimpleX Chat Ltd as the IP Holding Party under the Consortium Agreement.
Any Specification Author (as defined in the Consortium Agreement) may propose RFCs. Acceptance and standardization decisions are made by SimpleX Chat Ltd during the current product-first phase.
### Standard vs Core specifications
The distinction between standard and core maps directly to the two levels of IP governance in the Consortium Agreement, and reflects the difference between product-first and standards-first development:
**Standard** — Licensed IP, not yet under consortium governance. Governed by the company.
All contributed protocol specifications are Licensed IP under the Consortium Agreement. Standard specifications follow product-first development: the company can evolve them with product needs, and they must be maintained on every code change that affects protocol behavior.
Standard specifications live in `rfcs/standard/` and `protocol/`.
**Core** — Governed IP, governed by the consortium.
A subset of standard specifications will be designated as Core IP under the Consortium Agreement. Core specifications will follow standards-first development: specification changes must be agreed via Governing Decision before code changes.
This is a legally binding commitment. Once Licensed IP is included in Core IP, the company that owns the code cannot unilaterally change it: the Consortium Agreement requires a Governing Decision for any change to Core IP. This protects the fundamental properties of the network (privacy, security, decentralization) from unilateral modification by any single party.
The designation of specific specifications as Core IP is itself a Governing Decision requiring unanimous approval. The transition will happen incrementally as protocols stabilize — the governance ratchet ensures that each designation is irreversible.
The exact mechanism for distinguishing core from standard within the RFC and protocol folder structure is TBD — it will be decided as the first protocols are designated as Core IP.
### Future: standards-first development
As more protocols are designated as Core IP, development naturally transitions to a standards-first process for a growing portion of the protocol suite. The governance ratchet ensures this transition is gradual and irreversible — each protocol that becomes core gains the protection of consortium governance permanently, while remaining standard protocols continue to evolve at product pace.
## Current state
| Location | Contents | Count |
|----------|----------|-------|
| `protocol/` | Consolidated specs (SMP v9, Agent v5, XFTP v2, XRCP v1, Push v2, PQDR v1) | 6 specs + overview |
| `rfcs/` root | Active draft proposals | 19 |
| `rfcs/done/` | Implemented, not yet verified | 25 |
| `rfcs/standard/` | Verified against implementation | (to be populated) |
| `rfcs/rejected/` | Draft proposals not accepted | 7 |
+8 -3
View File
@@ -1,7 +1,7 @@
cabal-version: 1.12
name: simplexmq
version: 6.5.0.7
version: 6.5.0.9
synopsis: SimpleXMQ message broker
description: This package includes <./docs/Simplex-Messaging-Server.html server>,
<./docs/Simplex-Messaging-Client.html client> and
@@ -349,6 +349,7 @@ library
, process ==1.6.*
, temporary ==1.3.*
, websockets ==0.12.*
, zlib >=0.6 && <0.8
if flag(client_postgres) || flag(server_postgres)
build-depends:
postgresql-libpq >=0.10.0.0
@@ -499,6 +500,7 @@ test-suite simplexmq-test
XFTPCLI
XFTPClient
XFTPServerTests
XFTPWebTests
Static
Static.Embedded
Paths_simplexmq
@@ -514,6 +516,8 @@ test-suite simplexmq-test
AgentTests.NotificationTests
NtfClient
NtfServerTests
if flag(client_postgres) || flag(server_postgres)
other-modules:
PostgresSchemaDump
hs-source-dirs:
tests
@@ -528,6 +532,7 @@ test-suite simplexmq-test
, async
, base64-bytestring
, bytestring
, case-insensitive ==1.2.*
, containers
, crypton
, crypton-x509
@@ -547,6 +552,7 @@ test-suite simplexmq-test
, ini
, iso8601-time
, main-tester ==0.2.*
, memory
, mtl
, network
, QuickCheck ==2.14.*
@@ -575,8 +581,7 @@ test-suite simplexmq-test
cpp-options: -DdbPostgres
else
build-depends:
memory
, sqlcipher-simple
sqlcipher-simple
if !flag(client_postgres) || flag(client_postgres) || flag(server_postgres)
build-depends:
deepseq ==1.4.*
+2 -2
View File
@@ -47,7 +47,7 @@ import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
import Data.Maybe (fromMaybe, mapMaybe)
import qualified Data.Set as S
import Data.Text (Text)
import Data.Text (Text, pack)
import Data.Time.Clock (getCurrentTime)
import Data.Time.Format (defaultTimeLocale, formatTime)
import Simplex.FileTransfer.Chunks (toKB)
@@ -433,7 +433,7 @@ runXFTPSndPrepareWorker c Worker {doWork} = do
encryptFileForUpload :: SndFile -> FilePath -> AM (FileDigest, [(XFTPChunkSpec, FileDigest)])
encryptFileForUpload SndFile {key, nonce, srcFile, redirect} fsEncPath = do
let CryptoFile {filePath} = srcFile
fileName = takeFileName filePath
fileName = pack $ takeFileName filePath
fileSize <- liftIO $ fromInteger <$> CF.getFileContentsSize srcFile
when (fileSize > maxFileSizeHard) $ throwE $ FILE FT.SIZE
let fileHdr = smpEncode FileHeader {fileName, fileExtra = Nothing}
+11 -1
View File
@@ -1,4 +1,14 @@
module Simplex.FileTransfer.Chunks where
module Simplex.FileTransfer.Chunks
( serverChunkSizes,
chunkSize0,
chunkSize1,
chunkSize2,
chunkSize3,
kb,
toKB,
mb,
gb,
) where
import Data.Word (Word32)
+30 -8
View File
@@ -9,7 +9,28 @@
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.FileTransfer.Client where
module Simplex.FileTransfer.Client
( XFTPClient (..),
XFTPClientConfig (..),
XFTPChunkSpec (..),
XFTPClientError,
defaultXFTPClientConfig,
getXFTPClient,
closeXFTPClient,
xftpClientServer,
xftpTransportHost,
createXFTPChunk,
addXFTPRecipients,
uploadXFTPChunk,
downloadXFTPChunk,
deleteXFTPChunk,
ackXFTPChunk,
pingXFTP,
singleChunkSize,
prepareChunkSizes,
prepareChunkSpecs,
getChunkDigest,
) where
import qualified Control.Exception as E
import Control.Logger.Simple
@@ -41,11 +62,11 @@ import Simplex.Messaging.Client
NetworkRequestMode (..),
ProtocolClientError (..),
TransportSession,
netTimeoutInt,
chooseTransportHost,
defaultNetworkConfig,
transportClientConfig,
clientSocksCredentials,
defaultNetworkConfig,
netTimeoutInt,
transportClientConfig,
unexpectedResponse,
clientHandlers,
useWebPort,
@@ -56,12 +77,12 @@ import Simplex.Messaging.Encoding (smpDecode, smpEncode)
import Simplex.Messaging.Encoding.String
import Simplex.Messaging.Protocol
( BasicAuth,
NetworkError (..),
Protocol (..),
ProtocolServer (..),
RecipientId,
SenderId,
pattern NoEntity,
NetworkError (..),
)
import Simplex.Messaging.Transport (ALPN, CertChainPubKey (..), HandshakeError (..), THandleAuth (..), THandleParams (..), TransportError (..), TransportPeer (..), defaultSupportedParams)
import Simplex.Messaging.Transport.Client (TransportClientConfig (..), TransportHost)
@@ -129,8 +150,9 @@ getXFTPClient transportSession@(_, srv, _) config@XFTPClientConfig {clientALPN,
thParams0 = THandleParams {sessionId, blockSize = xftpBlockSize, thVersion = v, thServerVRange, thAuth = Nothing, implySessId = False, encryptBlock = Nothing, batch = True, serviceAuth = False}
logDebug $ "Client negotiated handshake protocol: " <> tshow sessionALPN
thParams@THandleParams {thVersion} <- case sessionALPN of
Just alpn | alpn == xftpALPNv1 || alpn == httpALPN11 ->
xftpClientHandshakeV1 serverVRange keyHash http2Client thParams0
Just alpn
| alpn == xftpALPNv1 || alpn == httpALPN11 ->
xftpClientHandshakeV1 serverVRange keyHash http2Client thParams0
_ -> pure thParams0
logDebug $ "Client negotiated protocol: " <> tshow thVersion
let c = XFTPClient {http2Client, thParams, transportSession, config}
@@ -215,7 +237,7 @@ sendXFTPTransmission XFTPClient {config, thParams, http2Client} t chunkSpec_ = d
HTTP2Response {respBody = body@HTTP2Body {bodyHead}} <- withExceptT xftpClientError . ExceptT $ sendRequest http2Client req (Just reqTimeout)
when (B.length bodyHead /= xftpBlockSize) $ throwE $ PCEResponseError BLOCK
-- TODO validate that the file ID is the same as in the request?
(_, _fId, respOrErr) <-liftEither $ first PCEResponseError $ xftpDecodeTClient thParams bodyHead
(_, _fId, respOrErr) <- liftEither $ first PCEResponseError $ xftpDecodeTClient thParams bodyHead
case respOrErr of
Right r -> case protocolError r of
Just e -> throwE $ PCEProtocolError e
+11 -1
View File
@@ -5,7 +5,17 @@
{-# LANGUAGE NumericUnderscores #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.FileTransfer.Client.Agent where
module Simplex.FileTransfer.Client.Agent
( XFTPClientVar,
XFTPClientAgent (..),
XFTPClientAgentConfig (..),
XFTPClientAgentError (..),
defaultXFTPClientAgentConfig,
newXFTPAgent,
getXFTPServerClient,
showServer,
closeXFTPServerClient,
) where
import Control.Logger.Simple (logInfo)
import Control.Monad
+74 -22
View File
@@ -16,6 +16,9 @@ module Simplex.FileTransfer.Client.Main
xftpClientCLI,
cliSendFile,
cliSendFileOpts,
encodeWebURI,
decodeWebURI,
fileWebLink,
singleChunkSize,
prepareChunkSizes,
prepareChunkSpecs,
@@ -23,6 +26,7 @@ module Simplex.FileTransfer.Client.Main
)
where
import qualified Codec.Compression.Zlib.Raw as Z
import Control.Logger.Simple
import Control.Monad
import Control.Monad.Except
@@ -30,17 +34,19 @@ import Control.Monad.Trans.Except
import Crypto.Random (ChaChaDRG)
import qualified Data.Attoparsec.ByteString.Char8 as A
import Data.Bifunctor (first)
import qualified Data.ByteString.Base64.URL as U
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy.Char8 as LB
import Data.Char (toLower)
import Data.Either (partitionEithers)
import Data.Int (Int64)
import Data.List (foldl', sortOn)
import Data.List (foldl', isPrefixOf, sortOn)
import Data.List.NonEmpty (NonEmpty (..), nonEmpty)
import qualified Data.List.NonEmpty as L
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
import Data.Maybe (fromMaybe)
import Data.Text (Text)
import qualified Data.Text as T
import Data.Word (Word32)
import GHC.Records (HasField (getField))
@@ -62,7 +68,7 @@ import qualified Simplex.Messaging.Crypto.Lazy as LC
import Simplex.Messaging.Encoding
import Simplex.Messaging.Encoding.String (StrEncoding (..))
import Simplex.Messaging.Parsers (parseAll)
import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), SenderId, SndPrivateAuthKey, XFTPServer, XFTPServerWithAuth)
import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), ProtocolServer (..), SenderId, SndPrivateAuthKey, XFTPServer, XFTPServerWithAuth)
import Simplex.Messaging.Server.CLI (getCliCommand')
import Simplex.Messaging.Util (groupAllOn, ifM, tshow, whenM)
import System.Exit (exitFailure)
@@ -242,7 +248,8 @@ cliSendFile opts = cliSendFileOpts opts True $ printProgress "Uploaded"
cliSendFileOpts :: SendOptions -> Bool -> (Int64 -> Int64 -> IO ()) -> ExceptT CLIError IO ()
cliSendFileOpts SendOptions {filePath, outputDir, numRecipients, xftpServers, retryCount, tempPath, verbose} printInfo notifyProgress = do
let (_, fileName) = splitFileName filePath
let (_, fileNameStr) = splitFileName filePath
fileName = T.pack fileNameStr
liftIO $ when printInfo $ printNoNewLine "Encrypting file..."
g <- liftIO C.newRandom
(encPath, fdRcv, fdSnd, chunkSpecs, encSize) <- encryptFileForUpload g fileName
@@ -254,14 +261,18 @@ cliSendFileOpts SendOptions {filePath, outputDir, numRecipients, xftpServers, re
liftIO $ do
let fdRcvs = createRcvFileDescriptions fdRcv sentChunks
fdSnd' = createSndFileDescription fdSnd sentChunks
(fdRcvPaths, fdSndPath) <- writeFileDescriptions fileName fdRcvs fdSnd'
(fdRcvPaths, fdSndPath) <- writeFileDescriptions fileNameStr fdRcvs fdSnd'
when printInfo $ do
printNoNewLine "File uploaded!"
putStrLn $ "\nSender file description: " <> fdSndPath
putStrLn "Pass file descriptions to the recipient(s):"
forM_ fdRcvPaths putStrLn
when printInfo $ case fdRcvs of
rcvFd : _ -> forM_ (fileWebLink rcvFd) $ \(host, fragment) ->
putStrLn $ "\nWeb link:\nhttps://" <> B.unpack host <> "/#" <> B.unpack fragment
_ -> pure ()
where
encryptFileForUpload :: TVar ChaChaDRG -> String -> ExceptT CLIError IO (FilePath, FileDescription 'FRecipient, FileDescription 'FSender, [XFTPChunkSpec], Int64)
encryptFileForUpload :: TVar ChaChaDRG -> Text -> ExceptT CLIError IO (FilePath, FileDescription 'FRecipient, FileDescription 'FSender, [XFTPChunkSpec], Int64)
encryptFileForUpload g fileName = do
fileSize <- fromInteger <$> getFileSize filePath
when (fileSize > maxFileSize) $ throwE $ CLIError $ "Files bigger than " <> maxFileSizeStr <> " are not supported"
@@ -387,10 +398,16 @@ cliSendFileOpts SendOptions {filePath, outputDir, numRecipients, xftpServers, re
cliReceiveFile :: ReceiveOptions -> ExceptT CLIError IO ()
cliReceiveFile ReceiveOptions {fileDescription, filePath, retryCount, tempPath, verbose, yes} =
getFileDescription' fileDescription >>= receive
getInputFileDescription >>= receive 1
where
receive :: ValidFileDescription 'FRecipient -> ExceptT CLIError IO ()
receive (ValidFileDescription FileDescription {size, digest, key, nonce, chunks}) = do
getInputFileDescription
| "http://" `isPrefixOf` fileDescription || "https://" `isPrefixOf` fileDescription = do
let fragment = B.pack $ drop 1 $ dropWhile (/= '#') fileDescription
when (B.null fragment) $ throwE $ CLIError "Invalid URL: no fragment"
either (throwE . CLIError . ("Invalid web link: " <>)) pure $ decodeWebURI fragment
| otherwise = getFileDescription' fileDescription
receive :: Int -> ValidFileDescription 'FRecipient -> ExceptT CLIError IO ()
receive depth (ValidFileDescription FileDescription {size, digest, key, nonce, chunks, redirect}) = do
encPath <- getEncPath tempPath "xftp"
createDirectory encPath
a <- liftIO $ newXFTPAgent defaultXFTPClientAgentConfig
@@ -408,13 +425,26 @@ cliReceiveFile ReceiveOptions {fileDescription, filePath, retryCount, tempPath,
when (encDigest /= unFileDigest digest) $ throwE $ CLIError "File digest mismatch"
encSize <- liftIO $ foldM (\s path -> (s +) . fromIntegral <$> getFileSize path) 0 chunkPaths
when (FileSize encSize /= size) $ throwE $ CLIError "File size mismatch"
liftIO $ printNoNewLine "Decrypting file..."
CryptoFile path _ <- withExceptT cliCryptoError $ decryptChunks encSize chunkPaths key nonce $ fmap CF.plain . getFilePath
forM_ chunks $ acknowledgeFileChunk a
whenM (doesPathExist encPath) $ removeDirectoryRecursive encPath
liftIO $ do
printNoNewLine $ "File downloaded: " <> path
removeFD yes fileDescription
case redirect of
Just _
| depth > 0 -> do
CryptoFile tmpFile _ <- withExceptT cliCryptoError $ decryptChunks encSize chunkPaths key nonce $ \_ ->
fmap CF.plain $ uniqueCombine encPath "redirect.yaml"
forM_ chunks $ acknowledgeFileChunk a
yaml <- liftIO $ B.readFile tmpFile
whenM (doesPathExist encPath) $ removeDirectoryRecursive encPath
innerVfd <- either (throwE . CLIError . ("Redirect: invalid file description: " <>)) pure $ strDecode yaml
receive 0 innerVfd
| otherwise -> throwE $ CLIError "Redirect chain too long"
Nothing -> do
liftIO $ printNoNewLine "Decrypting file..."
CryptoFile path _ <- withExceptT cliCryptoError $ decryptChunks encSize chunkPaths key nonce $ fmap CF.plain . getFilePath
forM_ chunks $ acknowledgeFileChunk a
whenM (doesPathExist encPath) $ removeDirectoryRecursive encPath
liftIO $ do
printNoNewLine $ "File downloaded: " <> path
unless ("http://" `isPrefixOf` fileDescription || "https://" `isPrefixOf` fileDescription) $
removeFD yes fileDescription
downloadFileChunk :: TVar ChaChaDRG -> XFTPClientAgent -> FilePath -> FileSize Int64 -> TVar [Int64] -> FileChunk -> ExceptT CLIError IO (Int, FilePath)
downloadFileChunk g a encPath (FileSize encSize) downloadedChunks FileChunk {chunkNo, chunkSize, digest, replicas = replica : _} = do
let FileChunkReplica {server, replicaId, replicaKey} = replica
@@ -430,13 +460,14 @@ cliReceiveFile ReceiveOptions {fileDescription, filePath, retryCount, tempPath,
when verbose $ putStrLn ""
pure (chunkNo, chunkPath)
downloadFileChunk _ _ _ _ _ _ = throwE $ CLIError "chunk has no replicas"
getFilePath :: String -> ExceptT String IO FilePath
getFilePath name =
case filePath of
Just path ->
ifM (doesDirectoryExist path) (uniqueCombine path name) $
ifM (doesFileExist path) (throwE "File already exists") (pure path)
_ -> (`uniqueCombine` name) . (</> "Downloads") =<< getHomeDirectory
getFilePath :: Text -> ExceptT String IO FilePath
getFilePath name = case filePath of
Just path ->
ifM (doesDirectoryExist path) (uniqueCombine path name') $
ifM (doesFileExist path) (throwE "File already exists") (pure path)
_ -> (`uniqueCombine` name') . (</> "Downloads") =<< getHomeDirectory
where
name' = T.unpack name
acknowledgeFileChunk :: XFTPClientAgent -> FileChunk -> ExceptT CLIError IO ()
acknowledgeFileChunk a FileChunk {replicas = replica : _} = do
let FileChunkReplica {server, replicaId, replicaKey} = replica
@@ -552,3 +583,24 @@ cliRandomFile RandomFileOptions {filePath, fileSize = FileSize size} = do
B.hPut h bytes
when (sz > mb') $ saveRandomFile h (sz - mb')
mb' = mb 1
-- | Encode file description as web-compatible URI fragment.
-- Result is base64url(deflateRaw(YAML)), no leading '#'.
encodeWebURI :: FileDescription 'FRecipient -> B.ByteString
encodeWebURI fd = U.encode $ LB.toStrict $ Z.compress $ LB.fromStrict $ strEncode fd
-- | Decode web URI fragment to validated file description.
-- Input is base64url-encoded DEFLATE-compressed YAML, no leading '#'.
decodeWebURI :: B.ByteString -> Either String (ValidFileDescription 'FRecipient)
decodeWebURI fragment = do
compressed <- U.decode fragment
let yaml = LB.toStrict $ Z.decompress $ LB.fromStrict compressed
strDecode yaml >>= validateFileDescription
-- | Extract web link host and URI fragment from a file description.
-- Returns (hostname, uriFragment) for https://hostname/#uriFragment.
fileWebLink :: FileDescription 'FRecipient -> Maybe (B.ByteString, B.ByteString)
fileWebLink fd@FileDescription {chunks} = case chunks of
(FileChunk {replicas = FileChunkReplica {server = ProtocolServer {host}} : _} : _) ->
Just (strEncode (L.head host), encodeWebURI fd)
_ -> Nothing
+3 -1
View File
@@ -1,7 +1,9 @@
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.FileTransfer.Client.Presets where
module Simplex.FileTransfer.Client.Presets
( defaultXFTPServers,
) where
import Data.List.NonEmpty (NonEmpty)
import Simplex.Messaging.Protocol (XFTPServerWithAuth)
+7 -2
View File
@@ -4,7 +4,11 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Simplex.FileTransfer.Crypto where
module Simplex.FileTransfer.Crypto
( encryptFile,
decryptChunks,
readChunks,
) where
import Control.Monad
import Control.Monad.Except
@@ -16,6 +20,7 @@ import Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy.Char8 as LB
import Data.Int (Int64)
import Data.Text (Text)
import Simplex.FileTransfer.Types (FileHeader (..), authTagSize)
import qualified Simplex.Messaging.Crypto as C
import Simplex.Messaging.Crypto.File (CryptoFile (..), FTCryptoError (..))
@@ -54,7 +59,7 @@ encryptFile srcFile fileHdr key nonce fileSize' encSize encFile = do
liftIO $ B.hPut w ch'
encryptChunks_ get w (sb', len - chSize)
decryptChunks :: Int64 -> [FilePath] -> C.SbKey -> C.CbNonce -> (String -> ExceptT String IO CryptoFile) -> ExceptT FTCryptoError IO CryptoFile
decryptChunks :: Int64 -> [FilePath] -> C.SbKey -> C.CbNonce -> (Text -> ExceptT String IO CryptoFile) -> ExceptT FTCryptoError IO CryptoFile
decryptChunks _ [] _ _ _ = throwE $ FTCEInvalidHeader "empty"
decryptChunks encSize (chPath : chPaths) key nonce getDestFile = case reverse chPaths of
[] -> do
+19 -1
View File
@@ -14,7 +14,25 @@
{-# LANGUAGE TypeFamilies #-}
{-# OPTIONS_GHC -fno-warn-unticked-promoted-constructors #-}
module Simplex.FileTransfer.Protocol where
module Simplex.FileTransfer.Protocol
( FileParty (..),
SFileParty (..),
AFileParty (..),
FilePartyI (..),
FileCommand (..),
FileCmd (..),
FileInfo (..),
XFTPFileId,
FileResponse (..),
xftpBlockSize,
toFileParty,
aFileParty,
checkParty,
xftpEncodeAuthTransmission,
xftpEncodeTransmission,
xftpDecodeTServer,
xftpDecodeTClient,
) where
import qualified Data.Aeson.TH as J
import Data.Bifunctor (first)
+81 -33
View File
@@ -13,7 +13,10 @@
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TupleSections #-}
module Simplex.FileTransfer.Server where
module Simplex.FileTransfer.Server
( runXFTPServer,
runXFTPServerBlocking,
) where
import Control.Logger.Simple
import Control.Monad
@@ -40,10 +43,11 @@ import GHC.IO.Handle (hSetNewlineMode)
import GHC.IORef (atomicSwapIORef)
import GHC.Stats (getRTSStats)
import qualified Network.HTTP.Types as N
import Network.HPACK.Token (tokenKey)
import qualified Network.HTTP2.Server as H
import Network.Socket
import Simplex.FileTransfer.Protocol
import Simplex.FileTransfer.Server.Control
import Simplex.FileTransfer.Server.Control (ControlProtocol (..))
import Simplex.FileTransfer.Server.Env
import Simplex.FileTransfer.Server.Prometheus
import Simplex.FileTransfer.Server.Stats
@@ -63,12 +67,12 @@ import Simplex.Messaging.Server.Stats
import Simplex.Messaging.SystemTime
import Simplex.Messaging.TMap (TMap)
import qualified Simplex.Messaging.TMap as TM
import Simplex.Messaging.Transport (CertChainPubKey (..), SessionId, THandleAuth (..), THandleParams (..), TransportPeer (..), defaultSupportedParams)
import Simplex.Messaging.Transport (CertChainPubKey (..), SessionId, THandleAuth (..), THandleParams (..), TransportPeer (..), defaultSupportedParams, defaultSupportedParamsHTTPS)
import Simplex.Messaging.Transport.Buffer (trimCR)
import Simplex.Messaging.Transport.HTTP2
import Simplex.Messaging.Transport.HTTP2.File (fileBlockSize)
import Simplex.Messaging.Transport.HTTP2.Server
import Simplex.Messaging.Transport.Server (runLocalTCPServer)
import Simplex.Messaging.Transport.HTTP2.Server (runHTTP2Server)
import Simplex.Messaging.Transport.Server (SNICredentialUsed, TransportServerConfig (..), runLocalTCPServer)
import Simplex.Messaging.Util
import Simplex.Messaging.Version
import System.Environment (lookupEnv)
@@ -89,9 +93,24 @@ data XFTPTransportRequest = XFTPTransportRequest
{ thParams :: THandleParamsXFTP 'TServer,
reqBody :: HTTP2Body,
request :: H.Request,
sendResponse :: H.Response -> IO ()
sendResponse :: H.Response -> IO (),
sniUsed :: SNICredentialUsed,
addCORS :: Bool
}
corsHeaders :: Bool -> [N.Header]
corsHeaders addCORS
| addCORS = [("Access-Control-Allow-Origin", "*"), ("Access-Control-Expose-Headers", "*")]
| otherwise = []
corsPreflightHeaders :: [N.Header]
corsPreflightHeaders =
[ ("Access-Control-Allow-Origin", "*"),
("Access-Control-Allow-Methods", "POST, OPTIONS"),
("Access-Control-Allow-Headers", "*"),
("Access-Control-Max-Age", "86400")
]
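The two helpers above are pure functions over the CORS flag; a hedged sketch of how the server dispatches on them (simplified types as an assumption — the real code uses `N.Header` and HTTP2 request values, not strings):

```haskell
-- Sketch with plain pairs standing in for Network.HTTP.Types headers.
corsHeaders :: Bool -> [(String, String)]
corsHeaders addCORS
  | addCORS = [("Access-Control-Allow-Origin", "*"), ("Access-Control-Expose-Headers", "*")]
  | otherwise = []

-- Browser preflight (OPTIONS) is answered before any XFTP processing;
-- all other responses just carry the simple CORS headers when enabled.
responseHeaders :: Bool -> String -> [(String, String)]
responseHeaders addCORS method
  | addCORS && method == "OPTIONS" =
      [ ("Access-Control-Allow-Origin", "*"),
        ("Access-Control-Allow-Methods", "POST, OPTIONS"),
        ("Access-Control-Allow-Headers", "*"),
        ("Access-Control-Max-Age", "86400")
      ]
  | otherwise = corsHeaders addCORS
```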
runXFTPServer :: XFTPServerConfig -> IO ()
runXFTPServer cfg = do
started <- newEmptyTMVarIO
@@ -120,45 +139,73 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira
runServer :: M ()
runServer = do
srvCreds@(chain, pk) <- asks tlsServerCreds
httpCreds_ <- asks httpServerCreds
signKey <- liftIO $ case C.x509ToPrivate' pk of
Right pk' -> pure pk'
Left e -> putStrLn ("Server has no valid key: " <> show e) >> exitFailure
env <- ask
sessions <- liftIO TM.emptyIO
let cleanup sessionId = atomically $ TM.delete sessionId sessions
liftIO . runHTTP2Server started xftpPort defaultHTTP2BufferSize defaultSupportedParams srvCreds transportConfig inactiveClientExpiration cleanup $ \sessionId sessionALPN r sendResponse -> do
reqBody <- getHTTP2Body r xftpBlockSize
let v = VersionXFTP 1
thServerVRange = versionToRange v
thParams0 = THandleParams {sessionId, blockSize = xftpBlockSize, thVersion = v, thServerVRange, thAuth = Nothing, implySessId = False, encryptBlock = Nothing, batch = True, serviceAuth = False}
req0 = XFTPTransportRequest {thParams = thParams0, request = r, reqBody, sendResponse}
flip runReaderT env $ case sessionALPN of
Nothing -> processRequest req0
Just alpn | alpn == xftpALPNv1 || alpn == httpALPN11 ->
xftpServerHandshakeV1 chain signKey sessions req0 >>= \case
Nothing -> pure () -- handshake response sent
Just thParams -> processRequest req0 {thParams} -- proceed with new version (XXX: may as well switch the request handler here)
_ -> liftIO . sendResponse $ H.responseNoBody N.ok200 [] -- shouldn't happen: means server picked handshake protocol it doesn't know about
srvParams = if isJust httpCreds_ then defaultSupportedParamsHTTPS else defaultSupportedParams
liftIO . runHTTP2Server started xftpPort defaultHTTP2BufferSize srvParams srvCreds httpCreds_ transportConfig inactiveClientExpiration cleanup $ \sniUsed sessionId sessionALPN r sendResponse -> do
let addCORS' = sniUsed && addCORSHeaders transportConfig
if addCORS' && H.requestMethod r == Just "OPTIONS"
then sendResponse $ H.responseNoBody N.ok200 corsPreflightHeaders
else do
reqBody <- getHTTP2Body r xftpBlockSize
let v = VersionXFTP 1
thServerVRange = versionToRange v
thParams0 = THandleParams {sessionId, blockSize = xftpBlockSize, thVersion = v, thServerVRange, thAuth = Nothing, implySessId = False, encryptBlock = Nothing, batch = True, serviceAuth = False}
req0 = XFTPTransportRequest {thParams = thParams0, request = r, reqBody, sendResponse, sniUsed, addCORS = addCORS'}
flip runReaderT env $ case sessionALPN of
Nothing -> processRequest req0
Just alpn
| alpn == xftpALPNv1 || alpn == httpALPN11 || (sniUsed && alpn == "h2") ->
xftpServerHandshakeV1 chain signKey sessions req0 >>= \case
Nothing -> pure ()
Just thParams -> processRequest req0 {thParams}
| otherwise -> liftIO . sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS')
xftpServerHandshakeV1 :: X.CertificateChain -> C.APrivateSignKey -> TMap SessionId Handshake -> XFTPTransportRequest -> M (Maybe (THandleParams XFTPVersion 'TServer))
xftpServerHandshakeV1 chain serverSignKey sessions XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, reqBody = HTTP2Body {bodyHead}, sendResponse} = do
xftpServerHandshakeV1 chain serverSignKey sessions XFTPTransportRequest {thParams = thParams0@THandleParams {sessionId}, request, reqBody = HTTP2Body {bodyHead}, sendResponse, sniUsed, addCORS} = do
s <- atomically $ TM.lookup sessionId sessions
r <- runExceptT $ case s of
Nothing -> processHello
Just (HandshakeSent pk) -> processClientHandshake pk
Just (HandshakeAccepted thParams) -> pure $ Just thParams
Nothing
| sniUsed && not webHello -> throwE SESSION
| otherwise -> processHello Nothing
Just (HandshakeSent pk)
| webHello -> processHello (Just pk)
| otherwise -> processClientHandshake pk
Just (HandshakeAccepted thParams)
| webHello -> processHello (serverPrivKey <$> thAuth thParams)
| webHandshake, Just auth <- thAuth thParams -> processClientHandshake (serverPrivKey auth)
| otherwise -> pure $ Just thParams
either sendError pure r
where
processHello = do
unless (B.null bodyHead) $ throwE HANDSHAKE
(k, pk) <- atomically . C.generateKeyPair =<< asks random
atomically $ TM.insert sessionId (HandshakeSent pk) sessions
webHello = sniUsed && any (\(t, _) -> tokenKey t == "xftp-web-hello") (fst $ H.requestHeaders request)
webHandshake = sniUsed && any (\(t, _) -> tokenKey t == "xftp-handshake") (fst $ H.requestHeaders request)
processHello pk_ = do
challenge_ <-
if
| B.null bodyHead -> pure Nothing
| sniUsed -> do
body <- liftHS $ C.unPad bodyHead
XFTPClientHello {webChallenge} <- liftHS $ first show (smpDecode body)
pure webChallenge
| otherwise -> throwE HANDSHAKE
rng <- asks random
k <- atomically $ TM.lookup sessionId sessions >>= \case
Just (HandshakeSent pk') -> pure $ C.publicKey pk'
_ -> do
kp <- maybe (C.generateKeyPair rng) (\p -> pure (C.publicKey p, p)) pk_
fst kp <$ TM.insert sessionId (HandshakeSent $ snd kp) sessions
let authPubKey = CertChainPubKey chain (C.signX509 serverSignKey $ C.publicToX509 k)
let hs = XFTPServerHandshake {xftpVersionRange = xftpServerVRange, sessionId, authPubKey}
webIdentityProof = C.sign serverSignKey . (<> sessionId) <$> challenge_
let hs = XFTPServerHandshake {xftpVersionRange = xftpServerVRange, sessionId, authPubKey, webIdentityProof}
shs <- encodeXftp hs
#ifdef slow_servers
lift randomDelay
#endif
liftIO . sendResponse $ H.responseBuilder N.ok200 [] shs
liftIO . sendResponse $ H.responseBuilder N.ok200 (corsHeaders addCORS) shs
pure Nothing
processClientHandshake pk = do
unless (B.length bodyHead == xftpBlockSize) $ throwE HANDSHAKE
@@ -174,13 +221,13 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira
#ifdef slow_servers
lift randomDelay
#endif
liftIO . sendResponse $ H.responseNoBody N.ok200 []
liftIO . sendResponse $ H.responseNoBody N.ok200 (corsHeaders addCORS)
pure Nothing
Nothing -> throwE HANDSHAKE
sendError :: XFTPErrorType -> M (Maybe (THandleParams XFTPVersion 'TServer))
sendError err = do
runExceptT (encodeXftp err) >>= \case
Right bs -> liftIO . sendResponse $ H.responseBuilder N.ok200 [] bs
Right bs -> liftIO . sendResponse $ H.responseBuilder N.ok200 (corsHeaders addCORS) bs
Left _ -> logError $ "Error encoding handshake error: " <> tshow err
pure Nothing
encodeXftp :: Encoding a => a -> ExceptT XFTPErrorType (ReaderT XFTPEnv IO) Builder
@@ -330,6 +377,7 @@ xftpServer cfg@XFTPServerConfig {xftpPort, transportConfig, inactiveClientExpira
CPHelp -> hPutStrLn h "commands: stats-rts, delete, help, quit"
CPQuit -> pure ()
CPSkip -> pure ()
_ -> hPutStrLn h "unsupported command"
where
withUserRole action =
readTVarIO role >>= \case
@@ -346,7 +394,7 @@ data ServerFile = ServerFile
}
processRequest :: XFTPTransportRequest -> M ()
processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHead}, sendResponse}
processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHead}, sendResponse, addCORS}
| B.length bodyHead /= xftpBlockSize = sendXFTPResponse ("", NoEntity, FRErr BLOCK) Nothing
| otherwise =
case xftpDecodeTServer thParams bodyHead of
@@ -365,7 +413,7 @@ processRequest XFTPTransportRequest {thParams, reqBody = body@HTTP2Body {bodyHea
#ifdef slow_servers
randomDelay
#endif
liftIO $ sendResponse $ H.responseStreaming N.ok200 [] $ streamBody t_
liftIO $ sendResponse $ H.responseStreaming N.ok200 (corsHeaders addCORS) $ streamBody t_
where
streamBody t_ send done = do
case t_ of
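The `webIdentityProof` built in `processHello` is a signature over the client's challenge concatenated with the session ID, binding the challenge to this TLS session. A toy sketch of that shape — the `sign` stand-in here is hypothetical; the real code uses `C.sign` with the server's identity key:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as B

-- Toy stand-in for C.sign; the real server signs with its identity key.
sign :: ByteString -> ByteString -> ByteString
sign key msg = B.concat ["sig(", key, "|", msg, ")"]

-- As in processHello above: sign (challenge <> sessionId), absent when the
-- client sent no challenge.
webIdentityProof :: ByteString -> ByteString -> Maybe ByteString -> Maybe ByteString
webIdentityProof key sessionId = fmap (sign key . (<> sessionId))
```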
+4 -1
@@ -1,7 +1,10 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.FileTransfer.Server.Control where
module Simplex.FileTransfer.Server.Control
( ControlProtocol (..),
)
where
import qualified Data.Attoparsec.ByteString.Char8 as A
import Simplex.FileTransfer.Protocol (XFTPFileId)
+15 -3
@@ -7,7 +7,16 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE StrictData #-}
module Simplex.FileTransfer.Server.Env where
module Simplex.FileTransfer.Server.Env
( XFTPServerConfig (..),
XFTPEnv (..),
XFTPRequest (..),
defaultInactiveClientExpiration,
defFileExpirationHours,
defaultFileExpiration,
newXFTPServerEnv,
countUsedStorage,
) where
import Control.Logger.Simple
import Control.Monad
@@ -57,6 +66,7 @@ data XFTPServerConfig = XFTPServerConfig
-- | time after which inactive clients can be disconnected and check interval, seconds
inactiveClientExpiration :: Maybe ExpirationConfig,
xftpCredentials :: ServerCredentials,
httpCredentials :: Maybe ServerCredentials,
-- | XFTP client-server protocol version range
xftpServerVRange :: VersionRangeXFTP,
-- stats config - see SMP server config
@@ -84,6 +94,7 @@ data XFTPEnv = XFTPEnv
random :: TVar ChaChaDRG,
serverIdentity :: C.KeyHash,
tlsServerCreds :: T.Credential,
httpServerCreds :: Maybe T.Credential,
serverStats :: FileServerStats
}
@@ -98,7 +109,7 @@ defaultFileExpiration =
}
newXFTPServerEnv :: XFTPServerConfig -> IO XFTPEnv
newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCredentials} = do
newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCredentials, httpCredentials} = do
random <- C.newRandom
store <- newFileStore
storeLog <- mapM (`readWriteFileStore` store) storeLogFile
@@ -108,9 +119,10 @@ newXFTPServerEnv config@XFTPServerConfig {storeLogFile, fileSizeQuota, xftpCrede
logNote $ "Total / available storage: " <> tshow quota <> " / " <> tshow (quota - used)
when (quota < used) $ logWarn "WARNING: storage quota is less than used storage, no files can be uploaded!"
tlsServerCreds <- loadServerCredential xftpCredentials
httpServerCreds <- mapM loadServerCredential httpCredentials
Fingerprint fp <- loadFingerprint xftpCredentials
serverStats <- newFileServerStats =<< getCurrentTime
pure XFTPEnv {config, store, storeLog, random, tlsServerCreds, serverIdentity = C.KeyHash fp, serverStats}
pure XFTPEnv {config, store, storeLog, random, tlsServerCreds, httpServerCreds, serverIdentity = C.KeyHash fp, serverStats}
countUsedStorage :: M.Map k FileRec -> Int64
countUsedStorage = M.foldl' (\acc FileRec {fileInfo = FileInfo {size}} -> acc + fromIntegral size) 0
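`countUsedStorage` is a pure strict fold over the file store; a minimal check of the same shape, with a simplified record standing in for the real `FileRec`/`FileInfo`:

```haskell
import Data.Int (Int64)
import qualified Data.Map.Strict as M

-- Simplified stand-in for FileRec/FileInfo from the module above.
newtype FileRec = FileRec { size :: Word }

-- Same shape as countUsedStorage: a strict left fold summing file sizes.
countUsedStorage :: M.Map k FileRec -> Int64
countUsedStorage = M.foldl' (\acc r -> acc + fromIntegral (size r)) 0
```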
+33 -10
@@ -6,13 +6,15 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PatternSynonyms #-}
module Simplex.FileTransfer.Server.Main where
module Simplex.FileTransfer.Server.Main
( xftpServerCLI,
) where
import Data.Either (fromRight)
import Data.Functor (($>))
import Data.Ini (lookupValue, readIniFile)
import Data.Int (Int64)
import Data.Maybe (fromMaybe)
import Data.Maybe (fromMaybe, isJust)
import qualified Data.Text as T
import qualified Data.Text.IO as T
import Network.Socket (HostName)
@@ -21,7 +23,7 @@ import Simplex.FileTransfer.Chunks
import Simplex.FileTransfer.Description (FileSize (..))
import Simplex.FileTransfer.Server (runXFTPServer)
import Simplex.FileTransfer.Server.Env (XFTPServerConfig (..), defFileExpirationHours, defaultFileExpiration, defaultInactiveClientExpiration)
import Simplex.FileTransfer.Transport (supportedFileServerVRange, alpnSupportedXFTPhandshakes)
import Simplex.FileTransfer.Transport (alpnSupportedXFTPhandshakes, supportedFileServerVRange)
import qualified Simplex.Messaging.Crypto as C
import Simplex.Messaging.Encoding.String
import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), pattern XFTPServer)
@@ -29,7 +31,7 @@ import Simplex.Messaging.Server.CLI
import Simplex.Messaging.Server.Expiration
import Simplex.Messaging.Transport.Client (TransportHost (..))
import Simplex.Messaging.Transport.HTTP2 (httpALPN)
import Simplex.Messaging.Transport.Server (ServerCredentials (..), mkTransportServerConfig)
import Simplex.Messaging.Transport.Server (ServerCredentials (..), TransportServerConfig (..), mkTransportServerConfig)
import Simplex.Messaging.Util (eitherToMaybe, safeDecodeUtf8, tshow)
import System.Directory (createDirectoryIfMissing, doesFileExist)
import System.FilePath (combine)
@@ -124,6 +126,10 @@ xftpServerCLI cfgPath logPath = do
\disconnect: off\n"
<> ("# ttl: " <> tshow (ttl defaultInactiveClientExpiration) <> "\n")
<> ("# check_interval: " <> tshow (checkInterval defaultInactiveClientExpiration) <> "\n")
<> "\n\
\[WEB]\n\
\# cert: /etc/opt/simplex-xftp/web.crt\n\
\# key: /etc/opt/simplex-xftp/web.key\n"
runServer ini = do
hSetBuffering stdout LineBuffering
hSetBuffering stderr LineBuffering
@@ -155,6 +161,17 @@ xftpServerCLI cfgPath logPath = do
else "NOT allowed."
putStrLn $ "Listening on port " <> xftpPort <> "..."
httpCredentials_ =
eitherToMaybe $ do
cert <- T.unpack <$> lookupValue "WEB" "cert" ini
key <- T.unpack <$> lookupValue "WEB" "key" ini
pure
ServerCredentials
{ caCertificateFile = Nothing,
certificateFile = cert,
privateKeyFile = key
}
serverConfig =
XFTPServerConfig
{ xftpPort = T.unpack $ strictIni "TRANSPORT" "port" ini,
@@ -186,6 +203,7 @@ xftpServerCLI cfgPath logPath = do
privateKeyFile = c serverKeyFile,
certificateFile = c serverCrtFile
},
httpCredentials = httpCredentials_,
xftpServerVRange = supportedFileServerVRange,
logStatsInterval = logStats $> 86400, -- seconds
logStatsStartTime = 0, -- seconds from 00:00 UTC
@@ -194,10 +212,12 @@ xftpServerCLI cfgPath logPath = do
prometheusInterval = eitherToMaybe $ read . T.unpack <$> lookupValue "STORE_LOG" "prometheus_interval" ini,
prometheusMetricsFile = combine logPath "xftp-server-metrics.txt",
transportConfig =
mkTransportServerConfig
(fromMaybe False $ iniOnOff "TRANSPORT" "log_tls_errors" ini)
(Just $ alpnSupportedXFTPhandshakes <> httpALPN)
False,
let cfg =
mkTransportServerConfig
(fromMaybe False $ iniOnOff "TRANSPORT" "log_tls_errors" ini)
(Just $ alpnSupportedXFTPhandshakes <> httpALPN)
False
in cfg {addCORSHeaders = isJust httpCredentials_},
responseDelay = 0
}
@@ -229,11 +249,14 @@ cliCommandP cfgPath logPath iniFile =
initP :: Parser InitOptions
initP = do
enableStoreLog <-
flag' False
flag'
False
( long "disable-store-log"
<> help "Disable store log for persistence (enabled by default)"
)
<|> flag True True
<|> flag
True
True
( long "store-log"
<> short 'l'
<> help "Enable store log for persistence (DEPRECATED, enabled by default)"
@@ -4,7 +4,11 @@
{-# LANGUAGE TypeApplications #-}
{-# OPTIONS_GHC -fno-warn-unrecognised-pragmas #-}
module Simplex.FileTransfer.Server.Prometheus where
module Simplex.FileTransfer.Server.Prometheus
( FileServerMetrics (..),
rtsOptionsEnv,
xftpPrometheusMetrics,
) where
import Data.Int (Int64)
import Data.Text (Text)
+7 -1
@@ -2,7 +2,13 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.FileTransfer.Server.Stats where
module Simplex.FileTransfer.Server.Stats
( FileServerStats (..),
FileServerStatsData (..),
newFileServerStats,
getFileServerStatsData,
setFileServerStats,
) where
import Control.Applicative ((<|>))
import qualified Data.Attoparsec.ByteString.Char8 as A
+24 -7
@@ -19,6 +19,7 @@ module Simplex.FileTransfer.Transport
-- xftpClientHandshake,
XFTPServerHandshake (..),
-- xftpServerHandshake,
XFTPClientHello (..),
THandleXFTP,
THandleParamsXFTP,
VersionXFTP,
@@ -35,6 +36,7 @@ module Simplex.FileTransfer.Transport
)
where
import Control.Applicative (optional)
import qualified Control.Exception as E
import Control.Logger.Simple
import Control.Monad
@@ -60,7 +62,7 @@ import Simplex.Messaging.Parsers
import Simplex.Messaging.Protocol (BlockingInfo, CommandError)
import Simplex.Messaging.Transport (ALPN, CertChainPubKey, ServiceCredentials, SessionId, THandle (..), THandleParams (..), TransportError (..), TransportPeer (..))
import Simplex.Messaging.Transport.HTTP2.File
import Simplex.Messaging.Util (bshow, tshow)
import Simplex.Messaging.Util (bshow, tshow, (<$?>))
import Simplex.Messaging.Version
import Simplex.Messaging.Version.Internal
import System.IO (Handle, IOMode (..), withFile)
@@ -111,11 +113,18 @@ alpnSupportedXFTPhandshakes = [xftpALPNv1]
xftpALPNv1 :: ALPN
xftpALPNv1 = "xftp/1"
data XFTPClientHello = XFTPClientHello
{ -- | a random string sent by the client to the server to prove that the server has the identity certificate
webChallenge :: Maybe ByteString
}
data XFTPServerHandshake = XFTPServerHandshake
{ xftpVersionRange :: VersionRangeXFTP,
sessionId :: SessionId,
-- | pub key to agree shared secrets for command authorization and entity ID encryption.
authPubKey :: CertChainPubKey
authPubKey :: CertChainPubKey,
-- | signed identity challenge from XFTPClientHello
webIdentityProof :: Maybe C.ASignature
}
data XFTPClientHandshake = XFTPClientHandshake
@@ -125,6 +134,14 @@ data XFTPClientHandshake = XFTPClientHandshake
keyHash :: C.KeyHash
}
instance Encoding XFTPClientHello where
smpEncode XFTPClientHello {webChallenge} = smpEncode webChallenge
smpP = do
webChallenge <- smpP
forM_ webChallenge $ \challenge -> unless (B.length challenge == 32) $ fail "bad XFTPClientHello webChallenge"
Tail _compat <- smpP
pure XFTPClientHello {webChallenge}
instance Encoding XFTPClientHandshake where
smpEncode XFTPClientHandshake {xftpVersion, keyHash} =
smpEncode (xftpVersion, keyHash)
@@ -134,13 +151,13 @@ instance Encoding XFTPClientHandshake where
pure XFTPClientHandshake {xftpVersion, keyHash}
instance Encoding XFTPServerHandshake where
smpEncode XFTPServerHandshake {xftpVersionRange, sessionId, authPubKey} =
smpEncode (xftpVersionRange, sessionId, authPubKey)
smpEncode XFTPServerHandshake {xftpVersionRange, sessionId, authPubKey, webIdentityProof} =
smpEncode (xftpVersionRange, sessionId, authPubKey, C.signatureBytes webIdentityProof)
smpP = do
(xftpVersionRange, sessionId) <- smpP
authPubKey <- smpP
(xftpVersionRange, sessionId, authPubKey) <- smpP
webIdentityProof <- optional $ C.decodeSignature <$?> smpP
Tail _compat <- smpP
pure XFTPServerHandshake {xftpVersionRange, sessionId, authPubKey}
pure XFTPServerHandshake {xftpVersionRange, sessionId, authPubKey, webIdentityProof}
sendEncFile :: Handle -> (Builder -> IO ()) -> LC.SbState -> Word32 -> IO ()
sendEncFile h send = go
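The `smpP` parser for `XFTPClientHello` accepts an absent challenge but rejects a present one that is not exactly 32 bytes, and discards trailing `Tail` bytes for forward compatibility. The validation rule in isolation:

```haskell
import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as B

-- Mirrors the smpP check above: Nothing parses fine; Just must be 32 bytes.
-- (Trailing Tail bytes are ignored so future versions can append fields.)
validChallenge :: Maybe ByteString -> Bool
validChallenge = maybe True ((== 32) . B.length)
```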
+27 -3
@@ -4,12 +4,36 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.FileTransfer.Types where
module Simplex.FileTransfer.Types
( RcvFileId,
SndFileId,
FileHeader (..),
DBRcvFileId,
RcvFile (..),
RcvFileStatus (..),
RcvFileChunk (..),
RcvFileChunkReplica (..),
RcvFileRedirect (..),
DBSndFileId,
SndFile (..),
SndFileStatus (..),
SndFileChunk (..),
NewSndChunkReplica (..),
SndFileChunkReplica (..),
SndFileReplicaStatus (..),
DeletedSndChunkReplica (..),
SentRecipientReplica (..),
FileErrorType (..),
authTagSize,
sndFileEncPath,
sndChunkSize,
) where
import qualified Data.Aeson.TH as J
import qualified Data.Attoparsec.ByteString.Char8 as A
import Data.ByteString.Char8 (ByteString)
import Data.Int (Int64)
import Data.Text (Text)
import qualified Data.Text as T
import Data.Text.Encoding (encodeUtf8)
import Data.Word (Word32)
@@ -33,8 +57,8 @@ authTagSize = fromIntegral C.authTagSize
-- fileExtra is added to allow header extension in future versions
data FileHeader = FileHeader
{ fileName :: String,
fileExtra :: Maybe String
{ fileName :: Text,
fileExtra :: Maybe Text
}
deriving (Eq, Show)
+121 -34
@@ -60,6 +60,8 @@ module Simplex.Messaging.Agent
deleteConnectionAsync,
deleteConnectionsAsync,
createConnection,
prepareConnectionLink,
createConnectionForLink,
setConnShortLink,
deleteConnShortLink,
getConnShortLink,
@@ -409,6 +411,19 @@ createConnection :: ConnectionModeI c => AgentClient -> NetworkRequestMode -> Us
createConnection c nm userId enableNtfs checkNotices = withAgentEnv c .::. newConn c nm userId enableNtfs checkNotices
{-# INLINE createConnection #-}
-- | Prepare connection link for contact mode (no network call).
-- Returns root key pair (for signing OwnerAuth), the created link, and internal params.
-- The link address is fully determined at this point.
prepareConnectionLink :: AgentClient -> UserId -> Maybe ByteString -> Bool -> Maybe CRClientData -> AE (C.KeyPairEd25519, CreatedConnLink 'CMContact, PreparedLinkParams)
prepareConnectionLink c userId linkEntityId checkNotices = withAgentEnv c . prepareConnectionLink' c userId linkEntityId checkNotices
{-# INLINE prepareConnectionLink #-}
-- | Create connection for prepared link (single network call).
-- Validates that server response matches the prepared link.
createConnectionForLink :: AgentClient -> NetworkRequestMode -> UserId -> Bool -> CreatedConnLink 'CMContact -> PreparedLinkParams -> UserConnLinkData 'CMContact -> CR.InitialKeys -> SubscriptionMode -> AE ConnId
createConnectionForLink c nm userId enableNtfs = withAgentEnv c .::. createConnectionForLink' c nm userId enableNtfs
{-# INLINE createConnectionForLink #-}
-- | Create or update user's contact connection short link
setConnShortLink :: AgentClient -> NetworkRequestMode -> ConnId -> SConnectionMode c -> UserConnLinkData c -> Maybe CRClientData -> AE (ConnShortLink c)
setConnShortLink c = withAgentEnv c .::. setConnShortLink' c
@@ -942,6 +957,66 @@ newConn c nm userId enableNtfs checkNotices cMode linkData_ clientData pqInitKey
<$> newRcvConnSrv c nm userId connId enableNtfs cMode linkData_ clientData pqInitKeys subMode srv
`catchE` \e -> withStore' c (`deleteConnRecord` connId) >> throwE e
-- | Prepare connection link for contact mode (no network, no database).
-- Generates all cryptographic material and returns the link that will be created.
prepareConnectionLink' :: AgentClient -> UserId -> Maybe ByteString -> Bool -> Maybe CRClientData -> AM (C.KeyPairEd25519, CreatedConnLink 'CMContact, PreparedLinkParams)
prepareConnectionLink' c userId linkEntityId checkNotices clientData = do
g <- asks random
plpSrvWithAuth@(ProtoServerWithAuth srv _) <- getSMPServer c userId
when checkNotices $ checkClientNotices c plpSrvWithAuth
AgentConfig {smpClientVRange, smpAgentVRange} <- asks config
plpNonce@(C.CbNonce corrId) <- atomically $ C.randomCbNonce g
sigKeys@(_, plpRootPrivKey) <- atomically $ C.generateKeyPair g
plpQueueE2EKeys@(e2ePubKey, _) <- atomically $ C.generateKeyPair g
let sndId = SMP.EntityId $ B.take 24 $ C.sha3_384 corrId
qUri = SMPQueueUri smpClientVRange $ SMPQueueAddress srv sndId e2ePubKey (Just QMContact)
connReq = CRContactUri $ ConnReqUriData SSSimplex smpAgentVRange [qUri] clientData
(plpLinkKey, plpSignedFixedData) = SL.encodeSignFixedData sigKeys smpAgentVRange connReq linkEntityId
ccLink = CCLink connReq $ Just $ CSLContact SLSServer CCTContact srv plpLinkKey
params = PreparedLinkParams {plpNonce, plpQueueE2EKeys, plpLinkKey, plpRootPrivKey, plpSignedFixedData, plpSrvWithAuth}
pure (sigKeys, ccLink, params)
-- | Create connection for prepared link (single network call).
createConnectionForLink' :: AgentClient -> NetworkRequestMode -> UserId -> Bool -> CreatedConnLink 'CMContact -> PreparedLinkParams -> UserConnLinkData 'CMContact -> CR.InitialKeys -> SubscriptionMode -> AM ConnId
createConnectionForLink' c nm userId enableNtfs (CCLink connReq _) PreparedLinkParams {plpNonce, plpQueueE2EKeys, plpLinkKey, plpRootPrivKey, plpSignedFixedData, plpSrvWithAuth} userLinkData pqInitKeys subMode = do
g <- asks random
AgentConfig {smpAgentVRange} <- asks config
case pqInitKeys of
CR.IKUsePQ -> throwE $ CMD PROHIBITED "createConnectionForLink"
_ -> pure ()
connId <- newConnNoQueues c userId enableNtfs SCMContact (CR.connPQEncryption pqInitKeys)
let CRContactUri ConnReqUriData {crSmpQueues = SMPQueueUri _ SMPQueueAddress {senderId = sndId} :| _} = connReq
md = SL.encodeSignUserData SCMContact plpRootPrivKey smpAgentVRange userLinkData
linkData = (plpSignedFixedData, md)
qd <- encryptContactLinkData g plpRootPrivKey plpLinkKey sndId linkData
(_, qUri) <-
createRcvQueue c nm userId connId plpSrvWithAuth enableNtfs subMode (Just plpNonce) qd plpQueueE2EKeys
`catchE` \e -> withStore' c (`deleteConnRecord` connId) >> throwE e
let SMPQueueUri _ SMPQueueAddress {senderId = actualSndId} = qUri
unless (actualSndId == sndId) $ throwE $ INTERNAL "createConnectionForLink: sender ID mismatch"
pure connId
-- | Encrypt signed link data for contact mode.
encryptContactLinkData :: TVar ChaChaDRG -> C.PrivateKeyEd25519 -> LinkKey -> SMP.SenderId -> (ByteString, ByteString) -> AM ClntQueueReqData
encryptContactLinkData g privSigKey linkKey sndId linkData = do
let (linkId, k) = SL.contactShortLinkKdf linkKey
srvData <- liftError id $ SL.encryptLinkData g k linkData
pure $ CQRContact $ Just CQRData {linkKey, privSigKey, srvReq = (linkId, (sndId, srvData))}
-- | Shared helper: create receive queue and set up subscriptions.
createRcvQueue :: AgentClient -> NetworkRequestMode -> UserId -> ConnId -> SMPServerWithAuth -> Bool -> SubscriptionMode -> Maybe C.CbNonce -> ClntQueueReqData -> C.KeyPairX25519 -> AM (RcvQueue, SMPQueueUri)
createRcvQueue c nm userId connId srvWithAuth@(ProtoServerWithAuth srv _) enableNtfs subMode nonce_ qd e2eKeys = do
AgentConfig {smpClientVRange = vr} <- asks config
ntfServer_ <- if enableNtfs then newQueueNtfServer else pure Nothing
(rq, qUri, tSess, sessId, serviceId_) <-
newRcvQueue_ c nm userId connId srvWithAuth vr qd (isJust ntfServer_) subMode nonce_ e2eKeys
`catchAllErrors` \e -> liftIO (print e) >> throwE e
atomically $ incSMPServerStat c userId srv connCreated
rq' <- withStore c $ \db -> updateNewConnRcv db connId rq subMode
lift . when (subMode == SMSubscribe) $ addNewQueueSubscription c rq' tSess sessId serviceId_
mapM_ (newQueueNtfSubscription c rq') ntfServer_
pure (rq', qUri)
checkClientNotices :: AgentClient -> SMPServerWithAuth -> AM ()
checkClientNotices AgentClient {clientNotices, presetServers} (ProtoServerWithAuth srv@(ProtocolServer {host}) _) = do
notices <- readTVarIO clientNotices
@@ -1018,7 +1093,7 @@ setConnShortLink' c nm connId cMode userLinkData clientData =
sigKeys@(_, privSigKey) <- atomically $ C.generateKeyPair @'C.Ed25519 g
let qUri = SMPQueueUri vr $ (rcvSMPQueueAddress rq) {queueMode = Just QMContact}
connReq = CRContactUri $ ConnReqUriData SSSimplex smpAgentVRange [qUri] clientData
(linkKey, linkData) = SL.encodeSignLinkData sigKeys smpAgentVRange connReq ud
(linkKey, linkData) = SL.encodeSignLinkData sigKeys smpAgentVRange connReq Nothing ud
(linkId, k) = SL.contactShortLinkKdf linkKey
srvData <- liftError id $ SL.encryptLinkData g k linkData
let slCreds = ShortLinkCreds linkId linkKey privSigKey Nothing (fst srvData)
@@ -1105,25 +1180,15 @@ newRcvConnSrv c nm userId connId enableNtfs cMode userLinkData_ clientData pqIni
case userLinkData_ of
Just d -> do
(nonce, qUri, cReq, qd) <- prepareLinkData d $ fst e2eKeys
(rq, qUri') <- createRcvQueue (Just nonce) qd e2eKeys
(rq, qUri') <- createRcvQueue c nm userId connId srvWithAuth enableNtfs subMode (Just nonce) qd e2eKeys
ccLink <- connReqWithShortLink qUri cReq qUri' (shortLink rq)
pure ccLink
Nothing -> do
let qd = case cMode of SCMContact -> CQRContact Nothing; SCMInvitation -> CQRMessaging Nothing
(_rq, qUri) <- createRcvQueue Nothing qd e2eKeys
(_rq, qUri) <- createRcvQueue c nm userId connId srvWithAuth enableNtfs subMode Nothing qd e2eKeys
cReq <- createConnReq qUri
pure $ CCLink cReq Nothing
where
createRcvQueue :: Maybe C.CbNonce -> ClntQueueReqData -> C.KeyPairX25519 -> AM (RcvQueue, SMPQueueUri)
createRcvQueue nonce_ qd e2eKeys = do
AgentConfig {smpClientVRange = vr} <- asks config
ntfServer_ <- if enableNtfs then newQueueNtfServer else pure Nothing
(rq, qUri, tSess, sessId, serviceId_) <- newRcvQueue_ c nm userId connId srvWithAuth vr qd (isJust ntfServer_) subMode nonce_ e2eKeys `catchAllErrors` \e -> liftIO (print e) >> throwE e
atomically $ incSMPServerStat c userId srv connCreated
rq' <- withStore c $ \db -> updateNewConnRcv db connId rq subMode
lift . when (subMode == SMSubscribe) $ addNewQueueSubscription c rq' tSess sessId serviceId_
mapM_ (newQueueNtfSubscription c rq') ntfServer_
pure (rq', qUri)
createConnReq :: SMPQueueUri -> AM (ConnectionRequestUri c)
createConnReq qUri = do
AgentConfig {smpAgentVRange, e2eEncryptVRange} <- asks config
@@ -1147,12 +1212,9 @@ newRcvConnSrv c nm userId connId enableNtfs cMode userLinkData_ clientData pqIni
qm = case cMode of SCMContact -> QMContact; SCMInvitation -> QMMessaging
qUri = SMPQueueUri vr $ SMPQueueAddress srv sndId e2eDhKey (Just qm)
connReq <- createConnReq qUri
let (linkKey, linkData) = SL.encodeSignLinkData sigKeys smpAgentVRange connReq userLinkData
let (linkKey, linkData) = SL.encodeSignLinkData sigKeys smpAgentVRange connReq Nothing userLinkData
qd <- case cMode of
SCMContact -> do
let (linkId, k) = SL.contactShortLinkKdf linkKey
srvData <- liftError id $ SL.encryptLinkData g k linkData
pure $ CQRContact $ Just CQRData {linkKey, privSigKey, srvReq = (linkId, (sndId, srvData))}
SCMContact -> encryptContactLinkData g privSigKey linkKey sndId linkData
SCMInvitation -> do
let k = SL.invShortLinkKdf linkKey
srvData <- liftError id $ SL.encryptLinkData g k linkData
@@ -1831,8 +1893,13 @@ runCommandProcessing c@AgentClient {subQ} connId server_ Worker {doWork} = do
_ -> throwE $ CMD PROHIBITED "SWCH: not duplex"
DEL -> withServer' . tryCommand $ deleteConnection' c NRMBackground connId >> notify OK
AInternalCommand cmd -> case cmd of
ICAckDel rId srvMsgId msgId -> withServer $ \srv -> tryWithLock "ICAckDel" $ ack srv rId srvMsgId >> withStore' c (\db -> deleteMsg db connId msgId)
ICAck rId srvMsgId -> withServer $ \srv -> tryWithLock "ICAck" $ ack srv rId srvMsgId
ICAckDel rId srvMsgId msgId -> withServer $ \srv ->
tryCommand $ withConnLockNotify c connId "ICAckDel" $ do
t_ <- ack srv rId srvMsgId
withStore' c (\db -> deleteMsg db connId msgId)
pure t_
ICAck rId srvMsgId -> withServer $ \srv ->
tryCommand $ withConnLockNotify c connId "ICAck" $ ack srv rId srvMsgId
ICAllowSecure _rId senderKey -> withServer' . tryMoveableWithLock "ICAllowSecure" $ do
(SomeConn _ conn, AcceptedConfirmation {senderConf, ownConnInfo}) <-
withStore c $ \db -> runExceptT $ (,) <$> ExceptT (getConn db connId) <*> ExceptT (getAcceptedConfirmation db connId)
@@ -2195,7 +2262,7 @@ runSmpQueueMsgDelivery c@AgentClient {subQ} sq@SndQueue {userId, connId, server,
cStats <- connectionStats c conn
notify $ SWITCH QDSnd SPConfirmed cStats
AM_QUSE_ -> pure ()
AM_QTEST_ -> withConnLock c connId "runSmpQueueMsgDelivery AM_QTEST_" $ do
AM_QTEST_ -> withConnLockNotify c connId "runSmpQueueMsgDelivery AM_QTEST_" $ do
withStore' c $ \db -> setSndQueueStatus db sq Active
SomeConn _ conn <- withStore c (`getConn` connId)
case conn of
@@ -2219,7 +2286,7 @@ runSmpQueueMsgDelivery c@AgentClient {subQ} sq@SndQueue {userId, connId, server,
let sqs'' = sq'' :| sqs'
conn' = DuplexConnection cData' rqs sqs''
cStats <- connectionStats c conn'
notify $ SWITCH QDSnd SPCompleted cStats
pure $ Just ("", connId, AEvt SAEConn $ SWITCH QDSnd SPCompleted cStats)
_ -> internalErr msgId "sent QTEST: there is only one queue in connection"
_ -> internalErr msgId "sent QTEST: queue not in connection or not replacing another queue"
_ -> internalErr msgId "QTEST sent not in duplex connection"
@@ -2251,7 +2318,9 @@ runSmpQueueMsgDelivery c@AgentClient {subQ} sq@SndQueue {userId, connId, server,
notifyDel msgId cmd = notify cmd >> delMsg msgId
connError msgId = notifyDel msgId . ERR . (`CONN` "")
qError msgId = notifyDel msgId . ERR . AGENT . A_QUEUE
internalErr msgId = notifyDel msgId . ERR . INTERNAL
internalErr msgId s = do
delMsg msgId
pure $ Just ("", connId, AEvt SAEConn $ ERR $ INTERNAL s)
retrySndOp :: AgentClient -> AM () -> AM ()
retrySndOp c loop = do
@@ -2261,17 +2330,31 @@ retrySndOp c loop = do
atomically $ beginAgentOperation c AOSndNetwork
loop
-- | Like 'withConnLock', but writes the returned 'ATransmission' to 'subQ'
-- after releasing the lock, preventing deadlock with agentSubscriber.
withConnLockNotify :: AgentClient -> ConnId -> Text -> AM (Maybe ATransmission) -> AM ()
withConnLockNotify c connId name action = do
t_ <- withConnLock c connId name action
forM_ t_ $ atomically . writeTBQueue (subQ c)
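The deadlock-avoidance pattern introduced here (run the action under the connection lock, write the resulting event to the queue only after the lock is released) can be sketched in isolation. `withLockNotify`, the `TMVar` lock, and the plain `TBQueue` below are hypothetical stand-ins for the agent's per-connection lock and `subQ`, not agent API:

```haskell
import Control.Concurrent.STM
import qualified Control.Exception as E
import Data.Foldable (forM_)

-- Run the action while holding the lock, but write the resulting event to
-- the queue only after the lock is released. A subscriber that takes the
-- same lock while the bounded queue is full can then never deadlock with us.
withLockNotify :: TMVar () -> TBQueue e -> IO (Maybe e) -> IO ()
withLockNotify lock q action = do
  evt_ <- withLock action
  forM_ evt_ $ atomically . writeTBQueue q
  where
    withLock =
      E.bracket_ (atomically $ takeTMVar lock) (atomically $ putTMVar lock ())
```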
ackMessage' :: AgentClient -> ConnId -> AgentMsgId -> Maybe MsgReceiptInfo -> AM ()
ackMessage' c connId msgId rcptInfo_ = withConnLock c connId "ackMessage" $ do
ackMessage' c connId msgId rcptInfo_ = withConnLockNotify c connId "ackMessage" $ do
SomeConn _ conn <- withStore c (`getConn` connId)
case conn of
DuplexConnection {} -> ack >> sendRcpt conn >> del
RcvConnection {} -> ack >> del
DuplexConnection {} -> do
t_ <- ack
sendRcpt conn
del
pure t_
RcvConnection {} -> do
t_ <- ack
del
pure t_
SndConnection {} -> throwE $ CONN SIMPLEX "ackMessage"
ContactConnection {} -> throwE $ CMD PROHIBITED "ackMessage: ContactConnection"
NewConnection _ -> throwE $ CMD PROHIBITED "ackMessage: NewConnection"
where
ack :: AM ()
ack :: AM (Maybe ATransmission)
ack = do
-- the stored message was delivered via a specific queue, the rest failed to decrypt and were already acknowledged
(rq, srvMsgId) <- withStore c $ \db -> setMsgUserAck db connId $ InternalId msgId
@@ -2379,7 +2462,7 @@ synchronizeRatchet' c connId pqSupport' force = withConnLock c connId "synchroni
| otherwise -> throwE $ CMD PROHIBITED "synchronizeRatchet: not allowed"
_ -> throwE $ CMD PROHIBITED "synchronizeRatchet: not duplex"
ackQueueMessage :: AgentClient -> RcvQueue -> SMP.MsgId -> AM ()
ackQueueMessage :: AgentClient -> RcvQueue -> SMP.MsgId -> AM (Maybe ATransmission)
ackQueueMessage c rq@RcvQueue {userId, connId, server} srvMsgId = do
atomically $ incSMPServerStat c userId server ackAttempts
tryAllErrors (sendAck c rq srvMsgId) >>= \case
@@ -2391,10 +2474,11 @@ ackQueueMessage c rq@RcvQueue {userId, connId, server} srvMsgId = do
where
sendMsgNtf stat = do
atomically $ incSMPServerStat c userId server stat
whenM (liftIO $ hasGetLock c rq) $ do
atomically $ releaseGetLock c rq
brokerTs_ <- eitherToMaybe <$> tryAllErrors (withStore c $ \db -> getRcvMsgBrokerTs db connId srvMsgId)
atomically $ writeTBQueue (subQ c) ("", connId, AEvt SAEConn $ MSGNTF srvMsgId brokerTs_)
ifM (liftIO $ hasGetLock c rq)
(do atomically $ releaseGetLock c rq
brokerTs_ <- eitherToMaybe <$> tryAllErrors (withStore c $ \db -> getRcvMsgBrokerTs db connId srvMsgId)
pure $ Just ("", connId, AEvt SAEConn $ MSGNTF srvMsgId brokerTs_))
(pure Nothing)
-- | Suspend SMP agent connection (OFF command) in Reader monad
suspendConnection' :: AgentClient -> NetworkRequestMode -> ConnId -> AM ()
@@ -2881,10 +2965,13 @@ getNextSMPServer c userId = getNextServer c userId storageSrvs
{-# INLINE getNextSMPServer #-}
subscriber :: AgentClient -> AM' ()
subscriber c@AgentClient {msgQ} = forever $ do
subscriber c@AgentClient {msgQ, subQ} = run $ forever $ do
t <- atomically $ readTBQueue msgQ
agentOperationBracket c AORcvNetwork waitUntilActive $
processSMPTransmissions c t
where
run a = a `catchOwn` \e -> notify $ CRITICAL True $ "Agent subscriber stopped: " <> show e
notify err = atomically $ writeTBQueue subQ ("", "", AEvt SAEConn $ ERR err)
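The `run`/`catchOwn` hardening above reduces to a small generic pattern: a thread that is supposed to loop forever reports any escaping exception instead of dying silently. `runNotifying` and its `String` callback are illustrative stand-ins for the agent's `CRITICAL` event:

```haskell
import qualified Control.Exception as E

-- Wrap a long-running loop in a top-level handler: an exception that
-- escapes the loop is surfaced to the caller's notification channel
-- rather than silently terminating the thread.
runNotifying :: (String -> IO ()) -> IO () -> IO ()
runNotifying notifyCritical loop =
  loop `E.catch` \e ->
    notifyCritical $ "subscriber stopped: " <> show (e :: E.SomeException)
```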
cleanupManager :: AgentClient -> AM' ()
cleanupManager c@AgentClient {subQ} = do
@@ -3214,7 +3301,7 @@ processSMPTransmissions c@AgentClient {subQ} (tSess@(userId, srv, _), THandlePar
ackDel :: InternalId -> AM ACKd
ackDel aId = enqueueCmd (ICAckDel rId srvMsgId aId) $> ACKd
handleNotifyAck :: AM ACKd -> AM ACKd
handleNotifyAck m = m `catchAllErrors` \e -> notify (ERR e) >> ack
handleNotifyAck m = m `catchAllOwnErrors` \e -> notify (ERR e) >> ack
SMP.END ->
atomically (ifM (activeClientSession c tSess sessId) (removeSubscription c tSess connId rq $> True) (pure False))
>>= notifyEnd
+23 -3
View File
@@ -129,6 +129,7 @@ module Simplex.Messaging.Agent.Protocol
ContactConnType (..),
ShortLinkScheme (..),
LinkKey (..),
PreparedLinkParams (..),
validateOwners,
validateLinkOwners,
sameConnReqContact,
@@ -179,7 +180,7 @@ module Simplex.Messaging.Agent.Protocol
where
import Control.Applicative (optional, (<|>))
import Control.Exception (BlockedIndefinitelyOnSTM (..), fromException)
import Control.Exception (BlockedIndefinitelyOnMVar (..), BlockedIndefinitelyOnSTM (..), fromException)
import Data.Aeson (FromJSON (..), ToJSON (..), Value (..), (.:), (.:?))
import qualified Data.Aeson as J
import qualified Data.Aeson.Encoding as JE
@@ -1489,6 +1490,23 @@ newtype LinkKey = LinkKey ByteString -- sha3-256(fixed_data)
instance ToField LinkKey where toField (LinkKey s) = toField $ Binary s
-- | Parameters for creating a connection with a prepared link.
data PreparedLinkParams = PreparedLinkParams
{ -- | Correlation ID / determines sender ID
plpNonce :: C.CbNonce,
-- | Queue E2EE DH key pair
plpQueueE2EKeys :: C.KeyPairX25519,
-- | For encrypting link data
plpLinkKey :: LinkKey,
-- | Root signing key (for signing link data)
plpRootPrivKey :: C.PrivateKeyEd25519,
-- | smpEncode of FixedLinkData (includes linkEntityId)
plpSignedFixedData :: ByteString,
-- | Server with basic auth (not stored in link)
plpSrvWithAuth :: SMPServerWithAuth
}
deriving (Show)
instance ConnectionModeI c => ToField (ConnectionLink c) where toField = toField . Binary . strEncode
instance (Typeable c, ConnectionModeI c) => FromField (ConnectionLink c) where fromField = blobFieldDecoder strDecode
@@ -1824,7 +1842,7 @@ instance ConnectionModeI c => Encoding (FixedLinkData c) where
smpEncode (agentVRange, rootKey, linkConnReq) <> maybe "" smpEncode linkEntityId
smpP = do
(agentVRange, rootKey, linkConnReq) <- smpP
linkEntityId <- (smpP <|> pure Nothing) <* A.takeByteString -- ignoring tail for forward compatibility with the future link data encoding
linkEntityId <- optional smpP <* A.takeByteString -- ignoring tail for forward compatibility with the future link data encoding
pure FixedLinkData {agentVRange, rootKey, linkConnReq, linkEntityId}
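The `optional smpP <* A.takeByteString` change above is the standard attoparsec idiom for forward-compatible encodings: parse the optional trailing field if present, then discard any bytes a future encoding may append. A minimal standalone illustration (the `'x'` field is an arbitrary example, not the real encoding):

```haskell
import Control.Applicative (optional)
import qualified Data.Attoparsec.ByteString.Char8 as A
import Data.ByteString (ByteString)

-- Parse an optional trailing field, ignoring any unknown tail bytes.
parseWithTail :: ByteString -> Either String (Maybe Char)
parseWithTail = A.parseOnly (optional (A.char 'x') <* A.takeByteString)
-- parseWithTail "xfuture-bytes" == Right (Just 'x')
-- parseWithTail "future-bytes"  == Right Nothing
```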
instance ConnectionModeI c => Encoding (ConnLinkData c) where
@@ -1987,7 +2005,9 @@ data AgentErrorType
instance AnyError AgentErrorType where
fromSomeException e = case fromException e of
Just BlockedIndefinitelyOnSTM -> CRITICAL True "Thread blocked indefinitely in STM transaction"
_ -> INTERNAL $ show e
_ -> case fromException e of
Just BlockedIndefinitelyOnMVar -> CRITICAL True "Thread blocked indefinitely on MVar"
_ -> INTERNAL $ show e
{-# INLINE fromSomeException #-}
-- | SMP agent protocol command or response error.
+8 -1
View File
@@ -1,4 +1,11 @@
module Simplex.Messaging.Agent.QueryString where
module Simplex.Messaging.Agent.QueryString
( QueryStringParams (..),
QSPEscaping (..),
queryParam,
queryParamParser,
queryParam_,
queryParamStr,
) where
import Data.Attoparsec.ByteString.Char8 (Parser)
import qualified Data.Attoparsec.ByteString.Char8 as A
+26 -1
View File
@@ -3,7 +3,32 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Agent.Stats where
module Simplex.Messaging.Agent.Stats
( AgentSMPServerStats (..),
AgentSMPServerStatsData (..),
OptionalInt (..),
AgentXFTPServerStats (..),
AgentXFTPServerStatsData (..),
AgentNtfServerStats (..),
AgentNtfServerStatsData (..),
AgentPersistedServerStats (..),
OptionalMap (..),
newAgentSMPServerStats,
newAgentSMPServerStatsData,
newAgentSMPServerStats',
getAgentSMPServerStats,
addSMPStatsData,
newAgentXFTPServerStats,
newAgentXFTPServerStatsData,
newAgentXFTPServerStats',
getAgentXFTPServerStats,
addXFTPStatsData,
newAgentNtfServerStats,
newAgentNtfServerStatsData,
newAgentNtfServerStats',
getAgentNtfServerStats,
addNtfStatsData,
) where
import Data.Aeson (FromJSON (..), FromJSONKey, ToJSON (..))
import qualified Data.Aeson.TH as J
+73 -1
View File
@@ -14,7 +14,79 @@
{-# LANGUAGE StandaloneDeriving #-}
{-# OPTIONS_GHC -fno-warn-unticked-promoted-constructors #-}
module Simplex.Messaging.Agent.Store where
module Simplex.Messaging.Agent.Store
( RcvQueue,
NewRcvQueue,
StoredRcvQueue (..),
RcvQueueSub (..),
ClientNtfCreds (..),
InvShortLink (..),
SndQueue,
NewSndQueue,
StoredSndQueue (..),
SMPQueueRec (..),
SomeRcvQueue (..),
ConnType (..),
Connection' (..),
Connection,
SConnType (..),
SomeConn' (..),
SomeConn,
SomeConnSub,
ConnData (..),
NoticeId,
PendingCommand (..),
AgentCmdType (..),
AgentCommand (..),
AgentCommandTag (..),
InternalCommand (..),
InternalCommandTag (..),
NewConfirmation (..),
AcceptedConfirmation (..),
NewInvitation (..),
Invitation (..),
PrevExternalSndId,
PrevRcvMsgHash,
PrevSndMsgHash,
RcvMsgData (..),
RcvMsg (..),
SndMsgData (..),
SndMsgPrepData (..),
SndMsg (..),
PendingMsgData (..),
PendingMsgPrepData (..),
InternalRcvId (..),
ExternalSndId,
ExternalSndTs,
BrokerId,
BrokerTs,
InternalSndId (..),
MsgBase (..),
InternalId (..),
InternalTs,
AsyncCmdId,
StoreError (..),
AnyStoreError (..),
ServiceAssoc,
createStore,
rcvQueueSub,
rcvSMPQueueAddress,
canAbortRcvSwitch,
findQ,
removeQ,
removeQP,
sndAddress,
findRQ,
switchingRQ,
updatedQs,
toConnData,
updateConnection,
connType,
ratchetSyncAllowed,
ratchetSyncSendProhibited,
agentCommandTag,
internalCmdTag,
) where
import Control.Exception (Exception (..))
import qualified Data.Attoparsec.ByteString.Char8 as A
+6 -1
View File
@@ -8,7 +8,12 @@
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Agent.Store.Entity where
module Simplex.Messaging.Agent.Store.Entity
( DBStored (..),
DBEntityId,
DBEntityId' (..),
)
where
import Data.Aeson (FromJSON (..), ToJSON (..))
import qualified Data.Aeson as J
@@ -1,4 +1,6 @@
module Simplex.Messaging.Agent.Store.Postgres.Options where
module Simplex.Messaging.Agent.Store.Postgres.Options
( DBOpts (..),
) where
import Data.ByteString (ByteString)
import Numeric.Natural
@@ -1,4 +1,10 @@
module Simplex.Messaging.Agent.Store.SQLite.Util where
module Simplex.Messaging.Agent.Store.SQLite.Util
( SQLiteFunc,
SQLiteFuncFinal,
createStaticFunction,
createStaticAggregate,
mkSQLiteFunc,
) where
import Control.Exception (SomeException, catch, mask_)
import Data.ByteString (ByteString)
+13 -7
View File
@@ -1,7 +1,11 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Compression where
module Simplex.Messaging.Compression
( Compressed,
maxLengthPassthrough,
compressionLevel,
) where
import qualified Codec.Compression.Zstd as Z1
import Data.ByteString (ByteString)
@@ -36,10 +40,12 @@ compress1 bs
| B.length bs <= maxLengthPassthrough = Passthrough bs
| otherwise = Compressed . Large $ Z1.compress compressionLevel bs
decompress1 :: Compressed -> Either String ByteString
decompress1 = \case
decompress1 :: Int -> Compressed -> Either String ByteString
decompress1 limit = \case
Passthrough bs -> Right bs
Compressed (Large bs) -> case Z1.decompress bs of
Z1.Error e -> Left e
Z1.Skip -> Right mempty
Z1.Decompress bs' -> Right bs'
Compressed (Large bs) -> case Z1.decompressedSize bs of
Just sz | sz <= limit -> case Z1.decompress bs of
Z1.Error e -> Left e
Z1.Skip -> Right mempty
Z1.Decompress bs' -> Right bs'
_ -> Left $ "compressed size not specified or exceeds " <> show limit
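The guard added to `decompress1` checks the size declared in the zstd frame header before allocating, so a hostile peer cannot force a huge decompression ("zip bomb"). The same pattern, extracted as a standalone function and assuming the `zstd` package API used in the diff:

```haskell
import qualified Codec.Compression.Zstd as Z
import Data.ByteString (ByteString)

-- Refuse to decompress unless the frame header declares a size within the
-- caller-supplied limit; only then hand the bytes to the real decompressor.
decompressBounded :: Int -> ByteString -> Either String ByteString
decompressBounded limit bs = case Z.decompressedSize bs of
  Just sz | sz <= limit -> case Z.decompress bs of
    Z.Error e -> Left e
    Z.Skip -> Right mempty
    Z.Decompress bs' -> Right bs'
  _ -> Left $ "compressed size not specified or exceeds " <> show limit
```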
+6 -1
View File
@@ -2,7 +2,12 @@
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
module Simplex.Messaging.Crypto.SNTRUP761 where
module Simplex.Messaging.Crypto.SNTRUP761
( KEMHybridSecret (..),
kcbDecrypt,
kcbEncrypt,
kemHybridSecret,
) where
import Crypto.Hash (Digest, SHA3_256, hash)
import Data.ByteArray (ScrubbedBytes)
@@ -1,7 +1,16 @@
{-# LANGUAGE CPP #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Crypto.SNTRUP761.Bindings where
module Simplex.Messaging.Crypto.SNTRUP761.Bindings
( KEMPublicKey (..),
KEMSecretKey,
KEMCiphertext (..),
KEMSharedKey (..),
KEMKeyPair,
sntrup761Keypair,
sntrup761Enc,
sntrup761Dec,
) where
import Control.Concurrent.STM
import Crypto.Random (ChaChaDRG)
+20 -5
View File
@@ -13,7 +13,9 @@ module Simplex.Messaging.Crypto.ShortLink
( contactShortLinkKdf,
invShortLinkKdf,
encodeSignLinkData,
encodeSignFixedData,
encodeSignUserData,
newOwnerAuth,
encryptLinkData,
encryptUserData,
decryptLinkData,
@@ -50,11 +52,16 @@ contactShortLinkKdf (LinkKey k) =
invShortLinkKdf :: LinkKey -> C.SbKey
invShortLinkKdf (LinkKey k) = C.unsafeSbKey $ C.hkdf "" k "SimpleXInvLink" 32
encodeSignLinkData :: ConnectionModeI c => C.KeyPairEd25519 -> VersionRangeSMPA -> ConnectionRequestUri c -> UserConnLinkData c -> (LinkKey, (ByteString, ByteString))
encodeSignLinkData (rootKey, pk) agentVRange linkConnReq userData =
let fd = smpEncode FixedLinkData {agentVRange, rootKey, linkConnReq, linkEntityId = Nothing}
md = smpEncode $ connLinkData agentVRange userData
in (LinkKey (C.sha3_256 fd), (encodeSign pk fd, encodeSign pk md))
encodeSignLinkData :: forall c. ConnectionModeI c => C.KeyPairEd25519 -> VersionRangeSMPA -> ConnectionRequestUri c -> Maybe ByteString -> UserConnLinkData c -> (LinkKey, (ByteString, ByteString))
encodeSignLinkData keys@(_, pk) agentVRange linkConnReq linkEntityId userData =
let (linkKey, fd) = encodeSignFixedData keys agentVRange linkConnReq linkEntityId
md = encodeSignUserData (sConnectionMode @c) pk agentVRange userData
in (linkKey, (fd, md))
encodeSignFixedData :: ConnectionModeI c => C.KeyPairEd25519 -> VersionRangeSMPA -> ConnectionRequestUri c -> Maybe ByteString -> (LinkKey, ByteString)
encodeSignFixedData (rootKey, pk) agentVRange linkConnReq linkEntityId =
let fd = smpEncode FixedLinkData {agentVRange, rootKey, linkConnReq, linkEntityId}
in (LinkKey (C.sha3_256 fd), encodeSign pk fd)
encodeSignUserData :: ConnectionModeI c => SConnectionMode c -> C.PrivateKeyEd25519 -> VersionRangeSMPA -> UserConnLinkData c -> ByteString
encodeSignUserData _ pk agentVRange userLinkData =
@@ -68,6 +75,14 @@ connLinkData vr = \case
encodeSign :: C.PrivateKeyEd25519 -> ByteString -> ByteString
encodeSign pk s = smpEncode (C.sign' pk s) <> s
-- | Generate a new owner key pair and create OwnerAuth signed by the authorizing key.
-- ownerId is application-specific (e.g., MemberId in chat).
newOwnerAuth :: TVar ChaChaDRG -> OwnerId -> C.PrivateKeyEd25519 -> IO (C.PrivateKeyEd25519, OwnerAuth)
newOwnerAuth g ownerId signingKey = do
(ownerKey, ownerPrivKey) <- atomically $ C.generateKeyPair @'C.Ed25519 g
let authOwnerSig = C.sign' signingKey $ ownerId <> C.encodePubKey ownerKey
pure (ownerPrivKey, OwnerAuth {ownerId, ownerKey, authOwnerSig})
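The `newOwnerAuth` flow — generate a fresh owner key pair, then have the authorizing key sign the owner id concatenated with the new public key — can be sketched directly with cryptonite's Ed25519 primitives. `OwnerAuth'` and `newOwnerAuthSketch` below are hypothetical stand-ins for the agent's `OwnerAuth`, using only cryptonite APIs:

```haskell
import qualified Crypto.PubKey.Ed25519 as Ed
import Data.ByteArray (convert)
import Data.ByteString (ByteString)

-- Stand-in for the agent's OwnerAuth record.
data OwnerAuth' = OwnerAuth'
  { ownerId' :: ByteString
  , ownerKey' :: Ed.PublicKey
  , authOwnerSig' :: Ed.Signature
  }

-- Generate an owner key pair and bind it to ownerId by signing
-- (ownerId <> ownerPublicKey) with the authorizing key.
newOwnerAuthSketch :: ByteString -> Ed.SecretKey -> IO (Ed.SecretKey, OwnerAuth')
newOwnerAuthSketch ownerId signingKey = do
  ownerPriv <- Ed.generateSecretKey
  let ownerPub = Ed.toPublic ownerPriv
      signingPub = Ed.toPublic signingKey
      sig = Ed.sign signingKey signingPub (ownerId <> convert ownerPub)
  pure (ownerPriv, OwnerAuth' ownerId ownerPub sig)
```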
encryptLinkData :: TVar ChaChaDRG -> C.SbKey -> (ByteString, ByteString) -> ExceptT AgentErrorType IO QueueLinkData
encryptLinkData g k = bimapM (encrypt fixedDataPaddedLength) (encrypt userDataPaddedLength)
where
+8
View File
@@ -24,6 +24,8 @@ import Data.Bits (shiftL, shiftR, (.|.))
import Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as B
import Data.ByteString.Internal (c2w, w2c)
import Data.Text (Text)
import Data.Text.Encoding (decodeUtf8', encodeUtf8)
import Data.Int (Int64)
import qualified Data.List.NonEmpty as L
import Data.Time.Clock.System (SystemTime (..))
@@ -156,6 +158,12 @@ smpEncodeList xs = B.cons (lenEncode $ length xs) . B.concat $ map smpEncode xs
smpListP :: Encoding a => Parser [a]
smpListP = (`A.count` smpP) =<< lenP
instance Encoding Text where
smpEncode = smpEncode . encodeUtf8
{-# INLINE smpEncode #-}
smpP = either (fail . show) pure . decodeUtf8' =<< smpP
{-# INLINE smpP #-}
instance Encoding String where
smpEncode = smpEncode . B.pack
{-# INLINE smpEncode #-}
+19 -1
View File
@@ -4,7 +4,25 @@
{-# LANGUAGE PatternSynonyms #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Notifications.Client where
module Simplex.Messaging.Notifications.Client
( NtfClient,
NtfClientError,
defaultNTFClientConfig,
ntfRegisterToken,
ntfVerifyToken,
ntfCheckToken,
ntfReplaceToken,
ntfDeleteToken,
ntfSetCronInterval,
ntfCreateSubscription,
ntfCreateSubscriptions,
ntfCheckSubscription,
ntfCheckSubscriptions,
ntfDeleteSubscription,
sendNtfCommand,
okNtfCommand,
)
where
import Control.Monad.Except
import Control.Monad.Trans.Except
@@ -9,7 +9,36 @@
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
module Simplex.Messaging.Notifications.Protocol where
module Simplex.Messaging.Notifications.Protocol
( NtfEntity (..),
SNtfEntity (..),
NtfEntityI (..),
NtfCommandTag (..),
NtfCmdTag (..),
NtfRegCode (..),
NewNtfEntity (..),
ANewNtfEntity (..),
NtfCommand (..),
NtfCmd (..),
NtfResponseTag (..),
NtfResponse (..),
SMPQueueNtf (..),
PushProvider (..),
DeviceToken (..),
PNMessageData (..),
NtfEntityId,
NtfSubscriptionId,
NtfTokenId,
NtfSubStatus (..),
NtfTknStatus (..),
NTInvalidReason (..),
encodePNMessages,
pnMessagesP,
subscribeNtfStatuses,
allowTokenVerification,
allowNtfSubCommands,
checkEntity,
) where
import Control.Applicative (optional, (<|>))
import Data.Aeson (FromJSON (..), ToJSON (..), (.:), (.=))
@@ -15,7 +15,12 @@
{-# LANGUAGE TupleSections #-}
{-# OPTIONS_GHC -fno-warn-ambiguous-fields #-}
module Simplex.Messaging.Notifications.Server where
module Simplex.Messaging.Notifications.Server
( runNtfServer,
runNtfServerBlocking,
restoreServerLastNtfs,
)
where
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (mapConcurrently)
@@ -1,7 +1,10 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Notifications.Server.Control where
module Simplex.Messaging.Notifications.Server.Control
( ControlProtocol (..),
)
where
import qualified Data.Attoparsec.ByteString.Char8 as A
import Simplex.Messaging.Encoding.String
@@ -7,7 +7,23 @@
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Notifications.Server.Env where
module Simplex.Messaging.Notifications.Server.Env
( NtfServerConfig (..),
NtfEnv (..),
NtfSubscriber (..),
SMPSubscriberVar,
SMPSubscriber (..),
NtfPushServer (..),
NtfRequest (..),
NtfServerClient (..),
defaultInactiveClientExpiration,
newNtfServerEnv,
newNtfSubscriber,
newNtfPushServer,
newPushClient,
getPushClient,
newNtfServerClient,
) where
import Control.Concurrent (ThreadId)
import Control.Monad.Except
@@ -8,7 +8,10 @@
{-# LANGUAGE PatternSynonyms #-}
{-# OPTIONS_GHC -fno-warn-ambiguous-fields #-}
module Simplex.Messaging.Notifications.Server.Main where
module Simplex.Messaging.Notifications.Server.Main
( ntfServerCLI,
)
where
import Control.Logger.Simple (setLogLevel)
import Control.Monad ((<$!>))
@@ -4,7 +4,14 @@
{-# LANGUAGE TypeApplications #-}
{-# OPTIONS_GHC -fno-warn-unrecognised-pragmas #-}
module Simplex.Messaging.Notifications.Server.Prometheus where
module Simplex.Messaging.Notifications.Server.Prometheus
( NtfServerMetrics (..),
NtfRealTimeMetrics (..),
NtfSMPWorkerMetrics (..),
NtfSMPSubMetrics (..),
rtsOptionsEnv,
ntfPrometheusMetrics,
) where
import Data.Int (Int64)
import qualified Data.Map.Strict as M
@@ -8,7 +8,20 @@
{-# HLINT ignore "Use newtype instead of data" #-}
module Simplex.Messaging.Notifications.Server.Push.APNS where
module Simplex.Messaging.Notifications.Server.Push.APNS
( PushNotification (..),
APNSNotification (..),
APNSNotificationBody (..),
APNSAlertBody (..),
APNSPushClientConfig (..),
PushProviderError (..),
PushProviderClient,
APNSErrorResponse (..),
apnsProviderHost,
defaultAPNSPushClientConfig,
createAPNSPushClient,
apnsPushProviderClient,
) where
import Control.Exception (Exception)
import Control.Logger.Simple
@@ -1,6 +1,11 @@
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Notifications.Server.Push.APNS.Internal where
module Simplex.Messaging.Notifications.Server.Push.APNS.Internal
( hApnsTopic,
hApnsPushType,
hApnsPriority,
apnsJSONOptions,
) where
import qualified Data.Aeson as J
import qualified Data.CaseInsensitive as CI
@@ -2,7 +2,19 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Notifications.Server.Stats where
module Simplex.Messaging.Notifications.Server.Stats
( NtfServerStats (..),
NtfServerStatsData (..),
StatsByServer,
StatsByServerData (..),
newNtfServerStats,
getNtfServerStatsData,
setNtfServerStats,
getStatsByServer,
setStatsByServer,
incServerStat,
)
where
import Control.Applicative (optional, (<|>))
import Control.Concurrent.STM
@@ -9,7 +9,26 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Simplex.Messaging.Notifications.Server.Store where
module Simplex.Messaging.Notifications.Server.Store
( NtfSTMStore (..),
NtfTknData (..),
NtfSubData (..),
TokenNtfMessageRecord (..),
newNtfSTMStore,
mkNtfTknData,
ntfSubServer,
stmGetNtfTokenIO,
stmAddNtfToken,
stmRemoveInactiveTokenRegistrations,
stmRemoveTokenRegistration,
stmDeleteNtfToken,
stmGetNtfSubscriptionIO,
stmAddNtfSubscription,
stmDeleteNtfSubscription,
stmStoreTokenLastNtf,
stmSetNtfService,
)
where
import Control.Concurrent.STM
import Control.Monad
@@ -2,7 +2,10 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
module Simplex.Messaging.Notifications.Server.Store.Migrations where
module Simplex.Messaging.Notifications.Server.Store.Migrations
( ntfServerMigrations,
)
where
import Data.List (sortOn)
import Data.Text (Text)
@@ -16,7 +16,43 @@
{-# LANGUAGE TypeOperators #-}
{-# OPTIONS_GHC -fno-warn-orphans -fno-warn-ambiguous-fields #-}
module Simplex.Messaging.Notifications.Server.Store.Postgres where
module Simplex.Messaging.Notifications.Server.Store.Postgres
( NtfPostgresStore (..),
NtfEntityRec (..),
mkNtfTknRec,
newNtfDbStore,
closeNtfDbStore,
addNtfToken,
replaceNtfToken,
getNtfToken,
findNtfTokenRegistration,
deleteNtfToken,
updateTknCronInterval,
getUsedSMPServers,
getNtfServiceCredentials,
setNtfServiceCredentials,
updateNtfServiceId,
getServerNtfSubscriptions,
findNtfSubscription,
getNtfSubscription,
mkNtfSubRec,
updateTknStatus,
setTknStatusConfirmed,
setTokenActive,
withPeriodicNtfTokens,
updateTokenCronSentAt,
addNtfSubscription,
deleteNtfSubscription,
updateSubStatus,
updateSrvSubStatus,
batchUpdateSrvSubStatus,
batchUpdateSrvSubErrors,
removeServiceAndAssociations,
addTokenLastNtf,
getEntityCounts,
withDB',
withClientDB,
) where
import qualified Control.Exception as E
import Control.Logger.Simple
@@ -4,7 +4,16 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Simplex.Messaging.Notifications.Server.Store.Types where
module Simplex.Messaging.Notifications.Server.Store.Types
( NtfTknRec (..),
NtfSubRec (..),
ServerNtfSub,
NtfAssociatedService,
mkTknData,
mkTknRec,
mkSubData,
mkSubRec,
) where
import Control.Applicative (optional)
import Control.Concurrent.STM
@@ -7,7 +7,18 @@
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TupleSections #-}
module Simplex.Messaging.Notifications.Transport where
module Simplex.Messaging.Notifications.Transport
( NTFVersion,
VersionRangeNTF,
pattern VersionNTF,
THandleNTF,
invalidReasonNTFVersion,
supportedClientNTFVRange,
supportedServerNTFVRange,
alpnSupportedNTFHandshakes,
ntfServerHandshake,
ntfClientHandshake,
) where
import Control.Monad (forM)
import Control.Monad.Except
+13 -1
View File
@@ -4,7 +4,19 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Notifications.Types where
module Simplex.Messaging.Notifications.Types
( NtfTknAction (..),
NtfToken (..),
NtfSubAction (..),
NtfActionTs,
NtfSubNTFAction (..),
NtfSubSMPAction (..),
NtfAgentSubStatus (..),
NtfSubscription (..),
newNtfToken,
isDeleteNtfSubAction,
newNtfSubscription,
) where
import qualified Data.Attoparsec.ByteString.Char8 as A
import Data.Text.Encoding (decodeLatin1, encodeUtf8)
+14 -1
View File
@@ -4,7 +4,20 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PatternSynonyms #-}
module Simplex.Messaging.Parsers where
module Simplex.Messaging.Parsers
( parse,
parseAll,
parseE,
parseE',
parseRead1,
parseString,
fstToLower,
dropPrefix,
enumJSON,
sumTypeJSON,
defaultJSON,
textP,
) where
import Control.Monad.Trans.Except
import qualified Data.Aeson as J
+1
View File
@@ -219,6 +219,7 @@ module Simplex.Messaging.Protocol
-- * exports for tests
CommandTag (..),
BrokerMsgTag (..),
checkParty,
)
where
+3 -1
View File
@@ -3,7 +3,9 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Protocol.Types where
module Simplex.Messaging.Protocol.Types
( ClientNotice (..),
) where
import qualified Data.Aeson.TH as J
import Data.Int (Int64)
+40 -1
View File
@@ -11,7 +11,46 @@
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Server.CLI where
module Simplex.Messaging.Server.CLI
( SignAlgorithm (..),
X509Config (..),
CertOptions (..),
IniOptions (..),
exitError,
confirmOrExit,
defaultX509Config,
getCliCommand',
simplexmqVersionCommit,
simplexmqCommit,
createServerX509,
createServerX509_,
certOptionsP,
dbOptsP,
startOptionsP,
parseLogLevel,
genOnline,
warnCAPrivateKeyFile,
mkIniOptions,
strictIni,
readStrictIni,
readIniDefault,
iniOnOff,
strDecodeIni,
withPrompt,
onOffPrompt,
onOff,
settingIsOn,
checkSavedFingerprint,
iniTransports,
iniDBOptions,
printServerConfig,
printServerTransports,
printSMPServerConfig,
deleteDirIfExists,
printServiceInfo,
clearDirIfExists,
getEnvPath,
) where
import Control.Logger.Simple (LogLevel (..))
import Control.Monad
+4 -1
View File
@@ -1,7 +1,10 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Server.Control where
module Simplex.Messaging.Server.Control
( CPClientRole (..),
ControlProtocol (..),
) where
import qualified Data.Attoparsec.ByteString.Char8 as A
import Simplex.Messaging.Encoding.String
+5 -1
View File
@@ -1,6 +1,10 @@
{-# LANGUAGE NamedFieldPuns #-}
module Simplex.Messaging.Server.Expiration where
module Simplex.Messaging.Server.Expiration
( ExpirationConfig (..),
expireBeforeEpoch,
showTTL,
) where
import Control.Monad.IO.Class
import Data.Int (Int64)
+13 -1
View File
@@ -6,7 +6,19 @@
{-# LANGUAGE StrictData #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Server.Information where
module Simplex.Messaging.Server.Information
( ServerInformation (..),
ServerPublicConfig (..),
ServerPublicInfo (..),
ServerPersistenceMode (..),
ServerConditions (..),
HostingType (..),
Entity (..),
ServerContactAddress (..),
PGPKey (..),
emptyServerInfo,
hasServerInfo,
) where
import Data.Aeson (FromJSON (..), ToJSON (..))
import qualified Data.Aeson.TH as J
+23 -1
View File
@@ -15,7 +15,29 @@
{-# LANGUAGE TypeApplications #-}
{-# OPTIONS_GHC -fno-warn-ambiguous-fields #-}
module Simplex.Messaging.Server.Main where
module Simplex.Messaging.Server.Main
( EmbeddedWebParams (..),
WebHttpsParams (..),
CliCommand (..),
StoreCmd (..),
DatabaseTable (..),
smpServerCLI,
smpServerCLI_,
#if defined(dbServerPostgres)
importStoreLogToDatabase,
importMessagesToDatabase,
exportDatabaseToStoreLog,
#endif
newJournalMsgStore,
storeMsgsJournalDir',
getServerSourceCode,
simplexmqSource,
serverPublicInfo,
validCountryValue,
printSourceCode,
cliCommandP,
strParse,
) where
import Control.Concurrent.STM
import Control.Exception (finally)
@@ -1,7 +1,9 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Server.Main.GitCommit where
module Simplex.Messaging.Server.Main.GitCommit
( gitCommit,
) where
import Language.Haskell.TH
import System.Process
+12 -1
View File
@@ -2,7 +2,18 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Server.Main.Init where
module Simplex.Messaging.Server.Main.Init
( InitOptions (..),
ServerPassword (..),
defaultControlPort,
defaultDBOpts,
defaultDeletedTTL,
iniFileContent,
informationIniContent,
iniDbOpts,
optDisabled,
optDisabled',
) where
import Data.Int (Int64)
import qualified Data.List.NonEmpty as L
+3 -1
View File
@@ -1,7 +1,9 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Server.MsgStore where
module Simplex.Messaging.Server.MsgStore
( MsgLogRecord (..),
) where
import Simplex.Messaging.Encoding.String
import Simplex.Messaging.Protocol (Message (..), RecipientId)
+19 -1
View File
@@ -15,7 +15,25 @@
{-# HLINT ignore "Redundant multi-way if" #-}
module Simplex.Messaging.Server.MsgStore.Types where
module Simplex.Messaging.Server.MsgStore.Types
( MsgStoreClass (..),
MSType (..),
QSType (..),
SMSType (..),
SQSType (..),
MessageStats (..),
LoadedQueueCounts (..),
newMessageStats,
addQueue,
getQueue,
getQueueRec,
getQueues,
getQueueRecs,
readQueueRec,
withPeekMsgQueue,
expireQueueMsgs,
deleteExpireMsgs_,
) where
import Control.Concurrent.STM
import Control.Monad
+8 -1
View File
@@ -3,7 +3,14 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Server.NtfStore where
module Simplex.Messaging.Server.NtfStore
( NtfStore (..),
MsgNtf (..),
NtfLogRecord (..),
storeNtf,
deleteNtfs,
deleteExpiredNtfs,
) where
import Control.Concurrent.STM
import Control.Monad (foldM)
+7 -1
View File
@@ -3,7 +3,13 @@
{-# LANGUAGE TypeApplications #-}
{-# OPTIONS_GHC -fno-warn-unrecognised-pragmas #-}
module Simplex.Messaging.Server.Prometheus where
module Simplex.Messaging.Server.Prometheus
( ServerMetrics (..),
RealTimeMetrics (..),
RTSubscriberMetrics (..),
rtsOptionsEnv,
prometheusMetrics,
) where
import Data.Int (Int64)
import qualified Data.IntMap.Strict as IM
+7 -1
View File
@@ -7,7 +7,13 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Server.QueueStore where
module Simplex.Messaging.Server.QueueStore
( QueueRec (..),
NtfCreds (..),
ServiceRec (..),
CertFingerprint,
ServerEntityStatus (..),
) where
import Control.Applicative (optional, (<|>))
import qualified Data.ByteString.Char8 as B
@@ -1,4 +1,6 @@
module Simplex.Messaging.Server.QueueStore.Postgres.Config where
module Simplex.Messaging.Server.QueueStore.Postgres.Config
( PostgresStoreCfg (..),
) where
import Data.Int (Int64)
import Simplex.Messaging.Agent.Store.Postgres.Options (DBOpts)
@@ -2,7 +2,10 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
module Simplex.Messaging.Server.QueueStore.Postgres.Migrations where
module Simplex.Messaging.Server.QueueStore.Postgres.Migrations
( serverMigrations,
)
where
import Data.List (sortOn)
import Data.Text (Text)
@@ -3,7 +3,14 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Server.QueueStore.QueueInfo where
module Simplex.Messaging.Server.QueueStore.QueueInfo
( QueueInfo (..),
QSub (..),
QSubThread (..),
MsgInfo (..),
MsgType (..),
QueueMode (..),
) where
import qualified Data.Aeson as J
import qualified Data.Aeson.TH as JQ
@@ -5,7 +5,12 @@
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeFamilyDependencies #-}
module Simplex.Messaging.Server.QueueStore.Types where
module Simplex.Messaging.Server.QueueStore.Types
( StoreQueueClass (..),
QueueStoreClass (..),
EntityCounts (..),
withLoadedQueues,
) where
import Control.Concurrent.STM
import Control.Monad
+34 -1
View File
@@ -6,7 +6,40 @@
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Server.Stats where
module Simplex.Messaging.Server.Stats
( ServerStats (..),
ServerStatsData (..),
PeriodStats (..),
PeriodStatsData (..),
PeriodStatCounts (..),
ProxyStats (..),
ProxyStatsData (..),
ServiceStats (..),
ServiceStatsData (..),
TimeBuckets (..),
newServerStats,
getServerStatsData,
setServerStats,
newPeriodStats,
newPeriodStatsData,
getPeriodStatsData,
setPeriodStats,
periodStatDataCounts,
periodStatCounts,
updatePeriodStats,
newProxyStats,
newProxyStatsData,
getProxyStatsData,
getResetProxyStatsData,
setProxyStats,
newServiceStatsData,
newServiceStats,
getServiceStatsData,
getResetServiceStatsData,
setServiceStats,
emptyTimeBuckets,
updateTimeBuckets,
) where
import Control.Applicative (optional, (<|>))
import qualified Data.Attoparsec.ByteString.Char8 as A
@@ -7,7 +7,10 @@
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Server.StoreLog.ReadWrite where
module Simplex.Messaging.Server.StoreLog.ReadWrite
( writeQueueStore,
readQueueStore,
) where
import Control.Concurrent.STM
import Control.Logger.Simple
@@ -2,7 +2,9 @@
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
module Simplex.Messaging.Server.StoreLog.Types where
module Simplex.Messaging.Server.StoreLog.Types
( StoreLog (..),
) where
import System.IO (Handle, IOMode (..))
+5 -1
@@ -1,7 +1,11 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.ServiceScheme where
module Simplex.Messaging.ServiceScheme
( ServiceScheme (..),
SrvLoc (..),
simplexChat,
) where
import Control.Applicative ((<|>))
import qualified Data.Attoparsec.ByteString.Char8 as A
+6 -1
@@ -2,7 +2,12 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Simplex.Messaging.Session where
module Simplex.Messaging.Session
( SessionVar (..),
getSessVar,
removeSessVar,
tryReadSessVar,
) where
import Control.Concurrent.STM
import Data.Time (UTCTime)
+9 -1
@@ -6,7 +6,15 @@
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.SystemTime where
module Simplex.Messaging.SystemTime
( RoundedSystemTime (..),
SystemDate,
SystemSeconds,
getRoundedSystemTime,
getSystemDate,
getSystemSeconds,
roundedToUTCTime,
) where
import Data.Aeson (FromJSON, ToJSON)
import Data.Int (Int64)
+9 -1
@@ -2,7 +2,15 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Transport.Buffer where
module Simplex.Messaging.Transport.Buffer
( TBuffer (..),
newTBuffer,
peekBuffered,
getBuffered,
withTimedErr,
getLnBuffered,
trimCR,
) where
import Control.Concurrent.STM
import qualified Control.Exception as E
+9 -1
@@ -1,7 +1,15 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Transport.HTTP2 where
module Simplex.Messaging.Transport.HTTP2
( HTTP2Body (..),
defaultHTTP2BufferSize,
withHTTP2,
http2TLSParams,
getHTTP2Body,
httpALPN,
httpALPN11,
) where
import qualified Control.Exception as E
import Data.ByteString.Char8 (ByteString)
@@ -8,7 +8,19 @@
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
module Simplex.Messaging.Transport.HTTP2.Client where
module Simplex.Messaging.Transport.HTTP2.Client
( HTTP2Client (..),
HClient (..),
HTTP2Response (..),
HTTP2ClientConfig (..),
HTTP2ClientError (..),
defaultHTTP2ClientConfig,
getHTTP2Client,
getVerifiedHTTP2Client,
attachHTTP2Client,
closeHTTP2Client,
sendRequest,
) where
import Control.Concurrent.Async
import Control.Exception (Handler (..), IOException, SomeAsyncException, SomeException)
@@ -1,6 +1,11 @@
{-# LANGUAGE MultiWayIf #-}
module Simplex.Messaging.Transport.HTTP2.File where
module Simplex.Messaging.Transport.HTTP2.File
( fileBlockSize,
hReceiveFile,
hSendFile,
getFileChunk,
) where
import Data.ByteString (ByteString)
import qualified Data.ByteString as B
+20 -11
@@ -16,7 +16,7 @@ import Numeric.Natural (Natural)
import Simplex.Messaging.Server.Expiration
import Simplex.Messaging.Transport (ALPN, SessionId, TLS, closeConnection, tlsALPN, tlsUniq)
import Simplex.Messaging.Transport.HTTP2
import Simplex.Messaging.Transport.Server (ServerCredentials, TransportServerConfig (..), loadServerCredential, runTransportServer)
import Simplex.Messaging.Transport.Server (SNICredentialUsed, ServerCredentials, TLSServerCredential (..), TransportServerConfig (..), loadServerCredential, newSocketState, runTransportServerState_)
import Simplex.Messaging.Util (threadDelay')
import UnliftIO (finally)
import UnliftIO.Concurrent (forkIO, killThread)
@@ -54,7 +54,7 @@ getHTTP2Server HTTP2ServerConfig {qSize, http2Port, bufferSize, bodyHeadSize, se
started <- newEmptyTMVarIO
reqQ <- newTBQueueIO qSize
action <- async $
runHTTP2Server started http2Port bufferSize serverSupported srvCreds transportConfig Nothing (const $ pure ()) $ \sessionId sessionALPN r sendResponse -> do
runHTTP2Server started http2Port bufferSize serverSupported srvCreds Nothing transportConfig Nothing (const $ pure ()) $ \_sniUsed sessionId sessionALPN r sendResponse -> do
reqBody <- getHTTP2Body r bodyHeadSize
atomically $ writeTBQueue reqQ HTTP2Request {sessionId, sessionALPN, request = r, reqBody, sendResponse}
void . atomically $ takeTMVar started
@@ -63,24 +63,33 @@ getHTTP2Server HTTP2ServerConfig {qSize, http2Port, bufferSize, bodyHeadSize, se
closeHTTP2Server :: HTTP2Server -> IO ()
closeHTTP2Server = uninterruptibleCancel . action
runHTTP2Server :: TMVar Bool -> ServiceName -> BufferSize -> T.Supported -> T.Credential -> TransportServerConfig -> Maybe ExpirationConfig -> (SessionId -> IO ()) -> HTTP2ServerFunc -> IO ()
runHTTP2Server started port bufferSize srvSupported srvCreds transportConfig expCfg_ clientFinished = runHTTP2ServerWith_ expCfg_ clientFinished bufferSize setup
runHTTP2Server :: TMVar Bool -> ServiceName -> BufferSize -> T.Supported -> T.Credential -> Maybe T.Credential -> TransportServerConfig -> Maybe ExpirationConfig -> (SessionId -> IO ()) -> (SNICredentialUsed -> HTTP2ServerFunc) -> IO ()
runHTTP2Server started port bufferSize srvSupported srvCreds httpCreds_ transportConfig expCfg_ clientFinished = runHTTP2ServerWith_ expCfg_ clientFinished bufferSize setup
where
setup = runTransportServer started port srvSupported srvCreds transportConfig
setup handler = do
ss <- newSocketState
let combinedCreds = TLSServerCredential {credential = srvCreds, sniCredential = httpCreds_}
runTransportServerState_ ss started port srvSupported combinedCreds transportConfig $ \_ -> handler
-- HTTP2 server can be run on both client and server TLS connections.
runHTTP2ServerWith :: BufferSize -> ((TLS p -> IO ()) -> a) -> HTTP2ServerFunc -> a
runHTTP2ServerWith = runHTTP2ServerWith_ Nothing (\_sessId -> pure ())
runHTTP2ServerWith bufferSize tlsSetup http2Server =
runHTTP2ServerWith_
Nothing
(\_sessId -> pure ())
bufferSize
(\handler -> tlsSetup $ \tls -> handler (False, tls))
(const http2Server)
runHTTP2ServerWith_ :: Maybe ExpirationConfig -> (SessionId -> IO ()) -> BufferSize -> ((TLS p -> IO ()) -> a) -> HTTP2ServerFunc -> a
runHTTP2ServerWith_ expCfg_ clientFinished bufferSize setup http2Server = setup $ \tls -> do
runHTTP2ServerWith_ :: Maybe ExpirationConfig -> (SessionId -> IO ()) -> BufferSize -> (((SNICredentialUsed, TLS p) -> IO ()) -> a) -> (SNICredentialUsed -> HTTP2ServerFunc) -> a
runHTTP2ServerWith_ expCfg_ clientFinished bufferSize setup http2Server = setup $ \(sniUsed, tls) -> do
activeAt <- newTVarIO =<< getSystemTime
tid_ <- mapM (forkIO . expireInactiveClient tls activeAt) expCfg_
withHTTP2 bufferSize (run tls activeAt) (clientFinished $ tlsUniq tls) tls `finally` mapM_ killThread tid_
withHTTP2 bufferSize (run sniUsed tls activeAt) (clientFinished $ tlsUniq tls) tls `finally` mapM_ killThread tid_
where
run tls activeAt cfg = H.run cfg $ \req _aux sendResp -> do
run sniUsed tls activeAt cfg = H.run cfg $ \req _aux sendResp -> do
getSystemTime >>= atomically . writeTVar activeAt
http2Server (tlsUniq tls) (tlsALPN tls) req (`sendResp` [])
http2Server sniUsed (tlsUniq tls) (tlsALPN tls) req (`sendResp` [])
expireInactiveClient tls activeAt expCfg = loop
where
loop = do
+5 -1
@@ -4,7 +4,11 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE TemplateHaskell #-}
module Simplex.Messaging.Transport.KeepAlive where
module Simplex.Messaging.Transport.KeepAlive
( KeepAliveOpts (..),
defaultKeepAliveOpts,
setSocketKeepAlive,
) where
import qualified Data.Aeson.TH as J
import Foreign.C (CInt (..))
+7 -3
@@ -11,6 +11,7 @@ module Simplex.Messaging.Transport.Server
( TransportServerConfig (..),
ServerCredentials (..),
TLSServerCredential (..),
SNICredentialUsed,
AddHTTP,
mkTransportServerConfig,
runTransportServerState,
@@ -62,6 +63,7 @@ data TransportServerConfig = TransportServerConfig
{ logTLSErrors :: Bool,
serverALPN :: Maybe [ALPN],
askClientCert :: Bool,
addCORSHeaders :: Bool,
tlsSetupTimeout :: Int,
transportTimeout :: Int
}
@@ -91,6 +93,7 @@ mkTransportServerConfig logTLSErrors serverALPN askClientCert =
{ logTLSErrors,
serverALPN,
askClientCert,
addCORSHeaders = False,
tlsSetupTimeout = 60000000,
transportTimeout = 40000000
}
@@ -274,9 +277,10 @@ paramsAskClientCert clientCert params =
{ T.serverWantClientCert = True,
T.serverHooks =
(T.serverHooks params)
{ T.onClientCertificate = \cc -> validateClientCertificate cc >>= \case
Just reason -> T.CertificateUsageReject reason <$ atomically (tryPutTMVar clientCert Nothing)
Nothing -> T.CertificateUsageAccept <$ atomically (tryPutTMVar clientCert $ Just cc)
{ T.onClientCertificate = \cc ->
validateClientCertificate cc >>= \case
Just reason -> T.CertificateUsageReject reason <$ atomically (tryPutTMVar clientCert Nothing)
Nothing -> T.CertificateUsageAccept <$ atomically (tryPutTMVar clientCert $ Just cc)
}
}
+6 -1
@@ -2,7 +2,12 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module Simplex.Messaging.Transport.Shared where
module Simplex.Messaging.Transport.Shared
( ChainCertificates (..),
chainIdCaCerts,
x509validate,
takePeerCertChain,
) where
import Control.Concurrent.STM
import qualified Control.Exception as E
+131 -19
@@ -3,8 +3,70 @@
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Simplex.Messaging.Util where
module Simplex.Messaging.Util
( AnyError (..),
(<$?>),
($>>),
(<$$),
(<$$>),
raceAny_,
bshow,
tshow,
maybeWord,
liftError,
liftError',
liftEitherWith,
ifM,
whenM,
unlessM,
anyM,
($>>=),
mapME,
bindRight,
forME,
mapAccumLM,
packZipWith,
tryWriteTBQueue,
catchAll,
catchAll_,
tryAllErrors,
tryAllErrors',
catchAllErrors,
catchAllErrors',
catchThrow,
allFinally,
isOwnException,
isAsyncCancellation,
catchOwn',
catchOwn,
tryAllOwnErrors,
tryAllOwnErrors',
catchAllOwnErrors,
catchAllOwnErrors',
eitherToMaybe,
listToEither,
firstRow,
maybeFirstRow,
maybeFirstRow',
firstRow',
groupOn,
groupOn',
eqOn,
groupAllOn,
toChunks,
safeDecodeUtf8,
timeoutThrow,
threadDelay',
diffToMicroseconds,
diffToMilliseconds,
labelMyThread,
atomicModifyIORef'_,
encodeJSON,
decodeJSON,
traverseWithKey_,
) where
import Control.Exception (AllocationLimitExceeded (..), AsyncException (..))
import qualified Control.Exception as E
import Control.Monad
import Control.Monad.Except
@@ -23,9 +85,9 @@ import Data.Int (Int64)
import Data.List (groupBy, sortOn)
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as L
import Data.Maybe (listToMaybe)
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
import Data.Maybe (listToMaybe)
import Data.Text (Text)
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8With, encodeUtf8)
@@ -98,7 +160,7 @@ anyM :: Monad m => [m Bool] -> m Bool
anyM = foldM (\r a -> if r then pure r else (r ||) <$!> a) False
{-# INLINE anyM #-}
infixl 1 $>>, $>>=
infixl 1 $>>, $>>=
($>>=) :: (Monad m, Monad f, Traversable f) => m (f a) -> (a -> m (f b)) -> m (f b)
f $>>= g = f >>= fmap join . mapM g
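A minimal standalone sketch (not part of the diff) of how `($>>=)` as defined above behaves: it binds through a `Traversable` layer (here `Maybe`) inside an outer monad (here `IO`), short-circuiting once the inner layer is empty. `lookupUser` and `lookupEmail` are hypothetical helpers for illustration only.

```haskell
import Control.Monad (join)

-- Definition copied from the hunk above: bind through a Traversable layer.
($>>=) :: (Monad m, Monad f, Traversable f) => m (f a) -> (a -> m (f b)) -> m (f b)
f $>>= g = f >>= fmap join . mapM g

infixl 1 $>>=

-- Hypothetical IO actions, each of which may fail with Nothing.
lookupUser :: Int -> IO (Maybe String)
lookupUser 1 = pure (Just "alice")
lookupUser _ = pure Nothing

lookupEmail :: String -> IO (Maybe String)
lookupEmail "alice" = pure (Just "alice@example.com")
lookupEmail _ = pure Nothing

main :: IO ()
main = do
  r1 <- lookupUser 1 $>>= lookupEmail
  print r1 -- Just "alice@example.com"
  r2 <- lookupUser 2 $>>= lookupEmail
  print r2 -- Nothing: the Maybe layer short-circuits, lookupEmail never runs
```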
@@ -120,15 +182,19 @@ forME :: (Monad m, Traversable t) => t (Either e a) -> (a -> m (Either e b)) ->
forME = flip mapME
{-# INLINE forME #-}
-- | Monadic version of mapAccumL
-- Copied from the ghc-9.12.1 package: https://hackage.haskell.org/package/ghc-9.12.1/docs/GHC-Utils-Monad.html#v:mapAccumLM
-- for backward compatibility with 8.10.7.
mapAccumLM :: (Monad m, Traversable t)
=> (acc -> x -> m (acc, y)) -- ^ combining function
-> acc -- ^ initial state
-> t x -- ^ inputs
-> m (acc, t y) -- ^ final state, outputs
mapAccumLM ::
(Monad m, Traversable t) =>
-- | combining function
(acc -> x -> m (acc, y)) ->
-- | initial state
acc ->
-- | inputs
t x ->
-- | final state, outputs
m (acc, t y)
{-# INLINE [1] mapAccumLM #-}
-- INLINE pragma. mapAccumLM is called in inner loops. Like 'map',
-- we inline it so that we can take advantage of knowing 'f'.
@@ -137,26 +203,31 @@ mapAccumLM :: (Monad m, Traversable t)
mapAccumLM f s = fmap swap . flip runStateT s . traverse f'
where
f' = StateT . (fmap . fmap) swap . flip f
{-# RULES "mapAccumLM/List" mapAccumLM = mapAccumLM_List #-}
{-# RULES "mapAccumLM/NonEmpty" mapAccumLM = mapAccumLM_NonEmpty #-}
mapAccumLM_List
:: Monad m
=> (acc -> x -> m (acc, y))
-> acc -> [x] -> m (acc, [y])
mapAccumLM_List ::
Monad m =>
(acc -> x -> m (acc, y)) ->
acc ->
[x] ->
m (acc, [y])
{-# INLINE mapAccumLM_List #-}
mapAccumLM_List f = go
where
go s (x : xs) = do
(s1, x') <- f s x
(s1, x') <- f s x
(s2, xs') <- go s1 xs
return (s2, x' : xs')
return (s2, x' : xs')
go s [] = return (s, [])
mapAccumLM_NonEmpty
:: Monad m
=> (acc -> x -> m (acc, y))
-> acc -> NonEmpty x -> m (acc, NonEmpty y)
mapAccumLM_NonEmpty ::
Monad m =>
(acc -> x -> m (acc, y)) ->
acc ->
NonEmpty x ->
m (acc, NonEmpty y)
{-# INLINE mapAccumLM_NonEmpty #-}
mapAccumLM_NonEmpty f s (x :| xs) =
[(s2, x' :| xs') | (s1, x') <- f s x, (s2, xs') <- mapAccumLM_List f s1 xs]
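A standalone sketch (not part of the diff) of `mapAccumLM`: like `mapAccumL`, it threads an accumulator left to right, but the combining function runs in a monad. The list specialization below mirrors `mapAccumLM_List` from the hunk above; the prefix-sum example is illustrative only.

```haskell
-- Mirrors mapAccumLM_List from the diff, specialized to lists.
mapAccumLM :: Monad m => (acc -> x -> m (acc, y)) -> acc -> [x] -> m (acc, [y])
mapAccumLM f = go
  where
    go s (x : xs) = do
      (s1, y) <- f s x     -- run the monadic step, get new accumulator
      (s2, ys) <- go s1 xs -- thread it through the rest of the list
      pure (s2, y : ys)
    go s [] = pure (s, [])

main :: IO ()
main = do
  -- accumulator = running total; each output element is the prefix sum so far
  (total, sums) <- mapAccumLM (\acc x -> pure (acc + x, acc + x)) 0 [1, 2, 3 :: Int]
  print (total, sums) -- (6,[1,3,6])
```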
@@ -223,6 +294,47 @@ allFinally :: (AnyError e, MonadUnliftIO m) => ExceptT e m a -> ExceptT e m b ->
allFinally action final = tryAllErrors action >>= \r -> final >> except r
{-# INLINE allFinally #-}
isOwnException :: E.SomeException -> Bool
isOwnException e = case E.fromException e of
Just StackOverflow -> True
Just HeapOverflow -> True
_ -> case E.fromException e of
Just AllocationLimitExceeded -> True
_ -> False
{-# INLINE isOwnException #-}
isAsyncCancellation :: E.SomeException -> Bool
isAsyncCancellation e = case E.fromException e of
Just (_ :: SomeAsyncException) -> not $ isOwnException e
Nothing -> False
{-# INLINE isAsyncCancellation #-}
catchOwn' :: IO a -> (E.SomeException -> IO a) -> IO a
catchOwn' action handleInternal = action `E.catch` \e -> if isAsyncCancellation e then E.throwIO e else handleInternal e
{-# INLINE catchOwn' #-}
catchOwn :: MonadUnliftIO m => m a -> (E.SomeException -> m a) -> m a
catchOwn action handleInternal =
withRunInIO $ \run ->
run action `E.catch` \e -> if isAsyncCancellation e then E.throwIO e else run (handleInternal e)
{-# INLINE catchOwn #-}
tryAllOwnErrors :: (AnyError e, MonadUnliftIO m) => ExceptT e m a -> ExceptT e m (Either e a)
tryAllOwnErrors action = ExceptT $ Right <$> runExceptT action `catchOwn` (pure . Left . fromSomeException)
{-# INLINE tryAllOwnErrors #-}
tryAllOwnErrors' :: (AnyError e, MonadUnliftIO m) => ExceptT e m a -> m (Either e a)
tryAllOwnErrors' action = runExceptT action `catchOwn` (pure . Left . fromSomeException)
{-# INLINE tryAllOwnErrors' #-}
catchAllOwnErrors :: (AnyError e, MonadUnliftIO m) => ExceptT e m a -> (e -> ExceptT e m a) -> ExceptT e m a
catchAllOwnErrors action handler = tryAllOwnErrors action >>= either handler pure
{-# INLINE catchAllOwnErrors #-}
catchAllOwnErrors' :: (AnyError e, MonadUnliftIO m) => ExceptT e m a -> (e -> m a) -> m a
catchAllOwnErrors' action handler = tryAllOwnErrors' action >>= either handler pure
{-# INLINE catchAllOwnErrors' #-}
eitherToMaybe :: Either a b -> Maybe b
eitherToMaybe = either (const Nothing) Just
{-# INLINE eitherToMaybe #-}
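A standalone sketch (not part of the diff) of the exception classification added above: `StackOverflow`, `HeapOverflow`, and `AllocationLimitExceeded` are treated as the thread's "own" recoverable exceptions, while any other asynchronous exception (e.g. `ThreadKilled`) counts as cancellation, which `catchOwn` re-throws rather than handles. Definitions are copied from the hunk; only `main` is illustrative.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception

-- GHC resource-exhaustion exceptions the handler is allowed to absorb.
isOwnException :: SomeException -> Bool
isOwnException e = case fromException e of
  Just StackOverflow -> True
  Just HeapOverflow -> True
  _ -> case fromException e of
    Just AllocationLimitExceeded -> True
    _ -> False

-- Async exceptions that are NOT resource exhaustion are real cancellation.
isAsyncCancellation :: SomeException -> Bool
isAsyncCancellation e = case fromException e of
  Just (_ :: SomeAsyncException) -> not (isOwnException e)
  Nothing -> False

main :: IO ()
main = do
  print (isAsyncCancellation (toException ThreadKilled))  -- True: cancellation, must be re-thrown
  print (isAsyncCancellation (toException StackOverflow)) -- False: own exception, may be handled
  print (isAsyncCancellation (toException DivideByZero))  -- False: synchronous, not async at all
```

This relies on `AsyncException` and `AllocationLimitExceeded` being wrapped via `SomeAsyncException` by their `Exception` instances in base, which is why the `fromException` probe distinguishes them from synchronous exceptions like `ArithException`.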
+3 -1
@@ -1,4 +1,6 @@
module Simplex.Messaging.Version.Internal where
module Simplex.Messaging.Version.Internal
( Version (..),
) where
import Data.Aeson (FromJSON (..), ToJSON (..))
import Data.Word (Word16)
Some files were not shown because too many files have changed in this diff.