Files
simplexmq/xftp-web/web/crypto.worker.ts
Evgeny f6aca47604 xftp: implementation of XFTP client as web page (#1708)
* xftp: implementation of XFTP client as web page (rfc, low level functions)

* protocol, file descriptions, more cryptography, handshake encoding, etc.

* xftp server changes to support web clients: SNI-based certificate choice, CORS headers, OPTIONS request

* web handshake

* test for xftp web handshake

* xftp-web client functions, fix transmission encoding

* support description "redirect" in agent.ts and cross-platform compatibility tests (Haskell <> TypeScript)

* rfc: web transport

* client transport abstraction

* browser environment

* persistent client sessions

* move rfcs

* web page plan

* improve plan

* webpage implementation (not tested)

* fix test

* fix test 2

* fix test 3

* fixes and page test plan

* allow sending xftp client hello after handshake - for web clients that don't know if an established connection exists

* page tests pass

* concurrent and padded hellos in the server

* update TS client to pad hellos

* fix tests

* preview:local

* local preview over https

* fixed https in the test page

* web test cert fixtures

* debug logging in web page and server

* remove debug logging in server/browser, run preview xftp server via cabal run to ensure the latest code is used

* debug logging for page sessions

* add plan

* improve error handling, handle browser reconnections/re-handshake

* fix

* debugging

* opfs fallback

* delete test screenshot

* xftp CLI to support link

* fix encoding for XFTPServerHandshake

* support redirect file descriptions in xftp CLI receive

* refactor CLI redirect

* xftp-web: fixes and multi-server upload (#1714)

* fix: await sodium.ready in crypto/keys.ts (+ digest.ts StateAddress cast)

* multi-server parallel upload, remove pickRandomServer

* fix worker message race: wait for ready signal before posting messages

* suppress vite build warnings: emptyOutDir, externals, chunkSizeWarningLimit

* fix Haskell web tests: use agent+server API, wrap server in array, suppress debug logs

* remove dead APIs: un-export connectXFTP, delete closeXFTP
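The multi-server parallel upload mentioned above can be sketched as follows. This is a minimal illustration of the concurrency pattern only — `uploadChunk` is a hypothetical helper, not the real xftp-web API, and the real client's chunk-to-server assignment may differ:

```typescript
// Sketch: distribute chunks round-robin across servers and upload them
// concurrently with Promise.all. uploadChunk is an assumed callback.
async function uploadAll(
  servers: string[],
  chunks: Uint8Array[],
  uploadChunk: (server: string, chunk: Uint8Array) => Promise<void>
): Promise<void> {
  // Each chunk i goes to server i mod N; all uploads run in parallel.
  await Promise.all(chunks.map((c, i) => uploadChunk(servers[i % servers.length], c)))
}
```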

* fix TypeScript errors in check:web (#1716)

- client.ts: cast globalThis.process to any for browser tsconfig,
  suppress node:http2 import, use any for Buffer/chunks, cast fetch body
- crypto.worker.ts: cast sha512_init() return to StateAddress

* fix: serialize worker message processing to prevent OPFS handle race

async onmessage allows interleaved execution at await points.
When downloadFileRaw fetches chunks from multiple servers in parallel,
concurrent handleDecryptAndStore calls both see downloadWriteHandle
as null and race on createSyncAccessHandle for the same file,
causing intermittent NoModificationAllowedError.

Chain message handlers on a promise queue so each runs to completion
before the next starts.
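A minimal sketch of this serialization pattern — wrapping an async handler so each invocation runs to completion before the next starts, even when it suspends at await points (names here are illustrative):

```typescript
// Serialize async message handling: chain every invocation onto the tail
// of a promise queue so handlers never interleave at await points.
type AsyncHandler<T> = (msg: T) => Promise<void>

function serialized<T>(handler: AsyncHandler<T>): (msg: T) => void {
  let queue: Promise<void> = Promise.resolve()
  return (msg) => {
    // Swallow errors here so one failing message does not block the queue;
    // a real worker would report the error back instead.
    queue = queue.then(() => handler(msg)).catch(() => {})
  }
}
```

The worker's actual `onmessage` additionally awaits its init promise and reports failures back to the main thread via `postMessage` rather than swallowing them.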

* xftp-web: prepare for npm publishing (#1715)

* prepare package.json for npm publishing

Remove private flag, add description/license/repository/publishConfig,
rename postinstall to pretest, add prepublishOnly, set files and main.

* stable output filenames in production build

* fix repository url format, expand files array

* embeddable component: scoped CSS, dark mode, i18n, events, share

- worker output to assets/ for single-directory deployment
- scoped all CSS under #app, removed global resets
- dark mode via .dark ancestor class
- progress ring reads colors from CSS custom properties
- i18n via window.__XFTP_I18N__ with t() helper
- configurable mount element via data-xftp-app attribute
- optional hashchange listener (data-no-hashchange)
- completion events: xftp:upload-complete, xftp:download-complete
- enhanced file-too-large error mentioning SimpleX app
- native share button via navigator.share
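An embedding page could react to the completion events named above roughly like this. The event names come from the commit message; the payload shape (`CustomEvent.detail`) is an assumption and may differ in the real widget:

```typescript
// Hypothetical embedder hook-up for the widget's completion events.
// The target is injectable so the helper can be exercised outside a browser.
function onXftpComplete(target: EventTarget, cb: (kind: string, detail: unknown) => void): void {
  for (const kind of ['xftp:upload-complete', 'xftp:download-complete']) {
    // detail is assumed to carry event data; it may be undefined.
    target.addEventListener(kind, (e) => cb(kind, (e as CustomEvent).detail))
  }
}
// In a page: onXftpComplete(window, (kind, detail) => { /* update UI */ })
```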

* deferred init and runtime server configuration

- data-defer-init attribute skips auto-initialization
- window.__XFTP_SERVERS__ overrides baked-in server list

* use relative base path for relocatable build output

* xftp-web: retry resets to default state, use innerHTML for errors

* xftp-web: only enter download mode for valid XFTP URIs in hash

* xftp-web: render UI before WASM is ready

Move sodium.ready await after UI initialization so the upload/download
interface appears instantly. WASM is only needed when user triggers
an actual upload or download. Dispatch xftp:ready event once WASM loads.

* xftp-web: CLS placeholder HTML and embedder CSS selectors

Add placeholder HTML to index.html so the page renders a styled card
before JS executes, preventing layout shift. Use a <template> element
with an inline script to swap to the download placeholder when the URL
hash indicates a file download. Auto-compute CSP SHA-256 hashes for
inline scripts in the vite build plugin.

Change all CSS selectors from #app to :is(#app, [data-xftp-app]) so
styles apply when the widget is embedded with data-xftp-app attribute.
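The CSP hash computation mentioned above follows the standard hash-source format: base64 of the SHA-256 of the exact inline script text, prefixed with `sha256-`. A sketch using Node's crypto module (the real build does this inside a vite plugin):

```typescript
import {createHash} from 'node:crypto'

// Compute a CSP hash-source for an inline script. The hash must cover the
// script's exact text, whitespace included, or browsers will reject it.
function cspHash(inlineScript: string): string {
  const digest = createHash('sha256').update(inlineScript, 'utf8').digest('base64')
  return `'sha256-${digest}'`
}
// Usage: append the result to the script-src directive of the CSP header.
```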

* xftp-web: progress ring overhaul

Rewrite progress ring with smooth lerp animation, green checkmark on
completion, theme reactivity via MutationObserver, and per-phase color
variables (encrypt/upload/download/decrypt).

Show honest per-phase progress: each phase animates 0-100% independently
with a ring color change between phases. Add decrypt progress callback
from the web worker so the decryption phase tracks real chunk processing
instead of showing an indeterminate spinner.

Snap immediately on phase reset (0) and completion (1) to avoid
lingering partial progress. Clean up animation and observers via
destroy() in finally blocks.

* xftp-web: single progress ring for upload, simplify ring color

* xftp-web: single progress ring for download

* feat(xftp-web): granular progress for encrypt/decrypt phases

Add byte-level progress callbacks to encryptFile, decryptChunks,
and sha512Streaming by processing data in 256KB segments. Worker
reports fine-grained progress across all phases (encrypt+hash+write
for upload, read+hash+decrypt for download). Progress ring gains
fillTo method for smooth ease-out animation during minimum display
delays. Encrypt/decrypt phases fill their weighted regions (0-15%
and 85-99%) with real callbacks, with fillTo covering remaining
time when work finishes under the 1s minimum for files >= 100KB.
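The segmented-progress technique described above can be shown in generic form: process a byte array in fixed-size segments and report bytes done after each one. The 256 KiB size matches the commit message; the `step` callback stands in for the real encrypt/hash/decrypt work:

```typescript
const SEG = 256 * 1024 // segment size in bytes

// Run `step` over `data` one segment at a time, reporting progress after
// each segment so the UI can animate byte-level progress.
function processWithProgress(
  data: Uint8Array,
  step: (segment: Uint8Array) => void,
  onProgress: (done: number, total: number) => void
): void {
  for (let off = 0; off < data.length; off += SEG) {
    const end = Math.min(off + SEG, data.length)
    step(data.subarray(off, end))
    onProgress(end, data.length) // bytes processed so far
  }
}
```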

* rename package

---------

Co-authored-by: Evgeny Poberezkin <evgeny@poberezkin.com>

---------

Co-authored-by: Evgeny @ SimpleX Chat <259188159+evgeny-simplex@users.noreply.github.com>
Co-authored-by: shum <github.shum@liber.li>
Co-authored-by: sh <37271604+shumvgolove@users.noreply.github.com>
2026-03-02 09:57:46 +00:00

335 lines
14 KiB
TypeScript

import sodium from 'libsodium-wrappers-sumo'
import {encryptFile, encodeFileHeader, decryptChunks} from '../src/crypto/file.js'
import {sha512Streaming} from '../src/crypto/digest.js'
import {prepareChunkSizes, fileSizeLen, authTagSize} from '../src/protocol/chunks.js'
import {decryptReceivedChunk} from '../src/download.js'
// ── OPFS session management ─────────────────────────────────────
const SESSION_DIR = `session-${Date.now()}-${crypto.randomUUID()}`
let uploadReadHandle: FileSystemSyncAccessHandle | null = null
let downloadWriteHandle: FileSystemSyncAccessHandle | null = null
const chunkMeta = new Map<number, {offset: number, size: number}>()
let currentDownloadOffset = 0
let sessionDir: FileSystemDirectoryHandle | null = null
let useMemory = false
const memoryChunks = new Map<number, Uint8Array>()
async function getSessionDir(): Promise<FileSystemDirectoryHandle> {
  if (!sessionDir) {
    const root = await navigator.storage.getDirectory()
    sessionDir = await root.getDirectoryHandle(SESSION_DIR, {create: true})
  }
  return sessionDir
}
async function sweepStale() {
  const root = await navigator.storage.getDirectory()
  const oneHourAgo = Date.now() - 3600_000
  for await (const [name] of (root as any).entries()) {
    if (!name.startsWith('session-')) continue
    const parts = name.split('-')
    const ts = parseInt(parts[1], 10)
    if (!isNaN(ts) && ts < oneHourAgo) {
      try { await root.removeEntry(name, {recursive: true}) } catch (_) {}
    }
  }
}
// ── Message handlers ────────────────────────────────────────────
async function handleEncrypt(id: number, data: ArrayBuffer, fileName: string) {
  const source = new Uint8Array(data)
  const key = new Uint8Array(32)
  const nonce = new Uint8Array(24)
  crypto.getRandomValues(key)
  crypto.getRandomValues(nonce)
  const fileHdr = encodeFileHeader({fileName, fileExtra: null})
  const fileSize = BigInt(fileHdr.length + source.length)
  const payloadSize = Number(fileSize) + fileSizeLen + authTagSize
  const chunkSizes = prepareChunkSizes(payloadSize)
  const encSize = BigInt(chunkSizes.reduce((a: number, b: number) => a + b, 0))
  const encDataLen = Number(encSize)
  const total = source.length + encDataLen * 2 // encrypt + hash + write
  const encData = encryptFile(source, fileHdr, key, nonce, fileSize, encSize, (done) => {
    self.postMessage({id, type: 'progress', done, total})
  })
  const digest = sha512Streaming([encData], (done) => {
    self.postMessage({id, type: 'progress', done: source.length + done, total})
  }, encDataLen)
  console.log(`[WORKER-DBG] encrypt: encData.len=${encData.length} digest=${_whex(digest, 64)} chunkSizes=[${chunkSizes.join(',')}]`)
  // Write to OPFS
  const dir = await getSessionDir()
  const fileHandle = await dir.getFileHandle('upload.bin', {create: true})
  const writeHandle = await fileHandle.createSyncAccessHandle()
  const written = writeHandle.write(encData)
  if (written !== encData.length) throw new Error(`OPFS upload write: ${written}/${encData.length}`)
  writeHandle.flush()
  writeHandle.close()
  // Reopen as persistent read handle
  uploadReadHandle = await fileHandle.createSyncAccessHandle()
  self.postMessage({id, type: 'progress', done: total, total})
  self.postMessage({id, type: 'encrypted', digest, key, nonce, chunkSizes})
}
function handleReadChunk(id: number, offset: number, size: number) {
  if (!uploadReadHandle) {
    self.postMessage({id, type: 'error', message: 'No upload file open'})
    return
  }
  const buf = new Uint8Array(size)
  uploadReadHandle.read(buf, {at: offset})
  const ab = buf.buffer as ArrayBuffer
  self.postMessage({id, type: 'chunk', data: ab}, [ab])
}
async function handleDecryptAndStore(
  id: number, dhSecret: Uint8Array, nonce: Uint8Array,
  body: ArrayBuffer, chunkDigest: Uint8Array, chunkNo: number
) {
  const bodyArr = new Uint8Array(body)
  console.log(`[WORKER-DBG] store chunk=${chunkNo} body.len=${bodyArr.length} nonce=${_whex(nonce, 24)} dhSecret=${_whex(dhSecret)} digest=${_whex(chunkDigest, 32)} body[0..8]=${_whex(bodyArr)} body[-8..]=${_whex(bodyArr.slice(-8))}`)
  const decrypted = decryptReceivedChunk(dhSecret, nonce, bodyArr, chunkDigest)
  console.log(`[WORKER-DBG] decrypted chunk=${chunkNo} len=${decrypted.length} [0..8]=${_whex(decrypted)} [-8..]=${_whex(decrypted.slice(-8))}`)
  if (useMemory) {
    memoryChunks.set(chunkNo, decrypted)
    self.postMessage({id, type: 'stored'})
    return
  }
  if (!downloadWriteHandle) {
    const dir = await getSessionDir()
    const fileHandle = await dir.getFileHandle('download.bin', {create: true})
    downloadWriteHandle = await fileHandle.createSyncAccessHandle()
  }
  const offset = currentDownloadOffset
  currentDownloadOffset += decrypted.length
  chunkMeta.set(chunkNo, {offset, size: decrypted.length})
  const written = downloadWriteHandle.write(decrypted, {at: offset})
  console.log(`[WORKER-DBG] OPFS write chunk=${chunkNo} offset=${offset} size=${decrypted.length} written=${written}`)
  if (written !== decrypted.length) {
    console.warn(`[WORKER] OPFS write failed chunk=${chunkNo}: ${written}/${decrypted.length}, falling back to in-memory storage`)
    // Migrate previously written chunks from OPFS to memory
    for (const [cn, meta] of chunkMeta.entries()) {
      if (cn === chunkNo) continue
      const buf = new Uint8Array(meta.size)
      downloadWriteHandle.read(buf, {at: meta.offset})
      memoryChunks.set(cn, buf)
    }
    downloadWriteHandle.close()
    downloadWriteHandle = null
    try {
      const dir = await getSessionDir()
      await dir.removeEntry('download.bin')
    } catch (_) {}
    chunkMeta.clear()
    currentDownloadOffset = 0
    memoryChunks.set(chunkNo, decrypted)
    useMemory = true
    self.postMessage({id, type: 'stored'})
    return
  }
  downloadWriteHandle.flush()
  // Verify: read back and compare first/last 8 bytes
  const verifyBuf = new Uint8Array(Math.min(8, decrypted.length))
  downloadWriteHandle.read(verifyBuf, {at: offset})
  const verifyEnd = new Uint8Array(Math.min(8, decrypted.length))
  downloadWriteHandle.read(verifyEnd, {at: offset + decrypted.length - verifyEnd.length})
  console.log(`[WORKER-DBG] OPFS verify chunk=${chunkNo} readBack[0..8]=${_whex(verifyBuf)} readBack[-8..]=${_whex(verifyEnd)} expected[0..8]=${_whex(decrypted)} expected[-8..]=${_whex(decrypted.slice(-8))}`)
  self.postMessage({id, type: 'stored'})
}
async function handleVerifyAndDecrypt(
  id: number, size: number, digest: Uint8Array, key: Uint8Array, nonce: Uint8Array
) {
  console.log(`[WORKER-DBG] verify: expectedSize=${size} expectedDigest=${_whex(digest, 64)} useMemory=${useMemory} chunkMeta.size=${chunkMeta.size} memoryChunks.size=${memoryChunks.size}`)
  // Read chunks — from memory (fallback) or OPFS
  const chunks: Uint8Array[] = []
  let totalSize = 0
  const total = size * 3 // read + hash + decrypt (byte-based progress)
  let done = 0
  if (useMemory) {
    const sorted = [...memoryChunks.entries()].sort((a, b) => a[0] - b[0])
    for (const [chunkNo, data] of sorted) {
      console.log(`[WORKER-DBG] verify memory chunk=${chunkNo} size=${data.length}`)
      chunks.push(data)
      totalSize += data.length
      done += data.length
      self.postMessage({id, type: 'progress', done, total})
    }
  } else {
    // Close write handle, reopen as read
    if (downloadWriteHandle) {
      downloadWriteHandle.flush()
      downloadWriteHandle.close()
      downloadWriteHandle = null
    }
    const dir = await getSessionDir()
    const fileHandle = await dir.getFileHandle('download.bin')
    const readHandle = await fileHandle.createSyncAccessHandle()
    console.log(`[WORKER-DBG] verify: OPFS file size=${readHandle.getSize()}`)
    const sortedEntries = [...chunkMeta.entries()].sort((a, b) => a[0] - b[0])
    for (const [chunkNo, meta] of sortedEntries) {
      const buf = new Uint8Array(meta.size)
      const bytesRead = readHandle.read(buf, {at: meta.offset})
      console.log(`[WORKER-DBG] verify read chunk=${chunkNo} offset=${meta.offset} size=${meta.size} bytesRead=${bytesRead} [0..8]=${_whex(buf)} [-8..]=${_whex(buf.slice(-8))}`)
      chunks.push(buf)
      totalSize += meta.size
      done += meta.size
      self.postMessage({id, type: 'progress', done, total})
    }
    readHandle.close()
  }
  if (totalSize !== size) {
    self.postMessage({id, type: 'error', message: `File size mismatch: ${totalSize} !== ${size}`})
    return
  }
  // Compute SHA-512 with byte-level progress
  const hashSEG = 4 * 1024 * 1024
  const state = sodium.crypto_hash_sha512_init() as unknown as import('libsodium-wrappers').StateAddress
  for (let i = 0; i < chunks.length; i++) {
    const chunk = chunks[i]
    for (let off = 0; off < chunk.length; off += hashSEG) {
      const end = Math.min(off + hashSEG, chunk.length)
      sodium.crypto_hash_sha512_update(state, chunk.subarray(off, end))
      done += end - off
      self.postMessage({id, type: 'progress', done, total})
    }
  }
  const actualDigest = sodium.crypto_hash_sha512_final(state)
  if (!digestEqual(actualDigest, digest)) {
    console.error(`[WORKER-DBG] DIGEST MISMATCH: expected=${_whex(digest, 64)} actual=${_whex(actualDigest, 64)} chunks=${chunks.length} totalSize=${totalSize}`)
    const state2 = sodium.crypto_hash_sha512_init() as unknown as import('libsodium-wrappers').StateAddress
    for (let i = 0; i < chunks.length; i++) {
      const chunk = chunks[i]
      for (let off = 0; off < chunk.length; off += hashSEG) {
        sodium.crypto_hash_sha512_update(state2, chunk.subarray(off, Math.min(off + hashSEG, chunk.length)))
      }
      const chunkDigest = sha512Streaming([chunk])
      console.error(`[WORKER-DBG] chunk[${i}] size=${chunk.length} sha512=${_whex(chunkDigest, 32)}… [0..8]=${_whex(chunk)} [-8..]=${_whex(chunk.slice(-8))}`)
    }
    self.postMessage({id, type: 'error', message: 'File digest mismatch'})
    return
  }
  console.log(`[WORKER-DBG] verify: digest OK`)
  // File-level decrypt with byte-level progress
  const result = decryptChunks(BigInt(size), chunks, key, nonce, (d) => {
    self.postMessage({id, type: 'progress', done: size * 2 + d, total})
  })
  self.postMessage({id, type: 'progress', done: total, total})
  // Clean up download state
  if (!useMemory) {
    const dir = await getSessionDir()
    try { await dir.removeEntry('download.bin') } catch (_) {}
  }
  chunkMeta.clear()
  memoryChunks.clear()
  currentDownloadOffset = 0
  useMemory = false
  const contentBuf = result.content.buffer.slice(
    result.content.byteOffset,
    result.content.byteOffset + result.content.byteLength
  )
  self.postMessage(
    {id, type: 'decrypted', header: result.header, content: contentBuf},
    [contentBuf]
  )
}
async function handleCleanup(id: number) {
  if (uploadReadHandle) {
    uploadReadHandle.close()
    uploadReadHandle = null
  }
  if (downloadWriteHandle) {
    downloadWriteHandle.close()
    downloadWriteHandle = null
  }
  chunkMeta.clear()
  memoryChunks.clear()
  currentDownloadOffset = 0
  useMemory = false
  try {
    const root = await navigator.storage.getDirectory()
    await root.removeEntry(SESSION_DIR, {recursive: true})
  } catch (_) {}
  sessionDir = null
  self.postMessage({id, type: 'cleaned'})
}
// ── Message dispatch ────────────────────────────────────────────
// Serialize all message processing — async onmessage would allow
// interleaved execution at await points, racing on shared OPFS handles
// when downloadFileRaw fetches chunks from multiple servers in parallel.
let queue: Promise<void> = Promise.resolve()
self.onmessage = (e: MessageEvent) => {
  const msg = e.data
  queue = queue.then(async () => {
    try {
      await initPromise
      switch (msg.type) {
        case 'encrypt':
          await handleEncrypt(msg.id, msg.data, msg.fileName)
          break
        case 'readChunk':
          handleReadChunk(msg.id, msg.offset, msg.size)
          break
        case 'decryptAndStoreChunk':
          await handleDecryptAndStore(msg.id, msg.dhSecret, msg.nonce, msg.body, msg.chunkDigest, msg.chunkNo)
          break
        case 'verifyAndDecrypt':
          await handleVerifyAndDecrypt(msg.id, msg.size, msg.digest, msg.key, msg.nonce)
          break
        case 'cleanup':
          await handleCleanup(msg.id)
          break
        default:
          self.postMessage({id: msg.id, type: 'error', message: `Unknown message type: ${msg.type}`})
      }
    } catch (err: any) {
      self.postMessage({id: msg.id, type: 'error', message: err?.message ?? String(err)})
    }
  })
}
// ── Helpers ─────────────────────────────────────────────────────
function _whex(b: Uint8Array, n = 8): string {
  return Array.from(b.slice(0, n)).map(x => x.toString(16).padStart(2, '0')).join('')
}
function digestEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false
  let diff = 0
  for (let i = 0; i < a.length; i++) diff |= a[i] ^ b[i]
  return diff === 0
}
// ── Init ────────────────────────────────────────────────────────
const initPromise = (async () => {
  await sodium.ready
  await sweepStale()
})()
// Signal main thread that the worker is ready to receive messages
initPromise.then(() => self.postMessage({type: 'ready'}), () => {})