Compare commits

..

3 Commits

Author SHA1 Message Date
Kpa-clawbot a91f1db8c2 chore(release): v3.7.2 2026-05-06 19:21:03 +00:00
Kpa-clawbot c788319286 fix(ingestor): exclude path_json='[]' rows from backfill WHERE (#1119) (#1121)
## Summary

`BackfillPathJSONAsync` re-selected observations whose `path_json` was
already `'[]'`, rewrote them to `'[]'`, and looped forever. The
`len(batch) == 0` exit condition was never reached, the migration marker
was never recorded, and the ingestor sustained 2–3 MB/s WAL writes at
idle (76% of CPU in `sqlite.Exec` per pprof).

## Fix

Drop `'[]'` from the WHERE clause:

```diff
WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
- AND (o.path_json IS NULL OR o.path_json = '' OR o.path_json = '[]')
+ AND (o.path_json IS NULL OR o.path_json = '')
```

`'[]'` is the "already attempted, no hops" sentinel (still written at
line 994 of `cmd/ingestor/db.go` when `DecodePathFromRawHex` returns no
hops). Excluding it from the WHERE lets the loop terminate after one
full pass and allows the migration marker
`backfill_path_json_from_raw_hex_v1` to be recorded.
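
To make the termination argument concrete, here is a small, self-contained Go model of the loop (illustrative only; `selectBatch` and `run` are hypothetical names, not the real `BackfillPathJSONAsync`). With `'[]'` in the WHERE the batch never comes back empty, so the exit branch that records the marker is unreachable; with `'[]'` excluded, the first pass over already-attempted rows selects nothing and terminates.

```go
package main

import "fmt"

// Toy model of the backfill loop described above. Hypothetical names; this is
// NOT the real Store.BackfillPathJSONAsync from cmd/ingestor/db.go.
// rows holds path_json values; "" stands in for NULL/empty.
func selectBatch(rows []string, matchBrackets bool, limit int) []int {
	var ids []int
	for i, pj := range rows {
		if pj == "" || (matchBrackets && pj == "[]") {
			ids = append(ids, i)
			if len(ids) == limit {
				break
			}
		}
	}
	return ids
}

// run loops like the backfill: select a batch, "decode" (always zero hops
// here), rewrite '[]', repeat until the batch is empty. Returns whether the
// exit branch (where the migration marker would be recorded) was reached.
func run(rows []string, matchBrackets bool, maxPasses int) bool {
	for pass := 0; pass < maxPasses; pass++ {
		batch := selectBatch(rows, matchBrackets, 100)
		if len(batch) == 0 {
			return true // marker recorded here
		}
		for _, id := range batch {
			rows[id] = "[]" // decode found no hops; sentinel rewritten
		}
	}
	return false // never terminated within the pass budget
}

func main() {
	seed := func() []string {
		rows := make([]string, 100)
		for i := range rows {
			rows[i] = "[]" // every row already carries the sentinel
		}
		return rows
	}
	fmt.Println("buggy WHERE (matches '[]') terminates:", run(seed(), true, 1000))  // false
	fmt.Println("fixed WHERE (skips '[]') terminates:  ", run(seed(), false, 1000)) // true
}
```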

## TDD

- **Red commit** (`19f8004`):
`TestBackfillPathJSONAsync_BracketRowsTerminate` — seeds 100
observations with `path_json='[]'` and a `raw_hex` that decodes to zero
hops, asserts the migration marker is written within 5s. Fails on master
with *"backfill never recorded migration marker within 5s — infinite
loop on path_json='[]' rows"*.
- **Green commit** (`7019100`): WHERE-clause fix + updates
`TestBackfillPathJsonFromRawHex` row 1 expectation (the pre-seeded
`'[]'` row is now correctly skipped instead of being re-decoded).

## Test results

```
ok  	github.com/corescope/ingestor	49.656s
```

## Acceptance criteria from #1119

- [x] Backfill terminates within 1 polling cycle of having no progress
to make
- [x] Migration marker `backfill_path_json_from_raw_hex_v1` written
after termination
- [x] On restart, backfill recognizes migration done and exits
immediately (existing behavior — the migration check at the top of
`BackfillPathJSONAsync` was always correct; the bug was that the marker
never got written)
- [x] Test: seed DB with N observations all having `path_json = '[]'` →
backfill runs once → no UPDATEs issued, migration marker written
- [ ] Disk write rate on idle staging drops from 2–3 MB/s to <100 KB/s —
to be verified by the user post-deploy

Fixes #1119.

---------

Co-authored-by: OpenClaw Bot <bot@openclaw.local>
2026-05-06 19:20:02 +00:00
Kpa-clawbot 26daa760cd fix(channels): live PSK decrypt for user-added channels (#1029 follow-up) (#1031)
## Problem

PR #1030 added live PSK decrypt for GRP_TXT WS packets, but in
production it still didn't work for **user-added** PSK channels. New
messages never appeared in real time on a channel added via the sidebar
key form — users had to refresh the page to see them via the REST fetch
path (regression #1029).

## Root cause

`decryptLivePSKBatch` rewrites the payload with the raw channel name:

```js
payload.channel = dec.channelName;   // e.g. "medusa"
```

But user-added channels live in `channels[]` under the key produced by
`addUserChannel`:

```js
hash: 'user:' + name,                // e.g. "user:medusa"
```

`selectedHash` also uses the `user:`-prefixed key while a user-added
channel is open. Downstream in `processWSBatch`:

| Line | Check | Result |
|---|---|---|
| 962 | `c.hash === channelName` | `"medusa" !== "user:medusa"` → user channel never matched |
| 982 | `channelName === selectedHash` | `"medusa" !== "user:medusa"` → message never appended to open chat |
| 974 | `channels.push({ hash: channelName, ... })` | duplicate plain `"medusa"` entry pushed into sidebar |

The unread bumper (`channels.js:1086`) compared `chName === prior` with
the same mismatch, so it bumped an unread badge on the channel currently
being viewed.

Verified end to end against staging WS traffic (live `decryption_status:
"decrypted"` packets observed; user-added channel never updated,
duplicate entry created).

## Fix

`decryptLivePSKBatch` now also stamps a canonical sidebar key on the
payload:

```js
payload.channelKey = hasUserCh ? ('user:' + dec.channelName) : dec.channelName;
```

`processWSBatch` and the unread bumper route on `payload.channelKey`
(falling back to `payload.channel` for server-known CHAN packets — no
behavior change there).
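
As a standalone sketch of that routing rule (toy code, not the actual `channels.js`; `route` is a hypothetical stand-in for the sidebar lookup and the selected-channel check in `processWSBatch`):

```js
// Toy model of the sidebar/open-chat routing. Hypothetical helper; not channels.js.
const channels = [{ hash: 'user:medusa', name: 'medusa', messageCount: 0 }];
const selectedHash = 'user:medusa'; // user-added channel currently open

function route(payload) {
  // Prefer the canonical key stamped by decryptLivePSKBatch, falling back to
  // the raw decrypted channel name for server-known CHAN packets.
  const channelKey = payload.channelKey || payload.channel;
  const ch = channels.find((c) => c.hash === channelKey);
  return {
    matchedSidebarRow: !!ch,                         // would update lastMessage/messageCount
    appendedToOpenChat: channelKey === selectedHash, // would append to the open chat view
  };
}

// Before the fix: only payload.channel ("medusa") is set, so both checks miss.
console.log(route({ channel: 'medusa' }));
// → { matchedSidebarRow: false, appendedToOpenChat: false }

// After the fix: payload.channelKey ("user:medusa") is also stamped, so both match.
console.log(route({ channel: 'medusa', channelKey: 'user:medusa' }));
// → { matchedSidebarRow: true, appendedToOpenChat: true }
```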

After the fix:
- live message appends to the open user-added chat
- sidebar row's `lastMessage` / `messageCount` / `lastActivityMs` update
- no duplicate non-prefixed sidebar entry
- unread bumped only on channels NOT being viewed

## TDD

Red commit `f1719a8` — `test-channel-live-decrypt-userprefix.js` fails
6/9 assertions (not a build error) against pristine `channels.js`.
Green commit `da87018` — minimal fix in `channels.js`; all 9/9 pass.

Verified that the red test gates the change: stashed the `public/channels.js`
fix and re-ran the test on the red commit alone → 6 assertion failures (open
channel got 0 messages, duplicate sidebar entry, unread bumped on the viewed
channel).

## Files changed

- `public/channels.js` — stamp/route on `channelKey`
- `test-channel-live-decrypt-userprefix.js` (new) — red-then-green
regression test

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-04 16:44:35 -07:00
5 changed files with 478 additions and 12 deletions
+7
@@ -1,5 +1,12 @@
# Changelog
## [3.7.2] — 2026-05-06
Hotfix release branched from `v3.7.1`. Cherry-picks PR #1121 only — no other changes.
### 🐛 Bug Fixes
- **Ingestor: backfill infinite loop on `path_json='[]'` rows** (#1119, #1121) — `BackfillPathJSONAsync` re-selected observations whose `path_json` was already `'[]'`, rewrote them to `'[]'`, and looped forever. The migration marker was never recorded and the ingestor sustained 2–3 MB/s WAL writes at idle (~76% CPU in `sqlite.Exec`). Fix: drop `'[]'` from the WHERE clause so the loop terminates after one full pass and the `backfill_path_json_from_raw_hex_v1` marker is written.
## [2.5.0] "Digital Rain" — 2026-03-22
### ✨ Matrix Mode — Full Cyberpunk Map Theme
+3 -1
@@ -928,7 +928,9 @@ func (s *Store) BackfillPathJSONAsync() {
FROM observations o
JOIN transmissions t ON o.transmission_id = t.id
WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
- AND (o.path_json IS NULL OR o.path_json = '' OR o.path_json = '[]')
+ -- NB: '[]' is the "already attempted, no hops" sentinel; excluded
+ -- to prevent the infinite re-UPDATE loop fixed in #1119.
+ AND (o.path_json IS NULL OR o.path_json = '')
AND t.payload_type != 9
LIMIT ?`, batchSize)
if err != nil {
+154 -3
@@ -2232,11 +2232,13 @@ func TestBackfillPathJsonFromRawHex(t *testing.T) {
t.Fatalf("migration not recorded")
}
- // Row 1 (was '[]') should now have decoded hops
+ // Row 1 (was '[]') is NOT re-processed by the backfill — '[]' means
+ // "already attempted, no hops" and is excluded by the WHERE to avoid the
+ // infinite-loop bug fixed in #1119. It must remain '[]'.
var pj1 string
s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 1").Scan(&pj1)
- if pj1 != `["AABB","CCDD"]` {
- t.Errorf("row 1 path_json = %q, want %q", pj1, `["AABB","CCDD"]`)
+ if pj1 != "[]" {
+ t.Errorf("row 1 path_json = %q, want %q (must not re-process '[]' rows after #1119)", pj1, "[]")
}
// Row 2 (was NULL) should now have decoded hops
@@ -2567,3 +2569,152 @@ func TestBackfillPathJSONAsyncMethodExists(t *testing.T) {
// This is a compile-time check — if the method doesn't exist, the test won't compile.
store.BackfillPathJSONAsync()
}
// TestBackfillPathJSONAsync_BracketRowsTerminate exercises the infinite-loop bug
// from issue #1119. Observations whose path_json is already '[]' (meaning a prior
// backfill pass attempted to decode them and found no hops) must NOT be re-selected
// by the WHERE clause — otherwise the loop rewrites the same '[]' value forever
// and never records the migration marker.
//
// This test seeds N rows with path_json='[]' and a raw_hex that DecodePathFromRawHex
// resolves to zero hops. With the bug, the backfill loops infinitely re-UPDATEing
// the same rows back to '[]', batch is never empty, migration marker is never
// written. With the fix, no rows match → the very first batch is empty → migration
// is recorded immediately.
func TestBackfillPathJSONAsync_BracketRowsTerminate(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "bracket_terminate.db")
// Bootstrap a minimal schema directly so we can seed pre-existing '[]' rows
// before OpenStore runs.
db, err := sql.Open("sqlite", dbPath+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
if err != nil {
t.Fatal(err)
}
_, err = db.Exec(`
CREATE TABLE _migrations (name TEXT PRIMARY KEY);
CREATE TABLE transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL UNIQUE,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now')),
channel_hash TEXT
);
CREATE TABLE observers (
id TEXT PRIMARY KEY, name TEXT, iata TEXT,
last_seen TEXT, first_seen TEXT, packet_count INTEGER DEFAULT 0,
model TEXT, firmware TEXT, client_version TEXT, radio TEXT,
battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL,
inactive INTEGER DEFAULT 0, last_packet_at TEXT
);
CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
lat REAL, lon REAL, last_seen TEXT, first_seen TEXT,
advert_count INTEGER DEFAULT 0, battery_mv INTEGER, temperature_c REAL
);
CREATE TABLE inactive_nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
lat REAL, lon REAL, last_seen TEXT, first_seen TEXT,
advert_count INTEGER DEFAULT 0, battery_mv INTEGER, temperature_c REAL
);
CREATE TABLE observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_idx INTEGER, direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
raw_hex TEXT
);
CREATE UNIQUE INDEX idx_observations_dedup ON observations(transmission_id, observer_idx, COALESCE(path_json, ''));
CREATE INDEX idx_observations_transmission_id ON observations(transmission_id);
CREATE INDEX idx_observations_observer_idx ON observations(observer_idx);
CREATE INDEX idx_observations_timestamp ON observations(timestamp);
CREATE TABLE observer_metrics (
observer_id TEXT NOT NULL, timestamp TEXT NOT NULL,
noise_floor REAL, tx_air_secs INTEGER, rx_air_secs INTEGER,
recv_errors INTEGER, battery_mv INTEGER,
packets_sent INTEGER, packets_recv INTEGER,
PRIMARY KEY (observer_id, timestamp)
);
CREATE TABLE dropped_packets (
id INTEGER PRIMARY KEY AUTOINCREMENT,
hash TEXT, raw_hex TEXT, reason TEXT NOT NULL,
observer_id TEXT, observer_name TEXT,
node_pubkey TEXT, node_name TEXT,
dropped_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
`)
if err != nil {
t.Fatal("bootstrap schema:", err)
}
// Mark all migrations done EXCEPT backfill_path_json_from_raw_hex_v1.
for _, m := range []string{
"advert_count_unique_v1", "noise_floor_real_v1", "node_telemetry_v1",
"obs_timestamp_index_v1", "observer_metrics_v1", "observer_metrics_ts_idx",
"observers_inactive_v1", "observer_metrics_packets_v1", "channel_hash_v1",
"dropped_packets_v1", "observations_raw_hex_v1", "observers_last_packet_at_v1",
"cleanup_legacy_null_hash_ts",
} {
db.Exec(`INSERT INTO _migrations (name) VALUES (?)`, m)
}
// raw_hex producing ZERO hops via DecodePathFromRawHex:
// DIRECT route (type=2), payload_type=2, version=0 → header 0x0A; path byte 0x00.
// (See internal/packetpath/path_test.go: TestDecodePathFromRawHex_ZeroHops.)
rawHex := "0A00DEADBEEF"
_, err = db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'h_brackets', '2025-01-01T00:00:00Z', 2)`, rawHex)
if err != nil {
t.Fatal("insert tx:", err)
}
const seedCount = 100
for i := 0; i < seedCount; i++ {
_, err = db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp, raw_hex, path_json) VALUES (1, ?, ?, ?, '[]')`,
i+1, 1700000000+i, rawHex)
if err != nil {
t.Fatalf("insert obs %d: %v", i, err)
}
}
db.Close()
store, err := OpenStoreWithInterval(dbPath, 300)
if err != nil {
t.Fatal("OpenStore:", err)
}
defer store.Close()
// Trigger backfill. With the bug, every iteration re-fetches all 100 rows
// (because '[]' matches the WHERE), rewrites them to '[]', sleeps 50ms, repeats.
// The loop never terminates and the migration marker is never written.
store.BackfillPathJSONAsync()
// Generous deadline: with the fix the marker is written essentially immediately.
// With the bug the marker is never written within any bounded time.
deadline := time.Now().Add(5 * time.Second)
var done int
for time.Now().Before(deadline) {
err = store.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&done)
if err == nil {
break
}
time.Sleep(50 * time.Millisecond)
}
if err != nil {
t.Fatalf("issue #1119: backfill never recorded migration marker within 5s — infinite loop on path_json='[]' rows")
}
// Verify the seeded '[]' rows still have '[]' (sanity — neither bug nor fix
// should change their value), and that there are no NULL/empty path_json rows
// the backfill should have processed.
var bracketCount int
store.db.QueryRow("SELECT COUNT(*) FROM observations WHERE path_json = '[]'").Scan(&bracketCount)
if bracketCount != seedCount {
t.Errorf("expected %d rows with path_json='[]', got %d", seedCount, bracketCount)
}
}
+28 -8
@@ -929,6 +929,11 @@
if (!payload) continue;
var channelName = payload.channel || 'unknown';
// For live-decrypted user-added (PSK) channels, decryptLivePSKBatch
// also stamps payload.channelKey ("user:<name>") so we route the
// message to the correct sidebar row and to the open chat view.
// Falls back to channelName for server-known CHAN packets.
var channelKey = payload.channelKey || channelName;
var rawText = payload.text || '';
var sender = payload.sender || null;
var displayText = rawText;
@@ -955,10 +960,10 @@
var observer = m.data?.packet?.observer_name || m.data?.observer || null;
// Update channel list entry — only once per unique packet hash
- var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelName);
- if (pktHash) seenHashes.add(pktHash + ':' + channelName);
+ var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelKey);
+ if (pktHash) seenHashes.add(pktHash + ':' + channelKey);
- var ch = channels.find(function (c) { return c.hash === channelName; });
+ var ch = channels.find(function (c) { return c.hash === channelKey; });
if (ch) {
if (isFirstObservation) ch.messageCount = (ch.messageCount || 0) + 1;
ch.lastActivityMs = Date.now();
@@ -968,7 +973,7 @@
} else if (isFirstObservation) {
// New channel we haven't seen
channels.push({
- hash: channelName,
+ hash: channelKey,
name: channelName,
messageCount: 1,
lastActivityMs: Date.now(),
@@ -979,7 +984,7 @@
}
// If this message is for the selected channel, append to messages
- if (selectedHash && channelName === selectedHash) {
+ if (selectedHash && channelKey === selectedHash) {
// Deduplicate by packet hash — same message seen by multiple observers
var existing = pktHash ? messages.find(function (msg) { return msg.packetHash === pktHash; }) : null;
if (existing) {
@@ -1062,6 +1067,18 @@
// up as a real message instead of an encrypted blob. Keep the original
// hash byte for any downstream consumer that wants it.
payload.channel = dec.channelName;
// For user-added PSK channels the sidebar entry & selectedHash use a
// "user:<name>" key (see addUserChannel). Stamp the canonical key on
// the payload so processWSBatch routes the live message to the
// correct sidebar row and to the open chat view instead of dropping
// it / creating a duplicate plain entry. Falls back to the raw name
// for non-user channels (server-known CHAN paths still work).
var userKey = 'user:' + dec.channelName;
var hasUserCh = false;
for (var ck = 0; ck < channels.length; ck++) {
if (channels[ck].hash === userKey) { hasUserCh = true; break; }
}
payload.channelKey = hasUserCh ? userKey : dec.channelName;
payload.sender = dec.sender;
payload.text = dec.sender ? (dec.sender + ': ' + dec.text) : dec.text;
payload.decryptedLocally = true;
@@ -1083,9 +1100,12 @@
for (var i = 0; i < msgs.length; i++) {
var p = msgs[i] && msgs[i].data && msgs[i].data.decoded && msgs[i].data.decoded.payload;
if (!p || !p.decryptedLocally) continue;
- var chName = p.channel;
- if (!chName || chName === prior) continue;
- var ch = channels.find(function (c) { return c.hash === chName || c.name === chName || c.hash === ('user:' + chName); });
+ // Use the canonical sidebar key stamped by decryptLivePSKBatch so
+ // the comparison against `prior` (= selectedHash) actually matches
+ // for user-added (user:*-prefixed) channels.
+ var chKey = p.channelKey || p.channel;
+ if (!chKey || chKey === prior) continue;
+ var ch = channels.find(function (c) { return c.hash === chKey || c.name === chKey || c.hash === ('user:' + chKey); });
if (ch) {
ch.unread = (ch.unread || 0) + 1;
bumped = true;
+286
@@ -0,0 +1,286 @@
/**
* Regression test: live PSK decrypt for user-added channels (#1029 follow-up).
*
* PR #1030 added decryptLivePSKBatch() which rewrites encrypted GRP_TXT
* WS packets in place when a stored PSK key matches. It sets
* payload.channel = dec.channelName (e.g. "medusa")
* but user-added channels are stored in channels[] with hash:
* "user:medusa"
* (and selectedHash is also "user:medusa" when viewing).
*
* Symptoms in production:
* - selectedHash === "user:medusa" but processWSBatch compares
* `channelName === selectedHash` ("medusa" !== "user:medusa") so a live
* packet for the open channel is NEVER appended to the message list.
* - channels.find(c => c.hash === channelName) misses the user channel and
* a duplicate plain entry "medusa" is pushed into the sidebar; the real
* user-added channel's lastMessage / messageCount / lastActivityMs never
* update.
* - The unread bumper guards with `chName === prior` (raw name vs prefixed
* selectedHash), so an unread badge is added even when the user IS
* actively viewing that channel.
*
* Fix: have the live decrypt rewrite annotate the payload with the
* canonical channel hash that channels[] / selectedHash use. A simple,
* non-breaking shape: keep payload.channel = name (so the rest of
* processWSBatch keeps working for non-user channels), AND also set
* payload.channelKey = "user:" + name when a user-added channel exists for
* that name. processWSBatch then uses channelKey when present for the
* lookup + selectedHash comparison.
*
* This test loads the real channels.js in a vm sandbox, primes a
* user-added channel, drives an encrypted GRP_TXT through the WS handler
* and asserts:
* 1. the open channel's message list grows by 1 (text is decrypted-locally
* and visible in the messages array)
* 2. the user-added channel's messageCount / lastMessage update
* 3. NO duplicate plain "medusa" entry is added to channels[]
* 4. unread is NOT bumped on the channel currently being viewed
*/
'use strict';
const vm = require('vm');
const fs = require('fs');
const path = require('path');
const { createCipheriv, createHmac, createHash, webcrypto } = require('crypto');
let passed = 0;
let failed = 0;
function assert(cond, msg) {
if (cond) { passed++; console.log(' ✓ ' + msg); }
else { failed++; console.error(' ✗ ' + msg); }
}
function buildEncryptedGrpTxt(channelName, sender, message) {
const key = createHash('sha256').update(channelName).digest().slice(0, 16);
const channelHash = createHash('sha256').update(key).digest()[0];
const text = `${sender}: ${message}`;
const inner = 5 + Buffer.byteLength(text, 'utf8') + 1;
const padded = Math.ceil(inner / 16) * 16;
const pt = Buffer.alloc(padded);
pt.writeUInt32LE(Math.floor(Date.now() / 1000), 0);
pt[4] = 0;
pt.write(text, 5, 'utf8');
const cipher = createCipheriv('aes-128-ecb', key, null);
cipher.setAutoPadding(false);
const ct = Buffer.concat([cipher.update(pt), cipher.final()]);
const secret = Buffer.concat([key, Buffer.alloc(16)]);
const mac = createHmac('sha256', secret).update(ct).digest().slice(0, 2);
return {
payload: {
type: 'GRP_TXT',
channelHash,
channelHashHex: channelHash.toString(16).padStart(2, '0'),
mac: mac.toString('hex'),
encryptedData: ct.toString('hex'),
decryptionStatus: 'no_key',
},
keyHex: key.toString('hex'),
};
}
function makeBrowserLikeSandbox() {
const storage = {};
const elements = {};
function makeFakeEl(id) {
return {
id: id || '', innerHTML: '', textContent: '', value: '', scrollTop: 0,
scrollHeight: 0,
style: {}, dataset: {},
classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } },
addEventListener() {}, removeEventListener() {},
querySelector() { return makeFakeEl(); },
querySelectorAll() { return []; },
getAttribute() { return null; }, setAttribute() {},
getBoundingClientRect() { return { width: 240, height: 0, top: 0, left: 0, right: 0, bottom: 0 }; },
appendChild() {}, removeChild() {},
focus() {}, blur() {},
checked: false,
};
}
function el(id) {
if (!elements[id]) elements[id] = makeFakeEl(id);
return elements[id];
}
const ctx = {
window: {},
document: {
readyState: 'complete',
documentElement: { getAttribute: () => null, setAttribute() {}, classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } } },
createElement: () => ({ id: '', textContent: '', innerHTML: '', style: {}, classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } }, addEventListener() {}, appendChild() {}, querySelector() { return null; }, querySelectorAll() { return []; } }),
head: { appendChild() {} },
body: { appendChild() {} },
getElementById: el,
addEventListener() {}, removeEventListener() {},
querySelector: () => null,
querySelectorAll: () => [],
},
console,
Date, Math, Array, Object, String, Number, JSON, RegExp, Error, TypeError, Set, Map, Promise,
parseInt, parseFloat, isNaN, isFinite,
encodeURIComponent, decodeURIComponent,
setTimeout: (fn) => { Promise.resolve().then(fn); return 0; },
clearTimeout: () => {},
setInterval: () => 0,
clearInterval: () => {},
fetch: () => Promise.resolve({ ok: true, json: () => Promise.resolve({}) }),
performance: { now: () => Date.now() },
localStorage: {
getItem: (k) => Object.prototype.hasOwnProperty.call(storage, k) ? storage[k] : null,
setItem: (k, v) => { storage[k] = String(v); },
removeItem: (k) => { delete storage[k]; },
},
location: { hash: '' },
history: { replaceState() {}, pushState() {} },
crypto: webcrypto,
TextEncoder, TextDecoder,
Uint8Array, Uint16Array, Uint32Array, Int8Array, Int16Array, Int32Array, ArrayBuffer,
URLSearchParams,
CustomEvent: class CustomEvent {},
MutationObserver: class MutationObserver { observe() {} disconnect() {} },
requestAnimationFrame: (cb) => setTimeout(cb, 0),
matchMedia: () => ({ matches: false, addEventListener() {}, removeEventListener() {} }),
addEventListener() {}, dispatchEvent() {},
getHashParams: () => new URLSearchParams(),
};
ctx.self = ctx;
ctx.globalThis = ctx;
vm.createContext(ctx);
return ctx;
}
function loadInCtx(ctx, file) {
const src = fs.readFileSync(path.join(__dirname, file), 'utf8');
vm.runInContext(src, ctx, { filename: file });
for (const k of Object.keys(ctx.window)) ctx[k] = ctx.window[k];
}
async function run() {
console.log('\n=== Live PSK decrypt: user-added channel (user:* prefix) routing ===');
const ctx = makeBrowserLikeSandbox();
ctx.window.matchMedia = () => ({ matches: false, addEventListener() {}, removeEventListener() {} });
ctx.window.addEventListener = () => {};
ctx.btoa = (s) => Buffer.from(String(s), 'binary').toString('base64');
ctx.atob = (s) => Buffer.from(String(s), 'base64').toString('binary');
// App.js stubs: provide debouncedOnWS / onWS / offWS / api / debounce /
// invalidateApiCache / registerPage so channels.js loads cleanly.
let wsListeners = [];
ctx.onWS = (fn) => { wsListeners.push(fn); };
ctx.offWS = (fn) => { wsListeners = wsListeners.filter(f => f !== fn); };
ctx.debouncedOnWS = function (fn) {
function handler(msg) { fn([msg]); }
wsListeners.push(handler);
return handler;
};
ctx.debounce = (fn) => fn;
ctx.api = () => Promise.resolve({ channels: [], observers: [] });
ctx.invalidateApiCache = () => {};
ctx.CLIENT_TTL = { channels: 60000, observers: 600000 };
ctx.escapeHtml = (s) => String(s == null ? '' : s);
ctx.truncate = (s, n) => { s = String(s || ''); return s.length > n ? s.slice(0, n) : s; };
ctx.formatHashHex = (h) => String(h);
ctx.formatSecondsAgo = () => '';
ctx.payloadTypeName = () => 'GRP_TXT';
ctx.RegionFilter = {
init() {},
onChange(fn) { return () => {}; },
offChange() {},
getRegionParam() { return ''; },
getSelected() { return null; },
};
ctx.ChannelColors = { get() { return null; }, remove() {} };
ctx.ChannelColorPicker = { open() {} };
ctx.normalizeObserverNameKey = (s) => String(s || '').toLowerCase();
let pageMod = null;
ctx.registerPage = (name, mod) => { if (name === 'channels') pageMod = mod; };
// Load AES + ChannelDecrypt + channels.js
loadInCtx(ctx, 'public/vendor/aes-ecb.js');
loadInCtx(ctx, 'public/channel-decrypt.js');
loadInCtx(ctx, 'public/channels.js');
const CD = ctx.window.ChannelDecrypt;
assert(typeof CD.tryDecryptLive === 'function', 'ChannelDecrypt.tryDecryptLive available');
const channelName = 'medusa';
const fixture = buildEncryptedGrpTxt(channelName, 'Alice', 'hello darkness');
CD.storeKey(channelName, fixture.keyHex);
// Initialize the channels page so wsHandler is wired up
const appEl = ctx.document.getElementById('page');
appEl.innerHTML = '';
await pageMod.init(appEl, null);
// pump microtasks
await new Promise((r) => setTimeout(r, 0));
ctx.window._channelsSetStateForTest({
channels: [{
hash: 'user:' + channelName,
name: channelName,
messageCount: 0,
lastActivityMs: 0,
lastSender: '',
lastMessage: 'Encrypted — click to decrypt',
encrypted: true,
userAdded: true,
}],
messages: [],
selectedHash: 'user:' + channelName,
});
// Drive the WS path — same shape the Go server broadcasts
const wsMsg = {
type: 'packet',
data: {
id: 12345,
hash: 'deadbeef',
observer_name: 'TestObserver',
packet: { observer_name: 'TestObserver' },
decoded: {
header: { payloadTypeName: 'GRP_TXT' },
payload: fixture.payload,
},
},
};
for (const fn of wsListeners) fn(wsMsg);
// Allow async decryptLivePSKBatch + setTimeout chain to settle
for (let i = 0; i < 20; i++) await new Promise((r) => setTimeout(r, 0));
const state = ctx.window._channelsGetStateForTest();
// (1) Message list for the open channel grew
assert(state.messages.length === 1,
'open user-added channel receives the live-decrypted message (got ' + state.messages.length + ')');
if (state.messages[0]) {
assert(state.messages[0].text === 'hello darkness',
'decrypted text is rendered (got ' + JSON.stringify(state.messages[0].text) + ')');
assert(state.messages[0].sender === 'Alice',
'decrypted sender is rendered (got ' + JSON.stringify(state.messages[0].sender) + ')');
}
// (2) The user-added channel's metadata updated
const userCh = state.channels.find((c) => c.hash === 'user:' + channelName);
assert(userCh && userCh.messageCount === 1,
'user-added channel messageCount incremented (got ' + (userCh && userCh.messageCount) + ')');
assert(userCh && userCh.lastMessage && userCh.lastMessage.indexOf('hello') !== -1,
'user-added channel lastMessage updated (got ' + (userCh && userCh.lastMessage) + ')');
// (3) No duplicate plain "medusa" entry was created in the sidebar
const dupes = state.channels.filter((c) => c.hash === channelName);
assert(dupes.length === 0,
'no duplicate non-prefixed channel entry created (got ' + dupes.length + ')');
assert(state.channels.length === 1,
'sidebar still has exactly the one user-added channel (got ' + state.channels.length + ')');
// (4) Unread NOT bumped on the channel actively being viewed
assert(!userCh || !userCh.unread,
'unread NOT bumped on the actively-viewed channel (got ' + (userCh && userCh.unread) + ')');
console.log('\n=== Results ===');
console.log('Passed: ' + passed + ', Failed: ' + failed);
process.exit(failed > 0 ? 1 : 0);
}
run().catch((e) => { console.error(e); process.exit(1); });