Compare commits

..

313 Commits

Author SHA1 Message Date
clawbot 35e1f46b36 test(#1085): E2E for Roles fold-in into Analytics tab
Adds three failing assertions covering the acceptance criteria:
1. Top nav must NOT contain a 'Roles' link
2. Analytics page must include a [data-tab=roles] tab that renders Roles content
3. Old #/roles URL must redirect to #/analytics?tab=roles

Replaces the legacy 'Roles page renders distribution table' E2E (issue #818)
which assumed a standalone /#/roles SPA page.

Red commit — production code in a follow-up.
2026-05-05 09:13:23 +00:00
Kpa-clawbot 282074b19d feat(#1034): wire QR generate + scan into channel modal (PR 3/3) (#1081)
## Summary

**PR 3/3 of #1034** — wires the existing `window.ChannelQR` module (PR2
#1035) into the existing channel modal placeholders (PR1 #1037).

### Changes

**`public/channels.js`**
- **Generate handler** (`#chGenerateBtn`): replaced the "QR coming in
next update" placeholder text with a real call to
`window.ChannelQR.generate(label || channelName, keyHex, qrOut)`.
Renders QR canvas + `meshcore://channel/add?...` URL + Copy Key inline
into `#qr-output`.
- **Scan handler** (`#scan-qr-btn`): removed `disabled` attribute,
refreshed title, and added a click handler that calls
`window.ChannelQR.scan()`. On success it populates `#chPskKey` (from
`result.secret`) and `#chPskName` (from `result.name`); on cancel it's a
no-op; on error it surfaces the message via `#chPskError`.

The Share button on sidebar entries was already wired to
`ChannelQR.generate` in PR1 (no change needed).

### TDD

1. **Red commit** (`178020b`): `test-channel-qr-wiring.js` — 12
assertions, 7 failed against the placeholder code (Generate handler
still printed "coming in next update", scan button still disabled).
2. **Green commit** (`e708f3f`): wiring added → all 12 assertions pass.

### E2E (rule 18)

`test-e2e-playwright.js` gains 3 Playwright tests (run against the live
Go server with fixture DB in CI):

- Generate → asserts `#qr-output canvas` and the
`meshcore://channel/add` URL appear after the click.
- Scan button is enabled (no `disabled` attribute).
- Stubs `ChannelQR.scan` to return `{name, secret}`, clicks the button,
asserts `#chPskKey` + `#chPskName` are populated.

### CI registration

Added `node test-channel-qr-wiring.js` and `node
test-channel-modal-ux.js` to the JS unit-test step in
`.github/workflows/deploy.yml` (and `test-all.sh`).

### Closes

Closes #1034 (final PR in the redesign series).

---------

Co-authored-by: OpenClaw Bot <bot@openclaw.local>
2026-05-05 01:59:17 -07:00
Kpa-clawbot 136e1d23c8 feat(#730): foreign-advert detection — flag instead of silent drop (#1084)
## Summary

**Partial fix for #730 (M1 only — M2 frontend and M3 alerting
deferred).**

Today the ingestor **silently drops** ADVERTs whose GPS lies outside the
configured `geo_filter` polygon. That's the wrong default for an
analytics tool — operators get zero visibility into bridged or leaked
meshes.

This PR makes the new default **flag, don't drop**: foreign adverts are
stored, the node row is tagged `foreign_advert=1`, and the API surfaces
`"foreign": true` so dashboards / map overlays can be built on top.

## Behavior

| Mode | What happens to an ADVERT outside `geo_filter` |
|---|---|
| (default) flag | Stored, marked `foreign_advert=1`, exposed via API |
| drop (legacy) | Silently dropped (preserves old behavior for ops who
want it) |

## What's done (M1 — Backend)
- ingestor stores foreign adverts instead of dropping
- `nodes.foreign_advert` column added (migration)
- `/api/nodes` and `/api/nodes/{pk}` expose `foreign: true` field
- Config: `geofilter.action: "flag"|"drop"` (default `flag`)
- Tests + config docs

## What's NOT done (deferred to M2 + M3)

- **M2 — Frontend:** Map overlay showing foreign adverts as distinct
markers, foreign-advert filter on packets/nodes pages, dedicated
foreign-advert dashboard
- **M3 — Alerting:** Time-series detection of bridging events, alert
when foreign advert rate spikes, identify bridge entry-point nodes

Issue #730 remains open for M2 and M3.

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-05 01:58:52 -07:00
Kpa-clawbot f7d8a7cb8f feat(packets): filter UX — in-UI docs + autocomplete + right-click + saved filters (#966) (#1083)
## Summary

Implements the full filter-input UX upgrade from #966 — Wireshark-style
help, autocomplete, right-click-to-filter, and saved filters.

Closes #966.

## Surfaces

### A. Help popover (ⓘ button next to filter input)
Auto-generated from `PacketFilter.FIELDS` / `OPERATORS` so it stays in
sync with the parser. Includes:
- Syntax overview (boolean ops, parens, case-insensitivity,
URL-shareable filters)
- Full field reference (27 entries: top-level + `payload.*`)
- Full operator reference with one example per op
- 10 ready-to-paste examples
- Tips (right-click, autocomplete, save)

### B. Autocomplete dropdown
- Type partial field name → field suggestions (top-level + dynamic
`payload.*` keys discovered from visible packets)
- Type `field` → operator suggestions
- Type `type ==` → list of canonical type values (`ADVERT`, `GRP_TXT`,
…)
- Type `route ==` → list of route values (`FLOOD`, `DIRECT`,
`TRANSPORT_FLOOD`, …)
- Keyboard nav: ↑/↓, Tab/Enter to accept, Esc to dismiss

### C. Right-click → filter by this value
Right-click any of these cells in the packet table:
- `hash`, `size`, `type`, `observer`

Context menu offers `==`, `!=`, `contains`. Click → clause appended to
filter input (with `&&` if expression already present).

### D. Saved filters
- ★ Saved ▾ dropdown next to the input
- 7 starter defaults (Adverts only, Channel traffic, Direct messages,
Strong signal SNR > 5, Multi-hop, Repeater adverts, Recent < 5m)
- "+ Save current expression" prompts for a name and persists to
`localStorage` under `corescope_saved_filters_v1`
- User filters can be deleted (✕); defaults cannot
- User filters with the same name as a default override it

## Implementation

**`public/packet-filter.js`** — exposes `FIELDS`, `OPERATORS`,
`TYPE_VALUES`, `ROUTE_VALUES`, and a new `suggest(input, cursor, opts)`
function that returns ranked autocomplete suggestions with
replace-range. Pure function — no DOM, fully unit-tested.

**NEW `public/filter-ux.js`** — `window.FilterUX` IIFE owning the help
popover, autocomplete dropdown, context menu, and saved-filters store.
`init()` is idempotent, called once after the filter input renders.

**`public/packets.js`** — calls `FilterUX.init()` after the filter input
IIFE; row builders gain `data-filter-field` / `data-filter-value` attrs
on hash/size/type/observer cells. `filter-group` wrapper now `position:
relative` so dropdowns anchor correctly.

**`public/style.css`** — scoped `.fux-*` styles using existing CSS
variables (no new theme tokens).

## Tests

- `test-packet-filter-ux.js` (19 unit tests, wired into `test-all.sh`):
  - Metadata exposure (FIELDS / OPERATORS / TYPE_VALUES / ROUTE_VALUES)
- `suggest()` for empty input, prefix match, after `==`, dynamic
`payload.*` keys
- `SavedFilters.list/save/delete` — defaults, persistence, override,
dedup
- `buildCellFilterClause()` and `appendClauseToExpr()` quoting +
appending
- `test-filter-ux-e2e.js` (Playwright, wired into `deploy.yml`):
  - Navigate /packets → metadata exposed
  - Help popover opens with field reference, operators, examples
  - Autocomplete shows on focus, filters by prefix, accepts on Enter
  - Saved-filter dropdown lists defaults, click populates input
  - Right-click on TYPE cell → context menu → click appends clause
  - Save current expression persists to localStorage

TDD red commit (`bddf1c1`) — assertion failures only, no import errors.
Green commit (`0d3f381`) — all 19 unit tests pass.

## Browser validation

Spawned local server on :39966 against the e2e fixture DB and exercised
every UX surface via the openclaw browser tool. Confirmed:
- `window.PacketFilter.FIELDS.length === 27`, `suggest()` available
- `FilterUX.SavedFilters.list().length === 7` (defaults seeded)
- Help popover renders with `payload.name`, `contains`, `ADVERT` text
content
- Right-click on a `data-filter-field="type"` /
`data-filter-value="Response"` cell → context menu showed three options
→ clicking == populated the input with `type == "Response"` (and the
existing alias resolver matched it to `payload_type === 1`)
- Autocomplete on `pay` returned `payload_bytes`, `payload_hex`,
`payload.name`, `payload.lat`, `payload.lon`, `payload.text`

## Out of scope (deferred per the issue)

- Server-synced saved filters (cross-device)
- Visual filter builder
- Custom field expressions

## Acceptance criteria

- [x] Help icon (ⓘ) next to filter input opens documentation popover
- [x] Field reference table + operator reference + 6+ examples in
popover
- [x] Autocomplete dropdown on field names (top-level + `payload.*`)
- [x] Autocomplete dropdown on values for `type` / `route` operators
- [x] Right-click on packet cell → "Filter ==" / "Filter !=" / "Filter
contains"
- [x] Right-click context menu hides when clicking elsewhere / Esc
- [x] Saved-filters dropdown with at least 5 default examples (7
shipped)
- [x] User-saved filters persist in localStorage
- [x] Real-time match count next to filter input (already shipped
pre-PR; preserved)
- [ ] Improved error messages with token + position — partial: existing
parse errors already cite position; not a regression
- [x] No regression in existing filter behavior
(`test-packet-filter.js`: 69/69 pass)

---------

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 01:50:12 -07:00
Kpa-clawbot e9c801b41a feat(live): filter incoming packets by IATA region (#1045) (#1080)
Closes #1045.

## What
Adds an optional region dropdown to the **Live** page that filters
incoming packets by observer IATA. When a user selects one or more
regions, only packets observed by repeaters in those regions render in
the feed/animation/audio.

## How
- New `liveRegionFilter` container in the live header toggles row,
initialised via the shared `RegionFilter` component in `dropdown` mode
(matches packets/nodes/observers pages).
- On page init, fetches `/api/observers` once and builds an `observer_id
→ IATA` map.
- `packetMatchesRegion(packets, obsMap, selected)` (pure helper, OR
across observations, case-insensitive) gates `renderPacketTree` next to
the existing favorite + node filters.
- Selection persists in localStorage via the existing `RegionFilter`
machinery — no per-page key needed.
- Listener cleanup hooked into the existing live-page teardown.

## TDD
- Red commit `55097ce`: `test-live-region-filter.js` asserts
`_livePacketMatchesRegion` exists and behaves correctly across 9 cases
(no-selection passthrough, single match, no-match, OR across
observations, multi-region selection, unknown observer, missing
observer_id, case-insensitivity, observer-map override). Fails with
`_livePacketMatchesRegion must be exposed` against master.
- Green commit `fdec7bf`: implements helper + UI wiring + CSS; test
passes.

Test wired into `.github/workflows/deploy.yml` JS unit-test step.

## Notes
- Server-side WS broadcast is unchanged — filtering is purely
client-side, as the issue requests ("something a user can activate
themselves, and not something that would be server wide").
- Pre-existing `test-live.js` / `test-live-dedup.js` failures on master
are not introduced or affected by this PR (verified by running both on
master HEAD).

---------

Co-authored-by: meshcore-bot <bot@openclaw.local>
2026-05-05 01:43:05 -07:00
Kpa-clawbot 3ab404b545 feat(node-battery): voltage trend chart + /api/nodes/{pubkey}/battery (#663) (#1082)
## Summary

Closes #663 (Phase 2 + 3 partial — time-series tracking + thresholds for
nodes that are also observers).

Adds a per-node battery voltage trend chart and
`/api/nodes/{pubkey}/battery` endpoint, sourced from the existing
`observer_metrics.battery_mv` samples populated by observer status
messages. No new ingest or schema changes — purely surfaces data we were
already collecting.

## Scope (TDD red→green)

**RED commit:** test(node-battery) — DB query, endpoint shape
(200/404/no-data), and config getters all asserted.
**GREEN commit:** feat(node-battery) — implementation only.

## Changes

### Backend
- `cmd/server/node_battery.go` (new):
- `DB.GetNodeBatteryHistory(pubkey, since)` — pulls `(timestamp,
battery_mv)` rows from `observer_metrics WHERE LOWER(observer_id) =
LOWER(public_key) AND battery_mv IS NOT NULL`. Case-insensitive join
tolerates historical pubkey casing variation (observers persist
uppercase, nodes lowercase in this DB).
- `Server.handleNodeBattery` — `GET /api/nodes/{pubkey}/battery?days=N`
(default 7, max 365). Returns `{public_key, days, samples[], latest_mv,
latest_ts, status, thresholds}`.
- `Config.LowBatteryMv()` / `CriticalBatteryMv()` — defaults 3300 / 3000
mV.
- `cmd/server/config.go` — `BatteryThresholds *BatteryThresholdsConfig`
field.
- `cmd/server/routes.go` — route registration alongside existing
`/health`, `/analytics`.

### Frontend
- `public/node-analytics.js` — new "Battery Voltage" chart card with
status badge (🔋 OK / ⚠️ Low / 🪫 Critical / No data). Renders dashed
threshold lines at `lowMv` and `criticalMv`. Empty-state message when no
samples in window.

### Config
- `config.example.json` — `batteryThresholds: { lowMv: 3300, criticalMv:
3000 }` with `_comment` per Config Documentation Rule.

## Status semantics

| latest_mv             | status     |
|-----------------------|------------|
| no samples in window  | `unknown`  |
| `>= lowMv`            | `ok`       |
| `< lowMv`, `>= critMv`| `low`      |
| `< criticalMv`        | `critical` |

## What this PR does NOT do (deferred)

The issue's full Phase 1 (writing decoded sensor advert telemetry into
`nodes.battery_mv` / `temperature_c` from server-side decoder) and Phase
4 (firmware/active polling for repeaters without observers) are out of
scope here. This PR delivers the requested Phase 2/3 surfacing for the
data path that already lands rows: `observer_metrics`. Repeaters that
are also observers (i.e. publish status to MQTT) will get a voltage
trend immediately; pure passive nodes won't until Phase 1 lands.

## Tests

- `TestGetNodeBatteryHistory_FromObserverMetrics` — case-insensitive
join, NULL skipping, ordering.
- `TestNodeBatteryEndpoint` — full happy path with thresholds + status.
- `TestNodeBatteryEndpoint_NoData` — 200 + status=unknown.
- `TestNodeBatteryEndpoint_404` — unknown node.
- `TestBatteryThresholds_ConfigOverride` — config getters + defaults.

`cd cmd/server && go test ./...` — green.

## Performance

Endpoint is per-pubkey (called once on analytics page open), indexed by
`(observer_id, timestamp)` PK on `observer_metrics`. No hot-path impact.

---------

Co-authored-by: bot <bot@corescope>
2026-05-05 01:41:00 -07:00
Kpa-clawbot aa3d26f314 fix(nav): stop nav bar from jumping when Live is selected (#1046) (#1078)
## Summary

The `🔴 Live` nav link could wrap onto two lines at certain viewport
widths once it became the `.active` link, which grew `.nav-link`'s
height and made the whole `.top-nav` "hop" the instant Live was selected
(issue #1046).

Adding `white-space: nowrap` to the base `.nav-link` rule keeps every
nav label on a single line at every breakpoint (default desktop + the
768–1279px and <768px responsive overrides), eliminating the jump.

## Changes

- `public/style.css` — `white-space: nowrap` on `.nav-link`.
- `test-e2e-playwright.js` — new assertion at viewport 1115px (the width
in the issue screenshots) that:
  - computed `white-space` prevents wrapping
  - the Live link renders on a single line in both states
  - `.top-nav` height does not change when `.active` is toggled

## TDD

- Red commit `ba906a5` — test added, fails because base `.nav-link` has
no `white-space` rule (default `normal`).
- Green commit `51906cb` — single-line CSS fix makes the test pass.

Fixes #1046

---------

Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-05 01:36:08 -07:00
Kpa-clawbot 5f6c5af0cf fix(observers): correct column headings after Last Packet (#1039) (#1075)
## Summary

Fixes #1039 — the Observers page table had 10 `<td>` cells per row but
only 9 `<th>` headings, so labels drifted starting at the Packet Health
badge cell. The headings `Packets`, `Packets/Hour`, `Clock Offset`,
`Uptime` were each one column to the left of their data.

## Changes

- `public/observers.js`: added missing `Packet Health` heading (over the
`packetBadge()` cell) and renamed the count column header from `Packets`
to `Total Packets` to disambiguate from `Packets/Hour`.

## TDD

- **Red commit** (`7cae61c`): `test-observers-headings.js` asserts
`<th>` count equals `<td>` count and verifies the expected header order.
Both assertions fail on master (9 vs 10; `Packets` vs `Packet
Health`/`Total Packets`).
- **Green commit** (`8ed7f7c`): heading row updated; both assertions
pass.

## Test

```
$ node test-observers-headings.js
── Observers table headings (#1039) ──
  ✓ thead column count equals tbody row column count
  ✓ expected headings present and ordered
2 passed, 0 failed
```

Wired into `test-all.sh`.

## Risk

Frontend-only, static template change. No data flow / perf impact.

Fixes #1039

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-05 01:35:09 -07:00
Kpa-clawbot f33801ecb4 feat(repeater): usefulness score — traffic axis (#672) (#1079)
## Summary

Implements the **Traffic axis** of the repeater usefulness score (#672).
Does NOT close #672 — Bridge, Coverage, and Redundancy axes are deferred
to follow-up PRs.

Adds `usefulness_score` (0..1) to repeater/room node API responses
representing what fraction of non-advert traffic passes through this
repeater as a relay hop.

## Why traffic-axis-first

The issue proposes a 4-axis composite (Bridge, Coverage, Traffic,
Redundancy). Bridge/Coverage/Redundancy require betweenness centrality
and neighbor graph infrastructure (#773 Neighbor Graph V2). Traffic axis
can ship independently using existing path-hop data.

## Remaining work for #672

- Bridge axis (betweenness centrality — depends on #773)
- Coverage axis (observer reach comparison)
- Redundancy axis (node-removal simulation — depends on #687)
- Composite score combining all 4 axes

Partial fix for #672.

---------

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 01:34:08 -07:00
Kpa-clawbot d05e468598 feat(memlimit): GOMEMLIMIT support, derive from packetStore.maxMemoryMB (#836) (#1077)
## Summary

Implements **part 1** of #836 — `GOMEMLIMIT` support so the Go runtime
self-throttles GC under cgroup memory pressure instead of getting
SIGKILLed.

(Parts 2 & 3 — bounded cold-load batching + README ops docs — land in
follow-up PRs.)

## Behavior

On startup `cmd/server/main.go` now calls `applyMemoryLimit(maxMemoryMB,
envSet)`:

| Condition | Action | Log |
|---|---|---|
| `GOMEMLIMIT` env set | Honor the runtime's parse, do nothing |
`[memlimit] using GOMEMLIMIT from environment (...)` |
| env unset, `packetStore.maxMemoryMB > 0` | `debug.SetMemoryLimit(maxMB
* 1.5 MiB)` | `[memlimit] derived from packetStore.maxMemoryMB=512 → 768
MiB (1.5x headroom)` |
| env unset, `maxMemoryMB == 0` | No-op | `[memlimit] no soft memory
limit set ... recommend setting one to avoid container OOM-kill` |

The 1.5x headroom covers Go's NextGC trigger at ~2× live heap (per #836
heap profile: 680 MB live → 1.38 GB NextGC).

## Tests (TDD red→green visible in commit history)

- `TestApplyMemoryLimit_FromEnv` — env wins, function does not override
- `TestApplyMemoryLimit_DerivedFromMaxMemoryMB` — verifies bytes
computation + `debug.SetMemoryLimit` actually applied at runtime
- `TestApplyMemoryLimit_None` — no env, no config → reports `"none"`, no
side effect

Red commit: `7de3c62` (assertion failures, builds clean)
Green commit: `454516d`

## Config docs

`config.example.json` `packetStore._comment_gomemlimit` documents
env/derived/override behavior.

## Out of scope

- Cold-load transient bounding (item 2 in #836)
- README container-size table (item 3)
- QA §1.1 rewrite

Closes part 1 of #836.

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-05 01:33:23 -07:00
Kpa-clawbot d192330bdc feat(compare): asymmetric overlap stats for reference observer comparison (#671) (#1076)
## Summary

Adds asymmetric overlap percentages to the existing observer compare
page so it can be used as a **reference observer comparison** tool
(Uncle Lit's request, #671).

## What changed

`public/compare.js` (frontend only — no backend changes)

- New `computeOverlapStats(cmp)` helper that turns a
`comparePacketSets()` result into two-way coverage:
  - `aSeesOfB` — % of B's packets that A also saw
  - `bSeesOfA` — % of A's packets that B also saw
  - plus shared / onlyA / onlyB / totalA / totalB
- Two callout cards on the compare summary view:
  - `<A> saw N of <B>'s X packets` (Y%)
  - `<B> saw N of <A>'s X packets` (Y%)
- Existing "Only A / Only B / Both" tabs already identify unique
packets; that's the second half of the issue and is left intact.

## Operator workflow

Pick a known-good observer (LOS to key nodes) as the reference. Pair it
with a candidate. If the candidate's overlap with the reference is high
→ healthy. If low → investigate antenna, obstruction, or RF deafness.

## Out of scope (future work)

Issue lists several follow-on milestones — full Analytics sub-tab with
reference-vs-many table, SNR delta, geographic proximity filter,
server-side `/api/analytics/observer-comparison` endpoint. Those are
larger and tracked by the issue's M1-M4 milestones; this PR closes the
core ask (asymmetric overlap on the existing compare page) and leaves
the rest for follow-ups.

## Tests

`test-compare-overlap.js` — 6 unit tests via vm sandbox:

- exposes `computeOverlapStats` on `window`
- basic asymmetric scenario (8/10 vs 8/12)
- zero packets — no division by zero
- one observer empty — both percentages 0
- perfect overlap — 100% both ways
- disjoint observers — 0% both ways

TDD: red commit landed first with stub returning zeros (assertions
failed), green commit added the math.

Closes #671

---------

Co-authored-by: bot <bot@corescope.local>
2026-05-05 01:33:04 -07:00
Kpa-clawbot 2b01ecd051 ci: update go-server-coverage.json [skip ci] 2026-05-05 08:28:49 +00:00
Kpa-clawbot 34b418894a ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 08:28:48 +00:00
Kpa-clawbot 1860cb4c54 ci: update frontend-tests.json [skip ci] 2026-05-05 08:28:47 +00:00
Kpa-clawbot 6a715e6af7 ci: update frontend-coverage.json [skip ci] 2026-05-05 08:28:46 +00:00
Kpa-clawbot fc16b4e069 ci: update e2e-tests.json [skip ci] 2026-05-05 08:28:45 +00:00
Kpa-clawbot 45f30fcadc feat(repeater): liveness detection — distinguish actively relaying from advert-only (#662) (#1073)
## Summary

Implements repeater liveness detection per #662 — distinguishes a
repeater that is **actively relaying traffic** from one that is **alive
but idle** (only sending its own adverts).

## Approach

The backend already maintains a `byPathHop` index keyed by lowercase
hop/pubkey for every transmission. Decode-window writes also key it by
**resolved pubkey** for relay hops. We just weren't surfacing it.

`GetRepeaterRelayInfo(pubkey, windowHours)`:
- Reads `byPathHop[pubkey]`.
- Skips packets whose `payload_type == 4` (advert) — a self-advert
proves liveness, not relaying.
- Returns the most recent `FirstSeen` as `lastRelayed`, plus
`relayActive` (within window) and the `windowHours` actually used.

## Three states (per issue)

| State | Indicator | Condition |
|---|---|---|
| 🟢 Relaying | green | `last_relayed` within `relayActiveHours` |
| 🟡 Alive (idle) | yellow | repeater is in the DB but
`relay_active=false` (no recent path-hop appearance, or none ever) |
|  Stale | existing | falls out of the existing `getNodeStatus` logic |

## API

- `GET /api/nodes` — repeater/room rows now include `last_relayed`
(omitted if never observed) and `relay_active`.
- `GET /api/nodes/{pubkey}` — same fields plus `relay_window_hours`.

## Config

New optional field under `healthThresholds`:

```json
"healthThresholds": {
  ...,
  "relayActiveHours": 24
}
```

Default 24h. Documented in `config.example.json`.

## Frontend

Node detail page gains a **Last Relayed** row for repeaters/rooms with
the 🟢/🟡 state badge. Tooltip explains the distinction from "Last Heard".

## TDD

- **Red commit** `4445f91`: `repeater_liveness_test.go` + stub
`GetRepeaterRelayInfo` returning zero. Active and Stale tests fail on
assertion (LastRelayed empty / mismatched). Idle and IgnoresAdverts
already match the desired behavior under the stub. Compiles, runs, fails
on assertions — not on imports.
- **Green commit** `5fcfb57`: Implementation. All four tests pass. Full
`cmd/server` suite green (~22s).

## Performance

`O(N)` over `byPathHop[pubkey]` per call. The index is bounded by store
eviction; a single repeater has at most a few hundred entries on real
data. The `/api/nodes` loop adds one map read + scan per repeater row —
negligible against the existing enrichment work.

## Limitations (per issue body)

1. Observer coverage gaps — if no observer hears a repeater's relay,
it'll show as idle even when actively relaying. This is inherent to
passive observation.
2. Low-traffic networks — a repeater in a quiet area legitimately shows
idle. The 🟡 indicator copy makes that explicit ("alive (idle)").
3. Hash collisions are mitigated by the existing `resolveWithContext`
path before pubkeys land in `byPathHop`.

Fixes #662

---------

Co-authored-by: clawbot <bot@corescope.local>
2026-05-05 01:17:52 -07:00
Kpa-clawbot 8b924cd217 feat(ui): encode view & filter state in URL hash (#749) (#1072)
## Summary

Encodes view + filter state in the URL hash so deep links restore the
exact page state (issue #749).

## Changes

New shared helper `public/url-state.js` exposing `URLState`:
- `parseSort('col:asc')` → `{column, direction}` (defaults to `desc`)
- `serializeSort('col', 'desc')` → `'col'` (omits default direction)
- `parseHash('#/nodes/abc?tab=x')` → `{route: 'nodes/abc', params:
{tab:'x'}}`
- `buildHash(route, params)` and `updateHashParams(updates,
currentHash)` for round-tripping while preserving subpaths.

Wired into:

- **packets.js** — sort column/direction now in
`#/packets?sort=col[:asc]`, restored on init (overrides localStorage).
Subpath `#/packets/<hash>` preserved.
- **nodes.js** — sort encoded as `#/nodes?sort=col[:asc]`, restored on
init. Subpath `#/nodes/<pubkey>` preserved.
- **analytics.js** — both selected tab (`tab=topology`) AND time-window
picker value (`window=7d`) now round-trip via URL. Subview keys used by
rf-health (`range/observer/from/to`) cleared when switching tabs to keep
URLs clean.

Existing deep links (`#/nodes/<pubkey>`, `#/packets/<hash>`,
`?filter=…`, `?node=…`, `?observer=…`, `?channel=…`, `?timeWindow=…`,
`?region=…`) all keep working — additive change only.

## Tests

TDD red→green:
- Red: `5e1482e` (stub throws "not implemented"; 18/18 tests fail on
assertions)
- Green: `512940e` (helper implemented; 18/18 pass)

Wired `test-url-state.js` into `test-all.sh`.

Fixes #749

---------

Co-authored-by: clawbot <clawbot@users.noreply.github.com>
2026-05-05 01:17:22 -07:00
Kpa-clawbot 83881e6b71 fix(#688): auto-discover hashtag channels from message text (#1071)
## Summary

Auto-discovers previously-unknown hashtag channels by scanning decoded
channel message text for `#name` mentions and surfacing them via
`GetChannels`.

Workflow (per the issue):
1. New channel message arrives on a known channel
2. Decoded text is scanned for `#hashtag` mentions
3. Any mention that doesn't match an existing channel is surfaced as a
discovered channel (`discovered: true`, `messageCount: 0`)
4. Future traffic on that channel will populate the entry once it has
its own packets

## Changes

- `cmd/server/discovered_channels.go` — new file.
`extractHashtagsFromText` parses `#name` mentions from free text,
deduped, order-preserving. Trailing punctuation is excluded by the
character class.
- `cmd/server/store.go` — `GetChannels` now scans CHAN packet text for
hashtags after building the primary channel map, and appends any unseen
hashtag mentions as discovered entries.
- `cmd/server/discovered_channels_test.go` — new tests covering parser
edge cases (single, multi, dedup, punctuation, none, bare `#`) and
end-to-end discovery via `GetChannels`.

## TDD

- Red: `34f1817` — stub returns `nil`, both new tests fail on assertion
(verified).
- Green: `d27b3ed` — real implementation, full `cmd/server` test suite
passes (21.7s).

## Notes

- Discovered channels carry `messageCount: 0` and `lastActivity` set to
the most recent mention's `firstSeen`, so they sort naturally alongside
real channels.
- Names are matched against existing entries by both `#name` and bare
`name` so a channel that already has decoded traffic isn't
double-listed.
- The existing `channelsCache` (15s) covers the new code path; no
separate invalidation needed since the source data (`byPayloadType[5]`)
drives both maps.

Fixes #688

---------

Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-05 01:16:57 -07:00
Kpa-clawbot 417b460fa0 feat(css): fluid scaffolding — clamp() spacing/type/container tokens (#1054) (#1066)
## Summary

Lands the **fluid CSS foundation** for the responsive scaffolding effort
(parent #1050). Pure additive change to the top of `public/style.css` —
no component CSS touched.

## What changed

### New tokens in `:root`
- **Spacing scale** — `--space-xs … --space-2xl` via `clamp()`. 1440px
targets match the prior hardcoded `4 / 8 / 16 / 24 / 32 / 48` px values
to within ~1px.
- **Type scale** — `--fs-sm … --fs-2xl` via `clamp(min, vw-based, max)`.
Floors keep text readable at 768px; caps prevent runaway growth at
2560px+.
- **Radii** — `--radius-sm/md/lg` via `clamp()`.
- **Container layout** — `--gutter` (`clamp()`) and `--content-max`
(`min(100% - 2*gutter, 1600px)`) for fluid horizontal layout without
media queries.

### Base consumption
- `html, body` now sets `font-size: var(--fs-md)`.

### Parallel-work safety
- Added `FLUID SCAFFOLDING` section header at the top.
- Added `COMPONENT STYLES` section header marking where the rest of the
file (nav, tables, charts, map, packets, analytics …) begins. Sibling
tasks 1050-3..6 / 1052-* edit inside that region and won't conflict with
this PR.

## TDD

- **Red:** `2d6f90a` — `test-fluid-scaffolding.js` asserts the new
tokens exist with `clamp()`/`min()`, that `html, body` consumes
`--fs-md`, and that the section marker is present. Fails on assertions
(15 failed, 0 passed).
- **Green:** `7b4d59b` — implementation in `public/style.css`. All 15
assertions pass.

## Acceptance criteria

- [x] Fluid spacing scale `--space-xs..--space-2xl` via `clamp()`
- [x] Fluid type scale `--fs-sm..--fs-2xl` via `clamp()`
- [x] Replace base body font-size with the new token
- [x] Container layout vars `--content-max`, `--gutter` via
`min()`/`clamp()`
- [x] No component CSS edits (only `:root`, `html`, `body`)
- [x] No visual regression at 1440px (token targets numerically match
prior px values)

## Notes for reviewers

- Pre-existing `test-frontend-helpers.js` failure on master is unrelated
(`nodesContainer.setAttribute is not a function`) and not introduced
here.
- `--content-max` uses `min(100% - 2*gutter, 1600px)` — the `100% - …`
arm wins on small viewports and guarantees a gutter always remains.

Fixes #1054

---------

Co-authored-by: clawbot <bot@corescope.local>
Co-authored-by: openclaw-bot <bot@openclaw.local>
Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 01:14:39 -07:00
Kpa-clawbot 78dabd5bda feat(filter): timestamp predicates (after/before/between/age) — #289 (#1070)
Fixes #289.

Adds Wireshark-style timestamp predicates to the client-side packet
filter
engine (`public/packet-filter.js`).

## New syntax

| Form | Meaning |
| --- | --- |
| `time after "2024-01-01"` | packets with timestamp strictly after the
given datetime |
| `time before "2024-12-31T23:59:59Z"` | packets strictly before |
| `time between "2024-01-01" "2024-02-01"` | inclusive range
(order-insensitive) |
| `age < 1h` | packets newer than 1 hour |
| `age > 24h` | packets older than 24 hours |
| `age < 7d && type == ADVERT` | composes with existing predicates |

Duration units: `s` / `m` / `h` / `d` / `w`. Datetime values use
`Date.parse`
(ISO 8601 + bare `YYYY-MM-DD`). `time` is also accepted as `timestamp`.

## Implementation

- `OP_WORDS` extended with `after`, `before`, `between`.
- New `TK.DURATION` token: lexer recognises `<number><unit>` and
pre-converts
  to seconds at lex time (no per-evaluation parsing cost).
- `between` is a two-value op handled in `parseComparison`.
- Field resolver:
- `time` / `timestamp` → epoch-ms; falls back to `first_seen` then
`latest`
    so grouped rows from `/api/packets?groupByHash=true` work.
  - `age` → seconds since `Date.now()`.
- Parse-time validation rejects invalid datetimes and unknown duration
units
(silent-fail would have been a footgun — every packet would just
disappear).
- Null/missing timestamps → predicate returns `false`, consistent with
the
  existing null-field behaviour for `snr` / `rssi`.

## Open questions from the issue

- **UTC vs local**: defaults to whatever `Date.parse` returns. Bare
dates like
`"2024-01-01"` are interpreted as UTC midnight by the spec. Tying this
to
  the #286 timestamp display setting can be a follow-up.
- **URL query string**: out of scope for this PR.

## Tests

- New `test-packet-filter-time.js`: 20 tests covering
`after`/`before`/`between`,
ISO datetimes, all duration units, composition with `&&`, null-timestamp
safety,
  invalid-datetime / invalid-unit errors, and `first_seen` fallback.
- Wired into `.github/workflows/deploy.yml` JS unit-test step.
- Existing `test-packet-filter.js` (69 tests) and inline self-tests
still pass.

## Commits

- Red: `5ccfad3` — failing tests + lexer-only stub (compiles, asserts
fail)
- Green: `976d50f` — implementation

---------

Co-authored-by: OpenClaw Bot <bot@openclaw.local>
2026-05-05 01:13:48 -07:00
Kpa-clawbot e2050f8ec8 fix(docker): default BUILDPLATFORM so plain docker build works (fixes #884) (#1069)
## Summary

Plain `docker build .` (no buildx) fails immediately:

```
Step 1/45 : FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
failed to parse platform : "" is an invalid component of "": platform specifier
component must match "^[A-Za-z0-9_-]+$"
```

`$BUILDPLATFORM` is only auto-populated by buildx; under plain
BuildKit/`docker build` it's empty.

## Fix

Add `ARG BUILDPLATFORM=linux/amd64` before the `FROM` so the variable
always resolves.

## Multi-arch preserved

`docker buildx build --platform=linux/arm64,linux/amd64 .` still
overrides `BUILDPLATFORM` at invocation time — the ARG default only
applies when the caller doesn't set one. The existing CI multi-arch
workflow is unaffected.

Fixes #884

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 01:12:40 -07:00
Kpa-clawbot cbfd159f8e feat(ws): pull-to-reconnect on touch devices (Fixes #1063) (#1068)
## Summary

Reframes the browser's native pull-to-refresh on touch devices as a
**WebSocket reconnect** instead of a full page reload. On data pages
(Packets, Nodes, Channels — and globally, since the WS is shared) a
downward pull at `scrollTop=0` cycles the WS, which is what users
actually want when they reach for that gesture.

Fixes #1063.

## Behavior

- **Touch-only**: gated by `('ontouchstart' in window) ||
navigator.maxTouchPoints > 0`. Desktop is untouched.
- **Scroll-safe**: every handler re-checks `scrollTop > 0` and bails out
— never hijacks normal scroll.
- **Visual affordance**: a fixed chip slides down from the top with a
rotating ⟳ icon; opacity and rotation scale with pull progress (0 →
`PULL_THRESHOLD_PX = 80px`).
- **`preventDefault` is conservative**: only after `dy > 16px` and only
on `touchmove`, so taps and short swipes are not affected.
- **Result feedback**: a brief toast — green `Connected ✓` if WS was
already OPEN, `Reconnecting…` otherwise. Both auto-dismiss after ~1.8s.
- **Reconnect path**: closes the existing WS so the existing `onclose`
auto-reconnect fires immediately; an explicit `connectWS()` is also
called as a safety net when `ws` is null.
- **No regression** to existing WS auto-reconnect — same `connectWS` /
`setTimeout(connectWS, 3000)` chain, just kicked manually.

## TDD

- **Red commit** `f90f5e9` — adds `test-pull-to-reconnect.js` with 6
assertions; stub functions added to `app.js` so tests reach assertion
failures (not ReferenceError). 3/6 fail on behavior.
- **Green commit** `53adbd9` — full implementation; 6/6 pass.

## Files

- `public/app.js` — `pullReconnect()`, `setupPullToReconnect()`,
`_ensurePullIndicator()`, `_showPullToast()`, `_isTouchDevice()`. Wired
into `DOMContentLoaded` next to `connectWS()`. Touched the WS section
only.
- `test-pull-to-reconnect.js` — vm sandbox suite covering exposure,
WS-close, listener wiring, threshold trigger, scroll-position gate.

## Acceptance criteria check

-  Pull-down at scroll-top triggers WS reconnect + data refetch
(debounced cache invalidate fires on next WS message)
-  Visible affordance during pull (rotating chip)
-  Resolves on success (toast), shows status toast on disconnect path
-  Disabled when not at `scrollTop=0`
-  No regression to existing WS auto-reconnect

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
Co-authored-by: Kpa-clawbot <bot@kpa-clawbot>
2026-05-05 01:11:59 -07:00
Kpa-clawbot eaf14a61f5 fix(css): 48px touch targets, :active states, hover→tap (#1060) (#1067)
## Summary

Fixes #1060 — free-win CSS pass for touch usability.

- All major interactive controls (`.btn`, `.btn-icon`, `.nav-btn`,
`.nav-link`, `.ch-icon-btn`, `.ch-remove-btn`, `.ch-share-btn`,
`.ch-gear-btn`, `.panel-close-btn`, `.mc-jump-btn`, `button.ch-item`)
now declare `min-height: 48px` / `min-width: 48px`. Hit-area grows;
visual padding/icon size unchanged on desktop because the rules use
`inline-flex` centering.
- Added visible `:active` feedback (background shift + `transform:
scale(0.92–0.97)` + opacity) on every button class — touch devices have
no hover, so `:active` is the only press signal.
- Hover-only `.sort-help` tooltip rule is now wrapped in `@media (hover:
hover)`; added a CSS-only `:focus` / `:focus-within` tap-to-reveal path
with a visible focus ring so the same content is reachable on touch (and
via keyboard).
- All changes scoped to the `=== Touch Targets ===` section. No other
CSS section modified, no JS touched, no markup edits.

## Acceptance criteria

- [x] All interactive controls reach 48×48 CSS-px touch target (verified
by `test-touch-targets.js`).
- [x] Every button has a visible `:active` state (no hover-only
feedback).
- [x] Hover tooltip rule is gated behind `@media (hover: hover)`, with
`:focus-within` tap-to-reveal fallback.
- [x] Desktop visuals preserved (padding-based, not visual-size-based).

## TDD

- Red commit `327473b` — `test-touch-targets.js` asserts every required
selector/property; it compiles and fails on assertion against pre-change
CSS.
- Green commit `e319a8f` — Touch Targets section rewrite; test passes.

```
$ node test-touch-targets.js
test-touch-targets.js: OK
```

Fixes #1060

---------

Co-authored-by: bot <bot@corescope>
2026-05-05 01:11:08 -07:00
Kpa-clawbot b71c290783 ci: update go-server-coverage.json [skip ci] 2026-05-05 07:24:14 +00:00
Kpa-clawbot d7fbd4755e ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 07:24:13 +00:00
Kpa-clawbot 13b6eecc82 ci: update frontend-tests.json [skip ci] 2026-05-05 07:24:12 +00:00
Kpa-clawbot b18ebe1a26 ci: update frontend-coverage.json [skip ci] 2026-05-05 07:24:11 +00:00
Kpa-clawbot 9aa94166df ci: update e2e-tests.json [skip ci] 2026-05-05 07:24:09 +00:00
Kpa-clawbot 38703c75e6 fix(e2e): make Nodes WS auto-update test deterministic (#1051)
## Problem

The Playwright E2E test `Nodes page has WebSocket auto-update`
(`test-e2e-playwright.js:259`) has flaked 7+ times this session,
blocking CI. Failure mode:

```
page.waitForSelector: Timeout 10000ms exceeded
waiting for locator('table tbody tr') to be visible
```

## Root cause

The test navigates to `/#/nodes`, waits for `[data-loaded="true"]`
(passes), then waits for `table tbody tr` (10s, fails intermittently).
Rows in this code path only appear via WebSocket push — which is
timing-dependent in CI (no guaranteed live MQTT feed within the 10s
window).

## Fix

Drop the `table tbody tr` wait. This test's contract is **WS
infrastructure existence**, not data delivery:

- `#liveDot` element present
- `onWS` / `offWS` globals defined
- Best-effort connected-state check (already tolerant of failure)

All those assertions are deterministic post-DOMContentLoaded. Initial
table population is already covered by the preceding `Nodes page loads
with data` test.

## Coverage

No coverage loss — the WS infra assertions are unchanged. Only the
timing-dependent row-presence wait is removed.

## TDD note

This is a test-fix, not a behavior change. The "red" is the existing
intermittent CI failure; the "green" is this commit removing the flaky
wait. No production code touched.

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 00:11:18 -07:00
Kpa-clawbot f9cd43f06f fix(analytics): integrate channels list with PSK decrypt UX + add link from Channels page (#1042)
## What

Integrates the Analytics → Channels section with the PSK decrypt UX (PRs
#1021–#1040). Replaces nonsense `chNNN` placeholders with useful display
names and groups the table the same way the Channels sidebar does.

## Before
- Encrypted channels showed raw `ch185`, `ch64`, `ch?` placeholders.
- Locally-decrypted PSK channels (with stored keys + labels) were not
surfaced — every encrypted row looked identical and useless.
- Single flat list, sorted by last activity by default.

## After
- **My Channels** 🔑 — any analytics row whose hash byte matches a stored
PSK key (via `ChannelDecrypt.getStoredKeys()` + `computeChannelHash`).
Display name uses the user's label if set, otherwise the key name.
- **Network** 📻 — known cleartext channels (server-provided names) and
rainbow-table-decoded encrypted channels.
- **Encrypted** 🔒 — unknown encrypted, rendered as `🔒 Encrypted (0xNN)`
instead of `chNNN`.
- Within each group: messages descending (most active first).
- New `📊 Channel Analytics →` link in the Channels page sidebar header →
`#/analytics`.

## How
- Pure `decorateAnalyticsChannels(channels, hashByteToKeyName, labels)`
— testable in isolation, sets `displayName` + `group` per row.
- `buildHashKeyMap()` — async helper that resolves stored PSK keys to
their channel hash bytes via `computeChannelHash`. Used at render time;
first paint uses an empty map (best-effort) and re-renders once keys
resolve. Graceful fallback when `ChannelDecrypt` is missing or there are
no stored keys.
- `channelTbodyHtml` gains an `opts.grouped` flag — opt-in so the
existing flat sort still works for any other caller.
- The analytics API endpoint is **unchanged** — this is purely frontend
rendering.

## Tests
`test-analytics-channels-integration.js` — 19 assertions covering
decoration, grouping, sort order, and the channels-page link. Added to
`test-all.sh`.

Red commit: `5081b12` (12 assertion failures + stub).  
Green commit: `6be16d9` (all 19 pass).

---------

Co-authored-by: bot <bot@corescope.local>
Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 00:05:09 -07:00
Kpa-clawbot 26a914274f ci: update go-server-coverage.json [skip ci] 2026-05-05 06:52:56 +00:00
Kpa-clawbot e4f358f562 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 06:52:55 +00:00
Kpa-clawbot 5ff9d4f31d ci: update frontend-tests.json [skip ci] 2026-05-05 06:52:54 +00:00
Kpa-clawbot db75dbee44 ci: update frontend-coverage.json [skip ci] 2026-05-05 06:52:52 +00:00
Kpa-clawbot 16e1ff9e6c ci: update e2e-tests.json [skip ci] 2026-05-05 06:52:51 +00:00
Kpa-clawbot d144764d38 fix(analytics): multiByteCapability missing under region filter → all rows 'unknown' (#1049)
## Bug

`https://meshcore.meshat.se/#/analytics`:

- Unfiltered → 0 adopter rows show "unknown" (correct).
- Region filter `JKG` → 14 rows show "unknown" (wrong — same nodes, all
confirmed when unfiltered).

Multi-byte capability is a property of the NODE, derived from its own
adverts (the full pubkey is in the advert payload, no prefix collision
risk). The observing region should only control which nodes appear in
the analytics list — it must not change a node's cap evidence.

## Root cause

`PacketStore.GetAnalyticsHashSizes(region)` only attached
`result["multiByteCapability"]` when `region == ""`. Under any region
filter the field was absent. The frontend (`public/analytics.js:1011`)
does `data.multiByteCapability || []`, so every adopter row falls
through the merge with no cap status and renders as "unknown".

## Fix

Always populate `multiByteCapability`. When a region filter is active,
source the global adopter hash-size set from a no-region compute pass so
out-of-region observers' adverts still count as evidence.

## TDD

Red commit (`0968137`): adds
`cmd/server/multibyte_region_filter_test.go`, asserts that
`GetAnalyticsHashSizes("JKG")` returns a populated `multiByteCapability`
with Node A as `confirmed`. Fails on the assertion (field missing)
before the fix.

Green commit (`6616730`): always compute capability against the global
advert dataset.

## Files changed

- `cmd/server/store.go` — `GetAnalyticsHashSizes`: drop the `region ==
""` gate, always populate `multiByteCapability`.
- `cmd/server/multibyte_region_filter_test.go` — new red→green test.

## Verification

```
go test ./... -count=1   # all server tests pass (21s)
```

---------

Co-authored-by: clawbot <bot@corescope.local>
2026-05-05 06:42:58 +00:00
Kpa-clawbot c4fac7fe2e ci: update go-server-coverage.json [skip ci] 2026-05-05 06:29:28 +00:00
Kpa-clawbot 13587584d2 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 06:29:27 +00:00
Kpa-clawbot 68cd9d77c6 ci: update frontend-tests.json [skip ci] 2026-05-05 06:29:26 +00:00
Kpa-clawbot f2ee74c8f3 ci: update frontend-coverage.json [skip ci] 2026-05-05 06:29:24 +00:00
Kpa-clawbot f676c146ae ci: update e2e-tests.json [skip ci] 2026-05-05 06:29:23 +00:00
Kpa-clawbot 227f375b4a test(ingestor): regression test for observer metadata persistence (#1044) (#1047)
Adds end-to-end test proving that `extractObserverMeta` +
`UpsertObserver` correctly stores model, firmware, battery_mv,
noise_floor, uptime_secs from a real MQTT status payload.

Test passes — confirms the code path works. #1044 was caused by upstream
observers not including metadata fields in their status payloads (older
`meshcoretomqtt` client versions), not a code bug.

Closes #1044

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-05 06:18:47 +00:00
Kpa-clawbot 2e959145aa ci: update go-server-coverage.json [skip ci] 2026-05-05 04:06:12 +00:00
Kpa-clawbot 72dd377ba1 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 04:06:12 +00:00
Kpa-clawbot 8a536c5899 ci: update frontend-tests.json [skip ci] 2026-05-05 04:06:11 +00:00
Kpa-clawbot f3a7d0d435 ci: update frontend-coverage.json [skip ci] 2026-05-05 04:06:10 +00:00
Kpa-clawbot ccc7cf5a77 ci: update e2e-tests.json [skip ci] 2026-05-05 04:06:09 +00:00
Kpa-clawbot 67da696a42 fix(channels): hide raw psk:* in header, label share button, red delete button (#1041)
## Channel UX round 2 (follow-up to #1040)

Three UX issues reported after #1040 landed:

### 1. Header shows raw `psk:372a9c93` for PSK channels
The selected-channel title rendered `ch.name` directly, which for
user-added PSK channels is the synthetic `psk:<hex8>` string. Users see
opaque key fragments where they expected the friendly name they typed.

**Fix:** new `channelDisplayName(ch)` helper. Returns `ch.userLabel`
when set, falls back to `"Private Channel"` for any `psk:*` name, then
to the original name, then to `Channel <hash>`. Used in both
`selectChannel` (header) and `renderChannelRow` (sidebar).

### 2. Share button `⤴` is unrecognizable
Up-arrow glyph carried no meaning — users didn't know it opened the
QR/key reshare modal.

**Fix:** swap `⤴` for `📤 Share` text label. Same hook, same handler.

### 3. ✕ delete button is a subtle span, not a destructive button
Looked like decorative text, not a real action.

**Fix:** `.ch-remove-btn` gets `background: var(--statusRed, #b54a4a)`,
`color: white`, `border-radius: 4px`, `padding: 4px 8px`, `font-weight:
bold`. Now reads as a destructive action.

### TDD
- Red commit `2d05bbf`: 9 failing assertions (helper missing, ⤴ still
present, CSS rules absent), test compiles + runs to assertion failure.
- Green commit `938f3fc`: all 12 assertions pass. Existing
`test-channel-ux-followup.js` still 28/28.

### Files
- `public/channels.js` — `channelDisplayName` helper, header + row
rendering, share button label
- `public/style.css` — `.ch-remove-btn` destructive styling
- `test-channel-ux-round2.js` — new test (helper behavior + source/CSS
assertions)

---------

Co-authored-by: openclaw-bot <bot@openclaw.dev>
Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-04 20:56:01 -07:00
Kpa-clawbot 5829d2328d ci: update go-server-coverage.json [skip ci] 2026-05-05 03:19:29 +00:00
Kpa-clawbot df60f324e9 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 03:19:28 +00:00
Kpa-clawbot 0aeb33f757 ci: update frontend-tests.json [skip ci] 2026-05-05 03:19:26 +00:00
Kpa-clawbot e334f8611e ci: update frontend-coverage.json [skip ci] 2026-05-05 03:19:25 +00:00
Kpa-clawbot 32ba77eaf8 ci: update e2e-tests.json [skip ci] 2026-05-05 03:19:24 +00:00
Kpa-clawbot 724a96f35b ci: update go-server-coverage.json [skip ci] 2026-05-05 03:12:40 +00:00
Kpa-clawbot 849bf1c335 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 03:12:39 +00:00
Kpa-clawbot a0b791254c ci: update frontend-tests.json [skip ci] 2026-05-05 03:12:38 +00:00
Kpa-clawbot 62a2a13251 ci: update frontend-coverage.json [skip ci] 2026-05-05 03:12:38 +00:00
Kpa-clawbot c94ba05c01 ci: update e2e-tests.json [skip ci] 2026-05-05 03:12:37 +00:00
Kpa-clawbot c00b585ee5 fix(channels): UX follow-ups to #1037 (touch target, '0 messages', share, locality, #meshcore) (#1040)
## Summary

Seven UX follow-ups to the channel modal/sidebar redesign in #1037.

## Fixes

1. **✕ touch target** — was 13px font + 0×4 padding, far below WCAG
2.5.5 / Apple HIG 44×44px. Bumped `.ch-remove-btn` to a 44×44 hit area
without disturbing desktop layout.
2. **"0 messages" preview** — user-added (PSK) channel rows showed `0
messages` even when dozens were decrypted. `messageCount` only tracks
server-known activity, not PSK decrypts. Drop the misleading fallback:
when no last message is known and the count is zero/absent, render
nothing.
3. **Privacy footer wording** — old copy "Clear browser data to remove
stored keys" was misleading after #1037 added per-channel ✕. Reworded to
point users at the ✕ button.
4. **Reshare affordance** — each user-added row now exposes a `⤴` Share
button that re-opens the QR + key for that channel via
`ChannelQR.generate` (with a plain-hex + `meshcore://channel/add?...`
URL fallback when the QR vendor lib isn't loaded). Reuses the Add
Channel modal; cleared on close.
5. **Drop "(your key)" suffix** from the row preview. The 🔑 badge
already conveys ownership; the suffix was noise. The key hex itself is
now only revealed on explicit Share, not in the sidebar.
6. **Make browser-local nature obvious** — the prior framing made
local-only sound like a feature when it's actually a constraint users
need to plan around. Adds:
- Prominent `.ch-modal-callout` in the Add Channel modal: *"Channels are
saved to **THIS browser only**. They won't appear on other devices or
browsers, and clearing browser data will remove them."*
   - `🖥️ (this browser)` marker in the **My Channels** section header
- Remove-confirm prompt now explicitly says *"permanently remove the key
from this browser"*
7. **#meshcore, not #LongFast** — `#LongFast` is Meshtastic's default
channel name. The meshcore network's analogous default is `#meshcore`.
Updated placeholder + case-sensitivity example in the modal.

## TDD

- Red commit `878d872` — failing assertions for fixes 1–6.
- Green commit `444cf81` — implementation.
- Red commit `6cab596` — failing assertions for fix 7.
- Green commit `9adc1a3` — `#meshcore` swap.

`test-channel-ux-followup.js` (18 assertions) passes. Existing
`test-channel-modal-ux.js` (33) and `test-channel-sidebar-layout.js` (8)
remain green.

## Files
- `public/channels.js` — row template, share handler, modal
callout/footer, sidebar header, confirm copy, placeholder swap
- `public/style.css` — `.ch-remove-btn` / `.ch-share-btn` 44×44,
`.ch-modal-callout`, `.ch-section-locality`
- `test-channel-ux-followup.js` — new test file

---------

Co-authored-by: clawbot <clawbot@local>
2026-05-04 19:57:53 -07:00
Kpa-clawbot e2bd9a8fa2 ci: update go-server-coverage.json [skip ci] 2026-05-05 02:07:56 +00:00
Kpa-clawbot 1f3c8130ef ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 02:07:55 +00:00
Kpa-clawbot e5606058c1 ci: update frontend-tests.json [skip ci] 2026-05-05 02:07:54 +00:00
Kpa-clawbot 47b4021346 ci: update frontend-coverage.json [skip ci] 2026-05-05 02:07:53 +00:00
Kpa-clawbot c93c008867 ci: update e2e-tests.json [skip ci] 2026-05-05 02:07:52 +00:00
Kpa-clawbot cea2c70d12 feat(#1034): channel UX redesign PR1 — Add Channel modal + sectioned sidebar (#1037)
## Summary

PR 1 of 3 for #1034 — channel UX redesign. Replaces the cramped inline
"type a name or 32-hex blob" form with a clear modal dialog, and
reorganizes the sidebar into three labeled sections.

**Scope of this PR:** Modal UI + sectioned sidebar. QR generation/scan
is deferred to PR #2 (placeholders are wired and ready).
`channel-decrypt.js` crypto is untouched.

## What changed

### New modal: `[+ Add Channel]`

Triggered by the new sidebar button. Three sections:

1. **Generate PSK Channel** — name + `[Generate & Show QR]` →
`crypto.getRandomValues(16)` → hex → `ChannelDecrypt.storeKey`. QR
rendering ships in PR #2; for now `#qr-output` surfaces the hex key as
text.
2. **Add Private Channel (PSK)** — 32-hex input (regex-validated),
optional display name, `[Add]`. `[📷 Scan QR]` placeholder is present but
`disabled` (PR #2 wires it).
3. **Monitor Hashtag Channel** — non-editable `#` prefix + free text +
case-sensitivity warning + `[Monitor]`. Reuses
`ChannelDecrypt.deriveKey`.

Privacy footer: _"🔒 Keys stay in your browser. CoreScope is a passive
observer..."_

Close ✕, backdrop click, and Escape all dismiss.

### Sectioned sidebar

`renderChannelList()` rewritten to render three sections:

- **My Channels** — `userAdded` channels. ✕ always visible. Last sender
+ relative time.
- **Network** — server-known cleartext channels.
- **Encrypted (N)** — collapsed by default (toggle persists in
`localStorage`). Shows hash byte + packet count.

The legacy "🔒 No key" checkbox and `#chShowEncrypted` toggle are removed
entirely. Encrypted channels are always fetched; the renderer groups
them.

## Tests

- **Unit** — `test-channel-modal-ux.js` (33 assertions): added to
`test-all.sh`. Covers sidebar button, modal markup, three sections, QR
placeholders, privacy footer, sectioned sidebar, modal handlers (incl.
`crypto.getRandomValues(16)`).
- **E2E** — `test-channel-modal-e2e.js` (Playwright, 14 steps). Covers
modal open/close, section rendering, invalid-hex error, valid-hex
storage, encrypted-section toggle. Run with:
  ```
CHROMIUM_PATH=/usr/bin/chromium-browser BASE_URL=http://localhost:38201
node test-channel-modal-e2e.js
  ```
- `test-channel-psk-ux.js` — updated to reference `#chPskName` (was
`#chKeyLabelInput`).

### Red→green proof

- Red commit (`7ee421b`): test added with 31 expected assertion
failures, no source change.
- Green commit (`897be8f`): implementation lands, test passes 33/33.

## Browser-validated

Built `cmd/server/`, ran against `test-fixtures/e2e-fixture.db`,
exercised modal open → invalid hex → valid hex → key persisted → modal
closes → sectioned sidebar renders + Encrypted toggle expands. All 14
E2E steps pass.

## What's NOT in this PR

- QR code rendering (PR #2)
- Camera/QR scanning (PR #2)
- Migration of legacy localStorage format (PR #3, if needed — current
key format is unchanged)
- `channel-decrypt.js` changes (none — UI-only PR)

## Acceptance criteria from #1034

- [x] Modal opens on `[+ Add Channel]` click
- [x] Three sections clearly separated with labels
- [x] Add PSK: accepts 32-hex (QR scan = PR #2)
- [x] Monitor Hashtag: derives key, case-sensitivity warning shown
- [x] Privacy footer present
- [x] Sidebar: three sections (My Channels / Network / Encrypted)
- [x] ✕ button visible and functional on My Channels entries
- [x] "No key" checkbox removed
- [ ] Generate PSK QR display — text fallback only; QR is PR #2
- [ ] Old stored keys migrate seamlessly — no migration needed (storage
format unchanged)

Refs #1034

---------

Co-authored-by: meshcore-bot <bot@meshcore.local>
2026-05-04 18:40:46 -07:00
Kpa-clawbot 71f82d5d25 ci: update go-server-coverage.json [skip ci] 2026-05-05 01:39:54 +00:00
Kpa-clawbot 81430cf4c4 ci: update go-ingestor-coverage.json [skip ci] 2026-05-05 01:39:53 +00:00
Kpa-clawbot 1178bae18f ci: update frontend-tests.json [skip ci] 2026-05-05 01:39:52 +00:00
Kpa-clawbot 27c8514d70 ci: update frontend-coverage.json [skip ci] 2026-05-05 01:39:51 +00:00
Kpa-clawbot a24ec6e767 ci: update e2e-tests.json [skip ci] 2026-05-05 01:39:50 +00:00
Kpa-clawbot c1d0daf200 feat(#1034): channel QR generate + scan module (PR 2/3) (#1035)
## PR #2 of channel UX redesign (#1034) — QR generation + scanning

Self-contained QR module for MeshCore channel sharing. Wirable but **not
wired** — PR #3 wires this into the modal placeholders shipped by PR #1.

### What's in
- **`public/channel-qr.js`** — new module exporting `window.ChannelQR`:
- `buildUrl(name, secretHex)` →
`meshcore://channel/add?name=<urlencoded>&secret=<32hex>`
- `parseChannelUrl(url)` → `{name, secret}` or `null` (strict: scheme,
path, hex32 secret)
- `generate(name, secretHex, target)` — renders QR (via vendored
qrcode.js) + the URL string + a "Copy Key" button into `target`
- `scan()` → `Promise<{name, secret} | null>` — opens a camera overlay,
decodes with jsQR, parses, auto-closes on first valid match. Graceful
no-camera/permission-denied fallback ("Camera not available — paste key
manually").
- **`public/vendor/jsqr.min.js`** — vendored jsQR 1.4.0
- **`public/index.html`** — loads `vendor/jsqr.min.js` + `channel-qr.js`
after `channel-decrypt.js`
- **`test-channel-qr.js`** + wired into `test-all.sh` — 16 assertions on
`buildUrl` / `parseChannelUrl` (DOM/camera paths covered by Playwright
in #3)

### TDD
- Red commit `d6ba89e` — stub module + failing assertions on `buildUrl`
/ `parseChannelUrl` (compiles, runs, fails on assertion)
- Green commit `25328ac` — real impl, 16/16 pass

### License note
Brief specified jsQR as MIT — it's actually **Apache-2.0**
(https://github.com/cozmo/jsQR/blob/master/package.json). Apache-2.0 is
permissive and compatible with the repo's ISC license; flagging here so
reviewers can confirm. Cited in the file header.

### Independence guarantees
- Does **not** touch `channels.js` or `channel-decrypt.js`
- Does not call any UI from `channels.js`; PR #3 will call
`ChannelQR.generate(...)` into `#qr-output` and wire `#scan-qr-btn` to
`ChannelQR.scan()`

Refs #1034

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-04 18:29:48 -07:00
Kpa-clawbot d967170dd3 fix(channels): sidebar layout for user-added (PSK) rows — nested <button> bug (#1033)
## Problem

Channel sidebar layout broke for user-added (PSK) channels. Visible
symptoms in the screenshot:

- No ✕ (delete) button on user-added rows
- 🔑 emoji floating in the wrong position
- Message preview text (e.g. `KpaPocket: Тест`) orphaned **between**
channel entries instead of inside the row
- Spinner/loading dots misaligned

## Root cause

**HTML5 forbids nested `<button>` elements.** The `.ch-item` row is a
`<button>`, and #1024 added a `<button class="ch-remove-btn">` inside
it. The HTML parser implicitly closes the outer `.ch-item` the moment it
sees the inner `<button>`, then re-parents everything after it (✕ and
the `.ch-item-preview` line) outside the row.

Resulting DOM tree (parser-corrected, simplified):

```
<button class="ch-item">[icon] Levski 🔑</button>   <-- closes early
<button class="ch-remove-btn">✕</button>            <-- orphaned, "floating"
<div class="ch-item-preview">KpaPocket: Тест</div>  <-- orphaned
<button class="ch-item">[icon] #bookclub …</button>
```

Compounded by `.ch-remove-btn { opacity: 0 }` (only visible on row
hover), which made the ✕ undiscoverable on touch devices even before the
parser bug.

## Fix

`public/channels.js`
- Replace the inner `<button class="ch-remove-btn">` with `<span
class="ch-remove-btn" role="button" tabindex="0">`. Click delegation
already keys off `[data-remove-channel]` so behavior is unchanged.
- Add `keydown` (Enter / Space) handler on `#chList` so the role=button
span stays keyboard-accessible.
- Relabel the ambiguous `🔒 No key` toggle to `🔒 Show encrypted (no
key)`, with an explanatory `title` ("Show encrypted channels you don't
have a key for (locked, can't decrypt)") so users understand it controls
visibility of channels they haven't added a PSK for.

`public/style.css`
- `.ch-remove-btn`: drop `opacity: 0` default. Now `0.55` idle, `0.9` on
row hover, `1` on direct hover/focus. Added `:focus` outline removal +
`display: inline-flex` so the ✕ centers cleanly.
- Add `.ch-user-badge` rule (was unstyled — contributed to the
misalignment of the 🔑).

## TDD

- Red commit `eeb94ad` — `test-channel-sidebar-layout.js` (7 assertions,
3 failing on master).
- Green commit `2959c3d` — fix; all 7 pass.
- Wire commit `4d6100d` — added to `test-all.sh`.

Existing channel test files still pass (`test-channel-psk-ux.js`,
`test-channel-live-decrypt.js`,
`test-channel-live-decrypt-userprefix.js`,
`test-channel-decrypt-m345.js`,
`test-channel-decrypt-insecure-context.js`).

## Files changed
- `public/channels.js`
- `public/style.css`
- `test-channel-sidebar-layout.js` (new)
- `test-all.sh`
2026-05-04 18:29:45 -07:00
Kpa-clawbot 2f0c97604b feat(map): cluster markers with Leaflet.markercluster (#1036) (#1038)
## Summary
Implements map marker clustering for large meshes (500+ nodes) using
vendored `Leaflet.markercluster@1.5.3`. Closes the long-standing no-op
`Show clusters` checkbox.

## What changed
**Vendored library** — `public/vendor/leaflet.markercluster.js` +
`MarkerCluster.css` + `MarkerCluster.Default.css`. No CDN: this runs
offline on mesh-operator deployments.

**`map.js`**
- `createClusterGroup()` instantiates `L.markerClusterGroup` with:
  - `chunkedLoading: true` (no frame drops on initial render)
- `removeOutsideVisibleBounds: true` (viewport culling — key win at 2k+
nodes)
  - `disableClusteringAtZoom: 16` (fully expanded at high zoom)
  - `spiderfyOnMaxZoom: true` (fan out at max zoom)
  - `showCoverageOnHover: false`
  - `animate` disabled on mobile UA for perf
- `makeClusterIcon(cluster)` produces a CoreScope-themed `L.divIcon`:
  - Bold total count, centered
- Up to 4 role-color mini-pills (repeater / companion / room / sensor /
observer) using `ROLE_COLORS`
- Bucketed `mc-sm` / `mc-md` / `mc-lg` background (info / warning /
accent CSS vars)
- `#mcClusters` checkbox repurposed from no-op `Show clusters` →
`Cluster markers`, default **ON**, persisted to
`localStorage['meshcore-map-clustering']`
- Render branches at the marker-add step: clustering ON → `addLayers()`
to `clusterGroup`, skip `deconflictLabels` + `_updateOffsetIndicator`
polylines + `_repositionMarkers` on zoom/resize. Clustering OFF →
original flow unchanged.
- Route polylines (`drawPacketRoute`) already removed both layers — no
change needed beyond actually instantiating `clusterGroup`.
- `?node=PUBKEY` deep-link lookup now searches both `markerLayer` and
`clusterGroup` so it works in either mode.

**`style.css`** — cluster bubble + role-pill styles using `--info` /
`--warning` / `--accent` CSS variables; hover scale.

**`index.html`** — vendor CSS + JS tags after the Leaflet bundle
(cache-busted via `__BUST__`).

## TDD
- **Red commit** `e10af23` — `test-map-clustering.js` + stub
`createClusterGroup`/`makeClusterIcon` returning null/empty divIcon.
Compiles, runs, fails 4/5 on assertions.
- **Green commit** `482ea2e` — real implementation. 5/5 pass.

```
=== map.js: clustering ===
   exposes test hooks (__meshcoreMapInternals)
   createClusterGroup returns an L.MarkerClusterGroup with required options
   cluster group accepts markers via addLayer
   makeClusterIcon: includes total count and role-pill counts
   makeClusterIcon: bucket sm/md/lg by total
```

## Behavior preserved
- Clustering OFF (existing checkbox unchecked) → all original behavior
intact: deconfliction spiral, offset-indicator polylines, per-zoom
reposition.
- Default ON. Operators with small meshes can disable via the checkbox;
choice persists.
- Spiderfying enabled at max zoom (built-in markercluster behavior).

## Performance target
Smooth pan/zoom at 2000 nodes — `chunkedLoading` keeps the main thread
responsive during initial add, `removeOutsideVisibleBounds` keeps DOM
bounded to the viewport. Per AGENTS.md rule 0: complexity is O(n) for
the initial add (chunked across frames), per-zoom re-cluster is internal
to markercluster (well-tested at 10k+ scale).

## Out of scope (filed as follow-ups in spec)
- Canvas marker renderer — only if 5k+ nodes per viewport materializes
- Server-side viewport culling (`/api/nodes?bbox=`)
- Cluster-by-role split groups
- 2k-node fixture + Playwright DOM assertions — repo doesn't currently
ship a `fixture=` query param; the unit test exercises the integration
deterministically.

Fixes #1036

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-04 18:29:42 -07:00
Kpa-clawbot 0b0fda5bb2 ci: update go-server-coverage.json [skip ci] 2026-05-04 23:54:17 +00:00
Kpa-clawbot e966ecc71a ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 23:54:16 +00:00
Kpa-clawbot e7aa8eded8 ci: update frontend-tests.json [skip ci] 2026-05-04 23:54:16 +00:00
Kpa-clawbot d652b7c39d ci: update frontend-coverage.json [skip ci] 2026-05-04 23:54:15 +00:00
Kpa-clawbot 6a8ed98d8f ci: update e2e-tests.json [skip ci] 2026-05-04 23:54:14 +00:00
Kpa-clawbot 26daa760cd fix(channels): live PSK decrypt for user-added channels (#1029 follow-up) (#1031)
## Problem

PR #1030 added live PSK decrypt for GRP_TXT WS packets, but in
production it still didn't work for **user-added** PSK channels. New
messages never appeared in real time on a channel added via the sidebar
key form — users had to refresh the page to see them via the REST fetch
path (regression #1029).

## Root cause

`decryptLivePSKBatch` rewrites the payload with the raw channel name:

```js
payload.channel = dec.channelName;   // e.g. "medusa"
```

But user-added channels live in `channels[]` under the key produced by
`addUserChannel`:

```js
hash: 'user:' + name,                // e.g. "user:medusa"
```

`selectedHash` also uses the `user:`-prefixed key while a user-added
channel is open. Downstream in `processWSBatch`:

| Line | Check | Result |
|---|---|---|
| 962 | `c.hash === channelName` | `"medusa" !== "user:medusa"` → user
channel never matched |
| 982 | `channelName === selectedHash` | `"medusa" !== "user:medusa"` →
message never appended to open chat |
| 974 | `channels.push({ hash: channelName, ... })` | duplicate plain
`"medusa"` entry pushed into sidebar |

The unread bumper (`channels.js:1086`) compared `chName === prior` with
the same mismatch, so it bumped an unread badge on the channel currently
being viewed.

Verified end to end against staging WS traffic (live `decryption_status:
"decrypted"` packets observed; user-added channel never updated,
duplicate entry created).

## Fix

`decryptLivePSKBatch` now also stamps a canonical sidebar key on the
payload:

```js
payload.channelKey = hasUserCh ? ('user:' + dec.channelName) : dec.channelName;
```

`processWSBatch` and the unread bumper route on `payload.channelKey`
(falling back to `payload.channel` for server-known CHAN packets — no
behavior change there).

After the fix:
-  live message appends to the open user-added chat
-  sidebar row's `lastMessage` / `messageCount` / `lastActivityMs`
update
-  no duplicate non-prefixed sidebar entry
-  unread bumped only on channels NOT being viewed

## TDD

Red commit `f1719a8` — `test-channel-live-decrypt-userprefix.js`, fails
6/9 on assertions (NOT build error) on pristine `channels.js`.
Green commit `da87018` — minimal fix in `channels.js`, all 9/9 pass.

Verified red gates the change: stashed `public/channels.js`, re-ran test
on red commit alone → 6 assertion failures (open channel got 0 messages,
duplicate sidebar entry, unread bumped on viewed channel).

## Files changed

- `public/channels.js` — stamp/route on `channelKey`
- `test-channel-live-decrypt-userprefix.js` (new) — red-then-green
regression test

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-04 16:44:35 -07:00
Kpa-clawbot c196030ec0 ci: update go-server-coverage.json [skip ci] 2026-05-04 05:06:19 +00:00
Kpa-clawbot 7b07761fb9 ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 05:06:19 +00:00
Kpa-clawbot e47257222e ci: update frontend-tests.json [skip ci] 2026-05-04 05:06:18 +00:00
Kpa-clawbot 6f2d70599a ci: update frontend-coverage.json [skip ci] 2026-05-04 05:06:17 +00:00
Kpa-clawbot c120b5eef2 ci: update e2e-tests.json [skip ci] 2026-05-04 05:06:16 +00:00
Kpa-clawbot 3290ff1ed5 fix(channels): auto-decrypt PSK channels on WebSocket live feed (#1029) (#1030)
Closes #1029.

## Problem

PSK-decrypted channels show new messages only after a full page refresh.
The WebSocket live feed delivers `GRP_TXT` packets as encrypted blobs
and the channel UI has no hook to auto-decrypt them with stored keys.
The REST fetch path (used on initial load + on `selectChannel`) already
decrypts; the WS path silently dropped on the floor.

## Fix

Two new helpers in `public/channel-decrypt.js`:

- `buildKeyMap()` → `Map<channelHashByte, { channelName, keyBytes,
keyHex }>`
  built from `getStoredKeys()`. Cached and invalidated on `saveKey` /
  `removeKey`, so the WS hot path is O(1) per packet after the first
  build.
- `tryDecryptLive(payload, keyMap)` → returns
`{ sender, text, channelName, channelHashByte }` when the payload is an
  encrypted `GRP_TXT` whose channel hash matches a stored key and whose
  MAC verifies; `null` otherwise.

`public/channels.js` wraps `debouncedOnWS` with an async pre-pass
(`decryptLivePSKBatch`) that:

1. Skips the work entirely when no encrypted `GRP_TXT` is in the batch
   or no PSK keys are stored.
2. For each match, rewrites `payload.channel`, `payload.sender`, and
   `payload.text` so the existing `processWSBatch` consumes the packet
   exactly the same way it consumes a server-decrypted `CHAN`.
3. Bumps a per-channel `unread` counter for any decrypted message
   whose channel is not currently selected. The badge renders in the
   sidebar (`.ch-unread-badge`) and resets on `selectChannel`.

`processWSBatch` itself is untouched, so the existing channel-view
behavior, dedup-by-packet-hash, region filtering, and timestamp ticker
all continue to work as before.

## TDD

- **Red** (`2e1ff05`): `test-channel-live-decrypt.js` asserts the new
  helpers + the channels.js integration contract. With stub
  `buildKeyMap`/`tryDecryptLive` returning empty/null, the test compiles
  and runs to completion with **8/14 assertion failures** (no crashes,
  no missing-symbol errors).
- **Green** (`1783658`): real implementation lands; **14/14 pass**.

## Verification (Rule 18)

- `node test-channel-live-decrypt.js` → 14/14 pass
- All other channel tests still pass:
  - `test-channel-decrypt-ecb.js` 7/7
  - `test-channel-decrypt-insecure-context.js` 8/8
  - `test-channel-decrypt-m345.js` 24/24
  - `test-channel-psk-ux.js` 19/19
- `cd cmd/server && go build ./...` clean
- Booted the server against the fixture DB and curled
  `/channel-decrypt.js`, `/channels.js`, `/style.css` — all three serve
  the new code with the auto-injected `__BUST__` cache buster.

## Performance

The WS pre-pass is gated by a quick scan: zero-cost when no encrypted
`GRP_TXT` is present in the batch. When PSK keys exist, the key map is
cached (sig-keyed on the stored-keys snapshot) so `crypto.subtle.digest`
runs once per stored key per change, not per packet. Each match costs
one MAC verify + one ECB decrypt — the same work
`fetchAndDecryptChannel`
already does, just amortized over time instead of in a single batch.

## Out of scope

- Decoupling the badge from the live feed (server should ideally tag
  packets with `decryptionStatus` before broadcast). Tracked separately.
- Persisting the `unread` counter across reloads (currently in-memory).

---------

Co-authored-by: clawbot <bot@corescope.local>
2026-05-04 04:56:43 +00:00
Kpa-clawbot 505206feb4 ci: update go-server-coverage.json [skip ci] 2026-05-04 04:52:13 +00:00
Kpa-clawbot 41762a873a ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 04:52:12 +00:00
Kpa-clawbot 7ab05c5a19 ci: update frontend-tests.json [skip ci] 2026-05-04 04:52:11 +00:00
Kpa-clawbot c3138a96f7 ci: update frontend-coverage.json [skip ci] 2026-05-04 04:52:10 +00:00
Kpa-clawbot 03c895addc ci: update e2e-tests.json [skip ci] 2026-05-04 04:52:09 +00:00
Kpa-clawbot c9301fee9c fix(ingestor): extract per-hop SNR for TRACE packets at ingest time (#1028)
## Problem

PR #1007 added per-hop SNR extraction (`snrValues`) for TRACE packets to
`cmd/server/decoder.go`. That code path is only hit by the on-demand
re-decode endpoint (packet detail). The actual ingest pipeline runs
`cmd/ingestor/decoder.go`, decodes the packet once, and persists
`decoded_json` into SQLite. The server then serves `decoded_json` as-is
for list/feed queries.

Net effect: `snrValues` never appears in any production response,
because the ingestor's decoder was never updated.

Confirmed empirically: `strings /app/corescope-ingestor | grep snrVal`
returns nothing.

## Fix

Port the SNR extraction logic from `cmd/server/decoder.go` (lines
410–422) into `cmd/ingestor/decoder.go`. For TRACE packets, the header
path bytes are int8 SNR values in quarter-dB encoding; extract them into
`payload.SNRValues` **before** `path.Hops` is overwritten with
payload-derived hop IDs.

Also adds the matching `SNRValues []float64` field to the ingestor's
`Payload` struct so it serializes into `decoded_json`.

## TDD

- **Red commit** (`6ae4c07`): adds `TestDecodeTraceExtractsSNRValues` +
`SNRValues` field stub. Compiles, fails on assertion (`len(SNRValues)=0,
want 2`).
- **Green commit** (`4a4f3f3`): adds extraction loop. Test passes.

Test packet: `26022FF8116A23A80000000001C0DE1000DEDE`
- header `0x26` = TRACE + DIRECT
- pathByte `0x02` = hash_size 1, hash_count 2
- header path `2F F8` → SNR `[int8(0x2F)/4, int8(0xF8)/4]` = `[11.75,
-2.0]`

## Files

- `cmd/ingestor/decoder.go` — `+16` (field + extraction)
- `cmd/ingestor/decoder_test.go` — `+29` (red test)

## Out of scope

- `cmd/server/decoder.go` is already correct (PR #1007). Untouched.
- Backfill of historical `decoded_json` rows. New TRACE packets get SNR;
old rows do not until re-decoded.

---------

Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-03 21:42:14 -07:00
Kpa-clawbot dd66f678be ci: update go-server-coverage.json [skip ci] 2026-05-04 04:17:59 +00:00
Kpa-clawbot 8ec355c6d6 ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 04:17:58 +00:00
Kpa-clawbot 98e5fe6adf ci: update frontend-tests.json [skip ci] 2026-05-04 04:17:57 +00:00
Kpa-clawbot b40719a21e ci: update frontend-coverage.json [skip ci] 2026-05-04 04:17:56 +00:00
Kpa-clawbot a695110ea4 ci: update e2e-tests.json [skip ci] 2026-05-04 04:17:54 +00:00
Kpa-clawbot 3aaa21bbc0 fix(channel-decrypt): pure-JS SHA-256/HMAC fallback for HTTP context (P0 follow-up to #1021) (#1027)
## P0: PSK channel decryption silently failed on HTTP origins

User reported PSK key `372a9c93260507adcbf36a84bec0f33d` "still doesn't
work" after PRs #1021 (AES-ECB pure-JS) and #1024 (PSK UX) merged.
Reproduced end-to-end and found the actual remaining bug.

### Root cause

PR #1021 fixed the AES-ECB path by vendoring a pure-JS core, but
**SHA-256 and HMAC-SHA256 in `public/channel-decrypt.js` are still
pinned to `crypto.subtle`**. `SubtleCrypto` is exposed **only in secure
contexts** (HTTPS / localhost); when CoreScope is served over plain HTTP
— common for self-hosted instances — `crypto.subtle` is `undefined`,
and:

- `computeChannelHash(key)` → `Cannot read properties of undefined
(reading 'digest')`
- `verifyMAC(...)` → `Cannot read properties of undefined (reading
'importKey')`

Both throws are swallowed by `addUserChannel`'s `try/catch`, so the only
user-visible signal is the toast `"Failed to decrypt"` with no
console-friendly explanation. Verdict: PR #1021 only fixed half of the
crypto-in-insecure-context problem.

### Reproduction (no browser required)

`test-channel-decrypt-insecure-context.js` loads the production
`public/channel-decrypt.js` in a `vm` sandbox where `crypto.subtle` is
undefined (mirrors HTTP browser). Pre-fix it failed 8/8 with the exact
error above; post-fix it passes 8/8.

### Fix

- New `public/vendor/sha256-hmac.js`: minimal pure-JS SHA-256 +
HMAC-SHA256 (FIPS-180-4 + RFC 2104, ~120 LOC, MIT). Verified against
Node `crypto` for SHA-256 (empty / "abc" / 1000 bytes) and RFC 4231
HMAC-SHA256 TC1.
- `public/channel-decrypt.js`: `hasSubtle()` guard. `deriveKey`,
`computeChannelHash`, and `verifyMAC` use `crypto.subtle` when available
and fall back to `window.PureCrypto` otherwise. Same API, same return
types, same async signatures.
- `public/index.html`: load `vendor/sha256-hmac.js` immediately before
`channel-decrypt.js` (mirrors the `vendor/aes-ecb.js` wiring from
#1021).

### TDD

- **Red** (`8075b55`): `test-channel-decrypt-insecure-context.js` — runs
the **unmodified** prod module in a no-`subtle` sandbox, asserts on the
known PSK key (hash byte `0xb7`) and synthetic encrypted packet
round-trip. Compiles, runs, **fails 8/8 on assertions** (not on import
errors).
- **Green** (`232add6`): vendor + delegate. Test passes 8/8.
- Wired into `test-all.sh` and `.github/workflows/deploy.yml` so CI
gates the regression.

### Validation (all green post-fix)

| Test | Result |
|---|---|
| `test-channel-decrypt-insecure-context.js` | 8/8 |
| `test-channel-decrypt-ecb.js` (#1021 KAT) | 7/7 |
| `test-channel-decrypt-m345.js` (existing) | 24/24 |
| `test-channel-psk-ux.js` (#1024) | 19/19 |
| `test-packet-filter.js` | 69/69 |

### Files changed

- `public/vendor/sha256-hmac.js` — **new** (~150 LOC, MIT, decrypt-side
only)
- `public/channel-decrypt.js` — `hasSubtle()` guard + fallback in
`deriveKey`/`computeChannelHash`/`verifyMAC`
- `public/index.html` — script tag for `vendor/sha256-hmac.js`
- `test-channel-decrypt-insecure-context.js` — **new** (8 assertions,
pure Node, no browser)
- `test-all.sh` + `.github/workflows/deploy.yml` — wire the test

### Risk / scope

- Frontend-only, decrypt-side only. No server, schema, or config changes
(Config Documentation Rule N/A).
- Secure-context behaviour unchanged (still uses Web Crypto when
present).
- HMAC `secret` building, MAC truncation (2 bytes), and AES-ECB
delegation untouched.
- Hash vector for the user's PSK key matches:
`SHA-256(372a9c93260507adcbf36a84bec0f33d) = b7ce04…`, channel hash byte
`0xb7` (183) — confirmed against Node `crypto` and against the new
pure-JS path.

### Note on the FIPS test data in the new test

The PSK `372a9c93260507adcbf36a84bec0f33d` is shared test data from the
bug report, not a real channel secret.

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-03 21:06:59 -07:00
Kpa-clawbot 4def3ed7c4 ci: update go-server-coverage.json [skip ci] 2026-05-04 03:19:34 +00:00
Kpa-clawbot cfb4d652a7 ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 03:19:33 +00:00
Kpa-clawbot 9bf4c103d8 ci: update frontend-tests.json [skip ci] 2026-05-04 03:19:32 +00:00
Kpa-clawbot 49857dd748 ci: update frontend-coverage.json [skip ci] 2026-05-04 03:19:31 +00:00
Kpa-clawbot 8815b194d8 ci: update e2e-tests.json [skip ci] 2026-05-04 03:19:30 +00:00
Kpa-clawbot 9f55ef802b fix(#804): attribute analytics by repeater home region, not observer (#1025)
Fixes #804.

## Problem
Analytics filtered region purely by **observer** region: a multi-byte
repeater whose home is PDX would leak into SJC results whenever its
flood
adverts were relayed past an SJC observer. Per-node groupings
(`multiByteNodes`, `distributionByRepeaters`) inherited the same bug.

## Fix

Two new helpers in `cmd/server/store.go`:

- `iataMatchesRegion(iata, regionParam)` — case-insensitive IATA→region
  match using the existing `normalizeRegionCodes` parser.
- `computeNodeHomeRegions()` — derives each node's HOME IATA from its
  zero-hop DIRECT adverts. Path byte for those packets is set locally on
  the originating radio and the packet has not been relayed, so the
  observer that hears it must be in direct RF range. Plurality vote when
  zero-hop adverts span multiple regions.

`computeAnalyticsHashSizes` now applies these in two ways:

1. **Observer-region filter is relaxed for ADVERT packets** when the
   originator's home region matches the requested region. A flood advert
   from a PDX repeater that's only heard by an SJC observer still
   attributes to PDX.
2. **Per-node grouping** (`multiByteNodes`, `distributionByRepeaters`)
   excludes nodes whose HOME region disagrees with the requested region.
   Falls back to the observer-region filter when home is unknown.

Adds `attributionMethod` to the response (`"observer"` or `"repeater"`)
so operators can tell which method was applied.

## Backwards compatibility

- No region filter requested → behavior unchanged (`attributionMethod`
  is `"observer"`).
- Region filter requested but no zero-hop direct adverts seen for a node
  → falls back to the prior observer-region check for that node.
- Operators without IATA-tagged observers see no change.

## TDD

- **Red commit** (`c35d349`): adds
`TestIssue804_AnalyticsAttributesByRepeaterRegion`
with three subtests (PDX leak into SJC, attributionMethod field present,
  SJC leak into PDX). Compiles, runs, fails on assertions.
- **Green commit** (`11b157f`): the implementation. All subtests pass,
  full `cmd/server` package green.

## Files changed
- `cmd/server/store.go` — helpers + analytics filter logic (+236/-51)
- `cmd/server/issue804_repeater_region_test.go` — new test (+147)

---------

Co-authored-by: CoreScope Bot <bot@corescope.local>
Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-03 20:10:02 -07:00
Kpa-clawbot 019ace3645 ci: update go-server-coverage.json [skip ci] 2026-05-04 02:59:47 +00:00
Kpa-clawbot c5139f5de5 ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 02:59:46 +00:00
Kpa-clawbot 0add429d24 ci: update frontend-tests.json [skip ci] 2026-05-04 02:59:45 +00:00
Kpa-clawbot c8b29d0482 ci: update frontend-coverage.json [skip ci] 2026-05-04 02:59:44 +00:00
Kpa-clawbot 9c5e13d133 ci: update e2e-tests.json [skip ci] 2026-05-04 02:59:42 +00:00
Kpa-clawbot 1f4969c1a6 fix(#770): treat region 'All' as no-filter + document region behavior (#1026)
## Summary

Fixes #770 — selecting "All" in the region filter dropdown produced an
empty channel list.

## Root cause

`normalizeRegionCodes` (cmd/server/db.go) treated any non-empty input as
a literal IATA code. The frontend region filter labels its catch-all
option **"All"**; while `region-filter.js` normally sends an empty
string when "All" is selected, any code path that ends up sending
`?region=All` (deep-link URLs, manual queries, future callers) caused
the function to return `["ALL"]`. Downstream queries then filtered
observers for `iata = 'ALL'`, which never matches anything → empty
response.

## Fix

`normalizeRegionCodes` now treats `All` / `ALL` / `all`
(case-insensitive, with optional whitespace, mixed in CSV) as equivalent
to an empty value, returning `nil` to signal "no filter". Real IATA
codes (`SJC`, `PDX`, `sjc,PDX` → `[SJC PDX]`) still pass through
unchanged.

This is a defensive server-side fix: a single chokepoint that all
region-aware endpoints already flow through (channels, packets,
analytics, encrypted channels, observer ID resolution).

## Documentation

Expanded `_comment_regions` in `config.example.json` to explain:
- How IATA codes are resolved (payload > topic > source config — set in
#1012)
- What the `regions` map controls (display labels) vs runtime-discovered
codes
- That observers without an IATA tag only appear under "All Regions"
- That the `All` sentinel is server-side safe

## TDD

- **Red commit** (`4f65bf4`): `cmd/server/region_filter_test.go` —
`TestNormalizeRegionCodes_AllIsNoFilter` asserts `All` / `ALL` / `all` /
`""` / `"All,"` all collapse to `nil`. Compiles, runs, fails on
assertion (`got [ALL], want nil`). Companion test
`TestNormalizeRegionCodes_RealCodesPreserved` locks in that `sjc,PDX`
still returns `[SJC PDX]`.
- **Green commit** (`c9fb965`): two-line change in
`normalizeRegionCodes` + docs update.

## Verification

```
$ go test -run TestNormalizeRegionCodes -count=1 ./cmd/server
ok      github.com/corescope/server     0.023s

$ go test -count=1 ./cmd/server
ok      github.com/corescope/server    21.454s
```

Full suite green; no existing region tests regressed.

Fixes #770

---------

Co-authored-by: Kpa-clawbot <bot@corescope>
2026-05-03 19:50:01 -07:00
Kpa-clawbot 38ae1c92de ci: update go-server-coverage.json [skip ci] 2026-05-04 01:48:16 +00:00
Kpa-clawbot ac881e4f4a ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 01:48:15 +00:00
Kpa-clawbot 7e15022d2d ci: update frontend-tests.json [skip ci] 2026-05-04 01:48:14 +00:00
Kpa-clawbot b3dba21460 ci: update frontend-coverage.json [skip ci] 2026-05-04 01:48:13 +00:00
Kpa-clawbot aabc892272 ci: update e2e-tests.json [skip ci] 2026-05-04 01:48:12 +00:00
Kpa-clawbot a1f4cb9b5d fix(channels): PSK channel UX — delete, label, badge, toast (#1020) (#1024)
## Problem

The PSK channel decrypt UX was unusable (#1020):

1. ✕ button only appeared when a `userAdded` flag happened to be set,
which wasn't reliable for keys matching server-known hashes.
2. PSK channels visually indistinguishable from server-known encrypted
channels — both rendered with 🔒.
3. No way to give a PSK channel a friendly name; sidebar always showed
`psk:<hex8>`.
4. "Decrypt count" toast was scraped from `#chMessages .ch-msg` after a
race, so it often reported zero or stale numbers.

## Changes

### `public/channel-decrypt.js`
- **New API**: `saveLabel(name, label)`, `getLabel(name)`,
`getLabels()`.
- `storeKey(name, hex, label?)` — third optional `label` argument
persists alongside the key under a separate `corescope_channel_labels`
localStorage namespace.
- `removeKey` now also clears the stored label.

### `public/channels.js`
- Add-channel form gets a second row with `#chKeyLabelInput` ("optional
name (e.g. My Crew)").
- `addUserChannel(val, label)` — passes the label through to `storeKey`.
- `mergeUserChannels()` reads `getLabels()` and propagates `userLabel`
onto channel objects (both new ones and ones that match an existing
server-known hash).
- `renderChannelList()` distinguishes user-added rows:
  - `.ch-user-added` class + `data-user-added="true"` attribute.
- 🔓 badge icon (vs 🔒 for server-known no-key) and a 🔑 marker next to the
name.
  - Display name uses the user-supplied label when present.
- ✕ remove button is now keyed off `userAdded` (which
`mergeUserChannels` always sets for stored keys).
- `selectChannel` now returns `{ messageCount, wrongKey?, error?, stale?
}`. `addUserChannel` uses that for the toast instead of scraping the
DOM, and surfaces `wrongKey` explicitly: "Key does not match any packets
for …".

## Acceptance criteria

- [x] ✕ (delete) button on all user-added PSK channels in sidebar
- [x] Clicking ✕ removes key + label + cache from localStorage and
removes from sidebar
- [x] Visual badge/icon distinguishing "my keys" (🔓 + 🔑 +
`.ch-user-added`) from "unknown encrypted" (🔒 + `.ch-encrypted`)
- [x] Optional name field in the add-channel form (`#chKeyLabelInput`),
stored alongside key in localStorage
- [x] Name displayed in sidebar instead of `psk:<hex>`
- [x] Toast shows decrypt result count after adding (and reports
`wrongKey` explicitly)

## Tests

`test-channel-psk-ux.js` (added to `test-all.sh`) — 19 assertions:

- ChannelDecrypt label storage + retrieval + `removeKey` cascade.
- E2E DOM contract for `channels.js`: `#chKeyLabelInput`,
`.ch-user-added`, 🔓 icon, `addUserChannel` accepts label, no DOM
scraping for decrypt count.
- End-to-end `mergeUserChannels` label propagation through a
sandbox-loaded `ChannelDecrypt`.

Red commit (`da6d477`) failed 8/15 assertions; green commit (`542bb1d`)
— all 19 pass. Existing channel tests still green:

```
node test-channel-decrypt-ecb.js   → 7/7
node test-channel-decrypt-m345.js  → 24/24
node test-channel-psk-ux.js        → 19/19
```

(The pre-existing `test-frontend-helpers.js` failure on `nodes.js`
`loadNodes` reproduces on `origin/master` — unrelated.)

## Notes

- Decrypt logic untouched (PR #1021 already fixed it).
- No config fields added.
- Keys + labels stay in the user's browser; nothing transmitted.

Fixes #1020

---------

Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-03 18:38:18 -07:00
Kpa-clawbot 01a687e912 ci: update go-server-coverage.json [skip ci] 2026-05-04 01:06:39 +00:00
Kpa-clawbot 8652ddc7c0 ci: update go-ingestor-coverage.json [skip ci] 2026-05-04 01:06:38 +00:00
Kpa-clawbot 739bb67fc9 ci: update frontend-tests.json [skip ci] 2026-05-04 01:06:37 +00:00
Kpa-clawbot 2363a988dc ci: update frontend-coverage.json [skip ci] 2026-05-04 01:06:35 +00:00
Kpa-clawbot b6b25390e8 ci: update e2e-tests.json [skip ci] 2026-05-04 01:06:34 +00:00
Kpa-clawbot b06adf9f2a feat: /api/backup — one-click SQLite database export (#474) (#1022)
## Summary

Implements `GET /api/backup` — one-click SQLite database export per
#474.

Operators can now grab a complete, consistent snapshot of the analyzer
DB with a single authenticated request — no SSH, no scripts, no DB
tooling.

## Endpoint

```
GET /api/backup
X-API-Key: <key>            # required
→ 200 OK
  Content-Type: application/octet-stream
  Content-Disposition: attachment; filename="corescope-backup-<unix>.db"
  <body: complete SQLite database file>
```

## Approach

Uses SQLite's `VACUUM INTO 'path'` to produce an atomic, defragmented
copy of the database into a fresh file:

- **Consistent**: VACUUM INTO runs at read isolation — the snapshot
reflects a single point in time even while the ingestor is writing to
the WAL.
- **Non-blocking**: writers continue uninterrupted; we never hold a
write lock.
- **Works on read-only connections**: verified manually against a
WAL-mode source DB (`mode=ro` connection successfully produces a
snapshot).
- **No corruption risk**: even if the live on-disk DB has issues, VACUUM
INTO surfaces what the server can read rather than copying broken pages
byte-for-byte.

The snapshot is staged in `os.MkdirTemp(...)` and removed after the
response body is fully streamed (deferred cleanup). Requesting client IP
is logged for audit.

The issue suggested an alternative in-memory rebuild path; `VACUUM INTO`
is simpler, faster, and produces a strictly more accurate copy of what
the server actually sees, so going with it.

## Security

- Mounted under `requireAPIKey` middleware — same gate as other admin
endpoints (`/api/admin/prune`, `/api/perf/reset`).
- Returns 401 without a valid `X-API-Key` header.
- Returns 403 if no API key is configured server-side.
- `X-Content-Type-Options: nosniff` set on the response.

## TDD

- **Red** (`99548f2`): `cmd/server/backup_test.go` adds
`TestBackupRequiresAPIKey` + `TestBackupReturnsValidSQLiteSnapshot`.
Stub handler returns 200 with no body so the tests fail on assertions
(Content-Type / Content-Disposition / SQLite magic header), not on
import or build errors.
- **Green** (`837b2fe`): real implementation lands; both tests pass;
full `go test ./...` suite stays green.

## Files

- `cmd/server/backup.go` — handler implementation
- `cmd/server/backup_test.go` — red-then-green tests
- `cmd/server/routes.go` — route registration under `requireAPIKey`
- `cmd/server/openapi.go` — OpenAPI metadata so `/api/openapi`
advertises the endpoint

## Out of scope (follow-ups)

- Rate limiting (issue suggested 1 req/min). Not added here —
admin-key-gated endpoint with a fast snapshot path is acceptable for v1;
happy to add a token-bucket limiter in a follow-up if operators report
hammering.
- UI button to trigger the download (frontend work — separate PR).

Fixes #474

---------

Co-authored-by: corescope-bot <bot@corescope.local>
2026-05-03 17:56:42 -07:00
Kpa-clawbot 51b9fed15e feat(roles): /#/roles page + /api/analytics/roles endpoint (Fixes #818) (#1023)
## Summary

Implements `/#/roles` per QA #809 §5.4 / issue #818. The page previously
showed "Page not yet implemented."

### Backend
- New `GET /api/analytics/roles` returns `{ totalNodes, roles: [{ role,
nodeCount, withSkew, meanAbsSkewSec, medianAbsSkewSec, okCount,
warningCount, criticalCount, absurdCount, noClockCount }] }`.
- Pure `computeRoleAnalytics(nodesByPubkey, skewByPubkey)` does the
bucketing/aggregation — no store/lock dependency, fully unit-testable.
- Roles are normalised (lowercased + trimmed; empty bucketed as
`unknown`).

### Frontend
- New `public/roles-page.js` renders a distribution table: count, share,
distribution bar, w/ skew, median |skew|, mean |skew|, severity
breakdown (OK / Warning / Critical / Absurd / No-clock).
- Registered as the `roles` page in the SPA router and linked from the
main nav.
- Auto-refreshes every 60 s, with a manual refresh button.

### Tests (TDD)
- **Red commit** (`9726d5b`): two assertion-failing tests against a stub
`computeRoleAnalytics` that returns an empty result. Compiles, runs,
fails on `TotalNodes = 0, want 5` and `len(Roles) = 0, want 1`.
- **Green commit** (`7efb76a`): full implementation, route wiring,
frontend page + nav, plus E2E test in `test-e2e-playwright.js` covering
both the empty-state contract (no "Page not yet implemented"
placeholder) and the populated-table case (header columns, body rows,
API response shape).

### Verification
- `go test ./cmd/server/...` green.
- Local server with the e2e fixture: `GET /api/analytics/roles` returns
`{"totalNodes":200,"roles":[{"role":"repeater","nodeCount":168,...},{"role":"room","nodeCount":23,...},{"role":"companion","nodeCount":9,...}]}`.

Fixes #818

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-03 17:56:12 -07:00
Kpa-clawbot cb21305dc4 fix(channel-decrypt): replace AES-CBC ECB hack with pure-JS AES-128 ECB (P0) (#1021)
## P0: channel decryption broken on prod (`OperationError` in
`decryptECB`)

### Symptom
```
Uncaught (in promise) OperationError
    at decryptECB (channel-decrypt.js:89)
    at async Object.decrypt (channel-decrypt.js:181)
    at async decryptCandidates (channels.js:568)
```
Channel message decryption fails for most ciphertext blocks in the
browser console on `analyzer.00id.net`.

### Root cause
The original `decryptECB()` simulated AES-128-ECB via Web Crypto AES-CBC
with a zero IV plus an appended dummy PKCS7 padding block (16 × `0x10`).
Web Crypto **always** validates PKCS7 padding on the decrypted output,
and after CBC-decrypting the dummy padding block it almost never
produces a valid PKCS7 sequence, so Chrome/Firefox throw
`OperationError`. There is no Web Crypto knob to disable that check —
and Web Crypto doesn't expose raw ECB at all.

This is a well-known dead end: every project that needs ECB in browsers
ends up with a small pure-JS AES core.

### Fix
- Vendor a minimal pure-JS **AES-128 ECB decrypt-only** core into
`public/vendor/aes-ecb.js`.
- **Source:** [aes-js](https://github.com/ricmoo/aes-js) by Richard
Moore — MIT License (cited in the header comment).
- **Trimmed to:** S-boxes, key expansion (FIPS-197 §5.2), inverse cipher
(FIPS-197 §5.3). No encrypt path. No other modes. No padding logic. ~150
lines.
- `decryptECB(key, ciphertext)` keeps the same API surface:
`Promise<Uint8Array | null>`. It now delegates to
`window.AES_ECB.decrypt(...)`.
- `verifyMAC` and `computeChannelHash` keep using Web Crypto
(HMAC-SHA256 / SHA-256 — no padding pathology).
- Wired `vendor/aes-ecb.js` into `public/index.html` immediately before
`channel-decrypt.js`.

### TDD
- **Red commit (`36f6882`)** — adds `test-channel-decrypt-ecb.js` pinned
to the **FIPS-197 Appendix C.1** AES-128 known-answer vector. Compiles,
runs, and fails on assertion (`OperationError`) against the existing
implementation.
- **Green commit (`bbbd2d1`)** — vendors the pure-JS AES core and
rewires `decryptECB`. Test now passes (7/7), including a multi-block
assertion that two identical ciphertext blocks decrypt to two identical
plaintext blocks (true ECB, no chaining).
- Existing `test-channel-decrypt-m345.js` still passes (24/24).

### Files changed
- `public/vendor/aes-ecb.js` — **new** (vendored AES-128 ECB decrypt,
MIT, ~150 LOC)
- `public/channel-decrypt.js` — `decryptECB()` rewritten to delegate to
vendor
- `public/index.html` — script tag added for `vendor/aes-ecb.js`
- `test-channel-decrypt-ecb.js` — **new** TDD test (FIPS-197 KAT +
multi-block + edge cases)

### Risk / scope
- Decrypt-only, client-side, no server changes, no schema changes, no
config changes (Config Documentation Rule N/A).
- ECB is a single 16-byte block per packet for MeshCore channel traffic,
so the perf delta vs Web Crypto is negligible (a single `decryptBlock`
is ~10 round transforms on 16 bytes).
- HTTP-context safe (no Web Crypto required for ECB anymore).

### Validation
- All 7 FIPS-197 KAT + multi-block tests pass.
- Existing channel-decrypt M3/M4/M5 tests still pass (24/24).
- `test-packet-filter.js` (62/62), `test-aging.js` (18/18) unaffected.
- `test-frontend-helpers.js` has a pre-existing failure on master
unrelated to this PR (verified by stashing the patch).

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-04 00:46:24 +00:00
Kpa-clawbot a56ee5c4fe feat(analytics): selectable timeframes via ?window/?from/?to (#842) (#1018)
## Summary
Selectable analytics timeframes (#842). Adds backend support for
`?window=1h|24h|7d|30d` and `?from=&to=` on the three main analytics
endpoints (`/api/analytics/rf`, `/api/analytics/topology`,
`/api/analytics/channels`), and a time-window picker in the Analytics
page UI that drives them. Default behavior with no query params is
unchanged.

## TDD trail
- Red: `bbab04d` — adds `TimeWindow` + `ParseTimeWindow` stub and tests;
tests fail on assertions because the stub returns the zero window.
- Green: `75d27f9` — implements `ParseTimeWindow`, threads `TimeWindow`
through `compute*` loops + caches, wires HTTP handlers, adds frontend
picker + E2E.

## Backend changes
- `cmd/server/time_window.go` — full `ParseTimeWindow` (`?window=`
aliases + `?from=/&to=` RFC3339 absolute range; invalid input → zero
window for backwards compatibility).
- `cmd/server/store.go` — new
`GetAnalytics{RF,Topology,Channels}WithWindow` wrappers; `compute*`
loops skip transmissions whose `FirstSeen` (or per-obs `Timestamp` for
the region+observer slice) falls outside the window. Cache key composes
`region|window` so different windows do not poison each other.
- `cmd/server/routes.go` — handlers call `ParseTimeWindow(r)` and
dispatch to the `*WithWindow` methods.

## Frontend changes
- `public/analytics.js` — new `<select id="analyticsTimeWindow">`
rendered under the region filter (All / 1h / 24h / 7d / 30d). Selecting
an option triggers `loadAnalytics()` which appends `&window=…` to every
analytics fetch.

## Tests
- `cmd/server/time_window_test.go` — covers all aliases, absolute range,
no-params backwards compatibility, `Includes()` bounds, and `CacheKey()`
distinctness.
- `cmd/server/topology_dedup_test.go`,
`cmd/server/channel_analytics_test.go` — updated callers to pass
`TimeWindow{}`.

## E2E (rule 18)
`test-e2e-playwright.js:592-611` — opens `/#/analytics`, asserts the
picker is rendered with a `24h` option, then asserts that selecting
`24h` triggers a network request to `/api/analytics/rf?…window=24h`.

## Backwards compatibility
No params → zero `TimeWindow` → original code paths (no filter,
region-only cache key). Verified by
`TestParseTimeWindow_NoParams_BackwardsCompatible` and by the existing
analytics tests still passing unchanged on `_wt-fix-842`.

Fixes #842

---------

Co-authored-by: you <you@example.com>
Co-authored-by: corescope-bot <bot@corescope>
2026-05-03 17:41:22 -07:00
Kpa-clawbot df69a17718 feat(#772): short pubkey-prefix URLs for mesh sharing (#1016)
## Summary

Fixes #772 — adds a short-URL form for node detail pages so operators
can paste node links into a mesh chat without bringing along a
64-hex-char public key.

## Approach

**Pubkey-prefix resolution** (no allocator, no lookup table).

- The SPA hash route `#/nodes/<key>` already accepts whatever
pubkey-shaped string the user pastes; the front end forwards it to `GET
/api/nodes/<key>`.
- When that lookup misses **and** the path is 8..63 hex chars, the
backend now calls `DB.GetNodeByPrefix` and:
  - returns the matching node when exactly one node has that prefix,
- returns **409 Conflict** when multiple nodes share the prefix (with a
"use a longer prefix" hint),
  - falls through to the existing 404 otherwise.
- 8 hex chars = 32 bits of entropy, which is enough for fleets in the
low thousands. Operators can extend to 10–12 chars if collisions become
common.
- The full-screen node detail card gets a new **📡 Copy short URL**
button that copies `…/#/nodes/<first 8 hex chars>`.

### Why not an opaque ID table (`/s/<id>`)?

Considered and rejected:

- Needs persistence + an allocator + cleanup story.
- IDs aren't self-describing — operators can't sanity-check them.
- IDs don't survive a DB rebuild.
- 32 bits of pubkey already buys us collision resistance with zero
moving parts.

If the directory grows past the point where 8-char prefixes routinely
collide, we can extend the minimum length without changing the URL
shape.

## Changes

- `cmd/server/db.go` — new `GetNodeByPrefix(prefix)` returning `(node,
ambiguous, error)`. Validates hex; rejects <8 chars; `LIMIT 2` to detect
collisions cheaply.
- `cmd/server/routes.go` — `handleNodeDetail` falls back to prefix
resolution; canonicalizes pubkey downstream; emits 409 on ambiguity;
honors blacklist on the resolved pubkey.
- `public/nodes.js` — adds **📡 Copy short URL** button + handler on the
full-screen node detail card.
- `cmd/server/short_url_test.go` — Go tests (red-then-green).
- `test-e2e-playwright.js` — E2E: navigates via prefix-only URL and
asserts the new button surfaces.

## TDD evidence

- Red commit: `2dea97a` — tests added with a stub `GetNodeByPrefix`
returning `(nil, false, nil)`. All four assertions failed (assertion
failures, not build errors): expected node got nil; expected
ambiguous=true got false; route 404 vs expected 200/409.
- Green commit: `9b8f146` — implementation lands; `go test ./...` passes
locally in `cmd/server`.

## Compatibility

- Existing 64-char pubkey URLs are untouched (exact lookup runs first).
- Blacklist is enforced both on the raw input and on the resolved
pubkey.
- No new config knobs.

## What I did **not** touch

- `cmd/server/db_test.go`, other route tests — unchanged.
- Packet-detail short URLs (issue scopes nodes; revisit in a follow-up
if asked).

Fixes #772

---------

Co-authored-by: clawbot <bot@corescope.local>
2026-05-03 17:40:54 -07:00
Kpa-clawbot f229e15869 feat(packet-filter): transport boolean + T_FLOOD/T_DIRECT route aliases (#339) (#1014)
## Summary

Adds Wireshark-style filter support for transport route type to the
packets-page filter engine, per #339.

## New filter syntax

| Filter | Matches |
|---|---|
| `transport == true` | route_type 0 (TRANSPORT_FLOOD) or 3
(TRANSPORT_DIRECT) |
| `transport == false` | route_type 1 (FLOOD) or 2 (DIRECT) |
| `transport` | bare truthy — same as `transport == true` |
| `route == T_FLOOD` | alias for `route == TRANSPORT_FLOOD` |
| `route == T_DIRECT` | alias for `route == TRANSPORT_DIRECT` |
| `route == TRANSPORT_FLOOD` / `TRANSPORT_DIRECT` | already worked —
canonical names |

Aliases are case-insensitive (`route == t_flood` works).

## Implementation

- `public/packet-filter.js`: new `transport` virtual boolean field
driven by `isTransportRouteType(rt)` which returns `rt === 0 || rt ===
3`, mirroring `isTransportRoute()` in `cmd/server/decoder.go`.
- `ROUTE_ALIASES = { t_flood: 'TRANSPORT_FLOOD', t_direct:
'TRANSPORT_DIRECT' }` resolved in the equality comparator, same pattern
as the existing `TYPE_ALIASES`.
- All client-side; no backend changes (issue noted this).

## Tests / TDD

Red commit: `9d8fdf0` — five new assertion-failing test cases + wires
`test-packet-filter.js` into CI (it existed but wasn't being executed).
Green commit: `c67612b` — implementation makes all 69 tests pass.

The CI wiring is part of the red commit on purpose: previously
`test-packet-filter.js` was never run by CI, so a frontend filter
regression couldn't fail the build. Now it can.

## CI gating proof

Run `git revert c67612b` locally → `node test-packet-filter.js` reports
5 assertion failures (not build/import errors). Re-applying the green
commit returns all tests to passing.

Fixes #339

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-03 17:40:12 -07:00
Kpa-clawbot 912cd52a59 ci: update go-server-coverage.json [skip ci] 2026-05-03 18:52:57 +00:00
Kpa-clawbot 51c5842c10 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 18:52:57 +00:00
Kpa-clawbot b9c967be18 ci: update frontend-tests.json [skip ci] 2026-05-03 18:52:56 +00:00
Kpa-clawbot a45b921e09 ci: update frontend-coverage.json [skip ci] 2026-05-03 18:52:55 +00:00
Kpa-clawbot 7b11497cd8 ci: update e2e-tests.json [skip ci] 2026-05-03 18:52:54 +00:00
Kpa-clawbot d3920f66e9 fix(test): correct leaflet-container selector in geofilter E2E (#1017)
## Summary
Fixes the `Geofilter draft: save → reload → load → download round-trip`
Playwright E2E test that was failing on master with a 10s
`waitForFunction` timeout.

## Root cause
`test-e2e-playwright.js:2270` used the descendant combinator `'#map
.leaflet-container'`, expecting a child element. Leaflet's
`L.map('map')` adds the `leaflet-container` class **directly to the
`#map` element itself**, so the descendant query never matched and the
wait hung until timeout.

## Fix
Single-character edit: drop the space between `#map` and
`.leaflet-container` so the selector matches the same element
(`#map.leaflet-container`).

```diff
-await page.waitForFunction(() => window.L && document.querySelector('#map .leaflet-container'), { timeout: 10000 });
+await page.waitForFunction(() => window.L && document.querySelector('#map.leaflet-container'), { timeout: 10000 });
```

The working `Map page loads with markers` test at line 289 already uses
the bare `.leaflet-container` selector, confirming the convention.

## TDD exemption
**Test-fix exemption (per AGENTS.md TDD rules):** this PR fixes an
existing failing test assertion with no production behavior change. The
"red" state is current master (test currently times out in CI run
25287101810). No production code is touched; the geofilter feature
itself works (Leaflet initializes correctly — the test just never
observed it due to the broken selector). Going forward, the test
continues to gate the geofilter draft round-trip behavior.

## Verification
- CI Playwright E2E job should now reach past line 2270 and exercise the
geofilter buttons (`#btnSaveDraft`, `#btnLoadDraft`, `#btnDownload`).
- No other tests modified.

Co-authored-by: you <you@example.com>
2026-05-03 11:43:12 -07:00
Kpa-clawbot 5e01de0d52 fix: make path_json backfill async to unblock MQTT startup (#1013)
## Summary

**P0 fix**: The `path_json` backfill migration (PR #983) ran
synchronously in `applySchema`, blocking the ingestor main goroutine. On
staging (~502K observations), MQTT never connected — no new packets
ingested for 15+ hours.

## Fix

Extract the backfill into `BackfillPathJSONAsync()` — a method on
`*Store` that launches the work in a background goroutine. Called from
`main.go` before MQTT connect, it runs concurrently without blocking
subscription.

**Pattern**: identical to `backfillResolvedPathsAsync` in the server
(same lesson learned).

## Safety

- Idempotent: checks `_migrations` table, skips if already recorded
- Only touches `path_json IS NULL` rows — no conflict with live ingest
(new observations get `path_json` at write time)
- Panic-recovered goroutine with start/completion logging
- Batched (1000 rows per iteration) to avoid memory pressure

## TDD

- **Red commit**: `c6e1375` — test asserts `BackfillPathJSONAsync`
method exists + OpenStore doesn't block
- **Green commit**: `015871f` — implements async method, all tests pass

## Files changed

- `cmd/ingestor/db.go` — removed sync backfill from `applySchema`, added
`BackfillPathJSONAsync()`
- `cmd/ingestor/main.go` — call `store.BackfillPathJSONAsync()` after
store creation
- `cmd/ingestor/db_test.go` — new async tests + updated existing test to
use async API

---------

Co-authored-by: you <you@example.com>
2026-05-03 11:29:56 -07:00
Kpa-clawbot 4d043579f8 feat: geofilter draft save (localStorage) + downloadable config snippet (#1006)
## Issue

Closes #819

## Summary

Adds Save Draft / Load Draft / Download buttons to
`/geofilter-builder.html` so operators can:
- Persist their work-in-progress polygon across sessions (localStorage)
- Reload it later to continue editing
- Download a ready-to-paste `geo_filter` JSON snippet for `config.json`

## Implementation

- New module `public/geofilter-draft.js` exposes `GeofilterDraft` global
with `saveDraft / loadDraft / clearDraft / buildConfigSnippet /
downloadConfig`.
- Builder HTML wires three new buttons; updates the help text to
document the new flow.

## TDD

- Red commit: `b0a1a4c` (tests fail — module doesn't exist)
- Green commit: `a717f33` (implementation added, all tests pass)

## How to test

1. Open `/geofilter-builder.html`
2. Click 3+ points on the map
3. Click "Save Draft" — reload page — click "Load Draft" → polygon
restored
4. Click "Download" → `geofilter-config-snippet.json` downloaded with
correct format

---

E2E assertion added: test-e2e-playwright.js:2264

---------

Co-authored-by: you <you@example.com>
Co-authored-by: openclaw-bot <bot@openclaw.local>
2026-05-03 18:24:08 +00:00
Kpa-clawbot b0e4d2fa18 feat: add optional MQTT region field (#788) (#1012)
## Summary

Add optional `region` field to MQTT source config and JSON payload,
enabling publishers to explicitly provide region data without relying
solely on topic path structure.

## Changes

- **`MQTTSource.Region`** — new optional config field. When set, acts as
default region for all messages from that source (useful when a broker
serves a single region).
- **`MQTTPacketMessage.Region`** — new optional JSON payload field.
Publishers can include `"region": "PDX"` in their MQTT messages.
- **`PacketData.Region`** — carries the resolved region through to
storage.
- **Priority resolution**: payload `region` > topic-derived region >
source config `region`
- Observer IATA is updated with the effective region on every packet.

## Config example

```json
{
  "mqttSources": [
    {
      "name": "cascadia",
      "broker": "tcp://cascadia-broker:1883",
      "topics": ["meshcore/#"],
      "region": "PDX"
    }
  ]
}
```

## Payload example

```json
{"raw": "0a1b2c...", "SNR": 5.2, "region": "PDX"}
```

## TDD

- Red commit: `980304c` (tests fail at compile — fields don't exist)
- Green commit: `4caf88b` (implementation, all tests pass)

## Unblocks

- #804, #770, #730 (all depend on region being available on
observations)

Fixes #788

---------

Co-authored-by: you <you@example.com>
2026-05-03 11:21:54 -07:00
Kpa-clawbot c186129d47 feat: parse and display per-hop SNR values for TRACE packets (#1007)
## Summary

Parse and display per-hop SNR values from TRACE packets in the Packet
Byte Breakdown panel.

## Changes

### Backend (`cmd/server/decoder.go`)
- Added `SNRValues []float64` field to Payload struct
(`json:"snrValues,omitempty"`)
- In the TRACE-specific block, extract SNR from header path bytes before
they're overwritten with route hops
- Each header path byte is `int8(SNR_dB * 4.0)` per firmware — decode by
dividing by 4.0

### Frontend (`public/packets.js`)
- Added "SNR Path" section in `buildFieldTable()` showing per-hop SNR
values in dB when packet type is TRACE
- Added TRACE-specific payload rendering (trace tag, auth code, flags
with hash_size, route hops)

## TDD

- Red commit: `4dba4e8` — test asserts `Payload.SNRValues` field
(compile fails, field doesn't exist)
- Green commit: `5a496bd` — implementation passes all tests

## Testing

- `go test ./...` passes (all existing + 2 new TRACE SNR tests)
- No frontend test changes needed (no existing TRACE UI tests; rendering
is additive)

Fixes #979

---------

Co-authored-by: you <you@example.com>
2026-05-03 11:17:25 -07:00
Kpa-clawbot 43cb0d2ea6 ci: update go-server-coverage.json [skip ci] 2026-05-03 17:33:33 +00:00
Kpa-clawbot f282323cc6 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 17:33:32 +00:00
Kpa-clawbot aba3e05d1b ci: update frontend-tests.json [skip ci] 2026-05-03 17:33:32 +00:00
Kpa-clawbot ce2ed99e41 ci: update frontend-coverage.json [skip ci] 2026-05-03 17:33:31 +00:00
Kpa-clawbot 935e40b26c ci: update e2e-tests.json [skip ci] 2026-05-03 17:33:30 +00:00
Kpa-clawbot 153308134e feat: add global observer IATA whitelist config (#1001)
## Summary

Adds a global `observerIATAWhitelist` config field that restricts which
observer IATA regions are processed by the ingestor.

## Problem

Operators running regional instances (e.g., Sweden) want to ensure only
observers physically in their region contribute data. The existing
per-source `iataFilter` only filters packet messages but still allows
status messages through, meaning observers from other regions appear in
the database.

## Solution

New top-level config field `observerIATAWhitelist`:
- When non-empty, **all** messages (status + packets) from observers
outside the whitelist are silently dropped
- Case-insensitive matching
- Empty list = all regions allowed (fully backwards compatible)
- Lazy O(1) lookup via cached uppercase set (same pattern as
`observerBlacklist`)

### Config example
```json
{
  "observerIATAWhitelist": ["ARN", "GOT"]
}
```

## TDD

- **Red commit:** `f19c2b2` — tests for `ObserverIATAWhitelist` field
and `IsObserverIATAAllowed` method (build fails)
- **Green commit:** `782f516` — implementation + integration test

## Files changed
- `cmd/ingestor/config.go` — new field, new method
`IsObserverIATAAllowed`
- `cmd/ingestor/main.go` — whitelist check in `handleMessage` before
status processing
- `cmd/ingestor/config_test.go` — unit tests for config parsing and
matching
- `cmd/ingestor/main_test.go` — integration test for handleMessage
filtering

Fixes #914

---------

Co-authored-by: you <you@example.com>
2026-05-03 10:23:35 -07:00
Kpa-clawbot a500d6d506 ci: update go-server-coverage.json [skip ci] 2026-05-03 16:08:37 +00:00
Kpa-clawbot e7c15818c9 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 16:08:36 +00:00
Kpa-clawbot f3f9ef5353 ci: update frontend-tests.json [skip ci] 2026-05-03 16:08:35 +00:00
Kpa-clawbot e4422efa5c ci: update frontend-coverage.json [skip ci] 2026-05-03 16:08:34 +00:00
Kpa-clawbot c5460d37dd ci: update e2e-tests.json [skip ci] 2026-05-03 16:08:34 +00:00
Kpa-clawbot 23d1e8d328 feat: add flood/direct packet filter to observer comparison page (#1000)
## Summary

Adds a **Flood / Direct packet filter** dropdown to the observer
comparison page. This addresses the issue that direct packets (heard by
only one observer) skew the comparison percentages.

## Changes

- **`public/compare.js`**: Added `filterPacketsByRoute(packets, mode)`
function and a "Packet Type" dropdown (All / Flood only / Direct only)
to the comparison controls. Changing the filter re-runs the comparison
with filtered packets.
- **`test-compare-flood-filter.js`**: Unit tests for the filter
function.

## Route type mapping (from firmware)

| Route Type | Value | Filter |
|---|---|---|
| TransportFlood | 0 | Flood |
| Flood | 1 | Flood |
| Direct | 2 | Direct |
| TransportDirect | 3 | Direct |

## TDD

- Red commit: `484fa72` (test only, fails)
- Green commit: `5661f71` (implementation, tests pass)

Fixes #928

---------

Co-authored-by: you <you@example.com>
2026-05-03 08:58:25 -07:00
Kpa-clawbot 1ca665efde docs: document removal of 15 prefix helper tests (fixes #437) (#999)
## Summary

Documents the removal of 15 prefix helper tests
(`buildOneBytePrefixMap`, `buildTwoBytePrefixInfo`,
`buildCollisionHops`) from `test-frontend-helpers.js`.

These functions were moved server-side in PR #415. The equivalent logic
is now covered by Go tests:
- `cmd/server/collision_details_test.go` — collision prefix + node-pair
assertions
- `cmd/server/store_test.go` — hash-collision endpoint integration

Adds a documentation comment in the test file where the tests previously
lived, explaining the rationale and pointing to the Go test equivalents.

Fixes #437

---------

Co-authored-by: you <you@example.com>
2026-05-03 08:56:46 -07:00
Kpa-clawbot e86b5a3a0c feat: show multi-byte hash support indicator on map markers (#1002)
## Summary

Show 2-byte hash support indicator on map markers. Fixes #903.

## What changed

### Backend (`cmd/server/store.go`, `cmd/server/routes.go`)

- **`EnrichNodeWithMultiByte()`** — new enrichment function that adds
`multi_byte_status` (confirmed/suspected/unknown), `multi_byte_evidence`
(advert/path), and `multi_byte_max_hash_size` fields to node API
responses
- **`GetMultiByteCapMap()`** — cached (15s TTL) map of pubkey →
`MultiByteCapEntry`, reusing the existing `computeMultiByteCapability()`
logic that combines advert-based and path-hop-based evidence
- Wired into both `/api/nodes` (list) and `/api/nodes/{pubkey}` (detail)
endpoints

### Frontend (`public/map.js`)

- Added **"Multi-byte support"** checkbox in the map Display controls
section
- When toggled on, repeater markers change color:
  - 🟢 Green (`#27ae60`) — **confirmed** (advertised with hash_size ≥ 2)
- 🟡 Yellow (`#f39c12`) — **suspected** (seen as hop in multi-byte path)
  - 🔴 Red (`#e74c3c`) — **unknown** (no multi-byte evidence)
- Popup tooltip shows multi-byte status and evidence for repeaters
- State persisted in localStorage (`meshcore-map-multibyte-overlay`)

## TDD

- Red commit: `2f49cbc` — failing test for `EnrichNodeWithMultiByte`
- Green commit: `4957782` — implementation + passing tests

## Performance

- `GetMultiByteCapMap()` uses a 15s TTL cache (same pattern as
`GetNodeHashSizeInfo`)
- Enrichment is O(n) over nodes, no per-item API calls
- Frontend color override is computed inline during existing marker
render loop — no additional DOM rebuilds

---------

Co-authored-by: you <you@example.com>
2026-05-03 08:56:09 -07:00
Kpa-clawbot ed8d7d68bd ci: update go-server-coverage.json [skip ci] 2026-05-03 06:25:11 +00:00
Kpa-clawbot 7960191a62 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 06:25:10 +00:00
Kpa-clawbot f1b2dfcc56 ci: update frontend-tests.json [skip ci] 2026-05-03 06:25:10 +00:00
Kpa-clawbot 436c2bb12d ci: update frontend-coverage.json [skip ci] 2026-05-03 06:25:09 +00:00
Kpa-clawbot 62f9962e01 ci: update e2e-tests.json [skip ci] 2026-05-03 06:25:08 +00:00
Kpa-clawbot 2e3a94b86d chore(db): one-time cleanup of legacy packets with empty hash or null timestamp (closes #994) (#997)
## Summary

One-time startup migration that deletes legacy packets (transmissions +
observations) with empty hash or empty `first_seen` timestamp. This is
the write-side cleanup following #993's read-side filter.

### Migration: `cleanup_legacy_null_hash_ts`

- Checks `_migrations` table for marker
- If not present: deletes observations referencing bad transmissions,
then deletes the transmissions themselves
- Logs count of deleted rows
- Records marker for idempotency

### TDD

- **Red commit:** `b1a24a1` — test asserts migration deletes bad rows
(fails without implementation)
- **Green commit:** `2b94522` — implements the migration, all tests pass

Fixes #994

---------

Co-authored-by: you <you@example.com>
2026-05-02 23:15:20 -07:00
Kpa-clawbot 81aeadafbf ci: update go-server-coverage.json [skip ci] 2026-05-03 06:13:55 +00:00
Kpa-clawbot 4c0c39823f ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 06:13:54 +00:00
Kpa-clawbot 7d5d130095 ci: update frontend-tests.json [skip ci] 2026-05-03 06:13:53 +00:00
Kpa-clawbot 50a0eda1aa ci: update frontend-coverage.json [skip ci] 2026-05-03 06:13:52 +00:00
Kpa-clawbot a745847f3b ci: update e2e-tests.json [skip ci] 2026-05-03 06:13:52 +00:00
Kpa-clawbot 8dfcec2ff3 feat: include favorites and claimed nodes in export/import JSON (#1003)
## Summary

Extends the customizer v2 export/import to include favorite nodes and
claimed ("My Mesh") nodes, so users can transfer their full setup
between browsers/devices.

## Changes

### `public/customize-v2.js`
- `readOverrides()` now merges `favorites` (from `meshcore-favorites`)
and `myNodes` (from `meshcore-my-nodes`) into the exported JSON
- `writeOverrides()` extracts `favorites`/`myNodes` arrays and writes
them to their respective localStorage keys, keeping theme overrides
separate
- `validateShape()` validates both new keys as arrays, rejecting
non-array values
- `VALID_SECTIONS` updated to include `favorites` and `myNodes`

### `test-customizer-v2.js`
- 8 new tests covering read/write/validate for both favorites and
myNodes

## TDD
- Red commit: `0405fb7` (failing tests)
- Green commit: `bb9dc34` (implementation)

Fixes #895

---------

Co-authored-by: you <you@example.com>
2026-05-02 23:04:20 -07:00
Kpa-clawbot 84ffed96ed ci: update go-server-coverage.json [skip ci] 2026-05-03 05:28:20 +00:00
Kpa-clawbot b21db32d2e ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 05:28:19 +00:00
Kpa-clawbot f34a233ba7 ci: update frontend-tests.json [skip ci] 2026-05-03 05:28:18 +00:00
Kpa-clawbot 9342ed2799 ci: update frontend-coverage.json [skip ci] 2026-05-03 05:28:17 +00:00
Kpa-clawbot e2d49a62ee ci: update e2e-tests.json [skip ci] 2026-05-03 05:28:16 +00:00
Kpa-clawbot 564d93d6aa fix: dedup topology analytics by resolved pubkey (#998)
## Fix topology analytics double-counting repeaters/pairs (#909)

### Problem

`computeAnalyticsTopology()` aggregates by raw hop hex string. When
firmware emits variable-length path hashes (1-3 bytes per hop), the same
physical node appears multiple times with different prefix lengths (e.g.
`"07"`, `"0735bc"`, `"0735bc6d"` all referring to the same node). This
inflates repeater counts and creates duplicate pair entries.

### Solution

Added a confidence-gated dedup pass after frequency counting:

1. **For each hop prefix**, check if it resolves unambiguously (exactly
1 candidate in the prefix map)
2. **Unambiguous prefixes** → group by resolved pubkey, sum counts, keep
longest prefix as display identifier
3. **Ambiguous prefixes** (multiple candidates for that prefix) → left
as separate entries (conservative)
4. **Same treatment for pairs**: canonicalize by sorted pubkey pair

### Addressing @efiten's collision concern

At scale (~2000+ repeaters), 1-byte prefixes (256 buckets) WILL collide.
This fix explicitly checks the prefix map candidate count. Ambiguous
prefixes (where `len(pm.m[hop]) > 1`) are never merged — they remain as
separate entries. Only prefixes with a single matching node are eligible
for dedup.

### TDD

- **Red commit**: `4dbf9c0` — added 3 failing tests
- **Green commit**: `d6cae9a` — implemented dedup, all tests pass

### Tests added

- `TestTopologyDedup_RepeatersMergeByPubkey` — verifies entries with
different prefix lengths for same node merge to single entry with summed
count
- `TestTopologyDedup_AmbiguousPrefixNotMerged` — verifies colliding
short prefix stays separate from unambiguous longer prefix
- `TestTopologyDedup_PairsMergeByPubkey` — verifies pair entries merge
by resolved pubkey pair

Fixes #909

---------

Co-authored-by: you <you@example.com>
2026-05-02 22:19:49 -07:00
Kpa-clawbot 0b7c4c41c6 ci: update go-server-coverage.json [skip ci] 2026-05-03 04:14:32 +00:00
Kpa-clawbot f87654e7d8 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 04:14:31 +00:00
Kpa-clawbot 0c9b305a99 ci: update frontend-tests.json [skip ci] 2026-05-03 04:14:30 +00:00
Kpa-clawbot 4aebc4d90b ci: update frontend-coverage.json [skip ci] 2026-05-03 04:14:29 +00:00
Kpa-clawbot 78d96d24db ci: update e2e-tests.json [skip ci] 2026-05-03 04:14:28 +00:00
Kpa-clawbot 440bda6244 fix(channels): channel color picker UX (closes #681) (#995)
## Summary

Fixes the channel color picker UX issues on both Live page and Channels
page.

Closes #681

## Repro Evidence (on master at HEAD)

- **Live feed dots**: 12px inline — too small to reliably click in a
fast-moving feed
- **Right-click hijack**: `contextmenu` listener on live feed conflicts
with browser context menu
- **Channels page**: No way to clear an assigned color without opening
the picker popover
- **Popover positioning**: 8px edge margin causes overlap with panel
borders

## Root Cause

| Issue | File:Line |
|-------|-----------|
| Tiny dots | `public/live.js:2847` — inline `width:12px;height:12px` |
| Context menu hijack | `public/channel-color-picker.js:231` —
`feed.addEventListener('contextmenu', ...)` |
| No clear affordance | `public/channels.js:1101` — dot rendered without
adjacent clear button |
| Popover overlap | `public/channel-color-picker.js:108-109` — `vw - pw
- 8` margin |

## Fix

1. Increased feed color dots to 18px (visible, clickable)
2. Removed contextmenu listener from live feed — dots are the
interaction point
3. Added inline `✕` clear button next to colored dots on channels page
4. Increased popover edge margin to 14px

## TDD Evidence

- **Red commit:** `2034071` — 6/8 tests fail (dot size, contextmenu,
clear affordance, margins)
- **Green commit:** `49636e5` — all 8 tests pass

## Verification

- `node test-color-picker-ux.js` — 8/8 pass
- `node test-channel-color-picker.js` — 17/17 pass (existing tests
unbroken)

---------

Co-authored-by: you <you@example.com>
2026-05-02 21:05:15 -07:00
Kpa-clawbot aea0a9caee fix(packets): preserve scroll position on filter change + group expand/collapse (closes #431) (#996)
## Summary

Closes #431. Preserves scroll position on the packets page when filters
change or groups are expanded/collapsed.

## Problem

When an operator scrolls down through packet history then changes a
filter (type, observer, packet-filter expression) or expands/collapses a
group, `renderTableRows()` rebuilds the DOM which resets `scrollTop` to
0. This forces the user back to the top — frustrating when digging
through hundreds of packets.

## Fix

Save `scrollContainer.scrollTop` at the start of `renderTableRows()`,
restore it after DOM rebuild completes. Two restore points:
1. **Empty-results path** (line ~1821): after `tbody.innerHTML = ...` 
2. **Normal virtual-scroll path** (line ~1840): after
`renderVisibleRows()`

### Key lines changed
- `public/packets.js` lines 1748–1749: save scrollTop
- `public/packets.js` line 1821: restore after empty-state DOM write  
- `public/packets.js` line 1840: restore after renderVisibleRows

## TDD evidence

- **Red commit:** a99ba21 — test asserts scrollTop preserved; fails
without fix
- **Green commit:** 35cc4bf — adds save/restore; test passes

## Anti-tautology

Removing the `scrollContainer.scrollTop = savedScrollTop` lines causes
the test to fail (scrollTop becomes 0 instead of 500). Verified locally.

## Verification

- `node test-packets.js` — 83 passed, 0 failed
- `node test-packet-filter.js` — 62 passed, 0 failed

---------

Co-authored-by: you <you@example.com>
2026-05-02 21:03:01 -07:00
Kpa-clawbot 01246f9412 ci: update go-server-coverage.json [skip ci] 2026-05-03 03:44:19 +00:00
Kpa-clawbot 4c309bad80 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 03:44:18 +00:00
Kpa-clawbot ce769950dd ci: update frontend-tests.json [skip ci] 2026-05-03 03:44:17 +00:00
Kpa-clawbot 73c04a9ba3 ci: update frontend-coverage.json [skip ci] 2026-05-03 03:44:17 +00:00
Kpa-clawbot e2eaf4c656 ci: update e2e-tests.json [skip ci] 2026-05-03 03:44:16 +00:00
Kpa-clawbot b7c280c20a fix: drop/filter packets with null hash or timestamp (closes #871) (#993)
## Summary

Closes #871

The `/api/packets` endpoint could return packets with `null` hash or
timestamp fields. This was caused by legacy data in SQLite (rows with
empty `hash` or `NULL`/empty `first_seen`) predating the ingestor's
existing validation guard (`if hash == "" { return false, nil }` at
`cmd/ingestor/db.go:610`).

## Root Cause

`cmd/server/store.go` `filterPackets()` had no data-integrity guard.
Legacy rows with empty `hash` or `first_seen` were loaded into the
in-memory store and returned verbatim. The `strOrNil("")` helper then
serialized these as JSON `null`.

## Fix

Added a data-integrity predicate at the top of `filterPackets`'s scan
callback (`cmd/server/store.go:2278`):

```go
if tx.Hash == "" || tx.FirstSeen == "" {
    return false
}
```

This filters bad legacy rows at query time. The write path (ingestor)
already rejects empty hashes, so no new bad data enters.

## TDD Evidence

- **Red commit:** `15774c3` — test `TestIssue871_NoNullHashOrTimestamp`
asserts no packet in API response has null/empty hash or timestamp
- **Green commit:** `281fd6f` — adds the filter guard, test passes

## Testing

- `go test ./...` in `cmd/server` passes (full suite)
- Client-side defensive filter from PR #868 remains as defense-in-depth

---------

Co-authored-by: you <you@example.com>
2026-05-02 20:35:15 -07:00
Kpa-clawbot d43c95a4bb fix(ingestor): warn when TRACE payload decode fails but observation stored (closes #889) (#992)
## Summary

Closes #889.

When a TRACE packet's payload is too short to decode (< 9 bytes),
`decodeTrace` returns an error in `Payload.Error` but the observation is
still stored with empty `Path.Hops`. Previously this was completely
silent — no log, no anomaly flag, no indication the row is degraded.

This fix populates `DecodedPacket.Anomaly` with the decode error message
(e.g., `"TRACE payload decode failed: too short"`) so operators and
downstream consumers can identify degraded observations.

## TDD Commit History

1. **Red commit** `04e0165` — failing test asserting `Anomaly` is set
when TRACE payload decode fails
2. **Green commit** `d3e72d1` — 3-line fix in `decoder.go` line 601-603:
check `payload.Error != ""` for TRACE packets and set anomaly

## What Changed

`cmd/ingestor/decoder.go` (lines 601-603): Added a check before the
existing TRACE path-parsing block. If `payload.Error` is non-empty for a
TRACE packet, `anomaly` is set to `"TRACE payload decode failed:
<error>"`.

`cmd/ingestor/decoder_test.go`: Added
`TestDecodeTracePayloadFailSetsAnomaly` — constructs a TRACE packet with
a 4-byte payload (too short), asserts the packet is still returned
(observation stored) and `Anomaly` is populated.

## Verification

- `go build ./...` ✓
- `go test ./...` ✓ (all pass including new test)
- Anti-tautology: reverting the fix causes the new test to fail (asserts
`pkt.Anomaly == ""` → error)

---------

Co-authored-by: you <you@example.com>
2026-05-02 20:34:27 -07:00
Kpa-clawbot bed5e0267f ci: update go-server-coverage.json [skip ci] 2026-05-03 03:24:29 +00:00
Kpa-clawbot 999ecfc84d ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 03:24:28 +00:00
Kpa-clawbot f12428c460 ci: update frontend-tests.json [skip ci] 2026-05-03 03:24:26 +00:00
Kpa-clawbot 2199d404c9 ci: update frontend-coverage.json [skip ci] 2026-05-03 03:24:25 +00:00
Kpa-clawbot 016a6f2750 ci: update e2e-tests.json [skip ci] 2026-05-03 03:24:24 +00:00
Kpa-clawbot dd2f044f2b fix: cache RW SQLite connection + dedup DBConfig (closes #921) (#982)
Closes #921

## Summary

Follow-up to #920 (incremental auto-vacuum). Addresses both items from
the adversarial review:

### 1. RW connection caching

Previously, every call to `openRW(dbPath)` opened a new SQLite RW
connection and closed it after use. This happened in:
- `runIncrementalVacuum` (~4x/hour)
- `PruneOldPackets`, `PruneOldMetrics`, `RemoveStaleObservers`
- `buildAndPersistEdges`, `PruneNeighborEdges`
- All neighbor persist operations

Now a single `*sql.DB` handle (with `MaxOpenConns(1)`) is cached
process-wide via `cachedRW(dbPath)`. The underlying connection pool
manages serialization. The original `openRW()` function is retained for
one-shot test usage.

### 2. DBConfig dedup

`DBConfig` was defined identically in both `cmd/server/config.go` and
`cmd/ingestor/config.go`. Extracted to `internal/dbconfig/` as a shared
package; both binaries now use a type alias (`type DBConfig =
dbconfig.DBConfig`).

## Tests added

| Test | File |
|------|------|
| `TestCachedRW_ReturnsSameHandle` | `cmd/server/rw_cache_test.go` |
| `TestCachedRW_100Calls_SingleConnection` |
`cmd/server/rw_cache_test.go` |
| `TestGetIncrementalVacuumPages_Default` |
`internal/dbconfig/dbconfig_test.go` |
| `TestGetIncrementalVacuumPages_Configured` |
`internal/dbconfig/dbconfig_test.go` |

## Verification

```
ok  github.com/corescope/server    20.069s
ok  github.com/corescope/ingestor  47.117s
ok  github.com/meshcore-analyzer/dbconfig  0.003s
```

Both binaries build cleanly. 100 sequential `cachedRW()` calls return
the same handle with exactly 1 entry in the cache map.

---------

Co-authored-by: you <you@example.com>
2026-05-02 20:15:30 -07:00
Kpa-clawbot 736b09697d fix(analytics): apply customizer timestamp format to chart axes (closes #756) (#981)
## Summary

Fixes #756 — the customizer timestamp format setting (ISO/ISO+ms/locale)
and timezone (UTC/local) were not applied to chart X-axis labels,
tooltips, or certain inline timestamps in the analytics pages.

## Changes

### `public/app.js`
- Added `formatChartAxisLabel(date, shortForm)` — a shared helper that
reads the customizer's `timestampFormat` and `timestampTimezone`
preferences and formats dates for chart axes accordingly.
`shortForm=true` returns time-only (for intra-day charts),
`shortForm=false` returns date+time (for multi-day ranges).

### `public/analytics.js`
- `rfXAxisLabels()`: now calls `formatChartAxisLabel()` instead of
hardcoded `toLocaleTimeString()`
- `rfTooltipCircles()`: tooltip timestamps now use
`formatAbsoluteTimestamp()` instead of raw ISO
- Subpath detail first/last seen: now uses `formatAbsoluteTimestamp()`
- Neighbor graph last_seen: now uses `formatAbsoluteTimestamp()`

### `public/node-analytics.js`
- Packet timeline chart labels: now use `formatChartAxisLabel()`
(respects short vs long form based on time range)
- SNR over time chart labels: now use `formatChartAxisLabel()`

## Behavior by setting

| Setting | Chart axis (short) | Chart axis (long) |
|---------|-------------------|-------------------|
| ISO | `14:30` | `05-03 14:30` |
| ISO+ms | `14:30:05` | `05-03 14:30:05` |
| Locale | `2:30 PM` | `May 3, 2:30 PM` |

All respect the UTC/local timezone toggle.

## Testing

- Server builds cleanly (`go build`)
- Served `app.js` contains `formatChartAxisLabel` (verified via curl)
- Graceful fallback: all callsites check `typeof formatChartAxisLabel
=== 'function'` before calling, preserving backward compat if script
load order changes

---------

Co-authored-by: you <you@example.com>
2026-05-02 20:10:29 -07:00
Kpa-clawbot b3b96b3dda ci: update go-server-coverage.json [skip ci] 2026-05-03 03:02:27 +00:00
Kpa-clawbot 5c9860db46 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 03:02:26 +00:00
Kpa-clawbot de288e71da ci: update frontend-tests.json [skip ci] 2026-05-03 03:02:25 +00:00
Kpa-clawbot 3529b1334b ci: update frontend-coverage.json [skip ci] 2026-05-03 03:02:24 +00:00
Kpa-clawbot 7bd1f396df ci: update e2e-tests.json [skip ci] 2026-05-03 03:02:23 +00:00
Kpa-clawbot 58484ad924 feat(ingestor): backfill observations.path_json from raw_hex (closes #888) (#983)
## Summary

Adds an idempotent startup migration to the ingestor that backfills
`observations.path_json` from per-observation `raw_hex` (added in #882).

**Approach: Server-side migration (Option B)** — runs automatically at
startup, chunked in batches of 1000, tracked via `_migrations` table.
Chosen over a standalone script because:
1. Follows existing migration pattern (channel_hash, last_packet_at,
etc.)
2. Zero operator action required — just deploy
3. Idempotent — safe to restart mid-migration (uncommitted rows get
picked up next run)

## What it does

- Selects observations where `raw_hex` is populated but `path_json` is
NULL/empty/`[]`
- Excludes TRACE packets (`payload_type = 9`) at the SQL level — their
header bytes are SNR values, not hops
- Decodes hops via `packetpath.DecodePathFromRawHex` (reuses existing
helper)
- Updates `path_json` with the decoded JSON array
- Marks rows with undecoded/empty hops as `'[]'` to prevent infinite
re-scanning
- Records `backfill_path_json_from_raw_hex_v1` in `_migrations` when
complete

## Safety

- **Never overwrites** existing non-empty `path_json` — only fills where
missing
- **Batched** (1000 rows per iteration) — won't OOM on large DBs
- **TRACE-safe** — excluded at query level per
`packetpath.PathBytesAreHops` semantics

## Test

`TestBackfillPathJsonFromRawHex` — creates synthetic observations with:
- Empty path_json + valid raw_hex → verifies backfill populates
correctly
- NULL path_json → verifies backfill populates
- Existing path_json → verifies NO overwrite
- TRACE packet → verifies skip

Anti-tautology: test asserts specific decoded values (`["AABB","CCDD"]`)
from known raw_hex input, not just "something changed."

Closes #888

Co-authored-by: you <you@example.com>
2026-05-02 19:52:43 -07:00
Kpa-clawbot 1a2170bf92 ci: update go-server-coverage.json [skip ci] 2026-05-03 01:07:07 +00:00
Kpa-clawbot 8a3c87e5a2 ci: update go-ingestor-coverage.json [skip ci] 2026-05-03 01:07:06 +00:00
Kpa-clawbot 722cf480f8 ci: update frontend-tests.json [skip ci] 2026-05-03 01:07:05 +00:00
Kpa-clawbot 5cbfb4a8e7 ci: update frontend-coverage.json [skip ci] 2026-05-03 01:07:05 +00:00
Kpa-clawbot b7933553a6 ci: update e2e-tests.json [skip ci] 2026-05-03 01:07:04 +00:00
Kpa-clawbot fc57433f27 fix(analytics): merge channel buckets by hash byte; reject rainbow-table mismatches (closes #978) (#980)
## Summary

Closes #978 — analytics channels duplicated by encrypted/decrypted split
+ rainbow-table collisions.

## Root cause

Two distinct bugs in `computeAnalyticsChannels` (`cmd/server/store.go`):

1. **Encrypted/decrypted split**: The grouping key included the decoded
channel name (`hash + "_" + channel`), so packets from observers that
could decrypt a channel created a separate bucket from packets where
decryption failed. Same physical channel, two entries.

2. **Rainbow-table collisions**: Some observers' lookup tables map hash
bytes to wrong channel names. E.g., hash `72` incorrectly claimed to be
`#wardriving` (real hash is `129`). This created ghost 1-message
entries.

## Fix

1. **Always group by hash byte alone** (drop `_channel` suffix from
`chKey`). When any packet decrypts successfully, upgrade the bucket's
display name from placeholder (`chN`) to the real name
(first-decrypter-wins for stability).

2. **Validate channel names** against the firmware hash invariant:
`SHA256(SHA256("#name")[:16])[0] == channelHash`. Mismatches are treated
as encrypted (placeholder name, no trust in decoded channel). Guard is
in the analytics handler (not the ingestor) to avoid breaking other
surfaces that use the decoded field for display.

## Verification (e2e-fixture.db)

| Metric | BEFORE | AFTER |
|--------|--------|-------|
| Total channels | 22 | 19 |
| Duplicate hash bytes | 3 (hashes 217, 202, 17) | 0 |

## Tests added

- `TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted` — same
hash, mixed encrypted/decrypted → ONE bucket
- `TestComputeAnalyticsChannels_RejectsRainbowTableMismatch` — hash 72
claimed as `#wardriving` (real=129) → rejected, stays `ch72`
- `TestChannelNameMatchesHash` — unit test for hash validation helper
- `TestIsPlaceholderName` — unit test for placeholder detection

Anti-tautology gate: both main tests fail when their respective fix
lines are reverted.

Co-authored-by: you <you@example.com>
2026-05-02 16:05:56 -07:00
Kpa-clawbot 53ab302dd6 fix(packets): clear-filters button (rebased + addresses greybeard) (closes #964) (#975)
Rebased version of #973 onto current master, with greybeard review
fixes.

## Changes from #973
- **Stowaway revert dropped**: The original PR branched from older
master and inadvertently reverted PR #926's MQTT connect-retry fix
(`cmd/ingestor/main.go` + `cmd/ingestor/main_test.go`). After rebasing
onto current master (which includes #926 + #970), these files no longer
appear in the diff.
- **Greybeard M1 fixed**: Time-window filter (`savedTimeWindowMin`,
`fTimeWindow` dropdown, `localStorage 'meshcore-time-window'`) is now
reset by the clear-filters button. The clear-button visibility predicate
also accounts for non-default time window.
- **Greybeard m1 fixed**: Replaced 7 tautological source-grep tests with
8 behavioral vm-sandbox tests that extract and execute the actual clear
handler + `updatePacketsUrl`, asserting real state transitions.

## Original feature (from #973)
Clear-filters button for the packets page — resets all filter state
(hash, node, observer, channel, type, expression, myNodes, time window,
region) and refreshes. Button visibility auto-toggles based on active
filter state.

Closes #964
Supersedes #973

## Tests
- `node test-clear-filters.js` — 8 behavioral tests 
- `node test-packets.js` — 82 tests 
- `cd cmd/ingestor && go test ./...` — 

---------

Co-authored-by: you <you@example.com>
2026-05-02 12:12:51 -07:00
Kpa-clawbot 5aa8f795cd feat(ingestor): per-source MQTT connect timeout (#931) (#977)
## Summary

Per-source MQTT connect timeout, correctly targeting the `WaitTimeout`
startup gate (#931).

## What changed

- Added `connectTimeoutSec` field to `MQTTSource` struct (per-source,
not global) — `config.go:24`
- Added `ConnectTimeoutOrDefault()` helper returning configured value or
30 (default from #926) — `config.go:29`
- Replaced hardcoded `WaitTimeout(30 * time.Second)` with
`WaitTimeout(time.Duration(connectTimeout) * time.Second)` —
`main.go:173`
- Updated `config.example.json` with field at source level
- Unit tests for default (30) and custom values

## Why this supersedes #976

PR #976 made paho's `SetConnectTimeout` (per-TCP-dial, was 10s)
configurable via a **global** `mqttConnectTimeoutSeconds` field. Issue
#931 explicitly references the **30s timeout** — which is
`WaitTimeout(30s)`, the startup gate from #926. It also requests
**per-source** config, not global.

This PR targets the correct timeout at the correct granularity.

## Live verification (Rule 18)

Two sources pointed at unreachable brokers:
- `fast` (`connectTimeoutSec: 5`): timed out in 5s 
- `default` (unset): timed out in 30s 

```
19:00:35 MQTT [fast] connect timeout: 5s
19:00:40 MQTT [fast] initial connection timed out — retrying in background
19:00:40 MQTT [default] connect timeout: 30s
19:01:10 MQTT [default] initial connection timed out — retrying in background
```

Closes #931
Supersedes #976

Co-authored-by: you <you@example.com>
2026-05-02 12:08:25 -07:00
Kpa-clawbot 1e7c187521 fix(ingestor): address review BLOCKERs from PR #926 (goroutine leak + guard semantics) [v2] (#974)
## fix(ingestor): address review BLOCKERs from PR #926 (goroutine leak +
guard semantics)

Supersedes #970. Rebased onto current master to resolve merge conflicts.

### Changes (same as #970)
- **BL1 (goroutine leak):** Call `client.Disconnect(0)` on the error
path after `Connect()` fails with `ConnectRetry=true`, preventing Paho's
internal retry goroutines from leaking.
- **BL2 (guard semantics):** Use `connectedCount == 0` instead of
`len(clients) == 0` to detect zero-connected state, since timed-out
clients are appended to the slice.
- **Tests:** `TestBL1_GoroutineLeakOnHardFailure` and
`TestBL2_ZeroConnectedFatals` covering both blockers.

### Context
- Fixes blockers raised in review of #926
- Related: #910 (original hang bug)

Co-authored-by: you <you@example.com>
2026-05-02 12:05:02 -07:00
Kpa-clawbot 4b8d8143f4 feat(server): explicit CORS policy with configurable origin allowlist (#883) (#971)
## Summary

Adds explicit CORS policy support to the CoreScope API server, closing
#883.

### Problem

The API relied on browser same-origin defaults with no way for operators
to configure cross-origin access. Operators running dashboards or
third-party frontends on different origins had no supported way to make
API calls.

### Solution

**New config option:** `corsAllowedOrigins` (string array, default `[]`)

**Middleware behavior:**
| Config | Behavior |
|--------|----------|
| `[]` (default) | No `Access-Control-*` headers added — browsers
enforce same-origin. **Preserves current behavior.** |
| `["https://dashboard.example.com"]` | Echoes matching `Origin`, sets
`Allow-Methods`/`Allow-Headers` |
| `["*"]` | Sets `Access-Control-Allow-Origin: *` (explicit opt-in only)
|

**Headers set when origin matches:**
- `Access-Control-Allow-Origin: <origin>` (or `*`)
- `Access-Control-Allow-Methods: GET, POST, OPTIONS`
- `Access-Control-Allow-Headers: Content-Type, X-API-Key`
- `Vary: Origin` (non-wildcard only)

**Preflight handling:** `OPTIONS` → `204 No Content` with CORS headers
(or `403` if origin not in allowlist).

### Config example

```json
{
  "corsAllowedOrigins": ["https://dashboard.example.com", "https://monitor.internal"]
}
```

### Files changed

| File | Change |
|------|--------|
| `cmd/server/cors.go` | New CORS middleware |
| `cmd/server/cors_test.go` | 7 unit tests covering all branches |
| `cmd/server/config.go` | `CORSAllowedOrigins` field |
| `cmd/server/routes.go` | Wire middleware before all routes |

### Testing

**Unit tests (7):**
- Default config → no CORS headers
- Allowlist match → headers present with `Vary: Origin`
- Allowlist miss → no CORS headers
- Preflight allowed → 204 with headers
- Preflight rejected → 403
- Wildcard → `*` without `Vary`
- No `Origin` header → pass-through

**Live verification (Rule 18):**

```
# Default (empty corsAllowedOrigins):
$ curl -I -H "Origin: https://evil.example" localhost:19883/api/health
HTTP/1.1 200 OK
# No Access-Control-* headers ✓

# With corsAllowedOrigins: ["https://good.example"]:
$ curl -I -H "Origin: https://good.example" localhost:19884/api/health
Access-Control-Allow-Origin: https://good.example
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, X-API-Key
Vary: Origin ✓

$ curl -I -H "Origin: https://evil.example" localhost:19884/api/health
# No Access-Control-* headers ✓

$ curl -I -X OPTIONS -H "Origin: https://good.example" localhost:19884/api/health
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://good.example ✓
```

Closes #883

Co-authored-by: you <you@example.com>
2026-05-02 12:04:37 -07:00
Kpa-clawbot 3364eed303 feat: separate "Last Status Update" from "Last Packet Observation" for observers (v3 rebase) (#969)
Rebased version of #968 (which was itself a rebase of #905) — resolves
merge conflict with #906 (clock-skew UI) that landed on master.

## Conflict resolution

**`public/observers.js`** — master (#906) added "Clock Offset" column to
observer table; #968 split "Last Seen" into "Last Status" + "Last
Packet" columns. Combined both: the table now has Status | Name | Region
| Last Status | Last Packet | Packets | Packets/Hour | Clock Offset |
Uptime.

## What this PR adds (unchanged from #968/#905)

- `last_packet_at` column in observers DB table
- Separate "Last Status Update" and "Last Packet Observation" display in
observers list and detail page
- Server-side migration to add the column automatically
- Backfill heuristic for existing data
- Tests for ingestor and server

## Verification

- All Go tests pass (`cmd/server`, `cmd/ingestor`)
- Frontend tests pass (`test-packets.js`, `test-hash-color.js`)
- Built server, hit `/api/observers` — `last_packet_at` field present in
JSON
- Observer table header has all 9 columns including both Last Packet and
Clock Offset

## Prior PRs

- #905 — original (conflicts with master)
- #968 — first rebase (conflicts after #906 landed)
- This PR — second rebase, resolves #906 conflict

Supersedes #968. Closes #905.

---------

Co-authored-by: you <you@example.com>
2026-05-02 12:03:42 -07:00
efiten d65122491e fix(ingestor): unblock startup when one of multiple MQTT sources is unreachable (#926)
## Summary

- With `ConnectRetry=true`, paho's `token.Wait()` only returns on
success — it blocks forever for unreachable brokers, stalling the entire
startup loop before any other source connects
- Switches to `token.WaitTimeout(30s)`: on timeout the client is still
tracked so `ConnectRetry` keeps retrying in background; `OnConnect`
fires and subscribes when it eventually connects
- Adds `TestMQTTConnectRetryTimeoutDoesNotBlock` to confirm
`WaitTimeout` returns within deadline for unreachable brokers
(regression guard for this exact failure mode)

Fixes #910

## Test plan

- [x] Two MQTT sources configured, one unreachable: ingestor reaches
`Running` status and ingests from the reachable source immediately on
startup
- [x] Unreachable source logs `initial connection timed out — retrying
in background` and reconnects automatically when the broker comes back
- [x] Single source, reachable: behaviour unchanged (`Running — 1 MQTT
source(s) connected`)
- [x] Single source, unreachable: `Running — 0 MQTT source(s) connected,
1 retrying in background`; ingestion starts once broker is available
- [x] `go test ./...` passes (excluding pre-existing
`TestOpenStoreInvalidPath` failure on master)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-02 11:31:51 -07:00
Kpa-clawbot 4f0f7bc6dd fix(ui): fill remaining gaps in payload-type lookup tables (10/11/15) (#967)
## Summary

Fill the remaining gaps in payload-type lookup tables noted out-of-scope
on #965. Every firmware-defined payload type (0–11, 15) now has entries
in all four frontend tables.

## Changes

Three types were missing from one or more tables:

| Type | Name | `PAYLOAD_COLORS` (app.js) | `TYPE_NAMES` (packets.js) |
`TYPE_COLORS` (roles.js) | `TYPE_BADGE_MAP` (roles.js) |

|------|------|--------------------------|--------------------------|-------------------------|---------------------------|
| 10 | Multipart | added | added | added `#0d9488` | added |
| 11 | Control | added |  (already) | added `#b45309` | added |
| 15 | Raw Custom | added | added | added `#c026d3` | added |

## Color choices

- **MULTIPART** `#0d9488` (teal) — multi-fragment stitching, distinct
from PATH's `#14b8a6`
- **CONTROL** `#b45309` (amber) — warm brown, distinct hue from ACK's
grey `#6b7280`
- **RAW_CUSTOM** `#c026d3` (fuchsia) — magenta, distinct from TRACE's
pink `#ec4899`

All pass WCAG 3:1 contrast against both white and dark (#1e1e1e)
backgrounds.

## Tests

- `test-packets.js`: 82/82 
- `test-hash-color.js`: 32/32 

Badge CSS auto-generation: `syncBadgeColors()` in `roles.js` iterates
`TYPE_BADGE_MAP` keyed against `TYPE_COLORS`, so the three new entries
automatically get `.type-badge.multipart`, `.type-badge.control`, and
`.type-badge.raw-custom` CSS rules injected at page load.

Firmware source: `firmware/src/Packet.h:19-32` — types 0x00–0x0B and
0x0F. Types 0x0C–0x0E are not defined.

Follows up on #965.

---------

Co-authored-by: you <you@example.com>
2026-05-02 11:17:34 -07:00
efiten 40c3aa13f9 fix(paths): exclude false-positive paths from short-prefix collisions (#930)
Fixes #929

## Summary

- `handleNodePaths` pulls candidates from `byPathHop` using 2-char and
4-char prefix keys (e.g. `"7a"` for a node using 1-byte adverts)
- When two nodes share the same short prefix, paths through the *other*
node are included as candidates
- The `resolved_path` post-filter covers decoded packets but falls
through conservatively (`inIndex = true`) when `resolved_path` is NULL,
letting false positives reach the response

**Fix:** during the aggregation phase (which already calls `resolveHop`
per hop), add a `containsTarget` check. If every hop resolves to a
different node's pubkey, skip the path. Packets confirmed via the
full-pubkey index key or via SQL bypass the check. Unresolvable hops are
kept conservatively.

## Test plan
- [x] `TestHandleNodePaths_PrefixCollisionExclusion`: two nodes sharing
`"7a"` prefix; verifies the path with no `resolved_path` (false
positive) is excluded and the SQL-confirmed path (true positive) is
included
- [x] Full test suite: `go test github.com/corescope/server` — all pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-02 11:15:25 -07:00
Kpa-clawbot b47587f031 feat(#690): expose observer skew + per-hash evidence in clock UI (#906)
## Summary

UI completion of #690 — surfaces observer clock skew and per-hash
evidence that the backend already computes but wasn't exposed in the
frontend.

**Not related to #845/PR #894** (bimodal detection) — this is the UI
surface for the original #690 scope.

## Changes

### Backend: per-hash evidence in node clock-skew API (commit 1)
- Extended `GET /api/nodes/{pubkey}/clock-skew` to return
`recentHashEvidence` (most recent 10 hashes with per-observer
raw/corrected skew and observer offset) and `calibrationSummary`
(total/calibrated/uncalibrated counts).
- Evidence is cached during `ClockSkewEngine.Recompute()` — route
handler is cheap.
- Fleet endpoint omits evidence to keep payload small.

### Frontend: observer list page — clock offset column (commit 2)
- Added "Clock Offset" column to observers table.
- Fetches `/api/observers/clock-skew` once on page load, joins by
ObserverID.
- Color-coded severity badge + sample count tooltip.
- Singleton observers show "—" not "0".

### Frontend: observer-detail clock card (commit 3)
- Added clock offset card mirroring node clock card style.
- Shows: offset value, sample count, severity badge.
- Inline explainer describing how offset is computed from multi-observer
packets.

### Frontend: node clock card evidence panel (commit 4)
- Collapsible "Evidence" section in existing node clock skew card.
- Per-hash breakdown: observer count, median corrected skew,
per-observer raw/corrected/offset.
- Calibration summary line and plain-English severity reason at top.

## Test Results

```
go test ./... (cmd/server) — PASS (19.3s)
go test ./... (cmd/ingestor) — PASS (31.6s)
Frontend helpers: 610 passed, 0 failed
```

New test: `TestNodeClockSkew_EvidencePayload` — 3-observer scenario
verifying per-hash array shape, corrected = raw + offset math, and
median.

No frontend JS smoke test added — no existing test harness for
clock/observer rendering. Noted for future.

## Screenshots

Screenshots TBD

## Perf justification

Evidence is computed inside the existing `Recompute()` cycle (already
O(n) on samples). The `hashEvidence` map adds ~32 bytes per sample of
memory. Evidence is stripped from fleet responses. Per-node endpoint
returns at most 10 evidence entries — bounded payload.

---------

Co-authored-by: you <you@example.com>
2026-05-02 10:30:54 -07:00
Kpa-clawbot c67f3347ce fix(ui): add GRP_DATA (type 6) to filter dropdown + color tables (#965)
## Bug

Packet type 6 (`PAYLOAD_TYPE_GRP_DATA` per `firmware/src/Packet.h:25`)
was missing from three frontend lookup tables:
- `public/app.js:7` — `PAYLOAD_COLORS` had no entry for 6 → badge color
fell back to `unknown` (grey)
- `public/packets.js:29` — `TYPE_NAMES` (used by the Packets page
type-filter dropdown) had no entry for 6 → "Group Data" missing from the
menu
- `public/roles.js:17,24` — `TYPE_COLORS` and `TYPE_BADGE_MAP` had no
`GRP_DATA` entry → no dedicated CSS class

The packet detail page already handled it (via `PAYLOAD_TYPES` in
`app.js:6` which had `6: 'Group Data'`) so individual GRP_DATA packets
render correctly. The gap was only in the filter UI + badge styling.

## Fix

Add the missing entry in each table. 4 lines across 3 files.

- `app.js`: add `6: 'grp-data'` to `PAYLOAD_COLORS`
- `packets.js`: add `6:'Group Data'` to `TYPE_NAMES`
- `roles.js`: add `GRP_DATA: '#8b5cf6'` to `TYPE_COLORS` and `GRP_DATA:
'grp-data'` to `TYPE_BADGE_MAP`

Color choice `#8b5cf6` (violet) — distinct from GRP_TXT's blue but
visually adjacent so operators read them as related types.

## Verification (rule 18 + 19)

Built server locally, served the JS files, grepped the rendered output:

```
$ curl -s http://localhost:13900/packets.js | grep TYPE_NAMES
const TYPE_NAMES = { ... 5:'Channel Msg', 6:'Group Data', 7:'Anon Req' ... };

$ curl -s http://localhost:13900/app.js | grep PAYLOAD_TYPES
const PAYLOAD_TYPES = { ... 5: 'Channel Msg', 6: 'Group Data', 7: 'Anon Req' ... };

$ curl -s http://localhost:13900/roles.js | grep GRP_DATA
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', GRP_DATA: '#8b5cf6', ...
ADVERT: 'advert', GRP_TXT: 'grp-txt', GRP_DATA: 'grp-data', ...
```

Frontend tests pass: `test-packets.js` 82/82, `test-hash-color.js`
32/32.

## Out of scope

Consolidating the duplicated PAYLOAD_TYPES / TYPE_NAMES tables into a
single source of truth is a separate cleanup. Two parallel name maps
continues to be a footgun (this is the second time a new type's been
added to one but not the other).

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-02 09:55:09 -07:00
Kpa-clawbot b3a9677c52 feat(ingestor + server): observerBlacklist config (#962) (#963)
## Summary

Implements `observerBlacklist` config — mirrors the existing
`nodeBlacklist` pattern for observers. Drop observers by pubkey at
ingest, with defense-in-depth filtering on the server side.

Closes #962

## Changes

### Ingestor (`cmd/ingestor/`)
- **`config.go`**: Added `ObserverBlacklist []string` field +
`IsObserverBlacklisted()` method (case-insensitive, whitespace-trimmed)
- **`main.go`**: Early return in `handleMessage` when `parts[2]`
(observer ID from MQTT topic) matches blacklist — before status
handling, before IATA filter. No UpsertObserver, no observations, no
metrics insert. Log line: `observer <pubkey-short> blacklisted,
dropping`

### Server (`cmd/server/`)
- **`config.go`**: Same `ObserverBlacklist` field +
`IsObserverBlacklisted()` with `sync.Once` cached set (same pattern as
`nodeBlacklist`)
- **`routes.go`**: Defense-in-depth filtering in `handleObservers` (skip
blacklisted in list) and `handleObserverDetail` (404 for blacklisted ID)
- **`main.go`**: Startup `softDeleteBlacklistedObservers()` marks
matching rows `inactive=1` so historical data is hidden
- **`neighbor_persist.go`**: `softDeleteBlacklistedObservers()`
implementation

### Tests
- `cmd/ingestor/observer_blacklist_test.go`: config method tests
(case-insensitive, empty, nil)
- `cmd/server/observer_blacklist_test.go`: config tests + HTTP handler
tests (list excludes blacklisted, detail returns 404, no-blacklist
passes all, concurrent safety)

## Config

```json
{
  "observerBlacklist": [
    "EE550DE547D7B94848A952C98F585881FCF946A128E72905E95517475F83CFB1"
  ]
}
```

## Verification (Rule 18 — actual server output)

**Before blacklist** (no config):
```
Total: 31
DUBLIN in list: True
```

**After blacklist** (DUBLIN Observer pubkey in `observerBlacklist`):
```
[observer-blacklist] soft-deleted 1 blacklisted observer(s)
Total: 30
DUBLIN in list: False
```

Detail endpoint for blacklisted observer returns **404**.

All existing tests pass (`go test ./...` for both server and ingestor).

---------

Co-authored-by: you <you@example.com>
2026-05-01 23:11:27 -07:00
Kpa-clawbot 707228ad91 ci: update go-server-coverage.json [skip ci] 2026-05-02 02:14:13 +00:00
Kpa-clawbot 8d379baf5e ci: update go-ingestor-coverage.json [skip ci] 2026-05-02 02:14:12 +00:00
Kpa-clawbot 3b436c768b ci: update frontend-tests.json [skip ci] 2026-05-02 02:14:11 +00:00
Kpa-clawbot 6d49cf939c ci: update frontend-coverage.json [skip ci] 2026-05-02 02:14:09 +00:00
Kpa-clawbot 8d39b33111 ci: update e2e-tests.json [skip ci] 2026-05-02 02:14:08 +00:00
Kpa-clawbot e1a1be1735 fix(server): add observers.inactive column at startup if missing (root cause of CI flake) (#961)
## The actual root cause

PR #954 added `WHERE inactive IS NULL OR inactive = 0` to the server's
observer queries, but the `inactive` column is only added by the
**ingestor** migration (`cmd/ingestor/db.go:344-354`). When the server
runs against a DB the ingestor never touched (e.g. the e2e fixture), the
column doesn't exist:

```
$ sqlite3 test-fixtures/e2e-fixture.db "SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0;"
Error: no such column: inactive
```

The server's `db.QueryRow().Scan()` swallows that error →
`totalObservers` stays 0 → `/api/observers` returns empty → map test
fails with "No map markers/overlays found".

This explains all the failing CI runs since #954 merged. PR #957
(freshen fixture) helped with the `nodes` time-rot but couldn't fix the
missing-column problem. PR #960 (freshen observers) added the right
timestamps but the column was still missing. PR #959 (data-loaded in
finally) fixed a different real bug. None of those touched the actual
mechanism.

## Fix

Mirror the existing `ensureResolvedPathColumn` pattern: add
`ensureObserverInactiveColumn` that runs at server startup, checks if
the column exists via `PRAGMA table_info`, adds it with `ALTER TABLE
observers ADD COLUMN inactive INTEGER DEFAULT 0` if missing.

Wired into `cmd/server/main.go` immediately after
`ensureResolvedPathColumn`.

## Verification

End-to-end on a freshened fixture:

```
$ sqlite3 /tmp/e2e-verify.db "PRAGMA table_info(observers);" | grep inactive
(no output — column absent)

$ ./cs-fixed -port 13702 -db /tmp/e2e-verify.db -public public &
[store] Added inactive column to observers

$ curl 'http://localhost:13702/api/observers'
returned=31    # was 0 before fix
```

`go test ./...` passes (19.8s).

## Lessons

I should have run `sqlite3 fixture "SELECT ... WHERE inactive ..."`
directly the first time the map test failed after #954 instead of
writing four "fix" PRs that didn't address the actual mechanism.
Apologies for the wild goose chase.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 19:04:23 -07:00
Kpa-clawbot b97fe5758c fix(ci): freshen observer timestamps so RemoveStaleObservers doesn't prune them on startup (#960)
## Bug

Master CI failing on `Map page loads with markers: No map
markers/overlays found` since #954 (observer filter) merged.

## Root cause chain

1. Fixture has 31 observers, all dated `2026-03-26` to `2026-03-29` (33+
days old)
2. PR #957's `tools/freshen-fixture.sh` shifts `nodes`, `transmissions`,
`neighbor_edges` timestamps but NOT `observers.last_seen`
3. Server startup runs `RemoveStaleObservers(14)` per
`cmd/server/main.go:382` — marks all 33-day-old observers `inactive=1`
4. PR #954's `GetObservers` filter then excludes them
5. `/api/observers` returns 0 → map has no observer markers → test
asserts >0 → fails

Server log line confirms: `[db] transmissions=499 observations=500
nodes=200 observers=0`

## Fix

Extend `freshen-fixture.sh` to also shift `observers.last_seen` (same
algorithm — preserve relative ordering, max anchored to now). Also
defensively clear any stale `inactive=1` flags from prior failed runs.
The `inactive` column may not exist on a fresh fixture (server adds via
migration); script silently no-ops if column absent.

## Verification

```
$ bash tools/freshen-fixture.sh /tmp/test.db
nodes: min=2026-05-01T11:07:29Z max=2026-05-01T18:49:02Z
observers: count=31 max=2026-05-01T18:49:02Z
```

After: 31 observers, oldest 3 days old, within the 14d retention window.
Server's startup prune won't touch them.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 16:55:25 -07:00
Kpa-clawbot 568de4b441 fix(observers): exclude soft-deleted observers from /api/observers and totalObservers (#954)
## Bug

`/api/observers` returned soft-deleted (inactive=1) observers. Operators
saw stale observers in the UI even after the auto-prune marked them
inactive on schedule. Reproduced on staging: 14 observers older than 14
days returned by the API; all of them had `inactive=1` in the DB.

## Root cause

`DB.GetObservers()` (`cmd/server/db.go:974`) ran `SELECT ... FROM
observers ORDER BY last_seen DESC` with no WHERE filter. The
`RemoveStaleObservers` path correctly soft-deletes by setting
`inactive=1`, but the read path didn't honor it.

`statsRow` (`cmd/server/db.go:234`) had the same bug — `totalObservers`
count included soft-deleted rows.

## Fix

Add `WHERE inactive IS NULL OR inactive = 0` to both:

```go
// GetObservers
"SELECT ... FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC"

// statsRow.TotalObservers
"SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0"
```

`NULL` check preserves backward compatibility with rows from before the
`inactive` migration.

## Tests

Added regression `TestGetObservers_ExcludesInactive`:
- Seed two observers, mark one inactive, assert `GetObservers()` returns
only the other.
- **Anti-tautology gate verified**: reverting the WHERE clause causes
the test to fail with `expected 1 observer, got 2` and `inactive
observer obs2 should be excluded`.

`go test ./...` passes (19.6s).

## Out of scope

- `GetObserverByID` lookup at line 1009 still returns inactive observers
— this is intentional, so an old deep link to `/observers/<id>` shows
"inactive" rather than 404.
- Frontend may also have its own caching layer; this fix is server-side
only.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
2026-05-01 17:51:08 +00:00
Kpa-clawbot 04c8558768 fix(spa): data-loaded setAttribute in finally so it fires on errors (#959)
## Bug

PR #958 added `data-loaded="true"` attributes for E2E sync, but placed
the `setAttribute` call inside the `try` block of `loadNodes()` /
`loadPackets()` / `loadNodes()` (map). When the API call failed (e.g.
`/api/observers` returns 500, or any other exception), the `catch`
swallowed the error and `setAttribute` was never reached. E2E tests then
waited 15s for `[data-loaded="true"]` and timed out.

This blocked PR #954 CI repeatedly with `Map page loads with markers:
page.waitForSelector: Timeout 15000ms exceeded`.

## Fix

Move `setAttribute('data-loaded', 'true')` to a `finally` block in all
three handlers (`map.js`, `nodes.js`, `packets.js`). The attribute now
fires on both success and error paths, so E2E tests proceed (test still
asserts on the actual rendered state — markers, rows, etc — so an empty
page still fails the right assertion, just much faster).

Removed the duplicate setAttribute calls inside the try blocks (the
finally is the single source of truth now).

## Verification

- `node test-packets.js` 82/82 
- `node test-hash-color.js` 32/32 
- Code reading: each `finally` runs after either success or catch, sets
the same attribute on the same container element.

## Why CI didn't catch this on #958

The PR #958 tests passed because the staging fixture happened to load
successfully when those tests ran. The flake only manifests when an
upstream fetch fails (e.g. observer API returning unexpected shape,
network blip, server still warming).

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 10:49:21 -07:00
Kpa-clawbot 52b5ae86d6 ci: update go-server-coverage.json [skip ci] 2026-05-01 15:17:33 +00:00
Kpa-clawbot 8397f2bb1c ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 15:17:32 +00:00
Kpa-clawbot ed65498281 ci: update frontend-tests.json [skip ci] 2026-05-01 15:17:31 +00:00
Kpa-clawbot c53af5cf66 ci: update frontend-coverage.json [skip ci] 2026-05-01 15:17:30 +00:00
Kpa-clawbot 9f606600e2 ci: update e2e-tests.json [skip ci] 2026-05-01 15:17:28 +00:00
Kpa-clawbot 053aef1994 fix(spa): decouple navigate() from theme fetch + add data-loaded sync attributes (#955) (#958)
## Summary

Fixes the chained async init race identified in RCA #3 of #955.

`navigate()` (which dispatches page handlers and fetches data) was gated
behind `/api/config/theme` resolving via `.finally()`. Tests use
`waitUntil: 'domcontentloaded'` which returns BEFORE theme fetch
resolves, creating a race condition where 3+ serial network requests
must complete before any DOM rows appear.

## Changes

### Decouple navigate() from theme fetch (public/app.js)
- Move `navigate()` call out of the theme fetch `.finally()` block
- Call it immediately on DOMContentLoaded — theme is purely cosmetic and
applies in parallel

### Add data-loaded sync attributes (public/nodes.js, map.js,
packets.js)
- Set `data-loaded="true"` on the container element after each page's
data fetch resolves and DOM renders
- Nodes: set on `#nodesLeft` after `loadNodes()` renders rows
- Map: set on `#leaflet-map` after `renderMarkers()` completes
- Packets: set on `#pktLeft` after `loadPackets()` renders rows

### Update E2E tests (test-e2e-playwright.js)
- Add `await page.waitForSelector('[data-loaded="true"]', { timeout:
15000 })` before row/marker assertions
- Increase map marker timeout from 3s to 8s as additional safety margin
- Tests now synchronize on data readiness rather than racing DOM
appearance

## Verification

- Spun up local server on port 13586 with e2e-fixture.db
- Confirmed navigate() is called immediately (not gated on theme)
- Confirmed data-loaded attributes are present in served JS
- API returns data correctly (2 nodes from fixture)

Closes #955 (RCA #3)

Co-authored-by: you <you@example.com>
2026-05-01 08:07:08 -07:00
Kpa-clawbot 7aef3c355c fix(ci): freshen fixture timestamps before E2E to avoid time-based filter exclusion (#955) (#957)
## Problem

The E2E fixture DB (`test-fixtures/e2e-fixture.db`) has static
timestamps from March 29, 2026. The map page applies a default
`lastHeard=30d` filter, so once the fixture ages past 30 days all nodes
are excluded from `/api/nodes?lastHeard=30d` — causing the "Map page
loads with markers" test to fail deterministically.

This started blocking all CI on ~April 28, 2026 (30 days after March
29).

Closes #955 (RCA #1: time-based fixture rot)

## Fix

Added `tools/freshen-fixture.sh` — a small script that shifts all
`last_seen`/`first_seen` timestamps forward so the newest is near
`now()`, preserving relative ordering between nodes. Runs in CI before
the Go server starts. Does **not** modify the checked-in fixture (no
binary blob churn).

## Verification

```
$ cp test-fixtures/e2e-fixture.db /tmp/fix4.db
$ bash tools/freshen-fixture.sh /tmp/fix4.db
Fixture timestamps freshened in /tmp/fix4.db
nodes: min=2026-05-01T07:10:00Z max=2026-05-01T14:51:33Z

$ ./corescope-server -port 13585 -db /tmp/fix4.db -public public &
$ curl -s "http://localhost:13585/api/nodes?limit=200&lastHeard=30d" | jq '{total, count: (.nodes | length)}'
{
  "total": 200,
  "count": 200
}
```

All 200 nodes returned with the 30-day filter after freshening (vs 0
without the fix).

Co-authored-by: you <you@example.com>
2026-05-01 08:06:19 -07:00
Kpa-clawbot 9ac484607f ci: update go-server-coverage.json [skip ci] 2026-05-01 15:05:20 +00:00
Kpa-clawbot b562de32ff ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 15:05:19 +00:00
Kpa-clawbot 6f0c58c94a ci: update frontend-tests.json [skip ci] 2026-05-01 15:05:18 +00:00
Kpa-clawbot 7d1c679f4f ci: update frontend-coverage.json [skip ci] 2026-05-01 15:05:17 +00:00
Kpa-clawbot ead08c721d ci: update e2e-tests.json [skip ci] 2026-05-01 15:05:16 +00:00
Kpa-clawbot 57e272494d feat(server): /api/healthz readiness endpoint gated on store load (#955) (#956)
## Summary

Fixes RCA #2 from #955: the HTTP listener and `/api/stats` go live
before background goroutines (pickBestObservation, neighbor graph build)
finish, causing CI readiness checks to pass prematurely.

## Changes

1. **`cmd/server/healthz.go`** — New `GET /api/healthz` endpoint:
- Returns `503 {"ready":false,"reason":"loading"}` while background init
is running
   - Returns `200 {"ready":true,"loadedTx":N,"loadedObs":N}` once ready

2. **`cmd/server/main.go`** — Added `sync.WaitGroup` tracking
pickBestObservation and neighbor graph build goroutines. A coordinator
goroutine sets `readiness.Store(1)` when all complete.
`backfillResolvedPathsAsync` is NOT gated (async by design, can take 20+
min).

3. **`cmd/server/routes.go`** — Wired `/api/healthz` before system
endpoints.

4. **`.github/workflows/deploy.yml`** — CI wait-for-ready loop now polls
`/api/healthz` instead of `/api/stats`.

5. **`cmd/server/healthz_test.go`** — Tests for 503-before-ready,
200-after-ready, JSON shape, and anti-tautology gate.

## Rule 18 Verification

Built and ran against `test-fixtures/e2e-fixture.db` (499 tx):
- With the small fixture DB, init completes in <300ms so both immediate
and delayed curls return 200
- Unit tests confirm 503 behavior when `readiness=0` (simulating slow
init)
- On production DBs with 100K+ txs, the 503 window would be 5-15s
(pickBestObservation processes in 5000-tx chunks with 10ms yields)

## Test Results

```
=== RUN   TestHealthzNotReady    --- PASS
=== RUN   TestHealthzReady       --- PASS  
=== RUN   TestHealthzAntiTautology --- PASS
ok  github.com/corescope/server  19.662s (full suite)
```

Co-authored-by: you <you@example.com>
2026-05-01 07:55:57 -07:00
Kpa-clawbot d870a693d0 ci: update go-server-coverage.json [skip ci] 2026-05-01 09:36:22 +00:00
Kpa-clawbot d9904cc138 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 09:36:21 +00:00
Kpa-clawbot 7aa59eabde ci: update frontend-tests.json [skip ci] 2026-05-01 09:36:20 +00:00
Kpa-clawbot ac7d2b64f7 ci: update frontend-coverage.json [skip ci] 2026-05-01 09:36:19 +00:00
Kpa-clawbot fd3bf1a892 ci: update e2e-tests.json [skip ci] 2026-05-01 09:36:18 +00:00
Kpa-clawbot f16afe7fdf ci: update go-server-coverage.json [skip ci] 2026-05-01 09:34:29 +00:00
Kpa-clawbot ed66e54e57 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 09:34:28 +00:00
Kpa-clawbot 22079a1fc4 ci: update frontend-tests.json [skip ci] 2026-05-01 09:34:27 +00:00
Kpa-clawbot 232882d308 ci: update frontend-coverage.json [skip ci] 2026-05-01 09:34:25 +00:00
Kpa-clawbot fb640bcfc3 ci: update e2e-tests.json [skip ci] 2026-05-01 09:34:25 +00:00
Kpa-clawbot 096887228f release: v3.6.0 'Forensics' notes 2026-05-01 09:24:42 +00:00
Kpa-clawbot 4c39f041ba ci: update go-server-coverage.json [skip ci] 2026-05-01 09:10:25 +00:00
Kpa-clawbot 1c755ed525 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 09:10:24 +00:00
Kpa-clawbot c78606a416 ci: update frontend-tests.json [skip ci] 2026-05-01 09:10:24 +00:00
Kpa-clawbot 718d2e201a ci: update frontend-coverage.json [skip ci] 2026-05-01 09:10:23 +00:00
Kpa-clawbot d3d41f3bf2 ci: update e2e-tests.json [skip ci] 2026-05-01 09:10:22 +00:00
Kpa-clawbot 7bb5ff9a7f fix(e2e): tag flying-packet polyline so test selector doesn't pick up geofilter polygons (#953)
## Bug

Master CI failing on `Map trace polyline uses hash-derived color when
toggle ON`. The test selector `path.leaflet-interactive` was too broad —
it matched **geofilter region polygons** (`L.polygon` calls in
`live.js:1052`/`map.js:327`), which are styled with theme variables, not
`hsl()`. None of those polygons have an `hsl(` stroke, so the assertion
failed even though the actual flying-packet polylines DO use hash colors
correctly.

## Fix

1. Tag flying-packet polylines with a dedicated class
`live-packet-trace` (`public/live.js:2728`).
2. Update the test selector to target that class specifically.
3. Treat "no flying-packet polylines drawn in the test window" as SKIP
(not fail) — animation may not trigger in 3s.

## Verification (rule 18)

- Read implementation at `live.js:2724-2729`: polyline color IS set from
`hashFill` when toggle is ON. The implementation is correct.
- Read polygon callers at `live.js:1052` (geofilter regions) — confirmed
they share the same `path.leaflet-interactive` class.
- The test was selecting wrong DOM nodes; fix narrows to dedicated
class.

No code logic changed — only DOM tagging + test selector.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 09:00:49 +00:00
Kpa-clawbot b9758111b0 feat(hash-color): bright vivid fill + dark outline + live feed/polyline surfaces (#951)
## Hash-Color: Bright Vivid Fill + Dark Outline + Extended Surfaces

Follow-up to #948 (merged). Revises the hash-color algorithm for better
perceptual discrimination and extends hash coloring to additional Live
page surfaces.

### Algorithm Changes (`public/hash-color.js`)
- **Hue**: bytes 0-1 (16-bit → 0-360°) — unchanged
- **Saturation**: byte 2 (55-95%) — NEW, was fixed 70%
- **Lightness**: byte 3 (light 50-65%, dark 55-72%) — NEW, was fixed
L=30/38/65
- **Outline** (`hashToOutline`): same-hue dark color (L=25% light, L=15%
dark) — NEW
- Sentinel threshold raised to 8 hex chars (need 4 bytes of entropy)
- Drops WCAG fill-darkening approach — outline carries contrast instead

### Live Page Updates (`public/live.js`)
- **Dot marker**: uses `hashToOutline()` for stroke (was TYPE_COLOR)
- **Polyline trace**: uses hash fill color (unified dot + trace by hash)
- **Feed items**: 4px `border-left` stripe matching packets table

### Test Updates
- `test-hash-color.js`: 32 tests (S variability, L variability, outline
< fill, same hue, pairwise distance)
- `test-e2e-playwright.js`: 2 new assertions (feed stripe, polyline hsl
stroke)

### Verification
- 20 real advert hashes from fixture DB: all produce unique hues (20/20)
- Pairwise HSL distance: avg=0.51, min=0.04
- Go server built and run against fixture DB — HTML serves updated
module
- VM sandbox render-check confirms distinct vivid fills with darker
outlines

Closes #946 §2.10/§2.11 scope extension.

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 08:53:04 +00:00
Kpa-clawbot 3bd354338e ci: update go-server-coverage.json [skip ci] 2026-05-01 08:20:25 +00:00
Kpa-clawbot 81ae2689f0 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 08:20:25 +00:00
Kpa-clawbot f428064efe ci: update frontend-tests.json [skip ci] 2026-05-01 08:20:24 +00:00
Kpa-clawbot c024a55328 ci: update frontend-coverage.json [skip ci] 2026-05-01 08:20:23 +00:00
Kpa-clawbot 7034fe74b5 ci: update e2e-tests.json [skip ci] 2026-05-01 08:20:22 +00:00
Kpa-clawbot 0a9a4c4223 feat(live + packets): color packet markers by hash (#946) (#948)
## Summary

Implements #946 — deterministic HSL coloring of packet markers by hash
for visual propagation tracing.

### What's new

1. **`public/hash-color.js`** — Pure IIFE
(`window.HashColor.hashToHsl(hashHex, theme)`) deriving hue from first 2
bytes of packet hash. Theme-aware lightness with WCAG ≥3.0 contrast
against `--content-bg` (`#f4f5f7` light / `#0f0f23` dark,
`style.css:32,55`). Green/yellow zone (hue 45°-195°) uses L=30% in light
theme to maintain contrast.

2. **Live page dots + contrails** — `drawAnimatedLine` fills the flying
dot and tints the contrail polyline with the hash-derived HSL when
toggle is ON. Ghost-hop dots remain grey (`#94a3b8`). Matrix mode path
(`drawMatrixLine`) is untouched.

3. **Packets table stripe** — `border-left: 4px solid <hsl>` on `<tr>`
in both `buildGroupRowHtml` (group + child rows) and `buildFlatRowHtml`.
Absent when toggle OFF.

4. **Toggle UI** — "Color by hash" checkbox in `#liveControls` between
Realistic and Favorites. Default ON. Persisted to
`localStorage('meshcore-color-packets-by-hash')`. Dispatches `storage`
event for cross-tab sync. Packets page listens and re-renders.

### Performance

- `hashToHsl` is O(1) — two `parseInt` calls + arithmetic. No allocation
beyond the result string.
- Called once per `drawAnimatedLine` invocation (not per animation
frame).
- Packets table: called once per visible row during render (existing
virtualization applies).

### Tests

- `test-hash-color.js`: 16 unit tests — purity, theme split, yellow-zone
clamp, sentinel, variability (anti-tautology gate), WCAG sweep (step 15°
both themes).
- `test-packets.js`: 82 tests still passing (no regression).
- `test-e2e-playwright.js`: 4 new E2E tests — toggle presence/default,
persistence across reload, table stripe present when ON, absent when
OFF.

### Acceptance criteria addressed

All items from spec §6 implemented. TYPE_COLORS retained on
borders/lines. Ghost hops stay grey. Matrix mode suppressed. Cross-tab
storage event dispatched.

Closes #946

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 01:10:11 -07:00
Kpa-clawbot 994544604f fix(path-inspector): Show on Map missed origin and stripped first hop (#950)
## Bug

Path Inspector "Show on Map" only rendered the first node of a candidate
path. Multi-hop candidates appeared as a single dot instead of a
polyline.

## Root cause

`public/path-inspector.js:186-188` and `public/map.js:574` (cross-page
handler) called:

```js
drawPacketRoute(candidate.path.slice(1), candidate.path[0]);
```

But `drawPacketRoute(hopKeys, origin)` (`public/map.js:390`) expects
`origin` to be an **object** with `pubkey`/`lat`/`lon`/`name` properties
— not a bare string. The code at lines 451-460 does `origin.lat` /
`origin.pubkey` lookups; with a string, both branches fail, `originPos`
stays null, and the originating node never gets prepended to
`positions`.

Combined with `slice(1)` stripping the head, the resulting polyline was
missing the first hop AND the origin marker — and short paths could
collapse to a single resolved node.

## Fix

Pass the full path as `hopKeys` and `null` as origin. `drawPacketRoute`
already iterates `hopKeys`, resolves each against `nodes[]`, and draws a
marker for every resolved hop. The "origin" arg was meant for cases
where the originator is a separate object (e.g., from packet detail with
sender metadata), not for paths where the origin IS the first hop.

```js
drawPacketRoute(candidate.path, null);
```

Two call sites fixed: in-page direct call (`path-inspector.js:188`) and
cross-page handler (`map.js:574`).

## Verification

**Code reading only.** I did NOT manually load the page or visually
verify the polyline renders. Reviewer should:
1. Open Path Inspector, query a multi-prefix path with ≥3 known hops
2. Click "Show on Map"
3. Confirm polyline draws through every resolved node, not just the
first

`drawPacketRoute` is hard to unit-test without a real Leaflet map, so no
automated test added.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
2026-05-01 08:01:37 +00:00
Kpa-clawbot 405094f7eb ci: update go-server-coverage.json [skip ci] 2026-05-01 07:24:51 +00:00
Kpa-clawbot 89b63dc38a ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 07:24:50 +00:00
Kpa-clawbot 8194801b94 ci: update frontend-tests.json [skip ci] 2026-05-01 07:24:49 +00:00
Kpa-clawbot 4427c92c32 ci: update frontend-coverage.json [skip ci] 2026-05-01 07:24:48 +00:00
Kpa-clawbot c5799f868e ci: update e2e-tests.json [skip ci] 2026-05-01 07:24:47 +00:00
Kpa-clawbot 6345c6fb05 fix(ingestor): observability + bounded backoff for MQTT reconnect (#947) (#949)
## Summary

Fixes #947 — MQTT ingestor silently stalls after `pingresp not received`
disconnect due to paho's default 10-minute reconnect backoff and zero
observability of reconnect attempts.

## Changes

### `cmd/ingestor/main.go`
- **Extract `buildMQTTOpts()`** — encapsulates MQTT client option
construction for testability
- **`SetMaxReconnectInterval(30s)`** — bounds paho's default 10-minute
exponential backoff (source: `options.go:137` in
`paho.mqtt.golang@v1.5.0`)
- **`SetConnectTimeout(10s)`** — prevents stuck connect attempts from
blocking reconnect cycle
- **`SetWriteTimeout(10s)`** — prevents stuck publish writes
- **`SetReconnectingHandler`** — logs `MQTT [<tag>] reconnecting to
<broker>` on every reconnect attempt, giving operators visibility into
retry behavior
- **Enhanced `SetConnectionLostHandler`** — now includes broker address
in log line for multi-source disambiguation

### `cmd/ingestor/mqtt_opts_test.go` (new)
- Tests verify `MaxReconnectInterval`, `ConnectTimeout`, `WriteTimeout`
are set correctly
- Tests verify credential and TLS configuration
- Anti-tautology: tests fail if timing settings are removed from
`buildMQTTOpts()`

## Operator impact

After this change, a pingresp disconnect produces:
```
MQTT [staging] disconnected from tcp://broker:1883: pingresp not received, disconnecting
MQTT [staging] reconnecting to tcp://broker:1883
MQTT [staging] reconnecting to tcp://broker:1883
MQTT [staging] connected to tcp://broker:1883
MQTT [staging] subscribed to meshcore/#
```

Max gap between disconnect and first reconnect attempt: ~30s (was up to
10 minutes).

---------

Co-authored-by: you <you@example.com>
2026-05-01 00:01:07 -07:00
efiten f99c9c21d9 feat(live): node filter — show only traffic through a specific node (closes #771 M3) (#924)
Adds a node filter input to the live controls bar. When active, only
packets whose hop chain passes through the selected node(s) are
animated. A counter shows "Showing X of Y" so operators know traffic is
filtered, not absent.

## Changes
- `packetInvolvesFilterNode(pkt, filterKeys)`: pure filter function
using same prefix-matching logic as the favorites filter
- `setNodeFilter(keys)`: sets filter state, resets counters, persists to
localStorage
- `updateNodeFilterUI()`: updates counter + clear button visibility +
datalist autocomplete
- Filter input in controls bar (after Favorites toggle): text input +
datalist autocomplete + × clear button + counter div
- Filter wired into `renderPacketTree`: increments total/shown counters,
returns early when packet doesn't match
- URL hash sync: `?node=ABCD1234` — read on init, written on filter
change

## Tests
8 new unit tests covering filter logic, localStorage persistence, and
edge cases; 78 pass, 0 regressions.

Closes #771 (M3 of 3)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 23:53:08 -07:00
efiten 69080a852f feat(geofilter-docs): app-served docs page (#820) (#900)
## Summary

- Adds `public/geofilter-docs.html` — a self-contained, app-served
documentation page for the geofilter feature, matching the builder's
dark theme
- Updates the GeoFilter Builder's help-bar "Documentation" link from
GitHub markdown URL to the local `/geofilter-docs.html`

## Docs coverage

Polygon syntax, coordinate ordering (`[lat, lon]` — not GeoJSON `[lon,
lat]`), multi-polygon clarification (single polygon only), examples
(Belgium rectangle + irregular shape), legacy bounding box format, prune
script usage.

## Test plan

- [x] Open `/geofilter-docs.html` — dark theme renders, all sections
visible
- [x] Open `/geofilter-builder.html` → click "Documentation" → navigates
to `/geofilter-docs.html` in same tab
- [x] Click "← GeoFilter Builder" on docs page → navigates back to
`/geofilter-builder.html`

Closes #820

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 23:50:54 -07:00
efiten e460932668 fix(store): apply retentionHours cutoff in Load() to prevent OOM on cold start (#917)
## Problem

`Load()` loaded all transmissions from the DB regardless of
`retentionHours`, so `buildSubpathIndex()` processed the full DB history
on every startup. On a DB with ~280K paths this produces ~13.5M subpath
index entries, OOM-killing the process before it ever starts listening —
causing a supervisord crash loop with no useful error message.

## Fix

Apply the same `retentionHours` cutoff to `Load()`'s SQL that
`EvictStale()` already uses at runtime. Both conditions
(`retentionHours` window and `maxPackets` cap) are combined with AND so
neither safety limit is bypassed.

Startup now builds indexes only over the retention window, making
startup time and memory proportional to recent activity rather than
total DB history.

## Docs

- `config.example.json`: adds `retentionHours` to the `packetStore`
block with recommended value `168` (7 days) and a warning about `0` on
large DBs
- `docs/user-guide/configuration.md`: documents the field and adds an
explicit OOM warning

## Test plan

- [x] `cd cmd/server && go test ./... -run TestRetentionLoad` — covers
the retention-filtered load: verifies packets outside the window are
excluded, and that `retentionHours: 0` still loads everything
- [x] Deploy on an instance with a large DB (>100K paths) and
`retentionHours: 168` — server reaches "listening" in seconds instead of
OOM-crashing
- [x] Verify `config.example.json` has `retentionHours: 168` in the
`packetStore` block
- [x] Verify `docs/user-guide/configuration.md` documents the field and
warning

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Kpa-clawbot <kpaclawbot@outlook.com>
2026-05-01 06:47:55 +00:00
Kpa-clawbot aeae7813bc fix: enable SQLite incremental auto-vacuum so DB shrinks after retention (#919) (#920)
Closes #919

## Summary

Enables SQLite incremental auto-vacuum so the database file actually
shrinks after retention reaper deletes old data. Previously, `DELETE`
operations freed pages internally but never returned disk space to the
OS.

## Changes

### 1. Auto-vacuum on new databases
- `PRAGMA auto_vacuum = INCREMENTAL` set via DSN pragma before
`journal_mode(WAL)` in the ingestor's `OpenStoreWithInterval`
- Must be set before any tables are created; DSN ordering ensures this

### 2. Post-reaper incremental vacuum
- `PRAGMA incremental_vacuum(N)` runs after every retention reaper cycle
(packets, metrics, observers, neighbor edges)
- N defaults to 1024 pages, configurable via `db.incrementalVacuumPages`
- Noop on `auto_vacuum=NONE` databases (safe before migration)
- Added to both server and ingestor

### 3. Opt-in full VACUUM for existing databases
- Startup check logs a clear warning if `auto_vacuum != INCREMENTAL`
- `db.vacuumOnStartup: true` config triggers one-time `PRAGMA
auto_vacuum = INCREMENTAL; VACUUM`
- Logs start/end time for operator visibility

### 4. Documentation
- `docs/user-guide/configuration.md`: retention section notes that
lowering retention doesn't immediately shrink the DB
- `docs/user-guide/database.md`: new guide covering WAL, auto-vacuum,
migration, manual VACUUM

### 5. Tests
- `TestNewDBHasIncrementalAutoVacuum` — fresh DB gets `auto_vacuum=2`
- `TestExistingDBHasAutoVacuumNone` — old DB stays at `auto_vacuum=0`
- `TestVacuumOnStartupMigratesDB` — full VACUUM sets `auto_vacuum=2`
- `TestIncrementalVacuumReducesFreelist` — DELETE + vacuum shrinks
freelist
- `TestCheckAutoVacuumLogs` — handles both modes without panic
- `TestConfigIncrementalVacuumPages` — config defaults and overrides

## Migration path for existing databases

1. On startup, CoreScope logs: `[db] auto_vacuum=NONE — DB needs
one-time VACUUM...`
2. Set `db.vacuumOnStartup: true` in config.json
3. Restart — VACUUM runs (blocks startup, minutes on large DBs)
4. Remove `vacuumOnStartup` after migration

## Test results

```
ok  github.com/corescope/server    19.448s
ok  github.com/corescope/ingestor  30.682s
```

---------

Co-authored-by: you <you@example.com>
2026-04-30 23:45:00 -07:00
Kpa-clawbot 43f17ed770 ci: update go-server-coverage.json [skip ci] 2026-05-01 06:40:20 +00:00
Kpa-clawbot 9ada3d7e93 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 06:40:19 +00:00
Kpa-clawbot 7a04462dde ci: update frontend-tests.json [skip ci] 2026-05-01 06:40:18 +00:00
Kpa-clawbot c0f39e298a ci: update frontend-coverage.json [skip ci] 2026-05-01 06:40:17 +00:00
Kpa-clawbot cc2b731c77 ci: update e2e-tests.json [skip ci] 2026-05-01 06:40:16 +00:00
efiten f2689123f3 fix(geobuilder): wrap longitude to [-180,180] to fix southern hemisphere polygons (#925)
## Summary

- Fixes #912 — geofilter-builder generates out-of-range longitudes for
southern hemisphere locations
- Root cause: Leaflet's `latlng.lng` is unbounded; panning from Europe
to Australia produces values like `-210` instead of `150`
- Fix: call `latlng.wrap()` in `latLonPair()` to normalise longitude to
`[-180, 180]` before writing the config JSON

## Details

When the user opens the builder (default view: Europe, `[50.5, 4.4]`)
and pans east to Australia, Leaflet tracks the cumulative pan offset and
returns `lng = 150 - 360 = -210` to keep the path continuous. The
builder was passing that raw value straight into the output JSON,
producing coordinates that fall outside any valid bounding box.

`L.LatLng.wrap()` is Leaflet's built-in normalisation method — collapses
any longitude to `[-180, 180]` with no loss of precision.

## Test plan

- [x] Open the builder, navigate to NSW Australia, place a polygon —
confirm longitudes are `~141`–`154`, not `~-219`–`-206`
- [x] Repeat for a northern hemisphere location (e.g. Belgium) — confirm
output is unchanged
- [x] Paste the generated config into CoreScope — confirm nodes appear
on Maps and Live view

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Kpa-clawbot <kpaclawbot@outlook.com>
2026-05-01 06:40:12 +00:00
Kpa-clawbot 308b67ed66 chore: remove tautological/vacuous frontend tests (#938)
## Summary

Remove tautological/vacuous frontend tests identified in the test audit
(2026-04-30).

## Deleted Files

- **`test-anim-perf.js`** — Tests local simulation functions, not actual
`live.js` animation code. Behavioral equivalents exist in
`test-live-anims.js`.
- **`test-channel-add-ux.js`** — Pure HTML/CSS substring grep tests
(`src.includes(...)`) with zero behavioral value. Any refactor breaks
them without indicating a real bug.

## Edited Files

- **`test-aging.js`** — Removed `mockGetStatusInfo` (5 tests) and
`getStatusTooltip` (6 tests) blocks. These re-implement SUT logic inside
the test file and assert on the local copy, not the real code. The
GOLD-rated `getNodeStatus` boundary tests (18 assertions) and the BUG
CHECK section are preserved.

## Why

These tests are unfailable by construction — they pass regardless of
whether the code under test is correct. They inflate test counts without
providing regression protection, and they break on harmless refactors
(false negatives).

## Validation

- `npm run test:unit` passes (all 3 files: 62 + 18 + 601 assertions).
- 9 pre-existing failures in `test-frontend-helpers.js` (hop-resolver
affinity tests, unrelated to this change).
- No harness references (`test-all.sh`, `package.json`) needed updating
— deleted files were not listed there.

Co-authored-by: you <you@example.com>
2026-04-30 23:28:30 -07:00
Kpa-clawbot 54f7f9d35b feat: path-prefix candidate inspector with map view (#944) (#945)
## feat: path-prefix candidate inspector with map view (#944)

Implements the locked spec from #944: a beam-search-based path prefix
inspector that enumerates candidate full-pubkey paths from short hex
prefixes and scores them.

### Server (`cmd/server/path_inspect.go`)

- **`POST /api/paths/inspect`** — accepts 1-64 hex prefixes (1-3 bytes,
uniform length per request)
- Beam search (width 20) over cached `prefixMap` + `NeighborGraph`
- Per-hop scoring: edge weight (35%), GPS plausibility (20%), recency
(15%), prefix selectivity (30%)
- Geometric mean aggregation with 0.05 floor per hop
- Speculative threshold: score < 0.7
- Score cache: 30s TTL, keyed by (prefixes, observer, window)
- Cold-start: synchronous NeighborGraph rebuild with 2s hard timeout →
503 `{retry:true}`
- Body limit: 4096 bytes via `http.MaxBytesReader`
- Zero SQL queries in handler hot path
- Request validation: rejects empty, odd-length, >3 bytes, mixed
lengths, >64 hops

### Frontend (`public/path-inspector.js`)

- New page under Tools route with input field (comma/space separated hex
prefixes)
- Client-side validation with error feedback
- Results table: rank, score (color-coded speculative), path names,
per-hop evidence (collapsed)
- "Show on Map" button calls `drawPacketRoute` (one path at a time,
clears prior)
- Deep link: `#/tools/path-inspector?prefixes=2c,a1,f4`

### Nav reorganization

- `Traces` nav item renamed to `Tools`
- Backward-compat: `#/traces/<hash>` redirects to `#/tools/trace/<hash>`
- Tools sub-routing dispatches to traces or path-inspector

### Store changes

- Added `LastSeen time.Time` to `nodeInfo` struct, populated from
`nodes.last_seen`
- Added `inspectMu` + `inspectCache` fields to `PacketStore`

### Tests

- **Go unit tests** (`path_inspect_test.go`): scoreHop components, beam
width cap, speculative flag, all validation error cases, valid request
integration
- **Frontend tests** (`test-path-inspector.js`): parse
comma/space/mixed, validation (empty, odd, >3 bytes, mixed lengths,
invalid hex, valid)
- Anti-tautology gate verified: removing beam pruning fails width test;
removing validation fails reject tests

### CSS

- `--path-inspector-speculative` variable in both themes (amber, WCAG AA
on both dark/light backgrounds)
- All colors via CSS variables (no hardcoded hex in production code)

Closes #944

---------

Co-authored-by: you <you@example.com>
2026-04-30 23:28:16 -07:00
Kpa-clawbot dbd2726b27 ci: update go-server-coverage.json [skip ci] 2026-05-01 03:56:41 +00:00
Kpa-clawbot d9757626bc ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 03:56:40 +00:00
Kpa-clawbot 472c9f2aa2 ci: update frontend-tests.json [skip ci] 2026-05-01 03:56:39 +00:00
Kpa-clawbot dccfb0a328 ci: update frontend-coverage.json [skip ci] 2026-05-01 03:56:38 +00:00
Kpa-clawbot 086b8b7983 ci: update e2e-tests.json [skip ci] 2026-05-01 03:56:36 +00:00
efiten 9293ff408d fix(customize): skip panel re-render while a text field has focus (#927)
## Summary

- `_debouncedWrite()` was calling `_refreshPanel()` 300ms after every
keystroke
- `_refreshPanel()` sets `container.innerHTML`, destroying the focused
input element
- On mobile, losing the focused input collapses the virtual keyboard
after each keypress

Guard the `_refreshPanel()` call so it is skipped when
`document.activeElement` is inside the panel. The pipeline
(`_runPipeline`) still runs immediately — CSS updates apply. Override
dots update on the next natural re-render (tab switch, dark-mode toggle,
panel reopen).

## Root cause

`customize-v2.js` → `_debouncedWrite()` → `_refreshPanel()` →
`_renderPanel()` → `container.innerHTML = ...`

## Test plan

- [ ] New Playwright E2E test: open Customize, focus a text field, type,
wait 500ms past debounce — asserts input element is still connected to
DOM and focus remains inside panel
- [ ] Manual: open Customize on mobile (or DevTools mobile emulation),
type in Site Name — keyboard must not collapse after each character

Fixes #896
2026-04-30 20:46:59 -07:00
Kpa-clawbot 8c3b2e2248 test(e2e): retry click on table rows when handles detach (#943)
## Problem

E2E test `Node detail loads` intermittently fails with:

> elementHandle.click: Element is not attached to the DOM

(e.g. PR #938 CI run job 73889426640.) Same flake class as #ngStats
hydration race fixed in #940.

## Root cause

```js
const firstRow = await page.$('table tbody tr');
await firstRow.click();
```

Between the `$()` and `.click()`, the nodes table re-renders from a
WebSocket push. The captured handle is detached from the new DOM, click
throws.

## Fix

Switch to a selector-based click with a small retry loop (3 attempts ×
200ms backoff), so a detach mid-attempt re-resolves a fresh element.

Test logic unchanged; just defensive against re-render between query and
click.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-04-30 20:46:03 -07:00
Kpa-clawbot a4b99a98e1 ci: update go-server-coverage.json [skip ci] 2026-05-01 03:11:11 +00:00
Kpa-clawbot e05e3cb2f2 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 03:11:10 +00:00
Kpa-clawbot 61719c2218 ci: update frontend-tests.json [skip ci] 2026-05-01 03:11:09 +00:00
Kpa-clawbot e73f8996a8 ci: update frontend-coverage.json [skip ci] 2026-05-01 03:11:08 +00:00
Kpa-clawbot 292075fd0d ci: update e2e-tests.json [skip ci] 2026-05-01 03:11:07 +00:00
Kpa-clawbot 6273a8797b test(e2e): wait for #ngStats hydration before counting cards (#940)
## Problem

E2E test "Analytics Neighbor Graph tab renders canvas and stats"
intermittently fails with `Neighbor Graph stats should have >=3 cards,
got 0` (e.g. run 25185836669).

The same suite passes on neighboring runs (master + PR #939) within
minutes. The failure correlates with timing/load, not code change.

## Root cause

`#ngStats` cards render asynchronously after `#ngCanvas` mounts. The
test waits for the canvas, then immediately reads `#ngStats .stat-card`
count. On slower runs the read happens before stats hydrate → 0 cards →
assert fail.

Other Analytics tabs in the same file already use
`page.waitForFunction(...)` to poll for content (e.g. Distance tab on
line 654). Neighbor Graph block was missing the equivalent wait.

## Fix

Add the same defensive wait before counting:

```js
await page.waitForFunction(
  () => document.querySelectorAll('#ngStats .stat-card').length >= 3,
  { timeout: 8000 },
);
```

Test-only change. No frontend code touched. Bounded by 8s timeout
matching other Analytics waits.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-04-30 20:01:35 -07:00
Kpa-clawbot f84142b1d2 fix(packets): hash filter must bypass saved region filter (#939)
## Summary

Direct packet links like `/#/packets?hash=<HASH>` silently returned zero
rows when the user's saved region filter excluded the packet's observer
region. The packet existed and rendered in the side panel (which fetches
without region filter), but the main packet table was empty — leaving
the user with no rows to click and no obvious diagnostic.

## Root cause

`loadPackets()` in `public/packets.js` always added the `region` query
param to `/api/packets`, even when `filters.hash` was set. The
time-window filter is already correctly suppressed when `filters.hash`
is present (see line 619: `if (windowMin > 0 && !filters.hash)`); the
region filter should follow the same rule. A specific hash is an exact
identifier — the user wants THAT packet regardless of where their saved
region selection points.

## Change

Extracted the param-building logic into a pure helper
`buildPacketsParams(...)` so it's testable, then suppressed the `region`
param when `filters.hash` is set.

## Tests

Added 7 unit tests in `test-packets.js` covering:

- hash filter suppresses region (the bug)
- hash filter suppresses region with default windowMin=0
- region applies normally when no hash filter
- empty regionParam doesn't produce spurious `region=` param
- node/observer/channel filters still pass through alongside a hash
- groupByHash=true / false flag handling

Anti-tautology gate verified: reverting the one-line fix (`!filters.hash
&&` → removed) causes 3 of the 7 new tests to fail. The fix is the
smallest change that makes them pass.

`node test-packets.js`: 80 passed, 0 failed.

## Reproduction

1. Set region filter to e.g. `SJC`
2. Open `/#/packets?hash=<HASH_FROM_ANOTHER_REGION>`
3. Before fix: empty table, no diagnostic
4. After fix: packet renders

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-04-30 19:51:53 -07:00
Kpa-clawbot 17df9bf06e ci: update go-server-coverage.json [skip ci] 2026-05-01 02:51:21 +00:00
Kpa-clawbot d0a955b72c ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 02:51:20 +00:00
Kpa-clawbot cbf5b8bbd0 ci: update frontend-tests.json [skip ci] 2026-05-01 02:51:19 +00:00
Kpa-clawbot 5a30406392 ci: update frontend-coverage.json [skip ci] 2026-05-01 02:51:18 +00:00
Kpa-clawbot f3ee60ed62 ci: update e2e-tests.json [skip ci] 2026-05-01 02:51:16 +00:00
Kpa-clawbot d81852736d ci: re-enable staging deploy now that VM is back (#932)
Reverts the `if: false` guard from #908.

## Why
- Azure subscription was blocked, staging VM `meshcore-runner-2`
deallocated.
- Subscription unblocked, VM started, runner online, smoke CI [run
#25117292530](https://github.com/Kpa-clawbot/CoreScope/actions/runs/25117292530)
passed.
- Time to resume automatic staging deploys on master pushes.

## Changes
- `deploy` job: `if: false` → `if: github.event_name == 'push'`
(original condition from before #908).
- `publish` job: `needs: [build-and-publish]` → `needs: [deploy]`
(original wiring restored).

## Verify after merge
- Next master push triggers the full chain: go-test → e2e-test →
build-and-publish → deploy → publish.
- `docker ps` on staging VM shows `corescope-staging-go` updated to the
new commit.

Co-authored-by: you <you@example.com>
2026-04-30 19:40:51 -07:00
Kpa-clawbot 5678874128 fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)
Fixes #935

## Problem

`buildPrefixMap()` indexed ALL nodes regardless of role, causing
companions/sensors to appear as repeater hops when their pubkey prefix
collided with a path-hop hash byte.

## Fix

### Server (`cmd/server/store.go`)
- Added `canAppearInPath(role string) bool` — allowlist of roles that
can forward packets (repeater, room_server, room)
- `buildPrefixMap` now skips nodes that fail this check

### Client (`public/hop-resolver.js`)
- Added matching `canAppearInPath(role)` helper
- `init()` now only populates `prefixIdx` for path-eligible nodes
- `pubkeyIdx` remains complete — `resolveFromServer()` still resolves
any node type by full pubkey (for server-confirmed `resolved_path`
arrays)

## Tests

- `cmd/server/prefix_map_role_test.go`: 7 new tests covering role
filtering in prefix map and resolveWithContext
- `test-hop-resolver-affinity.js`: 4 new tests verifying client-side
role filter + pubkeyIdx completeness
- All existing tests updated to include `Role: "repeater"` where needed
- `go test ./cmd/server/...` — PASS
- `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing
centroid failure unrelated to this change)

## Commits

1. `fix: filter prefix map to only repeater/room roles (#935)` — server
implementation
2. `test: prefix map role filter coverage (#935)` — server tests
3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` —
client implementation
4. `test: hop-resolver role filter coverage (#935)` — client tests

---------

Co-authored-by: you <you@example.com>
2026-04-30 09:25:51 -07:00
Kpa-clawbot e857e0b1ce ci: update go-server-coverage.json [skip ci] 2026-04-25 00:34:06 +00:00
Kpa-clawbot 9da7c71cc5 ci: update go-ingestor-coverage.json [skip ci] 2026-04-25 00:34:05 +00:00
Kpa-clawbot 03484ea38d ci: update frontend-tests.json [skip ci] 2026-04-25 00:34:04 +00:00
Kpa-clawbot 27af4098e6 ci: update frontend-coverage.json [skip ci] 2026-04-25 00:34:03 +00:00
Kpa-clawbot 474023b9b7 ci: update e2e-tests.json [skip ci] 2026-04-25 00:34:02 +00:00
Kpa-clawbot f4484adb52 ci: move to GitHub-hosted runners, disable staging deploy (#908)
## Why

The Azure staging VM (`meshcore-vm`) is offline. Self-hosted runners are
unavailable, blocking all CI.

## What changed (per job)

| Job | Change | Revert |
|-----|--------|--------|
| `e2e-test` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`;
removed self-hosted-specific "Free disk space" step | Change `runs-on`
back to `[self-hosted, Linux]`, restore disk cleanup step |
| `build-and-publish` | `runs-on: [self-hosted, meshcore-runner-2]` →
`ubuntu-latest`; removed "Free disk space" prune step (noop on fresh
GH-hosted runners) | Change `runs-on` back, restore prune step |
| `deploy` | `if: false # disabled` (was `github.event_name == 'push'`);
`runs-on` kept as-is | Change `if:` back to `github.event_name ==
'push'` |
| `publish` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`; `needs:
[deploy]` → `needs: [build-and-publish]` | Change both back |

## Notes

- `go-test` and `release-artifacts` were already on `ubuntu-latest` —
untouched.
- The `deploy` job is disabled via `if: false` for trivial one-line
revert when the VM returns.
- No new `setup-*` actions were needed — `setup-node`, `setup-go`,
`docker/setup-buildx-action`, and `docker/login-action` were already
present.

Co-authored-by: you <you@example.com>
2026-04-24 17:25:53 -07:00
Kpa-clawbot a47fe26085 fix(channels): allow removing user-added keys for server-known channels (#898)
## Problem
Adding a channel key in the Channels UI for a channel the server already
knows about (e.g. `#public` from rainbow / config) leaves the
localStorage entry **unremovable**:

- `mergeUserChannels` sees the name already exists in the channel list
and skips the user entry.
- The existing channel row is never marked `userAdded:true`.
- The ✕ button (`[data-remove-channel]`) is only rendered for
`userAdded` rows.
- Result: stuck localStorage key, no UI to delete it.

There was also a latent bug in the remove handler — for non-`user:`
rows, it used the raw hash (e.g. `enc_11`) as the
`ChannelDecrypt.removeKey()` argument, but the storage key is the
channel **name**.

## Fix
1. **`mergeUserChannels`**: when a stored key matches an existing
channel by name/hash, mark the existing channel `userAdded=true` so the
✕ renders on it. (No magical/auto deletion of stored keys — the user
explicitly chooses to remove.)
2. **Remove handler**:
- Look up the channel object to get the correct display name for the
localStorage key.
- Keep server-known channels in the list when their ✕ is clicked (only
the user's localStorage entry + cache are cleared, `userAdded` is
unset). The channel still exists upstream.
   - Pure `user:`-prefixed channels are removed from the list as before.

## Repro
1. Open Channels.
2. Add a key for `#public` (or any rainbow-known channel).
3. Reload. Before this PR: row has no ✕, key is stuck. After this PR: ✕
appears, click clears the local key and cache.

## Files
- `public/channels.js` only.

## Notes
- No backend changes.
- No new APIs.
- Behaviour for purely user-added channels (e.g. `user:#somechannel` not
known to the server) is unchanged.

---------

Co-authored-by: you <you@example.com>
2026-04-22 21:41:43 -07:00
Kpa-clawbot abd9c46aa7 fix: side-panel Details button opens full-screen on desktop (#892)
## Symptom
🔍 Details button in the nodes side panel does nothing on click.

## Root cause (4th regression of the same shape)
- Row click → `selectNode()` → `history.replaceState(null, '',
'#/nodes/' + pk)`
- Details button click → `location.hash = '#/nodes/' + pk`
- Hash is already that value → assignment is a no-op → no `hashchange`
event → no router → panel stays open.

## Fix
Mirror the analytics-link branch already inside the panel click handler:
`destroy()` then `init(appEl, pubkey)` directly (which hits the
`directNode` full-screen branch unconditionally). Also `replaceState` to
keep the URL in sync.

## Test
New Playwright E2E: open side panel via row click, click Details, assert
`.node-fullscreen` appears.

## Why this keeps regressing
Every time we tighten the row-click handler to use `replaceState`
(correct — avoids hashchange flicker), the button-click handler that
uses `location.hash` becomes a no-op for the same pubkey. Need to
remember they're coupled. Worth a follow-up to extract a
`navigateToNode(pk)` helper that always works regardless of current hash
state — filing as #890-followup if not already there.

Co-authored-by: you <you@example.com>
2026-04-21 22:37:15 -07:00
Kpa-clawbot 6ca5e86df6 fix: compute hex-dump byte ranges client-side from per-obs raw_hex (#891)
## Symptom
The colored byte strip in the packet detail pane is offset from the
labeled byte breakdown below it. Off by N bytes where N is the
difference between the top-level packet's path length and the displayed
observation's path length.

## Root cause
Server computes `breakdown.ranges` once from the top-level packet's
raw_hex (in `BuildBreakdown`) and ships it in the API response. After
#882 we render each observation's own raw_hex, but we keep using the
top-level breakdown — so a 7-hop top-level packet shipped "Path: bytes
2-8", and when we rendered an 8-hop observation we coloured 7 of the 8
path bytes and bled into the payload.

The labeled rows below (which use `buildFieldTable`) parse the displayed
raw_hex on the client, so they were correct — they just didn't match the
strip above.

## Fix
Port `BuildBreakdown()` to JS as `computeBreakdownRanges()` in `app.js`.
Use it in `renderDetail()` from the actually-rendered (per-obs) raw_hex.

## Test
Manually verified the JS function output matches the Go implementation
for FLOOD/non-transport, transport, ADVERT, and direct-advert (zero
hops) cases.

Closes nothing (caught in post-tag bug bash).

---------

Co-authored-by: you <you@example.com>
2026-04-21 22:17:14 -07:00
Kpa-clawbot 56ec590bc4 fix(#886): derive path_json from raw_hex at ingest (#887)
## Problem

Per-observation `path_json` disagrees with `raw_hex` path section for
TRACE packets.

**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`
- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE
payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)

## Root Cause

`DecodePacket` correctly parses TRACE packets by replacing `path.Hops`
with hop IDs from the payload's `pathData` field (the actual route).
However, the header path bytes for TRACE packets contain **SNR values**
(one per completed hop), not hop IDs.

`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which
for TRACE packets contained the payload-derived hops — not the header
path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex`
to describe completely different paths.

## Fix

- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops
directly from raw hex bytes, independent of any TRACE payload
overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of
using `decoded.Path.Hops`, guaranteeing `path_json` always matches the
`raw_hex` path section.

## Tests (8 new)

**`DecodePathFromRawHex` unit tests:**
- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)

**`BuildPacketData` integration tests:**
- TRACE packet: asserts path_json matches raw_hex header path (not
payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path

All existing tests continue to pass (`go test ./...` for both ingestor
and server).

Fixes #886

---------

Co-authored-by: you <you@example.com>
2026-04-21 21:13:58 -07:00
Kpa-clawbot 67aa47175f fix: path pill and byte breakdown agree on hop count (#885)
## Problem
On the packet detail pane, the **path pill** (top) and the **byte
breakdown** (bottom) showed different numbers of hops for the same
packet. Example: `46cf35504a21ef0d` rendered as `1 hop` badge followed
by 8 node names in the path pill, while the byte breakdown listed only 1
hop row.

## Root cause
Mixed data sources:
- Path-pill badge used `(raw_hex path_len) & 0x3F` (= firmware truth for
one observer = 1)
- Path-pill names used `path_json.length` (= server-aggregated longest
path across observers = 8)
- Byte breakdown section header used `(raw_hex path_len) & 0x3F` (= 1)
- Byte breakdown rows were sliced from `raw_hex` (= 1 row)
- `renderPath(pathHops, ...)` iterated all `path_json` entries

For group-header view, `packet.path_json` is aggregated across observers
and therefore longer than the raw_hex of any single observer's packet.

## Fix
Both surfaces now render from `pathHops` (= effective observation's
`path_json`). The raw_hex vs path_json mismatch is still logged as a
console.warn for diagnostics, but does not drive the UI.

With per-observation `raw_hex` (#882) shipped, clicking an observation
row already swaps the effective packet so both surfaces stay consistent.

## Testing
- Adds E2E regression `Packet detail path pill and byte breakdown agree
on hop count` that asserts:
  1. `pill badge count == byte breakdown section count`
  2. `rendered hop names ≈ badge count` (within 1 for separators)
  3. `byte breakdown rendered rows == section count`
- Manually reproduced on staging with `46cf35504a21ef0d` (8-name path +
`1 hop` badge before fix).

Related: #881 #882 #866

---------

Co-authored-by: you <you@example.com>
2026-04-21 17:57:06 -07:00
Kpa-clawbot 2b9f305698 fix(#874): hop-resolver affinity picker — score candidates by neighbor-graph edges + geographic centroid (#876)
## Problem

`pickByAffinity` in `hop-resolver.js` picks wrong regional candidates
when 1-byte pubkey prefixes collide. The old implementation only
considers one adjacent hop (forward OR backward pass), leading to
suboptimal picks when both neighbors provide useful context.

Measured on staging: **61.6% of hops have ≥2 same-prefix candidates**,
making collision resolution critical.

## Fix

Replaced the separate forward/backward pass disambiguation with a
**combined iterative resolver** that scores candidates against BOTH prev
and next resolved hops:

1. **Neighbor-graph edge weight** (priority 1): Sum edge scores to prev
+ next pubkeys. Pick max sum.
2. **Geographic centroid** (priority 2): Average lat/lon of prev + next
positions. Pick closest candidate by haversine distance.
3. **Single-anchor geo** (priority 3): When only one neighbor is
resolved, use it directly.
4. **Fallback** (priority 4): First candidate when no context exists.

The iterative approach resolves cascading dependencies — resolving one
ambiguous hop may unlock context for its neighbors.

### Dev-mode trace

Multi-candidate picks now emit: `[hop-resolver] hash=46 candidates=N
scored=[...] chose=<pubkey> method=graph|centroid|fallback`

## Before/After (staging, 1539 packets, 12928 hops)

| Metric | Before | After |
|--------|--------|-------|
| Unreliable hops | 39 (0.3%) | 23 (0.2%) |
| Packets with unreliable | 33 (2.14%) | 17 (1.10%) |

~41% reduction in unreliable hops, ~48% reduction in affected packets.

## Tests

5 new tests in `test-frontend-helpers.js`:
- Graph edge scoring picks correct regional candidate
- Next hop breaks tie when prev has no edges
- Centroid fallback when no graph edges exist
- Centroid uses average of prev+next positions
- Fallback when no context at all

All 595 tests pass. No regressions in `test-packet-filter.js` (62 pass)
or `test-aging.js` (29 pass).

Closes #874

---------

Co-authored-by: you <you@example.com>
2026-04-21 14:03:40 -07:00
Kpa-clawbot a605518d6d fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte
sequence for the same transmission (different path bytes, flags/hops
remaining). The `observations` table stored only `path_json` per
observer — all observations pointed at one `transmissions.raw_hex`. This
prevented the hex pane from updating when switching observations in the
packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT`
(nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer
`raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` —
backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls
back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows
through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same
hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns
per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane
update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to
transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
2026-04-21 13:45:29 -07:00
Kpa-clawbot 0ca559e348 fix(#866): per-observation children in expanded packet groups (#880)
## Problem
When a packet group is expanded in the Packets table, clicking any child
row pointed the side pane at the same aggregate packet — not the clicked
observation. URL would flip between `?obs=<packet_id>` values instead of
real observation ids.

## Root cause
The expand fetch used `/api/packets?hash=X&limit=20`, which returns ONE
aggregate row keyed by packet.id. Every child therefore carried
`data-value=<packet.id>`.

## Fix
Switch the expand fetch to `/api/packets/<hash>`, which includes the
full `observations[]` array. Build `_children` as `{...pkt, ...obs}` so
each child row gets a unique observation id and observation-level fields
(observer, path, timestamp, snr/rssi) override the aggregate.

## Verified live on staging
Tested on multiple packets:
- Click group-header → side pane shows observation 1 of N (first
observer)
- Click child row → pane updates to show THAT observer's details:
observer name, path, timestamp, obs counter (K of N), URL
`?obs=<observation_id>`

## Tests
592 frontend tests pass (no new ones — this is a wiring fix, live
E2E-verified instead).

Closes #866

---------

Co-authored-by: Kpa-clawbot <agent@corescope.local>
Co-authored-by: you <you@example.com>
2026-04-21 13:36:45 -07:00
168 changed files with 30588 additions and 1568 deletions
+1 -1
View File
@@ -1 +1 @@
{"schemaVersion":1,"label":"e2e tests","message":"45 passed","color":"brightgreen"}
{"schemaVersion":1,"label":"e2e tests","message":"104 passed","color":"brightgreen"}
+1 -1
View File
@@ -1 +1 @@
{"schemaVersion":1,"label":"frontend coverage","message":"39.68%","color":"red"}
{"schemaVersion":1,"label":"frontend coverage","message":"38.41%","color":"red"}
+19 -17
View File
@@ -79,6 +79,17 @@ jobs:
go test ./...
echo "--- Decrypt CLI tests passed ---"
- name: Run JS unit tests (packet-filter)
run: |
set -e
node test-packet-filter.js
node test-packet-filter-time.js
node test-channel-decrypt-insecure-context.js
node test-live-region-filter.js
node test-channel-qr.js
node test-channel-qr-wiring.js
node test-channel-modal-ux.js
- name: Verify proto syntax
run: |
set -e
@@ -135,7 +146,7 @@ jobs:
e2e-test:
name: "🎭 Playwright E2E Tests"
needs: [go-test]
runs-on: [self-hosted, Linux]
runs-on: ubuntu-latest
defaults:
run:
shell: bash
@@ -145,13 +156,6 @@ jobs:
with:
fetch-depth: 0
- name: Free disk space
run: |
# Prune old runner diagnostic logs (can accumulate 50MB+)
find ~/actions-runner/_diag/ -name '*.log' -mtime +3 -delete 2>/dev/null || true
# Show available disk space
df -h / | tail -1
- name: Set up Node.js 22
uses: actions/setup-node@v5
with:
@@ -183,6 +187,9 @@ jobs:
- name: Instrument frontend JS for coverage
run: sh scripts/instrument-frontend.sh
- name: Freshen fixture timestamps
run: bash tools/freshen-fixture.sh test-fixtures/e2e-fixture.db
- name: Start Go server with fixture DB
run: |
fuser -k 13581/tcp 2>/dev/null || true
@@ -190,7 +197,7 @@ jobs:
./corescope-server -port 13581 -db test-fixtures/e2e-fixture.db -public public-instrumented &
echo $! > .server.pid
for i in $(seq 1 30); do
if curl -sf http://localhost:13581/api/stats > /dev/null 2>&1; then
if curl -sf http://localhost:13581/api/healthz > /dev/null 2>&1; then
echo "Server ready after ${i}s"
break
fi
@@ -204,6 +211,7 @@ jobs:
- name: Run Playwright E2E tests (fail-fast)
run: |
BASE_URL=http://localhost:13581 node test-e2e-playwright.js 2>&1 | tee e2e-output.txt
BASE_URL=http://localhost:13581 node test-filter-ux-e2e.js 2>&1 | tee -a e2e-output.txt
- name: Collect frontend coverage (parallel)
if: success() && github.event_name == 'push'
@@ -252,17 +260,11 @@ jobs:
build-and-publish:
name: "🏗️ Build & Publish Docker Image"
needs: [e2e-test]
runs-on: [self-hosted, meshcore-runner-2]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Free disk space
run: |
docker system prune -af 2>/dev/null || true
docker builder prune -af 2>/dev/null || true
df -h /
- name: Compute build metadata
id: meta
run: |
@@ -462,7 +464,7 @@ jobs:
name: "📝 Publish Badges & Summary"
if: github.event_name == 'push'
needs: [deploy]
runs-on: [self-hosted, Linux]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
+7
View File
@@ -1,5 +1,8 @@
# Build stage always runs natively on the builder's arch ($BUILDPLATFORM)
# and cross-compiles to $TARGETOS/$TARGETARCH via Go toolchain. No QEMU.
# BUILDPLATFORM is auto-set by buildx; default to linux/amd64 so plain
# `docker build` (without buildx) doesn't fail on an empty platform string.
ARG BUILDPLATFORM=linux/amd64
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG APP_VERSION=unknown
@@ -14,6 +17,8 @@ WORKDIR /build/server
COPY cmd/server/go.mod cmd/server/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
COPY internal/dbconfig/ ../../internal/dbconfig/
RUN go mod download
COPY cmd/server/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
@@ -24,6 +29,8 @@ WORKDIR /build/ingestor
COPY cmd/ingestor/go.mod cmd/ingestor/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
COPY internal/dbconfig/ ../../internal/dbconfig/
RUN go mod download
COPY cmd/ingestor/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
+207
View File
@@ -0,0 +1,207 @@
# v3.6.0 - The Forensics
CoreScope just got eyes everywhere. This release drops **path inspection**, **color-by-hash markers**, **clock skew detection**, **full channel encryption**, an **observer graph**, and a pile of robustness fixes that make your mesh network feel like it's being watched by someone who actually cares.
134 commits, 105 PRs merged, 18K+ lines added. Here's what shipped.
---
## 🚀 New Features
### Path-Prefix Candidate Inspector (#944, #945)
The marquee feature. Click any path segment and CoreScope opens an interactive inspector showing every candidate node that could match that hop prefix - plotted on a map with scoring by neighbor-graph affinity and geographic centroid. Ambiguous hops? Now you can see *why* they're ambiguous and pick the right one.
**Why you'll love it:** No more guessing which `0xA3` is the real repeater. The inspector lays out every candidate, scores them, and lets you drill in visually.
### Color-by-Hash Packet Markers (#948, #951)
Every packet type gets a vivid, hash-derived color - on the live feed, map polylines, and flying-packet animations. Bright fill with dark outline for contrast. No more monochrome blobs - you can visually track packet flows by color at a glance.
### Node Filter on Live Page (#924, #771)
Filter the live packet stream to show only traffic flowing through a specific node. Pick a repeater, see exactly what it's carrying. That simple.
### Clock Skew Detection (#746, #752, #828, #850)
Full pipeline: backend computes drift using Theil-Sen regression with outlier rejection (#828), the UI shows per-node badges, detail sparklines, and fleet-wide analytics (#752). Bimodal clock severity (#850) surfaces flaky-RTC nodes that toggle between accurate and drifted - instead of hiding them as "No Clock."
**Why you'll love it:** Nodes with bad clocks silently corrupt your timeline. Now they glow red before they ruin your analysis.
### Observer Graph (M1+M2) (#774)
Observers are now first-class graph citizens. CoreScope builds a neighbor graph from observation overlaps, scores hop-resolver candidates by graph edges (#876), and uses geographic centroid for tiebreaking. The observer topology is visible and queryable.
### Channel Encryption - Full Stack (#726, #733, #750, #760)
Three milestones landed as one: DB-backed channel message history (#726), client-side PSK decryption in the browser (#733), and PSK channel management with add/remove UX and message caching (#750). Add a channel key in the UI, and CoreScope decrypts messages client-side - no server-side key storage. The add-channel button (#760) makes it dead simple.
**Why you'll love it:** Encrypted channels are no longer black boxes. Add your PSK, see the messages, search history - all without exposing keys to the server.
### Hash Collision Inspector (#758)
The Hash Usage Matrix now shows collision details for all hash sizes. When two nodes share a prefix, you see exactly who collides and at what size.
### Geofilter Builder - In-App (#735, #900)
The geofilter polygon builder is now served directly from CoreScope with a full docs page (#900). No more hunting for external tools. Link from the customizer, draw your polygon, done.
### Node Blacklist (#742)
`nodeBlacklist` in config hides abusive or troll nodes from all views. They're gone.
### Observer Retention (#764)
Stale observers are automatically pruned after a configurable number of days. Your observer list stays clean without manual intervention.
### Advert Signature Validation (#794)
Corrupt packets with invalid advert signatures are now rejected at ingest. Bad data never hits your store.
### Bounded Cold Load (#790)
`Load()` now respects a memory budget - no more OOM on cold start with a fat database. Combined with retention-hours cutoff (#917), cold start is safe on constrained hardware.
### Multi-Arch Docker Images (#869)
Official images now publish `amd64` + `arm64` in a single multi-arch manifest. Raspberry Pi operators: pull and run. No special tags needed.
### /nodes Detail Panel + Search (#868)
The nodes detail panel ships with search improvements (#862) - find nodes fast, see their full detail in a slide-out panel.
### Deduplicated Top Longest Hops (#848)
Longest hops are now deduplicated by pair with observation count and SNR cues. No more seeing the same link 47 times.
---
## 🔥 Performance Wins
### StoreTx ResolvedPath Elimination (#806)
The per-transaction `ResolvedPath` computation is gone - replaced by a membership index with on-demand decode. This was one of the hottest paths in the ingestor.
### Node Packet Queries (#803)
Raw JSON text search for node packets replaced with a proper `byNode` index (#673). Night and day.
### Channel Query Performance (#762, #763)
New `channel_hash` column enables SQL-level channel filtering. No more full-table scan to find messages in a channel.
### SQLite Auto-Vacuum (#919, #920)
Incremental auto-vacuum enabled - the database file actually shrinks after retention pruning. No more 2GB database holding 200MB of live data.
### Retention-Hours Cutoff on Load (#917)
`Load()` now applies `retentionHours` at read time, preventing OOM when the DB has more history than memory allows.
---
## 🛡️ Security & Robustness
### MQTT Reconnect with Bounded Backoff (#947, #949)
The ingestor now reconnects to MQTT brokers with exponential backoff, observability logging, and bounded retry. No more silent disconnects that kill your data stream.
---
## 🐛 Bugs Squashed
This release exterminates **40+ bugs** — from protocol-level hash mismatches to pixel-level CSS breakage. Operators told us what hurt; we listened.
- **Path inspector "Show on Map" missed origin and first hop** (#950) - map view now includes all hops
- **Content hash used full header byte** (#787) - content hashing now uses payload type bits only, fixing hash collisions between packets that differ only in header flags
- **Encrypted channel deep links showed broken UI** (#825, #826, #815) - deep links to encrypted channels now show a lock message instead of broken UI when you don't have the key
- **Geofilter longitude wrapping** (#925) - geofilter builder wraps longitude to [-180, 180]; southern hemisphere polygons no longer invert
- **Hash filter bypasses saved region filter** (#939) - hash lookups now skip the geo filter as intended
- **Companion-as-repeater excluded from path hops** (#935, #936) - non-repeater nodes no longer pollute hop resolution
- **Customize panel re-renders while typing** (#927) - text fields keep focus during config changes
- **Per-observation raw_hex** (#881, #882) - each observer's hex dump now shows what *that observer* actually received
- **Per-observation children in packet groups** (#866, #880) - expanded groups show per-obs data, not cross-observer aggregates
- **Full-page obs-switch** (#866, #870) - switching observers updates hex, path, and direction correctly
- **Packet detail shows wrong observation** (#849, #851) - clicking a specific observation opens *that* observation
- **Byte breakdown hop count** (#844, #846) - derived from `path_len`, not aggregated `_parsedPath`
- **Transport-route path_len offset** (#852, #853) - correct offset calculation + CSS variable fix
- **Packets/hour chart bars + x-axis** (#858, #865) - bars render correctly, x-axis labels properly decimated
- **Channel timeline capped to top 8** (#860, #864) - no more 47-channel chart spaghetti
- **Reachability row opacity removed** (#859, #863) - clean rows without misleading gradient
- **Sticky table headers on mobile** (#861, #867) - restored after regression
- **Map popup 'Show Neighbors' on iOS Safari** (#840, #841) - link actually works now
- **Node detail Recent Packets invisible text** (#829, #830) - CSS fix
- **/api/packets/{hash} falls back to DB** (#827, #831) - when in-memory store misses, DB catches it
- **IATA filter bypass for status messages** (#694, #802) - status packets no longer filtered out by airport codes
- **Desktop node click URL hash** (#676, #739) - clicking a node updates the URL for deep linking
- **Filter params in URL hash** (#682, #740) - all filter state serialized for shareable links
- **Hide undecryptable channel messages** (#727, #728) - clean default view
- **TRACE path_json uses path_sz** (#732) - correct field from flags byte, not header hash_size
- **Multi-byte adopters** (#754, #767) - all node types, role column, advert precedence
- **Channel key case sensitivity** (#761) - Public decode works correctly
- **Transport route field offsets** (#766) - correct offsets in field table
- **Clock skew sanity checks** (#769) - filter epoch-0, cap drift, require minimum samples
- **Neighbor graph slider persistence** (#776) - default 0.7, persisted to localStorage
- **Node detail panel navigation** (#779, #785) - Details/Analytics links actually navigate
- **Channel key removal** (#898) - user-added keys for server-known channels can be removed
- **Side-panel Details on desktop** (#892) - opens full-screen correctly
- **Hex-dump byte ranges client-side** (#891) - computed from per-obs raw_hex
- **path_json derived from raw_hex at ingest** (#886, #887) - single source of truth
- **Path pill and byte breakdown hop agreement** (#885) - they match now
- **Mobile close button + toolbar scroll** (#797, #805) - accessible and scrollable
- **/health.recentPackets resolved_path fallback** (#810, #821) - falls back to longest sibling observation
- **Channel filter on Packets page** (#812, #816) - UI and API both fixed
- **Clock-skew section in side panel** (#813, #814) - renders correctly
- **Real RSS in /api/stats** (#832, #835) - surface actual RSS alongside tracked store bytes
- **Hash size detection for transport routes + zero-hop adverts** (#747) - correct detection
- **Repeater+observer merged map marker** (#745) - single marker, not two overlapping
---
## 🎨 UI Polish
- QA findings applied across the board (#832, #833, #836, #837, #838) - dozens of small UX fixes from systematic QA pass
---
## 📦 Upgrading
```bash
git pull
docker compose down
docker compose build prod
docker compose up -d prod
```
Your existing `config.json` works as-is. New optional config keys:
- `nodeBlacklist` - array of node hashes to hide
- `observerRetentionDays` - days before stale observers are pruned
- `memoryBudgetMB` - cap on in-memory packet store
### Verify
```bash
curl -s http://localhost/api/health | jq .version
# "3.6.0"
```
---
## 🙏 External Contributors
- **#735** ([@efiten](https://github.com/efiten)) - Serve geofilter builder from app, link from customizer
- **#739** ([@efiten](https://github.com/efiten)) - Desktop node click updates URL hash for deep linking
- **#740** ([@efiten](https://github.com/efiten)) - Serialize filter params in URL hash for shareable links
- **#742** ([@Joel-Claw](https://github.com/Joel-Claw)) - Add nodeBlacklist config to hide abusive/troll nodes
- **#761** ([@copelaje](https://github.com/copelaje)) - Fix channel key case sensitivity for Public decode
- **#764** ([@Joel-Claw](https://github.com/Joel-Claw)) - Add observer retention - prune stale observers after configurable days
- **#802** ([@efiten](https://github.com/efiten)) - Bypass IATA filter for status messages, fill SNR on duplicate observations
- **#803** ([@efiten](https://github.com/efiten)) - Replace raw JSON text search with byNode index for node packet queries
- **#805** ([@efiten](https://github.com/efiten)) - Mobile close button accessible + toolbar scrollable
- **#900** ([@efiten](https://github.com/efiten)) - App-served geofilter docs page
- **#917** ([@efiten](https://github.com/efiten)) - Apply retentionHours cutoff in Load() to prevent OOM on cold start
- **#924** ([@efiten](https://github.com/efiten)) - Node filter on live page - show only traffic through a specific node
- **#925** ([@efiten](https://github.com/efiten)) - Fix geobuilder longitude wrapping for southern hemisphere polygons
- **#927** ([@efiten](https://github.com/efiten)) - Skip customize panel re-render while text field has focus
---
## ⚠️ Breaking Changes
**None.** All API endpoints remain backwards-compatible. New fields are additive only.
---
## 📊 By the Numbers
| Stat | Count |
|------|-------|
| Commits | 134 |
| PRs merged | 105 |
| Lines added | 18,480 |
| Lines removed | 1,632 |
| Files changed | 110 |
| Contributors | 4 |
---
*Previous release: [v3.5.2](https://github.com/Kpa-clawbot/CoreScope/releases/tag/v3.5.2)*
+100 -1
View File
@@ -7,7 +7,9 @@ import (
"log"
"os"
"strings"
"sync"
"github.com/meshcore-analyzer/dbconfig"
"github.com/meshcore-analyzer/geofilter"
)
@@ -20,6 +22,17 @@ type MQTTSource struct {
RejectUnauthorized *bool `json:"rejectUnauthorized,omitempty"`
Topics []string `json:"topics"`
IATAFilter []string `json:"iataFilter,omitempty"`
ConnectTimeoutSec int `json:"connectTimeoutSec,omitempty"`
Region string `json:"region,omitempty"`
}
// ConnectTimeoutOrDefault returns the per-source connect timeout in seconds,
// or 30 if not set (matching the WaitTimeout default from #926).
func (s MQTTSource) ConnectTimeoutOrDefault() int {
if s.ConnectTimeoutSec > 0 {
return s.ConnectTimeoutSec
}
return 30
}
// MQTTLegacy is the old single-broker config format.
@@ -39,13 +52,51 @@ type Config struct {
HashChannels []string `json:"hashChannels,omitempty"`
Retention *RetentionConfig `json:"retention,omitempty"`
Metrics *MetricsConfig `json:"metrics,omitempty"`
GeoFilter *GeoFilterConfig `json:"geo_filter,omitempty"`
GeoFilter *GeoFilterConfig `json:"geo_filter,omitempty"`
ForeignAdverts *ForeignAdvertConfig `json:"foreignAdverts,omitempty"`
ValidateSignatures *bool `json:"validateSignatures,omitempty"`
DB *DBConfig `json:"db,omitempty"`
// ObserverIATAWhitelist restricts which observer IATA regions are processed.
// When non-empty, only observers whose IATA code (from the MQTT topic) matches
// one of these entries are accepted. Case-insensitive. An empty list means all
// IATA codes are allowed. This applies globally, unlike the per-source iataFilter.
ObserverIATAWhitelist []string `json:"observerIATAWhitelist,omitempty"`
// obsIATAWhitelistCached is the lazily-built uppercase set for O(1) lookups.
obsIATAWhitelistCached map[string]bool
obsIATAWhitelistOnce sync.Once
// ObserverBlacklist is a list of observer public keys to drop at ingest.
// Messages from blacklisted observers are silently discarded — no DB writes,
// no UpsertObserver, no observations, no metrics.
ObserverBlacklist []string `json:"observerBlacklist,omitempty"`
// obsBlacklistSetCached is the lazily-built lowercase set for O(1) lookups.
obsBlacklistSetCached map[string]bool
obsBlacklistOnce sync.Once
}
// GeoFilterConfig is an alias for the shared geofilter.Config type.
type GeoFilterConfig = geofilter.Config
// ForeignAdvertConfig controls how the ingestor handles ADVERTs whose GPS lies
// outside the configured geofilter polygon (#730). Modes:
// - "flag" (default): store the advert/node and tag it foreign for visibility.
// - "drop": silently discard the advert (legacy behavior).
type ForeignAdvertConfig struct {
Mode string `json:"mode,omitempty"`
}
// IsDropMode reports whether the foreign-advert config is set to "drop".
// Defaults to false ("flag" mode) when nil or unset.
func (f *ForeignAdvertConfig) IsDropMode() bool {
if f == nil {
return false
}
return strings.EqualFold(strings.TrimSpace(f.Mode), "drop")
}
// RetentionConfig controls how long stale nodes are kept before being moved to inactive_nodes.
type RetentionConfig struct {
NodeDays int `json:"nodeDays"`
@@ -58,6 +109,17 @@ type MetricsConfig struct {
SampleIntervalSec int `json:"sampleIntervalSec"`
}
// DBConfig is the shared SQLite vacuum/maintenance config (#919, #921).
type DBConfig = dbconfig.DBConfig
// IncrementalVacuumPages returns the configured pages per vacuum or 1024 default.
func (c *Config) IncrementalVacuumPages() int {
if c.DB != nil && c.DB.IncrementalVacuumPages > 0 {
return c.DB.IncrementalVacuumPages
}
return 1024
}
// ShouldValidateSignatures returns true (default) unless explicitly disabled.
func (c *Config) ShouldValidateSignatures() bool {
if c.ValidateSignatures != nil {
@@ -99,6 +161,43 @@ func (c *Config) ObserverDaysOrDefault() int {
return 14
}
// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
if c == nil || len(c.ObserverBlacklist) == 0 {
return false
}
c.obsBlacklistOnce.Do(func() {
m := make(map[string]bool, len(c.ObserverBlacklist))
for _, pk := range c.ObserverBlacklist {
trimmed := strings.ToLower(strings.TrimSpace(pk))
if trimmed != "" {
m[trimmed] = true
}
}
c.obsBlacklistSetCached = m
})
return c.obsBlacklistSetCached[strings.ToLower(strings.TrimSpace(id))]
}
// IsObserverIATAAllowed returns true if the given IATA code is permitted.
// When ObserverIATAWhitelist is empty, all codes are allowed.
func (c *Config) IsObserverIATAAllowed(iata string) bool {
if c == nil || len(c.ObserverIATAWhitelist) == 0 {
return true
}
c.obsIATAWhitelistOnce.Do(func() {
m := make(map[string]bool, len(c.ObserverIATAWhitelist))
for _, code := range c.ObserverIATAWhitelist {
trimmed := strings.ToUpper(strings.TrimSpace(code))
if trimmed != "" {
m[trimmed] = true
}
}
c.obsIATAWhitelistCached = m
})
return c.obsIATAWhitelistCached[strings.ToUpper(strings.TrimSpace(iata))]
}
// LoadConfig reads configuration from a JSON file, with env var overrides.
// If the config file does not exist, sensible defaults are used (zero-config startup).
func LoadConfig(path string) (*Config, error) {
+110
View File
@@ -284,3 +284,113 @@ func TestLoadConfigWithAllFields(t *testing.T) {
t.Errorf("iataFilter=%v", src.IATAFilter)
}
}
func TestConnectTimeoutOrDefault(t *testing.T) {
// Default when unset
s := MQTTSource{}
if got := s.ConnectTimeoutOrDefault(); got != 30 {
t.Errorf("default: got %d, want 30", got)
}
// Custom value
s.ConnectTimeoutSec = 5
if got := s.ConnectTimeoutOrDefault(); got != 5 {
t.Errorf("custom: got %d, want 5", got)
}
// Zero treated as unset
s.ConnectTimeoutSec = 0
if got := s.ConnectTimeoutOrDefault(); got != 30 {
t.Errorf("zero: got %d, want 30", got)
}
}
func TestConnectTimeoutFromJSON(t *testing.T) {
dir := t.TempDir()
cfgPath := dir + "/config.json"
os.WriteFile(cfgPath, []byte(`{"mqttSources":[{"name":"s1","broker":"tcp://b:1883","topics":["#"],"connectTimeoutSec":5}]}`), 0644)
cfg, err := LoadConfig(cfgPath)
if err != nil {
t.Fatal(err)
}
if got := cfg.MQTTSources[0].ConnectTimeoutOrDefault(); got != 5 {
t.Errorf("from JSON: got %d, want 5", got)
}
}
func TestObserverIATAWhitelist(t *testing.T) {
// Config with whitelist set
cfg := Config{
ObserverIATAWhitelist: []string{"ARN", "got"},
}
// Matching (case-insensitive)
if !cfg.IsObserverIATAAllowed("ARN") {
t.Error("ARN should be allowed")
}
if !cfg.IsObserverIATAAllowed("arn") {
t.Error("arn (lowercase) should be allowed")
}
if !cfg.IsObserverIATAAllowed("GOT") {
t.Error("GOT should be allowed")
}
// Non-matching
if cfg.IsObserverIATAAllowed("SJC") {
t.Error("SJC should NOT be allowed")
}
// Empty string not allowed
if cfg.IsObserverIATAAllowed("") {
t.Error("empty IATA should NOT be allowed")
}
}
func TestObserverIATAWhitelistEmpty(t *testing.T) {
// No whitelist = allow all
cfg := Config{}
if !cfg.IsObserverIATAAllowed("SJC") {
t.Error("with no whitelist, all IATAs should be allowed")
}
if !cfg.IsObserverIATAAllowed("") {
t.Error("with no whitelist, even empty IATA should be allowed")
}
}
func TestObserverIATAWhitelistJSON(t *testing.T) {
json := `{
"dbPath": "test.db",
"observerIATAWhitelist": ["ARN", "GOT"]
}`
tmp := t.TempDir() + "/config.json"
os.WriteFile(tmp, []byte(json), 0644)
cfg, err := LoadConfig(tmp)
if err != nil {
t.Fatal(err)
}
if len(cfg.ObserverIATAWhitelist) != 2 {
t.Fatalf("expected 2 entries, got %d", len(cfg.ObserverIATAWhitelist))
}
if !cfg.IsObserverIATAAllowed("ARN") {
t.Error("ARN should be allowed after loading from JSON")
}
}
func TestMQTTSourceRegionField(t *testing.T) {
dir := t.TempDir()
cfgPath := filepath.Join(dir, "config.json")
os.WriteFile(cfgPath, []byte(`{
"dbPath": "/tmp/test.db",
"mqttSources": [
{"name": "cascadia", "broker": "tcp://localhost:1883", "topics": ["meshcore/#"], "region": "PDX"}
]
}`), 0o644)
cfg, err := LoadConfig(cfgPath)
if err != nil {
t.Fatal(err)
}
if cfg.MQTTSources[0].Region != "PDX" {
t.Fatalf("expected region PDX, got %q", cfg.MQTTSources[0].Region)
}
}
+7 -2
View File
@@ -428,7 +428,12 @@ func TestHandleMessageAdvertGeoFiltered(t *testing.T) {
topic: "meshcore/SJC/obs1/packets",
payload: []byte(`{"raw":"` + rawHex + `"}`),
}
handleMessage(store, "test", source, msg, nil, &Config{GeoFilter: gf})
// Legacy silent-drop behavior is now opt-in via ForeignAdverts.Mode="drop"
// (#730). The new default — flag — is covered by foreign_advert_test.go.
handleMessage(store, "test", source, msg, nil, &Config{
GeoFilter: gf,
ForeignAdverts: &ForeignAdvertConfig{Mode: "drop"},
})
// Geo-filtered adverts should not create nodes
var nodeCount int
@@ -436,7 +441,7 @@ func TestHandleMessageAdvertGeoFiltered(t *testing.T) {
t.Fatal(err)
}
if nodeCount != 0 {
t.Errorf("nodes=%d, want 0 (geo-filtered advert should not create node)", nodeCount)
t.Errorf("nodes=%d, want 0 (geo-filtered advert in drop mode should not create node)", nodeCount)
}
}
+268 -16
View File
@@ -8,9 +8,11 @@ import (
"os"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/meshcore-analyzer/packetpath"
_ "modernc.org/sqlite"
)
@@ -43,6 +45,7 @@ type Store struct {
stmtUpsertMetrics *sql.Stmt
sampleIntervalSec int
backfillWg sync.WaitGroup
}
// OpenStore opens or creates a SQLite DB at the given path, applying the
@@ -58,7 +61,7 @@ func OpenStoreWithInterval(dbPath string, sampleIntervalSec int) (*Store, error)
return nil, fmt.Errorf("creating data dir: %w", err)
}
db, err := sql.Open("sqlite", dbPath+"?_pragma=journal_mode(WAL)&_pragma=foreign_keys(ON)&_pragma=busy_timeout(5000)")
db, err := sql.Open("sqlite", dbPath+"?_pragma=auto_vacuum(INCREMENTAL)&_pragma=journal_mode(WAL)&_pragma=foreign_keys(ON)&_pragma=busy_timeout(5000)")
if err != nil {
return nil, fmt.Errorf("opening db: %w", err)
}
@@ -84,6 +87,9 @@ func OpenStoreWithInterval(dbPath string, sampleIntervalSec int) (*Store, error)
}
func applySchema(db *sql.DB) error {
// auto_vacuum=INCREMENTAL is set via DSN pragma (must be before journal_mode).
// Logging of current mode is handled by CheckAutoVacuum — no duplicate log here.
schema := `
CREATE TABLE IF NOT EXISTS nodes (
public_key TEXT PRIMARY KEY,
@@ -95,7 +101,8 @@ func applySchema(db *sql.DB) error {
first_seen TEXT,
advert_count INTEGER DEFAULT 0,
battery_mv INTEGER,
temperature_c REAL
temperature_c REAL,
foreign_advert INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS observers (
@@ -112,7 +119,8 @@ func applySchema(db *sql.DB) error {
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL,
inactive INTEGER DEFAULT 0
inactive INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_seen ON nodes(last_seen);
@@ -128,7 +136,8 @@ func applySchema(db *sql.DB) error {
first_seen TEXT,
advert_count INTEGER DEFAULT 0,
battery_mv INTEGER,
temperature_c REAL
temperature_c REAL,
foreign_advert INTEGER DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_inactive_nodes_last_seen ON inactive_nodes(last_seen);
@@ -189,7 +198,7 @@ func applySchema(db *sql.DB) error {
db.Exec(`DROP VIEW IF EXISTS packets_v`)
_, vErr := db.Exec(`
CREATE VIEW packets_v AS
SELECT o.id, t.raw_hex,
SELECT o.id, COALESCE(o.raw_hex, t.raw_hex) AS raw_hex,
datetime(o.timestamp, 'unixepoch') AS timestamp,
obs.id AS observer_id, obs.name AS observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
@@ -408,6 +417,73 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] dropped_packets table created")
}
// Migration: add raw_hex column to observations (#881)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observations_raw_hex_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding raw_hex column to observations...")
db.Exec(`ALTER TABLE observations ADD COLUMN raw_hex TEXT`)
db.Exec(`INSERT INTO _migrations (name) VALUES ('observations_raw_hex_v1')`)
log.Println("[migration] observations.raw_hex column added")
}
// Migration: add last_packet_at column to observers (#last-packet-at)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observers_last_packet_at_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding last_packet_at column to observers...")
_, alterErr := db.Exec(`ALTER TABLE observers ADD COLUMN last_packet_at TEXT DEFAULT NULL`)
if alterErr != nil && !strings.Contains(alterErr.Error(), "duplicate column") {
return fmt.Errorf("observers last_packet_at ALTER: %w", alterErr)
}
// Backfill: set last_packet_at = last_seen only for observers that actually have
// observation rows (packet_count alone is unreliable — UpsertObserver sets it to 1
// on INSERT even for status-only observers).
res, err := db.Exec(`UPDATE observers SET last_packet_at = last_seen
WHERE last_packet_at IS NULL
AND rowid IN (SELECT DISTINCT observer_idx FROM observations WHERE observer_idx IS NOT NULL)`)
if err == nil {
n, _ := res.RowsAffected()
log.Printf("[migration] Backfilled last_packet_at for %d observers with packets", n)
}
db.Exec(`INSERT INTO _migrations (name) VALUES ('observers_last_packet_at_v1')`)
log.Println("[migration] observers.last_packet_at column added")
}
// Migration: backfill observations.path_json from raw_hex (#888)
// NOTE: This runs ASYNC via BackfillPathJSONAsync() to avoid blocking MQTT startup.
// See staging outage where ~502K rows blocked ingest for 15+ hours.
// One-time cleanup: delete legacy packets with empty hash or empty first_seen (#994)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'cleanup_legacy_null_hash_ts'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Cleaning up legacy packets with empty hash/timestamp...")
db.Exec(`DELETE FROM observations WHERE transmission_id IN (SELECT id FROM transmissions WHERE hash = '' OR first_seen = '')`)
res, err := db.Exec(`DELETE FROM transmissions WHERE hash = '' OR first_seen = ''`)
if err == nil {
deleted, _ := res.RowsAffected()
log.Printf("[migration] deleted %d legacy packets with empty hash/timestamp", deleted)
}
db.Exec(`INSERT INTO _migrations (name) VALUES ('cleanup_legacy_null_hash_ts')`)
}
// Migration: foreign_advert column on nodes/inactive_nodes (#730)
// Marks nodes whose ADVERT GPS lies outside the configured geofilter polygon.
// Default 0; set to 1 by the ingestor when GeoFilter is configured and
// PassesFilter() returns false. Allows operators to surface bridged/leaked
// adverts without silently dropping them.
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'foreign_advert_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding foreign_advert column to nodes/inactive_nodes...")
if _, err := db.Exec(`ALTER TABLE nodes ADD COLUMN foreign_advert INTEGER DEFAULT 0`); err != nil {
log.Printf("[migration] nodes.foreign_advert: %v (may already exist)", err)
}
if _, err := db.Exec(`ALTER TABLE inactive_nodes ADD COLUMN foreign_advert INTEGER DEFAULT 0`); err != nil {
log.Printf("[migration] inactive_nodes.foreign_advert: %v (may already exist)", err)
}
db.Exec(`CREATE INDEX IF NOT EXISTS idx_nodes_foreign_advert ON nodes(foreign_advert) WHERE foreign_advert = 1`)
db.Exec(`INSERT INTO _migrations (name) VALUES ('foreign_advert_v1')`)
log.Println("[migration] foreign_advert column added")
}
return nil
}
@@ -433,12 +509,13 @@ func (s *Store) prepareStatements() error {
}
s.stmtInsertObservation, err = s.db.Prepare(`
INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp, raw_hex)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(transmission_id, observer_idx, COALESCE(path_json, '')) DO UPDATE SET
snr = COALESCE(excluded.snr, snr),
rssi = COALESCE(excluded.rssi, rssi),
score = COALESCE(excluded.score, score)
snr = COALESCE(excluded.snr, snr),
rssi = COALESCE(excluded.rssi, rssi),
score = COALESCE(excluded.score, score),
raw_hex = COALESCE(excluded.raw_hex, raw_hex)
`)
if err != nil {
return err
@@ -490,7 +567,7 @@ func (s *Store) prepareStatements() error {
return err
}
s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ? WHERE rowid = ?")
s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ?, last_packet_at = ? WHERE rowid = ?")
if err != nil {
return err
}
@@ -569,9 +646,9 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
err := s.stmtGetObserverRowid.QueryRow(data.ObserverID).Scan(&rowid)
if err == nil {
observerIdx = &rowid
// Update observer last_seen on every packet to prevent
// Update observer last_seen and last_packet_at on every packet to prevent
// low-traffic observers from appearing offline (#463)
_, _ = s.stmtUpdateObserverLastSeen.Exec(now, rowid)
_, _ = s.stmtUpdateObserverLastSeen.Exec(now, now, rowid)
}
}
@@ -584,7 +661,7 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
_, err = s.stmtInsertObservation.Exec(
txID, observerIdx, data.Direction,
data.SNR, data.RSSI, data.Score,
data.PathJSON, epochTs,
data.PathJSON, epochTs, nilIfEmpty(data.RawHex),
)
if err != nil {
s.Stats.WriteErrors.Add(1)
@@ -620,6 +697,21 @@ func (s *Store) IncrementAdvertCount(pubKey string) error {
return err
}
// MarkNodeForeign sets foreign_advert=1 on the node row identified by pubKey.
// Used when an ADVERT arrives whose GPS lies outside the configured geofilter
// polygon (#730). Idempotent — safe to call repeatedly. No-op if pubKey is
// empty.
func (s *Store) MarkNodeForeign(pubKey string) error {
if pubKey == "" {
return nil
}
_, err := s.db.Exec(`UPDATE nodes SET foreign_advert = 1 WHERE public_key = ?`, pubKey)
if err != nil {
s.Stats.WriteErrors.Add(1)
}
return err
}
// UpdateNodeTelemetry updates battery and temperature for a node.
func (s *Store) UpdateNodeTelemetry(pubKey string, batteryMv *int, temperatureC *float64) error {
var bv, tc interface{}
@@ -700,6 +792,7 @@ func (s *Store) UpsertObserver(id, name, iata string, meta *ObserverMeta) error
// Close checkpoints the WAL and closes the database.
func (s *Store) Close() error {
s.backfillWg.Wait()
s.Checkpoint()
return s.db.Close()
}
@@ -777,6 +870,58 @@ func (s *Store) PruneOldMetrics(retentionDays int) (int64, error) {
return n, nil
}
// CheckAutoVacuum inspects the current auto_vacuum mode and logs a warning
// if not INCREMENTAL. Performs opt-in full VACUUM if db.vacuumOnStartup is set (#919).
func (s *Store) CheckAutoVacuum(cfg *Config) {
var autoVacuum int
if err := s.db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
log.Printf("[db] warning: could not read auto_vacuum: %v", err)
return
}
if autoVacuum == 2 {
log.Printf("[db] auto_vacuum=INCREMENTAL")
return
}
modes := map[int]string{0: "NONE", 1: "FULL", 2: "INCREMENTAL"}
mode := modes[autoVacuum]
if mode == "" {
mode = fmt.Sprintf("UNKNOWN(%d)", autoVacuum)
}
log.Printf("[db] auto_vacuum=%s — DB needs one-time VACUUM to enable incremental auto-vacuum. "+
"Set db.vacuumOnStartup: true in config to migrate (will block startup for several minutes on large DBs). "+
"See https://github.com/Kpa-clawbot/CoreScope/issues/919", mode)
if cfg.DB != nil && cfg.DB.VacuumOnStartup {
// WARNING: Full VACUUM creates a temporary copy of the entire DB file.
// Requires ~2× the DB file size in free disk space or it will fail.
log.Printf("[db] vacuumOnStartup=true — starting one-time full VACUUM (ensure 2x DB size free disk space)...")
start := time.Now()
if _, err := s.db.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
log.Printf("[db] VACUUM failed: could not set auto_vacuum: %v", err)
return
}
if _, err := s.db.Exec("VACUUM"); err != nil {
log.Printf("[db] VACUUM failed: %v", err)
return
}
elapsed := time.Since(start)
log.Printf("[db] VACUUM complete in %v — auto_vacuum is now INCREMENTAL", elapsed.Round(time.Millisecond))
}
}
// RunIncrementalVacuum returns free pages to the OS (#919).
// Safe to call on auto_vacuum=NONE databases (noop).
func (s *Store) RunIncrementalVacuum(pages int) {
if _, err := s.db.Exec(fmt.Sprintf("PRAGMA incremental_vacuum(%d)", pages)); err != nil {
log.Printf("[vacuum] incremental_vacuum error: %v", err)
}
}
// Checkpoint forces a WAL checkpoint to release the WAL lock file,
// preventing lock contention with a new process starting up.
func (s *Store) Checkpoint() {
@@ -787,6 +932,92 @@ func (s *Store) Checkpoint() {
}
}
// BackfillPathJSONAsync launches the path_json backfill in a background goroutine.
// It processes observations with NULL/empty path_json that have raw_hex available,
// decoding hop paths and updating the column. Safe to run concurrently with ingest
// because new observations get path_json at write time; this only touches NULL rows.
// Idempotent: skips if migration already recorded.
func (s *Store) BackfillPathJSONAsync() {
s.backfillWg.Add(1)
go func() {
defer s.backfillWg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("[backfill] path_json async panic recovered: %v", r)
}
}()
var migDone int
row := s.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'")
if row.Scan(&migDone) == nil {
return // already done
}
log.Println("[backfill] Starting async path_json backfill from raw_hex...")
updated := 0
errored := false
const batchSize = 1000
batchNum := 0
for {
rows, err := s.db.Query(`
SELECT o.id, o.raw_hex
FROM observations o
JOIN transmissions t ON o.transmission_id = t.id
WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
AND (o.path_json IS NULL OR o.path_json = '' OR o.path_json = '[]')
AND t.payload_type != 9
LIMIT ?`, batchSize)
if err != nil {
log.Printf("[backfill] path_json query error: %v", err)
errored = true
break
}
type pendingRow struct {
id int64
rawHex string
}
var batch []pendingRow
for rows.Next() {
var r pendingRow
if err := rows.Scan(&r.id, &r.rawHex); err == nil {
batch = append(batch, r)
}
}
rows.Close()
if len(batch) == 0 {
break
}
for _, r := range batch {
hops, err := packetpath.DecodePathFromRawHex(r.rawHex)
if err != nil || len(hops) == 0 {
if _, execErr := s.db.Exec(`UPDATE observations SET path_json = '[]' WHERE id = ?`, r.id); execErr != nil {
log.Printf("[backfill] write error (id=%d): %v", r.id, execErr)
}
continue
}
b, _ := json.Marshal(hops)
if _, execErr := s.db.Exec(`UPDATE observations SET path_json = ? WHERE id = ?`, string(b), r.id); execErr != nil {
log.Printf("[backfill] write error (id=%d): %v", r.id, execErr)
} else {
updated++
}
}
batchNum++
if batchNum%50 == 0 {
log.Printf("[backfill] progress: %d observations updated so far (%d batches)", updated, batchNum)
}
// Throttle: yield to ingest writers between batches
time.Sleep(50 * time.Millisecond)
}
log.Printf("[backfill] Async path_json backfill complete: %d observations updated", updated)
if !errored {
s.db.Exec(`INSERT INTO _migrations (name) VALUES ('backfill_path_json_from_raw_hex_v1')`)
} else {
log.Printf("[backfill] NOT recording migration due to errors — will retry on next restart")
}
}()
}
// LogStats logs current operational metrics.
func (s *Store) LogStats() {
log.Printf("[stats] tx_inserted=%d tx_dupes=%d obs_inserted=%d node_upserts=%d observer_upserts=%d write_errors=%d sig_drops=%d",
@@ -910,6 +1141,8 @@ type PacketData struct {
PathJSON string
DecodedJSON string
ChannelHash string // grouping key for channel queries (#762)
Region string // observer region: payload > topic > source config (#788)
Foreign bool // true when ADVERT GPS lies outside configured geofilter (#730)
}
// nilIfEmpty returns nil for empty strings (for nullable DB columns).
@@ -928,14 +1161,26 @@ type MQTTPacketMessage struct {
Score *float64 `json:"score"`
Direction *string `json:"direction"`
Origin string `json:"origin"`
Region string `json:"region,omitempty"` // optional region override (#788)
}
// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
// path_json is derived directly from raw_hex header bytes (not decoded.Path.Hops)
// to guarantee the stored path always matches the raw bytes. This matters for
// TRACE packets where decoded.Path.Hops is overwritten with payload hops (#886).
func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID, region string) *PacketData {
now := time.Now().UTC().Format(time.RFC3339)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
pathJSON = string(b)
}
} else if hops, err := packetpath.DecodePathFromRawHex(msg.Raw); err == nil && len(hops) > 0 {
b, _ := json.Marshal(hops)
pathJSON = string(b)
}
@@ -956,6 +1201,13 @@ func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID,
DecodedJSON: PayloadJSON(&decoded.Payload),
}
// Region priority: payload field > topic-derived parameter (#788)
if msg.Region != "" {
pd.Region = msg.Region
} else {
pd.Region = region
}
// Populate channel_hash for fast channel queries (#762)
if decoded.Header.PayloadType == PayloadGRP_TXT {
if decoded.Payload.Type == "CHAN" && decoded.Payload.Channel != "" {
+599
View File
@@ -2,6 +2,7 @@ package main
import (
"database/sql"
"encoding/json"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,8 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/meshcore-analyzer/packetpath"
)
func tempDBPath(t *testing.T) string {
@@ -566,6 +569,61 @@ func TestInsertTransmissionUpdatesObserverLastSeen(t *testing.T) {
}
}
func TestLastPacketAtUpdatedOnPacketOnly(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Insert observer via status path — last_packet_at should be NULL
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAt sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if lastPacketAt.Valid {
t.Fatalf("expected last_packet_at to be NULL after UpsertObserver, got %s", lastPacketAt.String)
}
// Insert a packet from this observer — last_packet_at should be set
data := &PacketData{
RawHex: "0A00D69F",
Timestamp: "2026-04-24T12:00:00Z",
ObserverID: "obs1",
Hash: "lastpackettest123456",
RouteType: 2,
PayloadType: 2,
PathJSON: "[]",
DecodedJSON: `{"type":"TXT_MSG"}`,
}
if _, err := s.InsertTransmission(data); err != nil {
t.Fatal(err)
}
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if !lastPacketAt.Valid {
t.Fatal("expected last_packet_at to be non-NULL after InsertTransmission")
}
// InsertTransmission uses `now = data.Timestamp || time.Now()`, so last_packet_at
// should match the packet's Timestamp when provided (same source-of-truth as last_seen).
if lastPacketAt.String != "2026-04-24T12:00:00Z" {
t.Errorf("expected last_packet_at=2026-04-24T12:00:00Z, got %s", lastPacketAt.String)
}
// UpsertObserver again (status path) — last_packet_at should NOT change
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAtAfterStatus sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAtAfterStatus)
if !lastPacketAtAfterStatus.Valid || lastPacketAtAfterStatus.String != lastPacketAt.String {
t.Errorf("UpsertObserver should not change last_packet_at; expected %s, got %v", lastPacketAt.String, lastPacketAtAfterStatus)
}
}
func TestEndToEndIngest(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
@@ -1968,3 +2026,544 @@ func TestInsertObservationSNRFillIn(t *testing.T) {
t.Errorf("RSSI overwritten by null arrival: got %v, want %v", rssi3, rssi)
}
}
// TestPerObservationRawHex verifies that two MQTT packets for the same hash
// from different observers store distinct raw_hex per observation (#881).
func TestPerObservationRawHex(t *testing.T) {
store, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer store.Close()
// Register two observers
store.UpsertObserver("obs-A", "Observer A", "", nil)
store.UpsertObserver("obs-B", "Observer B", "", nil)
hash := "abc123def456"
rawA := "c0ffee01"
rawB := "c0ffee0201aa"
dir := "RX"
// First observation from observer A
pdA := &PacketData{
RawHex: rawA,
Hash: hash,
Timestamp: "2026-04-21T10:00:00Z",
ObserverID: "obs-A",
Direction: &dir,
PathJSON: "[]",
}
isNew, err := store.InsertTransmission(pdA)
if err != nil {
t.Fatalf("insert A: %v", err)
}
if !isNew {
t.Fatal("expected new transmission")
}
// Second observation from observer B (same hash, different raw bytes)
pdB := &PacketData{
RawHex: rawB,
Hash: hash,
Timestamp: "2026-04-21T10:00:01Z",
ObserverID: "obs-B",
Direction: &dir,
PathJSON: `["aabb"]`,
}
isNew2, err := store.InsertTransmission(pdB)
if err != nil {
t.Fatalf("insert B: %v", err)
}
if isNew2 {
t.Fatal("expected duplicate transmission")
}
// Query observations and verify per-observation raw_hex
rows, err := store.db.Query(`
SELECT o.raw_hex, obs.id
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
ORDER BY o.id ASC
`)
if err != nil {
t.Fatalf("query: %v", err)
}
defer rows.Close()
type obsResult struct {
rawHex string
observerID string
}
var results []obsResult
for rows.Next() {
var rh, oid sql.NullString
if err := rows.Scan(&rh, &oid); err != nil {
t.Fatal(err)
}
results = append(results, obsResult{
rawHex: rh.String,
observerID: oid.String,
})
}
if len(results) != 2 {
t.Fatalf("expected 2 observations, got %d", len(results))
}
if results[0].rawHex != rawA {
t.Errorf("obs A raw_hex: got %q, want %q", results[0].rawHex, rawA)
}
if results[1].rawHex != rawB {
t.Errorf("obs B raw_hex: got %q, want %q", results[1].rawHex, rawB)
}
if results[0].rawHex == results[1].rawHex {
t.Error("both observations have same raw_hex — should differ")
}
}
// TestBuildPacketData_TraceUsesPayloadHops verifies that TRACE packets use
// payload-decoded route hops in path_json (NOT the raw_hex header SNR bytes).
// Issue #886 / #887.
func TestBuildPacketData_TraceUsesPayloadHops(t *testing.T) {
// TRACE packet: header path has SNR bytes [30,2D,0D,23], but decoded.Path.Hops
// is overwritten to payload hops [67,33,D6,33,67].
rawHex := "2604302D0D2359FEE7B100000000006733D63367"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
// decoded.Path.Hops should be the TRACE-replaced hops (payload hops)
if len(decoded.Path.Hops) != 5 {
t.Fatalf("expected 5 decoded hops, got %d", len(decoded.Path.Hops))
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "test-obs", "TST")
// For TRACE: path_json MUST be the payload-decoded route hops, NOT the SNR bytes
expectedPathJSON := `["67","33","D6","33","67"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s (TRACE must use payload hops)", pd.PathJSON, expectedPathJSON)
}
// Verify that DecodePathFromRawHex returns the SNR bytes (header path) which differ
headerHops, herr := packetpath.DecodePathFromRawHex(rawHex)
if herr != nil {
t.Fatal(herr)
}
headerJSON, _ := json.Marshal(headerHops)
if string(headerJSON) == expectedPathJSON {
t.Error("header path (SNR) should differ from payload hops for TRACE")
}
}
// TestBuildPacketData_NonTracePathJSON verifies non-TRACE packets also derive path from raw_hex.
func TestBuildPacketData_NonTracePathJSON(t *testing.T) {
// A simple ADVERT packet (payload type 0) with 2 hops, hash_size 1
// Header 0x09 = FLOOD(1), ADVERT(2), version 0
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
rawHex := "0902AABB" + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "obs1", "TST")
expectedPathJSON := `["AA","BB"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s", pd.PathJSON, expectedPathJSON)
}
}
// --- Issue #888: Backfill path_json from raw_hex ---
func TestBackfillPathJsonFromRawHex(t *testing.T) {
dbPath := tempDBPath(t)
s, err := OpenStore(dbPath)
if err != nil {
t.Fatal(err)
}
// Insert a transmission with payload_type != TRACE (e.g. 0x01)
// raw_hex: header 0x05 (route FLOOD, payload 0x01), path byte 0x42 (hash_size=2, count=2),
// hops: AABB, CCDD, then some payload bytes
rawHex := "0542AABBCCDD0000000000000000000000000000"
s.db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'h1', '2025-01-01T00:00:00Z', 1)`, rawHex)
// Insert observation with raw_hex but empty path_json
s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1000, ?, '[]')`, rawHex)
// Insert observation with raw_hex and NULL path_json
s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1001, ?, NULL)`, rawHex)
// Insert observation with existing path_json (should NOT be overwritten)
s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1002, ?, '["XX","YY"]')`, rawHex)
// Insert a TRACE transmission (payload_type = 0x09) — should be skipped
traceRaw := "2604302D0D2359FEE7B100000000006733D63367"
s.db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'h2', '2025-01-01T00:00:00Z', 9)`, traceRaw)
s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (2, 1003, ?, '[]')`, traceRaw)
// Remove the migration marker so it runs again on reopen
s.db.Exec(`DELETE FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'`)
s.Close()
// Reopen — backfill is now async, must trigger explicitly
s2, err := OpenStore(dbPath)
if err != nil {
t.Fatal(err)
}
defer s2.Close()
// Trigger async backfill and wait for completion
s2.BackfillPathJSONAsync()
deadline := time.Now().Add(10 * time.Second)
var migCount int
for time.Now().Before(deadline) {
s2.db.QueryRow("SELECT COUNT(*) FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&migCount)
if migCount == 1 {
break
}
time.Sleep(50 * time.Millisecond)
}
if migCount != 1 {
t.Fatalf("migration not recorded")
}
// Row 1 (was '[]') should now have decoded hops
var pj1 string
s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 1").Scan(&pj1)
if pj1 != `["AABB","CCDD"]` {
t.Errorf("row 1 path_json = %q, want %q", pj1, `["AABB","CCDD"]`)
}
// Row 2 (was NULL) should now have decoded hops
var pj2 string
s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 2").Scan(&pj2)
if pj2 != `["AABB","CCDD"]` {
t.Errorf("row 2 path_json = %q, want %q", pj2, `["AABB","CCDD"]`)
}
// Row 3 (had existing data) should NOT be overwritten
var pj3 string
s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 3").Scan(&pj3)
if pj3 != `["XX","YY"]` {
t.Errorf("row 3 path_json = %q, want %q (should not be overwritten)", pj3, `["XX","YY"]`)
}
// Row 4 (TRACE) should NOT be updated
var pj4 string
s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 4").Scan(&pj4)
if pj4 != "[]" {
t.Errorf("row 4 (TRACE) path_json = %q, want %q (should be skipped)", pj4, "[]")
}
}
func TestCleanupLegacyNullHashTimestamp(t *testing.T) {
path := tempDBPath(t)
// Create a bare-bones DB with legacy bad data
db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
if err != nil {
t.Fatal(err)
}
db.Exec(`CREATE TABLE IF NOT EXISTS transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now')),
channel_hash TEXT DEFAULT NULL
)`)
db.Exec(`CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_idx INTEGER,
direction TEXT,
snr REAL,
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL
)`)
db.Exec(`CREATE TABLE IF NOT EXISTS _migrations (name TEXT PRIMARY KEY)`)
db.Exec(`CREATE TABLE IF NOT EXISTS nodes (public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL, last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0, battery_mv INTEGER, temperature_c REAL)`)
db.Exec(`CREATE TABLE IF NOT EXISTS observers (id TEXT PRIMARY KEY, name TEXT, iata TEXT, last_seen TEXT, first_seen TEXT, packet_count INTEGER DEFAULT 0, model TEXT, firmware TEXT, client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL, inactive INTEGER DEFAULT 0, last_packet_at TEXT DEFAULT NULL)`)
// Insert good transmission
db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (1, 'aabb', 'abc123', '2024-01-01T00:00:00Z')`)
db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (1, 1, 1704067200)`)
// Insert bad: empty hash
db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (2, 'ccdd', '', '2024-01-01T00:00:00Z')`)
db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (2, 1, 1704067200)`)
// Insert bad: empty first_seen
db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (3, 'eeff', 'def456', '')`)
db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (3, 2, 1704067200)`)
db.Close()
// Now open via OpenStore which should run the migration
s, err := OpenStore(path)
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Good transmission should remain
var count int
s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 1").Scan(&count)
if count != 1 {
t.Error("good transmission should not be deleted")
}
// Bad transmissions should be gone
s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 2").Scan(&count)
if count != 0 {
t.Errorf("transmission with empty hash should be deleted, got count=%d", count)
}
s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 3").Scan(&count)
if count != 0 {
t.Errorf("transmission with empty first_seen should be deleted, got count=%d", count)
}
// Observations for bad transmissions should be gone
s.db.QueryRow("SELECT COUNT(*) FROM observations WHERE transmission_id IN (2, 3)").Scan(&count)
if count != 0 {
t.Errorf("observations for bad transmissions should be deleted, got count=%d", count)
}
// Observation for good transmission should remain
s.db.QueryRow("SELECT COUNT(*) FROM observations WHERE transmission_id = 1").Scan(&count)
if count != 1 {
t.Error("observation for good transmission should remain")
}
// Migration marker should exist
var migCount int
s.db.QueryRow("SELECT COUNT(*) FROM _migrations WHERE name = 'cleanup_legacy_null_hash_ts'").Scan(&migCount)
if migCount != 1 {
t.Error("migration marker cleanup_legacy_null_hash_ts should be recorded")
}
// Idempotent: opening again should not error
s.Close()
s2, err := OpenStore(path)
if err != nil {
t.Fatal("second open should not fail:", err)
}
s2.Close()
}
func TestBuildPacketDataRegionFromPayload(t *testing.T) {
msg := &MQTTPacketMessage{Raw: "0102030405060708", Region: "PDX"}
decoded := &DecodedPacket{
Header: Header{RouteType: 1, PayloadType: 3},
}
pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
// When payload has region, it should override the topic-derived region
if pkt.Region != "PDX" {
t.Fatalf("expected region PDX from payload, got %q", pkt.Region)
}
}
func TestBuildPacketDataRegionFallsBackToTopic(t *testing.T) {
msg := &MQTTPacketMessage{Raw: "0102030405060708"}
decoded := &DecodedPacket{
Header: Header{RouteType: 1, PayloadType: 3},
}
pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
if pkt.Region != "SJC" {
t.Fatalf("expected region SJC from topic, got %q", pkt.Region)
}
}
// TestBackfillPathJSONAsync verifies that the path_json backfill does NOT block
// OpenStore from returning. MQTT connect happens immediately after OpenStore;
// if the backfill is synchronous, MQTT would be delayed indefinitely on large DBs.
// This test creates pending backfill rows, opens the store, and asserts that
// OpenStore returns before the migration is recorded — proving async execution.
func TestBackfillPathJSONAsync(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "async_test.db")
// Bootstrap schema manually so we can insert test data BEFORE OpenStore
db, err := sql.Open("sqlite", dbPath+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
if err != nil {
t.Fatal(err)
}
// Create tables manually (minimal schema for this test)
_, err = db.Exec(`
CREATE TABLE _migrations (name TEXT PRIMARY KEY);
CREATE TABLE transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL UNIQUE,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now')),
channel_hash TEXT
);
CREATE TABLE observers (
id TEXT PRIMARY KEY,
name TEXT,
iata TEXT,
last_seen TEXT,
first_seen TEXT,
packet_count INTEGER DEFAULT 0,
model TEXT,
firmware TEXT,
client_version TEXT,
radio TEXT,
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL,
inactive INTEGER DEFAULT 0,
last_packet_at TEXT
);
CREATE TABLE nodes (
public_key TEXT PRIMARY KEY,
name TEXT, role TEXT, lat REAL, lon REAL,
last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0,
battery_mv INTEGER, temperature_c REAL
);
CREATE TABLE inactive_nodes (
public_key TEXT PRIMARY KEY,
name TEXT, role TEXT, lat REAL, lon REAL,
last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0,
battery_mv INTEGER, temperature_c REAL
);
CREATE TABLE observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_idx INTEGER,
direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
raw_hex TEXT
);
CREATE UNIQUE INDEX idx_observations_dedup ON observations(transmission_id, observer_idx, COALESCE(path_json, ''));
CREATE INDEX idx_observations_transmission_id ON observations(transmission_id);
CREATE INDEX idx_observations_observer_idx ON observations(observer_idx);
CREATE INDEX idx_observations_timestamp ON observations(timestamp);
CREATE TABLE observer_metrics (
observer_id TEXT NOT NULL,
timestamp TEXT NOT NULL,
noise_floor REAL, tx_air_secs INTEGER, rx_air_secs INTEGER,
recv_errors INTEGER, battery_mv INTEGER,
packets_sent INTEGER, packets_recv INTEGER,
PRIMARY KEY (observer_id, timestamp)
);
CREATE TABLE dropped_packets (
id INTEGER PRIMARY KEY AUTOINCREMENT,
hash TEXT, raw_hex TEXT, reason TEXT NOT NULL,
observer_id TEXT, observer_name TEXT,
node_pubkey TEXT, node_name TEXT,
dropped_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
`)
if err != nil {
t.Fatal("bootstrap schema:", err)
}
// Mark all migrations as done EXCEPT the path_json backfill
for _, m := range []string{
"advert_count_unique_v1", "noise_floor_real_v1", "node_telemetry_v1",
"obs_timestamp_index_v1", "observer_metrics_v1", "observer_metrics_ts_idx",
"observers_inactive_v1", "observer_metrics_packets_v1", "channel_hash_v1",
"dropped_packets_v1", "observations_raw_hex_v1", "observers_last_packet_at_v1",
"cleanup_legacy_null_hash_ts",
} {
db.Exec(`INSERT INTO _migrations (name) VALUES (?)`, m)
}
// Insert a transmission + observations with NULL path_json and valid raw_hex
// raw_hex "0102AABBCCDD0000" has 2-hop path decodable by packetpath
rawHex := "41020304AABBCCDD05060708"
_, err = db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'hash1', '2025-01-01T00:00:00Z', 4)`, rawHex)
if err != nil {
t.Fatal("insert tx:", err)
}
// Insert 100 observations needing backfill
for i := 0; i < 100; i++ {
_, err = db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp, raw_hex, path_json) VALUES (1, ?, ?, ?, NULL)`,
i+1, 1700000000+i, rawHex)
if err != nil {
// dedup index might fire — use unique observer_idx
t.Fatalf("insert obs %d: %v", i, err)
}
}
db.Close()
// Now open store via OpenStore — this must return QUICKLY (non-blocking)
start := time.Now()
store, err := OpenStoreWithInterval(dbPath, 300)
elapsed := time.Since(start)
if err != nil {
t.Fatal("OpenStore:", err)
}
defer store.Close()
// OpenStore must return in under 2 seconds (backfill is no longer in applySchema)
if elapsed > 2*time.Second {
t.Fatalf("OpenStore blocked for %v — backfill must not run in applySchema", elapsed)
}
// Backfill must NOT be recorded yet — it hasn't been triggered
var done int
err = store.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&done)
if err == nil {
t.Fatal("migration recorded during OpenStore — backfill must be async via BackfillPathJSONAsync()")
}
// Now trigger the async backfill (simulates what main.go does after OpenStore)
store.BackfillPathJSONAsync()
// Wait for backfill to complete (should be very fast with 100 rows)
deadline := time.Now().Add(10 * time.Second)
for time.Now().Before(deadline) {
err = store.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&done)
if err == nil {
break
}
time.Sleep(100 * time.Millisecond)
}
if err != nil {
t.Fatal("backfill never completed within 10s")
}
// Verify backfill actually worked — observations should have non-NULL path_json
var nullCount int
store.db.QueryRow("SELECT COUNT(*) FROM observations WHERE path_json IS NULL").Scan(&nullCount)
if nullCount > 0 {
t.Errorf("backfill left %d observations with NULL path_json", nullCount)
}
}
// TestBackfillPathJSONAsyncMethodExists verifies the async backfill API surface
// exists — BackfillPathJSONAsync must be callable independently from OpenStore.
func TestBackfillPathJSONAsyncMethodExists(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "method_test.db")
store, err := OpenStoreWithInterval(dbPath, 300)
if err != nil {
t.Fatal(err)
}
defer store.Close()
// BackfillPathJSONAsync must exist as a method on *Store
// This is a compile-time check — if the method doesn't exist, the test won't compile.
store.BackfillPathJSONAsync()
}
+22 -1
View File
@@ -12,6 +12,7 @@ import (
"strings"
"unicode/utf8"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -130,6 +131,7 @@ type Payload struct {
SenderTimestamp uint32 `json:"sender_timestamp,omitempty"`
EphemeralPubKey string `json:"ephemeralPubKey,omitempty"`
PathData string `json:"pathData,omitempty"`
SNRValues []float64 `json:"snrValues,omitempty"`
Tag uint32 `json:"tag,omitempty"`
AuthCode uint32 `json:"authCode,omitempty"`
TraceFlags *int `json:"traceFlags,omitempty"`
@@ -192,8 +194,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
@@ -597,6 +600,9 @@ func DecodePacket(hexString string, channelKeys map[string]string, validateSigna
// We expose hopsCompleted (count of SNR bytes) so consumers can distinguish
// how far the trace got vs the full intended route.
var anomaly string
if header.PayloadType == PayloadTRACE && payload.Error != "" {
anomaly = fmt.Sprintf("TRACE payload decode failed: %s", payload.Error)
}
if header.PayloadType == PayloadTRACE && payload.PathData != "" {
// Flag anomalous routing — firmware only sends TRACE as DIRECT
if header.RouteType != RouteDirect && header.RouteType != RouteTransportDirect {
@@ -604,6 +610,21 @@ func DecodePacket(hexString string, channelKeys map[string]string, validateSigna
}
// The header path hops count represents SNR entries = completed hops
hopsCompleted := path.HashCount
// Extract per-hop SNR from header path bytes (int8, quarter-dB encoding).
// Mirrors cmd/server/decoder.go — must be done at ingest time so SNR
// values are persisted in decoded_json (server endpoint serves DB as-is).
if hopsCompleted > 0 && len(path.Hops) >= hopsCompleted {
snrVals := make([]float64, 0, hopsCompleted)
for i := 0; i < hopsCompleted; i++ {
b, err := hex.DecodeString(path.Hops[i])
if err == nil && len(b) == 1 {
snrVals = append(snrVals, float64(int8(b[0]))/4.0)
}
}
if len(snrVals) > 0 {
payload.SNRValues = snrVals
}
}
pathBytes, err := hex.DecodeString(payload.PathData)
if err == nil && payload.TraceFlags != nil {
// path_sz from flags byte is a power-of-two exponent per firmware:
+154
View File
@@ -11,6 +11,7 @@ import (
"strings"
"testing"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -1822,3 +1823,156 @@ func TestDecodeAdvertWithSignatureValidation(t *testing.T) {
t.Error("SignatureValid should be nil when validation disabled")
}
}
// === Tests for DecodePathFromRawHex (issue #886) ===
func TestDecodePathFromRawHex_HashSize1(t *testing.T) {
// Header byte 0x26 = route_type DIRECT, payload TRACE
// Path byte 0x04 = hash_size 1 (bits 7-6 = 00 → 0+1=1), hash_count 4
// Path bytes: 30 2D 0D 23
raw := "2604302D0D2359FEE7B100000000006733D63367"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"30", "2D", "0D", "23"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize2(t *testing.T) {
// Path byte 0x42 = hash_size 2 (bits 7-6 = 01 → 1+1=2), hash_count 2
// Header 0x09 = FLOOD route (rt=1), payload ADVERT (pt=2)
// Path bytes: AABB CCDD (4 bytes = 2 hops * 2 bytes)
raw := "0942AABBCCDD" + "00000000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AABB", "CCDD"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize3(t *testing.T) {
// Path byte 0x81 = hash_size 3 (bits 7-6 = 10 → 2+1=3), hash_count 1
// Header 0x09 = FLOOD route (rt=1), payload ADVERT
raw := "0981AABBCC" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCC" {
t.Fatalf("got %v, want [AABBCC]", hops)
}
}
func TestDecodePathFromRawHex_HashSize4(t *testing.T) {
// Path byte 0xC1 = hash_size 4 (bits 7-6 = 11 → 3+1=4), hash_count 1
// Header 0x09 = FLOOD route (rt=1)
raw := "09C1AABBCCDD" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCCDD" {
t.Fatalf("got %v, want [AABBCCDD]", hops)
}
}
func TestDecodePathFromRawHex_DirectZeroHops(t *testing.T) {
// Path byte 0x00 = hash_size 1, hash_count 0
// Header 0x0A = DIRECT route (rt=2), payload ADVERT
raw := "0A00" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 0 {
t.Fatalf("got %d hops, want 0", len(hops))
}
}
func TestDecodePathFromRawHex_Transport(t *testing.T) {
// Route type 3 = TRANSPORT_DIRECT → 4 transport code bytes before path byte
// Header 0x27 = route_type 3, payload TRACE
// Transport codes: 1122 3344
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
raw := "2711223344" + "02AABB" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AA", "BB"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodeTracePayloadFailSetsAnomaly(t *testing.T) {
// Issue #889: TRACE packet with payload too short to decode (< 9 bytes)
// should still return a DecodedPacket (observation stored) but with Anomaly
// set to warn operators that the decode was degraded.
// Packet: header 0x26 (TRACE+DIRECT), pathByte 0x00, payload 4 bytes (too short).
pkt, err := DecodePacket("2600aabbccdd", nil, false)
if err != nil {
t.Fatalf("DecodePacket error: %v", err)
}
if pkt.Payload.Type != "TRACE" {
t.Fatalf("payload type=%s, want TRACE", pkt.Payload.Type)
}
if pkt.Payload.Error == "" {
t.Fatal("expected payload.Error to indicate decode failure")
}
// The key assertion: Anomaly must be set when TRACE decode fails
if pkt.Anomaly == "" {
t.Error("expected Anomaly to be set when TRACE payload decode fails but observation is stored")
}
}
// TestDecodeTraceExtractsSNRValues verifies that for TRACE packets, the header
// path bytes are interpreted as int8 SNR values (quarter-dB) and exposed via
// payload.SNRValues. Mirrors logic in cmd/server/decoder.go (issue: SNR values
// extracted by server but never written into decoded_json by ingestor).
//
// Packet 26022FF8116A23A80000000001C0DE1000DEDE:
// header 0x26 → TRACE (pt=9), DIRECT (rt=2)
// pathByte 0x02 → hash_size=1, hash_count=2
// header path: 2F F8 → SNR = [int8(0x2F)/4, int8(0xF8)/4] = [11.75, -2.0]
// payload (15B): tag=116A23A8 auth=00000000 flags=0x01 pathData=C0DE1000DEDE
func TestDecodeTraceExtractsSNRValues(t *testing.T) {
pkt, err := DecodePacket("26022FF8116A23A80000000001C0DE1000DEDE", nil, false)
if err != nil {
t.Fatalf("DecodePacket error: %v", err)
}
if pkt.Payload.Type != "TRACE" {
t.Fatalf("payload type=%s, want TRACE", pkt.Payload.Type)
}
if len(pkt.Payload.SNRValues) != 2 {
t.Fatalf("len(SNRValues)=%d, want 2 (got %v)", len(pkt.Payload.SNRValues), pkt.Payload.SNRValues)
}
if pkt.Payload.SNRValues[0] != 11.75 {
t.Errorf("SNRValues[0]=%v, want 11.75", pkt.Payload.SNRValues[0])
}
if pkt.Payload.SNRValues[1] != -2.0 {
t.Errorf("SNRValues[1]=%v, want -2.0", pkt.Payload.SNRValues[1])
}
}
+112
View File
@@ -0,0 +1,112 @@
package main
import (
"testing"
)
// TestHandleMessageAdvertForeign_FlagModeStoresWithFlag asserts that when an
// ADVERT comes from a node whose GPS is OUTSIDE the configured geofilter,
// the ingestor (in default "flag" mode) stores the node and marks it foreign,
// instead of silently dropping it (#730).
func TestHandleMessageAdvertForeign_FlagModeStoresWithFlag(t *testing.T) {
store, source := newTestContext(t)
// Real ADVERT raw hex from existing TestHandleMessageAdvertGeoFiltered.
// Decoder will produce a node with a known GPS — the test below just
// asserts that with a tight geofilter that EXCLUDES that GPS, the node
// is still stored AND tagged as foreign.
rawHex := "120046D62DE27D4C5194D7821FC5A34A45565DCC2537B300B9AB6275255CEFB65D840CE5C169C94C9AED39E8BCB6CB6EB0335497A198B33A1A610CD3B03D8DCFC160900E5244280323EE0B44CACAB8F02B5B38B91CFA18BD067B0B5E63E94CFC85F758A8530B9240933402E0E6B8F84D5252322D52"
latMin, latMax := -1.0, 1.0
lonMin, lonMax := -1.0, 1.0
gf := &GeoFilterConfig{
LatMin: &latMin, LatMax: &latMax,
LonMin: &lonMin, LonMax: &lonMax,
}
msg := &mockMessage{
topic: "meshcore/SJC/obs1/packets",
payload: []byte(`{"raw":"` + rawHex + `"}`),
}
// Default mode (no ForeignAdverts.Mode set) MUST be "flag", per #730 design.
handleMessage(store, "test", source, msg, nil, &Config{GeoFilter: gf})
var nodeCount int
if err := store.db.QueryRow("SELECT COUNT(*) FROM nodes").Scan(&nodeCount); err != nil {
t.Fatal(err)
}
if nodeCount != 1 {
t.Fatalf("nodes=%d, want 1 (foreign advert should be stored, not dropped, in flag mode)", nodeCount)
}
var foreign int
if err := store.db.QueryRow("SELECT foreign_advert FROM nodes").Scan(&foreign); err != nil {
t.Fatalf("foreign_advert column missing or unreadable: %v", err)
}
if foreign != 1 {
t.Errorf("foreign_advert=%d, want 1", foreign)
}
}
// TestHandleMessageAdvertForeign_DropModeStillDrops asserts the legacy
// drop-on-foreign behavior is preserved when ForeignAdverts.Mode = "drop".
func TestHandleMessageAdvertForeign_DropModeStillDrops(t *testing.T) {
store, source := newTestContext(t)
rawHex := "120046D62DE27D4C5194D7821FC5A34A45565DCC2537B300B9AB6275255CEFB65D840CE5C169C94C9AED39E8BCB6CB6EB0335497A198B33A1A610CD3B03D8DCFC160900E5244280323EE0B44CACAB8F02B5B38B91CFA18BD067B0B5E63E94CFC85F758A8530B9240933402E0E6B8F84D5252322D52"
latMin, latMax := -1.0, 1.0
lonMin, lonMax := -1.0, 1.0
gf := &GeoFilterConfig{
LatMin: &latMin, LatMax: &latMax,
LonMin: &lonMin, LonMax: &lonMax,
}
msg := &mockMessage{
topic: "meshcore/SJC/obs1/packets",
payload: []byte(`{"raw":"` + rawHex + `"}`),
}
cfg := &Config{
GeoFilter: gf,
ForeignAdverts: &ForeignAdvertConfig{Mode: "drop"},
}
handleMessage(store, "test", source, msg, nil, cfg)
var nodeCount int
if err := store.db.QueryRow("SELECT COUNT(*) FROM nodes").Scan(&nodeCount); err != nil {
t.Fatal(err)
}
if nodeCount != 0 {
t.Errorf("nodes=%d, want 0 (drop mode preserves legacy silent-drop behavior)", nodeCount)
}
}
// TestHandleMessageAdvertInRegion_NotFlaggedForeign asserts in-region
// adverts are NOT marked foreign.
func TestHandleMessageAdvertInRegion_NotFlaggedForeign(t *testing.T) {
store, source := newTestContext(t)
rawHex := "120046D62DE27D4C5194D7821FC5A34A45565DCC2537B300B9AB6275255CEFB65D840CE5C169C94C9AED39E8BCB6CB6EB0335497A198B33A1A610CD3B03D8DCFC160900E5244280323EE0B44CACAB8F02B5B38B91CFA18BD067B0B5E63E94CFC85F758A8530B9240933402E0E6B8F84D5252322D52"
// Wide-open geofilter: every coord passes.
latMin, latMax := -90.0, 90.0
lonMin, lonMax := -180.0, 180.0
gf := &GeoFilterConfig{
LatMin: &latMin, LatMax: &latMax,
LonMin: &lonMin, LonMax: &lonMax,
}
msg := &mockMessage{
topic: "meshcore/SJC/obs1/packets",
payload: []byte(`{"raw":"` + rawHex + `"}`),
}
handleMessage(store, "test", source, msg, nil, &Config{GeoFilter: gf})
var foreign int
err := store.db.QueryRow("SELECT foreign_advert FROM nodes").Scan(&foreign)
if err != nil {
t.Fatalf("query foreign_advert: %v", err)
}
if foreign != 0 {
t.Errorf("foreign_advert=%d, want 0 (in-region node)", foreign)
}
}
+8
View File
@@ -13,6 +13,14 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require github.com/meshcore-analyzer/dbconfig v0.0.0
replace github.com/meshcore-analyzer/dbconfig => ../../internal/dbconfig
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+129 -27
View File
@@ -57,6 +57,12 @@ func main() {
defer store.Close()
log.Printf("SQLite opened: %s", cfg.DBPath)
// Async backfill: path_json from raw_hex (#888) — must not block MQTT startup
store.BackfillPathJSONAsync()
// Check auto_vacuum mode and optionally migrate (#919)
store.CheckAutoVacuum(cfg)
// Node retention: move stale nodes to inactive_nodes on startup
nodeDays := cfg.NodeDaysOrDefault()
store.MoveStaleNodes(nodeDays)
@@ -69,12 +75,15 @@ func main() {
metricsDays := cfg.MetricsRetentionDays()
store.PruneOldMetrics(metricsDays)
store.PruneDroppedPackets(metricsDays)
vacuumPages := cfg.IncrementalVacuumPages()
store.RunIncrementalVacuum(vacuumPages)
// Daily ticker for node retention
retentionTicker := time.NewTicker(1 * time.Hour)
go func() {
for range retentionTicker.C {
store.MoveStaleNodes(nodeDays)
store.RunIncrementalVacuum(vacuumPages)
}
}()
@@ -83,8 +92,10 @@ func main() {
go func() {
time.Sleep(90 * time.Second) // stagger after metrics prune
store.RemoveStaleObservers(observerDays)
store.RunIncrementalVacuum(vacuumPages)
for range observerRetentionTicker.C {
store.RemoveStaleObservers(observerDays)
store.RunIncrementalVacuum(vacuumPages)
}
}()
@@ -94,6 +105,7 @@ func main() {
for range metricsRetentionTicker.C {
store.PruneOldMetrics(metricsDays)
store.PruneDroppedPackets(metricsDays)
store.RunIncrementalVacuum(vacuumPages)
}
}()
@@ -114,29 +126,16 @@ func main() {
// Connect to each MQTT source
var clients []mqtt.Client
connectedCount := 0
for _, source := range sources {
tag := source.Name
if tag == "" {
tag = source.Broker
}
opts := mqtt.NewClientOptions().
AddBroker(source.Broker).
SetAutoReconnect(true).
SetConnectRetry(true).
SetOrderMatters(true)
if source.Username != "" {
opts.SetUsername(source.Username)
}
if source.Password != "" {
opts.SetPassword(source.Password)
}
if source.RejectUnauthorized != nil && !*source.RejectUnauthorized {
opts.SetTLSConfig(&tls.Config{InsecureSkipVerify: true})
} else if strings.HasPrefix(source.Broker, "ssl://") {
opts.SetTLSConfig(&tls.Config{})
}
opts := buildMQTTOpts(source)
connectTimeout := source.ConnectTimeoutOrDefault()
log.Printf("MQTT [%s] connect timeout: %ds", tag, connectTimeout)
opts.SetOnConnectHandler(func(c mqtt.Client) {
log.Printf("MQTT [%s] connected to %s", tag, source.Broker)
@@ -156,7 +155,11 @@ func main() {
})
opts.SetConnectionLostHandler(func(c mqtt.Client, err error) {
log.Printf("MQTT [%s] disconnected: %v", tag, err)
log.Printf("MQTT [%s] disconnected from %s: %v", tag, source.Broker, err)
})
opts.SetReconnectingHandler(func(c mqtt.Client, options *mqtt.ClientOptions) {
log.Printf("MQTT [%s] reconnecting to %s", tag, source.Broker)
})
// Capture source for closure
@@ -167,19 +170,43 @@ func main() {
client := mqtt.NewClient(opts)
token := client.Connect()
token.Wait()
if token.Error() != nil {
log.Printf("MQTT [%s] connection failed (non-fatal): %v", tag, token.Error())
// With ConnectRetry=true, token.Wait() blocks forever for unreachable brokers.
// WaitTimeout lets startup proceed; the client keeps retrying in the background
// and OnConnect fires (subscribing) when it eventually connects (#910).
if !token.WaitTimeout(time.Duration(connectTimeout) * time.Second) {
log.Printf("MQTT [%s] initial connection timed out — retrying in background", tag)
clients = append(clients, client)
continue
}
if token.Error() != nil {
log.Printf("MQTT [%s] connection failed (non-fatal): %v", tag, token.Error())
// BL1 fix: Disconnect to stop Paho's internal retry goroutines.
// With ConnectRetry=true, Connect() spawns background goroutines
// that leak if the client is simply discarded.
client.Disconnect(0)
continue
}
connectedCount++
clients = append(clients, client)
}
if len(clients) == 0 {
log.Fatal("no MQTT connections established — check broker is running (default: mqtt://localhost:1883). Set MQTT_BROKER env var or configure mqttSources in config.json")
// BL2 fix: require at least one immediately-connected source. Timed-out
// clients are retrying in background (tracked in clients) but don't count
// as "connected" — a single unreachable broker must not silently run with
// zero active connections.
if connectedCount == 0 {
// Clean up any timed-out clients still retrying
for _, c := range clients {
c.Disconnect(0)
}
log.Fatal("no MQTT sources connected — all timed out or failed. Check broker is running (default: mqtt://localhost:1883). Set MQTT_BROKER env var or configure mqttSources in config.json")
}
log.Printf("Running — %d MQTT source(s) connected", len(clients))
if connectedCount < len(clients) {
log.Printf("Running — %d MQTT source(s) connected, %d retrying in background", connectedCount, len(clients)-connectedCount)
} else {
log.Printf("Running — %d MQTT source(s) connected", connectedCount)
}
// Wait for shutdown signal
sig := make(chan os.Signal, 1)
@@ -197,6 +224,32 @@ func main() {
log.Println("Done.")
}
// buildMQTTOpts creates MQTT client options for a source with bounded reconnect
// backoff, connect timeout, and TLS/auth configuration.
func buildMQTTOpts(source MQTTSource) *mqtt.ClientOptions {
opts := mqtt.NewClientOptions().
AddBroker(source.Broker).
SetAutoReconnect(true).
SetConnectRetry(true).
SetOrderMatters(true).
SetMaxReconnectInterval(30 * time.Second).
SetConnectTimeout(10 * time.Second).
SetWriteTimeout(10 * time.Second)
if source.Username != "" {
opts.SetUsername(source.Username)
}
if source.Password != "" {
opts.SetPassword(source.Password)
}
if source.RejectUnauthorized != nil && !*source.RejectUnauthorized {
opts.SetTLSConfig(&tls.Config{InsecureSkipVerify: true})
} else if strings.HasPrefix(source.Broker, "ssl://") {
opts.SetTLSConfig(&tls.Config{})
}
return opts
}
func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message, channelKeys map[string]string, cfg *Config) {
defer func() {
if r := recover(); r != nil {
@@ -217,8 +270,21 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
return
}
// Observer blacklist: drop ALL messages from blacklisted observers before any
// DB writes (status, metrics, packets). Trumps IATA filter.
if len(parts) > 2 && cfg.IsObserverBlacklisted(parts[2]) {
log.Printf("MQTT [%s] observer %.8s blacklisted, dropping", tag, parts[2])
return
}
// Global observer IATA whitelist: if configured, drop messages from observers
// in non-whitelisted IATA regions. Applies to ALL message types (status + packets).
if len(parts) > 1 && !cfg.IsObserverIATAAllowed(parts[1]) {
return
}
// Status topic: meshcore/<region>/<observer_id>/status
// IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
// Per-source IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
// is region-independent and should be accepted from all observers regardless of
// which IATA regions are configured for packet ingestion.
if len(parts) >= 4 && parts[3] == "status" {
@@ -282,8 +348,16 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
if len(parts) > 1 {
region = parts[1]
}
// Fallback to source-level region config when topic has no region (#788)
if region == "" && source.Region != "" {
region = source.Region
}
mqttMsg := &MQTTPacketMessage{Raw: rawHex}
// Parse optional region from JSON payload (#788)
if v, ok := msg["region"].(string); ok && v != "" {
mqttMsg.Region = v
}
if v, ok := msg["SNR"]; ok {
if f, ok := toFloat64(v); ok {
mqttMsg.SNR = &f
@@ -348,10 +422,28 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
})
return
}
foreign := false
if !NodePassesGeoFilter(decoded.Payload.Lat, decoded.Payload.Lon, cfg.GeoFilter) {
return
if cfg.ForeignAdverts.IsDropMode() {
return
}
foreign = true
lat, lon := 0.0, 0.0
if decoded.Payload.Lat != nil {
lat = *decoded.Payload.Lat
}
if decoded.Payload.Lon != nil {
lon = *decoded.Payload.Lon
}
truncPK := decoded.Payload.PubKey
if len(truncPK) > 16 {
truncPK = truncPK[:16]
}
log.Printf("MQTT [%s] foreign advert: node=%s name=%s lat=%.4f lon=%.4f observer=%s",
tag, truncPK, decoded.Payload.Name, lat, lon, firstNonEmpty(mqttMsg.Origin, observerID))
}
pktData := BuildPacketData(mqttMsg, decoded, observerID, region)
pktData.Foreign = foreign
isNew, err := store.InsertTransmission(pktData)
if err != nil {
log.Printf("MQTT [%s] db insert error: %v", tag, err)
@@ -360,6 +452,11 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
if err := store.UpsertNode(decoded.Payload.PubKey, decoded.Payload.Name, role, decoded.Payload.Lat, decoded.Payload.Lon, pktData.Timestamp); err != nil {
log.Printf("MQTT [%s] node upsert error: %v", tag, err)
}
if foreign {
if err := store.MarkNodeForeign(decoded.Payload.PubKey); err != nil {
log.Printf("MQTT [%s] mark foreign error: %v", tag, err)
}
}
if isNew {
if err := store.IncrementAdvertCount(decoded.Payload.PubKey); err != nil {
log.Printf("MQTT [%s] advert count error: %v", tag, err)
@@ -383,7 +480,12 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
// Upsert observer
if observerID != "" {
origin, _ := msg["origin"].(string)
if err := store.UpsertObserver(observerID, origin, region, nil); err != nil {
// Use effective region: payload > topic > source config (#788)
effectiveRegion := region
if mqttMsg.Region != "" {
effectiveRegion = mqttMsg.Region
}
if err := store.UpsertObserver(observerID, origin, effectiveRegion, nil); err != nil {
log.Printf("MQTT [%s] observer upsert error: %v", tag, err)
}
}
+155
View File
@@ -5,8 +5,11 @@ import (
"math"
"os"
"path/filepath"
"runtime"
"testing"
"time"
mqtt "github.com/eclipse/paho.mqtt.golang"
)
func TestToFloat64(t *testing.T) {
@@ -780,3 +783,155 @@ func TestIATAFilterDoesNotDropStatusMessages(t *testing.T) {
t.Error("packet from out-of-region BFL should still be filtered by IATA")
}
}
// TestMQTTConnectRetryTimeoutDoesNotBlock verifies that WaitTimeout returns within
// the deadline for an unreachable broker when ConnectRetry=true (#910). Previously,
// token.Wait() would block forever in this configuration.
func TestMQTTConnectRetryTimeoutDoesNotBlock(t *testing.T) {
opts := mqtt.NewClientOptions().
AddBroker("tcp://127.0.0.1:1"). // port 1 — nothing listening, fast refusal
SetConnectRetry(true).
SetAutoReconnect(true)
client := mqtt.NewClient(opts)
token := client.Connect()
defer client.Disconnect(100)
start := time.Now()
connected := token.WaitTimeout(3 * time.Second)
elapsed := time.Since(start)
if connected {
t.Skip("port 1 unexpectedly accepted a connection — skipping")
}
if elapsed > 4*time.Second {
t.Errorf("WaitTimeout blocked for %v — token.Wait() would block forever with ConnectRetry=true", elapsed)
}
}
// TestBL1_GoroutineLeakOnHardFailure reproduces BLOCKER 1: without Disconnect()
// on the error path, Paho's internal retry goroutines leak when a client is
// discarded after Connect() with ConnectRetry=true.
//
// We prove the leak by creating N clients WITHOUT Disconnect — goroutines grow
// proportionally. The fix (client.Disconnect(0) before continue) prevents this.
func TestBL1_GoroutineLeakOnHardFailure(t *testing.T) {
runtime.GC()
time.Sleep(100 * time.Millisecond)
baseline := runtime.NumGoroutine()
// Create multiple clients connected to unreachable broker, WITHOUT disconnecting.
// Each one spawns Paho retry goroutines that accumulate.
const numClients = 10
clients := make([]mqtt.Client, numClients)
for i := 0; i < numClients; i++ {
opts := mqtt.NewClientOptions().
AddBroker("tcp://127.0.0.1:1").
SetConnectRetry(true).
SetAutoReconnect(true).
SetConnectTimeout(500 * time.Millisecond)
c := mqtt.NewClient(opts)
tok := c.Connect()
tok.WaitTimeout(1 * time.Second)
clients[i] = c
}
time.Sleep(200 * time.Millisecond)
leaked := runtime.NumGoroutine()
goroutineGrowth := leaked - baseline
// Clean up to not actually leak in test
for _, c := range clients {
c.Disconnect(0)
}
t.Logf("baseline=%d, after %d undisconnected clients=%d, growth=%d",
baseline, numClients, leaked, goroutineGrowth)
// With ConnectRetry=true, each Connect() spawns retry goroutines.
// Without Disconnect, these accumulate. Verify growth is meaningful.
if goroutineGrowth < 3 {
t.Skip("Connect didn't spawn enough extra goroutines to measure leak")
}
// The fix: calling client.Disconnect(0) on the error path prevents accumulation.
// Anti-tautology: removing the Disconnect(0) call from main.go's error path
// would cause goroutine accumulation proportional to failed broker count.
t.Logf("CONFIRMED: %d leaked goroutines from %d clients without Disconnect — fix adds Disconnect(0) on error path", goroutineGrowth, numClients)
}
// TestBL2_ZeroConnectedFatals verifies BLOCKER 2: when all brokers are unreachable,
// connectedCount==0 must be detected. We test the logic directly — if only timed-out
// clients exist (appended to clients slice) but connectedCount is 0, the guard triggers.
func TestBL2_ZeroConnectedFatals(t *testing.T) {
// Simulate the connection loop result: 1 timed-out client, 0 connected
var clients []mqtt.Client
connectedCount := 0
// Create a client that times out (unreachable broker)
opts := mqtt.NewClientOptions().
AddBroker("tcp://127.0.0.1:1").
SetConnectRetry(true).
SetAutoReconnect(true)
client := mqtt.NewClient(opts)
token := client.Connect()
if !token.WaitTimeout(2 * time.Second) {
// Timed out — PR #926 appends to clients
clients = append(clients, client)
}
defer func() {
for _, c := range clients {
c.Disconnect(0)
}
}()
// OLD bug: len(clients) == 0 would be false (1 timed-out client in list)
// → ingestor would silently run with zero connections
if len(clients) == 0 {
t.Fatal("expected timed-out client to be in clients slice")
}
// NEW fix: connectedCount == 0 catches this
if connectedCount != 0 {
t.Errorf("connectedCount should be 0, got %d", connectedCount)
}
// The real code does: if connectedCount == 0 { log.Fatal(...) }
// This test proves len(clients) > 0 but connectedCount == 0 — the old guard
// would have missed it.
if len(clients) > 0 && connectedCount == 0 {
t.Log("BL2 confirmed: old guard len(clients)==0 would NOT fatal; new guard connectedCount==0 correctly catches zero-connected state")
}
}
func TestHandleMessageObserverIATAWhitelist(t *testing.T) {
store := newTestStore(t)
source := MQTTSource{Name: "test"}
cfg := &Config{
ObserverIATAWhitelist: []string{"ARN"},
}
// Message from non-whitelisted region GOT — should be dropped
handleMessage(store, "test", source, &mockMessage{
topic: "meshcore/GOT/obs1/status",
payload: []byte(`{"origin":"node1","noise_floor":-110}`),
}, nil, cfg)
var count int
store.db.QueryRow("SELECT COUNT(*) FROM observers WHERE id='obs1'").Scan(&count)
if count != 0 {
t.Error("observer from non-whitelisted IATA GOT should be dropped")
}
// Message from whitelisted region ARN — should be accepted
handleMessage(store, "test", source, &mockMessage{
topic: "meshcore/ARN/obs2/status",
payload: []byte(`{"origin":"node2","noise_floor":-105}`),
}, nil, cfg)
store.db.QueryRow("SELECT COUNT(*) FROM observers WHERE id='obs2'").Scan(&count)
if count != 1 {
t.Errorf("observer from whitelisted IATA ARN should be accepted, got count=%d", count)
}
}
+76
View File
@@ -0,0 +1,76 @@
package main
import (
"testing"
"time"
)
func TestBuildMQTTOpts_ReconnectSettings(t *testing.T) {
source := MQTTSource{
Broker: "tcp://localhost:1883",
Name: "test",
}
opts := buildMQTTOpts(source)
if opts.MaxReconnectInterval != 30*time.Second {
t.Errorf("MaxReconnectInterval = %v, want 30s", opts.MaxReconnectInterval)
}
if opts.ConnectTimeout != 10*time.Second {
t.Errorf("ConnectTimeout = %v, want 10s", opts.ConnectTimeout)
}
if opts.WriteTimeout != 10*time.Second {
t.Errorf("WriteTimeout = %v, want 10s", opts.WriteTimeout)
}
if !opts.AutoReconnect {
t.Error("AutoReconnect should be true")
}
if !opts.ConnectRetry {
t.Error("ConnectRetry should be true")
}
}
func TestBuildMQTTOpts_Credentials(t *testing.T) {
source := MQTTSource{
Broker: "tcp://broker:1883",
Username: "user1",
Password: "pass1",
}
opts := buildMQTTOpts(source)
if opts.Username != "user1" {
t.Errorf("Username = %q, want %q", opts.Username, "user1")
}
if opts.Password != "pass1" {
t.Errorf("Password = %q, want %q", opts.Password, "pass1")
}
}
func TestBuildMQTTOpts_TLS_InsecureSkipVerify(t *testing.T) {
f := false
source := MQTTSource{
Broker: "ssl://broker:8883",
RejectUnauthorized: &f,
}
opts := buildMQTTOpts(source)
if opts.TLSConfig == nil {
t.Fatal("TLSConfig should be set")
}
if !opts.TLSConfig.InsecureSkipVerify {
t.Error("InsecureSkipVerify should be true when RejectUnauthorized=false")
}
}
func TestBuildMQTTOpts_TLS_SSL_Prefix(t *testing.T) {
source := MQTTSource{
Broker: "ssl://broker:8883",
}
opts := buildMQTTOpts(source)
if opts.TLSConfig == nil {
t.Fatal("TLSConfig should be set for ssl:// brokers")
}
if opts.TLSConfig.InsecureSkipVerify {
t.Error("InsecureSkipVerify should be false by default")
}
}
+43
View File
@@ -0,0 +1,43 @@
package main
import (
"testing"
)
func TestIngestorIsObserverBlacklisted(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"OBS1", "obs2"},
}
tests := []struct {
id string
want bool
}{
{"OBS1", true},
{"obs1", true},
{"OBS2", true},
{"obs3", false},
{"", false},
}
for _, tt := range tests {
got := cfg.IsObserverBlacklisted(tt.id)
if got != tt.want {
t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
}
}
}
func TestIngestorIsObserverBlacklistedEmpty(t *testing.T) {
cfg := &Config{}
if cfg.IsObserverBlacklisted("anything") {
t.Error("empty blacklist should not match")
}
}
func TestIngestorIsObserverBlacklistedNil(t *testing.T) {
var cfg *Config
if cfg.IsObserverBlacklisted("anything") {
t.Error("nil config should not match")
}
}
+96
View File
@@ -0,0 +1,96 @@
package main
import (
"encoding/json"
"testing"
)
// Regression test for #1044: observer metadata (model, firmware, battery_mv,
// noise_floor) is silently dropped when an MQTT status payload arrives, even
// though the same payload's `radio` and `client_version` fields ARE persisted.
//
// Real-world payload captured from the production MQTT bridge:
//
// {"status":"online","origin":"TestObserver","origin_id":"AABBCCDD",
// "radio":"910.5250244,62.5,7,5",
// "model":"Heltec V3",
// "firmware_version":"1.12.0-test",
// "client_version":"meshcoretomqtt/1.0.8.0",
// "stats":{"battery_mv":4209,"uptime_secs":75821,"noise_floor":-109,
// "tx_air_secs":80,"rx_air_secs":1903,"recv_errors":934}}
func TestStatusMessageMetadataPersisted_Issue1044(t *testing.T) {
const payload = `{"status":"online","origin":"TestObserver","origin_id":"AABBCCDD","radio":"910.5250244,62.5,7,5","model":"Heltec V3","firmware_version":"1.12.0-test","client_version":"meshcoretomqtt/1.0.8.0","stats":{"battery_mv":4209,"uptime_secs":75821,"noise_floor":-109,"tx_air_secs":80,"rx_air_secs":1903,"recv_errors":934}}`
var msg map[string]interface{}
if err := json.Unmarshal([]byte(payload), &msg); err != nil {
t.Fatalf("unmarshal: %v", err)
}
meta := extractObserverMeta(msg)
if meta == nil {
t.Fatal("extractObserverMeta returned nil for a payload that contains model/firmware/battery_mv")
}
if meta.Model == nil || *meta.Model != "Heltec V3" {
t.Errorf("meta.Model = %v, want \"Heltec V3\"", meta.Model)
}
if meta.Firmware == nil || *meta.Firmware != "1.12.0-test" {
t.Errorf("meta.Firmware = %v, want \"1.12.0-test\"", meta.Firmware)
}
if meta.ClientVersion == nil || *meta.ClientVersion != "meshcoretomqtt/1.0.8.0" {
t.Errorf("meta.ClientVersion = %v, want \"meshcoretomqtt/1.0.8.0\"", meta.ClientVersion)
}
if meta.Radio == nil || *meta.Radio != "910.5250244,62.5,7,5" {
t.Errorf("meta.Radio = %v, want radio string", meta.Radio)
}
if meta.BatteryMv == nil || *meta.BatteryMv != 4209 {
t.Errorf("meta.BatteryMv = %v, want 4209", meta.BatteryMv)
}
if meta.NoiseFloor == nil || *meta.NoiseFloor != -109 {
t.Errorf("meta.NoiseFloor = %v, want -109", meta.NoiseFloor)
}
if meta.UptimeSecs == nil || *meta.UptimeSecs != 75821 {
t.Errorf("meta.UptimeSecs = %v, want 75821", meta.UptimeSecs)
}
// Now drive the meta through UpsertObserver and verify the row.
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
if err := s.UpsertObserver("AABBCCDD", "TestObserver", "SJC", meta); err != nil {
t.Fatalf("UpsertObserver: %v", err)
}
var (
gotModel, gotFirmware, gotClientVersion, gotRadio string
gotBattery int
gotUptime int64
gotNoise float64
)
err = s.db.QueryRow(`SELECT model, firmware, client_version, radio,
battery_mv, uptime_secs, noise_floor
FROM observers WHERE id = 'AABBCCDD'`).Scan(
&gotModel, &gotFirmware, &gotClientVersion, &gotRadio,
&gotBattery, &gotUptime, &gotNoise,
)
if err != nil {
t.Fatalf("scan observer row: %v", err)
}
if gotModel != "Heltec V3" {
t.Errorf("DB model = %q, want \"Heltec V3\"", gotModel)
}
if gotFirmware != "1.12.0-test" {
t.Errorf("DB firmware = %q, want \"1.12.0-test\"", gotFirmware)
}
if gotBattery != 4209 {
t.Errorf("DB battery_mv = %d, want 4209", gotBattery)
}
if gotUptime != 75821 {
t.Errorf("DB uptime_secs = %d, want 75821", gotUptime)
}
if gotNoise != -109 {
t.Errorf("DB noise_floor = %f, want -109", gotNoise)
}
}
+89
View File
@@ -0,0 +1,89 @@
package main
import (
"fmt"
"io"
"log"
"net/http"
"os"
"path/filepath"
"strings"
"time"
)
// handleBackup streams a consistent SQLite snapshot of the analyzer DB.
//
// Requires API-key authentication (mounted via requireAPIKey in routes.go).
//
// Strategy: SQLite's `VACUUM INTO 'path'` produces an atomic, defragmented
// copy of the current database into a new file. It runs at READ ISOLATION
// against the source DB (works on our read-only connection) and never
// blocks concurrent writers — the ingestor keeps writing to the WAL while
// the snapshot is taken from a consistent read transaction.
//
// Response:
//
// 200 OK
// Content-Type: application/octet-stream
// Content-Disposition: attachment; filename="corescope-backup-<unix>.db"
// <body: complete SQLite database file>
//
// The temp file is removed after the response is fully written, regardless
// of whether the client successfully consumed the stream.
func (s *Server) handleBackup(w http.ResponseWriter, r *http.Request) {
if s.db == nil || s.db.conn == nil {
writeError(w, http.StatusServiceUnavailable, "database unavailable")
return
}
ts := time.Now().UTC().Unix()
clientIP := r.Header.Get("X-Forwarded-For")
if clientIP == "" {
clientIP = r.RemoteAddr
}
log.Printf("[backup] generating backup for client %s", clientIP)
// Stage the snapshot in the OS temp dir so we never touch the live DB
// directory (avoids confusing operators / accidental WAL clobber).
tmpDir, err := os.MkdirTemp("", "corescope-backup-")
if err != nil {
writeError(w, http.StatusInternalServerError, "tempdir failed: "+err.Error())
return
}
defer func() {
if rmErr := os.RemoveAll(tmpDir); rmErr != nil {
log.Printf("[backup] cleanup error: %v", rmErr)
}
}()
snapshotPath := filepath.Join(tmpDir, fmt.Sprintf("corescope-backup-%d.db", ts))
// SQLite parses the path literal — escape any single quotes defensively.
// (mkdtemp output won't contain quotes, but be paranoid for future-proofing.)
escaped := strings.ReplaceAll(snapshotPath, "'", "''")
if _, err := s.db.conn.ExecContext(r.Context(), fmt.Sprintf("VACUUM INTO '%s'", escaped)); err != nil {
writeError(w, http.StatusInternalServerError, "snapshot failed: "+err.Error())
return
}
f, err := os.Open(snapshotPath)
if err != nil {
writeError(w, http.StatusInternalServerError, "open snapshot failed: "+err.Error())
return
}
defer f.Close()
stat, err := f.Stat()
if err == nil {
w.Header().Set("Content-Length", fmt.Sprintf("%d", stat.Size()))
}
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"corescope-backup-%d.db\"", ts))
w.Header().Set("X-Content-Type-Options", "nosniff")
w.WriteHeader(http.StatusOK)
if _, err := io.Copy(w, f); err != nil {
// Headers already flushed; just log. Client will see truncated stream.
log.Printf("[backup] stream error: %v", err)
}
}
+55
View File
@@ -0,0 +1,55 @@
package main
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
)
// sqliteMagic is the 16-byte file header identifying a valid SQLite 3 database.
// See https://www.sqlite.org/fileformat.html#magic_header_string
const sqliteMagic = "SQLite format 3\x00"
func TestBackupRequiresAPIKey(t *testing.T) {
_, router := setupTestServerWithAPIKey(t, "test-secret-key-strong-enough")
req := httptest.NewRequest("GET", "/api/backup", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusUnauthorized {
t.Fatalf("expected 401 without API key, got %d (body: %s)", w.Code, w.Body.String())
}
}
func TestBackupReturnsValidSQLiteSnapshot(t *testing.T) {
const apiKey = "test-secret-key-strong-enough"
_, router := setupTestServerWithAPIKey(t, apiKey)
req := httptest.NewRequest("GET", "/api/backup", nil)
req.Header.Set("X-API-Key", apiKey)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d (body: %s)", w.Code, w.Body.String())
}
ct := w.Header().Get("Content-Type")
if ct != "application/octet-stream" {
t.Errorf("expected Content-Type application/octet-stream, got %q", ct)
}
cd := w.Header().Get("Content-Disposition")
if !strings.HasPrefix(cd, "attachment;") || !strings.Contains(cd, "filename=\"corescope-backup-") || !strings.HasSuffix(cd, ".db\"") {
t.Errorf("expected Content-Disposition attachment with corescope-backup-<ts>.db filename, got %q", cd)
}
body := w.Body.Bytes()
if len(body) < len(sqliteMagic) {
t.Fatalf("backup body too short (%d bytes) — expected SQLite file", len(body))
}
if got := string(body[:len(sqliteMagic)]); got != sqliteMagic {
t.Fatalf("expected SQLite magic header %q, got %q", sqliteMagic, got)
}
}
+88 -2
View File
@@ -127,6 +127,92 @@ func TestBoundedLoad_AscendingOrder(t *testing.T) {
}
}
// loadStoreWithRetention creates a PacketStore with retentionHours set.
func loadStoreWithRetention(t *testing.T, dbPath string, retentionHours float64) *PacketStore {
t.Helper()
db, err := OpenDB(dbPath)
if err != nil {
t.Fatal(err)
}
cfg := &PacketStoreConfig{RetentionHours: retentionHours}
store := NewPacketStore(db, cfg)
if err := store.Load(); err != nil {
t.Fatal(err)
}
return store
}
// createTestDBWithAgedPackets inserts numRecent packets with timestamps within
// the last hour and numOld packets with timestamps 48 hours ago.
func createTestDBWithAgedPackets(t *testing.T, numRecent, numOld int) string {
t.Helper()
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
if err != nil {
t.Fatal(err)
}
defer conn.Close()
execOrFail := func(s string) {
if _, err := conn.Exec(s); err != nil {
t.Fatalf("setup: %v\nSQL: %s", err, s)
}
}
execOrFail(`CREATE TABLE transmissions (id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT, route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT)`)
execOrFail(`CREATE TABLE observations (id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT, direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT)`)
execOrFail(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE nodes (pubkey TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL, last_seen TEXT, first_seen TEXT, frequency REAL)`)
execOrFail(`CREATE TABLE schema_version (version INTEGER)`)
execOrFail(`INSERT INTO schema_version (version) VALUES (1)`)
execOrFail(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)
now := time.Now().UTC()
id := 1
// Insert old packets (48 hours ago)
for i := 0; i < numOld; i++ {
ts := now.Add(-48 * time.Hour).Add(time.Duration(i) * time.Second).Format(time.RFC3339)
conn.Exec("INSERT INTO transmissions VALUES (?,?,?,?,0,4,1,?)", id, "aa", fmt.Sprintf("old%d", i), ts, `{}`)
conn.Exec("INSERT INTO observations VALUES (?,?,?,?,?,?,?,?,?,?,?)", id, id, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `[]`, ts, "")
id++
}
// Insert recent packets (within last hour)
for i := 0; i < numRecent; i++ {
ts := now.Add(-30 * time.Minute).Add(time.Duration(i) * time.Second).Format(time.RFC3339)
conn.Exec("INSERT INTO transmissions VALUES (?,?,?,?,0,4,1,?)", id, "bb", fmt.Sprintf("new%d", i), ts, `{}`)
conn.Exec("INSERT INTO observations VALUES (?,?,?,?,?,?,?,?,?,?,?)", id, id, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `[]`, ts, "")
id++
}
return dbPath
}
func TestRetentionLoad_OnlyLoadsRecentPackets(t *testing.T) {
dbPath := createTestDBWithAgedPackets(t, 50, 100)
defer os.RemoveAll(filepath.Dir(dbPath))
// retention = 2 hours — should load only the 50 recent packets, not the 100 old ones
store := loadStoreWithRetention(t, dbPath, 2)
defer store.db.conn.Close()
if len(store.packets) != 50 {
t.Errorf("expected 50 recent packets, got %d (old packets should be excluded by retentionHours)", len(store.packets))
}
}
func TestRetentionLoad_ZeroRetentionLoadsAll(t *testing.T) {
dbPath := createTestDBWithAgedPackets(t, 50, 100)
defer os.RemoveAll(filepath.Dir(dbPath))
// retention = 0 (unlimited) — should load all 150 packets
store := loadStoreWithRetention(t, dbPath, 0)
defer store.db.conn.Close()
if len(store.packets) != 150 {
t.Errorf("expected all 150 packets with retentionHours=0, got %d", len(store.packets))
}
}
func TestEstimateStoreTxBytesTypical(t *testing.T) {
est := estimateStoreTxBytesTypical(10)
if est < 1000 {
@@ -229,7 +315,7 @@ func createTestDBAt(tb testing.TB, dbPath string, numTx int) {
id INTEGER PRIMARY KEY,
transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT
path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -280,7 +366,7 @@ func createTestDBWithObs(tb testing.TB, dbPath string, numTx int) {
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
+168
View File
@@ -0,0 +1,168 @@
package main
import (
"encoding/json"
"testing"
"time"
)
var _ = time.Second // suppress unused import
// Helper to create a minimal PacketStore with GRP_TXT packets for channel analytics testing.
func newChannelTestStore(packets []*StoreTx) *PacketStore {
ps := &PacketStore{
packets: packets,
byHash: make(map[string]*StoreTx),
byTxID: make(map[int]*StoreTx),
byObsID: make(map[int]*StoreObs),
byObserver: make(map[string][]*StoreObs),
byNode: make(map[string][]*StoreTx),
byPathHop: make(map[string][]*StoreTx),
nodeHashes: make(map[string]map[string]bool),
byPayloadType: make(map[int][]*StoreTx),
rfCache: make(map[string]*cachedResult),
topoCache: make(map[string]*cachedResult),
hashCache: make(map[string]*cachedResult),
collisionCache: make(map[string]*cachedResult),
chanCache: make(map[string]*cachedResult),
distCache: make(map[string]*cachedResult),
subpathCache: make(map[string]*cachedResult),
spIndex: make(map[string]int),
spTxIndex: make(map[string][]*StoreTx),
advertPubkeys: make(map[string]int),
lastSeenTouched: make(map[string]time.Time),
clockSkew: NewClockSkewEngine(),
}
ps.byPayloadType[5] = packets
return ps
}
func makeGrpTx(channelHash int, channel, text, sender string) *StoreTx {
decoded := map[string]interface{}{
"type": "CHAN",
"channelHash": float64(channelHash),
"channel": channel,
"text": text,
"sender": sender,
}
b, _ := json.Marshal(decoded)
pt := 5
return &StoreTx{
ID: 1,
DecodedJSON: string(b),
FirstSeen: "2026-05-01T12:00:00Z",
PayloadType: &pt,
}
}
// TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted verifies that packets
// with the same hash byte but different decryption status merge into ONE bucket.
func TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted(t *testing.T) {
// Hash 129 is the real hash for #wardriving: SHA256(SHA256("#wardriving")[:16])[0] = 129
// Some packets are decrypted (have channel name), some are not (encrypted)
packets := []*StoreTx{
makeGrpTx(129, "#wardriving", "hello", "alice"),
makeGrpTx(129, "#wardriving", "world", "bob"),
makeGrpTx(129, "", "", ""), // encrypted — no channel name
makeGrpTx(129, "", "", ""), // encrypted
}
store := newChannelTestStore(packets)
result := store.computeAnalyticsChannels("", TimeWindow{})
channels := result["channels"].([]map[string]interface{})
if len(channels) != 1 {
t.Fatalf("expected 1 channel bucket, got %d: %+v", len(channels), channels)
}
ch := channels[0]
if ch["name"] != "#wardriving" {
t.Errorf("expected name '#wardriving', got %q", ch["name"])
}
if ch["messages"] != 4 {
t.Errorf("expected 4 messages, got %v", ch["messages"])
}
if ch["encrypted"] != false {
t.Errorf("expected encrypted=false (some packets decrypted), got %v", ch["encrypted"])
}
}
// TestComputeAnalyticsChannels_RejectsRainbowTableMismatch verifies that a packet
// with channelHash=72 but channel="#wardriving" (mismatch) does NOT create a
// "#wardriving" bucket — it falls into "ch72" instead.
func TestComputeAnalyticsChannels_RejectsRainbowTableMismatch(t *testing.T) {
// Hash 72 is NOT the correct hash for #wardriving (which is 129).
// This simulates a rainbow-table collision/mismatch.
packets := []*StoreTx{
makeGrpTx(72, "#wardriving", "ghost", "eve"), // mismatch: hash 72 != wardriving's real hash
makeGrpTx(129, "#wardriving", "real", "alice"), // correct match
}
store := newChannelTestStore(packets)
result := store.computeAnalyticsChannels("", TimeWindow{})
channels := result["channels"].([]map[string]interface{})
if len(channels) != 2 {
t.Fatalf("expected 2 channel buckets, got %d: %+v", len(channels), channels)
}
// Find the buckets
var ch72, ch129 map[string]interface{}
for _, ch := range channels {
if ch["hash"] == "72" {
ch72 = ch
} else if ch["hash"] == "129" {
ch129 = ch
}
}
if ch72 == nil {
t.Fatal("expected a bucket for hash 72")
}
if ch129 == nil {
t.Fatal("expected a bucket for hash 129")
}
// ch72 should NOT be named "#wardriving" — it should be the placeholder
if ch72["name"] == "#wardriving" {
t.Errorf("hash 72 bucket should NOT be named '#wardriving' (rainbow-table mismatch rejected)")
}
if ch72["name"] != "ch72" {
t.Errorf("expected hash 72 bucket named 'ch72', got %q", ch72["name"])
}
// ch129 should be named "#wardriving"
if ch129["name"] != "#wardriving" {
t.Errorf("expected hash 129 bucket named '#wardriving', got %q", ch129["name"])
}
}
// TestChannelNameMatchesHash verifies the hash validation function.
func TestChannelNameMatchesHash(t *testing.T) {
// #wardriving hashes to 129
if !channelNameMatchesHash("#wardriving", "129") {
t.Error("expected #wardriving to match hash 129")
}
if channelNameMatchesHash("#wardriving", "72") {
t.Error("expected #wardriving to NOT match hash 72")
}
// Without leading # should also work
if !channelNameMatchesHash("wardriving", "129") {
t.Error("expected wardriving (without #) to match hash 129")
}
}
// TestIsPlaceholderName verifies placeholder detection.
func TestIsPlaceholderName(t *testing.T) {
if !isPlaceholderName("ch129") {
t.Error("ch129 should be placeholder")
}
if !isPlaceholderName("ch0") {
t.Error("ch0 should be placeholder")
}
if isPlaceholderName("#wardriving") {
t.Error("#wardriving should NOT be placeholder")
}
if isPlaceholderName("Public") {
t.Error("Public should NOT be placeholder")
}
}
+123 -4
View File
@@ -120,6 +120,8 @@ type NodeClockSkew struct {
GoodFraction float64 `json:"goodFraction"` // fraction of recent samples with |skew| <= 1h
RecentBadSampleCount int `json:"recentBadSampleCount"` // count of recent samples with |skew| > 1h
RecentSampleCount int `json:"recentSampleCount"` // total recent samples in window
RecentHashEvidence []HashEvidence `json:"recentHashEvidence,omitempty"`
CalibrationSummary *CalibrationSummary `json:"calibrationSummary,omitempty"`
NodeName string `json:"nodeName,omitempty"` // populated in fleet responses
NodeRole string `json:"nodeRole,omitempty"` // populated in fleet responses
}
@@ -130,6 +132,31 @@ type SkewSample struct {
SkewSec float64 `json:"skew"` // corrected skew in seconds
}
// HashEvidenceObserver is one observer's contribution to a per-hash evidence entry.
type HashEvidenceObserver struct {
ObserverID string `json:"observerID"`
ObserverName string `json:"observerName"`
RawSkewSec float64 `json:"rawSkewSec"`
CorrectedSkewSec float64 `json:"correctedSkewSec"`
ObserverOffsetSec float64 `json:"observerOffsetSec"`
Calibrated bool `json:"calibrated"`
}
// HashEvidence is per-hash clock skew evidence showing individual observer contributions.
type HashEvidence struct {
Hash string `json:"hash"`
Observers []HashEvidenceObserver `json:"observers"`
MedianCorrectedSkewSec float64 `json:"medianCorrectedSkewSec"`
Timestamp int64 `json:"timestamp"`
}
// CalibrationSummary counts how many samples were corrected via observer calibration.
type CalibrationSummary struct {
TotalSamples int `json:"totalSamples"`
CalibratedSamples int `json:"calibratedSamples"`
UncalibratedSamples int `json:"uncalibratedSamples"`
}
// txSkewResult maps tx hash → per-transmission skew stats. This is an
// intermediate result keyed by hash (not pubkey); the store maps hash → pubkey
// when building the final per-node view.
@@ -143,15 +170,27 @@ type ClockSkewEngine struct {
observerOffsets map[string]float64 // observerID → calibrated offset (seconds)
observerSamples map[string]int // observerID → number of multi-observer packets used
nodeSkew txSkewResult
hashEvidence map[string][]hashEvidenceEntry // hash → per-observer raw/corrected data
lastComputed time.Time
computeInterval time.Duration
}
// hashEvidenceEntry stores raw evidence per observer per hash, cached during Recompute.
type hashEvidenceEntry struct {
observerID string
rawSkew float64
corrected float64
offset float64
calibrated bool
observedTS int64
}
func NewClockSkewEngine() *ClockSkewEngine {
return &ClockSkewEngine{
observerOffsets: make(map[string]float64),
observerSamples: make(map[string]int),
nodeSkew: make(txSkewResult),
hashEvidence: make(map[string][]hashEvidenceEntry),
computeInterval: 30 * time.Second,
}
}
@@ -176,14 +215,16 @@ func (e *ClockSkewEngine) Recompute(store *PacketStore) {
var newOffsets map[string]float64
var newSamples map[string]int
var newNodeSkew txSkewResult
var newHashEvidence map[string][]hashEvidenceEntry
if len(samples) > 0 {
newOffsets, newSamples = calibrateObservers(samples)
newNodeSkew = computeNodeSkew(samples, newOffsets)
newNodeSkew, newHashEvidence = computeNodeSkew(samples, newOffsets)
} else {
newOffsets = make(map[string]float64)
newSamples = make(map[string]int)
newNodeSkew = make(txSkewResult)
newHashEvidence = make(map[string][]hashEvidenceEntry)
}
// Swap results under brief write lock.
@@ -196,6 +237,7 @@ func (e *ClockSkewEngine) Recompute(store *PacketStore) {
e.observerOffsets = newOffsets
e.observerSamples = newSamples
e.nodeSkew = newNodeSkew
e.hashEvidence = newHashEvidence
e.lastComputed = time.Now()
e.mu.Unlock()
}
@@ -332,7 +374,7 @@ func calibrateObservers(samples []skewSample) (map[string]float64, map[string]in
// ── Phase 3: Per-Node Skew ─────────────────────────────────────────────────────
// computeNodeSkew calculates corrected skew statistics for each node.
func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkewResult {
func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) (txSkewResult, map[string][]hashEvidenceEntry) {
// Compute corrected skew per sample, grouped by hash (each hash = one
// node's advert transmission). The caller maps hash → pubkey via byNode.
type correctedSample struct {
@@ -343,6 +385,7 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
byHash := make(map[string][]correctedSample)
hashAdvertTS := make(map[string]int64)
evidence := make(map[string][]hashEvidenceEntry) // hash → per-observer evidence
for _, s := range samples {
obsOffset, hasCal := obsOffsets[s.observerID]
@@ -359,6 +402,14 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
calibrated: hasCal,
})
hashAdvertTS[s.hash] = s.advertTS
evidence[s.hash] = append(evidence[s.hash], hashEvidenceEntry{
observerID: s.observerID,
rawSkew: round(rawSkew, 1),
corrected: round(corrected, 1),
offset: round(obsOffset, 1),
calibrated: hasCal,
observedTS: s.observedTS,
})
}
// Each hash represents one advert from one node. Compute median corrected
@@ -397,7 +448,7 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
LastObservedTS: latestObsTS,
}
}
return result
return result, evidence
}
// ── Integration with PacketStore ───────────────────────────────────────────────
@@ -558,6 +609,70 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
samples[i] = SkewSample{Timestamp: p.ts, SkewSec: round(p.skew, 1)}
}
// Build per-hash evidence (most recent 10 hashes with ≥1 observer).
// Observer name lookup from store observations.
obsNameMap := make(map[string]string)
type hashMeta struct {
hash string
ts int64
}
var evidenceHashes []hashMeta
for _, tx := range txs {
if tx.PayloadType == nil || *tx.PayloadType != PayloadADVERT {
continue
}
ev, ok := s.clockSkew.hashEvidence[tx.Hash]
if !ok || len(ev) == 0 {
continue
}
// Collect observer names from tx observations.
for _, obs := range tx.Observations {
if obs.ObserverID != "" && obs.ObserverName != "" {
obsNameMap[obs.ObserverID] = obs.ObserverName
}
}
evidenceHashes = append(evidenceHashes, hashMeta{hash: tx.Hash, ts: ev[0].observedTS})
}
// Sort by timestamp descending, take most recent 10.
sort.Slice(evidenceHashes, func(i, j int) bool { return evidenceHashes[i].ts > evidenceHashes[j].ts })
if len(evidenceHashes) > 10 {
evidenceHashes = evidenceHashes[:10]
}
var recentEvidence []HashEvidence
var calSummary CalibrationSummary
for _, eh := range evidenceHashes {
entries := s.clockSkew.hashEvidence[eh.hash]
var observers []HashEvidenceObserver
var corrSkews []float64
for _, e := range entries {
name := obsNameMap[e.observerID]
if name == "" {
name = e.observerID
}
observers = append(observers, HashEvidenceObserver{
ObserverID: e.observerID,
ObserverName: name,
RawSkewSec: e.rawSkew,
CorrectedSkewSec: e.corrected,
ObserverOffsetSec: e.offset,
Calibrated: e.calibrated,
})
corrSkews = append(corrSkews, e.corrected)
calSummary.TotalSamples++
if e.calibrated {
calSummary.CalibratedSamples++
} else {
calSummary.UncalibratedSamples++
}
}
recentEvidence = append(recentEvidence, HashEvidence{
Hash: eh.hash,
Observers: observers,
MedianCorrectedSkewSec: round(median(corrSkews), 1),
Timestamp: eh.ts,
})
}
return &NodeClockSkew{
Pubkey: pubkey,
MeanSkewSec: round(meanSkew, 1),
@@ -574,6 +689,8 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
GoodFraction: round(goodFraction, 2),
RecentBadSampleCount: recentBadCount,
RecentSampleCount: recentSampleCount,
RecentHashEvidence: recentEvidence,
CalibrationSummary: &calSummary,
}
}
@@ -601,8 +718,10 @@ func (s *PacketStore) GetFleetClockSkew() []*NodeClockSkew {
cs.NodeName = ni.Name
cs.NodeRole = ni.Role
}
// Omit samples in fleet response (too much data).
// Omit samples and evidence in fleet response (too much data).
cs.Samples = nil
cs.RecentHashEvidence = nil
cs.CalibrationSummary = nil
results = append(results, cs)
}
return results
+103 -2
View File
@@ -191,7 +191,7 @@ func TestComputeNodeSkew_BasicCorrection(t *testing.T) {
// So the median approach finds obs2 is +5 ahead (relative to median)
// Now compute node skew with those offsets:
nodeSkew := computeNodeSkew(samples, offsets)
nodeSkew, _ := computeNodeSkew(samples, offsets)
cs, ok := nodeSkew["h1"]
if !ok {
t.Fatal("expected skew data for hash h1")
@@ -220,7 +220,7 @@ func TestComputeNodeSkew_ThreeObservers(t *testing.T) {
t.Errorf("obs3 offset = %v, want 30", offsets["obs3"])
}
nodeSkew := computeNodeSkew(samples, offsets)
nodeSkew, _ := computeNodeSkew(samples, offsets)
cs := nodeSkew["h1"]
if cs == nil {
t.Fatal("expected skew data for h1")
@@ -954,3 +954,104 @@ func TestAllGood_OK_845(t *testing.T) {
t.Errorf("recentBadSampleCount = %v, want 0", r.RecentBadSampleCount)
}
}
func TestNodeClockSkew_EvidencePayload(t *testing.T) {
// 3-observer scenario: obs1 ahead by +2s, obs2 on time, obs3 behind by -1s.
// Node clock is 60s ahead. Raw skew = advertTS - obsTS.
// Hash has 3 observations, each observer sees same advert.
ps := NewPacketStore(nil, nil)
pt := 4 // ADVERT
// Advert timestamp: 1700000060 (node 60s ahead of true time 1700000000)
// obs1 sees at 1700000002 (2s ahead of true time) → raw = 60 - 2 = 58
// obs2 sees at 1700000000 (on time) → raw = 60 - 0 = 60
// obs3 sees at 1699999999 (-1s, behind) → raw = 60 + 1 = 61
// Median obsTS = 1700000000, so:
// obs1 offset = 1700000002 - 1700000000 = +2
// obs2 offset = 0
// obs3 offset = 1699999999 - 1700000000 = -1
// Corrected: raw + offset → obs1: 58+2=60, obs2: 60+0=60, obs3: 61+(-1)=60
tx1 := &StoreTx{
Hash: "evidence_hash_1",
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":1700000060}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", ObserverName: "Observer Alpha", Timestamp: "2023-11-14T22:13:22Z"},
{ObserverID: "obs2", ObserverName: "Observer Beta", Timestamp: "2023-11-14T22:13:20Z"},
{ObserverID: "obs3", ObserverName: "Observer Gamma", Timestamp: "2023-11-14T22:13:19Z"},
},
}
// Second hash to ensure we get multiple evidence entries.
tx2 := &StoreTx{
Hash: "evidence_hash_2",
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":1700003660}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", ObserverName: "Observer Alpha", Timestamp: "2023-11-14T23:13:22Z"},
{ObserverID: "obs2", ObserverName: "Observer Beta", Timestamp: "2023-11-14T23:13:20Z"},
{ObserverID: "obs3", ObserverName: "Observer Gamma", Timestamp: "2023-11-14T23:13:19Z"},
},
}
ps.mu.Lock()
ps.byNode["NODETEST"] = []*StoreTx{tx1, tx2}
ps.byPayloadType[4] = []*StoreTx{tx1, tx2}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("NODETEST")
if r == nil {
t.Fatal("expected clock skew result")
}
// Check recentHashEvidence exists.
if len(r.RecentHashEvidence) == 0 {
t.Fatal("expected recentHashEvidence to be populated")
}
if len(r.RecentHashEvidence) != 2 {
t.Errorf("recentHashEvidence length = %d, want 2", len(r.RecentHashEvidence))
}
// Check first evidence entry has 3 observers.
ev := r.RecentHashEvidence[0]
if len(ev.Observers) != 3 {
t.Fatalf("evidence observers = %d, want 3", len(ev.Observers))
}
// Verify corrected = raw + offset for each observer.
for _, o := range ev.Observers {
expected := o.RawSkewSec + o.ObserverOffsetSec
if math.Abs(o.CorrectedSkewSec-expected) > 0.2 {
t.Errorf("observer %s: corrected=%.1f, expected raw(%.1f)+offset(%.1f)=%.1f",
o.ObserverID, o.CorrectedSkewSec, o.RawSkewSec, o.ObserverOffsetSec, expected)
}
}
// All corrected values should be ~60s (node is 60s ahead).
if math.Abs(ev.MedianCorrectedSkewSec-60) > 1 {
t.Errorf("median corrected = %.1f, want ~60", ev.MedianCorrectedSkewSec)
}
// Check calibration summary.
if r.CalibrationSummary == nil {
t.Fatal("expected calibrationSummary")
}
if r.CalibrationSummary.TotalSamples != 6 { // 3 observers × 2 hashes
t.Errorf("calibration total = %d, want 6", r.CalibrationSummary.TotalSamples)
}
if r.CalibrationSummary.CalibratedSamples != 6 {
t.Errorf("calibrated = %d, want 6 (all multi-observer)", r.CalibrationSummary.CalibratedSamples)
}
// Check observer names are populated.
nameFound := false
for _, o := range ev.Observers {
if o.ObserverName == "Observer Alpha" || o.ObserverName == "Observer Beta" {
nameFound = true
}
}
if !nameFound {
t.Error("expected observer names to be populated from tx observations")
}
}
+65
View File
@@ -8,6 +8,7 @@ import (
"strings"
"sync"
"github.com/meshcore-analyzer/dbconfig"
"github.com/meshcore-analyzer/geofilter"
)
@@ -62,16 +63,35 @@ type Config struct {
Retention *RetentionConfig `json:"retention,omitempty"`
DB *DBConfig `json:"db,omitempty"`
PacketStore *PacketStoreConfig `json:"packetStore,omitempty"`
GeoFilter *GeoFilterConfig `json:"geo_filter,omitempty"`
Timestamps *TimestampConfig `json:"timestamps,omitempty"`
// CORSAllowedOrigins is the list of origins permitted to make cross-origin
// requests. When empty (default), no Access-Control-* headers are sent,
// so browsers enforce same-origin policy. Set to ["*"] to allow all origins.
CORSAllowedOrigins []string `json:"corsAllowedOrigins,omitempty"`
DebugAffinity bool `json:"debugAffinity,omitempty"`
// ObserverBlacklist is a list of observer public keys to exclude from API
// responses (defense in depth — ingestor drops at ingest, server filters
// any that slipped through from a prior unblocked window).
ObserverBlacklist []string `json:"observerBlacklist,omitempty"`
// obsBlacklistSetCached is the lazily-built set version of ObserverBlacklist.
obsBlacklistSetCached map[string]bool
obsBlacklistOnce sync.Once
ResolvedPath *ResolvedPathConfig `json:"resolvedPath,omitempty"`
NeighborGraph *NeighborGraphConfig `json:"neighborGraph,omitempty"`
// BatteryThresholds: voltage cutoffs for low/critical alerts (#663).
BatteryThresholds *BatteryThresholdsConfig `json:"batteryThresholds,omitempty"`
}
// weakAPIKeys is the blocklist of known default/example API keys that must be rejected.
@@ -129,6 +149,17 @@ type RetentionConfig struct {
MetricsDays int `json:"metricsDays"`
}
// DBConfig is the shared SQLite vacuum/maintenance config (#919, #921).
type DBConfig = dbconfig.DBConfig
// IncrementalVacuumPages returns the configured pages per vacuum or 1024 default.
func (c *Config) IncrementalVacuumPages() int {
if c.DB != nil && c.DB.IncrementalVacuumPages > 0 {
return c.DB.IncrementalVacuumPages
}
return 1024
}
// MetricsRetentionDays returns configured metrics retention or 30 days default.
func (c *Config) MetricsRetentionDays() int {
if c.Retention != nil && c.Retention.MetricsDays > 0 {
@@ -193,6 +224,10 @@ type HealthThresholds struct {
InfraSilentHours float64 `json:"infraSilentHours"`
NodeDegradedHours float64 `json:"nodeDegradedHours"`
NodeSilentHours float64 `json:"nodeSilentHours"`
// RelayActiveHours: how recent a path-hop appearance must be for a
// repeater to be considered "actively relaying" vs only "alive
// (advert-only)". See issue #662. Defaults to 24h.
RelayActiveHours float64 `json:"relayActiveHours"`
}
// ThemeFile mirrors theme.json overlay.
@@ -261,6 +296,7 @@ func (c *Config) GetHealthThresholds() HealthThresholds {
InfraSilentHours: 72,
NodeDegradedHours: 1,
NodeSilentHours: 24,
RelayActiveHours: 24,
}
if c.HealthThresholds != nil {
if c.HealthThresholds.InfraDegradedHours > 0 {
@@ -275,6 +311,9 @@ func (c *Config) GetHealthThresholds() HealthThresholds {
if c.HealthThresholds.NodeSilentHours > 0 {
h.NodeSilentHours = c.HealthThresholds.NodeSilentHours
}
if c.HealthThresholds.RelayActiveHours > 0 {
h.RelayActiveHours = c.HealthThresholds.RelayActiveHours
}
}
return h
}
@@ -388,3 +427,29 @@ func (c *Config) IsBlacklisted(pubkey string) bool {
}
return c.blacklistSet()[strings.ToLower(strings.TrimSpace(pubkey))]
}
// obsBlacklistSet lazily builds and caches the observerBlacklist as a set for O(1) lookups.
func (c *Config) obsBlacklistSet() map[string]bool {
c.obsBlacklistOnce.Do(func() {
if len(c.ObserverBlacklist) == 0 {
return
}
m := make(map[string]bool, len(c.ObserverBlacklist))
for _, pk := range c.ObserverBlacklist {
trimmed := strings.ToLower(strings.TrimSpace(pk))
if trimmed != "" {
m[trimmed] = true
}
}
c.obsBlacklistSetCached = m
})
return c.obsBlacklistSetCached
}
// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
if c == nil || len(c.ObserverBlacklist) == 0 {
return false
}
return c.obsBlacklistSet()[strings.ToLower(strings.TrimSpace(id))]
}
+66
View File
@@ -0,0 +1,66 @@
package main
import "net/http"
// corsMiddleware returns a middleware that sets CORS headers based on the
// configured allowed origins. When CORSAllowedOrigins is empty (default),
// no Access-Control-* headers are added, preserving browser same-origin policy.
func (s *Server) corsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
origins := s.cfg.CORSAllowedOrigins
if len(origins) == 0 {
next.ServeHTTP(w, r)
return
}
reqOrigin := r.Header.Get("Origin")
if reqOrigin == "" {
next.ServeHTTP(w, r)
return
}
// Check if origin is allowed
allowed := false
wildcard := false
for _, o := range origins {
if o == "*" {
allowed = true
wildcard = true
break
}
if o == reqOrigin {
allowed = true
break
}
}
if !allowed {
// Origin not in allowlist — don't add CORS headers
if r.Method == http.MethodOptions {
// Still reject preflight with 403
w.WriteHeader(http.StatusForbidden)
return
}
next.ServeHTTP(w, r)
return
}
// Set CORS headers
if wildcard {
w.Header().Set("Access-Control-Allow-Origin", "*")
} else {
w.Header().Set("Access-Control-Allow-Origin", reqOrigin)
w.Header().Set("Vary", "Origin")
}
w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type, X-API-Key")
// Handle preflight
if r.Method == http.MethodOptions {
w.WriteHeader(http.StatusNoContent)
return
}
next.ServeHTTP(w, r)
})
}
+149
View File
@@ -0,0 +1,149 @@
package main
import (
"net/http"
"net/http/httptest"
"testing"
)
// newTestServerWithCORS creates a minimal Server with the given CORS config.
func newTestServerWithCORS(origins []string) *Server {
cfg := &Config{CORSAllowedOrigins: origins}
srv := &Server{cfg: cfg}
return srv
}
// dummyHandler is a simple handler that writes 200 OK.
var dummyHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("ok"))
})
func TestCORS_DefaultNoHeaders(t *testing.T) {
srv := newTestServerWithCORS(nil)
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("GET", "/api/health", nil)
req.Header.Set("Origin", "https://evil.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != 200 {
t.Fatalf("expected 200, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
t.Fatalf("expected no ACAO header, got %q", v)
}
}
func TestCORS_AllowlistMatch(t *testing.T) {
srv := newTestServerWithCORS([]string{"https://good.example"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("GET", "/api/health", nil)
req.Header.Set("Origin", "https://good.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != 200 {
t.Fatalf("expected 200, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "https://good.example" {
t.Fatalf("expected origin echo, got %q", v)
}
if v := rr.Header().Get("Access-Control-Allow-Methods"); v != "GET, POST, OPTIONS" {
t.Fatalf("expected methods header, got %q", v)
}
if v := rr.Header().Get("Access-Control-Allow-Headers"); v != "Content-Type, X-API-Key" {
t.Fatalf("expected headers header, got %q", v)
}
if v := rr.Header().Get("Vary"); v != "Origin" {
t.Fatalf("expected Vary: Origin, got %q", v)
}
}
func TestCORS_AllowlistNoMatch(t *testing.T) {
srv := newTestServerWithCORS([]string{"https://good.example"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("GET", "/api/health", nil)
req.Header.Set("Origin", "https://evil.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != 200 {
t.Fatalf("expected 200, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
t.Fatalf("expected no ACAO header for non-matching origin, got %q", v)
}
}
func TestCORS_PreflightAllowed(t *testing.T) {
srv := newTestServerWithCORS([]string{"https://good.example"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("OPTIONS", "/api/health", nil)
req.Header.Set("Origin", "https://good.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusNoContent {
t.Fatalf("expected 204, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "https://good.example" {
t.Fatalf("expected origin echo, got %q", v)
}
}
func TestCORS_PreflightRejected(t *testing.T) {
srv := newTestServerWithCORS([]string{"https://good.example"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("OPTIONS", "/api/health", nil)
req.Header.Set("Origin", "https://evil.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusForbidden {
t.Fatalf("expected 403, got %d", rr.Code)
}
}
func TestCORS_Wildcard(t *testing.T) {
srv := newTestServerWithCORS([]string{"*"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("GET", "/api/health", nil)
req.Header.Set("Origin", "https://anything.example")
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != 200 {
t.Fatalf("expected 200, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "*" {
t.Fatalf("expected *, got %q", v)
}
// Wildcard should NOT set Vary: Origin
if v := rr.Header().Get("Vary"); v == "Origin" {
t.Fatalf("wildcard should not set Vary: Origin")
}
}
func TestCORS_NoOriginHeader(t *testing.T) {
srv := newTestServerWithCORS([]string{"https://good.example"})
handler := srv.corsMiddleware(dummyHandler)
req := httptest.NewRequest("GET", "/api/health", nil)
// No Origin header
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != 200 {
t.Fatalf("expected 200, got %d", rr.Code)
}
if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
t.Fatalf("expected no ACAO without Origin header, got %q", v)
}
}
+17 -16
View File
@@ -35,7 +35,8 @@ func setupTestDBv2(t *testing.T) *DB {
CREATE TABLE observers (
id TEXT PRIMARY KEY, name TEXT, iata TEXT, last_seen TEXT, first_seen TEXT,
packet_count INTEGER DEFAULT 0, model TEXT, firmware TEXT,
client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL
client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL,
inactive INTEGER DEFAULT 0
);
CREATE TABLE transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT, raw_hex TEXT NOT NULL,
@@ -47,7 +48,7 @@ func setupTestDBv2(t *testing.T) *DB {
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL, raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -763,9 +764,9 @@ func TestGetChannelsFromStore(t *testing.T) {
func TestPrefixMapResolve(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
{PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
{PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
{Role: "repeater", PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
{Role: "repeater", PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
}
pm := buildPrefixMap(nodes)
@@ -805,8 +806,8 @@ func TestPrefixMapResolve(t *testing.T) {
t.Run("multiple candidates no GPS", func(t *testing.T) {
noGPSNodes := []nodeInfo{
{PublicKey: "aa11bb22", Name: "X", HasGPS: false},
{PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
{Role: "repeater", PublicKey: "aa11bb22", Name: "X", HasGPS: false},
{Role: "repeater", PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
}
pm2 := buildPrefixMap(noGPSNodes)
n := pm2.resolve("aa11")
@@ -820,8 +821,8 @@ func TestPrefixMapResolve(t *testing.T) {
func TestPrefixMapCap(t *testing.T) {
// 16-char pubkey — longer than maxPrefixLen
nodes := []nodeInfo{
{PublicKey: "aabbccdd11223344", Name: "LongKey"},
{PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "LongKey"},
{Role: "repeater", PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
}
pm := buildPrefixMap(nodes)
@@ -2497,9 +2498,9 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (5, 1, 10.0, -90, '[]', ?)`, recentEpoch)
// Also a decrypted CHAN with numeric channelHash
// Also a decrypted CHAN with numeric channelHash — use hash 198 which is the real hash for #general
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('DD03', 'chan_num_hash_3', ?, 1, 5, '{"type":"CHAN","channel":"general","channelHash":97,"channelHashHex":"61","text":"hello","sender":"Alice"}')`, recent)
VALUES ('DD03', 'chan_num_hash_3', ?, 1, 5, '{"type":"CHAN","channel":"general","channelHash":198,"channelHashHex":"C6","text":"hello","sender":"Alice"}')`, recent)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (6, 1, 12.0, -88, '[]', ?)`, recentEpoch)
@@ -2508,8 +2509,8 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
result := store.GetAnalyticsChannels("")
channels := result["channels"].([]map[string]interface{})
if len(channels) < 2 {
t.Errorf("expected at least 2 channels (hash 97 + hash 42), got %d", len(channels))
if len(channels) < 3 {
t.Errorf("expected at least 3 channels (hash 97 + hash 42 + hash 198), got %d", len(channels))
}
// Verify the numeric-hash channels we inserted have proper hashes (not "?")
@@ -2530,13 +2531,13 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
t.Error("expected to find channel with hash '42' (numeric channelHash parsing)")
}
// Verify the decrypted CHAN channel has the correct name
// Verify the decrypted CHAN channel has the correct name (now at hash 198)
foundGeneral := false
for _, ch := range channels {
if ch["name"] == "general" {
foundGeneral = true
if ch["hash"] != "97" {
t.Errorf("expected hash '97' for general channel, got %v", ch["hash"])
if ch["hash"] != "198" {
t.Errorf("expected hash '198' for general channel, got %v", ch["hash"])
}
}
}
+86 -18
View File
@@ -20,6 +20,7 @@ type DB struct {
path string // filesystem path to the database file
isV3 bool // v3 schema: observer_idx in observations (vs observer_id in v2)
hasResolvedPath bool // observations table has resolved_path column
hasObsRawHex bool // observations table has raw_hex column (#881)
// Channel list cache (60s TTL) — avoids repeated GROUP BY scans (#762)
channelsCacheMu sync.Mutex
@@ -76,6 +77,9 @@ func (db *DB) detectSchema() {
if colName == "resolved_path" {
db.hasResolvedPath = true
}
if colName == "raw_hex" {
db.hasObsRawHex = true
}
}
}
}
@@ -166,6 +170,7 @@ type Observer struct {
BatteryMv *int `json:"battery_mv"`
UptimeSecs *int64 `json:"uptime_secs"`
NoiseFloor *float64 `json:"noise_floor"`
LastPacketAt *string `json:"last_packet_at"`
}
// Transmission represents a row from the transmissions table.
@@ -227,7 +232,7 @@ func (db *DB) GetStats() (*Stats, error) {
sevenDaysAgo := time.Now().Add(-7 * 24 * time.Hour).Format(time.RFC3339)
db.conn.QueryRow("SELECT COUNT(*) FROM nodes WHERE last_seen > ?", sevenDaysAgo).Scan(&s.TotalNodes)
db.conn.QueryRow("SELECT COUNT(*) FROM nodes").Scan(&s.TotalNodesAllTime)
db.conn.QueryRow("SELECT COUNT(*) FROM observers").Scan(&s.TotalObservers)
db.conn.QueryRow("SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0").Scan(&s.TotalObservers)
oneHourAgo := time.Now().Add(-1 * time.Hour).Unix()
db.conn.QueryRow("SELECT COUNT(*) FROM observations WHERE timestamp > ?", oneHourAgo).Scan(&s.PacketsLastHour)
@@ -782,7 +787,7 @@ func (db *DB) GetNodes(limit, offset int, role, search, before, lastHeard, sortB
var total int
db.conn.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM nodes %s", w), args...).Scan(&total)
querySQL := fmt.Sprintf("SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c FROM nodes %s ORDER BY %s LIMIT ? OFFSET ?", w, order)
querySQL := fmt.Sprintf("SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c, foreign_advert FROM nodes %s ORDER BY %s LIMIT ? OFFSET ?", w, order)
qArgs := append(args, limit, offset)
rows, err := db.conn.Query(querySQL, qArgs...)
@@ -808,7 +813,7 @@ func (db *DB) SearchNodes(query string, limit int) ([]map[string]interface{}, er
if limit <= 0 {
limit = 10
}
rows, err := db.conn.Query(`SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c
rows, err := db.conn.Query(`SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c, foreign_advert
FROM nodes WHERE name LIKE ? OR public_key LIKE ? ORDER BY last_seen DESC LIMIT ?`,
"%"+query+"%", query+"%", limit)
if err != nil {
@@ -826,9 +831,58 @@ func (db *DB) SearchNodes(query string, limit int) ([]map[string]interface{}, er
return nodes, nil
}
// GetNodeByPrefix resolves a hex prefix (>=8 chars) to a unique node.
// Returns (node, ambiguous, error). When multiple nodes share the prefix,
// returns (nil, true, nil). Used by the short-URL feature (issue #772).
//
// Trade-off vs an opaque ID lookup table: prefixes are stable across
// restarts, self-describing (no allocator needed), and resolve to the
// authoritative pubkey on the server. Cost: ambiguity grows with the
// node directory; we mitigate with a hard 8-hex-char (32-bit) minimum
// and surface 409 Conflict when collisions occur.
func (db *DB) GetNodeByPrefix(prefix string) (map[string]interface{}, bool, error) {
if len(prefix) < 8 {
return nil, false, nil
}
// Validate hex (avoid SQL LIKE wildcards leaking through).
for _, c := range prefix {
isHex := (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')
if !isHex {
return nil, false, nil
}
}
rows, err := db.conn.Query(
`SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c, foreign_advert
FROM nodes WHERE public_key LIKE ? LIMIT 2`,
prefix+"%",
)
if err != nil {
return nil, false, err
}
defer rows.Close()
var first map[string]interface{}
count := 0
for rows.Next() {
n := scanNodeRow(rows)
if n == nil {
continue
}
count++
if count == 1 {
first = n
} else {
return nil, true, nil
}
}
if count == 0 {
return nil, false, nil
}
return first, false, nil
}
// GetNodeByPubkey returns a single node.
func (db *DB) GetNodeByPubkey(pubkey string) (map[string]interface{}, error) {
rows, err := db.conn.Query("SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c FROM nodes WHERE public_key = ?", pubkey)
rows, err := db.conn.Query("SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c, foreign_advert FROM nodes WHERE public_key = ?", pubkey)
if err != nil {
return nil, err
}
@@ -966,9 +1020,9 @@ func (db *DB) getObservationsForTransmissions(txIDs []int) map[int][]map[string]
return result
}
// GetObservers returns all observers sorted by last_seen DESC.
// GetObservers returns active observers (not soft-deleted) sorted by last_seen DESC.
func (db *DB) GetObservers() ([]Observer, error) {
rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers ORDER BY last_seen DESC")
rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC")
if err != nil {
return nil, err
}
@@ -979,7 +1033,7 @@ func (db *DB) GetObservers() ([]Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor); err != nil {
if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt); err != nil {
continue
}
if batteryMv.Valid {
@@ -1002,8 +1056,8 @@ func (db *DB) GetObserverByID(id string) (*Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers WHERE id = ?", id).
Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor)
err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE id = ?", id).
Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt)
if err != nil {
return nil, err
}
@@ -1051,6 +1105,17 @@ func (db *DB) GetObserverIdsForRegion(regionParam string) ([]string, error) {
return ids, nil
}
// normalizeRegionCodes parses a region query parameter into a list of upper-case
// IATA codes. Returns nil to signal "no filter" (match all regions).
//
// Sentinel handling (issue #770): the frontend region filter dropdown labels its
// catch-all option "All". When that option is selected the UI may send
// ?region=All; older code interpreted that literally and tried to match an
// IATA code "ALL", which never exists, returning an empty result set. Treat
// "All" / "ALL" / "all" (case-insensitive, optionally surrounded by whitespace
// or mixed with empty CSV slots) as equivalent to an empty value.
//
// Real IATA codes (e.g. "SJC", "PDX") still pass through unchanged.
func normalizeRegionCodes(regionParam string) []string {
if regionParam == "" {
return nil
@@ -1059,9 +1124,13 @@ func normalizeRegionCodes(regionParam string) []string {
codes := make([]string, 0, len(tokens))
for _, token := range tokens {
code := strings.TrimSpace(strings.ToUpper(token))
if code != "" {
codes = append(codes, code)
if code == "" || code == "ALL" {
continue
}
codes = append(codes, code)
}
if len(codes) == 0 {
return nil
}
return codes
}
@@ -1798,8 +1867,9 @@ func scanNodeRow(rows *sql.Rows) map[string]interface{} {
var advertCount int
var batteryMv sql.NullInt64
var temperatureC sql.NullFloat64
var foreign sql.NullInt64
if err := rows.Scan(&pk, &name, &role, &lat, &lon, &lastSeen, &firstSeen, &advertCount, &batteryMv, &temperatureC); err != nil {
if err := rows.Scan(&pk, &name, &role, &lat, &lon, &lastSeen, &firstSeen, &advertCount, &batteryMv, &temperatureC, &foreign); err != nil {
return nil
}
m := map[string]interface{}{
@@ -1814,6 +1884,7 @@ func scanNodeRow(rows *sql.Rows) map[string]interface{} {
"last_heard": nullStr(lastSeen),
"hash_size": nil,
"hash_size_inconsistent": false,
"foreign": foreign.Valid && foreign.Int64 != 0,
}
if batteryMv.Valid {
m["battery_mv"] = int(batteryMv.Int64)
@@ -1868,11 +1939,10 @@ func nullInt(ni sql.NullInt64) interface{} {
// Returns the number of transmissions deleted.
// Opens a separate read-write connection since the main connection is read-only.
func (db *DB) PruneOldPackets(days int) (int64, error) {
rw, err := openRW(db.path)
rw, err := cachedRW(db.path)
if err != nil {
return 0, err
}
defer rw.Close()
cutoff := time.Now().UTC().AddDate(0, 0, -days).Format(time.RFC3339)
tx, err := rw.Begin()
@@ -2215,11 +2285,10 @@ func (db *DB) GetMetricsSummary(since string) ([]MetricsSummaryRow, error) {
// PruneOldMetrics deletes observer_metrics rows older than retentionDays.
func (db *DB) PruneOldMetrics(retentionDays int) (int64, error) {
rw, err := openRW(db.path)
rw, err := cachedRW(db.path)
if err != nil {
return 0, err
}
defer rw.Close()
cutoff := time.Now().UTC().AddDate(0, 0, -retentionDays).Format(time.RFC3339)
res, err := rw.Exec(`DELETE FROM observer_metrics WHERE timestamp < ?`, cutoff)
@@ -2242,11 +2311,10 @@ func (db *DB) RemoveStaleObservers(observerDays int) (int64, error) {
if observerDays <= -1 {
return 0, nil // keep forever
}
rw, err := openRW(db.path)
rw, err := cachedRW(db.path)
if err != nil {
return 0, err
}
defer rw.Close()
cutoff := time.Now().UTC().AddDate(0, 0, -observerDays).Format(time.RFC3339)
res, err := rw.Exec(`UPDATE observers SET inactive = 1 WHERE last_seen < ? AND (inactive IS NULL OR inactive = 0)`, cutoff)
+140 -6
View File
@@ -32,7 +32,8 @@ func setupTestDB(t *testing.T) *DB {
first_seen TEXT,
advert_count INTEGER DEFAULT 0,
battery_mv INTEGER,
temperature_c REAL
temperature_c REAL,
foreign_advert INTEGER DEFAULT 0
);
CREATE TABLE observers (
@@ -48,7 +49,9 @@ func setupTestDB(t *testing.T) *DB {
radio TEXT,
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL
noise_floor REAL,
inactive INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
@@ -74,7 +77,8 @@ func setupTestDB(t *testing.T) *DB {
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
resolved_path TEXT
resolved_path TEXT,
raw_hex TEXT
);
CREATE TABLE IF NOT EXISTS observer_metrics (
@@ -354,6 +358,35 @@ func TestGetObservers(t *testing.T) {
if observers[0].ID != "obs1" {
t.Errorf("expected obs1 first (most recent), got %s", observers[0].ID)
}
// last_packet_at should be nil since seedTestData doesn't set it
if observers[0].LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt for obs1 from seed, got %v", *observers[0].LastPacketAt)
}
}
// Regression: GetObservers must exclude soft-deleted (inactive=1) rows.
// Stale observers were appearing in /api/observers despite the auto-prune
// marking them inactive, because the SELECT query had no WHERE filter.
func TestGetObservers_ExcludesInactive(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Mark obs2 inactive — soft delete simulating a stale-observer prune.
if _, err := db.conn.Exec(`UPDATE observers SET inactive = 1 WHERE id = ?`, "obs2"); err != nil {
t.Fatalf("update inactive: %v", err)
}
observers, err := db.GetObservers()
if err != nil {
t.Fatal(err)
}
if len(observers) != 1 {
t.Errorf("expected 1 observer (obs1) after marking obs2 inactive, got %d", len(observers))
}
for _, o := range observers {
if o.ID == "obs2" {
t.Errorf("inactive observer obs2 should be excluded")
}
}
}
func TestGetObserverByID(t *testing.T) {
@@ -368,6 +401,48 @@ func TestGetObserverByID(t *testing.T) {
if obs.ID != "obs1" {
t.Errorf("expected obs1, got %s", obs.ID)
}
// Verify last_packet_at is nil by default
if obs.LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt, got %v", *obs.LastPacketAt)
}
}
func TestGetObserverLastPacketAt(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Set last_packet_at for obs1
ts := "2026-04-24T12:00:00Z"
db.conn.Exec(`UPDATE observers SET last_packet_at = ? WHERE id = ?`, ts, "obs1")
// Verify via GetObservers
observers, err := db.GetObservers()
if err != nil {
t.Fatal(err)
}
var obs1 *Observer
for i := range observers {
if observers[i].ID == "obs1" {
obs1 = &observers[i]
break
}
}
if obs1 == nil {
t.Fatal("obs1 not found")
}
if obs1.LastPacketAt == nil || *obs1.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObservers, got %v", ts, obs1.LastPacketAt)
}
// Verify via GetObserverByID
obs, err := db.GetObserverByID("obs1")
if err != nil {
t.Fatal(err)
}
if obs.LastPacketAt == nil || *obs.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObserverByID, got %v", ts, obs.LastPacketAt)
}
}
func TestGetObserverByIDNotFound(t *testing.T) {
@@ -1099,7 +1174,8 @@ func setupTestDBV2(t *testing.T) *DB {
first_seen TEXT,
advert_count INTEGER DEFAULT 0,
battery_mv INTEGER,
temperature_c REAL
temperature_c REAL,
foreign_advert INTEGER DEFAULT 0
);
CREATE TABLE observers (
@@ -1108,7 +1184,8 @@ func setupTestDBV2(t *testing.T) *DB {
iata TEXT,
last_seen TEXT,
first_seen TEXT,
packet_count INTEGER DEFAULT 0
packet_count INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
@@ -1134,7 +1211,8 @@ func setupTestDBV2(t *testing.T) *DB {
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL
timestamp INTEGER NOT NULL,
raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -1975,3 +2053,59 @@ func TestParseWindowDuration(t *testing.T) {
}
}
}
// TestPerObservationRawHexEnrich verifies enrichObs returns per-observation raw_hex
// when available, falling back to transmission raw_hex when NULL (#881).
func TestPerObservationRawHexEnrich(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert observers
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-a', 'Observer A')`)
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-b', 'Observer B')`)
var rowA, rowB int64
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-a'`).Scan(&rowA)
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-b'`).Scan(&rowB)
// Insert transmission with raw_hex
txHex := "deadbeef"
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, 'hash1', '2026-04-21T10:00:00Z')`, txHex)
// Insert two observations: A has its own raw_hex, B has NULL (historical)
obsAHex := "c0ffee01"
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp, raw_hex)
VALUES (1, ?, -5.0, -90.0, '[]', 1745236800, ?)`, rowA, obsAHex)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, ?, -3.0, -85.0, '["aabb"]', 1745236801)`, rowB)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store load: %v", err)
}
tx := store.byHash["hash1"]
if tx == nil {
t.Fatal("transmission not loaded")
}
if len(tx.Observations) < 2 {
t.Fatalf("expected 2 observations, got %d", len(tx.Observations))
}
// Check enriched observations
for _, obs := range tx.Observations {
m := store.enrichObs(obs)
rh, _ := m["raw_hex"].(string)
if obs.RawHex != "" {
// Observer A: should get per-observation raw_hex
if rh != obsAHex {
t.Errorf("obs with own raw_hex: got %q, want %q", rh, obsAHex)
}
} else {
// Observer B: should fall back to transmission raw_hex
if rh != txHex {
t.Errorf("obs without raw_hex: got %q, want %q (tx fallback)", rh, txHex)
}
}
}
}
+262
View File
@@ -0,0 +1,262 @@
package main
import (
"database/sql"
"os"
"path/filepath"
"strings"
"testing"
"time"
_ "modernc.org/sqlite"
)
// createFreshIngestorDB creates a SQLite DB using the ingestor's applySchema logic
// (simulated here) with auto_vacuum=INCREMENTAL set before tables.
func createFreshDBWithAutoVacuum(t *testing.T, path string) *sql.DB {
t.Helper()
// auto_vacuum must be set via DSN before journal_mode creates the DB file
db, err := sql.Open("sqlite", path+"?_pragma=auto_vacuum(INCREMENTAL)&_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
if err != nil {
t.Fatal(err)
}
db.SetMaxOpenConns(1)
// Create minimal schema
_, err = db.Exec(`
CREATE TABLE transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL UNIQUE,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now')),
channel_hash TEXT
);
CREATE TABLE observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_idx INTEGER,
direction TEXT,
snr REAL,
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL
);
`)
if err != nil {
t.Fatal(err)
}
return db
}
func TestNewDBHasIncrementalAutoVacuum(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.db")
db := createFreshDBWithAutoVacuum(t, path)
defer db.Close()
var autoVacuum int
if err := db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
t.Fatal(err)
}
if autoVacuum != 2 {
t.Fatalf("expected auto_vacuum=2 (INCREMENTAL), got %d", autoVacuum)
}
}
func TestExistingDBHasAutoVacuumNone(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.db")
// Create DB WITHOUT setting auto_vacuum (simulates old DB)
db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)")
if err != nil {
t.Fatal(err)
}
db.SetMaxOpenConns(1)
_, err = db.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
if err != nil {
t.Fatal(err)
}
var autoVacuum int
if err := db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
t.Fatal(err)
}
db.Close()
if autoVacuum != 0 {
t.Fatalf("expected auto_vacuum=0 (NONE) for old DB, got %d", autoVacuum)
}
}
func TestVacuumOnStartupMigratesDB(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.db")
// Create DB without auto_vacuum (old DB)
db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)")
if err != nil {
t.Fatal(err)
}
db.SetMaxOpenConns(1)
_, err = db.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
if err != nil {
t.Fatal(err)
}
var before int
db.QueryRow("PRAGMA auto_vacuum").Scan(&before)
if before != 0 {
t.Fatalf("precondition: expected auto_vacuum=0, got %d", before)
}
db.Close()
// Simulate vacuumOnStartup migration using openRW
rw, err := openRW(path)
if err != nil {
t.Fatal(err)
}
if _, err := rw.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
t.Fatal(err)
}
if _, err := rw.Exec("VACUUM"); err != nil {
t.Fatal(err)
}
rw.Close()
// Verify migration
db2, err := sql.Open("sqlite", path+"?mode=ro")
if err != nil {
t.Fatal(err)
}
defer db2.Close()
var after int
if err := db2.QueryRow("PRAGMA auto_vacuum").Scan(&after); err != nil {
t.Fatal(err)
}
if after != 2 {
t.Fatalf("expected auto_vacuum=2 after VACUUM migration, got %d", after)
}
}
func TestIncrementalVacuumReducesFreelist(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.db")
db := createFreshDBWithAutoVacuum(t, path)
// Insert a bunch of data
now := time.Now().UTC().Format(time.RFC3339)
for i := 0; i < 500; i++ {
_, err := db.Exec(
"INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, ?, ?)",
strings.Repeat("AA", 200), // ~400 bytes each
"hash_"+string(rune('A'+i%26))+string(rune('0'+i/26)),
now,
)
if err != nil {
t.Fatal(err)
}
}
// Get file size before delete
db.Close()
infoBefore, _ := os.Stat(path)
sizeBefore := infoBefore.Size()
// Reopen and delete all
db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
if err != nil {
t.Fatal(err)
}
db.SetMaxOpenConns(1)
defer db.Close()
_, err = db.Exec("DELETE FROM transmissions")
if err != nil {
t.Fatal(err)
}
// Check freelist before vacuum
var freelistBefore int64
db.QueryRow("PRAGMA freelist_count").Scan(&freelistBefore)
if freelistBefore == 0 {
t.Fatal("expected non-zero freelist after DELETE")
}
// Run incremental vacuum
_, err = db.Exec("PRAGMA incremental_vacuum(10000)")
if err != nil {
t.Fatal(err)
}
// Check freelist after vacuum
var freelistAfter int64
db.QueryRow("PRAGMA freelist_count").Scan(&freelistAfter)
if freelistAfter >= freelistBefore {
t.Fatalf("expected freelist to shrink: before=%d after=%d", freelistBefore, freelistAfter)
}
// Checkpoint WAL and check file size shrunk
db.Exec("PRAGMA wal_checkpoint(TRUNCATE)")
db.Close()
infoAfter, _ := os.Stat(path)
sizeAfter := infoAfter.Size()
if sizeAfter >= sizeBefore {
t.Logf("warning: file did not shrink (before=%d after=%d) — may depend on page reuse", sizeBefore, sizeAfter)
}
}
func TestCheckAutoVacuumLogs(t *testing.T) {
// This test verifies checkAutoVacuum doesn't panic on various configs
dir := t.TempDir()
path := filepath.Join(dir, "test.db")
// Create a fresh DB with auto_vacuum=INCREMENTAL
dbConn := createFreshDBWithAutoVacuum(t, path)
db := &DB{conn: dbConn, path: path}
cfg := &Config{}
// Should not panic
checkAutoVacuum(db, cfg, path)
dbConn.Close()
// Create a DB without auto_vacuum
path2 := filepath.Join(dir, "test2.db")
dbConn2, _ := sql.Open("sqlite", path2+"?_pragma=journal_mode(WAL)")
dbConn2.SetMaxOpenConns(1)
dbConn2.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
db2 := &DB{conn: dbConn2, path: path2}
// Should log warning but not panic
checkAutoVacuum(db2, cfg, path2)
dbConn2.Close()
}
func TestConfigIncrementalVacuumPages(t *testing.T) {
// Default
cfg := &Config{}
if cfg.IncrementalVacuumPages() != 1024 {
t.Fatalf("expected default 1024, got %d", cfg.IncrementalVacuumPages())
}
// Custom
cfg.DB = &DBConfig{IncrementalVacuumPages: 512}
if cfg.IncrementalVacuumPages() != 512 {
t.Fatalf("expected 512, got %d", cfg.IncrementalVacuumPages())
}
// Zero should return default
cfg.DB.IncrementalVacuumPages = 0
if cfg.IncrementalVacuumPages() != 1024 {
t.Fatalf("expected default 1024 for zero, got %d", cfg.IncrementalVacuumPages())
}
}
+17 -101
View File
@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -105,6 +106,7 @@ type Payload struct {
Tag uint32 `json:"tag,omitempty"`
AuthCode uint32 `json:"authCode,omitempty"`
TraceFlags *int `json:"traceFlags,omitempty"`
SNRValues []float64 `json:"snrValues,omitempty"`
RawHex string `json:"raw,omitempty"`
Error string `json:"error,omitempty"`
}
@@ -164,8 +166,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
@@ -405,6 +408,19 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
}
// The header path hops count represents SNR entries = completed hops
hopsCompleted := path.HashCount
// Extract per-hop SNR from header path bytes (int8, quarter-dB encoding)
if hopsCompleted > 0 && len(path.Hops) >= hopsCompleted {
snrVals := make([]float64, 0, hopsCompleted)
for i := 0; i < hopsCompleted; i++ {
b, err := hex.DecodeString(path.Hops[i])
if err == nil && len(b) == 1 {
snrVals = append(snrVals, float64(int8(b[0]))/4.0)
}
}
if len(snrVals) > 0 {
payload.SNRValues = snrVals
}
}
pathBytes, err := hex.DecodeString(payload.PathData)
if err == nil && payload.TraceFlags != nil {
// path_sz from flags byte is a power-of-two exponent per firmware:
@@ -441,106 +457,6 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
}, nil
}
// HexRange represents a labeled byte range for the hex breakdown visualization.
type HexRange struct {
Start int `json:"start"`
End int `json:"end"`
Label string `json:"label"`
}
// Breakdown holds colored byte ranges returned by the packet detail endpoint.
type Breakdown struct {
Ranges []HexRange `json:"ranges"`
}
// BuildBreakdown computes labeled byte ranges for each section of a MeshCore packet.
// The returned ranges are consumed by createColoredHexDump() and buildHexLegend()
// in the frontend (public/app.js).
func BuildBreakdown(hexString string) *Breakdown {
hexString = strings.ReplaceAll(hexString, " ", "")
hexString = strings.ReplaceAll(hexString, "\n", "")
hexString = strings.ReplaceAll(hexString, "\r", "")
buf, err := hex.DecodeString(hexString)
if err != nil || len(buf) < 2 {
return &Breakdown{Ranges: []HexRange{}}
}
var ranges []HexRange
offset := 0
// Byte 0: Header
ranges = append(ranges, HexRange{Start: 0, End: 0, Label: "Header"})
offset = 1
header := decodeHeader(buf[0])
// Bytes 1-4: Transport Codes (TRANSPORT_FLOOD / TRANSPORT_DIRECT only)
if isTransportRoute(header.RouteType) {
if len(buf) < offset+4 {
return &Breakdown{Ranges: ranges}
}
ranges = append(ranges, HexRange{Start: offset, End: offset + 3, Label: "Transport Codes"})
offset += 4
}
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
// Next byte: Path Length (bits 7-6 = hashSize-1, bits 5-0 = hashCount)
ranges = append(ranges, HexRange{Start: offset, End: offset, Label: "Path Length"})
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
pathBytes := hashSize * hashCount
// Path hops
if hashCount > 0 && offset+pathBytes <= len(buf) {
ranges = append(ranges, HexRange{Start: offset, End: offset + pathBytes - 1, Label: "Path"})
}
offset += pathBytes
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
payloadStart := offset
// Payload — break ADVERT into named sub-fields; everything else is one Payload range
if header.PayloadType == PayloadADVERT && len(buf)-payloadStart >= 100 {
ranges = append(ranges, HexRange{Start: payloadStart, End: payloadStart + 31, Label: "PubKey"})
ranges = append(ranges, HexRange{Start: payloadStart + 32, End: payloadStart + 35, Label: "Timestamp"})
ranges = append(ranges, HexRange{Start: payloadStart + 36, End: payloadStart + 99, Label: "Signature"})
appStart := payloadStart + 100
if appStart < len(buf) {
ranges = append(ranges, HexRange{Start: appStart, End: appStart, Label: "Flags"})
appFlags := buf[appStart]
fOff := appStart + 1
if appFlags&0x10 != 0 && fOff+8 <= len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: fOff + 3, Label: "Latitude"})
ranges = append(ranges, HexRange{Start: fOff + 4, End: fOff + 7, Label: "Longitude"})
fOff += 8
}
if appFlags&0x20 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x40 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x80 != 0 && fOff < len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: len(buf) - 1, Label: "Name"})
}
}
} else {
ranges = append(ranges, HexRange{Start: payloadStart, End: len(buf) - 1, Label: "Payload"})
}
return &Breakdown{Ranges: ranges}
}
// ComputeContentHash computes the SHA-256-based content hash (first 16 hex chars).
// It hashes the payload-type nibble + payload (skipping path bytes) to produce a
// route-independent identifier for the same logical packet. For TRACE packets,
+48 -140
View File
@@ -97,146 +97,6 @@ func TestDecodePacket_FloodHasNoCodes(t *testing.T) {
}
}
func TestBuildBreakdown_InvalidHex(t *testing.T) {
b := BuildBreakdown("not-hex!")
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for invalid hex, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_TooShort(t *testing.T) {
b := BuildBreakdown("11") // 1 byte — no path byte
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for too-short packet, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_FloodNonAdvert(t *testing.T) {
// Header 0x15: route=1/FLOOD, payload=5/GRP_TXT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: FF0011
b := BuildBreakdown("1501AAFFFF00")
labels := rangeLabels(b.Ranges)
expect := []string{"Header", "Path Length", "Path", "Payload"}
if !equalLabels(labels, expect) {
t.Errorf("expected labels %v, got %v", expect, labels)
}
// Verify byte positions
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "Payload", 3, 5)
}
func TestBuildBreakdown_TransportFlood(t *testing.T) {
// Header 0x14: route=0/TRANSPORT_FLOOD, payload=5/GRP_TXT
// TransportCodes: AABBCCDD (4 bytes)
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: EE
// Payload: FF00
b := BuildBreakdown("14AABBCCDD01EEFF00")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Transport Codes", 1, 4)
assertRange(t, b.Ranges, "Path Length", 5, 5)
assertRange(t, b.Ranges, "Path", 6, 6)
assertRange(t, b.Ranges, "Payload", 7, 8)
}
func TestBuildBreakdown_FloodNoHops(t *testing.T) {
// Header 0x15: FLOOD/GRP_TXT; PathByte 0x00: 0 hops; Payload: AABB
b := BuildBreakdown("150000AABB")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
// No Path range since hashCount=0
for _, r := range b.Ranges {
if r.Label == "Path" {
t.Error("expected no Path range for zero-hop packet")
}
}
assertRange(t, b.Ranges, "Payload", 2, 4)
}
func TestBuildBreakdown_AdvertBasic(t *testing.T) {
// Header 0x11: FLOOD/ADVERT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: 100 bytes (PubKey32 + Timestamp4 + Signature64) + Flags=0x02 (repeater, no extras)
pubkey := repeatHex("AB", 32)
ts := "00000000" // 4 bytes
sig := repeatHex("CD", 64)
flags := "02"
hex := "1101AA" + pubkey + ts + sig + flags
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "PubKey", 3, 34)
assertRange(t, b.Ranges, "Timestamp", 35, 38)
assertRange(t, b.Ranges, "Signature", 39, 102)
assertRange(t, b.Ranges, "Flags", 103, 103)
}
func TestBuildBreakdown_AdvertWithLocation(t *testing.T) {
// flags=0x12: hasLocation bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "12" // 0x10 = hasLocation
latBytes := "00000000"
lonBytes := "00000000"
hex := "1101AA" + pubkey + ts + sig + flags + latBytes + lonBytes
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Latitude", 104, 107)
assertRange(t, b.Ranges, "Longitude", 108, 111)
}
func TestBuildBreakdown_AdvertWithName(t *testing.T) {
// flags=0x82: hasName bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "82" // 0x80 = hasName
name := "4E6F6465" // "Node" in hex
hex := "1101AA" + pubkey + ts + sig + flags + name
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Name", 104, 107)
}
// helpers
func rangeLabels(ranges []HexRange) []string {
out := make([]string, len(ranges))
for i, r := range ranges {
out[i] = r.Label
}
return out
}
func equalLabels(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func assertRange(t *testing.T, ranges []HexRange, label string, wantStart, wantEnd int) {
t.Helper()
for _, r := range ranges {
if r.Label == label {
if r.Start != wantStart || r.End != wantEnd {
t.Errorf("range %q: want [%d,%d], got [%d,%d]", label, wantStart, wantEnd, r.Start, r.End)
}
return
}
}
t.Errorf("range %q not found in %v", label, rangeLabels(ranges))
}
func TestZeroHopDirectHashSize(t *testing.T) {
// DIRECT (RouteType=2) + REQ (PayloadType=0) → header byte = 0x02
@@ -580,3 +440,51 @@ func TestDecodeAdvertSignatureValidation(t *testing.T) {
t.Error("expected SignatureValid to be nil when validation disabled")
}
}
func TestDecodePacket_TraceSNRValues(t *testing.T) {
// TRACE packet with 3 SNR bytes in header path:
// SNR byte 0: 0x14 = int8(20) → 20/4.0 = 5.0 dB
// SNR byte 1: 0xF4 = int8(-12) → -12/4.0 = -3.0 dB
// SNR byte 2: 0x08 = int8(8) → 8/4.0 = 2.0 dB
// header: DIRECT+TRACE = (0<<6)|(9<<2)|2 = 0x26
// path_length: hash_size=0b00 (1-byte), hash_count=3 → 0x03
hex := "2603" + "14F408" + // header + path_byte + 3 SNR bytes
"01000000" + // tag
"02000000" + // authCode
"00" + // flags=0 → path_sz=1
"AABBCCDD" // 4 route hops (1-byte each)
pkt, err := DecodePacket(hex, false)
if err != nil {
t.Fatalf("DecodePacket error: %v", err)
}
if pkt.Payload.SNRValues == nil {
t.Fatal("expected SNRValues to be populated")
}
if len(pkt.Payload.SNRValues) != 3 {
t.Fatalf("expected 3 SNR values, got %d", len(pkt.Payload.SNRValues))
}
expected := []float64{5.0, -3.0, 2.0}
for i, want := range expected {
if pkt.Payload.SNRValues[i] != want {
t.Errorf("SNRValues[%d] = %v, want %v", i, pkt.Payload.SNRValues[i], want)
}
}
}
func TestDecodePacket_TraceNoSNRValues(t *testing.T) {
// TRACE with 0 SNR bytes → SNRValues should be nil/empty
hex := "2600" + // header + path_byte (0 hops)
"01000000" + // tag
"02000000" + // authCode
"00" + // flags
"AABB" // 2 route hops
pkt, err := DecodePacket(hex, false)
if err != nil {
t.Fatalf("DecodePacket error: %v", err)
}
if len(pkt.Payload.SNRValues) != 0 {
t.Errorf("expected empty SNRValues, got %v", pkt.Payload.SNRValues)
}
}
+50
View File
@@ -0,0 +1,50 @@
// Package main — discovered channels (#688).
//
// When a decoded channel message text mentions a previously-unknown hashtag
// channel (e.g. "Hey, I created new channel called #mesh, please join"), we
// auto-register that hashtag so future traffic can be displayed. This file
// owns the parsing helper plus the integration glue exposed via GetChannels.
package main
import "regexp"
// hashtagRE matches MeshCore-style hashtag channel mentions inside free text.
// A valid channel name starts with '#', followed by one or more letters,
// digits, underscore, or dash. Trailing punctuation (.,!?:;) is excluded by
// the character class.
var hashtagRE = regexp.MustCompile(`#[A-Za-z0-9_\-]+`)
// extractHashtagsFromText scans a decoded message text and returns the unique
// hashtag channel mentions found, in first-seen order. The leading '#' is
// preserved so callers can match against canonical channel names directly.
//
// Examples:
// extractHashtagsFromText("hi #mesh and #fun") => []string{"#mesh", "#fun"}
// extractHashtagsFromText("nothing here") => nil
// extractHashtagsFromText("dup #x and #x again") => []string{"#x"}
//
func extractHashtagsFromText(text string) []string {
if text == "" {
return nil
}
matches := hashtagRE.FindAllString(text, -1)
if len(matches) == 0 {
return nil
}
seen := make(map[string]struct{}, len(matches))
out := make([]string, 0, len(matches))
for _, m := range matches {
if len(m) < 2 { // bare '#' guard (regex requires 1+ chars but be defensive)
continue
}
if _, ok := seen[m]; ok {
continue
}
seen[m] = struct{}{}
out = append(out, m)
}
if len(out) == 0 {
return nil
}
return out
}
+85
View File
@@ -0,0 +1,85 @@
package main
import (
"reflect"
"testing"
)
// TestExtractHashtagsFromText covers the parsing helper used to discover new
// hashtag channels from decoded message text (issue #688).
func TestExtractHashtagsFromText(t *testing.T) {
cases := []struct {
name string
in string
want []string
}{
{
name: "single mention from issue body",
in: "Hey, I created new channel called #mesh, please join",
want: []string{"#mesh"},
},
{
name: "multiple mentions preserve order",
in: "join #mesh and #wardriving today",
want: []string{"#mesh", "#wardriving"},
},
{
name: "dedup repeated mentions",
in: "#x then #x again",
want: []string{"#x"},
},
{
name: "ignores trailing punctuation",
in: "check #fun!",
want: []string{"#fun"},
},
{
name: "no hashtag returns nil",
in: "nothing to see here",
want: nil,
},
{
name: "bare # is not a channel",
in: "issue #",
want: nil,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := extractHashtagsFromText(tc.in)
if !reflect.DeepEqual(got, tc.want) {
t.Fatalf("extractHashtagsFromText(%q): got %v, want %v", tc.in, got, tc.want)
}
})
}
}
// TestGetChannels_DiscoversHashtagsFromMessages verifies that when a decoded
// CHAN message body mentions a previously-unknown hashtag channel, that
// channel is auto-registered in the GetChannels output (#688).
func TestGetChannels_DiscoversHashtagsFromMessages(t *testing.T) {
// One known channel (#general) where someone announces a new channel #mesh.
pkt := makeGrpTx(198, "general", "Alice: Hey, I created new channel called #mesh, please join", "Alice")
ps := newChannelTestStore([]*StoreTx{pkt})
channels := ps.GetChannels("")
var sawGeneral, sawMesh bool
for _, ch := range channels {
switch ch["name"] {
case "general":
sawGeneral = true
case "#mesh":
sawMesh = true
if d, _ := ch["discovered"].(bool); !d {
t.Errorf("expected discovered=true on #mesh, got %v", ch["discovered"])
}
}
}
if !sawGeneral {
t.Error("expected the source channel 'general' in GetChannels output")
}
if !sawMesh {
t.Errorf("expected discovered hashtag channel '#mesh' in GetChannels output; got %d channels: %+v", len(channels), channels)
}
}
+56
View File
@@ -0,0 +1,56 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
// TestHandleNodes_ExposesForeignAdvertField asserts the /api/nodes response
// surfaces the foreign_advert column as a boolean `foreign` field on each
// node, so operators can see bridged/leaked nodes (#730).
func TestHandleNodes_ExposesForeignAdvertField(t *testing.T) {
srv, router := setupTestServer(t)
conn := srv.db.conn
if _, err := conn.Exec(`INSERT INTO nodes
(public_key, name, role, lat, lon, last_seen, first_seen, advert_count, foreign_advert)
VALUES
('PK_LOCAL', 'local-node', 'companion', 37.0, -122.0, '2026-01-01T00:00:00Z', '2026-01-01T00:00:00Z', 1, 0),
('PK_FOREIGN', 'foreign-node', 'companion', 50.0, 10.0, '2026-01-01T00:00:00Z', '2026-01-01T00:00:00Z', 1, 1)`,
); err != nil {
t.Fatal(err)
}
req := httptest.NewRequest("GET", "/api/nodes?limit=100", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status=%d body=%s", w.Code, w.Body.String())
}
var resp struct {
Nodes []map[string]interface{} `json:"nodes"`
}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatal(err)
}
got := map[string]bool{}
for _, n := range resp.Nodes {
pk, _ := n["public_key"].(string)
f, ok := n["foreign"].(bool)
if !ok {
t.Errorf("node %s: missing/non-bool 'foreign' field, got %T %v", pk, n["foreign"], n["foreign"])
continue
}
got[pk] = f
}
if !got["PK_LOCAL"] == false || got["PK_LOCAL"] != false {
t.Errorf("PK_LOCAL foreign=%v, want false", got["PK_LOCAL"])
}
if got["PK_FOREIGN"] != true {
t.Errorf("PK_FOREIGN foreign=%v, want true", got["PK_FOREIGN"])
}
}
+8
View File
@@ -14,6 +14,14 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require github.com/meshcore-analyzer/dbconfig v0.0.0
replace github.com/meshcore-analyzer/dbconfig => ../../internal/dbconfig
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+43
View File
@@ -0,0 +1,43 @@
package main
import (
"encoding/json"
"net/http"
"sync/atomic"
)
// readiness tracks whether background init goroutines have completed.
// Set to 1 once store.Load, pickBestObservation, and neighbor graph build are done.
var readiness atomic.Int32
// handleHealthz returns 200 when the server is ready to serve queries,
// or 503 while background initialization is still running.
func (s *Server) handleHealthz(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
if readiness.Load() == 0 {
w.WriteHeader(http.StatusServiceUnavailable)
json.NewEncoder(w).Encode(map[string]interface{}{
"ready": false,
"reason": "loading",
})
return
}
var loadedTx, loadedObs int
if s.store != nil {
s.store.mu.RLock()
loadedTx = len(s.store.packets)
for _, p := range s.store.packets {
loadedObs += len(p.Observations)
}
s.store.mu.RUnlock()
}
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]interface{}{
"ready": true,
"loadedTx": loadedTx,
"loadedObs": loadedObs,
})
}
+80
View File
@@ -0,0 +1,80 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestHealthzNotReady(t *testing.T) {
// Ensure readiness is 0 (not ready)
readiness.Store(0)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code != http.StatusServiceUnavailable {
t.Fatalf("expected 503, got %d", w.Code)
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("invalid JSON: %v", err)
}
if resp["ready"] != false {
t.Fatalf("expected ready=false, got %v", resp["ready"])
}
if resp["reason"] != "loading" {
t.Fatalf("expected reason=loading, got %v", resp["reason"])
}
}
func TestHealthzReady(t *testing.T) {
readiness.Store(1)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w.Code)
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("invalid JSON: %v", err)
}
if resp["ready"] != true {
t.Fatalf("expected ready=true, got %v", resp["ready"])
}
if _, ok := resp["loadedTx"]; !ok {
t.Fatal("missing loadedTx field")
}
if _, ok := resp["loadedObs"]; !ok {
t.Fatal("missing loadedObs field")
}
}
func TestHealthzAntiTautology(t *testing.T) {
// When readiness is 0, must NOT return 200
readiness.Store(0)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code == http.StatusOK {
t.Fatal("anti-tautology: handler returned 200 when readiness=0; gating is broken")
}
}
+147
View File
@@ -0,0 +1,147 @@
package main
import (
"testing"
"time"
)
// TestIssue804_AnalyticsAttributesByRepeaterRegion verifies that analytics
// (specifically GetAnalyticsHashSizes) attribute multi-byte nodes to the
// REPEATER's home region, not the observer that happened to hear the relay.
//
// Scenario from #804:
// - PDX-Repeater is a multi-byte (hashSize=2) repeater whose ZERO-HOP direct
// adverts are only heard by obs-PDX (a PDX observer). That zero-hop direct
// advert is the most reliable home-region signal — it cannot have been
// relayed.
// - A flood advert from PDX-Repeater (hashSize=2) propagates and is heard by
// obs-SJC (a SJC observer) via a multi-hop relay path.
// - When the user asks for region=SJC analytics, the PDX-Repeater MUST NOT
// pollute SJC's multiByteNodes — it lives in PDX.
// - The result should also expose attributionMethod="repeater" so the API
// consumer knows which method was used.
//
// Pre-fix behavior: PDX-Repeater appears in SJC's multiByteNodes because the
// filter is observer-based. This test fails on the pre-fix code at the
// "want PDX-Repeater EXCLUDED" assertion.
func TestIssue804_AnalyticsAttributesByRepeaterRegion(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
recentEpoch := now.Add(-1 * time.Hour).Unix()
// Observers: one in PDX, one in SJC
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
VALUES ('obs-pdx', 'Obs PDX', 'PDX', ?, '2026-01-01T00:00:00Z', 100)`, recent)
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
VALUES ('obs-sjc', 'Obs SJC', 'SJC', ?, '2026-01-01T00:00:00Z', 100)`, recent)
// PDX-Repeater node (lives in Portland)
pdxPK := "pdx0000000000001"
db.conn.Exec(`INSERT INTO nodes (public_key, name, role)
VALUES (?, 'PDX-Repeater', 'repeater')`, pdxPK)
// SJC-Repeater node (lives in San Jose) — sanity baseline
sjcPK := "sjc0000000000001"
db.conn.Exec(`INSERT INTO nodes (public_key, name, role)
VALUES (?, 'SJC-Repeater', 'repeater')`, sjcPK)
pdxDecoded := `{"pubKey":"` + pdxPK + `","name":"PDX-Repeater","type":"ADVERT","flags":{"isRepeater":true}}`
sjcDecoded := `{"pubKey":"` + sjcPK + `","name":"SJC-Repeater","type":"ADVERT","flags":{"isRepeater":true}}`
// 1) PDX-Repeater zero-hop DIRECT advert heard only by obs-PDX.
// Establishes PDX as the repeater's home region.
// raw_hex header 0x12 = route_type 2 (direct), payload_type 4
// pathByte 0x40 (hashSize bits=01 → 2, hop_count=0)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1240aabbccdd', 'pdx_zh_direct', ?, 2, 4, ?)`, recent, pdxDecoded)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, 1, 12.0, -85, '[]', ?)`, recentEpoch)
// 2) PDX-Repeater FLOOD advert with hashSize=2 (reliable).
// Heard ONLY by obs-SJC via a relay path (this is the polluting case).
// raw_hex header 0x11 = route_type 1 (flood), payload_type 4
// pathByte 0x41 (hashSize bits=01 → 2, hop_count=1)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1141aabbccdd', 'pdx_flood', ?, 1, 4, ?)`, recent, pdxDecoded)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (2, 2, 8.0, -95, '["aa11"]', ?)`, recentEpoch)
// 3) SJC-Repeater zero-hop DIRECT advert heard only by obs-SJC.
// Establishes SJC as the repeater's home region.
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1240ccddeeff', 'sjc_zh_direct', ?, 2, 4, ?)`, recent, sjcDecoded)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (3, 2, 14.0, -82, '[]', ?)`, recentEpoch)
// 4) SJC-Repeater FLOOD advert with hashSize=2, heard by obs-SJC.
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1141ccddeeff', 'sjc_flood', ?, 1, 4, ?)`, recent, sjcDecoded)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (4, 2, 11.0, -88, '["cc22"]', ?)`, recentEpoch)
store := NewPacketStore(db, nil)
store.Load()
t.Run("region=SJC excludes PDX-Repeater (heard but not home)", func(t *testing.T) {
result := store.GetAnalyticsHashSizes("SJC")
mb, ok := result["multiByteNodes"].([]map[string]interface{})
if !ok {
t.Fatal("expected multiByteNodes slice")
}
var foundPDX, foundSJC bool
for _, n := range mb {
pk, _ := n["pubkey"].(string)
if pk == pdxPK {
foundPDX = true
}
if pk == sjcPK {
foundSJC = true
}
}
if foundPDX {
t.Errorf("PDX-Repeater leaked into SJC analytics — region attribution still observer-based (#804 not fixed)")
}
if !foundSJC {
t.Errorf("SJC-Repeater missing from SJC analytics — fix over-filtered")
}
})
t.Run("API exposes attributionMethod", func(t *testing.T) {
result := store.GetAnalyticsHashSizes("SJC")
method, ok := result["attributionMethod"].(string)
if !ok {
t.Fatal("expected attributionMethod string field on result")
}
if method != "repeater" {
t.Errorf("attributionMethod = %q, want %q", method, "repeater")
}
})
t.Run("region=PDX excludes SJC-Repeater", func(t *testing.T) {
result := store.GetAnalyticsHashSizes("PDX")
mb, _ := result["multiByteNodes"].([]map[string]interface{})
var foundPDX, foundSJC bool
for _, n := range mb {
pk, _ := n["pubkey"].(string)
if pk == pdxPK {
foundPDX = true
}
if pk == sjcPK {
foundSJC = true
}
}
if !foundPDX {
t.Errorf("PDX-Repeater missing from PDX analytics")
}
if foundSJC {
t.Errorf("SJC-Repeater leaked into PDX analytics")
}
})
}
+46 -177
View File
@@ -1,194 +1,63 @@
package main
import (
"database/sql"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"time"
"path/filepath"
_ "modernc.org/sqlite"
"github.com/gorilla/mux"
)
// setupTestDB871 creates a test DB with schema and returns a read-only *DB handle.
func setupTestDB871(t *testing.T) (*DB, *sql.DB) {
t.Helper()
dbPath := filepath.Join(t.TempDir(), "test871.db")
// TestIssue871_NoNullHashOrTimestamp verifies that /api/packets never returns
// packets with null/empty hash or null timestamp (issue #871).
func TestIssue871_NoNullHashOrTimestamp(t *testing.T) {
db := setupTestDB(t)
seedTestData(t, db)
// Open writable connection for setup
rw, err := sql.Open("sqlite", "file:"+dbPath+"?_journal_mode=WAL")
if err != nil {
t.Fatal(err)
// Insert bad legacy data: packet with empty hash
now := time.Now().UTC().Add(-30 * time.Minute).Format(time.RFC3339)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('DEAD', '', ?, 1, 4, '{}')`, now)
// Insert bad legacy data: packet with NULL first_seen (timestamp)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('BEEF', 'aa11bb22cc33dd44', NULL, 1, 4, '{}')`)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest(http.MethodGet, "/api/packets?limit=200", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w.Code)
}
_, err = rw.Exec(`
CREATE TABLE IF NOT EXISTS nodes (
public_key TEXT PRIMARY KEY,
name TEXT, role TEXT,
lat REAL, lon REAL,
last_seen TEXT, first_seen TEXT,
advert_count INTEGER DEFAULT 0,
battery_mv INTEGER, temperature_c REAL
);
CREATE TABLE IF NOT EXISTS transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL UNIQUE,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS observers (
rowid INTEGER PRIMARY KEY AUTOINCREMENT,
id TEXT NOT NULL UNIQUE,
name TEXT
);
CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL,
observer_id TEXT,
observer_name TEXT,
direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT
);
`)
if err != nil {
t.Fatal(err)
var resp struct {
Packets []map[string]interface{} `json:"packets"`
}
if err := json.NewDecoder(w.Body).Decode(&resp); err != nil {
t.Fatalf("decode error: %v", err)
}
// Open read-only handle for the store
db, err := OpenDB(dbPath)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
db.Close()
rw.Close()
})
return db, rw
}
// TestEnrichObsFallbackToDB verifies that enrichObs falls back to the DB when
// the parent transmission has been evicted from memory (#871 root cause).
func TestEnrichObsFallbackToDB(t *testing.T) {
db, rw := setupTestDB871(t)
now := time.Now().UTC().Format(time.RFC3339)
_, err := rw.Exec(
`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type, decoded_json) VALUES (?, ?, ?, ?, ?)`,
"aabbcc", "abc123", now, 4, `{"pubKey":"pk1"}`,
)
if err != nil {
t.Fatal(err)
}
store := NewPacketStore(db, &PacketStoreConfig{})
// Observation references tx_id=1, but tx is NOT in byTxID (simulates eviction)
obs := &StoreObs{
ID: 1,
TransmissionID: 1,
ObserverID: "obs1",
ObserverName: "Observer1",
Timestamp: now,
}
result := store.enrichObs(obs)
// hash must be present from DB fallback
if result["hash"] == nil {
t.Errorf("enrichObs: hash is nil — DB fallback failed")
}
if h, ok := result["hash"].(string); !ok || h != "abc123" {
t.Errorf("enrichObs: expected hash 'abc123', got %v", result["hash"])
}
if result["payload_type"] == nil {
t.Errorf("enrichObs: payload_type is nil — DB fallback failed")
}
// When tx IS in memory, it should use the in-memory path
pt := 4
store.byTxID[1] = &StoreTx{
ID: 1, Hash: "abc123", FirstSeen: now,
PayloadType: &pt, RawHex: "aabbcc",
}
result2 := store.enrichObs(obs)
if result2["hash"] == nil {
t.Errorf("enrichObs with in-memory tx: hash is nil")
}
}
// TestGetNodeHealthRecentPacketsNoNilFields verifies that GetNodeHealth's
// recentPackets never contains entries with nil hash or timestamp.
func TestGetNodeHealthRecentPacketsNoNilFields(t *testing.T) {
db, rw := setupTestDB871(t)
now := time.Now().UTC().Format(time.RFC3339)
_, err := rw.Exec(
`INSERT INTO nodes (public_key, name, role, last_seen) VALUES (?, ?, ?, ?)`,
"pk1", "TestNode", "repeater", now,
)
if err != nil {
t.Fatal(err)
}
store := NewPacketStore(db, &PacketStoreConfig{})
pt := 4
tx := &StoreTx{
ID: 1, Hash: "hash1", FirstSeen: now,
PayloadType: &pt, DecodedJSON: `{"pubKey":"pk1"}`,
obsKeys: make(map[string]bool), observerSet: make(map[string]bool),
}
store.byTxID[1] = tx
store.byHash["hash1"] = tx
store.byNode["pk1"] = []*StoreTx{tx}
store.nodeHashes["pk1"] = map[string]bool{"hash1": true}
result, err := store.GetNodeHealth("pk1")
if err != nil {
t.Fatal(err)
}
if result == nil {
t.Fatal("GetNodeHealth returned nil")
}
packets, ok := result["recentPackets"].([]map[string]interface{})
if !ok {
t.Fatal("recentPackets is not []map[string]interface{}")
}
for i, p := range packets {
if p["hash"] == nil {
t.Errorf("recentPackets[%d] has nil hash", i)
for i, p := range resp.Packets {
hash, _ := p["hash"]
ts, _ := p["timestamp"]
if hash == nil || hash == "" {
t.Errorf("packet[%d] has null/empty hash: %v", i, p)
}
if p["timestamp"] == nil {
t.Errorf("recentPackets[%d] has nil timestamp", i)
if ts == nil || ts == "" {
t.Errorf("packet[%d] has null/empty timestamp: %v", i, p)
}
}
}
// TestEnrichObsNilDB verifies enrichObs doesn't panic when db is nil.
func TestEnrichObsNilDB(t *testing.T) {
store := &PacketStore{
byTxID: make(map[int]*StoreTx),
byObsID: make(map[int]*StoreObs),
}
obs := &StoreObs{
ID: 1, TransmissionID: 999,
Timestamp: "2026-01-01T00:00:00Z",
}
// Should not panic
result := store.enrichObs(obs)
if result["hash"] != nil {
t.Errorf("expected nil hash when no DB and no in-memory tx, got %v", result["hash"])
}
}
+75 -2
View File
@@ -108,6 +108,25 @@ func main() {
log.Printf("[security] WARNING: API key is weak or a known default — write endpoints are vulnerable")
}
// Apply Go runtime soft memory limit (#836).
// Honors GOMEMLIMIT if set; otherwise derives from packetStore.maxMemoryMB.
{
_, envSet := os.LookupEnv("GOMEMLIMIT")
maxMB := 0
if cfg.PacketStore != nil {
maxMB = cfg.PacketStore.MaxMemoryMB
}
limit, source := applyMemoryLimit(maxMB, envSet)
switch source {
case "env":
log.Printf("[memlimit] using GOMEMLIMIT from environment (%s)", os.Getenv("GOMEMLIMIT"))
case "derived":
log.Printf("[memlimit] derived from packetStore.maxMemoryMB=%d → %d MiB (1.5x headroom)", maxMB, limit/(1024*1024))
default:
log.Printf("[memlimit] no soft memory limit set (GOMEMLIMIT unset, packetStore.maxMemoryMB=0); recommend setting one to avoid container OOM-kill")
}
}
// Resolve DB path
resolvedDB := cfg.ResolveDBPath(configDir)
log.Printf("[config] port=%d db=%s public=%s", cfg.Port, resolvedDB, publicDir)
@@ -148,6 +167,9 @@ func main() {
stats.TotalTransmissions, stats.TotalObservations, stats.TotalNodes, stats.TotalObservers)
}
// Check auto_vacuum mode and optionally migrate (#919)
checkAutoVacuum(database, cfg, resolvedDB)
// In-memory packet store
store := NewPacketStore(database, cfg.PacketStore, cfg.CacheTTL)
if err := store.Load(); err != nil {
@@ -171,6 +193,34 @@ func main() {
database.hasResolvedPath = true // detectSchema ran before column was added; fix the flag
}
// Ensure observers.inactive column exists (PR #954 filters on it; ingestor migration
// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
if err := ensureObserverInactiveColumn(dbPath); err != nil {
log.Printf("[store] warning: could not add observers.inactive column: %v", err)
}
// Ensure observers.last_packet_at column exists (PR #905 reads it; ingestor migration
// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
if err := ensureLastPacketAtColumn(dbPath); err != nil {
log.Printf("[store] warning: could not add observers.last_packet_at column: %v", err)
}
// Ensure nodes.foreign_advert column exists (#730 reads it on every /api/nodes
// scan; ingestor migration foreign_advert_v1 adds it but server may run against
// DBs ingestor never touched, e.g. e2e fixture).
if err := ensureForeignAdvertColumn(dbPath); err != nil {
log.Printf("[store] warning: could not add nodes.foreign_advert column: %v", err)
}
// Soft-delete observers that are in the blacklist (mark inactive=1) so
// historical data from a prior unblocked window is hidden too.
if len(cfg.ObserverBlacklist) > 0 {
softDeleteBlacklistedObservers(dbPath, cfg.ObserverBlacklist)
}
// WaitGroup for background init steps that gate /api/healthz readiness.
var initWg sync.WaitGroup
// Load or build neighbor graph
if neighborEdgesTableExists(database.conn) {
store.graph = loadNeighborEdgesFromDB(database.conn)
@@ -178,16 +228,17 @@ func main() {
} else {
log.Printf("[neighbor] no persisted edges found, will build in background...")
store.graph = NewNeighborGraph() // empty graph — gets populated by background goroutine
initWg.Add(1)
go func() {
defer initWg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("[neighbor] graph build panic recovered: %v", r)
}
}()
rw, rwErr := openRW(dbPath)
rw, rwErr := cachedRW(dbPath)
if rwErr == nil {
edgeCount := buildAndPersistEdges(store, rw)
rw.Close()
log.Printf("[neighbor] persisted %d edges", edgeCount)
}
built := BuildFromStore(store)
@@ -202,7 +253,9 @@ func main() {
// API serves best-effort data until this completes (~10s for 100K txs).
// Processes in chunks of 5000, releasing the lock between chunks so API
// handlers remain responsive.
initWg.Add(1)
go func() {
defer initWg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("[store] pickBestObservation panic recovered: %v", r)
@@ -230,6 +283,13 @@ func main() {
log.Printf("[store] initial pickBestObservation complete (%d transmissions)", totalPackets)
}()
// Mark server ready once all background init completes.
go func() {
initWg.Wait()
readiness.Store(1)
log.Printf("[server] readiness: ready=true (background init complete)")
}()
// WebSocket hub
hub := NewHub()
@@ -266,6 +326,7 @@ func main() {
defer stopEviction()
// Auto-prune old packets if retention.packetDays is configured
vacuumPages := cfg.IncrementalVacuumPages()
var stopPrune func()
if cfg.Retention != nil && cfg.Retention.PacketDays > 0 {
days := cfg.Retention.PacketDays
@@ -286,6 +347,9 @@ func main() {
log.Printf("[prune] error: %v", err)
} else {
log.Printf("[prune] deleted %d transmissions older than %d days", n, days)
if n > 0 {
runIncrementalVacuum(resolvedDB, vacuumPages)
}
}
for {
select {
@@ -294,6 +358,9 @@ func main() {
log.Printf("[prune] error: %v", err)
} else {
log.Printf("[prune] deleted %d transmissions older than %d days", n, days)
if n > 0 {
runIncrementalVacuum(resolvedDB, vacuumPages)
}
}
case <-pruneDone:
return
@@ -321,10 +388,12 @@ func main() {
}()
time.Sleep(2 * time.Minute) // stagger after packet prune
database.PruneOldMetrics(metricsDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
for {
select {
case <-metricsPruneTicker.C:
database.PruneOldMetrics(metricsDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
case <-metricsPruneDone:
return
}
@@ -354,10 +423,12 @@ func main() {
}()
time.Sleep(3 * time.Minute) // stagger after metrics prune
database.RemoveStaleObservers(observerDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
for {
select {
case <-observerPruneTicker.C:
database.RemoveStaleObservers(observerDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
case <-observerPruneDone:
return
}
@@ -388,6 +459,7 @@ func main() {
g := store.graph
store.mu.RUnlock()
PruneNeighborEdges(dbPath, g, maxAgeDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
for {
select {
case <-edgePruneTicker.C:
@@ -395,6 +467,7 @@ func main() {
g := store.graph
store.mu.RUnlock()
PruneNeighborEdges(dbPath, g, maxAgeDays)
runIncrementalVacuum(resolvedDB, vacuumPages)
case <-edgePruneDone:
return
}
+32
View File
@@ -0,0 +1,32 @@
package main
import (
"runtime/debug"
)
// applyMemoryLimit configures Go's soft memory limit (GOMEMLIMIT).
//
// Behavior:
// - If envSet is true (GOMEMLIMIT env var present), the runtime has already
// parsed it; we leave it alone and report source="env" with limit=0.
// - Otherwise, if maxMemoryMB > 0, we derive a limit of maxMemoryMB * 1.5 MiB
// and set it via debug.SetMemoryLimit. This forces aggressive GC under
// cgroup pressure so the process self-throttles before SIGKILL. See #836.
// - Otherwise, no limit is applied; source="none".
//
// Returns the limit (in bytes) we actually set, or 0 if we did not set one,
// plus a short source identifier ("env" | "derived" | "none") for logging.
func applyMemoryLimit(maxMemoryMB int, envSet bool) (int64, string) {
if envSet {
return 0, "env"
}
if maxMemoryMB <= 0 {
return 0, "none"
}
// 1.5x headroom over the steady-state packet store budget covers
// transient peaks (cold-load row-scan / decode pipeline, Go's NextGC
// trigger at ~2x live heap). See issue #836 heap profile.
limit := int64(maxMemoryMB) * 1024 * 1024 * 3 / 2
debug.SetMemoryLimit(limit)
return limit, "derived"
}
+54
View File
@@ -0,0 +1,54 @@
package main
import (
"runtime/debug"
"testing"
)
func TestApplyMemoryLimit_FromEnv(t *testing.T) {
t.Setenv("GOMEMLIMIT", "850MiB")
// reset to a known state after test
defer debug.SetMemoryLimit(-1)
limit, source := applyMemoryLimit(512, true /* envSet */)
if source != "env" {
t.Fatalf("expected source=env, got %q", source)
}
// When env is set, our function must NOT override it; reported limit is 0.
if limit != 0 {
t.Fatalf("expected limit=0 (not set by us), got %d", limit)
}
}
func TestApplyMemoryLimit_DerivedFromMaxMemoryMB(t *testing.T) {
defer debug.SetMemoryLimit(-1)
// maxMemoryMB=512 → 512 * 1.5 = 768 MiB = 768 * 1024 * 1024 bytes
limit, source := applyMemoryLimit(512, false /* envSet */)
if source != "derived" {
t.Fatalf("expected source=derived, got %q", source)
}
want := int64(768) * 1024 * 1024
if limit != want {
t.Fatalf("expected limit=%d, got %d", want, limit)
}
// Verify it was actually set on the runtime
cur := debug.SetMemoryLimit(-1)
if cur != want {
t.Fatalf("runtime memory limit not set: want=%d got=%d", want, cur)
}
}
func TestApplyMemoryLimit_None(t *testing.T) {
defer debug.SetMemoryLimit(-1)
// Reset to "no limit" (math.MaxInt64) before test
debug.SetMemoryLimit(int64(1<<63 - 1))
limit, source := applyMemoryLimit(0, false)
if source != "none" {
t.Fatalf("expected source=none, got %q", source)
}
if limit != 0 {
t.Fatalf("expected limit=0, got %d", limit)
}
}
+57
View File
@@ -0,0 +1,57 @@
package main
import "testing"
func TestEnrichNodeWithMultiByte(t *testing.T) {
t.Run("nil entry leaves no fields", func(t *testing.T) {
node := map[string]interface{}{"public_key": "abc123"}
EnrichNodeWithMultiByte(node, nil)
if _, ok := node["multi_byte_status"]; ok {
t.Error("expected no multi_byte_status with nil entry")
}
})
t.Run("confirmed entry sets fields", func(t *testing.T) {
node := map[string]interface{}{"public_key": "abc123"}
entry := &MultiByteCapEntry{
Status: "confirmed",
Evidence: "advert",
MaxHashSize: 2,
}
EnrichNodeWithMultiByte(node, entry)
if node["multi_byte_status"] != "confirmed" {
t.Errorf("expected confirmed, got %v", node["multi_byte_status"])
}
if node["multi_byte_evidence"] != "advert" {
t.Errorf("expected advert, got %v", node["multi_byte_evidence"])
}
if node["multi_byte_max_hash_size"] != 2 {
t.Errorf("expected 2, got %v", node["multi_byte_max_hash_size"])
}
})
t.Run("suspected entry sets fields", func(t *testing.T) {
node := map[string]interface{}{"public_key": "abc123"}
entry := &MultiByteCapEntry{
Status: "suspected",
Evidence: "path",
MaxHashSize: 2,
}
EnrichNodeWithMultiByte(node, entry)
if node["multi_byte_status"] != "suspected" {
t.Errorf("expected suspected, got %v", node["multi_byte_status"])
}
})
t.Run("unknown entry sets status unknown", func(t *testing.T) {
node := map[string]interface{}{"public_key": "abc123"}
entry := &MultiByteCapEntry{
Status: "unknown",
MaxHashSize: 1,
}
EnrichNodeWithMultiByte(node, entry)
if node["multi_byte_status"] != "unknown" {
t.Errorf("expected unknown, got %v", node["multi_byte_status"])
}
})
}
+107
View File
@@ -0,0 +1,107 @@
package main
import (
"testing"
"time"
)
// TestMultiByteCapability_RegionFiltered_PreservesConfirmedStatus verifies
// that GetAnalyticsHashSizes returns a populated multiByteCapability list
// even when a region filter is applied. The frontend (analytics.js) merges
// this into the adopter table to render per-node "confirmed/suspected/unknown"
// badges. When the field is missing or empty under a region filter, every
// row falls back to "unknown" — see meshcore.meshat.se/#/analytics filtered
// by JKG showing 14 "unknown" while the unfiltered view shows 0.
//
// Multi-byte capability is a property of the NODE (advertised hash_size from
// its own adverts), not the observing region. Region filter should affect
// which nodes appear in the result list (multiByteNodes), not their cap status.
//
// Pre-fix behavior: multiByteCapability is only populated when region == "".
// This test fails because result["multiByteCapability"] is absent under
// region="JKG", so the lookup returns nil/false.
func TestMultiByteCapability_RegionFiltered_PreservesConfirmedStatus(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
recentEpoch := now.Add(-1 * time.Hour).Unix()
// Two observers in different regions.
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
VALUES ('obs-sjc', 'Obs SJC', 'SJC', ?, '2026-01-01T00:00:00Z', 100)`, recent)
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
VALUES ('obs-jkg', 'Obs JKG', 'JKG', ?, '2026-01-01T00:00:00Z', 100)`, recent)
// Node A: a JKG-region repeater that advertises multi-byte (hash_size=2).
// Its zero-hop direct advert is only heard by obs-SJC (e.g. an out-of-region
// listener that happens to pick it up). Under the JKG region filter, the
// computeAnalyticsHashSizes() pass will see a smaller advert dataset, but
// the node's multi-byte capability is intrinsic and should still resolve
// to "confirmed" via the global advert evidence.
pkA := "aaa0000000000001"
db.conn.Exec(`INSERT INTO nodes (public_key, name, role)
VALUES (?, 'Node-A', 'repeater')`, pkA)
decodedA := `{"pubKey":"` + pkA + `","name":"Node-A","type":"ADVERT","flags":{"isRepeater":true}}`
// Zero-hop direct advert (route_type=2, payload_type=4),
// pathByte 0x40 → hash_size bits 01 → 2 bytes.
// Heard by obs-SJC ONLY.
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1240aabbccdd', 'a_zh_direct', ?, 2, 4, ?)`, recent, decodedA)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, 1, 12.0, -85, '[]', ?)`, recentEpoch)
// Node A also appears as a path hop in a JKG-observed packet, so it
// shows up in the JKG region's node list.
// route_type=1 (flood), payload_type=4, pathByte 0x41 (hs=2, hops=1)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('1141aabbccdd', 'a_jkg_relay', ?, 1, 4, ?)`, recent, decodedA)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (2, 2, 8.0, -95, '["aa"]', ?)`, recentEpoch)
store := NewPacketStore(db, nil)
store.Load()
// Sanity: unfiltered view exposes the field.
unfiltered := store.GetAnalyticsHashSizes("")
if _, ok := unfiltered["multiByteCapability"]; !ok {
t.Fatal("unfiltered result missing multiByteCapability — test setup is wrong")
}
// The actual assertion: region-filtered view MUST also expose the field
// AND must report Node A as "confirmed", not "unknown".
result := store.GetAnalyticsHashSizes("JKG")
capsRaw, ok := result["multiByteCapability"]
if !ok {
t.Fatalf("expected multiByteCapability in region=JKG result, got keys: %v", keysOf(result))
}
caps, ok := capsRaw.([]MultiByteCapEntry)
if !ok {
t.Fatalf("expected []MultiByteCapEntry, got %T", capsRaw)
}
var foundA *MultiByteCapEntry
for i := range caps {
if caps[i].PublicKey == pkA {
foundA = &caps[i]
break
}
}
if foundA == nil {
t.Fatalf("Node A missing from region=JKG multiByteCapability (have %d entries)", len(caps))
}
if foundA.Status != "confirmed" {
t.Errorf("Node A status under region=JKG = %q, want %q (region filter wrongly downgraded multi-byte capability evidence)", foundA.Status, "confirmed")
}
}
func keysOf(m map[string]interface{}) []string {
out := make([]string, 0, len(m))
for k := range m {
out = append(out, k)
}
return out
}
+18 -18
View File
@@ -12,9 +12,9 @@ import (
func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
// Node A at lat=45, lon=-122. Candidate B1 at lat=45.1, lon=-122.1 (close).
// Candidate B2 at lat=10, lon=10 (far away). Prefix "b0" matches both.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -62,8 +62,8 @@ func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
// Test 2: Ambiguous edge merged with existing resolved edge (count accumulation).
func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
@@ -133,9 +133,9 @@ func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
// Test 3: Ambiguous edge left as-is when resolution fails.
func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
// Two candidates, neither has GPS, no affinity data — resolution falls through.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "B1"}
nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "B2"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "B1"}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "B2"}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -175,7 +175,7 @@ func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
// Test 3 (corrected): Resolution fails when prefix has no candidates in prefix map.
func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
// pm has no entries matching prefix "zz"
pm := buildPrefixMap([]nodeInfo{nodeA})
@@ -215,8 +215,8 @@ func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
// Test 6: Phase 1 edge collection unchanged (no regression).
func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
// Build a simple graph and verify non-ambiguous edges are not touched.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
ts := time.Now().UTC().Format(time.RFC3339)
payloadType := 4
@@ -232,7 +232,7 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
Observations: obs,
}
store := ngTestStore([]nodeInfo{nodeA, nodeB, {PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
store := ngTestStore([]nodeInfo{nodeA, nodeB, {Role: "repeater", PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
graph := BuildFromStore(store)
edges := graph.Neighbors("aaaa1111")
@@ -255,8 +255,8 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
// Test 7: Merge preserves higher LastSeen timestamp.
func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
graph := NewNeighborGraph()
@@ -307,10 +307,10 @@ func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
// Test 5: Integration — node with both 1-byte and 2-byte prefix observations shows single entry.
func TestIntegration_DualPrefixSingleNeighbor(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
observer := nodeInfo{PublicKey: "cccc3333cccc3333", Name: "Observer"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
observer := nodeInfo{Role: "repeater", PublicKey: "cccc3333cccc3333", Name: "Observer"}
ts := time.Now().UTC().Format(time.RFC3339)
pt := 4
+63 -63
View File
@@ -86,9 +86,9 @@ func TestBuildNeighborGraph_EmptyStore(t *testing.T) {
func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
// ADVERT from X, path=["R1_prefix"] → edges: X↔R1 and Observer↔R1
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa"]`, nowStr, ngFloatPtr(-10)),
@@ -132,10 +132,10 @@ func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
// ADVERT from X, path=["R1","R2"] → X↔R1 and Observer↔R2
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -170,8 +170,8 @@ func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
// ADVERT from X, path=[] → X↔Observer direct edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -195,8 +195,8 @@ func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
// Non-ADVERT, path=[] → no edges
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -212,10 +212,10 @@ func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
// Non-ADVERT with path=["R1","R2"] → only Observer↔R2, NO originator edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -236,9 +236,9 @@ func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
// Non-ADVERT with path=["R1"] → Observer↔R1 only
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa"]`, nowStr, nil),
@@ -259,10 +259,10 @@ func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
func TestBuildNeighborGraph_HashCollision(t *testing.T) {
// Two nodes share prefix "a3" → ambiguous edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "a3bb1111", Name: "CandidateA"},
{PublicKey: "a3bb2222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "a3bb1111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3bb2222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["a3bb"]`, nowStr, nil),
@@ -308,13 +308,13 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
// CandidateB has no known neighbors (Jaccard = 0).
// An ambiguous edge X↔prefix "a3" with candidates [A, B] should auto-resolve to A.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "n1111111", Name: "N1"},
{PublicKey: "n2222222", Name: "N2"},
{PublicKey: "n3333333", Name: "N3"},
{PublicKey: "a3001111", Name: "CandidateA"},
{PublicKey: "a3002222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
{Role: "repeater", PublicKey: "n2222222", Name: "N2"},
{Role: "repeater", PublicKey: "n3333333", Name: "N3"},
{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
// Create resolved edges: X↔N1, X↔N2, X↔N3, A↔N1, A↔N2, A↔N3
@@ -373,11 +373,11 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
// Two candidates with identical neighbor sets → should NOT auto-resolve.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "n1111111", Name: "N1"},
{PublicKey: "a3001111", Name: "CandidateA"},
{PublicKey: "a3002222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
var txs []*StoreTx
@@ -425,8 +425,8 @@ func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
// Observer's own prefix in path → should NOT create self-edge.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["obs0"]`, nowStr, nil),
@@ -445,8 +445,8 @@ func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
func TestBuildNeighborGraph_OrphanPrefix(t *testing.T) {
// Path contains prefix matching zero nodes → edge recorded as unresolved.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["ff99"]`, nowStr, nil),
@@ -506,9 +506,9 @@ func TestAffinityScore_StaleAndLow(t *testing.T) {
func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
var txs []*StoreTx
@@ -535,10 +535,10 @@ func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Obs1"},
{PublicKey: "obs00002", Name: "Obs2"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Obs1"},
{Role: "repeater", PublicKey: "obs00002", Name: "Obs2"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -565,9 +565,9 @@ func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -592,10 +592,10 @@ func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
func TestBuildNeighborGraph_ADVERTOnlyConstraint(t *testing.T) {
// Non-ADVERT: should NOT create originator↔path[0] edge, only observer↔path[last].
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -631,9 +631,9 @@ func ngPubKeyJSON(pubkey string) string {
func TestBuildNeighborGraph_AdvertPubKeyField(t *testing.T) {
// Real ADVERTs use "pubKey", not "from_node". Verify the builder handles it.
nodes := []nodeInfo{
{PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
{Role: "repeater", PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{Role: "repeater", PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{Role: "repeater", PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngPubKeyJSON("99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234"), []*StoreObs{
ngMakeObs("obs0000100112233445566778899001122334455667788990011223344556677", `["r1"]`, nowStr, ngFloatPtr(-8.5)),
@@ -666,10 +666,10 @@ func TestBuildNeighborGraph_OneByteHashPrefixes(t *testing.T) {
// Real-world scenario: 1-byte hash prefixes with multiple candidates.
// Should create edges (possibly ambiguous) rather than empty graph.
nodes := []nodeInfo{
{PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
{Role: "repeater", PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{Role: "repeater", PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{Role: "repeater", PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{Role: "repeater", PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
}
// ADVERT from Originator with 1-byte path hop "c0"
tx := ngMakeTx(1, 4, ngPubKeyJSON("a3bbccdd00000000000000000000000000000000000000000000000000000003"), []*StoreObs{
@@ -809,10 +809,10 @@ func TestExtractFromNode_UsesCachedParse(t *testing.T) {
func BenchmarkBuildFromStore(b *testing.B) {
// Simulate a dataset with many packets and repeated pubkeys
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeA"},
{PublicKey: "bbbb2222", Name: "NodeB"},
{PublicKey: "cccc3333", Name: "NodeC"},
{PublicKey: "dddd4444", Name: "NodeD"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"},
{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB"},
{Role: "repeater", PublicKey: "cccc3333", Name: "NodeC"},
{Role: "repeater", PublicKey: "dddd4444", Name: "NodeD"},
}
const numPackets = 1000
packets := make([]*StoreTx, 0, numPackets)
+161 -14
View File
@@ -20,11 +20,10 @@ var persistSem = make(chan struct{}, 1)
// ensureNeighborEdgesTable creates the neighbor_edges table if it doesn't exist.
// Uses a separate read-write connection since the main DB is read-only.
func ensureNeighborEdgesTable(dbPath string) error {
rw, err := openRW(dbPath)
rw, err := cachedRW(dbPath)
if err != nil {
return fmt.Errorf("open rw for neighbor_edges: %w", err)
}
defer rw.Close()
_, err = rw.Exec(`CREATE TABLE IF NOT EXISTS neighbor_edges (
node_a TEXT NOT NULL,
@@ -129,12 +128,11 @@ func asyncPersistResolvedPathsAndEdges(dbPath string, obsUpdates []persistObsUpd
go func() {
defer func() { <-persistSem }()
rw, err := openRW(dbPath)
rw, err := cachedRW(dbPath)
if err != nil {
log.Printf("[store] %s rw open error: %v", logPrefix, err)
return
}
defer rw.Close()
if len(obsUpdates) > 0 {
sqlTx, err := rw.Begin()
@@ -249,11 +247,10 @@ func buildAndPersistEdges(store *PacketStore, rw *sql.DB) int {
// ensureResolvedPathColumn adds the resolved_path column to observations if missing.
func ensureResolvedPathColumn(dbPath string) error {
rw, err := openRW(dbPath)
rw, err := cachedRW(dbPath)
if err != nil {
return err
}
defer rw.Close()
// Check if column already exists
rows, err := rw.Query("PRAGMA table_info(observations)")
@@ -281,6 +278,161 @@ func ensureResolvedPathColumn(dbPath string) error {
return nil
}
// ensureObserverInactiveColumn adds the inactive column to observers if missing.
// The column was originally added by ingestor migration (cmd/ingestor/db.go:344) to
// support soft-delete via RemoveStaleObservers + filtered reads (PR #954). When the
// server starts against a DB that was never touched by the ingestor (e.g. the e2e
// fixture), the column is missing and read queries that filter on it (GetObservers,
// GetStats) silently fail with "no such column: inactive" — leaving /api/observers
// returning empty.
func ensureObserverInactiveColumn(dbPath string) error {
rw, err := cachedRW(dbPath)
if err != nil {
return err
}
rows, err := rw.Query("PRAGMA table_info(observers)")
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "inactive" {
return nil // already exists
}
}
_, err = rw.Exec("ALTER TABLE observers ADD COLUMN inactive INTEGER DEFAULT 0")
if err != nil {
return fmt.Errorf("add inactive column: %w", err)
}
log.Println("[store] Added inactive column to observers")
return nil
}
// ensureLastPacketAtColumn adds the last_packet_at column to observers if missing.
// The column was originally added by ingestor migration (observers_last_packet_at_v1)
// to track the most recent packet observation time separately from status updates.
// When the server starts against a DB that was never touched by the ingestor (e.g.
// the e2e fixture), the column is missing and read queries that reference it
// (GetObservers, GetObserverByID) fail with "no such column: last_packet_at".
func ensureLastPacketAtColumn(dbPath string) error {
rw, err := cachedRW(dbPath)
if err != nil {
return err
}
rows, err := rw.Query("PRAGMA table_info(observers)")
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
return nil // already exists
}
}
_, err = rw.Exec("ALTER TABLE observers ADD COLUMN last_packet_at TEXT")
if err != nil {
return fmt.Errorf("add last_packet_at column: %w", err)
}
log.Println("[store] Added last_packet_at column to observers")
return nil
}
// ensureForeignAdvertColumn adds the foreign_advert column to nodes/inactive_nodes
// if missing (#730). The column is added by the ingestor migration foreign_advert_v1
// — but the server may run against a DB the ingestor has never touched (e2e fixture,
// fresh installs where the server boots first), in which case scanNodeRow fails
// with "no such column: foreign_advert" and /api/nodes silently returns nothing.
func ensureForeignAdvertColumn(dbPath string) error {
rw, err := cachedRW(dbPath)
if err != nil {
return err
}
for _, table := range []string{"nodes", "inactive_nodes"} {
has, err := tableHasColumn(rw, table, "foreign_advert")
if err != nil {
return fmt.Errorf("inspect %s: %w", table, err)
}
if has {
continue
}
if _, err := rw.Exec(fmt.Sprintf("ALTER TABLE %s ADD COLUMN foreign_advert INTEGER DEFAULT 0", table)); err != nil {
return fmt.Errorf("add foreign_advert to %s: %w", table, err)
}
log.Printf("[store] Added foreign_advert column to %s", table)
}
return nil
}
// tableHasColumn reports whether the named table has the named column.
func tableHasColumn(rw *sql.DB, table, column string) (bool, error) {
rows, err := rw.Query(fmt.Sprintf("PRAGMA table_info(%s)", table))
if err != nil {
return false, err
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == column {
return true, nil
}
}
return false, nil
}
// softDeleteBlacklistedObservers marks observers matching the blacklist as
// inactive=1 so they are hidden from API responses. Runs once at startup.
func softDeleteBlacklistedObservers(dbPath string, blacklist []string) {
rw, err := cachedRW(dbPath)
if err != nil {
log.Printf("[observer-blacklist] warning: could not open DB for soft-delete: %v", err)
return
}
placeholders := make([]string, 0, len(blacklist))
args := make([]interface{}, 0, len(blacklist))
for _, pk := range blacklist {
trimmed := strings.TrimSpace(pk)
if trimmed == "" {
continue
}
placeholders = append(placeholders, "LOWER(?)")
args = append(args, trimmed)
}
if len(placeholders) == 0 {
return
}
query := "UPDATE observers SET inactive = 1 WHERE LOWER(id) IN (" + strings.Join(placeholders, ",") + ") AND (inactive IS NULL OR inactive = 0)"
result, err := rw.Exec(query, args...)
if err != nil {
log.Printf("[observer-blacklist] warning: soft-delete failed: %v", err)
return
}
if n, _ := result.RowsAffected(); n > 0 {
log.Printf("[observer-blacklist] soft-deleted %d blacklisted observer(s)", n)
}
}
// resolvePathForObs resolves hop prefixes to full pubkeys for an observation.
// Returns nil if path is empty.
func resolvePathForObs(pathJSON, observerID string, tx *StoreTx, pm *prefixMap, graph *NeighborGraph) []*string {
@@ -416,16 +568,12 @@ func backfillResolvedPathsAsync(store *PacketStore, dbPath string, chunkSize int
var rw *sql.DB
if dbPath != "" {
var err error
rw, err = openRW(dbPath)
rw, err = cachedRW(dbPath)
if err != nil {
log.Printf("[store] async backfill: open rw error: %v", err)
}
}
defer func() {
if rw != nil {
rw.Close()
}
}()
// rw is cached process-wide; do not close
totalProcessed := 0
for totalProcessed < totalPending {
@@ -650,11 +798,10 @@ func PruneNeighborEdges(dbPath string, graph *NeighborGraph, maxAgeDays int) (in
// 1. Prune from SQLite using a read-write connection
var dbPruned int64
rw, err := openRW(dbPath)
rw, err := cachedRW(dbPath)
if err != nil {
return 0, fmt.Errorf("prune neighbor_edges: open rw: %w", err)
}
defer rw.Close()
res, err := rw.Exec("DELETE FROM neighbor_edges WHERE last_seen < ?", cutoff.Format(time.RFC3339))
if err != nil {
return 0, fmt.Errorf("prune neighbor_edges: %w", err)
+66 -7
View File
@@ -38,7 +38,7 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT,
resolved_path TEXT
resolved_path TEXT, raw_hex TEXT
)`)
conn.Exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
@@ -58,8 +58,8 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
func TestResolvePathForObs(t *testing.T) {
// Build a prefix map with known nodes
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
}
pm := buildPrefixMap(nodes)
graph := NewNeighborGraph()
@@ -97,7 +97,7 @@ func TestResolvePathForObs_EmptyPath(t *testing.T) {
func TestResolvePathForObs_Unresolvable(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
}
pm := buildPrefixMap(nodes)
@@ -264,7 +264,7 @@ func TestEnsureResolvedPathColumn(t *testing.T) {
conn, _ := sql.Open("sqlite", "file:"+dbPath+"?_journal_mode=WAL")
conn.Exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER,
observer_id TEXT, path_json TEXT, timestamp TEXT
observer_id TEXT, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
conn.Close()
@@ -437,8 +437,8 @@ func TestExtractEdgesFromObs_NonAdvertNoPath(t *testing.T) {
func TestExtractEdgesFromObs_WithPath(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
}
pm := buildPrefixMap(nodes)
@@ -538,3 +538,62 @@ func TestOpenRW_BusyTimeout(t *testing.T) {
t.Errorf("expected busy_timeout=5000, got %d", timeout)
}
}
func TestEnsureLastPacketAtColumn(t *testing.T) {
// Create a temp DB with observers table missing last_packet_at
dir := t.TempDir()
dbPath := dir + "/test.db"
db, err := sql.Open("sqlite", dbPath)
if err != nil {
t.Fatal(err)
}
_, err = db.Exec(`CREATE TABLE observers (
id TEXT PRIMARY KEY,
name TEXT,
last_seen TEXT,
lat REAL,
lon REAL,
inactive INTEGER DEFAULT 0
)`)
if err != nil {
t.Fatal(err)
}
db.Close()
// First call: should add the column
if err := ensureLastPacketAtColumn(dbPath); err != nil {
t.Fatalf("first call failed: %v", err)
}
// Verify column exists
db2, err := sql.Open("sqlite", dbPath)
if err != nil {
t.Fatal(err)
}
defer db2.Close()
var found bool
rows, err := db2.Query("PRAGMA table_info(observers)")
if err != nil {
t.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
found = true
}
}
if !found {
t.Fatal("last_packet_at column not found after migration")
}
// Idempotency: second call should succeed without error
if err := ensureLastPacketAtColumn(dbPath); err != nil {
t.Fatalf("idempotent call failed: %v", err)
}
}
+150
View File
@@ -0,0 +1,150 @@
package main
import (
"net/http"
"strconv"
"strings"
"time"
"github.com/gorilla/mux"
)
// BatteryThresholdsConfig: voltage cutoffs for low-battery alerts (#663).
// All values in millivolts. When a node's most-recent battery sample falls
// below LowMv it is flagged "low"; below CriticalMv it is flagged "critical".
type BatteryThresholdsConfig struct {
LowMv int `json:"lowMv"`
CriticalMv int `json:"criticalMv"`
}
// LowBatteryMv returns the configured low-battery threshold or the default 3300mV.
func (c *Config) LowBatteryMv() int {
if c.BatteryThresholds != nil && c.BatteryThresholds.LowMv > 0 {
return c.BatteryThresholds.LowMv
}
return 3300
}
// CriticalBatteryMv returns the configured critical-battery threshold or the default 3000mV.
func (c *Config) CriticalBatteryMv() int {
if c.BatteryThresholds != nil && c.BatteryThresholds.CriticalMv > 0 {
return c.BatteryThresholds.CriticalMv
}
return 3000
}
// NodeBatterySample is a single (timestamp, battery_mv) point.
type NodeBatterySample struct {
Timestamp string `json:"timestamp"`
BatteryMv int `json:"battery_mv"`
}
// GetNodeBatteryHistory returns time-ordered battery_mv samples for a node,
// pulled from observer_metrics by joining observers.id (uppercase pubkey)
// against the node's public_key (lowercase). Rows with NULL battery are skipped.
//
// The match is case-insensitive on observer_id to tolerate historical
// variation in pubkey casing.
func (db *DB) GetNodeBatteryHistory(pubkey, since string) ([]NodeBatterySample, error) {
if pubkey == "" {
return nil, nil
}
pk := strings.ToLower(pubkey)
rows, err := db.conn.Query(`
SELECT timestamp, battery_mv
FROM observer_metrics
WHERE LOWER(observer_id) = ?
AND battery_mv IS NOT NULL
AND timestamp >= ?
ORDER BY timestamp ASC`, pk, since)
if err != nil {
return nil, err
}
defer rows.Close()
var out []NodeBatterySample
for rows.Next() {
var ts string
var mv int
if err := rows.Scan(&ts, &mv); err != nil {
return nil, err
}
out = append(out, NodeBatterySample{Timestamp: ts, BatteryMv: mv})
}
return out, rows.Err()
}
// handleNodeBattery serves GET /api/nodes/{pubkey}/battery?days=N (#663).
//
// Returns voltage time-series for a node and a status flag based on the most
// recent sample evaluated against configured thresholds:
// - "critical" : latest_mv < CriticalBatteryMv
// - "low" : latest_mv < LowBatteryMv
// - "ok" : latest_mv >= LowBatteryMv
// - "unknown" : no samples in window
func (s *Server) handleNodeBattery(w http.ResponseWriter, r *http.Request) {
pubkey := mux.Vars(r)["pubkey"]
if pubkey == "" {
writeError(w, 400, "missing pubkey")
return
}
// 404 if node unknown — keeps URL space tidy and matches /health behavior.
node, err := s.db.GetNodeByPubkey(pubkey)
if err != nil {
writeError(w, 500, err.Error())
return
}
if node == nil {
writeError(w, 404, "node not found")
return
}
days := 7
if d, _ := strconv.Atoi(r.URL.Query().Get("days")); d > 0 && d <= 365 {
days = d
}
since := time.Now().UTC().Add(-time.Duration(days) * 24 * time.Hour).Format(time.RFC3339)
samples, err := s.db.GetNodeBatteryHistory(pubkey, since)
if err != nil {
writeError(w, 500, err.Error())
return
}
if samples == nil {
samples = []NodeBatterySample{}
}
low := s.cfg.LowBatteryMv()
crit := s.cfg.CriticalBatteryMv()
status := "unknown"
var latestMv interface{}
var latestTs interface{}
if n := len(samples); n > 0 {
mv := samples[n-1].BatteryMv
latestMv = mv
latestTs = samples[n-1].Timestamp
switch {
case mv < crit:
status = "critical"
case mv < low:
status = "low"
default:
status = "ok"
}
}
writeJSON(w, map[string]interface{}{
"public_key": strings.ToLower(pubkey),
"days": days,
"samples": samples,
"latest_mv": latestMv,
"latest_ts": latestTs,
"status": status,
"thresholds": map[string]interface{}{
"low_mv": low,
"critical_mv": crit,
},
})
}
+161
View File
@@ -0,0 +1,161 @@
package main
import (
"encoding/json"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/gorilla/mux"
)
// TestGetNodeBatteryHistory_FromObserverMetrics validates that the DB layer
// can pull a node's battery_mv time-series from observer_metrics, joining
// observers.id (uppercase hex pubkey) to nodes.public_key (lowercase hex).
func TestGetNodeBatteryHistory_FromObserverMetrics(t *testing.T) {
db := setupTestDB(t)
now := time.Now().UTC()
// node + observer with matching pubkey (cases differ on purpose)
pkLower := "deadbeefcafef00d11223344"
idUpper := strings.ToUpper(pkLower)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, last_seen, first_seen) VALUES (?, 'BatNode', 'repeater', ?, ?)`,
pkLower, now.Format(time.RFC3339), now.Add(-72*time.Hour).Format(time.RFC3339))
db.conn.Exec(`INSERT INTO observers (id, name, last_seen, first_seen) VALUES (?, 'BatNode', ?, ?)`,
idUpper, now.Format(time.RFC3339), now.Add(-72*time.Hour).Format(time.RFC3339))
// 3 metrics samples: 3700, 3500, 3200 mV
for i, mv := range []int{3700, 3500, 3200} {
ts := now.Add(time.Duration(-2+i) * time.Hour).Format(time.RFC3339)
db.conn.Exec(`INSERT INTO observer_metrics (observer_id, timestamp, battery_mv) VALUES (?, ?, ?)`,
idUpper, ts, mv)
}
// One sample with NULL battery should be skipped
db.conn.Exec(`INSERT INTO observer_metrics (observer_id, timestamp) VALUES (?, ?)`,
idUpper, now.Add(-3*time.Hour).Format(time.RFC3339))
since := now.Add(-24 * time.Hour).Format(time.RFC3339)
samples, err := db.GetNodeBatteryHistory(pkLower, since)
if err != nil {
t.Fatalf("GetNodeBatteryHistory: %v", err)
}
if len(samples) != 3 {
t.Fatalf("expected 3 samples, got %d", len(samples))
}
if samples[0].BatteryMv != 3700 || samples[2].BatteryMv != 3200 {
t.Errorf("samples=%+v", samples)
}
}
// TestNodeBatteryEndpoint validates the /api/nodes/{pubkey}/battery endpoint
// returns time-series data plus configured thresholds and a status flag.
func TestNodeBatteryEndpoint(t *testing.T) {
db := setupTestDB(t)
seedTestData(t, db)
now := time.Now().UTC()
pkLower := "aabbccdd11223344"
idUpper := strings.ToUpper(pkLower)
db.conn.Exec(`INSERT INTO observers (id, name, last_seen, first_seen) VALUES (?, 'TestRepeater', ?, ?)`,
idUpper, now.Format(time.RFC3339), now.Add(-72*time.Hour).Format(time.RFC3339))
for i, mv := range []int{3800, 3600, 3200} {
ts := now.Add(time.Duration(-2+i) * time.Hour).Format(time.RFC3339)
db.conn.Exec(`INSERT INTO observer_metrics (observer_id, timestamp, battery_mv) VALUES (?, ?, ?)`,
idUpper, ts, mv)
}
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/nodes/"+pkLower+"/battery?days=7", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d body=%s", w.Code, w.Body.String())
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatal(err)
}
samples, ok := body["samples"].([]interface{})
if !ok {
t.Fatalf("samples missing: %+v", body)
}
if len(samples) != 3 {
t.Errorf("expected 3 samples, got %d", len(samples))
}
thr, ok := body["thresholds"].(map[string]interface{})
if !ok {
t.Fatalf("thresholds missing: %+v", body)
}
if int(thr["low_mv"].(float64)) != 3300 {
t.Errorf("default low_mv expected 3300, got %v", thr["low_mv"])
}
if int(thr["critical_mv"].(float64)) != 3000 {
t.Errorf("default critical_mv expected 3000, got %v", thr["critical_mv"])
}
// latest 3200 -> "low" (below 3300, above 3000)
if body["status"] != "low" {
t.Errorf("expected status=low, got %v", body["status"])
}
if int(body["latest_mv"].(float64)) != 3200 {
t.Errorf("latest_mv expected 3200, got %v", body["latest_mv"])
}
}
// TestNodeBatteryEndpoint_NoData returns 200 with empty samples and status="unknown".
func TestNodeBatteryEndpoint_NoData(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/nodes/aabbccdd11223344/battery", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d", w.Code)
}
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
if body["status"] != "unknown" {
t.Errorf("expected unknown when no samples, got %v", body["status"])
}
}
// TestNodeBatteryEndpoint_404 returns 404 for unknown node.
func TestNodeBatteryEndpoint_404(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/nodes/notarealnode00000000/battery", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 404 {
t.Errorf("expected 404, got %d", w.Code)
}
}
// TestBatteryThresholds_ConfigOverride confirms config overrides take effect.
func TestBatteryThresholds_ConfigOverride(t *testing.T) {
cfg := &Config{
BatteryThresholds: &BatteryThresholdsConfig{LowMv: 3500, CriticalMv: 3100},
}
if cfg.LowBatteryMv() != 3500 {
t.Errorf("LowBatteryMv override failed: %d", cfg.LowBatteryMv())
}
if cfg.CriticalBatteryMv() != 3100 {
t.Errorf("CriticalBatteryMv override failed: %d", cfg.CriticalBatteryMv())
}
empty := &Config{}
if empty.LowBatteryMv() != 3300 {
t.Errorf("default LowBatteryMv expected 3300, got %d", empty.LowBatteryMv())
}
if empty.CriticalBatteryMv() != 3000 {
t.Errorf("default CriticalBatteryMv expected 3000, got %d", empty.CriticalBatteryMv())
}
}
+159
View File
@@ -0,0 +1,159 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestConfigIsObserverBlacklisted(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"OBS1", "obs2", " Obs3 "},
}
tests := []struct {
id string
want bool
}{
{"OBS1", true},
{"obs1", true}, // case-insensitive
{"OBS2", true},
{"Obs3", true}, // whitespace trimmed
{"obs4", false},
{"", false},
}
for _, tt := range tests {
got := cfg.IsObserverBlacklisted(tt.id)
if got != tt.want {
t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
}
}
}
func TestConfigIsObserverBlacklistedEmpty(t *testing.T) {
cfg := &Config{}
if cfg.IsObserverBlacklisted("anything") {
t.Error("empty blacklist should not match anything")
}
}
func TestConfigIsObserverBlacklistedNil(t *testing.T) {
var cfg *Config
if cfg.IsObserverBlacklisted("anything") {
t.Error("nil config should not match anything")
}
}
func TestObserverBlacklistFiltersHandleObservers(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('goodobs', 'GoodObs', 'SFO', datetime('now'))")
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")
cfg := &Config{
ObserverBlacklist: []string{"badobs"},
}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp ObserverListResponse
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
for _, obs := range resp.Observers {
if obs.ID == "badobs" {
t.Error("blacklisted observer should not appear in observers list")
}
}
foundGood := false
for _, obs := range resp.Observers {
if obs.ID == "goodobs" {
foundGood = true
}
}
if !foundGood {
t.Error("non-blacklisted observer should appear in observers list")
}
}
func TestObserverBlacklistFiltersObserverDetail(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")
cfg := &Config{
ObserverBlacklist: []string{"badobs"},
}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers/badobs", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Errorf("expected 404 for blacklisted observer detail, got %d", w.Code)
}
}
func TestNoObserverBlacklistPassesAll(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('someobs', 'SomeObs', 'SFO', datetime('now'))")
cfg := &Config{}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w.Code)
}
var resp ObserverListResponse
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
foundSome := false
for _, obs := range resp.Observers {
if obs.ID == "someobs" {
foundSome = true
}
}
if !foundSome {
t.Error("without blacklist, observer should appear")
}
}
func TestObserverBlacklistConcurrent(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"AA", "BB", "CC"},
}
done := make(chan struct{})
for i := 0; i < 50; i++ {
go func() {
defer func() { done <- struct{}{} }()
for j := 0; j < 100; j++ {
cfg.IsObserverBlacklisted("AA")
cfg.IsObserverBlacklisted("DD")
}
}()
}
for i := 0; i < 50; i++ {
<-done
}
}
+1
View File
@@ -45,6 +45,7 @@ func routeDescriptions() map[string]routeMeta {
"POST /api/perf/reset": {Summary: "Reset performance stats", Tag: "admin", Auth: true},
"POST /api/admin/prune": {Summary: "Prune old data", Description: "Deletes packets and nodes older than the configured retention period.", Tag: "admin", Auth: true},
"GET /api/debug/affinity": {Summary: "Debug neighbor affinity scores", Tag: "admin", Auth: true},
"GET /api/backup": {Summary: "Download SQLite backup", Description: "Streams a consistent SQLite snapshot of the analyzer DB (VACUUM INTO). Response is application/octet-stream with attachment filename corescope-backup-<unix>.db.", Tag: "admin", Auth: true},
// Packets
"GET /api/packets": {Summary: "List packets", Description: "Returns decoded packets with filtering, sorting, and pagination.", Tag: "packets",
+427
View File
@@ -0,0 +1,427 @@
package main
import (
"encoding/hex"
"encoding/json"
"math"
"net/http"
"sort"
"strings"
"time"
)
// ─── Path Inspector ────────────────────────────────────────────────────────────
// POST /api/paths/inspect — beam-search scorer for prefix path candidates.
// Spec: issue #944 §2.12.5.
// pathInspectRequest is the JSON body for the inspect endpoint.
type pathInspectRequest struct {
Prefixes []string `json:"prefixes"`
Context *pathInspectContext `json:"context,omitempty"`
Limit int `json:"limit,omitempty"`
}
type pathInspectContext struct {
ObserverID string `json:"observerId,omitempty"`
Since string `json:"since,omitempty"`
Until string `json:"until,omitempty"`
}
// pathCandidate is one scored candidate path in the response.
type pathCandidate struct {
Path []string `json:"path"`
Names []string `json:"names"`
Score float64 `json:"score"`
Speculative bool `json:"speculative"`
Evidence pathEvidence `json:"evidence"`
}
type pathEvidence struct {
PerHop []hopEvidence `json:"perHop"`
}
type hopEvidence struct {
Prefix string `json:"prefix"`
CandidatesConsidered int `json:"candidatesConsidered"`
Chosen string `json:"chosen"`
EdgeWeight float64 `json:"edgeWeight"`
Alternatives []hopAlternative `json:"alternatives,omitempty"`
}
// hopAlternative shows a candidate that was considered but not chosen for this hop.
type hopAlternative struct {
PublicKey string `json:"publicKey"`
Name string `json:"name"`
Score float64 `json:"score"`
}
type pathInspectResponse struct {
Candidates []pathCandidate `json:"candidates"`
Input map[string]interface{} `json:"input"`
Stats map[string]interface{} `json:"stats"`
}
// beamEntry represents a partial path being extended during beam search.
type beamEntry struct {
pubkeys []string
names []string
evidence []hopEvidence
score float64 // product of per-hop scores (pre-geometric-mean)
}
const (
beamWidth = 20
maxInputHops = 64
maxPrefixBytes = 3
maxRequestItems = 64
geoMaxKm = 50.0
hopScoreFloor = 0.05
speculativeThreshold = 0.7
inspectCacheTTL = 30 * time.Second
inspectBodyLimit = 4096
)
// Weights per spec §2.3.
const (
wEdge = 0.35
wGeo = 0.20
wRecency = 0.15
wSelectivity = 0.30
)
func (s *Server) handlePathInspect(w http.ResponseWriter, r *http.Request) {
// Body limit per spec §2.1.
r.Body = http.MaxBytesReader(w, r.Body, inspectBodyLimit)
var req pathInspectRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, `{"error":"invalid JSON"}`, http.StatusBadRequest)
return
}
// Validate prefixes.
if len(req.Prefixes) == 0 {
http.Error(w, `{"error":"prefixes required"}`, http.StatusBadRequest)
return
}
if len(req.Prefixes) > maxRequestItems {
http.Error(w, `{"error":"too many prefixes (max 64)"}`, http.StatusBadRequest)
return
}
// Normalize + validate each prefix.
prefixByteLen := -1
for i, p := range req.Prefixes {
p = strings.ToLower(strings.TrimSpace(p))
req.Prefixes[i] = p
if len(p) == 0 || len(p)%2 != 0 {
http.Error(w, `{"error":"prefixes must be even-length hex"}`, http.StatusBadRequest)
return
}
if _, err := hex.DecodeString(p); err != nil {
http.Error(w, `{"error":"prefixes must be valid hex"}`, http.StatusBadRequest)
return
}
byteLen := len(p) / 2
if byteLen > maxPrefixBytes {
http.Error(w, `{"error":"prefix exceeds 3 bytes"}`, http.StatusBadRequest)
return
}
if prefixByteLen == -1 {
prefixByteLen = byteLen
} else if byteLen != prefixByteLen {
http.Error(w, `{"error":"mixed prefix lengths not allowed"}`, http.StatusBadRequest)
return
}
}
limit := req.Limit
if limit <= 0 {
limit = 10
}
if limit > 50 {
limit = 50
}
// Check cache.
cacheKey := s.store.inspectCacheKey(req)
s.store.inspectMu.RLock()
if cached, ok := s.store.inspectCache[cacheKey]; ok && time.Now().Before(cached.expiresAt) {
s.store.inspectMu.RUnlock()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(cached.data)
return
}
s.store.inspectMu.RUnlock()
// Snapshot data under read lock.
nodes, pm := s.store.getCachedNodesAndPM()
// Build pubkey→nodeInfo map for O(1) geo lookup in scorer.
nodeByPK := make(map[string]*nodeInfo, len(nodes))
for i := range nodes {
nodeByPK[strings.ToLower(nodes[i].PublicKey)] = &nodes[i]
}
// Get neighbor graph; handle cold start.
graph := s.store.graph
if graph == nil || graph.IsStale() {
rebuilt := make(chan struct{})
go func() {
s.store.ensureNeighborGraph()
close(rebuilt)
}()
select {
case <-rebuilt:
graph = s.store.graph
case <-time.After(2 * time.Second):
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusServiceUnavailable)
json.NewEncoder(w).Encode(map[string]interface{}{"retry": true})
return
}
if graph == nil {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusServiceUnavailable)
json.NewEncoder(w).Encode(map[string]interface{}{"retry": true})
return
}
}
now := time.Now()
start := now
// Beam search.
beam := s.store.beamSearch(req.Prefixes, pm, graph, nodeByPK, now)
// Sort by score descending, take top limit.
sortBeam(beam)
if len(beam) > limit {
beam = beam[:limit]
}
// Build response with per-hop alternatives (spec §2.7, M2 fix).
candidates := make([]pathCandidate, 0, len(beam))
for _, entry := range beam {
nHops := len(entry.pubkeys)
var score float64
if nHops > 0 {
score = math.Pow(entry.score, 1.0/float64(nHops))
}
// Populate per-hop alternatives: other candidates at each hop that weren't chosen.
evidence := make([]hopEvidence, len(entry.evidence))
copy(evidence, entry.evidence)
for hi, ev := range evidence {
if hi >= len(req.Prefixes) {
break
}
prefix := req.Prefixes[hi]
allCands := pm.m[prefix]
var alts []hopAlternative
for _, c := range allCands {
if !canAppearInPath(c.Role) || c.PublicKey == ev.Chosen {
continue
}
// Score this alternative in context of the partial path up to this hop.
var partialEntry beamEntry
if hi > 0 {
partialEntry = beamEntry{pubkeys: entry.pubkeys[:hi], names: entry.names[:hi], score: 1.0}
}
altScore := s.store.scoreHop(partialEntry, c, ev.CandidatesConsidered, graph, nodeByPK, now, hi)
alts = append(alts, hopAlternative{PublicKey: c.PublicKey, Name: c.Name, Score: math.Round(altScore*1000) / 1000})
}
// Sort alts by score desc, cap at 5.
sort.Slice(alts, func(i, j int) bool { return alts[i].Score > alts[j].Score })
if len(alts) > 5 {
alts = alts[:5]
}
evidence[hi] = hopEvidence{
Prefix: ev.Prefix,
CandidatesConsidered: ev.CandidatesConsidered,
Chosen: ev.Chosen,
EdgeWeight: ev.EdgeWeight,
Alternatives: alts,
}
}
candidates = append(candidates, pathCandidate{
Path: entry.pubkeys,
Names: entry.names,
Score: math.Round(score*1000) / 1000,
Speculative: score < speculativeThreshold,
Evidence: pathEvidence{PerHop: evidence},
})
}
elapsed := time.Since(start).Milliseconds()
resp := pathInspectResponse{
Candidates: candidates,
Input: map[string]interface{}{
"prefixes": req.Prefixes,
"hops": len(req.Prefixes),
},
Stats: map[string]interface{}{
"beamWidth": beamWidth,
"expansionsRun": len(req.Prefixes) * beamWidth,
"elapsedMs": elapsed,
},
}
// Cache result (and evict stale entries).
s.store.inspectMu.Lock()
if s.store.inspectCache == nil {
s.store.inspectCache = make(map[string]*inspectCachedResult)
}
now2 := time.Now()
for k, v := range s.store.inspectCache {
if now2.After(v.expiresAt) {
delete(s.store.inspectCache, k)
}
}
s.store.inspectCache[cacheKey] = &inspectCachedResult{
data: resp,
expiresAt: now2.Add(inspectCacheTTL),
}
s.store.inspectMu.Unlock()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}
type inspectCachedResult struct {
data pathInspectResponse
expiresAt time.Time
}
func (s *PacketStore) inspectCacheKey(req pathInspectRequest) string {
key := strings.Join(req.Prefixes, ",")
if req.Context != nil {
key += "|" + req.Context.ObserverID + "|" + req.Context.Since + "|" + req.Context.Until
}
return key
}
func (s *PacketStore) beamSearch(prefixes []string, pm *prefixMap, graph *NeighborGraph, nodeByPK map[string]*nodeInfo, now time.Time) []beamEntry {
// Start with empty beam.
beam := []beamEntry{{pubkeys: nil, names: nil, evidence: nil, score: 1.0}}
for hopIdx, prefix := range prefixes {
candidates := pm.m[prefix]
// Filter by role at lookup time (spec §2.2 step 2).
var filtered []nodeInfo
for _, c := range candidates {
if canAppearInPath(c.Role) {
filtered = append(filtered, c)
}
}
candidateCount := len(filtered)
if candidateCount == 0 {
// No candidates for this hop — beam dies.
return nil
}
var nextBeam []beamEntry
for _, entry := range beam {
for _, cand := range filtered {
hopScore := s.scoreHop(entry, cand, candidateCount, graph, nodeByPK, now, hopIdx)
if hopScore < hopScoreFloor {
hopScore = hopScoreFloor
}
newEntry := beamEntry{
pubkeys: append(append([]string{}, entry.pubkeys...), cand.PublicKey),
names: append(append([]string{}, entry.names...), cand.Name),
evidence: append(append([]hopEvidence{}, entry.evidence...), hopEvidence{
Prefix: prefix,
CandidatesConsidered: candidateCount,
Chosen: cand.PublicKey,
EdgeWeight: hopScore,
}),
score: entry.score * hopScore,
}
nextBeam = append(nextBeam, newEntry)
}
}
// Prune to beam width.
sortBeam(nextBeam)
if len(nextBeam) > beamWidth {
nextBeam = nextBeam[:beamWidth]
}
beam = nextBeam
}
return beam
}
func (s *PacketStore) scoreHop(entry beamEntry, cand nodeInfo, candidateCount int, graph *NeighborGraph, nodeByPK map[string]*nodeInfo, now time.Time, hopIdx int) float64 {
var edgeScore float64
var geoScore float64 = 1.0
var recencyScore float64 = 1.0
if hopIdx == 0 || len(entry.pubkeys) == 0 {
// First hop: no prior node to compare against.
edgeScore = 1.0
} else {
lastPK := entry.pubkeys[len(entry.pubkeys)-1]
// Single scan over neighbors for both edge weight and recency.
edges := graph.Neighbors(lastPK)
var foundEdge *NeighborEdge
for _, e := range edges {
peer := e.NodeA
if strings.EqualFold(peer, lastPK) {
peer = e.NodeB
}
if strings.EqualFold(peer, cand.PublicKey) {
foundEdge = e
break
}
}
if foundEdge != nil {
edgeScore = foundEdge.Score(now)
hoursSince := now.Sub(foundEdge.LastSeen).Hours()
if hoursSince <= 24 {
recencyScore = 1.0
} else {
recencyScore = math.Max(0.1, 24.0/hoursSince)
}
} else {
edgeScore = 0
recencyScore = 0
}
// Geographic plausibility.
prevNode := nodeByPK[strings.ToLower(lastPK)]
if prevNode != nil && prevNode.HasGPS && cand.HasGPS {
dist := haversineKm(prevNode.Lat, prevNode.Lon, cand.Lat, cand.Lon)
if dist > geoMaxKm {
geoScore = math.Max(0.1, geoMaxKm/dist)
}
}
}
// Prefix selectivity.
selectivityScore := 1.0 / float64(candidateCount)
return wEdge*edgeScore + wGeo*geoScore + wRecency*recencyScore + wSelectivity*selectivityScore
}
func sortBeam(beam []beamEntry) {
sort.Slice(beam, func(i, j int) bool {
return beam[i].score > beam[j].score
})
}
// ensureNeighborGraph triggers a graph rebuild if nil or stale.
func (s *PacketStore) ensureNeighborGraph() {
if s.graph != nil && !s.graph.IsStale() {
return
}
g := BuildFromStore(s)
s.graph = g
}
+308
View File
@@ -0,0 +1,308 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"math"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
)
// ─── Unit tests for path inspector (issue #944) ────────────────────────────────
func TestScoreHop_EdgeWeight(t *testing.T) {
store := &PacketStore{}
graph := NewNeighborGraph()
now := time.Now()
// Add an edge between A and B.
graph.mu.Lock()
edge := &NeighborEdge{
NodeA: "aaaa", NodeB: "bbbb",
Count: 50, LastSeen: now.Add(-1 * time.Hour),
Observers: map[string]bool{"obs1": true},
}
key := edgeKey{"aaaa", "bbbb"}
graph.edges[key] = edge
graph.byNode["aaaa"] = append(graph.byNode["aaaa"], edge)
graph.byNode["bbbb"] = append(graph.byNode["bbbb"], edge)
graph.mu.Unlock()
entry := beamEntry{pubkeys: []string{"aaaa"}, names: []string{"NodeA"}}
cand := nodeInfo{PublicKey: "bbbb", Name: "NodeB", Role: "repeater"}
score := store.scoreHop(entry, cand, 2, graph, nil, now, 1)
// With edge present, edgeScore > 0. With 2 candidates, selectivity = 0.5.
// Anti-tautology: if we zero out edge weight constant, score would change.
if score <= 0.05 {
t.Errorf("expected score > floor, got %f", score)
}
// No edge: score should be lower.
candNoEdge := nodeInfo{PublicKey: "cccc", Name: "NodeC", Role: "repeater"}
scoreNoEdge := store.scoreHop(entry, candNoEdge, 2, graph, nil, now, 1)
if scoreNoEdge >= score {
t.Errorf("expected no-edge score (%f) < edge score (%f)", scoreNoEdge, score)
}
}
func TestScoreHop_FirstHop(t *testing.T) {
store := &PacketStore{}
graph := NewNeighborGraph()
now := time.Now()
entry := beamEntry{pubkeys: nil, names: nil}
cand := nodeInfo{PublicKey: "aaaa", Name: "NodeA", Role: "repeater"}
score := store.scoreHop(entry, cand, 3, graph, nil, now, 0)
// First hop: edgeScore=1.0, geoScore=1.0, recencyScore=1.0, selectivity=1/3
// = 0.35*1 + 0.20*1 + 0.15*1 + 0.30*(1/3) = 0.35+0.20+0.15+0.10 = 0.80
expected := 0.35 + 0.20 + 0.15 + 0.30/3.0
if score < expected-0.01 || score > expected+0.01 {
t.Errorf("expected ~%f, got %f", expected, score)
}
}
func TestScoreHop_GeoPlausibility(t *testing.T) {
store := &PacketStore{}
store.nodeCache = []nodeInfo{
{PublicKey: "aaaa", Name: "A", Role: "repeater", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "bbbb", Name: "B", Role: "repeater", Lat: 37.01, Lon: -122.01, HasGPS: true}, // ~1.4km
{PublicKey: "cccc", Name: "C", Role: "repeater", Lat: 40.0, Lon: -120.0, HasGPS: true}, // ~400km
}
store.nodePM = buildPrefixMap(store.nodeCache)
store.nodeCacheTime = time.Now()
graph := NewNeighborGraph()
now := time.Now()
nodeByPK := map[string]*nodeInfo{
"aaaa": &store.nodeCache[0],
"bbbb": &store.nodeCache[1],
"cccc": &store.nodeCache[2],
}
entry := beamEntry{pubkeys: []string{"aaaa"}, names: []string{"A"}}
// Close node should score higher than far node (geo component).
scoreClose := store.scoreHop(entry, store.nodeCache[1], 2, graph, nodeByPK, now, 1)
scoreFar := store.scoreHop(entry, store.nodeCache[2], 2, graph, nodeByPK, now, 1)
if scoreFar >= scoreClose {
t.Errorf("expected far node score (%f) < close node score (%f)", scoreFar, scoreClose)
}
}
func TestBeamSearch_WidthCap(t *testing.T) {
store := &PacketStore{}
graph := NewNeighborGraph()
graph.builtAt = time.Now()
now := time.Now()
// Create 25 nodes that all match prefix "aa".
var nodes []nodeInfo
for i := 0; i < 25; i++ {
// Each node has pubkey starting with "aa" followed by unique hex.
pk := "aa" + strings.Repeat("0", 4) + fmt.Sprintf("%02x", i)
nodes = append(nodes, nodeInfo{PublicKey: pk, Name: pk, Role: "repeater"})
}
pm := buildPrefixMap(nodes)
// Two hops of "aa" — should produce 25*25=625 combos, pruned to 20.
beam := store.beamSearch([]string{"aa", "aa"}, pm, graph, nil, now)
if len(beam) > beamWidth {
t.Errorf("beam exceeded width: got %d, want <= %d", len(beam), beamWidth)
}
// Anti-tautology: without beam pruning, we'd have up to 25*min(25,beamWidth)=500 entries.
// The test verifies pruning is effective.
}
func TestBeamSearch_Speculative(t *testing.T) {
store := &PacketStore{}
graph := NewNeighborGraph()
graph.builtAt = time.Now()
now := time.Now()
// Create nodes with no edges and multiple candidates — should result in low scores (speculative).
nodes := []nodeInfo{
{PublicKey: "aabb", Name: "N1", Role: "repeater"},
{PublicKey: "aabb22", Name: "N1b", Role: "repeater"},
{PublicKey: "ccdd", Name: "N2", Role: "repeater"},
{PublicKey: "ccdd22", Name: "N2b", Role: "repeater"},
{PublicKey: "ccdd33", Name: "N2c", Role: "repeater"},
}
pm := buildPrefixMap(nodes)
beam := store.beamSearch([]string{"aa", "cc"}, pm, graph, nil, now)
if len(beam) == 0 {
t.Fatal("expected at least one result")
}
// Score should be < 0.7 since there's no edge and multiple candidates (speculative).
nHops := len(beam[0].pubkeys)
score := 1.0
if nHops > 0 {
product := beam[0].score
score = pow(product, 1.0/float64(nHops))
}
if score >= speculativeThreshold {
t.Errorf("expected speculative score (< %f), got %f", speculativeThreshold, score)
}
}
func TestHandlePathInspect_EmptyPrefixes(t *testing.T) {
srv := newTestServerForInspect(t)
body := `{"prefixes":[]}`
rr := doInspectRequest(srv, body)
if rr.Code != http.StatusBadRequest {
t.Errorf("expected 400, got %d", rr.Code)
}
}
func TestHandlePathInspect_OddLengthPrefix(t *testing.T) {
srv := newTestServerForInspect(t)
body := `{"prefixes":["abc"]}`
rr := doInspectRequest(srv, body)
if rr.Code != http.StatusBadRequest {
t.Errorf("expected 400 for odd-length prefix, got %d", rr.Code)
}
}
func TestHandlePathInspect_MixedLengths(t *testing.T) {
srv := newTestServerForInspect(t)
body := `{"prefixes":["aa","bbcc"]}`
rr := doInspectRequest(srv, body)
if rr.Code != http.StatusBadRequest {
t.Errorf("expected 400 for mixed lengths, got %d", rr.Code)
}
}
func TestHandlePathInspect_TooLongPrefix(t *testing.T) {
srv := newTestServerForInspect(t)
body := `{"prefixes":["aabbccdd"]}`
rr := doInspectRequest(srv, body)
if rr.Code != http.StatusBadRequest {
t.Errorf("expected 400 for >3-byte prefix, got %d", rr.Code)
}
}
func TestHandlePathInspect_TooManyPrefixes(t *testing.T) {
srv := newTestServerForInspect(t)
prefixes := make([]string, 65)
for i := range prefixes {
prefixes[i] = "aa"
}
b, _ := json.Marshal(map[string]interface{}{"prefixes": prefixes})
rr := doInspectRequest(srv, string(b))
if rr.Code != http.StatusBadRequest {
t.Errorf("expected 400 for >64 prefixes, got %d", rr.Code)
}
}
func TestHandlePathInspect_ValidRequest(t *testing.T) {
srv := newTestServerForInspect(t)
// Seed nodes in the store — multiple candidates per prefix to lower selectivity.
srv.store.nodeCache = []nodeInfo{
{PublicKey: "aabb1234", Name: "NodeA", Role: "repeater", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "aabb5678", Name: "NodeA2", Role: "repeater"},
{PublicKey: "ccdd5678", Name: "NodeB", Role: "repeater", Lat: 37.01, Lon: -122.01, HasGPS: true},
{PublicKey: "ccdd9999", Name: "NodeB2", Role: "repeater"},
{PublicKey: "ccdd1111", Name: "NodeB3", Role: "repeater"},
}
srv.store.nodePM = buildPrefixMap(srv.store.nodeCache)
srv.store.nodeCacheTime = time.Now()
srv.store.graph = NewNeighborGraph()
srv.store.graph.builtAt = time.Now()
body := `{"prefixes":["aa","cc"]}`
rr := doInspectRequest(srv, body)
if rr.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", rr.Code, rr.Body.String())
}
var resp pathInspectResponse
if err := json.Unmarshal(rr.Body.Bytes(), &resp); err != nil {
t.Fatalf("invalid JSON response: %v", err)
}
if len(resp.Candidates) == 0 {
t.Error("expected at least one candidate")
}
if resp.Candidates[0].Speculative != true {
// No edge between nodes, so score should be < 0.7.
t.Error("expected speculative=true for no-edge path")
}
}
// ─── Helpers ──────────────────────────────────────────────────────────────────
func newTestServerForInspect(t *testing.T) *Server {
t.Helper()
store := &PacketStore{
inspectCache: make(map[string]*inspectCachedResult),
}
store.graph = NewNeighborGraph()
store.graph.builtAt = time.Now()
return &Server{store: store}
}
func doInspectRequest(srv *Server, body string) *httptest.ResponseRecorder {
req := httptest.NewRequest("POST", "/api/paths/inspect", bytes.NewBufferString(body))
req.Header.Set("Content-Type", "application/json")
rr := httptest.NewRecorder()
srv.handlePathInspect(rr, req)
return rr
}
func pow(base, exp float64) float64 {
return math.Pow(base, exp)
}
// BenchmarkBeamSearch — performance proof for spec §2.5 (<100ms p99 for ≤64 hops).
// Anti-tautology: removing beam pruning makes this ~625x slower; timing assertion catches it.
func BenchmarkBeamSearch(b *testing.B) {
// Setup: 100 nodes, 10-hop prefix input, realistic neighbor graph.
// Anti-tautology: removing beam pruning makes this ~625x slower.
store := &PacketStore{}
pm := &prefixMap{m: make(map[string][]nodeInfo)}
graph := NewNeighborGraph()
nodes := make([]nodeInfo, 100)
now := time.Now()
for i := 0; i < 100; i++ {
pk := fmt.Sprintf("%064x", i)
prefix := fmt.Sprintf("%02x", i%256)
node := nodeInfo{PublicKey: pk, Name: fmt.Sprintf("Node%d", i), Role: "repeater", Lat: 37.0 + float64(i)*0.01, Lon: -122.0 + float64(i)*0.01}
nodes[i] = node
pm.m[prefix] = append(pm.m[prefix], node)
// Add neighbor edges to create a connected graph.
if i > 0 {
prevPK := fmt.Sprintf("%064x", i-1)
key := makeEdgeKey(prevPK, pk)
edge := &NeighborEdge{NodeA: prevPK, NodeB: pk, LastSeen: now, Count: 10}
graph.edges[key] = edge
graph.byNode[prevPK] = append(graph.byNode[prevPK], edge)
graph.byNode[pk] = append(graph.byNode[pk], edge)
}
}
// 10-hop input using prefixes that map to multiple candidates.
prefixes := make([]string, 10)
for i := 0; i < 10; i++ {
prefixes[i] = fmt.Sprintf("%02x", (i*3)%256)
}
nodeByPK := make(map[string]*nodeInfo)
for idx := range nodes {
nodeByPK[nodes[idx].PublicKey] = &nodes[idx]
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
store.beamSearch(prefixes, pm, graph, nodeByPK, now)
}
}
+78
View File
@@ -0,0 +1,78 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/gorilla/mux"
)
// TestHandleNodePaths_PrefixCollisionExclusion verifies that paths through a node
// sharing a 2-char prefix with another node are not returned as false positives
// when they have no resolved_path data (issue #929).
//
// Setup:
// - nodeA (target): pubkey starts with "7a", no GPS
// - nodeB (other): pubkey starts with "7a", has GPS → "7a" resolves to nodeB
// - tx1: path ["7a"], resolved_path NULL → false positive candidate, must be excluded
// - tx2: path ["7a"], resolved_path contains nodeA pubkey → SQL-confirmed, must be included
func TestHandleNodePaths_PrefixCollisionExclusion(t *testing.T) {
db := setupTestDB(t)
recent := time.Now().Add(-1 * time.Hour).Format(time.RFC3339)
recentEpoch := time.Now().Add(-1 * time.Hour).Unix()
nodeAPK := "7acb1111aaaabbbb"
nodeBPK := "7aff2222ccccdddd" // same "7a" prefix, has GPS so resolveHop("7a") picks B
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES (?, 'NodeA', 'repeater', 0, 0, ?, '2026-01-01', 1)`, nodeAPK, recent)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES (?, 'NodeB', 'repeater', 37.5, -122.0, ?, '2026-01-01', 1)`, nodeBPK, recent)
// tx1: no resolved_path — should be excluded by hop-level check
db.conn.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (10, 'AA', 'hash_fp', ?)`, recent)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, path_json, timestamp, resolved_path)
VALUES (10, NULL, '["7a"]', ?, NULL)`, recentEpoch)
// tx2: resolved_path confirms nodeA — must be included
db.conn.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (11, 'BB', 'hash_tp', ?)`, recent)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, path_json, timestamp, resolved_path)
VALUES (11, NULL, '["7a"]', ?, ?)`, recentEpoch, `["`+nodeAPK+`"]`)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/nodes/"+nodeAPK+"/paths", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp NodePathsResponse
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("unmarshal: %v", err)
}
// Only the SQL-confirmed path (tx2) should be present; tx1 (false positive) must be excluded.
// tx1 and tx2 share the same raw path ["7a"] so they collapse into 1 unique path group.
// If tx1 were included, TotalTransmissions would be 2.
if resp.TotalPaths != 1 {
t.Errorf("expected 1 path group, got %d", resp.TotalPaths)
}
if resp.TotalTransmissions != 1 {
t.Errorf("expected 1 transmission (false positive tx1 excluded), got %d", resp.TotalTransmissions)
}
}
+212
View File
@@ -0,0 +1,212 @@
package main
import (
"encoding/json"
"testing"
)
func TestCanAppearInPath(t *testing.T) {
cases := []struct {
role string
want bool
}{
{"repeater", true},
{"Repeater", true},
{"REPEATER", true},
{"room_server", true},
{"Room_Server", true},
{"room", true},
{"companion", false},
{"sensor", false},
{"", false},
{"unknown", false},
}
for _, tc := range cases {
if got := canAppearInPath(tc.role); got != tc.want {
t.Errorf("canAppearInPath(%q) = %v, want %v", tc.role, got, tc.want)
}
}
}
func TestBuildPrefixMap_ExcludesCompanions(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestBuildPrefixMap_ExcludesSensors(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestResolveWithContext_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil, got %+v", r)
}
}
func TestResolveWithContext_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PrefersRepeaterOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "7a5678901234", Role: "repeater", Name: "MyRepeater"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRepeater" {
t.Fatalf("expected MyRepeater, got %s", r.Name)
}
}
func TestResolveWithContext_PrefersRoomServerOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "ab1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "ab5678901234", Role: "room_server", Name: "MyRoom"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("ab", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRoom" {
t.Fatalf("expected MyRoom, got %s", r.Name)
}
}
func TestResolve_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for companion-only prefix, got %+v", r)
}
}
func TestResolve_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PicksRepeaterEvenWhenCompanionHasGPS(t *testing.T) {
// Adversarial: companion has GPS, repeater doesn't. Role filter should
// exclude companion entirely, so repeater wins despite lacking GPS.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "GPSCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "NoGPSRepeater", Lat: 0, Lon: 0, HasGPS: false},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "NoGPSRepeater" {
t.Fatalf("expected NoGPSRepeater (role filter excludes companion), got %s", r.Name)
}
}
func TestComputeDistancesForTx_CompanionNeverInResolvedChain(t *testing.T) {
// Integration test: a path with a prefix matching both a companion and a
// repeater. The resolveHop function (using buildPrefixMap) should only
// return the repeater.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "BadCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "GoodRepeater", Lat: 38.0, Lon: -123.0, HasGPS: true},
{PublicKey: "bb1111111111", Role: "repeater", Name: "OtherRepeater", Lat: 39.0, Lon: -124.0, HasGPS: true},
}
pm := buildPrefixMap(nodes)
nodeByPk := make(map[string]*nodeInfo)
for i := range nodes {
nodeByPk[nodes[i].PublicKey] = &nodes[i]
}
repeaterSet := map[string]bool{
"7a5678901234": true,
"bb1111111111": true,
}
// Build a synthetic StoreTx with a path ["7a", "bb"] and a sender with GPS
senderPK := "cc0000000000"
sender := nodeInfo{PublicKey: senderPK, Role: "repeater", Name: "Sender", Lat: 36.0, Lon: -121.0, HasGPS: true}
nodeByPk[senderPK] = &sender
pathJSON, _ := json.Marshal([]string{"7a", "bb"})
decoded, _ := json.Marshal(map[string]interface{}{"pubKey": senderPK})
tx := &StoreTx{
PathJSON: string(pathJSON),
DecodedJSON: string(decoded),
FirstSeen: "2026-04-30T12:00",
}
resolveHop := func(hop string) *nodeInfo {
return pm.resolve(hop)
}
hops, pathRec := computeDistancesForTx(tx, nodeByPk, repeaterSet, resolveHop)
// Verify BadCompanion's pubkey never appears in hops
badPK := "7a1234abcdef"
for i, h := range hops {
if h.FromPk == badPK || h.ToPk == badPK {
t.Fatalf("hop[%d] contains BadCompanion pubkey: from=%s to=%s", i, h.FromPk, h.ToPk)
}
}
// Verify BadCompanion's pubkey never appears in pathRec
if pathRec == nil {
t.Fatal("expected non-nil path record (3 GPS nodes in chain)")
}
for i, hop := range pathRec.Hops {
if hop.FromPk == badPK || hop.ToPk == badPK {
t.Fatalf("pathRec.Hops[%d] contains BadCompanion pubkey: from=%s to=%s", i, hop.FromPk, hop.ToPk)
}
}
// Verify GoodRepeater IS in the chain (proves the prefix was resolved to the right node)
goodPK := "7a5678901234"
foundGood := false
for _, hop := range pathRec.Hops {
if hop.FromPk == goodPK || hop.ToPk == goodPK {
foundGood = true
break
}
}
if !foundGood {
t.Fatal("expected GoodRepeater (7a5678901234) in pathRec.Hops but not found")
}
}
+41
View File
@@ -0,0 +1,41 @@
package main
import (
"testing"
)
// Issue #770: the region filter dropdown's "All" option was being sent to the
// backend as ?region=All. The backend then tried to match observers with IATA
// code "ALL", which never exists, producing an empty channel/packet list.
//
// "All" / "ALL" / "all" / "" must all be treated as "no region filter".
func TestNormalizeRegionCodes_AllIsNoFilter(t *testing.T) {
cases := []struct {
name string
in string
}{
{"empty", ""},
{"literal All (frontend dropdown label)", "All"},
{"upper ALL", "ALL"},
{"lower all", "all"},
{"All with whitespace", " All "},
{"All in csv with empty siblings", "All,"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := normalizeRegionCodes(tc.in)
if got != nil {
t.Errorf("normalizeRegionCodes(%q) = %v, want nil (no filter)", tc.in, got)
}
})
}
}
// Real region codes must still pass through unchanged (case-folded to upper).
// This locks in that the "All" handling does not regress legitimate filters.
func TestNormalizeRegionCodes_RealCodesPreserved(t *testing.T) {
got := normalizeRegionCodes("sjc,PDX")
if len(got) != 2 || got[0] != "SJC" || got[1] != "PDX" {
t.Errorf("normalizeRegionCodes(\"sjc,PDX\") = %v, want [SJC PDX]", got)
}
}
+143
View File
@@ -0,0 +1,143 @@
package main
import (
"strings"
"time"
)
// RepeaterRelayInfo describes whether a repeater has been observed
// relaying traffic (appearing as a path hop in non-advert packets) and
// when. This is distinct from advert-based liveness (last_seen / last_heard),
// which only proves the repeater can transmit its own adverts.
//
// See issue #662.
type RepeaterRelayInfo struct {
// LastRelayed is the ISO-8601 timestamp of the most recent non-advert
// packet where this pubkey appeared as a relay hop. Empty if never.
LastRelayed string `json:"lastRelayed,omitempty"`
// RelayActive is true if LastRelayed falls within the configured
// activity window (default 24h).
RelayActive bool `json:"relayActive"`
// WindowHours is the active-window threshold actually used.
WindowHours float64 `json:"windowHours"`
// RelayCount1h is the count of distinct non-advert packets where this
// pubkey appeared as a relay hop in the last 1 hour.
RelayCount1h int `json:"relayCount1h"`
// RelayCount24h is the count of distinct non-advert packets where this
// pubkey appeared as a relay hop in the last 24 hours.
RelayCount24h int `json:"relayCount24h"`
}
// payloadTypeAdvert is the MeshCore payload type for ADVERT packets.
// See firmware/src/Mesh.h. Adverts are NOT considered relay activity:
// a repeater that only sends adverts proves it is alive, not that it
// is forwarding traffic for other nodes.
const payloadTypeAdvert = 4
// parseRelayTS attempts to parse a packet first-seen timestamp using the
// formats CoreScope writes in practice. Returns zero time and false on
// failure. Accepted (in order):
// - RFC3339Nano — Go's default UTC marshal output
// - RFC3339 — second-precision ISO-8601 with offset
// - "2006-01-02T15:04:05.000Z" — millisecond-precision Z form used by ingest
func parseRelayTS(ts string) (time.Time, bool) {
if ts == "" {
return time.Time{}, false
}
if t, err := time.Parse(time.RFC3339Nano, ts); err == nil {
return t, true
}
if t, err := time.Parse(time.RFC3339, ts); err == nil {
return t, true
}
if t, err := time.Parse("2006-01-02T15:04:05.000Z", ts); err == nil {
return t, true
}
return time.Time{}, false
}
// GetRepeaterRelayInfo returns relay-activity information for a node by
// scanning the byPathHop index for non-advert packets that name the
// pubkey as a hop. It computes the most recent appearance timestamp,
// 1h/24h hop counts, and whether the latest appearance falls within
// windowHours.
//
// Cost: O(N) over the indexed entries for `pubkey`. The byPathHop index
// is bounded by store eviction; on real data this is small per-node.
//
// Note on self-as-source: byPathHop is keyed by every hop in a packet's
// resolved path, including the originator. For ADVERT packets that's the
// node itself, which is filtered above by the payloadTypeAdvert check.
// For non-advert packets a node "originates" rather than "relays" only
// when it is the source; we don't currently have a clean signal for that
// distinction, so the count here is *path-hop appearances in non-advert
// packets*. In practice for a repeater nearly all such appearances are
// relay hops (the firmware doesn't originate user traffic), so this is
// the right approximation for issue #662.
func (s *PacketStore) GetRepeaterRelayInfo(pubkey string, windowHours float64) RepeaterRelayInfo {
info := RepeaterRelayInfo{WindowHours: windowHours}
if pubkey == "" {
return info
}
key := strings.ToLower(pubkey)
s.mu.RLock()
txList := s.byPathHop[key]
// Copy only the timestamps + payload types we need so we can release
// the read lock before doing parsing/compare work below.
type entry struct {
ts string
pt int
}
scratch := make([]entry, 0, len(txList))
for _, tx := range txList {
if tx == nil {
continue
}
pt := -1
if tx.PayloadType != nil {
pt = *tx.PayloadType
}
scratch = append(scratch, entry{ts: tx.FirstSeen, pt: pt})
}
s.mu.RUnlock()
now := time.Now().UTC()
cutoff1h := now.Add(-1 * time.Hour)
cutoff24h := now.Add(-24 * time.Hour)
var latest time.Time
var latestRaw string
for _, e := range scratch {
// Self-originated adverts are not relay activity (see header comment).
if e.pt == payloadTypeAdvert {
continue
}
t, ok := parseRelayTS(e.ts)
if !ok {
continue
}
if t.After(latest) {
latest = t
latestRaw = e.ts
}
if t.After(cutoff24h) {
info.RelayCount24h++
if t.After(cutoff1h) {
info.RelayCount1h++
}
}
}
if latestRaw == "" {
return info
}
info.LastRelayed = latestRaw
if windowHours > 0 {
cutoff := now.Add(-time.Duration(windowHours * float64(time.Hour)))
if latest.After(cutoff) {
info.RelayActive = true
}
}
return info
}
+162
View File
@@ -0,0 +1,162 @@
package main
import (
"testing"
"time"
)
// TestRepeaterRelayActivity_Active verifies that a repeater whose pubkey
// appears as a relay hop in a recent (non-advert) packet is reported with
// a non-zero lastRelayed timestamp and relayActive=true.
func TestRepeaterRelayActivity_Active(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "aabbccdd11223344"
db.conn.Exec("INSERT INTO nodes (public_key, name, role, last_seen) VALUES (?, ?, ?, ?)",
pubkey, "RepActive", "repeater", recentTS(1))
store := NewPacketStore(db, nil)
// A non-advert packet (payload_type=1, TXT_MSG) with the repeater pubkey
// indexed as a path hop. Index by lowercase pubkey directly to mirror
// the resolved-path entries that decode-window writes.
pt := 1
relayed := &StoreTx{
RawHex: "0100",
PayloadType: &pt,
PathJSON: `["aa"]`,
FirstSeen: recentTS(2),
}
store.mu.Lock()
relayed.ID = len(store.packets) + 1
relayed.Hash = "test-relay-1"
store.packets = append(store.packets, relayed)
store.byHash[relayed.Hash] = relayed
store.byTxID[relayed.ID] = relayed
store.byPathHop[pubkey] = append(store.byPathHop[pubkey], relayed)
store.mu.Unlock()
info := store.GetRepeaterRelayInfo(pubkey, 24)
if info.LastRelayed == "" {
t.Fatalf("expected non-empty LastRelayed for active relayer, got empty (RelayActive=%v)", info.RelayActive)
}
if !info.RelayActive {
t.Errorf("expected RelayActive=true within 24h window, got false (LastRelayed=%s)", info.LastRelayed)
}
if info.RelayCount1h != 0 {
t.Errorf("expected RelayCount1h=0 (relay was 2h ago, outside 1h window), got %d", info.RelayCount1h)
}
if info.RelayCount24h != 1 {
t.Errorf("expected RelayCount24h=1 (relay was 2h ago, inside 24h window), got %d", info.RelayCount24h)
}
}
// TestRepeaterRelayActivity_Idle verifies that a repeater whose pubkey
// has not appeared as a relay hop reports an empty LastRelayed and
// relayActive=false.
func TestRepeaterRelayActivity_Idle(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "ccddeeff55667788"
db.conn.Exec("INSERT INTO nodes (public_key, name, role, last_seen) VALUES (?, ?, ?, ?)",
pubkey, "RepIdle", "repeater", recentTS(1))
store := NewPacketStore(db, nil)
info := store.GetRepeaterRelayInfo(pubkey, 24)
if info.LastRelayed != "" {
t.Errorf("expected empty LastRelayed for idle repeater, got %q", info.LastRelayed)
}
if info.RelayActive {
t.Errorf("expected RelayActive=false for idle repeater, got true")
}
if info.RelayCount1h != 0 || info.RelayCount24h != 0 {
t.Errorf("expected zero relay counts for idle repeater, got 1h=%d 24h=%d", info.RelayCount1h, info.RelayCount24h)
}
}
// TestRepeaterRelayActivity_Stale verifies that a repeater whose only
// relay-hop appearances are older than the configured window reports
// a non-empty LastRelayed but relayActive=false.
func TestRepeaterRelayActivity_Stale(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "1122334455667788"
db.conn.Exec("INSERT INTO nodes (public_key, name, role, last_seen) VALUES (?, ?, ?, ?)",
pubkey, "RepStale", "repeater", recentTS(1))
store := NewPacketStore(db, nil)
pt := 1
staleTS := time.Now().UTC().Add(-48 * time.Hour).Format("2006-01-02T15:04:05.000Z")
old := &StoreTx{
RawHex: "0100",
PayloadType: &pt,
PathJSON: `["11"]`,
FirstSeen: staleTS,
}
store.mu.Lock()
old.ID = len(store.packets) + 1
old.Hash = "test-relay-stale"
store.packets = append(store.packets, old)
store.byHash[old.Hash] = old
store.byTxID[old.ID] = old
store.byPathHop[pubkey] = append(store.byPathHop[pubkey], old)
store.mu.Unlock()
info := store.GetRepeaterRelayInfo(pubkey, 24)
if info.LastRelayed != staleTS {
t.Errorf("expected LastRelayed=%q (stale ts), got %q", staleTS, info.LastRelayed)
}
if info.RelayActive {
t.Errorf("expected RelayActive=false for relay older than window, got true")
}
if info.RelayCount1h != 0 || info.RelayCount24h != 0 {
t.Errorf("expected zero relay counts for stale (>24h) repeater, got 1h=%d 24h=%d", info.RelayCount1h, info.RelayCount24h)
}
}
// TestRepeaterRelayActivity_IgnoresAdverts verifies that adverts originated
// by the repeater itself (payload_type=4) are NOT counted as relay activity —
// adverts demonstrate liveness, not relaying.
func TestRepeaterRelayActivity_IgnoresAdverts(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "deadbeef00000001"
db.conn.Exec("INSERT INTO nodes (public_key, name, role, last_seen) VALUES (?, ?, ?, ?)",
pubkey, "RepAdvertOnly", "repeater", recentTS(1))
store := NewPacketStore(db, nil)
// Self-advert with the repeater as its own first hop. Should NOT count.
pt := 4
adv := &StoreTx{
RawHex: "0140de",
PayloadType: &pt,
PathJSON: `["de"]`,
FirstSeen: recentTS(2),
}
store.mu.Lock()
adv.ID = len(store.packets) + 1
adv.Hash = "test-advert-1"
store.packets = append(store.packets, adv)
store.byHash[adv.Hash] = adv
store.byTxID[adv.ID] = adv
store.byPathHop[pubkey] = append(store.byPathHop[pubkey], adv)
store.mu.Unlock()
info := store.GetRepeaterRelayInfo(pubkey, 24)
if info.LastRelayed != "" {
t.Errorf("expected empty LastRelayed (adverts ignored), got %q", info.LastRelayed)
}
if info.RelayActive {
t.Errorf("expected RelayActive=false (adverts ignored), got true")
}
if info.RelayCount1h != 0 || info.RelayCount24h != 0 {
t.Errorf("expected zero relay counts (adverts ignored), got 1h=%d 24h=%d", info.RelayCount1h, info.RelayCount24h)
}
}
+64
View File
@@ -0,0 +1,64 @@
package main
import "strings"
// GetRepeaterUsefulnessScore returns a 0..1 score representing what
// fraction of non-advert traffic in the store passes through this
// repeater as a relay hop. Issue #672 (Traffic axis only — bridge,
// coverage, and redundancy axes are deferred to follow-up work).
//
// Numerator: count of non-advert StoreTx entries indexed under
// pubkey in byPathHop.
// Denominator: total non-advert StoreTx entries in the store
// (sum of byPayloadType for all keys != payloadTypeAdvert).
//
// Returns 0 when there is no non-advert traffic, the pubkey is empty,
// or the repeater never appears as a relay hop. Scores are clamped to
// [0,1] for defensive bounds.
//
// Cost: O(N) over byPayloadType keys (typically <20) plus the per-hop
// slice for pubkey. Cheap relative to the per-request enrichment loop
// in handleNodes; if it ever shows up in profiles, denominator can be
// memoized off store invalidation.
func (s *PacketStore) GetRepeaterUsefulnessScore(pubkey string) float64 {
if pubkey == "" {
return 0
}
key := strings.ToLower(pubkey)
s.mu.RLock()
defer s.mu.RUnlock()
// Denominator: total non-advert packets.
totalNonAdvert := 0
for pt, list := range s.byPayloadType {
if pt == payloadTypeAdvert {
continue
}
totalNonAdvert += len(list)
}
if totalNonAdvert == 0 {
return 0
}
// Numerator: this repeater's non-advert hop appearances.
relayed := 0
for _, tx := range s.byPathHop[key] {
if tx == nil {
continue
}
if tx.PayloadType != nil && *tx.PayloadType == payloadTypeAdvert {
continue
}
relayed++
}
score := float64(relayed) / float64(totalNonAdvert)
if score < 0 {
return 0
}
if score > 1 {
return 1
}
return score
}
+100
View File
@@ -0,0 +1,100 @@
package main
import (
"testing"
)
// TestRepeaterUsefulness_BasicShare verifies that usefulness_score is
// relay_count_24h / total_non_advert_traffic_24h. With 1 of 4 relayed
// packets going through the repeater, score should be 0.25.
//
// Issue #672. We are intentionally implementing the *traffic share*
// dimension of the composite score from the issue body — bridge,
// coverage, redundancy are deferred to follow-up work. This is the
// "Traffic" axis of the table in #672.
func TestRepeaterUsefulness_BasicShare(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "aabbccdd11223344"
store := NewPacketStore(db, nil)
// 4 non-advert packets total in last hour. The repeater appears in
// the resolved path of exactly one of them.
pt := 1
for i := 0; i < 4; i++ {
tx := &StoreTx{RawHex: "0100", PayloadType: &pt, FirstSeen: recentTS(0)}
// Only first packet has our repeater in its path.
if i == 0 {
store.mu.Lock()
tx.ID = len(store.packets) + 1
tx.Hash = "uf-hit"
store.packets = append(store.packets, tx)
store.byHash[tx.Hash] = tx
store.byTxID[tx.ID] = tx
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
store.byPathHop[pubkey] = append(store.byPathHop[pubkey], tx)
store.mu.Unlock()
} else {
addTestPacket(store, tx)
}
}
score := store.GetRepeaterUsefulnessScore(pubkey)
// 1 relay / 4 total = 0.25
if score < 0.24 || score > 0.26 {
t.Errorf("expected usefulness ~0.25, got %f", score)
}
}
// TestRepeaterUsefulness_NoTraffic verifies score is 0 when there is
// no non-advert traffic to share.
func TestRepeaterUsefulness_NoTraffic(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
store := NewPacketStore(db, nil)
score := store.GetRepeaterUsefulnessScore("deadbeefcafebabe")
if score != 0 {
t.Errorf("expected 0 for empty store, got %f", score)
}
}
// TestRepeaterUsefulness_AdvertsExcluded verifies that ADVERT packets
// (payload_type=4) are excluded from both numerator and denominator —
// adverts don't count as forwarded traffic.
func TestRepeaterUsefulness_AdvertsExcluded(t *testing.T) {
db := setupCapabilityTestDB(t)
defer db.conn.Close()
pubkey := "11aa22bb33cc44dd"
store := NewPacketStore(db, nil)
// 2 non-advert packets, both with our repeater in path → score = 1.0
pt := 1
for i := 0; i < 2; i++ {
tx := &StoreTx{RawHex: "0100", PayloadType: &pt, FirstSeen: recentTS(0)}
store.mu.Lock()
tx.ID = len(store.packets) + 1
tx.Hash = "uf-non-advert"
if i == 1 {
tx.Hash = "uf-non-advert-2"
}
store.packets = append(store.packets, tx)
store.byHash[tx.Hash] = tx
store.byTxID[tx.ID] = tx
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
store.byPathHop[pubkey] = append(store.byPathHop[pubkey], tx)
store.mu.Unlock()
}
// Add 100 adverts — these must be ignored.
advertPT := payloadTypeAdvert
for i := 0; i < 100; i++ {
tx := &StoreTx{RawHex: "0400", PayloadType: &advertPT, FirstSeen: recentTS(0)}
addTestPacket(store, tx)
}
score := store.GetRepeaterUsefulnessScore(pubkey)
if score < 0.99 || score > 1.01 {
t.Errorf("expected usefulness ~1.0 (adverts excluded), got %f", score)
}
}
+29 -29
View File
@@ -11,7 +11,7 @@ import (
func TestResolveWithContext_UniquePrefix(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1b2c3d4", nil, nil)
if ni == nil || ni.Name != "Node-A" {
@@ -24,7 +24,7 @@ func TestResolveWithContext_UniquePrefix(t *testing.T) {
func TestResolveWithContext_NoMatch(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A"},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A"},
})
ni, confidence, _ := pm.resolveWithContext("ff", nil, nil)
if ni != nil {
@@ -37,8 +37,8 @@ func TestResolveWithContext_NoMatch(t *testing.T) {
func TestResolveWithContext_AffinityWins(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1"},
{PublicKey: "a1bbbbbb", Name: "Node-A2"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2"},
})
graph := NewNeighborGraph()
@@ -60,9 +60,9 @@ func TestResolveWithContext_AffinityWins(t *testing.T) {
func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{Role: "repeater", PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
})
graph := NewNeighborGraph()
@@ -85,8 +85,8 @@ func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
func TestResolveWithContext_GPSPreference(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -100,8 +100,8 @@ func TestResolveWithContext_GPSPreference(t *testing.T) {
func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "First"},
{PublicKey: "a1bbbbbb", Name: "Second"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "First"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Second"},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -115,8 +115,8 @@ func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", []string{"someone"}, nil)
@@ -131,8 +131,8 @@ func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
func TestResolveWithContext_BackwardCompatResolve(t *testing.T) {
// Verify original resolve() still works unchanged
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni := pm.resolve("a1")
if ni == nil || ni.Name != "HasGPS" {
@@ -164,8 +164,8 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
_ = srv
// Insert a unique node
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ff11223344", "UniqueNode", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ff11223344", "UniqueNode", 37.0, -122.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ff11223344", nil)
@@ -189,10 +189,10 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ee1aaaaaaa", "Node-E1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ee1bbbbbbb", "Node-E2", 38.0, -121.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ee1aaaaaaa", "Node-E1", 37.0, -122.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ee1bbbbbbb", "Node-E2", 38.0, -121.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ee1", nil)
@@ -224,12 +224,12 @@ func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"dd1aaaaaaa", "Node-D1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"dd1bbbbbbb", "Node-D2", 38.0, -121.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"c0c0c0c0c0", "Context", 37.1, -122.1)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"dd1aaaaaaa", "Node-D1", 37.0, -122.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"dd1bbbbbbb", "Node-D2", 38.0, -121.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"c0c0c0c0c0", "Context", 37.1, -122.1, "repeater")
// Invalidate node cache so the PM includes newly inserted nodes.
srv.store.cacheMu.Lock()
@@ -279,8 +279,8 @@ func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
func TestResolveHopsAPI_ResponseShape(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"bb1aaaaaaa", "Node-B1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"bb1aaaaaaa", "Node-B1", 37.0, -122.0, "repeater")
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=bb1a", nil)
rr := httptest.NewRecorder()
+133
View File
@@ -0,0 +1,133 @@
package main
import (
"math"
"net/http"
"sort"
"strings"
)
// RoleStats summarises one role's population and clock-skew posture.
type RoleStats struct {
Role string `json:"role"`
NodeCount int `json:"nodeCount"`
WithSkew int `json:"withSkew"`
MeanAbsSkewSec float64 `json:"meanAbsSkewSec"`
MedianAbsSkewSec float64 `json:"medianAbsSkewSec"`
OkCount int `json:"okCount"`
WarningCount int `json:"warningCount"`
CriticalCount int `json:"criticalCount"`
AbsurdCount int `json:"absurdCount"`
NoClockCount int `json:"noClockCount"`
}
// RoleAnalyticsResponse is the payload returned by /api/analytics/roles.
type RoleAnalyticsResponse struct {
TotalNodes int `json:"totalNodes"`
Roles []RoleStats `json:"roles"`
}
// normalizeRole canonicalises a role string so empty/unknown roles bucket
// together and case differences don't fragment the distribution.
func normalizeRole(r string) string {
r = strings.ToLower(strings.TrimSpace(r))
if r == "" {
return "unknown"
}
return r
}
// computeRoleAnalytics groups nodes by role and aggregates clock-skew per
// role. Pure function: takes the node roster and the per-pubkey skew map and
// returns the response — no store / lock dependencies, easy to unit test.
//
// `nodesByPubkey` lists every known node (pubkey → role). `skewByPubkey`
// is the subset of pubkeys that have clock-skew data with their severity and
// most-recent corrected skew (in seconds, signed — we take |x| for averages).
func computeRoleAnalytics(nodesByPubkey map[string]string, skewByPubkey map[string]*NodeClockSkew) RoleAnalyticsResponse {
type bucket struct {
stats RoleStats
absSkews []float64
}
buckets := make(map[string]*bucket)
for pk, rawRole := range nodesByPubkey {
role := normalizeRole(rawRole)
b, ok := buckets[role]
if !ok {
b = &bucket{stats: RoleStats{Role: role}}
buckets[role] = b
}
b.stats.NodeCount++
cs, has := skewByPubkey[pk]
if !has || cs == nil {
continue
}
b.stats.WithSkew++
abs := math.Abs(cs.RecentMedianSkewSec)
if abs == 0 {
abs = math.Abs(cs.LastSkewSec)
}
b.absSkews = append(b.absSkews, abs)
switch cs.Severity {
case SkewOK:
b.stats.OkCount++
case SkewWarning:
b.stats.WarningCount++
case SkewCritical:
b.stats.CriticalCount++
case SkewAbsurd:
b.stats.AbsurdCount++
case SkewNoClock:
b.stats.NoClockCount++
}
}
resp := RoleAnalyticsResponse{Roles: make([]RoleStats, 0, len(buckets))}
for _, b := range buckets {
if n := len(b.absSkews); n > 0 {
sum := 0.0
for _, v := range b.absSkews {
sum += v
}
b.stats.MeanAbsSkewSec = round(sum/float64(n), 2)
sorted := make([]float64, n)
copy(sorted, b.absSkews)
sort.Float64s(sorted)
if n%2 == 1 {
b.stats.MedianAbsSkewSec = round(sorted[n/2], 2)
} else {
b.stats.MedianAbsSkewSec = round((sorted[n/2-1]+sorted[n/2])/2, 2)
}
}
resp.TotalNodes += b.stats.NodeCount
resp.Roles = append(resp.Roles, b.stats)
}
// Sort: largest population first, then role name for stable output.
sort.Slice(resp.Roles, func(i, j int) bool {
if resp.Roles[i].NodeCount != resp.Roles[j].NodeCount {
return resp.Roles[i].NodeCount > resp.Roles[j].NodeCount
}
return resp.Roles[i].Role < resp.Roles[j].Role
})
return resp
}
// handleAnalyticsRoles serves /api/analytics/roles.
func (s *Server) handleAnalyticsRoles(w http.ResponseWriter, r *http.Request) {
if s.store == nil {
writeJSON(w, RoleAnalyticsResponse{Roles: []RoleStats{}})
return
}
nodes, _ := s.store.getCachedNodesAndPM()
roles := make(map[string]string, len(nodes))
for _, n := range nodes {
roles[n.PublicKey] = n.Role
}
skewMap := make(map[string]*NodeClockSkew)
for _, cs := range s.store.GetFleetClockSkew() {
if cs == nil {
continue
}
skewMap[cs.Pubkey] = cs
}
writeJSON(w, computeRoleAnalytics(roles, skewMap))
}
+77
View File
@@ -0,0 +1,77 @@
package main
import (
"testing"
)
// TestComputeRoleAnalytics_Distribution verifies that computeRoleAnalytics
// groups nodes by role, normalises empty/case-different roles, and sorts the
// output largest-population first. Asserts on the public RoleAnalyticsResponse
// shape so the bar is "behaviour", not "compiles".
func TestComputeRoleAnalytics_Distribution(t *testing.T) {
nodes := map[string]string{
"pk_a": "Repeater",
"pk_b": "repeater",
"pk_c": "companion",
"pk_d": "",
"pk_e": "ROOM_SERVER",
}
got := computeRoleAnalytics(nodes, nil)
if got.TotalNodes != 5 {
t.Fatalf("TotalNodes = %d, want 5", got.TotalNodes)
}
if len(got.Roles) != 4 {
t.Fatalf("len(Roles) = %d, want 4 (repeater, companion, room_server, unknown), got %+v", len(got.Roles), got.Roles)
}
if got.Roles[0].Role != "repeater" || got.Roles[0].NodeCount != 2 {
t.Errorf("Roles[0] = %+v, want {repeater,2}", got.Roles[0])
}
// Empty roles should bucket as "unknown".
foundUnknown := false
for _, r := range got.Roles {
if r.Role == "unknown" {
foundUnknown = true
if r.NodeCount != 1 {
t.Errorf("unknown bucket NodeCount = %d, want 1", r.NodeCount)
}
}
}
if !foundUnknown {
t.Errorf("no 'unknown' bucket for empty roles in %+v", got.Roles)
}
}
// TestComputeRoleAnalytics_SkewAggregation verifies per-role clock-skew
// aggregation: counts by severity, mean and median absolute skew.
func TestComputeRoleAnalytics_SkewAggregation(t *testing.T) {
nodes := map[string]string{
"pk_1": "repeater",
"pk_2": "repeater",
"pk_3": "repeater",
}
skews := map[string]*NodeClockSkew{
"pk_1": {Pubkey: "pk_1", RecentMedianSkewSec: 10, Severity: SkewOK},
"pk_2": {Pubkey: "pk_2", RecentMedianSkewSec: -400, Severity: SkewWarning},
"pk_3": {Pubkey: "pk_3", RecentMedianSkewSec: 7200, Severity: SkewCritical},
}
got := computeRoleAnalytics(nodes, skews)
if len(got.Roles) != 1 {
t.Fatalf("len(Roles) = %d, want 1; got %+v", len(got.Roles), got.Roles)
}
r := got.Roles[0]
if r.WithSkew != 3 {
t.Errorf("WithSkew = %d, want 3", r.WithSkew)
}
if r.OkCount != 1 || r.WarningCount != 1 || r.CriticalCount != 1 {
t.Errorf("severity counts = ok %d, warn %d, crit %d; want 1/1/1", r.OkCount, r.WarningCount, r.CriticalCount)
}
// mean(|10|, |400|, |7200|) = 7610/3 ≈ 2536.67
if r.MeanAbsSkewSec < 2536 || r.MeanAbsSkewSec > 2537 {
t.Errorf("MeanAbsSkewSec = %v, want ~2536.67", r.MeanAbsSkewSec)
}
// median(10, 400, 7200) = 400
if r.MedianAbsSkewSec != 400 {
t.Errorf("MedianAbsSkewSec = %v, want 400", r.MedianAbsSkewSec)
}
}
+137 -15
View File
@@ -16,6 +16,7 @@ import (
"time"
"github.com/gorilla/mux"
"github.com/meshcore-analyzer/packetpath"
)
// Server holds shared state for route handlers.
@@ -103,6 +104,9 @@ func (s *Server) getMemStats() runtime.MemStats {
// RegisterRoutes sets up all HTTP routes on the given router.
func (s *Server) RegisterRoutes(r *mux.Router) {
s.router = r
// CORS middleware (must run before route handlers)
r.Use(s.corsMiddleware)
// Performance instrumentation middleware
r.Use(s.perfMiddleware)
@@ -117,6 +121,9 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/config/map", s.handleConfigMap).Methods("GET")
r.HandleFunc("/api/config/geo-filter", s.handleConfigGeoFilter).Methods("GET")
// Readiness endpoint (gated on background init completion)
r.HandleFunc("/api/healthz", s.handleHealthz).Methods("GET")
// System endpoints
r.HandleFunc("/api/health", s.handleHealth).Methods("GET")
r.HandleFunc("/api/stats", s.handleStats).Methods("GET")
@@ -125,6 +132,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.Handle("/api/admin/prune", s.requireAPIKey(http.HandlerFunc(s.handleAdminPrune))).Methods("POST")
r.Handle("/api/debug/affinity", s.requireAPIKey(http.HandlerFunc(s.handleDebugAffinity))).Methods("GET")
r.Handle("/api/dropped-packets", s.requireAPIKey(http.HandlerFunc(s.handleDroppedPackets))).Methods("GET")
r.Handle("/api/backup", s.requireAPIKey(http.HandlerFunc(s.handleBackup))).Methods("GET")
// Packet endpoints
r.HandleFunc("/api/packets/observations", s.handleBatchObservations).Methods("POST")
@@ -143,6 +151,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/nodes/{pubkey}/health", s.handleNodeHealth).Methods("GET")
r.HandleFunc("/api/nodes/{pubkey}/paths", s.handleNodePaths).Methods("GET")
r.HandleFunc("/api/nodes/{pubkey}/analytics", s.handleNodeAnalytics).Methods("GET")
r.HandleFunc("/api/nodes/{pubkey}/battery", s.handleNodeBattery).Methods("GET")
r.HandleFunc("/api/nodes/clock-skew", s.handleFleetClockSkew).Methods("GET")
r.HandleFunc("/api/nodes/{pubkey}/clock-skew", s.handleNodeClockSkew).Methods("GET")
r.HandleFunc("/api/observers/clock-skew", s.handleObserverClockSkew).Methods("GET")
@@ -151,6 +160,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/nodes", s.handleNodes).Methods("GET")
// Analytics endpoints
r.HandleFunc("/api/analytics/roles", s.handleAnalyticsRoles).Methods("GET")
r.HandleFunc("/api/analytics/rf", s.handleAnalyticsRF).Methods("GET")
r.HandleFunc("/api/analytics/topology", s.handleAnalyticsTopology).Methods("GET")
r.HandleFunc("/api/analytics/channels", s.handleAnalyticsChannels).Methods("GET")
@@ -172,6 +182,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/observers/{id}", s.handleObserverDetail).Methods("GET")
r.HandleFunc("/api/observers", s.handleObservers).Methods("GET")
r.HandleFunc("/api/traces/{hash}", s.handleTraces).Methods("GET")
r.HandleFunc("/api/paths/inspect", s.handlePathInspect).Methods("POST")
r.HandleFunc("/api/iata-coords", s.handleIATACoords).Methods("GET")
r.HandleFunc("/api/audio-lab/buckets", s.handleAudioLabBuckets).Methods("GET")
@@ -957,11 +968,9 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
pathHops = []interface{}{}
}
rawHex, _ := packet["raw_hex"].(string)
writeJSON(w, PacketDetailResponse{
Packet: packet,
Path: pathHops,
Breakdown: BuildBreakdown(rawHex),
ObservationCount: observationCount,
Observations: mapSliceToObservations(observations),
})
@@ -1020,8 +1029,17 @@ func (s *Server) handlePostPacket(w http.ResponseWriter, r *http.Request) {
contentHash := ComputeContentHash(hexStr)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
pathJSON = string(pj)
}
}
} else if hops, err := packetpath.DecodePathFromRawHex(hexStr); err == nil && len(hops) > 0 {
if pj, e := json.Marshal(hops); e == nil {
pathJSON = string(pj)
}
}
@@ -1079,15 +1097,38 @@ func (s *Server) handleNodes(w http.ResponseWriter, r *http.Request) {
}
if s.store != nil {
hashInfo := s.store.GetNodeHashSizeInfo()
mbCap := s.store.GetMultiByteCapMap()
relayWindow := s.cfg.GetHealthThresholds().RelayActiveHours
for _, node := range nodes {
if pk, ok := node["public_key"].(string); ok {
EnrichNodeWithHashSize(node, hashInfo[pk])
EnrichNodeWithMultiByte(node, mbCap[pk])
if role, _ := node["role"].(string); role == "repeater" || role == "room" {
info := s.store.GetRepeaterRelayInfo(pk, relayWindow)
if info.LastRelayed != "" {
node["last_relayed"] = info.LastRelayed
}
node["relay_active"] = info.RelayActive
node["relay_count_1h"] = info.RelayCount1h
node["relay_count_24h"] = info.RelayCount24h
node["usefulness_score"] = s.store.GetRepeaterUsefulnessScore(pk)
}
}
}
}
if s.cfg.GeoFilter != nil {
filtered := nodes[:0]
for _, node := range nodes {
// Foreign-flagged nodes (#730) are kept even when their GPS lies
// outside the geofilter polygon — that's the whole point of the
// flag: operators need to SEE bridged/leaked nodes, not have them
// filtered away. The ingestor sets foreign_advert=1 when its
// configured geo_filter rejected the advert; the server must
// surface those.
if isForeign, _ := node["foreign"].(bool); isForeign {
filtered = append(filtered, node)
continue
}
if NodePassesGeoFilter(node["lat"], node["lon"], s.cfg.GeoFilter) {
filtered = append(filtered, node)
}
@@ -1140,14 +1181,56 @@ func (s *Server) handleNodeDetail(w http.ResponseWriter, r *http.Request) {
return
}
node, err := s.db.GetNodeByPubkey(pubkey)
if err != nil || node == nil {
if err != nil {
writeError(w, 500, err.Error())
return
}
// Issue #772: short-URL fallback. If exact pubkey lookup misses and the
// path looks like a hex prefix (>=8 chars, <64), try prefix resolution.
if node == nil && len(pubkey) >= 8 && len(pubkey) < 64 {
resolved, ambiguous, perr := s.db.GetNodeByPrefix(pubkey)
if perr != nil {
writeError(w, 500, perr.Error())
return
}
if ambiguous {
writeError(w, http.StatusConflict, "Ambiguous prefix: multiple nodes match. Use a longer prefix.")
return
}
if resolved != nil {
if pk, _ := resolved["public_key"].(string); pk != "" && s.cfg.IsBlacklisted(pk) {
writeError(w, 404, "Not found")
return
}
node = resolved
}
}
if node == nil {
writeError(w, 404, "Not found")
return
}
// From here on use the canonical pubkey for downstream lookups.
if pk, _ := node["public_key"].(string); pk != "" {
pubkey = pk
}
if s.store != nil {
hashInfo := s.store.GetNodeHashSizeInfo()
EnrichNodeWithHashSize(node, hashInfo[pubkey])
mbCap := s.store.GetMultiByteCapMap()
EnrichNodeWithMultiByte(node, mbCap[pubkey])
if role, _ := node["role"].(string); role == "repeater" || role == "room" {
ht := s.cfg.GetHealthThresholds()
info := s.store.GetRepeaterRelayInfo(pubkey, ht.RelayActiveHours)
if info.LastRelayed != "" {
node["last_relayed"] = info.LastRelayed
}
node["relay_active"] = info.RelayActive
node["relay_window_hours"] = info.WindowHours
node["relay_count_1h"] = info.RelayCount1h
node["relay_count_24h"] = info.RelayCount24h
node["usefulness_score"] = s.store.GetRepeaterUsefulnessScore(pubkey)
}
}
name := ""
@@ -1249,25 +1332,31 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
_, pm := s.store.getCachedNodesAndPM()
// Collect candidate transmissions from the index, deduplicating by tx ID.
// confirmedByFullKey tracks TXs found via the full-pubkey index key — these are
// already resolved_path-confirmed and bypass the hop-level check below.
confirmedByFullKey := make(map[int]bool)
seen := make(map[int]bool)
var candidates []*StoreTx
addCandidates := func(key string) {
addCandidates := func(key string, confirmed bool) {
for _, tx := range s.store.byPathHop[key] {
if !seen[tx.ID] {
seen[tx.ID] = true
if confirmed {
confirmedByFullKey[tx.ID] = true
}
candidates = append(candidates, tx)
}
}
}
addCandidates(lowerPK) // full pubkey match (from resolved_path)
addCandidates(prefix1) // 2-char raw hop match
addCandidates(prefix2) // 4-char raw hop match
addCandidates(lowerPK, true) // full pubkey match (from resolved_path) → confirmed
addCandidates(prefix1, false) // 2-char raw hop match
addCandidates(prefix2, false) // 4-char raw hop match
// Also check any raw hops that start with prefix2 (longer prefixes).
// Raw hops are typically 2 chars, so iterate only keys with HasPrefix
// on the small set of index keys rather than all packets.
for key := range s.store.byPathHop {
if len(key) > 4 && len(key) < len(lowerPK) && strings.HasPrefix(key, prefix2) {
addCandidates(key)
addCandidates(key, false)
}
}
@@ -1304,6 +1393,7 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
s.store.mu.RUnlock()
// Now run SQL checks outside the lock for candidates that need confirmation.
confirmedBySQL := make(map[int]bool)
filtered := candidates[:0]
for _, cc := range checks {
if cc.inIndex {
@@ -1311,6 +1401,7 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
} else if cc.hasReverse {
if s.store.confirmResolvedPathContains(cc.tx.ID, lowerPK) {
filtered = append(filtered, cc.tx)
confirmedBySQL[cc.tx.ID] = true
}
}
// else: not in index → exclude
@@ -1338,10 +1429,14 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
return r
}
for _, tx := range candidates {
totalTransmissions++
hops := txGetParsedPath(tx)
resolvedHops := make([]PathHopResp, len(hops))
sigParts := make([]string, len(hops))
// For candidates not confirmed via full-pubkey index or SQL, verify that at
// least one hop actually resolves to the target. This catches prefix collisions
// (e.g. two nodes sharing a "7a" 1-byte prefix) that slipped through the
// conservative resolved_path fallback.
containsTarget := confirmedByFullKey[tx.ID] || confirmedBySQL[tx.ID]
for i, hop := range hops {
resolved := resolveHop(hop)
entry := PathHopResp{Prefix: hop, Name: hop}
@@ -1353,11 +1448,22 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
entry.Lon = resolved.Lon
}
sigParts[i] = resolved.PublicKey
if strings.ToLower(resolved.PublicKey) == lowerPK {
containsTarget = true
}
} else {
sigParts[i] = hop
// Unresolvable hop: keep conservative if prefix could be the target.
if strings.HasPrefix(lowerPK, strings.ToLower(hop)) {
containsTarget = true
}
}
resolvedHops[i] = entry
}
if !containsTarget {
continue
}
totalTransmissions++
sig := strings.Join(sigParts, "→")
agg := pathGroups[sig]
@@ -1483,8 +1589,9 @@ func (s *Server) handleFleetClockSkew(w http.ResponseWriter, r *http.Request) {
func (s *Server) handleAnalyticsRF(w http.ResponseWriter, r *http.Request) {
region := r.URL.Query().Get("region")
window := ParseTimeWindow(r)
if s.store != nil {
writeJSON(w, s.store.GetAnalyticsRF(region))
writeJSON(w, s.store.GetAnalyticsRFWithWindow(region, window))
return
}
writeJSON(w, RFAnalyticsResponse{
@@ -1503,8 +1610,9 @@ func (s *Server) handleAnalyticsRF(w http.ResponseWriter, r *http.Request) {
func (s *Server) handleAnalyticsTopology(w http.ResponseWriter, r *http.Request) {
region := r.URL.Query().Get("region")
window := ParseTimeWindow(r)
if s.store != nil {
data := s.store.GetAnalyticsTopology(region)
data := s.store.GetAnalyticsTopologyWithWindow(region, window)
if s.cfg != nil && len(s.cfg.NodeBlacklist) > 0 {
data = s.filterBlacklistedFromTopology(data)
}
@@ -1526,7 +1634,8 @@ func (s *Server) handleAnalyticsTopology(w http.ResponseWriter, r *http.Request)
func (s *Server) handleAnalyticsChannels(w http.ResponseWriter, r *http.Request) {
if s.store != nil {
region := r.URL.Query().Get("region")
writeJSON(w, s.store.GetAnalyticsChannels(region))
window := ParseTimeWindow(r)
writeJSON(w, s.store.GetAnalyticsChannelsWithWindow(region, window))
return
}
channels, _ := s.db.GetChannels()
@@ -1920,6 +2029,10 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
result := make([]ObserverResp, 0, len(observers))
for _, o := range observers {
// Defense in depth: skip observers that are in the blacklist
if s.cfg != nil && s.cfg.IsObserverBlacklisted(o.ID) {
continue
}
plh := 0
if c, ok := pktCounts[o.ID]; ok {
plh = c
@@ -1939,6 +2052,7 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
ClientVersion: o.ClientVersion, Radio: o.Radio,
BatteryMv: o.BatteryMv, UptimeSecs: o.UptimeSecs,
NoiseFloor: o.NoiseFloor,
LastPacketAt: o.LastPacketAt,
PacketsLastHour: plh,
Lat: lat, Lon: lon, NodeRole: nodeRole,
})
@@ -1951,6 +2065,13 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
id := mux.Vars(r)["id"]
// Defense in depth: reject blacklisted observer
if s.cfg != nil && s.cfg.IsObserverBlacklisted(id) {
writeError(w, 404, "Observer not found")
return
}
obs, err := s.db.GetObserverByID(id)
if err != nil || obs == nil {
writeError(w, 404, "Observer not found")
@@ -1973,6 +2094,7 @@ func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
ClientVersion: obs.ClientVersion, Radio: obs.Radio,
BatteryMv: obs.BatteryMv, UptimeSecs: obs.UptimeSecs,
NoiseFloor: obs.NoiseFloor,
LastPacketAt: obs.LastPacketAt,
PacketsLastHour: plh,
})
}
@@ -2083,7 +2205,7 @@ func (s *Server) handleObserverAnalytics(w http.ResponseWriter, r *http.Request)
}
snrBuckets[bucket].Count++
}
if i < 20 && enriched["hash"] != nil {
if i < 20 {
recentPackets = append(recentPackets, enriched)
}
}
+59
View File
@@ -0,0 +1,59 @@
package main
import (
"database/sql"
"fmt"
"sync"
)
// rwCache holds a process-wide cached RW connection per database path.
// Instead of opening and closing a new RW connection on every call to openRW,
// we cache a single *sql.DB (which internally manages one connection due to
// SetMaxOpenConns(1)). This eliminates repeated open/close overhead for
// vacuum, prune, persist operations that run frequently (#921).
var rwCache = struct {
mu sync.Mutex
conns map[string]*sql.DB
}{conns: make(map[string]*sql.DB)}
// cachedRW returns a cached read-write connection for the given dbPath.
// The connection is created on first call and reused thereafter.
// Callers MUST NOT call Close() on the returned *sql.DB.
func cachedRW(dbPath string) (*sql.DB, error) {
rwCache.mu.Lock()
defer rwCache.mu.Unlock()
if db, ok := rwCache.conns[dbPath]; ok {
return db, nil
}
dsn := fmt.Sprintf("file:%s?_journal_mode=WAL", dbPath)
db, err := sql.Open("sqlite", dsn)
if err != nil {
return nil, err
}
db.SetMaxOpenConns(1)
if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
db.Close()
return nil, fmt.Errorf("set busy_timeout: %w", err)
}
rwCache.conns[dbPath] = db
return db, nil
}
// closeRWCache closes all cached RW connections (for tests/shutdown).
func closeRWCache() {
rwCache.mu.Lock()
defer rwCache.mu.Unlock()
for k, db := range rwCache.conns {
db.Close()
delete(rwCache.conns, k)
}
}
// rwCacheLen returns the number of cached connections (for testing).
func rwCacheLen() int {
rwCache.mu.Lock()
defer rwCache.mu.Unlock()
return len(rwCache.conns)
}
+55
View File
@@ -0,0 +1,55 @@
package main
import (
"os"
"path/filepath"
"testing"
)
func TestCachedRW_ReturnsSameHandle(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
// Create the DB file
f, _ := os.Create(dbPath)
f.Close()
defer closeRWCache()
db1, err := cachedRW(dbPath)
if err != nil {
t.Fatalf("first cachedRW: %v", err)
}
db2, err := cachedRW(dbPath)
if err != nil {
t.Fatalf("second cachedRW: %v", err)
}
if db1 != db2 {
t.Fatalf("cachedRW returned different handles: %p vs %p", db1, db2)
}
}
func TestCachedRW_100Calls_SingleConnection(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
f, _ := os.Create(dbPath)
f.Close()
defer closeRWCache()
var first interface{}
for i := 0; i < 100; i++ {
db, err := cachedRW(dbPath)
if err != nil {
t.Fatalf("call %d: %v", i, err)
}
if i == 0 {
first = db
} else if db != first {
t.Fatalf("call %d returned different handle", i)
}
}
if rwCacheLen() != 1 {
t.Fatalf("expected 1 cached connection, got %d", rwCacheLen())
}
}
+109
View File
@@ -0,0 +1,109 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
// Issue #772 — shortened URL for easier sending over the mesh.
//
// Public keys are 64 hex chars. Operators want to share node URLs over a
// mesh radio link where every byte counts. We allow truncating the pubkey
// in the URL down to a minimum 8-hex-char prefix; the server resolves the
// prefix back to the full pubkey when (and only when) it is unambiguous.
func TestResolveNodePrefix_Unique(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// "aabbccdd" uniquely identifies the seeded TestRepeater (pubkey aabbccdd11223344).
node, ambiguous, err := db.GetNodeByPrefix("aabbccdd")
if err != nil {
t.Fatalf("unexpected err: %v", err)
}
if ambiguous {
t.Fatalf("expected unambiguous match, got ambiguous=true")
}
if node == nil {
t.Fatalf("expected node, got nil")
}
if got, _ := node["public_key"].(string); got != "aabbccdd11223344" {
t.Errorf("expected public_key aabbccdd11223344, got %q", got)
}
}
func TestResolveNodePrefix_Ambiguous(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Insert a second node sharing the 8-char prefix "aabbccdd".
if _, err := db.conn.Exec(`INSERT INTO nodes (public_key, name, role, advert_count)
VALUES ('aabbccdd99887766', 'OtherNode', 'companion', 1)`); err != nil {
t.Fatal(err)
}
node, ambiguous, err := db.GetNodeByPrefix("aabbccdd")
if err != nil {
t.Fatalf("unexpected err: %v", err)
}
if !ambiguous {
t.Fatalf("expected ambiguous=true for shared prefix, got false (node=%v)", node)
}
if node != nil {
t.Errorf("expected nil node when ambiguous, got %v", node["public_key"])
}
}
func TestResolveNodePrefix_TooShort(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// <8 hex chars must NOT resolve, even if it would be unique.
node, _, err := db.GetNodeByPrefix("aabbccd")
if err == nil && node != nil {
t.Errorf("expected nil/error for 7-char prefix, got node %v", node["public_key"])
}
}
// Route-level: GET /api/nodes/<8-char-prefix> resolves to the full node.
func TestNodeDetailRoute_PrefixResolves(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/nodes/aabbccdd", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200 for unique 8-char prefix, got %d body=%s", w.Code, w.Body.String())
}
var body NodeDetailResponse
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("unmarshal: %v", err)
}
pk, _ := body.Node["public_key"].(string)
if pk != "aabbccdd11223344" {
t.Errorf("expected resolved pubkey aabbccdd11223344, got %q", pk)
}
}
// Route-level: GET /api/nodes/<ambiguous-prefix> returns 409 with a hint.
func TestNodeDetailRoute_PrefixAmbiguous(t *testing.T) {
srv, router := setupTestServer(t)
if _, err := srv.db.conn.Exec(`INSERT INTO nodes (public_key, name, role, advert_count)
VALUES ('aabbccdd99887766', 'OtherNode', 'companion', 1)`); err != nil {
t.Fatal(err)
}
req := httptest.NewRequest("GET", "/api/nodes/aabbccdd", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusConflict {
t.Fatalf("expected 409 for ambiguous prefix, got %d body=%s", w.Code, w.Body.String())
}
}
+656 -120
View File
File diff suppressed because it is too large Load Diff
+133
View File
@@ -0,0 +1,133 @@
package main
import (
"net/http"
"time"
)
// TimeWindow is a half-open time range used to bound analytics queries.
// Empty Since/Until means unbounded on that end (backwards compatible).
type TimeWindow struct {
Since string // RFC3339, empty = unbounded
Until string // RFC3339, empty = unbounded
// Label is a stable identifier for the user-requested window
// (e.g. "24h"). For relative windows it is the original alias; for
// absolute ranges it is empty (Since/Until are already stable).
// Used only for cache keying so that "?window=24h" produces a single
// cache entry instead of one per second.
Label string
}
// IsZero reports whether the window imposes no bounds at all.
func (w TimeWindow) IsZero() bool {
return w.Since == "" && w.Until == ""
}
// CacheKey returns a deterministic key suitable for analytics caches.
// For relative windows the key is the alias label so that the cache
// remains stable across the wall-clock advancing.
func (w TimeWindow) CacheKey() string {
if w.IsZero() {
return ""
}
if w.Label != "" {
return "rel:" + w.Label
}
return w.Since + "|" + w.Until
}
// Includes reports whether ts (an RFC3339-style string) falls within the
// window. Empty ts is treated as included (for callers that don't have a
// timestamp on every observation).
//
// Comparison is done by parsing both sides into time.Time. Lex compare is
// unsafe here because stored timestamps carry millisecond precision
// ("...HH:MM:SS.000Z") while bounds emitted by ParseTimeWindow do not
// ("...HH:MM:SSZ"), and '.' (0x2e) sorts before 'Z' (0x5a). If a timestamp
// fails to parse we fall back to lex compare to preserve old behavior.
func (w TimeWindow) Includes(ts string) bool {
if ts == "" {
return true
}
tt, terr := parseAnyRFC3339(ts)
if w.Since != "" {
if s, err := parseAnyRFC3339(w.Since); err == nil && terr == nil {
if tt.Before(s) {
return false
}
} else if ts < w.Since {
return false
}
}
if w.Until != "" {
if u, err := parseAnyRFC3339(w.Until); err == nil && terr == nil {
if tt.After(u) {
return false
}
} else if ts > w.Until {
return false
}
}
return true
}
// parseAnyRFC3339 accepts both fractional-second ("...000Z") and second-
// precision ("...Z") RFC3339 timestamps. time.RFC3339Nano handles both.
func parseAnyRFC3339(s string) (time.Time, error) {
return time.Parse(time.RFC3339Nano, s)
}
// ParseTimeWindow extracts a TimeWindow from query params.
//
// Supported parameters:
//
// ?window=1h | 24h | 7d | 30d — relative window ending "now"
// ?from=<RFC3339>&to=<RFC3339> — absolute custom range (either bound optional)
//
// When neither is set, returns the zero TimeWindow (unbounded; original behavior).
// Invalid values are silently ignored to preserve backwards compatibility.
func ParseTimeWindow(r *http.Request) TimeWindow {
q := r.URL.Query()
// Absolute range takes precedence if either bound is set.
from := q.Get("from")
to := q.Get("to")
if from != "" || to != "" {
w := TimeWindow{}
if from != "" {
if t, err := time.Parse(time.RFC3339, from); err == nil {
w.Since = t.UTC().Format(time.RFC3339)
}
}
if to != "" {
if t, err := time.Parse(time.RFC3339, to); err == nil {
w.Until = t.UTC().Format(time.RFC3339)
}
}
return w
}
// Relative window.
if win := q.Get("window"); win != "" {
var d time.Duration
switch win {
case "1h":
d = 1 * time.Hour
case "24h", "1d":
d = 24 * time.Hour
case "3d":
d = 3 * 24 * time.Hour
case "7d", "1w":
d = 7 * 24 * time.Hour
case "30d":
d = 30 * 24 * time.Hour
default:
// Unknown values are silently ignored — backwards compatible.
return TimeWindow{}
}
since := time.Now().UTC().Add(-d).Format(time.RFC3339)
return TimeWindow{Since: since, Label: win}
}
return TimeWindow{}
}
+144
View File
@@ -0,0 +1,144 @@
package main
import (
"net/http/httptest"
"strings"
"testing"
"time"
)
// Issue #842 — selectable analytics timeframes.
// Backend must accept ?window=1h|24h|7d|30d and ?from=/?to= and yield a
// TimeWindow that correctly bounds analytics queries.
func TestParseTimeWindow_Window24h(t *testing.T) {
r := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
w := ParseTimeWindow(r)
if w.Since == "" {
t.Fatalf("window=24h: expected non-empty Since, got %q", w.Since)
}
since, err := time.Parse(time.RFC3339, w.Since)
if err != nil {
t.Fatalf("window=24h: Since %q is not RFC3339: %v", w.Since, err)
}
delta := time.Since(since)
if delta < 23*time.Hour || delta > 25*time.Hour {
t.Fatalf("window=24h: Since should be ~24h ago, got delta=%v", delta)
}
}
func TestParseTimeWindow_WindowAliases(t *testing.T) {
cases := map[string]time.Duration{
"1h": 1 * time.Hour,
"24h": 24 * time.Hour,
"7d": 7 * 24 * time.Hour,
"30d": 30 * 24 * time.Hour,
}
for q, want := range cases {
r := httptest.NewRequest("GET", "/api/analytics/rf?window="+q, nil)
got := ParseTimeWindow(r)
if got.Since == "" {
t.Errorf("window=%s: empty Since", q)
continue
}
since, err := time.Parse(time.RFC3339, got.Since)
if err != nil {
t.Errorf("window=%s: bad RFC3339 %q", q, got.Since)
continue
}
delta := time.Since(since)
// allow 5 minutes of slack
if delta < want-5*time.Minute || delta > want+5*time.Minute {
t.Errorf("window=%s: expected ~%v, got %v", q, want, delta)
}
}
}
func TestParseTimeWindow_FromTo(t *testing.T) {
from := "2026-04-01T00:00:00Z"
to := "2026-04-08T00:00:00Z"
r := httptest.NewRequest("GET", "/api/analytics/rf?from="+from+"&to="+to, nil)
w := ParseTimeWindow(r)
if w.Since != from {
t.Errorf("expected Since=%q, got %q", from, w.Since)
}
if w.Until != to {
t.Errorf("expected Until=%q, got %q", to, w.Until)
}
}
func TestParseTimeWindow_NoParams_BackwardsCompatible(t *testing.T) {
r := httptest.NewRequest("GET", "/api/analytics/rf", nil)
w := ParseTimeWindow(r)
if !w.IsZero() {
t.Errorf("no params should yield zero window, got %+v", w)
}
}
func TestTimeWindow_Includes(t *testing.T) {
w := TimeWindow{Since: "2026-04-01T00:00:00Z", Until: "2026-04-08T00:00:00Z"}
if !w.Includes("2026-04-05T12:00:00Z") {
t.Error("mid-range ts should be included")
}
if w.Includes("2026-03-31T23:59:59Z") {
t.Error("ts before Since should be excluded")
}
if w.Includes("2026-04-08T00:00:01Z") {
t.Error("ts after Until should be excluded")
}
// Empty ts always included (some observations lack timestamps)
if !w.Includes("") {
t.Error("empty ts should be included")
}
}
func TestTimeWindow_CacheKey_DistinctPerWindow(t *testing.T) {
a := TimeWindow{Since: "2026-04-01T00:00:00Z"}
b := TimeWindow{Since: "2026-04-02T00:00:00Z"}
z := TimeWindow{}
if a.CacheKey() == b.CacheKey() {
t.Error("different windows must produce different cache keys")
}
if z.CacheKey() != "" {
t.Errorf("zero window cache key must be empty, got %q", z.CacheKey())
}
if !strings.Contains(a.CacheKey(), "2026-04-01") {
t.Errorf("cache key should encode Since, got %q", a.CacheKey())
}
}
// Self-review fixes (#1018 polish).
// B1: a relative window must produce a STABLE cache key across calls,
// otherwise the analytics cache thrashes (one entry per second).
func TestTimeWindow_RelativeWindow_StableCacheKey(t *testing.T) {
r1 := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
w1 := ParseTimeWindow(r1)
time.Sleep(1100 * time.Millisecond)
r2 := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
w2 := ParseTimeWindow(r2)
if w1.CacheKey() != w2.CacheKey() {
t.Fatalf("relative window cache key must be stable across calls, got %q vs %q", w1.CacheKey(), w2.CacheKey())
}
}
// B2: stored timestamps use millisecond precision (".000Z") while RFC3339
// bounds have none. Includes() must use time-based compare, not lex compare,
// so tx past Until are correctly excluded regardless of fractional digits.
func TestTimeWindow_Includes_FractionalSecondsBoundary(t *testing.T) {
w := TimeWindow{Until: "2026-04-08T00:00:00Z"}
// A tx 1ms past Until should NOT be included.
if w.Includes("2026-04-08T00:00:00.001Z") {
t.Error("ts 1ms past Until must be excluded; lex compare against fractional ts is wrong")
}
// A tx well inside the window must be included.
if !w.Includes("2026-04-07T23:59:59.999Z") {
t.Error("ts just before Until must be included")
}
w2 := TimeWindow{Since: "2026-04-01T00:00:00Z"}
// A tx at exactly Since should be included.
if !w2.Includes("2026-04-01T00:00:00.000Z") {
t.Error("ts exactly at Since must be included; lex compare excludes it because '.' < 'Z'")
}
}
+338
View File
@@ -0,0 +1,338 @@
package main
import (
"database/sql"
"fmt"
"path/filepath"
"testing"
"time"
_ "modernc.org/sqlite"
)
// TestTopologyDedup_RepeatersMergeByPubkey verifies that topRepeaters
// merges entries whose hop prefixes resolve unambiguously to the same node.
func TestTopologyDedup_RepeatersMergeByPubkey(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
if err != nil {
t.Fatal(err)
}
defer conn.Close()
exec := func(s string) {
if _, err := conn.Exec(s); err != nil {
t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
}
}
exec(`CREATE TABLE transmissions (
id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
)`)
exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
last_seen TEXT, frequency REAL
)`)
exec(`CREATE TABLE schema_version (version INTEGER)`)
exec(`INSERT INTO schema_version (version) VALUES (1)`)
exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)
// Insert two repeater nodes with distinct pubkeys.
// AQUA: pubkey starts with 0735bc...
// BETA: pubkey starts with 99aabb...
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('0735bc6dda4d1122aabbccdd', 'AQUA', 'Repeater')`)
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('99aabb001122334455667788', 'BETA', 'Repeater')`)
base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
// Create packets:
// - 10 packets with path ["07", "99aa"] (short prefix for AQUA, medium for BETA)
// - 5 packets with path ["0735bc", "99"] (medium prefix for AQUA, short for BETA)
// - 3 packets with path ["0735bc6dda4d", "99aabb"] (long prefix for both)
txID := 1
obsID := 1
insertTx := func(path string, count int) {
for i := 0; i < count; i++ {
ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
hash := fmt.Sprintf("h%04d", txID)
conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, path, ts)
txID++
obsID++
}
}
insertTx(`["07","99aa"]`, 10)
insertTx(`["0735bc","99"]`, 5)
insertTx(`["0735bc6d","99aabb"]`, 3)
// Total: AQUA appears as "07" (10×), "0735bc" (5×), "0735bc6d" (3×) = 18 total
// Total: BETA appears as "99aa" (10×), "99" (5×), "99aabb" (3×) = 18 total
// After dedup, each should appear ONCE with count=18.
db, err := OpenDB(dbPath)
if err != nil {
t.Fatal(err)
}
defer db.conn.Close()
store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
if err := store.Load(); err != nil {
t.Fatal(err)
}
result := store.computeAnalyticsTopology("", TimeWindow{})
topRepeaters := result["topRepeaters"].([]map[string]interface{})
// Build a map of pubkey → total count from topRepeaters
pubkeyCounts := map[string]int{}
for _, entry := range topRepeaters {
pk, _ := entry["pubkey"].(string)
if pk == "" {
continue
}
pubkeyCounts[pk] += entry["count"].(int)
}
// Each pubkey should appear exactly once in topRepeaters
aquaEntries := 0
betaEntries := 0
for _, entry := range topRepeaters {
pk, _ := entry["pubkey"].(string)
if pk == "0735bc6dda4d1122aabbccdd" {
aquaEntries++
}
if pk == "99aabb001122334455667788" {
betaEntries++
}
}
if aquaEntries != 1 {
t.Errorf("AQUA should appear exactly once in topRepeaters after dedup, got %d entries", aquaEntries)
for _, e := range topRepeaters {
t.Logf(" entry: hop=%v name=%v pubkey=%v count=%v", e["hop"], e["name"], e["pubkey"], e["count"])
}
}
if betaEntries != 1 {
t.Errorf("BETA should appear exactly once in topRepeaters after dedup, got %d entries", betaEntries)
}
// Check that the merged count is correct (18 each)
if c := pubkeyCounts["0735bc6dda4d1122aabbccdd"]; c != 18 {
t.Errorf("AQUA total count should be 18, got %d", c)
}
if c := pubkeyCounts["99aabb001122334455667788"]; c != 18 {
t.Errorf("BETA total count should be 18, got %d", c)
}
}
// TestTopologyDedup_AmbiguousPrefixNotMerged verifies that ambiguous short
// prefixes (matching multiple nodes) are NOT merged — they stay separate.
func TestTopologyDedup_AmbiguousPrefixNotMerged(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
if err != nil {
t.Fatal(err)
}
defer conn.Close()
exec := func(s string) {
if _, err := conn.Exec(s); err != nil {
t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
}
}
exec(`CREATE TABLE transmissions (
id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
)`)
exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
last_seen TEXT, frequency REAL
)`)
exec(`CREATE TABLE schema_version (version INTEGER)`)
exec(`INSERT INTO schema_version (version) VALUES (1)`)
exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)
// Two nodes whose pubkeys share the prefix "ab" — collision!
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('ab11223344556677aabbccdd', 'NODE_A', 'Repeater')`)
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('ab99887766554433aabbccdd', 'NODE_B', 'Repeater')`)
base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
txID := 1
obsID := 1
// 10 packets with hop "ab" — ambiguous (matches both NODE_A and NODE_B)
for i := 0; i < 10; i++ {
ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
hash := fmt.Sprintf("h%04d", txID)
conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `["ab"]`, ts)
txID++
obsID++
}
// 5 packets with hop "ab1122" — unambiguous (only NODE_A)
for i := 0; i < 5; i++ {
ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
hash := fmt.Sprintf("h%04d", txID)
conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `["ab1122"]`, ts)
txID++
obsID++
}
db, err := OpenDB(dbPath)
if err != nil {
t.Fatal(err)
}
defer db.conn.Close()
store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
if err := store.Load(); err != nil {
t.Fatal(err)
}
result := store.computeAnalyticsTopology("", TimeWindow{})
topRepeaters := result["topRepeaters"].([]map[string]interface{})
// "ab" is ambiguous — should NOT be merged with "ab1122"
// We expect two separate entries: one for "ab" (count=10) and one for "ab1122" (count=5)
foundAb := false
foundAb1122 := false
for _, entry := range topRepeaters {
hop := entry["hop"].(string)
count := entry["count"].(int)
if hop == "ab" {
foundAb = true
if count != 10 {
t.Errorf("ambiguous hop 'ab' should have count=10, got %d", count)
}
}
if hop == "ab1122" {
foundAb1122 = true
if count != 5 {
t.Errorf("unambiguous hop 'ab1122' should have count=5, got %d", count)
}
}
}
if !foundAb {
t.Error("ambiguous hop 'ab' should remain as separate entry")
}
if !foundAb1122 {
t.Error("unambiguous hop 'ab1122' should remain as separate entry (not merged with ambiguous 'ab')")
}
}
// TestTopologyDedup_PairsMergeByPubkey verifies that topPairs merges
// pair entries whose hops resolve unambiguously to the same node pair.
func TestTopologyDedup_PairsMergeByPubkey(t *testing.T) {
dir := t.TempDir()
dbPath := filepath.Join(dir, "test.db")
conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
if err != nil {
t.Fatal(err)
}
defer conn.Close()
exec := func(s string) {
if _, err := conn.Exec(s); err != nil {
t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
}
}
exec(`CREATE TABLE transmissions (
id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
)`)
exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
last_seen TEXT, frequency REAL
)`)
exec(`CREATE TABLE schema_version (version INTEGER)`)
exec(`INSERT INTO schema_version (version) VALUES (1)`)
exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('0735bc6dda4d1122aabbccdd', 'AQUA', 'Repeater')`)
exec(`INSERT INTO nodes (public_key, name, role) VALUES ('99aabb001122334455667788', 'BETA', 'Repeater')`)
base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
txID := 1
obsID := 1
insertTx := func(path string, count int) {
for i := 0; i < count; i++ {
ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
hash := fmt.Sprintf("h%04d", txID)
conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, path, ts)
txID++
obsID++
}
}
// Path ["07","99aa"] → pair "07|99aa", 10 times
// Path ["0735bc","99"] → pair "0735bc|99" but sorted = "0735bc|99", 5 times
// Wait: pair sorting is by string comparison: "07" < "99aa", "0735bc" < "99"
// After dedup both should merge to AQUA|BETA pair with count=15
insertTx(`["07","99aa"]`, 10)
insertTx(`["0735bc","99"]`, 5)
db, err := OpenDB(dbPath)
if err != nil {
t.Fatal(err)
}
defer db.conn.Close()
store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
if err := store.Load(); err != nil {
t.Fatal(err)
}
result := store.computeAnalyticsTopology("", TimeWindow{})
topPairs := result["topPairs"].([]map[string]interface{})
// Should have exactly 1 pair entry for AQUA-BETA with count=15
aquaBetaPairs := 0
totalCount := 0
for _, entry := range topPairs {
pkA, _ := entry["pubkeyA"].(string)
pkB, _ := entry["pubkeyB"].(string)
if (pkA == "0735bc6dda4d1122aabbccdd" && pkB == "99aabb001122334455667788") ||
(pkA == "99aabb001122334455667788" && pkB == "0735bc6dda4d1122aabbccdd") {
aquaBetaPairs++
totalCount += entry["count"].(int)
}
}
if aquaBetaPairs != 1 {
t.Errorf("AQUA-BETA pair should appear exactly once after dedup, got %d entries", aquaBetaPairs)
for _, e := range topPairs {
t.Logf(" pair: hopA=%v hopB=%v count=%v pkA=%v pkB=%v", e["hopA"], e["hopB"], e["count"], e["pubkeyA"], e["pubkeyB"])
}
}
if totalCount != 15 {
t.Errorf("AQUA-BETA pair total count should be 15, got %d", totalCount)
}
}
+1 -1
View File
@@ -315,7 +315,6 @@ type PacketTimestampsResponse struct {
type PacketDetailResponse struct {
Packet interface{} `json:"packet"`
Path []interface{} `json:"path"`
Breakdown *Breakdown `json:"breakdown"`
ObservationCount int `json:"observation_count"`
Observations []ObservationResp `json:"observations,omitempty"`
}
@@ -860,6 +859,7 @@ type ObserverResp struct {
BatteryMv interface{} `json:"battery_mv"`
UptimeSecs interface{} `json:"uptime_secs"`
NoiseFloor interface{} `json:"noise_floor"`
LastPacketAt interface{} `json:"last_packet_at"`
PacketsLastHour int `json:"packetsLastHour"`
Lat interface{} `json:"lat"`
Lon interface{} `json:"lon"`
+82
View File
@@ -0,0 +1,82 @@
package main
import (
"fmt"
"log"
"time"
)
// checkAutoVacuum inspects the current auto_vacuum mode and logs a warning
// if it's not INCREMENTAL. Optionally performs a one-time full VACUUM if
// the operator has set db.vacuumOnStartup: true in config (#919).
func checkAutoVacuum(db *DB, cfg *Config, dbPath string) {
var autoVacuum int
if err := db.conn.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
log.Printf("[db] warning: could not read auto_vacuum: %v", err)
return
}
if autoVacuum == 2 {
log.Printf("[db] auto_vacuum=INCREMENTAL")
return
}
modes := map[int]string{0: "NONE", 1: "FULL", 2: "INCREMENTAL"}
mode := modes[autoVacuum]
if mode == "" {
mode = fmt.Sprintf("UNKNOWN(%d)", autoVacuum)
}
log.Printf("[db] auto_vacuum=%s — DB needs one-time VACUUM to enable incremental auto-vacuum. "+
"Set db.vacuumOnStartup: true in config to migrate (will block startup for several minutes on large DBs). "+
"See https://github.com/Kpa-clawbot/CoreScope/issues/919", mode)
if cfg.DB != nil && cfg.DB.VacuumOnStartup {
// WARNING: Full VACUUM creates a temporary copy of the entire DB file.
// Requires ~2× the DB file size in free disk space or it will fail.
log.Printf("[db] vacuumOnStartup=true — starting one-time full VACUUM (ensure 2x DB size free disk space)...")
start := time.Now()
rw, err := cachedRW(dbPath)
if err != nil {
log.Printf("[db] VACUUM failed: could not open RW connection: %v", err)
return
}
if _, err := rw.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
log.Printf("[db] VACUUM failed: could not set auto_vacuum: %v", err)
return
}
if _, err := rw.Exec("VACUUM"); err != nil {
log.Printf("[db] VACUUM failed: %v", err)
return
}
elapsed := time.Since(start)
log.Printf("[db] VACUUM complete in %v — auto_vacuum is now INCREMENTAL", elapsed.Round(time.Millisecond))
// Re-check
var newMode int
if err := db.conn.QueryRow("PRAGMA auto_vacuum").Scan(&newMode); err == nil {
if newMode == 2 {
log.Printf("[db] auto_vacuum=INCREMENTAL (confirmed after VACUUM)")
} else {
log.Printf("[db] warning: auto_vacuum=%d after VACUUM — expected 2", newMode)
}
}
}
}
// runIncrementalVacuum runs PRAGMA incremental_vacuum(N) on a read-write
// connection. Safe to call on auto_vacuum=NONE databases (noop).
func runIncrementalVacuum(dbPath string, pages int) {
rw, err := cachedRW(dbPath)
if err != nil {
log.Printf("[vacuum] could not open RW connection: %v", err)
return
}
if _, err := rw.Exec(fmt.Sprintf("PRAGMA incremental_vacuum(%d)", pages)); err != nil {
log.Printf("[vacuum] incremental_vacuum error: %v", err)
}
}
+27 -6
View File
@@ -3,12 +3,19 @@
"apiKey": "your-secret-api-key-here",
"nodeBlacklist": [],
"_comment_nodeBlacklist": "Public keys of nodes to hide from all API responses. Use for trolls, offensive names, or nodes reporting false data that operators refuse to fix.",
"observerIATAWhitelist": [],
"_comment_observerIATAWhitelist": "Global IATA region whitelist. When non-empty, only observers whose IATA code (from MQTT topic) matches are processed. Case-insensitive. Empty = allow all. Unlike per-source iataFilter, this applies across all MQTT sources.",
"retention": {
"nodeDays": 7,
"observerDays": 14,
"packetDays": 30,
"_comment": "nodeDays: nodes not seen in N days moved to inactive_nodes (default 7). observerDays: observers not sending data in N days are removed (-1 = keep forever, default 14). packetDays: transmissions older than N days are deleted (0 = disabled)."
},
"db": {
"vacuumOnStartup": false,
"incrementalVacuumPages": 1024,
"_comment": "vacuumOnStartup: run one-time full VACUUM to enable incremental auto-vacuum on existing DBs (blocks startup for minutes on large DBs; requires 2x DB file size in free disk space). incrementalVacuumPages: free pages returned to OS after each retention reaper cycle (default 1024). See #919."
},
"https": {
"cert": "/path/to/cert.pem",
"key": "/path/to/key.pem",
@@ -124,7 +131,9 @@
"SFO",
"OAK",
"MRY"
]
],
"region": "SJC",
"connectTimeoutSec": 45
}
],
"channelKeys": {
@@ -146,7 +155,8 @@
"infraSilentHours": 72,
"nodeDegradedHours": 1,
"nodeSilentHours": 24,
"_comment": "How long (hours) before nodes show as degraded/silent. 'infra' = repeaters & rooms, 'node' = companions & others."
"relayActiveHours": 24,
"_comment": "How long (hours) before nodes show as degraded/silent. 'infra' = repeaters & rooms, 'node' = companions & others. relayActiveHours: a repeater is shown as 'actively relaying' if its pubkey appeared as a path hop in a non-advert packet within this window (issue #662)."
},
"defaultRegion": "SJC",
"mapDefaults": {
@@ -164,7 +174,11 @@
[37.20, -122.52]
],
"bufferKm": 20,
"_comment": "Optional. Restricts ingestion and API responses to nodes within the polygon + bufferKm. Polygon is an array of [lat, lon] pairs (minimum 3). Use tools/geofilter-builder.html to draw a polygon visually. Remove this section to disable filtering. Nodes with no GPS fix are always allowed through."
"_comment": "Optional. Restricts ingestion and API responses to nodes within the polygon + bufferKm. Polygon is an array of [lat, lon] pairs (minimum 3). Use the GeoFilter Builder (`/geofilter-builder.html`) to draw a polygon, save drafts to localStorage with Save Draft, and export a config snippet with Download — paste the snippet here as the `geo_filter` block. Remove this section to disable filtering. Nodes with no GPS fix are always allowed through."
},
"foreignAdverts": {
"mode": "flag",
"_comment": "Controls how the ingestor handles ADVERTs whose GPS is OUTSIDE the geo_filter polygon (#730). 'flag' (default): store the advert/node and tag it foreign_advert=1 so operators can see bridged/leaked nodes via the API ('foreign': true on /api/nodes). 'drop': legacy behavior — silently discard the advert (no log, no node row). Only applies when geo_filter is configured; otherwise has no effect."
},
"regions": {
"SJC": "San Jose, US",
@@ -208,7 +222,9 @@
"packetStore": {
"maxMemoryMB": 1024,
"estimatedPacketBytes": 450,
"_comment": "In-memory packet store. maxMemoryMB caps RAM usage. All packets loaded on startup, served from RAM."
"retentionHours": 168,
"_comment": "In-memory packet store. maxMemoryMB caps RAM usage. retentionHours: only packets younger than this are loaded on startup and kept in memory (0 = unlimited, not recommended for large DBs — causes OOM on cold start). 168 = 7 days. Must be ≤ retention.packetDays * 24.",
"_comment_gomemlimit": "On startup the server reads GOMEMLIMIT from the environment if set; otherwise it derives a Go runtime soft memory limit of maxMemoryMB * 1.5 and applies it via debug.SetMemoryLimit. This forces aggressive GC under cgroup pressure so the process self-throttles before the kernel SIGKILLs it. To override, set GOMEMLIMIT explicitly (e.g. GOMEMLIMIT=850MiB). See issue #836."
},
"resolvedPath": {
"backfillHours": 24,
@@ -218,10 +234,15 @@
"maxAgeDays": 5,
"_comment": "Neighbor edges older than this many days are pruned on startup and daily. Default: 5."
},
"_comment_mqttSources": "Each source connects to an MQTT broker. topics: what to subscribe to. iataFilter: only ingest packets from these regions (optional).",
"batteryThresholds": {
"lowMv": 3300,
"criticalMv": 3000,
"_comment": "Voltage cutoffs (millivolts) for the per-node battery trend chart on /node-analytics. Latest sample below lowMv shows the node as ⚠️ Low; below criticalMv shows 🪫 Critical. Both default to 3300 / 3000 if omitted. Source data: observer_metrics.battery_mv populated from observer status messages; only nodes that are themselves observers (matching pubkey ↔ observer id) yield a series. Issue #663."
},
"_comment_mqttSources": "Each source connects to an MQTT broker. topics: what to subscribe to. iataFilter: only ingest packets from these regions (optional). region: default IATA region for this source — used when packet/topic doesn't specify one (optional, priority: payload > topic > this field).",
"_comment_channelKeys": "Hex keys for decrypting channel messages. Key name = channel display name. public channel key is well-known.",
"_comment_hashChannels": "Channel names whose keys are derived via SHA256. Key = SHA256(name)[:16]. Listed here so the ingestor can auto-derive keys.",
"_comment_defaultRegion": "IATA code shown by default in region filters.",
"_comment_mapDefaults": "Initial map center [lat, lon] and zoom level.",
"_comment_regions": "IATA code to display name mapping. Packets are tagged with region codes by MQTT topic structure."
"_comment_regions": "IATA code display name mapping for the region filter UI. Each key is a 3-letter IATA code that an observer is tagged with (resolved priority: MQTT payload `region` field > topic-derived region > mqttSources.region). Observers without an IATA tag will not appear under any region filter — only under 'All Regions'. The region filter dropdown shows one entry per code listed here PLUS any extra IATA codes the server discovers from observers at runtime (so you can omit codes here and they will still be selectable, just labelled with the bare IATA code instead of a friendly name). Selecting 'All Regions' (or no region) returns results from every observer including those with no IATA tag; selecting one or more codes restricts results to packets observed by observers tagged with those codes. The reserved value 'All' (case-insensitive) is treated as 'no filter' on the server, so the URL ?region=All behaves identically to omitting the param. Issue #770."
}
@@ -0,0 +1,204 @@
# Scope Stats Page — Design Spec
**Issue**: Kpa-clawbot/CoreScope#899
**Date**: 2026-04-23
**Branch target**: `master`
---
## Overview
Add a dedicated **Scopes** page showing scope/region statistics for MeshCore transport-route packets. Scope filtering in MeshCore uses `TRANSPORT_FLOOD` (route_type 0) and `TRANSPORT_DIRECT` (route_type 3) packets that carry two 16-bit transport codes. Code1 ≠ `0000` means the packet is region-scoped.
Feature 3 from the issue (default scope per client via advert) is **not implemented** — the advert format has no scope field in the current firmware.
---
## How Scopes Work (Firmware)
Transport code derivation (authoritative source: `meshcore-dev/MeshCore`):
```
key = SHA256("#regionname")[:16] // TransportKeyStore::getAutoKeyFor
Code1 = HMAC-SHA256(key, type || payload) // TransportKey::calcTransportCode, 2-byte output
```
Code1 is a **per-message** HMAC — the same region produces a different Code1 for every message. Identifying a region from Code1 requires knowing the region name in advance and recomputing the HMAC.
`Code1 = 0000` is the "no scope" sentinel (also `FFFF` is reserved). Packets with route_type 1 or 2 (plain FLOOD/DIRECT) carry no transport codes.
---
## Config
Add `hashRegions` to the ingestor `Config` struct in `cmd/ingestor/config.go`, mirroring `hashChannels`:
```json
"hashRegions": ["#belgium", "#eu", "#brussels"]
```
Normalization (same rules as `hashChannels`):
- Trim whitespace
- Prepend `#` if missing
- Skip empty entries
---
## Ingestor Changes
### Key derivation (`loadRegionKeys`)
```go
func loadRegionKeys(cfg *Config) map[string][]byte {
// key = first 16 bytes of SHA256("#regionname")
}
```
Returns `map[string][]byte` (region name → 16-byte HMAC key). Called once at startup, stored on the `Store`.
### Decoder: expose raw payload bytes
Add `PayloadRaw []byte` to `DecodedPacket` in `cmd/ingestor/decoder.go`. Populated from the raw `buf` slice at the payload offset — zero-copy slice, no allocation. This is the **encrypted** payload bytes, matching what the firmware feeds into `calcTransportCode`.
### At-ingest region matching
In `BuildPacketData`:
- Skip if `route_type` not in `{0, 3}``scope_name` stays `nil`
- If `Code1 == "0000"``scope_name = nil` (unscoped transport, no scope involvement)
- If `Code1 != "0000"` → try each region key:
```
HMAC-SHA256(key, payloadType_byte || PayloadRaw) → first 2 bytes as uint16
```
First match → `scope_name = "#regionname"`. No match → `scope_name = ""` (unknown scope).
Add `ScopeName *string` to `PacketData`.
### MQTT-sourced packets (DM / CHAN paths in main.go)
These are injected directly without going through `BuildPacketData`. They use `route_type = 1` (FLOOD), so they are never transport-route packets. No scope matching needed for these paths.
---
## Database
### Migration
```sql
ALTER TABLE transmissions ADD COLUMN scope_name TEXT DEFAULT NULL;
CREATE INDEX idx_tx_scope_name ON transmissions(scope_name) WHERE scope_name IS NOT NULL;
```
### Column semantics
| Value | Meaning |
|-------|---------|
| `NULL` | Either: non-transport-route packet (route_type 1/2), or transport-route with Code1=0000 |
| `""` (empty string) | Transport-route, Code1 ≠ 0000, but no configured region matched |
| `"#belgium"` | Matched named region |
The API stats queries resolve the NULL ambiguity by always filtering `route_type IN (0, 3)` first:
- `unscoped` count = `route_type IN (0,3) AND scope_name IS NULL`
- `scoped` count = `route_type IN (0,3) AND scope_name IS NOT NULL`
### Backfill
On migration, re-decode `raw_hex` for all rows where `route_type IN (0, 3)` and `scope_name IS NULL`. Run the same HMAC matching logic. Rows with `Code1 = 0000` remain `NULL`.
The backfill runs in the existing migration framework in `cmd/ingestor/db.go`. If no regions are configured, backfill is skipped.
---
## API
### `GET /api/scope-stats`
**Query param**: `window` — one of `1h`, `24h` (default), `7d`
**Time-series bucket sizes**:
| Window | Bucket |
|--------|--------|
| `1h` | 5 min |
| `24h` | 1 hour |
| `7d` | 6 hours|
**Response**:
```json
{
"window": "24h",
"summary": {
"transportTotal": 1240,
"scoped": 890,
"unscoped": 350,
"unknownScope": 42
},
"byRegion": [
{ "name": "#belgium", "count": 612 },
{ "name": "#eu", "count": 236 }
],
"timeSeries": [
{ "t": "2026-04-23T10:00:00Z", "scoped": 45, "unscoped": 18 },
{ "t": "2026-04-23T11:00:00Z", "scoped": 51, "unscoped": 22 }
]
}
```
- `transportTotal` = `scoped + unscoped` (transport-route packets only)
- `scoped` = Code1 ≠ 0000 (named + unknown)
- `unscoped` = transport-route with Code1 = 0000
- `unknownScope` = scoped but no region name matched (subset of `scoped`)
- `byRegion` sorted by count descending, excludes unknown
- `timeSeries` covers the full window at the bucket granularity
Route: `GET /api/scope-stats` registered in `cmd/server/routes.go`.
No auth required (same as other read endpoints).
TTL cache: 30 seconds (heavier query than `/api/stats`).
---
## Frontend
### Navigation
Add nav link between Channels and Nodes in `public/index.html`:
```html
<a href="#/scopes" class="nav-link" data-route="scopes">Scopes</a>
```
### `public/scopes.js`
Three sections on the page:
**1. Summary cards** (reuse existing card CSS pattern from home/analytics pages)
- Transport total, Scoped, Unscoped, Unknown scope
- Each card shows count + percentage of transport total
**2. Per-region table**
Columns: Region, Messages, % of Scoped
Sorted by count descending. Last row: "Unknown scope" (italic) if unknownScope > 0.
Shows "No regions configured" message if `byRegion` is empty and `unknownScope = 0`.
**3. Time-series chart**
- Window selector: `1h / 24h / 7d` (default 24h)
- Two lines: **Scoped** (blue) and **Unscoped** (grey)
- Uses the same lightweight canvas chart pattern as other pages (no external chart lib)
### Cache buster
`scopes.js` added to the `__BUST__` entries in `index.html` in the same commit.
---
## Testing
- Unit tests for `loadRegionKeys`: normalization, key bytes match firmware SHA256 derivation
- Unit tests for HMAC matching: known Code1 value computed from firmware logic, verified against Go implementation
- Integration test: ingest a synthetic transport-route packet with a known region, assert `scope_name` column is set correctly
- API test: `GET /api/scope-stats` returns correct summary counts against fixture DB
---
## Out of Scope
- Feature 3 (default scope per client via advert) — firmware has no advert scope field
- Drill-down from region row to filtered packet list (deferred)
- Private regions (`$`-prefixed) — use secret keys not publicly derivable
+19
View File
@@ -98,6 +98,22 @@ How long (in hours) before a node is marked degraded or silent:
| `retention.nodeDays` | `7` | Nodes not seen in N days move to inactive |
| `retention.packetDays` | `30` | Packets older than N days are deleted daily |
> **Note:** Lowering retention does **not** immediately shrink the database file.
> SQLite marks deleted pages as free but does not return them to the filesystem
> unless [incremental auto-vacuum](database.md) is enabled. New databases created
> after v0.x.x have auto-vacuum enabled automatically. Existing databases require
> a one-time migration — see the [Database](database.md) guide.
## Database
| Field | Default | Description |
|-------|---------|-------------|
| `db.vacuumOnStartup` | `false` | Run a one-time full `VACUUM` on startup to enable incremental auto-vacuum (blocks for minutes on large DBs) |
| `db.incrementalVacuumPages` | `1024` | Free pages returned to the OS after each retention reaper cycle |
See [Database](database.md) for details on SQLite auto-vacuum, WAL, and manual maintenance.
See [#919](https://github.com/Kpa-clawbot/CoreScope/issues/919) for background.
## Channel decryption
| Field | Description |
@@ -150,6 +166,9 @@ Lower values = fresher data but more server load.
|-------|---------|-------------|
| `packetStore.maxMemoryMB` | `1024` | Maximum RAM for in-memory packet store |
| `packetStore.estimatedPacketBytes` | `450` | Estimated bytes per packet (for memory budgeting) |
| `packetStore.retentionHours` | `0` | Only load packets younger than N hours on startup and keep them in memory. **Set this on any instance with a large DB.** `0` = unlimited (loads full DB history — causes OOM on cold start when the DB has hundreds of thousands of paths). Recommended: same as `retention.packetDays × 24` (e.g. `168` for 7 days). |
> **Warning:** Leaving `retentionHours` at `0` on a large database will cause the server to OOM-kill itself on every cold start. The full packet history is loaded into the subpath index at startup; a DB with ~280K paths produces ~13M index entries before the process is killed.
## Timestamps
+82
View File
@@ -0,0 +1,82 @@
# Database
CoreScope uses SQLite in WAL (Write-Ahead Log) mode for both the server
(read-only) and ingestor (read-write).
## WAL mode
WAL mode allows concurrent reads while writes happen. It is set automatically
at connection time via `PRAGMA journal_mode=WAL`. No operator action needed.
The WAL file (`meshcore.db-wal`) grows during writes and is checkpointed
(merged back into the main DB) periodically and at clean shutdown.
## Auto-vacuum
By default, SQLite does not shrink the database file after `DELETE` operations.
Deleted pages are marked free and reused by future writes, but the file size
on disk stays the same. This is surprising when lowering retention settings.
### New databases
Databases created after this feature was added automatically have
`PRAGMA auto_vacuum = INCREMENTAL`. After each retention reaper cycle,
CoreScope runs `PRAGMA incremental_vacuum(N)` to return free pages to the OS.
### Existing databases
The `auto_vacuum` mode is stored in the database header and can only be changed
by rewriting the entire file with `VACUUM`. CoreScope will **not** do this
automatically — on large databases (5+ GB seen in the wild) it takes minutes
and holds an exclusive lock.
**To migrate an existing database:**
1. At startup, CoreScope logs a warning:
```
[db] auto_vacuum=NONE — DB needs one-time VACUUM to enable incremental auto-vacuum.
```
2. **Ensure at least 2× the database file size in free disk space.** Full VACUUM
creates a temporary copy of the entire file — on a near-full disk it will fail.
3. Set `db.vacuumOnStartup: true` in your `config.json`:
```json
{
"db": {
"vacuumOnStartup": true
}
}
```
4. Restart CoreScope. The one-time `VACUUM` will run and block startup.
5. After migration, remove or set `vacuumOnStartup: false` — it's not needed again.
### Configuration
| Field | Default | Description |
|-------|---------|-------------|
| `db.vacuumOnStartup` | `false` | One-time full VACUUM to enable incremental auto-vacuum |
| `db.incrementalVacuumPages` | `1024` | Pages returned to OS per reaper cycle |
## Manual VACUUM
You can also run a manual vacuum from the SQLite CLI:
```bash
sqlite3 data/meshcore.db "PRAGMA auto_vacuum = INCREMENTAL; VACUUM;"
```
This is equivalent to `vacuumOnStartup: true` but can be done offline.
> ⚠️ Full VACUUM requires **2× the database file size** in free disk space (it
> creates a temporary copy). Check with `ls -lh data/meshcore.db` before running.
## Checking current mode
```bash
sqlite3 data/meshcore.db "PRAGMA auto_vacuum;"
```
- `0` = NONE (default for old databases)
- `1` = FULL (automatic, but slower writes)
- `2` = INCREMENTAL (recommended — CoreScope triggers vacuum after deletes)
See [#919](https://github.com/Kpa-clawbot/CoreScope/issues/919) for background on this feature.
+17
View File
@@ -0,0 +1,17 @@
// Package dbconfig provides the shared DBConfig struct used by both the server
// and ingestor binaries for SQLite vacuum and maintenance settings (#919, #921).
package dbconfig
// DBConfig controls SQLite vacuum and maintenance behavior (#919).
type DBConfig struct {
VacuumOnStartup bool `json:"vacuumOnStartup"` // one-time full VACUUM on startup if auto_vacuum is not INCREMENTAL
IncrementalVacuumPages int `json:"incrementalVacuumPages"` // pages returned to OS per reaper cycle (default 1024)
}
// GetIncrementalVacuumPages returns the configured pages or 1024 default.
func (c *DBConfig) GetIncrementalVacuumPages() int {
if c != nil && c.IncrementalVacuumPages > 0 {
return c.IncrementalVacuumPages
}
return 1024
}
+21
View File
@@ -0,0 +1,21 @@
package dbconfig
import "testing"
func TestGetIncrementalVacuumPages_Default(t *testing.T) {
var c *DBConfig
if got := c.GetIncrementalVacuumPages(); got != 1024 {
t.Fatalf("nil DBConfig: got %d, want 1024", got)
}
c = &DBConfig{}
if got := c.GetIncrementalVacuumPages(); got != 1024 {
t.Fatalf("zero DBConfig: got %d, want 1024", got)
}
}
func TestGetIncrementalVacuumPages_Configured(t *testing.T) {
c := &DBConfig{IncrementalVacuumPages: 512}
if got := c.GetIncrementalVacuumPages(); got != 512 {
t.Fatalf("got %d, want 512", got)
}
}
+3
View File
@@ -0,0 +1,3 @@
module github.com/meshcore-analyzer/dbconfig
go 1.22
+3
View File
@@ -0,0 +1,3 @@
module github.com/meshcore-analyzer/packetpath
go 1.22
+76
View File
@@ -0,0 +1,76 @@
// Package packetpath provides shared helpers for extracting path hops from
// raw MeshCore packet hex bytes.
package packetpath
import (
"encoding/hex"
"fmt"
"strings"
)
// DecodePathFromRawHex extracts the header path hops directly from raw hex bytes.
// This is the authoritative path that matches what's in raw_hex, as opposed to
// decoded.Path.Hops which may be overwritten for TRACE packets (issue #886).
//
// WARNING: This function returns the literal header path bytes regardless of
// payload type. For TRACE packets these bytes are SNR values, NOT hop hashes.
// Callers that may receive TRACE packets MUST check PathBytesAreHops(payloadType)
// first, or use the safer DecodeHopsForPayload wrapper.
func DecodePathFromRawHex(rawHex string) ([]string, error) {
buf, err := hex.DecodeString(rawHex)
if err != nil || len(buf) < 2 {
return nil, fmt.Errorf("invalid or too-short hex")
}
headerByte := buf[0]
offset := 1
if IsTransportRoute(int(headerByte & 0x03)) {
if len(buf) < offset+4 {
return nil, fmt.Errorf("too short for transport codes")
}
offset += 4
}
if offset >= len(buf) {
return nil, fmt.Errorf("too short for path byte")
}
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
hops := make([]string, 0, hashCount)
for i := 0; i < hashCount; i++ {
start := offset + i*hashSize
end := start + hashSize
if end > len(buf) {
break
}
hops = append(hops, strings.ToUpper(hex.EncodeToString(buf[start:end])))
}
return hops, nil
}
// DecodeHopsForPayload returns the header path hops only when the payload type's
// header bytes are actually route hops (i.e. PathBytesAreHops(payloadType) is true).
// For TRACE packets it returns (nil, ErrPayloadHasNoHeaderHops) so the caller is
// forced to source hops from the decoded payload instead.
//
// Prefer this over DecodePathFromRawHex when the payload type is known.
func DecodeHopsForPayload(rawHex string, payloadType byte) ([]string, error) {
if !PathBytesAreHops(payloadType) {
return nil, ErrPayloadHasNoHeaderHops
}
return DecodePathFromRawHex(rawHex)
}
// ErrPayloadHasNoHeaderHops is returned by DecodeHopsForPayload when the
// payload type repurposes the raw_hex header path bytes (e.g. TRACE → SNR values).
var ErrPayloadHasNoHeaderHops = errPayloadHasNoHeaderHops{}
type errPayloadHasNoHeaderHops struct{}
func (errPayloadHasNoHeaderHops) Error() string {
return "payload type repurposes header path bytes; source hops from decoded payload"
}
+150
View File
@@ -0,0 +1,150 @@
package packetpath
import (
"encoding/hex"
"encoding/json"
"strings"
"testing"
)
func TestDecodePathFromRawHex_Basic(t *testing.T) {
// Build a simple FLOOD packet (route_type=1) with 2 hops of hashSize=1
// header: route_type=1, payload_type=2 (TXT_MSG), version=0 → 0b00_0010_01 = 0x09
// path byte: hashSize=1 (bits 7-6 = 0), hashCount=2 (bits 5-0 = 2) → 0x02
// hops: AB, CD
// payload: some bytes
raw := "0902ABCD" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AB" || hops[1] != "CD" {
t.Fatalf("expected [AB, CD], got %v", hops)
}
}
func TestDecodePathFromRawHex_ZeroHops(t *testing.T) {
// DIRECT route (type=2), no hops → 0b00_0010_10 = 0x0A
// path byte: 0x00 (0 hops)
raw := "0A00" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 0 {
t.Fatalf("expected 0 hops, got %v", hops)
}
}
func TestDecodePathFromRawHex_TransportRoute(t *testing.T) {
// TRANSPORT_FLOOD (route_type=0), payload_type=5 (GRP_TXT), version=0
// header: 0b00_0101_00 = 0x14
// transport codes: 4 bytes
// path byte: hashSize=1, hashCount=1 → 0x01
// hop: FF
raw := "14" + "00112233" + "01" + "FF" + "DEAD"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 1 || hops[0] != "FF" {
t.Fatalf("expected [FF], got %v", hops)
}
}
// buildTracePacket creates a TRACE packet hex string where header path bytes are
// SNR values, and payload contains the actual route hops.
func buildTracePacket() (rawHex string, headerPathHops []string, payloadHops []string) {
// DIRECT route (type=2), TRACE payload (type=9), version=0
// header byte: 0b00_1001_10 = 0x26
headerByte := byte(0x26)
// Header path: 2 SNR bytes (hashSize=1, hashCount=2) → path byte = 0x02
// SNR values: 0x1A (26 dB), 0x0F (15 dB)
pathByte := byte(0x02)
snrBytes := []byte{0x1A, 0x0F}
// TRACE payload: tag(4) + authCode(4) + flags(1) + path hops
tag := []byte{0x01, 0x00, 0x00, 0x00}
authCode := []byte{0x02, 0x00, 0x00, 0x00}
// flags: path_sz=0 (1 byte hops), other bits=0 → 0x00
flags := byte(0x00)
// Payload hops: AA, BB, CC (the actual route)
payloadPathBytes := []byte{0xAA, 0xBB, 0xCC}
var buf []byte
buf = append(buf, headerByte, pathByte)
buf = append(buf, snrBytes...)
buf = append(buf, tag...)
buf = append(buf, authCode...)
buf = append(buf, flags)
buf = append(buf, payloadPathBytes...)
rawHex = strings.ToUpper(hex.EncodeToString(buf))
headerPathHops = []string{"1A", "0F"} // SNR values — NOT route hops
payloadHops = []string{"AA", "BB", "CC"} // actual route hops from payload
return
}
func TestDecodePathFromRawHex_TraceReturnsSNR(t *testing.T) {
rawHex, expectedSNR, _ := buildTracePacket()
hops, err := DecodePathFromRawHex(rawHex)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
// DecodePathFromRawHex always returns header path bytes — for TRACE these are SNR values
if len(hops) != len(expectedSNR) {
t.Fatalf("expected %d hops (SNR), got %d: %v", len(expectedSNR), len(hops), hops)
}
for i, h := range hops {
if h != expectedSNR[i] {
t.Errorf("hop[%d]: expected %s, got %s", i, expectedSNR[i], h)
}
}
}
func TestTracePathJSON_UsesPayloadHops(t *testing.T) {
// This test validates the TRACE vs non-TRACE logic that callers should implement:
// For TRACE: path_json = decoded.Path.Hops (payload-decoded route hops)
// For non-TRACE: path_json = DecodePathFromRawHex(raw_hex)
rawHex, snrHops, payloadHops := buildTracePacket()
// DecodePathFromRawHex returns SNR bytes for TRACE
headerHops, _ := DecodePathFromRawHex(rawHex)
headerJSON, _ := json.Marshal(headerHops)
// payload hops (what decoded.Path.Hops would return after TRACE decoding)
payloadJSON, _ := json.Marshal(payloadHops)
// They must differ — SNR != route hops
if string(headerJSON) == string(payloadJSON) {
t.Fatalf("SNR hops and payload hops should differ for TRACE; both are %s", headerJSON)
}
// For TRACE, path_json should be payloadHops, not headerHops
_ = snrHops // snrHops == headerHops — used for documentation
t.Logf("TRACE: header path (SNR) = %s, payload path (route) = %s", headerJSON, payloadJSON)
}
func TestDecodeHopsForPayload_NonTrace(t *testing.T) {
// header 0x01, path_len 0x02, hops 0xAA 0xBB, then payload bytes
raw := "0102AABB00"
hops, err := DecodeHopsForPayload(raw, 0x05) // GRP_TXT — header path bytes ARE hops
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AA" || hops[1] != "BB" {
t.Errorf("expected [AA BB], got %v", hops)
}
}
func TestDecodeHopsForPayload_TraceReturnsError(t *testing.T) {
raw := "010205F00100"
hops, err := DecodeHopsForPayload(raw, PayloadTRACE)
if err != ErrPayloadHasNoHeaderHops {
t.Errorf("expected ErrPayloadHasNoHeaderHops, got %v", err)
}
if hops != nil {
t.Errorf("expected nil hops for TRACE, got %v", hops)
}
}
+24
View File
@@ -0,0 +1,24 @@
package packetpath
// Route type constants (header bits 1-0).
const (
RouteTransportFlood = 0
RouteFlood = 1
RouteDirect = 2
RouteTransportDirect = 3
)
// PayloadTRACE is the payload type constant for TRACE packets.
const PayloadTRACE = 0x09
// IsTransportRoute returns true for TRANSPORT_FLOOD (0) and TRANSPORT_DIRECT (3).
func IsTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
}
// PathBytesAreHops returns true when the raw_hex header path bytes represent
// route hop hashes (the normal case). Returns false for packet types where
// header path bytes are repurposed (e.g. TRACE uses them for SNR values).
func PathBytesAreHops(payloadType byte) bool {
return payloadType != PayloadTRACE
}
+31
View File
@@ -0,0 +1,31 @@
package packetpath
import "testing"
func TestIsTransportRoute(t *testing.T) {
if !IsTransportRoute(RouteTransportFlood) {
t.Error("RouteTransportFlood should be transport")
}
if !IsTransportRoute(RouteTransportDirect) {
t.Error("RouteTransportDirect should be transport")
}
if IsTransportRoute(RouteFlood) {
t.Error("RouteFlood should not be transport")
}
if IsTransportRoute(RouteDirect) {
t.Error("RouteDirect should not be transport")
}
}
func TestPathBytesAreHops(t *testing.T) {
if PathBytesAreHops(PayloadTRACE) {
t.Error("PathBytesAreHops(PayloadTRACE) should be false")
}
// All other known payload types should return true.
otherTypes := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}
for _, pt := range otherTypes {
if !PathBytesAreHops(pt) {
t.Errorf("PathBytesAreHops(0x%02X) should be true", pt)
}
}
}
+201 -22
View File
@@ -75,6 +75,16 @@
<h2>📊 Mesh Analytics</h2>
<p class="text-muted">Deep dive into your mesh network data</p>
<div id="analyticsRegionFilter" class="region-filter-container"></div>
<div class="time-window-filter" style="margin:8px 0">
<label for="analyticsTimeWindow" style="font-size:0.9em;color:var(--text-muted);margin-right:6px">Time window:</label>
<select id="analyticsTimeWindow" data-testid="analytics-time-window" aria-label="Time window">
<option value="">All data</option>
<option value="1h">Last 1 hour</option>
<option value="24h">Last 24 hours</option>
<option value="7d">Last 7 days</option>
<option value="30d">Last 30 days</option>
</select>
</div>
<div class="analytics-tabs" id="analyticsTabs" role="tablist" aria-label="Analytics tabs">
<button class="tab-btn active" data-tab="overview">Overview</button>
<button class="tab-btn" data-tab="rf">RF / Signal</button>
@@ -99,18 +109,38 @@
// Tab handling
const analyticsTabs = document.getElementById('analyticsTabs');
initTabBar(analyticsTabs);
// #749 — keep analytics tab + window in URL for deep-linking.
function _updateAnalyticsUrl() {
if (!window.URLState) return;
var twElNow = document.getElementById('analyticsTimeWindow');
var updates = {
tab: _currentTab && _currentTab !== 'overview' ? _currentTab : '',
window: twElNow && twElNow.value ? twElNow.value : ''
};
// Drop any subview-specific keys that don't belong to the active tab
// so switching tabs gives a clean URL. (rf-health uses 'range', 'observer', 'from', 'to')
if (_currentTab !== 'rf-health') {
var cleared = ['range', 'observer', 'from', 'to'];
for (var i = 0; i < cleared.length; i++) updates[cleared[i]] = '';
}
var newHash = URLState.updateHashParams(updates, location.hash);
if (newHash !== location.hash) history.replaceState(null, '', newHash);
}
analyticsTabs.addEventListener('click', e => {
const btn = e.target.closest('.tab-btn');
if (!btn) return;
document.querySelectorAll('.tab-btn').forEach(b => b.classList.remove('active'));
btn.classList.add('active');
_currentTab = btn.dataset.tab;
_updateAnalyticsUrl();
renderTab(_currentTab);
});
// Deep-link: #/analytics?tab=collisions
// Deep-link: #/analytics?tab=collisions&window=7d
const hashParams = location.hash.split('?')[1] || '';
const urlTab = new URLSearchParams(hashParams).get('tab');
const _ap = new URLSearchParams(hashParams);
const urlTab = _ap.get('tab');
if (urlTab) {
const tabBtn = analyticsTabs.querySelector(`[data-tab="${urlTab}"]`);
if (tabBtn) {
@@ -119,10 +149,22 @@
_currentTab = urlTab;
}
}
// #749 — restore time window from URL.
const urlWindow = _ap.get('window');
if (urlWindow) {
const twInit = document.getElementById('analyticsTimeWindow');
if (twInit) twInit.value = urlWindow;
}
RegionFilter.init(document.getElementById('analyticsRegionFilter'));
RegionFilter.onChange(function () { loadAnalytics(); });
// Time-window picker (#842) — refresh analytics on change.
const tw = document.getElementById('analyticsTimeWindow');
if (tw) {
tw.addEventListener('change', function () { _updateAnalyticsUrl(); loadAnalytics(); });
}
// Delegated click/keyboard handler for clickable table rows
const analyticsContent = document.getElementById('analyticsContent');
if (analyticsContent) {
@@ -150,14 +192,24 @@
async function loadAnalytics() {
try {
_analyticsData = {};
const rqs = RegionFilter.regionQueryString();
const sep = rqs ? '?' + rqs.slice(1) : '';
const rqs = RegionFilter.regionQueryString(); // "&region=..." or ""
// Time window picker (#842) — append &window=… when set.
// NOTE: only the three window-aware endpoints (rf/topology/channels)
// receive ?window=…; hash-sizes and hash-collisions are about node
// identity / hash-byte distribution and intentionally span all data.
const twEl = document.getElementById('analyticsTimeWindow');
const twVal = twEl ? twEl.value : '';
const tws = twVal ? '&window=' + encodeURIComponent(twVal) : '';
const baseQS = rqs.slice(1); // drop leading '&', "" or "region=…"
const sepBase = baseQS ? '?' + baseQS : '';
const windowedQS = (rqs + tws).slice(1);
const sepWin = windowedQS ? '?' + windowedQS : '';
const [hashData, rfData, topoData, chanData, collisionData] = await Promise.all([
api('/analytics/hash-sizes' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/rf' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/topology' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/channels' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-collisions' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-sizes' + sepBase, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/rf' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/topology' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/channels' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-collisions' + sepBase, { ttl: CLIENT_TTL.analyticsRF }),
]);
_analyticsData = { hashData, rfData, topoData, chanData, collisionData };
renderTab(_currentTab);
@@ -711,6 +763,7 @@
// ===================== CHANNELS =====================
var _channelSortState = null;
var _channelData = null;
var _channelRenderGen = 0;
var CHANNEL_SORT_KEY = 'meshcore-channel-sort';
function loadChannelSort() {
@@ -721,6 +774,18 @@
return { col: 'lastActivity', dir: 'desc' };
}
// True when the user has explicitly chosen a sort (saved in localStorage).
// Used by the grouped analytics view to decide whether to apply its own
// default ("messages desc") instead of the global flat-list default.
function hasSavedChannelSort() {
try {
var s = localStorage.getItem(CHANNEL_SORT_KEY);
if (!s) return false;
var p = JSON.parse(s);
return !!(p && p.col && p.dir);
} catch (e) { return false; }
}
function saveChannelSort(state) {
try { localStorage.setItem(CHANNEL_SORT_KEY, JSON.stringify(state)); } catch (e) {}
}
@@ -755,20 +820,107 @@
}
function channelRowHtml(c) {
var name = c.displayName || c.name || 'Unknown';
return '<tr class="clickable-row" data-action="navigate" data-value="#/channels?ch=' + c.hash + '" tabindex="0" role="row">' +
'<td><strong>' + esc(c.name || 'Unknown') + '</strong></td>' +
'<td><strong>' + esc(name) + '</strong></td>' +
'<td class="mono">' + (typeof c.hash === 'number' ? '0x' + c.hash.toString(16).toUpperCase().padStart(2, '0') : c.hash) + '</td>' +
'<td>' + c.messages + '</td>' +
'<td>' + c.senders + '</td>' +
'<td>' + timeAgo(c.lastActivity) + '</td>' +
'<td>' + (c.encrypted ? '🔒' : '✅') + '</td>' +
'<td>' + (c.encrypted ? (c.group === 'mine' ? '🔑' : '🔒') : '✅') + '</td>' +
'</tr>';
}
function channelTbodyHtml(channels, col, dir) {
// ── PSK-aware decoration ──────────────────────────────────────────────────
// Server returns raw "chNNN" placeholder names for encrypted channels it
// doesn't know. Decorate so the UI shows a useful display name and a
// group bucket: mine / network / encrypted. Pure function for testability.
function decorateAnalyticsChannels(channels, hashByteToKeyName, labels) {
var keyMap = hashByteToKeyName || {};
var lab = labels || {};
var out = [];
for (var i = 0; i < (channels || []).length; i++) {
var c = channels[i];
var copy = Object.assign({}, c);
var hashNum = typeof c.hash === 'number' ? c.hash : parseInt(c.hash, 10);
var rawName = String(c.name || '');
var isPlaceholder = /^ch(\d+|\?)$/.test(rawName);
if (c.encrypted) {
var keyName = !isNaN(hashNum) ? keyMap[hashNum] : null;
if (keyName) {
copy.displayName = lab[keyName] || keyName;
copy.group = 'mine';
} else if (isPlaceholder || !rawName) {
// Placeholder ("chNNN") or empty name → render as opaque encrypted.
// Empty-name encrypted rows would otherwise leak through with an
// empty <strong> in the row; force the placeholder rendering.
copy.displayName = !isNaN(hashNum)
? '🔒 Encrypted (0x' + hashNum.toString(16).toUpperCase().padStart(2, '0') + ')'
: '🔒 Encrypted';
copy.group = 'encrypted';
} else {
// Server gave us a real name (rainbow table hit) for an encrypted ch.
copy.displayName = rawName;
copy.group = 'network';
}
} else {
copy.displayName = rawName || 'Unknown';
copy.group = 'network';
}
out.push(copy);
}
return out;
}
// Build the (hash byte → key name) map from ChannelDecrypt's stored keys.
// Async because computeChannelHash uses subtle.digest. Returns {} if the
// module or its keys are unavailable (graceful fallback).
async function buildHashKeyMap() {
if (typeof ChannelDecrypt === 'undefined' || !ChannelDecrypt.getStoredKeys) return {};
var keys = ChannelDecrypt.getStoredKeys();
var map = {};
var names = Object.keys(keys || {});
for (var ni = 0; ni < names.length; ni++) {
var name = names[ni];
try {
var bytes = ChannelDecrypt.hexToBytes(keys[name]);
var hb = await ChannelDecrypt.computeChannelHash(bytes);
if (typeof hb === 'number') map[hb] = name;
} catch (e) { /* skip bad key */ }
}
return map;
}
function channelTbodyHtml(channels, col, dir, opts) {
var sorted = sortChannels(channels, col, dir);
var parts = [];
for (var i = 0; i < sorted.length; i++) parts.push(channelRowHtml(sorted[i]));
if (opts && opts.grouped) {
// Group by .group: mine → network → encrypted. Inside each group keep
// the active sort (caller passes col/dir; for the integration we sort
// by messages desc by default).
var groups = { mine: [], network: [], encrypted: [] };
for (var gi = 0; gi < sorted.length; gi++) {
var g = sorted[gi].group || (sorted[gi].encrypted ? 'encrypted' : 'network');
(groups[g] || (groups[g] = [])).push(sorted[gi]);
}
var sections = [
{ key: 'mine', label: '🔑 My Channels' },
{ key: 'network', label: '📻 Network' },
{ key: 'encrypted', label: '🔒 Encrypted' },
];
for (var si = 0; si < sections.length; si++) {
var rows = groups[sections[si].key] || [];
if (!rows.length) continue;
parts.push(
'<tr class="ch-section-row"><td colspan="6" class="ch-section-header">' +
esc(sections[si].label) + ' <span class="text-muted">(' + rows.length + ')</span>' +
'</td></tr>'
);
for (var ri = 0; ri < rows.length; ri++) parts.push(channelRowHtml(rows[ri]));
}
} else {
for (var i = 0; i < sorted.length; i++) parts.push(channelRowHtml(sorted[i]));
}
return parts.join('');
}
@@ -799,13 +951,39 @@
var tbody = document.getElementById('channelsTbody');
var thead = document.querySelector('#channelsTable thead');
if (!tbody || !_channelData) return;
tbody.innerHTML = channelTbodyHtml(_channelData, _channelSortState.col, _channelSortState.dir);
tbody.innerHTML = channelTbodyHtml(_channelData, _channelSortState.col, _channelSortState.dir, { grouped: true });
if (thead) thead.outerHTML = channelTheadHtml(_channelSortState.col, _channelSortState.dir);
}
function renderChannels(el, ch) {
_channelData = ch.channels;
if (!_channelSortState) _channelSortState = loadChannelSort();
// Decorate first so grouping/display name reflect locally-stored PSK keys.
// buildHashKeyMap is async; render once with a sync best-effort empty map,
// then upgrade once keys resolve. That keeps first paint fast and avoids
// blocking on subtle.digest in environments where it's slow.
var rawChannels = ch.channels || [];
// Resolve the persisted sort first so the default-fallback below doesn't
// shadow what the user previously chose. Default for the grouped view is
// messages desc (matches the PR description); only used when nothing saved.
if (!_channelSortState) {
_channelSortState = hasSavedChannelSort()
? loadChannelSort()
: { col: 'messages', dir: 'desc' };
}
var ranOnce = false;
// Generation token: if renderChannels is called again before
// buildHashKeyMap() resolves, the older promise must not clobber the
// newer rawChannels / decoration with stale-key data.
var myGen = ++_channelRenderGen;
function applyDecorate(map) {
if (myGen !== _channelRenderGen) return; // superseded
var labels = (typeof ChannelDecrypt !== 'undefined' && ChannelDecrypt.getLabels)
? ChannelDecrypt.getLabels() : {};
_channelData = decorateAnalyticsChannels(rawChannels, map, labels);
if (ranOnce) updateChannelTable();
}
applyDecorate({});
ranOnce = true;
buildHashKeyMap().then(applyDecorate).catch(function () { /* graceful */ });
var timelineHtml = renderChannelTimeline(ch.channelTimeline);
var topSendersHtml = renderTopSenders(ch.topSenders);
@@ -818,7 +996,7 @@
'<table class="analytics-table" id="channelsTable">' +
channelTheadHtml(_channelSortState.col, _channelSortState.dir) +
'<tbody id="channelsTbody">' +
channelTbodyHtml(_channelData, _channelSortState.col, _channelSortState.dir) +
channelTbodyHtml(_channelData, _channelSortState.col, _channelSortState.dir, { grouped: true }) +
'</tbody>' +
'</table>' +
'</div>' +
@@ -1732,8 +1910,8 @@
<div class="subpath-section">
<h5> Timeline</h5>
<div>First seen: ${data.firstSeen ? new Date(data.firstSeen).toLocaleString() : '—'}</div>
<div>Last seen: ${data.lastSeen ? new Date(data.lastSeen).toLocaleString() : '—'}</div>
<div>First seen: ${data.firstSeen ? (typeof formatAbsoluteTimestamp === 'function' ? formatAbsoluteTimestamp(data.firstSeen) : new Date(data.firstSeen).toLocaleString()) : '—'}</div>
<div>Last seen: ${data.lastSeen ? (typeof formatAbsoluteTimestamp === 'function' ? formatAbsoluteTimestamp(data.lastSeen) : new Date(data.lastSeen).toLocaleString()) : '—'}</div>
</div>
${data.observers.length ? `
@@ -2029,6 +2207,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
// Expose for testing
if (typeof window !== 'undefined') {
window._analyticsDecorateChannels = decorateAnalyticsChannels;
window._analyticsSortChannels = sortChannels;
window._analyticsLoadChannelSort = loadChannelSort;
window._analyticsSaveChannelSort = saveChannelSort;
@@ -2660,7 +2839,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const name = esc(n.name || n.public_key.slice(0, 12));
const role = n.role ? `<span class="text-muted" style="font-size:0.82em">${esc(n.role)}</span>` : '';
const hs = n.hash_size ? ` <span class="text-muted" style="font-size:0.78em;opacity:0.7">${n.hash_size}B hash</span>` : '';
const when = n.last_seen ? ` <span class="text-muted" style="font-size:0.8em">${new Date(n.last_seen).toLocaleDateString()}</span>` : '';
const when = n.last_seen ? ` <span class="text-muted" style="font-size:0.8em">${(typeof formatAbsoluteTimestamp === 'function') ? formatAbsoluteTimestamp(n.last_seen) : new Date(n.last_seen).toLocaleDateString()}</span>` : '';
return `<div style="padding:3px 0"><a href="#/nodes/${encodeURIComponent(n.public_key)}" class="analytics-link">${name}</a> ${role}${hs}${when}</div>`;
}
@@ -3158,7 +3337,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const t = new Date(d.t);
const x = sx(t.getTime());
const y = sy(d.v);
const ts = t.toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC');
const ts = (typeof formatAbsoluteTimestamp === 'function') ? formatAbsoluteTimestamp(d.t) : t.toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC');
const tip = `${label}: ${formatV(d.v)}${unit}\n${ts}`;
svg += `<circle cx="${x.toFixed(1)}" cy="${y.toFixed(1)}" r="8" fill="transparent" stroke="none" pointer-events="all"><title>${tip}</title></circle>`;
});
@@ -3172,7 +3351,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const idx = Math.floor(i * (data.length - 1) / Math.max(xTicks - 1, 1));
const t = new Date(data[idx].t);
const x = sx(t.getTime());
const label = t.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
const label = (typeof formatChartAxisLabel === 'function') ? formatChartAxisLabel(t, true) : t.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
svg += `<text x="${x.toFixed(1)}" y="${h - 5}" text-anchor="middle" font-size="9" fill="var(--text-muted)">${label}</text>`;
}
return svg;
+286 -5
View File
@@ -4,7 +4,7 @@
// --- Route/Payload name maps ---
const ROUTE_TYPES = { 0: 'TRANSPORT_FLOOD', 1: 'FLOOD', 2: 'DIRECT', 3: 'TRANSPORT_DIRECT' };
const PAYLOAD_TYPES = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 6: 'Group Data', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 10: 'Multipart', 11: 'Control', 15: 'Raw Custom' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 7: 'anon-req', 8: 'path', 9: 'trace' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 6: 'grp-data', 7: 'anon-req', 8: 'path', 9: 'trace', 10: 'multipart', 11: 'control', 15: 'raw-custom' };
function routeTypeName(n) { return ROUTE_TYPES[n] || 'UNKNOWN'; }
function payloadTypeName(n) { return PAYLOAD_TYPES[n] || 'UNKNOWN'; }
@@ -14,6 +14,71 @@ function isTransportRoute(rt) { return rt === 0 || rt === 3; }
function getPathLenOffset(routeType) { return isTransportRoute(routeType) ? 5 : 1; }
function transportBadge(rt) { return isTransportRoute(rt) ? ' <span class="badge badge-transport" title="' + routeTypeName(rt) + '">T</span>' : ''; }
/**
* Compute breakdown byte ranges from raw_hex on the client.
* Mirrors cmd/server/decoder.go BuildBreakdown(). Used so per-observation raw_hex
* (which can differ in path length from the top-level packet) gets accurate
* highlighted byte ranges, instead of using the server-supplied breakdown
* computed once from the top-level raw_hex.
*/
function computeBreakdownRanges(hexString, routeType, payloadType) {
if (!hexString) return [];
const clean = hexString.replace(/\s+/g, '');
const bytes = clean.length / 2;
if (bytes < 2) return [];
const ranges = [];
// Header
ranges.push({ start: 0, end: 0, label: 'Header' });
let offset = 1;
if (isTransportRoute(routeType)) {
if (bytes < offset + 4) return ranges;
ranges.push({ start: offset, end: offset + 3, label: 'Transport Codes' });
offset += 4;
}
if (offset >= bytes) return ranges;
// Path Length byte
ranges.push({ start: offset, end: offset, label: 'Path Length' });
const pathByte = parseInt(clean.slice(offset * 2, offset * 2 + 2), 16);
offset += 1;
if (isNaN(pathByte)) return ranges;
const hashSize = (pathByte >> 6) + 1;
const hashCount = pathByte & 0x3F;
const pathBytes = hashSize * hashCount;
if (hashCount > 0 && offset + pathBytes <= bytes) {
ranges.push({ start: offset, end: offset + pathBytes - 1, label: 'Path' });
}
offset += pathBytes;
if (offset >= bytes) return ranges;
const payloadStart = offset;
// ADVERT (payload_type 4) gets sub-fields when full record present
if (payloadType === 4 && bytes - payloadStart >= 100) {
ranges.push({ start: payloadStart, end: payloadStart + 31, label: 'PubKey' });
ranges.push({ start: payloadStart + 32, end: payloadStart + 35, label: 'Timestamp' });
ranges.push({ start: payloadStart + 36, end: payloadStart + 99, label: 'Signature' });
const appStart = payloadStart + 100;
if (appStart < bytes) {
ranges.push({ start: appStart, end: appStart, label: 'Flags' });
const appFlags = parseInt(clean.slice(appStart * 2, appStart * 2 + 2), 16);
let fOff = appStart + 1;
if (!isNaN(appFlags)) {
if ((appFlags & 0x10) && fOff + 8 <= bytes) {
ranges.push({ start: fOff, end: fOff + 3, label: 'Latitude' });
ranges.push({ start: fOff + 4, end: fOff + 7, label: 'Longitude' });
fOff += 8;
}
if ((appFlags & 0x20) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x40) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x80) && fOff < bytes) {
ranges.push({ start: fOff, end: bytes - 1, label: 'Name' });
}
}
}
} else {
ranges.push({ start: payloadStart, end: bytes - 1, label: 'Payload' });
}
return ranges;
}
// --- Utilities ---
const _apiPerf = { calls: 0, totalMs: 0, log: [], cacheHits: 0 };
const _apiCache = new Map();
@@ -244,6 +309,39 @@ function formatTimestampWithTooltip(isoString, mode) {
return { text, tooltip, isFuture };
}
// Format a Date for chart axis labels, respecting customizer timestamp settings.
// shortForm: true = time only (for intra-day), false = date+time (multi-day).
function formatChartAxisLabel(d, shortForm) {
if (!(d instanceof Date) || !isFinite(d.getTime())) return '—';
var timezone = (typeof getTimestampTimezone === 'function') ? getTimestampTimezone() : 'local';
var preset = (typeof getTimestampFormatPreset === 'function') ? getTimestampFormatPreset() : 'iso';
var useUtc = timezone === 'utc';
if (preset === 'locale') {
if (shortForm) {
var opts = { hour: '2-digit', minute: '2-digit' };
if (useUtc) opts.timeZone = 'UTC';
return d.toLocaleTimeString([], opts);
}
var opts2 = { month: 'short', day: 'numeric', hour: '2-digit', minute: '2-digit' };
if (useUtc) opts2.timeZone = 'UTC';
return d.toLocaleString([], opts2);
}
// ISO-style (iso or iso-seconds)
var hour = useUtc ? d.getUTCHours() : d.getHours();
var minute = useUtc ? d.getUTCMinutes() : d.getMinutes();
var timeStr = pad2(hour) + ':' + pad2(minute);
if (preset === 'iso-seconds') {
var sec = useUtc ? d.getUTCSeconds() : d.getSeconds();
timeStr += ':' + pad2(sec);
}
if (shortForm) return timeStr;
var month = useUtc ? d.getUTCMonth() + 1 : d.getMonth() + 1;
var day = useUtc ? d.getUTCDate() : d.getDate();
return pad2(month) + '-' + pad2(day) + ' ' + timeStr;
}
function truncate(str, len) {
if (!str) return '';
return str.length > len ? str.slice(0, len) + '…' : str;
@@ -403,6 +501,148 @@ function connectWS() {
function onWS(fn) { wsListeners.push(fn); }
function offWS(fn) { wsListeners = wsListeners.filter(f => f !== fn); }
// --- Pull-to-reconnect (#1063) ---
// Touch-device pull-down at scrollTop=0 reconnects the WebSocket
// (instead of triggering native pull-to-refresh full-page reload).
// Visual indicator pulses during pull; toast confirms result.
const PULL_THRESHOLD_PX = 80;
let _pullToast = null;
let _pullToastTimer = null;
let _pullIndicator = null;
function _ensurePullIndicator() {
if (_pullIndicator && document.body && typeof document.body.contains === 'function' && document.body.contains(_pullIndicator)) return _pullIndicator;
if (_pullIndicator) return _pullIndicator;
const el = document.createElement('div');
el.id = 'pullReconnectIndicator';
el.setAttribute('aria-hidden', 'true');
el.innerHTML = '<span class="prr-icon">⟳</span>';
el.style.cssText = [
'position:fixed', 'top:0', 'left:50%', 'transform:translate(-50%,-100%)',
'z-index:99999', 'padding:8px 14px', 'border-radius:0 0 12px 12px',
'background:var(--accent,#2563eb)', 'color:#fff', 'font:14px/1 var(--font,system-ui)',
'box-shadow:0 2px 8px rgba(0,0,0,.2)', 'pointer-events:none',
'transition:transform .15s ease, opacity .15s ease', 'opacity:0',
].join(';');
document.body.appendChild(el);
_pullIndicator = el;
return el;
}
function _showPullToast(msg, ok) {
try {
if (_pullToast && _pullToast.remove) _pullToast.remove();
} catch (e) {}
if (_pullToastTimer) { try { clearTimeout(_pullToastTimer); } catch (e) {} _pullToastTimer = null; }
const el = document.createElement('div');
el.className = 'pull-reconnect-toast';
el.textContent = msg;
el.style.cssText = [
'position:fixed', 'top:12px', 'left:50%', 'transform:translateX(-50%)',
'z-index:99999', 'padding:8px 16px', 'border-radius:8px',
'background:' + (ok ? 'var(--status-green,#16a34a)' : 'var(--status-red,#dc2626)'),
'color:#fff', 'font:14px/1.2 var(--font,system-ui)',
'box-shadow:0 2px 8px rgba(0,0,0,.2)', 'pointer-events:none',
].join(';');
document.body.appendChild(el);
_pullToast = el;
_pullToastTimer = setTimeout(function () {
_pullToastTimer = null;
try { el.remove(); } catch (e) {}
}, 1800);
}
function pullReconnect() {
// If WS is connected (readyState OPEN), give a brief "Connected ✓"
// confirmation but still cycle so the user sees fresh data.
const wasOpen = ws && ws.readyState === 1;
if (wasOpen) {
_showPullToast('Connected ✓', true);
// Fast cycle: close and let onclose reconnect immediately
try { ws.close(); } catch (e) {}
} else {
_showPullToast('Reconnecting…', true);
try { if (ws) ws.close(); } catch (e) {}
// onclose handler schedules reconnect; force one now in case ws was null
try { connectWS(); } catch (e) {}
}
}
function _isTouchDevice() {
try {
return ('ontouchstart' in window) ||
(navigator && (navigator.maxTouchPoints > 0 || navigator.msMaxTouchPoints > 0));
} catch (e) { return false; }
}
function setupPullToReconnect() {
// Always attach listeners (tests + future-proof). Inside the handler we
// gate on _isTouchDevice() AND scrollTop=0 so desktop/scrolled pages are
// unaffected.
let startY = null;
let pulling = false;
let dist = 0;
function getScrollTop() {
return (document.documentElement && document.documentElement.scrollTop) ||
(document.body && document.body.scrollTop) || 0;
}
function onStart(e) {
if (!_isTouchDevice()) return;
if (getScrollTop() > 0) { startY = null; pulling = false; return; }
const t = e.touches && e.touches[0];
startY = t ? t.clientY : null;
pulling = false;
dist = 0;
}
function onMove(e) {
if (startY == null) return;
if (getScrollTop() > 0) { startY = null; pulling = false; return; }
const t = e.touches && e.touches[0];
if (!t) return;
const dy = t.clientY - startY;
if (dy <= 0) return; // upward swipe — ignore
dist = dy;
if (dy > 8) {
pulling = true;
const ind = _ensurePullIndicator();
const pct = Math.min(1, dy / PULL_THRESHOLD_PX);
ind.style.opacity = String(pct);
ind.style.transform = 'translate(-50%, ' + (-100 + pct * 100) + '%)';
const icon = ind.querySelector && ind.querySelector('.prr-icon');
if (icon) icon.style.transform = 'rotate(' + Math.round(pct * 360) + 'deg)';
// Prevent native pull-to-refresh ONLY once we've committed to the gesture
if (dy > 16 && typeof e.preventDefault === 'function' && e.cancelable !== false) {
try { e.preventDefault(); } catch (_) {}
}
}
}
function onEnd() {
const wasPulling = pulling;
const finalDist = dist;
startY = null; pulling = false; dist = 0;
if (_pullIndicator) {
_pullIndicator.style.opacity = '0';
_pullIndicator.style.transform = 'translate(-50%, -100%)';
}
if (wasPulling && finalDist >= PULL_THRESHOLD_PX) {
try { (window.pullReconnect || pullReconnect)(); } catch (e) {}
}
}
document.addEventListener('touchstart', onStart, { passive: true });
document.addEventListener('touchmove', onMove, { passive: false });
document.addEventListener('touchend', onEnd, { passive: true });
document.addEventListener('touchcancel', onEnd, { passive: true });
}
window.pullReconnect = pullReconnect;
window.setupPullToReconnect = setupPullToReconnect;
window.connectWS = connectWS;
/* Global escapeHtml — used by multiple pages */
function escapeHtml(s) {
if (s == null) return '';
@@ -440,6 +680,21 @@ const pages = {};
function registerPage(name, mod) { pages[name] = mod; }
// Tools landing page — shows sub-menu with Trace and Path Inspector (spec §2.8, M1 fix).
registerPage('tools-landing', {
init: function (container) {
container.innerHTML =
'<div class="tools-landing">' +
'<h2>Tools</h2>' +
'<div class="tools-menu">' +
'<a href="#/tools/path-inspector" class="tools-card"><h3>🔍 Path Inspector</h3><p>Resolve prefix paths to candidate full-pubkey routes with confidence scoring.</p></a>' +
'<a href="#/tools/trace/" class="tools-card"><h3>📡 Trace Viewer</h3><p>View detailed packet traces by hash.</p></a>' +
'</div>' +
'</div>';
},
destroy: function () {}
});
let currentPage = null;
function closeNav() {
@@ -460,6 +715,12 @@ function closeMoreMenu() {
function navigate() {
closeNav();
// Backward-compat redirect: #/traces/<hash> → #/tools/trace/<hash> (issue #944).
if (location.hash.startsWith('#/traces/')) {
location.hash = location.hash.replace('#/traces/', '#/tools/trace/');
return;
}
const hash = location.hash.replace('#/', '') || 'packets';
const route = hash.split('?')[0];
@@ -487,9 +748,27 @@ function navigate() {
basePage = 'observer-detail';
}
// Tools sub-routing (issue #944): tools/trace/<hash>, tools/path-inspector
if (basePage === 'tools') {
if (routeParam && routeParam.startsWith('trace/')) {
basePage = 'traces';
routeParam = routeParam.substring(6); // strip "trace/"
} else if (routeParam === 'path-inspector' || (routeParam && routeParam.startsWith('path-inspector'))) {
basePage = 'path-inspector';
routeParam = null;
} else if (!routeParam) {
// Default tools landing shows menu with both entries.
basePage = 'tools-landing';
}
}
// Also support old #/traces (no sub-path) → traces page.
if (basePage === 'traces' && !routeParam) {
basePage = 'traces';
}
// Update nav active state
document.querySelectorAll('.nav-link[data-route]').forEach(el => {
el.classList.toggle('active', el.dataset.route === basePage);
el.classList.toggle('active', el.dataset.route === basePage || (el.dataset.route === 'tools' && (basePage === 'traces' || basePage === 'path-inspector' || basePage === 'tools-landing')));
});
// Update "More" button to show active state if a low-priority page is selected
var moreBtn = document.getElementById('navMoreBtn');
@@ -539,6 +818,7 @@ window.addEventListener('timestamp-mode-changed', () => {
});
window.addEventListener('DOMContentLoaded', () => {
connectWS();
setupPullToReconnect();
// --- Dark Mode ---
const darkToggle = document.getElementById('darkModeToggle');
@@ -861,10 +1141,11 @@ window.addEventListener('DOMContentLoaded', () => {
}).catch(() => {
window.SITE_CONFIG = { timestamps: { defaultMode: 'ago', timezone: 'local', formatPreset: 'iso', customFormat: '', allowCustomFormat: false } };
if (window._customizerV2) window._customizerV2.init(window.SITE_CONFIG);
}).finally(() => {
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});
// Navigate immediately — don't gate data-fetching pages on cosmetic theme fetch
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});
/**
+2 -8
View File
@@ -120,8 +120,8 @@
var ph = rect.height;
var vw = window.innerWidth;
var vh = window.innerHeight;
var finalX = x + pw > vw ? Math.max(0, vw - pw - 8) : x;
var finalY = y + ph > vh ? Math.max(0, vh - ph - 8) : y;
var finalX = x + pw > vw ? Math.max(0, vw - pw - 14) : x;
var finalY = y + ph > vh ? Math.max(0, vh - ph - 14) : y;
el.style.left = finalX + 'px';
el.style.top = finalY + 'px';
}
@@ -228,12 +228,6 @@
if (ch) showPopover(ch, e.clientX, e.clientY);
});
feed.addEventListener('contextmenu', function(e) {
var item = e.target.closest('.live-feed-item');
if (!item || !item._ccChannel) return;
e.preventDefault();
showPopover(item._ccChannel, e.clientX, e.clientY);
});
}
/**
+185 -41
View File
@@ -15,6 +15,7 @@ window.ChannelDecrypt = (function () {
'use strict';
var STORAGE_KEY = 'corescope_channel_keys';
var LABELS_KEY = 'corescope_channel_labels';
var CACHE_KEY = 'corescope_channel_cache';
// ---- Hex utilities ----
@@ -37,6 +38,25 @@ window.ChannelDecrypt = (function () {
// ---- Key derivation ----
// Detect whether SubtleCrypto is available. SubtleCrypto is only exposed
// in **secure contexts** (HTTPS or localhost) — when CoreScope is served
// over plain HTTP, `crypto.subtle` is undefined and any digest/HMAC call
// throws. We fall back to the vendored pure-JS implementation in
// public/vendor/sha256-hmac.js. PR #1021 did the same for AES-ECB.
function hasSubtle() {
return typeof crypto !== 'undefined' && crypto && crypto.subtle && typeof crypto.subtle.digest === 'function';
}
function pureCryptoOrThrow() {
var host = (typeof window !== 'undefined') ? window
: (typeof self !== 'undefined') ? self : null;
if (!host || !host.PureCrypto || !host.PureCrypto.sha256 || !host.PureCrypto.hmacSha256) {
throw new Error('PureCrypto vendor module not loaded (public/vendor/sha256-hmac.js). ' +
'crypto.subtle is unavailable (HTTP context) and no fallback present.');
}
return host.PureCrypto;
}
/**
* Derive AES-128 key from channel name: SHA-256("#channelname")[:16].
* @param {string} channelName - e.g. "#LongFast"
@@ -44,8 +64,12 @@ window.ChannelDecrypt = (function () {
*/
async function deriveKey(channelName) {
var enc = new TextEncoder();
var hash = await crypto.subtle.digest('SHA-256', enc.encode(channelName));
return new Uint8Array(hash).slice(0, 16);
var data = enc.encode(channelName);
if (hasSubtle()) {
var hash = await crypto.subtle.digest('SHA-256', data);
return new Uint8Array(hash).slice(0, 16);
}
return pureCryptoOrThrow().sha256(data).slice(0, 16);
}
/**
@@ -54,46 +78,41 @@ window.ChannelDecrypt = (function () {
* @returns {Promise<number>} single byte (0-255)
*/
async function computeChannelHash(key) {
var hash = await crypto.subtle.digest('SHA-256', key);
return new Uint8Array(hash)[0];
if (hasSubtle()) {
var hash = await crypto.subtle.digest('SHA-256', key);
return new Uint8Array(hash)[0];
}
return pureCryptoOrThrow().sha256(key)[0];
}
// ---- AES-128-ECB via Web Crypto (CBC with zero IV, block-by-block) ----
// ---- AES-128-ECB via vendored pure-JS implementation ----
//
// Web Crypto exposes AES-CBC/CTR/GCM but NOT raw AES-ECB. The previous
// implementation simulated ECB with AES-CBC + zero IV + a dummy PKCS7
// padding block; that hack throws OperationError on real ciphertext
// because Web Crypto validates PKCS7 padding on the decrypted output
// and the dummy padding bytes rarely form a valid PKCS7 sequence
// after decryption. We use a pure-JS AES-128 ECB core
// (public/vendor/aes-ecb.js, MIT, derived from aes-js by Richard
// Moore) so decryption is deterministic across browsers and works in
// HTTP contexts.
/**
* Decrypt AES-128-ECB by decrypting each 16-byte block independently
* using AES-CBC with a zero IV (equivalent to ECB for single blocks).
* Decrypt AES-128-ECB.
* @param {Uint8Array} key - 16-byte AES key
* @param {Uint8Array} ciphertext - must be multiple of 16 bytes
* @returns {Promise<Uint8Array>} plaintext
* @param {Uint8Array} ciphertext - must be a non-zero multiple of 16 bytes
* @returns {Promise<Uint8Array|null>} plaintext, or null on invalid input
*/
async function decryptECB(key, ciphertext) {
if (ciphertext.length === 0 || ciphertext.length % 16 !== 0) {
if (!ciphertext || ciphertext.length === 0 || ciphertext.length % 16 !== 0) {
return null;
}
var cryptoKey = await crypto.subtle.importKey(
'raw', key, { name: 'AES-CBC' }, false, ['decrypt']
);
var zeroIV = new Uint8Array(16);
var plaintext = new Uint8Array(ciphertext.length);
for (var i = 0; i < ciphertext.length; i += 16) {
var block = ciphertext.slice(i, i + 16);
// Append a dummy block (16 bytes of 0x10 = PKCS7 padding for empty next block)
// so Web Crypto doesn't complain about padding
var padded = new Uint8Array(32);
padded.set(block, 0);
// Second block is PKCS7 padding: 16 bytes of 0x10
for (var j = 16; j < 32; j++) padded[j] = 16;
var decrypted = await crypto.subtle.decrypt(
{ name: 'AES-CBC', iv: zeroIV }, cryptoKey, padded
);
var decBytes = new Uint8Array(decrypted);
plaintext.set(decBytes.slice(0, 16), i);
var host = (typeof window !== 'undefined') ? window
: (typeof self !== 'undefined') ? self : null;
if (!host || !host.AES_ECB || !host.AES_ECB.decrypt) {
throw new Error('AES_ECB vendor module not loaded (public/vendor/aes-ecb.js)');
}
return plaintext;
return host.AES_ECB.decrypt(key, ciphertext);
}
// ---- MAC verification ----
@@ -111,13 +130,17 @@ window.ChannelDecrypt = (function () {
secret.set(key, 0);
// remaining 16 bytes are already 0
var cryptoKey = await crypto.subtle.importKey(
'raw', secret, { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
);
var sig = await crypto.subtle.sign('HMAC', cryptoKey, ciphertext);
var sigBytes = new Uint8Array(sig);
var macBytes = hexToBytes(macHex);
var sigBytes;
if (hasSubtle() && typeof crypto.subtle.importKey === 'function' && typeof crypto.subtle.sign === 'function') {
var cryptoKey = await crypto.subtle.importKey(
'raw', secret, { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
);
var sig = await crypto.subtle.sign('HMAC', cryptoKey, ciphertext);
sigBytes = new Uint8Array(sig);
} else {
sigBytes = pureCryptoOrThrow().hmacSha256(secret, ciphertext);
}
return sigBytes[0] === macBytes[0] && sigBytes[1] === macBytes[1];
}
@@ -187,12 +210,96 @@ window.ChannelDecrypt = (function () {
// Alias used by channels.js
var decryptPacket = decrypt;
// ---- Live PSK decrypt (WS path) ----
//
// Build a Map<channelHashByte, { channelName, keyBytes, keyHex }> from all
// stored PSK keys so the WebSocket handler can do an O(1) lookup on each
// incoming GRP_TXT packet. Hash byte derivation is async, so we cache the
// map between calls and only rebuild when the stored-keys set changes.
var _keyMapCache = null;
var _keyMapSig = '';
function _keysSignature(keys) {
var names = Object.keys(keys).sort();
var sig = '';
for (var i = 0; i < names.length; i++) {
sig += names[i] + '=' + keys[names[i]] + ';';
}
return sig;
}
async function buildKeyMap() {
var keys = getKeys();
var sig = _keysSignature(keys);
if (_keyMapCache && _keyMapSig === sig) return _keyMapCache;
var map = new Map();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var channelName = names[i];
var keyHex = keys[channelName];
if (!keyHex || typeof keyHex !== 'string') continue;
var keyBytes;
try { keyBytes = hexToBytes(keyHex); } catch (e) { continue; }
if (keyBytes.length !== 16) continue;
var hashByte;
try { hashByte = await computeChannelHash(keyBytes); } catch (e) { continue; }
// First-write-wins on collision (rare): different channel names can
// hash to the same byte. The downstream MAC check still gates rendering.
if (!map.has(hashByte)) {
map.set(hashByte, { channelName: channelName, keyBytes: keyBytes, keyHex: keyHex });
}
}
_keyMapCache = map;
_keyMapSig = sig;
return map;
}
/**
* Attempt to decrypt a live GRP_TXT payload using a prebuilt key map.
* Returns { sender, text, channelName, channelHashByte } on success,
* or null when no key matches, MAC verification fails, or the payload
* is not an encrypted GRP_TXT.
*/
async function tryDecryptLive(payload, keyMap) {
if (!payload || payload.type !== 'GRP_TXT') return null;
if (!payload.encryptedData || !payload.mac) return null;
if (!keyMap || typeof keyMap.get !== 'function') return null;
var hashByte = payload.channelHash;
// channelHash arrives as either a number or a hex string in some paths;
// normalize to number so Map.get hits.
if (typeof hashByte === 'string') {
var n = parseInt(hashByte, 16);
if (!isFinite(n)) return null;
hashByte = n;
}
if (typeof hashByte !== 'number') return null;
var entry = keyMap.get(hashByte);
if (!entry) return null;
var result;
try {
result = await decrypt(entry.keyBytes, payload.mac, payload.encryptedData);
} catch (e) { return null; }
if (!result) return null;
return {
sender: result.sender || 'Unknown',
text: result.message || '',
channelName: entry.channelName,
channelHashByte: hashByte,
timestamp: result.timestamp || null
};
}
// ---- Key storage (localStorage) ----
function saveKey(channelName, keyHex) {
function saveKey(channelName, keyHex, label) {
var keys = getKeys();
keys[channelName] = keyHex;
try { localStorage.setItem(STORAGE_KEY, JSON.stringify(keys)); } catch (e) { /* quota */ }
_keyMapCache = null; // invalidate live-decrypt index
if (typeof label === 'string' && label.trim()) {
saveLabel(channelName, label.trim());
}
}
// Alias used by channels.js
@@ -212,8 +319,39 @@ window.ChannelDecrypt = (function () {
var keys = getKeys();
delete keys[channelName];
try { localStorage.setItem(STORAGE_KEY, JSON.stringify(keys)); } catch (e) { /* quota */ }
// Also clear cached messages for this channel
_keyMapCache = null; // invalidate live-decrypt index
// Also clear cached messages and any label for this channel (#1020)
clearChannelCache(channelName);
var labels = getLabels();
if (labels[channelName]) {
delete labels[channelName];
try { localStorage.setItem(LABELS_KEY, JSON.stringify(labels)); } catch (e) { /* quota */ }
}
}
// ---- User-supplied display labels (#1020) ----
// Stored separately from keys so we can display friendly names instead of
// psk:<hex8> for user-added PSK channels.
function getLabels() {
try {
var raw = localStorage.getItem(LABELS_KEY);
return raw ? JSON.parse(raw) : {};
} catch (e) { return {}; }
}
function getLabel(channelName) {
var labels = getLabels();
return labels[channelName] || '';
}
function saveLabel(channelName, label) {
var labels = getLabels();
if (typeof label === 'string' && label.trim()) {
labels[channelName] = label.trim();
} else {
delete labels[channelName];
}
try { localStorage.setItem(LABELS_KEY, JSON.stringify(labels)); } catch (e) { /* quota */ }
}
/** Remove cached messages for a specific channel (by name or hash). */
@@ -286,10 +424,16 @@ window.ChannelDecrypt = (function () {
getKeys: getKeys,
getStoredKeys: getStoredKeys,
removeKey: removeKey,
// #1020: optional user-friendly display labels for stored keys
saveLabel: saveLabel,
getLabel: getLabel,
getLabels: getLabels,
clearChannelCache: clearChannelCache,
cacheMessages: cacheMessages,
getCachedMessages: getCachedMessages,
setCache: setCache,
getCache: getCache
getCache: getCache,
buildKeyMap: buildKeyMap,
tryDecryptLive: tryDecryptLive
};
})();
+256
View File
@@ -0,0 +1,256 @@
/**
* channel-qr.js QR code generation + scanning for MeshCore channels.
*
* URL format (per firmware spec):
* meshcore://channel/add?name=<urlencoded>&secret=<32hex>
*
* Public API (window.ChannelQR):
* buildUrl(name, secretHex) string
* parseChannelUrl(url) {name, secret} | null
* generate(name, secretHex, target) renders QR + URL + Copy Key into `target`
* scan() Promise<{name, secret} | null>
*
* Self-contained: does NOT touch channels.js / channel-decrypt.js.
* The PR that wires the modal into this module is #3.
*
* Vendored deps (loaded by index.html):
* - public/vendor/qrcode.js (davidshimjs/qrcodejs, MIT) QR rendering
* - public/vendor/jsqr.min.js (cozmo/jsQR, Apache-2.0) QR decoding from camera
*/
(function (root) {
'use strict';
const SCHEME_PREFIX = 'meshcore://channel/add';
const HEX32_RE = /^[0-9a-fA-F]{32}$/;
function buildUrl(name, secretHex) {
return SCHEME_PREFIX + '?name=' + encodeURIComponent(String(name)) +
'&secret=' + String(secretHex);
}
/**
* parseChannelUrl(url) { name, secret } | null
* Strict: scheme must be `meshcore:`, host+path `//channel/add`,
* both `name` and `secret` query params present, secret must be 32 hex chars.
*/
function parseChannelUrl(url) {
if (!url || typeof url !== 'string') return null;
if (url.indexOf(SCHEME_PREFIX) !== 0) return null;
// Strip prefix → query string
const rest = url.slice(SCHEME_PREFIX.length);
if (rest[0] !== '?' && rest !== '') return null;
const qs = rest.slice(1);
if (!qs) return null;
const params = {};
const pairs = qs.split('&');
for (let i = 0; i < pairs.length; i++) {
const eq = pairs[i].indexOf('=');
if (eq < 0) continue;
const k = pairs[i].slice(0, eq);
const v = pairs[i].slice(eq + 1);
try { params[k] = decodeURIComponent(v); }
catch (_e) { return null; }
}
if (!params.name || !params.secret) return null;
if (!HEX32_RE.test(params.secret)) return null;
return { name: params.name, secret: params.secret.toLowerCase() };
}
// ---------- DOM helpers (browser-only) ----------
function _hasDom() {
return typeof document !== 'undefined' && document.createElement;
}
/**
* Render QR + URL + Copy Key button into `target`.
* Requires window.QRCode (vendor/qrcode.js) loaded.
*/
function generate(name, secretHex, target) {
if (!_hasDom() || !target) return;
target.innerHTML = '';
const url = buildUrl(name, secretHex);
const qrBox = document.createElement('div');
qrBox.className = 'channel-qr-canvas';
qrBox.style.display = 'inline-block';
target.appendChild(qrBox);
if (typeof root.QRCode === 'function') {
try {
// davidshimjs/qrcodejs API: new QRCode(el, {text, width, height, ...})
new root.QRCode(qrBox, {
text: url,
width: 192,
height: 192,
correctLevel: root.QRCode.CorrectLevel ? root.QRCode.CorrectLevel.M : 0,
});
} catch (e) {
qrBox.textContent = '[QR render failed: ' + (e && e.message || e) + ']';
}
} else {
qrBox.textContent = '[QR library not loaded]';
}
const urlLine = document.createElement('div');
urlLine.className = 'channel-qr-url';
urlLine.style.cssText = 'font-family:monospace;font-size:11px;word-break:break-all;margin-top:6px;';
urlLine.textContent = url;
target.appendChild(urlLine);
const copyBtn = document.createElement('button');
copyBtn.type = 'button';
copyBtn.className = 'channel-qr-copy';
copyBtn.textContent = '📋 Copy Key';
copyBtn.style.cssText = 'margin-top:6px;';
copyBtn.addEventListener('click', function () {
const text = secretHex;
const done = function () {
const orig = copyBtn.textContent;
copyBtn.textContent = '✓ Copied';
setTimeout(function () { copyBtn.textContent = orig; }, 1200);
};
if (root.navigator && root.navigator.clipboard && root.navigator.clipboard.writeText) {
root.navigator.clipboard.writeText(text).then(done, function () {
// Fallback: select text in a temp input
_fallbackCopy(text); done();
});
} else {
_fallbackCopy(text); done();
}
});
target.appendChild(copyBtn);
}
function _fallbackCopy(text) {
if (!_hasDom()) return;
const ta = document.createElement('textarea');
ta.value = text;
ta.style.cssText = 'position:fixed;opacity:0;';
document.body.appendChild(ta);
ta.select();
try { document.execCommand('copy'); } catch (_e) {}
document.body.removeChild(ta);
}
// ---------- Camera scan ----------
/**
* scan() Promise<{name, secret} | null>
*
* Opens a small modal with a live camera preview, decodes via jsQR,
* resolves with the parsed channel info on first valid match. Closes
* camera on resolve/reject. Resolves with `null` if user cancels or
* camera permission is denied (graceful fallback path).
*/
function scan() {
if (!_hasDom()) return Promise.resolve(null);
const nav = root.navigator;
if (!nav || !nav.mediaDevices || !nav.mediaDevices.getUserMedia ||
typeof root.jsQR !== 'function') {
_showCameraFallback();
return Promise.resolve(null);
}
return new Promise(function (resolve) {
const overlay = document.createElement('div');
overlay.className = 'channel-qr-scan-overlay';
overlay.style.cssText = 'position:fixed;inset:0;background:rgba(0,0,0,0.85);' +
'display:flex;flex-direction:column;align-items:center;justify-content:center;z-index:99999;';
const video = document.createElement('video');
video.setAttribute('playsinline', 'true');
video.style.cssText = 'max-width:90vw;max-height:60vh;background:#000;';
overlay.appendChild(video);
const status = document.createElement('div');
status.style.cssText = 'color:#fff;margin-top:12px;font-family:sans-serif;';
status.textContent = 'Point camera at a MeshCore channel QR…';
overlay.appendChild(status);
const cancelBtn = document.createElement('button');
cancelBtn.type = 'button';
cancelBtn.textContent = 'Cancel';
cancelBtn.style.cssText = 'margin-top:12px;';
overlay.appendChild(cancelBtn);
document.body.appendChild(overlay);
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
let stream = null;
let rafId = 0;
let done = false;
function cleanup(result) {
if (done) return;
done = true;
if (rafId) cancelAnimationFrame(rafId);
if (stream) {
stream.getTracks().forEach(function (t) { try { t.stop(); } catch (_e) {} });
}
if (overlay.parentNode) overlay.parentNode.removeChild(overlay);
resolve(result);
}
cancelBtn.addEventListener('click', function () { cleanup(null); });
function tick() {
if (done) return;
if (video.readyState === video.HAVE_ENOUGH_DATA) {
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
let imgData;
try { imgData = ctx.getImageData(0, 0, canvas.width, canvas.height); }
catch (_e) { rafId = requestAnimationFrame(tick); return; }
const code = root.jsQR(imgData.data, imgData.width, imgData.height, {
inversionAttempts: 'dontInvert',
});
if (code && code.data) {
const parsed = parseChannelUrl(code.data);
if (parsed) { cleanup(parsed); return; }
status.textContent = 'QR found but not a MeshCore channel — keep trying…';
}
}
rafId = requestAnimationFrame(tick);
}
nav.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
.then(function (s) {
stream = s;
video.srcObject = s;
video.play().then(function () { tick(); }, function () { tick(); });
})
.catch(function () {
status.textContent = 'Camera not available — paste key manually.';
setTimeout(function () { cleanup(null); }, 1800);
});
});
}
function _showCameraFallback() {
if (!_hasDom()) return;
const note = document.createElement('div');
note.className = 'channel-qr-fallback';
note.style.cssText = 'position:fixed;bottom:20px;left:50%;transform:translateX(-50%);' +
'background:#222;color:#fff;padding:10px 14px;border-radius:6px;z-index:99999;';
note.textContent = 'Camera not available — paste key manually.';
document.body.appendChild(note);
setTimeout(function () {
if (note.parentNode) note.parentNode.removeChild(note);
}, 2500);
}
root.ChannelQR = {
buildUrl: buildUrl,
parseChannelUrl: parseChannelUrl,
generate: generate,
scan: scan,
};
})(typeof window !== 'undefined' ? window : globalThis);
+564 -126
View File
@@ -339,8 +339,10 @@
}
}
// Add a user channel by name (#channelname) or hex key
async function addUserChannel(val) {
// Add a user channel by name (#channelname) or hex key.
// `label` (#1020) is an optional friendly name shown in the sidebar instead
// of "psk:<hex8>" — stored alongside the key in localStorage.
async function addUserChannel(val, label) {
var displayName = val.startsWith('#') ? val : (isHexKey(val) ? val.substring(0, 8) + '…' : '#' + val);
showAddStatus('Decrypting ' + displayName + ' messages…', 'loading');
var channelName, keyHex;
@@ -359,7 +361,8 @@
keyHex = ChannelDecrypt.bytesToHex(keyBytes2);
}
ChannelDecrypt.storeKey(channelName, keyHex);
// #1020: persist optional user-supplied label alongside the key
ChannelDecrypt.storeKey(channelName, keyHex, label);
// Compute channel hash byte to find matching encrypted channels
var keyBytes3 = ChannelDecrypt.hexToBytes(keyHex);
@@ -378,35 +381,53 @@
if (existingEncrypted) {
targetHash = existingEncrypted.hash;
}
await selectChannel(targetHash, { userKey: keyHex, channelHashByte: hashByte, channelName: channelName });
var selectResult = await selectChannel(targetHash, { userKey: keyHex, channelHashByte: hashByte, channelName: channelName });
// Show success feedback (#759)
var msgCount = document.querySelectorAll('#chMessages .ch-msg').length;
var userDisplay = channelName.startsWith('psk:') ? 'Custom channel (' + channelName.substring(4) + ')' : channelName;
if (msgCount > 0) {
showAddStatus('Added ' + userDisplay + ' — ' + msgCount + ' messages decrypted', 'success');
// #1020: derive count from selectChannel's reported result, not from a
// DOM scrape that can race with rendering.
var msgCount = (selectResult && typeof selectResult.messageCount === 'number')
? selectResult.messageCount
: (Array.isArray(messages) ? messages.length : 0);
var displayLabel = (typeof label === 'string' && label.trim()) ? label.trim() :
(channelName.startsWith('psk:') ? 'Custom channel (' + channelName.substring(4) + ')' : channelName);
if (selectResult && selectResult.wrongKey) {
showAddStatus('Key does not match any packets for ' + displayLabel, 'error');
} else if (msgCount > 0) {
showAddStatus('Added ' + displayLabel + ' — ' + msgCount + ' messages decrypted', 'success');
} else {
showAddStatus('No messages found for ' + userDisplay, 'warn');
showAddStatus('Added ' + displayLabel + ' — no messages found yet', 'warn');
}
} catch (err) {
showAddStatus('Failed to decrypt', 'error');
}
}
// Merge user-stored keys into the channel list
// Merge user-stored keys into the channel list.
// If a stored key matches a server-known channel, mark that channel as
// userAdded so the ✕ button appears — otherwise the user has no way to
// remove a key they added but that the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var labels = (typeof ChannelDecrypt.getLabels === 'function') ? ChannelDecrypt.getLabels() : {};
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
// Check if channel already exists by name
var exists = channels.some(function (ch) {
return ch.name === name || ch.hash === name || ch.hash === ('user:' + name);
});
if (!exists) {
var label = labels[name] || '';
var matched = false;
for (var j = 0; j < channels.length; j++) {
var ch = channels[j];
if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
ch.userAdded = true;
if (label) ch.userLabel = label;
matched = true;
break;
}
}
if (!matched) {
channels.push({
hash: 'user:' + name,
name: name,
userLabel: label,
messageCount: 0,
lastActivityMs: 0,
lastSender: '',
@@ -610,28 +631,77 @@
<div class="ch-sidebar" aria-label="Channel list">
<div class="ch-sidebar-header">
<div class="ch-sidebar-title"><span class="ch-icon">💬</span> Channels</div>
<label class="ch-encrypted-toggle" title="Show encrypted channels (no key configured)">
<input type="checkbox" id="chShowEncrypted"> <span class="ch-toggle-label">🔒 No key</span>
</label>
</div>
<div class="ch-key-input-wrap" style="padding:4px 8px">
<form id="chKeyForm" autocomplete="off" class="ch-add-form">
<div class="ch-add-row">
<input type="text" id="chKeyInput" class="ch-key-input"
placeholder="#channelname"
aria-label="Channel name or hex key" spellcheck="false">
<button type="submit" class="ch-add-btn" title="Add channel">+</button>
</div>
<div class="ch-add-hint">e.g. #LongFast or 32-char hex key decrypted in your browser.</div>
<div id="chAddStatus" class="ch-add-status" style="display:none"></div>
</form>
<button type="button" id="chAddChannelBtn" class="ch-add-channel-btn"
aria-label="Add channel" title="Add a channel — generate, paste a key, or monitor a hashtag">+ Add Channel</button>
</div>
<a href="#/analytics" class="ch-analytics-link"
title="Open the Analytics page to see channel activity stats">📊 Channel Analytics </a>
<div id="chAddStatus" class="ch-add-status" style="display:none"></div>
<div id="chRegionFilter" class="region-filter-container" style="padding:0 8px"></div>
<div class="ch-channel-list" id="chList" role="listbox" aria-label="Channels">
<div class="ch-loading">Loading channels</div>
</div>
<div class="ch-sidebar-resize" aria-hidden="true"></div>
</div>
<!-- #1034 PR1: Add Channel modal -->
<div id="chAddChannelModal" class="modal-overlay ch-modal-overlay hidden" role="dialog" aria-modal="true" aria-labelledby="chModalTitle" hidden>
<div class="modal ch-modal" role="document">
<button type="button" class="modal-close ch-modal-close" id="chModalClose" data-action="ch-modal-close" aria-label="Close"></button>
<h3 id="chModalTitle">Add Channel</h3>
<div class="ch-modal-callout" role="note">
Channels are saved to <strong>THIS browser only</strong>. They won't appear on other devices or browsers, and clearing browser data will remove them.
</div>
<section class="ch-modal-section" aria-labelledby="chSecGenTitle">
<h4 id="chSecGenTitle" class="ch-modal-section-title">Generate PSK Channel</h4>
<p class="ch-modal-section-hint">Create a new private channel with a random key. Share the QR code with others to add it.</p>
<div class="ch-modal-row">
<input type="text" id="chGenerateName" class="ch-modal-input" placeholder="Channel name (e.g. My Crew)" aria-label="Channel name" spellcheck="false">
<button type="button" id="chGenerateBtn" class="btn-primary">Generate &amp; Show QR</button>
</div>
<div id="qr-output" class="ch-qr-output" aria-live="polite"></div>
</section>
<section class="ch-modal-section" aria-labelledby="chSecPskTitle">
<h4 id="chSecPskTitle" class="ch-modal-section-title">Add Private Channel (PSK)</h4>
<p class="ch-modal-section-hint">Paste a 32-character hex key someone shared with you, or scan their QR code.</p>
<div class="ch-modal-row">
<input type="text" id="chPskKey" class="ch-modal-input ch-modal-input--mono"
placeholder="32-char hex key (0-9, a-f)"
pattern="[0-9a-fA-F]{32}"
maxlength="32"
aria-label="32-character hex PSK key" spellcheck="false" autocomplete="off">
<button type="button" id="scan-qr-btn" class="ch-modal-btn-secondary" title="Scan a meshcore:// channel QR with your camera">📷 Scan QR</button>
</div>
<div class="ch-modal-row">
<input type="text" id="chPskName" class="ch-modal-input" placeholder="Display name (optional)" aria-label="Optional display name" spellcheck="false">
<button type="button" id="chPskAddBtn" class="btn-primary">Add</button>
</div>
<div id="chPskError" class="ch-modal-error" style="display:none" role="alert"></div>
</section>
<section class="ch-modal-section" aria-labelledby="chSecTagTitle">
<h4 id="chSecTagTitle" class="ch-modal-section-title">Monitor Hashtag Channel</h4>
<p class="ch-modal-section-hint">Decrypt traffic on a public hashtag channel by deriving the key from its name.</p>
<div class="ch-modal-row ch-hashtag-row">
<span class="ch-hashtag-prefix" aria-hidden="true">#</span>
<input type="text" id="chHashtagName" class="ch-modal-input"
placeholder="meshcore"
aria-label="Hashtag channel name (without #)" spellcheck="false" autocomplete="off">
<button type="button" id="chHashtagBtn" class="btn-primary">Monitor</button>
</div>
<div class="ch-modal-warn"> Case-sensitive <code>#meshcore</code> <code>#MeshCore</code></div>
</section>
<section id="chShareSection" class="ch-modal-section" hidden aria-labelledby="chShareHeading">
<h4 id="chShareHeading" class="ch-modal-section-title">Share Channel</h4>
<div id="chShareOutput" class="ch-share-output" aria-live="polite"></div>
</section>
<div class="ch-modal-footer">
🔒 Keys stay in your browser CoreScope is a passive observer that monitors and decrypts traffic but cannot transmit over RF. Use to remove individual channels.
</div>
</div>
</div>
<div class="ch-main" role="region" aria-label="Channel messages">
<div class="ch-main-header" id="chHeader">
<button class="ch-back-btn" id="chBackBtn" aria-label="Back to channels" data-action="ch-back"></button>
@@ -647,15 +717,10 @@
RegionFilter.init(document.getElementById('chRegionFilter'));
// Encrypted channels toggle (#727)
var showEncryptedCb = document.getElementById('chShowEncrypted');
var showEncrypted = localStorage.getItem('channels-show-encrypted') === 'true';
showEncryptedCb.checked = showEncrypted;
showEncryptedCb.addEventListener('change', function () {
showEncrypted = showEncryptedCb.checked;
localStorage.setItem('channels-show-encrypted', showEncrypted ? 'true' : 'false');
loadChannels(true);
});
// #1034 PR1: encrypted-channels visibility now driven by sectioned sidebar.
// Always include encrypted channels in the API call; the renderer groups them.
var showEncrypted = true;
try { localStorage.setItem('channels-show-encrypted', 'true'); } catch (e) { /* quota */ }
regionChangeHandler = RegionFilter.onChange(function () {
loadChannels(true).then(async function () {
@@ -664,33 +729,135 @@
});
});
// Channel key input handler (#725 M2, improved UX #759)
var chKeyForm = document.getElementById('chKeyForm');
if (chKeyForm) {
var submitHandler = async function (e) {
e.preventDefault();
var input = document.getElementById('chKeyInput');
var val = (input.value || '').trim();
if (!val) return;
input.value = '';
await addUserChannel(val);
};
chKeyForm.addEventListener('submit', submitHandler);
var chKeyInput = document.getElementById('chKeyInput');
if (chKeyInput) {
chKeyInput.addEventListener('focus', function () {
var st = document.getElementById('chAddStatus');
if (st) { st.style.display = 'none'; clearTimeout(statusTimer); statusTimer = null; }
});
}
// #1034 PR1: Add Channel modal wiring (replaces inline form)
var modalEl = document.getElementById('chAddChannelModal');
function openAddModal() {
if (!modalEl) return;
modalEl.classList.remove('hidden');
modalEl.removeAttribute('hidden');
var first = document.getElementById('chGenerateName');
if (first) try { first.focus(); } catch (e) { /* noop */ }
}
function closeAddModal() {
if (!modalEl) return;
modalEl.classList.add('hidden');
modalEl.setAttribute('hidden', '');
var err = document.getElementById('chPskError');
if (err) { err.style.display = 'none'; err.textContent = ''; }
var shareOut = document.getElementById('chShareOutput');
if (shareOut) { shareOut.innerHTML = ''; }
var shareSec = document.getElementById('chShareSection');
if (shareSec) { shareSec.hidden = true; }
}
var addBtn = document.getElementById('chAddChannelBtn');
if (addBtn) addBtn.addEventListener('click', openAddModal);
if (modalEl) {
modalEl.addEventListener('click', function (e) {
// Close on overlay backdrop click or any [data-action=ch-modal-close]
var closeEl = e.target.closest('[data-action="ch-modal-close"]');
if (closeEl || e.target === modalEl) {
e.preventDefault();
closeAddModal();
}
});
document.addEventListener('keydown', function (e) {
if (e.key === 'Escape' && !modalEl.classList.contains('hidden')) {
closeAddModal();
}
});
}
// Auto-enable encrypted toggle if deep-linking to an encrypted channel
if (routeParam && routeParam.startsWith('enc_') && !showEncrypted) {
showEncrypted = true;
showEncryptedCb.checked = true;
localStorage.setItem('channels-show-encrypted', 'true');
}
// Section 1: Generate PSK
var genBtn = document.getElementById('chGenerateBtn');
if (genBtn) genBtn.addEventListener('click', async function () {
var nameEl = document.getElementById('chGenerateName');
var label = nameEl ? (nameEl.value || '').trim() : '';
// 16 random bytes -> 32-char hex
var bytes = crypto.getRandomValues(new Uint8Array(16));
var keyHex = ChannelDecrypt.bytesToHex(bytes);
var channelName = 'psk:' + keyHex.substring(0, 8);
ChannelDecrypt.storeKey(channelName, keyHex, label);
var qrOut = document.getElementById('qr-output');
if (qrOut) {
qrOut.innerHTML = '';
// Render the QR + meshcore:// URL + Copy Key inline. The QR
// helper handles canvas rendering + accessible copy controls.
if (window.ChannelQR && typeof window.ChannelQR.generate === 'function') {
// Use the user-supplied label when provided so the scanned
// recipient sees a meaningful name; fall back to the
// psk:<prefix> auto-name otherwise.
window.ChannelQR.generate(label || channelName, keyHex, qrOut);
} else {
// Fallback when channel-qr.js failed to load.
qrOut.textContent = 'Key generated: ' + keyHex;
}
}
mergeUserChannels();
renderChannelList();
showAddStatus('Generated channel ' + (label || channelName), 'success');
});
// Section 2: Add PSK
var pskBtn = document.getElementById('chPskAddBtn');
if (pskBtn) pskBtn.addEventListener('click', async function () {
var keyEl = document.getElementById('chPskKey');
var nameEl = document.getElementById('chPskName');
var errEl = document.getElementById('chPskError');
var raw = keyEl ? (keyEl.value || '').trim() : '';
var label = nameEl ? (nameEl.value || '').trim() : '';
if (!isHexKey(raw)) {
if (errEl) { errEl.textContent = 'Key must be 32 hex characters (09, af).'; errEl.style.display = ''; }
return;
}
if (errEl) { errEl.textContent = ''; errEl.style.display = 'none'; }
closeAddModal();
if (keyEl) keyEl.value = '';
if (nameEl) nameEl.value = '';
await addUserChannel(raw.toLowerCase(), label);
});
// Section 2 (cont.): Scan QR — populates #chPskKey + #chPskName
// from a scanned meshcore://channel/add?... URL. Wiring added in
// PR #1034/PR3 against window.ChannelQR (public/channel-qr.js).
var scanBtn = document.getElementById('scan-qr-btn');
if (scanBtn) scanBtn.addEventListener('click', async function () {
var errEl = document.getElementById('chPskError');
if (!window.ChannelQR || typeof window.ChannelQR.scan !== 'function') {
if (errEl) {
errEl.textContent = 'QR scanning is unavailable in this browser.';
errEl.style.display = '';
}
return;
}
try {
var result = await window.ChannelQR.scan();
if (!result) return; // user cancelled
var keyEl = document.getElementById('chPskKey');
var nameEl = document.getElementById('chPskName');
if (keyEl && result.secret) keyEl.value = result.secret;
if (nameEl && result.name) nameEl.value = result.name;
if (errEl) { errEl.textContent = ''; errEl.style.display = 'none'; }
} catch (err) {
if (errEl) {
errEl.textContent = 'Scan failed: ' + (err && err.message ? err.message : 'unknown error');
errEl.style.display = '';
}
}
});
// Section 3: Monitor Hashtag
var tagBtn = document.getElementById('chHashtagBtn');
if (tagBtn) tagBtn.addEventListener('click', async function () {
var tagEl = document.getElementById('chHashtagName');
var raw = tagEl ? (tagEl.value || '').trim() : '';
if (!raw) return;
// Strip a leading '#' if the user typed one — the prefix is implicit.
if (raw.charAt(0) === '#') raw = raw.substring(1);
if (!raw) return;
closeAddModal();
if (tagEl) tagEl.value = '';
await addUserChannel('#' + raw, '');
});
loadObserverRegions();
loadChannels().then(async function () {
@@ -742,30 +909,110 @@
});
// Event delegation for channel selection (touch-friendly)
document.getElementById('chList').addEventListener('click', (e) => {
var chListEl = document.getElementById('chList');
// Keyboard accessibility for the role="button" remove/share spans
// (Enter/Space). Single .closest() call with a combined selector.
chListEl.addEventListener('keydown', function (e) {
if (e.key !== 'Enter' && e.key !== ' ' && e.key !== 'Spacebar') return;
var rb = e.target.closest && e.target.closest('[data-remove-channel],[data-share-channel]');
if (!rb) return;
e.preventDefault();
e.stopPropagation();
// Re-dispatch as a click so the existing click handler runs.
rb.click();
});
chListEl.addEventListener('click', (e) => {
// Share/reshare: open the Add Channel modal and render QR + URL
// for the existing key (no re-generation).
const shareBtn = e.target.closest('[data-share-channel]');
if (shareBtn) {
e.stopPropagation();
var shareHash = shareBtn.getAttribute('data-share-channel');
if (!shareHash) return;
var sCh = channels.find(function (c) { return c.hash === shareHash; });
var sName = shareHash.startsWith('user:')
? shareHash.substring(5)
: (sCh && sCh.name) || shareHash;
var keys = ChannelDecrypt.getStoredKeys();
var keyHex = keys[sName];
if (typeof openAddModal === 'function') openAddModal();
var sec = document.getElementById('chShareSection');
var out = document.getElementById('chShareOutput');
if (!sec || !out) return;
sec.hidden = false;
out.innerHTML = '';
if (!keyHex) {
out.textContent = 'No stored key found for "' + sName + '" — cannot share.';
return;
}
var heading = document.createElement('div');
heading.className = 'ch-share-heading';
heading.textContent = 'Share "' + sName + '"';
out.appendChild(heading);
var holder = document.createElement('div');
out.appendChild(holder);
if (window.ChannelQR && typeof window.ChannelQR.generate === 'function') {
window.ChannelQR.generate(sName, keyHex, holder);
} else {
// Fallback: copyable hex + meshcore:// URL.
var url = 'meshcore://channel/add?name=' + encodeURIComponent(sName) +
'&secret=' + keyHex;
holder.innerHTML =
'<div>Key: <code>' + escapeHtml(keyHex) + '</code></div>' +
'<div>URL: <code>' + escapeHtml(url) + '</code></div>';
}
return;
}
// M4: Remove channel button
const removeBtn = e.target.closest('[data-remove-channel]');
if (removeBtn) {
e.stopPropagation();
var channelHash = removeBtn.getAttribute('data-remove-channel');
if (!channelHash) return;
var chName = channelHash.startsWith('user:') ? channelHash.substring(5) : channelHash;
if (!confirm('Remove channel "' + chName + '"? This will clear saved keys and cached messages.')) return;
// The localStorage key is the channel name. For user:-prefixed entries
// strip the prefix; for server-known channels look up the channel
// object so we use its display name (the hash itself isn't the key).
var ch = channels.find(function (c) { return c.hash === channelHash; });
var chName = channelHash.startsWith('user:')
? channelHash.substring(5)
: (ch && ch.name) || channelHash;
if (!confirm('Remove channel "' + chName + '"?\n\nThis will permanently remove the key from this browser and clear cached messages. You will need to re-enter the key to decrypt this channel again.')) return;
ChannelDecrypt.removeKey(chName);
// Remove from channels array
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
if (channelHash.startsWith('user:')) {
// Pure user-added channel — drop from the list entirely.
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
}
} else if (ch) {
// Server-known channel: keep the row, just unmark as user-added so
// the ✕ disappears until they re-add a key.
ch.userAdded = false;
// If this was the selected channel, clear decrypted messages since
// the key is gone — they can't be re-decrypted without re-adding it.
if (selectedHash === channelHash) {
messages = [];
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Key removed — add a key to decrypt messages</div>';
}
}
renderChannelList();
return;
}
// Color clear button — remove color without opening picker (#681)
const clearBtn = e.target.closest('.ch-color-clear');
if (clearBtn && window.ChannelColors) {
e.stopPropagation();
var clearCh = clearBtn.getAttribute('data-channel');
if (clearCh) { window.ChannelColors.remove(clearCh); renderChannelList(); }
return;
}
// Color dot click — open picker, don't select channel
const dot = e.target.closest('.ch-color-dot');
if (dot && window.ChannelColorPicker) {
@@ -873,6 +1120,11 @@
if (!payload) continue;
var channelName = payload.channel || 'unknown';
// For live-decrypted user-added (PSK) channels, decryptLivePSKBatch
// also stamps payload.channelKey ("user:<name>") so we route the
// message to the correct sidebar row and to the open chat view.
// Falls back to channelName for server-known CHAN packets.
var channelKey = payload.channelKey || channelName;
var rawText = payload.text || '';
var sender = payload.sender || null;
var displayText = rawText;
@@ -899,10 +1151,10 @@
var observer = m.data?.packet?.observer_name || m.data?.observer || null;
// Update channel list entry — only once per unique packet hash
var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelName);
if (pktHash) seenHashes.add(pktHash + ':' + channelName);
var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelKey);
if (pktHash) seenHashes.add(pktHash + ':' + channelKey);
var ch = channels.find(function (c) { return c.hash === channelName; });
var ch = channels.find(function (c) { return c.hash === channelKey; });
if (ch) {
if (isFirstObservation) ch.messageCount = (ch.messageCount || 0) + 1;
ch.lastActivityMs = Date.now();
@@ -912,7 +1164,7 @@
} else if (isFirstObservation) {
// New channel we haven't seen
channels.push({
hash: channelName,
hash: channelKey,
name: channelName,
messageCount: 1,
lastActivityMs: Date.now(),
@@ -923,7 +1175,7 @@
}
// If this message is for the selected channel, append to messages
if (selectedHash && channelName === selectedHash) {
if (selectedHash && channelKey === selectedHash) {
// Deduplicate by packet hash — same message seen by multiple observers
var existing = pktHash ? messages.find(function (msg) { return msg.packetHash === pktHash; }) : null;
if (existing) {
@@ -976,8 +1228,83 @@
processWSBatch(msgs, selectedRegions);
}
// Pre-pass: rewrite encrypted GRP_TXT live packets into decrypted form
// when a stored PSK key matches their channel hash byte (#1029 — live
// PSK decrypt). Without this, users viewing a PSK-decrypted channel
// had to refresh the page to see new messages.
async function decryptLivePSKBatch(msgs) {
if (typeof ChannelDecrypt === 'undefined' ||
typeof ChannelDecrypt.tryDecryptLive !== 'function') {
return;
}
// Quick scan: do any messages look like encrypted GRP_TXT?
var anyEncrypted = false;
for (var i = 0; i < msgs.length; i++) {
var p = msgs[i] && msgs[i].data && msgs[i].data.decoded && msgs[i].data.decoded.payload;
if (p && p.type === 'GRP_TXT' && p.encryptedData && p.mac) { anyEncrypted = true; break; }
}
if (!anyEncrypted) return;
var keyMap;
try { keyMap = await ChannelDecrypt.buildKeyMap(); } catch (e) { return; }
if (!keyMap || keyMap.size === 0) return;
for (var j = 0; j < msgs.length; j++) {
var m = msgs[j];
var payload = m && m.data && m.data.decoded && m.data.decoded.payload;
if (!payload || payload.type !== 'GRP_TXT' || !payload.encryptedData || !payload.mac) continue;
var dec;
try { dec = await ChannelDecrypt.tryDecryptLive(payload, keyMap); } catch (e) { dec = null; }
if (!dec) continue;
// Rewrite payload into a CHAN-like shape so processWSBatch picks it
// up as a real message instead of an encrypted blob. Keep the original
// hash byte for any downstream consumer that wants it.
payload.channel = dec.channelName;
// For user-added PSK channels the sidebar entry & selectedHash use a
// "user:<name>" key (see addUserChannel). Stamp the canonical key on
// the payload so processWSBatch routes the live message to the
// correct sidebar row and to the open chat view instead of dropping
// it / creating a duplicate plain entry. Falls back to the raw name
// for non-user channels (server-known CHAN paths still work).
var userKey = 'user:' + dec.channelName;
var hasUserCh = false;
for (var ck = 0; ck < channels.length; ck++) {
if (channels[ck].hash === userKey) { hasUserCh = true; break; }
}
payload.channelKey = hasUserCh ? userKey : dec.channelName;
payload.sender = dec.sender;
payload.text = dec.sender ? (dec.sender + ': ' + dec.text) : dec.text;
payload.decryptedLocally = true;
if (m.data.decoded.header) {
// Leave payloadTypeName as GRP_TXT — processWSBatch already
// accepts both 'message' and GRP_TXT-typed packet messages.
}
}
}
wsHandler = debouncedOnWS(function (msgs) {
handleWSBatch(msgs);
var selectedRegions = getSelectedRegionsSnapshot();
var prior = selectedHash;
decryptLivePSKBatch(msgs).then(function () {
// Bump unread for live-decrypted channels the user is NOT viewing.
// Done here (not inside processWSBatch) so the count reflects ONLY
// newly-decrypted live packets, not historical-fetch path.
var bumped = false;
for (var i = 0; i < msgs.length; i++) {
var p = msgs[i] && msgs[i].data && msgs[i].data.decoded && msgs[i].data.decoded.payload;
if (!p || !p.decryptedLocally) continue;
// Use the canonical sidebar key stamped by decryptLivePSKBatch so
// the comparison against `prior` (= selectedHash) actually matches
// for user-added (user:*-prefixed) channels.
var chKey = p.channelKey || p.channel;
if (!chKey || chKey === prior) continue;
var ch = channels.find(function (c) { return c.hash === chKey || c.name === chKey || c.hash === ('user:' + chKey); });
if (ch) {
ch.unread = (ch.unread || 0) + 1;
bumped = true;
}
}
processWSBatch(msgs, selectedRegions);
if (bumped) renderChannelList();
});
});
window._channelsHandleWSBatchForTest = handleWSBatch;
window._channelsProcessWSBatchForTest = processWSBatch;
@@ -1035,59 +1362,171 @@
}
}
// #1041: single source of truth for the user-facing placeholder shown
// when a PSK channel has no user-supplied label. Hoisted so the helper
// and any future call sites stay in sync (i18n / branding-friendly).
const PRIVATE_CHANNEL_LABEL = 'Private Channel';
// Display name for a channel — handles PSK channels where the raw
// "psk:<hex8>" key prefix shouldn't be shown to users. Falls back to
// userLabel, then a friendly placeholder, then a caller-supplied
// fallback, then `Channel <hash>`.
//
// `fallback` lets row rendering preserve its existing "Unknown" /
// "Channel <hash>" semantics for encrypted-but-not-user-added channels
// without duplicating the psk:* check.
function channelDisplayName(ch, fallback) {
if (!ch) return '';
const name = ch.name || '';
if (ch.userLabel) return ch.userLabel;
if (name.indexOf('psk:') === 0) return PRIVATE_CHANNEL_LABEL;
if (name) return name;
if (fallback) return fallback;
return 'Channel ' + (typeof formatHashHex === 'function' ? formatHashHex(ch.hash) : ch.hash);
}
// #1034 PR1: render a single channel row (used by all sidebar sections).
function renderChannelRow(ch) {
const isEncrypted = ch.encrypted === true;
const isUserAdded = ch.userAdded === true;
// #1041: route through channelDisplayName so the psk:* → "Private
// Channel" rule lives in one place. Pass an `encryptedFallback` so
// rows for non-user-added encrypted channels keep showing "Unknown"
// (their existing behavior) when there's no name at all.
const encryptedFallback = isEncrypted ? 'Unknown' : '';
const name = channelDisplayName(ch, encryptedFallback);
const color = isEncrypted && !isUserAdded ? 'var(--text-muted, #6b7280)' : getChannelColor(ch.hash);
const time = ch.lastActivityMs ? formatSecondsAgo(Math.floor((Date.now() - ch.lastActivityMs) / 1000)) : '';
// Preview: show last sender+message when we have one. Otherwise show
// nothing rather than "0 messages" — the count is misleading for
// user-added (PSK) channels where messageCount only reflects
// server-known activity, not actually-decrypted messages.
let preview;
if (ch.lastSender && ch.lastMessage) {
preview = `${ch.lastSender}: ${truncate(ch.lastMessage, 28)}`;
} else if (isEncrypted && !isUserAdded) {
preview = `0x${formatHashHex(ch.hash)}`;
} else if (typeof ch.messageCount === 'number' && ch.messageCount > 0) {
preview = `${ch.messageCount} messages`;
} else {
preview = '';
}
const sel = selectedHash === ch.hash ? ' selected' : '';
const encClass = isUserAdded
? ' ch-user-added'
: (isEncrypted ? ' ch-encrypted' : '');
const badgeIcon = isUserAdded ? '🔓' : (isEncrypted ? '🔒' : null);
const abbr = badgeIcon || (name.startsWith('#') ? name.slice(0, 3) : name.slice(0, 2).toUpperCase());
const chColor = window.ChannelColors ? window.ChannelColors.get(ch.hash) : null;
const dotStyle = chColor ? ` style="background:${chColor}"` : '';
const borderStyle = chColor ? ` style="border-left:3px solid ${chColor}"` : '';
// #1033: must NOT be a <button> — outer .ch-item is itself a <button>;
// nested <button> is invalid HTML5 and the parser orphans everything
// after it. Use <span role="button">; keydown handler on #chList
// (Enter/Space) keeps it keyboard-accessible.
// Icon button factory — used for the per-row remove/share controls.
// Both share the .ch-icon-btn base class (touch target, opacity); a
// modifier class (.ch-remove-btn / .ch-share-btn) supplies size + color.
function iconBtn(modClass, dataAttr, hash, name, glyph, title, ariaVerb, extraAttrs) {
return ' <span class="ch-icon-btn ' + modClass + '" role="button" tabindex="0"'
+ ' ' + dataAttr + '="' + escapeHtml(hash) + '"'
+ (extraAttrs || '')
+ ' title="' + title + '"'
+ ' aria-label="' + ariaVerb + ' ' + escapeHtml(name) + '">' + glyph + '</span>';
}
const removeBtn = isUserAdded
? iconBtn('ch-remove-btn', 'data-remove-channel', ch.hash, name, '✕',
'Remove channel and clear saved key', 'Remove', '')
: '';
const shareBtn = isUserAdded
? iconBtn('ch-share-btn', 'data-share-channel', ch.hash, name, '📤 Share',
'Share channel key (QR + URL)', 'Share', ' aria-haspopup="dialog"')
: '';
const userBadge = isUserAdded ? ' <span class="ch-user-badge" title="You added this key" aria-label="Your key">🔑</span>' : '';
const unreadBadge = (ch.unread && ch.unread > 0)
? ' <span class="ch-unread-badge" data-unread-channel="' + escapeHtml(ch.hash) + '" title="' + ch.unread + ' new" aria-label="' + ch.unread + ' unread">' + (ch.unread > 99 ? '99+' : ch.unread) + '</span>'
: '';
return `<button class="ch-item${sel}${encClass}" data-hash="${ch.hash}"${borderStyle} type="button" role="option" aria-selected="${selectedHash === ch.hash ? 'true' : 'false'}" aria-label="${escapeHtml(name)}"${isEncrypted ? ' data-encrypted="true"' : ''}${isUserAdded ? ' data-user-added="true"' : ''}>
<div class="ch-badge" style="background:${color}" aria-hidden="true">${badgeIcon ? badgeIcon : escapeHtml(abbr)}</div>
<div class="ch-item-body">
<div class="ch-item-top">
<span class="ch-item-name">${escapeHtml(name)}</span>${userBadge}${unreadBadge}
<span class="ch-color-dot" data-channel="${escapeHtml(ch.hash)}"${dotStyle} title="Change channel color" aria-label="Change color for ${escapeHtml(name)}"></span>${chColor ? '<span class="ch-color-clear" data-channel="' + escapeHtml(ch.hash) + '" title="Clear color" aria-label="Clear color for ' + escapeHtml(name) + '"></span>' : ''}
<span class="ch-item-time" data-channel-hash="${ch.hash}">${time}</span>${shareBtn}${removeBtn}
</div>
<div class="ch-item-preview">${escapeHtml(preview)}</div>
</div>
</button>`;
}
// #1034 PR1: sectioned sidebar — My Channels / Network / Encrypted (N).
function renderChannelList() {
const el = document.getElementById('chList');
if (!el) return;
if (channels.length === 0) { el.innerHTML = '<div class="ch-empty">No channels found</div>'; return; }
// Sort by message count desc
const sorted = [...channels].sort((a, b) => {
return (b.messageCount || 0) - (a.messageCount || 0);
});
const sortByActivity = (a, b) => (b.lastActivityMs || 0) - (a.lastActivityMs || 0);
const sortByCount = (a, b) => (b.messageCount || 0) - (a.messageCount || 0);
el.innerHTML = sorted.map(ch => {
const isEncrypted = ch.encrypted === true;
const name = isEncrypted ? (ch.name || 'Unknown') : (ch.name || `Channel ${formatHashHex(ch.hash)}`);
const color = isEncrypted ? 'var(--text-muted, #6b7280)' : getChannelColor(ch.hash);
const time = ch.lastActivityMs ? formatSecondsAgo(Math.floor((Date.now() - ch.lastActivityMs) / 1000)) : '';
const preview = isEncrypted
? `${ch.messageCount} encrypted messages (no key configured)`
: ch.lastSender && ch.lastMessage
? `${ch.lastSender}: ${truncate(ch.lastMessage, 28)}`
: `${ch.messageCount} messages`;
const sel = selectedHash === ch.hash ? ' selected' : '';
const encClass = isEncrypted ? ' ch-encrypted' : '';
const abbr = isEncrypted ? '🔒' : (name.startsWith('#') ? name.slice(0, 3) : name.slice(0, 2).toUpperCase());
// Channel color dot for color picker (#674)
const chColor = window.ChannelColors ? window.ChannelColors.get(ch.hash) : null;
const dotStyle = chColor ? ` style="background:${chColor}"` : '';
// Left border for assigned color
const borderStyle = chColor ? ` style="border-left:3px solid ${chColor}"` : '';
// M4: Remove button for user-added channels
const removeBtn = ch.userAdded ? ' <button class="ch-remove-btn" data-remove-channel="' + escapeHtml(ch.hash) + '" title="Remove channel" aria-label="Remove ' + escapeHtml(name) + '">✕</button>' : '';
const mine = channels.filter(c => c.userAdded === true).sort(sortByActivity);
const network = channels.filter(c => c.userAdded !== true && c.encrypted !== true).sort(sortByActivity);
const encrypted = channels.filter(c => c.userAdded !== true && c.encrypted === true).sort(sortByCount);
return `<button class="ch-item${sel}${encClass}" data-hash="${ch.hash}"${borderStyle} type="button" role="option" aria-selected="${selectedHash === ch.hash ? 'true' : 'false'}" aria-label="${escapeHtml(name)}"${isEncrypted ? ' data-encrypted="true"' : ''}>
<div class="ch-badge" style="background:${color}" aria-hidden="true">${isEncrypted ? '🔒' : escapeHtml(abbr)}</div>
<div class="ch-item-body">
<div class="ch-item-top">
<span class="ch-item-name">${escapeHtml(name)}</span>
<span class="ch-color-dot" data-channel="${escapeHtml(ch.hash)}"${dotStyle} title="Change channel color" aria-label="Change color for ${escapeHtml(name)}"></span>
<span class="ch-item-time" data-channel-hash="${ch.hash}">${time}</span>${removeBtn}
</div>
<div class="ch-item-preview">${escapeHtml(preview)}</div>
// Encrypted section collapsed by default; user toggle persisted in localStorage.
const collapsed = localStorage.getItem('ch-encrypted-collapsed') !== 'false';
const sections = [];
sections.push(
`<div class="ch-section ch-section-mychannels" data-section="mychannels">
<div class="ch-section-header">My Channels <span class="ch-section-locality" title="Saved only in this browser on this device">🖥 (this browser)</span></div>
${mine.length ? mine.map(renderChannelRow).join('') : '<div class="ch-section-empty">No channels yet — click [+ Add Channel] to add one.</div>'}
</div>`
);
sections.push(
`<div class="ch-section ch-section-network" data-section="network">
<div class="ch-section-header">Network</div>
${network.length ? network.map(renderChannelRow).join('') : '<div class="ch-section-empty">No public channels reported by the server.</div>'}
</div>`
);
sections.push(
`<div class="ch-section ch-section-encrypted" data-section="encrypted" data-encrypted-collapsed="${collapsed ? 'true' : 'false'}">
<button type="button" class="ch-section-header ch-section-toggle" id="chEncryptedToggle" aria-expanded="${collapsed ? 'false' : 'true'}" aria-controls="chEncryptedBody">
<span class="ch-section-caret" aria-hidden="true">${collapsed ? '▸' : '▾'}</span>
Encrypted (${encrypted.length})
</button>
<div class="ch-section-body" id="chEncryptedBody"${collapsed ? ' hidden' : ''}>
${encrypted.length ? encrypted.map(renderChannelRow).join('') : '<div class="ch-section-empty">No unkeyed encrypted channels seen.</div>'}
</div>
</button>`;
}).join('');
</div>`
);
el.innerHTML = sections.join('');
// Toggle expand/collapse for the Encrypted section.
const toggle = document.getElementById('chEncryptedToggle');
if (toggle) {
toggle.addEventListener('click', function () {
const wasCollapsed = localStorage.getItem('ch-encrypted-collapsed') !== 'false';
const next = wasCollapsed ? 'false' : 'true';
try { localStorage.setItem('ch-encrypted-collapsed', next); } catch (e) { /* quota */ }
renderChannelList();
});
}
}
async function selectChannel(hash, decryptOpts) {
const rp = RegionFilter.getRegionParam() || '';
const request = beginMessageRequest(hash, rp);
selectedHash = hash;
// Clear unread badge on the channel we're about to view (#1029).
var __selCh = channels.find(function (c) { return c.hash === hash; });
if (__selCh && __selCh.unread) { __selCh.unread = 0; }
history.replaceState(null, '', `#/channels/${encodeURIComponent(hash)}`);
renderChannelList();
const ch = channels.find(c => c.hash === hash);
const name = ch?.name || `Channel ${formatHashHex(hash)}`;
// #1041: never show raw "psk:<hex>" prefixes in the header — use the
// user-supplied label or "Private Channel".
const name = ch ? channelDisplayName(ch) : `Channel ${formatHashHex(hash)}`;
const header = document.getElementById('chHeader');
header.querySelector('.ch-header-text').textContent = `${name}${ch?.messageCount || 0} messages`;
@@ -1110,14 +1549,14 @@
}
}
});
if (isStaleMessageRequest(request)) return true;
if (isStaleMessageRequest(request)) return { stale: true };
if (result.wrongKey) {
msgEl.innerHTML = '<div class="ch-empty ch-wrong-key">🔒 Key does not match — no messages could be decrypted</div>';
return true;
return { wrongKey: true, messageCount: 0 };
}
if (result.error) {
msgEl.innerHTML = '<div class="ch-empty">' + escapeHtml(result.error) + '</div>';
return true;
return { error: result.error, messageCount: 0 };
}
messages = result.messages || [];
if (messages.length === 0) {
@@ -1127,13 +1566,12 @@
renderMessages();
scrollToBottom();
}
return true;
return { messageCount: messages.length };
}
// Client-side decryption path (#725 M2)
if (decryptOpts && decryptOpts.userKey) {
await decryptAndRender(decryptOpts.userKey, decryptOpts.channelHashByte, decryptOpts.channelName);
return;
return await decryptAndRender(decryptOpts.userKey, decryptOpts.channelHashByte, decryptOpts.channelName);
}
// Check if this is a user-added channel that needs decryption
+91 -7
View File
@@ -23,8 +23,58 @@ function comparePacketSets(hashesA, hashesB) {
return { onlyA: onlyA, onlyB: onlyB, both: both };
}
/**
* Filter packets by route type.
* mode: 'all' | 'flood' | 'direct'
* Flood = route_type 0 (TransportFlood) or 1 (Flood)
* Direct = route_type 2 (Direct) or 3 (TransportDirect)
*/
function filterPacketsByRoute(packets, mode) {
if (!packets || mode === 'all') return packets || [];
if (mode === 'flood') {
return packets.filter(function (p) { return p.route_type === 0 || p.route_type === 1; });
}
if (mode === 'direct') {
return packets.filter(function (p) { return p.route_type === 2 || p.route_type === 3; });
}
return packets;
}
/**
* Compute asymmetric overlap statistics between two observer packet sets.
* Given a comparePacketSets() result, returns:
* - totalA / totalB: unique packet count for each observer
* - shared: packets seen by both
* - onlyA / onlyB: exclusive packet counts
* - aSeesOfB: percentage of B's packets that A also saw (rounded to 0.1%)
* - bSeesOfA: percentage of A's packets that B also saw (rounded to 0.1%)
* Returns 0% (not NaN) when a denominator is zero.
*/
function computeOverlapStats(cmp) {
var onlyA = (cmp && cmp.onlyA && cmp.onlyA.length) || 0;
var onlyB = (cmp && cmp.onlyB && cmp.onlyB.length) || 0;
var shared = (cmp && cmp.both && cmp.both.length) || 0;
var totalA = onlyA + shared;
var totalB = onlyB + shared;
var aSeesOfB = totalB > 0 ? Math.round((shared / totalB) * 1000) / 10 : 0;
var bSeesOfA = totalA > 0 ? Math.round((shared / totalA) * 1000) / 10 : 0;
return {
totalA: totalA,
totalB: totalB,
shared: shared,
onlyA: onlyA,
onlyB: onlyB,
aSeesOfB: aSeesOfB,
bSeesOfA: bSeesOfA,
};
}
// Expose for testing
if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
if (typeof window !== 'undefined') {
window.comparePacketSets = comparePacketSets;
window.filterPacketsByRoute = filterPacketsByRoute;
window.computeOverlapStats = computeOverlapStats;
}
(function () {
var PAYLOAD_LABELS = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 11: 'Control' };
@@ -36,6 +86,7 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
var packetsA = [];
var packetsB = [];
var currentView = 'summary';
var routeFilter = 'all';
function init(app, routeParam) {
// Parse preselected observers from URL: #/compare?a=ID1&b=ID2
@@ -47,6 +98,7 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
packetsA = [];
packetsB = [];
currentView = 'summary';
routeFilter = 'all';
app.innerHTML = '<div class="compare-page" style="padding:16px">' +
'<div class="page-header" style="display:flex;align-items:center;gap:12px;margin-bottom:16px">' +
@@ -76,6 +128,7 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
comparisonResult = null;
packetsA = [];
packetsB = [];
routeFilter = 'all';
}
async function loadObservers() {
@@ -115,6 +168,14 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
'<select id="compareObsB" class="compare-select">' + optionsHtml + '</select>' +
'</div>' +
'<button id="compareBtn" class="compare-btn" disabled>Compare</button>' +
'<div class="compare-select-group">' +
'<label for="compareRouteFilter">Packet Type</label>' +
'<select id="compareRouteFilter" class="compare-select">' +
'<option value="all">All packets</option>' +
'<option value="flood">Flood only</option>' +
'<option value="direct">Direct only</option>' +
'</select>' +
'</div>' +
'</div>';
var ddA = document.getElementById('compareObsA');
@@ -124,6 +185,13 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
if (selA) ddA.value = selA;
if (selB) ddB.value = selB;
var ddRoute = document.getElementById('compareRouteFilter');
ddRoute.value = routeFilter;
ddRoute.addEventListener('change', function () {
routeFilter = ddRoute.value;
if (comparisonResult) runComparison();
});
function updateBtn() {
selA = ddA.value || null;
selB = ddB.value || null;
@@ -162,16 +230,20 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
packetsA = results[0].packets || [];
packetsB = results[1].packets || [];
var hashesA = new Set(packetsA.map(function (p) { return p.hash; }));
var hashesB = new Set(packetsB.map(function (p) { return p.hash; }));
// Apply flood/direct filter (#928)
var filteredA = filterPacketsByRoute(packetsA, routeFilter);
var filteredB = filterPacketsByRoute(packetsB, routeFilter);
var hashesA = new Set(filteredA.map(function (p) { return p.hash; }));
var hashesB = new Set(filteredB.map(function (p) { return p.hash; }));
comparisonResult = comparePacketSets(hashesA, hashesB);
// Build hash→packet lookups for detail rendering
comparisonResult.packetMapA = new Map();
comparisonResult.packetMapB = new Map();
packetsA.forEach(function (p) { comparisonResult.packetMapA.set(p.hash, p); });
packetsB.forEach(function (p) { comparisonResult.packetMapB.set(p.hash, p); });
filteredA.forEach(function (p) { comparisonResult.packetMapA.set(p.hash, p); });
filteredB.forEach(function (p) { comparisonResult.packetMapB.set(p.hash, p); });
currentView = 'summary';
renderComparison();
@@ -296,12 +368,24 @@ if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;
if (currentView === 'summary') {
// Textual summary
var stats = computeOverlapStats(r);
var total = r.onlyA.length + r.onlyB.length + r.both.length;
var overlap = total > 0 ? (r.both.length / total * 100).toFixed(1) : '0.0';
el.innerHTML =
'<div class="compare-summary-text">' +
'<p>In the last 24 hours, <strong>' + nameA + '</strong> saw <strong>' + (r.onlyA.length + r.both.length).toLocaleString() + '</strong> unique packets ' +
'and <strong>' + nameB + '</strong> saw <strong>' + (r.onlyB.length + r.both.length).toLocaleString() + '</strong> unique packets.</p>' +
'<p>In the last 24 hours, <strong>' + nameA + '</strong> saw <strong>' + stats.totalA.toLocaleString() + '</strong> unique packets ' +
'and <strong>' + nameB + '</strong> saw <strong>' + stats.totalB.toLocaleString() + '</strong> unique packets.</p>' +
// #671 — asymmetric reference-observer comparison
'<div class="compare-asymmetric" style="display:flex;gap:12px;flex-wrap:wrap;margin:12px 0">' +
'<div class="compare-asym-card" style="flex:1;min-width:240px;padding:12px;border:1px solid var(--border, #333);border-radius:6px">' +
'<div style="font-size:1.6em;font-weight:bold">' + stats.aSeesOfB.toFixed(1) + '%</div>' +
'<div class="text-muted">' + nameA + ' saw <strong>' + stats.shared.toLocaleString() + '</strong> of ' + nameB + '\u2019s ' + stats.totalB.toLocaleString() + ' packets</div>' +
'</div>' +
'<div class="compare-asym-card" style="flex:1;min-width:240px;padding:12px;border:1px solid var(--border, #333);border-radius:6px">' +
'<div style="font-size:1.6em;font-weight:bold">' + stats.bSeesOfA.toFixed(1) + '%</div>' +
'<div class="text-muted">' + nameB + ' saw <strong>' + stats.shared.toLocaleString() + '</strong> of ' + nameA + '\u2019s ' + stats.totalA.toLocaleString() + ' packets</div>' +
'</div>' +
'</div>' +
'<p><strong>' + r.both.length.toLocaleString() + '</strong> packets (' + overlap + '%) were seen by both observers. ' +
'<strong>' + r.onlyA.length.toLocaleString() + '</strong> were exclusive to ' + nameA + ' and ' +
'<strong>' + r.onlyB.length.toLocaleString() + '</strong> were exclusive to ' + nameB + '.</p>' +
+44 -7
View File
@@ -33,7 +33,7 @@
'meshcore-live-heatmap-opacity'
];
var VALID_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps', 'heatmapOpacity', 'liveHeatmapOpacity', 'distanceUnit'];
var VALID_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps', 'heatmapOpacity', 'liveHeatmapOpacity', 'distanceUnit', 'favorites', 'myNodes'];
var OBJECT_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps'];
var SCALAR_SECTIONS = ['heatmapOpacity', 'liveHeatmapOpacity'];
var DISTANCE_UNIT_VALUES = ['km', 'mi', 'auto'];
@@ -313,9 +313,17 @@
function readOverrides() {
try {
var raw = localStorage.getItem(STORAGE_KEY);
if (raw == null) return {};
var parsed = JSON.parse(raw);
if (parsed == null || typeof parsed !== 'object' || Array.isArray(parsed)) return {};
var parsed = (raw != null) ? JSON.parse(raw) : {};
if (parsed == null || typeof parsed !== 'object' || Array.isArray(parsed)) parsed = {};
// Include favorites and claimed nodes from their own localStorage keys
try {
var favs = JSON.parse(localStorage.getItem('meshcore-favorites') || '[]');
if (Array.isArray(favs) && favs.length) parsed.favorites = favs;
} catch (e) { /* ignore */ }
try {
var myNodes = JSON.parse(localStorage.getItem('meshcore-my-nodes') || '[]');
if (Array.isArray(myNodes) && myNodes.length) parsed.myNodes = myNodes;
} catch (e) { /* ignore */ }
return parsed;
} catch (e) {
return {};
@@ -386,14 +394,28 @@
function writeOverrides(delta) {
if (delta == null || typeof delta !== 'object') return;
// Extract favorites/myNodes and store in their own localStorage keys
if (Array.isArray(delta.favorites)) {
try { localStorage.setItem('meshcore-favorites', JSON.stringify(delta.favorites)); } catch (e) { /* ignore */ }
}
if (Array.isArray(delta.myNodes)) {
try { localStorage.setItem('meshcore-my-nodes', JSON.stringify(delta.myNodes)); } catch (e) { /* ignore */ }
}
// Build theme-only delta (without favorites/myNodes)
var themeDelta = {};
for (var k in delta) {
if (delta.hasOwnProperty(k) && k !== 'favorites' && k !== 'myNodes') {
themeDelta[k] = delta[k];
}
}
// If empty, remove key entirely
var keys = Object.keys(delta);
var keys = Object.keys(themeDelta);
if (keys.length === 0) {
try { localStorage.removeItem(STORAGE_KEY); } catch (e) { /* ignore */ }
_updateSaveStatus('saved');
return;
}
var validated = _validateDelta(delta);
var validated = _validateDelta(themeDelta);
try {
localStorage.setItem(STORAGE_KEY, JSON.stringify(validated));
_updateSaveStatus('saved');
@@ -629,7 +651,11 @@
}
writeOverrides(delta);
_runPipeline();
_refreshPanel();
// Skip re-render while the user is typing inside the panel — setting
// innerHTML would destroy the focused input and collapse the mobile keyboard.
if (!(_panelEl && _panelEl.contains(document.activeElement))) {
_refreshPanel();
}
}, 300);
}
@@ -754,6 +780,17 @@
if (key === 'distanceUnit' && DISTANCE_UNIT_VALUES.indexOf(obj[key]) === -1) {
errors.push('Invalid distanceUnit: "' + obj[key] + '" — must be km, mi, or auto');
}
// Validate favorites and myNodes arrays
if (key === 'favorites') {
if (!Array.isArray(obj[key])) {
errors.push('"favorites" must be an array of public key strings');
}
}
if (key === 'myNodes') {
if (!Array.isArray(obj[key])) {
errors.push('"myNodes" must be an array of node objects');
}
}
}
return { valid: errors.length === 0, errors: errors };
}
+405
View File
@@ -0,0 +1,405 @@
/* filter-ux.js Wireshark-style filter UX (issue #966)
*
* Owns:
* - Help popover (filter syntax, fields, operators, examples)
* - Autocomplete dropdown (field names, operators, type/route values, payload.*)
* - Right-click context menu on packet table cells "Filter by this value"
* - Saved-filter dropdown (localStorage, with starter defaults)
*
* Pure-logic helpers (SavedFilters, buildCellFilterClause, appendClauseToExpr)
* are unit-tested in test-packet-filter-ux.js. DOM glue is exercised by
* test-filter-ux-e2e.js (Playwright).
*/
(function() {
'use strict';
var LS_KEY = 'corescope_saved_filters_v1';
// ── Saved filters store ────────────────────────────────────────────────
var DEFAULT_FILTERS = [
{ name: 'Adverts only', expr: 'type == ADVERT', builtin: true },
{ name: 'Channel traffic', expr: 'type == GRP_TXT', builtin: true },
{ name: 'Direct messages', expr: 'type == TXT_MSG', builtin: true },
{ name: 'Strong signal (SNR > 5)', expr: 'snr > 5', builtin: true },
{ name: 'Multi-hop (hops > 1)', expr: 'hops > 1', builtin: true },
{ name: 'Repeater adverts', expr: 'type == ADVERT && payload.flags.repeater == true', builtin: true },
{ name: 'Recent (last 5 min)', expr: 'age < 5m', builtin: true },
];
function _getStore() {
try {
var raw = window.localStorage.getItem(LS_KEY);
if (!raw) return [];
var parsed = JSON.parse(raw);
return Array.isArray(parsed) ? parsed : [];
} catch (e) { return []; }
}
function _setStore(arr) {
try { window.localStorage.setItem(LS_KEY, JSON.stringify(arr)); } catch (e) {}
}
var SavedFilters = {
defaults: function() { return DEFAULT_FILTERS.slice(); },
list: function() {
// Defaults first, then user filters (deduped by name — user wins on collision)
var user = _getStore();
var userNames = {};
for (var i = 0; i < user.length; i++) userNames[user[i].name] = true;
var defaults = DEFAULT_FILTERS.filter(function(d) { return !userNames[d.name]; });
return defaults.concat(user);
},
save: function(name, expr) {
if (!name || !expr) return;
var user = _getStore();
var idx = -1;
for (var i = 0; i < user.length; i++) { if (user[i].name === name) { idx = i; break; } }
var entry = { name: name, expr: expr, ts: Date.now() };
if (idx >= 0) user[idx] = entry; else user.push(entry);
_setStore(user);
},
delete: function(name) {
var user = _getStore();
_setStore(user.filter(function(f) { return f.name !== name; }));
},
};
// ── Right-click filter clause builders ─────────────────────────────────
// Numeric strings stay unquoted; identifiers from TYPE_VALUES/ROUTE_VALUES
// stay unquoted; everything else gets double-quoted.
function _isNumericString(s) {
if (typeof s !== 'string') return false;
return /^-?\d+(\.\d+)?$/.test(s.trim());
}
function _isBareIdentifier(s) {
return typeof s === 'string' && /^[A-Z_][A-Z0-9_]*$/.test(s);
}
function buildCellFilterClause(field, value, op) {
op = op || '==';
if (value == null) value = '';
var v = String(value);
var rendered;
if (op === 'contains' || op === 'starts_with' || op === 'ends_with') {
// String-only ops: always quote
rendered = '"' + v.replace(/"/g, '\\"') + '"';
} else if (_isNumericString(v)) {
rendered = v;
} else if (_isBareIdentifier(v)) {
rendered = v;
} else {
rendered = '"' + v.replace(/"/g, '\\"') + '"';
}
return field + ' ' + op + ' ' + rendered;
}
function appendClauseToExpr(expr, clause) {
if (!expr || !expr.trim()) return clause;
return expr.trim() + ' && ' + clause;
}
// ── DOM glue (only runs in browser, after init()) ──────────────────────
var _ctxMenu = null;
function _h(tag, attrs, html) {
var el = document.createElement(tag);
if (attrs) for (var k in attrs) {
if (k === 'class') el.className = attrs[k];
else if (k === 'style') el.setAttribute('style', attrs[k]);
else if (k.indexOf('data-') === 0) el.setAttribute(k, attrs[k]);
else el[k] = attrs[k];
}
if (html != null) el.innerHTML = html;
return el;
}
function _esc(s) {
return String(s == null ? '' : s).replace(/[&<>"']/g, function(c) {
return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c];
});
}
function _buildHelpHtml() {
var PF = window.PacketFilter;
var rows = (PF.FIELDS || []).map(function(f) {
return '<tr><td class="fux-mono">' + _esc(f.name) + '</td><td>' + _esc(f.desc) + '</td></tr>';
}).join('');
var ops = (PF.OPERATORS || []).map(function(o) {
return '<tr><td class="fux-mono">' + _esc(o.op) + '</td><td>' + _esc(o.desc) +
'</td><td class="fux-mono">' + _esc(o.example) + '</td></tr>';
}).join('');
var examples = [
'type == ADVERT',
'type == GRP_TXT && size > 50',
'payload.name contains "Gilroy"',
'payload.flags.repeater == true',
'snr > 5 && rssi > -90',
'hops < 2',
'observer == "Dorrington" && type == ADVERT',
'(type == ADVERT || type == ACK) && snr > 0',
'age < 1h',
'time after "2025-01-01"',
].map(function(e) { return '<li class="fux-mono">' + _esc(e) + '</li>'; }).join('');
return [
'<h3>Filter syntax</h3>',
'<p>Wireshark-style boolean expressions over packet fields. Combine with <code>&amp;&amp;</code>, <code>||</code>, <code>!</code>, and parentheses. Strings are case-insensitive. Tip: append <code>?filter=…</code> to the URL to share a filter.</p>',
'<h4>Fields</h4>',
'<table class="fux-table"><thead><tr><th>Name</th><th>Description</th></tr></thead><tbody>' + rows + '</tbody></table>',
'<h4>Operators</h4>',
'<table class="fux-table"><thead><tr><th>Op</th><th>Meaning</th><th>Example</th></tr></thead><tbody>' + ops + '</tbody></table>',
'<h4>Examples</h4>',
'<ul class="fux-examples">' + examples + '</ul>',
'<h4>Tips</h4>',
'<ul>',
'<li>Right-click any cell in the packet table to add a clause for that value.</li>',
'<li>Type a partial field name to autocomplete; Tab/Enter accepts, Esc dismisses.</li>',
'<li>Save commonly-used expressions via the ★ Save button — they appear in the Saved dropdown.</li>',
'</ul>',
].join('');
}
function _showHelp() {
var existing = document.getElementById('filterHelpPopover');
if (existing) { existing.remove(); return; }
var pop = _h('div', { id: 'filterHelpPopover', class: 'fux-popover', role: 'dialog', 'aria-label': 'Filter syntax help' });
pop.innerHTML =
'<div class="fux-popover-header"><strong>Filter syntax</strong>' +
'<button type="button" class="fux-popover-close" aria-label="Close">✕</button></div>' +
'<div class="fux-popover-body">' + _buildHelpHtml() + '</div>';
document.body.appendChild(pop);
pop.querySelector('.fux-popover-close').addEventListener('click', function() { pop.remove(); });
document.addEventListener('keydown', function _esc(ev) {
if (ev.key === 'Escape') { pop.remove(); document.removeEventListener('keydown', _esc); }
});
}
// ── Autocomplete ───────────────────────────────────────────────────────
function _wireAutocomplete(input) {
var dd = _h('div', { id: 'filterAcDropdown', class: 'fux-ac-dropdown', role: 'listbox' });
dd.style.display = 'none';
input.parentNode.appendChild(dd);
var sel = -1, items = [];
function _gatherPayloadKeys() {
// Best-effort: scan the first ~50 visible packets for decoded_json keys
var keys = {};
try {
var rows = document.querySelectorAll('#pktTable tbody tr');
for (var r = 0; r < rows.length && r < 50; r++) {
var dj = rows[r].getAttribute('data-decoded');
if (!dj) continue;
var obj = JSON.parse(dj);
for (var k in obj) keys[k] = true;
}
} catch (e) {}
return Object.keys(keys);
}
function close() { dd.style.display = 'none'; sel = -1; items = []; input.removeAttribute('aria-activedescendant'); }
function render() {
if (!items.length) { close(); return; }
dd.innerHTML = items.map(function(it, i) {
return '<div class="fux-ac-item' + (i === sel ? ' active' : '') + '" id="fux-ac-' + i +
'" role="option" data-idx="' + i + '">' +
'<span class="fux-ac-val">' + _esc(it.value) + '</span>' +
(it.desc ? '<span class="fux-ac-desc">' + _esc(it.desc) + '</span>' : '') +
'</div>';
}).join('');
dd.style.display = 'block';
if (sel >= 0) input.setAttribute('aria-activedescendant', 'fux-ac-' + sel);
}
function accept(idx) {
if (!items[idx]) return;
var rs = items._replaceStart, re = items._replaceEnd;
var val = items[idx].value;
var v = input.value;
var newVal = v.slice(0, rs) + val + v.slice(re);
var caret = rs + val.length;
// Append space + helpful next char for fields (so user can type op)
if (items[idx].kind === 'field') { newVal = newVal.slice(0, caret) + ' ' + newVal.slice(caret); caret++; }
input.value = newVal;
input.setSelectionRange(caret, caret);
close();
// Trigger filter recompile
input.dispatchEvent(new Event('input', { bubbles: true }));
}
function refresh() {
var PF = window.PacketFilter;
if (!PF || !PF.suggest) return close();
var r = PF.suggest(input.value, input.selectionStart || 0, { payloadKeys: _gatherPayloadKeys() });
items = (r && r.suggestions) ? r.suggestions.slice(0, 12) : [];
items._replaceStart = r ? r.replaceStart : 0;
items._replaceEnd = r ? r.replaceEnd : 0;
sel = items.length ? 0 : -1;
render();
}
input.addEventListener('input', refresh);
input.addEventListener('focus', refresh);
input.addEventListener('blur', function() { setTimeout(close, 150); });
input.addEventListener('keydown', function(ev) {
if (dd.style.display === 'none') return;
if (ev.key === 'ArrowDown') { sel = (sel + 1) % items.length; render(); ev.preventDefault(); }
else if (ev.key === 'ArrowUp') { sel = (sel - 1 + items.length) % items.length; render(); ev.preventDefault(); }
else if (ev.key === 'Tab' || ev.key === 'Enter') {
if (sel >= 0) { accept(sel); ev.preventDefault(); }
} else if (ev.key === 'Escape') { close(); ev.preventDefault(); }
});
dd.addEventListener('mousedown', function(ev) {
var target = ev.target.closest('.fux-ac-item');
if (!target) return;
ev.preventDefault();
accept(parseInt(target.getAttribute('data-idx'), 10));
});
}
// ── Right-click context menu ───────────────────────────────────────────
function _showContextMenu(x, y, field, value) {
if (_ctxMenu) { _ctxMenu.remove(); _ctxMenu = null; }
var input = document.getElementById('packetFilterInput');
if (!input) return;
var menu = _h('div', { id: 'filterContextMenu', class: 'fux-ctx-menu', role: 'menu' });
var ops = [
{ label: 'Filter ' + field + ' == "' + value + '"', op: '==' },
{ label: 'Filter ' + field + ' != "' + value + '"', op: '!=' },
{ label: 'Filter ' + field + ' contains "' + value + '"', op: 'contains' },
];
menu.innerHTML = ops.map(function(o, i) {
return '<button type="button" class="fux-ctx-item" data-idx="' + i + '" role="menuitem">' + _esc(o.label) + '</button>';
}).join('');
menu.style.left = x + 'px';
menu.style.top = y + 'px';
document.body.appendChild(menu);
_ctxMenu = menu;
menu.addEventListener('click', function(ev) {
var btn = ev.target.closest('.fux-ctx-item');
if (!btn) return;
var op = ops[parseInt(btn.getAttribute('data-idx'), 10)].op;
var clause = buildCellFilterClause(field, value, op);
input.value = appendClauseToExpr(input.value, clause);
input.dispatchEvent(new Event('input', { bubbles: true }));
menu.remove(); _ctxMenu = null;
});
function dismiss(ev) {
if (_ctxMenu && !_ctxMenu.contains(ev.target)) { _ctxMenu.remove(); _ctxMenu = null;
document.removeEventListener('mousedown', dismiss);
document.removeEventListener('keydown', escDismiss);
}
}
function escDismiss(ev) { if (ev.key === 'Escape') dismiss({ target: document.body }); }
setTimeout(function() {
document.addEventListener('mousedown', dismiss);
document.addEventListener('keydown', escDismiss);
}, 0);
}
function _wireContextMenu() {
// Delegated listener on the table — extracts field+value from data-* attrs.
var tbl = document.getElementById('pktTable');
if (!tbl) return;
tbl.addEventListener('contextmenu', function(ev) {
var cell = ev.target.closest('td[data-filter-field]');
if (!cell) return;
var field = cell.getAttribute('data-filter-field');
var value = cell.getAttribute('data-filter-value');
if (!field || value == null || value === '') return;
ev.preventDefault();
_showContextMenu(ev.pageX, ev.pageY, field, value);
});
}
// ── Saved filters dropdown ─────────────────────────────────────────────
function _renderSavedDropdown(container, input) {
var btn = _h('button', { type: 'button', class: 'fux-saved-trigger', id: 'filterSavedTrigger', title: 'Saved filters' }, '★ Saved ▾');
var menu = _h('div', { class: 'fux-saved-menu hidden', id: 'filterSavedMenu', role: 'menu' });
container.appendChild(btn);
container.appendChild(menu);
function build() {
var list = SavedFilters.list();
var rows = list.map(function(f, i) {
var del = f.builtin ? '' :
'<button type="button" class="fux-saved-del" data-name="' + _esc(f.name) + '" title="Delete">✕</button>';
return '<div class="fux-saved-item" data-idx="' + i + '">' +
'<span class="fux-saved-name">' + _esc(f.name) + '</span>' +
'<span class="fux-saved-expr fux-mono">' + _esc(f.expr) + '</span>' +
del + '</div>';
}).join('');
menu.innerHTML =
'<div class="fux-saved-header">Saved filters</div>' +
rows +
'<div class="fux-saved-footer">' +
'<button type="button" id="filterSaveCurrent" class="fux-saved-save"> Save current expression</button>' +
'</div>';
}
btn.addEventListener('click', function(ev) {
ev.stopPropagation();
build();
menu.classList.toggle('hidden');
});
document.addEventListener('click', function(ev) {
if (!menu.contains(ev.target) && ev.target !== btn) menu.classList.add('hidden');
});
menu.addEventListener('click', function(ev) {
var del = ev.target.closest('.fux-saved-del');
if (del) {
SavedFilters.delete(del.getAttribute('data-name'));
build();
ev.stopPropagation();
return;
}
if (ev.target.id === 'filterSaveCurrent') {
var expr = (input.value || '').trim();
if (!expr) { alert('Type a filter expression first.'); return; }
var name = prompt('Name this filter:', '');
if (name && name.trim()) {
SavedFilters.save(name.trim(), expr);
build();
}
return;
}
var item = ev.target.closest('.fux-saved-item');
if (item) {
var list = SavedFilters.list();
var f = list[parseInt(item.getAttribute('data-idx'), 10)];
if (f) {
input.value = f.expr;
input.dispatchEvent(new Event('input', { bubbles: true }));
menu.classList.add('hidden');
}
}
});
}
// ── Init: idempotent, called by packets.js after filter input renders ──
function init() {
var input = document.getElementById('packetFilterInput');
if (!input || input.dataset.fuxInit === '1') return;
input.dataset.fuxInit = '1';
// Help icon + saved-filters dropdown — injected next to the input
var wrap = input.parentNode;
if (wrap) {
var bar = document.getElementById('filterUxBar');
if (!bar) {
bar = _h('div', { id: 'filterUxBar', class: 'fux-bar' });
var helpBtn = _h('button', { type: 'button', class: 'fux-help-btn', id: 'filterHelpBtn',
'aria-label': 'Filter syntax help', title: 'Filter syntax help' }, 'ⓘ Help');
helpBtn.addEventListener('click', _showHelp);
bar.appendChild(helpBtn);
_renderSavedDropdown(bar, input);
wrap.appendChild(bar);
}
}
_wireAutocomplete(input);
_wireContextMenu();
}
var _exports = {
SavedFilters: SavedFilters,
buildCellFilterClause: buildCellFilterClause,
appendClauseToExpr: appendClauseToExpr,
init: init,
_showHelp: _showHelp, // exposed for E2E
};
if (typeof window !== 'undefined') window.FilterUX = _exports;
if (typeof module !== 'undefined' && module.exports) module.exports = _exports;
})();
+48 -3
View File
@@ -26,6 +26,12 @@
#btnCopy { padding: 6px 14px; background: #1a4a7a; color: #7ec8e3; border-radius: 6px; border: none; cursor: pointer; font-size: 0.85rem; white-space: nowrap; align-self: flex-end; }
#btnCopy:hover { background: #2a6aaa; }
#btnCopy.copied { background: #1a6a3a; color: #7effa0; }
#btnSaveDraft { background: #1a5a3a; color: #7effa0; }
#btnSaveDraft:hover { background: #2a7a4a; }
#btnLoadDraft { background: #3a3a1a; color: #ffe07e; }
#btnLoadDraft:hover { background: #5a5a2a; }
#btnDownload { background: #1a4a7a; color: #7ec8e3; }
#btnDownload:hover { background: #2a6aaa; }
#counter { font-size: 0.8rem; color: #888; padding-top: 6px; white-space: nowrap; }
.bufferRow { display: flex; align-items: center; gap: 8px; }
.bufferRow label { font-size: 0.85rem; color: #aaa; }
@@ -45,6 +51,8 @@
<div class="controls">
<button id="btnUndo">↩ Undo</button>
<button id="btnClear">✕ Clear</button>
<button id="btnSaveDraft">💾 Save Draft</button>
<button id="btnLoadDraft">📂 Load Draft</button>
</div>
<div class="bufferRow">
<label for="bufferKm">Buffer km:</label>
@@ -63,16 +71,18 @@
<div style="display:flex;flex-direction:column;gap:8px;align-items:flex-end">
<span id="counter">0 points</span>
<button id="btnCopy">Copy</button>
<button id="btnDownload">⬇ Download</button>
</div>
</div>
<!-- Instructions: paste the output into config.json as a top-level "geo_filter" key, then restart the server -->
<div id="help-bar">
Copy the JSON above → paste as a top-level key in <code>config.json</code> → restart the server.
<strong>Save Draft</strong> preserves your polygon across sessions. <strong>Download</strong> exports a JSON snippet → paste as a top-level key in <code>config.json</code> → restart the server.
Nodes with no GPS fix always pass through. Remove the <code>geo_filter</code> block to disable filtering.
&nbsp;·&nbsp; <a href="https://github.com/Kpa-clawbot/CoreScope/blob/master/docs/user-guide/geofilter.md" target="_blank">Documentation</a>
&nbsp;·&nbsp; <a href="/geofilter-docs.html">Documentation</a>
</div>
<script src="geofilter-draft.js"></script>
<script>
const map = L.map('map').setView([50.5, 4.4], 8);
@@ -87,7 +97,8 @@ let polygon = null;
let closingLine = null;
function latLonPair(latlng) {
return [parseFloat(latlng.lat.toFixed(6)), parseFloat(latlng.lng.toFixed(6))];
const w = latlng.wrap();
return [parseFloat(w.lat.toFixed(6)), parseFloat(w.lng.toFixed(6))];
}
function render() {
@@ -165,6 +176,40 @@ document.getElementById('btnCopy').addEventListener('click', function() {
setTimeout(() => { btn.textContent = 'Copy'; btn.classList.remove('copied'); }, 2000);
});
});
document.getElementById('btnSaveDraft').addEventListener('click', function() {
if (points.length < 3) return;
const bufferKm = parseFloat(document.getElementById('bufferKm').value) || 0;
GeofilterDraft.saveDraft(points, bufferKm);
const btn = document.getElementById('btnSaveDraft');
btn.textContent = '✓ Saved';
setTimeout(() => { btn.textContent = '💾 Save Draft'; }, 2000);
});
document.getElementById('btnLoadDraft').addEventListener('click', function() {
const draft = GeofilterDraft.loadDraft();
if (!draft || !draft.polygon || draft.polygon.length < 3) return;
// Clear current
markers.forEach(m => map.removeLayer(m));
markers = [];
points = draft.polygon.slice();
if (draft.bufferKm != null) document.getElementById('bufferKm').value = draft.bufferKm;
// Recreate markers
points.forEach(function(pt, i) {
const marker = L.circleMarker([pt[0], pt[1]], {
radius: 6, color: '#4a9eff', weight: 2, fillColor: '#4a9eff', fillOpacity: 0.9
}).addTo(map).bindTooltip(String(i + 1), { permanent: true, direction: 'top', offset: [0, -8], className: 'pt-label' });
markers.push(marker);
});
render();
map.fitBounds(L.polygon(points).getBounds().pad(0.2));
});
document.getElementById('btnDownload').addEventListener('click', function() {
if (points.length < 3) return;
const bufferKm = parseFloat(document.getElementById('bufferKm').value) || 0;
GeofilterDraft.downloadConfig(points, bufferKm);
});
</script>
</body>
</html>
+142
View File
@@ -0,0 +1,142 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GeoFilter Docs — CoreScope</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body { font-family: system-ui, sans-serif; background: #1a1a2e; color: #e0e0e0; min-height: 100vh; display: flex; flex-direction: column; }
header { padding: 12px 16px; background: #0f0f23; border-bottom: 1px solid #333; display: flex; align-items: center; gap: 16px; }
header h1 { font-size: 1rem; font-weight: 600; color: #4a9eff; }
#back-link { font-size: 0.8rem; color: #4a9eff; text-decoration: none; white-space: nowrap; }
#back-link:hover { text-decoration: underline; }
main { flex: 1; max-width: 800px; margin: 0 auto; padding: 32px 24px; width: 100%; }
h2 { font-size: 1.1rem; font-weight: 600; color: #4a9eff; margin: 32px 0 12px; border-bottom: 1px solid #222; padding-bottom: 6px; }
h2:first-of-type { margin-top: 0; }
h3 { font-size: 0.95rem; font-weight: 600; color: #c0c0c0; margin: 20px 0 8px; }
p { font-size: 0.9rem; line-height: 1.6; color: #ccc; margin-bottom: 10px; }
ul { padding-left: 20px; margin-bottom: 10px; }
li { font-size: 0.9rem; line-height: 1.7; color: #ccc; }
code { font-family: monospace; font-size: 0.85rem; color: #7ec8e3; background: #111; border: 1px solid #333; border-radius: 3px; padding: 1px 5px; }
pre { background: #111; border: 1px solid #333; border-radius: 6px; padding: 14px 16px; overflow-x: auto; margin: 10px 0 16px; }
pre code { background: none; border: none; padding: 0; font-size: 0.82rem; color: #7ec8e3; }
.note { background: #1a2a1a; border: 1px solid #2a4a2a; border-radius: 6px; padding: 10px 14px; margin: 12px 0; }
.note p { color: #aaddaa; margin: 0; }
.warn { background: #2a1a0a; border: 1px solid #5a3a0a; border-radius: 6px; padding: 10px 14px; margin: 12px 0; }
.warn p { color: #ddbb88; margin: 0; }
table { width: 100%; border-collapse: collapse; margin: 10px 0 16px; font-size: 0.88rem; }
th { background: #0f0f23; color: #888; font-weight: 500; text-align: left; padding: 8px 12px; border: 1px solid #333; }
td { padding: 8px 12px; border: 1px solid #222; color: #ccc; vertical-align: top; }
td code { font-size: 0.82rem; }
</style>
</head>
<body>
<header>
<a href="/geofilter-builder.html" id="back-link">← GeoFilter Builder</a>
<h1>GeoFilter Docs</h1>
</header>
<main>
<h2>How it works</h2>
<p>Geographic filtering restricts which nodes are ingested and returned in API responses. It operates at two levels:</p>
<ul>
<li><strong>Ingest time</strong> — ADVERT packets carrying GPS coordinates are rejected by the ingestor if the node falls outside the configured area. The node never reaches the database.</li>
<li><strong>API responses</strong> — Nodes already in the database are filtered from the <code>/api/nodes</code> response if they fall outside the area. This covers nodes ingested before the filter was configured.</li>
</ul>
<div class="note"><p>Nodes with no GPS fix (<code>lat=0, lon=0</code> or missing coordinates) always pass the filter regardless of configuration.</p></div>
<h2>Configuration</h2>
<p>Add a <code>geo_filter</code> block to <code>config.json</code>:</p>
<pre><code>"geo_filter": {
"polygon": [
[51.55, 3.80],
[51.55, 5.90],
[50.65, 5.90],
[50.65, 3.80]
],
"bufferKm": 20
}</code></pre>
<table>
<thead><tr><th>Field</th><th>Type</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>polygon</code></td><td><code>[[lat, lon], ...]</code></td><td>Array of at least 3 coordinate pairs defining the boundary</td></tr>
<tr><td><code>bufferKm</code></td><td>number</td><td>Extra distance (km) around the polygon edge that is also accepted. <code>0</code> = exact boundary</td></tr>
</tbody>
</table>
<p>Both the server and the ingestor read <code>geo_filter</code> from <code>config.json</code>. Restart both after changing this section.</p>
<p>To disable filtering entirely, remove the <code>geo_filter</code> block.</p>
<h2>Builder workflow: Save Draft, Load Draft, Download</h2>
<p>The <a href="/geofilter-builder.html">GeoFilter Builder</a> lets you draw a polygon on a map and produce the <code>geo_filter</code> snippet without hand-editing JSON. Three buttons drive the workflow:</p>
<ul>
<li><strong>💾 Save Draft</strong> — writes the current polygon and <code>bufferKm</code> to your browser's <code>localStorage</code> under the key <code>geofilter-draft</code>. Drafts persist across page reloads and browser restarts so you can iterate on a shape over multiple sessions.</li>
<li><strong>📂 Load Draft</strong> — restores the most recently saved draft into the builder. The current polygon is replaced. If no draft exists the button is a no-op.</li>
<li><strong>⬇ Download</strong> — exports the current polygon and <code>bufferKm</code> as <code>geofilter-config-snippet.json</code> — a single JSON object containing a top-level <code>geo_filter</code> block. Open the file, copy the <code>geo_filter</code> entry, and paste it into your <code>config.json</code>.</li>
</ul>
<div class="note"><p>Drafts are stored locally in your browser only — they are not uploaded anywhere. Clearing site data or switching browsers will lose the draft. Use <strong>Download</strong> to keep a portable copy.</p></div>
<p>After pasting the snippet into <code>config.json</code>, restart the server and ingestor for the new filter to take effect.</p>
<h2>Coordinate ordering</h2>
<div class="warn"><p><strong>Important:</strong> Coordinates are <code>[lat, lon]</code> — latitude first, longitude second. This is the opposite of GeoJSON, which uses <code>[lon, lat]</code>. Swapping them will place your polygon in the wrong location.</p></div>
<h2>Multi-polygon</h2>
<p>Only a single polygon is supported. If your deployment area consists of multiple disconnected regions, draw a single convex hull that covers all of them, or use the largest region with a generous <code>bufferKm</code> value.</p>
<h2>Examples</h2>
<h3>Belgium (bounding rectangle)</h3>
<pre><code>"geo_filter": {
"polygon": [
[51.55, 3.80],
[51.55, 5.90],
[50.65, 5.90],
[50.65, 3.80]
],
"bufferKm": 20
}</code></pre>
<h3>Irregular shape</h3>
<pre><code>"geo_filter": {
"polygon": [
[51.10, 3.70],
[51.55, 4.20],
[51.30, 5.10],
[50.80, 5.50],
[50.50, 4.80],
[50.70, 3.90]
],
"bufferKm": 10
}</code></pre>
<h2>Legacy bounding box</h2>
<p>An older bounding box format is also supported as a fallback when no <code>polygon</code> is present:</p>
<pre><code>"geo_filter": {
"latMin": 50.65,
"latMax": 51.55,
"lonMin": 3.80,
"lonMax": 5.90
}</code></pre>
<p>Prefer the polygon format — it supports irregular shapes and the <code>bufferKm</code> margin.</p>
<h2>Cleaning up historical nodes</h2>
<p>The ingestor prevents new out-of-bounds nodes from being ingested, but does not retroactively remove nodes stored before the filter was configured. Use the prune script for that:</p>
<pre><code># Dry run — shows what would be deleted without making any changes
python3 scripts/prune-nodes-outside-geo-filter.py --dry-run
# Default paths: /app/data/meshcore.db and /app/config.json
python3 scripts/prune-nodes-outside-geo-filter.py
# Custom paths
python3 scripts/prune-nodes-outside-geo-filter.py /path/to/meshcore.db \
--config /path/to/config.json
# In Docker — run inside the container
docker exec -it meshcore-analyzer \
python3 /app/scripts/prune-nodes-outside-geo-filter.py --dry-run</code></pre>
<p>The script reads <code>geo_filter.polygon</code> and <code>geo_filter.bufferKm</code> from config, lists nodes that fall outside, then asks for <code>yes</code> confirmation before deleting. Nodes without coordinates are always kept.</p>
<p>This is a one-time migration tool — run it once after first configuring <code>geo_filter</code> to clean up pre-filter data.</p>
</main>
</body>
</html>

Some files were not shown because too many files have changed in this diff Show More