Compare commits


47 Commits

Author SHA1 Message Date
you 5bec14222a fix: address PR #905 review — migration error handling, backfill heuristic, test comment
1. Migration ALTER error no longer swallowed: check error from ALTER TABLE
   and return if it fails (unless column already exists). Migration is not
   marked complete on failure.

2. Backfill heuristic fixed: use observations table JOIN instead of
   packet_count > 0, since UpsertObserver sets packet_count = 1 on INSERT
   even for status-only observers.

3. Test clarifying comment: document that InsertTransmission uses
   data.Timestamp (not time.Now()) as source-of-truth for last_packet_at,
   so the hardcoded assertion is correct.
2026-04-24 15:38:05 +00:00
you e9f977cd70 fix: bump obs-table min-width to 720px for new Last Packet column
The addition of the Last Packet column brings the table to 8 columns.
The previous min-width of 640px was already tight for 7 columns; 720px prevents
cramped rendering and makes horizontal scrolling kick in at a sensible point
on narrow viewports.
2026-04-24 15:31:23 +00:00
you 8a15ea903b test: add last_packet_at tests for ingestor and server
- Ingestor: verify last_packet_at is NULL after UpsertObserver (status path),
  set after InsertTransmission, and unchanged by subsequent UpsertObserver calls
- Server: verify last_packet_at reads back through GetObservers and GetObserverByID
2026-04-24 15:26:59 +00:00
you 330970cce9 feat(ui): show separate Last Status and Last Packet columns for observers
- observers.js: rename 'Last Seen' column to 'Last Status', add 'Last Packet'
  column with a warning badge when no packets observed or packets lag behind
  status by >10min
- observer-detail.js: add 'Last Status Update' and 'Last Packet Observation'
  stat cards with relative + absolute timestamps
2026-04-24 15:26:53 +00:00
you d3a40919f2 feat: add last_packet_at column to observers
Add a new 'last_packet_at' column to the observers table that is only
bumped when an actual packet observation lands (InsertTransmission path),
while 'last_seen' continues to be bumped on both status updates and packets.

This allows the UI to distinguish between an observer that is alive
(sending status pings) and one that is actively forwarding packets.

Schema migration backfills last_packet_at = last_seen for observers
with packet_count > 0. Server API now returns last_packet_at in the
Observer JSON response.
2026-04-24 15:26:47 +00:00
Kpa-clawbot a47fe26085 fix(channels): allow removing user-added keys for server-known channels (#898)
## Problem
Adding a channel key in the Channels UI for a channel the server already
knows about (e.g. `#public` from rainbow / config) leaves the
localStorage entry **unremovable**:

- `mergeUserChannels` sees the name already exists in the channel list
and skips the user entry.
- The existing channel row is never marked `userAdded:true`.
- The ✕ button (`[data-remove-channel]`) is only rendered for
`userAdded` rows.
- Result: stuck localStorage key, no UI to delete it.

There was also a latent bug in the remove handler — for non-`user:`
rows, it used the raw hash (e.g. `enc_11`) as the
`ChannelDecrypt.removeKey()` argument, but the storage key is the
channel **name**.

## Fix
1. **`mergeUserChannels`**: when a stored key matches an existing
channel by name/hash, mark the existing channel `userAdded=true` so the
✕ renders on it. (No magical/auto deletion of stored keys — the user
explicitly chooses to remove.)
2. **Remove handler**:
- Look up the channel object to get the correct display name for the
localStorage key.
- Keep server-known channels in the list when their ✕ is clicked (only
the user's localStorage entry + cache are cleared, `userAdded` is
unset). The channel still exists upstream.
   - Pure `user:`-prefixed channels are removed from the list as before.
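
The remove-handler half of the fix can be sketched as follows (identifiers are illustrative, not the exact `channels.js` names; `store` stands in for the localStorage-backed key map):

```javascript
// Resolve the channel object first so the localStorage key is always the
// channel NAME, never a raw hash like "enc_11".
function resolveRemovalKey(channels, rowId) {
  const ch = channels.find(c => c.id === rowId || c.hash === rowId);
  return ch ? ch.name : rowId; // fall back to the raw id if no entry exists
}

// Server-known channels stay in the list; only the user's key is cleared.
function removeUserKey(channels, store, rowId) {
  const key = resolveRemovalKey(channels, rowId);
  delete store[key]; // clear the stored key
  const ch = channels.find(c => c.name === key);
  if (ch && !String(ch.id || '').startsWith('user:')) {
    ch.userAdded = false; // keep server-known row, just hide its ✕
    return channels;
  }
  return channels.filter(c => c.name !== key); // pure user channel: drop row
}
```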

## Repro
1. Open Channels.
2. Add a key for `#public` (or any rainbow-known channel).
3. Reload. Before this PR: row has no ✕, key is stuck. After this PR: ✕
appears, click clears the local key and cache.

## Files
- `public/channels.js` only.

## Notes
- No backend changes.
- No new APIs.
- Behaviour for purely user-added channels (e.g. `user:#somechannel` not
known to the server) is unchanged.

---------

Co-authored-by: you <you@example.com>
2026-04-22 21:41:43 -07:00
Kpa-clawbot abd9c46aa7 fix: side-panel Details button opens full-screen on desktop (#892)
## Symptom
🔍 Details button in the nodes side panel does nothing on click.

## Root cause (4th regression of the same shape)
- Row click → `selectNode()` → `history.replaceState(null, '',
'#/nodes/' + pk)`
- Details button click → `location.hash = '#/nodes/' + pk`
- Hash is already that value → assignment is a no-op → no `hashchange`
event → no router → panel stays open.

## Fix
Mirror the analytics-link branch already inside the panel click handler:
`destroy()` then `init(appEl, pubkey)` directly (which hits the
`directNode` full-screen branch unconditionally). Also `replaceState` to
keep the URL in sync.

## Test
New Playwright E2E: open side panel via row click, click Details, assert
`.node-fullscreen` appears.

## Why this keeps regressing
Every time we tighten the row-click handler to use `replaceState`
(correct — avoids hashchange flicker), the button-click handler that
uses `location.hash` becomes a no-op for the same pubkey. Need to
remember they're coupled. Worth a follow-up to extract a
`navigateToNode(pk)` helper that always works regardless of current hash
state — filing as #890-followup if not already there.
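
A sketch of that proposed helper (hypothetical — the PR only files it as a follow-up). It renders directly and syncs the URL with `replaceState`, so it behaves identically whether or not the hash already equals the target, and never depends on a `hashchange` event firing:

```javascript
// hist and render are injected for testability; in the app they would be
// window.history and the destroy()+init() sequence described above.
function navigateToNode(pk, hist, render) {
  const target = '#/nodes/' + pk;
  hist.replaceState(null, '', target); // sync URL, no hashchange emitted
  render(pk);                          // direct render: immune to the
                                       // same-hash no-op
}
```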

Co-authored-by: you <you@example.com>
2026-04-21 22:37:15 -07:00
Kpa-clawbot 6ca5e86df6 fix: compute hex-dump byte ranges client-side from per-obs raw_hex (#891)
## Symptom
The colored byte strip in the packet detail pane is offset from the
labeled byte breakdown below it. Off by N bytes where N is the
difference between the top-level packet's path length and the displayed
observation's path length.

## Root cause
Server computes `breakdown.ranges` once from the top-level packet's
raw_hex (in `BuildBreakdown`) and ships it in the API response. After
#882 we render each observation's own raw_hex, but we keep using the
top-level breakdown — so a 7-hop top-level packet shipped "Path: bytes
2-8", and when we rendered an 8-hop observation we coloured 7 of the 8
path bytes and bled into the payload.

The labeled rows below (which use `buildFieldTable`) parse the displayed
raw_hex on the client, so they were correct — they just didn't match the
strip above.

## Fix
Port `BuildBreakdown()` to JS as `computeBreakdownRanges()` in `app.js`.
Use it in `renderDetail()` from the actually-rendered (per-obs) raw_hex.
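
A minimal slice of that port — just the path-section byte range, using the transport-route offset rule from #853 (the real `computeBreakdownRanges()` covers the full breakdown, not only the path):

```javascript
// Path-section byte range computed from the raw_hex actually rendered.
// Transport routes (route_type 0 or 3) carry next_hop/last_hop in bytes
// 1–4, pushing the path-length byte to offset 5.
function pathRange(rawHex, routeType) {
  const plOff = (routeType === 0 || routeType === 3) ? 5 : 1;
  const pathLen = parseInt(rawHex.slice(plOff * 2, plOff * 2 + 2), 16) & 0x3f;
  const start = plOff + 1;         // path bytes follow the length byte
  return [start, start + pathLen]; // [inclusive, exclusive) byte offsets
}
```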

## Test
Manually verified the JS function output matches the Go implementation
for FLOOD/non-transport, transport, ADVERT, and direct-advert (zero
hops) cases.

Closes nothing (caught in post-tag bug bash).

---------

Co-authored-by: you <you@example.com>
2026-04-21 22:17:14 -07:00
Kpa-clawbot 56ec590bc4 fix(#886): derive path_json from raw_hex at ingest (#887)
## Problem

Per-observation `path_json` disagrees with `raw_hex` path section for
TRACE packets.

**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`
- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE
payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)

## Root Cause

`DecodePacket` correctly parses TRACE packets by replacing `path.Hops`
with hop IDs from the payload's `pathData` field (the actual route).
However, the header path bytes for TRACE packets contain **SNR values**
(one per completed hop), not hop IDs.

`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which
for TRACE packets contained the payload-derived hops — not the header
path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex`
to describe completely different paths.

## Fix

- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops
directly from raw hex bytes, independent of any TRACE payload
overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of
using `decoded.Path.Hops`, guaranteeing `path_json` always matches the
`raw_hex` path section.

## Tests (8 new)

**`DecodePathFromRawHex` unit tests:**
- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)

**`BuildPacketData` integration tests:**
- TRACE packet: asserts path_json matches raw_hex header path (not
payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path

All existing tests continue to pass (`go test ./...` for both ingestor
and server).

Fixes #886

---------

Co-authored-by: you <you@example.com>
2026-04-21 21:13:58 -07:00
Kpa-clawbot 67aa47175f fix: path pill and byte breakdown agree on hop count (#885)
## Problem
On the packet detail pane, the **path pill** (top) and the **byte
breakdown** (bottom) showed different numbers of hops for the same
packet. Example: `46cf35504a21ef0d` rendered as `1 hop` badge followed
by 8 node names in the path pill, while the byte breakdown listed only 1
hop row.

## Root cause
Mixed data sources:
- Path-pill badge used `(raw_hex path_len) & 0x3F` (= firmware truth for
one observer = 1)
- Path-pill names used `path_json.length` (= server-aggregated longest
path across observers = 8)
- Byte breakdown section header used `(raw_hex path_len) & 0x3F` (= 1)
- Byte breakdown rows were sliced from `raw_hex` (= 1 row)
- `renderPath(pathHops, ...)` iterated all `path_json` entries

For group-header view, `packet.path_json` is aggregated across observers
and therefore longer than the raw_hex of any single observer's packet.

## Fix
Both surfaces now render from `pathHops` (= effective observation's
`path_json`). The raw_hex vs path_json mismatch is still logged as a
console.warn for diagnostics, but does not drive the UI.

With per-observation `raw_hex` (#882) shipped, clicking an observation
row already swaps the effective packet so both surfaces stay consistent.

## Testing
- Adds E2E regression `Packet detail path pill and byte breakdown agree
on hop count` that asserts:
  1. `pill badge count == byte breakdown section count`
  2. `rendered hop names ≈ badge count` (within 1 for separators)
  3. `byte breakdown rendered rows == section count`
- Manually reproduced on staging with `46cf35504a21ef0d` (8-name path +
`1 hop` badge before fix).

Related: #881 #882 #866

---------

Co-authored-by: you <you@example.com>
2026-04-21 17:57:06 -07:00
Kpa-clawbot 2b9f305698 fix(#874): hop-resolver affinity picker — score candidates by neighbor-graph edges + geographic centroid (#876)
## Problem

`pickByAffinity` in `hop-resolver.js` picks wrong regional candidates
when 1-byte pubkey prefixes collide. The old implementation only
considers one adjacent hop (forward OR backward pass), leading to
suboptimal picks when both neighbors provide useful context.

Measured on staging: **61.6% of hops have ≥2 same-prefix candidates**,
making collision resolution critical.

## Fix

Replaced the separate forward/backward pass disambiguation with a
**combined iterative resolver** that scores candidates against BOTH prev
and next resolved hops:

1. **Neighbor-graph edge weight** (priority 1): Sum edge scores to prev
+ next pubkeys. Pick max sum.
2. **Geographic centroid** (priority 2): Average lat/lon of prev + next
positions. Pick closest candidate by haversine distance.
3. **Single-anchor geo** (priority 3): When only one neighbor is
resolved, use it directly.
4. **Fallback** (priority 4): First candidate when no context exists.

The iterative approach resolves cascading dependencies — resolving one
ambiguous hop may unlock context for its neighbors.
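
The cascade can be sketched like this (illustrative shapes, not the real `hop-resolver.js` structures: `edges` maps `"candidatePk|neighborPk"` to a weight, and squared degree distance stands in for the haversine computation):

```javascript
function pickCandidate(candidates, prev, next, edges) {
  const edgeSum = c =>
    (prev ? edges.get(c.pubkey + '|' + prev.pubkey) || 0 : 0) +
    (next ? edges.get(c.pubkey + '|' + next.pubkey) || 0 : 0);
  // 1. Neighbor-graph edge weight: max summed edge score wins.
  const scored = candidates.map(c => [edgeSum(c), c]).sort((a, b) => b[0] - a[0]);
  if (scored[0][0] > 0) return { chose: scored[0][1], method: 'graph' };
  // 2./3. Geographic: centroid of both neighbors, or the single anchor.
  const anchors = [prev, next].filter(p => p && p.lat != null);
  if (anchors.length) {
    const lat = anchors.reduce((s, p) => s + p.lat, 0) / anchors.length;
    const lon = anchors.reduce((s, p) => s + p.lon, 0) / anchors.length;
    const d2 = c => (c.lat - lat) ** 2 + (c.lon - lon) ** 2; // haversine stand-in
    const best = candidates.slice().sort((a, b) => d2(a) - d2(b))[0];
    return { chose: best, method: 'centroid' };
  }
  // 4. Fallback: no context at all.
  return { chose: candidates[0], method: 'fallback' };
}
```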

### Dev-mode trace

Multi-candidate picks now emit: `[hop-resolver] hash=46 candidates=N
scored=[...] chose=<pubkey> method=graph|centroid|fallback`

## Before/After (staging, 1539 packets, 12928 hops)

| Metric | Before | After |
|--------|--------|-------|
| Unreliable hops | 39 (0.3%) | 23 (0.2%) |
| Packets with unreliable | 33 (2.14%) | 17 (1.10%) |

~41% reduction in unreliable hops, ~48% reduction in affected packets.

## Tests

5 new tests in `test-frontend-helpers.js`:
- Graph edge scoring picks correct regional candidate
- Next hop breaks tie when prev has no edges
- Centroid fallback when no graph edges exist
- Centroid uses average of prev+next positions
- Fallback when no context at all

All 595 tests pass. No regressions in `test-packet-filter.js` (62 pass)
or `test-aging.js` (29 pass).

Closes #874

---------

Co-authored-by: you <you@example.com>
2026-04-21 14:03:40 -07:00
Kpa-clawbot a605518d6d fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte
sequence for the same transmission (different path bytes, flags/hops
remaining). The `observations` table stored only `path_json` per
observer — all observations pointed at one `transmissions.raw_hex`. This
prevented the hex pane from updating when switching observations in the
packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same
hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns
per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane
update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to
transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
2026-04-21 13:45:29 -07:00
Kpa-clawbot 0ca559e348 fix(#866): per-observation children in expanded packet groups (#880)
## Problem
When a packet group is expanded in the Packets table, clicking any child
row pointed the side pane at the same aggregate packet — not the clicked
observation. URL would flip between `?obs=<packet_id>` values instead of
real observation ids.

## Root cause
The expand fetch used `/api/packets?hash=X&limit=20`, which returns ONE
aggregate row keyed by packet.id. Every child therefore carried
`data-value=<packet.id>`.

## Fix
Switch the expand fetch to `/api/packets/<hash>`, which includes the
full `observations[]` array. Build `_children` as `{...pkt, ...obs}` so
each child row gets a unique observation id and observation-level fields
(observer, path, timestamp, snr/rssi) override the aggregate.
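
The child-row construction in miniature (field names are illustrative): observation-level values override the aggregate packet, so each child row carries the unique observation id.

```javascript
const buildChildren = (pkt, observations) =>
  observations.map(obs => ({ ...pkt, ...obs }));

// e.g. a packet with an aggregate 3-hop path and one 1-hop observation:
const pkt = { id: 42, hash: 'af08', path_json: ['aa', 'bb', 'cc'] };
const obs = { id: 7, observer: 'lutin', path_json: ['aa'] };
```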

## Verified live on staging
Tested on multiple packets:
- Click group-header → side pane shows observation 1 of N (first
observer)
- Click child row → pane updates to show THAT observer's details:
observer name, path, timestamp, obs counter (K of N), URL
`?obs=<observation_id>`

## Tests
592 frontend tests pass (no new ones — this is a wiring fix, live
E2E-verified instead).

Closes #866

---------

Co-authored-by: Kpa-clawbot <agent@corescope.local>
Co-authored-by: you <you@example.com>
2026-04-21 13:36:45 -07:00
Kpa-clawbot 1d449eabc7 fix(#872): replace strikethrough with warning badge on unreliable hops (#875)
## Problem

The `hop-unreliable` CSS class applied `text-decoration: line-through`
and `opacity: 0.5`, making hop names look "dead" to operators. This
caused confusion — the repeater itself is fine, only the name→hash
assignment is uncertain.

## Fix

- **CSS**: Removed `line-through` and heavy opacity from
`.hop-unreliable`. Kept subtle `opacity: 0.85` for scanability. Added
`.hop-unreliable-btn` style for the new badge.
- **JS**: Added a `⚠️` warning badge button next to unreliable hops
(similar pattern to existing conflict badges). The badge is always
visible, keyboard-focusable, and has both `title` and `aria-label` with
an informative tooltip explaining geographic inconsistency.
- **Tests**: Added 2 tests in `test-frontend-helpers.js` asserting the
badge renders for unreliable hops and does NOT render for reliable ones,
and that no `line-through` is present.

### Before → After

| Before | After |
|--------|-------|
| ~~NodeName~~ (struck through, 50% opacity) | NodeName ⚠️ (normal text, small warning badge with tooltip) |

## Scope

Resolver logic untouched — #873 covers threshold tuning, #874 covers
picker correctness. No candidate-dropdown UX (follow-up per issue
discussion).

Closes #872

Co-authored-by: you <you@example.com>
2026-04-21 10:54:32 -07:00
Kpa-clawbot 42ff5a291b fix(#866): full-page obs-switch — update hex + path + direction per observation (#870)
## Problem

On `/#/packets/<hash>?obs=<id>`, clicking a different observation
updated summary fields (Observer, SNR/RSSI, Timestamp) but **not** hex
payload or path details. Sister bug to #849 (fixed in #851 for the
detail dialog).

## Root Causes

| Cause | Impact |
|-------|--------|
| `selectPacket` called `renderDetail` without `selectedObservationId` |
Initial render missed observation context on some code paths |
| `ObservationResp` missing `direction`, `resolved_path`, `raw_hex` |
Frontend obs-switch lost direction and resolved_path context |
| `obsPacket` construction omitted `direction` field | Direction not
preserved when switching observations |

## Fix

- `selectPacket` explicitly passes `selectedObservationId` to
`renderDetail`
- `ObservationResp` gains `Direction`, `ResolvedPath`, `RawHex` fields
- `mapSliceToObservations` copies the three new fields
- `obsPacket` spreads include `direction` from the observation

## Tests

7 new tests in `test-frontend-helpers.js`:
- Observation switch updates `effectivePkt` path
- `raw_hex` preserved from packet when obs has none
- `raw_hex` from obs overrides when API provides it
- `direction` carried through observation spread
- `resolved_path` carried through observation spread
- `getPathLenOffset` cross-check for transport routes
- URL hash `?obs=` round-trip encoding

All 584 frontend + 62 filter + 29 aging tests pass. Go server tests
pass.

Fixes #866

Co-authored-by: you <you@example.com>
2026-04-21 10:40:52 -07:00
Kpa-clawbot 99029e41aa ci(#768): publish multi-arch (amd64+arm64) Docker image (#869)
## Problem

`docker pull` on ARM devices fails because the published image is
amd64-only.

## Fix

Enable multi-arch Docker builds via `docker buildx`. **Builder stage
uses native Go cross-compilation; only the runtime-stage `RUN` steps use
QEMU emulation.**

### Changes

| File | Change |
|------|--------|
| `Dockerfile` | Pin builder stage to `--platform=$BUILDPLATFORM` (always native), accept `ARG TARGETOS`/`ARG TARGETARCH` from buildx, set `GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0` on every `go build` |
| `.github/workflows/deploy.yml` | Add `docker/setup-buildx-action@v3` + `docker/setup-qemu-action@v3` (latter needed only for runtime-stage RUNs), set `platforms: linux/amd64,linux/arm64` |

### Build architecture

- **Builder stage** (`FROM --platform=$BUILDPLATFORM
golang:1.22-alpine`) — runs natively on amd64. Go toolchain
cross-compiles the binaries to `$TARGETARCH` via `GOOS/GOARCH`. No
emulation, ~10× faster than emulated builds. Works because
`modernc.org/sqlite` is pure Go (no CGO).
- **Runtime stage** (`FROM alpine:3.20`) — buildx pulls the per-arch
base. RUN steps (`apk add`, `mkdir/chown`, `chmod`) execute inside the
target-arch image, so QEMU is required to interpret arm64 binaries on
the amd64 host. Only a handful of short shell commands run under
emulation, so the QEMU cost is small.

### Verify

After merge, on an ARM device:
```bash
docker pull ghcr.io/kpa-clawbot/corescope:edge
docker inspect ghcr.io/kpa-clawbot/corescope:edge --format '{{.Architecture}}'
# → arm64
```

> First arm64 image appears on the next push to master after this
merges.

Closes #768

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 10:32:02 -07:00
Kpa-clawbot c99aa1dadf fix(#855, #856, #857) + feat(#862): /nodes detail panel + search improvements (#868)
## Summary

Four related `/nodes` page fixes batched to avoid merge conflicts (all
touch `public/nodes.js`).

---

### #855 — "Show all neighbors" link doesn't expand

**Problem:** The "View all N neighbors →" link in the side panel
navigated to the full detail page instead of expanding the truncated
list inline.

**Fix:** Replaced navigation link with an inline "Show all N neighbors
▼" button that re-renders the neighbor table without the limit.

**Acceptance:** Click the button → all neighbors appear in the same
panel without page navigation.

Closes #855

---

### #856 — "Details" button is a no-op

**Problem:** The "🔍 Details" link in the side panel was an `<a>` tag
whose `href` matched the current hash (set by `replaceState`), making
clicks a same-hash no-op.

**Fix:** Changed from `<a>` link to a `<button>` with a direct click
handler that sets `location.hash`, ensuring the router always fires.

**Acceptance:** Click "🔍 Details" → navigates to full-screen node detail
view.

Closes #856

---

### #857 — Recent Packets shows bullets but no content

**Problem:** The "Recent Packets (N)" section could render entries with
missing `hash` or `timestamp`, producing colored dots with no meaningful
content beside them.

**Fix:** Added `.filter(a => a.hash && a.timestamp)` before rendering,
and updated the count header to reflect filtered entries only.

**Acceptance:** Recent Packets section only shows entries with valid
data; count matches visible items.

Closes #857

---

### #862 — Pubkey prefix search on /#/nodes

**Problem:** Search box only matched node names. Operators couldn't
search by pubkey prefix.

**Fix:** Extended search to detect hex-only queries (`/^[0-9a-f]+$/i`)
and match them against pubkey prefix (`startsWith`). Non-hex queries
continue matching name as before. Both are composable in the same input.
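
The matching logic in miniature (helper name assumed): hex-only queries match the pubkey prefix, anything else falls back to name matching.

```javascript
const HEX_RE = /^[0-9a-f]+$/i;

function nodeMatches(node, query) {
  const q = query.trim().toLowerCase();
  if (!q) return true; // empty query matches everything
  if (HEX_RE.test(q)) return node.pubkey.toLowerCase().startsWith(q);
  return node.name.toLowerCase().includes(q);
}
```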

**Acceptance:**
- Typing `3f` filters to nodes whose pubkey starts with `3f`
- Typing `foo` still filters by name
- Search placeholder updated to indicate pubkey support

5 new unit tests added for the search matching logic.

Closes #862

---------

Co-authored-by: you <you@example.com>
2026-04-21 10:24:27 -07:00
Kpa-clawbot 20843979a7 fix(#861): restore sticky table headers on mobile packets page (#867)
## What

Remove a single line in `makeColumnsResizable()` that set
`th.style.position = 'relative'` on every `<th>` except the last,
overriding the CSS `position: sticky` rule from `.data-table th`.

## Why

The column-resize feature added inline `position: relative` to each
header (except the last) to serve as a containing block for the
absolute-positioned resize handles. This inadvertently broke `position:
sticky` on all headers except "Details" (the last column) — visible on
mobile when scrolling the packets table.

`position: sticky` is itself a positioned value and serves as a
containing block for absolute children, so the resize handles work
identically without the override.

## Test

- Open `/#/packets` on mobile (or narrow viewport)
- Scroll down — ALL column headers now remain sticky at the top
- Column resize handles still function correctly on desktop

Fixes #861

Co-authored-by: you <you@example.com>
2026-04-21 09:53:31 -07:00
Kpa-clawbot ea78581eea fix(#858): packets/hour chart — bars rendering + x-axis label decimation (#865)
Two bugs in the Overview tab Packets/Hour chart:

1. **Bars not rendering**: `barW` went negative when `data.length` was
large (e.g. 720 hours for 30-day range), producing zero-width invisible
bars. Fix: `Math.max(1, ...)` floor on bar width.

2. **X-axis labels overlapping**: Every single hour label was emitted
(`02h03h04h...`). Fix: decimate labels based on time range — every 6h
for ≤24h, every 12h for ≤72h, every 24h beyond. Shows `MM-DD` on
midnight boundaries for multi-day ranges.
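
The two fixes in isolation (the step table is the PR's stated values; function names and the `floor` are illustrative):

```javascript
// never 0 or negative, even with 720 bars in a narrow chart
const barWidth = (chartW, n) => Math.max(1, Math.floor(chartW / n));

// hours between emitted x-axis labels, by total time range
function labelStep(rangeHours) {
  if (rangeHours <= 24) return 6;  // every 6h for a day
  if (rangeHours <= 72) return 12; // every 12h up to 3 days
  return 24;                       // daily beyond that
}
```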

**Scope**: Only touches the Overview tab `Packets / Hour` section and
the shared `barChart` floor (one-line change). No modifications to
Topology, Channels, Distance, or other tabs.

Fixes #858

Co-authored-by: you <you@example.com>
2026-04-21 09:53:01 -07:00
Kpa-clawbot b5372d6f73 fix(#859): remove opacity gradient from Per-Observer Reachability rows (#863)
Fixes #859

## What

The "Per-Observer Reachability" and "Best Path to Each Node" sections in
the Topology tab had inline `opacity` styles on each `.reach-ring` row
that decreased with hop count (`1 - hops * 0.06`, floored at 0.3). This
made text progressively darker/unreadable toward the bottom.

## Fix

Removed the inline `opacity:${opacity}` style from both
`renderPerObserverReach()` and `renderBestPath()`. The rows now render
at full opacity with text colors governed by CSS variables as intended.

## Changed
- `public/analytics.js`: removed opacity computation and inline style in
two functions (4 lines removed, 2 added)

## Scope
Only touches Per-Observer Reachability and Best Path rendering. No
changes to Overview, Channels, or shared helpers.

Co-authored-by: you <you@example.com>
2026-04-21 09:52:18 -07:00
Kpa-clawbot 5afed0951b fix(#860): cap channel timeline chart to top 8 by volume (#864)
## What & Why

The "Messages / Hour by Channel" chart on `/#/analytics` Channels tab
rendered all channels in both the SVG and legend, causing legend
overflow when 20+ channels are present.

## Fix

- Sort channels by total message volume (descending)
- Render only the top 8 in the chart and legend
- Show "+N more" in the legend when channels are truncated
- `maxCount` for Y-axis scaling is computed from visible channels only,
so the chart uses its full vertical range
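
The truncation reduced to its essentials (channel shape and helper name are assumed; `TOP_N = 8` per the PR):

```javascript
const TOP_N = 8;

function visibleChannels(channels) {
  const sorted = channels.slice().sort((a, b) => b.total - a.total);
  const visible = sorted.slice(0, TOP_N);
  return {
    visible,
    hidden: Math.max(0, sorted.length - TOP_N), // for the "+N more" legend entry
    // Y-axis max from visible channels only, so the chart fills its range.
    maxCount: Math.max(0, ...visible.map(c => c.total)),
  };
}
```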

Single-file change: `public/analytics.js` — only
`renderChannelTimeline()` modified. No shared helpers touched.

Fixes #860

Co-authored-by: you <you@example.com>
2026-04-21 09:51:52 -07:00
Kpa-clawbot 3630a32310 fix(#852): transport-route path_len offset + var(--muted) → var(--text-muted) (#853)
## Problem

Two pre-existing bugs found during expert review of #851:

### 1. `hashSize` derivation ignores transport route types

`public/packets.js` hardcoded path-length byte at offset 1:
```js
const rawPathByte = pkt.raw_hex ? parseInt(pkt.raw_hex.slice(2, 4), 16) : NaN;
```

For transport routes (`route_type` 0 DIRECT or 3 TRANSPORT_ROUTE_FLOOD),
bytes 1–4 are `next_hop` + `last_hop` and path-length is at offset 5.
Same bug #846 fixed inside the byte-breakdown function.

### 2. `var(--muted)` CSS variable is undefined

Used in 6 places in `public/packets.js`. No `--muted` variable is
defined anywhere in `public/*.css` — only `--text-muted` exists. Text
styled with `var(--muted)` silently falls through to inherited color,
making badges/hints invisible.

## Fix

### Fix 1: transport-route path_len offset
```js
const plOff = (pkt.route_type === 0 || pkt.route_type === 3) ? 5 : 1;
const rawPathByte = pkt.raw_hex ? parseInt(pkt.raw_hex.slice(plOff * 2, plOff * 2 + 2), 16) : NaN;
```

### Fix 2: `var(--muted)` → `var(--text-muted)`
All 6 occurrences replaced.

## Tests (5 new, 572 total)

- `hashSize` extraction for flood route (route_type=1, offset 1)
- `hashSize` extraction for direct transport route (route_type=0, offset
5)
- `hashSize` extraction for transport route flood (route_type=3, offset
5)
- `hashSize` returns null for missing raw_hex
- Regression guard: no `var(--muted)` in any `public/` JS/CSS file

## Changes

- `public/packets.js`: 7 lines changed (1 offset fix + 6 CSS var fixes)
- `test-frontend-helpers.js`: 46 lines added (5 tests)

Closes #852

---------

Co-authored-by: you <you@example.com>
2026-04-21 09:27:16 -07:00
Kpa-clawbot ff05db7367 ci: fix staging smoke test port — read STAGING_GO_HTTP_PORT, not hardcoded 82 (#854)
## Problem
The "Deploy Staging" job's Smoke Test always fails with `Staging
/api/stats did not return engine field`.

Root cause: the step hardcodes `http://localhost:82/api/stats`, but
`docker-compose.staging.yml:21` publishes the container on
`${STAGING_GO_HTTP_PORT:-80}:80`. Default is port 80, not 82. curl gets
ECONNREFUSED, `-sf` swallows the error, `grep -q engine` sees empty
input → failure.

Verified on staging VM: `ss -lntp` shows only `:80` listening; `docker
ps` confirms `0.0.0.0:80->80/tcp`; `curl http://localhost:82` returns
connection refused.

## Fix
Read `STAGING_GO_HTTP_PORT` (same default as compose) so the smoke test
tracks the port the container was actually launched on. Failure message
now includes the resolved port to make future port mismatches
self-diagnosing.

## Tested
Logic only — the curl + grep pattern is unchanged. If any CI env
override sets `STAGING_GO_HTTP_PORT`, the smoke test now follows it.

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 16:23:50 +00:00
Kpa-clawbot 441409203e feat(#845): bimodal_clock severity — surface flaky-RTC nodes instead of hiding as 'No Clock' (#850)
## Problem

Nodes with flaky RTC (firmware emitting interleaved good and nonsense
timestamps) were classified as `no_clock` because the broken samples
poisoned the recent median. Operators lost visibility into these nodes —
they showed "No Clock" even though ~60% of their adverts had valid
timestamps.

Observed on staging: a node with 31K samples where recent adverts
interleave good skew (-6.8s, -13.6s) with firmware nonsense (-56M, -60M
seconds). Under the old logic, median of the mixed window → `no_clock`.

## Solution

New `bimodal_clock` severity tier that surfaces flaky-RTC nodes with
their real (good-sample) skew value.

### Classification order (first match wins)

| Severity | Good Fraction | Description |
|----------|--------------|-------------|
| `no_clock` | < 10% | Essentially no real clock |
| `bimodal_clock` | 10–80% (and bad > 0) | Mixed good/bad — flaky RTC |
| `ok`/`warn`/`critical`/`absurd` | ≥ 80% | Normal classification |

"Good" = `|skew| <= 1 hour`; "bad" = likely uninitialized RTC nonsense.

When `bimodal_clock`, `recentMedianSkewSec` is computed from **good
samples only**, so the dashboard shows the real working-clock value
(e.g. -7s) instead of the broken median.

### Backend changes
- New constant `BimodalSkewThresholdSec = 3600`
- New severity `bimodal_clock` in classification logic
- New API fields: `goodFraction`, `recentBadSampleCount`,
`recentSampleCount`

### Frontend changes
- Amber `Bimodal` badge with tooltip showing bad-sample percentage
- Bimodal nodes render skew value like ok/warn/severe (not the "No
Clock" path)
- Warning line below sparkline: "⚠️ X of last Y adverts had nonsense
timestamps (likely RTC reset)"

### Tests
- 3 new Go unit tests: bimodal (60% good → bimodal_clock), all-bad (→
no_clock), 90%-good (→ ok)
- 1 new frontend test: bimodal badge rendering with tooltip
- Existing `TestReporterScenario_789` passes unchanged

Builds on #789 (recent-window severity).

Closes #845

---------

Co-authored-by: you <you@example.com>
2026-04-21 09:11:14 -07:00
Kpa-clawbot a371d35bfd feat(#847): dedupe Top Longest Hops by pair + add obs count and SNR cues (#848)
## Problem

The "Top 20 Longest Hops" RF analytics card shows the same repeater pair
filling most slots because the query sorts raw hop records by distance
with no pair deduplication. A single long link observed 12+ times
dominates the leaderboard.

## Fix

Dedupe by unordered `(pk1, pk2)` pair. Per pair, keep the max-distance
record and compute reliability metrics:

| Column | Description |
|--------|-------------|
| **Obs** | Total observations of this link |
| **Best SNR** | Maximum SNR seen (dB) |
| **Median SNR** | Median SNR across all observations (dB) |

Tooltip on each row shows the timestamp of the best observation.
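The grouping can be sketched in Go — a minimal sketch with trimmed stand-in types (the real record is the `DistanceHop` struct in `cmd/server/types.go`, and the real query works over `filteredHops`):

```go
package main

import "sort"

// Hop is a trimmed stand-in for one raw hop record.
type Hop struct {
	PK1, PK2 string
	DistMi   float64
	SNR      *float64 // nil models a missing SNR
}

// PairStats is the per-pair aggregate: max-distance record plus
// reliability metrics.
type PairStats struct {
	Best               Hop
	ObsCount           int
	BestSNR, MedianSNR *float64
}

// dedupeByPair groups hops by unordered (pk1, pk2) key, keeps the
// max-distance record per pair, accumulates obs count / best SNR /
// median SNR, then sorts by distance and takes the top 20.
func dedupeByPair(hops []Hop) []PairStats {
	type acc struct {
		best Hop
		snrs []float64
		n    int
	}
	groups := map[string]*acc{}
	var order []string
	for _, h := range hops {
		a, b := h.PK1, h.PK2
		if b < a { // unordered key: (B,A) merges with (A,B)
			a, b = b, a
		}
		key := a + "|" + b
		g, ok := groups[key]
		if !ok {
			g = &acc{best: h}
			groups[key] = g
			order = append(order, key)
		} else if h.DistMi > g.best.DistMi {
			g.best = h
		}
		g.n++
		if h.SNR != nil {
			g.snrs = append(g.snrs, *h.SNR)
		}
	}
	out := make([]PairStats, 0, len(groups))
	for _, key := range order {
		g := groups[key]
		ps := PairStats{Best: g.best, ObsCount: g.n}
		if len(g.snrs) > 0 { // all-nil SNR pairs keep nil best/median
			sort.Float64s(g.snrs)
			best := g.snrs[len(g.snrs)-1]
			med := g.snrs[len(g.snrs)/2]
			ps.BestSNR, ps.MedianSNR = &best, &med
		}
		out = append(out, ps)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Best.DistMi > out[j].Best.DistMi })
	if len(out) > 20 {
		out = out[:20]
	}
	return out
}
```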

### Before
| # | From | To | Distance | Type | SNR | Packet |
|---|------|----|----------|------|-----|--------|
| 1 | NodeX | NodeY | 200 mi | R↔R | 5 dB | abc… |
| 2 | NodeX | NodeY | 199 mi | R↔R | 6 dB | def… |
| 3 | NodeX | NodeY | 198 mi | R↔R | 4 dB | ghi… |

### After
| # | From | To | Distance | Type | Obs | Best SNR | Median SNR | Packet |
|---|------|----|----------|------|-----|----------|------------|--------|
| 1 | NodeX | NodeY | 200 mi | R↔R | 12 | 8.0 dB | 5.2 dB | abc… |
| 2 | NodeA | NodeB | 150 mi | C↔R | 3 | 6.5 dB | 6.5 dB | jkl… |

## Changes

- **`cmd/server/store.go`**: Group `filteredHops` by unordered pair key,
accumulate obs count / best SNR / median SNR per group, sort by max
distance, take top 20
- **`cmd/server/types.go`**: Update `DistanceHop` struct — replace `SNR`
with `BestSnr`, `MedianSnr`, add `ObsCount`
- **`public/analytics.js`**: Replace single SNR column with Obs, Best
SNR, Median SNR; add row tooltip with best observation timestamp
- **`cmd/server/store_tophops_test.go`**: 3 unit tests — basic dedupe,
reverse-pair merge, nil SNR edge case

## Test Coverage

- `TestDedupeTopHopsByPair`: 5 records on pair (A,B) + 1 on (C,D) → 2
results, correct obsCount/dist/bestSnr/medianSnr
- `TestDedupeTopHopsReversePairMerges`: (B,A) and (A,B) merge into one
entry
- `TestDedupeTopHopsNilSNR`: all-nil SNR records → bestSnr and medianSnr
both nil
- Existing `TestAnalyticsRFEndpoint` and `TestAnalyticsRFWithRegion`
still pass

Closes #847

---------

Co-authored-by: you <you@example.com>
2026-04-21 09:09:39 -07:00
Kpa-clawbot 7c01a97178 fix(#849): Packet Detail dialog — show exact clicked observation, not cross-observer aggregate (#851)
## Problem

The Packet Detail dialog summary (Observer, Path, Hops, SNR/RSSI,
Timestamp) used the **aggregated cross-observer view** (`_parsedPath` /
`getParsedPath(pkt)`), which contradicted the byte breakdown after #844.
A packet observed with 2 hops by one observer would show "Path: 7 hops"
in the summary because it merged all observers' paths.

## Fix

The dialog is now **per-observation**:

- `renderDetail` resolves a `currentObservation` from
`selectedObservationId` (set when clicking an observation child row) or
defaults to `observations[0]`
- All summary fields read from the current observation: Observer,
SNR/RSSI, Timestamp, Path, Direction
- Hop count badge comes from `path_len & 0x3F` of the observation's
`raw_hex` (firmware truth, same source as byte breakdown). Cross-checked
against `path_json` length — logs a console warning on mismatch
- **Observations table** rendered inside the detail panel when multiple
observations exist. Clicking a row updates `currentObservation` and
re-renders the summary in-place (no dialog close/reopen)
- `.observation-current` CSS class highlights the selected observation
row

### Cross-observer aggregate (Option B)

A read-only "Cross-observer aggregate" section below the observations
table shows the longest observed path across all observers. This is
**not** the default view — it's always visible as secondary context.

## Tests

8 new tests in `test-frontend-helpers.js`:
- Hop count extraction from raw_hex (normal, direct, transport route
types)
- Inconsistency detection between path_json and raw_hex
- Per-observation field override of aggregated packet fields
- First observation used when no specific observation selected
- Observation row click selects that observation
- Null/missing raw_hex handling

All 572 tests pass (564 frontend + 62 filter + 29 aging).

## Acceptance

- Summary shows per-observation path/hops/SNR/RSSI/timestamp
- Switching observations in the detail updates everything
- Cross-observer aggregate available as secondary section
- Byte breakdown untouched (owned by #846)

## Related

- Closes #849
- Related: #844 (#846) — byte breakdown fix (separate PR, different code
region)

---------

Co-authored-by: you <you@example.com>
2026-04-21 09:08:58 -07:00
Kpa-clawbot f1eea9ee3c fix(#844): Packet Byte Breakdown — derive hop count from path_len, not aggregated _parsedPath (#846)
## Problem

The Packet Detail dialog's "Packet Byte Breakdown" section was using the
aggregated `_parsedPath` (longest path observed across all observers) to
render hop entries, instead of deriving the hop count from the
`path_len` byte in `raw_hex`. This caused:

- Wrong hop count (e.g., "Path (7 hops)" when `raw_hex` only contains 2)
- Hop values from the aggregated path displayed at incorrect byte
offsets
- Subsequent fields (pubkey, timestamp, signature) rendered at wrong
offsets because `off` was advanced by the wrong amount

## Fix

In `buildFieldTable()` (packets.js), the Path section now:

1. Derives `hashCountVal` from `path_len & 0x3F` (firmware truth per
`Packet.h:79-83`)
2. Derives `hashSize` from `(path_len >> 6) + 1`
3. Reads each hop's hex value directly from `raw_hex` at the correct
byte offset
4. Advances `off` by `hashSize * hashCountVal`
5. Skips the Path section entirely when `hashCountVal === 0` (direct
advert)
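Sketched in Go rather than the packets.js original, the `path_len` decode in steps 1–2 and 4 amounts to two bit operations (function names here are illustrative):

```go
package main

// decodePathHeader derives the hop count and per-hop hash size from the
// single path_len byte, per the firmware layout referenced above.
func decodePathHeader(pathLen byte) (hashCount, hashSize int) {
	hashCount = int(pathLen & 0x3F) // low 6 bits: number of hops
	hashSize = int(pathLen>>6) + 1  // high 2 bits: bytes per hop hash (1-4)
	return
}

// pathSectionBytes is how far the byte offset advances past the path
// section; zero hops means the section is skipped entirely.
func pathSectionBytes(pathLen byte) int {
	n, sz := decodePathHeader(pathLen)
	return n * sz
}
```

For the `path_len=0x42` case from the tests: high bits `01` give a 2-byte hash size and low bits give 2 hops, so the offset advances by 4 — which is why the pubkey lands at offset 6 rather than 16.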

The "Path" summary section above the breakdown (which uses the
aggregated path for route visualization) is unchanged — only the byte
breakdown is fixed.

## Tests

3 new tests in `test-frontend-helpers.js`:
- Verifies 2 hops rendered (not 7) when `path_len=0x42` despite 7-hop
aggregated path
- Verifies pubkey offset is 6 (not 16) after a 2-hop path
- Verifies direct advert (`hashCount=0`) skips Path section

Also fixed pre-existing `HopDisplay is not defined` failures in the
`#765` transport offset test sandbox (added mock).

All 559 tests pass.

Closes #844

---------

Co-authored-by: you <you@example.com>
2026-04-21 08:26:12 -07:00
Kpa-clawbot f30e6bef28 qa(plan): reconcile §8.2/§5.3/§6.2 + add §8.7 (Recent Packets readability) (#838)
Doc-only reconciliation of v3.6.0-rc plan with what actually shipped.

## Changes
- **§8.2** — desktop deep link now opens full-screen view
(post-#823/#824), not split panel as the plan still asserted.
- **§5.3** — pin that severity now derives from `recentMedianSkewSec`
(#789), not the all-time `medianSkewSec` — a re-tester needs to know
which field drives the badge.
- **§6.2** — pin the existing observer-graph element location
(`public/analytics.js:2048-2051`).
- **New §8.7** — side-panel "Recent Packets" entries must navigate to a
valid packet detail (DB-fallback per #827) AND text must be readable in
the current theme (explicit color per #829). Both bugs surfaced this
session.

No CI gates.

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 08:01:17 -07:00
Kpa-clawbot 20f456da58 fix(#840): map popup 'Show Neighbors' link does nothing on iOS Safari (#841)
Closes #840

## What
Switch the map-popup "Show Neighbors" link from `<a href="#">` to `<a
href="javascript:void(0)" role="button">` so iOS Safari doesn't navigate
when the document-level click delegation fails to fire.

## Why
On iOS Safari, when a user taps the link inside a Leaflet popup:
- The document-level click delegation at `public/map.js:927` calls
`e.preventDefault()` and triggers `selectReferenceNode`.
- BUT inside a Leaflet popup, `L.DomEvent.disableClickPropagation()` is
internally applied to popup content — on iOS Safari the click sometimes
doesn't bubble to `document`.
- When that happens, the browser's default `<a href="#">` action runs:
  - hash becomes empty (`#`)
- `navigate()` in `app.js:458` sees empty hash → defaults to `'packets'`
- map page is destroyed mid-tap → user perceives "nothing happened" (or
a brief flash if they back-button)

`href="javascript:void(0)"` removes the navigation fall-through
entirely. The `role="button"` keeps a11y semantics, `cursor:pointer`
keeps the visual cue.

## Tested
- Headless Chromium desktop + iPhone 13 emulation: tap fires
`/api/nodes/{pk}/neighbors?min_count=3`, marker count drops from 441 →
44, `#mcNeighbors` checkbox toggles on, URL stays on `/#/map`. Same as
before.
- Frontend helpers: 556/0
- Real iOS Safari fix verification needs a physical-device test
post-deploy

## Out of scope (follow-up)
- Same `<a href="#">` pattern exists for the topright "Close route"
control at `public/map.js:389` — uses `L.DomEvent.preventDefault` so
should work, but worth auditing if the symptom recurs.

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 07:58:55 -07:00
Kpa-clawbot e31e14cae9 qa(plan): apply v3.6.0-rc QA findings (#832/#833/#836) (#837)
Apply v3.6.0-rc QA learnings to the plan.

## Changes
- **§1.1** — 1 GB cap is unrealistic on real DBs without `GOMEMLIMIT` +
bounded cold-load. Raised target to 3 GB and pointed to follow-up
**#836**. (Investigation showed cold-load transient blows past any
sub-2GB cap regardless of `maxMemoryMB` setting because
`runtime.MemStats.NextGC` ignores cgroup ceilings.)
- **§1.4** — `trackedBytes`/`trackedMB` is in-store packet bytes only
and under-reports RSS by ~3–5× (no indexes, caches, runtime overhead,
cgo). Switched the assertion to use `processRSSMB` exposed by **#832**
(PR **#835**).
- **§11.1** — noted the Playwright deep-link E2E assertion was updated
by **#833** (PR **#834**) to match the post-#823 full-screen behavior.

## Why
Three real findings from the QA ops sweep ([§1.4 fail
comment](https://github.com/Kpa-clawbot/CoreScope/issues/809#issuecomment-4286113141)).
Updating the plan so the next run doesn't replay the same
false-fail/false-pass conditions.

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-20 23:29:18 -07:00
Kpa-clawbot bb0f816a6b fix(channels): only show lock for confirmed-encrypted #channel deep links (#825) (#826)
Closes #825

## Root cause
PR #815 added a `#`-prefix branch in `selectChannel` that
unconditionally rendered the lock affordance whenever the channel object
wasn't in the loaded `channels` list. With the encrypted toggle off,
unencrypted channels like `#test` are also absent from the list, so the
new branch wrongly locked them instead of falling through to the REST
fetch.

## Fix
When no stored key matches, refetch `/channels?includeEncrypted=true`
and check `ch.encrypted` before locking. Only render the lock when we
positively know the channel is encrypted; otherwise fall through to the
existing REST messages fetch.

This regresses #815's behavior **only for the unencrypted case** (which
is the bug). The encrypted-no-key (#811) and encrypted-with-stored-key
(#815) paths are preserved.

## Tests
3 new regression tests in `test-frontend-helpers.js`:
- `#test` (unencrypted) deep link → REST fetched, no lock
- `#private` (encrypted, no key) deep link → lock, no REST (#811
preserved)
- `#private` (encrypted, with stored key) deep link → decrypt path (#815
preserved)

`node test-frontend-helpers.js` → 556 passed, 0 failed.

## Perf
One extra REST call per cold deep link to a `#`-named channel that's not
in the toggle-off list — same endpoint already cached via
`CLIENT_TTL.channels`, so subsequent navigations are free.

---------

Co-authored-by: you <you@example.com>
2026-04-20 23:11:20 -07:00
Kpa-clawbot 3f26dc7190 obs: surface real RSS alongside tracked store bytes in /api/stats (#832) (#835)
Closes #832.

## Root cause confirmed
`trackedMB` (`s.trackedBytes` in `store.go`) only sums per-packet
struct + payload sizes recorded at insertion. It excludes the index maps
(`byHash`, `byTxID`, `byNode`, `byObserver`, `byPathHop`,
`byPayloadType`, hash-prefix maps, name lookups), the analytics LRUs
(rfCache/topoCache/hashCache/distCache/subpathCache/chanCache/collisionCache),
WS broadcast queues, and Go runtime overhead. It's "useful packet
bytes," not RSS — typically 3–5× off on staging.

## Fix (Option C from the issue)
Expose four memory fields on `/api/stats` from a single cached
snapshot:

| Field | Source | Semantics |
|---|---|---|
| `storeDataMB` | `s.trackedBytes` | in-store packet bytes; eviction watermark input |
| `goHeapInuseMB` | `runtime.MemStats.HeapInuse` | live Go heap |
| `goSysMB` | `runtime.MemStats.Sys` | total Go-managed memory |
| `processRSSMB` | `/proc/self/status VmRSS` (Linux), falls back to `goSysMB` | what the kernel sees |

`trackedMB` is retained as a deprecated alias for `storeDataMB` so
existing dashboards/QA scripts keep working.

Field invariants are documented on `MemorySnapshot`: `processRSSMB ≥
goSysMB ≥ goHeapInuseMB ≥ storeDataMB` (typical).
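The `processRSSMB` source can be sketched as a bounded `/proc/self/status` read. This is a sketch only — the function name and how it plugs into the cached snapshot are assumptions, not the PR's exact code:

```go
package main

import (
	"bytes"
	"os"
	"strconv"
)

// readVmRSSMB parses VmRSS from /proc/self/status with a bounded read
// and strconv only (no shell-out). ok=false (e.g. on non-Linux) signals
// the caller to fall back to runtime.MemStats.Sys.
func readVmRSSMB() (mb float64, ok bool) {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		return 0, false
	}
	defer f.Close()
	buf := make([]byte, 8192) // bounded: VmRSS appears well within 8 KiB
	n, _ := f.Read(buf)
	for _, line := range bytes.Split(buf[:n], []byte("\n")) {
		if !bytes.HasPrefix(line, []byte("VmRSS:")) {
			continue
		}
		fields := bytes.Fields(line[len("VmRSS:"):]) // e.g. ["12345", "kB"]
		if len(fields) < 1 {
			return 0, false
		}
		kb, err := strconv.ParseFloat(string(fields[0]), 64)
		if err != nil {
			return 0, false
		}
		return kb / 1024, true
	}
	return 0, false
}
```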

## Performance
Single `getMemorySnapshot` call cached for 1s —
`runtime.ReadMemStats` (stop-the-world) and the `/proc/self/status`
read are amortized across burst polling. `/proc` read is bounded to 8
KiB, parsed with `strconv` only — no shell-out, no untrusted input.

`cgoBytesMB` is omitted: the build uses pure-Go
`modernc.org/sqlite`, so there is no cgo allocator to measure.
Documented in code comment.

## Tests
`cmd/server/stats_memory_test.go` asserts presence, types, sign, and
ordering invariants. Avoids the flaky "matches RSS to ±X%" pattern.

```
$ go test ./... -count=1 -timeout 180s
ok  	github.com/corescope/server	19.410s
```

## QA plan
§1.4 now compares `processRSSMB` against procfs RSS (the right
invariant); threshold stays at 0.20.

---------

Co-authored-by: MeshCore Agent <meshcore-agent@openclaw.local>
2026-04-20 23:10:33 -07:00
Kpa-clawbot 886aabf0ae fix(#827): /api/packets/{hash} falls back to DB when in-memory store misses (#831)
Closes #827.

## Problem
`/api/packets/{hash}` only consulted the in-memory `PacketStore`. When a
packet aged out of memory, the handler 404'd — even though SQLite still
had it and `/api/nodes/{pubkey}` `recentAdverts` (which reads from the
DB) was actively surfacing the hash. Net effect: the **Analyze →** link
on older adverts in the node detail page led to a dead "Not found".

Two-store inconsistency: DB has the packet, in-memory doesn't, node
detail surfaces it from DB → packet detail can't serve it.

## Fix
In `handlePacketDetail`:
- After in-memory miss, fall back to `db.GetPacketByHash` (already
existed) for hash lookups, and `db.GetTransmissionByID` for numeric IDs.
- Track when the result came from the DB; if so and the store has no
observations, populate from DB via a new `db.GetObservationsForHash` so
the response shows real observations instead of the misleading
`observation_count = 1` fallback.
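The fallback order can be sketched as follows — a minimal sketch with trimmed stand-in types and function-typed parameters; the real handler works against the project's store and DB interfaces:

```go
package main

// Packet is a trimmed stand-in for the real packet detail payload.
type Packet struct {
	Hash         string
	Observations []string
}

// lookupPacket tries the in-memory store first, falls back to the DB,
// and backfills observations for DB-sourced results.
func lookupPacket(
	storeGet func(hash string) (*Packet, bool),
	dbGetByHash func(hash string) (*Packet, error),
	dbObsForHash func(hash string) ([]string, error),
	hash string,
) (*Packet, bool) {
	if p, ok := storeGet(hash); ok {
		return p, true // store wins when both have it: no double-fetch
	}
	p, err := dbGetByHash(hash)
	if err != nil || p == nil {
		return nil, false // absent from both -> caller 404s
	}
	// Came from the DB: populate observations so the response shows the
	// real list instead of a misleading single-observation fallback.
	if len(p.Observations) == 0 {
		if obs, err := dbObsForHash(hash); err == nil {
			p.Observations = obs
		}
	}
	return p, true
}
```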

## Tests
- `TestPacketDetailFallsBackToDBWhenStoreMisses` — insert a packet
directly into the DB after `store.Load()`, confirm store doesn't have
it, assert 200 + populated observations.
- `TestPacketDetail404WhenAbsentFromBoth` — neither store nor DB → 404
(no false positives).
- `TestPacketDetailPrefersStoreOverDB` — both have it; store result wins
(no double-fetch).
- `TestHandlePacketDetailNoStore` updated: it previously asserted the
old buggy 404 behavior; now asserts the correct DB-fallback 200.

All `go test ./... -run "PacketDetail|Packet|GetPacket"` and the full
`cmd/server` suite pass.

## Out of scope
The `/api/packets?hash=` filter is the live in-memory list endpoint and
intentionally store-only for performance. Not touched here — happy to
file a follow-up if you'd rather harmonise.

## Repro context
Verified against prod with a recently-adverting repeater whose recent
advert hash lives in `recentAdverts` (DB) but had been evicted from the
in-memory store; pre-fix 404, post-fix 200 with full observations.

Co-authored-by: you <you@example.com>
2026-04-20 22:50:01 -07:00
Kpa-clawbot a0fddb50aa fix(#789): severity from recent samples; Theil-Sen drift with outlier rejection (#828)
Closes #789.

## The two bugs

1. **Severity from stale median.** `classifySkew(absMedian)` used the
all-time `MedianSkewSec` over every advert ever recorded for the node. A
repeater that was off for hours and then GPS-corrected stayed pinned to
`absurd` because hundreds of historical bad samples poisoned the median.
Reporter's case: `medianSkewSec: -59,063,561.8` while `lastSkewSec:
-0.8` — current health was perfect, dashboard said catastrophic.

2. **Drift from a single correction jump.** Drift used OLS over every
`(ts, skew)` pair, with no outlier rejection. A single GPS-correction
event (skew jumps millions of seconds in ~30s) dominated the regression
and produced `+1,793,549.9 s/day` — physically nonsense; the existing
`maxReasonableDriftPerDay` cap then zeroed it (better than absurd, but
still useless).

## The two fixes

1. **Recent-window severity.** New field `recentMedianSkewSec` = median
over the last `N=5` samples or last `1h`, whichever is narrower (more
current view). Severity now derives from `abs(recentMedianSkewSec)`.
`MeanSkewSec`, `MedianSkewSec`, `LastSkewSec` are preserved unchanged so
the frontend, fleet view, and any external consumers continue to work.

2. **Theil-Sen drift with outlier filter.** Drift now uses the Theil-Sen
estimator (median of all pairwise slopes — textbook robust regression,
~29% breakdown point) on a series pre-filtered to drop samples whose
skew jumps more than `maxPlausibleSkewJumpSec = 60s` from the previous
accepted point. Real µC drift is fractions of a second per advert; clock
corrections fall well outside. Capped at `theilSenMaxPoints = 200`
(most-recent) so O(n²) stays bounded for chatty nodes.
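The filter-then-Theil-Sen pipeline can be sketched in Go — a minimal sketch assuming the constants named above; the point cap and the surrounding series plumbing are omitted:

```go
package main

import "sort"

const maxPlausibleSkewJumpSec = 60.0

// theilSenDrift drops samples whose skew jumps more than 60s from the
// previously accepted point, then returns the median of all pairwise
// slopes, scaled to seconds of skew per day. ts is unix seconds.
func theilSenDrift(ts, skew []float64) float64 {
	// Outlier filter: a GPS correction jumps by thousands to millions of
	// seconds; real microcontroller drift moves fractions of a second.
	var ft, fs []float64
	for i := range ts {
		if len(fs) > 0 && abs(skew[i]-fs[len(fs)-1]) > maxPlausibleSkewJumpSec {
			continue
		}
		ft, fs = append(ft, ts[i]), append(fs, skew[i])
	}
	// Theil-Sen: median of all pairwise slopes (O(n^2), hence the
	// most-recent-200 cap in the real code).
	var slopes []float64
	for i := 0; i < len(ft); i++ {
		for j := i + 1; j < len(ft); j++ {
			if dt := ft[j] - ft[i]; dt != 0 {
				slopes = append(slopes, (fs[j]-fs[i])/dt)
			}
		}
	}
	if len(slopes) == 0 {
		return 0
	}
	sort.Float64s(slopes)
	return slopes[len(slopes)/2] * 86400 // per-second slope -> s/day
}

func abs(x float64) float64 {
	if x < 0 {
		return -x
	}
	return x
}
```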

## What stays the same

- Epoch-0 / out-of-range advert filter (PR #769).
- `minDriftSamples = 5` floor.
- `maxReasonableDriftPerDay = 86400` hard backstop.
- API shape: only additions (`recentMedianSkewSec`); no fields removed
or renamed.

## Tests

All in `cmd/server/clock_skew_test.go`:

- `TestSeverityUsesRecentNotMedian` — 100 bad samples (-60s) + 5 good
(-1s) → severity = `ok`, historical median still huge.
- `TestDriftRejectsCorrectionJump` — 30 min of clean linear drift + one
1000s jump → drift small (~12 s/day).
- `TestTheilSenMatchesOLSWhenClean` — clean linear data, Theil-Sen
within ~1% of OLS.
- `TestReporterScenario_789` — exact reproducer: 1662 samples, 1657 @
-683 days then 5 @ -1s → severity `ok`, `recentMedianSkewSec ≈ 0`, drift
bounded; legacy `medianSkewSec` preserved as historical context.

`go test ./... -count=1` (cmd/server) and `node
test-frontend-helpers.js` both pass.

---------

Co-authored-by: clawbot <bot@corescope.local>
Co-authored-by: you <you@example.com>
2026-04-20 22:47:10 -07:00
Kpa-clawbot bb09123f34 test(#833): update deep-link Playwright assertion for full-screen desktop view (#834)
Closes #833

## What
Update Playwright E2E assertion for desktop deep link to
`/#/nodes/{pubkey}`. Now expects `.node-fullscreen` to be present
(matches the spec set by PR #824 / issue #823).

## Why
The previous assertion encoded the old pre-#823 behavior — "split panel
on desktop deep link." PR #824 intentionally removed the
`window.innerWidth <= 640` gate so desktop deep links open the
full-screen view (matching the Details link path that #779/#785/#824
ultimately made work). The test failed on every PR that rebased onto
master, blocking `Deploy Staging`.

## Verified
- 1-test diff, no other behavior change
- Mobile-viewport assertions elsewhere already exercise the same
`.node-fullscreen` selector

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 05:37:05 +00:00
Kpa-clawbot 31a0a944f9 fix(#829): node-detail side panel Recent Packets text invisible (#830)
Closes #829

## What
Add explicit `color: var(--text)` to `.advert-info` (and `var(--accent)`
to its links) so the side-panel "Recent Packets" entries stay readable
in all themes.

## Why
`.advert-info` had only `font-size` + `line-height` rules — text
inherited from ancestors. In default light/dark themes the inherited
color happens to differ enough from `--card-bg`. Under custom themes
where they collide, text becomes invisible — only the colored
`.advert-dot` shows. Operator screenshot confirmed the symptom.

Same class of bug as the existing fix at `style.css:660` ("Bug 7 fix:
neighbor table text inherits accent color — force readable text") which
forced `color: var(--text)` on `.node-detail-section .data-table td`.
The advert timeline doesn't use a data-table, so it fell through.

## Verified
- DOM contains correct text — only the rendered color was wrong
- `getComputedStyle(.advert-info).color` previously matched `--card-bg`
under affected themes
- After fix: `.advert-info` resolves to `var(--text)` regardless of
inherited chain
- Frontend helpers: 553/0
- Full-screen `node-full-card` view (separate `.node-activity-item`
markup) unaffected

Co-authored-by: Kpa-clawbot <agent@corescope.local>
2026-04-21 05:34:08 +00:00
efiten cad1f11073 fix: bypass IATA filter for status messages, fill SNR on duplicate obs (#694) (#802)
## Problems

Two independent ingestor bugs identified in #694:

### 1. IATA filter drops status messages from out-of-region observers

The IATA filter ran at the top of `handleMessage()` before any
message-type discrimination. Status messages carrying observer metadata
(`noise_floor`, battery, airtime) from observers outside the configured
IATA regions were silently discarded before `UpsertObserver()` and
`InsertMetrics()` ran.

**Impact:** Observers running `meshcoretomqtt/1.0.8.0` in BFL and LAX —
the only client versions that include `noise_floor` in status messages —
had their health data dropped entirely on prod instances filtering to
SJC.

**Fix:** Moved the IATA filter to the packet path only (after the
`parts[3] == "status"` branch). Status messages now always populate
observer health data regardless of configured region filter.

### 2. `INSERT OR IGNORE` discards SNR/RSSI on late arrival

When the same `(transmission_id, observer_idx, path_json)` observation
arrived twice — first without RF fields, then with — `INSERT OR IGNORE`
silently discarded the SNR/RSSI from the second arrival.

**Fix:** Changed to `ON CONFLICT(...) DO UPDATE SET snr =
COALESCE(excluded.snr, snr), rssi = ..., score = ...`. A later arrival
with SNR fills in a `NULL`; a later arrival without SNR does not
overwrite an existing value.
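The conflict clause and its fill-in semantics can be sketched like this — table and column names are illustrative, not the project's exact schema; the `coalesce` helper just mirrors the SQL behavior in plain Go:

```go
package main

// upsertObservationSQL sketches the described conflict clause.
const upsertObservationSQL = `
INSERT INTO observations (transmission_id, observer_idx, path_json, snr, rssi)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT(transmission_id, observer_idx, path_json) DO UPDATE SET
  snr  = COALESCE(excluded.snr,  snr),
  rssi = COALESCE(excluded.rssi, rssi)`

// coalesce mirrors COALESCE(excluded.v, v): the incoming value wins
// when present; a missing (nil) incoming value never clears a value
// already stored.
func coalesce(incoming, existing *float64) *float64 {
	if incoming != nil {
		return incoming
	}
	return existing
}
```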

## Tests

- `TestIATAFilterDoesNotDropStatusMessages` — verifies BFL status
message is processed when IATA filter includes only SJC, and that BFL
packet is still filtered
- `TestInsertObservationSNRFillIn` — verifies SNR fills in on second
arrival, and is not overwritten by a subsequent null arrival

## Related

Partially addresses #694 (upstream client issue of missing SNR in packet
messages is out of scope)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-20 22:16:01 -07:00
efiten 7f024b7aa7 fix(#673): replace raw JSON text search with byNode index for node packet queries (#803)
## Summary

Fixes #673

- GRP_TXT packets whose message text contains a node's pubkey were
incorrectly counted as packets for that node, inflating packet counts
and type breakdowns
- Two code paths in `store.go` used `strings.Contains` on the full
`DecodedJSON` blob — this matched pubkeys appearing anywhere in the
JSON, including inside chat message text
- `filterPackets` slow path (combined node + other filters): replaced
substring search with a hash-set membership check against
`byNode[nodePK]`
- `GetNodeAnalytics`: removed the full-packet-scan + text search branch
entirely; always uses the `byNode` index (which already covers
`pubKey`/`destPubKey`/`srcPubKey` via structured field indexing)
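The bug and fix contrast can be sketched in Go with trimmed stand-in types (the real index is the store's `byNode` map over structured fields):

```go
package main

import "strings"

// PacketRec is a trimmed stand-in for a stored packet.
type PacketRec struct {
	ID          int
	DecodedJSON string
}

// matchesBuggy is the old behavior: substring search over the whole
// decoded JSON blob, which false-positives on pubkeys inside chat text.
func matchesBuggy(p PacketRec, pubkey string) bool {
	return strings.Contains(p.DecodedJSON, pubkey)
}

// matchesFixed checks membership in the per-node hash set built from
// structured fields (pubKey/destPubKey/srcPubKey) — O(1) and immune to
// pubkeys appearing in message text.
func matchesFixed(byNode map[string]map[int]bool, p PacketRec, pubkey string) bool {
	return byNode[pubkey][p.ID]
}
```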

## Test Plan

- [x] `TestGetNodeAnalytics_ExcludesGRPTXTWithPubkeyInText` — verifies a
GRP_TXT packet with the node's pubkey in its text field is not counted
in that node's analytics
- [x] `TestFilterPackets_NodeQueryDoesNotMatchChatText` — verifies the
combined-filter slow path of `filterPackets` returns only the indexed
ADVERT, not the chat packet

Both tests were written as failing tests against the buggy code and pass
after the fix.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-20 22:15:02 -07:00
Kpa-clawbot ddd18cb12f fix(nodes): Details link opens full-screen on desktop (#823) (#824)
Closes #823

## What
Remove the `window.innerWidth <= 640` gate on the `directNode`
full-screen branch in `init()` so the 🔍 Details link works on desktop.

## Why
- #739 (`e6ace95`) gated full-screen to mobile so desktop **deep links**
would land on the split panel.
- But the same gate broke the **Details link** flow (#779/#785): the
click handler calls `init(app, pubkey)` directly. On desktop the gated
branch was skipped, the list re-rendered with `selectedKey = pubkey`,
and the side panel was already open → no visible change.
- Dropping the gate makes the directNode branch the single, unambiguous
path to full-screen for both the Details link and any deep link.

## Why the desktop split-panel UX is still preserved
Row clicks call `selectNode()`, which uses `history.replaceState` — no
`hashchange` event, no router re-init, no `directNode` set. Only the
Details link handler (which calls `init()` explicitly) and a fresh
deep-link load reach this branch.

## Repro / verify
1. Desktop, viewport > 640px, open `/#/nodes`.
2. Click a node row → split panel opens (unchanged).
3. Click 🔍 Details inside the panel → full-screen single-node view (was
broken; now works).
4. Back button / Escape → back to list view.
5. Paste `/#/nodes/{pubkey}` directly → full-screen on both desktop and
mobile.

## Tests
`node test-frontend-helpers.js` → 553 passed, 0 failed.

Co-authored-by: you <you@example.com>
2026-04-21 05:13:52 +00:00
efiten 997bf190ce fix(mobile): close button accessible + toolbar scrollable (#797) (#805)
## Summary

- **Node detail `top: 60px` → `64px`**: aligns with other overlay
panels, gives proper clearance from the 52px fixed nav bar
- **Mobile bottom sheet `z-index: 1050`**: node detail now renders above
the VCR bar (`z-index: 1000`), close button never obscured
- **Mobile `max-height: 60vh` → `60dvh`**: respects iOS Safari browser
chrome correctly
- **`.live-toggles` horizontal scroll**: `overflow-x: auto; flex-wrap:
nowrap` — all 8 checkboxes reachable via horizontal swipe

Fixes #797

## Test plan

- [x] Mobile portrait (<640px): tap a map node → bottom sheet slides up,
close button (✕) visible and tappable above VCR bar
- [x] Mobile portrait: scroll the live-header toggles horizontally → all
checkboxes reachable
- [x] Desktop/tablet (>640px): node detail panel top-right corner fully
below the nav bar
- [x] Desktop: close button functional, panel hides correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-20 22:10:18 -07:00
Kpa-clawbot 5ff4b75a07 qa: automate §10.1/§10.2 nodeBlacklist test (#822)
Automates QA plan §10.1 (nodeBlacklist hide) and §10.2 (DB retain),
flipping both rows from `human` to `auto`. Stacks on top of #808.

**What**
- New `qa/scripts/blacklist-test.sh` — env-driven harness:
  - Args: `BASELINE_URL TARGET_URL TEST_PUBKEY`

  - Env: `TARGET_SSH_HOST`, `TARGET_SSH_KEY` (default
`/root/.ssh/id_ed25519`), `TARGET_CONFIG_PATH`, `TARGET_CONTAINER`,
optional `TARGET_DB_PATH` / `ADMIN_API_TOKEN`.
  - Edits `nodeBlacklist` on target via remote `jq` (python3 fallback),
atomic move with preserved perms.
  - Restarts container, waits up to 120 s for `/api/stats == 200`.
- §10.1 asserts `/api/nodes/{pk}` is 404 **or** absent from `/api/nodes`
listing, and `/api/topology` does not reference the pubkey.
- §10.2 prefers `/api/admin/transmissions` if `ADMIN_API_TOKEN` set,
else falls back to `sqlite3` inside the container (and host as last
resort).
- **Teardown is mandatory** (`trap … EXIT INT TERM`): removes pubkey,
restarts, verifies the node is visible again. Teardown failures count
toward exit code.
- Exit code = number of failures, reported per step with classified
failure modes (`ssh-failed`, `restart-stuck`, `hide-failed`,
`retain-failed`, `teardown-failed`).
- `qa/plans/v3.6.0-rc.md` §10.1 / §10.2 mode → `auto
(qa/scripts/blacklist-test.sh)`.

**Why**
Manual blacklist verification was the slowest item in the §10 block and
the easiest to get wrong (forgetting teardown leaks state into the next
QA pass). Now it's a single command, public-repo-safe (zero PII /
hardcoded hosts), and the trap guarantees the target is restored.

`bash -n` passes locally. Live run requires staging credentials.

---------

Co-authored-by: meshcore-agent <agent@meshcore>
Co-authored-by: meshcore-agent <meshcore@openclaw.local>
2026-04-21 04:53:55 +00:00
Kpa-clawbot 2460e33f94 fix(#810): /health.recentPackets resolved_path falls back to longest sibling obs (#821)
## What + why

`fetchResolvedPathForTxBest` (used by every API path that fills the
top-level `resolved_path`, including
`/api/nodes/{pk}/health.recentPackets`) picked the observation with the
longest `path_json` and queried SQL for that single obs ID. When the
longest-path obs had `resolved_path` NULL but a shorter sibling had one,
the helper returned nil and the top-level field was dropped — even
though the data exists. QA #809 §2.1 caught it on the health endpoint
because that page surfaces it per-tx.

Fix: keep the LRU-friendly fast path (try the longest-path obs), then
fall back to scanning all observations of the tx and picking the longest
`path_json` that actually has a stored `resolved_path`.
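The fixed lookup order can be sketched with a trimmed stand-in type, using an empty string to model a SQL NULL (the real helper runs SQL queries and an LRU, not an in-memory slice):

```go
package main

// Obs is a trimmed stand-in for one observation of a transmission.
type Obs struct {
	PathJSON     string
	ResolvedPath string // "" models SQL NULL
}

// bestResolvedPath tries the longest-path observation first (the cheap
// fast path), then falls back to scanning for the longest path_json
// that actually has a resolved_path stored.
func bestResolvedPath(obs []Obs) string {
	if len(obs) == 0 {
		return ""
	}
	longest := obs[0]
	for _, o := range obs[1:] {
		if len(o.PathJSON) > len(longest.PathJSON) {
			longest = o
		}
	}
	if longest.ResolvedPath != "" {
		return longest.ResolvedPath // fast path: dominant case
	}
	// Fallback: longest path_json among observations that have data.
	var best Obs
	for _, o := range obs {
		if o.ResolvedPath != "" && len(o.PathJSON) > len(best.PathJSON) {
			best = o
		}
	}
	return best.ResolvedPath
}
```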

## Changes
- `cmd/server/resolved_index.go`: extend `fetchResolvedPathForTxBest`
with a fallback through `fetchResolvedPathsForTx`.
- `cmd/server/issue810_repro_test.go`: regression test — seeds a tx
whose longest-path obs lacks `resolved_path` and a shorter sibling has
it, then asserts `/api/packets` and
`/api/nodes/{pk}/health.recentPackets` agree.

## Tests
`go test ./... -count=1` from `cmd/server` — PASS (full suite, ~19s).

## Perf
Fast path unchanged (single LRU/SQL lookup, dominant case). Fallback
only runs when the longest-path obs has NULL `resolved_path` — one
indexed query per affected tx, bounded by observations-per-tx (small).

Closes #810

---------

Co-authored-by: you <you@example.com>
2026-04-21 04:51:24 +00:00
Kpa-clawbot f701121672 Add qa/ — project-specific QA artifacts for the qa-suite skill (#808)
Adds the CoreScope-side artifacts that pair with the generic [`qa-suite`
skill](https://github.com/Kpa-clawbot/ai-sdlc/pull/1).

## Layout

```
qa/
├── README.md
├── plans/
│   └── v3.6.0-rc.md       # 34-commit test plan since v3.5.1
└── scripts/
    └── api-contract-diff.sh  # CoreScope-tuned API contract diff
```

The skill ships the reusable engine + qa-engineer persona + an example
plan. This PR adds the CoreScope-tuned plan and the CoreScope-tuned
script (correct seed lookups for our `{packets, total}` response shape,
our endpoint list, our `resolved_path` requirement). Read by the parent
agent at runtime.

## How to use

From chat:

- `qa staging` — runs the latest `qa/plans/v*-rc.md` against staging,
files a fresh GH issue with the report
- `qa pr <N>` — uses `qa/plans/pr-<N>.md` if present, else latest RC
plan; comments on the PR
- `qa v3.6.0-rc` — runs that specific plan

The qa-engineer subagent walks every step, classifying each as `auto`
(script) / `browser` (UI assertion) / `human` (manual) / `browser+auto`.
Quantified pass criteria are mandatory — banned phrases: 'visually
aligned' / 'fast' / 'no regression'.

## Plan v3.6.0-rc contents

Covers the 34 commits since v3.5.1:
- §1 Memory & Load (#806, #790, #807) — heap thresholds, sawtooth
pattern
- §2 API contract (#806) — every endpoint that should carry
`resolved_path`, auto-checked by `api-contract-diff.sh`
- §3 Decoder & hashing (#787, #732, #747, #766, #794, #761)
- §4 Channels (#725 series M1–M5)
- §5 Clock skew (#690 series M1–M3)
- §6 Observers (#764, #774)
- §7 Multi-byte hash adopters (#758, #767)
- §8 Frontend nav & deep linking (#739, #740, #779, #785, #776, #745)
- §9 Geofilter (#735, #734)
- §10 Node blacklist (#742)
- §11 Deploy/ops

Release blockers: §1.2, §2, §3. §4 is the headline-feature gate.

## Adding new plans

Per release: copy `plans/v<last>-rc.md` to `plans/v<new>-rc.md` and
update commit-range header, new sections, GO criteria.

Per PR: create `plans/pr-<N>.md` with the bare minimum for that PR's
surface area.

Co-authored-by: you <you@example.com>
2026-04-20 21:46:57 -07:00
Kpa-clawbot d7fe24e2db Fix channel filter on Packets page (UI + API) — #812 (#816)
Closes #812

## Root causes

**Server (`/api/packets?channel=…` returned identical totals):**
The handler in `cmd/server/routes.go` never read the `channel` query
parameter into `PacketQuery`, so it was silently ignored by both the
SQLite path (`db.go::buildTransmissionWhere`) and the in-memory path
(`store.go::filterPackets`). The codebase already had everything else in
place — the `channel_hash` column with an index from #762, decoded
`channel` / `channelHashHex` fields on each packet — it just wasn't
wired up.

**UI (`/#/packets` had no channel filter):**
`public/packets.js` rendered observer / type / time-window / region
filters but no channel control, and didn't read `?channel=` from the
URL.

## Fix

### Server
- New `Channel` field on `PacketQuery`; `handlePackets` reads
`r.URL.Query().Get("channel")`.
- DB path filters by the indexed `channel_hash` column (exact match).
- In-memory path: helper `packetMatchesChannel` matches
`decoded.channel` (plaintext, e.g. `#test`, `public`) or `enc_<HEX>`
against `channelHashHex` for undecryptable GRP_TXT. Uses cached
`ParsedDecoded()` so it's O(1) after the first parse. The fast-path index
guards and the grouped-cache key are updated to include the channel.
- Regression test (`channel_filter_test.go`): `channel=#test` returns ≥1
GRP_TXT packet and fewer than baseline; `channel=nonexistentchannel`
returns `total=0`.

### UI
- New `<select id="fChannel">` populated from `/api/channels`.
- Round-trips via `?channel=…` on the URL hash (read on init, written on
change).
- Pre-seeds the current value as an option so encrypted hashes not in
`/api/channels` still display as selected on reload.
- On change, calls `loadPackets()` so the server-side filter applies
before pagination.

## Perf

Filter adds at most one cached map lookup per packet (DB path uses
indexed column, store path uses `ParsedDecoded()` cache). Staging
baseline 149–190 ms for `?channel=#test&limit=50`; the new comparison is
negligible. Target ≤ 500 ms preserved.

## Tests
`cd cmd/server && go test ./... -count=1 -timeout 120s` → PASS.

---------

Co-authored-by: you <you@example.com>
2026-04-20 21:46:34 -07:00
Kpa-clawbot a9732e64ae fix(nodes): render clock-skew section in side panel (#813) (#814)
Closes #813

## Root cause
The Node detail **side panel** (`renderDetail()`,
`public/nodes.js:1145`) was missing both the `#node-clock-skew`
placeholder div and the `loadClockSkew()` IIFE loader. Those exist only
in the **full-screen** detail page (`loadFullNode`, lines 498 / 632), so
any node opened via deep link or click in the listing — which uses the
side panel — showed no clock-skew UI even when
`/api/nodes/{pk}/clock-skew` returned rich data.

## Fix
Mirror the full-screen template branch and IIFE in `renderDetail`:
- Add `<div class="node-detail-section skew-detail-section"
id="node-clock-skew" style="display:none">` to the side-panel template
(right above Observers).
- Add an async `loadClockSkewPanel()` IIFE after the panel `innerHTML`
is set, using the same severity/badge/drift/sparkline rendering and the
`severity === 'no_clock'` branch the full-screen view uses.

No new helpers — reuses existing window globals (`formatSkew`,
`formatDrift`, `renderSkewBadge`, `renderSkewSparkline`).

## Verification
- Syntax check: `node -c public/nodes.js` ✓
- `node test-frontend-helpers.js` → 553/553 ✓
- Browser: staging runs master so I couldn't validate the deployed UI
yet. Manual repro after deploy:
1. Open `https://analyzer.00id.net/#/nodes`, click any node with a known
skew (e.g. Puppy Solar `a8dde6d7…` shows `-23d 8h` in the listing).
2. The side panel should show a **Clock Skew** section with median skew,
severity badge, drift line, and sparkline.
3. For `severity === 'no_clock'` (e.g. SKCE_RS `14531bd2…`), section
shows "No Clock" instead of skew value.

---------

Co-authored-by: you <you@example.com>
2026-04-20 21:45:42 -07:00
Kpa-clawbot 60be48dc5e fix(channels): lock affordance on deep link to encrypted channel without key (#815)
Closes #811

## What
Deep linking to `/#/channels/%23private` (encrypted channel, no key
configured) now shows the existing 🔒 lock affordance instead of an empty
"No messages in this channel yet" pane.

## Why
`selectChannel` only rendered the lock message inside the `if (ch &&
ch.encrypted)` branch. On a cold deep link:

- `loadChannels` omits encrypted channels unless the toggle is on, so
`ch` is `undefined`.
- The hash isn't `user:`-prefixed, so that branch is skipped too.
- Code falls through to the REST fetch, returns 0 messages, and
`renderMessages` shows the generic empty state.

## Fix
Add a `#`-prefixed-hash branch immediately before the REST fetch:

- If a stored key matches the channel name → decrypt and render.
- Otherwise → reuse the existing 🔒 "encrypted and no decryption key is
configured" message.

## Trace (URL → render)
1. `#/channels/%23private` → `init(routeParam='#private')`
2. `loadChannels()` → `channels` has no `#private` entry (toggle off)
3. `selectChannel('#private')` → `ch` undefined → skips encrypted
branches → **new check fires** → lock message
4. With key stored: same check → `decryptAndRender`

## Validation
- `node test-frontend-helpers.js` → 553 passed, 0 failed
- Manual trace above; change is a 15-line localized guard before the
REST fetch, no hot-path or perf impact.

Co-authored-by: meshcore-agent <agent@corescope.local>
2026-04-20 21:38:59 -07:00
Kpa-clawbot 9e90548637 perf(#800): remove per-StoreTx ResolvedPath, replace with membership index + on-demand decode (#806)
## Summary

Remove `ResolvedPath []*string` field from `StoreTx` and `StoreObs`
structs, replacing it with a compact membership index + on-demand SQL
decode. This eliminates the dominant heap cost identified in profiling
(#791, #799).

**Spec:** #800 (consolidated from two rounds of expert + implementer
review on #799)

Closes #800
Closes #791

## Design

### Removed
- `StoreTx.ResolvedPath []*string`
- `StoreObs.ResolvedPath []*string`
- `TransmissionResp.ResolvedPath`, `ObservationResp.ResolvedPath` struct
fields

### Added
| Structure | Purpose | Est. cost at 1M obs |
|---|---|---:|
| `resolvedPubkeyIndex map[uint64][]int` | FNV-1a(pubkey) → []txID forward index | 50–120 MB |
| `resolvedPubkeyReverse map[int][]uint64` | txID → []hashes for clean removal | ~40 MB |
| `apiResolvedPathLRU` (10K entries) | FIFO cache for on-demand API decode | ~2 MB |

### Decode-window discipline
`resolved_path` JSON decoded once per packet. Consumers fed in order,
temp slice dropped — never stored on struct:
1. `addToByNode` — relay node indexing
2. `touchRelayLastSeen` — relay liveness DB updates
3. `byPathHop` resolved-key entries
4. `resolvedPubkeyIndex` + reverse insert
5. WebSocket broadcast map (raw JSON bytes)
6. Persist batch (raw JSON bytes for SQL UPDATE)

### Collision safety
When the forward index returns candidates, a batched SQL query confirms
exact pubkey presence using `LIKE '%"pubkey"%'` on the `resolved_path`
column.

### Feature flag
`useResolvedPathIndex` (default `true`). Off-path is conservative: all
candidates kept, index not consulted. For one-release rollback safety.

## Files changed

| File | Changes |
|---|---|
| `resolved_index.go` | **New** — index structures, LRU cache, on-demand SQL helpers, collision safety |
| `store.go` | Remove RP fields, decode-window discipline in Load/Ingest, on-demand txToMap/obsToMap/enrichObs, eviction cleanup via SQL, memory accounting update |
| `types.go` | Remove RP fields from TransmissionResp/ObservationResp |
| `routes.go` | Replace `nodeInResolvedPath` with `nodeInResolvedPathViaIndex`, remove RP from mapSlice helpers |
| `neighbor_persist.go` | Refactor backfill: reverse-map removal → forward+reverse insert → LRU invalidation |

## Tests added (27 new)

**Unit:**
- `TestStoreTx_ResolvedPathFieldAbsent` — reflection guard
- `TestResolvedPubkeyIndex_BuildFromLoad` — forward+reverse consistency
- `TestResolvedPubkeyIndex_HashCollision` — SQL collision safety
- `TestResolvedPubkeyIndex_IngestUpdate` — maps reflect new ingests
- `TestResolvedPubkeyIndex_RemoveOnEvict` — clean removal via reverse
map
- `TestResolvedPubkeyIndex_PerObsCoverage` — non-best obs pubkeys
indexed
- `TestAddToByNode_WithoutResolvedPathField`
- `TestTouchRelayLastSeen_WithoutResolvedPathField`
- `TestWebSocketBroadcast_IncludesResolvedPath`
- `TestBackfill_InvalidatesLRU`
- `TestEviction_ByNodeCleanup_OnDemandSQL`
- `TestExtractResolvedPubkeys`, `TestMergeResolvedPubkeys`
- `TestResolvedPubkeyHash_Deterministic`
- `TestLRU_EvictionOnFull`

**Endpoint:**
- `TestPathsThroughNode_NilResolvedPathFallback`
- `TestPacketsAPI_OnDemandResolvedPath`
- `TestPacketsAPI_OnDemandResolvedPath_LRUHit`
- `TestPacketsAPI_OnDemandResolvedPath_Empty`

**Feature flag:**
- `TestFeatureFlag_OffPath_PreservesOldBehavior`
- `TestFeatureFlag_Toggle_NoStateLeak`

**Concurrency:**
- `TestReverseMap_NoLeakOnPartialFailure`
- `TestDecodeWindow_LockHoldTimeBounded`
- `TestLivePolling_LRUUnderConcurrentIngest`

**Regression:**
- `TestRepeaterLiveness_StillAccurate`

**Benchmarks:**
- `BenchmarkLoad_BeforeAfter`
- `BenchmarkResolvedPubkeyIndex_Memory`
- `BenchmarkPathsThroughNode_Latency`
- `BenchmarkLivePolling_UnderIngest`

## Benchmark results

```
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=50K     429ms  103MB   777K allocs
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=500K   4205ms  896MB  7.67M allocs
BenchmarkLoad_BeforeAfter                            65ms   20MB   202K allocs
BenchmarkPathsThroughNode_Latency                   3.9µs    0B      0 allocs
BenchmarkLivePolling_UnderIngest                    5.4µs  545B      7 allocs
```

Key result: the per-obs `[]*string` overhead is completely eliminated. At
1M obs with 3 hops on average, this saves ~72 bytes/obs × 1M ≈ 68 MiB
just from the slice headers + pointers, plus the JSON-decoded string data
(~900 MB at scale per profiling).

## Design choices

- **FNV-1a instead of xxhash**: stdlib availability, no external
dependency. Performance is equivalent for this use case (pubkey strings
are short).
- **FIFO LRU instead of true LRU**: simpler implementation, adequate for
the access pattern (mostly sequential obs IDs from live polling).
- **Grouped packets view omits resolved_path**: cold path, not worth SQL
round-trip per page render.
- **Backfill pending check uses reverse-map presence** instead of
per-obs field: if a tx has any indexed pubkeys, its observations are
considered resolved.


Closes #807

---------

Co-authored-by: you <you@example.com>
2026-04-20 19:55:00 -07:00
63 changed files with 7670 additions and 1158 deletions
+8 -3
@@ -290,6 +290,10 @@ jobs:
if: github.event_name == 'push'
uses: docker/setup-buildx-action@v3
- name: Set up QEMU (arm64 runtime stage)
if: github.event_name == 'push'
uses: docker/setup-qemu-action@v3
- name: Log in to GHCR
if: github.event_name == 'push'
uses: docker/login-action@v3
@@ -317,7 +321,7 @@ jobs:
with:
context: .
push: true
platforms: linux/amd64
platforms: linux/amd64,linux/arm64
tags: ${{ steps.docker-meta.outputs.tags }}
labels: ${{ steps.docker-meta.outputs.labels }}
build-args: |
@@ -432,10 +436,11 @@ jobs:
- name: Smoke test staging API
run: |
if curl -sf http://localhost:82/api/stats | grep -q engine; then
PORT="${STAGING_GO_HTTP_PORT:-80}"
if curl -sf "http://localhost:${PORT}/api/stats" | grep -q engine; then
echo "Staging verified — engine field present ✅"
else
echo "Staging /api/stats did not return engine field"
echo "Staging /api/stats did not return engine field (port ${PORT})"
exit 1
fi
+15 -7
@@ -1,28 +1,35 @@
FROM golang:1.22-alpine AS builder
RUN apk add --no-cache build-base
# Build stage always runs natively on the builder's arch ($BUILDPLATFORM)
# and cross-compiles to $TARGETOS/$TARGETARCH via Go toolchain. No QEMU.
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG APP_VERSION=unknown
ARG GIT_COMMIT=unknown
ARG BUILD_TIME=unknown
# Provided by buildx for multi-arch builds
ARG TARGETOS
ARG TARGETARCH
# Build server
# Build server (pure-Go sqlite — no CGO needed, cross-compiles cleanly)
WORKDIR /build/server
COPY cmd/server/go.mod cmd/server/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/server/ ./
RUN go build -ldflags "-X main.Version=${APP_VERSION} -X main.Commit=${GIT_COMMIT} -X main.BuildTime=${BUILD_TIME}" -o /corescope-server .
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
go build -ldflags "-X main.Version=${APP_VERSION} -X main.Commit=${GIT_COMMIT} -X main.BuildTime=${BUILD_TIME}" -o /corescope-server .
# Build ingestor
WORKDIR /build/ingestor
COPY cmd/ingestor/go.mod cmd/ingestor/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/ingestor/ ./
RUN go build -o /corescope-ingestor .
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
go build -o /corescope-ingestor .
# Build decrypt CLI
WORKDIR /build/decrypt
@@ -30,7 +37,8 @@ COPY cmd/decrypt/go.mod cmd/decrypt/go.sum ./
COPY internal/channel/ ../../internal/channel/
RUN go mod download
COPY cmd/decrypt/ ./
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /corescope-decrypt .
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
go build -ldflags="-s -w" -o /corescope-decrypt .
# Runtime image
FROM alpine:3.20
+59 -10
@@ -11,6 +11,7 @@ import (
"sync/atomic"
"time"
"github.com/meshcore-analyzer/packetpath"
_ "modernc.org/sqlite"
)
@@ -112,7 +113,8 @@ func applySchema(db *sql.DB) error {
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL,
inactive INTEGER DEFAULT 0
inactive INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_seen ON nodes(last_seen);
@@ -189,7 +191,7 @@ func applySchema(db *sql.DB) error {
db.Exec(`DROP VIEW IF EXISTS packets_v`)
_, vErr := db.Exec(`
CREATE VIEW packets_v AS
SELECT o.id, t.raw_hex,
SELECT o.id, COALESCE(o.raw_hex, t.raw_hex) AS raw_hex,
datetime(o.timestamp, 'unixepoch') AS timestamp,
obs.id AS observer_id, obs.name AS observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
@@ -408,6 +410,37 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] dropped_packets table created")
}
// Migration: add raw_hex column to observations (#881)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observations_raw_hex_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding raw_hex column to observations...")
db.Exec(`ALTER TABLE observations ADD COLUMN raw_hex TEXT`)
db.Exec(`INSERT INTO _migrations (name) VALUES ('observations_raw_hex_v1')`)
log.Println("[migration] observations.raw_hex column added")
}
// Migration: add last_packet_at column to observers (#last-packet-at)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observers_last_packet_at_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding last_packet_at column to observers...")
_, alterErr := db.Exec(`ALTER TABLE observers ADD COLUMN last_packet_at TEXT DEFAULT NULL`)
if alterErr != nil && !strings.Contains(alterErr.Error(), "duplicate column") {
return fmt.Errorf("observers last_packet_at ALTER: %w", alterErr)
}
// Backfill: set last_packet_at = last_seen only for observers that actually have
// observation rows (packet_count alone is unreliable — UpsertObserver sets it to 1
// on INSERT even for status-only observers).
res, err := db.Exec(`UPDATE observers SET last_packet_at = last_seen
WHERE last_packet_at IS NULL
AND rowid IN (SELECT DISTINCT observer_idx FROM observations WHERE observer_idx IS NOT NULL)`)
if err == nil {
n, _ := res.RowsAffected()
log.Printf("[migration] Backfilled last_packet_at for %d observers with packets", n)
}
db.Exec(`INSERT INTO _migrations (name) VALUES ('observers_last_packet_at_v1')`)
log.Println("[migration] observers.last_packet_at column added")
}
return nil
}
@@ -433,8 +466,13 @@ func (s *Store) prepareStatements() error {
}
s.stmtInsertObservation, err = s.db.Prepare(`
INSERT OR IGNORE INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp, raw_hex)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(transmission_id, observer_idx, COALESCE(path_json, '')) DO UPDATE SET
snr = COALESCE(excluded.snr, snr),
rssi = COALESCE(excluded.rssi, rssi),
score = COALESCE(excluded.score, score),
raw_hex = COALESCE(excluded.raw_hex, raw_hex)
`)
if err != nil {
return err
@@ -486,7 +524,7 @@ func (s *Store) prepareStatements() error {
return err
}
s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ? WHERE rowid = ?")
s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ?, last_packet_at = ? WHERE rowid = ?")
if err != nil {
return err
}
@@ -565,9 +603,9 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
err := s.stmtGetObserverRowid.QueryRow(data.ObserverID).Scan(&rowid)
if err == nil {
observerIdx = &rowid
// Update observer last_seen on every packet to prevent
// Update observer last_seen and last_packet_at on every packet to prevent
// low-traffic observers from appearing offline (#463)
_, _ = s.stmtUpdateObserverLastSeen.Exec(now, rowid)
_, _ = s.stmtUpdateObserverLastSeen.Exec(now, now, rowid)
}
}
@@ -580,7 +618,7 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
_, err = s.stmtInsertObservation.Exec(
txID, observerIdx, data.Direction,
data.SNR, data.RSSI, data.Score,
data.PathJSON, epochTs,
data.PathJSON, epochTs, nilIfEmpty(data.RawHex),
)
if err != nil {
s.Stats.WriteErrors.Add(1)
@@ -927,11 +965,22 @@ type MQTTPacketMessage struct {
}
// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
// path_json is derived directly from raw_hex header bytes (not decoded.Path.Hops)
// to guarantee the stored path always matches the raw bytes. This matters for
// TRACE packets where decoded.Path.Hops is overwritten with payload hops (#886).
func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID, region string) *PacketData {
now := time.Now().UTC().Format(time.RFC3339)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
pathJSON = string(b)
}
} else if hops, err := packetpath.DecodePathFromRawHex(msg.Raw); err == nil && len(hops) > 0 {
b, _ := json.Marshal(hops)
pathJSON = string(b)
}
+296
@@ -2,6 +2,7 @@ package main
import (
"database/sql"
"encoding/json"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,8 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/meshcore-analyzer/packetpath"
)
func tempDBPath(t *testing.T) string {
@@ -566,6 +569,61 @@ func TestInsertTransmissionUpdatesObserverLastSeen(t *testing.T) {
}
}
func TestLastPacketAtUpdatedOnPacketOnly(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Insert observer via status path — last_packet_at should be NULL
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAt sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if lastPacketAt.Valid {
t.Fatalf("expected last_packet_at to be NULL after UpsertObserver, got %s", lastPacketAt.String)
}
// Insert a packet from this observer — last_packet_at should be set
data := &PacketData{
RawHex: "0A00D69F",
Timestamp: "2026-04-24T12:00:00Z",
ObserverID: "obs1",
Hash: "lastpackettest123456",
RouteType: 2,
PayloadType: 2,
PathJSON: "[]",
DecodedJSON: `{"type":"TXT_MSG"}`,
}
if _, err := s.InsertTransmission(data); err != nil {
t.Fatal(err)
}
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if !lastPacketAt.Valid {
t.Fatal("expected last_packet_at to be non-NULL after InsertTransmission")
}
// InsertTransmission uses `now = data.Timestamp || time.Now()`, so last_packet_at
// should match the packet's Timestamp when provided (same source-of-truth as last_seen).
if lastPacketAt.String != "2026-04-24T12:00:00Z" {
t.Errorf("expected last_packet_at=2026-04-24T12:00:00Z, got %s", lastPacketAt.String)
}
// UpsertObserver again (status path) — last_packet_at should NOT change
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAtAfterStatus sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAtAfterStatus)
if !lastPacketAtAfterStatus.Valid || lastPacketAtAfterStatus.String != lastPacketAt.String {
t.Errorf("UpsertObserver should not change last_packet_at; expected %s, got %v", lastPacketAt.String, lastPacketAtAfterStatus)
}
}
func TestEndToEndIngest(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
@@ -1882,3 +1940,241 @@ func TestExtractObserverMetaNewFields(t *testing.T) {
t.Errorf("RecvErrors = %v, want 3", meta.RecvErrors)
}
}
// TestInsertObservationSNRFillIn verifies that when the same observation is
// received twice — first without SNR, then with SNR — the SNR is filled in
// rather than silently discarded. The unique dedup index is
// (transmission_id, observer_idx, COALESCE(path_json, '')); observer_idx must
// be non-NULL for the conflict to fire (SQLite treats NULL != NULL).
func TestInsertObservationSNRFillIn(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Register the observer so observer_idx is non-NULL (required for dedup).
if err := s.UpsertObserver("pymc-obs1", "PyMC Observer", "SJC", nil); err != nil {
t.Fatal(err)
}
// First arrival: same observer, no SNR/RSSI (e.g. broker replay without RF fields).
data1 := &PacketData{
RawHex: "0A00D69FD7A5A7475DB07337749AE61FA53A4788E976",
Timestamp: "2026-04-20T00:00:00Z",
Hash: "snrfillin0001hash",
RouteType: 1,
ObserverID: "pymc-obs1",
SNR: nil,
RSSI: nil,
}
if _, err := s.InsertTransmission(data1); err != nil {
t.Fatal(err)
}
var snr1, rssi1 *float64
s.db.QueryRow("SELECT snr, rssi FROM observations LIMIT 1").Scan(&snr1, &rssi1)
if snr1 != nil || rssi1 != nil {
t.Fatalf("precondition: first insert should have nil SNR/RSSI, got snr=%v rssi=%v", snr1, rssi1)
}
// Second arrival: same packet, same observer, now WITH SNR/RSSI.
snr := 10.5
rssi := -88.0
data2 := &PacketData{
RawHex: data1.RawHex,
Timestamp: data1.Timestamp,
Hash: data1.Hash,
RouteType: data1.RouteType,
ObserverID: "pymc-obs1",
SNR: &snr,
RSSI: &rssi,
}
if _, err := s.InsertTransmission(data2); err != nil {
t.Fatal(err)
}
var snr2, rssi2 *float64
s.db.QueryRow("SELECT snr, rssi FROM observations LIMIT 1").Scan(&snr2, &rssi2)
if snr2 == nil || *snr2 != snr {
t.Errorf("SNR not filled in by second arrival: got %v, want %v", snr2, snr)
}
if rssi2 == nil || *rssi2 != rssi {
t.Errorf("RSSI not filled in by second arrival: got %v, want %v", rssi2, rssi)
}
// Third arrival: same packet again, SNR absent — must NOT overwrite existing SNR.
data3 := &PacketData{
RawHex: data1.RawHex,
Timestamp: data1.Timestamp,
Hash: data1.Hash,
RouteType: data1.RouteType,
ObserverID: "pymc-obs1",
SNR: nil,
RSSI: nil,
}
if _, err := s.InsertTransmission(data3); err != nil {
t.Fatal(err)
}
var snr3, rssi3 *float64
s.db.QueryRow("SELECT snr, rssi FROM observations LIMIT 1").Scan(&snr3, &rssi3)
if snr3 == nil || *snr3 != snr {
t.Errorf("SNR overwritten by null arrival: got %v, want %v", snr3, snr)
}
if rssi3 == nil || *rssi3 != rssi {
t.Errorf("RSSI overwritten by null arrival: got %v, want %v", rssi3, rssi)
}
}
// TestPerObservationRawHex verifies that two MQTT packets for the same hash
// from different observers store distinct raw_hex per observation (#881).
func TestPerObservationRawHex(t *testing.T) {
store, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer store.Close()
// Register two observers
store.UpsertObserver("obs-A", "Observer A", "", nil)
store.UpsertObserver("obs-B", "Observer B", "", nil)
hash := "abc123def456"
rawA := "c0ffee01"
rawB := "c0ffee0201aa"
dir := "RX"
// First observation from observer A
pdA := &PacketData{
RawHex: rawA,
Hash: hash,
Timestamp: "2026-04-21T10:00:00Z",
ObserverID: "obs-A",
Direction: &dir,
PathJSON: "[]",
}
isNew, err := store.InsertTransmission(pdA)
if err != nil {
t.Fatalf("insert A: %v", err)
}
if !isNew {
t.Fatal("expected new transmission")
}
// Second observation from observer B (same hash, different raw bytes)
pdB := &PacketData{
RawHex: rawB,
Hash: hash,
Timestamp: "2026-04-21T10:00:01Z",
ObserverID: "obs-B",
Direction: &dir,
PathJSON: `["aabb"]`,
}
isNew2, err := store.InsertTransmission(pdB)
if err != nil {
t.Fatalf("insert B: %v", err)
}
if isNew2 {
t.Fatal("expected duplicate transmission")
}
// Query observations and verify per-observation raw_hex
rows, err := store.db.Query(`
SELECT o.raw_hex, obs.id
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
ORDER BY o.id ASC
`)
if err != nil {
t.Fatalf("query: %v", err)
}
defer rows.Close()
type obsResult struct {
rawHex string
observerID string
}
var results []obsResult
for rows.Next() {
var rh, oid sql.NullString
if err := rows.Scan(&rh, &oid); err != nil {
t.Fatal(err)
}
results = append(results, obsResult{
rawHex: rh.String,
observerID: oid.String,
})
}
if len(results) != 2 {
t.Fatalf("expected 2 observations, got %d", len(results))
}
if results[0].rawHex != rawA {
t.Errorf("obs A raw_hex: got %q, want %q", results[0].rawHex, rawA)
}
if results[1].rawHex != rawB {
t.Errorf("obs B raw_hex: got %q, want %q", results[1].rawHex, rawB)
}
if results[0].rawHex == results[1].rawHex {
t.Error("both observations have same raw_hex — should differ")
}
}
// TestBuildPacketData_TraceUsesPayloadHops verifies that TRACE packets use
// payload-decoded route hops in path_json (NOT the raw_hex header SNR bytes).
// Issue #886 / #887.
func TestBuildPacketData_TraceUsesPayloadHops(t *testing.T) {
// TRACE packet: header path has SNR bytes [30,2D,0D,23], but decoded.Path.Hops
// is overwritten to payload hops [67,33,D6,33,67].
rawHex := "2604302D0D2359FEE7B100000000006733D63367"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
// decoded.Path.Hops should be the TRACE-replaced hops (payload hops)
if len(decoded.Path.Hops) != 5 {
t.Fatalf("expected 5 decoded hops, got %d", len(decoded.Path.Hops))
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "test-obs", "TST")
// For TRACE: path_json MUST be the payload-decoded route hops, NOT the SNR bytes
expectedPathJSON := `["67","33","D6","33","67"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s (TRACE must use payload hops)", pd.PathJSON, expectedPathJSON)
}
// Verify that DecodePathFromRawHex returns the SNR bytes (header path) which differ
headerHops, herr := packetpath.DecodePathFromRawHex(rawHex)
if herr != nil {
t.Fatal(herr)
}
headerJSON, _ := json.Marshal(headerHops)
if string(headerJSON) == expectedPathJSON {
t.Error("header path (SNR) should differ from payload hops for TRACE")
}
}
// TestBuildPacketData_NonTracePathJSON verifies non-TRACE packets also derive path from raw_hex.
func TestBuildPacketData_NonTracePathJSON(t *testing.T) {
// A simple ADVERT packet (payload type 0) with 2 hops, hash_size 1
// Header 0x09 = FLOOD(1), ADVERT(2), version 0
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
rawHex := "0902AABB" + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "obs1", "TST")
expectedPathJSON := `["AA","BB"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s", pd.PathJSON, expectedPathJSON)
}
}
+3 -1
@@ -12,6 +12,7 @@ import (
"strings"
"unicode/utf8"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -192,8 +193,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
+104
@@ -11,6 +11,7 @@ import (
"strings"
"testing"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -1822,3 +1823,106 @@ func TestDecodeAdvertWithSignatureValidation(t *testing.T) {
t.Error("SignatureValid should be nil when validation disabled")
}
}
// === Tests for DecodePathFromRawHex (issue #886) ===
func TestDecodePathFromRawHex_HashSize1(t *testing.T) {
// Header byte 0x26 = route_type DIRECT, payload TRACE
// Path byte 0x04 = hash_size 1 (bits 7-6 = 00 → 0+1=1), hash_count 4
// Path bytes: 30 2D 0D 23
raw := "2604302D0D2359FEE7B100000000006733D63367"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"30", "2D", "0D", "23"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize2(t *testing.T) {
// Path byte 0x42 = hash_size 2 (bits 7-6 = 01 → 1+1=2), hash_count 2
// Header 0x09 = FLOOD route (rt=1), payload ADVERT (pt=2)
// Path bytes: AABB CCDD (4 bytes = 2 hops * 2 bytes)
raw := "0942AABBCCDD" + "00000000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AABB", "CCDD"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize3(t *testing.T) {
// Path byte 0x81 = hash_size 3 (bits 7-6 = 10 → 2+1=3), hash_count 1
// Header 0x09 = FLOOD route (rt=1), payload ADVERT
raw := "0981AABBCC" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCC" {
t.Fatalf("got %v, want [AABBCC]", hops)
}
}
func TestDecodePathFromRawHex_HashSize4(t *testing.T) {
// Path byte 0xC1 = hash_size 4 (bits 7-6 = 11 → 3+1=4), hash_count 1
// Header 0x09 = FLOOD route (rt=1)
raw := "09C1AABBCCDD" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCCDD" {
t.Fatalf("got %v, want [AABBCCDD]", hops)
}
}
func TestDecodePathFromRawHex_DirectZeroHops(t *testing.T) {
// Path byte 0x00 = hash_size 1, hash_count 0
// Header 0x0A = DIRECT route (rt=2), payload ADVERT
raw := "0A00" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 0 {
t.Fatalf("got %d hops, want 0", len(hops))
}
}
func TestDecodePathFromRawHex_Transport(t *testing.T) {
// Route type 3 = TRANSPORT_DIRECT → 4 transport code bytes before path byte
// Header 0x27 = route_type 3, payload TRACE
// Transport codes: 1122 3344
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
raw := "2711223344" + "02AABB" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AA", "BB"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
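The path-byte layout these tests exercise can be sketched in isolation. This is an illustrative decoder, not the `packetpath` implementation: it assumes only what the test comments state — bits 7-6 of the path byte encode `hash_size-1`, the low 6 bits encode `hash_count`.

```go
package main

import "fmt"

// decodePathByte splits a path byte into the per-hop hash size and the hop
// count, per the convention in the tests above: bits 7-6 = hash_size-1,
// low 6 bits = hash_count.
func decodePathByte(b byte) (hashSize, hashCount int) {
	hashSize = int(b>>6) + 1 // 00→1, 01→2, 10→3, 11→4 bytes per hop
	hashCount = int(b & 0x3F)
	return
}

func main() {
	// The path bytes used in the tests: 0x42, 0x81, 0xC1, 0x00.
	for _, b := range []byte{0x42, 0x81, 0xC1, 0x00} {
		size, count := decodePathByte(b)
		fmt.Printf("0x%02X: hash_size=%d hash_count=%d\n", b, size, count)
	}
}
```

For 0x42 this prints `hash_size=2 hash_count=2`, matching the 4 path bytes (`AABB`, `CCDD`) consumed in the first test.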
@@ -13,6 +13,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
@@ -207,21 +207,6 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
topic := m.Topic()
parts := strings.Split(topic, "/")
// IATA filter
if len(source.IATAFilter) > 0 && len(parts) > 1 {
region := parts[1]
matched := false
for _, f := range source.IATAFilter {
if f == region {
matched = true
break
}
}
if !matched {
return
}
}
var msg map[string]interface{}
if err := json.Unmarshal(m.Payload(), &msg); err != nil {
return
@@ -233,6 +218,9 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
}
// Status topic: meshcore/<region>/<observer_id>/status
// IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
// is region-independent and should be accepted from all observers regardless of
// which IATA regions are configured for packet ingestion.
if len(parts) >= 4 && parts[3] == "status" {
observerID := parts[2]
name, _ := msg["origin"].(string)
@@ -261,6 +249,21 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
return
}
// IATA filter applies to packet messages only — not status messages above.
if len(source.IATAFilter) > 0 && len(parts) > 1 {
region := parts[1]
matched := false
for _, f := range source.IATAFilter {
if f == region {
matched = true
break
}
}
if !matched {
return
}
}
// Format 1: Raw packet (meshcoretomqtt / Cisien format)
rawHex, _ := msg["raw"].(string)
if rawHex != "" {
@@ -739,3 +739,44 @@ func TestToFloat64WithUnits(t *testing.T) {
}
}
}
// TestIATAFilterDoesNotDropStatusMessages verifies that status messages from
// out-of-region observers are still processed (noise_floor, battery, etc.)
// even when an IATA filter is configured for packet data.
func TestIATAFilterDoesNotDropStatusMessages(t *testing.T) {
store := newTestStore(t)
source := MQTTSource{Name: "test", IATAFilter: []string{"SJC"}}
// BFL observer sends a status message with noise_floor — outside the IATA filter.
msg := &mockMessage{
topic: "meshcore/BFL/bfl-obs1/status",
payload: []byte(`{"origin":"BFLObserver","stats":{"noise_floor":-105.0}}`),
}
handleMessage(store, "test", source, msg, nil, &Config{})
var name string
var noiseFloor *float64
err := store.db.QueryRow("SELECT name, noise_floor FROM observers WHERE id = 'bfl-obs1'").Scan(&name, &noiseFloor)
if err != nil {
t.Fatalf("observer not found after status from out-of-region observer: %v", err)
}
if name != "BFLObserver" {
t.Errorf("name=%q, want BFLObserver", name)
}
if noiseFloor == nil || *noiseFloor != -105.0 {
t.Errorf("noise_floor=%v, want -105.0 — status message was dropped by IATA filter when it should not be", noiseFloor)
}
// Verify that a packet from BFL is still filtered.
rawHex := "0A00D69FD7A5A7475DB07337749AE61FA53A4788E976"
pktMsg := &mockMessage{
topic: "meshcore/BFL/bfl-obs1/packets",
payload: []byte(`{"raw":"` + rawHex + `"}`),
}
handleMessage(store, "test", source, pktMsg, nil, &Config{})
var count int
store.db.QueryRow("SELECT COUNT(*) FROM transmissions").Scan(&count)
if count != 0 {
t.Error("packet from out-of-region BFL should still be filtered by IATA")
}
}
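The filter rule this test pins down — status topics bypass the IATA filter, packet topics must match a configured region — can be isolated as a small predicate. A sketch assuming the `meshcore/<region>/<observer_id>/<kind>` topic shape used above; the helper name is illustrative, not the real handler's.

```go
package main

import (
	"fmt"
	"strings"
)

// iataAllows reports whether a topic should pass the IATA filter.
// Status topics always pass (observer metadata is region-independent);
// packet topics pass only when the region segment matches the filter.
func iataAllows(topic string, filter []string) bool {
	parts := strings.Split(topic, "/")
	if len(parts) >= 4 && parts[3] == "status" {
		return true // status bypasses the filter
	}
	if len(filter) == 0 || len(parts) < 2 {
		return true // no filter configured, or no region segment
	}
	for _, f := range filter {
		if f == parts[1] {
			return true
		}
	}
	return false
}

func main() {
	filter := []string{"SJC"}
	fmt.Println(iataAllows("meshcore/BFL/bfl-obs1/status", filter))  // true
	fmt.Println(iataAllows("meshcore/BFL/bfl-obs1/packets", filter)) // false
	fmt.Println(iataAllows("meshcore/SJC/obs2/packets", filter))     // true
}
```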
@@ -229,7 +229,7 @@ func createTestDBAt(tb testing.TB, dbPath string, numTx int) {
id INTEGER PRIMARY KEY,
transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT
path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -280,7 +280,7 @@ func createTestDBWithObs(tb testing.TB, dbPath string, numTx int) {
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -0,0 +1,57 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
// TestPacketsChannelFilter verifies /api/packets?channel=... actually filters
// (regression test for #812).
func TestPacketsChannelFilter(t *testing.T) {
_, router := setupTestServer(t)
get := func(url string) map[string]interface{} {
req := httptest.NewRequest("GET", url, nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("GET %s: expected 200, got %d", url, w.Code)
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("decode %s: %v", url, err)
}
return body
}
all := get("/api/packets?limit=50")
allTotal := int(all["total"].(float64))
if allTotal < 2 {
t.Fatalf("expected baseline >= 2 packets, got %d", allTotal)
}
test := get("/api/packets?limit=50&channel=%23test")
testTotal := int(test["total"].(float64))
if testTotal == 0 {
t.Fatalf("channel=#test: expected >= 1 match, got 0 (filter ignored?)")
}
if testTotal >= allTotal {
t.Fatalf("channel=#test: expected fewer packets than baseline (%d), got %d", allTotal, testTotal)
}
// Every returned packet must be a CHAN/GRP_TXT (payload_type=5) on #test.
pkts, _ := test["packets"].([]interface{})
for _, p := range pkts {
m := p.(map[string]interface{})
if pt, _ := m["payload_type"].(float64); int(pt) != 5 {
t.Errorf("channel=#test: returned non-GRP_TXT packet (payload_type=%v)", m["payload_type"])
}
}
none := get("/api/packets?limit=50&channel=nonexistentchannel")
if int(none["total"].(float64)) != 0 {
t.Fatalf("channel=nonexistentchannel: expected total=0, got %v", none["total"])
}
}
@@ -16,7 +16,8 @@ const (
SkewWarning SkewSeverity = "warning" // 5 min to 1 hour
SkewCritical SkewSeverity = "critical" // 1 hour to 30 days
SkewAbsurd SkewSeverity = "absurd" // > 30 days
SkewNoClock SkewSeverity = "no_clock" // > 365 days — uninitialized RTC
SkewBimodalClock SkewSeverity = "bimodal_clock" // mixed good+bad recent samples (flaky RTC)
)
// Default thresholds in seconds.
@@ -33,6 +34,38 @@ const (
// maxReasonableDriftPerDay caps drift display. Physically impossible
// drift rates (> 1 day/day) indicate insufficient or outlier samples.
maxReasonableDriftPerDay = 86400.0
// recentSkewWindowCount is the number of most-recent advert samples
// used to derive the "current" skew for severity classification (see
// issue #789). The all-time median is poisoned by historical bad
// samples (e.g. a node that was off and then GPS-corrected); severity
// must reflect current health, not lifetime statistics.
recentSkewWindowCount = 5
// recentSkewWindowSec bounds the recent-window in time as well: only
// samples from the last N seconds count as "recent" for severity.
// The effective window is min(recentSkewWindowCount, samples in 1h).
recentSkewWindowSec = 3600
// bimodalSkewThresholdSec is the absolute skew threshold (1 hour)
// above which a sample is considered "bad" — likely firmware emitting
// a nonsense timestamp from an uninitialized RTC, not real drift.
// Chosen to match the warning/critical severity boundary: real clock
// drift rarely exceeds 1 hour, while epoch-0 RTCs produce ~1.7B sec.
bimodalSkewThresholdSec = 3600.0
// maxPlausibleSkewJumpSec is the largest skew change between
// consecutive samples that we treat as physical drift. Anything larger
// (e.g. a GPS sync that jumps the clock by minutes/days) is rejected
// as an outlier when computing drift. Real microcontroller drift is
// fractions of a second per advert; 60s is a generous safety factor.
maxPlausibleSkewJumpSec = 60.0
// theilSenMaxPoints caps the number of points fed to Theil-Sen
// regression (O(n²) in pairs). For nodes with thousands of samples we
// keep the most-recent points, which are also the most relevant for
// current drift.
theilSenMaxPoints = 200
)
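The "effective window is min(count, 1h)" rule above picks whichever cutoff yields fewer samples. That selection can be expressed on its own; a sketch assuming timestamps sorted ascending, with an illustrative helper name:

```go
package main

import "fmt"

const (
	recentSkewWindowCount = 5
	recentSkewWindowSec   = 3600
)

// recentWindowStart returns the index of the first "recent" sample: the
// later (narrower) of the last-K-samples and last-hour cutoffs wins.
func recentWindowStart(ts []int64) int {
	n := len(ts)
	if n == 0 {
		return 0
	}
	// Index-based window: last K samples.
	startByCount := n - recentSkewWindowCount
	if startByCount < 0 {
		startByCount = 0
	}
	// Time-based window: samples within windowSec of the latest.
	startByTime := n - 1
	for i := n - 1; i >= 0; i-- {
		if ts[n-1]-ts[i] <= recentSkewWindowSec {
			startByTime = i
		} else {
			break
		}
	}
	if startByTime > startByCount {
		return startByTime
	}
	return startByCount
}

func main() {
	// 10 samples, 5 minutes apart: all fall within 1h, so the count cap wins.
	ts := make([]int64, 10)
	for i := range ts {
		ts[i] = int64(i) * 300
	}
	fmt.Println(recentWindowStart(ts)) // 5 → the last 5 samples
}
```

With sparse samples (e.g. two samples 2h apart) the time cap wins instead and only the latest sample counts as recent.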
// classifySkew maps absolute skew (seconds) to a severity level.
@@ -76,6 +109,7 @@ type NodeClockSkew struct {
MeanSkewSec float64 `json:"meanSkewSec"` // corrected mean skew (positive = node ahead)
MedianSkewSec float64 `json:"medianSkewSec"` // corrected median skew
LastSkewSec float64 `json:"lastSkewSec"` // most recent corrected skew
RecentMedianSkewSec float64 `json:"recentMedianSkewSec"` // median across most-recent samples (drives severity, see #789)
DriftPerDaySec float64 `json:"driftPerDaySec"` // linear drift rate (sec/day)
Severity SkewSeverity `json:"severity"`
SampleCount int `json:"sampleCount"`
@@ -83,6 +117,9 @@ type NodeClockSkew struct {
LastAdvertTS int64 `json:"lastAdvertTS"` // most recent advert timestamp
LastObservedTS int64 `json:"lastObservedTS"` // most recent observation timestamp
Samples []SkewSample `json:"samples,omitempty"` // time-series for sparklines
GoodFraction float64 `json:"goodFraction"` // fraction of recent samples with |skew| <= 1h
RecentBadSampleCount int `json:"recentBadSampleCount"` // count of recent samples with |skew| > 1h
RecentSampleCount int `json:"recentSampleCount"` // total recent samples in window
NodeName string `json:"nodeName,omitempty"` // populated in fleet responses
NodeRole string `json:"nodeRole,omitempty"` // populated in fleet responses
}
@@ -419,12 +456,95 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
medSkew := median(allSkews)
meanSkew := mean(allSkews)
absMedian := math.Abs(medSkew)
severity := classifySkew(absMedian)
// For no_clock nodes (uninitialized RTC), skip drift — data is meaningless.
// Severity is derived from RECENT samples only (issue #789). The
// all-time median is poisoned by historical bad data — a node that
// was off for hours and then GPS-corrected can have median = -59M sec
// while its current skew is -0.8s. Operators need severity to reflect
// current health, so they trust the dashboard.
//
// Sort tsSkews by time and take the last recentSkewWindowCount samples
// (or all samples within recentSkewWindowSec of the latest, whichever
// gives FEWER samples — we want the more-current view; a chatty node
// can fit dozens of samples in 1h, in which case the count cap wins).
sort.Slice(tsSkews, func(i, j int) bool { return tsSkews[i].ts < tsSkews[j].ts })
recentSkew := lastSkew
var recentVals []float64
if n := len(tsSkews); n > 0 {
latestTS := tsSkews[n-1].ts
// Index-based window: last K samples.
startByCount := n - recentSkewWindowCount
if startByCount < 0 {
startByCount = 0
}
// Time-based window: samples newer than latestTS - windowSec.
startByTime := n - 1
for i := n - 1; i >= 0; i-- {
if latestTS-tsSkews[i].ts <= recentSkewWindowSec {
startByTime = i
} else {
break
}
}
// Pick the narrower (larger-index) of the two windows — the most
// current view of the node's clock health.
start := startByCount
if startByTime > start {
start = startByTime
}
recentVals = make([]float64, 0, n-start)
for i := start; i < n; i++ {
recentVals = append(recentVals, tsSkews[i].skew)
}
if len(recentVals) > 0 {
recentSkew = median(recentVals)
}
}
// ── Bimodal detection (#845) ─────────────────────────────────────────
// Split recent samples into "good" (|skew| <= 1h, real clock) and
// "bad" (|skew| > 1h, firmware nonsense from uninitialized RTC).
// Classification order (first match wins):
// no_clock — goodFraction < 0.10 (essentially no real clock)
// bimodal_clock — 0.10 <= goodFraction < 0.80 AND badCount > 0
// ok/warn/etc. — goodFraction >= 0.80 (normal, outliers filtered)
var goodSamples []float64
for _, v := range recentVals {
if math.Abs(v) <= bimodalSkewThresholdSec {
goodSamples = append(goodSamples, v)
}
}
recentSampleCount := len(recentVals)
recentBadCount := recentSampleCount - len(goodSamples)
var goodFraction float64
if recentSampleCount > 0 {
goodFraction = float64(len(goodSamples)) / float64(recentSampleCount)
}
var severity SkewSeverity
if goodFraction < 0.10 {
// Essentially no real clock — classify as no_clock regardless
// of the raw skew magnitude.
severity = SkewNoClock
} else if goodFraction < 0.80 && recentBadCount > 0 {
// Bimodal: use median of GOOD samples as the "real" skew.
severity = SkewBimodalClock
if len(goodSamples) > 0 {
recentSkew = median(goodSamples)
}
} else {
// Normal path: if there are good samples, use their median
// (filters out rare outliers in ≥80% good case).
if len(goodSamples) > 0 && recentBadCount > 0 {
recentSkew = median(goodSamples)
}
severity = classifySkew(math.Abs(recentSkew))
}
// For no_clock / bimodal_clock nodes, skip drift when data is unreliable.
var drift float64
if severity != SkewNoClock && len(tsSkews) >= minDriftSamples {
if severity != SkewNoClock && severity != SkewBimodalClock && len(tsSkews) >= minDriftSamples {
drift = computeDrift(tsSkews)
// Cap physically impossible drift rates.
if math.Abs(drift) > maxReasonableDriftPerDay {
@@ -432,25 +552,28 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
}
}
// Build sparkline samples from tsSkews (sorted by time).
sort.Slice(tsSkews, func(i, j int) bool { return tsSkews[i].ts < tsSkews[j].ts })
// Build sparkline samples from tsSkews (already sorted by time above).
samples := make([]SkewSample, len(tsSkews))
for i, p := range tsSkews {
samples[i] = SkewSample{Timestamp: p.ts, SkewSec: round(p.skew, 1)}
}
return &NodeClockSkew{
Pubkey: pubkey,
MeanSkewSec: round(meanSkew, 1),
MedianSkewSec: round(medSkew, 1),
LastSkewSec: round(lastSkew, 1),
DriftPerDaySec: round(drift, 2),
Severity: severity,
SampleCount: totalSamples,
Calibrated: anyCal,
LastAdvertTS: lastAdvTS,
LastObservedTS: lastObsTS,
Samples: samples,
Pubkey: pubkey,
MeanSkewSec: round(meanSkew, 1),
MedianSkewSec: round(medSkew, 1),
LastSkewSec: round(lastSkew, 1),
RecentMedianSkewSec: round(recentSkew, 1),
DriftPerDaySec: round(drift, 2),
Severity: severity,
SampleCount: totalSamples,
Calibrated: anyCal,
LastAdvertTS: lastAdvTS,
LastObservedTS: lastObsTS,
Samples: samples,
GoodFraction: round(goodFraction, 2),
RecentBadSampleCount: recentBadCount,
RecentSampleCount: recentSampleCount,
}
}
@@ -544,7 +667,18 @@ type tsSkewPair struct {
}
// computeDrift estimates linear drift in seconds per day from time-ordered
// (timestamp, skew) pairs using simple linear regression.
// (timestamp, skew) pairs. Issue #789: a single GPS-correction event (huge
// skew jump in seconds) used to dominate ordinary least squares and produce
// absurd drift like 1.7M sec/day. We now:
//
// 1. Drop pairs whose consecutive skew jump exceeds maxPlausibleSkewJumpSec
// (clock corrections, not physical drift). This protects both OLS-style
// consumers and Theil-Sen.
// 2. Use Theil-Sen regression — the slope is the median of all pairwise
// slopes, naturally robust to remaining outliers (breakdown point ~29%).
//
// For very small samples after filtering we fall back to a simple slope
// between first and last calibrated samples.
func computeDrift(pairs []tsSkewPair) float64 {
if len(pairs) < 2 {
return 0
@@ -560,21 +694,55 @@ func computeDrift(pairs []tsSkewPair) float64 {
return 0
}
// Simple linear regression: skew = a + b*t
n := float64(len(pairs))
var sumX, sumY, sumXY, sumX2 float64
for _, p := range pairs {
x := float64(p.ts - pairs[0].ts) // normalize to avoid large numbers
y := p.skew
sumX += x
sumY += y
sumXY += x * y
sumX2 += x * x
// Outlier filter: drop samples where the skew jumps more than
// maxPlausibleSkewJumpSec from the running "stable" baseline.
// We anchor on the first sample, then accept each subsequent point
// that's within the threshold of the most recent accepted point —
// this preserves a slow drift while rejecting correction events.
filtered := make([]tsSkewPair, 0, len(pairs))
filtered = append(filtered, pairs[0])
for i := 1; i < len(pairs); i++ {
prev := filtered[len(filtered)-1]
if math.Abs(pairs[i].skew-prev.skew) <= maxPlausibleSkewJumpSec {
filtered = append(filtered, pairs[i])
}
}
denom := n*sumX2 - sumX*sumX
if denom == 0 {
// If the filter killed too much (e.g. unstable node), fall back to the
// raw series so we at least produce *something* — it'll be capped by
// maxReasonableDriftPerDay downstream.
if len(filtered) < 2 || float64(filtered[len(filtered)-1].ts-filtered[0].ts) < 3600 {
filtered = pairs
}
// Cap point count for Theil-Sen (O(n²) on pairs). Keep most-recent.
if len(filtered) > theilSenMaxPoints {
filtered = filtered[len(filtered)-theilSenMaxPoints:]
}
return theilSenSlope(filtered) * 86400 // sec/sec → sec/day
}
// theilSenSlope returns the Theil-Sen estimator: median of all pairwise
// slopes (yj - yi) / (tj - ti) for i < j. Naturally robust to outliers.
// Pairs must be sorted by timestamp ascending.
func theilSenSlope(pairs []tsSkewPair) float64 {
n := len(pairs)
if n < 2 {
return 0
}
slope := (n*sumXY - sumX*sumY) / denom // seconds of drift per second
return slope * 86400 // convert to seconds per day
// Pre-allocate: n*(n-1)/2 pairs.
slopes := make([]float64, 0, n*(n-1)/2)
for i := 0; i < n; i++ {
for j := i + 1; j < n; j++ {
dt := float64(pairs[j].ts - pairs[i].ts)
if dt <= 0 {
continue
}
slopes = append(slopes, (pairs[j].skew-pairs[i].skew)/dt)
}
}
if len(slopes) == 0 {
return 0
}
return median(slopes)
}
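Theil-Sen's robustness is easy to see on a tiny series: a single wildly corrupted point barely moves the median of pairwise slopes, where an ordinary least-squares fit would be dragged far off. A self-contained sketch (re-declaring the pair type locally; not the production code):

```go
package main

import (
	"fmt"
	"sort"
)

type pair struct {
	ts   int64
	skew float64
}

// theilSen returns the median of all pairwise slopes (sec/sec); for an
// even slope count this takes the upper-middle element rather than
// averaging the middle two.
func theilSen(ps []pair) float64 {
	var slopes []float64
	for i := 0; i < len(ps); i++ {
		for j := i + 1; j < len(ps); j++ {
			dt := float64(ps[j].ts - ps[i].ts)
			if dt > 0 {
				slopes = append(slopes, (ps[j].skew-ps[i].skew)/dt)
			}
		}
	}
	sort.Float64s(slopes)
	return slopes[len(slopes)/2]
}

func main() {
	// True slope 0.001 s/s, with one corrupted point at ts=300.
	ps := []pair{{0, 0}, {100, 0.1}, {200, 0.2}, {300, 500}, {400, 0.4}}
	fmt.Printf("%.4f s/s → %.1f s/day\n", theilSen(ps), theilSen(ps)*86400)
	// prints "0.0010 s/s → 86.4 s/day"
}
```

Six of the ten pairwise slopes are the true 0.001, so the median lands on it; an OLS slope through the same points would be dominated by the 500-second outlier.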
@@ -544,3 +544,413 @@ func TestGetNodeClockSkew_NormalNodeWithDrift(t *testing.T) {
func formatInt64(n int64) string {
return fmt.Sprintf("%d", n)
}
// ── #789: Recent-window severity & robust drift ───────────────────────────────
// TestSeverityUsesRecentNotMedian: 100 historical bad samples (skew=-60s,
// each ~5min apart) followed by 5 fresh good samples (skew=-1s). The all-time
// median stays at the historical -60s, but recent-window severity must
// reflect the current healthy state.
func TestSeverityUsesRecentNotMedian(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
for i := 0; i < 105; i++ {
obsTS := baseObs + int64(i)*300 // 5 min apart
var skew int64 = -60
if i >= 100 {
skew = -1 // good samples at the tail
}
advTS := obsTS + skew
tx := &StoreTx{
Hash: fmt.Sprintf("recent-h%03d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["RECENT"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("RECENT")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewOK {
t.Errorf("severity = %v, want ok (recent samples are healthy)", r.Severity)
}
if math.Abs(r.RecentMedianSkewSec) > 5 {
t.Errorf("recentMedianSkewSec = %v, want ~-1", r.RecentMedianSkewSec)
}
// Historical median should still be retained for context.
if math.Abs(r.MedianSkewSec) < 30 {
t.Errorf("medianSkewSec = %v, expected historical median to remain large", r.MedianSkewSec)
}
}
// TestDriftRejectsCorrectionJump: 30 minutes of clean linear drift, then a
// single 60-second skew jump. The pre-jump slope should win — drift must
// not be catastrophically inflated by the correction event.
func TestDriftRejectsCorrectionJump(t *testing.T) {
pairs := []tsSkewPair{}
// 30 min of stable, ~12 sec/day drift: 1s per 7200s.
for i := 0; i < 12; i++ {
ts := int64(i) * 300
skew := float64(i) * (1.0 / 24.0) // ~0.04s per 5min step → 12 s/day
pairs = append(pairs, tsSkewPair{ts: ts, skew: skew})
}
// Wait an hour, then a single 1000-sec correction jump (clearly outlier).
pairs = append(pairs, tsSkewPair{ts: 3600 + 12*300, skew: 1000})
drift := computeDrift(pairs)
// Without rejection this would be ~ (1000-0)/(end-0) * 86400 = enormous.
if math.Abs(drift) > 100 {
t.Errorf("drift = %v, expected small (~12 s/day), correction jump should be filtered", drift)
}
}
// TestTheilSenMatchesOLSWhenClean: on clean linear data Theil-Sen should
// produce essentially the OLS answer.
func TestTheilSenMatchesOLSWhenClean(t *testing.T) {
// 1 sec drift per hour = 24 sec/day, 20 evenly-spaced samples.
pairs := []tsSkewPair{}
for i := 0; i < 20; i++ {
pairs = append(pairs, tsSkewPair{
ts: int64(i) * 600,
skew: float64(i) * (600.0 / 3600.0),
})
}
drift := computeDrift(pairs)
if math.Abs(drift-24.0) > 0.25 { // ~1%
t.Errorf("drift = %v, want ~24", drift)
}
}
// TestReporterScenario_789: reproduce the exact scenario from issue #789.
// Reporter saw mean=-52565156, median=-59063561, last=-0.8, sample count
// 1662, drift +1793549.9 s/day, severity=absurd. After the fix, severity
// must be ok (recent samples are healthy) and drift must be sane.
func TestReporterScenario_789(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
// 1657 samples with the bad ~-683-day skew (the historical poison),
// then 5 freshly corrected samples at -0.8s — totals 1662.
for i := 0; i < 1662; i++ {
obsTS := baseObs + int64(i)*60 // 1 min apart
var skew int64
if i < 1657 {
skew = -59063561 // ~ -683 days
} else {
skew = -1 // corrected (rounded; reporter saw -0.8)
}
advTS := obsTS + skew
tx := &StoreTx{
Hash: fmt.Sprintf("rep-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["REPNODE"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("REPNODE")
if r == nil {
t.Fatal("nil result")
}
// Severity must reflect current health, not the all-time median.
if r.Severity != SkewOK && r.Severity != SkewWarning {
t.Errorf("severity = %v, want ok/warning (recent samples are healthy)", r.Severity)
}
if math.Abs(r.RecentMedianSkewSec) > 5 {
t.Errorf("recentMedianSkewSec = %v, want near 0", r.RecentMedianSkewSec)
}
// Drift must not be absurd. The historical jump is one event between
// the 1657th and 1658th sample; outlier rejection must contain it.
if math.Abs(r.DriftPerDaySec) > maxReasonableDriftPerDay {
t.Errorf("drift = %v, must be <= cap %v", r.DriftPerDaySec, maxReasonableDriftPerDay)
}
// And it should be close to zero (stable historical + stable corrected).
if math.Abs(r.DriftPerDaySec) > 1000 {
t.Errorf("drift = %v, expected near zero after outlier rejection", r.DriftPerDaySec)
}
// Historical median is preserved as context.
if math.Abs(r.MedianSkewSec) < 1e6 {
t.Errorf("medianSkewSec = %v, expected historical poison preserved as context", r.MedianSkewSec)
}
}
// TestBimodalClock_845: 60% good samples → bimodal_clock severity.
func TestBimodalClock_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
// 6 good samples (-5s each), 4 bad samples (-50000000s each) = 60% good
// Interleave so the recent window (last 5) captures both good and bad.
skews := []int64{-5, -5, -50000000, -5, -50000000, -5, -50000000, -5, -50000000, -5}
for i := 0; i < 10; i++ {
obsTS := baseObs + int64(i)*60
advTS := obsTS + skews[i]
tx := &StoreTx{
Hash: fmt.Sprintf("bimodal-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["BIMODAL"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("BIMODAL")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewBimodalClock {
t.Errorf("severity = %v, want bimodal_clock", r.Severity)
}
if math.Abs(r.RecentMedianSkewSec-(-5)) > 1 {
t.Errorf("recentMedianSkewSec = %v, want ≈ -5 (median of good samples)", r.RecentMedianSkewSec)
}
if r.GoodFraction < 0.5 || r.GoodFraction > 0.7 {
t.Errorf("goodFraction = %v, want ~0.6", r.GoodFraction)
}
if r.RecentBadSampleCount < 1 {
t.Errorf("recentBadSampleCount = %v, want > 0", r.RecentBadSampleCount)
}
}
// TestAllBad_NoClock_845: all samples bad → no_clock.
func TestAllBad_NoClock_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
for i := 0; i < 10; i++ {
obsTS := baseObs + int64(i)*60
advTS := obsTS - 50000000
tx := &StoreTx{
Hash: fmt.Sprintf("allbad-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["ALLBAD"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("ALLBAD")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewNoClock {
t.Errorf("severity = %v, want no_clock", r.Severity)
}
}
// TestMostlyGood_OK_845: 90% good 10% bad → ok (outlier filtered).
func TestMostlyGood_OK_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
// 9 good at -5s, 1 bad at -50000000s
for i := 0; i < 10; i++ {
obsTS := baseObs + int64(i)*60
var skew int64
if i < 9 {
skew = -5
} else {
skew = -50000000
}
advTS := obsTS + skew
tx := &StoreTx{
Hash: fmt.Sprintf("mostly-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["MOSTLY"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("MOSTLY")
if r == nil {
t.Fatal("nil result")
}
// Recent window (last 5) is 80% good — at the >= 0.80 threshold → normal
// classification path, median of good samples = -5s → ok
if r.Severity != SkewOK {
t.Errorf("severity = %v, want ok", r.Severity)
}
if math.Abs(r.RecentMedianSkewSec-(-5)) > 1 {
t.Errorf("recentMedianSkewSec = %v, want ≈ -5", r.RecentMedianSkewSec)
}
}
// TestSingleSample_845: one good sample → ok.
func TestSingleSample_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
obsTS := int64(1700000000)
advTS := obsTS - 30 // 30s skew
tx := &StoreTx{
Hash: "single-0001",
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(advTS) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
ps.mu.Lock()
ps.byNode["SINGLE"] = []*StoreTx{tx}
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("SINGLE")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewOK {
t.Errorf("severity = %v, want ok", r.Severity)
}
if r.RecentSampleCount != 1 {
t.Errorf("recentSampleCount = %d, want 1", r.RecentSampleCount)
}
if r.GoodFraction != 1.0 {
t.Errorf("goodFraction = %v, want 1.0", r.GoodFraction)
}
}
// TestFiftyFifty_Bimodal_845: 50% good / 50% bad → bimodal_clock.
func TestFiftyFifty_Bimodal_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
for i := 0; i < 10; i++ {
obsTS := baseObs + int64(i)*60
var skew int64
if i%2 == 0 {
skew = -10
} else {
skew = -50000000
}
tx := &StoreTx{
Hash: fmt.Sprintf("fifty-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(obsTS+skew) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["FIFTY"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("FIFTY")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewBimodalClock {
t.Errorf("severity = %v, want bimodal_clock", r.Severity)
}
if r.GoodFraction < 0.4 || r.GoodFraction > 0.6 {
t.Errorf("goodFraction = %v, want ~0.5", r.GoodFraction)
}
}
// TestAllGood_OK_845: all samples good → ok, no bimodal.
func TestAllGood_OK_845(t *testing.T) {
ps := NewPacketStore(nil, nil)
pt := 4
baseObs := int64(1700000000)
var txs []*StoreTx
for i := 0; i < 10; i++ {
obsTS := baseObs + int64(i)*60
tx := &StoreTx{
Hash: fmt.Sprintf("allgood-%04d", i),
PayloadType: &pt,
DecodedJSON: `{"payload":{"timestamp":` + formatInt64(obsTS-3) + `}}`,
Observations: []*StoreObs{
{ObserverID: "obs1", Timestamp: time.Unix(obsTS, 0).UTC().Format(time.RFC3339)},
},
}
txs = append(txs, tx)
}
ps.mu.Lock()
ps.byNode["ALLGOOD"] = txs
for _, tx := range txs {
ps.byPayloadType[4] = append(ps.byPayloadType[4], tx)
}
ps.clockSkew.computeInterval = 0
ps.mu.Unlock()
r := ps.GetNodeClockSkew("ALLGOOD")
if r == nil {
t.Fatal("nil result")
}
if r.Severity != SkewOK {
t.Errorf("severity = %v, want ok", r.Severity)
}
if r.GoodFraction != 1.0 {
t.Errorf("goodFraction = %v, want 1.0", r.GoodFraction)
}
if r.RecentBadSampleCount != 0 {
t.Errorf("recentBadSampleCount = %v, want 0", r.RecentBadSampleCount)
}
}
@@ -115,7 +115,8 @@ type NeighborGraphConfig struct {
// PacketStoreConfig controls in-memory packet store limits.
type PacketStoreConfig struct {
RetentionHours float64 `json:"retentionHours"` // max age of packets in hours (0 = unlimited)
MaxMemoryMB int `json:"maxMemoryMB"` // hard memory ceiling in MB (0 = unlimited)
MaxResolvedPubkeyIndexEntries int `json:"maxResolvedPubkeyIndexEntries"` // warning threshold for index size (0 = 5M default)
}
// GeoFilterConfig is an alias for the shared geofilter.Config type.
@@ -47,7 +47,7 @@ func setupTestDBv2(t *testing.T) *DB {
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL, raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -585,12 +585,15 @@ func TestHandlePacketsMultiNodeWithStore(t *testing.T) {
func TestHandlePacketDetailNoStore(t *testing.T) {
_, router := setupNoStoreServer(t)
// With no in-memory store, handlePacketDetail now falls back to the DB
// (#827). The seeded transmissions are present in the DB, so by-hash and
// by-ID lookups succeed; only truly absent IDs return 404.
t.Run("by hash", func(t *testing.T) {
req := httptest.NewRequest("GET", "/api/packets/abc123def4567890", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 404 {
t.Fatalf("expected 404 (no store), got %d: %s", w.Code, w.Body.String())
if w.Code != 200 {
t.Fatalf("expected 200 (DB fallback), got %d: %s", w.Code, w.Body.String())
}
})
@@ -598,8 +601,8 @@ func TestHandlePacketDetailNoStore(t *testing.T) {
req := httptest.NewRequest("GET", "/api/packets/1", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 404 {
t.Fatalf("expected 404 (no store), got %d: %s", w.Code, w.Body.String())
if w.Code != 200 {
t.Fatalf("expected 200 (DB fallback), got %d: %s", w.Code, w.Body.String())
}
})
@@ -2145,13 +2148,6 @@ func setupRichTestDB(t *testing.T) *DB {
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (5, 1, 14.0, -88, '["aa"]', ?)`, recentEpoch)
-// Extra packet sharing subpath "eeff,0011" with hash_with_path_02 above,
-// so that subpath has count>=2 and survives singleton pruning.
-db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
-VALUES ('0140eeff0011', 'hash_shared_subpath', ?, 1, 4, '{"pubKey":"eeff001199887766","name":"TestShared","type":"ADVERT"}')`, recent)
-db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
-VALUES (6, 1, 9.0, -92, '["eeff","0011"]', ?)`, recentEpoch)
return db
}
@@ -2283,11 +2279,14 @@ func TestSubpathPrecomputedIndex(t *testing.T) {
t.Fatal("expected spTotalPaths > 0 after Load()")
}
-// The rich test DB has paths ["aa","bb"], ["aabb","ccdd"],
-// ["eeff","0011","2233"], and ["eeff","0011"]. After singleton pruning,
-// only subpaths with count>=2 survive. "eeff,0011" appears in two packets.
+// The rich test DB has paths ["aa","bb"], ["aabb","ccdd"], and
+// ["eeff","0011","2233"]. That yields 5 unique raw subpaths.
expectedRaw := map[string]int{
-"eeff,0011": 2,
+"aa,bb": 1,
+"aabb,ccdd": 1,
+"eeff,0011": 1,
+"0011,2233": 1,
+"eeff,0011,2233": 1,
}
for key, want := range expectedRaw {
got, ok := store.spIndex[key]
@@ -2297,16 +2296,8 @@ func TestSubpathPrecomputedIndex(t *testing.T) {
t.Errorf("spIndex[%q] = %d, want %d", key, got, want)
}
}
-// Singleton subpaths must have been pruned
-singletons := []string{"aa,bb", "aabb,ccdd", "0011,2233", "eeff,0011,2233"}
-for _, key := range singletons {
-if _, ok := store.spIndex[key]; ok {
-t.Errorf("expected singleton spIndex[%q] to be pruned", key)
-}
-}
-if store.spTotalPaths != 4 {
-t.Errorf("spTotalPaths = %d, want 4", store.spTotalPaths)
+if store.spTotalPaths != 3 {
+t.Errorf("spTotalPaths = %d, want 3", store.spTotalPaths)
}
// Fast-path (no region) and slow-path (with region) must return the
@@ -2334,19 +2325,31 @@ func TestSubpathTxIndexPopulated(t *testing.T) {
store := NewPacketStore(db, nil)
store.Load()
// spIndex must be populated after Load()
if len(store.spIndex) == 0 {
t.Fatal("expected spIndex to be populated after Load()")
// spTxIndex must be populated alongside spIndex
if len(store.spTxIndex) == 0 {
t.Fatal("expected spTxIndex to be populated after Load()")
}
// GetSubpathDetail should return correct match count via scan fallback
// Every key in spIndex must also exist in spTxIndex with matching count
for key, count := range store.spIndex {
txs, ok := store.spTxIndex[key]
if !ok {
t.Errorf("spTxIndex missing key %q that exists in spIndex", key)
continue
}
if len(txs) != count {
t.Errorf("spTxIndex[%q] has %d txs, spIndex count is %d", key, len(txs), count)
}
}
// GetSubpathDetail should return correct match count via indexed lookup
detail := store.GetSubpathDetail([]string{"eeff", "0011"})
if detail == nil {
t.Fatal("expected non-nil detail for existing subpath")
}
matches, _ := detail["totalMatches"].(int)
-if matches != 2 {
-t.Errorf("totalMatches = %d, want 2", matches)
+if matches != 1 {
+t.Errorf("totalMatches = %d, want 1", matches)
}
// Non-existent subpath should return 0 matches
@@ -2394,55 +2397,6 @@ func TestSubpathDetailMixedCaseHops(t *testing.T) {
}
}
-// TestSubpathSingletonDrop verifies that singleton entries are pruned from
-// spIndex while count>=2 entries are preserved.
-func TestSubpathSingletonDrop(t *testing.T) {
-db := setupRichTestDB(t)
-defer db.Close()
-store := NewPacketStore(db, nil)
-store.Load()
-// "eeff,0011" appears in 2 packets — must survive singleton pruning
-if count, ok := store.spIndex["eeff,0011"]; !ok {
-t.Fatal("expected spIndex[\"eeff,0011\"] to survive singleton pruning")
-} else if count != 2 {
-t.Errorf("spIndex[\"eeff,0011\"] = %d, want 2", count)
-}
-// All count==1 entries must be gone
-for key, count := range store.spIndex {
-if count < 2 {
-t.Errorf("spIndex[%q] = %d, singletons should have been pruned", key, count)
-}
-}
-}
-// TestSubpathEmptyDB verifies that the store loads successfully on a DB
-// with no transmissions (no subpaths at all).
-func TestSubpathEmptyDB(t *testing.T) {
-db := setupTestDB(t)
-defer db.Close()
-store := NewPacketStore(db, nil)
-store.Load()
-if len(store.spIndex) != 0 {
-t.Errorf("expected empty spIndex on empty DB, got %d entries", len(store.spIndex))
-}
-if store.spTotalPaths != 0 {
-t.Errorf("expected spTotalPaths=0 on empty DB, got %d", store.spTotalPaths)
-}
-// GetSubpathDetail should still work (return zero matches)
-detail := store.GetSubpathDetail([]string{"aa", "bb"})
-if detail == nil {
-t.Fatal("expected non-nil detail even on empty DB")
-}
-matches, _ := detail["totalMatches"].(int)
-if matches != 0 {
-t.Errorf("totalMatches on empty DB = %d, want 0", matches)
-}
-}
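For orientation, the subpath keys these tests expect come from expanding every contiguous run of two or more hops into one comma-joined key. A minimal sketch of that expansion, assuming nothing about the store's real index builder (subpathKeys is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// subpathKeys expands a hop path into every contiguous subpath of
// length >= 2, joined with commas. For ["eeff","0011","2233"] this
// yields exactly the three keys the tests above expect.
func subpathKeys(path []string) []string {
	var keys []string
	for length := 2; length <= len(path); length++ {
		for start := 0; start+length <= len(path); start++ {
			keys = append(keys, strings.Join(path[start:start+length], ","))
		}
	}
	return keys
}

func main() {
	fmt.Println(subpathKeys([]string{"eeff", "0011", "2233"}))
	// A path with n hops contributes n*(n-1)/2 subpath keys,
	// matching the perSubpathEntryBytes estimate used later in this PR.
}
```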
func TestStoreGetAnalyticsRFCacheHit(t *testing.T) {
db := setupRichTestDB(t)
defer db.Close()
@@ -4365,88 +4319,48 @@ func TestIndexByNodePreCheck(t *testing.T) {
})
}
-// TestIndexByNodeResolvedPath tests that resolved_path entries are indexed in byNode.
+// TestIndexByNodeResolvedPath tests that indexByNode only indexes decoded JSON pubkeys.
+// After #800, resolved_path entries are handled via the decode-window, not indexByNode.
func TestIndexByNodeResolvedPath(t *testing.T) {
store := &PacketStore{
byNode: make(map[string][]*StoreTx),
nodeHashes: make(map[string]map[string]bool),
}
-t.Run("indexes resolved path pubkeys from observations", func(t *testing.T) {
-relayPK := "aabb1122334455ff"
+t.Run("decoded JSON pubkeys still indexed", func(t *testing.T) {
+pk := "aabb1122334455ff"
tx := &StoreTx{
Hash: "rp1",
-DecodedJSON: `{"type":"CHAN","text":"hello"}`, // no pubKey fields
-Observations: []*StoreObs{
-{ResolvedPath: []*string{&relayPK}},
-},
-}
-store.indexByNode(tx)
-if len(store.byNode[relayPK]) != 1 {
-t.Errorf("expected relay pubkey indexed, got %d", len(store.byNode[relayPK]))
-}
-})
-t.Run("skips null entries in resolved path", func(t *testing.T) {
-pk := "cc11dd22ee33ff44"
-tx := &StoreTx{
-Hash: "rp2",
-Observations: []*StoreObs{
-{ResolvedPath: []*string{nil, &pk, nil}},
-},
+DecodedJSON: `{"pubKey":"` + pk + `"}`,
}
store.indexByNode(tx)
if len(store.byNode[pk]) != 1 {
-t.Errorf("expected resolved pubkey indexed, got %d", len(store.byNode[pk]))
-}
-// Verify nil entries didn't create empty-string keys
-if _, exists := store.byNode[""]; exists {
-t.Error("nil/empty resolved path entries should not create byNode entries")
+t.Errorf("expected decoded pubkey indexed, got %d", len(store.byNode[pk]))
}
})
-t.Run("relay-only node appears in byNode", func(t *testing.T) {
-// A packet with no decoded pubkey fields, only a relay in resolved path
-relayOnly := "relay0only0pubkey"
+t.Run("resolved path pubkeys NOT indexed by indexByNode", func(t *testing.T) {
+// After #800, indexByNode only handles decoded JSON fields.
+// Resolved path pubkeys are handled by the decode-window.
tx := &StoreTx{
-Hash: "rp3",
-// No DecodedJSON at all — pure relay
-Observations: []*StoreObs{
-{ResolvedPath: []*string{&relayOnly}},
-},
+Hash: "rp2",
+DecodedJSON: `{"type":"CHAN","text":"hello"}`, // no pubKey fields
}
store.indexByNode(tx)
-if len(store.byNode[relayOnly]) != 1 {
-t.Errorf("expected relay-only node indexed, got %d", len(store.byNode[relayOnly]))
-}
+// No new entries expected since there are no decoded pubkeys
})
-t.Run("dedup between decoded JSON and resolved path", func(t *testing.T) {
+t.Run("dedup within decoded JSON", func(t *testing.T) {
pk := "dedup0test0pk1234"
tx := &StoreTx{
Hash: "rp4",
-DecodedJSON: `{"pubKey":"` + pk + `"}`,
-Observations: []*StoreObs{
-{ResolvedPath: []*string{&pk}},
-},
+DecodedJSON: `{"pubKey":"` + pk + `","destPubKey":"` + pk + `"}`,
}
store.indexByNode(tx)
if len(store.byNode[pk]) != 1 {
t.Errorf("expected dedup to keep 1 entry, got %d", len(store.byNode[pk]))
}
})
-t.Run("indexes tx.ResolvedPath when observations empty", func(t *testing.T) {
-rpPK := "txlevel0resolved1"
-tx := &StoreTx{
-Hash: "rp5",
-ResolvedPath: []*string{&rpPK},
-}
-store.indexByNode(tx)
-if len(store.byNode[rpPK]) != 1 {
-t.Errorf("expected tx-level resolved path indexed, got %d", len(store.byNode[rpPK]))
-}
-})
}
// BenchmarkIndexByNode measures indexByNode performance with and without pubkey
+29 -4
@@ -20,6 +20,7 @@ type DB struct {
path string // filesystem path to the database file
isV3 bool // v3 schema: observer_idx in observations (vs observer_id in v2)
hasResolvedPath bool // observations table has resolved_path column
hasObsRawHex bool // observations table has raw_hex column (#881)
// Channel list cache (60s TTL) — avoids repeated GROUP BY scans (#762)
channelsCacheMu sync.Mutex
@@ -76,6 +77,9 @@ func (db *DB) detectSchema() {
if colName == "resolved_path" {
db.hasResolvedPath = true
}
if colName == "raw_hex" {
db.hasObsRawHex = true
}
}
}
}
@@ -166,6 +170,7 @@ type Observer struct {
BatteryMv *int `json:"battery_mv"`
UptimeSecs *int64 `json:"uptime_secs"`
NoiseFloor *float64 `json:"noise_floor"`
LastPacketAt *string `json:"last_packet_at"`
}
// Transmission represents a row from the transmissions table.
@@ -384,6 +389,7 @@ type PacketQuery struct {
Until string
Region string
Node string
Channel string // channel_hash filter (#812). Plain names like "#test"/"public" or "enc_<HEX>" for encrypted
Order string // ASC or DESC
ExpandObservations bool // when true, include observation sub-maps in txToMap output
}
@@ -620,6 +626,11 @@ func (db *DB) buildTransmissionWhere(q PacketQuery) ([]string, []interface{}) {
where = append(where, "t.decoded_json LIKE ?")
args = append(args, "%"+pk+"%")
}
if q.Channel != "" {
// channel_hash column is indexed for payload_type = 5; filter is exact match.
where = append(where, "t.channel_hash = ?")
args = append(args, q.Channel)
}
if q.Observer != "" {
ids := strings.Split(q.Observer, ",")
placeholders := strings.Repeat("?,", len(ids))
@@ -686,6 +697,20 @@ func (db *DB) GetPacketByHash(hash string) (map[string]interface{}, error) {
return nil, nil
}
// GetObservationsForHash returns all observations for the transmission with
// the given content hash. Used as a fallback by the packet-detail handler
// when the in-memory PacketStore has pruned the entry but the DB still has it.
func (db *DB) GetObservationsForHash(hash string) []map[string]interface{} {
var txID int
err := db.conn.QueryRow("SELECT id FROM transmissions WHERE hash = ?",
strings.ToLower(hash)).Scan(&txID)
if err != nil {
return nil
}
obsByTx := db.getObservationsForTransmissions([]int{txID})
return obsByTx[txID]
}
// GetNodes returns filtered, paginated node list.
func (db *DB) GetNodes(limit, offset int, role, search, before, lastHeard, sortBy, region string) ([]map[string]interface{}, int, map[string]int, error) {
@@ -948,7 +973,7 @@ func (db *DB) getObservationsForTransmissions(txIDs []int) map[int][]map[string]
// GetObservers returns all observers sorted by last_seen DESC.
func (db *DB) GetObservers() ([]Observer, error) {
-rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers ORDER BY last_seen DESC")
+rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers ORDER BY last_seen DESC")
if err != nil {
return nil, err
}
@@ -959,7 +984,7 @@ func (db *DB) GetObservers() ([]Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
-if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor); err != nil {
+if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt); err != nil {
continue
}
if batteryMv.Valid {
@@ -982,8 +1007,8 @@ func (db *DB) GetObserverByID(id string) (*Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
-err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers WHERE id = ?", id).
-Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor)
+err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE id = ?", id).
+Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt)
if err != nil {
return nil, err
}
+110 -4
@@ -48,7 +48,8 @@ func setupTestDB(t *testing.T) *DB {
radio TEXT,
battery_mv INTEGER,
uptime_secs INTEGER,
-noise_floor REAL
+noise_floor REAL,
+last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
@@ -74,7 +75,8 @@ func setupTestDB(t *testing.T) *DB {
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
-resolved_path TEXT
+resolved_path TEXT,
+raw_hex TEXT
);
CREATE TABLE IF NOT EXISTS observer_metrics (
@@ -354,6 +356,10 @@ func TestGetObservers(t *testing.T) {
if observers[0].ID != "obs1" {
t.Errorf("expected obs1 first (most recent), got %s", observers[0].ID)
}
// last_packet_at should be nil since seedTestData doesn't set it
if observers[0].LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt for obs1 from seed, got %v", *observers[0].LastPacketAt)
}
}
func TestGetObserverByID(t *testing.T) {
@@ -368,6 +374,48 @@ func TestGetObserverByID(t *testing.T) {
if obs.ID != "obs1" {
t.Errorf("expected obs1, got %s", obs.ID)
}
// Verify last_packet_at is nil by default
if obs.LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt, got %v", *obs.LastPacketAt)
}
}
func TestGetObserverLastPacketAt(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Set last_packet_at for obs1
ts := "2026-04-24T12:00:00Z"
db.conn.Exec(`UPDATE observers SET last_packet_at = ? WHERE id = ?`, ts, "obs1")
// Verify via GetObservers
observers, err := db.GetObservers()
if err != nil {
t.Fatal(err)
}
var obs1 *Observer
for i := range observers {
if observers[i].ID == "obs1" {
obs1 = &observers[i]
break
}
}
if obs1 == nil {
t.Fatal("obs1 not found")
}
if obs1.LastPacketAt == nil || *obs1.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObservers, got %v", ts, obs1.LastPacketAt)
}
// Verify via GetObserverByID
obs, err := db.GetObserverByID("obs1")
if err != nil {
t.Fatal(err)
}
if obs.LastPacketAt == nil || *obs.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObserverByID, got %v", ts, obs.LastPacketAt)
}
}
func TestGetObserverByIDNotFound(t *testing.T) {
@@ -1108,7 +1156,8 @@ func setupTestDBV2(t *testing.T) *DB {
iata TEXT,
last_seen TEXT,
first_seen TEXT,
-packet_count INTEGER DEFAULT 0
+packet_count INTEGER DEFAULT 0,
+last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
@@ -1134,7 +1183,8 @@ func setupTestDBV2(t *testing.T) *DB {
rssi REAL,
score INTEGER,
path_json TEXT,
-timestamp INTEGER NOT NULL
+timestamp INTEGER NOT NULL,
+raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -1975,3 +2025,59 @@ func TestParseWindowDuration(t *testing.T) {
}
}
}
// TestPerObservationRawHexEnrich verifies enrichObs returns per-observation raw_hex
// when available, falling back to transmission raw_hex when NULL (#881).
func TestPerObservationRawHexEnrich(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert observers
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-a', 'Observer A')`)
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-b', 'Observer B')`)
var rowA, rowB int64
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-a'`).Scan(&rowA)
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-b'`).Scan(&rowB)
// Insert transmission with raw_hex
txHex := "deadbeef"
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, 'hash1', '2026-04-21T10:00:00Z')`, txHex)
// Insert two observations: A has its own raw_hex, B has NULL (historical)
obsAHex := "c0ffee01"
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp, raw_hex)
VALUES (1, ?, -5.0, -90.0, '[]', 1745236800, ?)`, rowA, obsAHex)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, ?, -3.0, -85.0, '["aabb"]', 1745236801)`, rowB)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store load: %v", err)
}
tx := store.byHash["hash1"]
if tx == nil {
t.Fatal("transmission not loaded")
}
if len(tx.Observations) < 2 {
t.Fatalf("expected 2 observations, got %d", len(tx.Observations))
}
// Check enriched observations
for _, obs := range tx.Observations {
m := store.enrichObs(obs)
rh, _ := m["raw_hex"].(string)
if obs.RawHex != "" {
// Observer A: should get per-observation raw_hex
if rh != obsAHex {
t.Errorf("obs with own raw_hex: got %q, want %q", rh, obsAHex)
}
} else {
// Observer B: should fall back to transmission raw_hex
if rh != txHex {
t.Errorf("obs without raw_hex: got %q, want %q (tx fallback)", rh, txHex)
}
}
}
}
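The fallback rule this test exercises reduces to a simple preference: use the per-observation raw_hex when present, otherwise inherit the transmission's. A minimal sketch under that assumption (rawHexFor is an illustrative helper; the real enrichObs works on StoreObs/StoreTx fields):

```go
package main

import "fmt"

// rawHexFor mirrors the #881 fallback rule: prefer the per-observation
// raw_hex, and fall back to the transmission-level raw_hex when the
// observation's column is NULL/empty (historical rows).
func rawHexFor(obsRawHex, txRawHex string) string {
	if obsRawHex != "" {
		return obsRawHex
	}
	return txRawHex
}

func main() {
	fmt.Println(rawHexFor("c0ffee01", "deadbeef")) // per-observation value wins
	fmt.Println(rawHexFor("", "deadbeef"))         // falls back to transmission
}
```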
+3 -101
@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -164,8 +165,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
+// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
-return routeType == RouteTransportFlood || routeType == RouteTransportDirect
+return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
@@ -441,106 +443,6 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
}, nil
}
// HexRange represents a labeled byte range for the hex breakdown visualization.
type HexRange struct {
Start int `json:"start"`
End int `json:"end"`
Label string `json:"label"`
}
// Breakdown holds colored byte ranges returned by the packet detail endpoint.
type Breakdown struct {
Ranges []HexRange `json:"ranges"`
}
// BuildBreakdown computes labeled byte ranges for each section of a MeshCore packet.
// The returned ranges are consumed by createColoredHexDump() and buildHexLegend()
// in the frontend (public/app.js).
func BuildBreakdown(hexString string) *Breakdown {
hexString = strings.ReplaceAll(hexString, " ", "")
hexString = strings.ReplaceAll(hexString, "\n", "")
hexString = strings.ReplaceAll(hexString, "\r", "")
buf, err := hex.DecodeString(hexString)
if err != nil || len(buf) < 2 {
return &Breakdown{Ranges: []HexRange{}}
}
var ranges []HexRange
offset := 0
// Byte 0: Header
ranges = append(ranges, HexRange{Start: 0, End: 0, Label: "Header"})
offset = 1
header := decodeHeader(buf[0])
// Bytes 1-4: Transport Codes (TRANSPORT_FLOOD / TRANSPORT_DIRECT only)
if isTransportRoute(header.RouteType) {
if len(buf) < offset+4 {
return &Breakdown{Ranges: ranges}
}
ranges = append(ranges, HexRange{Start: offset, End: offset + 3, Label: "Transport Codes"})
offset += 4
}
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
// Next byte: Path Length (bits 7-6 = hashSize-1, bits 5-0 = hashCount)
ranges = append(ranges, HexRange{Start: offset, End: offset, Label: "Path Length"})
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
pathBytes := hashSize * hashCount
// Path hops
if hashCount > 0 && offset+pathBytes <= len(buf) {
ranges = append(ranges, HexRange{Start: offset, End: offset + pathBytes - 1, Label: "Path"})
}
offset += pathBytes
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
payloadStart := offset
// Payload — break ADVERT into named sub-fields; everything else is one Payload range
if header.PayloadType == PayloadADVERT && len(buf)-payloadStart >= 100 {
ranges = append(ranges, HexRange{Start: payloadStart, End: payloadStart + 31, Label: "PubKey"})
ranges = append(ranges, HexRange{Start: payloadStart + 32, End: payloadStart + 35, Label: "Timestamp"})
ranges = append(ranges, HexRange{Start: payloadStart + 36, End: payloadStart + 99, Label: "Signature"})
appStart := payloadStart + 100
if appStart < len(buf) {
ranges = append(ranges, HexRange{Start: appStart, End: appStart, Label: "Flags"})
appFlags := buf[appStart]
fOff := appStart + 1
if appFlags&0x10 != 0 && fOff+8 <= len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: fOff + 3, Label: "Latitude"})
ranges = append(ranges, HexRange{Start: fOff + 4, End: fOff + 7, Label: "Longitude"})
fOff += 8
}
if appFlags&0x20 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x40 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x80 != 0 && fOff < len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: len(buf) - 1, Label: "Name"})
}
}
} else {
ranges = append(ranges, HexRange{Start: payloadStart, End: len(buf) - 1, Label: "Payload"})
}
return &Breakdown{Ranges: ranges}
}
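The path-length byte that BuildBreakdown decodes packs two fields, as the comment above notes. A standalone sketch of that bit layout (decodePathByte is a hypothetical helper for illustration):

```go
package main

import "fmt"

// decodePathByte splits the MeshCore path-length byte:
// bits 7-6 hold hashSize-1 and bits 5-0 hold hashCount, so the
// path section occupies hashSize*hashCount bytes after this byte.
func decodePathByte(b byte) (hashSize, hashCount, pathBytes int) {
	hashSize = int(b>>6) + 1
	hashCount = int(b & 0x3F)
	return hashSize, hashCount, hashSize * hashCount
}

func main() {
	// 0x01, as used in the breakdown tests: one hop of 1-byte hashes.
	size, count, total := decodePathByte(0x01)
	fmt.Println(size, count, total)
}
```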
// ComputeContentHash computes the SHA-256-based content hash (first 16 hex chars).
// It hashes the payload-type nibble + payload (skipping path bytes) to produce a
// route-independent identifier for the same logical packet. For TRACE packets,
-140
@@ -97,146 +97,6 @@ func TestDecodePacket_FloodHasNoCodes(t *testing.T) {
}
}
func TestBuildBreakdown_InvalidHex(t *testing.T) {
b := BuildBreakdown("not-hex!")
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for invalid hex, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_TooShort(t *testing.T) {
b := BuildBreakdown("11") // 1 byte — no path byte
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for too-short packet, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_FloodNonAdvert(t *testing.T) {
// Header 0x15: route=1/FLOOD, payload=5/GRP_TXT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: FF0011
b := BuildBreakdown("1501AAFFFF00")
labels := rangeLabels(b.Ranges)
expect := []string{"Header", "Path Length", "Path", "Payload"}
if !equalLabels(labels, expect) {
t.Errorf("expected labels %v, got %v", expect, labels)
}
// Verify byte positions
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "Payload", 3, 5)
}
func TestBuildBreakdown_TransportFlood(t *testing.T) {
// Header 0x14: route=0/TRANSPORT_FLOOD, payload=5/GRP_TXT
// TransportCodes: AABBCCDD (4 bytes)
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: EE
// Payload: FF00
b := BuildBreakdown("14AABBCCDD01EEFF00")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Transport Codes", 1, 4)
assertRange(t, b.Ranges, "Path Length", 5, 5)
assertRange(t, b.Ranges, "Path", 6, 6)
assertRange(t, b.Ranges, "Payload", 7, 8)
}
func TestBuildBreakdown_FloodNoHops(t *testing.T) {
// Header 0x15: FLOOD/GRP_TXT; PathByte 0x00: 0 hops; Payload: AABB
b := BuildBreakdown("150000AABB")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
// No Path range since hashCount=0
for _, r := range b.Ranges {
if r.Label == "Path" {
t.Error("expected no Path range for zero-hop packet")
}
}
assertRange(t, b.Ranges, "Payload", 2, 4)
}
func TestBuildBreakdown_AdvertBasic(t *testing.T) {
// Header 0x11: FLOOD/ADVERT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: 100 bytes (PubKey32 + Timestamp4 + Signature64) + Flags=0x02 (repeater, no extras)
pubkey := repeatHex("AB", 32)
ts := "00000000" // 4 bytes
sig := repeatHex("CD", 64)
flags := "02"
hex := "1101AA" + pubkey + ts + sig + flags
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "PubKey", 3, 34)
assertRange(t, b.Ranges, "Timestamp", 35, 38)
assertRange(t, b.Ranges, "Signature", 39, 102)
assertRange(t, b.Ranges, "Flags", 103, 103)
}
func TestBuildBreakdown_AdvertWithLocation(t *testing.T) {
// flags=0x12: hasLocation bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "12" // 0x10 = hasLocation
latBytes := "00000000"
lonBytes := "00000000"
hex := "1101AA" + pubkey + ts + sig + flags + latBytes + lonBytes
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Latitude", 104, 107)
assertRange(t, b.Ranges, "Longitude", 108, 111)
}
func TestBuildBreakdown_AdvertWithName(t *testing.T) {
// flags=0x82: hasName bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "82" // 0x80 = hasName
name := "4E6F6465" // "Node" in hex
hex := "1101AA" + pubkey + ts + sig + flags + name
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Name", 104, 107)
}
// helpers
func rangeLabels(ranges []HexRange) []string {
out := make([]string, len(ranges))
for i, r := range ranges {
out[i] = r.Label
}
return out
}
func equalLabels(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func assertRange(t *testing.T, ranges []HexRange, label string, wantStart, wantEnd int) {
t.Helper()
for _, r := range ranges {
if r.Label == label {
if r.Start != wantStart || r.End != wantEnd {
t.Errorf("range %q: want [%d,%d], got [%d,%d]", label, wantStart, wantEnd, r.Start, r.End)
}
return
}
}
t.Errorf("range %q not found in %v", label, rangeLabels(ranges))
}
func TestZeroHopDirectHashSize(t *testing.T) {
// DIRECT (RouteType=2) + REQ (PayloadType=0) → header byte = 0x02
+35 -10
@@ -247,6 +247,11 @@ func TestEvictStale_CleansNodeIndexes(t *testing.T) {
func TestEvictStale_CleansResolvedPathNodeIndexes(t *testing.T) {
now := time.Now().UTC()
// Create a temp DB for on-demand SQL fetch during eviction
db := setupTestDB(t)
defer db.Close()
store := &PacketStore{
packets: make([]*StoreTx, 0),
byHash: make(map[string]*StoreTx),
@@ -267,25 +272,33 @@ func TestEvictStale_CleansResolvedPathNodeIndexes(t *testing.T) {
subpathCache: make(map[string]*cachedResult),
rfCacheTTL: 15 * time.Second,
retentionHours: 24,
db: db,
useResolvedPathIndex: true,
}
store.initResolvedPathIndex()
-// Create a packet indexed only via resolved_path (no decoded JSON pubkeys)
+// Create a packet indexed via resolved_path pubkeys
relayPK := "relay0001abcdef"
+txID := 1
+obsID := 100
tx := &StoreTx{
-ID: 1,
+ID: txID,
Hash: "hash_rp_001",
FirstSeen: now.Add(-48 * time.Hour).UTC().Format(time.RFC3339),
}
rpPtr := &relayPK
obs := &StoreObs{
-ID: 100,
-TransmissionID: 1,
+ID: obsID,
+TransmissionID: txID,
ObserverID: "obs0",
Timestamp: tx.FirstSeen,
ResolvedPath: []*string{rpPtr},
}
tx.Observations = append(tx.Observations, obs)
tx.ResolvedPath = []*string{rpPtr}
// Insert into DB so on-demand SQL fetch works during eviction
db.conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (?, '', ?, ?)",
txID, tx.Hash, tx.FirstSeen)
db.conn.Exec("INSERT INTO observations (id, transmission_id, observer_idx, path_json, timestamp, resolved_path) VALUES (?, ?, 1, ?, ?, ?)",
obsID, txID, `["aa"]`, now.Add(-48*time.Hour).Unix(), `["`+relayPK+`"]`)
store.packets = append(store.packets, tx)
store.byHash[tx.Hash] = tx
@@ -293,8 +306,9 @@ func TestEvictStale_CleansResolvedPathNodeIndexes(t *testing.T) {
store.byObsID[obs.ID] = obs
store.byObserver["obs0"] = append(store.byObserver["obs0"], obs)
-// Index via resolved_path
-store.indexByNode(tx)
+// Index relay via decode-window simulation
+store.addToByNode(tx, relayPK)
+store.addToResolvedPubkeyIndex(txID, []string{relayPK})
// Verify indexed
if len(store.byNode[relayPK]) != 1 {
@@ -304,7 +318,7 @@ func TestEvictStale_CleansResolvedPathNodeIndexes(t *testing.T) {
t.Fatalf("expected nodeHashes[%s] to contain %s", relayPK, tx.Hash)
}
-evicted := store.EvictStale()
+evicted := store.RunEviction()
if evicted != 1 {
t.Fatalf("expected 1 evicted, got %d", evicted)
}
@@ -316,6 +330,14 @@ func TestEvictStale_CleansResolvedPathNodeIndexes(t *testing.T) {
if _, exists := store.nodeHashes[relayPK]; exists {
t.Fatalf("expected nodeHashes[%s] to be deleted after eviction", relayPK)
}
// Verify resolved pubkey index is cleaned up
h := resolvedPubkeyHash(relayPK)
if len(store.resolvedPubkeyIndex[h]) != 0 {
t.Fatalf("expected resolvedPubkeyIndex to be empty after eviction")
}
if _, exists := store.resolvedPubkeyReverse[txID]; exists {
t.Fatalf("expected resolvedPubkeyReverse to be empty after eviction")
}
}
func TestEvictStale_RunEvictionThreadSafe(t *testing.T) {
@@ -546,6 +568,9 @@ func TestEstimateStoreTxBytes(t *testing.T) {
manualCalc := int64(storeTxBaseBytes) + int64(len(tx.RawHex)+len(tx.Hash)+len(tx.DecodedJSON)+len(tx.PathJSON)) + int64(numIndexesPerTx*indexEntryBytes)
manualCalc += perTxMapsBytes
manualCalc += hops * perPathHopBytes
if hops > 1 {
manualCalc += (hops * (hops - 1) / 2) * perSubpathEntryBytes
}
if est != manualCalc {
t.Fatalf("estimateStoreTxBytes = %d, want %d (manual calc)", est, manualCalc)
}
+4
@@ -14,6 +14,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+107
@@ -0,0 +1,107 @@
package main
import (
"encoding/json"
"testing"
"time"
_ "modernc.org/sqlite"
)
const issue673NodePK = "7502f19f44cad6d7b626e1d811c00a914af452636182ccded3fd019803395ec9"
// setupIssue673Store builds an in-memory store with one repeater node having:
// - one ADVERT packet (legitimately indexed in byNode)
// - one GRP_TXT packet whose decoded text contains the node's pubkey (false-positive candidate)
func setupIssue673Store(t *testing.T) (*PacketStore, *DB) {
t.Helper()
db := setupTestDB(t)
_, err := db.conn.Exec(
"INSERT INTO nodes (public_key, name, role) VALUES (?, ?, ?)",
issue673NodePK, "Quail Hollow Park", "repeater",
)
if err != nil {
t.Fatal(err)
}
ps := NewPacketStore(db, nil)
now := time.Now().UTC().Format(time.RFC3339)
pt4 := 4 // ADVERT
pt5 := 5 // GRP_TXT
advertDecoded, _ := json.Marshal(map[string]interface{}{"pubKey": issue673NodePK})
advert := &StoreTx{
ID: 1,
Hash: "advert_hash_673",
PayloadType: &pt4,
DecodedJSON: string(advertDecoded),
FirstSeen: now,
}
otherPK := "aabbccddaabbccddaabbccddaabbccddaabbccddaabbccddaabbccddaabbccdd"
chatDecoded, _ := json.Marshal(map[string]interface{}{
"srcPubKey": otherPK,
"text": "Check out node " + issue673NodePK + " on the analyzer",
})
chat := &StoreTx{
ID: 2,
Hash: "chat_hash_673",
PayloadType: &pt5,
DecodedJSON: string(chatDecoded),
FirstSeen: now,
}
ps.mu.Lock()
ps.packets = append(ps.packets, advert, chat)
ps.byHash[advert.Hash] = advert
ps.byHash[chat.Hash] = chat
ps.byTxID[advert.ID] = advert
ps.byTxID[chat.ID] = chat
ps.byNode[issue673NodePK] = []*StoreTx{advert}
ps.mu.Unlock()
return ps, db
}
// TestGetNodeAnalytics_ExcludesGRPTXTWithPubkeyInText verifies that a GRP_TXT packet
// whose message text contains a node's pubkey is not counted in that node's analytics.
func TestGetNodeAnalytics_ExcludesGRPTXTWithPubkeyInText(t *testing.T) {
ps, db := setupIssue673Store(t)
defer db.Close()
analytics, err := ps.GetNodeAnalytics(issue673NodePK, 30)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if analytics == nil {
t.Fatal("expected analytics, got nil")
}
for _, ptc := range analytics.PacketTypeBreakdown {
if ptc.PayloadType == 5 {
t.Errorf("GRP_TXT (type 5) should not appear in analytics for repeater node, got count=%d", ptc.Count)
}
}
}
// TestFilterPackets_NodeQueryDoesNotMatchChatText verifies that the slow path of
// filterPackets (node filter combined with Since) does not return a GRP_TXT packet
// whose pubkey appears only in message text, not in a structured pubkey field.
func TestFilterPackets_NodeQueryDoesNotMatchChatText(t *testing.T) {
ps, db := setupIssue673Store(t)
defer db.Close()
yesterday := time.Now().Add(-24 * time.Hour).UTC().Format(time.RFC3339)
result := ps.QueryPackets(PacketQuery{Node: issue673NodePK, Since: yesterday, Limit: 50})
if result.Total != 1 {
t.Errorf("expected 1 packet for node (ADVERT only), got %d", result.Total)
}
for _, pkt := range result.Packets {
if pkt["hash"] == "chat_hash_673" {
t.Errorf("GRP_TXT with pubkey in message text was incorrectly returned for node query")
}
}
}
@@ -0,0 +1,78 @@
package main
import (
"encoding/json"
"net/http/httptest"
"testing"
"time"
"github.com/gorilla/mux"
)
// TestRepro810 reproduces #810: when the longest-path observation has NULL
// resolved_path but a shorter-path observation has one, fetchResolvedPathForTxBest
// returns nil → /api/nodes/{pk}/health.recentPackets[].resolved_path is missing
// while /api/packets shows it.
func TestRepro810(t *testing.T) {
db := setupTestDB(t)
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
recentEpoch := now.Add(-1 * time.Hour).Unix()
db.conn.Exec(`INSERT INTO observers (id, name, last_seen, first_seen, packet_count) VALUES ('obs1','O1',?, '2026-01-01T00:00:00Z', 100)`, recent)
db.conn.Exec(`INSERT INTO observers (id, name, last_seen, first_seen, packet_count) VALUES ('obs2','O2',?, '2026-01-01T00:00:00Z', 100)`, recent)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, last_seen, first_seen, advert_count) VALUES ('aabbccdd11223344','R','repeater',?, '2026-01-01T00:00:00Z', 1)`, recent)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json) VALUES ('AABB','testhash00000001',?,1,4,'{"pubKey":"aabbccdd11223344","type":"ADVERT"}')`, recent)
// Longest-path obs WITHOUT resolved_path
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp) VALUES (1,1,12.5,-90,'["aa","bb","cc"]',?)`, recentEpoch)
// Shorter-path obs WITH resolved_path
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp, resolved_path) VALUES (1,2,8.0,-95,'["aa","bb"]',?,'["aabbccdd11223344","eeff00112233aabb"]')`, recentEpoch-100)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatal(err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
// Sanity: /api/packets should show resolved_path for this tx.
reqP := httptest.NewRequest("GET", "/api/packets?limit=10", nil)
wP := httptest.NewRecorder()
router.ServeHTTP(wP, reqP)
var pktsBody map[string]interface{}
json.Unmarshal(wP.Body.Bytes(), &pktsBody)
pkts, _ := pktsBody["packets"].([]interface{})
hasOnPackets := false
for _, p := range pkts {
pm := p.(map[string]interface{})
if pm["hash"] == "testhash00000001" && pm["resolved_path"] != nil {
hasOnPackets = true
}
}
if !hasOnPackets {
t.Fatal("precondition: /api/packets must report resolved_path for tx")
}
req := httptest.NewRequest("GET", "/api/nodes/aabbccdd11223344/health", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
rp, _ := body["recentPackets"].([]interface{})
if len(rp) == 0 {
t.Fatal("no recentPackets")
}
for _, p := range rp {
pm := p.(map[string]interface{})
if pm["hash"] == "testhash00000001" {
if pm["resolved_path"] == nil {
t.Fatal("BUG #810: /health.recentPackets resolved_path is nil despite /api/packets reporting it")
}
return
}
}
t.Fatal("tx not found in recentPackets")
}
@@ -0,0 +1,132 @@
package main
import (
"os"
"strconv"
"strings"
"sync"
"time"
)
// MemorySnapshot is a point-in-time view of process memory across several
// vantage points. Values are in MB (1024*1024 bytes), rounded to one decimal.
//
// Field invariants (typical, not guaranteed under exotic conditions):
//
// processRSSMB >= goSysMB >= goHeapInuseMB >= storeDataMB
//
// - processRSSMB is what the kernel charges the process (resident set).
// Read from /proc/self/status `VmRSS:` on Linux; falls back to goSysMB
// on other platforms or when /proc is unavailable.
// - goSysMB is the total memory obtained from the OS by the Go runtime
// (heap, stacks, GC metadata, mspans, mcache, etc.). Includes
// fragmentation and unused-but-mapped span overhead.
// - goHeapInuseMB is the live, in-use Go heap (HeapInuse). Excludes
// idle spans and runtime overhead.
// - storeDataMB is the in-store packet byte estimate (transmissions +
// observations). Subset of HeapInuse. Does not include index maps,
// analytics caches, broadcast queues, or runtime overhead. Used as
// the input to the eviction watermark.
//
// processRSSMB and storeDataMB are monotonic only relative to ingest +
// eviction; both can shrink when packets age out. goHeapInuseMB and goSysMB
// fluctuate with GC.
//
// cgoBytesMB intentionally absent: this build uses the pure-Go
// modernc.org/sqlite driver, so there is no cgo allocator to measure.
// Reintroduce only if we ever switch back to mattn/go-sqlite3.
type MemorySnapshot struct {
ProcessRSSMB float64 `json:"processRSSMB"`
GoHeapInuseMB float64 `json:"goHeapInuseMB"`
GoSysMB float64 `json:"goSysMB"`
StoreDataMB float64 `json:"storeDataMB"`
}
// rssCache rate-limits the /proc/self/status read. Go memory stats are
// already cached by Server.getMemStats (5s TTL). We use a tighter 1s TTL
// here so processRSSMB stays reasonably fresh during ops debugging
// without paying the syscall cost on every /api/stats hit.
var (
rssCacheMu sync.Mutex
rssCacheValueMB float64
rssCacheCachedAt time.Time
)
const rssCacheTTL = 1 * time.Second
// getMemorySnapshot composes a MemorySnapshot using the Server's existing
// runtime.MemStats cache (5s TTL, used by /api/health and /api/perf too)
// plus a rate-limited /proc RSS read. storeDataMB is supplied by the
// caller because the packet store is the source of truth.
func (s *Server) getMemorySnapshot(storeDataMB float64) MemorySnapshot {
ms := s.getMemStats()
rssCacheMu.Lock()
if time.Since(rssCacheCachedAt) > rssCacheTTL {
rssCacheValueMB = readProcRSSMB()
rssCacheCachedAt = time.Now()
}
rssMB := rssCacheValueMB
rssCacheMu.Unlock()
if rssMB <= 0 {
// Fallback when /proc is unavailable (non-Linux, sandboxes, etc.).
// runtime.Sys is an upper bound on Go-attributable memory and a
// reasonable proxy for pure-Go builds.
rssMB = float64(ms.Sys) / 1048576.0
}
return MemorySnapshot{
ProcessRSSMB: roundMB(rssMB),
GoHeapInuseMB: roundMB(float64(ms.HeapInuse) / 1048576.0),
GoSysMB: roundMB(float64(ms.Sys) / 1048576.0),
StoreDataMB: roundMB(storeDataMB),
}
}
// readProcRSSMB parses /proc/self/status for the VmRSS line. Returns 0 on
// any failure (file missing, malformed line, parse error) — the caller
// then uses a runtime fallback. Linux only; macOS/Windows return 0.
//
// Safety notes (djb): the file path is hard-coded, no untrusted input is
// concatenated. We bound the read at 8 KiB (the whole status file is
// well under 4 KiB on modern kernels) so a corrupt /proc can't OOM us.
// We only parse digits with strconv; no shell, no exec, no format strings.
func readProcRSSMB() float64 {
const maxStatusBytes = 8 * 1024
f, err := os.Open("/proc/self/status")
if err != nil {
return 0
}
defer f.Close()
buf := make([]byte, maxStatusBytes)
n, err := f.Read(buf)
if err != nil && n == 0 {
return 0
}
for _, line := range strings.Split(string(buf[:n]), "\n") {
if !strings.HasPrefix(line, "VmRSS:") {
continue
}
// Format: "VmRSS:\t 123456 kB"
fields := strings.Fields(line[len("VmRSS:"):])
if len(fields) < 2 {
return 0
}
kb, err := strconv.ParseFloat(fields[0], 64)
if err != nil || kb < 0 {
return 0
}
// Unit is kB per kernel convention; convert to MB.
return kb / 1024.0
}
return 0
}
// roundMB clamps negatives to zero and rounds half-up to one decimal
// place, matching the MB precision promised by MemorySnapshot.
func roundMB(v float64) float64 {
if v < 0 {
return 0
}
return float64(int64(v*10+0.5)) / 10.0
}
@@ -381,7 +381,13 @@ func backfillResolvedPathsAsync(store *PacketStore, dbPath string, chunkSize int
}
}
for _, obs := range tx.Observations {
if obs.ResolvedPath == nil && obs.PathJSON != "" && obs.PathJSON != "[]" {
// Backfill check via the index: if this tx has no reverse-map entries
// and the observation's path is non-empty, it still needs backfill.
hasRP := false
if _, ok := store.resolvedPubkeyReverse[tx.ID]; ok {
hasRP = true
}
if !hasRP && obs.PathJSON != "" && obs.PathJSON != "[]" {
allPending = append(allPending, obsRef{
obsID: obs.ID,
pathJSON: obs.PathJSON,
@@ -482,24 +488,61 @@ func backfillResolvedPathsAsync(store *PacketStore, dbPath string, chunkSize int
}
}
// Update in-memory state and re-pick best observation under a single
// write lock. The per-tx pickBestObservation is O(observations) which is
// typically <10 per tx — negligible cost vs. the race risk of splitting
// the lock (pollAndMerge can append to tx.Observations concurrently).
// Update in-memory state: update resolved pubkey index, re-pick best observation,
// and invalidate LRU cache entries for backfilled observations (#800).
//
// Lock ordering: always take s.mu BEFORE lruMu. The read path
// (fetchResolvedPathForObs) takes lruMu independently of s.mu,
// so we must NOT hold s.mu while taking lruMu. Instead, collect
// obsIDs to invalidate under s.mu, release it, then take lruMu.
store.mu.Lock()
affectedSet := make(map[string]bool)
lruInvalidate := make([]int, 0, len(results))
for _, r := range results {
if obs, ok := store.byObsID[r.obsID]; ok {
obs.ResolvedPath = r.rp
}
// Remove old index entries for this tx, then re-add with new pubkeys
if !affectedSet[r.txHash] {
affectedSet[r.txHash] = true
if tx, ok := store.byHash[r.txHash]; ok {
pickBestObservation(tx)
store.removeFromResolvedPubkeyIndex(tx.ID)
}
}
// Add new resolved pubkeys to index
if tx, ok := store.byHash[r.txHash]; ok {
pks := extractResolvedPubkeys(r.rp)
store.addToResolvedPubkeyIndex(tx.ID, pks)
// Update byNode for relay nodes
for _, pk := range pks {
store.addToByNode(tx, pk)
}
// Update byPathHop resolved-key entries
hopsSeen := make(map[string]bool)
for _, hop := range txGetParsedPath(tx) {
hopsSeen[strings.ToLower(hop)] = true
}
for _, pk := range pks {
if !hopsSeen[pk] {
hopsSeen[pk] = true
store.byPathHop[pk] = append(store.byPathHop[pk], tx)
}
}
}
lruInvalidate = append(lruInvalidate, r.obsID)
}
// Re-pick best observation for affected transmissions
for txHash := range affectedSet {
if tx, ok := store.byHash[txHash]; ok {
pickBestObservation(tx)
}
}
store.mu.Unlock()
// Invalidate LRU entries AFTER releasing s.mu to maintain lock
// ordering (lruMu must never be taken while s.mu is held).
store.lruMu.Lock()
for _, obsID := range lruInvalidate {
store.lruDelete(obsID)
}
store.lruMu.Unlock()
}
totalProcessed += len(chunk)
@@ -38,7 +38,7 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT,
resolved_path TEXT
resolved_path TEXT, raw_hex TEXT
)`)
conn.Exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
@@ -203,14 +203,14 @@ func TestLoadNeighborEdgesFromDB(t *testing.T) {
}
func TestStoreObsResolvedPathInBroadcast(t *testing.T) {
// Verify resolved_path appears in broadcast maps
pk := "aabbccdd"
// After #800 refactor, resolved_path is no longer stored on StoreTx/StoreObs structs.
// Broadcast maps carry resolved_path from the decode-window, not from struct fields.
// This test verifies pickBestObservation no longer sets ResolvedPath on tx.
obs := &StoreObs{
ID: 1,
ObserverID: "obs1",
ObserverName: "Observer 1",
PathJSON: `["aa"]`,
ResolvedPath: []*string{&pk},
Timestamp: "2024-01-01T00:00:00Z",
}
@@ -221,32 +221,26 @@ func TestStoreObsResolvedPathInBroadcast(t *testing.T) {
}
pickBestObservation(tx)
if tx.ResolvedPath == nil {
t.Fatal("expected ResolvedPath to be set on tx after pickBestObservation")
}
if *tx.ResolvedPath[0] != "aabbccdd" {
t.Errorf("expected resolved path to be aabbccdd, got %s", *tx.ResolvedPath[0])
// tx should NOT have a ResolvedPath field anymore (compile-time guard)
// Verify the best observation's fields are propagated correctly
if tx.ObserverID != "obs1" {
t.Errorf("expected ObserverID=obs1, got %s", tx.ObserverID)
}
}
func TestResolvedPathInTxToMap(t *testing.T) {
pk := "aabbccdd"
// After #800, txToMap no longer includes resolved_path from the struct.
// resolved_path is only available via on-demand SQL fetch (txToMapWithRP).
tx := &StoreTx{
ID: 1,
Hash: "abc123",
PathJSON: `["aa"]`,
ResolvedPath: []*string{&pk},
obsKeys: make(map[string]bool),
ID: 1,
Hash: "abc123",
PathJSON: `["aa"]`,
obsKeys: make(map[string]bool),
}
m := txToMap(tx)
rp, ok := m["resolved_path"]
if !ok {
t.Fatal("resolved_path not in txToMap output")
}
rpSlice, ok := rp.([]*string)
if !ok || len(rpSlice) != 1 || *rpSlice[0] != "aabbccdd" {
t.Errorf("unexpected resolved_path: %v", rp)
if _, ok := m["resolved_path"]; ok {
t.Error("resolved_path should not be in txToMap output (removed in #800)")
}
}
@@ -270,7 +264,7 @@ func TestEnsureResolvedPathColumn(t *testing.T) {
conn, _ := sql.Open("sqlite", "file:"+dbPath+"?_journal_mode=WAL")
conn.Exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER,
observer_id TEXT, path_json TEXT, timestamp TEXT
observer_id TEXT, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
conn.Close()
@@ -365,27 +359,21 @@ func TestLoadWithResolvedPath(t *testing.T) {
t.Fatalf("expected 1 observation, got %d", len(tx.Observations))
}
obs := tx.Observations[0]
if obs.ResolvedPath == nil {
t.Fatal("expected ResolvedPath to be loaded")
}
if len(obs.ResolvedPath) != 1 || *obs.ResolvedPath[0] != "aabbccdd" {
t.Errorf("unexpected ResolvedPath: %v", obs.ResolvedPath)
}
// Check that pickBestObservation propagated resolved_path to tx
if tx.ResolvedPath == nil || len(tx.ResolvedPath) != 1 {
t.Error("expected ResolvedPath to be propagated to tx")
// After #800, ResolvedPath is no longer stored on the StoreObs struct;
// resolved pubkeys live in the membership index instead.
_ = tx.Observations[0] // obs exists
h := resolvedPubkeyHash("aabbccdd")
if len(store.resolvedPubkeyIndex[h]) != 1 {
t.Fatal("expected resolved pubkey to be indexed")
}
}
func TestResolvedPathInAPIResponse(t *testing.T) {
// Test that TransmissionResp properly marshals resolved_path
pk := "aabbccddee"
// After #800, TransmissionResp no longer has ResolvedPath field.
// resolved_path is included dynamically in map-based API responses.
resp := TransmissionResp{
ID: 1,
Hash: "test",
ResolvedPath: []*string{&pk, nil},
ID: 1,
Hash: "test",
}
data, err := json.Marshal(resp)
@@ -396,19 +384,9 @@ func TestResolvedPathInAPIResponse(t *testing.T) {
var m map[string]interface{}
json.Unmarshal(data, &m)
rp, ok := m["resolved_path"]
if !ok {
t.Fatal("resolved_path missing from JSON")
}
rpArr, ok := rp.([]interface{})
if !ok || len(rpArr) != 2 {
t.Fatalf("unexpected resolved_path shape: %v", rp)
}
if rpArr[0] != "aabbccddee" {
t.Errorf("first element wrong: %v", rpArr[0])
}
if rpArr[1] != nil {
t.Errorf("second element should be null: %v", rpArr[1])
// resolved_path should NOT be in the marshaled JSON
if _, ok := m["resolved_path"]; ok {
t.Error("resolved_path should not be in TransmissionResp JSON (#800)")
}
}
@@ -0,0 +1,475 @@
package main
// Lock ordering contract (MUST be followed everywhere):
//
// s.mu → s.lruMu (s.mu is the outer lock, lruMu is the inner lock)
//
// • Never acquire s.lruMu while holding s.mu.
// • fetchResolvedPathForObs takes lruMu independently — callers under s.mu
// must NOT call it directly; instead collect IDs under s.mu, release, then
// do LRU ops under lruMu separately.
// • The backfill path (backfillResolvedPathsAsync) follows this by collecting
// obsIDs to invalidate under s.mu, releasing it, then taking lruMu.
import (
"database/sql"
"hash/fnv"
"log"
"strings"
)
// resolvedPubkeyHash computes a fast 64-bit hash for membership index keying.
// Uses FNV-1a from stdlib — good distribution, no external dependency.
func resolvedPubkeyHash(pk string) uint64 {
h := fnv.New64a()
h.Write([]byte(strings.ToLower(pk)))
return h.Sum64()
}
// addToResolvedPubkeyIndex adds a txID under each resolved pubkey hash.
// Deduplicates both within a single call AND across calls — won't add the
// same (hash, txID) pair twice even when called multiple times for the same tx.
// Must be called under s.mu write lock.
func (s *PacketStore) addToResolvedPubkeyIndex(txID int, resolvedPubkeys []string) {
if !s.useResolvedPathIndex {
return
}
seen := make(map[uint64]bool, len(resolvedPubkeys))
for _, pk := range resolvedPubkeys {
if pk == "" {
continue
}
h := resolvedPubkeyHash(pk)
if seen[h] {
continue
}
seen[h] = true
// Cross-call dedup: check if (h, txID) already exists in forward index.
existing := s.resolvedPubkeyIndex[h]
alreadyPresent := false
for _, id := range existing {
if id == txID {
alreadyPresent = true
break
}
}
if alreadyPresent {
continue
}
s.resolvedPubkeyIndex[h] = append(existing, txID)
s.resolvedPubkeyReverse[txID] = append(s.resolvedPubkeyReverse[txID], h)
}
}
// removeFromResolvedPubkeyIndex removes all index entries for a txID using the reverse map.
// Must be called under s.mu write lock.
func (s *PacketStore) removeFromResolvedPubkeyIndex(txID int) {
if !s.useResolvedPathIndex {
return
}
hashes := s.resolvedPubkeyReverse[txID]
for _, h := range hashes {
list := s.resolvedPubkeyIndex[h]
// Remove ALL occurrences of txID (not just the first) to prevent orphans.
filtered := list[:0]
for _, id := range list {
if id != txID {
filtered = append(filtered, id)
}
}
if len(filtered) == 0 {
delete(s.resolvedPubkeyIndex, h)
} else {
s.resolvedPubkeyIndex[h] = filtered
}
}
delete(s.resolvedPubkeyReverse, txID)
}
// extractResolvedPubkeys extracts all non-nil, non-empty pubkeys from a resolved path.
func extractResolvedPubkeys(rp []*string) []string {
if len(rp) == 0 {
return nil
}
result := make([]string, 0, len(rp))
for _, p := range rp {
if p != nil && *p != "" {
result = append(result, *p)
}
}
return result
}
// mergeResolvedPubkeys collects unique non-empty pubkeys from multiple resolved paths.
func mergeResolvedPubkeys(paths ...[]*string) []string {
seen := make(map[string]bool)
var result []string
for _, rp := range paths {
for _, p := range rp {
if p != nil && *p != "" && !seen[*p] {
seen[*p] = true
result = append(result, *p)
}
}
}
return result
}
// nodeInResolvedPathViaIndex checks whether a transmission is associated with
// a target pubkey using the membership index + collision-safety SQL check.
// Must be called under s.mu RLock at minimum.
func (s *PacketStore) nodeInResolvedPathViaIndex(tx *StoreTx, targetPK string) bool {
if !s.useResolvedPathIndex {
// Flag off: can't disambiguate, keep candidate (conservative)
return true
}
// If this tx has no indexed pubkeys at all, we can't disambiguate —
// keep the candidate (same as old behavior for NULL resolved_path).
if _, hasReverse := s.resolvedPubkeyReverse[tx.ID]; !hasReverse {
return true
}
h := resolvedPubkeyHash(targetPK)
txIDs := s.resolvedPubkeyIndex[h]
// Check if this tx's ID is in the candidate list
for _, id := range txIDs {
if id == tx.ID {
// Found in index. Collision-safety: verify with SQL.
if s.db != nil && s.db.conn != nil {
return s.confirmResolvedPathContains(tx.ID, targetPK)
}
return true // no DB, trust the index
}
}
return false
}
// confirmResolvedPathContains verifies an exact pubkey match in resolved_path
// via SQL. This is the collision-safety fallback for the membership index.
func (s *PacketStore) confirmResolvedPathContains(txID int, pubkey string) bool {
if s.db == nil || s.db.conn == nil {
return true
}
// Use INSTR with surrounding quotes for exact match — avoids LIKE escape issues.
// resolved_path format: ["pubkey1","pubkey2",...]
needle := `"` + strings.ToLower(pubkey) + `"`
var count int
err := s.db.conn.QueryRow(
`SELECT COUNT(*) FROM observations WHERE transmission_id = ? AND INSTR(LOWER(resolved_path), ?) > 0`,
txID, needle,
).Scan(&count)
if err != nil {
return true // on error, keep the candidate
}
return count > 0
}
// fetchResolvedPathsForTx fetches resolved_path from SQLite for all observations
// of a transmission. Used for on-demand API responses and eviction cleanup.
func (s *PacketStore) fetchResolvedPathsForTx(txID int) map[int][]*string {
if s.db == nil || s.db.conn == nil {
return nil
}
rows, err := s.db.conn.Query(
`SELECT id, resolved_path FROM observations WHERE transmission_id = ? AND resolved_path IS NOT NULL`,
txID,
)
if err != nil {
return nil
}
defer rows.Close()
result := make(map[int][]*string)
for rows.Next() {
var obsID int
var rpJSON sql.NullString
if err := rows.Scan(&obsID, &rpJSON); err != nil {
continue
}
if rpJSON.Valid && rpJSON.String != "" {
result[obsID] = unmarshalResolvedPath(rpJSON.String)
}
}
return result
}
// fetchResolvedPathForObs fetches resolved_path for a single observation,
// using the LRU cache.
func (s *PacketStore) fetchResolvedPathForObs(obsID int) []*string {
if s.db == nil || s.db.conn == nil {
return nil
}
// Check LRU cache first
s.lruMu.RLock()
if s.apiResolvedPathLRU != nil {
if entry, ok := s.apiResolvedPathLRU[obsID]; ok {
s.lruMu.RUnlock()
return entry
}
}
s.lruMu.RUnlock()
var rpJSON sql.NullString
err := s.db.conn.QueryRow(
`SELECT resolved_path FROM observations WHERE id = ?`, obsID,
).Scan(&rpJSON)
if err != nil || !rpJSON.Valid {
return nil
}
rp := unmarshalResolvedPath(rpJSON.String)
// Store in LRU
s.lruMu.Lock()
s.lruPut(obsID, rp)
s.lruMu.Unlock()
return rp
}
// fetchResolvedPathForTxBest returns the best observation's resolved_path for a tx.
//
// "Best" = the longest path_json among observations that actually have a stored
// resolved_path. Earlier versions picked the longest-path obs unconditionally
// and queried SQL for that single ID — if the longest-path obs had NULL
// resolved_path while a shorter sibling had one, the call returned nil and
// callers (e.g. /api/nodes/{pk}/health.recentPackets) lost the field. Fixes
// #810 by checking all observations and falling back to the longest sibling
// that has a stored path.
func (s *PacketStore) fetchResolvedPathForTxBest(tx *StoreTx) []*string {
if tx == nil || len(tx.Observations) == 0 {
return nil
}
// Fast path: try the longest-path obs first via the LRU/SQL helper.
longest := tx.Observations[0]
longestLen := pathLen(longest.PathJSON)
for _, obs := range tx.Observations[1:] {
if l := pathLen(obs.PathJSON); l > longestLen {
longest = obs
longestLen = l
}
}
if rp := s.fetchResolvedPathForObs(longest.ID); rp != nil {
return rp
}
// Fallback: longest-path obs has no stored resolved_path. Query all
// observations for this tx and pick the one with the longest path_json
// that actually has a stored resolved_path.
rpMap := s.fetchResolvedPathsForTx(tx.ID)
if len(rpMap) == 0 {
return nil
}
var bestRP []*string
bestObsID := 0
bestLen := -1
for _, obs := range tx.Observations {
rp, ok := rpMap[obs.ID]
if !ok || rp == nil {
continue
}
if l := pathLen(obs.PathJSON); l > bestLen {
bestLen = l
bestRP = rp
bestObsID = obs.ID
}
}
// Populate LRU so repeat lookups for this tx don't re-issue the multi-row
// SQL fallback (e.g. dashboard polling /api/nodes/{pk}/health).
if bestRP != nil && bestObsID != 0 {
s.lruMu.Lock()
s.lruPut(bestObsID, bestRP)
s.lruMu.Unlock()
}
return bestRP
}
// --- Simple LRU cache for resolved paths ---
const lruMaxSize = 10000
// lruPut adds an entry. Must be called under s.lruMu write lock.
func (s *PacketStore) lruPut(obsID int, rp []*string) {
if s.apiResolvedPathLRU == nil {
return
}
if _, exists := s.apiResolvedPathLRU[obsID]; exists {
return
}
// Compact lruOrder if stale entries exceed 50% of capacity.
// This prevents effective capacity degradation after bulk deletions.
if len(s.lruOrder) >= lruMaxSize && len(s.apiResolvedPathLRU) < lruMaxSize/2 {
compacted := make([]int, 0, len(s.apiResolvedPathLRU))
for _, id := range s.lruOrder {
if _, ok := s.apiResolvedPathLRU[id]; ok {
compacted = append(compacted, id)
}
}
s.lruOrder = compacted
}
if len(s.lruOrder) >= lruMaxSize {
// Evict oldest, skipping stale entries
for len(s.lruOrder) > 0 {
evictID := s.lruOrder[0]
s.lruOrder = s.lruOrder[1:]
if _, ok := s.apiResolvedPathLRU[evictID]; ok {
delete(s.apiResolvedPathLRU, evictID)
break
}
// stale entry — skip and continue
}
}
s.apiResolvedPathLRU[obsID] = rp
s.lruOrder = append(s.lruOrder, obsID)
}
// lruDelete removes an entry. Must be called under s.lruMu write lock.
func (s *PacketStore) lruDelete(obsID int) {
if s.apiResolvedPathLRU == nil {
return
}
delete(s.apiResolvedPathLRU, obsID)
// Don't scan lruOrder — eviction handles stale entries naturally.
}
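The lazy-delete design (remove from the map, leave `lruOrder` untouched, skip stale IDs during eviction) can be sketched in miniature. `tinyLRU` below is a simplified stand-in, not the store's actual type, and it evicts on live map size rather than order-slice length:

```go
package main

import "fmt"

// tinyLRU sketches the map + append-only order-slice design used by
// apiResolvedPathLRU/lruOrder: deletes are lazy, eviction skips stale IDs.
type tinyLRU struct {
	data  map[int]string
	order []int
	max   int
}

func (l *tinyLRU) put(id int, v string) {
	if _, ok := l.data[id]; ok {
		return
	}
	for len(l.data) >= l.max && len(l.order) > 0 {
		evict := l.order[0]
		l.order = l.order[1:]
		if _, ok := l.data[evict]; ok {
			delete(l.data, evict)
			break // one real eviction is enough
		}
		// stale ID (lazily deleted earlier): keep scanning
	}
	l.data[id] = v
	l.order = append(l.order, id)
}

// del is lazy: the order slice is never scanned here.
func (l *tinyLRU) del(id int) { delete(l.data, id) }

func main() {
	l := &tinyLRU{data: map[int]string{}, max: 2}
	l.put(1, "a")
	l.put(2, "b")
	l.del(1)      // 1 lingers in order as a stale ID
	l.put(3, "c") // fits: only one live entry
	l.put(4, "d") // eviction skips stale 1, removes 2
	_, twoAlive := l.data[2]
	fmt.Println(len(l.data), twoAlive) // 2 false
}
```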
// resolvedPubkeysForEvictionBatch fetches resolved pubkeys for multiple txIDs
// from SQL in batched IN (...) queries. Returns a map from txID to unique pubkeys.
// MUST be called WITHOUT holding s.mu; this is the whole point of the batch approach.
// Chunks queries at 499 parameters to stay well under SQLite's default
// SQLITE_MAX_VARIABLE_NUMBER limit of 999.
func (s *PacketStore) resolvedPubkeysForEvictionBatch(txIDs []int) map[int][]string {
result := make(map[int][]string, len(txIDs))
if len(txIDs) == 0 || s.db == nil || s.db.conn == nil {
return result
}
const chunkSize = 499 // SQLite SQLITE_MAX_VARIABLE_NUMBER default is 999; stay well under
for start := 0; start < len(txIDs); start += chunkSize {
end := start + chunkSize
if end > len(txIDs) {
end = len(txIDs)
}
chunk := txIDs[start:end]
// Build query with placeholders
placeholders := make([]byte, 0, len(chunk)*2)
args := make([]interface{}, len(chunk))
for i, id := range chunk {
if i > 0 {
placeholders = append(placeholders, ',')
}
placeholders = append(placeholders, '?')
args[i] = id
}
query := "SELECT transmission_id, resolved_path FROM observations WHERE transmission_id IN (" +
string(placeholders) + ") AND resolved_path IS NOT NULL"
rows, err := s.db.conn.Query(query, args...)
if err != nil {
continue
}
for rows.Next() {
var txID int
var rpJSON sql.NullString
if err := rows.Scan(&txID, &rpJSON); err != nil {
continue
}
if !rpJSON.Valid || rpJSON.String == "" {
continue
}
rp := unmarshalResolvedPath(rpJSON.String)
for _, p := range rp {
if p != nil && *p != "" {
result[txID] = append(result[txID], *p)
}
}
}
rows.Close()
}
// Deduplicate per-txID
for txID, pks := range result {
seen := make(map[string]bool, len(pks))
deduped := pks[:0]
for _, pk := range pks {
if !seen[pk] {
seen[pk] = true
deduped = append(deduped, pk)
}
}
result[txID] = deduped
}
return result
}
// initResolvedPathIndex initializes the resolved path index data structures.
func (s *PacketStore) initResolvedPathIndex() {
s.resolvedPubkeyIndex = make(map[uint64][]int, 4096)
s.resolvedPubkeyReverse = make(map[int][]uint64, 4096)
s.apiResolvedPathLRU = make(map[int][]*string, lruMaxSize)
s.lruOrder = make([]int, 0, lruMaxSize)
}
// CompactResolvedPubkeyIndex reclaims memory from the resolved pubkey index maps
// after eviction. It removes empty forward-index entries (shouldn't exist if
// removeFromResolvedPubkeyIndex is correct, but defense in depth) and clips
// oversized slice backing arrays where cap > 2*len.
// Must be called under s.mu write lock.
func (s *PacketStore) CompactResolvedPubkeyIndex() {
if !s.useResolvedPathIndex {
return
}
for h, ids := range s.resolvedPubkeyIndex {
if len(ids) == 0 {
delete(s.resolvedPubkeyIndex, h)
continue
}
// Clip oversized backing arrays: if cap > 2*len, reallocate.
if cap(ids) > 2*len(ids)+8 {
clipped := make([]int, len(ids))
copy(clipped, ids)
s.resolvedPubkeyIndex[h] = clipped
}
}
for txID, hashes := range s.resolvedPubkeyReverse {
if len(hashes) == 0 {
delete(s.resolvedPubkeyReverse, txID)
continue
}
if cap(hashes) > 2*len(hashes)+8 {
clipped := make([]uint64, len(hashes))
copy(clipped, hashes)
s.resolvedPubkeyReverse[txID] = clipped
}
}
}
// defaultMaxResolvedPubkeyIndexEntries is the default hard cap for the forward
// index. When exceeded, a warning is logged. No auto-eviction — that's the
// eviction ticker's job.
const defaultMaxResolvedPubkeyIndexEntries = 5_000_000
// CheckResolvedPubkeyIndexSize logs a warning if the resolved pubkey forward
// index exceeds the configured maximum entries. Must be called under s.mu
// read lock at minimum.
func (s *PacketStore) CheckResolvedPubkeyIndexSize() {
if !s.useResolvedPathIndex {
return
}
maxEntries := s.maxResolvedPubkeyIndexEntries
if maxEntries <= 0 {
maxEntries = defaultMaxResolvedPubkeyIndexEntries
}
fwdLen := len(s.resolvedPubkeyIndex)
revLen := len(s.resolvedPubkeyReverse)
if fwdLen > maxEntries || revLen > maxEntries {
log.Printf("[store] WARNING: resolvedPubkeyIndex size exceeds limit — forward=%d reverse=%d limit=%d",
fwdLen, revLen, maxEntries)
}
}
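Because `resolvedPubkeyHash` lowercases before hashing, mixed-case and lowercase forms of the same pubkey land in the same index bucket, while distinct keys hash apart (the residual collision risk is what the SQL confirm step covers):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// pkHash mirrors resolvedPubkeyHash: FNV-1a over the lowercased key.
func pkHash(pk string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(strings.ToLower(pk)))
	return h.Sum64()
}

func main() {
	fmt.Println(pkHash("AABBCCDD11223344") == pkHash("aabbccdd11223344")) // true
	fmt.Println(pkHash("aabbccdd11223344") == pkHash("eeff00112233aabb")) // false
}
```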
File diff suppressed because it is too large
@@ -16,6 +16,7 @@ import (
"time"
"github.com/gorilla/mux"
"github.com/meshcore-analyzer/packetpath"
)
// Server holds shared state for route handlers.
@@ -569,6 +570,16 @@ func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
backfillProgress = 1
}
// Memory accounting (#832). storeDataMB is the in-store packet byte
// estimate (the old "trackedMB"); processRSSMB / goHeapInuseMB / goSysMB
// give ops the breakdown needed to reason about real RSS. The Go runtime
// values come from the shared 5s getMemStats cache; processRSSMB sits
// behind its own 1s /proc cache, so neither is recomputed per request.
var storeDataMB float64
if s.store != nil {
storeDataMB = s.store.trackedMemoryMB()
}
mem := s.getMemorySnapshot(storeDataMB)
resp := &StatsResponse{
TotalPackets: stats.TotalPackets,
TotalTransmissions: &stats.TotalTransmissions,
@@ -592,6 +603,12 @@ func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
BackfillProgress: backfillProgress,
SignatureDrops: s.db.GetSignatureDropCount(),
HashMigrationComplete: s.store != nil && s.store.hashMigrationComplete.Load(),
TrackedMB: mem.StoreDataMB, // deprecated alias
StoreDataMB: mem.StoreDataMB,
ProcessRSSMB: mem.ProcessRSSMB,
GoHeapInuseMB: mem.GoHeapInuseMB,
GoSysMB: mem.GoSysMB,
}
s.statsMu.Lock()
@@ -774,6 +791,7 @@ func (s *Server) handlePackets(w http.ResponseWriter, r *http.Request) {
Until: r.URL.Query().Get("until"),
Region: r.URL.Query().Get("region"),
Node: r.URL.Query().Get("node"),
Channel: r.URL.Query().Get("channel"),
Order: "DESC",
ExpandObservations: r.URL.Query().Get("expand") == "observations",
}
@@ -876,9 +894,11 @@ func (s *Server) handleBatchObservations(w http.ResponseWriter, r *http.Request)
func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
param := mux.Vars(r)["id"]
var packet map[string]interface{}
fromDB := false
isHash := hashPattern.MatchString(strings.ToLower(param))
if s.store != nil {
if hashPattern.MatchString(strings.ToLower(param)) {
if isHash {
packet = s.store.GetPacketByHash(param)
}
if packet == nil {
@@ -891,6 +911,25 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
}
}
}
// DB fallback: in-memory PacketStore prunes old entries, but the SQLite
// DB retains them and is the source for /api/nodes recentAdverts. Without
// this fallback, links from node-detail pages 404 once the packet ages out.
if packet == nil && s.db != nil {
if isHash {
if dbPkt, err := s.db.GetPacketByHash(param); err == nil && dbPkt != nil {
packet = dbPkt
fromDB = true
}
}
if packet == nil {
if id, parseErr := strconv.Atoi(param); parseErr == nil {
if dbPkt, err := s.db.GetTransmissionByID(id); err == nil && dbPkt != nil {
packet = dbPkt
fromDB = true
}
}
}
}
if packet == nil {
writeError(w, 404, "Not found")
return
@@ -901,6 +940,9 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
if s.store != nil {
observations = s.store.GetObservationsForHash(hash)
}
if len(observations) == 0 && fromDB && s.db != nil && hash != "" {
observations = s.db.GetObservationsForHash(hash)
}
observationCount := len(observations)
if observationCount == 0 {
observationCount = 1
@@ -916,11 +958,9 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
pathHops = []interface{}{}
}
rawHex, _ := packet["raw_hex"].(string)
writeJSON(w, PacketDetailResponse{
Packet: packet,
Path: pathHops,
Breakdown: BuildBreakdown(rawHex),
ObservationCount: observationCount,
Observations: mapSliceToObservations(observations),
})
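The DB-fallback logic above follows a tiered-lookup pattern: consult the fast in-memory store first, fall back to the durable DB, and prefer the store result when both hit. A minimal generic sketch (names here are illustrative, not from the codebase):

```go
package main

import "fmt"

// lookup tries the fast in-memory map first, then the durable DB map,
// preferring the in-memory result when both contain the key. The second
// return value reports whether the DB fallback was used, mirroring the
// fromDB flag in the handler above.
func lookup(mem, db map[string]string, key string) (val string, fromDB bool) {
	if v, ok := mem[key]; ok {
		return v, false // store hit wins; DB is never consulted
	}
	if v, ok := db[key]; ok {
		return v, true // pruned from memory but retained in the DB
	}
	return "", false // absent from both → caller should 404
}

func main() {
	mem := map[string]string{"a": "store"}
	db := map[string]string{"a": "db", "b": "db"}
	v, fromDB := lookup(mem, db, "b")
	fmt.Println(v, fromDB) // db true
}
```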
@@ -979,8 +1019,17 @@ func (s *Server) handlePostPacket(w http.ResponseWriter, r *http.Request) {
contentHash := ComputeContentHash(hexStr)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
pathJSON = string(pj)
}
}
} else if hops, err := packetpath.DecodePathFromRawHex(hexStr); err == nil && len(hops) > 0 {
if pj, e := json.Marshal(hops); e == nil {
pathJSON = string(pj)
}
}
@@ -1233,14 +1282,52 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
// Post-filter: verify target node actually appears in each candidate's resolved_path.
// The byPathHop index uses short prefixes which can collide (e.g. "c0" matches multiple nodes).
// We lean on resolved_path (from neighbor affinity graph) to disambiguate.
filtered := candidates[:0] // reuse backing array
for _, tx := range candidates {
if nodeInResolvedPath(tx, lowerPK) {
filtered = append(filtered, tx)
//
// Collect candidate IDs and index membership under the read lock, then release
// the lock before running SQL queries (confirmResolvedPathContains does disk I/O).
type candidateCheck struct {
tx *StoreTx
hasReverse bool
inIndex bool
}
checks := make([]candidateCheck, len(candidates))
for i, tx := range candidates {
cc := candidateCheck{tx: tx}
if !s.store.useResolvedPathIndex {
cc.inIndex = true // flag off — keep all
} else if _, hasRev := s.store.resolvedPubkeyReverse[tx.ID]; !hasRev {
cc.inIndex = true // no indexed pubkeys — keep (conservative)
} else {
h := resolvedPubkeyHash(lowerPK)
for _, id := range s.store.resolvedPubkeyIndex[h] {
if id == tx.ID {
cc.hasReverse = true // needs SQL confirmation
break
}
}
// If not in index at all, it's a definite no
}
checks[i] = cc
}
s.store.mu.RUnlock()
// Now run SQL checks outside the lock for candidates that need confirmation.
filtered := candidates[:0]
for _, cc := range checks {
if cc.inIndex {
filtered = append(filtered, cc.tx)
} else if cc.hasReverse {
if s.store.confirmResolvedPathContains(cc.tx.ID, lowerPK) {
filtered = append(filtered, cc.tx)
}
}
// else: not in index → exclude
}
candidates = filtered
// Re-acquire read lock for the aggregation phase that reads store data.
s.store.mu.RLock()
type pathAgg struct {
Hops []PathHopResp
Count int
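The hunk above uses a snapshot-then-confirm pattern: gather the per-candidate facts while holding the read lock, release it, and only then run the I/O-bound SQL confirmations. A generic sketch of that pattern (Store, slowConfirm, and the index semantics here are illustrative, not the codebase's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// Store holds an in-memory membership hint guarded by an RWMutex.
// index[id] == true means "present in the hint index, needs slow
// (e.g. SQL) confirmation"; absent means "definitely keep".
type Store struct {
	mu    sync.RWMutex
	index map[int]bool
}

// filter snapshots the per-candidate decisions under the read lock,
// releases the lock, then runs slowConfirm (which may do disk I/O)
// without holding any lock.
func (s *Store) filter(ids []int, slowConfirm func(int) bool) []int {
	type check struct {
		id          int
		needConfirm bool
	}
	s.mu.RLock()
	checks := make([]check, 0, len(ids))
	for _, id := range ids {
		checks = append(checks, check{id: id, needConfirm: s.index[id]})
	}
	s.mu.RUnlock() // release before any slow confirmation calls

	kept := ids[:0] // reuse backing array, as the handler above does
	for _, c := range checks {
		if !c.needConfirm || slowConfirm(c.id) {
			kept = append(kept, c.id)
		}
	}
	return kept
}

func main() {
	s := &Store{index: map[int]bool{2: true}}
	out := s.filter([]int{1, 2, 3}, func(id int) bool { return id != 2 })
	fmt.Println(out) // [1 3]
}
```

The key property is that `slowConfirm` is never invoked while `mu` is held, so a slow disk read cannot block writers.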
@@ -2287,9 +2374,6 @@ func mapSliceToTransmissions(maps []map[string]interface{}) []TransmissionResp {
tx.PathJSON = m["path_json"]
tx.Direction = m["direction"]
tx.Score = m["score"]
if rp, ok := m["resolved_path"].([]*string); ok {
tx.ResolvedPath = rp
}
result = append(result, tx)
}
return result
@@ -2310,10 +2394,10 @@ func mapSliceToObservations(maps []map[string]interface{}) []ObservationResp {
obs.SNR = m["snr"]
obs.RSSI = m["rssi"]
obs.PathJSON = m["path_json"]
obs.ResolvedPath = m["resolved_path"]
obs.Direction = m["direction"]
obs.RawHex = m["raw_hex"]
obs.Timestamp = m["timestamp"]
if rp, ok := m["resolved_path"].([]*string); ok {
obs.ResolvedPath = rp
}
result = append(result, obs)
}
return result
+205 -41
@@ -3681,67 +3681,55 @@ func TestNodePathsPrefixCollisionFilter(t *testing.T) {
func TestNodeInResolvedPath(t *testing.T) {
target := "aabbccdd11223344"
// Case 1: tx.ResolvedPath contains target
pk := "aabbccdd11223344"
tx1 := &StoreTx{ResolvedPath: []*string{&pk}}
if !nodeInResolvedPath(tx1, target) {
t.Error("should match when ResolvedPath contains target")
// After #800, nodeInResolvedPath is replaced by nodeInResolvedPathViaIndex
// which uses the membership index. Test the index-based approach.
store := &PacketStore{
byNode: make(map[string][]*StoreTx),
nodeHashes: make(map[string]map[string]bool),
useResolvedPathIndex: true,
}
store.initResolvedPathIndex()
// Case 1: tx indexed with target pubkey
tx1 := &StoreTx{ID: 1}
store.addToResolvedPubkeyIndex(1, []string{target})
if !store.nodeInResolvedPathViaIndex(tx1, target) {
t.Error("should match when index contains target")
}
// Case 2: tx.ResolvedPath contains different node
other := "aacafe0000000000"
tx2 := &StoreTx{ResolvedPath: []*string{&other}}
if nodeInResolvedPath(tx2, target) {
t.Error("should not match when ResolvedPath contains different node")
// Case 2: tx indexed with different pubkey
tx2 := &StoreTx{ID: 2}
store.addToResolvedPubkeyIndex(2, []string{"aacafe0000000000"})
if store.nodeInResolvedPathViaIndex(tx2, target) {
t.Error("should not match when index contains different node")
}
// Case 3: nil ResolvedPath — should match (no data to disambiguate, keep it)
tx3 := &StoreTx{}
if !nodeInResolvedPath(tx3, target) {
t.Error("should match when ResolvedPath is nil (no data to disambiguate)")
}
// Case 4: ResolvedPath with nil elements only — has data but no match
tx4 := &StoreTx{ResolvedPath: []*string{nil, nil}}
if nodeInResolvedPath(tx4, target) {
t.Error("should not match when all ResolvedPath elements are nil")
}
// Case 5: target in observation but not in tx.ResolvedPath
tx5 := &StoreTx{
ResolvedPath: []*string{&other},
Observations: []*StoreObs{
{ResolvedPath: []*string{&pk}},
},
}
if !nodeInResolvedPath(tx5, target) {
t.Error("should match when observation's ResolvedPath contains target")
// Case 3: tx not in index at all — should match (no data to disambiguate)
tx3 := &StoreTx{ID: 3}
if !store.nodeInResolvedPathViaIndex(tx3, target) {
t.Error("should match when tx has no index entries (no data to disambiguate)")
}
}
func TestPathHopIndexIncrementalUpdate(t *testing.T) {
// Test that addTxToPathHopIndex and removeTxFromPathHopIndex work correctly
// After #800, addTxToPathHopIndex only indexes raw hops (not resolved pubkeys).
// Resolved pubkeys are handled by the resolved pubkey membership index.
idx := make(map[string][]*StoreTx)
pk1 := "fullpubkey1"
tx1 := &StoreTx{
ID: 1,
PathJSON: `["ab","cd"]`,
ResolvedPath: []*string{&pk1, nil},
}
addTxToPathHopIndex(idx, tx1)
// Should be indexed under "ab", "cd", and "fullpubkey1"
// Should be indexed under "ab" and "cd" only (no resolved pubkey)
if len(idx["ab"]) != 1 {
t.Errorf("expected 1 entry for 'ab', got %d", len(idx["ab"]))
}
if len(idx["cd"]) != 1 {
t.Errorf("expected 1 entry for 'cd', got %d", len(idx["cd"]))
}
if len(idx["fullpubkey1"]) != 1 {
t.Errorf("expected 1 entry for resolved pubkey, got %d", len(idx["fullpubkey1"]))
}
// Add another tx with overlapping hop
tx2 := &StoreTx{
@@ -3766,9 +3754,6 @@ func TestPathHopIndexIncrementalUpdate(t *testing.T) {
if _, ok := idx["cd"]; ok {
t.Error("expected 'cd' key to be deleted after removal")
}
if _, ok := idx["fullpubkey1"]; ok {
t.Error("expected resolved pubkey key to be deleted after removal")
}
}
func TestMetricsAPIEndpoints(t *testing.T) {
@@ -3808,3 +3793,182 @@ func TestMetricsAPIEndpoints(t *testing.T) {
t.Errorf("expected 1 observer in summary, got %v", resp2["observers"])
}
}
// TestNodeHealth_RecentPackets_ResolvedPath verifies that recentPackets in the
// node health endpoint include resolved_path (regression for Codex review item #2).
func TestNodeHealth_RecentPackets_ResolvedPath(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/nodes/aabbccdd11223344/health", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d (body: %s)", w.Code, w.Body.String())
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("json decode: %v", err)
}
rp, ok := body["recentPackets"].([]interface{})
if !ok || len(rp) == 0 {
t.Fatal("expected non-empty recentPackets")
}
// At least one packet should have resolved_path (tx 1 has observations with resolved_path)
found := false
for _, p := range rp {
pm, ok := p.(map[string]interface{})
if !ok {
continue
}
if pm["resolved_path"] != nil {
found = true
break
}
}
if !found {
t.Error("expected at least one recentPacket with resolved_path")
}
}
// TestPacketsExpand_ResolvedPath verifies that expandObservations=true includes
// resolved_path on expanded observations (regression for Codex review item #3).
func TestPacketsExpand_ResolvedPath(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/packets?expand=observations&limit=10", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d (body: %s)", w.Code, w.Body.String())
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("json decode: %v", err)
}
packets, ok := body["packets"].([]interface{})
if !ok || len(packets) == 0 {
t.Fatal("expected non-empty packets")
}
// Find a packet with observations that should have resolved_path
found := false
for _, p := range packets {
pm, ok := p.(map[string]interface{})
if !ok {
continue
}
obs, ok := pm["observations"].([]interface{})
if !ok {
continue
}
for _, o := range obs {
om, ok := o.(map[string]interface{})
if !ok {
continue
}
if om["resolved_path"] != nil {
found = true
break
}
}
if found {
break
}
}
if !found {
t.Error("expected at least one expanded observation with resolved_path")
}
}
// TestPacketDetailFallsBackToDBWhenStoreMisses verifies that handlePacketDetail
// serves transmissions present in the DB but absent from the in-memory store.
// This is the recentAdverts → "Not found" bug (#827).
func TestPacketDetailFallsBackToDBWhenStoreMisses(t *testing.T) {
srv, router := setupTestServer(t)
// Insert a transmission directly into the DB AFTER store.Load(), so the
// in-memory PacketStore won't see it. Mirrors the production case where
// the store has pruned an entry but the DB still has it.
const dbOnlyHash = "deadbeef00112233"
now := time.Now().UTC().Format(time.RFC3339)
if _, err := srv.db.conn.Exec(`INSERT INTO transmissions
(raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('FFEE', ?, ?, 1, 4, '{"type":"ADVERT"}')`, dbOnlyHash, now); err != nil {
t.Fatalf("insert: %v", err)
}
var txID int
if err := srv.db.conn.QueryRow("SELECT id FROM transmissions WHERE hash = ?", dbOnlyHash).Scan(&txID); err != nil {
t.Fatalf("lookup tx id: %v", err)
}
if _, err := srv.db.conn.Exec(`INSERT INTO observations
(transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (?, 1, 7.5, -99, '[]', ?)`, txID, time.Now().Unix()); err != nil {
t.Fatalf("insert obs: %v", err)
}
// Confirm the store really doesn't have it (precondition for the fix).
if got := srv.store.GetPacketByHash(dbOnlyHash); got != nil {
t.Fatalf("test precondition failed: store unexpectedly has %s", dbOnlyHash)
}
req := httptest.NewRequest("GET", "/api/packets/"+dbOnlyHash, nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d (body: %s)", w.Code, w.Body.String())
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatal(err)
}
pkt, ok := body["packet"].(map[string]interface{})
if !ok {
t.Fatal("expected packet object")
}
if pkt["hash"] != dbOnlyHash {
t.Errorf("expected hash %s, got %v", dbOnlyHash, pkt["hash"])
}
// Observations fallback should populate from DB too.
obs, _ := body["observations"].([]interface{})
if len(obs) == 0 {
t.Errorf("expected DB observations to be returned, got 0")
}
}
// TestPacketDetail404WhenAbsentFromBoth verifies that a hash present in
// neither store nor DB still returns 404 (no false positives from the fallback).
func TestPacketDetail404WhenAbsentFromBoth(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/packets/0011223344556677", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 404 {
t.Errorf("expected 404, got %d (body: %s)", w.Code, w.Body.String())
}
}
// TestPacketDetailPrefersStoreOverDB verifies the store result wins when the
// hash exists in both — the DB fallback must not double-fetch / overwrite.
func TestPacketDetailPrefersStoreOverDB(t *testing.T) {
srv, router := setupTestServer(t)
// abc123def4567890 is seeded in both DB and (after Load) the store.
const hash = "abc123def4567890"
if got := srv.store.GetPacketByHash(hash); got == nil {
t.Fatalf("test precondition failed: store should have %s", hash)
}
req := httptest.NewRequest("GET", "/api/packets/"+hash, nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d", w.Code)
}
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
pkt, _ := body["packet"].(map[string]interface{})
if pkt == nil || pkt["hash"] != hash {
t.Fatalf("expected packet with hash %s, got %v", hash, pkt)
}
// observation_count comes from store observations (2 seeded for tx 1).
if cnt, _ := body["observation_count"].(float64); cnt != 2 {
t.Errorf("expected observation_count=2 (from store), got %v", body["observation_count"])
}
}
+95
@@ -0,0 +1,95 @@
package main
import (
"encoding/json"
"net/http/httptest"
"strings"
"testing"
)
// TestStatsMemoryFields verifies that /api/stats exposes the new memory
// breakdown introduced for issue #832: storeDataMB, processRSSMB,
// goHeapInuseMB, goSysMB, plus the deprecated trackedMB alias.
//
// We assert presence, type, sign, and ordering invariants — but NOT
// "RSS within X% of true RSS" because that is flaky in CI under cgo,
// containerization, and shared-runner load.
func TestStatsMemoryFields(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/stats", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d", w.Code)
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("json decode: %v", err)
}
required := []string{"trackedMB", "storeDataMB", "processRSSMB", "goHeapInuseMB", "goSysMB"}
values := make(map[string]float64, len(required))
for _, k := range required {
v, ok := body[k]
if !ok {
t.Fatalf("missing field %q in /api/stats response", k)
}
f, ok := v.(float64)
if !ok {
t.Fatalf("field %q is %T, expected float64", k, v)
}
if f < 0 {
t.Errorf("field %q is negative: %v", k, f)
}
values[k] = f
}
// trackedMB is a deprecated alias for storeDataMB; they must match.
if values["trackedMB"] != values["storeDataMB"] {
t.Errorf("trackedMB (%v) != storeDataMB (%v); they must remain aliased",
values["trackedMB"], values["storeDataMB"])
}
// Ordering invariants. goSys is the runtime's view of total OS memory;
// HeapInuse is a subset of it. storeData is a subset of HeapInuse.
// processRSS may be 0 in environments without /proc — treat 0 as
// "unknown" rather than a failure.
if values["goHeapInuseMB"] > values["goSysMB"]+0.5 {
t.Errorf("invariant violated: goHeapInuseMB (%v) > goSysMB (%v)",
values["goHeapInuseMB"], values["goSysMB"])
}
if values["storeDataMB"] > values["goHeapInuseMB"]+0.5 && values["storeDataMB"] > 0 {
// In the test fixture storeDataMB is typically 0 (no packets in
// store); only enforce the bound when both are nonzero.
t.Errorf("invariant violated: storeDataMB (%v) > goHeapInuseMB (%v)",
values["storeDataMB"], values["goHeapInuseMB"])
}
if values["processRSSMB"] > 0 && values["goSysMB"] > 0 {
// goSys can briefly exceed RSS if pages are reserved-but-not-touched,
// so allow some slack.
if values["goSysMB"] > values["processRSSMB"]*4 {
t.Errorf("suspicious: goSysMB (%v) >> processRSSMB (%v)",
values["goSysMB"], values["processRSSMB"])
}
}
}
// TestStatsMemoryFieldsRawJSON spot-checks that the JSON wire format uses
// the documented camelCase names (no accidental rename through struct tags).
func TestStatsMemoryFieldsRawJSON(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/stats", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
body := w.Body.String()
for _, key := range []string{
`"trackedMB":`, `"storeDataMB":`,
`"processRSSMB":`, `"goHeapInuseMB":`, `"goSysMB":`,
} {
if !strings.Contains(body, key) {
t.Errorf("missing %s in raw response: %s", key, body)
}
}
}
+523 -305
File diff suppressed because it is too large
+116
@@ -0,0 +1,116 @@
package main
import (
"testing"
)
func f64(v float64) *float64 { return &v }
func TestDedupeTopHopsByPair(t *testing.T) {
hops := []distHopRecord{
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 100, Type: "R↔R", SNR: f64(5.0), Hash: "h1", Timestamp: "t1"},
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 90, Type: "R↔R", SNR: f64(8.0), Hash: "h2", Timestamp: "t2"},
{FromPk: "BBB", ToPk: "AAA", FromName: "B", ToName: "A", Dist: 80, Type: "R↔R", SNR: f64(3.0), Hash: "h3", Timestamp: "t3"},
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 70, Type: "R↔R", SNR: f64(6.0), Hash: "h4", Timestamp: "t4"},
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 60, Type: "R↔R", SNR: f64(4.0), Hash: "h5", Timestamp: "t5"},
{FromPk: "CCC", ToPk: "DDD", FromName: "C", ToName: "D", Dist: 50, Type: "C↔R", SNR: f64(7.0), Hash: "h6", Timestamp: "t6"},
}
result := dedupeHopsByPair(hops, 20)
if len(result) != 2 {
t.Fatalf("expected 2 entries, got %d", len(result))
}
// First entry: A↔B pair, max distance = 100, obsCount = 5
ab := result[0]
if ab["dist"].(float64) != 100 {
t.Errorf("expected dist 100, got %v", ab["dist"])
}
if ab["obsCount"].(int) != 5 {
t.Errorf("expected obsCount 5, got %v", ab["obsCount"])
}
if ab["hash"].(string) != "h1" {
t.Errorf("expected hash h1 (from max-dist record), got %v", ab["hash"])
}
if ab["bestSnr"].(float64) != 8.0 {
t.Errorf("expected bestSnr 8.0, got %v", ab["bestSnr"])
}
// medianSnr of [3,4,5,6,8] = 5.0
if ab["medianSnr"].(float64) != 5.0 {
t.Errorf("expected medianSnr 5.0, got %v", ab["medianSnr"])
}
// Second entry: C↔D pair
cd := result[1]
if cd["dist"].(float64) != 50 {
t.Errorf("expected dist 50, got %v", cd["dist"])
}
if cd["obsCount"].(int) != 1 {
t.Errorf("expected obsCount 1, got %v", cd["obsCount"])
}
}
func TestDedupeTopHopsReversePairMerges(t *testing.T) {
hops := []distHopRecord{
{FromPk: "BBB", ToPk: "AAA", FromName: "B", ToName: "A", Dist: 50, Type: "R↔R", Hash: "h1"},
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 80, Type: "R↔R", Hash: "h2"},
}
result := dedupeHopsByPair(hops, 20)
if len(result) != 1 {
t.Fatalf("expected 1 entry, got %d", len(result))
}
if result[0]["obsCount"].(int) != 2 {
t.Errorf("expected obsCount 2, got %v", result[0]["obsCount"])
}
if result[0]["dist"].(float64) != 80 {
t.Errorf("expected dist 80, got %v", result[0]["dist"])
}
}
func TestDedupeTopHopsNilSNR(t *testing.T) {
hops := []distHopRecord{
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 100, Type: "R↔R", SNR: nil, Hash: "h1"},
{FromPk: "AAA", ToPk: "BBB", FromName: "A", ToName: "B", Dist: 90, Type: "R↔R", SNR: nil, Hash: "h2"},
}
result := dedupeHopsByPair(hops, 20)
if len(result) != 1 {
t.Fatalf("expected 1 entry, got %d", len(result))
}
if result[0]["bestSnr"] != nil {
t.Errorf("expected bestSnr nil, got %v", result[0]["bestSnr"])
}
if result[0]["medianSnr"] != nil {
t.Errorf("expected medianSnr nil, got %v", result[0]["medianSnr"])
}
}
func TestDedupeTopHopsLimit(t *testing.T) {
// Generate 25 unique pairs, verify limit=20 caps output
hops := make([]distHopRecord, 25)
for i := range hops {
hops[i] = distHopRecord{
FromPk: "A", ToPk: string(rune('a' + i)),
Dist: float64(i), Type: "R↔R", Hash: "h",
}
}
result := dedupeHopsByPair(hops, 20)
if len(result) != 20 {
t.Errorf("expected 20 entries, got %d", len(result))
}
}
func TestDedupeTopHopsEvenMedian(t *testing.T) {
// Even count: median = avg of two middle values
hops := []distHopRecord{
{FromPk: "A", ToPk: "B", Dist: 10, Type: "R↔R", SNR: f64(2.0), Hash: "h1"},
{FromPk: "A", ToPk: "B", Dist: 20, Type: "R↔R", SNR: f64(4.0), Hash: "h2"},
{FromPk: "A", ToPk: "B", Dist: 30, Type: "R↔R", SNR: f64(6.0), Hash: "h3"},
{FromPk: "A", ToPk: "B", Dist: 40, Type: "R↔R", SNR: f64(8.0), Hash: "h4"},
}
result := dedupeHopsByPair(hops, 20)
// sorted SNR: [2,4,6,8], median = (4+6)/2 = 5.0
if result[0]["medianSnr"].(float64) != 5.0 {
t.Errorf("expected medianSnr 5.0, got %v", result[0]["medianSnr"])
}
}
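The tests above fully pin down the aggregation behavior: group records by unordered pubkey pair, keep the max-distance record as the representative, count observations, and compute best/median SNR over the non-nil samples. A behavior-equivalent sketch reconstructed from those expectations (the real dedupeHopsByPair may differ in its exact field set and tie-breaking; field names here match the test assertions):

```go
package main

import (
	"fmt"
	"sort"
)

// hopRec is a trimmed stand-in for distHopRecord.
type hopRec struct {
	FromPk, ToPk string
	Dist         float64
	SNR          *float64
	Hash         string
}

// dedupePairs groups hops by unordered (FromPk, ToPk) pair, keeps the
// max-distance record per pair, aggregates SNR stats, sorts by distance
// descending, and caps the result at limit.
func dedupePairs(hops []hopRec, limit int) []map[string]interface{} {
	type agg struct {
		best hopRec
		snrs []float64
		n    int
	}
	groups := map[string]*agg{}
	order := []string{}
	for _, h := range hops {
		a, b := h.FromPk, h.ToPk
		if b < a {
			a, b = b, a // canonical key so reverse pairs merge
		}
		key := a + "|" + b
		g, ok := groups[key]
		if !ok {
			g = &agg{best: h}
			groups[key] = g
			order = append(order, key)
		} else if h.Dist > g.best.Dist {
			g.best = h // max-dist record supplies dist and hash
		}
		g.n++
		if h.SNR != nil {
			g.snrs = append(g.snrs, *h.SNR)
		}
	}
	out := make([]map[string]interface{}, 0, len(groups))
	for _, key := range order {
		g := groups[key]
		m := map[string]interface{}{
			"dist": g.best.Dist, "hash": g.best.Hash, "obsCount": g.n,
			"bestSnr": nil, "medianSnr": nil, // nil when no SNR samples
		}
		if len(g.snrs) > 0 {
			sort.Float64s(g.snrs)
			m["bestSnr"] = g.snrs[len(g.snrs)-1]
			mid := len(g.snrs) / 2
			if len(g.snrs)%2 == 1 {
				m["medianSnr"] = g.snrs[mid]
			} else {
				m["medianSnr"] = (g.snrs[mid-1] + g.snrs[mid]) / 2
			}
		}
		out = append(out, m)
	}
	sort.Slice(out, func(i, j int) bool {
		return out[i]["dist"].(float64) > out[j]["dist"].(float64)
	})
	if len(out) > limit {
		out = out[:limit]
	}
	return out
}

func main() {
	s := func(v float64) *float64 { return &v }
	res := dedupePairs([]hopRec{
		{FromPk: "AAA", ToPk: "BBB", Dist: 100, SNR: s(5), Hash: "h1"},
		{FromPk: "BBB", ToPk: "AAA", Dist: 80, SNR: s(8), Hash: "h2"},
	}, 20)
	fmt.Println(res[0]["dist"], res[0]["obsCount"], res[0]["bestSnr"]) // 100 2 8
}
```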
+10 -4
@@ -42,14 +42,20 @@
"type": {
"type": "string"
},
"snr": {
"type": "number"
},
"hash": {
"type": "string"
},
"timestamp": {
"type": "string"
},
"bestSnr": {
"type": "number"
},
"medianSnr": {
"type": "number"
},
"obsCount": {
"type": "number"
}
}
}
@@ -1580,4 +1586,4 @@
}
}
}
}
}
+10 -21
@@ -69,13 +69,11 @@ func TestTouchRelayLastSeen_Debouncing(t *testing.T) {
lastSeenTouched: make(map[string]time.Time),
}
pk := "relay1"
tx := &StoreTx{
ResolvedPath: []*string{&pk},
}
// After #800, touchRelayLastSeen takes a []string of pubkeys (from decode-window)
pks := []string{"relay1"}
now := time.Now()
s.touchRelayLastSeen(tx, now)
s.touchRelayLastSeen(pks, now)
// Verify it was written
var lastSeen sql.NullString
@@ -88,7 +86,7 @@ func TestTouchRelayLastSeen_Debouncing(t *testing.T) {
db.conn.Exec("UPDATE nodes SET last_seen = NULL WHERE public_key = ?", "relay1")
// Call again within 5 minutes — should be debounced (no write)
s.touchRelayLastSeen(tx, now.Add(2*time.Minute))
s.touchRelayLastSeen(pks, now.Add(2*time.Minute))
db.conn.QueryRow("SELECT last_seen FROM nodes WHERE public_key = ?", "relay1").Scan(&lastSeen)
if lastSeen.Valid {
@@ -96,14 +94,14 @@ func TestTouchRelayLastSeen_Debouncing(t *testing.T) {
}
// Call after 5 minutes — should write again
s.touchRelayLastSeen(tx, now.Add(6*time.Minute))
s.touchRelayLastSeen(pks, now.Add(6*time.Minute))
db.conn.QueryRow("SELECT last_seen FROM nodes WHERE public_key = ?", "relay1").Scan(&lastSeen)
if !lastSeen.Valid {
t.Fatal("expected write after debounce interval expired")
}
}
func TestTouchRelayLastSeen_SkipsNilResolvedPath(t *testing.T) {
func TestTouchRelayLastSeen_SkipsEmptyPubkeys(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -112,13 +110,9 @@ func TestTouchRelayLastSeen_SkipsNilResolvedPath(t *testing.T) {
lastSeenTouched: make(map[string]time.Time),
}
// tx with nil entries and empty resolved_path
tx := &StoreTx{
ResolvedPath: []*string{nil, nil},
}
// Should not panic or error
s.touchRelayLastSeen(tx, time.Now())
// Empty pubkeys — should not panic or error
s.touchRelayLastSeen([]string{}, time.Now())
s.touchRelayLastSeen(nil, time.Now())
}
func TestTouchRelayLastSeen_NilDB(t *testing.T) {
@@ -127,11 +121,6 @@ func TestTouchRelayLastSeen_NilDB(t *testing.T) {
lastSeenTouched: make(map[string]time.Time),
}
pk := "abc"
tx := &StoreTx{
ResolvedPath: []*string{&pk},
}
// Should not panic with nil db
s.touchRelayLastSeen(tx, time.Now())
s.touchRelayLastSeen([]string{"abc"}, time.Now())
}
+22 -26
@@ -28,7 +28,7 @@ func TestEstimateStoreTxBytes_ReasonableValues(t *testing.T) {
}
// TestEstimateStoreTxBytes_ManyHopsSubpaths verifies that packets with many
// hops estimate more due to per-hop byPathHop index entries.
// hops estimate significantly more due to O(path²) subpath index entries.
func TestEstimateStoreTxBytes_ManyHopsSubpaths(t *testing.T) {
tx2 := &StoreTx{
Hash: "aabb",
@@ -43,37 +43,35 @@ func TestEstimateStoreTxBytes_ManyHopsSubpaths(t *testing.T) {
est2 := estimateStoreTxBytes(tx2)
est10 := estimateStoreTxBytes(tx10)
// 10 hops vs 2 hops → 8 extra byPathHop entries × perPathHopBytes
// 10 hops → 45 subpath combos × 40 = 1800 bytes just for subpaths
if est10 <= est2 {
t.Errorf("10-hop (%d) should estimate more than 2-hop (%d)", est10, est2)
}
// spTxIndex eliminated in #791; cost difference is now linear (per-hop only)
expectedDiff := int64(8) * perPathHopBytes // 8 extra hops
if est10 < est2+expectedDiff {
t.Errorf("10-hop (%d) should estimate at least %d more than 2-hop (%d)", est10, expectedDiff, est2)
if est10 < est2+1500 {
t.Errorf("10-hop (%d) should estimate at least 1500 more than 2-hop (%d)", est10, est2)
}
}
// TestEstimateStoreObsBytes_WithResolvedPath verifies that observations with
// ResolvedPath estimate more than those without.
func TestEstimateStoreObsBytes_WithResolvedPath(t *testing.T) {
s1, s2, s3 := "node1", "node2", "node3"
obsNoRP := &StoreObs{
// TestEstimateStoreObsBytes_AfterRefactor verifies that after #800 refactor,
// observations no longer have ResolvedPath overhead in their estimate.
func TestEstimateStoreObsBytes_AfterRefactor(t *testing.T) {
obs := &StoreObs{
ObserverID: "obs1",
PathJSON: `["a","b"]`,
}
obsWithRP := &StoreObs{
ObserverID: "obs1",
PathJSON: `["a","b"]`,
ResolvedPath: []*string{&s1, &s2, &s3},
est := estimateStoreObsBytes(obs)
if est <= 0 {
t.Errorf("estimate should be positive, got %d", est)
}
estNo := estimateStoreObsBytes(obsNoRP)
estWith := estimateStoreObsBytes(obsWithRP)
if estWith <= estNo {
t.Errorf("obs with ResolvedPath (%d) should estimate more than without (%d)", estWith, estNo)
// After #800, all obs estimates should be the same (no RP field variation)
obs2 := &StoreObs{
ObserverID: "obs1",
PathJSON: `["a","b"]`,
}
est2 := estimateStoreObsBytes(obs2)
if est != est2 {
t.Errorf("estimates should be equal after #800 (no RP field), got %d vs %d", est, est2)
}
}
@@ -157,11 +155,9 @@ func BenchmarkEstimateStoreTxBytes(b *testing.B) {
// BenchmarkEstimateStoreObsBytes verifies the obs estimate function is fast.
func BenchmarkEstimateStoreObsBytes(b *testing.B) {
s := "resolvedNodePubkey123456"
obs := &StoreObs{
ObserverID: "observer1234",
PathJSON: `["a","b","c"]`,
ResolvedPath: []*string{&s, &s, &s},
ObserverID: "observer1234",
PathJSON: `["a","b","c"]`,
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
+22 -4
@@ -72,6 +72,22 @@ type StatsResponse struct {
BackfillProgress float64 `json:"backfillProgress"`
SignatureDrops int64 `json:"signatureDrops,omitempty"`
HashMigrationComplete bool `json:"hashMigrationComplete"`
// Memory accounting (issue #832). All values in MB.
//
// StoreDataMB ("trackedMB" historically) is the in-store packet byte
// estimate — useful packet bytes only. Subset of HeapInuse. Used as
// the eviction watermark input. NOT a proxy for RSS; ops dashboards
// should prefer ProcessRSSMB for capacity decisions.
//
// Old field name TrackedMB is retained for backward compatibility
// with pre-v3.6 consumers; it carries the same value as StoreDataMB
// and is deprecated.
TrackedMB float64 `json:"trackedMB"` // deprecated alias for storeDataMB
StoreDataMB float64 `json:"storeDataMB"` // in-store packet bytes (subset of heap)
ProcessRSSMB float64 `json:"processRSSMB"` // process RSS from /proc (Linux) or runtime.Sys fallback
GoHeapInuseMB float64 `json:"goHeapInuseMB"` // runtime.MemStats.HeapInuse
GoSysMB float64 `json:"goSysMB"` // runtime.MemStats.Sys (total Go-managed)
}
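The two Go-runtime fields documented above come straight from runtime.MemStats. A minimal sketch of deriving them (the RSS reader — /proc on Linux per the field comment — and the store-byte estimator live elsewhere and are omitted; this is not the codebase's actual implementation):

```go
package main

import (
	"fmt"
	"runtime"
)

// memFields returns the runtime-derived memory figures in MB, matching
// the semantics documented on StatsResponse: HeapInuse is heap pages in
// active use, Sys is the runtime's total view of OS-obtained memory, so
// HeapInuse <= Sys always holds.
func memFields() (heapInuseMB, sysMB float64) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	const mb = 1024 * 1024
	return float64(m.HeapInuse) / mb, float64(m.Sys) / mb
}

func main() {
	h, s := memFields()
	fmt.Printf("goHeapInuseMB=%.1f goSysMB=%.1f\n", h, s)
}
```

This is also why the test above asserts the ordering invariant goHeapInuseMB <= goSysMB rather than comparing against true RSS.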
// ─── Health ────────────────────────────────────────────────────────────────────
@@ -247,7 +263,6 @@ type TransmissionResp struct {
SNR interface{} `json:"snr"`
RSSI interface{} `json:"rssi"`
PathJSON interface{} `json:"path_json"`
ResolvedPath []*string `json:"resolved_path,omitempty"`
Direction interface{} `json:"direction"`
Score interface{} `json:"score,omitempty"`
Observations []ObservationResp `json:"observations,omitempty"`
@@ -262,7 +277,9 @@ type ObservationResp struct {
SNR interface{} `json:"snr"`
RSSI interface{} `json:"rssi"`
PathJSON interface{} `json:"path_json"`
ResolvedPath []*string `json:"resolved_path,omitempty"`
ResolvedPath interface{} `json:"resolved_path,omitempty"`
Direction interface{} `json:"direction,omitempty"`
RawHex interface{} `json:"raw_hex,omitempty"`
Timestamp interface{} `json:"timestamp"`
}
@@ -298,7 +315,6 @@ type PacketTimestampsResponse struct {
type PacketDetailResponse struct {
Packet interface{} `json:"packet"`
Path []interface{} `json:"path"`
Breakdown *Breakdown `json:"breakdown"`
ObservationCount int `json:"observation_count"`
Observations []ObservationResp `json:"observations,omitempty"`
}
@@ -664,7 +680,9 @@ type DistanceHop struct {
ToPk string `json:"toPk"`
Dist float64 `json:"dist"`
Type string `json:"type"`
SNR interface{} `json:"snr"`
BestSnr interface{} `json:"bestSnr"`
MedianSnr interface{} `json:"medianSnr"`
ObsCount int `json:"obsCount"`
Hash string `json:"hash"`
Timestamp string `json:"timestamp"`
}
+3
@@ -0,0 +1,3 @@
module github.com/meshcore-analyzer/packetpath
go 1.22
+76
@@ -0,0 +1,76 @@
// Package packetpath provides shared helpers for extracting path hops from
// raw MeshCore packet hex bytes.
package packetpath
import (
"encoding/hex"
"fmt"
"strings"
)
// DecodePathFromRawHex extracts the header path hops directly from raw hex bytes.
// This is the authoritative path that matches what's in raw_hex, as opposed to
// decoded.Path.Hops which may be overwritten for TRACE packets (issue #886).
//
// WARNING: This function returns the literal header path bytes regardless of
// payload type. For TRACE packets these bytes are SNR values, NOT hop hashes.
// Callers that may receive TRACE packets MUST check PathBytesAreHops(payloadType)
// first, or use the safer DecodeHopsForPayload wrapper.
func DecodePathFromRawHex(rawHex string) ([]string, error) {
buf, err := hex.DecodeString(rawHex)
if err != nil || len(buf) < 2 {
return nil, fmt.Errorf("invalid or too-short hex")
}
headerByte := buf[0]
offset := 1
if IsTransportRoute(int(headerByte & 0x03)) {
if len(buf) < offset+4 {
return nil, fmt.Errorf("too short for transport codes")
}
offset += 4
}
if offset >= len(buf) {
return nil, fmt.Errorf("too short for path byte")
}
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
hops := make([]string, 0, hashCount)
for i := 0; i < hashCount; i++ {
start := offset + i*hashSize
end := start + hashSize
if end > len(buf) {
break
}
hops = append(hops, strings.ToUpper(hex.EncodeToString(buf[start:end])))
}
return hops, nil
}
// DecodeHopsForPayload returns the header path hops only when the payload type's
// header bytes are actually route hops (i.e. PathBytesAreHops(payloadType) is true).
// For TRACE packets it returns (nil, ErrPayloadHasNoHeaderHops) so the caller is
// forced to source hops from the decoded payload instead.
//
// Prefer this over DecodePathFromRawHex when the payload type is known.
func DecodeHopsForPayload(rawHex string, payloadType byte) ([]string, error) {
if !PathBytesAreHops(payloadType) {
return nil, ErrPayloadHasNoHeaderHops
}
return DecodePathFromRawHex(rawHex)
}
// ErrPayloadHasNoHeaderHops is returned by DecodeHopsForPayload when the
// payload type repurposes the raw_hex header path bytes (e.g. TRACE → SNR values).
var ErrPayloadHasNoHeaderHops = errPayloadHasNoHeaderHops{}
type errPayloadHasNoHeaderHops struct{}
func (errPayloadHasNoHeaderHops) Error() string {
return "payload type repurposes header path bytes; source hops from decoded payload"
}
+150
@@ -0,0 +1,150 @@
package packetpath
import (
"encoding/hex"
"encoding/json"
"strings"
"testing"
)
func TestDecodePathFromRawHex_Basic(t *testing.T) {
// Build a simple FLOOD packet (route_type=1) with 2 hops of hashSize=1
// header: route_type=1, payload_type=2 (TXT_MSG), version=0 → 0b00_0010_01 = 0x09
// path byte: hashSize=1 (bits 7-6 = 0), hashCount=2 (bits 5-0 = 2) → 0x02
// hops: AB, CD
// payload: some bytes
raw := "0902ABCD" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AB" || hops[1] != "CD" {
t.Fatalf("expected [AB, CD], got %v", hops)
}
}
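The fixture comments above encode the header and path bytes by hand. The bit layout they imply can be captured in two small pack helpers (helper names are illustrative; the layout is inferred from the fixture comments: route type in bits 0-1, payload type in bits 2-5, version in bits 6-7; path byte holds hash size minus one in bits 6-7 and hop count in bits 0-5):

```go
package main

import "fmt"

// packHeader builds the MeshCore header byte from its three fields,
// matching fixtures like route=1, payload=2, version=0 → 0x09.
func packHeader(routeType, payloadType, version int) byte {
	return byte(routeType&0x03 | (payloadType&0x0F)<<2 | (version&0x03)<<6)
}

// packPathByte builds the path byte: hashSize is 1-4 bytes per hop,
// hashCount is 0-63 hops.
func packPathByte(hashSize, hashCount int) byte {
	return byte((hashSize-1)<<6 | (hashCount & 0x3F))
}

// unpackPathByte mirrors the decode in DecodePathFromRawHex.
func unpackPathByte(b byte) (hashSize, hashCount int) {
	return int(b>>6) + 1, int(b & 0x3F)
}

func main() {
	fmt.Printf("header=0x%02X pathByte=0x%02X\n",
		packHeader(1, 2, 0), packPathByte(1, 2)) // header=0x09 pathByte=0x02
}
```

Round-tripping these against the fixtures (0x09, 0x0A, 0x14, 0x26) is a quick sanity check when hand-writing new test packets.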
func TestDecodePathFromRawHex_ZeroHops(t *testing.T) {
// DIRECT route (type=2), no hops → 0b00_0010_10 = 0x0A
// path byte: 0x00 (0 hops)
raw := "0A00" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 0 {
t.Fatalf("expected 0 hops, got %v", hops)
}
}
func TestDecodePathFromRawHex_TransportRoute(t *testing.T) {
// TRANSPORT_FLOOD (route_type=0), payload_type=5 (GRP_TXT), version=0
// header: 0b00_0101_00 = 0x14
// transport codes: 4 bytes
// path byte: hashSize=1, hashCount=1 → 0x01
// hop: FF
raw := "14" + "00112233" + "01" + "FF" + "DEAD"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 1 || hops[0] != "FF" {
t.Fatalf("expected [FF], got %v", hops)
}
}
// buildTracePacket creates a TRACE packet hex string where header path bytes are
// SNR values, and payload contains the actual route hops.
func buildTracePacket() (rawHex string, headerPathHops []string, payloadHops []string) {
// DIRECT route (type=2), TRACE payload (type=9), version=0
// header byte: 0b00_1001_10 = 0x26
headerByte := byte(0x26)
// Header path: 2 SNR bytes (hashSize=1, hashCount=2) → path byte = 0x02
// SNR bytes: 0x1A, 0x0F (raw byte values — not route hops)
pathByte := byte(0x02)
snrBytes := []byte{0x1A, 0x0F}
// TRACE payload: tag(4) + authCode(4) + flags(1) + path hops
tag := []byte{0x01, 0x00, 0x00, 0x00}
authCode := []byte{0x02, 0x00, 0x00, 0x00}
// flags: path_sz=0 (1 byte hops), other bits=0 → 0x00
flags := byte(0x00)
// Payload hops: AA, BB, CC (the actual route)
payloadPathBytes := []byte{0xAA, 0xBB, 0xCC}
var buf []byte
buf = append(buf, headerByte, pathByte)
buf = append(buf, snrBytes...)
buf = append(buf, tag...)
buf = append(buf, authCode...)
buf = append(buf, flags)
buf = append(buf, payloadPathBytes...)
rawHex = strings.ToUpper(hex.EncodeToString(buf))
headerPathHops = []string{"1A", "0F"} // SNR values — NOT route hops
payloadHops = []string{"AA", "BB", "CC"} // actual route hops from payload
return
}
func TestDecodePathFromRawHex_TraceReturnsSNR(t *testing.T) {
rawHex, expectedSNR, _ := buildTracePacket()
hops, err := DecodePathFromRawHex(rawHex)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
// DecodePathFromRawHex always returns header path bytes — for TRACE these are SNR values
if len(hops) != len(expectedSNR) {
t.Fatalf("expected %d hops (SNR), got %d: %v", len(expectedSNR), len(hops), hops)
}
for i, h := range hops {
if h != expectedSNR[i] {
t.Errorf("hop[%d]: expected %s, got %s", i, expectedSNR[i], h)
}
}
}
func TestTracePathJSON_UsesPayloadHops(t *testing.T) {
// This test validates the TRACE vs non-TRACE logic that callers should implement:
// For TRACE: path_json = decoded.Path.Hops (payload-decoded route hops)
// For non-TRACE: path_json = DecodePathFromRawHex(raw_hex)
rawHex, snrHops, payloadHops := buildTracePacket()
// DecodePathFromRawHex returns SNR bytes for TRACE
headerHops, _ := DecodePathFromRawHex(rawHex)
headerJSON, _ := json.Marshal(headerHops)
// payload hops (what decoded.Path.Hops would return after TRACE decoding)
payloadJSON, _ := json.Marshal(payloadHops)
// They must differ — SNR != route hops
if string(headerJSON) == string(payloadJSON) {
t.Fatalf("SNR hops and payload hops should differ for TRACE; both are %s", headerJSON)
}
// For TRACE, path_json should be payloadHops, not headerHops
_ = snrHops // snrHops == headerHops — used for documentation
t.Logf("TRACE: header path (SNR) = %s, payload path (route) = %s", headerJSON, payloadJSON)
}
func TestDecodeHopsForPayload_NonTrace(t *testing.T) {
// header 0x01, path_len 0x02, hops 0xAA 0xBB, then payload bytes
raw := "0102AABB00"
hops, err := DecodeHopsForPayload(raw, 0x05) // GRP_TXT — header path bytes ARE hops
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AA" || hops[1] != "BB" {
t.Errorf("expected [AA BB], got %v", hops)
}
}
func TestDecodeHopsForPayload_TraceReturnsError(t *testing.T) {
raw := "010205F00100"
hops, err := DecodeHopsForPayload(raw, PayloadTRACE)
if err != ErrPayloadHasNoHeaderHops {
t.Errorf("expected ErrPayloadHasNoHeaderHops, got %v", err)
}
if hops != nil {
t.Errorf("expected nil hops for TRACE, got %v", hops)
}
}
@@ -0,0 +1,24 @@
package packetpath
// Route type constants (header bits 1-0).
const (
RouteTransportFlood = 0
RouteFlood = 1
RouteDirect = 2
RouteTransportDirect = 3
)
// PayloadTRACE is the payload type constant for TRACE packets.
const PayloadTRACE = 0x09
// IsTransportRoute returns true for TRANSPORT_FLOOD (0) and TRANSPORT_DIRECT (3).
func IsTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
}
// PathBytesAreHops returns true when the raw_hex header path bytes represent
// route hop hashes (the normal case). Returns false for packet types where
// header path bytes are repurposed (e.g. TRACE uses them for SNR values).
func PathBytesAreHops(payloadType byte) bool {
return payloadType != PayloadTRACE
}
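The test comments in the previous file encode the header-byte bit layout these constants assume: route type in bits 1-0, payload type in bits 5-2, version in bits 7-6 (e.g. `0b00_1001_10 = 0x26` is DIRECT route + TRACE payload). A minimal sketch of that split, under that assumed layout (`parseHeader` is an illustrative helper, not part of the package):

```go
package main

import "fmt"

// parseHeader splits a header byte per the layout the test comments assume:
// bits 1-0 route type, bits 5-2 payload type, bits 7-6 version.
// Illustrative only — not a packetpath export.
func parseHeader(h byte) (routeType, payloadType, version byte) {
	return h & 0x03, (h >> 2) & 0x0F, h >> 6
}

func main() {
	rt, pt, v := parseHeader(0x26) // the TRACE test header
	fmt.Println(rt, pt, v)         // 2 9 0 → DIRECT route, TRACE payload, version 0
}
```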
@@ -0,0 +1,31 @@
package packetpath
import "testing"
func TestIsTransportRoute(t *testing.T) {
if !IsTransportRoute(RouteTransportFlood) {
t.Error("RouteTransportFlood should be transport")
}
if !IsTransportRoute(RouteTransportDirect) {
t.Error("RouteTransportDirect should be transport")
}
if IsTransportRoute(RouteFlood) {
t.Error("RouteFlood should not be transport")
}
if IsTransportRoute(RouteDirect) {
t.Error("RouteDirect should not be transport")
}
}
func TestPathBytesAreHops(t *testing.T) {
if PathBytesAreHops(PayloadTRACE) {
t.Error("PathBytesAreHops(PayloadTRACE) should be false")
}
// All other known payload types should return true.
otherTypes := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}
for _, pt := range otherTypes {
if !PathBytesAreHops(pt) {
t.Errorf("PathBytesAreHops(0x%02X) should be true", pt)
}
}
}
@@ -28,7 +28,7 @@
function barChart(data, labels, colors, w = 800, h = 220, pad = 40) {
const max = Math.max(...data, 1);
const barW = Math.min((w - pad * 2) / data.length - 2, 30);
const barW = Math.max(1, Math.min((w - pad * 2) / data.length - 2, 30));
let svg = `<svg viewBox="0 0 ${w} ${h}" style="width:100%;max-height:${h}px" role="img" aria-label="Bar chart showing data distribution"><title>Bar chart showing data distribution</title>`;
// Grid
for (let i = 0; i <= 4; i++) {
@@ -263,7 +263,25 @@
<div class="analytics-row">
<div class="analytics-card flex-1">
<h3>📈 Packets / Hour</h3>
${barChart(rf.packetsPerHour.map(h=>h.count), rf.packetsPerHour.map(h=>h.hour.slice(11)+'h'), 'var(--accent)')}
${(() => {
const pph = rf.packetsPerHour;
const counts = pph.map(h => h.count);
// Decimate x-axis labels to avoid overlap
const totalHours = pph.length;
// Pick label interval: <=24h show every 6h, <=72h every 12h, else every 24h
const labelInterval = totalHours <= 24 ? 6 : totalHours <= 72 ? 12 : 24;
const labels = pph.map((h, i) => {
const hh = h.hour.slice(11, 13); // "HH"
const hourNum = parseInt(hh, 10);
if (hourNum % labelInterval === 0) {
// For multi-day ranges, show date on 00h boundaries
if (totalHours > 48 && hourNum === 0) return h.hour.slice(5, 10);
return hh + 'h';
}
return ''; // skip label
});
return barChart(counts, labels, 'var(--accent)');
})()}
</div>
</div>
@@ -624,14 +642,13 @@
if (!data || !data.rings.length) return '<div class="text-muted">No path data for this observer</div>';
let html = `<div class="reach-rings">`;
data.rings.forEach(ring => {
const opacity = Math.max(0.3, 1 - ring.hops * 0.06);
const nodeLinks = ring.nodes.slice(0, 8).map(n => {
const label = n.name ? `<a href="#/nodes/${encodeURIComponent(n.pubkey)}" class="analytics-link">${esc(n.name)}</a>` : `<span class="mono">${n.hop}</span>`;
const detail = n.distRange ? ` <span class="text-muted">(${n.distRange})</span>` : '';
return label + detail;
}).join(', ');
const extra = ring.nodes.length > 8 ? ` <span class="text-muted">+${ring.nodes.length - 8} more</span>` : '';
html += `<div class="reach-ring" style="opacity:${opacity}">
html += `<div class="reach-ring">
<div class="reach-hop">${ring.hops} hop${ring.hops > 1 ? 's' : ''}</div>
<div class="reach-nodes">${nodeLinks}${extra}</div>
<div class="reach-count">${ring.nodes.length} node${ring.nodes.length > 1 ? 's' : ''}</div>
@@ -675,7 +692,6 @@
});
let html = '<div class="reach-rings">';
Object.entries(byDist).sort((a, b) => +a[0] - +b[0]).forEach(([dist, nodes]) => {
const opacity = Math.max(0.3, 1 - (+dist) * 0.06);
const nodeLinks = nodes.slice(0, 10).map(n => {
const label = n.name
? `<a href="#/nodes/${encodeURIComponent(n.pubkey)}" class="analytics-link">${esc(n.name)}</a>`
@@ -683,7 +699,7 @@
return label + ` <span class="text-muted">via ${esc(n.observer_name)}</span>`;
}).join(', ');
const extra = nodes.length > 10 ? ` <span class="text-muted">+${nodes.length - 10} more</span>` : '';
html += `<div class="reach-ring" style="opacity:${opacity}">
html += `<div class="reach-ring">
<div class="reach-hop">${dist} hop${+dist > 1 ? 's' : ''}</div>
<div class="reach-nodes">${nodeLinks}${extra}</div>
<div class="reach-count">${nodes.length} node${nodes.length > 1 ? 's' : ''}</div>
@@ -840,29 +856,44 @@
}
}
var CHANNEL_TIMELINE_MAX_SERIES = 8;
function renderChannelTimeline(data) {
if (!data.length) return '<div class="text-muted">No data</div>';
var hours = []; var hourSet = {};
var channelList = []; var channelSet = {};
var lookup = {};
var maxCount = 1;
var channelVolume = {};
for (var i = 0; i < data.length; i++) {
var d = data[i];
if (!hourSet[d.hour]) { hourSet[d.hour] = 1; hours.push(d.hour); }
if (!channelSet[d.channel]) { channelSet[d.channel] = 1; channelList.push(d.channel); }
lookup[d.hour + '|' + d.channel] = d.count;
if (d.count > maxCount) maxCount = d.count;
channelVolume[d.channel] = (channelVolume[d.channel] || 0) + d.count;
}
hours.sort();
// Sort channels by total volume descending, cap to top N
channelList.sort(function(a, b) { return channelVolume[b] - channelVolume[a]; });
var hiddenCount = Math.max(0, channelList.length - CHANNEL_TIMELINE_MAX_SERIES);
var visibleChannels = channelList.slice(0, CHANNEL_TIMELINE_MAX_SERIES);
var maxCount = 1;
for (var vi = 0; vi < visibleChannels.length; vi++) {
for (var hi2 = 0; hi2 < hours.length; hi2++) {
var c = lookup[hours[hi2] + '|' + visibleChannels[vi]] || 0;
if (c > maxCount) maxCount = c;
}
}
var colors = ['#ef4444','#22c55e','#3b82f6','#f59e0b','#8b5cf6','#ec4899','#14b8a6','#64748b'];
var w = 600, h = 180, pad = 35;
var xScale = (w - pad * 2) / Math.max(hours.length - 1, 1);
var yScale = (h - pad * 2) / maxCount;
var svg = '<svg viewBox="0 0 ' + w + ' ' + h + '" style="width:100%;max-height:180px" role="img" aria-label="Channel message activity over time"><title>Channel message activity over time</title>';
for (var ci = 0; ci < channelList.length; ci++) {
for (var ci = 0; ci < visibleChannels.length; ci++) {
var pts = [];
for (var hi = 0; hi < hours.length; hi++) {
var count = lookup[hours[hi] + '|' + channelList[ci]] || 0;
var count = lookup[hours[hi] + '|' + visibleChannels[ci]] || 0;
var x = pad + hi * xScale;
var y = h - pad - count * yScale;
pts.push(x + ',' + y);
@@ -876,8 +907,11 @@
}
svg += '</svg>';
var legendParts = [];
for (var lci = 0; lci < channelList.length; lci++) {
legendParts.push('<span><span class="legend-dot" style="background:' + colors[lci % colors.length] + '"></span>' + esc(channelList[lci]) + '</span>');
for (var lci = 0; lci < visibleChannels.length; lci++) {
legendParts.push('<span><span class="legend-dot" style="background:' + colors[lci % colors.length] + '"></span>' + esc(visibleChannels[lci]) + '</span>');
}
if (hiddenCount > 0) {
legendParts.push('<span class="text-muted">+' + hiddenCount + ' more</span>');
}
svg += '<div class="timeline-legend">' + legendParts.join('') + '</div>';
return svg;
@@ -1937,15 +1971,18 @@
}
// Top hops leaderboard
html += `<div class="analytics-section"><h3>🏆 Top 20 Longest Hops</h3><table class="data-table"><thead><tr><th scope="col">#</th><th scope="col">From</th><th scope="col">To</th><th scope="col">Distance (${distUnitLabel})</th><th scope="col">Type</th><th scope="col">SNR</th><th scope="col">Packet</th><th scope="col"></th></tr></thead><tbody>`;
html += `<div class="analytics-section"><h3>🏆 Top 20 Longest Hops</h3><table class="data-table"><thead><tr><th scope="col">#</th><th scope="col">From</th><th scope="col">To</th><th scope="col">Distance (${distUnitLabel})</th><th scope="col">Type</th><th scope="col">Obs</th><th scope="col">Best SNR</th><th scope="col">Median SNR</th><th scope="col">Packet</th><th scope="col"></th></tr></thead><tbody>`;
const top20 = data.topHops.slice(0, 20);
top20.forEach((h, i) => {
const fromLink = h.fromPk ? `<a href="#/nodes/${encodeURIComponent(h.fromPk)}" class="analytics-link">${esc(h.fromName)}</a>` : esc(h.fromName || '?');
const toLink = h.toPk ? `<a href="#/nodes/${encodeURIComponent(h.toPk)}" class="analytics-link">${esc(h.toName)}</a>` : esc(h.toName || '?');
const snr = h.snr != null ? h.snr + ' dB' : '<span class="text-muted">—</span>';
const bestSnr = h.bestSnr != null ? Number(h.bestSnr).toFixed(1) + ' dB' : '<span class="text-muted">—</span>';
const medianSnr = h.medianSnr != null ? Number(h.medianSnr).toFixed(1) + ' dB' : '<span class="text-muted">—</span>';
const obs = h.obsCount != null ? h.obsCount : 1;
const pktLink = h.hash ? `<a href="#/packet/${encodeURIComponent(h.hash)}" class="analytics-link mono" style="font-size:0.85em">${esc(h.hash.slice(0, 12))}…</a>` : '—';
const mapBtn = h.fromPk && h.toPk ? `<button class="btn-icon dist-map-hop" data-from="${esc(h.fromPk)}" data-to="${esc(h.toPk)}" title="View on map">🗺️</button>` : '';
html += `<tr><td>${i+1}</td><td>${fromLink}</td><td>${toLink}</td><td><strong>${formatDistance(h.dist)}</strong></td><td>${esc(h.type)}</td><td>${snr}</td><td>${pktLink}</td><td>${mapBtn}</td></tr>`;
const tsTitle = h.timestamp ? `Best observation: ${h.timestamp}` : '';
html += `<tr title="${esc(tsTitle)}"><td>${i+1}</td><td>${fromLink}</td><td>${toLink}</td><td><strong>${formatDistance(h.dist)}</strong></td><td>${esc(h.type)}</td><td>${obs}</td><td>${bestSnr}</td><td>${medianSnr}</td><td>${pktLink}</td><td>${mapBtn}</td></tr>`;
});
html += `</tbody></table></div>`;
@@ -3448,7 +3485,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
if (sortKey === 'severity') {
v = (SKEW_SEVERITY_ORDER[a.severity] || 9) - (SKEW_SEVERITY_ORDER[b.severity] || 9);
} else if (sortKey === 'skew') {
v = Math.abs(b.medianSkewSec || 0) - Math.abs(a.medianSkewSec || 0);
v = Math.abs(window.currentSkewValue(b) || 0) - Math.abs(window.currentSkewValue(a) || 0);
} else if (sortKey === 'name') {
v = (a.nodeName || '').localeCompare(b.nodeName || '');
} else if (sortKey === 'drift') {
@@ -3475,12 +3512,13 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
var rowsHtml = filtered.map(function(n) {
var rowClass = 'clock-fleet-row--' + (n.severity || 'ok');
var lastAdv = n.lastObservedTS ? new Date(n.lastObservedTS * 1000).toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC') : '—';
var skewText = n.severity === 'no_clock' ? 'No Clock' : formatSkew(n.medianSkewSec);
var skewVal = window.currentSkewValue(n);
var skewText = n.severity === 'no_clock' ? 'No Clock' : formatSkew(skewVal);
var driftText = n.severity === 'no_clock' || !n.driftPerDaySec ? '' : formatDrift(n.driftPerDaySec);
return '<tr class="' + rowClass + '" data-pubkey="' + esc(n.pubkey) + '" style="cursor:pointer">' +
'<td><strong>' + esc(n.nodeName || n.pubkey.slice(0, 12)) + '</strong></td>' +
'<td style="font-family:var(--mono,monospace)">' + skewText + '</td>' +
'<td>' + renderSkewBadge(n.severity, n.medianSkewSec) + '</td>' +
'<td>' + renderSkewBadge(n.severity, skewVal, n) + '</td>' +
'<td style="font-family:var(--mono,monospace)">' + driftText + '</td>' +
'<td style="font-size:11px">' + lastAdv + '</td>' +
'</tr>';
@@ -10,8 +10,75 @@ function routeTypeName(n) { return ROUTE_TYPES[n] || 'UNKNOWN'; }
function payloadTypeName(n) { return PAYLOAD_TYPES[n] || 'UNKNOWN'; }
function payloadTypeColor(n) { return PAYLOAD_COLORS[n] || 'unknown'; }
function isTransportRoute(rt) { return rt === 0 || rt === 3; }
/** Byte offset of path_len in raw_hex: 5 for transport routes (4 bytes of next/last hop codes precede it), 1 otherwise. */
function getPathLenOffset(routeType) { return isTransportRoute(routeType) ? 5 : 1; }
function transportBadge(rt) { return isTransportRoute(rt) ? ' <span class="badge badge-transport" title="' + routeTypeName(rt) + '">T</span>' : ''; }
/**
* Compute breakdown byte ranges from raw_hex on the client.
* Mirrors cmd/server/decoder.go BuildBreakdown(). Used so per-observation raw_hex
* (which can differ in path length from the top-level packet) gets accurate
* highlighted byte ranges, instead of using the server-supplied breakdown
* computed once from the top-level raw_hex.
*/
function computeBreakdownRanges(hexString, routeType, payloadType) {
if (!hexString) return [];
const clean = hexString.replace(/\s+/g, '');
const bytes = clean.length / 2;
if (bytes < 2) return [];
const ranges = [];
// Header
ranges.push({ start: 0, end: 0, label: 'Header' });
let offset = 1;
if (isTransportRoute(routeType)) {
if (bytes < offset + 4) return ranges;
ranges.push({ start: offset, end: offset + 3, label: 'Transport Codes' });
offset += 4;
}
if (offset >= bytes) return ranges;
// Path Length byte
ranges.push({ start: offset, end: offset, label: 'Path Length' });
const pathByte = parseInt(clean.slice(offset * 2, offset * 2 + 2), 16);
offset += 1;
if (isNaN(pathByte)) return ranges;
const hashSize = (pathByte >> 6) + 1;
const hashCount = pathByte & 0x3F;
const pathBytes = hashSize * hashCount;
if (hashCount > 0 && offset + pathBytes <= bytes) {
ranges.push({ start: offset, end: offset + pathBytes - 1, label: 'Path' });
}
offset += pathBytes;
if (offset >= bytes) return ranges;
const payloadStart = offset;
// ADVERT (payload_type 4) gets sub-fields when full record present
if (payloadType === 4 && bytes - payloadStart >= 100) {
ranges.push({ start: payloadStart, end: payloadStart + 31, label: 'PubKey' });
ranges.push({ start: payloadStart + 32, end: payloadStart + 35, label: 'Timestamp' });
ranges.push({ start: payloadStart + 36, end: payloadStart + 99, label: 'Signature' });
const appStart = payloadStart + 100;
if (appStart < bytes) {
ranges.push({ start: appStart, end: appStart, label: 'Flags' });
const appFlags = parseInt(clean.slice(appStart * 2, appStart * 2 + 2), 16);
let fOff = appStart + 1;
if (!isNaN(appFlags)) {
if ((appFlags & 0x10) && fOff + 8 <= bytes) {
ranges.push({ start: fOff, end: fOff + 3, label: 'Latitude' });
ranges.push({ start: fOff + 4, end: fOff + 7, label: 'Longitude' });
fOff += 8;
}
if ((appFlags & 0x20) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x40) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x80) && fOff < bytes) {
ranges.push({ start: fOff, end: bytes - 1, label: 'Name' });
}
}
}
} else {
ranges.push({ start: payloadStart, end: bytes - 1, label: 'Payload' });
}
return ranges;
}
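The ADVERT branch above assumes a fixed record layout: 32-byte pubkey, 4-byte timestamp, 64-byte signature, then an app-flags byte gating optional fields. Since the comment says this mirrors the server's Go `BuildBreakdown()`, here is a hedged Go sketch of just that sub-field arithmetic, using payload-relative offsets; the `advertRanges` and `fieldRange` names are hypothetical, and the real server code may differ in detail:

```go
package main

import "fmt"

type fieldRange struct {
	start, end int
	label      string
}

// advertRanges computes ADVERT sub-field byte ranges (payload-relative) for a
// payload of the given length, mirroring the client-side arithmetic above:
// pubkey 0-31, timestamp 32-35, signature 36-99, flags at 100, then optional
// fields gated by flag bits. Illustrative sketch only.
func advertRanges(payloadLen int, appFlags byte) []fieldRange {
	if payloadLen < 100 {
		return nil // not a full ADVERT record
	}
	r := []fieldRange{
		{0, 31, "PubKey"}, {32, 35, "Timestamp"}, {36, 99, "Signature"},
	}
	if payloadLen > 100 {
		r = append(r, fieldRange{100, 100, "Flags"})
		off := 101
		if appFlags&0x10 != 0 && off+8 <= payloadLen { // location present
			r = append(r,
				fieldRange{off, off + 3, "Latitude"},
				fieldRange{off + 4, off + 7, "Longitude"})
			off += 8
		}
		if appFlags&0x20 != 0 && off+2 <= payloadLen { // 2-byte field, skipped
			off += 2
		}
		if appFlags&0x40 != 0 && off+2 <= payloadLen { // 2-byte field, skipped
			off += 2
		}
		if appFlags&0x80 != 0 && off < payloadLen { // name fills the remainder
			r = append(r, fieldRange{off, payloadLen - 1, "Name"})
		}
	}
	return r
}

func main() {
	for _, f := range advertRanges(120, 0x90) { // location + name flags set
		fmt.Println(f.label, f.start, f.end)
	}
}
```

For a 120-byte payload with flags 0x90 this yields the Name field at bytes 109-119, matching the client-side `fOff` bookkeeping.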
// --- Utilities ---
const _apiPerf = { calls: 0, totalMs: 0, log: [], cacheHits: 0 };
const _apiCache = new Map();
@@ -1027,7 +1094,6 @@ function makeColumnsResizable(tableSelector, storageKey) {
// Add resize handles
ths.forEach((th, i) => {
if (i === ths.length - 1) return;
th.style.position = 'relative';
const handle = document.createElement('div');
handle.className = 'col-resize-handle';
handle.addEventListener('mousedown', (e) => {
@@ -393,17 +393,25 @@
}
}
// Merge user-stored keys into the channel list
// Merge user-stored keys into the channel list.
// If a stored key matches a server-known channel, mark that channel as
// userAdded so the ✕ button appears — otherwise the user has no way to
// remove a key they added but that the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
// Check if channel already exists by name
var exists = channels.some(function (ch) {
return ch.name === name || ch.hash === name || ch.hash === ('user:' + name);
});
if (!exists) {
var matched = false;
for (var j = 0; j < channels.length; j++) {
var ch = channels[j];
if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
ch.userAdded = true;
matched = true;
break;
}
}
if (!matched) {
channels.push({
hash: 'user:' + name,
name: name,
@@ -749,19 +757,38 @@
e.stopPropagation();
var channelHash = removeBtn.getAttribute('data-remove-channel');
if (!channelHash) return;
var chName = channelHash.startsWith('user:') ? channelHash.substring(5) : channelHash;
// The localStorage key is the channel name. For user:-prefixed entries
// strip the prefix; for server-known channels look up the channel
// object so we use its display name (the hash itself isn't the key).
var ch = channels.find(function (c) { return c.hash === channelHash; });
var chName = channelHash.startsWith('user:')
? channelHash.substring(5)
: (ch && ch.name) || channelHash;
if (!confirm('Remove channel "' + chName + '"? This will clear saved keys and cached messages.')) return;
ChannelDecrypt.removeKey(chName);
// Remove from channels array
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
if (channelHash.startsWith('user:')) {
// Pure user-added channel — drop from the list entirely.
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
}
} else if (ch) {
// Server-known channel: keep the row, just unmark as user-added so
// the ✕ disappears until they re-add a key.
ch.userAdded = false;
// If this was the selected channel, clear decrypted messages since
// the key is gone — they can't be re-decrypted without re-adding it.
if (selectedHash === channelHash) {
messages = [];
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Key removed — add a key to decrypt messages</div>';
}
}
renderChannelList();
return;
@@ -1165,6 +1192,40 @@
return;
}
// #811: Deep link to a `#`-named channel that's not in the loaded list.
// If a stored key matches, decrypt. Otherwise we must distinguish an
// encrypted-no-key channel (show lock) from an unencrypted channel that
// simply isn't in the toggle-off list (#825 — must fall through to REST).
if (hash.charAt(0) === '#') {
if (storedKeys[hash]) {
var keyHex2 = storedKeys[hash];
var keyBytes2 = ChannelDecrypt.hexToBytes(keyHex2);
var hashByte2 = await ChannelDecrypt.computeChannelHash(keyBytes2);
await decryptAndRender(keyHex2, hashByte2, hash);
return;
}
// #825: confirm encrypted-ness via an encrypted-included channel list
// before assuming a lock state. Conservative on error — fall through.
// Show a loading affordance so cold deep links don't display stale content
// for the duration of the metadata RTT (cached 15s thereafter).
msgEl.innerHTML = '<div class="ch-loading">Loading messages…</div>';
try {
var rpInc = RegionFilter.getRegionParam();
var paramsInc = ['includeEncrypted=true'];
if (rpInc) paramsInc.push('region=' + encodeURIComponent(rpInc));
var allCh = await api('/channels?' + paramsInc.join('&'), { ttl: CLIENT_TTL.channels });
if (isStaleMessageRequest(request)) return;
var foundCh = (allCh.channels || []).find(function (c) { return c.hash === hash; });
if (foundCh && foundCh.encrypted === true) {
msgEl.innerHTML = '<div class="ch-empty">🔒 This channel is encrypted and no decryption key is configured</div>';
return;
}
// Unencrypted (or unknown) — fall through to the REST fetch below.
} catch (e) {
// ignore — fall through to REST fetch
}
}
msgEl.innerHTML = '<div class="ch-loading">Loading messages…</div>';
try {
@@ -81,9 +81,13 @@ window.HopDisplay = (function() {
const regionalConflicts = conflicts.filter(c => c.regional);
const badgeCount = regionalConflicts.length > 0 ? regionalConflicts.length : (globalFallback ? conflicts.length : 0);
const conflictData = escapeHtml(JSON.stringify({ h, conflicts, globalFallback }));
const warnBadge = badgeCount > 1
const conflictBadge = badgeCount > 1
? ` <button class="hop-conflict-btn" data-conflict='${conflictData}' onclick="event.preventDefault();event.stopPropagation();HopDisplay._showFromBtn(this)" title="${badgeCount} candidates — click for details">⚠${badgeCount}</button>`
: '';
const unreliableBadge = unreliable
? ' <button class="hop-unreliable-btn" aria-label="Unreliable name resolution" title="Unreliable name resolution — this hash\u2192name match is geographically inconsistent with the surrounding path hops. The repeater itself may be fine; this specific hop assignment is uncertain.">⚠️</button>'
: '';
const warnBadge = conflictBadge + unreliableBadge;
const cls = [
'hop',
@@ -72,33 +72,89 @@ window.HopResolver = (function() {
}
/**
* Pick the best candidate using affinity first, then geo-distance fallback.
* Pick the best candidate by scoring against BOTH prev and next resolved hops.
*
* Strategy (in priority order):
* 1. Neighbor-graph edge weight: sum of edge scores to prevPubkey + nextPubkey. Pick max.
* 2. Geographic centroid: if no candidate has graph edges, compute centroid of
* prev+next positions and pick closest candidate by haversine distance.
* 3. Single-anchor geo fallback: if only one neighbor is resolved, use it as anchor.
* 4. Original heuristic: first candidate (when no context at all).
*
* @param {Array} candidates - candidates with lat/lon/pubkey/name
* @param {string|null} adjacentPubkey - pubkey of the previously/next resolved hop
* @param {Object|null} anchor - {lat, lon} for geo fallback
* @param {number|null} fallbackLat - fallback anchor lat (e.g. observer)
* @param {number|null} fallbackLon - fallback anchor lon
* @param {string|null} prevPubkey - pubkey of previous resolved hop
* @param {string|null} nextPubkey - pubkey of next resolved hop
* @param {Object|null} prevPos - {lat, lon} of previous resolved hop or origin
* @param {Object|null} nextPos - {lat, lon} of next resolved hop or observer
* @returns {Object} best candidate
*/
function pickByAffinity(candidates, adjacentPubkey, anchor, fallbackLat, fallbackLon) {
// If we have affinity data and an adjacent hop, prefer neighbors
if (adjacentPubkey && Object.keys(affinityMap).length > 0) {
const withAffinity = candidates
.map(c => ({ ...c, affinity: getAffinity(adjacentPubkey, c.pubkey) }))
.filter(c => c.affinity > 0);
if (withAffinity.length > 0) {
withAffinity.sort((a, b) => b.affinity - a.affinity);
return withAffinity[0];
function pickByAffinity(candidates, prevPubkey, nextPubkey, prevPos, nextPos) {
const hasGraph = Object.keys(affinityMap).length > 0;
const hasAdj = prevPubkey || nextPubkey;
// Strategy 1: neighbor-graph edge weights (sum of prev + next)
if (hasGraph && hasAdj) {
const scored = candidates.map(function(c) {
let s = 0;
if (prevPubkey) s += getAffinity(prevPubkey, c.pubkey);
if (nextPubkey) s += getAffinity(nextPubkey, c.pubkey);
return { candidate: c, edgeScore: s };
});
const withEdges = scored.filter(function(s) { return s.edgeScore > 0; });
if (withEdges.length > 0) {
withEdges.sort(function(a, b) { return b.edgeScore - a.edgeScore; });
_traceMultiCandidate(candidates, scored, withEdges[0].candidate, 'graph');
return withEdges[0].candidate;
}
}
// Fallback: geo-distance sort (existing behavior)
const effectiveAnchor = anchor || (fallbackLat != null ? { lat: fallbackLat, lon: fallbackLon } : null);
if (effectiveAnchor) {
candidates.sort((a, b) => dist(a.lat, a.lon, effectiveAnchor.lat, effectiveAnchor.lon) - dist(b.lat, b.lon, effectiveAnchor.lat, effectiveAnchor.lon));
// Strategy 2/3: geographic — centroid of prev+next, or single anchor
let anchorLat = null, anchorLon = null, anchorCount = 0;
if (prevPos && prevPos.lat != null && prevPos.lon != null) {
anchorLat = (anchorLat || 0) + prevPos.lat;
anchorLon = (anchorLon || 0) + prevPos.lon;
anchorCount++;
}
if (nextPos && nextPos.lat != null && nextPos.lon != null) {
anchorLat = (anchorLat || 0) + nextPos.lat;
anchorLon = (anchorLon || 0) + nextPos.lon;
anchorCount++;
}
if (anchorCount > 0) {
anchorLat /= anchorCount;
anchorLon /= anchorCount;
const geoScored = candidates.map(function(c) {
const d = (c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0))
? haversineKm(c.lat, c.lon, anchorLat, anchorLon) : 999999;
return { candidate: c, distKm: d };
});
geoScored.sort(function(a, b) { return a.distKm - b.distKm; });
_traceMultiCandidate(candidates, geoScored, geoScored[0].candidate, 'centroid');
return geoScored[0].candidate;
}
// Strategy 4: no context — return first candidate
_traceMultiCandidate(candidates, null, candidates[0], 'fallback');
return candidates[0];
}
/** Dev-mode console trace for multi-candidate picks */
function _traceMultiCandidate(candidates, scored, chosen, method) {
if (typeof console === 'undefined' || !console.debug) return;
if (candidates.length < 2) return;
try {
const prefix = candidates[0].pubkey ? candidates[0].pubkey.slice(0, 2) : '??';
const scoreSummary = scored ? scored.map(function(s) {
const pk = (s.candidate || s).pubkey || '?';
const val = s.edgeScore != null ? s.edgeScore : (s.distKm != null ? s.distKm + 'km' : '?');
return pk.slice(0, 8) + ':' + val;
}) : [];
console.debug('[hop-resolver] hash=' + prefix + ' candidates=' + candidates.length +
' scored=[' + scoreSummary.join(',') + '] chose=' + (chosen.pubkey || '?').slice(0, 8) +
' method=' + method);
} catch(e) { /* trace is best-effort */ }
}
/**
* Resolve an array of hex hop prefixes to node info.
* Returns a map: { hop: {name, pubkey, lat, lon, ambiguous, unreliable} }
@@ -169,52 +225,54 @@ window.HopResolver = (function() {
}
}
// Forward pass
let lastPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
let lastResolvedPubkey = null;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) {
lastPos = hopPositions[hop];
lastResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
// Combined disambiguation: resolve ambiguous hops using both neighbors.
// We iterate until no more hops can be resolved (handles cascading dependencies).
const originPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
const observerPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let changed = true;
let maxIter = hops.length + 1; // prevent infinite loops
while (changed && maxIter-- > 0) {
changed = false;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) continue; // already resolved
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Find prev resolved neighbor
let prevPubkey = null, prevPos = null;
for (let j = i - 1; j >= 0; j--) {
if (hopPositions[hops[j]]) {
prevPos = hopPositions[hops[j]];
prevPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!prevPos && originPos) prevPos = originPos;
// Find next resolved neighbor
let nextPubkey = null, nextPos = null;
for (let j = i + 1; j < hops.length; j++) {
if (hopPositions[hops[j]]) {
nextPos = hopPositions[hops[j]];
nextPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!nextPos && observerPos) nextPos = observerPos;
// Skip if we have zero context (wait for a later iteration or neighbor resolution)
if (!prevPubkey && !nextPubkey && !prevPos && !nextPos) continue;
const picked = pickByAffinity(withLoc, prevPubkey, nextPubkey, prevPos, nextPos);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
changed = true;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Affinity-aware: prefer candidates that are neighbors of the previous hop
const picked = pickByAffinity(withLoc, lastResolvedPubkey, lastPos, i === hops.length - 1 ? observerLat : null, i === hops.length - 1 ? observerLon : null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
lastPos = hopPositions[hop];
lastResolvedPubkey = picked.pubkey;
}
// Backward pass
let nextPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let nextResolvedPubkey = null;
for (let i = hops.length - 1; i >= 0; i--) {
const hop = hops[i];
if (hopPositions[hop]) {
nextPos = hopPositions[hop];
nextResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length || !nextPos) continue;
// Affinity-aware: prefer candidates that are neighbors of the next hop
const picked = pickByAffinity(withLoc, nextResolvedPubkey, nextPos, null, null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
nextPos = hopPositions[hop];
nextResolvedPubkey = picked.pubkey;
}
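The combined pass above is a fixed-point iteration: each sweep resolves any ambiguous hop that has at least one resolved neighbor (or an endpoint position) for context, and repeats until a sweep makes no progress, so one hop's resolution can unlock its neighbors on the next sweep. A minimal standalone sketch of that control flow, under simplified assumptions — the real `pickByAffinity` also weights neighbor-table affinity, which is replaced here by a pure distance score:

```javascript
// Sketch of the iterate-until-stable disambiguation loop.
// `candidatesByHop` maps each ambiguous hop to candidate positions.
function resolveByFixedPoint(hops, candidatesByHop, originPos, observerPos) {
  const pos = {}; // hop -> chosen {lat, lon}
  const dist = (a, b) => Math.hypot(a.lat - b.lat, a.lon - b.lon);
  let changed = true;
  let maxIter = hops.length + 1; // bound the number of sweeps
  while (changed && maxIter-- > 0) {
    changed = false;
    for (let i = 0; i < hops.length; i++) {
      const hop = hops[i];
      if (pos[hop]) continue; // already resolved
      const cands = candidatesByHop[hop] || [];
      if (!cands.length) continue;
      // Nearest already-resolved neighbor on either side, falling back
      // to the path endpoints (origin / observer) when available.
      let prev = originPos, next = observerPos;
      for (let j = i - 1; j >= 0; j--) if (pos[hops[j]]) { prev = pos[hops[j]]; break; }
      for (let j = i + 1; j < hops.length; j++) if (pos[hops[j]]) { next = pos[hops[j]]; break; }
      if (!prev && !next) continue; // zero context — retry on a later sweep
      const score = c => (prev ? dist(c, prev) : 0) + (next ? dist(c, next) : 0);
      pos[hop] = cands.reduce((best, c) => (score(c) < score(best) ? c : best));
      changed = true;
    }
  }
  return pos;
}
```

The `maxIter` bound mirrors the real loop: it is only a safety net, since a sweep that resolves nothing sets `changed = false` and exits anyway.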
// Sanity check: drop hops impossibly far from neighbors
@@ -276,13 +334,13 @@ window.HopResolver = (function() {
*/
function resolveFromServer(hops, resolvedPath) {
if (!hops || !resolvedPath || hops.length !== resolvedPath.length) return {};
var result = {};
for (var i = 0; i < hops.length; i++) {
var hop = hops[i];
var pubkey = resolvedPath[i];
const result = {};
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
const pubkey = resolvedPath[i];
if (!pubkey) continue; // null = unresolved, leave for client-side fallback
// O(1) lookup via pubkeyIdx built during init()
var node = pubkeyIdx[pubkey.toLowerCase()] || null;
const node = pubkeyIdx[pubkey.toLowerCase()] || null;
result[hop] = {
name: node ? node.name : pubkey.slice(0, 8),
pubkey: pubkey,
+18 -3
@@ -132,7 +132,7 @@
/* ---- Node Detail Panel ---- */
.live-node-detail {
top: 60px;
top: 64px;
right: 12px;
width: 320px;
max-height: calc(100vh - 140px);
@@ -325,11 +325,14 @@
}
.live-stats-row { flex-wrap: wrap; gap: 4px; }
.live-stat-pill { font-size: 11px; padding: 2px 7px; }
.live-toggles { font-size: 10px; gap: 6px; margin-left: 0; }
.live-toggles { font-size: 10px; gap: 6px; margin-left: 0; overflow-x: auto; flex-wrap: nowrap; -webkit-overflow-scrolling: touch; width: 100%; min-width: 0; }
.live-title { font-size: 12px; letter-spacing: 1px; }
/* #203 — bottom-sheet node detail on mobile */
.live-node-detail { width: 100%; right: 0; left: 0; top: auto; bottom: 0; max-height: 60vh; border-radius: 16px 16px 0 0; overflow-y: auto; }
.live-node-detail { width: 100%; right: 0; left: 0; top: auto; bottom: 0; max-height: 60dvh; border-radius: 16px 16px 0 0; overflow-y: auto; z-index: 1050; }
.live-node-detail.hidden { transform: translateY(100%); }
/* Close button was unreachable: panel-header collapsed to 8px on mobile, panel-content
scroll area started at y=8, overlapping the button's 36px tap target (y=642) */
.live-node-detail .panel-header { min-height: 44px; }
.feed-detail-card {
position: fixed !important;
right: 0 !important;
@@ -689,6 +692,18 @@
.live-feed { bottom: 68px; }
.feed-show-btn { bottom: 68px !important; }
/* Backdrop for mobile tap-outside-to-close (#797) */
.node-detail-backdrop {
display: none;
position: absolute;
inset: 0;
z-index: 1049;
background: rgba(0, 0, 0, 0.25);
}
@media (max-width: 640px) {
.node-detail-backdrop.active { display: block; }
}
/* Mobile VCR */
@media (max-width: 640px) {
/* Mobile VCR: two-row stacked layout */
+8 -2
@@ -849,6 +849,7 @@
<div class="panel-content" aria-live="polite" aria-relevant="additions" role="log"></div>
</div>
<button class="feed-show-btn hidden" id="feedShowBtn" title="Show feed">📋</button>
<div id="nodeDetailBackdrop" class="node-detail-backdrop"></div>
<div class="live-overlay live-node-detail hidden" id="liveNodeDetail">
<div class="panel-header">
<button class="panel-corner-btn" data-panel="liveNodeDetail" title="Move panel to next corner" aria-label="Move panel to next corner"></button>
@@ -1216,10 +1217,14 @@
// Node detail panel
const nodeDetailPanel = document.getElementById('liveNodeDetail');
const nodeDetailContent = document.getElementById('nodeDetailContent');
document.getElementById('nodeDetailClose').addEventListener('click', () => {
const nodeDetailBackdrop = document.getElementById('nodeDetailBackdrop');
function closeNodeDetail() {
activeNodeDetailKey = null;
nodeDetailPanel.classList.add('hidden');
});
nodeDetailBackdrop.classList.remove('active');
}
document.getElementById('nodeDetailClose').addEventListener('click', closeNodeDetail);
nodeDetailBackdrop.addEventListener('click', closeNodeDetail);
// Feed panel resize handle (#27)
const savedFeedWidth = localStorage.getItem('live-feed-width');
@@ -1451,6 +1456,7 @@
const panel = document.getElementById('liveNodeDetail');
const content = document.getElementById('nodeDetailContent');
panel.classList.remove('hidden');
document.getElementById('nodeDetailBackdrop').classList.add('active');
content.innerHTML = '<div style="padding:20px;color:var(--text-muted)">Loading…</div>';
try {
const [data, healthData] = await Promise.all([
+1 -1
@@ -965,7 +965,7 @@
</dl>
<div style="margin-top:8px;clear:both;">
<a href="#/nodes/${node.public_key}" style="color:var(--accent);font-size:12px;">View Node </a>
${node.public_key ? ` · <a href="#" data-show-neighbors data-pubkey="${escapeHtml(node.public_key)}" data-name="${escapeHtml(node.name || 'Unknown')}" style="color:var(--accent);font-size:12px;">Show Neighbors</a>` : ''}
${node.public_key ? ` · <a href="javascript:void(0)" role="button" data-show-neighbors data-pubkey="${escapeHtml(node.public_key)}" data-name="${escapeHtml(node.name || 'Unknown')}" style="color:var(--accent);font-size:12px;cursor:pointer;">Show Neighbors</a>` : ''}
</div>
</div>`;
}
+118 -56
@@ -286,11 +286,29 @@
if (h) h.textContent = 'Neighbors (' + data.neighbors.length + ')';
}
var html = renderNeighborTable(data.neighbors, limit);
if (limit && data.neighbors.length > limit && viewAllPubkey) {
html += '<div style="margin-top:6px;text-align:right"><a href="#/nodes/' + encodeURIComponent(viewAllPubkey) + '?section=node-neighbors" style="font-size:12px">View all ' + data.neighbors.length + ' neighbors </a></div>';
if (limit && data.neighbors.length > limit) {
html += '<div style="margin-top:6px;text-align:right"><button class="btn-link show-all-neighbors-btn" style="font-size:12px;cursor:pointer;background:none;border:none;color:var(--accent);padding:0">Show all ' + data.neighbors.length + ' neighbors </button></div>';
} else if (!limit && data.neighbors.length > 5) {
// Collapse toggle when expanded (#855)
html += '<div style="margin-top:6px;text-align:right"><button class="btn-link collapse-neighbors-btn" style="font-size:12px;cursor:pointer;background:none;border:none;color:var(--accent);padding:0">Show fewer ▲</button></div>';
}
el.innerHTML = html;
// Wire "Show all neighbors" expand button (#855)
var expandBtn = el.querySelector('.show-all-neighbors-btn');
if (expandBtn) {
expandBtn.addEventListener('click', function() {
renderNeighborData(data, containerId, 0, headerSelector, null);
});
}
// Wire collapse button (#855)
var collapseBtn = el.querySelector('.collapse-neighbors-btn');
if (collapseBtn) {
collapseBtn.addEventListener('click', function() {
renderNeighborData(data, containerId, 5, headerSelector, null);
});
}
// Initialize TableSort on neighbor table
var neighborTable = el.querySelector('.neighbor-sort-table');
if (neighborTable && window.TableSort) {
@@ -318,8 +336,11 @@
function init(app, routeParam) {
directNode = routeParam || null;
if (directNode && window.innerWidth <= 640) {
// Full-screen single node view (mobile only)
if (directNode) {
// Full-screen single node view (desktop + mobile).
// Reached via the 🔍 Details link or a deep link to #/nodes/{pubkey}.
// Row clicks use history.replaceState (no hashchange → no re-init),
// so the split-panel UX on desktop is preserved.
app.innerHTML = `<div class="node-fullscreen">
<div class="node-full-header">
<button class="detail-back-btn node-back-btn" id="nodeBackBtn" aria-label="Back to nodes"></button>
@@ -352,7 +373,7 @@
app.innerHTML = `<div class="nodes-page">
<div class="nodes-topbar">
<input type="text" class="nodes-search" id="nodeSearch" placeholder="Search nodes by name…" aria-label="Search nodes by name">
<input type="text" class="nodes-search" id="nodeSearch" placeholder="Search by name or pubkey prefix…" aria-label="Search nodes by name or pubkey prefix">
<div class="nodes-counts" id="nodeCounts"></div>
</div>
<div id="nodesRegionFilter" class="region-filter-container"></div>
@@ -538,9 +559,10 @@
</div>
<div class="node-full-card" id="node-packets">
<h4>Recent Packets (${adverts.length})</h4>
${(() => { const validPackets = adverts.filter(p => p.hash && p.timestamp); return `
<h4>Recent Packets (${validPackets.length})</h4>
<div class="node-activity-list">
${adverts.length ? adverts.map(p => {
${validPackets.length ? validPackets.map(p => {
let decoded; try { decoded = JSON.parse(p.decoded_json); } catch {}
const typeLabel = p.payload_type === 4 ? '📡 Advert' : p.payload_type === 5 ? '💬 Channel' : p.payload_type === 2 ? '✉️ DM' : '📦 Packet';
const detail = decoded?.text ? ': ' + escapeHtml(truncate(decoded.text, 50)) : decoded?.name ? ' — ' + escapeHtml(decoded.name) : '';
@@ -566,6 +588,7 @@
</div>`;
}).join('') : '<div class="text-muted">No recent packets</div>'}
</div>
`; })()}
</div>`;
// Map
@@ -628,34 +651,9 @@
headerSelector: '#fullNeighborsHeader'
});
// #690 — Clock Skew detail section
(async function loadClockSkew() {
var container = document.getElementById('node-clock-skew');
if (!container) return;
try {
var cs = await api('/nodes/' + encodeURIComponent(n.public_key) + '/clock-skew', { ttl: 30000 });
if (!cs || !cs.severity) return;
container.style.display = '';
var severityColor = SKEW_SEVERITY_COLORS[cs.severity] || 'var(--text-muted)';
var severityLabel = SKEW_SEVERITY_LABELS[cs.severity] || cs.severity;
var driftHtml = cs.driftPerDaySec ? '<div style="font-size:12px;color:var(--text-muted);margin-top:2px">Drift: ' + formatDrift(cs.driftPerDaySec) + '</div>' : '';
var sparkHtml = renderSkewSparkline(cs.samples, 200, 32);
var skewDisplay = cs.severity === 'no_clock'
? '<span style="font-size:18px;font-weight:700;color:var(--text-muted)">No Clock</span>'
: '<span style="font-size:18px;font-weight:700;font-family:var(--mono)">' + formatSkew(cs.medianSkewSec) + '</span>';
container.innerHTML =
'<h4 style="margin:0 0 6px">⏰ Clock Skew</h4>' +
'<div style="display:flex;align-items:center;gap:12px;flex-wrap:wrap">' +
skewDisplay +
renderSkewBadge(cs.severity, cs.medianSkewSec) +
(cs.calibrated ? ' <span style="font-size:10px;color:var(--text-muted)" title="Observer-calibrated">✓ calibrated</span>' : '') +
'</div>' +
driftHtml +
(sparkHtml ? '<div class="skew-sparkline-wrap" style="margin-top:8px">' + sparkHtml + '<div style="font-size:10px;color:var(--text-muted)">Skew over time (' + (cs.samples || []).length + ' samples)</div></div>' : '');
} catch (e) {
// Non-fatal — section stays hidden
}
})();
// #690 — Clock Skew detail section (full-screen view)
loadClockSkewInto(document.getElementById('node-clock-skew'), n.public_key);
// Affinity debug panel — show if debugAffinity is enabled
(function loadAffinityDebug() {
@@ -810,7 +808,44 @@
let _themeRefreshHandler = null;
let _allNodes = null; // cached full node list
let _fleetSkew = null; // cached clock skew map: pubkey → {severity, medianSkewSec, ...}
let _fleetSkew = null; // cached clock skew map: pubkey → {severity, recentMedianSkewSec, medianSkewSec, ...}
/**
* Fetch per-node clock skew and render into the given container element.
* Shared between the full-screen detail page and the side panel (#813, #690).
* No-op if the container is missing, the API errors, or the response lacks severity.
*/
async function loadClockSkewInto(container, pubkey) {
if (!container) return;
try {
var cs = await api('/nodes/' + encodeURIComponent(pubkey) + '/clock-skew', { ttl: 30000 });
if (!cs || !cs.severity) return;
container.style.display = '';
var driftHtml = cs.driftPerDaySec ? '<div style="font-size:12px;color:var(--text-muted);margin-top:2px">Drift: ' + formatDrift(cs.driftPerDaySec) + '</div>' : '';
var sparkHtml = renderSkewSparkline(cs.samples, 200, 32);
var skewVal = window.currentSkewValue(cs);
var skewDisplay = cs.severity === 'no_clock'
? '<span style="font-size:18px;font-weight:700;color:var(--text-muted)">No Clock</span>'
: '<span style="font-size:18px;font-weight:700;font-family:var(--mono)">' + formatSkew(skewVal) + '</span>';
var bimodalWarning = '';
if (cs.severity === 'bimodal_clock') {
var totalRecent = cs.recentSampleCount || 0;
bimodalWarning = '<div style="font-size:12px;color:var(--status-amber-text);margin-top:4px">⚠️ ' + (cs.recentBadSampleCount || '?') + ' of last ' + (totalRecent || '?') + ' adverts had nonsense timestamps (likely RTC reset)</div>';
}
container.innerHTML =
'<h4 style="margin:0 0 6px">⏰ Clock Skew</h4>' +
'<div style="display:flex;align-items:center;gap:12px;flex-wrap:wrap">' +
skewDisplay +
renderSkewBadge(cs.severity, skewVal, cs) +
(cs.calibrated ? ' <span style="font-size:10px;color:var(--text-muted)" title="Observer-calibrated">✓ calibrated</span>' : '') +
'</div>' +
driftHtml +
(sparkHtml ? '<div class="skew-sparkline-wrap" style="margin-top:8px">' + sparkHtml + '<div style="font-size:10px;color:var(--text-muted)">Skew over time (' + (cs.samples || []).length + ' samples)</div></div>' : '') +
bimodalWarning;
} catch (e) {
// Non-fatal — section stays hidden
}
}
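The detail renderer calls `window.currentSkewValue(cs)` to pick which skew figure to display; the helper itself is defined elsewhere in the bundle. A hypothetical reconstruction of its shape, assuming (as the updated `_fleetSkew` comment suggests) it prefers the recent-window median when the API provides one and falls back to the all-time median — the field names here are assumptions, not confirmed API:

```javascript
// Hypothetical sketch of currentSkewValue: prefer the recent-window
// median skew when present, otherwise fall back to the all-time median.
// Field names (recentMedianSkewSec, medianSkewSec) follow the _fleetSkew
// cache comment; treat them as assumptions.
function currentSkewValue(cs) {
  if (!cs) return null;
  if (typeof cs.recentMedianSkewSec === 'number') return cs.recentMedianSkewSec;
  if (typeof cs.medianSkewSec === 'number') return cs.medianSkewSec;
  return null;
}
```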
/** Fetch fleet clock skew once, return map keyed by pubkey */
async function getFleetSkew() {
@@ -867,8 +902,7 @@
let filtered = _allNodes;
if (activeTab !== 'all') filtered = filtered.filter(n => (n.role || '').toLowerCase() === activeTab);
if (search) {
const q = search.toLowerCase();
filtered = filtered.filter(n => (n.name || '').toLowerCase().includes(q) || (n.public_key || '').toLowerCase().includes(q));
filtered = filtered.filter(n => window._nodesMatchesSearch(n, search));
}
if (lastHeard) {
const ms = { '1h': 3600000, '2h': 7200000, '6h': 21600000, '12h': 43200000, '24h': 86400000, '48h': 172800000, '3d': 259200000, '7d': 604800000, '14d': 1209600000, '30d': 2592000000 }[lastHeard];
@@ -1039,24 +1073,13 @@
// #630: Close button for node detail panel (important for mobile full-screen overlay)
document.getElementById('nodesRight').addEventListener('click', function(e) {
// #778: Details/Analytics links don't navigate because replaceState
// already set the hash to #/nodes/PUBKEY, so clicking <a href="#/nodes/PUBKEY">
// is a same-hash no-op. For the detail link (same page), call init()
// directly — faster than a full router teardown/rebuild cycle.
// For analytics (different page), force hashchange via replaceState + assign.
// #778/#856: Analytics link — force hashchange via replaceState + assign.
// (Details button is handled separately via .node-detail-btn click listener)
var link = e.target.closest('a.btn-primary[href^="#/nodes/"]');
if (link) {
e.preventDefault();
var href = link.getAttribute('href');
if (href.indexOf('/analytics') === -1) {
// Detail link — re-init with the pubkey directly;
// destroy() first to clean up WS handlers, maps, listeners
destroy();
var pubkey = href.replace('#/nodes/', '').split('/')[0];
var appEl = document.getElementById('app');
init(appEl, decodeURIComponent(pubkey));
history.replaceState(null, '', href);
} else {
if (href.indexOf('/analytics') !== -1) {
// Analytics link — different page, force hashchange via replaceState + assign
history.replaceState(null, '', '#/');
location.hash = href.substring(1);
@@ -1108,7 +1131,7 @@
const status = getNodeStatus(n.role || 'companion', lastSeenTime ? new Date(lastSeenTime).getTime() : 0);
const lastSeenClass = status === 'active' ? 'last-seen-active' : 'last-seen-stale';
const cs = _fleetSkew && _fleetSkew[n.public_key];
const skewBadgeHtml = cs && cs.severity && cs.severity !== 'ok' ? renderSkewBadge(cs.severity, cs.medianSkewSec) : '';
const skewBadgeHtml = cs && cs.severity && cs.severity !== 'ok' ? renderSkewBadge(cs.severity, window.currentSkewValue(cs), cs) : '';
return `<tr data-key="${n.public_key}" data-action="select" data-value="${n.public_key}" tabindex="0" role="row" class="${selectedKey === n.public_key ? 'selected' : ''}${isClaimed ? ' claimed-row' : ''}">
<td>${favStar(n.public_key, 'node-fav')}${isClaimed ? '<span class="claimed-badge" title="My Mesh">★</span> ' : ''}<strong>${n.name || '(unnamed)'}</strong>${dupNameBadge(n.name, n.public_key, dupMap)}${skewBadgeHtml}</td>
<td class="mono col-pubkey">${truncate(n.public_key, 16)}</td>
@@ -1121,6 +1144,19 @@
makeColumnsResizable('#nodesTable', 'meshcore-nodes-col-widths');
}
/**
* Navigate to the full-screen node view for `pubkey` from anywhere within
* the nodes module. A single source of truth for navigation that works
* regardless of the current hash state (assigning the hash alone is a
* no-op when it is already the target).
*/
function navigateToNode(pubkey) {
destroy();
var appEl = document.getElementById('app');
history.replaceState(null, '', '#/nodes/' + encodeURIComponent(pubkey));
init(appEl, pubkey);
}
async function selectNode(pubkey) {
// On mobile, navigate to full-screen node view
if (window.innerWidth <= 640) {
@@ -1167,7 +1203,7 @@
<div class="node-detail">
<div class="node-detail-name">${escapeHtml(n.name || '(unnamed)')}${dupBadge}</div>
<div class="node-detail-role">${renderNodeBadges(n, roleColor)}
<a href="#/nodes/${encodeURIComponent(n.public_key)}" class="btn-primary" style="display:inline-block;text-decoration:none;font-size:11px;padding:2px 8px;margin-left:8px">🔍 Details</a>
<button class="btn-primary node-detail-btn" data-pubkey="${encodeURIComponent(n.public_key)}" aria-label="View details for ${escapeHtml(n.name || n.public_key)}" style="font-size:11px;padding:2px 8px;margin-left:8px;cursor:pointer">🔍 Details</button>
<a href="#/nodes/${encodeURIComponent(n.public_key)}/analytics" class="btn-primary" style="display:inline-block;margin-left:4px;text-decoration:none;font-size:11px;padding:2px 8px">📊 Analytics</a>
</div>
${renderStatusExplanation(n)}
@@ -1194,6 +1230,8 @@
</dl>
</div>
<div class="node-detail-section skew-detail-section" id="node-clock-skew" style="display:none"></div>
${observers.length ? `<div class="node-detail-section">
${(() => { const regions = [...new Set(observers.map(o => o.iata).filter(Boolean))]; return regions.length ? `<div style="margin-bottom:6px;font-size:12px"><strong>Regions:</strong> ${regions.join(', ')}</div>` : ''; })()}
<h4>Heard By (${observers.length} observer${observers.length > 1 ? 's' : ''})</h4>
@@ -1216,9 +1254,10 @@
</div>
<div class="node-detail-section">
<h4>Recent Packets (${adverts.length})</h4>
${(() => { const validPackets = adverts.filter(a => a.hash && a.timestamp); return `
<h4>Recent Packets (${validPackets.length})</h4>
<div id="advertTimeline">
${adverts.length ? adverts.map(a => {
${validPackets.length ? validPackets.map(a => {
let decoded;
try { decoded = JSON.parse(a.decoded_json); } catch {}
const pType = PAYLOAD_TYPES[a.payload_type] || 'Packet';
@@ -1237,6 +1276,7 @@
</div>`;
}).join('') : '<div class="text-muted" style="padding:8px">No recent packets</div>'}
</div>
`; })()}
</div>
</div>`;
@@ -1280,6 +1320,14 @@
} catch {}
}
// Wire "Details" button via the unified navigateToNode helper
var detailBtn = panel.querySelector('.node-detail-btn');
if (detailBtn) {
detailBtn.addEventListener('click', function() {
navigateToNode(decodeURIComponent(detailBtn.getAttribute('data-pubkey')));
});
}
// Fetch neighbors for this node (condensed panel — top 5)
fetchAndRenderNeighbors(n.public_key, 'panelNeighborsContent', {
limit: 5,
@@ -1287,6 +1335,10 @@
viewAllPubkey: n.public_key
});
// #813 — Clock Skew section in side panel (mirrors full-screen view)
loadClockSkewInto(document.getElementById('node-clock-skew'), n.public_key);
// Fetch paths through this node
api('/nodes/' + encodeURIComponent(n.public_key) + '/paths', { ttl: CLIENT_TTL.nodeDetail }).then(pathData => {
const el = document.getElementById('pathsContent');
@@ -1385,4 +1437,14 @@
window._nodesRenderNodeTimestampText = renderNodeTimestampText;
window._nodesGetStatusInfo = getStatusInfo;
window._nodesGetStatusTooltip = getStatusTooltip;
// #862: Expose search filter logic for testing
window._nodesMatchesSearch = function(node, query) {
if (!query) return true;
var q = query.toLowerCase();
var isHex = /^[0-9a-f]+$/i.test(q);
if ((node.name || '').toLowerCase().includes(q)) return true;
if (isHex && (node.public_key || '').toLowerCase().startsWith(q)) return true;
return false;
};
})();
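The exposed search predicate matches either a case-insensitive substring of the node name or a prefix of the pubkey, with pubkey matching gated on the query being pure hex so ordinary words never match against keys. A standalone copy of the #862 logic, for illustration:

```javascript
// Standalone copy of the #862 search predicate: name matches are
// substring (case-insensitive); pubkey matches are prefix-only and
// require the query to be pure hex.
function matchesSearch(node, query) {
  if (!query) return true;
  var q = query.toLowerCase();
  var isHex = /^[0-9a-f]+$/i.test(q);
  if ((node.name || '').toLowerCase().includes(q)) return true;
  if (isHex && (node.public_key || '').toLowerCase().startsWith(q)) return true;
  return false;
}
```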
+8
@@ -150,6 +150,14 @@
<div class="stat-label">First Seen</div>
<div class="stat-value" style="font-size:0.85em">${obs.first_seen ? new Date(obs.first_seen).toLocaleDateString() : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Status Update</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_seen ? timeAgo(obs.last_seen) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_seen).toLocaleString() + '</span>' : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Packet Observation</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_packet_at ? timeAgo(obs.last_packet_at) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_packet_at).toLocaleString() + '</span>' : '<span style="color:var(--text-muted)">never</span>'}</div>
</div>
</div>
<div class="mono" style="font-size:0.75em;color:var(--text-muted);margin-bottom:20px;word-break:break-all">
ID: ${obs.id}
+13 -1
@@ -75,6 +75,17 @@
return { cls: 'health-red', label: 'Offline' };
}
function packetBadge(o) {
if (!o.last_packet_at) return '<span title="No packets ever observed">📡⚠ never</span>';
const pktAgo = Date.now() - new Date(o.last_packet_at).getTime();
const statusAgo = o.last_seen ? Date.now() - new Date(o.last_seen).getTime() : Infinity;
const gap = pktAgo - statusAgo;
if (gap > 600000) {
return `<span title="Last packet ${timeAgo(o.last_packet_at)} — status is newer by ${Math.round(gap/60000)}min. Observer may be alive but not forwarding packets.">📡⚠ ${timeAgo(o.last_packet_at)}</span>`;
}
return timeAgo(o.last_packet_at);
}
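The lag heuristic compares how stale the last packet is against how stale the last status report is; the warning fires only when packets trail status by more than ten minutes. When there is no status timestamp at all, `statusAgo` is `Infinity`, so the gap is `-Infinity` and packet age alone never triggers the warning. The decision in isolation (same thresholds as the code above):

```javascript
// True when the observer's packets lag its status reports by more than
// 10 minutes — the "alive but not forwarding" warning condition.
// No packets ever observed is always a warning; no status timestamp
// makes statusAgo Infinity, so the gap check can never fire.
function packetsLagStatus(lastPacketAt, lastSeen, now) {
  if (!lastPacketAt) return true;
  const pktAgo = now - new Date(lastPacketAt).getTime();
  const statusAgo = lastSeen ? now - new Date(lastSeen).getTime() : Infinity;
  return pktAgo - statusAgo > 600000; // 10 min in ms
}
```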
function uptimeStr(firstSeen) {
if (!firstSeen) return '—';
const ms = Date.now() - new Date(firstSeen).getTime();
@@ -123,7 +134,7 @@
<div class="obs-table-scroll"><table class="data-table obs-table" id="obsTable">
<caption class="sr-only">Observer status and statistics</caption>
<thead><tr>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Seen</th>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Status</th><th scope="col">Last Packet</th>
<th scope="col">Packets</th><th scope="col">Packets/Hour</th><th scope="col">Uptime</th>
</tr></thead>
<tbody>${filtered.map(o => {
@@ -134,6 +145,7 @@
<td class="mono">${o.name || o.id}</td>
<td>${o.iata ? `<span class="badge-region">${o.iata}</span>` : '—'}</td>
<td>${timeAgo(o.last_seen)}</td>
<td>${packetBadge(o)}</td>
<td>${(o.packet_count || 0).toLocaleString()}</td>
<td>${sparkBar(o.packetsLastHour || 0, maxPktsHr)}</td>
<td>${uptimeStr(o.first_seen)}</td>
+209 -34
@@ -48,6 +48,7 @@
if (filters.hash) parts.push('hash=' + encodeURIComponent(filters.hash));
if (filters.node) parts.push('node=' + encodeURIComponent(filters.node));
if (filters.observer) parts.push('observer=' + encodeURIComponent(filters.observer));
if (filters.channel) parts.push('channel=' + encodeURIComponent(filters.channel));
if (filters._filterExpr) parts.push('filter=' + encodeURIComponent(filters._filterExpr));
return parts.length ? '?' + parts.join('&') : '';
}
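The query builder assembles only the filters that are present, URL-encoding each value, and yields an empty string when nothing is set so callers can append it unconditionally. A standalone copy for illustration:

```javascript
// Filter object -> query string: each present filter is URL-encoded and
// joined with '&'; an empty filter set produces '' rather than '?'.
function buildQuery(filters) {
  const parts = [];
  if (filters.hash) parts.push('hash=' + encodeURIComponent(filters.hash));
  if (filters.node) parts.push('node=' + encodeURIComponent(filters.node));
  if (filters.observer) parts.push('observer=' + encodeURIComponent(filters.observer));
  if (filters.channel) parts.push('channel=' + encodeURIComponent(filters.channel));
  if (filters._filterExpr) parts.push('filter=' + encodeURIComponent(filters._filterExpr));
  return parts.length ? '?' + parts.join('&') : '';
}
```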
@@ -352,6 +353,8 @@
if (_urlNode) { filters.node = _urlNode; filters.nodeName = _urlNode.slice(0, 8); }
var _urlObserver = _initUrlParams.get('observer');
if (_urlObserver) filters.observer = _urlObserver;
var _urlChannel = _initUrlParams.get('channel');
if (_urlChannel) filters.channel = _urlChannel;
var _urlFilterExpr = _initUrlParams.get('filter');
if (_urlFilterExpr) filters._filterExpr = _urlFilterExpr;
@@ -384,9 +387,9 @@
const obs = data.observations.find(o => String(o.id) === String(obsTarget));
if (obs) {
expandedHashes.add(h);
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, timestamp: obs.timestamp, first_seen: obs.timestamp};
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, direction: obs.direction, timestamp: obs.timestamp, first_seen: obs.timestamp};
clearParsedCache(obsPacket);
selectPacket(obs.id, h, {packet: obsPacket, breakdown: data.breakdown, observations: data.observations}, obs.id);
selectPacket(obs.id, h, {packet: obsPacket, observations: data.observations}, obs.id);
} else {
selectPacket(data.packet.id, h, data);
}
@@ -516,7 +519,7 @@
if (p.decoded_json) existing.decoded_json = p.decoded_json;
// Update expanded children if this group is expanded
if (expandedHashes.has(h) && existing._children) {
existing._children.unshift(p);
existing._children.unshift(clearParsedCache({...p, _isObservation: true}));
if (existing._children.length > 200) existing._children.length = 200;
sortGroupChildren(existing);
// Invalidate row counts — child count changed, so virtual scroll
@@ -622,6 +625,7 @@
if (filters.hash) params.set('hash', filters.hash);
if (filters.node) params.set('node', filters.node);
if (filters.observer) params.set('observer', filters.observer);
if (filters.channel) params.set('channel', filters.channel);
if (groupByHash) {
params.set('groupByHash', 'true');
} else {
@@ -679,10 +683,14 @@
// Restore expanded group children (parallel fetch, Map lookup)
if (groupByHash && expandedHashes.size > 0) {
const expandedArr = [...expandedHashes];
// Fetch the full packet detail (which includes per-observation rows) for each expanded hash.
// Previously this used `/packets?hash=X&limit=20` which returned ONE aggregate row, causing
// every "child" row in the table to carry the parent packet.id instead of unique observation
// ids — so clicking any child pointed the side pane at the same aggregate. See #866.
const results = await Promise.all(expandedArr.map(hash => {
const group = hashIndex.get(hash);
if (!group) return { hash, group: null, data: null };
return api(`/packets?hash=${hash}&limit=20`)
return api(`/packets/${hash}`)
.then(data => ({ hash, group, data }))
.catch(() => ({ hash, group, data: null }));
}));
@@ -690,7 +698,15 @@
if (!group) {
expandedHashes.delete(hash);
} else if (data) {
group._children = data.packets || [];
const pkt = data.packet || group;
// Build per-observation children. Spread (pkt, obs) so obs-level fields
// (id, observer_id/name, path_json, snr/rssi, timestamp, raw_hex) override
// the aggregate. Each child's `id` is the observation id (unique per observer).
const obs = data.observations || [];
group._children = obs.length
? obs.map(o => clearParsedCache({...pkt, ...o, _isObservation: true}))
: [pkt];
group._fetchedData = { packet: pkt, observations: obs };
sortGroupChildren(group);
}
}
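The child construction relies on object-spread precedence: later spreads win, so observation-level fields (the unique observation `id`, per-observer `snr`/`rssi`/`timestamp`) overwrite the aggregate packet's, while everything the observation lacks (hash, payload type) is inherited from the parent. A reduced version of that mapping:

```javascript
// Later spreads override earlier ones, so each child row carries the
// observation's unique id and radio metrics while inheriting aggregate
// fields from the parent packet; no observations falls back to the
// aggregate row itself (#866).
function buildChildren(pkt, observations) {
  return observations.length
    ? observations.map(o => ({ ...pkt, ...o, _isObservation: true }))
    : [pkt];
}
```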
@@ -750,6 +766,11 @@
<button class="multi-select-trigger" id="typeTrigger" title="Filter by packet type">All Types </button>
<div class="multi-select-menu" id="typeMenu"></div>
</div>
<div class="filter-group" style="display:inline-flex;align-items:center;gap:4px">
<select id="fChannel" class="filter-select" aria-label="Filter by channel" title="Filter Channel Messages (GRP_TXT) by channel">
<option value="">All Channels</option>
</select>
</div>
</div>
<div class="filter-group">
<button class="btn ${groupByHash ? 'active' : ''}" id="fGroup" title="Collapse duplicate observations of the same packet into expandable groups">Group by Hash</button>
@@ -938,6 +959,63 @@
renderTableRows();
});
// --- Channel filter (#812) ---
// Server-side filter: /api/packets?channel=<hash>. Triggers loadPackets()
// (not just renderTableRows) so the filter applies before pagination.
const channelSel = document.getElementById('fChannel');
if (channelSel) {
if (filters.channel) {
// Pre-seed an option so the current filter shows as selected even
// before the channels list arrives. Replaced when populateChannels resolves.
const opt = document.createElement('option');
opt.value = filters.channel;
opt.textContent = filters.channel;
opt.selected = true;
channelSel.appendChild(opt);
}
api('/channels').then(data => {
const channels = (data && data.channels) || [];
// Build options via DOM API: channel names are network-supplied
// and must NOT be interpolated into innerHTML (XSS, #812).
// Sort alphabetically (case-insensitive) for predictable picker order;
// the API returns last-activity order which is unstable for a dropdown.
const sorted = channels.slice().sort((a, b) => {
const an = (a.name || a.hash || '').toLowerCase();
const bn = (b.name || b.hash || '').toLowerCase();
return an < bn ? -1 : an > bn ? 1 : 0;
});
channelSel.textContent = '';
const allOpt = document.createElement('option');
allOpt.value = '';
allOpt.textContent = 'All Channels';
channelSel.appendChild(allOpt);
let matched = false;
for (const ch of sorted) {
const v = ch.hash || ch.name || '';
if (!v) continue;
const opt = document.createElement('option');
opt.value = v;
opt.textContent = ch.name || v;
if (v === filters.channel) { opt.selected = true; matched = true; }
channelSel.appendChild(opt);
}
// If current filter isn't in the list (encrypted hash, stale, or
// race with cache), keep it as a selected option so the UI reflects state.
if (filters.channel && !matched) {
const opt = document.createElement('option');
opt.value = filters.channel;
opt.textContent = filters.channel;
opt.selected = true;
channelSel.appendChild(opt);
}
}).catch(() => {});
channelSel.addEventListener('change', (e) => {
filters.channel = e.target.value || undefined;
updatePacketsUrl();
loadPackets();
});
}
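The alphabetical ordering above compares lowercased `name || hash` keys with a plain ternary rather than `localeCompare`, which keeps the order locale-independent and therefore stable across browsers. The comparator in isolation:

```javascript
// Case-insensitive, locale-independent channel sort: key on the
// channel's name, falling back to its hash for unnamed channels.
function channelComparator(a, b) {
  const an = (a.name || a.hash || '').toLowerCase();
  const bn = (b.name || b.hash || '').toLowerCase();
  return an < bn ? -1 : an > bn ? 1 : 0;
}
```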
// Close multi-select menus on outside click
bindDocumentHandler('menu', 'click', (e) => {
const obsWrap = document.getElementById('observerFilterWrap');
@@ -1099,7 +1177,7 @@
const nodes = data.nodes || [];
if (nodes.length === 0) { fNodeDrop.classList.add('hidden'); fNode.setAttribute('aria-expanded', 'false'); return; }
fNodeDrop.innerHTML = nodes.map((n, i) =>
`<div class="node-filter-option" id="fNodeOpt-${i}" role="option" data-key="${n.public_key}" data-name="${escapeHtml(n.name || n.public_key.slice(0,8))}">${escapeHtml(n.name || n.public_key.slice(0,8))} <span style="color:var(--muted);font-size:0.8em">${n.public_key.slice(0,8)}</span></div>`
`<div class="node-filter-option" id="fNodeOpt-${i}" role="option" data-key="${n.public_key}" data-name="${escapeHtml(n.name || n.public_key.slice(0,8))}">${escapeHtml(n.name || n.public_key.slice(0,8))} <span style="color:var(--text-muted);font-size:0.8em">${n.public_key.slice(0,8)}</span></div>`
).join('');
fNodeDrop.classList.remove('hidden');
fNode.setAttribute('aria-expanded', 'true');
@@ -1180,9 +1258,9 @@
const child = group?._children?.find(c => String(c.id) === String(value));
if (child) {
const parentData = group._fetchedData;
-const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, timestamp: child.timestamp, first_seen: child.timestamp} : child;
+const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, direction: child.direction, timestamp: child.timestamp, first_seen: child.timestamp} : child;
if (parentData) { clearParsedCache(obsPacket); }
-selectPacket(child.id, parentHash, {packet: obsPacket, breakdown: parentData?.breakdown, observations: parentData?.observations}, child.id);
+selectPacket(child.id, parentHash, {packet: obsPacket, observations: parentData?.observations}, child.id);
}
}
else if (action === 'select-hash') pktSelectHash(value);
@@ -1731,19 +1809,56 @@
panel.innerHTML = isMobileNow ? '' : '<div class="panel-resize-handle" id="pktResizeHandle"></div>' + PANEL_CLOSE_HTML;
const content = document.createElement('div');
panel.appendChild(content);
-await renderDetail(content, data);
+await renderDetail(content, data, selectedObservationId);
if (!isMobileNow) initPanelResize();
} catch (e) {
panel.innerHTML = `<div class="text-muted">Error: ${e.message}</div>`;
}
}
-async function renderDetail(panel, data) {
+async function renderDetail(panel, data, chosenObsId) {
const pkt = data.packet;
-const breakdown = data.breakdown || {};
-const ranges = breakdown.ranges || [];
-const decoded = getParsedDecoded(pkt) || {};
-const pathHops = getParsedPath(pkt) || [];
+const observations = data.observations || [];
// Per-observation rendering (issue #849):
// When opened from a packet row (no specific observer), default to first observation.
// When opened from an observation child row, use that observation.
// Clicking a different observation row in the detail re-renders with that observation.
let currentObs = null;
const targetObsId = chosenObsId || selectedObservationId;
if (targetObsId && observations.length) {
currentObs = observations.find(o => String(o.id) === String(targetObsId));
}
if (!currentObs && observations.length) {
currentObs = observations[0]; // fall back to first observation
}
// If we have a current observation, build pkt fields from it so summary is per-observation
const effectivePkt = currentObs ? clearParsedCache({...pkt, ...currentObs, _isObservation: true}) : pkt;
const decoded = getParsedDecoded(effectivePkt) || {};
const pathHops = getParsedPath(effectivePkt) || [];
// Compute breakdown ranges from the actually-rendered raw_hex (per-observation).
// Single source of truth — derived from the same bytes we display, so a
// post-#882 per-obs raw_hex with a different path length than the top-level
// packet's raw_hex still gets accurate byte highlights.
const obsRawHexForRanges = effectivePkt.raw_hex || pkt.raw_hex || '';
const ranges = obsRawHexForRanges
? computeBreakdownRanges(obsRawHexForRanges, pkt.route_type, pkt.payload_type)
: [];
// Cross-check: hop count from raw_hex path_len byte vs path_json length
const obsRawHex = effectivePkt.raw_hex || pkt.raw_hex || '';
let rawHopCount = null;
if (obsRawHex.length >= 4) {
// path_len byte position depends on route type
const plOff = getPathLenOffset(pkt.route_type);
const plByte = parseInt(obsRawHex.slice(plOff * 2, plOff * 2 + 2), 16);
if (!isNaN(plByte)) rawHopCount = plByte & 0x3F;
}
if (rawHopCount != null && pathHops.length !== rawHopCount) {
console.warn(`[CoreScope] Hop count inconsistency for packet ${pkt.hash}: path_json has ${pathHops.length} hops but raw_hex path_len has ${rawHopCount}. UI shows path_json.`);
}
// Resolve sender GPS — from packet directly, or from known node in DB
let senderLat = decoded.lat != null ? decoded.lat : (decoded.latitude || null);
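The path_len cross-check above leans on the same bit layout used elsewhere in this diff: the low six bits of the byte carry hash_count (`plByte & 0x3F`) and the top two bits encode hash_size minus one (`(rawPathByte >> 6) + 1`), with hash_count 0 meaning a direct advert. A minimal standalone sketch of that decode (the helper name is illustrative, not from the codebase):

```javascript
// Sketch: decoding a path_len byte from its hex pair, per the bit layout
// used in the cross-check above. Assumes low 6 bits = hash_count and the
// top 2 bits = hash_size - 1; returns null on invalid hex.
function decodePathLenByte(hexPair) {
  const b = parseInt(hexPair, 16);
  if (isNaN(b)) return null;
  return {
    hashCount: b & 0x3f,    // number of hops in the path (0 = direct)
    hashSize: (b >> 6) + 1, // bytes per hop hash (1..4)
  };
}

// 0x83 = 0b10_000011: hash_size 3, hash_count 3
console.log(decodePathLenByte('83')); // → { hashCount: 3, hashSize: 3 }
```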
@@ -1787,15 +1902,16 @@
}
// Parse hash size from path byte
-const rawPathByte = pkt.raw_hex ? parseInt(pkt.raw_hex.slice(2, 4), 16) : NaN;
+const plOff = getPathLenOffset(pkt.route_type);
+const rawPathByte = pkt.raw_hex ? parseInt(pkt.raw_hex.slice(plOff * 2, plOff * 2 + 2), 16) : NaN;
const hashSize = (isNaN(rawPathByte) || (rawPathByte & 0x3F) === 0) ? null : ((rawPathByte >> 6) + 1);
-const size = pkt.raw_hex ? Math.floor(pkt.raw_hex.length / 2) : 0;
+const size = effectivePkt.raw_hex ? Math.floor(effectivePkt.raw_hex.length / 2) : (pkt.raw_hex ? Math.floor(pkt.raw_hex.length / 2) : 0);
const typeName = payloadTypeName(pkt.payload_type);
-const snr = pkt.snr ?? decoded.SNR ?? decoded.snr ?? null;
-const rssi = pkt.rssi ?? decoded.RSSI ?? decoded.rssi ?? null;
-const hasRawHex = !!pkt.raw_hex;
+const snr = effectivePkt.snr ?? decoded.SNR ?? decoded.snr ?? null;
+const rssi = effectivePkt.rssi ?? decoded.RSSI ?? decoded.rssi ?? null;
+const hasRawHex = !!(effectivePkt.raw_hex || pkt.raw_hex);
// Build message preview
let messageHtml = '';
@@ -1806,17 +1922,16 @@
const meta = [chLabel, hopLabel, snrLabel].filter(Boolean).join(' · ');
messageHtml = `<div class="detail-message" style="padding:12px;margin:8px 0;background:var(--card-bg);border-radius:8px;border-left:3px solid var(--accent)">
<div style="font-size:1.1em">${escapeHtml(decoded.text)}</div>
-${meta ? `<div style="font-size:0.85em;color:var(--muted);margin-top:4px">${meta}</div>` : ''}
+${meta ? `<div style="font-size:0.85em;color:var(--text-muted);margin-top:4px">${meta}</div>` : ''}
</div>`;
} else if (decoded.type === 'GRP_TXT' && decoded.channelHash != null) {
const hashHex = decoded.channelHashHex || decoded.channelHash.toString(16).padStart(2, '0').toUpperCase();
const statusLabel = decoded.decryptionStatus === 'no_key' ? 'no key' : 'decryption failed';
messageHtml = `<div class="detail-message" style="padding:12px;margin:8px 0;background:var(--card-bg);border-radius:8px;border-left:3px solid var(--warning, #f0ad4e)">
-<div style="font-size:1.1em">🔒 Channel Hash: 0x${hashHex} <span style="color:var(--muted)">(${statusLabel})</span></div>
+<div style="font-size:1.1em">🔒 Channel Hash: 0x${hashHex} <span style="color:var(--text-muted)">(${statusLabel})</span></div>
</div>`;
}
-const observations = data.observations || [];
const obsCount = data.observation_count || observations.length || 1;
const uniqueObservers = new Set(observations.map(o => o.observer_id)).size;
@@ -1879,21 +1994,30 @@
? `<div class="anomaly-banner" style="background:var(--warning, #f0ad4e); color:#000; padding:8px 12px; border-radius:4px; margin-bottom:8px; font-weight:600;">⚠️ Anomaly: ${escapeHtml(decoded.anomaly)}</div>`
: '';
// Hop count display: use pathHops length (= effective observation's path_json).
// The raw_hex/path_json mismatch warning is logged above for diagnostics; the UI
// must stay self-consistent — top pill names and byte breakdown rows must agree.
const displayHopCount = pathHops.length;
const obsIndicator = currentObs && observations.length > 1
? `<span style="font-size:0.8em;color:var(--text-muted);margin-left:6px">(observation ${observations.indexOf(currentObs) + 1} of ${observations.length})</span>`
: '';
panel.innerHTML = `
${anomalyBanner}
<div class="detail-title">${hasRawHex ? `Packet Byte Breakdown (${size} bytes)` : typeName + ' Packet'}</div>
-<div class="detail-hash">${pkt.hash || 'Packet #' + pkt.id}</div>
+<div class="detail-hash">${pkt.hash || 'Packet #' + pkt.id}${obsIndicator}</div>
${messageHtml}
<dl class="detail-meta">
-<dt>Observer</dt><dd>${obsName(pkt.observer_id)}</dd>
+<dt>Observer</dt><dd>${obsName(effectivePkt.observer_id)}</dd>
<dt>Location</dt><dd>${locationHtml}</dd>
<dt>SNR / RSSI</dt><dd>${snr != null ? snr + ' dB' : '—'} / ${rssi != null ? rssi + ' dBm' : '—'}</dd>
<dt>Route Type</dt><dd>${routeTypeName(pkt.route_type)}</dd>
<dt>Payload Type</dt><dd><span class="badge badge-${payloadTypeColor(pkt.payload_type)}">${typeName}</span></dd>
${hashSize ? `<dt>Hash Size</dt><dd>${hashSize} byte${hashSize !== 1 ? 's' : ''}</dd>` : ''}
-<dt>Timestamp</dt><dd>${renderTimestampCell(pkt.timestamp)}</dd>
+<dt>Timestamp</dt><dd>${renderTimestampCell(effectivePkt.timestamp)}</dd>
<dt>Propagation</dt><dd>${propagationHtml}</dd>
-<dt>Path</dt><dd>${pathHops.length ? renderPath(pathHops, pkt.observer_id) : ''}</dd>
+<dt>Path</dt><dd>${displayHopCount > 0 ? `<span class="badge badge-info">${displayHopCount} hop${displayHopCount !== 1 ? 's' : ''}</span> ` + renderPath(pathHops, effectivePkt.observer_id) : ' (direct)'}</dd>
+${effectivePkt.direction ? `<dt>Direction</dt><dd>${escapeHtml(effectivePkt.direction)}</dd>` : ''}
</dl>
<div class="detail-actions">
<button class="copy-link-btn" data-packet-hash="${pkt.hash || ''}" data-packet-id="${pkt.id}" title="Copy link to this packet">🔗 Copy Link</button>
@@ -1903,11 +2027,59 @@
</div>
${hasRawHex ? `<div class="hex-legend">${buildHexLegend(ranges)}</div>
-<div class="hex-dump">${createColoredHexDump(pkt.raw_hex, ranges)}</div>` : ''}
+<div class="hex-dump">${createColoredHexDump(effectivePkt.raw_hex || pkt.raw_hex, ranges)}</div>` : ''}
-${hasRawHex ? buildFieldTable(pkt, decoded, pathHops, ranges) : buildDecodedTable(decoded)}
+${hasRawHex ? buildFieldTable(effectivePkt.raw_hex ? effectivePkt : pkt, decoded, pathHops, ranges) : buildDecodedTable(decoded)}
${observations.length > 1 ? `
<div class="detail-observations" style="margin-top:16px">
<div style="font-weight:600;margin-bottom:6px">Observations (${observations.length})</div>
<table class="detail-obs-table" style="width:100%;border-collapse:collapse;font-size:0.9em">
<thead><tr style="border-bottom:1px solid var(--border)">
<th style="padding:4px 6px;text-align:left">Observer</th>
<th style="padding:4px 6px;text-align:left">Hops</th>
<th style="padding:4px 6px;text-align:left">SNR</th>
<th style="padding:4px 6px;text-align:left">RSSI</th>
<th style="padding:4px 6px;text-align:left">Time</th>
</tr></thead>
<tbody>${observations.map(o => {
const oPath = getParsedPath(o);
const isCurrent = currentObs && String(o.id) === String(currentObs.id);
return `<tr class="detail-obs-row${isCurrent ? ' observation-current' : ''}" data-obs-id="${o.id}" style="cursor:pointer;${isCurrent ? 'background:var(--accent-bg, rgba(0,122,255,0.1))' : ''}" title="Click to view this observation">
<td style="padding:4px 6px">${obsName(o.observer_id)}</td>
<td style="padding:4px 6px">${oPath.length}</td>
<td style="padding:4px 6px">${o.snr != null ? o.snr + ' dB' : '—'}</td>
<td style="padding:4px 6px">${o.rssi != null ? o.rssi + ' dBm' : '—'}</td>
<td style="padding:4px 6px">${renderTimestampCell(o.timestamp)}</td>
</tr>`;
}).join('')}</tbody>
</table>
</div>` : ''}
${observations.length > 1 ? (() => {
// Cross-observer aggregate (Option B): show longest observed path across all observers
const aggregatePath = getParsedPath(pkt) || [];
return `<div class="detail-aggregate" style="margin-top:12px;padding:10px;background:var(--card-bg);border-radius:6px;border:1px solid var(--border);font-size:0.9em">
<div style="font-weight:600;margin-bottom:4px;color:var(--text-muted)">Cross-observer aggregate</div>
<div>Longest observed path: ${aggregatePath.length ? `${aggregatePath.length} hops — ${renderPath(aggregatePath, pkt.observer_id)}` : '— (direct)'}</div>
<div style="font-size:0.8em;color:var(--text-muted);margin-top:2px">Longest path seen across all ${uniqueObservers} observer${uniqueObservers !== 1 ? 's' : ''}</div>
</div>`;
})() : ''}
`;
// Wire up observation row click handlers — re-render detail with clicked observation
panel.querySelectorAll('.detail-obs-row').forEach(row => {
row.addEventListener('click', () => {
const obsId = row.dataset.obsId;
selectedObservationId = obsId;
// Update URL hash to reflect selected observation (deep linking)
const pktHash = pkt.hash || pkt.id;
const obsParam = obsId ? `?obs=${obsId}` : '';
history.replaceState(null, '', `#/packets/${pktHash}${obsParam}`);
renderDetail(panel, data, obsId);
});
});
// Wire up copy link button
const copyLinkBtn = panel.querySelector('.copy-link-btn');
if (copyLinkBtn) {
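The observation row click handler above writes a `#/packets/<hash>?obs=<id>` fragment via `history.replaceState` for deep linking. A hedged sketch of the inverse operation, parsing that fragment back out on page load (the function and field names here are illustrative, not from the codebase):

```javascript
// Hypothetical counterpart to the replaceState call above: extract the
// packet hash and optional obs id from a "#/packets/<hash>?obs=<id>"
// URL fragment. Returns null for fragments on other routes.
function parsePacketDeepLink(fragment) {
  const m = /^#\/packets\/([^?]+)(?:\?obs=([^&]+))?$/.exec(fragment);
  if (!m) return null;
  return { packetHash: m[1], obsId: m[2] || null };
}

console.log(parsePacketDeepLink('#/packets/deadbeef?obs=42'));
console.log(parsePacketDeepLink('#/packets/deadbeef'));
```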
@@ -2015,7 +2187,7 @@
// Transport codes come BEFORE path length for transport routes (bytes 1-4)
let off = 1;
-if (pkt.route_type === 0 || pkt.route_type === 3) {
+if (isTransportRoute(pkt.route_type)) {
rows += sectionRow('Transport Codes', 'section-transport');
rows += fieldRow(off, 'Next Hop', buf.slice(off * 2, (off + 2) * 2), '');
rows += fieldRow(off + 2, 'Last Hop', buf.slice((off + 2) * 2, (off + 4) * 2), '');
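The refactor swaps the inline route-type check for `isTransportRoute`, and the detail panel now calls a `getPathLenOffset` helper that this diff does not show. A hedged sketch of plausible shapes for both, assuming the layout documented here (one flags byte, then Next Hop and Last Hop transport codes of 2 bytes each on transport routes, before the path-length byte):

```javascript
// Assumed shapes only: the real helpers live outside this diff.
function isTransportRoute(routeType) {
  // Pre-refactor inline check was: route_type === 0 || route_type === 3
  return routeType === 0 || routeType === 3;
}

function getPathLenOffset(routeType) {
  // Byte 0 is flags. Transport routes insert 4 bytes of transport codes
  // (Next Hop + Last Hop, 2 bytes each), pushing path_len from byte 1 to 5.
  return isTransportRoute(routeType) ? 5 : 1;
}
```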
@@ -2030,14 +2202,17 @@
rows += fieldRow(off, 'Path Length', '0x' + (buf.slice(off * 2, off * 2 + 2) || '??'), hashCountVal === 0 ? `hash_count=0 (direct advert)` : `hash_size=${hashSizeVal} byte${hashSizeVal !== 1 ? 's' : ''}, hash_count=${hashCountVal}`);
off += 1;
-// Path
+// Path — render hops from path_json (what this observation reported).
+// Byte offsets advance by hashSize * pathHops.length to match.
+const hashSize = isNaN(pathByte0) ? 1 : ((pathByte0 >> 6) + 1);
if (pathHops.length > 0) {
rows += sectionRow('Path (' + pathHops.length + ' hops)', 'section-path');
-const hashSize = isNaN(pathByte0) ? 1 : ((pathByte0 >> 6) + 1);
for (let i = 0; i < pathHops.length; i++) {
-const hopHtml = HopDisplay.renderHop(pathHops[i], hopNameCache[pathHops[i]]);
+const hopOff = off + i * hashSize;
+const hex = String(pathHops[i] || '').toUpperCase();
+const hopHtml = HopDisplay.renderHop(hex, hopNameCache[hex]);
const label = `Hop ${i}${hopHtml}`;
-rows += fieldRow(off + i * hashSize, label, pathHops[i], '');
+rows += fieldRow(hopOff, label, hex, '');
}
off += hashSize * pathHops.length;
}
@@ -2313,7 +2488,7 @@
renderTableRows();
return;
}
-// Single fetch — gets packet + observations + path + breakdown
+// Single fetch — gets packet + observations + path
try {
const data = await api(`/packets/${hash}`);
const pkt = data.packet;
+18 -3
@@ -401,12 +401,13 @@
warning: 'var(--status-yellow)',
critical: 'var(--status-orange)',
absurd: 'var(--status-purple)',
bimodal_clock: 'var(--status-amber)',
no_clock: 'var(--text-muted)'
};
var SKEW_SEVERITY_LABELS = {
-ok: 'OK', warning: 'Warning', critical: 'Critical', absurd: 'Absurd', no_clock: 'No Clock'
+ok: 'OK', warning: 'Warning', critical: 'Critical', absurd: 'Absurd', bimodal_clock: 'Bimodal', no_clock: 'No Clock'
};
-var SKEW_SEVERITY_ORDER = { no_clock: 0, absurd: 1, critical: 2, warning: 3, ok: 4 };
+var SKEW_SEVERITY_ORDER = { no_clock: 0, bimodal_clock: 1, absurd: 2, critical: 3, warning: 4, ok: 5 };
window.SKEW_SEVERITY_COLORS = SKEW_SEVERITY_COLORS;
window.SKEW_SEVERITY_LABELS = SKEW_SEVERITY_LABELS;
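`SKEW_SEVERITY_ORDER` ranks severities worst-first (lower rank is less healthy), so an ascending sort by rank surfaces the unhealthiest clocks at the top of a table. A runnable example using the new map, with the added `bimodal_clock` rank in play:

```javascript
// Severity ranking copied from the updated map above: lower = worse,
// so an ascending sort puts the worst clocks first.
const SKEW_SEVERITY_ORDER = { no_clock: 0, bimodal_clock: 1, absurd: 2, critical: 3, warning: 4, ok: 5 };

const severities = ['ok', 'bimodal_clock', 'warning', 'absurd'];
severities.sort((a, b) => SKEW_SEVERITY_ORDER[a] - SKEW_SEVERITY_ORDER[b]);
console.log(severities); // → [ 'bimodal_clock', 'absurd', 'warning', 'ok' ]
```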
@@ -429,13 +430,27 @@
return (secPerDay >= 0 ? '+' : '') + secPerDay.toFixed(1) + ' s/day';
};
/** Pick the skew value that drives current-health UI: prefer the
* recent-window median (#789, current health) over the all-time median
* (poisoned by historical bad samples). Falls back gracefully if the
* field isn't present (older API responses). */
window.currentSkewValue = function(cs) {
if (!cs) return null;
return cs.recentMedianSkewSec != null ? cs.recentMedianSkewSec : cs.medianSkewSec;
};
/** Render a clock skew badge HTML */
-window.renderSkewBadge = function(severity, skewSec) {
+window.renderSkewBadge = function(severity, skewSec, cs) {
if (!severity) return '';
var cls = 'skew-badge skew-badge--' + severity;
if (severity === 'no_clock') {
return '<span class="' + cls + '" title="Uninitialized RTC — no valid clock">🚫 No Clock</span>';
}
if (severity === 'bimodal_clock' && cs) {
var badPct = cs.goodFraction != null ? Math.round((1 - cs.goodFraction) * 100) : '?';
var label = '⏰ ' + window.formatSkew(skewSec);
return '<span class="' + cls + '" title="Clock skew: ' + window.formatSkew(skewSec) + ' (bimodal: ' + badPct + '% of recent adverts have nonsense timestamps)">' + label + '</span>';
}
var label = severity === 'ok' ? '⏰' : '⏰ ' + window.formatSkew(skewSec);
return '<span class="' + cls + '" title="Clock skew: ' + window.formatSkew(skewSec) + ' (' + (SKEW_SEVERITY_LABELS[severity] || severity) + ')">' + label + '</span>';
};
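The `currentSkewValue` helper added above can be exercised standalone to show the fallback behavior: the recent-window median wins when present, and older API responses without that field fall back to the all-time median.

```javascript
// Copy of the helper above, runnable outside the browser (no window stub):
// prefer the recent-window median over the all-time median, which can be
// poisoned by historical bad samples.
function currentSkewValue(cs) {
  if (!cs) return null;
  return cs.recentMedianSkewSec != null ? cs.recentMedianSkewSec : cs.medianSkewSec;
}

console.log(currentSkewValue({ medianSkewSec: 900, recentMedianSkewSec: 4 })); // → 4
console.log(currentSkewValue({ medianSkewSec: 900 }));                         // → 900
```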
+20 -3
@@ -13,6 +13,9 @@
--status-red: #ef4444;
--status-orange: #f97316;
--status-purple: #a855f7;
--status-amber: #f59e0b;
--status-amber-light: #fef3c7;
--status-amber-text: #92400e;
--role-observer: #8b5cf6;
--accent-hover: #6db3ff;
--text: #1a1a2e;
@@ -46,6 +49,9 @@
--status-red: #ef4444;
--status-orange: #f97316;
--status-purple: #a855f7;
--status-amber: #f59e0b;
--status-amber-light: #422006;
--status-amber-text: #fcd34d;
--surface-0: #0f0f23;
--surface-1: #1a1a2e;
--surface-2: #232340;
@@ -72,6 +78,9 @@
--status-red: #ef4444;
--status-orange: #f97316;
--status-purple: #a855f7;
--status-amber: #f59e0b;
--status-amber-light: #422006;
--status-amber-text: #fcd34d;
--surface-0: #0f0f23;
--surface-1: #1a1a2e;
--surface-2: #232340;
@@ -345,6 +354,9 @@ a:focus-visible, button:focus-visible, input:focus-visible, select:focus-visible
}
.detail-meta dt { color: var(--text-muted); font-size: 11px; text-transform: uppercase; letter-spacing: .3px; }
.detail-meta dd { font-weight: 500; margin-bottom: 4px; }
.observation-current { background: var(--accent-bg, rgba(0,122,255,0.1)); font-weight: 600; }
.detail-obs-row:hover { background: var(--hover-bg, rgba(255,255,255,0.05)); }
.detail-obs-table th { font-size: 0.8em; text-transform: uppercase; color: var(--text-muted); }
/* === Hex Dump === */
.hex-dump {
@@ -697,7 +709,9 @@ button.ch-item:hover .ch-remove-btn { opacity: 0.6; }
.advert-dot {
width: 10px; height: 10px; border-radius: 50%; flex-shrink: 0; margin-top: 4px;
}
-.advert-info { font-size: 12px; line-height: 1.5; }
+/* #829: explicit color so text stays readable when inherited color matches card-bg */
+.advert-info { font-size: 12px; line-height: 1.5; color: var(--text); }
.advert-info a { color: var(--accent); }
/* === Traces Page === */
.traces-page { padding: 16px; max-width: var(--trace-max-width, 95vw); margin: 0 auto; }
@@ -1423,7 +1437,9 @@ button.ch-item.ch-item-encrypted .ch-badge { filter: grayscale(0.6); }
.hop-conflict-name { font-weight: 600; flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.hop-conflict-dist { font-size: 11px; color: var(--text-muted); font-family: var(--mono); white-space: nowrap; }
.hop-conflict-pk { font-size: 10px; color: var(--text-muted); font-family: var(--mono); }
-.hop-unreliable { opacity: 0.5; text-decoration: line-through; }
+.hop-unreliable { opacity: 0.85; }
.hop-unreliable-btn { background: none; border: none; color: var(--status-yellow, #f59e0b); font-size: 13px;
cursor: help; vertical-align: middle; margin-left: 2px; padding: 0 2px; line-height: 1; }
.hop-global-fallback { border-bottom: 1px dashed var(--status-red); }
.hop-current { font-weight: 700 !important; color: var(--accent) !important; }
@@ -1540,7 +1556,7 @@ tr[data-hops]:hover { background: rgba(59,130,246,0.1); }
/* #20 — Observers table horizontal scroll on mobile */
.obs-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
-.obs-table-scroll .obs-table { min-width: 640px; }
+.obs-table-scroll .obs-table { min-width: 720px; }
/* #206 — Analytics/Compare tables scroll wrappers on mobile */
.analytics-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
@@ -2280,6 +2296,7 @@ th.sort-active { color: var(--accent, #60a5fa); }
.skew-badge--critical { background: var(--status-orange); color: #fff; }
.skew-badge--absurd { background: var(--status-purple); color: #fff; }
.skew-badge--no_clock { background: var(--text-muted); color: #fff; }
.skew-badge--bimodal_clock { background: var(--status-amber-light); color: var(--status-amber-text); border: 1px solid var(--status-amber); }
.skew-detail-section { padding: 10px 16px; margin-bottom: 8px; }
.skew-sparkline-wrap { margin-top: 6px; }
+45
@@ -0,0 +1,45 @@
# CoreScope QA artifacts
Project-specific assets for the [`qa-suite`](https://github.com/Kpa-clawbot/ai-sdlc/tree/master/skills/qa-suite) skill.
## Layout
```
qa/
├── README.md ← this file
├── plans/
│ └── <release>.md ← per-release test plans (one file per RC)
└── scripts/
└── api-contract-diff.sh ← CoreScope-tuned API contract diff
```
## How to run
```
qa staging # use the latest plans/v*-rc.md against staging
qa pr 806 # use plans/pr-806.md if it exists, else latest plans/v*-rc.md
qa v3.6.0-rc # use plans/v3.6.0-rc.md
```
The parent agent loads the qa-suite skill, which reads:
1. The plan file from `qa/plans/`
2. Bundled scripts from `qa/scripts/`
3. The reusable engine + qa-engineer persona from the skill itself
## Adding a new plan
For each release candidate, copy the latest `plans/v*-rc.md` to `plans/<new-tag>.md` and update:
- The commit-range header (`vN.M..master`)
- Any new sections for new features in the release
- The "Test data" section if new fixture types are needed
- The GO criteria (which sections are blockers)
## Adding a new script
Custom scripts go in `qa/scripts/` with `mode=auto: <script-name>` referenced from the plan. The qa-engineer subagent runs them with two args: `BASELINE_URL TARGET_URL`.
Authoring rules from the qa-suite skill:
- 4-way error classification: `curl-failed` / `parse-empty` / `shape-diff` / field-missing
- Distinguish HTTP errors from jq parse failures
- Don't silence stderr — script bugs must surface
- Exit code = number of failures
+108
@@ -0,0 +1,108 @@
# Plan: v3.6.0-rc
Targets the changes between v3.5.1 and v3.6.0 candidate (~34 commits).
## Test data
The qa-engineer should pick concrete test fixtures at run time and include them in the report:
- **Pivot node pubkey**: pick the top-result from `/api/nodes?limit=20&sort=advert_count` that has `role=repeater` AND a non-zero `totalPaths` from `/api/nodes/{pk}/paths`. Used for sections 5.1, 8.1, 8.2.
- **Multi-role pubkey** (section 8.6): pick a node whose pubkey appears in BOTH `/api/observers` and `/api/nodes?role=repeater`. If none → mark 8.6 `needs-human`.
- **Sample packet hash**: `/api/packets?limit=1` → `.packets[0].hash`. Used for sections 3.x.
- **Channel sample**: pick a channel name from `/api/channels` (if endpoint exists) or scrape `/#channels` page.
Record every fixture used in the final report so failures are reproducible.
## Sections
### 1. Memory & Load
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 1.1 | Container with **3 GB** limit starts on heaviest available DB | No OOM, steady state under limit. Note: 1 GB cap is unrealistic without `GOMEMLIMIT` and bounded cold-load — see #836 | #806/#836 | human |
| 1.2 | Hit `/debug/pprof/heap` after Load completes; run `pprof-snapshot.sh` | `unmarshalResolvedPath` absent from top-15 inuse_space; `Load()`-attributed inuse_space ≤ 250 MB on staging-sized DB (~1.5M obs); total heap < 1 GB | #806 | auto: pprof-snapshot.sh |
| 1.3 | Set tight `MaxLoadMemMB`, restart | Load stops gracefully at budget; server still serves `/api/stats` 200 | #790 | human |
| 1.4 | Watch `processRSSMB` (from `/api/stats`) vs procfs RSS over ingest+eviction cycles | `processRSSMB` tracks `cat /proc/$(pidof corescope)/status \| awk '/VmRSS/{print $2}'` (kB → MB) within ±20% across one full eviction cycle. Note: `storeDataMB` (formerly `trackedMB`) is the in-store packet byte estimate and is expected to be a **subset** of RSS, not equal to it. | #751, #832 | human |
| 1.5 | Run 30 min under live ingest | Sawtooth heap pattern (≥1 eviction-driven dip), not monotonic ramp | #806/#807 | human |
### 2. API contract
Run `scripts/api-contract-diff.sh BASELINE_URL TARGET_URL` once. Report the script's exit code; nonzero = failures.
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 2.1 | api-contract-diff baseline vs target | Exit code 0; all endpoints carry `resolved_path` where expected | #806 | auto: api-contract-diff.sh |
| 2.2 | WebSocket `/ws` carries `resolved_path` on broadcasts | Run JS hook in browser console: `(function(){let n=0,r=0; const W=WebSocket; window.WebSocket=function(...a){const s=new W(...a); s.addEventListener('message',e=>{n++; try{const m=JSON.parse(e.data); if(m && (m.resolved_path !== undefined \|\| (m.observations\|\|[]).some(o=>o.resolved_path!==undefined))) r++;}catch{}}); return s;}; window.__wsCount=()=>({n,r});})()` then navigate to `/`, wait 30s, eval `__wsCount()`; `r` must be ≥ 1 if `n` ≥ 1 | #806 | browser |
### 3. Decoder & hashing
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 3.1 | Recompute content hashes for sample of recent packets vs stored | All match (hash uses payload-type bits only) | #787 | human |
| 3.2 | Inspect a TRACE packet detail panel | path_json length matches path_sz from flags byte | #732 | browser |
| 3.3 | Check `hash_size` on transport-route packet AND zero-hop advert | Correct hash_size detected | #747 | browser |
| 3.4 | Field-table column offsets for transport-route packet | Snapshot of detail panel: each field row has nonzero `offset`/`length` cells AND offsets monotonically increase | #766 | browser |
| 3.5 | Corrupt advert ingest log check | Rejected, counted, no DB entry | #794 | human |
| 3.6 | Public channel packet rendering | No empty/garbled decode | #761 | browser |
### 4. Channels (#725)
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 4.1 | Channel list — full message history loads from DB | Past messages persist across reload | #726 | browser |
| 4.2 | Add custom channel via UI (then remove it as teardown) | Channel appears, encrypted msgs decrypt; teardown removes it cleanly. STAGING ONLY. | #733 | browser |
| 4.3 | PSK channel add + channel removal (already a self-teardown) | Both work, UI state correct after | #750 | browser |
| 4.4 | Deep link to encrypted channel without key | Lock message shows | #783 | browser |
| 4.5 | Undecryptable msgs hidden by default + toggle | Hidden default; toggle shows | #728 | browser |
| 4.6 | Add-channel button + hint + status feedback | All present | #760 | browser |
| 4.7 | Filter packets by channel | Functional: filter applies, packet count drops; performance: response time ≤ 500 ms for `/api/packets?channel=<name>&limit=100` (timed via `curl -w '%{time_total}'`) | #762/#763 | browser+auto |
### 5. Clock skew (#690)
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 5.1 | Node detail clock-skew badge + sparkline | Both render | #746/#752 | browser |
| 5.2 | Analytics fleet clock-skew page | Renders, epoch-0 filtered | #769 | browser |
| 5.3 | Outlier sample doesn't poison median | Sanity caps respected; severity uses `recentMedianSkewSec` (#789), not all-time `medianSkewSec` | #769/#789 | human |
| 5.4 | Roles page clock-skew indicator | Renders | #752 | browser |
### 6. Observers
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 6.1 | Observer with no packets in N days disappears after retention sweep | Removed | #764 | human |
| 6.2 | Analytics observer-graph (M1+M2) | Renders (`#observerGraph` element present at `public/analytics.js:2048-2051`) | #774 | browser |
### 7. Multi-byte hash adopters
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 7.1 | Hash Usage Matrix collision details for all hash sizes | Click cell → colliding pubkeys shown | #758 | browser |
| 7.2 | Multi-byte adopter table includes all node types | Repeaters, room servers, sensors all present | #767 | browser |
| 7.3 | Role column reflects multi-byte adoption + advert precedence | For 3 sample multi-byte adopter pubkeys (from #758 matrix), the Role column on `/#nodes` matches the role inferred from their latest advert flags via `/api/nodes/{pk}/health` | #767 | browser |
### 8. Frontend nav & deep linking
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 8.1 | Click node on map/list — URL hash updates, panel opens | Hash matches | #739 | browser |
| 8.2 | Open saved deep-link to a node | Full-screen detail view opens (post-#823: desktop deep links match the Details-link path) | #739/#823 | browser |
| 8.3 | Packets page filter URL hash | Reload preserves filters | #740 | browser |
| 8.4 | Details/Analytics links in node detail panel | Navigate without router glitch | #779/#785 | browser |
| 8.5 | Neighbor graph slider | Persists across reloads, default 0.7 | #776 | browser |
| 8.6 | Repeater that's also observer | Single map marker | #745 | browser |
| 8.7 | Side-panel "Recent Packets" — click any entry, lands on packet detail (no 404), entry text is readable in current theme | DB-fallback works (#827); `.advert-info` has explicit color (#829) | #827/#829 | browser |
### 9. Geofilter & customizer
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 9.1 | Customize → "Open geofilter builder" link | Opens app-served builder | #735 | browser |
| 9.2 | Build a filter, save, reload (STAGING ONLY; teardown: delete the saved filter) | Persists across reload; teardown removes it | #735 | browser |
| 9.3 | Geofilter docs page | Renders, content matches behavior | #734 | browser |
### 10. Node blacklist
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 10.1 | Add node pubkey to nodeBlacklist config; restart | Hidden from listings/map/neighbor graph | #742 | auto: blacklist-test.sh |
| 10.2 | Packets still in DB | Yes (filter not delete) | #742 | auto: blacklist-test.sh |
`blacklist-test.sh` covers both 10.1 and 10.2 in one run. Required env: `TEST_NODE_PUBKEY` (hex, of a real visible node on TARGET), `TARGET_SSH_HOST`, `TARGET_CONFIG_PATH`, `TARGET_CONTAINER`. Optional: `TARGET_DB_PATH` or `ADMIN_API_TOKEN` for §10.2 probe; `TARGET_SSH_KEY` (default `/root/.ssh/id_ed25519`). Mandatory teardown removes the pubkey and verifies the node returns to listings.
### 11. Deploy/ops
| # | Step | Pass criteria | Source | Mode |
|---|---|---|---|---|
| 11.1 | Force-redeploy staging | Container removed cleanly even if `docker run`, not compose. Playwright E2E `Desktop: deep link #/nodes/{pubkey} opens full-screen detail view` passes (updated #833 — was asserting old pre-#823 split-panel behavior). | fa348ef/#833 | human |
## GO criteria
- Sections 1.2, 2, 3 must all pass — release blockers
- Section 4 (channels) — any visible regression must be fixed before tag
- Other sections: file follow-up issues; decide per-item whether to tag with known issues
+134
@@ -0,0 +1,134 @@
#!/usr/bin/env bash
# api-contract-diff.sh — diff CoreScope API endpoints between two deployments.
# Usage: api-contract-diff.sh BASELINE_URL TARGET_URL [-k AUTH_HEADER]
#
# Compares JSON shape (recursive key set) per endpoint and asserts presence of
# `resolved_path` where contract requires it. Prints a per-endpoint result line
# (✅/❌) and a summary. Exit code = number of failures.
#
# Distinguishes:
# curl-failed → HTTP error or network timeout (real outage)
# parse-empty → curl succeeded but response shape unexpected (probable
# contract drift in this script or in the API)
# shape-diff → recursive key set differs between baseline and target
# rp-missing → resolved_path absent on target where it was promised
#
# PUBLIC repo: do not commit URLs or keys here. Caller passes them.
set -uo pipefail
OLD="${1:-}"; NEW="${2:-}"
[[ -z "$OLD" || -z "$NEW" ]] && { echo "usage: $0 BASELINE_URL TARGET_URL [-k AUTH_HEADER]" >&2; exit 2; }
shift 2 || true
AUTH=""
while [[ $# -gt 0 ]]; do
case "$1" in
-k) AUTH="$2"; shift 2 ;;
*) echo "unknown arg: $1" >&2; exit 2 ;;
esac
done
TMP=$(mktemp -d); trap 'rm -rf "$TMP"' EXIT
# Wrapper: fetch URL, return body on stdout, exit 1 on HTTP error / timeout.
fetch() {
local url="$1" out="$2"
local code
code=$(curl -s -m 30 -o "$out" -w "%{http_code}" ${AUTH:+-H "$AUTH"} "$url" 2>/dev/null) || code="000"
if [[ "$code" != "2"* ]]; then
echo " HTTP $code"
return 1
fi
return 0
}
# Seed lookups from TARGET (so the picked IDs are guaranteed present there).
seed_packets="$TMP/seed_packets.json"
seed_observers="$TMP/seed_observers.json"
seed_nodes="$TMP/seed_nodes.json"
if ! fetch "$NEW/api/packets?limit=1" "$seed_packets"; then echo "seed /api/packets failed" >&2; fi
if ! fetch "$NEW/api/observers" "$seed_observers"; then echo "seed /api/observers failed" >&2; fi
if ! fetch "$NEW/api/nodes?limit=1" "$seed_nodes"; then echo "seed /api/nodes failed" >&2; fi
HASH=$(jq -r '.packets[0].hash // empty' "$seed_packets" 2>/dev/null || true)
OBSID=$(jq -r '.observers[0].id // empty' "$seed_observers" 2>/dev/null || true)
NODEPK=$(jq -r '.nodes[0].public_key // empty' "$seed_nodes" 2>/dev/null || true)
[[ -z "$HASH" ]] && echo "warn: no packet hash from /api/packets — packet-detail endpoints will be skipped" >&2
[[ -z "$OBSID" ]] && echo "warn: no observer id from /api/observers — observer-detail endpoints will be skipped" >&2
[[ -z "$NODEPK" ]] && echo "warn: no node pubkey from /api/nodes — node-detail endpoints will be skipped" >&2
# Endpoints to diff: path | jq filter (selects subobject to compare) | RP-required(yes/no)
declare -a ENDPOINTS
ENDPOINTS+=("/api/packets?limit=20|.packets[0]|yes")
ENDPOINTS+=("/api/packets?limit=20&expandObservations=true|.packets[0]|yes")
ENDPOINTS+=("/api/observers|.observers[0]|no")
[[ -n "$HASH" ]] && ENDPOINTS+=("/api/packets/$HASH|.|yes")
[[ -n "$OBSID" ]] && ENDPOINTS+=("/api/observers/$OBSID|.|no")
[[ -n "$OBSID" ]] && ENDPOINTS+=("/api/observers/$OBSID/analytics|.|no")
[[ -n "$NODEPK" ]] && ENDPOINTS+=("/api/nodes/$NODEPK/health|.recentPackets[0]|yes")
[[ -n "$NODEPK" ]] && ENDPOINTS+=("/api/nodes/$NODEPK/paths|.|no")
# Strip volatile fields (timestamps + counters) from a JSON value.
STRIP='walk(if type=="object" then del(.timestamp, .first_seen, .last_seen, .last_heard, .updated_at, .server_time, .packet_count, .packetsLastHour, .uptime_secs, .battery_mv, .noise_floor, .observation_count, .advert_count) else . end)'
fails=0
for ep in "${ENDPOINTS[@]}"; do
IFS='|' read -r path filter need_rp <<<"$ep"
echo "=== $path (resolved_path required: $need_rp) ==="
oldfile="$TMP/old.json"; newfile="$TMP/new.json"
if ! fetch "$OLD$path" "$oldfile"; then echo " ❌ baseline curl-failed"; fails=$((fails+1)); continue; fi
if ! fetch "$NEW$path" "$newfile"; then echo " ❌ target curl-failed"; fails=$((fails+1)); continue; fi
# Selector + strip on each side. jq stderr is preserved so script bugs surface.
oldj=$(jq "$filter | $STRIP" "$oldfile")
jq_old_rc=$?
newj=$(jq "$filter | $STRIP" "$newfile")
jq_new_rc=$?
if [[ $jq_old_rc -ne 0 ]]; then
echo " ❌ baseline jq-error (filter='$filter') — likely script bug or API shape changed"
fails=$((fails+1)); continue
fi
if [[ $jq_new_rc -ne 0 ]]; then
echo " ❌ target jq-error (filter='$filter') — likely script bug or API shape changed"
fails=$((fails+1)); continue
fi
if [[ -z "$oldj" || "$oldj" == "null" ]]; then
echo " ❌ baseline parse-empty (filter returned empty/null; check API shape)"
fails=$((fails+1)); continue
fi
if [[ -z "$newj" || "$newj" == "null" ]]; then
echo " ❌ target parse-empty (filter returned empty/null; check API shape)"
fails=$((fails+1)); continue
fi
# Recursive key-set diff. Canonicalize array indices (numbers) → "[]" so two
# different sample responses with different array lengths don't false-positive.
KEYS_FILTER='[paths(scalars or type=="null" or (type=="array" and length==0) or (type=="object" and length==0)) | map(if type=="number" then "[]" else . end) | join(".")] | unique | .[]'
oldkeys=$(echo "$oldj" | jq -r "$KEYS_FILTER" | sort -u)
newkeys=$(echo "$newj" | jq -r "$KEYS_FILTER" | sort -u)
if ! diff <(echo "$oldkeys") <(echo "$newkeys") >/dev/null; then
echo " ❌ shape-diff (key set differs):"
diff <(echo "$oldkeys") <(echo "$newkeys") | sed 's/^/ /'
fails=$((fails+1))
continue
fi
# If RP expected, assert present on target (any value, may be null).
if [[ "$need_rp" == "yes" ]]; then
if ! echo "$newj" | jq -e '[.. | objects | select(has("resolved_path"))] | length > 0' >/dev/null 2>&1; then
echo " ❌ rp-missing (resolved_path not present anywhere in selector)"
fails=$((fails+1))
continue
fi
fi
echo " ✅ ok"
done
echo
echo "failures: $fails / ${#ENDPOINTS[@]}"
exit $fails
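The recursive key-set canonicalization at the heart of the shape diff can be exercised on its own; a minimal sketch (assumes only `jq` is installed, sample JSON is made up):

```shell
# Numeric path components are rewritten to "[]", so two samples of the same
# endpoint with different array lengths still produce one canonical shape
# signature per key path.
KEYS_FILTER='[paths(scalars or type=="null" or (type=="array" and length==0) or (type=="object" and length==0)) | map(if type=="number" then "[]" else . end) | join(".")] | unique | .[]'
echo '{"packets":[{"hash":"aa","snr":1},{"hash":"bb","snr":2,"rssi":-90}]}' \
  | jq -r "$KEYS_FILTER"
# → packets.[].hash
#   packets.[].rssi
#   packets.[].snr
```

Note that `rssi` appears even though only one array element carries it: the canonical set is the union across elements, so a field present in any sampled element survives into the shape comparison.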
@@ -0,0 +1,271 @@
#!/usr/bin/env bash
# blacklist-test.sh — verify nodeBlacklist hides a pubkey from API surface
# while retaining its packets in the DB. Implements QA plan §10.1 + §10.2.
#
# Usage:
# blacklist-test.sh BASELINE_URL TARGET_URL
#
# BASELINE_URL is currently unused for assertions but kept as a positional
# arg for parity with other qa-suite scripts (always called with two URLs).
#
# Required env (target host control + test data):
# TEST_NODE_PUBKEY — hex pubkey of a real, currently-visible node on TARGET_URL
# TARGET_SSH_HOST — e.g. runner@example
# TARGET_SSH_KEY — path to ssh private key (default: /root/.ssh/id_ed25519)
# TARGET_CONFIG_PATH — absolute path to config.json on the target
# TARGET_CONTAINER — docker container name on the target
# Optional env:
# TARGET_DB_PATH — sqlite db path on the target (for §10.2 sqlite probe)
# ADMIN_API_TOKEN — if /api/admin/transmissions exists, use it instead of ssh+sqlite
# (read from env, not argv — never appears in ps)
# CURL_TIMEOUT — per-request curl timeout, seconds (default 60)
# RESTART_WAIT_S — max wait for /api/stats after restart (default 120)
#
# Distinguishes:
# ssh-failed → cannot reach/control target
# restart-stuck → /api/stats not 200 within RESTART_WAIT_S
# hide-failed → blacklisted pubkey still surfaced via API (§10.1 fail)
# retain-failed → blacklisted pubkey absent from DB (§10.2 fail)
# teardown-failed→ post-test removal did not restore listing
#
# Exit code = number of failures (0 = pass).
# PUBLIC repo: zero PII — no real pubkeys, IPs, or hostnames as defaults.
set -uo pipefail
BASELINE_URL="${1:-}"
TARGET_URL="${2:-}"
if [[ -z "$BASELINE_URL" || -z "$TARGET_URL" ]]; then
echo "usage: $0 BASELINE_URL TARGET_URL (TEST_NODE_PUBKEY+TARGET_* via env)" >&2
exit 2
fi
TEST_PUBKEY="${TEST_NODE_PUBKEY:-}"
TARGET_SSH_HOST="${TARGET_SSH_HOST:-}"
TARGET_SSH_KEY="${TARGET_SSH_KEY:-/root/.ssh/id_ed25519}"
TARGET_CONFIG_PATH="${TARGET_CONFIG_PATH:-}"
TARGET_CONTAINER="${TARGET_CONTAINER:-}"
TARGET_DB_PATH="${TARGET_DB_PATH:-}"
ADMIN_API_TOKEN="${ADMIN_API_TOKEN:-}"
if [[ -z "$TEST_PUBKEY" || -z "$TARGET_SSH_HOST" || -z "$TARGET_CONFIG_PATH" || -z "$TARGET_CONTAINER" ]]; then
echo "error: TEST_NODE_PUBKEY, TARGET_SSH_HOST, TARGET_CONFIG_PATH, TARGET_CONTAINER are required" >&2
exit 2
fi
# Hard input validation — these strings are interpolated into remote shell/SQL.
# Pubkey must be hex (MeshCore pubkeys are hex-encoded ed25519 prefixes).
if ! [[ "$TEST_PUBKEY" =~ ^[0-9a-fA-F]+$ ]]; then
echo "error: TEST_NODE_PUBKEY must be hex (got: redacted)" >&2
exit 2
fi
# Container name must match docker's allowed chars: [a-zA-Z0-9][a-zA-Z0-9_.-]*
if ! [[ "$TARGET_CONTAINER" =~ ^[a-zA-Z0-9][a-zA-Z0-9_.-]*$ ]]; then
echo "error: TARGET_CONTAINER has illegal chars" >&2
exit 2
fi
# Config path must be an absolute, sane path (no spaces, quotes, $, ;, etc.).
if ! [[ "$TARGET_CONFIG_PATH" =~ ^/[A-Za-z0-9_./-]+$ ]]; then
echo "error: TARGET_CONFIG_PATH must be a sane absolute path" >&2
exit 2
fi
if [[ -n "$TARGET_DB_PATH" ]] && ! [[ "$TARGET_DB_PATH" =~ ^/[A-Za-z0-9_./-]+$ ]]; then
echo "error: TARGET_DB_PATH must be a sane absolute path" >&2
exit 2
fi
CURL_TIMEOUT="${CURL_TIMEOUT:-60}"
RESTART_WAIT_S="${RESTART_WAIT_S:-120}"
SSH_OPTS=(-i "$TARGET_SSH_KEY" -o StrictHostKeyChecking=accept-new -o ConnectTimeout=15 -o BatchMode=yes)
ssh_t() { ssh "${SSH_OPTS[@]}" "$TARGET_SSH_HOST" "$@"; }
TMP=$(mktemp -d)
fails=0
TEARDOWN_DONE=0
# -----------------------------------------------------------------------------
# Teardown — MANDATORY in all exit paths.
# -----------------------------------------------------------------------------
teardown() {
local rc=$?
if [[ "$TEARDOWN_DONE" == "1" ]]; then rm -rf "$TMP"; exit "$rc"; fi
TEARDOWN_DONE=1
echo "=== teardown: removing $TEST_PUBKEY from nodeBlacklist ==="
if remove_from_blacklist && restart_target && wait_for_stats; then
if node_visible; then
echo " ✅ teardown ok — node returned to listings"
else
echo " ❌ teardown-failed: node still hidden after removal"
rc=$((rc + 1))
fi
else
echo " ❌ teardown-failed: could not restore config / restart / stats"
rc=$((rc + 1))
fi
rm -rf "$TMP"
exit "$rc"
}
trap teardown EXIT INT TERM
# -----------------------------------------------------------------------------
# Helpers
# -----------------------------------------------------------------------------
fetch_code() {
local url="$1" out="$2"
curl -s -m "$CURL_TIMEOUT" -o "$out" -w "%{http_code}" "$url" 2>/dev/null || echo "000"
}
wait_for_stats() {
local deadline code
echo " waiting up to ${RESTART_WAIT_S}s for $TARGET_URL/api/stats ..."
deadline=$(( $(date +%s) + RESTART_WAIT_S ))
while (( $(date +%s) < deadline )); do
code=$(fetch_code "$TARGET_URL/api/stats" "$TMP/stats.json")
if [[ "$code" == "200" ]]; then echo " stats OK"; return 0; fi
sleep 3
done
echo " ❌ restart-stuck: /api/stats never returned 200"
return 1
}
restart_target() {
echo " restarting container $TARGET_CONTAINER ..."
# TARGET_CONTAINER is validated above; still quote defensively.
if ! ssh_t "docker restart $(printf %q "$TARGET_CONTAINER")" >/dev/null; then
echo " ❌ ssh-failed: docker restart failed"
return 1
fi
return 0
}
# Mutate config.json on target. Values pass via env (printf %q + single-quoted
# heredoc) so $TEST_PUBKEY etc. never enter the remote shell as code.
set_blacklist_state() {
local mode="$1" # add | remove
ssh_t "CFG=$(printf %q "$TARGET_CONFIG_PATH") PK=$(printf %q "$TEST_PUBKEY") MODE=$(printf %q "$mode") bash -s" <<'REMOTE'
set -euo pipefail
TMP="$(mktemp)"
trap 'rm -f "$TMP"' EXIT
if command -v jq >/dev/null; then
if [ "$MODE" = "add" ]; then
jq --arg pk "$PK" '.nodeBlacklist = ((.nodeBlacklist // []) + [$pk] | unique)' "$CFG" > "$TMP"
else
jq --arg pk "$PK" '.nodeBlacklist = ((.nodeBlacklist // []) - [$pk])' "$CFG" > "$TMP"
fi
else
python3 - "$CFG" "$PK" "$MODE" "$TMP" <<'PY'
import json, sys
cfg, pk, mode, out = sys.argv[1:]
with open(cfg) as f: d = json.load(f)
bl = list(dict.fromkeys(d.get("nodeBlacklist") or []))
if mode == "add":
if pk not in bl: bl.append(pk)
else:
bl = [x for x in bl if x != pk]
d["nodeBlacklist"] = bl
with open(out, "w") as f: json.dump(d, f, indent=2)
PY
fi
# Preserve mode and ownership; mv across same FS is atomic.
chmod --reference="$CFG" "$TMP" 2>/dev/null || true
chown --reference="$CFG" "$TMP" 2>/dev/null || true
mv "$TMP" "$CFG"
trap - EXIT
REMOTE
local rc=$?
if (( rc != 0 )); then
echo " ❌ ssh-failed: could not edit $TARGET_CONFIG_PATH ($mode)"
return 1
fi
return 0
}
add_to_blacklist() { set_blacklist_state add; }
remove_from_blacklist() { set_blacklist_state remove; }
node_visible() {
# Returns 0 if the pubkey is currently visible via API.
local code
code=$(fetch_code "$TARGET_URL/api/nodes/$TEST_PUBKEY" "$TMP/node.json")
if [[ "$code" == "200" ]]; then return 0; fi
fetch_code "$TARGET_URL/api/nodes?limit=10000" "$TMP/nodes.json" >/dev/null
if grep -qF -- "\"$TEST_PUBKEY\"" "$TMP/nodes.json" 2>/dev/null; then
return 0
fi
return 1
}
# -----------------------------------------------------------------------------
# §10.1 — hide
# -----------------------------------------------------------------------------
echo "=== §10.1 add $TEST_PUBKEY to nodeBlacklist ==="
if ! add_to_blacklist; then fails=$((fails+1)); exit "$fails"; fi
if ! restart_target; then fails=$((fails+1)); exit "$fails"; fi
if ! wait_for_stats; then fails=$((fails+1)); exit "$fails"; fi
detail_code=$(fetch_code "$TARGET_URL/api/nodes/$TEST_PUBKEY" "$TMP/detail.json")
list_code=$(fetch_code "$TARGET_URL/api/nodes?limit=10000" "$TMP/list.json")
in_list=0
if [[ "$list_code" == "200" ]] && grep -qF -- "\"$TEST_PUBKEY\"" "$TMP/list.json"; then
in_list=1
fi
if [[ "$detail_code" == "404" || "$in_list" == "0" ]]; then
echo " ✅ hide ok: detail=$detail_code in_list=$in_list"
else
echo " ❌ hide-failed: detail=$detail_code in_list=$in_list — pubkey still surfaced"
fails=$((fails+1))
fi
topo_code=$(fetch_code "$TARGET_URL/api/topology" "$TMP/topo.json")
if [[ "$topo_code" != "200" ]]; then
echo " ⚠️ /api/topology HTTP $topo_code — skipping topology assertion"
elif grep -qF -- "$TEST_PUBKEY" "$TMP/topo.json"; then
echo " ❌ hide-failed: /api/topology references blacklisted pubkey"
fails=$((fails+1))
else
echo " ✅ topology clean"
fi
# -----------------------------------------------------------------------------
# §10.2 — DB retain
# -----------------------------------------------------------------------------
echo "=== §10.2 verify packets retained in DB ==="
count=""
if [[ -n "$ADMIN_API_TOKEN" ]]; then
# Read auth header from stdin so the token never enters argv (ps-safe).
code=$(printf 'header = "Authorization: Bearer %s"\n' "$ADMIN_API_TOKEN" | \
curl -s -m "$CURL_TIMEOUT" -K - -o "$TMP/admin.json" -w "%{http_code}" \
"$TARGET_URL/api/admin/transmissions?from_node=$TEST_PUBKEY&count=1" 2>/dev/null || echo "000")
if [[ "$code" == "200" ]]; then
count=$(jq -r '.count // ((.transmissions // []) | length)' "$TMP/admin.json" 2>/dev/null || echo "")
fi
fi
if [[ -z "$count" ]]; then
if [[ -z "$TARGET_DB_PATH" ]]; then
echo " ❌ retain-failed: TARGET_DB_PATH unset and no ADMIN_API_TOKEN — cannot probe"
fails=$((fails+1))
else
# TEST_PUBKEY is hex-validated → safe to inline single-quoted in SQL.
# Container/db path also validated; printf %q for defense in depth.
q="SELECT COUNT(*) FROM transmissions WHERE from_node = '$TEST_PUBKEY';"
qq=$(printf %q "$q")
if ! count=$(ssh_t "docker exec $(printf %q "$TARGET_CONTAINER") sqlite3 $(printf %q "$TARGET_DB_PATH") $qq" 2>/dev/null); then
count=$(ssh_t "sqlite3 $(printf %q "$TARGET_DB_PATH") $qq" 2>/dev/null || echo "")
fi
fi
fi
if [[ -z "$count" ]]; then
echo " ❌ retain-failed: could not read transmissions count"
fails=$((fails+1))
elif [[ "$count" =~ ^[0-9]+$ ]] && (( count > 0 )); then
echo " ✅ DB retains $count packets from $TEST_PUBKEY"
else
echo " ❌ retain-failed: count=$count (expected > 0)"
fails=$((fails+1))
fi
echo "=== summary: $fails failure(s) before teardown ==="
# trap handles teardown + exit
exit "$fails"
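The `printf %q` + single-quoted-heredoc pattern used by `set_blacklist_state` can be demonstrated locally by substituting plain `bash -s` for the ssh hop; a sketch with a deliberately hostile value (illustration only, no real target involved):

```shell
# printf %q escapes the value for exactly one shell-parsing layer, and the
# quoted heredoc delimiter suppresses local expansion of the body, so the
# inner shell only ever sees the value as data in an env var, never as code.
PK='abc123; echo INJECTED'   # hostile test value
roundtrip=$(bash -c "PK=$(printf %q "$PK") bash -s" <<'REMOTE'
printf '%s' "$PK"
REMOTE
)
# The semicolon never executes; the value survives byte-for-byte.
[ "$roundtrip" = "$PK" ] && echo "roundtrip ok"
```

The same reasoning is why the script insists on hex-validating `TEST_NODE_PUBKEY` anyway: `%q` protects the shell layer, while the validation protects the SQL layer.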
@@ -15,6 +15,11 @@ async function test(name, fn) {
results.push({ name, pass: true });
console.log(` \u2705 ${name}`);
} catch (err) {
if (err.skip) {
results.push({ name, pass: true, skipped: true });
console.log(`  ⏭ ${name}: ${err.message}`);
return;
}
results.push({ name, pass: false, error: err.message });
console.log(` \u274c ${name}: ${err.message}`);
console.log(`\nFail-fast: stopping after first failure.`);
@@ -1701,18 +1706,18 @@ async function run() {
assert(!url.includes('node-fullscreen') || await page.$('#nodesRight:not(.empty)'), 'Split panel should be visible on desktop');
});
// Test: loading #/nodes/{pubkey} on desktop shows split panel (#676)
await test('Desktop: deep link #/nodes/{pubkey} opens split panel, not full-screen', async () => {
// Test: loading #/nodes/{pubkey} on desktop opens full-screen detail view (#823)
// Updated from #676's earlier "split panel on desktop" assertion. The Details
// link now opens the full-screen single-node view on desktop too — see PR #824.
await test('Desktop: deep link #/nodes/{pubkey} opens full-screen detail view', async () => {
await page.setViewportSize({ width: 1280, height: 800 });
await page.goto(BASE + '#/nodes', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('#nodesBody tr[data-key]', { timeout: 10000 });
const pubkey = await page.$eval('#nodesBody tr[data-key]', el => el.dataset.key);
await page.goto(BASE + '#/nodes/' + encodeURIComponent(pubkey), { waitUntil: 'domcontentloaded' });
await page.waitForTimeout(500);
const hasSplitPanel = await page.$('#nodesRight:not(.empty)');
const hasFullScreen = await page.$('.node-fullscreen');
assert(hasSplitPanel, 'Split panel should be open on desktop deep link');
assert(!hasFullScreen, 'Full-screen view should NOT appear on desktop deep link');
assert(hasFullScreen, 'Full-screen detail view should be open on desktop deep link (#823)');
});
// Test: packets timeWindow deep link
@@ -1778,12 +1783,343 @@ async function run() {
}
});
// Test: Expanded group children have unique observation ids (#866)
await test('Expanded group children update detail pane per-observation', async () => {
await page.goto(`${BASE}/#/packets`, { waitUntil: 'domcontentloaded' });
// Ensure grouped mode and wide time window
await page.evaluate(() => {
localStorage.setItem('meshcore-time-window', '525600');
localStorage.setItem('meshcore-groupbyhash', 'true');
});
await page.reload({ waitUntil: 'load' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
// Find a group row with observation_count > 1 (has expand button)
const expandBtn = await page.$('table tbody tr .expand-btn, table tbody tr [data-expand]');
if (!expandBtn) {
console.log('  ⚠️ No expandable groups found — skipping child assertion');
return;
}
// Click expand and wait for the /packets/<hash> detail API call
const [detailResp] = await Promise.all([
page.waitForResponse(resp => {
const u = new URL(resp.url(), BASE);
// Match /api/packets/<hash> but not /api/packets?... or /api/packets/observations
return /\/api\/packets\/[A-Fa-f0-9]+$/.test(u.pathname) && resp.status() === 200;
}, { timeout: 15000 }),
expandBtn.click(),
]);
assert(detailResp, 'Expected /api/packets/<hash> response on expand');
// Wait for child rows to appear
await page.waitForSelector('table tbody tr.child-row, table tbody tr[class*="child"]', { timeout: 5000 });
const childRows = await page.$$('table tbody tr.child-row, table tbody tr[class*="child"]');
if (childRows.length < 2) {
console.log('  ⚠️ Group has < 2 children — skipping per-observation assertion');
return;
}
// Click first child row
await childRows[0].click();
await page.waitForFunction(() => {
const panel = document.getElementById('pktRight');
return panel && !panel.classList.contains('empty') && panel.textContent.trim().length > 0;
}, { timeout: 10000 });
const content1 = await page.$eval('#pktRight', el => el.textContent.trim());
const url1 = page.url();
// Click second child row
await childRows[1].click();
await page.waitForTimeout(500);
const content2 = await page.$eval('#pktRight', el => el.textContent.trim());
const url2 = page.url();
// URL should contain ?obs= with a real observation id
assert(url1.includes('obs=') || url2.includes('obs='), `URL should contain obs= parameter, got: ${url1}`);
// The two children should show different detail pane content (different observers)
// At minimum, the URL obs= values should differ
if (url1.includes('obs=') && url2.includes('obs=')) {
const obs1 = new URL(url1).hash.match(/obs=(\d+)/)?.[1];
const obs2 = new URL(url2).hash.match(/obs=(\d+)/)?.[1];
if (obs1 && obs2) {
assert(obs1 !== obs2, `Two children should have different obs ids, both got obs=${obs1}`);
}
}
// Verify obs id is NOT the aggregate packet id (the bug from #866)
const obsMatch = url2.match(/obs=(\d+)/);
if (obsMatch) {
const detailJson = await detailResp.json().catch(() => null);
if (detailJson?.packet?.id) {
const aggId = String(detailJson.packet.id);
// At least one child obs id should differ from the aggregate packet id
const obs1 = url1.match(/obs=(\d+)/)?.[1];
const obs2 = url2.match(/obs=(\d+)/)?.[1];
const allSameAsAgg = obs1 === aggId && obs2 === aggId;
assert(!allSameAsAgg, `Child obs ids should not all equal aggregate packet.id (${aggId})`);
}
}
});
// Test: per-observation raw_hex — hex pane updates when switching observations (#881)
await test('Packet detail hex pane updates per observation', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Try clicking packet rows to find one with multiple observations
const rows = await page.$$('table tbody tr[data-action]');
let obsRows = [];
for (let i = 0; i < Math.min(rows.length, 10); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(600);
obsRows = await page.$$('.detail-obs-row');
if (obsRows.length >= 2) break;
}
if (obsRows.length < 2) {
console.log(' ⏭ Skipped: no packet with ≥2 observations found in first 10 rows');
return;
}
// Click first observation, capture hex dump
await obsRows[0].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex1 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// Click second observation, capture hex dump
await obsRows[1].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex2 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// If both have content and differ, the feature works
if (hex1 && hex2 && hex1 !== hex2) {
console.log(' ✓ Hex pane content differs between observations');
} else if (hex1 && hex2 && hex1 === hex2) {
console.log(' ⏭ Hex same for both observations (likely historical NULL raw_hex — OK)');
} else {
console.log(' ⏭ Could not capture hex content from both observations');
}
});
// Test: path pill (top) and byte breakdown (bottom) agree on hop count
// Regression for visual mismatch where badge said "1 hop" but path text listed N names
await test('Packet detail path pill and byte breakdown agree on hop count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Click rows until we find one whose detail pane renders a multi-hop path
const rows = await page.$$('table tbody tr[data-action]');
let found = false;
for (let i = 0; i < Math.min(rows.length, 15); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(500);
const result = await page.evaluate(() => {
// Path pill: <dt>Path</dt><dd><span class="badge ...">N hops</span> ...names...</dd>
const dts = document.querySelectorAll('dl.detail-meta dt');
let pillBadgeCount = null;
let pillNameCount = null;
for (const dt of dts) {
if (dt.textContent.trim() === 'Path') {
const dd = dt.nextElementSibling;
if (!dd) break;
const badge = dd.querySelector('.badge');
if (badge) {
const m = badge.textContent.match(/(\d+)\s*hop/);
if (m) pillBadgeCount = parseInt(m[1], 10);
}
// Count rendered hop links/spans (HopDisplay.renderHop output)
const hops = dd.querySelectorAll('.hop-link, [data-hop-link], .hop-named, .hop-anonymous');
pillNameCount = hops.length;
break;
}
}
// Byte breakdown: section row "Path (N hops)" + N "Hop X — ..." rows
let breakdownSectionCount = null;
let breakdownRowCount = 0;
const fieldTable = document.querySelector('table.field-table');
if (fieldTable) {
for (const tr of fieldTable.querySelectorAll('tr')) {
const txt = tr.textContent.trim();
const sec = txt.match(/^Path\s*\((\d+)\s*hops?\)/);
if (sec) breakdownSectionCount = parseInt(sec[1], 10);
if (/^\s*\d+\s*Hop\s+\d+\s*—/.test(txt) || /^Hop\s+\d+\s*—/.test(txt.replace(/^\d+/, '').trim())) {
breakdownRowCount++;
}
}
}
return { pillBadgeCount, pillNameCount, breakdownSectionCount, breakdownRowCount };
});
if (result.pillBadgeCount && result.pillBadgeCount > 0 && result.breakdownSectionCount != null) {
found = true;
// Top badge count must equal bottom section count
assert(result.pillBadgeCount === result.breakdownSectionCount,
`Path pill badge says ${result.pillBadgeCount} hops but byte breakdown says ${result.breakdownSectionCount} hops`);
// Number of rendered hop names in pill should also match (within 1, since renderPath may add separators)
if (result.pillNameCount != null && result.pillNameCount > 0) {
assert(Math.abs(result.pillNameCount - result.pillBadgeCount) <= 1,
`Path pill badge ${result.pillBadgeCount} but rendered ${result.pillNameCount} hop names`);
}
// And breakdown rendered rows should match its own section count
assert(result.breakdownRowCount > 0,
'breakdown rows selector matched nothing — selector or DOM changed');
assert(result.breakdownRowCount === result.breakdownSectionCount,
`Byte breakdown section says ${result.breakdownSectionCount} hops but rendered ${result.breakdownRowCount} hop rows`);
console.log(` ✓ Path pill (${result.pillBadgeCount}) and byte breakdown (${result.breakdownSectionCount}) agree`);
break;
}
}
if (!found) {
if (process.env.E2E_REQUIRE_PATH_TEST === '1') {
throw new Error('BLOCKED — no multi-hop packet found in first 15 rows (E2E_REQUIRE_PATH_TEST=1 requires it)');
}
const skipErr = new Error('SKIP: No multi-hop packet with byte breakdown found in first 15 rows — needs fixture');
skipErr.skip = true;
throw skipErr;
}
});
// Test: hex-strip color spans match the labeled byte rows (per-obs raw_hex).
// Regression #891: server-supplied breakdown was computed once from top-level
// raw_hex, so per-observation rendering had off-by-N highlights vs the labels.
await test('Packet detail hex strip Path range matches hop row count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
const rows = await page.$$('table tbody tr[data-action]');
let checked = 0;
for (let i = 0; i < Math.min(rows.length, 25) && checked < 3; i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const result = await page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
return { pathBytes, hopRows };
});
if (!result || (result.pathBytes === 0 && result.hopRows.length === 0)) continue;
checked++;
// Invariant: the hex-path span carries hops * hash_size bytes. So when hop
// rows exist, the span must be present, hold at least one byte per hop, and
// its byte count must divide evenly by the hop count.
assert(result.hopRows.length > 0,
`row ${i}: hex-path span has ${result.pathBytes} bytes but no hop rows in the labeled table`);
assert(result.pathBytes >= result.hopRows.length,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — strip and labels disagree`);
assert(result.pathBytes % result.hopRows.length === 0,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — bytes/hops not divisible (hash_size violated)`);
console.log(` ✓ row ${i}: hex-path ${result.pathBytes} bytes / ${result.hopRows.length} hop rows (hash_size=${result.pathBytes / result.hopRows.length})`);
}
if (checked === 0) {
const skipErr = new Error('SKIP: no packet with rendered hex strip + hop rows found in first 25 rows');
skipErr.skip = true;
throw skipErr;
}
});
// Test: clicking a different observation row re-renders strip + breakdown consistently.
// Regression: observations of the same packet hash have different raw_hex (#882),
// so picking a different obs must recompute the byte ranges, not reuse the old ones.
await test('Packet detail switches consistently across observations', async () => {
await page.goto(BASE + '#/packets?groupByHash=1', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
let opened = false;
const groupRows = await page.$$('table tbody tr[data-action]');
for (let i = 0; i < Math.min(groupRows.length, 10); i++) {
await groupRows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const obsCount = await page.evaluate(() => {
return document.querySelectorAll('table.observations-table tbody tr, .obs-row').length;
});
if (obsCount >= 2) { opened = true; break; }
}
if (!opened) {
const skipErr = new Error('SKIP: no multi-observation packet found in first 10 group rows');
skipErr.skip = true;
throw skipErr;
}
async function snapshot() {
return page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
const rawHexParts = [...dump.querySelectorAll('span.hex-byte')].map(s => s.textContent.trim());
return { pathBytes, hopCount: hopRows.length, rawHexJoined: rawHexParts.join('|') };
});
}
const snapA = await snapshot();
assert(snapA, 'first snapshot must have hex dump + field table');
assert(snapA.hopCount === 0 || snapA.pathBytes >= snapA.hopCount,
`obs A inconsistent: hex-path ${snapA.pathBytes} bytes vs ${snapA.hopCount} hop rows`);
const switched = await page.evaluate(() => {
const obsRows = [...document.querySelectorAll('table.observations-table tbody tr, .obs-row')];
if (obsRows.length < 2) return false;
obsRows[1].click();
return true;
});
assert(switched, 'should click second observation row');
await page.waitForTimeout(500);
const snapB = await snapshot();
assert(snapB, 'second snapshot must have hex dump + field table');
assert(snapB.hopCount === 0 || snapB.pathBytes >= snapB.hopCount,
`obs B inconsistent: hex-path ${snapB.pathBytes} bytes vs ${snapB.hopCount} hop rows`);
console.log(` ✓ obs A: ${snapA.pathBytes} path bytes / ${snapA.hopCount} hops; obs B: ${snapB.pathBytes} / ${snapB.hopCount}`);
});
// Test: clicking the 🔍 Details button in the nodes side panel navigates to
// the full-screen node detail view. Regression: hash already === target,
// so location.hash assignment was a no-op and the panel stayed open.
await test('Nodes side panel Details button opens full-screen view', async () => {
await page.goto(BASE + '#/nodes', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr[data-action]', { timeout: 15000 });
await page.waitForTimeout(500);
// Open side panel
await page.click('table tbody tr[data-action]');
await page.waitForSelector('#nodesRight .node-detail-btn', { timeout: 5000 });
// Click Details
await page.click('#nodesRight .node-detail-btn');
// Wait for full-screen view to appear
await page.waitForSelector('.node-fullscreen', { timeout: 5000 });
const isFullScreen = await page.evaluate(() => !!document.querySelector('.node-fullscreen'));
assert(isFullScreen, 'Details button should open full-screen node view');
});
await browser.close();
// Summary
const passed = results.filter(r => r.pass).length;
const skipped = results.filter(r => r.skipped).length;
const passed = results.filter(r => r.pass && !r.skipped).length;
const failed = results.filter(r => !r.pass).length;
console.log(`\n${passed}/${results.length} tests passed${failed ? `, ${failed} failed` : ''}`);
console.log(`\n${passed}/${results.length} tests passed${skipped ? `, ${skipped} skipped` : ''}${failed ? `, ${failed} failed` : ''}`);
process.exit(failed > 0 ? 1 : 0);
}
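The two shell suites above report "exit code = number of failures", while this Playwright runner collapses to 0/1 via `process.exit(failed > 0 ? 1 : 0)`. One caveat worth noting for anyone extending the failure-count convention: POSIX exit statuses are a single byte, so large counts wrap. A quick sketch:

```shell
# Exit statuses are truncated modulo 256, so 'exit $fails' is only a faithful
# count while fails < 256 (true for these suites, whose endpoint counts are
# small); exactly 256 failures would read as success.
bash -c 'exit 300'
echo $?   # prints 44 (300 % 256)
```

If a suite ever grows past a couple hundred checks, capping the exit status (e.g. `exit $(( fails > 1 ? 1 : fails ))`) is the safer convention.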
@@ -222,6 +222,10 @@ console.log('\n=== app.js: routeTypeName / payloadTypeName ===');
test('payloadTypeName(4) = Advert', () => assert.strictEqual(ctx.payloadTypeName(4), 'Advert'));
test('payloadTypeName(2) = Direct Msg', () => assert.strictEqual(ctx.payloadTypeName(2), 'Direct Msg'));
test('payloadTypeName(99) = UNKNOWN', () => assert.strictEqual(ctx.payloadTypeName(99), 'UNKNOWN'));
test('getPathLenOffset: transport route (0) → 5', () => assert.strictEqual(ctx.getPathLenOffset(0), 5));
test('getPathLenOffset: transport route (3) → 5', () => assert.strictEqual(ctx.getPathLenOffset(3), 5));
test('getPathLenOffset: flood route (1) → 1', () => assert.strictEqual(ctx.getPathLenOffset(1), 1));
test('getPathLenOffset: direct route (2) → 1', () => assert.strictEqual(ctx.getPathLenOffset(2), 1));
}
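The four `getPathLenOffset` assertions above encode one rule. As a hedged sketch (assumed to mirror `getPathLenOffset` in `public/app.js`, based only on the test expectations here): transport routes carry 4 bytes of transport codes after the 1-byte header, pushing the path-length byte to offset 5; flood/direct routes place it straight after the header at offset 1.

```javascript
// Sketch of the offset rule the tests pin down (assumption, not the app.js source):
// route types 0 (TRANSPORT_FLOOD) and 3 (TRANSPORT_DIRECT) insert 4 transport-code
// bytes between the header and the path-length byte.
function getPathLenOffsetSketch(routeType) {
  const isTransport = routeType === 0 || routeType === 3;
  return isTransport ? 5 : 1; // 1 header byte (+ 4 transport bytes when transport)
}
```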
console.log('\n=== app.js: truncate ===');
@@ -686,6 +690,88 @@ console.log('\n=== haversineKm (hop-resolver.js) ===');
});
}
// ===== pickByAffinity — neighbor-graph + centroid scoring (#874) =====
console.log('\n=== pickByAffinity neighbor-graph scoring (#874) ===');
{
const ctx = makeSandbox();
ctx.IATA_COORDS_GEO = {};
loadInCtx(ctx, 'public/hop-resolver.js');
const HR = ctx.window.HopResolver;
// Two nodes sharing prefix "ab", hundreds of km apart.
// NodeSF is near San Francisco, NodeDEN is near Denver.
const nodeSF = { public_key: 'ab11111111111111', name: 'NodeSF', lat: 37.7, lon: -122.4 };
const nodeDEN = { public_key: 'ab22222222222222', name: 'NodeDEN', lat: 39.7, lon: -104.9 };
// A known neighbor of NodeSF (in the graph)
const nodeNeighbor = { public_key: 'cc33333333333333', name: 'SFNeighbor', lat: 37.8, lon: -122.3 };
// Another known node near Denver
const nodeDenNeighbor = { public_key: 'dd44444444444444', name: 'DENNeighbor', lat: 39.8, lon: -105.0 };
test('#874: graph edge scoring picks correct regional candidate (SF)', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'cc33333333333333', target: 'ab11111111111111', weight: 5 },
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 5 },
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// With graph edges, ab11 (NodeSF) has edge to SFNeighbor, ab22 (NodeDEN) has edge to DENNeighbor
// Prev=SFNeighbor, Next=DENNeighbor. Both edges carry weight 5, but
// SFNeighbor's only edge is to ab11, so NodeSF should win.
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF because it has a graph edge to prev hop SFNeighbor');
});
test('#874: graph edge scoring — next hop breaks tie', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 8 },
// No edge from SFNeighbor to either ab node
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// Only ab22 (NodeDEN) has edge to DENNeighbor (next hop)
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because it has graph edge to next hop DENNeighbor');
});
test('#874: centroid fallback when no graph edges exist', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor]);
HR.setAffinity({ edges: [] }); // no edges at all
// Path: SFNeighbor → [ab??]
// SFNeighbor is at (37.8, -122.3), centroid is just that point
// NodeSF (37.7, -122.4) is ~14km away, NodeDEN (39.7, -104.9) is ~1500km away
const result = HR.resolve(['cc', 'ab'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF via centroid proximity to SFNeighbor');
});
test('#874: centroid uses average of prev+next positions', () => {
// Prev near SF, next near Denver → centroid is midpoint (~Nevada)
// NodeDEN is closer to Nevada midpoint than NodeSF
const nodeMid = { public_key: 'ee55555555555555', name: 'MidNode', lat: 38.5, lon: -114.0 };
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor, nodeMid]);
HR.setAffinity({ edges: [] });
// Path: SFNeighbor → [ab??] → DENNeighbor
// centroid = avg(37.8,-122.3, 39.8,-105.0) = (38.8, -113.65) — closer to Denver
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because centroid of SF+Denver neighbors is closer to Denver');
});
test('#874: fallback when no context at all', () => {
HR.init([nodeSF, nodeDEN]);
HR.setAffinity({ edges: [] });
// Single ambiguous hop, no origin/observer, no neighbors
const result = HR.resolve(['ab'], null, null, null, null);
assert.ok(result['ab'].ambiguous || result['ab'].name != null,
'Should resolve to some candidate without crashing');
});
}
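The two centroid-fallback tests above exercise the same selection step: average the known neighbor positions, then pick the candidate nearest that centroid. A minimal illustrative sketch of that step (assumption: this approximates what `HopResolver` does when no affinity edges exist; the flat-earth distance here is a stand-in for the resolver's haversine):

```javascript
// pickByCentroidSketch: hypothetical helper, not the hop-resolver.js code.
// Averages neighbor lat/lon and returns the candidate with the smallest
// longitude-corrected planar distance to that centroid.
function pickByCentroidSketch(candidates, neighborPositions) {
  const lat = neighborPositions.reduce((s, p) => s + p.lat, 0) / neighborPositions.length;
  const lon = neighborPositions.reduce((s, p) => s + p.lon, 0) / neighborPositions.length;
  let best = null, bestD = Infinity;
  for (const c of candidates) {
    // Scale the longitude delta by cos(latitude) so east-west degrees
    // are weighted roughly like north-south degrees.
    const d = Math.hypot(c.lat - lat, (c.lon - lon) * Math.cos(lat * Math.PI / 180));
    if (d < bestD) { bestD = d; best = c; }
  }
  return best;
}
```

With the fixtures above, a single SF-area neighbor selects NodeSF, while the SF+Denver pair produces a centroid whose nearest candidate is NodeDEN, matching both test expectations.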
// ===== SNR/RSSI Number casting =====
{
// These test the pattern used in observer-detail.js, home.js, traces.js, live.js
@@ -1718,6 +1804,128 @@ console.log('\n=== app.js: formatEngineBadge ===');
});
}
// ===== APP.JS: computeBreakdownRanges =====
console.log('\n=== app.js: computeBreakdownRanges ===');
{
const ctx = makeSandbox();
loadInCtx(ctx, 'public/roles.js');
loadInCtx(ctx, 'public/app.js');
const computeBreakdownRanges = ctx.computeBreakdownRanges;
function findRange(ranges, label) {
return ranges.find(r => r.label === label);
}
test('returns [] for empty hex', () => {
assert.deepEqual(computeBreakdownRanges('', 1, 5), []);
});
test('returns [] for too-short hex (< 2 bytes)', () => {
assert.deepEqual(computeBreakdownRanges('15', 1, 5), []);
});
test('FLOOD non-transport: 4-hop hash_size=1', () => {
// header=15, plb=04 → hash_size=1, hash_count=4
// bytes: 15 04 90 FA F9 10 6E 01 D9
const r = computeBreakdownRanges('150490FAF910 6E01D9'.replace(/\s/g,''), 1, 5);
assert.deepEqual(findRange(r, 'Header'), { start: 0, end: 0, label: 'Header' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 5, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 6, end: 8, label: 'Payload' });
assert.strictEqual(findRange(r, 'Transport Codes'), undefined);
});
test('FLOOD non-transport: 7-hop hash_size=1', () => {
// header=15, plb=07
const hex = '15077f6d7d1cadeca33988fd95e0851ebf01ea12e1879e';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 8, label: 'Path' });
const payload = findRange(r, 'Payload');
assert.strictEqual(payload.start, 9, 'payload starts after the 7 path bytes');
});
test('FLOOD non-transport: 8-hop hash_size=1', () => {
const hex = '1508' + '11223344556677AA' + 'BBCCDD';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 12, label: 'Payload' });
});
test('Direct advert: 0-hop, no Path range', () => {
// plb=00 → 0 hops; expect Path Length but NO Path range
const r = computeBreakdownRanges('1100AABBCCDD', 1, 4);
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('Transport route shifts path-length offset by 4', () => {
// route_type=0 (TRANSPORT_FLOOD): bytes 1..4 are Transport Codes
// header=14, transport=AABBCCDD, plb=02, hops=11 22, payload=99
const hex = '14AABBCCDD021122' + '99';
const r = computeBreakdownRanges(hex, 0, 5);
assert.deepEqual(findRange(r, 'Transport Codes'), { start: 1, end: 4, label: 'Transport Codes' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 5, end: 5, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 6, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=2 (plb top bits=01): 4 hops × 2 bytes', () => {
// plb = 01 000100 = 0x44 → hash_size=2, hash_count=4 → 8 path bytes
const hex = '15' + '44' + 'AABB' + 'CCDD' + 'EEFF' + '1122' + '9988';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 11, label: 'Payload' });
});
test('hash_size=3 (plb top bits=10): 2 hops × 3 bytes', () => {
// plb = 10 000010 = 0x82 → hash_size=3, hash_count=2 → 6 path bytes
const hex = '15' + '82' + 'AABBCC' + 'DDEEFF' + '99';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=4 (plb top bits=11): 2 hops × 4 bytes', () => {
// plb = 11 000010 = 0xC2 → hash_size=4, hash_count=2 → 8 path bytes
const hex = '15' + 'C2' + 'AABBCCDD' + 'EEFF1122' + '99887766';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 13, label: 'Payload' });
});
test('truncated path: not enough bytes → no Path range', () => {
// plb=04 says 4 hops but only 2 bytes remain
const hex = '1504AABB';
const r = computeBreakdownRanges(hex, 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('ADVERT (payload_type=4) with full record: PubKey/Timestamp/Signature/Flags', () => {
// header=11, plb=00 (direct advert)
// payload: 32 bytes pubkey + 4 bytes ts + 64 bytes sig + 1 byte flags
const pubkey = 'AB'.repeat(32);
const ts = '11223344';
const sig = 'CD'.repeat(64);
const flags = '00';
const hex = '1100' + pubkey + ts + sig + flags;
const r = computeBreakdownRanges(hex, 1, 4);
assert.deepEqual(findRange(r, 'PubKey'), { start: 2, end: 33, label: 'PubKey' });
assert.deepEqual(findRange(r, 'Timestamp'), { start: 34, end: 37, label: 'Timestamp' });
assert.deepEqual(findRange(r, 'Signature'), { start: 38, end: 101, label: 'Signature' });
assert.deepEqual(findRange(r, 'Flags'), { start: 102, end: 102, label: 'Flags' });
});
test('NaN-safe: malformed path-length byte produces no Path range', () => {
// A non-hex character at the plb position would make parseInt return NaN
// and bail out. Empty and 1-byte inputs are already covered above, so
// exercise the out-of-range case instead: plb=0xFF → hash_size=4,
// hash_count=63, which needs far more bytes than the input has → no Path.
const r = computeBreakdownRanges('15FF' + 'AA', 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
}
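Every hash_size/hash_count case in this section decodes the same path-length byte. A sketch of that decoding, under the layout the comments above assume (top two bits select the per-hop hash size, low six bits carry the hop count); this is an illustration of the bit layout, not the `computeBreakdownRanges` source:

```javascript
// decodePathLenByteSketch: hypothetical helper mirroring the plb comments above.
function decodePathLenByteSketch(plb) {
  return {
    hashSize: (plb >> 6) + 1, // 00→1, 01→2, 10→3, 11→4 bytes per hop
    hashCount: plb & 0x3f,    // number of hops recorded in the path
  };
}
```

For example, 0x44 decodes to hash_size=2 and hash_count=4 (8 path bytes), agreeing with the hash_size=2 test above.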
// ===== APP.JS: isTransportRoute + transportBadge =====
console.log('\n=== app.js: isTransportRoute + transportBadge ===');
{
@@ -2807,6 +3015,126 @@ console.log('\n=== channels.js: encrypted channel without key shows lock message
const messageApiFetched = apiCallPaths.some(p => p.indexOf('/messages') !== -1);
assert.ok(!messageApiFetched, 'should NOT fetch messages API for encrypted channel without key');
});
// #825 regression: deep link to a `#`-named channel not in the loaded list.
// The 3 acceptance cases (unencrypted / encrypted-no-key / encrypted-with-key)
// must each behave correctly without the unconditional lock affordance.
async function runHashDeepLinkScenario(opts) {
// opts: { includeEncryptedChannels: [...], storedKey: { name, hex } | null, target: '#name' }
const ctx = makeSandbox();
const dom = {};
function makeEl(id) {
if (dom[id]) return dom[id];
dom[id] = {
id, innerHTML: '', textContent: '', value: '',
scrollTop: 0, scrollHeight: 100, clientHeight: 80,
style: {}, dataset: {},
classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } },
addEventListener() {}, removeEventListener() {},
querySelector() { return null; }, querySelectorAll() { return []; },
getBoundingClientRect() { return { left: 0, bottom: 0, width: 0 }; },
setAttribute() {}, removeAttribute() {}, focus() {},
};
return dom[id];
}
const headerText = { textContent: '' };
makeEl('chHeader').querySelector = (sel) => (sel === '.ch-header-text' ? headerText : null);
['chMessages', 'chList', 'chScrollBtn', 'chAriaLive', 'chBackBtn', 'chRegionFilter'].forEach(makeEl);
const appEl = {
innerHTML: '',
querySelector(sel) {
if (sel === '.ch-sidebar' || sel === '.ch-sidebar-resize' || sel === '.ch-main') return makeEl(sel);
if (sel === '.ch-layout') return { classList: { add() {}, remove() {}, contains() { return false; } } };
return makeEl(sel);
},
addEventListener() {},
};
let apiCallPaths = [];
ctx.document.getElementById = makeEl;
ctx.document.querySelector = (sel) => {
if (sel === '.ch-layout') return { classList: { add() {}, remove() {}, contains() { return false; } } };
return null;
};
ctx.document.querySelectorAll = () => [];
ctx.document.addEventListener = () => {};
ctx.document.removeEventListener = () => {};
ctx.document.documentElement = { getAttribute: () => null, setAttribute: () => {} };
ctx.document.body = { appendChild() {}, removeChild() {}, contains() { return false; } };
ctx.history = { replaceState() {} };
ctx.matchMedia = () => ({ matches: false });
ctx.window.matchMedia = ctx.matchMedia;
ctx.MutationObserver = function () { this.observe = () => {}; this.disconnect = () => {}; };
ctx.RegionFilter = { init() {}, onChange() { return () => {}; }, offChange() {}, getRegionParam() { return ''; } };
ctx.debouncedOnWS = (fn) => fn;
ctx.onWS = () => {};
ctx.offWS = () => {};
ctx.api = (path) => {
apiCallPaths.push(path);
if (path.indexOf('/observers') === 0) return Promise.resolve({ observers: [] });
if (path.indexOf('/channels') === 0 && path.indexOf('/messages') === -1) {
// The default (toggle-off) list omits encrypted channels; they only
// appear when includeEncrypted=true is requested
if (path.indexOf('includeEncrypted=true') !== -1) {
return Promise.resolve({ channels: opts.includeEncryptedChannels || [] });
}
return Promise.resolve({ channels: [] });
}
if (path.indexOf('/messages') !== -1) {
return Promise.resolve({ messages: [{ sender: 'X', text: 'hello', timestamp: '2025-01-01T00:00:00Z' }] });
}
return Promise.resolve({});
};
ctx.CLIENT_TTL = { observers: 120000, channels: 15000, channelMessages: 10000, nodeDetail: 10000 };
ctx.ROLE_EMOJI = {}; ctx.ROLE_LABELS = {};
ctx.timeAgo = () => '1m ago';
ctx.registerPage = (name, handlers) => { ctx._pageHandlers = handlers; };
ctx.btoa = (s) => Buffer.from(String(s), 'utf8').toString('base64');
ctx.atob = (s) => Buffer.from(String(s), 'base64').toString('utf8');
ctx.crypto = { subtle: require('crypto').webcrypto.subtle };
ctx.TextEncoder = TextEncoder; ctx.TextDecoder = TextDecoder; ctx.Uint8Array = Uint8Array;
loadInCtx(ctx, 'public/channel-decrypt.js');
loadInCtx(ctx, 'public/channels.js');
if (opts.storedKey) {
ctx.ChannelDecrypt.saveKey(opts.storedKey.name, opts.storedKey.hex);
}
ctx._pageHandlers.init(appEl);
for (let i = 0; i < 10; i++) await Promise.resolve();
apiCallPaths = [];
await ctx.window._channelsSelectChannelForTest(opts.target);
return { msgHtml: dom['chMessages'].innerHTML, apiCallPaths };
}
test('#825: deep link to unencrypted #channel falls through to REST and renders messages', async () => {
const r = await runHashDeepLinkScenario({
target: '#test',
includeEncryptedChannels: [{ hash: '#test', name: '#test', messageCount: 3, lastActivity: null, encrypted: null }],
storedKey: null,
});
assert.ok(!r.msgHtml.includes('🔒'), 'unencrypted #channel must NOT show lock affordance');
const messageApiFetched = r.apiCallPaths.some(p => p.indexOf('/messages') !== -1);
assert.ok(messageApiFetched, 'unencrypted #channel must fetch messages REST endpoint');
});
test('#811 preserved: deep link to encrypted #channel without key shows lock', async () => {
const r = await runHashDeepLinkScenario({
target: '#private',
includeEncryptedChannels: [{ hash: '#private', name: '#private', messageCount: 5, lastActivity: null, encrypted: true }],
storedKey: null,
});
assert.ok(r.msgHtml.includes('🔒'), 'encrypted #channel without key must show lock affordance');
assert.ok(r.msgHtml.includes('no decryption key'), 'lock should mention no decryption key');
const messageApiFetched = r.apiCallPaths.some(p => p.indexOf('/messages') !== -1);
assert.ok(!messageApiFetched, 'must NOT fetch /messages REST for encrypted channel without key');
});
test('#815 preserved: deep link to #channel with stored key triggers decrypt path (no lock)', async () => {
const r = await runHashDeepLinkScenario({
target: '#private',
includeEncryptedChannels: [{ hash: '#private', name: '#private', messageCount: 5, lastActivity: null, encrypted: true }],
storedKey: { name: '#private', hex: 'abcd1234abcd1234abcd1234abcd1234' },
});
assert.ok(!r.msgHtml.includes('no decryption key'), 'must not show no-key lock when key is stored');
// Decrypt path either renders something or shows decrypt-specific empty/wrong-key state — never the no-key lock.
});
}
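The three deep-link scenarios above reduce to one branch decision. A hedged sketch of that decision (an assumption distilled from the test expectations; the real logic lives in `public/channels.js`):

```javascript
// deepLinkBranchSketch: hypothetical summary of the #811/#815/#825 acceptance
// branches — which rendering path a deep-linked channel should take.
function deepLinkBranchSketch(channel, hasStoredKey) {
  if (channel.encrypted && !hasStoredKey) return 'lock';    // #811: show lock, no /messages fetch
  if (channel.encrypted && hasStoredKey) return 'decrypt';  // #815: client-side decrypt path
  return 'rest';                                            // #825: plain REST /messages fetch
}
```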
// ===== PACKETS.JS: savedTimeWindowMin default guard =====
console.log('\n=== packets.js: savedTimeWindowMin defaults ===');
@@ -5240,6 +5568,11 @@ console.log('\n=== packets.js: buildFieldTable transport offsets (#765) ===');
ftCtx.window.truncate = ftCtx.truncate;
ftCtx.escapeHtml = (s) => String(s || '').replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;');
ftCtx.window.escapeHtml = ftCtx.escapeHtml;
ftCtx.window.HopDisplay = { renderHop: (hex) => hex };
ftCtx.isTransportRoute = (rt) => rt === 0 || rt === 3;
ftCtx.window.isTransportRoute = ftCtx.isTransportRoute;
ftCtx.getPathLenOffset = (rt) => ftCtx.isTransportRoute(rt) ? 5 : 1;
ftCtx.window.getPathLenOffset = ftCtx.getPathLenOffset;
loadInCtx(ftCtx, 'public/packets.js');
const { buildFieldTable, fieldRow } = ftCtx.window._packetsTestAPI;
@@ -5305,6 +5638,73 @@ console.log('\n=== packets.js: buildFieldTable transport offsets (#765) ===');
});
}
// ===== packets.js: buildFieldTable hop count from path_len (#844) =====
console.log('\n=== packets.js: buildFieldTable hop count from path_len (#844) ===');
{
const ftCtx = makeSandbox();
ftCtx.registerPage = () => {};
ftCtx.onWS = () => {};
ftCtx.offWS = () => {};
ftCtx.api = () => Promise.resolve({});
ftCtx.window.getParsedPath = () => [];
ftCtx.window.getParsedDecoded = () => ({});
const ROUTE_TYPES = {0:'TRANSPORT_FLOOD',1:'FLOOD',2:'DIRECT',3:'TRANSPORT_DIRECT'};
const PAYLOAD_TYPES = {0:'ADVERT',1:'TXT_MSG',2:'GRP_TXT',3:'REQ',4:'ACK'};
ftCtx.routeTypeName = (n) => ROUTE_TYPES[n] || 'UNKNOWN';
ftCtx.payloadTypeName = (n) => PAYLOAD_TYPES[n] || 'UNKNOWN';
ftCtx.window.routeTypeName = ftCtx.routeTypeName;
ftCtx.window.payloadTypeName = ftCtx.payloadTypeName;
ftCtx.truncate = (str, len) => str && str.length > len ? str.slice(0, len) + '…' : (str || '');
ftCtx.window.truncate = ftCtx.truncate;
ftCtx.escapeHtml = (s) => String(s || '').replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;');
ftCtx.window.escapeHtml = ftCtx.escapeHtml;
ftCtx.window.HopDisplay = { renderHop: (hex) => hex };
ftCtx.isTransportRoute = (rt) => rt === 0 || rt === 3;
ftCtx.window.isTransportRoute = ftCtx.isTransportRoute;
ftCtx.getPathLenOffset = (rt) => ftCtx.isTransportRoute(rt) ? 5 : 1;
ftCtx.window.getPathLenOffset = ftCtx.getPathLenOffset;
loadInCtx(ftCtx, 'public/packets.js');
const { buildFieldTable } = ftCtx.window._packetsTestAPI;
test('#885: byte breakdown uses pathHops length (single source of truth)', () => {
// After #885 the byte breakdown agrees with the path pill: both render
// from the per-observation path_json. raw_hex is the underlying bytes
// for that same observation, so consistency is by construction.
// path_len = 0x42 → hash_size=2, hash_count=2
// raw_hex: header(11) + path_len(42) + hop0(41B1) + hop1(27D7) + pubkey(32 bytes)...
const pubkey = 'C0DEDAD4'.padEnd(64, '0'); // 32 bytes = 64 hex chars
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
// Per-obs path_json IS the source of truth — pass the 2 hops that match raw_hex.
const pathHops = ['41B1', '27D7'];
const html = buildFieldTable(pkt, {}, pathHops, {});
assert.ok(html.includes('Path (2 hops)'), 'Should show "Path (2 hops)"');
assert.ok(html.includes('41B1'), 'Should show hop 0 = 41B1');
assert.ok(html.includes('27D7'), 'Should show hop 1 = 27D7');
});
test('#885: pubkey offset advances by hashSize * pathHops.length', () => {
const pubkey = 'C0DEDAD4'.padEnd(64, '0');
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
const html = buildFieldTable(pkt, { type: 'ADVERT', pubKey: pubkey }, ['41B1', '27D7'], {});
// Public Key should be at offset 6 (1 header + 1 path_len + 2*2 hops = 6)
assert.ok(html.includes('>6<') || html.includes('"6"'),
'Public Key should be at offset 6');
});
test('#844: hashCountVal=0 (direct advert) skips Path section', () => {
// path_len = 0x00 → hash_size=1, hash_count=0
const raw = '1100' + '0'.repeat(200);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
const html = buildFieldTable(pkt, {}, [], {});
assert.ok(!html.includes('section-path'), 'Should not render Path section for direct advert');
assert.ok(html.includes('direct advert'), 'Should note direct advert in path_length description');
});
}
// ===== live.js: anomaly icon in feed =====
console.log('\n=== live.js: anomaly icon in feed ===');
{
@@ -5504,6 +5904,15 @@ console.log('\n=== channel-decrypt.js: key derivation, MAC, parsing, storage ===
assert.strictEqual(ctx.window.renderSkewBadge(null, 0), '');
});
test('renderSkewBadge renders bimodal_clock badge with tooltip (#845)', () => {
var cs = { goodFraction: 0.6, recentBadSampleCount: 4, recentSampleCount: 10 };
var html = ctx.window.renderSkewBadge('bimodal_clock', -5, cs);
assert.ok(html.includes('skew-badge--bimodal_clock'), 'should contain bimodal_clock class');
assert.ok(html.includes('bimodal'), 'tooltip should mention bimodal');
assert.ok(html.includes('40%'), 'tooltip should show bad percentage');
assert.ok(html.includes('⏰'), 'should contain clock emoji');
});
test('renderSkewSparkline returns SVG with data points', () => {
var samples = [
{ ts: 1000, skew: 10 },
@@ -5696,6 +6105,352 @@ console.log('\n=== analytics.js: renderCollisionsFromServer collision table ==='
});
}
// ===== Issue #849: Per-observation packet detail tests =====
{
console.log('\n=== Issue #849: Per-observation packet detail ===');
// Test helper: extract hop count from raw_hex path_len byte
function extractRawHopCount(rawHex, routeType) {
if (!rawHex || rawHex.length < 4) return null;
let plOff = 1;
if (routeType === 0 || routeType === 3) plOff = 5;
const plByte = parseInt(rawHex.slice(plOff * 2, plOff * 2 + 2), 16);
if (isNaN(plByte)) return null;
return plByte & 0x3F;
}
test('#849: hop count from raw_hex path_len byte (2 hops)', () => {
// path_len byte = 0x82: hash_size=(0x82>>6)+1=3, hash_count=0x82&0x3F=2
const rawHex = '0482aabbccddee'; // header + path_len(0x82) + path data
assert.strictEqual(extractRawHopCount(rawHex, 1), 2);
});
test('#849: hop count from raw_hex path_len byte (0 hops = direct)', () => {
const rawHex = '0400'; // header + path_len=0x00
assert.strictEqual(extractRawHopCount(rawHex, 1), 0);
});
test('#849: hop count from raw_hex for transport route (offset 5)', () => {
// Transport routes have 4 bytes of transport codes before path_len
const rawHex = '00112233440541B127D7'; // header + 4 transport bytes + path_len(0x05)=5 hops
assert.strictEqual(extractRawHopCount(rawHex, 0), 5);
});
test('#849: hop count warns on inconsistency (path_json vs raw_hex)', () => {
// path_json has 3 hops, but raw_hex says 2
const pathJson = ['41B1', '27D7', '5EB0'];
const rawHopCount = 2;
assert.notStrictEqual(pathJson.length, rawHopCount, 'should detect inconsistency');
// In production code, rawHopCount is trusted
assert.strictEqual(rawHopCount, 2);
});
test('#849: per-observation fields override aggregated packet fields', () => {
const pkt = { id: 1, hash: 'abc', observer_id: 'obs-agg', snr: 10, rssi: -90, path_json: '["A","B","C"]', timestamp: '2026-01-01T00:00:00Z' };
const obs = { id: 2, observer_id: 'obs-1', snr: 5, rssi: -85, path_json: '["A"]', timestamp: '2026-01-01T00:01:00Z' };
// Simulate what renderDetail does: spread obs over pkt
const effective = {...pkt, ...obs, _isObservation: true};
delete effective._parsedPath; // clear cache
assert.strictEqual(effective.observer_id, 'obs-1');
assert.strictEqual(effective.snr, 5);
assert.strictEqual(effective.rssi, -85);
assert.strictEqual(effective.timestamp, '2026-01-01T00:01:00Z');
});
test('#849: first observation used when no specific observation selected', () => {
const observations = [
{ id: 10, observer_id: 'obs-A', path_json: '["X"]' },
{ id: 20, observer_id: 'obs-B', path_json: '["X","Y","Z"]' }
];
// No targetObsId → use observations[0]
const currentObs = observations[0];
assert.strictEqual(currentObs.id, 10);
assert.strictEqual(currentObs.observer_id, 'obs-A');
});
test('#849: clicking observation row selects that observation', () => {
const observations = [
{ id: 10, observer_id: 'obs-A', path_json: '["X"]' },
{ id: 20, observer_id: 'obs-B', path_json: '["X","Y","Z"]' }
];
const targetObsId = '20';
const currentObs = observations.find(o => String(o.id) === String(targetObsId));
assert.ok(currentObs);
assert.strictEqual(currentObs.observer_id, 'obs-B');
});
test('#849: null/missing raw_hex returns null hop count', () => {
assert.strictEqual(extractRawHopCount(null, 1), null);
assert.strictEqual(extractRawHopCount('', 1), null);
assert.strictEqual(extractRawHopCount('04', 1), null); // too short
});
}
// ===== Issue #852: hashSize offset + var(--muted) regression =====
{
console.log('\n=== Issue #852: hashSize path_len offset + var(--muted) regression ===');
// Use getPathLenOffset from app.js (loaded via vm context) to avoid duplicating offset logic
const ctx852 = makeSandbox();
loadInCtx(ctx852, 'public/roles.js');
loadInCtx(ctx852, 'public/app.js');
function extractHashSize(rawHex, routeType) {
const plOff = ctx852.getPathLenOffset(routeType);
const rawPathByte = rawHex ? parseInt(rawHex.slice(plOff * 2, plOff * 2 + 2), 16) : NaN;
return (isNaN(rawPathByte) || (rawPathByte & 0x3F) === 0) ? null : ((rawPathByte >> 6) + 1);
}
test('#852: hashSize for flood route (route_type=1, offset 1)', () => {
// Byte at offset 1 = 0x82 → hash_size = (0x82 >> 6) + 1 = 3
const rawHex = '0482aabbccddee';
assert.strictEqual(extractHashSize(rawHex, 1), 3);
});
test('#852: hashSize for transport flood route (route_type=0, offset 5)', () => {
// Bytes 1-4 are transport codes, byte at offset 5 = 0x45 → hash_size = (0x45 >> 6) + 1 = 2
const rawHex = '001122334445aabb';
assert.strictEqual(extractHashSize(rawHex, 0), 2);
});
test('#852: hashSize for transport direct route (route_type=3, offset 5)', () => {
const rawHex = '00aabbccdd85aabb';
assert.strictEqual(extractHashSize(rawHex, 3), 3); // 0x85 >> 6 = 2, +1 = 3
});
test('#852: hashSize returns null for missing raw_hex', () => {
assert.strictEqual(extractHashSize(null, 1), null);
assert.strictEqual(extractHashSize('', 0), null);
});
test('#852: no var(--muted) in public/ files (regression guard)', () => {
const fs = require('fs');
const path = require('path');
const pubDir = path.join(__dirname, 'public');
const files = fs.readdirSync(pubDir).filter(f => f.endsWith('.js') || f.endsWith('.css'));
files.forEach(f => {
const content = fs.readFileSync(path.join(pubDir, f), 'utf8');
// Match var(--muted) but not var(--text-muted) or var(--bg-muted) etc.
const matches = content.match(/var\(--muted\)/g);
if (matches) throw new Error(`${f} contains undefined CSS var var(--muted); use var(--text-muted)`);
});
});
}
// ─── #862: Pubkey prefix search ──────────────────────────────────────────────
{
const ctx = makeSandbox();
ctx.ROLE_COLORS = { repeater: '#22c55e', room: '#6366f1', companion: '#3b82f6', sensor: '#f59e0b' };
ctx.ROLE_STYLE = {};
ctx.TYPE_COLORS = {};
ctx.getNodeStatus = () => 'active';
ctx.getHealthThresholds = () => ({ staleMs: 600000, degradedMs: 1800000, silentMs: 86400000 });
ctx.timeAgo = () => '1m ago';
ctx.truncate = (s) => s;
ctx.escapeHtml = (s) => String(s || '');
ctx.payloadTypeName = () => 'Advert';
ctx.payloadTypeColor = () => 'advert';
ctx.registerPage = () => {};
ctx.RegionFilter = { init: () => {}, onChange: () => () => {}, getRegionParam: () => '' };
ctx.debouncedOnWS = () => null;
ctx.onWS = () => {};
ctx.offWS = () => {};
ctx.debounce = (fn) => fn;
ctx.api = () => Promise.resolve({ nodes: [], counts: {} });
ctx.invalidateApiCache = () => {};
ctx.CLIENT_TTL = { nodeList: 90000, nodeDetail: 240000, nodeHealth: 240000 };
ctx.initTabBar = () => {};
ctx.getFavorites = () => [];
ctx.favStar = () => '';
ctx.bindFavStars = () => {};
ctx.makeColumnsResizable = () => {};
ctx.Set = Set;
ctx.HEALTH_THRESHOLDS = { infraSilentMs: 86400000, nodeSilentMs: 7200000 };
loadInCtx(ctx, 'public/nodes.js');
const matchesSearch = ctx.window._nodesMatchesSearch;
test('#862: _nodesMatchesSearch matches name substring', () => {
const node = { name: 'MyRepeater', public_key: '3faebb0011223344' };
assert.strictEqual(matchesSearch(node, 'repeat'), true);
assert.strictEqual(matchesSearch(node, 'REPEAT'), true);
});
test('#862: _nodesMatchesSearch matches pubkey prefix (hex)', () => {
const node = { name: 'MyRepeater', public_key: '3faebb0011223344' };
assert.strictEqual(matchesSearch(node, '3f'), true);
assert.strictEqual(matchesSearch(node, '3fae'), true);
assert.strictEqual(matchesSearch(node, '3FAEBB'), true);
});
test('#862: _nodesMatchesSearch does NOT match pubkey substring (only prefix)', () => {
const node = { name: 'MyRepeater', public_key: '3faebb0011223344' };
assert.strictEqual(matchesSearch(node, 'aebb'), false);
});
test('#862: _nodesMatchesSearch returns true for empty query', () => {
const node = { name: 'Test', public_key: 'abcdef1234567890' };
assert.strictEqual(matchesSearch(node, ''), true);
assert.strictEqual(matchesSearch(node, null), true);
});
test('#862: _nodesMatchesSearch mixed query (non-hex) only matches name', () => {
const node = { name: 'alpha', public_key: 'abcdef1234567890' };
assert.strictEqual(matchesSearch(node, 'xyz'), false);
assert.strictEqual(matchesSearch(node, 'alph'), true);
});
test('#862: _nodesMatchesSearch hex-named node — name "cafe" with pubkey "deadbeef..."', () => {
const node = { name: 'cafe', public_key: 'deadbeef11223344' };
// "cafe" matches by name (substring), NOT pubkey prefix
assert.strictEqual(matchesSearch(node, 'cafe'), true);
// "dead" matches by pubkey prefix
assert.strictEqual(matchesSearch(node, 'dead'), true);
// "beef" should NOT match: not a pubkey prefix, not a name substring
assert.strictEqual(matchesSearch(node, 'beef'), false);
// "ca" matches name substring
assert.strictEqual(matchesSearch(node, 'ca'), true);
});
}
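The #862 assertions above all follow from one matching rule. As a sketch (assumed to mirror `_nodesMatchesSearch` in `public/nodes.js`, inferred from the test expectations only): case-insensitive name substring match, plus a pubkey *prefix* match when the query is pure hex.

```javascript
// matchesSearchSketch: hypothetical restatement of the #862 matching rule.
function matchesSearchSketch(node, query) {
  if (!query) return true; // empty query matches everything
  const q = String(query).toLowerCase();
  if ((node.name || '').toLowerCase().includes(q)) return true;          // name substring
  if (/^[0-9a-f]+$/.test(q) &&
      (node.public_key || '').toLowerCase().startsWith(q)) return true;  // pubkey prefix only
  return false;
}
```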
// ===== Issue #866: Full-page obs-switch — hex + path must update per observation =====
{
console.log('\n=== Issue #866: Full-page observation switch ===');
const ctx866 = makeSandbox();
loadInCtx(ctx866, 'public/roles.js');
loadInCtx(ctx866, 'public/app.js');
loadInCtx(ctx866, 'public/packet-helpers.js');
test('#866: switching observation updates effectivePkt path_json', () => {
const pkt = { id: 1, hash: 'abc123', observer_id: 'obs-agg', path_json: '["A","B","C","D"]', raw_hex: '0484A1B1C1D1', route_type: 1, timestamp: '2026-01-01T00:00:00Z' };
const obs1 = { id: 10, observer_id: 'obs-1', path_json: '["A","B"]', snr: 5, rssi: -80, timestamp: '2026-01-01T00:01:00Z' };
const obs2 = { id: 20, observer_id: 'obs-2', path_json: '["A","B","C","D"]', snr: 8, rssi: -75, timestamp: '2026-01-01T00:02:00Z' };
// Simulate renderDetail logic: pick obs1
const eff1 = ctx866.clearParsedCache({...pkt, ...obs1, _isObservation: true});
const path1 = ctx866.getParsedPath(eff1);
assert.deepStrictEqual(path1, ['A', 'B']);
assert.strictEqual(eff1.observer_id, 'obs-1');
assert.strictEqual(eff1.snr, 5);
// Switch to obs2
const eff2 = ctx866.clearParsedCache({...pkt, ...obs2, _isObservation: true});
const path2 = ctx866.getParsedPath(eff2);
assert.deepStrictEqual(path2, ['A', 'B', 'C', 'D']);
assert.strictEqual(eff2.observer_id, 'obs-2');
assert.strictEqual(eff2.snr, 8);
});
test('#866: effectivePkt preserves raw_hex from packet when obs has none', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', path_json: '["AA"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs doesn't have raw_hex, so packet's raw_hex survives spread
assert.strictEqual(eff.raw_hex, '0482AABB');
});
test('#866: effectivePkt uses obs raw_hex when available (API now returns it)', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', raw_hex: '0441CC', path_json: '["CC"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs has raw_hex from API, should override
assert.strictEqual(eff.raw_hex, '0441CC');
});
test('#866: direction field carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', direction: 'rx', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', direction: 'tx', path_json: '[]', timestamp: '2026-01-01T00:00:00Z' };
const eff = {...pkt, ...obs, _isObservation: true};
assert.strictEqual(eff.direction, 'tx');
});
test('#866: resolved_path carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', resolved_path: '["aaa","bbb","ccc"]', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', resolved_path: '["aaa"]', path_json: '["AA"]', timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
const rp = ctx866.getResolvedPath(eff);
assert.deepStrictEqual(rp, ['aaa']);
});
test('#866: getPathLenOffset used for hop count cross-check', () => {
// Flood route: offset 1
assert.strictEqual(ctx866.getPathLenOffset(1), 1);
assert.strictEqual(ctx866.getPathLenOffset(2), 1);
// Transport route: offset 5
assert.strictEqual(ctx866.getPathLenOffset(0), 5);
assert.strictEqual(ctx866.getPathLenOffset(3), 5);
});
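The offsets asserted above encode the route-type convention the UI cross-checks hop counts against. A minimal sketch of that mapping, as a hypothetical re-implementation (the `FLOOD_ROUTE_TYPES` set and the `getPathLenOffsetSketch` name are assumptions, not the shipped helper):

```javascript
// Hypothetical sketch of the route-type → path-length-offset convention the
// assertions above pin down: route types 1 and 2 are flood routes (1 extra
// byte in the path field), 0 and 3 are transport routes (5 extra bytes).
const FLOOD_ROUTE_TYPES = new Set([1, 2]);
function getPathLenOffsetSketch(routeType) {
  return FLOOD_ROUTE_TYPES.has(routeType) ? 1 : 5;
}
console.log(getPathLenOffsetSketch(1)); // 1 (flood)
console.log(getPathLenOffsetSketch(0)); // 5 (transport)
```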
test('#866: URL hash should encode obs parameter for deep linking', () => {
// Simulate the URL construction pattern from renderDetail obs click
const pktHash = 'abc123def456';
const obsId = '42';
const url = `#/packets/${pktHash}?obs=${obsId}`;
assert.strictEqual(url, '#/packets/abc123def456?obs=42');
// Parse back
const qIdx = url.indexOf('?');
const qs = new URLSearchParams(url.substring(qIdx));
assert.strictEqual(qs.get('obs'), '42');
});
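The test above exercises the deep-link pattern inline; a small helper mirroring it could look like the sketch below (`parsePacketHash` is an assumed name for illustration, not an exported app.js function):

```javascript
// Hypothetical helper mirroring the deep-link pattern tested above: split a
// location hash of the form '#/packets/<hash>?obs=<id>' into its parts.
// Returns obsId === null when no ?obs= query is present.
function parsePacketHash(hash) {
  const qIdx = hash.indexOf('?');
  const path = qIdx === -1 ? hash : hash.substring(0, qIdx);
  const params = new URLSearchParams(qIdx === -1 ? '' : hash.substring(qIdx));
  return { pktHash: path.replace('#/packets/', ''), obsId: params.get('obs') };
}
console.log(parsePacketHash('#/packets/abc123def456?obs=42'));
// { pktHash: 'abc123def456', obsId: '42' }
```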
}
// ===== #872 — hop-display unreliable badge =====
{
console.log('\n--- #872: hop-display unreliable warning badge ---');
function makeHopDisplaySandbox() {
const sb = {
window: { addEventListener: () => {}, dispatchEvent: () => {} },
document: {
readyState: 'complete',
createElement: () => ({ id: '', textContent: '', innerHTML: '' }),
head: { appendChild: () => {} },
getElementById: () => null,
addEventListener: () => {},
querySelectorAll: () => [],
querySelector: () => null,
},
console,
Date, Math, Array, Object, String, Number, JSON, RegExp, Map, Set,
encodeURIComponent, parseInt, parseFloat, isNaN, Infinity, NaN, undefined,
setTimeout: () => {}, setInterval: () => {}, clearTimeout: () => {}, clearInterval: () => {},
};
sb.window.document = sb.document;
sb.self = sb.window;
sb.globalThis = sb.window;
const ctx = vm.createContext(sb);
const hopSrc = fs.readFileSync(__dirname + '/public/hop-display.js', 'utf8');
vm.runInContext(hopSrc, ctx);
return ctx;
}
const hopCtx = makeHopDisplaySandbox();
test('#872: unreliable hop renders warning badge, not strikethrough', () => {
const html = hopCtx.window.HopDisplay.renderHop('AABB', {
name: 'TestNode', pubkey: 'pk123', unreliable: true,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
// Must contain unreliable warning badge button
assert.ok(html.includes('hop-unreliable-btn'), 'should have unreliable badge button');
assert.ok(html.includes('⚠️'), 'should have ⚠️ icon');
assert.ok(html.includes('Unreliable name resolution'), 'should have tooltip text');
// Must NOT contain line-through in inline style (CSS class no longer has it)
assert.ok(!html.includes('line-through'), 'should not contain line-through');
// Should still have hop-unreliable class for subtle styling
assert.ok(html.includes('hop-unreliable'), 'should have hop-unreliable class');
});
test('#872: reliable hop does NOT render unreliable badge', () => {
const html = hopCtx.window.HopDisplay.renderHop('CCDD', {
name: 'GoodNode', pubkey: 'pk456', unreliable: false,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
assert.ok(!html.includes('hop-unreliable-btn'), 'should not have unreliable badge');
});
}
// ===== SUMMARY =====
Promise.allSettled(pendingTests).then(() => {
console.log(`\n${'═'.repeat(40)}`);
@@ -95,5 +95,27 @@ const result6 = HopResolver.resolve(['ee44'], null, null, null, null, null);
assert(result6['ee44'].name === 'NodeD', 'Unique prefix resolves directly — got: ' + result6['ee44'].name);
assert(!result6['ee44'].ambiguous, 'Should not be marked ambiguous');
// Test 7: lat=0 / lon=0 candidates are NOT excluded (equator/prime-meridian bug fix)
console.log('\nTest 7: lat=0 / lon=0 candidates are included in geo scoring');
const nodeEquator = { public_key: 'ab5555', name: 'EquatorNode', lat: 0, lon: 10 };
const nodeFar = { public_key: 'ab6666', name: 'FarNode', lat: 60, lon: 60 };
const anchorNearEq = { public_key: 'cd7777', name: 'AnchorEq', lat: 1, lon: 11 };
HopResolver.init([nodeEquator, nodeFar, anchorNearEq]);
HopResolver.setAffinity({});
// Anchor near equator — EquatorNode (0,10) should be geo-closest
const result7 = HopResolver.resolve(['cd77', 'ab'], 1.0, 11.0, null, null, null);
assert(result7['ab'].name === 'EquatorNode',
'lat=0 candidate should be included and win by geo — got: ' + result7['ab'].name);
// Test 8: lon=0 candidate is also included
console.log('\nTest 8: lon=0 candidate is included in geo scoring');
const nodePrime = { public_key: 'ab8888', name: 'PrimeMeridian', lat: 10, lon: 0 };
const anchorNearPM = { public_key: 'cd9999', name: 'AnchorPM', lat: 11, lon: 1 };
HopResolver.init([nodePrime, nodeFar, anchorNearPM]);
HopResolver.setAffinity({});
const result8 = HopResolver.resolve(['cd99', 'ab'], 11.0, 1.0, null, null, null);
assert(result8['ab'].name === 'PrimeMeridian',
'lon=0 candidate should be included and win by geo — got: ' + result8['ab'].name);
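Tests 7 and 8 guard against the classic truthiness pitfall: a candidate filter written as `node.lat && node.lon` silently drops legitimate coordinates on the equator or prime meridian. A minimal sketch of the buggy vs. fixed check (the helper names and candidate shape are assumptions for illustration):

```javascript
// Hypothetical illustration of the equator/prime-meridian pitfall tests 7–8
// guard against: a truthiness check drops lat=0 / lon=0 candidates, while a
// finiteness check keeps them in geo scoring.
const hasGeoBuggy = (n) => Boolean(n.lat && n.lon);                       // 0 is falsy
const hasGeoFixed = (n) => Number.isFinite(n.lat) && Number.isFinite(n.lon);
const equatorNode = { lat: 0, lon: 10 };
console.log(hasGeoBuggy(equatorNode)); // false — wrongly excluded
console.log(hasGeoFixed(equatorNode)); // true — included in geo scoring
```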
console.log('\n' + (passed + failed) + ' tests, ' + passed + ' passed, ' + failed + ' failed\n');
process.exit(failed > 0 ? 1 : 0);