Compare commits

..

16 Commits

Author SHA1 Message Date
you 0be8b897bc test: add concurrent ingest + eviction race coverage 2026-04-30 19:44:38 +00:00
Kpa-clawbot 5678874128 fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)
Fixes #935

## Problem

`buildPrefixMap()` indexed ALL nodes regardless of role, causing
companions/sensors to appear as repeater hops when their pubkey prefix
collided with a path-hop hash byte.

## Fix

### Server (`cmd/server/store.go`)
- Added `canAppearInPath(role string) bool` — allowlist of roles that
can forward packets (repeater, room_server, room)
- `buildPrefixMap` now skips nodes that fail this check
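For orientation, a minimal Go sketch of the allowlist and the skip (field names simplified from the `nodeInfo` used in the tests further down; the actual code in `cmd/server/store.go` may differ):

```go
package prefixmap

import "strings"

// Minimal stand-in for the server's nodeInfo (see the test diffs below).
type nodeInfo struct {
	Role      string
	PublicKey string
}

// canAppearInPath is the role allowlist described above: only roles that can
// forward packets are eligible to show up as path hops.
func canAppearInPath(role string) bool {
	switch role {
	case "repeater", "room_server", "room":
		return true
	}
	return false
}

// buildPrefixMap sketch: index only path-eligible nodes by 1-byte (2 hex char)
// pubkey prefix, so companions/sensors can never collide into a path hop.
func buildPrefixMap(nodes []nodeInfo) map[string][]nodeInfo {
	idx := make(map[string][]nodeInfo)
	for _, n := range nodes {
		if !canAppearInPath(n.Role) {
			continue
		}
		p := strings.ToLower(n.PublicKey)
		if len(p) >= 2 {
			idx[p[:2]] = append(idx[p[:2]], n)
		}
	}
	return idx
}
```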

### Client (`public/hop-resolver.js`)
- Added matching `canAppearInPath(role)` helper
- `init()` now only populates `prefixIdx` for path-eligible nodes
- `pubkeyIdx` remains complete — `resolveFromServer()` still resolves
any node type by full pubkey (for server-confirmed `resolved_path`
arrays)

## Tests

- `cmd/server/prefix_map_role_test.go`: 7 new tests covering role
filtering in prefix map and resolveWithContext
- `test-hop-resolver-affinity.js`: 4 new tests verifying client-side
role filter + pubkeyIdx completeness
- All existing tests updated to include `Role: "repeater"` where needed
- `go test ./cmd/server/...` — PASS
- `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing
centroid failure unrelated to this change)

## Commits

1. `fix: filter prefix map to only repeater/room roles (#935)` — server
implementation
2. `test: prefix map role filter coverage (#935)` — server tests
3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` —
client implementation
4. `test: hop-resolver role filter coverage (#935)` — client tests

---------

Co-authored-by: you <you@example.com>
2026-04-30 09:25:51 -07:00
Kpa-clawbot e857e0b1ce ci: update go-server-coverage.json [skip ci] 2026-04-25 00:34:06 +00:00
Kpa-clawbot 9da7c71cc5 ci: update go-ingestor-coverage.json [skip ci] 2026-04-25 00:34:05 +00:00
Kpa-clawbot 03484ea38d ci: update frontend-tests.json [skip ci] 2026-04-25 00:34:04 +00:00
Kpa-clawbot 27af4098e6 ci: update frontend-coverage.json [skip ci] 2026-04-25 00:34:03 +00:00
Kpa-clawbot 474023b9b7 ci: update e2e-tests.json [skip ci] 2026-04-25 00:34:02 +00:00
Kpa-clawbot f4484adb52 ci: move to GitHub-hosted runners, disable staging deploy (#908)
## Why

The Azure staging VM (`meshcore-vm`) is offline. Self-hosted runners are
unavailable, blocking all CI.

## What changed (per job)

| Job | Change | Revert |
|-----|--------|--------|
| `e2e-test` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`; removed self-hosted-specific "Free disk space" step | Change `runs-on` back to `[self-hosted, Linux]`, restore disk cleanup step |
| `build-and-publish` | `runs-on: [self-hosted, meshcore-runner-2]` → `ubuntu-latest`; removed "Free disk space" prune step (noop on fresh GH-hosted runners) | Change `runs-on` back, restore prune step |
| `deploy` | `if: false # disabled` (was `github.event_name == 'push'`); `runs-on` kept as-is | Change `if:` back to `github.event_name == 'push'` |
| `publish` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`; `needs: [deploy]` → `needs: [build-and-publish]` | Change both back |

## Notes

- `go-test` and `release-artifacts` were already on `ubuntu-latest` —
untouched.
- The `deploy` job is disabled via `if: false` for trivial one-line
revert when the VM returns.
- No new `setup-*` actions were needed — `setup-node`, `setup-go`,
`docker/setup-buildx-action`, and `docker/login-action` were already
present.

Co-authored-by: you <you@example.com>
2026-04-24 17:25:53 -07:00
Kpa-clawbot a47fe26085 fix(channels): allow removing user-added keys for server-known channels (#898)
## Problem
Adding a channel key in the Channels UI for a channel the server already
knows about (e.g. `#public` from rainbow / config) leaves the
localStorage entry **unremovable**:

- `mergeUserChannels` sees the name already exists in the channel list
and skips the user entry.
- The existing channel row is never marked `userAdded:true`.
- The ✕ button (`[data-remove-channel]`) is only rendered for
`userAdded` rows.
- Result: stuck localStorage key, no UI to delete it.

There was also a latent bug in the remove handler — for non-`user:`
rows, it used the raw hash (e.g. `enc_11`) as the
`ChannelDecrypt.removeKey()` argument, but the storage key is the
channel **name**.

## Fix
1. **`mergeUserChannels`**: when a stored key matches an existing
channel by name/hash, mark the existing channel `userAdded=true` so the
✕ renders on it. (No magical/auto deletion of stored keys — the user
explicitly chooses to remove.)
2. **Remove handler**:
   - Look up the channel object to get the correct display name for the localStorage key.
   - Keep server-known channels in the list when their ✕ is clicked (only the user's localStorage entry + cache are cleared, `userAdded` is unset). The channel still exists upstream.
   - Pure `user:`-prefixed channels are removed from the list as before.

## Repro
1. Open Channels.
2. Add a key for `#public` (or any rainbow-known channel).
3. Reload. Before this PR: row has no ✕, key is stuck. After this PR: ✕
appears, click clears the local key and cache.

## Files
- `public/channels.js` only.

## Notes
- No backend changes.
- No new APIs.
- Behaviour for purely user-added channels (e.g. `user:#somechannel` not
known to the server) is unchanged.

---------

Co-authored-by: you <you@example.com>
2026-04-22 21:41:43 -07:00
Kpa-clawbot abd9c46aa7 fix: side-panel Details button opens full-screen on desktop (#892)
## Symptom
🔍 Details button in the nodes side panel does nothing on click.

## Root cause (4th regression of the same shape)
- Row click → `selectNode()` → `history.replaceState(null, '',
'#/nodes/' + pk)`
- Details button click → `location.hash = '#/nodes/' + pk`
- Hash is already that value → assignment is a no-op → no `hashchange`
event → no router → panel stays open.

## Fix
Mirror the analytics-link branch already inside the panel click handler: call
`destroy()` and then `init(appEl, pubkey)` directly (which unconditionally hits
the `directNode` full-screen branch). Also call `replaceState` to keep the URL
in sync.

## Test
New Playwright E2E: open side panel via row click, click Details, assert
`.node-fullscreen` appears.

## Why this keeps regressing
Every time we tighten the row-click handler to use `replaceState`
(correct — avoids hashchange flicker), the button-click handler that
uses `location.hash` becomes a no-op for the same pubkey. Need to
remember they're coupled. Worth a follow-up to extract a
`navigateToNode(pk)` helper that always works regardless of current hash
state — filing as #890-followup if not already there.

Co-authored-by: you <you@example.com>
2026-04-21 22:37:15 -07:00
Kpa-clawbot 6ca5e86df6 fix: compute hex-dump byte ranges client-side from per-obs raw_hex (#891)
## Symptom
The colored byte strip in the packet detail pane is offset from the
labeled byte breakdown below it. Off by N bytes where N is the
difference between the top-level packet's path length and the displayed
observation's path length.

## Root cause
Server computes `breakdown.ranges` once from the top-level packet's
raw_hex (in `BuildBreakdown`) and ships it in the API response. After
#882 we render each observation's own raw_hex, but we keep using the
top-level breakdown — so a 7-hop top-level packet shipped "Path: bytes
2-8", and when we rendered an 8-hop observation we coloured 7 of the 8
path bytes and bled into the payload.

The labeled rows below (which use `buildFieldTable`) parse the displayed
raw_hex on the client, so they were correct — they just didn't match the
strip above.

## Fix
Port `BuildBreakdown()` to JS as `computeBreakdownRanges()` in `app.js`.
Use it in `renderDetail()` from the actually-rendered (per-obs) raw_hex.

## Test
Manually verified the JS function output matches the Go implementation
for FLOOD/non-transport, transport, ADVERT, and direct-advert (zero
hops) cases.

Closes nothing (caught in post-tag bug bash).

---------

Co-authored-by: you <you@example.com>
2026-04-21 22:17:14 -07:00
Kpa-clawbot 56ec590bc4 fix(#886): derive path_json from raw_hex at ingest (#887)
## Problem

Per-observation `path_json` disagrees with `raw_hex` path section for
TRACE packets.

**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`
- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE
payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)

## Root Cause

`DecodePacket` correctly parses TRACE packets by replacing `path.Hops`
with hop IDs from the payload's `pathData` field (the actual route).
However, the header path bytes for TRACE packets contain **SNR values**
(one per completed hop), not hop IDs.

`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which
for TRACE packets contained the payload-derived hops — not the header
path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex`
to describe completely different paths.

## Fix

- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops
directly from raw hex bytes, independent of any TRACE payload
overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of
using `decoded.Path.Hops`, guaranteeing `path_json` always matches the
`raw_hex` path section.
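For reference, a minimal Go sketch of the header-path extraction, consistent with the byte layouts exercised by the tests below (route type in the low 2 bits of the header, 4 transport-code bytes for route types 0/3, hash_size in bits 7-6 of the path byte, hash_count in bits 5-0); the real `internal/packetpath` implementation may differ in detail:

```go
package packetpath

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// DecodePathFromRawHex sketch: extract the header path hops directly from the
// raw bytes, ignoring any payload-level (TRACE) hop information.
func DecodePathFromRawHex(rawHex string) ([]string, error) {
	buf, err := hex.DecodeString(strings.TrimSpace(rawHex))
	if err != nil || len(buf) < 2 {
		return nil, fmt.Errorf("raw_hex invalid or too short")
	}
	offset := 1 // byte 0 is the header
	if rt := int(buf[0] & 0x03); rt == 0 || rt == 3 {
		offset += 4 // TRANSPORT_FLOOD / TRANSPORT_DIRECT carry 4 transport-code bytes
	}
	if offset >= len(buf) {
		return nil, fmt.Errorf("truncated before path byte")
	}
	pathByte := buf[offset]
	offset++
	hashSize := int(pathByte>>6) + 1  // bits 7-6: hash_size - 1
	hashCount := int(pathByte & 0x3F) // bits 5-0: hash_count
	if offset+hashSize*hashCount > len(buf) {
		return nil, fmt.Errorf("truncated path section")
	}
	hops := make([]string, 0, hashCount)
	for i := 0; i < hashCount; i++ {
		hop := buf[offset+i*hashSize : offset+(i+1)*hashSize]
		hops = append(hops, strings.ToUpper(hex.EncodeToString(hop)))
	}
	return hops, nil
}
```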

## Tests (8 new)

**`DecodePathFromRawHex` unit tests:**
- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)

**`BuildPacketData` integration tests:**
- TRACE packet: asserts path_json matches raw_hex header path (not
payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path

All existing tests continue to pass (`go test ./...` for both ingestor
and server).

Fixes #886

---------

Co-authored-by: you <you@example.com>
2026-04-21 21:13:58 -07:00
Kpa-clawbot 67aa47175f fix: path pill and byte breakdown agree on hop count (#885)
## Problem
On the packet detail pane, the **path pill** (top) and the **byte
breakdown** (bottom) showed different numbers of hops for the same
packet. Example: `46cf35504a21ef0d` rendered as `1 hop` badge followed
by 8 node names in the path pill, while the byte breakdown listed only 1
hop row.

## Root cause
Mixed data sources:
- Path-pill badge used `(raw_hex path_len) & 0x3F` (= firmware truth for
one observer = 1)
- Path-pill names used `path_json.length` (= server-aggregated longest
path across observers = 8)
- Byte breakdown section header used `(raw_hex path_len) & 0x3F` (= 1)
- Byte breakdown rows were sliced from `raw_hex` (= 1 row)
- `renderPath(pathHops, ...)` iterated all `path_json` entries

For the group-header view, `packet.path_json` is aggregated across observers
and is therefore longer than the path encoded in any single observer's raw_hex.
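A worked example of the mismatch (hypothetical values mirroring the `46cf35504a21ef0d` symptom):

```go
package main

import "fmt"

func main() {
	// One observer's raw_hex path byte vs. the server-aggregated path_json.
	pathByte := byte(0x01)             // this observer heard the packet after 1 hop
	badgeCount := int(pathByte & 0x3F) // low 6 bits of the path byte drove the "1 hop" badge
	aggregated := []string{"a1", "b2", "c3", "d4", "e5", "f6", "07", "18"}
	nameCount := len(aggregated)       // aggregated longest path drove 8 rendered node names
	fmt.Println(badgeCount, nameCount) // 1 vs 8: the two surfaces disagree
}
```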

## Fix
Both surfaces now render from `pathHops` (= effective observation's
`path_json`). The raw_hex vs path_json mismatch is still logged as a
console.warn for diagnostics, but does not drive the UI.

With per-observation `raw_hex` (#882) shipped, clicking an observation
row already swaps the effective packet so both surfaces stay consistent.

## Testing
- Adds E2E regression `Packet detail path pill and byte breakdown agree
on hop count` that asserts:
  1. `pill badge count == byte breakdown section count`
  2. `rendered hop names ≈ badge count` (within 1 for separators)
  3. `byte breakdown rendered rows == section count`
- Manually reproduced on staging with `46cf35504a21ef0d` (8-name path +
`1 hop` badge before fix).

Related: #881 #882 #866

---------

Co-authored-by: you <you@example.com>
2026-04-21 17:57:06 -07:00
Kpa-clawbot 2b9f305698 fix(#874): hop-resolver affinity picker — score candidates by neighbor-graph edges + geographic centroid (#876)
## Problem

`pickByAffinity` in `hop-resolver.js` picks wrong regional candidates
when 1-byte pubkey prefixes collide. The old implementation only
considers one adjacent hop (forward OR backward pass), leading to
suboptimal picks when both neighbors provide useful context.

Measured on staging: **61.6% of hops have ≥2 same-prefix candidates**,
making collision resolution critical.

## Fix

Replaced the separate forward/backward pass disambiguation with a
**combined iterative resolver** that scores candidates against BOTH prev
and next resolved hops:

1. **Neighbor-graph edge weight** (priority 1): Sum edge scores to prev
+ next pubkeys. Pick max sum.
2. **Geographic centroid** (priority 2): Average lat/lon of prev + next
positions. Pick closest candidate by haversine distance.
3. **Single-anchor geo** (priority 3): When only one neighbor is
resolved, use it directly.
4. **Fallback** (priority 4): First candidate when no context exists.

The iterative approach resolves cascading dependencies — resolving one
ambiguous hop may unlock context for its neighbors.
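A Go sketch of that scoring priority, for illustration only (the real resolver is `pickByAffinity` in `public/hop-resolver.js`; the `candidate` type and `edgeWeight` callback here are hypothetical stand-ins for its plain JS objects):

```go
package hopresolver

import "math"

type candidate struct {
	PubKey   string
	HasGPS   bool
	Lat, Lon float64
}

func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	rad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat, dLon := rad(lat2-lat1), rad(lon2-lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(rad(lat1))*math.Cos(rad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * 6371 * math.Asin(math.Sqrt(a))
}

// pickByAffinity scores same-prefix candidates against both resolved neighbors:
// (1) summed neighbor-graph edge weight, (2) haversine distance to the prev/next
// centroid (or to the single resolved neighbor), (3) first candidate as fallback.
func pickByAffinity(cands []candidate, prev, next *candidate,
	edgeWeight func(a, b string) float64) candidate {

	// Priority 1: neighbor-graph edges.
	bestIdx, bestSum := -1, 0.0
	for i, c := range cands {
		sum := 0.0
		if prev != nil {
			sum += edgeWeight(c.PubKey, prev.PubKey)
		}
		if next != nil {
			sum += edgeWeight(c.PubKey, next.PubKey)
		}
		if sum > bestSum {
			bestIdx, bestSum = i, sum
		}
	}
	if bestIdx >= 0 {
		return cands[bestIdx]
	}

	// Priority 2/3: centroid of whichever neighbors have positions.
	var lat, lon float64
	n := 0
	for _, p := range []*candidate{prev, next} {
		if p != nil && p.HasGPS {
			lat, lon, n = lat+p.Lat, lon+p.Lon, n+1
		}
	}
	if n > 0 {
		lat, lon = lat/float64(n), lon/float64(n)
		nearest, nearestDist := 0, math.Inf(1)
		for i, c := range cands {
			if c.HasGPS {
				if d := haversineKm(lat, lon, c.Lat, c.Lon); d < nearestDist {
					nearest, nearestDist = i, d
				}
			}
		}
		return cands[nearest]
	}

	// Priority 4: no context at all.
	return cands[0]
}
```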

### Dev-mode trace

Multi-candidate picks now emit: `[hop-resolver] hash=46 candidates=N
scored=[...] chose=<pubkey> method=graph|centroid|fallback`

## Before/After (staging, 1539 packets, 12928 hops)

| Metric | Before | After |
|--------|--------|-------|
| Unreliable hops | 39 (0.3%) | 23 (0.2%) |
| Packets with unreliable | 33 (2.14%) | 17 (1.10%) |

~41% reduction in unreliable hops, ~48% reduction in affected packets.

## Tests

5 new tests in `test-frontend-helpers.js`:
- Graph edge scoring picks correct regional candidate
- Next hop breaks tie when prev has no edges
- Centroid fallback when no graph edges exist
- Centroid uses average of prev+next positions
- Fallback when no context at all

All 595 tests pass. No regressions in `test-packet-filter.js` (62 pass)
or `test-aging.js` (29 pass).

Closes #874

---------

Co-authored-by: you <you@example.com>
2026-04-21 14:03:40 -07:00
Kpa-clawbot a605518d6d fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte
sequence for the same transmission (different path bytes, flags/hops
remaining). The `observations` table stored only `path_json` per
observer — all observations pointed at one `transmissions.raw_hex`. This
prevented the hex pane from updating when switching observations in the
packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |
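The read-side precedence, as a tiny Go sketch (the function name is hypothetical; it mirrors `COALESCE(o.raw_hex, t.raw_hex)` in `packets_v` and the `enrichObs` fallback):

```go
// Per-observation raw_hex wins; NULL/empty historical rows fall back to the
// transmission's raw_hex.
func effectiveRawHex(obsRawHex, txRawHex string) string {
	if obsRawHex != "" {
		return obsRawHex
	}
	return txRawHex
}
```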

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same
hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns
per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane
update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to
transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
2026-04-21 13:45:29 -07:00
Kpa-clawbot 0ca559e348 fix(#866): per-observation children in expanded packet groups (#880)
## Problem
When a packet group is expanded in the Packets table, clicking any child
row pointed the side pane at the same aggregate packet — not the clicked
observation. URL would flip between `?obs=<packet_id>` values instead of
real observation ids.

## Root cause
The expand fetch used `/api/packets?hash=X&limit=20`, which returns ONE
aggregate row keyed by packet.id. Every child therefore carried
`data-value=<packet.id>`.

## Fix
Switch the expand fetch to `/api/packets/<hash>`, which includes the
full `observations[]` array. Build `_children` as `{...pkt, ...obs}` so
each child row gets a unique observation id and observation-level fields
(observer, path, timestamp, snr/rssi) override the aggregate.

## Verified live on staging
Tested on multiple packets:
- Click group-header → side pane shows observation 1 of N (first
observer)
- Click child row → pane updates to show THAT observer's details:
observer name, path, timestamp, obs counter (K of N), URL
`?obs=<observation_id>`

## Tests
592 frontend tests pass (no new ones — this is a wiring fix, live
E2E-verified instead).

Closes #866

---------

Co-authored-by: Kpa-clawbot <agent@corescope.local>
Co-authored-by: you <you@example.com>
2026-04-21 13:36:45 -07:00
38 changed files with 2533 additions and 547 deletions
+1 -1
@@ -1 +1 @@
{"schemaVersion":1,"label":"e2e tests","message":"45 passed","color":"brightgreen"}
{"schemaVersion":1,"label":"e2e tests","message":"82 passed","color":"brightgreen"}
+1 -1
@@ -1 +1 @@
{"schemaVersion":1,"label":"frontend coverage","message":"39.68%","color":"red"}
{"schemaVersion":1,"label":"frontend coverage","message":"37.26%","color":"red"}
+5 -18
@@ -135,7 +135,7 @@ jobs:
e2e-test:
name: "🎭 Playwright E2E Tests"
needs: [go-test]
runs-on: [self-hosted, Linux]
runs-on: ubuntu-latest
defaults:
run:
shell: bash
@@ -145,13 +145,6 @@ jobs:
with:
fetch-depth: 0
- name: Free disk space
run: |
# Prune old runner diagnostic logs (can accumulate 50MB+)
find ~/actions-runner/_diag/ -name '*.log' -mtime +3 -delete 2>/dev/null || true
# Show available disk space
df -h / | tail -1
- name: Set up Node.js 22
uses: actions/setup-node@v5
with:
@@ -252,17 +245,11 @@ jobs:
build-and-publish:
name: "🏗️ Build & Publish Docker Image"
needs: [e2e-test]
runs-on: [self-hosted, meshcore-runner-2]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Free disk space
run: |
docker system prune -af 2>/dev/null || true
docker builder prune -af 2>/dev/null || true
df -h /
- name: Compute build metadata
id: meta
run: |
@@ -372,7 +359,7 @@ jobs:
# ───────────────────────────────────────────────────────────────
deploy:
name: "🚀 Deploy Staging"
if: github.event_name == 'push'
if: false # disabled: staging VM offline, manual deploy required
needs: [build-and-publish]
runs-on: [self-hosted, meshcore-runner-2]
steps:
@@ -461,8 +448,8 @@ jobs:
publish:
name: "📝 Publish Badges & Summary"
if: github.event_name == 'push'
needs: [deploy]
runs-on: [self-hosted, Linux]
needs: [build-and-publish]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
+2
@@ -14,6 +14,7 @@ WORKDIR /build/server
COPY cmd/server/go.mod cmd/server/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/server/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
@@ -24,6 +25,7 @@ WORKDIR /build/ingestor
COPY cmd/ingestor/go.mod cmd/ingestor/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/ingestor/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
+31 -9
@@ -11,6 +11,7 @@ import (
"sync/atomic"
"time"
"github.com/meshcore-analyzer/packetpath"
_ "modernc.org/sqlite"
)
@@ -189,7 +190,7 @@ func applySchema(db *sql.DB) error {
db.Exec(`DROP VIEW IF EXISTS packets_v`)
_, vErr := db.Exec(`
CREATE VIEW packets_v AS
SELECT o.id, t.raw_hex,
SELECT o.id, COALESCE(o.raw_hex, t.raw_hex) AS raw_hex,
datetime(o.timestamp, 'unixepoch') AS timestamp,
obs.id AS observer_id, obs.name AS observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
@@ -408,6 +409,15 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] dropped_packets table created")
}
// Migration: add raw_hex column to observations (#881)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observations_raw_hex_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding raw_hex column to observations...")
db.Exec(`ALTER TABLE observations ADD COLUMN raw_hex TEXT`)
db.Exec(`INSERT INTO _migrations (name) VALUES ('observations_raw_hex_v1')`)
log.Println("[migration] observations.raw_hex column added")
}
return nil
}
@@ -433,12 +443,13 @@ func (s *Store) prepareStatements() error {
}
s.stmtInsertObservation, err = s.db.Prepare(`
INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp, raw_hex)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(transmission_id, observer_idx, COALESCE(path_json, '')) DO UPDATE SET
snr = COALESCE(excluded.snr, snr),
rssi = COALESCE(excluded.rssi, rssi),
score = COALESCE(excluded.score, score)
snr = COALESCE(excluded.snr, snr),
rssi = COALESCE(excluded.rssi, rssi),
score = COALESCE(excluded.score, score),
raw_hex = COALESCE(excluded.raw_hex, raw_hex)
`)
if err != nil {
return err
@@ -584,7 +595,7 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
_, err = s.stmtInsertObservation.Exec(
txID, observerIdx, data.Direction,
data.SNR, data.RSSI, data.Score,
data.PathJSON, epochTs,
data.PathJSON, epochTs, nilIfEmpty(data.RawHex),
)
if err != nil {
s.Stats.WriteErrors.Add(1)
@@ -931,11 +942,22 @@ type MQTTPacketMessage struct {
}
// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
// path_json is normally derived from the raw_hex header bytes so the stored
// path matches the raw bytes. TRACE packets are the exception: their header
// path bytes carry per-hop SNR values, so the payload-decoded hops are kept (#886).
func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID, region string) *PacketData {
now := time.Now().UTC().Format(time.RFC3339)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
b, _ := json.Marshal(decoded.Path.Hops)
pathJSON = string(b)
}
} else if hops, err := packetpath.DecodePathFromRawHex(msg.Raw); err == nil && len(hops) > 0 {
b, _ := json.Marshal(hops)
pathJSON = string(b)
}
+155
@@ -2,6 +2,7 @@ package main
import (
"database/sql"
"encoding/json"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,8 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/meshcore-analyzer/packetpath"
)
func tempDBPath(t *testing.T) string {
@@ -1968,3 +1971,155 @@ func TestInsertObservationSNRFillIn(t *testing.T) {
t.Errorf("RSSI overwritten by null arrival: got %v, want %v", rssi3, rssi)
}
}
// TestPerObservationRawHex verifies that two MQTT packets for the same hash
// from different observers store distinct raw_hex per observation (#881).
func TestPerObservationRawHex(t *testing.T) {
store, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer store.Close()
// Register two observers
store.UpsertObserver("obs-A", "Observer A", "", nil)
store.UpsertObserver("obs-B", "Observer B", "", nil)
hash := "abc123def456"
rawA := "c0ffee01"
rawB := "c0ffee0201aa"
dir := "RX"
// First observation from observer A
pdA := &PacketData{
RawHex: rawA,
Hash: hash,
Timestamp: "2026-04-21T10:00:00Z",
ObserverID: "obs-A",
Direction: &dir,
PathJSON: "[]",
}
isNew, err := store.InsertTransmission(pdA)
if err != nil {
t.Fatalf("insert A: %v", err)
}
if !isNew {
t.Fatal("expected new transmission")
}
// Second observation from observer B (same hash, different raw bytes)
pdB := &PacketData{
RawHex: rawB,
Hash: hash,
Timestamp: "2026-04-21T10:00:01Z",
ObserverID: "obs-B",
Direction: &dir,
PathJSON: `["aabb"]`,
}
isNew2, err := store.InsertTransmission(pdB)
if err != nil {
t.Fatalf("insert B: %v", err)
}
if isNew2 {
t.Fatal("expected duplicate transmission")
}
// Query observations and verify per-observation raw_hex
rows, err := store.db.Query(`
SELECT o.raw_hex, obs.id
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
ORDER BY o.id ASC
`)
if err != nil {
t.Fatalf("query: %v", err)
}
defer rows.Close()
type obsResult struct {
rawHex string
observerID string
}
var results []obsResult
for rows.Next() {
var rh, oid sql.NullString
if err := rows.Scan(&rh, &oid); err != nil {
t.Fatal(err)
}
results = append(results, obsResult{
rawHex: rh.String,
observerID: oid.String,
})
}
if len(results) != 2 {
t.Fatalf("expected 2 observations, got %d", len(results))
}
if results[0].rawHex != rawA {
t.Errorf("obs A raw_hex: got %q, want %q", results[0].rawHex, rawA)
}
if results[1].rawHex != rawB {
t.Errorf("obs B raw_hex: got %q, want %q", results[1].rawHex, rawB)
}
if results[0].rawHex == results[1].rawHex {
t.Error("both observations have same raw_hex — should differ")
}
}
// TestBuildPacketData_TraceUsesPayloadHops verifies that TRACE packets use
// payload-decoded route hops in path_json (NOT the raw_hex header SNR bytes).
// Issue #886 / #887.
func TestBuildPacketData_TraceUsesPayloadHops(t *testing.T) {
// TRACE packet: header path has SNR bytes [30,2D,0D,23], but decoded.Path.Hops
// is overwritten to payload hops [67,33,D6,33,67].
rawHex := "2604302D0D2359FEE7B100000000006733D63367"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
// decoded.Path.Hops should be the TRACE-replaced hops (payload hops)
if len(decoded.Path.Hops) != 5 {
t.Fatalf("expected 5 decoded hops, got %d", len(decoded.Path.Hops))
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "test-obs", "TST")
// For TRACE: path_json MUST be the payload-decoded route hops, NOT the SNR bytes
expectedPathJSON := `["67","33","D6","33","67"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s (TRACE must use payload hops)", pd.PathJSON, expectedPathJSON)
}
// Verify that DecodePathFromRawHex returns the SNR bytes (header path) which differ
headerHops, herr := packetpath.DecodePathFromRawHex(rawHex)
if herr != nil {
t.Fatal(herr)
}
headerJSON, _ := json.Marshal(headerHops)
if string(headerJSON) == expectedPathJSON {
t.Error("header path (SNR) should differ from payload hops for TRACE")
}
}
// TestBuildPacketData_NonTracePathJSON verifies non-TRACE packets also derive path from raw_hex.
func TestBuildPacketData_NonTracePathJSON(t *testing.T) {
// A simple ADVERT packet with 2 hops, hash_size 1
// Header 0x09 = FLOOD(1), ADVERT(2), version 0
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
rawHex := "0902AABB" + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "obs1", "TST")
expectedPathJSON := `["AA","BB"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s", pd.PathJSON, expectedPathJSON)
}
}
+3 -1
@@ -12,6 +12,7 @@ import (
"strings"
"unicode/utf8"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -192,8 +193,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
+104
@@ -11,6 +11,7 @@ import (
"strings"
"testing"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -1822,3 +1823,106 @@ func TestDecodeAdvertWithSignatureValidation(t *testing.T) {
t.Error("SignatureValid should be nil when validation disabled")
}
}
// === Tests for DecodePathFromRawHex (issue #886) ===
func TestDecodePathFromRawHex_HashSize1(t *testing.T) {
// Header byte 0x26 = route_type DIRECT, payload TRACE
// Path byte 0x04 = hash_size 1 (bits 7-6 = 00 → 0+1=1), hash_count 4
// Path bytes: 30 2D 0D 23
raw := "2604302D0D2359FEE7B100000000006733D63367"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"30", "2D", "0D", "23"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize2(t *testing.T) {
// Path byte 0x42 = hash_size 2 (bits 7-6 = 01 → 1+1=2), hash_count 2
// Header 0x09 = FLOOD route (rt=1), payload ADVERT (pt=2)
// Path bytes: AABB CCDD (4 bytes = 2 hops * 2 bytes)
raw := "0942AABBCCDD" + "00000000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AABB", "CCDD"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize3(t *testing.T) {
// Path byte 0x81 = hash_size 3 (bits 7-6 = 10 → 2+1=3), hash_count 1
// Header 0x09 = FLOOD route (rt=1), payload ADVERT
raw := "0981AABBCC" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCC" {
t.Fatalf("got %v, want [AABBCC]", hops)
}
}
func TestDecodePathFromRawHex_HashSize4(t *testing.T) {
// Path byte 0xC1 = hash_size 4 (bits 7-6 = 11 → 3+1=4), hash_count 1
// Header 0x09 = FLOOD route (rt=1)
raw := "09C1AABBCCDD" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCCDD" {
t.Fatalf("got %v, want [AABBCCDD]", hops)
}
}
func TestDecodePathFromRawHex_DirectZeroHops(t *testing.T) {
// Path byte 0x00 = hash_size 1, hash_count 0
// Header 0x0A = DIRECT route (rt=2), payload ADVERT
raw := "0A00" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 0 {
t.Fatalf("got %d hops, want 0", len(hops))
}
}
func TestDecodePathFromRawHex_Transport(t *testing.T) {
// Route type 3 = TRANSPORT_DIRECT → 4 transport code bytes before path byte
// Header 0x27 = route_type 3, payload TRACE
// Transport codes: 1122 3344
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
raw := "2711223344" + "02AABB" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AA", "BB"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
+4
@@ -13,6 +13,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+2 -2
@@ -229,7 +229,7 @@ func createTestDBAt(tb testing.TB, dbPath string, numTx int) {
id INTEGER PRIMARY KEY,
transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT
path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -280,7 +280,7 @@ func createTestDBWithObs(tb testing.TB, dbPath string, numTx int) {
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT
direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -0,0 +1,403 @@
package main
import (
"fmt"
"sync"
"sync/atomic"
"testing"
"time"
)
// TestConcurrentIngestAndEviction exercises the race between IngestNewFromDB
// adding packets (via direct store manipulation simulating the locked section)
// and RunEviction removing packets. Without proper locking this would trigger
// the race detector and produce inconsistent index state.
func TestConcurrentIngestAndEviction(t *testing.T) {
// Seed store with 200 old packets that are eligible for eviction
startTime := time.Now().UTC().Add(-48 * time.Hour)
store := makeTestStore(200, startTime, 1)
store.retentionHours = 24 // everything older than 24h is evictable
store.loaded = true
// Track bytes for all seeded packets
for _, tx := range store.packets {
store.trackedBytes += estimateStoreTxBytes(tx)
for _, obs := range tx.Observations {
store.trackedBytes += estimateStoreObsBytes(obs)
}
}
const numIngestGoroutines = 5
const packetsPerGoroutine = 50
const numEvictionGoroutines = 3
var wg sync.WaitGroup
var ingestedCount int64
// Concurrent ingest: simulate what IngestNewFromDB does under the lock
for g := 0; g < numIngestGoroutines; g++ {
wg.Add(1)
go func(goroutineID int) {
defer wg.Done()
for i := 0; i < packetsPerGoroutine; i++ {
txID := 1000 + goroutineID*1000 + i
hash := fmt.Sprintf("new_hash_%d_%04d", goroutineID, i)
pt := 5 // GRP_TXT
ts := time.Now().UTC().Format(time.RFC3339)
tx := &StoreTx{
ID: txID,
Hash: hash,
FirstSeen: ts,
LatestSeen: ts,
PayloadType: &pt,
DecodedJSON: fmt.Sprintf(`{"pubKey":"newpk_%d_%04d"}`, goroutineID, i),
obsKeys: make(map[string]bool),
observerSet: make(map[string]bool),
}
obs := &StoreObs{
ID: txID*10 + 1,
TransmissionID: txID,
ObserverID: fmt.Sprintf("obs_g%d", goroutineID),
ObserverName: fmt.Sprintf("Observer_g%d", goroutineID),
Timestamp: ts,
}
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount = 1
// Acquire write lock (same as IngestNewFromDB)
store.mu.Lock()
store.packets = append(store.packets, tx)
store.byHash[hash] = tx
store.byTxID[txID] = tx
store.byObsID[obs.ID] = obs
store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
pk := fmt.Sprintf("newpk_%d_%04d", goroutineID, i)
if store.nodeHashes[pk] == nil {
store.nodeHashes[pk] = make(map[string]bool)
}
store.nodeHashes[pk][hash] = true
store.byNode[pk] = append(store.byNode[pk], tx)
store.trackedBytes += estimateStoreTxBytes(tx)
store.trackedBytes += estimateStoreObsBytes(obs)
store.totalObs++
store.mu.Unlock()
atomic.AddInt64(&ingestedCount, 1)
}
}(g)
}
// Concurrent eviction goroutines
var evictedTotal int64
for g := 0; g < numEvictionGoroutines; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 10; i++ {
store.mu.Lock()
n := store.EvictStale()
store.mu.Unlock()
atomic.AddInt64(&evictedTotal, int64(n))
time.Sleep(time.Millisecond)
}
}()
}
// Concurrent readers (QueryPackets uses RLock)
for g := 0; g < 3; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 20; i++ {
store.mu.RLock()
_ = len(store.packets)
_ = len(store.byHash)
store.mu.RUnlock()
time.Sleep(500 * time.Microsecond)
}
}()
}
wg.Wait()
// --- Post-state assertions ---
store.mu.RLock()
defer store.mu.RUnlock()
totalIngested := int(atomic.LoadInt64(&ingestedCount))
totalEvicted := int(atomic.LoadInt64(&evictedTotal))
if totalIngested != numIngestGoroutines*packetsPerGoroutine {
t.Fatalf("expected %d ingested, got %d", numIngestGoroutines*packetsPerGoroutine, totalIngested)
}
// Invariant: packets remaining = initial(200) + ingested - evicted
expectedRemaining := 200 + totalIngested - totalEvicted
if len(store.packets) != expectedRemaining {
t.Fatalf("packets count mismatch: got %d, expected %d (200 + %d ingested - %d evicted)",
len(store.packets), expectedRemaining, totalIngested, totalEvicted)
}
// Invariant: byHash must be consistent with packets slice
if len(store.byHash) != len(store.packets) {
t.Fatalf("byHash size %d != packets len %d", len(store.byHash), len(store.packets))
}
// Invariant: every packet in the slice must be in byHash
for _, tx := range store.packets {
if store.byHash[tx.Hash] != tx {
t.Fatalf("packet %s in slice but not in byHash (or points to different tx)", tx.Hash)
}
}
// Invariant: byTxID must map to packets in the slice
byTxIDCount := 0
for _, tx := range store.packets {
if store.byTxID[tx.ID] == tx {
byTxIDCount++
}
}
if byTxIDCount != len(store.packets) {
t.Fatalf("byTxID consistency: %d/%d packets found", byTxIDCount, len(store.packets))
}
// Invariant: trackedBytes must be non-negative
if store.trackedBytes < 0 {
t.Fatalf("trackedBytes went negative: %d", store.trackedBytes)
}
// Verify eviction actually happened (old packets were eligible)
if totalEvicted == 0 {
t.Fatal("expected some evictions to occur but got 0")
}
t.Logf("OK: ingested=%d, evicted=%d, remaining=%d, trackedBytes=%d",
totalIngested, totalEvicted, len(store.packets), store.trackedBytes)
}
// TestConcurrentIngestNewObservationsAndEviction exercises the race between
// adding new observations to existing transmissions and eviction removing those
// same transmissions. This targets the IngestNewObservations path.
func TestConcurrentIngestNewObservationsAndEviction(t *testing.T) {
// Create store with 100 packets, half old (evictable), half recent
now := time.Now().UTC()
store := makeTestStore(0, now, 1) // empty, we'll add manually
store.retentionHours = 1
// Add 50 old packets (2h ago) and 50 recent packets
for i := 0; i < 100; i++ {
var ts time.Time
if i < 50 {
ts = now.Add(-2 * time.Hour).Add(time.Duration(i) * time.Second)
} else {
ts = now.Add(-time.Duration(100-i) * time.Second)
}
hash := fmt.Sprintf("obs_hash_%04d", i)
txID := i + 1
pt := 4
tx := &StoreTx{
ID: txID,
Hash: hash,
FirstSeen: ts.UTC().Format(time.RFC3339),
LatestSeen: ts.UTC().Format(time.RFC3339),
PayloadType: &pt,
DecodedJSON: fmt.Sprintf(`{"pubKey":"pk%04d"}`, i),
obsKeys: make(map[string]bool),
observerSet: make(map[string]bool),
}
store.packets = append(store.packets, tx)
store.byHash[hash] = tx
store.byTxID[txID] = tx
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
store.trackedBytes += estimateStoreTxBytes(tx)
}
store.loaded = true
const numObsGoroutines = 4
const obsPerGoroutine = 100
var wg sync.WaitGroup
var addedObs int64
// Goroutines adding observations to RECENT packets (index 50-99)
for g := 0; g < numObsGoroutines; g++ {
wg.Add(1)
go func(gID int) {
defer wg.Done()
for i := 0; i < obsPerGoroutine; i++ {
targetIdx := 50 + (i % 50) // only target recent packets
hash := fmt.Sprintf("obs_hash_%04d", targetIdx)
store.mu.Lock()
tx := store.byHash[hash]
if tx != nil {
obsID := 50000 + gID*10000 + i
obs := &StoreObs{
ID: obsID,
TransmissionID: tx.ID,
ObserverID: fmt.Sprintf("obs_new_%d", gID),
ObserverName: fmt.Sprintf("NewObs_%d", gID),
Timestamp: time.Now().UTC().Format(time.RFC3339),
}
dk := obs.ObserverID + "|"
if !tx.obsKeys[dk] || true { // allow duplicates for stress
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount++
store.byObsID[obsID] = obs
store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
store.trackedBytes += estimateStoreObsBytes(obs)
store.totalObs++
atomic.AddInt64(&addedObs, 1)
}
}
store.mu.Unlock()
}
}(g)
}
// Concurrent eviction
var evictedTotal int64
for g := 0; g < 2; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 15; i++ {
store.mu.Lock()
n := store.EvictStale()
store.mu.Unlock()
atomic.AddInt64(&evictedTotal, int64(n))
time.Sleep(500 * time.Microsecond)
}
}()
}
wg.Wait()
// --- Assertions ---
store.mu.RLock()
defer store.mu.RUnlock()
totalEvicted := int(atomic.LoadInt64(&evictedTotal))
totalAdded := int(atomic.LoadInt64(&addedObs))
// All 50 old packets should have been evicted
if totalEvicted < 50 {
t.Fatalf("expected at least 50 evictions (old packets), got %d", totalEvicted)
}
// Recent packets (50) should survive
if len(store.packets) < 50 {
t.Fatalf("expected at least 50 remaining packets (recent ones), got %d", len(store.packets))
}
// byHash consistency
for _, tx := range store.packets {
if store.byHash[tx.Hash] != tx {
t.Fatalf("byHash inconsistency for %s", tx.Hash)
}
}
// No evicted packet should remain in byHash
for i := 0; i < 50; i++ {
hash := fmt.Sprintf("obs_hash_%04d", i)
if store.byHash[hash] != nil {
t.Fatalf("evicted packet %s still in byHash", hash)
}
}
// byObsID should not reference observations from evicted packets
for obsID, obs := range store.byObsID {
if store.byTxID[obs.TransmissionID] == nil {
t.Fatalf("byObsID[%d] references evicted transmission %d", obsID, obs.TransmissionID)
}
}
// trackedBytes non-negative
if store.trackedBytes < 0 {
t.Fatalf("trackedBytes negative: %d", store.trackedBytes)
}
t.Logf("OK: evicted=%d, added_obs=%d, remaining=%d, trackedBytes=%d",
totalEvicted, totalAdded, len(store.packets), store.trackedBytes)
}
// TestConcurrentRunEvictionWithReads exercises RunEviction's two-phase locking
// against concurrent read operations (simulating QueryPackets / GetStoreStats).
// Without proper RWMutex usage, this would race on slice/map reads.
func TestConcurrentRunEvictionWithReads(t *testing.T) {
startTime := time.Now().UTC().Add(-3 * time.Hour)
store := makeTestStore(500, startTime, 1)
store.retentionHours = 1
store.loaded = true
for _, tx := range store.packets {
store.trackedBytes += estimateStoreTxBytes(tx)
for _, obs := range tx.Observations {
store.trackedBytes += estimateStoreObsBytes(obs)
}
}
var wg sync.WaitGroup
// Multiple RunEviction calls (uses its own locking)
var evicted int64
for g := 0; g < 3; g++ {
wg.Add(1)
go func() {
defer wg.Done()
n := store.RunEviction()
atomic.AddInt64(&evicted, int64(n))
}()
}
// Concurrent readers using the public read-lock pattern
var readCount int64
for g := 0; g < 5; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 50; i++ {
store.mu.RLock()
count := len(store.packets)
_ = count
// Iterate a portion of byHash (simulating query)
for hash, tx := range store.byHash {
_ = hash
_ = tx.ObservationCount
break // just access one
}
store.mu.RUnlock()
atomic.AddInt64(&readCount, 1)
}
}()
}
wg.Wait()
store.mu.RLock()
defer store.mu.RUnlock()
totalEvicted := int(atomic.LoadInt64(&evicted))
// Must have evicted packets older than 1h (most of the 500 are 1-3h old)
if totalEvicted == 0 {
t.Fatal("expected evictions but got 0")
}
// Consistency: byHash == packets len
if len(store.byHash) != len(store.packets) {
t.Fatalf("byHash %d != packets %d after concurrent RunEviction+reads",
len(store.byHash), len(store.packets))
}
// All reads completed without panic
if atomic.LoadInt64(&readCount) != 250 {
t.Fatalf("not all reads completed: %d/250", atomic.LoadInt64(&readCount))
}
t.Logf("OK: evicted=%d, remaining=%d, reads=%d",
totalEvicted, len(store.packets), atomic.LoadInt64(&readCount))
}
+8 -8
View File
@@ -47,7 +47,7 @@ func setupTestDBv2(t *testing.T) *DB {
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL, raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -763,9 +763,9 @@ func TestGetChannelsFromStore(t *testing.T) {
func TestPrefixMapResolve(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
{PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
{PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
{Role: "repeater", PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
{Role: "repeater", PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
}
pm := buildPrefixMap(nodes)
@@ -805,8 +805,8 @@ func TestPrefixMapResolve(t *testing.T) {
t.Run("multiple candidates no GPS", func(t *testing.T) {
noGPSNodes := []nodeInfo{
{PublicKey: "aa11bb22", Name: "X", HasGPS: false},
{PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
{Role: "repeater", PublicKey: "aa11bb22", Name: "X", HasGPS: false},
{Role: "repeater", PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
}
pm2 := buildPrefixMap(noGPSNodes)
n := pm2.resolve("aa11")
@@ -820,8 +820,8 @@ func TestPrefixMapResolve(t *testing.T) {
func TestPrefixMapCap(t *testing.T) {
// 16-char pubkey — longer than maxPrefixLen
nodes := []nodeInfo{
{PublicKey: "aabbccdd11223344", Name: "LongKey"},
{PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "LongKey"},
{Role: "repeater", PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
}
pm := buildPrefixMap(nodes)
+4
@@ -20,6 +20,7 @@ type DB struct {
path string // filesystem path to the database file
isV3 bool // v3 schema: observer_idx in observations (vs observer_id in v2)
hasResolvedPath bool // observations table has resolved_path column
hasObsRawHex bool // observations table has raw_hex column (#881)
// Channel list cache (60s TTL) — avoids repeated GROUP BY scans (#762)
channelsCacheMu sync.Mutex
@@ -76,6 +77,9 @@ func (db *DB) detectSchema() {
if colName == "resolved_path" {
db.hasResolvedPath = true
}
if colName == "raw_hex" {
db.hasObsRawHex = true
}
}
}
}
+60 -2
@@ -74,7 +74,8 @@ func setupTestDB(t *testing.T) *DB {
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
resolved_path TEXT
resolved_path TEXT,
raw_hex TEXT
);
CREATE TABLE IF NOT EXISTS observer_metrics (
@@ -1134,7 +1135,8 @@ func setupTestDBV2(t *testing.T) *DB {
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL
timestamp INTEGER NOT NULL,
raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -1975,3 +1977,59 @@ func TestParseWindowDuration(t *testing.T) {
}
}
}
// TestPerObservationRawHexEnrich verifies enrichObs returns per-observation raw_hex
// when available, falling back to transmission raw_hex when NULL (#881).
func TestPerObservationRawHexEnrich(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert observers
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-a', 'Observer A')`)
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-b', 'Observer B')`)
var rowA, rowB int64
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-a'`).Scan(&rowA)
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-b'`).Scan(&rowB)
// Insert transmission with raw_hex
txHex := "deadbeef"
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, 'hash1', '2026-04-21T10:00:00Z')`, txHex)
// Insert two observations: A has its own raw_hex, B has NULL (historical)
obsAHex := "c0ffee01"
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp, raw_hex)
VALUES (1, ?, -5.0, -90.0, '[]', 1745236800, ?)`, rowA, obsAHex)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, ?, -3.0, -85.0, '["aabb"]', 1745236801)`, rowB)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store load: %v", err)
}
tx := store.byHash["hash1"]
if tx == nil {
t.Fatal("transmission not loaded")
}
if len(tx.Observations) < 2 {
t.Fatalf("expected 2 observations, got %d", len(tx.Observations))
}
// Check enriched observations
for _, obs := range tx.Observations {
m := store.enrichObs(obs)
rh, _ := m["raw_hex"].(string)
if obs.RawHex != "" {
// Observer A: should get per-observation raw_hex
if rh != obsAHex {
t.Errorf("obs with own raw_hex: got %q, want %q", rh, obsAHex)
}
} else {
// Observer B: should fall back to transmission raw_hex
if rh != txHex {
t.Errorf("obs without raw_hex: got %q, want %q (tx fallback)", rh, txHex)
}
}
}
}
+3 -101
@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -164,8 +165,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
@@ -441,106 +443,6 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
}, nil
}
// HexRange represents a labeled byte range for the hex breakdown visualization.
type HexRange struct {
Start int `json:"start"`
End int `json:"end"`
Label string `json:"label"`
}
// Breakdown holds colored byte ranges returned by the packet detail endpoint.
type Breakdown struct {
Ranges []HexRange `json:"ranges"`
}
// BuildBreakdown computes labeled byte ranges for each section of a MeshCore packet.
// The returned ranges are consumed by createColoredHexDump() and buildHexLegend()
// in the frontend (public/app.js).
func BuildBreakdown(hexString string) *Breakdown {
hexString = strings.ReplaceAll(hexString, " ", "")
hexString = strings.ReplaceAll(hexString, "\n", "")
hexString = strings.ReplaceAll(hexString, "\r", "")
buf, err := hex.DecodeString(hexString)
if err != nil || len(buf) < 2 {
return &Breakdown{Ranges: []HexRange{}}
}
var ranges []HexRange
offset := 0
// Byte 0: Header
ranges = append(ranges, HexRange{Start: 0, End: 0, Label: "Header"})
offset = 1
header := decodeHeader(buf[0])
// Bytes 1-4: Transport Codes (TRANSPORT_FLOOD / TRANSPORT_DIRECT only)
if isTransportRoute(header.RouteType) {
if len(buf) < offset+4 {
return &Breakdown{Ranges: ranges}
}
ranges = append(ranges, HexRange{Start: offset, End: offset + 3, Label: "Transport Codes"})
offset += 4
}
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
// Next byte: Path Length (bits 7-6 = hashSize-1, bits 5-0 = hashCount)
ranges = append(ranges, HexRange{Start: offset, End: offset, Label: "Path Length"})
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
pathBytes := hashSize * hashCount
// Path hops
if hashCount > 0 && offset+pathBytes <= len(buf) {
ranges = append(ranges, HexRange{Start: offset, End: offset + pathBytes - 1, Label: "Path"})
}
offset += pathBytes
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
payloadStart := offset
// Payload — break ADVERT into named sub-fields; everything else is one Payload range
if header.PayloadType == PayloadADVERT && len(buf)-payloadStart >= 100 {
ranges = append(ranges, HexRange{Start: payloadStart, End: payloadStart + 31, Label: "PubKey"})
ranges = append(ranges, HexRange{Start: payloadStart + 32, End: payloadStart + 35, Label: "Timestamp"})
ranges = append(ranges, HexRange{Start: payloadStart + 36, End: payloadStart + 99, Label: "Signature"})
appStart := payloadStart + 100
if appStart < len(buf) {
ranges = append(ranges, HexRange{Start: appStart, End: appStart, Label: "Flags"})
appFlags := buf[appStart]
fOff := appStart + 1
if appFlags&0x10 != 0 && fOff+8 <= len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: fOff + 3, Label: "Latitude"})
ranges = append(ranges, HexRange{Start: fOff + 4, End: fOff + 7, Label: "Longitude"})
fOff += 8
}
if appFlags&0x20 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x40 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x80 != 0 && fOff < len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: len(buf) - 1, Label: "Name"})
}
}
} else {
ranges = append(ranges, HexRange{Start: payloadStart, End: len(buf) - 1, Label: "Payload"})
}
return &Breakdown{Ranges: ranges}
}
// ComputeContentHash computes the SHA-256-based content hash (first 16 hex chars).
// It hashes the payload-type nibble + payload (skipping path bytes) to produce a
// route-independent identifier for the same logical packet. For TRACE packets,
-140
@@ -97,146 +97,6 @@ func TestDecodePacket_FloodHasNoCodes(t *testing.T) {
}
}
func TestBuildBreakdown_InvalidHex(t *testing.T) {
b := BuildBreakdown("not-hex!")
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for invalid hex, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_TooShort(t *testing.T) {
b := BuildBreakdown("11") // 1 byte — no path byte
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for too-short packet, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_FloodNonAdvert(t *testing.T) {
// Header 0x15: route=1/FLOOD, payload=5/GRP_TXT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: FF0011
b := BuildBreakdown("1501AAFFFF00")
labels := rangeLabels(b.Ranges)
expect := []string{"Header", "Path Length", "Path", "Payload"}
if !equalLabels(labels, expect) {
t.Errorf("expected labels %v, got %v", expect, labels)
}
// Verify byte positions
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "Payload", 3, 5)
}
func TestBuildBreakdown_TransportFlood(t *testing.T) {
// Header 0x14: route=0/TRANSPORT_FLOOD, payload=5/GRP_TXT
// TransportCodes: AABBCCDD (4 bytes)
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: EE
// Payload: FF00
b := BuildBreakdown("14AABBCCDD01EEFF00")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Transport Codes", 1, 4)
assertRange(t, b.Ranges, "Path Length", 5, 5)
assertRange(t, b.Ranges, "Path", 6, 6)
assertRange(t, b.Ranges, "Payload", 7, 8)
}
func TestBuildBreakdown_FloodNoHops(t *testing.T) {
// Header 0x15: FLOOD/GRP_TXT; PathByte 0x00: 0 hops; Payload: AABB
b := BuildBreakdown("150000AABB")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
// No Path range since hashCount=0
for _, r := range b.Ranges {
if r.Label == "Path" {
t.Error("expected no Path range for zero-hop packet")
}
}
assertRange(t, b.Ranges, "Payload", 2, 4)
}
func TestBuildBreakdown_AdvertBasic(t *testing.T) {
// Header 0x11: FLOOD/ADVERT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: 100 bytes (PubKey32 + Timestamp4 + Signature64) + Flags=0x02 (repeater, no extras)
pubkey := repeatHex("AB", 32)
ts := "00000000" // 4 bytes
sig := repeatHex("CD", 64)
flags := "02"
hex := "1101AA" + pubkey + ts + sig + flags
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "PubKey", 3, 34)
assertRange(t, b.Ranges, "Timestamp", 35, 38)
assertRange(t, b.Ranges, "Signature", 39, 102)
assertRange(t, b.Ranges, "Flags", 103, 103)
}
func TestBuildBreakdown_AdvertWithLocation(t *testing.T) {
// flags=0x12: hasLocation bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "12" // 0x10 = hasLocation
latBytes := "00000000"
lonBytes := "00000000"
hex := "1101AA" + pubkey + ts + sig + flags + latBytes + lonBytes
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Latitude", 104, 107)
assertRange(t, b.Ranges, "Longitude", 108, 111)
}
func TestBuildBreakdown_AdvertWithName(t *testing.T) {
// flags=0x82: hasName bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "82" // 0x80 = hasName
name := "4E6F6465" // "Node" in hex
hex := "1101AA" + pubkey + ts + sig + flags + name
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Name", 104, 107)
}
// helpers
func rangeLabels(ranges []HexRange) []string {
out := make([]string, len(ranges))
for i, r := range ranges {
out[i] = r.Label
}
return out
}
func equalLabels(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func assertRange(t *testing.T, ranges []HexRange, label string, wantStart, wantEnd int) {
t.Helper()
for _, r := range ranges {
if r.Label == label {
if r.Start != wantStart || r.End != wantEnd {
t.Errorf("range %q: want [%d,%d], got [%d,%d]", label, wantStart, wantEnd, r.Start, r.End)
}
return
}
}
t.Errorf("range %q not found in %v", label, rangeLabels(ranges))
}
func TestZeroHopDirectHashSize(t *testing.T) {
// DIRECT (RouteType=2) + REQ (PayloadType=0) → header byte = 0x02
+4
@@ -14,6 +14,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+18 -18
@@ -12,9 +12,9 @@ import (
func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
// Node A at lat=45, lon=-122. Candidate B1 at lat=45.1, lon=-122.1 (close).
// Candidate B2 at lat=10, lon=10 (far away). Prefix "b0" matches both.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -62,8 +62,8 @@ func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
// Test 2: Ambiguous edge merged with existing resolved edge (count accumulation).
func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
@@ -133,9 +133,9 @@ func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
// Test 3: Ambiguous edge left as-is when resolution fails.
func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
// Two candidates, neither has GPS, no affinity data — resolution falls through.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "B1"}
nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "B2"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "B1"}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "B2"}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -175,7 +175,7 @@ func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
// Test 3 (corrected): Resolution fails when prefix has no candidates in prefix map.
func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
// pm has no entries matching prefix "zz"
pm := buildPrefixMap([]nodeInfo{nodeA})
@@ -215,8 +215,8 @@ func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
// Test 6: Phase 1 edge collection unchanged (no regression).
func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
// Build a simple graph and verify non-ambiguous edges are not touched.
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
ts := time.Now().UTC().Format(time.RFC3339)
payloadType := 4
@@ -232,7 +232,7 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
Observations: obs,
}
store := ngTestStore([]nodeInfo{nodeA, nodeB, {PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
store := ngTestStore([]nodeInfo{nodeA, nodeB, {Role: "repeater", PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
graph := BuildFromStore(store)
edges := graph.Neighbors("aaaa1111")
@@ -255,8 +255,8 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
// Test 7: Merge preserves higher LastSeen timestamp.
func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
graph := NewNeighborGraph()
@@ -307,10 +307,10 @@ func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
// Test 5: Integration — node with both 1-byte and 2-byte prefix observations shows single entry.
func TestIntegration_DualPrefixSingleNeighbor(t *testing.T) {
nodeA := nodeInfo{PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
observer := nodeInfo{PublicKey: "cccc3333cccc3333", Name: "Observer"}
nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
observer := nodeInfo{Role: "repeater", PublicKey: "cccc3333cccc3333", Name: "Observer"}
ts := time.Now().UTC().Format(time.RFC3339)
pt := 4
+63 -63
@@ -86,9 +86,9 @@ func TestBuildNeighborGraph_EmptyStore(t *testing.T) {
func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
// ADVERT from X, path=["R1_prefix"] → edges: X↔R1 and Observer↔R1
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa"]`, nowStr, ngFloatPtr(-10)),
@@ -132,10 +132,10 @@ func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
// ADVERT from X, path=["R1","R2"] → X↔R1 and Observer↔R2
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -170,8 +170,8 @@ func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
// ADVERT from X, path=[] → X↔Observer direct edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -195,8 +195,8 @@ func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
// Non-ADVERT, path=[] → no edges
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -212,10 +212,10 @@ func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
// Non-ADVERT with path=["R1","R2"] → only Observer↔R2, NO originator edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -236,9 +236,9 @@ func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
// Non-ADVERT with path=["R1"] → Observer↔R1 only
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa"]`, nowStr, nil),
@@ -259,10 +259,10 @@ func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
func TestBuildNeighborGraph_HashCollision(t *testing.T) {
// Two nodes share prefix "a3" → ambiguous edge
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "a3bb1111", Name: "CandidateA"},
{PublicKey: "a3bb2222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "a3bb1111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3bb2222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["a3bb"]`, nowStr, nil),
@@ -308,13 +308,13 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
// CandidateB has no known neighbors (Jaccard = 0).
// An ambiguous edge X↔prefix "a3" with candidates [A, B] should auto-resolve to A.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "n1111111", Name: "N1"},
{PublicKey: "n2222222", Name: "N2"},
{PublicKey: "n3333333", Name: "N3"},
{PublicKey: "a3001111", Name: "CandidateA"},
{PublicKey: "a3002222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
{Role: "repeater", PublicKey: "n2222222", Name: "N2"},
{Role: "repeater", PublicKey: "n3333333", Name: "N3"},
{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
// Create resolved edges: X↔N1, X↔N2, X↔N3, A↔N1, A↔N2, A↔N3
@@ -373,11 +373,11 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
// Two candidates with identical neighbor sets → should NOT auto-resolve.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "n1111111", Name: "N1"},
{PublicKey: "a3001111", Name: "CandidateA"},
{PublicKey: "a3002222", Name: "CandidateB"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
var txs []*StoreTx
@@ -425,8 +425,8 @@ func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
// Observer's own prefix in path → should NOT create self-edge.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["obs0"]`, nowStr, nil),
@@ -445,8 +445,8 @@ func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
func TestBuildNeighborGraph_OrphanPrefix(t *testing.T) {
// Path contains prefix matching zero nodes → edge recorded as unresolved.
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["ff99"]`, nowStr, nil),
@@ -506,9 +506,9 @@ func TestAffinityScore_StaleAndLow(t *testing.T) {
func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
var txs []*StoreTx
@@ -535,10 +535,10 @@ func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Obs1"},
{PublicKey: "obs00002", Name: "Obs2"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Obs1"},
{Role: "repeater", PublicKey: "obs00002", Name: "Obs2"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -565,9 +565,9 @@ func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -592,10 +592,10 @@ func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
func TestBuildNeighborGraph_ADVERTOnlyConstraint(t *testing.T) {
// Non-ADVERT: should NOT create originator↔path[0] edge, only observer↔path[last].
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -631,9 +631,9 @@ func ngPubKeyJSON(pubkey string) string {
func TestBuildNeighborGraph_AdvertPubKeyField(t *testing.T) {
// Real ADVERTs use "pubKey", not "from_node". Verify the builder handles it.
nodes := []nodeInfo{
{PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
{Role: "repeater", PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{Role: "repeater", PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{Role: "repeater", PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngPubKeyJSON("99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234"), []*StoreObs{
ngMakeObs("obs0000100112233445566778899001122334455667788990011223344556677", `["r1"]`, nowStr, ngFloatPtr(-8.5)),
@@ -666,10 +666,10 @@ func TestBuildNeighborGraph_OneByteHashPrefixes(t *testing.T) {
// Real-world scenario: 1-byte hash prefixes with multiple candidates.
// Should create edges (possibly ambiguous) rather than empty graph.
nodes := []nodeInfo{
{PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
{Role: "repeater", PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{Role: "repeater", PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{Role: "repeater", PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{Role: "repeater", PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
}
// ADVERT from Originator with 1-byte path hop "c0"
tx := ngMakeTx(1, 4, ngPubKeyJSON("a3bbccdd00000000000000000000000000000000000000000000000000000003"), []*StoreObs{
@@ -809,10 +809,10 @@ func TestExtractFromNode_UsesCachedParse(t *testing.T) {
func BenchmarkBuildFromStore(b *testing.B) {
// Simulate a dataset with many packets and repeated pubkeys
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeA"},
{PublicKey: "bbbb2222", Name: "NodeB"},
{PublicKey: "cccc3333", Name: "NodeC"},
{PublicKey: "dddd4444", Name: "NodeD"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"},
{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB"},
{Role: "repeater", PublicKey: "cccc3333", Name: "NodeC"},
{Role: "repeater", PublicKey: "dddd4444", Name: "NodeD"},
}
const numPackets = 1000
packets := make([]*StoreTx, 0, numPackets)
+7 -7
@@ -38,7 +38,7 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT,
resolved_path TEXT
resolved_path TEXT, raw_hex TEXT
)`)
conn.Exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
@@ -58,8 +58,8 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
func TestResolvePathForObs(t *testing.T) {
// Build a prefix map with known nodes
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
}
pm := buildPrefixMap(nodes)
graph := NewNeighborGraph()
@@ -97,7 +97,7 @@ func TestResolvePathForObs_EmptyPath(t *testing.T) {
func TestResolvePathForObs_Unresolvable(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
}
pm := buildPrefixMap(nodes)
@@ -264,7 +264,7 @@ func TestEnsureResolvedPathColumn(t *testing.T) {
conn, _ := sql.Open("sqlite", "file:"+dbPath+"?_journal_mode=WAL")
conn.Exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER,
observer_id TEXT, path_json TEXT, timestamp TEXT
observer_id TEXT, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
conn.Close()
@@ -437,8 +437,8 @@ func TestExtractEdgesFromObs_NonAdvertNoPath(t *testing.T) {
func TestExtractEdgesFromObs_WithPath(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
}
pm := buildPrefixMap(nodes)
+212
@@ -0,0 +1,212 @@
package main
import (
"encoding/json"
"testing"
)
func TestCanAppearInPath(t *testing.T) {
cases := []struct {
role string
want bool
}{
{"repeater", true},
{"Repeater", true},
{"REPEATER", true},
{"room_server", true},
{"Room_Server", true},
{"room", true},
{"companion", false},
{"sensor", false},
{"", false},
{"unknown", false},
}
for _, tc := range cases {
if got := canAppearInPath(tc.role); got != tc.want {
t.Errorf("canAppearInPath(%q) = %v, want %v", tc.role, got, tc.want)
}
}
}
func TestBuildPrefixMap_ExcludesCompanions(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestBuildPrefixMap_ExcludesSensors(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestResolveWithContext_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil, got %+v", r)
}
}
func TestResolveWithContext_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PrefersRepeaterOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "7a5678901234", Role: "repeater", Name: "MyRepeater"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRepeater" {
t.Fatalf("expected MyRepeater, got %s", r.Name)
}
}
func TestResolveWithContext_PrefersRoomServerOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "ab1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "ab5678901234", Role: "room_server", Name: "MyRoom"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("ab", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRoom" {
t.Fatalf("expected MyRoom, got %s", r.Name)
}
}
func TestResolve_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for companion-only prefix, got %+v", r)
}
}
func TestResolve_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PicksRepeaterEvenWhenCompanionHasGPS(t *testing.T) {
// Adversarial: companion has GPS, repeater doesn't. Role filter should
// exclude companion entirely, so repeater wins despite lacking GPS.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "GPSCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "NoGPSRepeater", Lat: 0, Lon: 0, HasGPS: false},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "NoGPSRepeater" {
t.Fatalf("expected NoGPSRepeater (role filter excludes companion), got %s", r.Name)
}
}
func TestComputeDistancesForTx_CompanionNeverInResolvedChain(t *testing.T) {
// Integration test: a path with a prefix matching both a companion and a
// repeater. The resolveHop function (using buildPrefixMap) should only
// return the repeater.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "BadCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "GoodRepeater", Lat: 38.0, Lon: -123.0, HasGPS: true},
{PublicKey: "bb1111111111", Role: "repeater", Name: "OtherRepeater", Lat: 39.0, Lon: -124.0, HasGPS: true},
}
pm := buildPrefixMap(nodes)
nodeByPk := make(map[string]*nodeInfo)
for i := range nodes {
nodeByPk[nodes[i].PublicKey] = &nodes[i]
}
repeaterSet := map[string]bool{
"7a5678901234": true,
"bb1111111111": true,
}
// Build a synthetic StoreTx with a path ["7a", "bb"] and a sender with GPS
senderPK := "cc0000000000"
sender := nodeInfo{PublicKey: senderPK, Role: "repeater", Name: "Sender", Lat: 36.0, Lon: -121.0, HasGPS: true}
nodeByPk[senderPK] = &sender
pathJSON, _ := json.Marshal([]string{"7a", "bb"})
decoded, _ := json.Marshal(map[string]interface{}{"pubKey": senderPK})
tx := &StoreTx{
PathJSON: string(pathJSON),
DecodedJSON: string(decoded),
FirstSeen: "2026-04-30T12:00",
}
resolveHop := func(hop string) *nodeInfo {
return pm.resolve(hop)
}
hops, pathRec := computeDistancesForTx(tx, nodeByPk, repeaterSet, resolveHop)
// Verify BadCompanion's pubkey never appears in hops
badPK := "7a1234abcdef"
for i, h := range hops {
if h.FromPk == badPK || h.ToPk == badPK {
t.Fatalf("hop[%d] contains BadCompanion pubkey: from=%s to=%s", i, h.FromPk, h.ToPk)
}
}
// Verify BadCompanion's pubkey never appears in pathRec
if pathRec == nil {
t.Fatal("expected non-nil path record (3 GPS nodes in chain)")
}
for i, hop := range pathRec.Hops {
if hop.FromPk == badPK || hop.ToPk == badPK {
t.Fatalf("pathRec.Hops[%d] contains BadCompanion pubkey: from=%s to=%s", i, hop.FromPk, hop.ToPk)
}
}
// Verify GoodRepeater IS in the chain (proves the prefix was resolved to the right node)
goodPK := "7a5678901234"
foundGood := false
for _, hop := range pathRec.Hops {
if hop.FromPk == goodPK || hop.ToPk == goodPK {
foundGood = true
break
}
}
if !foundGood {
t.Fatal("expected GoodRepeater (7a5678901234) in pathRec.Hops but not found")
}
}
+29 -29
@@ -11,7 +11,7 @@ import (
func TestResolveWithContext_UniquePrefix(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1b2c3d4", nil, nil)
if ni == nil || ni.Name != "Node-A" {
@@ -24,7 +24,7 @@ func TestResolveWithContext_UniquePrefix(t *testing.T) {
func TestResolveWithContext_NoMatch(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A"},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A"},
})
ni, confidence, _ := pm.resolveWithContext("ff", nil, nil)
if ni != nil {
@@ -37,8 +37,8 @@ func TestResolveWithContext_NoMatch(t *testing.T) {
func TestResolveWithContext_AffinityWins(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1"},
{PublicKey: "a1bbbbbb", Name: "Node-A2"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2"},
})
graph := NewNeighborGraph()
@@ -60,9 +60,9 @@ func TestResolveWithContext_AffinityWins(t *testing.T) {
func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{Role: "repeater", PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
})
graph := NewNeighborGraph()
@@ -85,8 +85,8 @@ func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
func TestResolveWithContext_GPSPreference(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -100,8 +100,8 @@ func TestResolveWithContext_GPSPreference(t *testing.T) {
func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "First"},
{PublicKey: "a1bbbbbb", Name: "Second"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "First"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Second"},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -115,8 +115,8 @@ func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", []string{"someone"}, nil)
@@ -131,8 +131,8 @@ func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
func TestResolveWithContext_BackwardCompatResolve(t *testing.T) {
// Verify original resolve() still works unchanged
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni := pm.resolve("a1")
if ni == nil || ni.Name != "HasGPS" {
@@ -164,8 +164,8 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
_ = srv
// Insert a unique node
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ff11223344", "UniqueNode", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ff11223344", "UniqueNode", 37.0, -122.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ff11223344", nil)
@@ -189,10 +189,10 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ee1aaaaaaa", "Node-E1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"ee1bbbbbbb", "Node-E2", 38.0, -121.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ee1aaaaaaa", "Node-E1", 37.0, -122.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"ee1bbbbbbb", "Node-E2", 38.0, -121.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ee1", nil)
@@ -224,12 +224,12 @@ func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"dd1aaaaaaa", "Node-D1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"dd1bbbbbbb", "Node-D2", 38.0, -121.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"c0c0c0c0c0", "Context", 37.1, -122.1)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"dd1aaaaaaa", "Node-D1", 37.0, -122.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"dd1bbbbbbb", "Node-D2", 38.0, -121.0, "repeater")
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"c0c0c0c0c0", "Context", 37.1, -122.1, "repeater")
// Invalidate node cache so the PM includes newly inserted nodes.
srv.store.cacheMu.Lock()
@@ -279,8 +279,8 @@ func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
func TestResolveHopsAPI_ResponseShape(t *testing.T) {
srv, router := setupTestServer(t)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
"bb1aaaaaaa", "Node-B1", 37.0, -122.0)
srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
"bb1aaaaaaa", "Node-B1", 37.0, -122.0, "repeater")
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=bb1a", nil)
rr := httptest.NewRecorder()
+15 -4
@@ -16,6 +16,7 @@ import (
"time"
"github.com/gorilla/mux"
"github.com/meshcore-analyzer/packetpath"
)
// Server holds shared state for route handlers.
@@ -957,11 +958,9 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
pathHops = []interface{}{}
}
rawHex, _ := packet["raw_hex"].(string)
writeJSON(w, PacketDetailResponse{
Packet: packet,
Path: pathHops,
Breakdown: BuildBreakdown(rawHex),
ObservationCount: observationCount,
Observations: mapSliceToObservations(observations),
})
@@ -1020,8 +1019,17 @@ func (s *Server) handlePostPacket(w http.ResponseWriter, r *http.Request) {
contentHash := ComputeContentHash(hexStr)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
pathJSON = string(pj)
}
}
} else if hops, err := packetpath.DecodePathFromRawHex(hexStr); err == nil && len(hops) > 0 {
if pj, e := json.Marshal(hops); e == nil {
pathJSON = string(pj)
}
}
@@ -2386,6 +2394,9 @@ func mapSliceToObservations(maps []map[string]interface{}) []ObservationResp {
obs.SNR = m["snr"]
obs.RSSI = m["rssi"]
obs.PathJSON = m["path_json"]
obs.ResolvedPath = m["resolved_path"]
obs.Direction = m["direction"]
obs.RawHex = m["raw_hex"]
obs.Timestamp = m["timestamp"]
result = append(result, obs)
}
+61 -11
@@ -63,6 +63,7 @@ type StoreObs struct {
RSSI *float64
Score *int
PathJSON string
RawHex string
Timestamp string
}
@@ -458,6 +459,10 @@ func (s *PacketStore) Load() error {
if s.db.hasResolvedPath {
rpCol = ",\n\t\t\t\to.resolved_path"
}
obsRawHexCol := ""
if s.db.hasObsRawHex {
obsRawHexCol = ", o.raw_hex"
}
limitClause := ""
if maxPackets > 0 {
@@ -469,7 +474,7 @@ func (s *PacketStore) Load() error {
loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + rpCol + `
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRawHexCol + rpCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx` + limitClause + `
@@ -478,7 +483,7 @@ func (s *PacketStore) Load() error {
loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + rpCol + `
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRawHexCol + rpCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id` + limitClause + `
ORDER BY t.first_seen ASC, o.timestamp DESC`
@@ -500,12 +505,16 @@ func (s *PacketStore) Load() error {
var observerID, observerName, direction, pathJSON, obsTimestamp sql.NullString
var snr, rssi sql.NullFloat64
var score sql.NullInt64
var obsRawHex sql.NullString
var resolvedPathStr sql.NullString
scanArgs := []interface{}{&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
&payloadVersion, &decodedJSON,
&obsID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &obsTimestamp}
if s.db.hasObsRawHex {
scanArgs = append(scanArgs, &obsRawHex)
}
if s.db.hasResolvedPath {
scanArgs = append(scanArgs, &resolvedPathStr)
}
@@ -565,6 +574,7 @@ func (s *PacketStore) Load() error {
RSSI: nullFloatPtr(rssi),
Score: nullIntPtr(score),
PathJSON: obsPJ,
RawHex: nullStrVal(obsRawHex),
Timestamp: normalizeTimestamp(nullStrVal(obsTimestamp)),
}
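// Illustrative sketch, not part of the changeset: the optional-column pattern
// used in the hunks above. The SELECT suffix and the scan destinations must
// grow in the same order (raw_hex before resolved_path here) — rows.Scan binds
// purely by position, so a mismatch would silently put values in the wrong
// fields. Names are hypothetical; only database/sql is assumed.
func optionalObsColumnsSketch(hasObsRawHex, hasResolvedPath bool) (string, []interface{}) {
    var pathJSON, obsRawHex, resolvedPath sql.NullString
    cols := "o.path_json"
    dest := []interface{}{&pathJSON}
    if hasObsRawHex {
        cols += ", o.raw_hex"
        dest = append(dest, &obsRawHex)
    }
    if hasResolvedPath {
        cols += ", o.resolved_path"
        dest = append(dest, &resolvedPath)
    }
    return cols, dest // later: rows.Scan(dest...)
}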
@@ -1384,11 +1394,15 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
// New ingests always resolve fresh using the current prefix map and neighbor graph.
// On restart, Load() handles reading persisted resolved_path values. (review item #7)
var querySQL string
obsRHCol := ""
if s.db.hasObsRawHex {
obsRHCol = ", o.raw_hex"
}
if s.db.isV3 {
querySQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRHCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
@@ -1398,7 +1412,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
querySQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRHCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
WHERE t.id > ?
@@ -1419,6 +1433,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
routeType, payloadType *int
obsID *int
observerID, observerName, direction, pathJSON, obsTS string
obsRawHex string
snr, rssi *float64
score *int
}
@@ -1435,11 +1450,16 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
var observerID, observerName, direction, pathJSON, obsTimestamp sql.NullString
var snrVal, rssiVal sql.NullFloat64
var scoreVal sql.NullInt64
var obsRawHex sql.NullString
if err := rows.Scan(&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
scanArgs2 := []interface{}{&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
&payloadVersion, &decodedJSON,
&obsIDVal, &observerID, &observerName, &direction,
&snrVal, &rssiVal, &scoreVal, &pathJSON, &obsTimestamp); err != nil {
&snrVal, &rssiVal, &scoreVal, &pathJSON, &obsTimestamp}
if s.db.hasObsRawHex {
scanArgs2 = append(scanArgs2, &obsRawHex)
}
if err := rows.Scan(scanArgs2...); err != nil {
continue
}
@@ -1464,6 +1484,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
direction: nullStrVal(direction),
pathJSON: nullStrVal(pathJSON),
obsTS: nullStrVal(obsTimestamp),
obsRawHex: nullStrVal(obsRawHex),
snr: nullFloatPtr(snrVal),
rssi: nullFloatPtr(rssiVal),
score: nullIntPtr(scoreVal),
@@ -1564,6 +1585,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
RSSI: r.rssi,
Score: r.score,
PathJSON: r.pathJSON,
RawHex: r.obsRawHex,
Timestamp: normalizeTimestamp(r.obsTS),
}
@@ -1806,9 +1828,13 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
}
var querySQL string
obsRHCol2 := ""
if s.db.hasObsRawHex {
obsRHCol2 = ", o.raw_hex"
}
if s.db.isV3 {
querySQL = `SELECT o.id, o.transmission_id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRHCol2 + `
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
WHERE o.id > ?
@@ -1816,7 +1842,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
LIMIT ?`
} else {
querySQL = `SELECT o.id, o.transmission_id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRHCol2 + `
FROM observations o
WHERE o.id > ?
ORDER BY o.id ASC
@@ -1839,6 +1865,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
snr, rssi *float64
score *int
pathJSON string
rawHex string
timestamp string
}
@@ -1848,9 +1875,14 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
var observerID, observerName, direction, pathJSON, ts sql.NullString
var snr, rssi sql.NullFloat64
var score sql.NullInt64
var obsRawHex sql.NullString
if err := rows.Scan(&oid, &txID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &ts); err != nil {
scanArgs3 := []interface{}{&oid, &txID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &ts}
if s.db.hasObsRawHex {
scanArgs3 = append(scanArgs3, &obsRawHex)
}
if err := rows.Scan(scanArgs3...); err != nil {
continue
}
@@ -1864,6 +1896,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
rssi: nullFloatPtr(rssi),
score: nullIntPtr(score),
pathJSON: nullStrVal(pathJSON),
rawHex: nullStrVal(obsRawHex),
timestamp: nullStrVal(ts),
})
}
@@ -1919,6 +1952,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
RSSI: r.rssi,
Score: r.score,
PathJSON: r.pathJSON,
RawHex: r.rawHex,
Timestamp: normalizeTimestamp(r.timestamp),
}
@@ -2408,7 +2442,12 @@ func (s *PacketStore) enrichObs(obs *StoreObs) map[string]interface{} {
if tx != nil {
m["hash"] = strOrNil(tx.Hash)
m["raw_hex"] = strOrNil(tx.RawHex)
// Prefer per-observation raw_hex; fall back to transmission-level (#881)
if obs.RawHex != "" {
m["raw_hex"] = obs.RawHex
} else {
m["raw_hex"] = strOrNil(tx.RawHex)
}
m["payload_type"] = intPtrOrNil(tx.PayloadType)
m["route_type"] = intPtrOrNil(tx.RouteType)
m["decoded_json"] = strOrNil(tx.DecodedJSON)
@@ -4512,9 +4551,20 @@ type prefixMap struct {
// entries to ~7×N (+ 1 full-key entry per node for exact-match lookups).
const maxPrefixLen = 8
// canAppearInPath returns true if the node's role allows it to appear as a
// path hop. Only repeaters, room servers, and rooms can forward packets;
// companions and sensors originate but never relay.
func canAppearInPath(role string) bool {
r := strings.ToLower(role)
return strings.Contains(r, "repeater") || strings.Contains(r, "room_server") || r == "room"
}
func buildPrefixMap(nodes []nodeInfo) *prefixMap {
pm := &prefixMap{m: make(map[string][]nodeInfo, len(nodes)*(maxPrefixLen+1))}
for _, n := range nodes {
if !canAppearInPath(n.Role) {
continue
}
pk := strings.ToLower(n.PublicKey)
maxLen := maxPrefixLen
if maxLen > len(pk) {
+3 -1
@@ -277,6 +277,9 @@ type ObservationResp struct {
SNR interface{} `json:"snr"`
RSSI interface{} `json:"rssi"`
PathJSON interface{} `json:"path_json"`
ResolvedPath interface{} `json:"resolved_path,omitempty"`
Direction interface{} `json:"direction,omitempty"`
RawHex interface{} `json:"raw_hex,omitempty"`
Timestamp interface{} `json:"timestamp"`
}
@@ -312,7 +315,6 @@ type PacketTimestampsResponse struct {
type PacketDetailResponse struct {
Packet interface{} `json:"packet"`
Path []interface{} `json:"path"`
Breakdown *Breakdown `json:"breakdown"`
ObservationCount int `json:"observation_count"`
Observations []ObservationResp `json:"observations,omitempty"`
}
+3
@@ -0,0 +1,3 @@
module github.com/meshcore-analyzer/packetpath
go 1.22
+76
@@ -0,0 +1,76 @@
// Package packetpath provides shared helpers for extracting path hops from
// raw MeshCore packet hex bytes.
package packetpath
import (
"encoding/hex"
"fmt"
"strings"
)
// DecodePathFromRawHex extracts the header path hops directly from raw hex bytes.
// This is the authoritative path that matches what's in raw_hex, as opposed to
// decoded.Path.Hops which may be overwritten for TRACE packets (issue #886).
//
// WARNING: This function returns the literal header path bytes regardless of
// payload type. For TRACE packets these bytes are SNR values, NOT hop hashes.
// Callers that may receive TRACE packets MUST check PathBytesAreHops(payloadType)
// first, or use the safer DecodeHopsForPayload wrapper.
func DecodePathFromRawHex(rawHex string) ([]string, error) {
buf, err := hex.DecodeString(rawHex)
if err != nil || len(buf) < 2 {
return nil, fmt.Errorf("invalid or too-short hex")
}
headerByte := buf[0]
offset := 1
if IsTransportRoute(int(headerByte & 0x03)) {
if len(buf) < offset+4 {
return nil, fmt.Errorf("too short for transport codes")
}
offset += 4
}
if offset >= len(buf) {
return nil, fmt.Errorf("too short for path byte")
}
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
hops := make([]string, 0, hashCount)
for i := 0; i < hashCount; i++ {
start := offset + i*hashSize
end := start + hashSize
if end > len(buf) {
break
}
hops = append(hops, strings.ToUpper(hex.EncodeToString(buf[start:end])))
}
return hops, nil
}
// DecodeHopsForPayload returns the header path hops only when the payload type's
// header bytes are actually route hops (i.e. PathBytesAreHops(payloadType) is true).
// For TRACE packets it returns (nil, ErrPayloadHasNoHeaderHops) so the caller is
// forced to source hops from the decoded payload instead.
//
// Prefer this over DecodePathFromRawHex when the payload type is known.
func DecodeHopsForPayload(rawHex string, payloadType byte) ([]string, error) {
if !PathBytesAreHops(payloadType) {
return nil, ErrPayloadHasNoHeaderHops
}
return DecodePathFromRawHex(rawHex)
}
// ErrPayloadHasNoHeaderHops is returned by DecodeHopsForPayload when the
// payload type repurposes the raw_hex header path bytes (e.g. TRACE → SNR values).
var ErrPayloadHasNoHeaderHops = errPayloadHasNoHeaderHops{}
type errPayloadHasNoHeaderHops struct{}
func (errPayloadHasNoHeaderHops) Error() string {
return "payload type repurposes header path bytes; source hops from decoded payload"
}
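// Illustrative caller-side sketch, not part of the changeset (shown here
// unqualified for brevity): how a consumer such as handlePostPacket can lean
// on this package so TRACE SNR bytes never end up in path_json. The name
// hopsForPathJSON is hypothetical, and decodedHops stands in for
// decoded.Path.Hops.
func hopsForPathJSON(rawHex string, payloadType byte, decodedHops []string) []string {
    hops, err := DecodeHopsForPayload(rawHex, payloadType)
    if err != nil || len(hops) == 0 {
        // TRACE (ErrPayloadHasNoHeaderHops) or malformed hex: fall back to the
        // payload-decoded route hops.
        return decodedHops
    }
    return hops
}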
+150
@@ -0,0 +1,150 @@
package packetpath
import (
"encoding/hex"
"encoding/json"
"strings"
"testing"
)
func TestDecodePathFromRawHex_Basic(t *testing.T) {
// Build a simple FLOOD packet (route_type=1) with 2 hops of hashSize=1
// header: route_type=1, payload_type=2 (TXT_MSG), version=0 → 0b00_0010_01 = 0x09
// path byte: hashSize=1 (bits 7-6 = 0), hashCount=2 (bits 5-0 = 2) → 0x02
// hops: AB, CD
// payload: some bytes
raw := "0902ABCD" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AB" || hops[1] != "CD" {
t.Fatalf("expected [AB, CD], got %v", hops)
}
}
func TestDecodePathFromRawHex_ZeroHops(t *testing.T) {
// DIRECT route (type=2), no hops → 0b00_0010_10 = 0x0A
// path byte: 0x00 (0 hops)
raw := "0A00" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 0 {
t.Fatalf("expected 0 hops, got %v", hops)
}
}
func TestDecodePathFromRawHex_TransportRoute(t *testing.T) {
// TRANSPORT_FLOOD (route_type=0), payload_type=5 (GRP_TXT), version=0
// header: 0b00_0101_00 = 0x14
// transport codes: 4 bytes
// path byte: hashSize=1, hashCount=1 → 0x01
// hop: FF
raw := "14" + "00112233" + "01" + "FF" + "DEAD"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 1 || hops[0] != "FF" {
t.Fatalf("expected [FF], got %v", hops)
}
}
// buildTracePacket creates a TRACE packet hex string where header path bytes are
// SNR values, and payload contains the actual route hops.
func buildTracePacket() (rawHex string, headerPathHops []string, payloadHops []string) {
// DIRECT route (type=2), TRACE payload (type=9), version=0
// header byte: 0b00_1001_10 = 0x26
headerByte := byte(0x26)
// Header path: 2 SNR bytes (hashSize=1, hashCount=2) → path byte = 0x02
// SNR values: 0x1A (26 dB), 0x0F (15 dB)
pathByte := byte(0x02)
snrBytes := []byte{0x1A, 0x0F}
// TRACE payload: tag(4) + authCode(4) + flags(1) + path hops
tag := []byte{0x01, 0x00, 0x00, 0x00}
authCode := []byte{0x02, 0x00, 0x00, 0x00}
// flags: path_sz=0 (1 byte hops), other bits=0 → 0x00
flags := byte(0x00)
// Payload hops: AA, BB, CC (the actual route)
payloadPathBytes := []byte{0xAA, 0xBB, 0xCC}
var buf []byte
buf = append(buf, headerByte, pathByte)
buf = append(buf, snrBytes...)
buf = append(buf, tag...)
buf = append(buf, authCode...)
buf = append(buf, flags)
buf = append(buf, payloadPathBytes...)
rawHex = strings.ToUpper(hex.EncodeToString(buf))
headerPathHops = []string{"1A", "0F"} // SNR values — NOT route hops
payloadHops = []string{"AA", "BB", "CC"} // actual route hops from payload
return
}
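// For reference (not part of the changeset): with the values above the helper
// yields the 16-byte packet
//   26 02 1A 0F 01000000 02000000 00 AA BB CC
// i.e. rawHex "26021A0F010000000200000000AABBCC" — header, path byte, the two
// SNR bytes, then the TRACE payload carrying the real route hops AA BB CC.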
func TestDecodePathFromRawHex_TraceReturnsSNR(t *testing.T) {
rawHex, expectedSNR, _ := buildTracePacket()
hops, err := DecodePathFromRawHex(rawHex)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
// DecodePathFromRawHex always returns header path bytes — for TRACE these are SNR values
if len(hops) != len(expectedSNR) {
t.Fatalf("expected %d hops (SNR), got %d: %v", len(expectedSNR), len(hops), hops)
}
for i, h := range hops {
if h != expectedSNR[i] {
t.Errorf("hop[%d]: expected %s, got %s", i, expectedSNR[i], h)
}
}
}
func TestTracePathJSON_UsesPayloadHops(t *testing.T) {
// This test validates the TRACE vs non-TRACE logic that callers should implement:
// For TRACE: path_json = decoded.Path.Hops (payload-decoded route hops)
// For non-TRACE: path_json = DecodePathFromRawHex(raw_hex)
rawHex, snrHops, payloadHops := buildTracePacket()
// DecodePathFromRawHex returns SNR bytes for TRACE
headerHops, _ := DecodePathFromRawHex(rawHex)
headerJSON, _ := json.Marshal(headerHops)
// payload hops (what decoded.Path.Hops would return after TRACE decoding)
payloadJSON, _ := json.Marshal(payloadHops)
// They must differ — SNR != route hops
if string(headerJSON) == string(payloadJSON) {
t.Fatalf("SNR hops and payload hops should differ for TRACE; both are %s", headerJSON)
}
// For TRACE, path_json should be payloadHops, not headerHops
_ = snrHops // snrHops == headerHops — used for documentation
t.Logf("TRACE: header path (SNR) = %s, payload path (route) = %s", headerJSON, payloadJSON)
}
func TestDecodeHopsForPayload_NonTrace(t *testing.T) {
// header 0x01, path_len 0x02, hops 0xAA 0xBB, then payload bytes
raw := "0102AABB00"
hops, err := DecodeHopsForPayload(raw, 0x05) // GRP_TXT — header path bytes ARE hops
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AA" || hops[1] != "BB" {
t.Errorf("expected [AA BB], got %v", hops)
}
}
func TestDecodeHopsForPayload_TraceReturnsError(t *testing.T) {
raw := "010205F00100"
hops, err := DecodeHopsForPayload(raw, PayloadTRACE)
if err != ErrPayloadHasNoHeaderHops {
t.Errorf("expected ErrPayloadHasNoHeaderHops, got %v", err)
}
if hops != nil {
t.Errorf("expected nil hops for TRACE, got %v", hops)
}
}
+24
@@ -0,0 +1,24 @@
package packetpath
// Route type constants (header bits 1-0).
const (
RouteTransportFlood = 0
RouteFlood = 1
RouteDirect = 2
RouteTransportDirect = 3
)
// PayloadTRACE is the payload type constant for TRACE packets.
const PayloadTRACE = 0x09
// IsTransportRoute returns true for TRANSPORT_FLOOD (0) and TRANSPORT_DIRECT (3).
func IsTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
}
// PathBytesAreHops returns true when the raw_hex header path bytes represent
// route hop hashes (the normal case). Returns false for packet types where
// header path bytes are repurposed (e.g. TRACE uses them for SNR values).
func PathBytesAreHops(payloadType byte) bool {
return payloadType != PayloadTRACE
}
+31
@@ -0,0 +1,31 @@
package packetpath
import "testing"
func TestIsTransportRoute(t *testing.T) {
if !IsTransportRoute(RouteTransportFlood) {
t.Error("RouteTransportFlood should be transport")
}
if !IsTransportRoute(RouteTransportDirect) {
t.Error("RouteTransportDirect should be transport")
}
if IsTransportRoute(RouteFlood) {
t.Error("RouteFlood should not be transport")
}
if IsTransportRoute(RouteDirect) {
t.Error("RouteDirect should not be transport")
}
}
func TestPathBytesAreHops(t *testing.T) {
if PathBytesAreHops(PayloadTRACE) {
t.Error("PathBytesAreHops(PayloadTRACE) should be false")
}
// All other known payload types should return true.
otherTypes := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}
for _, pt := range otherTypes {
if !PathBytesAreHops(pt) {
t.Errorf("PathBytesAreHops(0x%02X) should be true", pt)
}
}
}
+65
@@ -14,6 +14,71 @@ function isTransportRoute(rt) { return rt === 0 || rt === 3; }
function getPathLenOffset(routeType) { return isTransportRoute(routeType) ? 5 : 1; }
function transportBadge(rt) { return isTransportRoute(rt) ? ' <span class="badge badge-transport" title="' + routeTypeName(rt) + '">T</span>' : ''; }
/**
* Compute breakdown byte ranges from raw_hex on the client.
* Mirrors cmd/server/decoder.go BuildBreakdown(). Used so per-observation raw_hex
* (which can differ in path length from the top-level packet) gets accurate
* highlighted byte ranges, instead of using the server-supplied breakdown
* computed once from the top-level raw_hex.
*/
function computeBreakdownRanges(hexString, routeType, payloadType) {
if (!hexString) return [];
const clean = hexString.replace(/\s+/g, '');
const bytes = clean.length / 2;
if (bytes < 2) return [];
const ranges = [];
// Header
ranges.push({ start: 0, end: 0, label: 'Header' });
let offset = 1;
if (isTransportRoute(routeType)) {
if (bytes < offset + 4) return ranges;
ranges.push({ start: offset, end: offset + 3, label: 'Transport Codes' });
offset += 4;
}
if (offset >= bytes) return ranges;
// Path Length byte
ranges.push({ start: offset, end: offset, label: 'Path Length' });
const pathByte = parseInt(clean.slice(offset * 2, offset * 2 + 2), 16);
offset += 1;
if (isNaN(pathByte)) return ranges;
const hashSize = (pathByte >> 6) + 1;
const hashCount = pathByte & 0x3F;
const pathBytes = hashSize * hashCount;
if (hashCount > 0 && offset + pathBytes <= bytes) {
ranges.push({ start: offset, end: offset + pathBytes - 1, label: 'Path' });
}
offset += pathBytes;
if (offset >= bytes) return ranges;
const payloadStart = offset;
// ADVERT (payload_type 4) gets sub-fields when full record present
if (payloadType === 4 && bytes - payloadStart >= 100) {
ranges.push({ start: payloadStart, end: payloadStart + 31, label: 'PubKey' });
ranges.push({ start: payloadStart + 32, end: payloadStart + 35, label: 'Timestamp' });
ranges.push({ start: payloadStart + 36, end: payloadStart + 99, label: 'Signature' });
const appStart = payloadStart + 100;
if (appStart < bytes) {
ranges.push({ start: appStart, end: appStart, label: 'Flags' });
const appFlags = parseInt(clean.slice(appStart * 2, appStart * 2 + 2), 16);
let fOff = appStart + 1;
if (!isNaN(appFlags)) {
if ((appFlags & 0x10) && fOff + 8 <= bytes) {
ranges.push({ start: fOff, end: fOff + 3, label: 'Latitude' });
ranges.push({ start: fOff + 4, end: fOff + 7, label: 'Longitude' });
fOff += 8;
}
if ((appFlags & 0x20) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x40) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x80) && fOff < bytes) {
ranges.push({ start: fOff, end: bytes - 1, label: 'Name' });
}
}
}
} else {
ranges.push({ start: payloadStart, end: bytes - 1, label: 'Payload' });
}
return ranges;
}
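// Illustrative sketch only (not part of the change): a worked call showing how the
// path-length byte drives the ranges. Values mirror the transport-route unit test
// added further down; the top two bits of the byte give hash_size - 1 and the low
// six bits give hash_count.
//
//   computeBreakdownRanges('14AABBCCDD02112299', /* route_type */ 0, /* payload_type */ 5)
//   // → Header [0,0], Transport Codes [1,4], Path Length [5,5],
//   //   Path [6,7], Payload [8,8]   (plb 0x02 → hash_size=1, hash_count=2)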
// --- Utilities ---
const _apiPerf = { calls: 0, totalMs: 0, log: [], cacheHits: 0 };
const _apiCache = new Map();
+44 -17
@@ -393,17 +393,25 @@
}
}
// Merge user-stored keys into the channel list
// Merge user-stored keys into the channel list.
// If a stored key matches a server-known channel, mark that channel as
// userAdded so the ✕ button appears — otherwise the user has no way to
// remove a key they added but that the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
// Check if channel already exists by name
var exists = channels.some(function (ch) {
return ch.name === name || ch.hash === name || ch.hash === ('user:' + name);
});
if (!exists) {
var matched = false;
for (var j = 0; j < channels.length; j++) {
var ch = channels[j];
if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
ch.userAdded = true;
matched = true;
break;
}
}
if (!matched) {
channels.push({
hash: 'user:' + name,
name: name,
@@ -749,19 +757,38 @@
e.stopPropagation();
var channelHash = removeBtn.getAttribute('data-remove-channel');
if (!channelHash) return;
var chName = channelHash.startsWith('user:') ? channelHash.substring(5) : channelHash;
// The localStorage key is the channel name. For user:-prefixed entries
// strip the prefix; for server-known channels look up the channel
// object so we use its display name (the hash itself isn't the key).
var ch = channels.find(function (c) { return c.hash === channelHash; });
var chName = channelHash.startsWith('user:')
? channelHash.substring(5)
: (ch && ch.name) || channelHash;
if (!confirm('Remove channel "' + chName + '"? This will clear saved keys and cached messages.')) return;
ChannelDecrypt.removeKey(chName);
// Remove from channels array
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
if (channelHash.startsWith('user:')) {
// Pure user-added channel — drop from the list entirely.
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
}
} else if (ch) {
// Server-known channel: keep the row, just unmark as user-added so
// the ✕ disappears until they re-add a key.
ch.userAdded = false;
// If this was the selected channel, clear decrypted messages since
// the key is gone — they can't be re-decrypted without re-adding it.
if (selectedHash === channelHash) {
messages = [];
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Key removed — add a key to decrypt messages</div>';
}
}
renderChannelList();
return;
+138 -68
@@ -7,6 +7,14 @@ window.HopResolver = (function() {
const MAX_HOP_DIST = 1.8; // ~200km in degrees
const REGION_RADIUS_KM = 300;
// Only repeaters and room servers can appear as path hops per protocol.
// Companions/sensors originate but never relay packets.
function canAppearInPath(role) {
if (!role) return false;
var r = String(role).toLowerCase();
return r.indexOf('repeater') >= 0 || r.indexOf('room_server') >= 0 || r === 'room';
}
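// Illustrative only: role strings from the node list are matched case-insensitively,
// so for example
//   canAppearInPath('Repeater')     // true
//   canAppearInPath('room_server')  // true
//   canAppearInPath('room')         // true
//   canAppearInPath('companion')    // false (originates packets, never relays them)
//   canAppearInPath(null)           // false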
let prefixIdx = {}; // lowercase hex prefix → [node, ...]
let pubkeyIdx = {}; // full lowercase pubkey → node (O(1) lookup)
let nodesList = [];
@@ -40,7 +48,11 @@ window.HopResolver = (function() {
for (const n of nodesList) {
if (!n.public_key) continue;
const pk = n.public_key.toLowerCase();
// pubkeyIdx includes ALL nodes — used by resolveFromServer for
// server-confirmed full-pubkey lookups (any node type).
pubkeyIdx[pk] = n;
// prefixIdx only includes nodes that can appear as path hops.
if (!canAppearInPath(n.role)) continue;
for (let len = 1; len <= 3; len++) {
const p = pk.slice(0, len * 2);
if (!prefixIdx[p]) prefixIdx[p] = [];
@@ -72,33 +84,89 @@ window.HopResolver = (function() {
}
/**
* Pick the best candidate using affinity first, then geo-distance fallback.
* Pick the best candidate by scoring against BOTH prev and next resolved hops.
*
* Strategy (in priority order):
* 1. Neighbor-graph edge weight: sum of edge scores to prevPubkey + nextPubkey. Pick max.
* 2. Geographic centroid: if no candidate has graph edges, compute centroid of
* prev+next positions and pick closest candidate by haversine distance.
* 3. Single-anchor geo fallback: if only one neighbor is resolved, use it as anchor.
* 4. Original heuristic: first candidate (when no context at all).
*
* @param {Array} candidates - candidates with lat/lon/pubkey/name
* @param {string|null} adjacentPubkey - pubkey of the previously/next resolved hop
* @param {Object|null} anchor - {lat, lon} for geo fallback
* @param {number|null} fallbackLat - fallback anchor lat (e.g. observer)
* @param {number|null} fallbackLon - fallback anchor lon
* @param {string|null} prevPubkey - pubkey of previous resolved hop
* @param {string|null} nextPubkey - pubkey of next resolved hop
* @param {Object|null} prevPos - {lat, lon} of previous resolved hop or origin
* @param {Object|null} nextPos - {lat, lon} of next resolved hop or observer
* @returns {Object} best candidate
*/
function pickByAffinity(candidates, adjacentPubkey, anchor, fallbackLat, fallbackLon) {
// If we have affinity data and an adjacent hop, prefer neighbors
if (adjacentPubkey && Object.keys(affinityMap).length > 0) {
const withAffinity = candidates
.map(c => ({ ...c, affinity: getAffinity(adjacentPubkey, c.pubkey) }))
.filter(c => c.affinity > 0);
if (withAffinity.length > 0) {
withAffinity.sort((a, b) => b.affinity - a.affinity);
return withAffinity[0];
function pickByAffinity(candidates, prevPubkey, nextPubkey, prevPos, nextPos) {
const hasGraph = Object.keys(affinityMap).length > 0;
const hasAdj = prevPubkey || nextPubkey;
// Strategy 1: neighbor-graph edge weights (sum of prev + next)
if (hasGraph && hasAdj) {
const scored = candidates.map(function(c) {
let s = 0;
if (prevPubkey) s += getAffinity(prevPubkey, c.pubkey);
if (nextPubkey) s += getAffinity(nextPubkey, c.pubkey);
return { candidate: c, edgeScore: s };
});
const withEdges = scored.filter(function(s) { return s.edgeScore > 0; });
if (withEdges.length > 0) {
withEdges.sort(function(a, b) { return b.edgeScore - a.edgeScore; });
_traceMultiCandidate(candidates, scored, withEdges[0].candidate, 'graph');
return withEdges[0].candidate;
}
}
// Fallback: geo-distance sort (existing behavior)
const effectiveAnchor = anchor || (fallbackLat != null ? { lat: fallbackLat, lon: fallbackLon } : null);
if (effectiveAnchor) {
candidates.sort((a, b) => dist(a.lat, a.lon, effectiveAnchor.lat, effectiveAnchor.lon) - dist(b.lat, b.lon, effectiveAnchor.lat, effectiveAnchor.lon));
// Strategy 2/3: geographic — centroid of prev+next, or single anchor
let anchorLat = null, anchorLon = null, anchorCount = 0;
if (prevPos && prevPos.lat != null && prevPos.lon != null) {
anchorLat = (anchorLat || 0) + prevPos.lat;
anchorLon = (anchorLon || 0) + prevPos.lon;
anchorCount++;
}
if (nextPos && nextPos.lat != null && nextPos.lon != null) {
anchorLat = (anchorLat || 0) + nextPos.lat;
anchorLon = (anchorLon || 0) + nextPos.lon;
anchorCount++;
}
if (anchorCount > 0) {
anchorLat /= anchorCount;
anchorLon /= anchorCount;
const geoScored = candidates.map(function(c) {
const d = (c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0))
? haversineKm(c.lat, c.lon, anchorLat, anchorLon) : 999999;
return { candidate: c, distKm: d };
});
geoScored.sort(function(a, b) { return a.distKm - b.distKm; });
_traceMultiCandidate(candidates, geoScored, geoScored[0].candidate, 'centroid');
return geoScored[0].candidate;
}
// Strategy 4: no context — return first candidate
_traceMultiCandidate(candidates, null, candidates[0], 'fallback');
return candidates[0];
}
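// Illustrative only (hypothetical values; this helper is module-internal and in
// practice is reached through HopResolver.resolve). With no adjacent-hop pubkeys
// to score against the neighbor graph, the geographic fallback applies: the anchor
// is the centroid of the supplied positions (here just prevPos) and the nearest
// candidate wins.
//   pickByAffinity(
//     [ { pubkey: 'ab1111...', name: 'NodeSF',  lat: 37.7, lon: -122.4 },
//       { pubkey: 'ab2222...', name: 'NodeDEN', lat: 39.7, lon: -104.9 } ],
//     null, null, { lat: 37.8, lon: -122.3 }, null)
//   // → NodeSF (≈14 km from the anchor vs ≈1500 km for NodeDEN)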
/** Dev-mode console trace for multi-candidate picks */
function _traceMultiCandidate(candidates, scored, chosen, method) {
if (typeof console === 'undefined' || !console.debug) return;
if (candidates.length < 2) return;
try {
const prefix = candidates[0].pubkey ? candidates[0].pubkey.slice(0, 2) : '??';
const scoreSummary = scored ? scored.map(function(s) {
const pk = (s.candidate || s).pubkey || '?';
const val = s.edgeScore != null ? s.edgeScore : (s.distKm != null ? s.distKm + 'km' : '?');
return pk.slice(0, 8) + ':' + val;
}) : [];
console.debug('[hop-resolver] hash=' + prefix + ' candidates=' + candidates.length +
' scored=[' + scoreSummary.join(',') + '] chose=' + (chosen.pubkey || '?').slice(0, 8) +
' method=' + method);
} catch(e) { /* trace is best-effort */ }
}
/**
* Resolve an array of hex hop prefixes to node info.
* Returns a map: { hop: {name, pubkey, lat, lon, ambiguous, unreliable} }
@@ -169,52 +237,54 @@ window.HopResolver = (function() {
}
}
// Forward pass
let lastPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
let lastResolvedPubkey = null;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) {
lastPos = hopPositions[hop];
lastResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
// Combined disambiguation: resolve ambiguous hops using both neighbors.
// We iterate until no more hops can be resolved (handles cascading dependencies).
const originPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
const observerPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let changed = true;
let maxIter = hops.length + 1; // prevent infinite loops
while (changed && maxIter-- > 0) {
changed = false;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) continue; // already resolved
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Find prev resolved neighbor
let prevPubkey = null, prevPos = null;
for (let j = i - 1; j >= 0; j--) {
if (hopPositions[hops[j]]) {
prevPos = hopPositions[hops[j]];
prevPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!prevPos && originPos) prevPos = originPos;
// Find next resolved neighbor
let nextPubkey = null, nextPos = null;
for (let j = i + 1; j < hops.length; j++) {
if (hopPositions[hops[j]]) {
nextPos = hopPositions[hops[j]];
nextPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!nextPos && observerPos) nextPos = observerPos;
// Skip if we have zero context (wait for a later iteration or neighbor resolution)
if (!prevPubkey && !nextPubkey && !prevPos && !nextPos) continue;
const picked = pickByAffinity(withLoc, prevPubkey, nextPubkey, prevPos, nextPos);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
changed = true;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Affinity-aware: prefer candidates that are neighbors of the previous hop
const picked = pickByAffinity(withLoc, lastResolvedPubkey, lastPos, i === hops.length - 1 ? observerLat : null, i === hops.length - 1 ? observerLon : null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
lastPos = hopPositions[hop];
lastResolvedPubkey = picked.pubkey;
}
// Backward pass
let nextPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let nextResolvedPubkey = null;
for (let i = hops.length - 1; i >= 0; i--) {
const hop = hops[i];
if (hopPositions[hop]) {
nextPos = hopPositions[hop];
nextResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length || !nextPos) continue;
// Affinity-aware: prefer candidates that are neighbors of the next hop
const picked = pickByAffinity(withLoc, nextResolvedPubkey, nextPos, null, null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
nextPos = hopPositions[hop];
nextResolvedPubkey = picked.pubkey;
}
// Sanity check: drop hops impossibly far from neighbors
@@ -276,13 +346,13 @@ window.HopResolver = (function() {
*/
function resolveFromServer(hops, resolvedPath) {
if (!hops || !resolvedPath || hops.length !== resolvedPath.length) return {};
var result = {};
for (var i = 0; i < hops.length; i++) {
var hop = hops[i];
var pubkey = resolvedPath[i];
const result = {};
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
const pubkey = resolvedPath[i];
if (!pubkey) continue; // null = unresolved, leave for client-side fallback
// O(1) lookup via pubkeyIdx built during init()
var node = pubkeyIdx[pubkey.toLowerCase()] || null;
const node = pubkeyIdx[pubkey.toLowerCase()] || null;
result[hop] = {
name: node ? node.name : pubkey.slice(0, 8),
pubkey: pubkey,
+15 -3
@@ -1144,6 +1144,19 @@
makeColumnsResizable('#nodesTable', 'meshcore-nodes-col-widths');
}
/**
* Navigate to the full-screen node view for `pubkey` from anywhere within
* the nodes module. A single source of navigation truth that works regardless
* of the current hash state (hash assignment alone is a no-op when the hash
* is already the target).
*/
function navigateToNode(pubkey) {
destroy();
var appEl = document.getElementById('app');
history.replaceState(null, '', '#/nodes/' + encodeURIComponent(pubkey));
init(appEl, pubkey);
}
async function selectNode(pubkey) {
// On mobile, navigate to full-screen node view
if (window.innerWidth <= 640) {
@@ -1307,12 +1320,11 @@
} catch {}
}
// #856: Wire "Details" button to navigate to full-screen node view
// Wire "Details" button via the unified navigateToNode helper
var detailBtn = panel.querySelector('.node-detail-btn');
if (detailBtn) {
detailBtn.addEventListener('click', function() {
var pk = detailBtn.getAttribute('data-pubkey');
location.hash = '#/nodes/' + pk;
navigateToNode(decodeURIComponent(detailBtn.getAttribute('data-pubkey')));
});
}
+42 -20
@@ -387,9 +387,9 @@
const obs = data.observations.find(o => String(o.id) === String(obsTarget));
if (obs) {
expandedHashes.add(h);
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, timestamp: obs.timestamp, first_seen: obs.timestamp};
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, direction: obs.direction, timestamp: obs.timestamp, first_seen: obs.timestamp};
clearParsedCache(obsPacket);
selectPacket(obs.id, h, {packet: obsPacket, breakdown: data.breakdown, observations: data.observations}, obs.id);
selectPacket(obs.id, h, {packet: obsPacket, observations: data.observations}, obs.id);
} else {
selectPacket(data.packet.id, h, data);
}
@@ -519,7 +519,7 @@
if (p.decoded_json) existing.decoded_json = p.decoded_json;
// Update expanded children if this group is expanded
if (expandedHashes.has(h) && existing._children) {
existing._children.unshift(p);
existing._children.unshift(clearParsedCache({...p, _isObservation: true}));
if (existing._children.length > 200) existing._children.length = 200;
sortGroupChildren(existing);
// Invalidate row counts — child count changed, so virtual scroll
@@ -683,10 +683,14 @@
// Restore expanded group children (parallel fetch, Map lookup)
if (groupByHash && expandedHashes.size > 0) {
const expandedArr = [...expandedHashes];
// Fetch the full packet detail (which includes per-observation rows) for each expanded hash.
// Previously this used `/packets?hash=X&limit=20` which returned ONE aggregate row, causing
// every "child" row in the table to carry the parent packet.id instead of unique observation
// ids — so clicking any child pointed the side pane at the same aggregate. See #866.
const results = await Promise.all(expandedArr.map(hash => {
const group = hashIndex.get(hash);
if (!group) return { hash, group: null, data: null };
return api(`/packets?hash=${hash}&limit=20`)
return api(`/packets/${hash}`)
.then(data => ({ hash, group, data }))
.catch(() => ({ hash, group, data: null }));
}));
@@ -694,7 +698,15 @@
if (!group) {
expandedHashes.delete(hash);
} else if (data) {
group._children = data.packets || [];
const pkt = data.packet || group;
// Build per-observation children. Spread (pkt, obs) so obs-level fields
// (id, observer_id/name, path_json, snr/rssi, timestamp, raw_hex) override
// the aggregate. Each child's `id` is the observation id (unique per observer).
const obs = data.observations || [];
group._children = obs.length
? obs.map(o => clearParsedCache({...pkt, ...o, _isObservation: true}))
: [pkt];
group._fetchedData = { packet: pkt, observations: obs };
sortGroupChildren(group);
}
}
@@ -1246,9 +1258,9 @@
const child = group?._children?.find(c => String(c.id) === String(value));
if (child) {
const parentData = group._fetchedData;
const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, timestamp: child.timestamp, first_seen: child.timestamp} : child;
const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, direction: child.direction, timestamp: child.timestamp, first_seen: child.timestamp} : child;
if (parentData) { clearParsedCache(obsPacket); }
selectPacket(child.id, parentHash, {packet: obsPacket, breakdown: parentData?.breakdown, observations: parentData?.observations}, child.id);
selectPacket(child.id, parentHash, {packet: obsPacket, observations: parentData?.observations}, child.id);
}
}
else if (action === 'select-hash') pktSelectHash(value);
@@ -1797,7 +1809,7 @@
panel.innerHTML = isMobileNow ? '' : '<div class="panel-resize-handle" id="pktResizeHandle"></div>' + PANEL_CLOSE_HTML;
const content = document.createElement('div');
panel.appendChild(content);
await renderDetail(content, data);
await renderDetail(content, data, selectedObservationId);
if (!isMobileNow) initPanelResize();
} catch (e) {
panel.innerHTML = `<div class="text-muted">Error: ${e.message}</div>`;
@@ -1806,8 +1818,6 @@
async function renderDetail(panel, data, chosenObsId) {
const pkt = data.packet;
const breakdown = data.breakdown || {};
const ranges = breakdown.ranges || [];
const observations = data.observations || [];
// Per-observation rendering (issue #849):
@@ -1828,6 +1838,15 @@
const decoded = getParsedDecoded(effectivePkt) || {};
const pathHops = getParsedPath(effectivePkt) || [];
// Compute breakdown ranges from the actually-rendered raw_hex (per-observation).
// Single source of truth — derived from the same bytes we display, so a
// post-#882 per-obs raw_hex with a different path length than the top-level
// packet's raw_hex still gets accurate byte highlights.
const obsRawHexForRanges = effectivePkt.raw_hex || pkt.raw_hex || '';
const ranges = obsRawHexForRanges
? computeBreakdownRanges(obsRawHexForRanges, pkt.route_type, pkt.payload_type)
: [];
// Cross-check: hop count from raw_hex path_len byte vs path_json length
const obsRawHex = effectivePkt.raw_hex || pkt.raw_hex || '';
let rawHopCount = null;
@@ -1838,7 +1857,7 @@
if (!isNaN(plByte)) rawHopCount = plByte & 0x3F;
}
if (rawHopCount != null && pathHops.length !== rawHopCount) {
console.warn(`[CoreScope] Hop count inconsistency for packet ${pkt.hash}: path_json has ${pathHops.length} hops but raw_hex path_len has ${rawHopCount}. Trusting raw_hex.`);
console.warn(`[CoreScope] Hop count inconsistency for packet ${pkt.hash}: path_json has ${pathHops.length} hops but raw_hex path_len has ${rawHopCount}. UI shows path_json.`);
}
// Resolve sender GPS — from packet directly, or from known node in DB
@@ -1975,8 +1994,10 @@
? `<div class="anomaly-banner" style="background:var(--warning, #f0ad4e); color:#000; padding:8px 12px; border-radius:4px; margin-bottom:8px; font-weight:600;">⚠️ Anomaly: ${escapeHtml(decoded.anomaly)}</div>`
: '';
// Hop count display: trust raw_hex (firmware truth) over path_json
const displayHopCount = rawHopCount != null ? rawHopCount : pathHops.length;
// Hop count display: use pathHops length (= effective observation's path_json).
// The raw_hex/path_json mismatch warning is logged above for diagnostics; the UI
// must stay self-consistent — top pill names and byte breakdown rows must agree.
const displayHopCount = pathHops.length;
const obsIndicator = currentObs && observations.length > 1
? `<span style="font-size:0.8em;color:var(--text-muted);margin-left:6px">(observation ${observations.indexOf(currentObs) + 1} of ${observations.length})</span>`
: '';
@@ -2181,18 +2202,19 @@
rows += fieldRow(off, 'Path Length', '0x' + (buf.slice(off * 2, off * 2 + 2) || '??'), hashCountVal === 0 ? `hash_count=0 (direct advert)` : `hash_size=${hashSizeVal} byte${hashSizeVal !== 1 ? 's' : ''}, hash_count=${hashCountVal}`);
off += 1;
// Path — derive hop count from path_len byte (firmware truth), not aggregated _parsedPath
// Path — render hops from path_json (what this observation reported).
// Byte offsets advance by hashSize * pathHops.length to match.
const hashSize = isNaN(pathByte0) ? 1 : ((pathByte0 >> 6) + 1);
if (typeof hashCountVal === 'number' && hashCountVal > 0) {
rows += sectionRow('Path (' + hashCountVal + ' hops)', 'section-path');
for (let i = 0; i < hashCountVal; i++) {
if (pathHops.length > 0) {
rows += sectionRow('Path (' + pathHops.length + ' hops)', 'section-path');
for (let i = 0; i < pathHops.length; i++) {
const hopOff = off + i * hashSize;
const hex = buf.slice(hopOff * 2, (hopOff + hashSize) * 2).toUpperCase();
const hex = String(pathHops[i] || '').toUpperCase();
const hopHtml = HopDisplay.renderHop(hex, hopNameCache[hex]);
const label = `Hop ${i}${hopHtml}`;
rows += fieldRow(hopOff, label, hex, '');
}
off += hashSize * hashCountVal;
off += hashSize * pathHops.length;
}
// Payload
@@ -2466,7 +2488,7 @@
renderTableRows();
return;
}
// Single fetch — gets packet + observations + path + breakdown
// Single fetch — gets packet + observations + path
try {
const data = await api(`/packets/${hash}`);
const pkt = data.packet;
+338 -2
@@ -15,6 +15,11 @@ async function test(name, fn) {
results.push({ name, pass: true });
console.log(` \u2705 ${name}`);
} catch (err) {
if (err.skip) {
results.push({ name, pass: true, skipped: true });
console.log(`  ⏭ ${name}: ${err.message}`);
return;
}
results.push({ name, pass: false, error: err.message });
console.log(` \u274c ${name}: ${err.message}`);
console.log(`\nFail-fast: stopping after first failure.`);
@@ -1778,12 +1783,343 @@ async function run() {
}
});
// Test: Expanded group children have unique observation ids (#866)
await test('Expanded group children update detail pane per-observation', async () => {
await page.goto(`${BASE}/#/packets`, { waitUntil: 'domcontentloaded' });
// Ensure grouped mode and wide time window
await page.evaluate(() => {
localStorage.setItem('meshcore-time-window', '525600');
localStorage.setItem('meshcore-groupbyhash', 'true');
});
await page.reload({ waitUntil: 'load' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
// Find a group row with observation_count > 1 (has expand button)
const expandBtn = await page.$('table tbody tr .expand-btn, table tbody tr [data-expand]');
if (!expandBtn) {
console.log(' ⏭ No expandable groups found — skipping child assertion');
return;
}
// Click expand and wait for the /packets/<hash> detail API call
const [detailResp] = await Promise.all([
page.waitForResponse(resp => {
const u = new URL(resp.url(), BASE);
// Match /api/packets/<hash> but not /api/packets?... or /api/packets/observations
return /\/api\/packets\/[A-Fa-f0-9]+$/.test(u.pathname) && resp.status() === 200;
}, { timeout: 15000 }),
expandBtn.click(),
]);
assert(detailResp, 'Expected /api/packets/<hash> response on expand');
// Wait for child rows to appear
await page.waitForSelector('table tbody tr.child-row, table tbody tr[class*="child"]', { timeout: 5000 });
const childRows = await page.$$('table tbody tr.child-row, table tbody tr[class*="child"]');
if (childRows.length < 2) {
console.log(' ⏭ Group has < 2 children — skipping per-observation assertion');
return;
}
// Click first child row
await childRows[0].click();
await page.waitForFunction(() => {
const panel = document.getElementById('pktRight');
return panel && !panel.classList.contains('empty') && panel.textContent.trim().length > 0;
}, { timeout: 10000 });
const content1 = await page.$eval('#pktRight', el => el.textContent.trim());
const url1 = page.url();
// Click second child row
await childRows[1].click();
await page.waitForTimeout(500);
const content2 = await page.$eval('#pktRight', el => el.textContent.trim());
const url2 = page.url();
// URL should contain ?obs= with a real observation id
assert(url1.includes('obs=') || url2.includes('obs='), `URL should contain obs= parameter, got: ${url1}`);
// The two children should show different detail pane content (different observers)
// At minimum, the URL obs= values should differ
if (url1.includes('obs=') && url2.includes('obs=')) {
const obs1 = new URL(url1).hash.match(/obs=(\d+)/)?.[1];
const obs2 = new URL(url2).hash.match(/obs=(\d+)/)?.[1];
if (obs1 && obs2) {
assert(obs1 !== obs2, `Two children should have different obs ids, both got obs=${obs1}`);
}
}
// Verify obs id is NOT the aggregate packet id (the bug from #866)
const obsMatch = url2.match(/obs=(\d+)/);
if (obsMatch) {
const detailJson = await detailResp.json().catch(() => null);
if (detailJson?.packet?.id) {
const aggId = String(detailJson.packet.id);
// At least one child obs id should differ from the aggregate packet id
const obs1 = url1.match(/obs=(\d+)/)?.[1];
const obs2 = url2.match(/obs=(\d+)/)?.[1];
const allSameAsAgg = obs1 === aggId && obs2 === aggId;
assert(!allSameAsAgg, `Child obs ids should not all equal aggregate packet.id (${aggId})`);
}
}
});
// Test: per-observation raw_hex — hex pane updates when switching observations (#881)
await test('Packet detail hex pane updates per observation', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Try clicking packet rows to find one with multiple observations
const rows = await page.$$('table tbody tr[data-action]');
let obsRows = [];
for (let i = 0; i < Math.min(rows.length, 10); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(600);
obsRows = await page.$$('.detail-obs-row');
if (obsRows.length >= 2) break;
}
if (obsRows.length < 2) {
console.log(' ⏭ Skipped: no packet with ≥2 observations found in first 10 rows');
return;
}
// Click first observation, capture hex dump
await obsRows[0].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex1 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// Click second observation, capture hex dump
await obsRows[1].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex2 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// If both have content and differ, the feature works
if (hex1 && hex2 && hex1 !== hex2) {
console.log(' ✓ Hex pane content differs between observations');
} else if (hex1 && hex2 && hex1 === hex2) {
console.log(' ⏭ Hex same for both observations (likely historical NULL raw_hex — OK)');
} else {
console.log(' ⏭ Could not capture hex content from both observations');
}
});
// Test: path pill (top) and byte breakdown (bottom) agree on hop count
// Regression for visual mismatch where badge said "1 hop" but path text listed N names
await test('Packet detail path pill and byte breakdown agree on hop count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Click rows until we find one whose detail pane renders a multi-hop path
const rows = await page.$$('table tbody tr[data-action]');
let found = false;
for (let i = 0; i < Math.min(rows.length, 15); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(500);
const result = await page.evaluate(() => {
// Path pill: <dt>Path</dt><dd><span class="badge ...">N hops</span> ...names...</dd>
const dts = document.querySelectorAll('dl.detail-meta dt');
let pillBadgeCount = null;
let pillNameCount = null;
for (const dt of dts) {
if (dt.textContent.trim() === 'Path') {
const dd = dt.nextElementSibling;
if (!dd) break;
const badge = dd.querySelector('.badge');
if (badge) {
const m = badge.textContent.match(/(\d+)\s*hop/);
if (m) pillBadgeCount = parseInt(m[1], 10);
}
// Count rendered hop links/spans (HopDisplay.renderHop output)
const hops = dd.querySelectorAll('.hop-link, [data-hop-link], .hop-named, .hop-anonymous');
pillNameCount = hops.length;
break;
}
}
// Byte breakdown: section row "Path (N hops)" + N "Hop X — ..." rows
let breakdownSectionCount = null;
let breakdownRowCount = 0;
const fieldTable = document.querySelector('table.field-table');
if (fieldTable) {
for (const tr of fieldTable.querySelectorAll('tr')) {
const txt = tr.textContent.trim();
const sec = txt.match(/^Path\s*\((\d+)\s*hops?\)/);
if (sec) breakdownSectionCount = parseInt(sec[1], 10);
if (/^\s*\d+\s*Hop\s+\d+\s*—/.test(txt) || /^Hop\s+\d+\s*—/.test(txt.replace(/^\d+/, '').trim())) {
breakdownRowCount++;
}
}
}
return { pillBadgeCount, pillNameCount, breakdownSectionCount, breakdownRowCount };
});
if (result.pillBadgeCount && result.pillBadgeCount > 0 && result.breakdownSectionCount != null) {
found = true;
// Top badge count must equal bottom section count
assert(result.pillBadgeCount === result.breakdownSectionCount,
`Path pill badge says ${result.pillBadgeCount} hops but byte breakdown says ${result.breakdownSectionCount} hops`);
// Number of rendered hop names in pill should also match (within 1, since renderPath may add separators)
if (result.pillNameCount != null && result.pillNameCount > 0) {
assert(Math.abs(result.pillNameCount - result.pillBadgeCount) <= 1,
`Path pill badge ${result.pillBadgeCount} but rendered ${result.pillNameCount} hop names`);
}
// And breakdown rendered rows should match its own section count
assert(result.breakdownRowCount > 0,
'breakdown rows selector matched nothing — selector or DOM changed');
assert(result.breakdownRowCount === result.breakdownSectionCount,
`Byte breakdown section says ${result.breakdownSectionCount} hops but rendered ${result.breakdownRowCount} hop rows`);
console.log(` ✓ Path pill (${result.pillBadgeCount}) and byte breakdown (${result.breakdownSectionCount}) agree`);
break;
}
}
if (!found) {
if (process.env.E2E_REQUIRE_PATH_TEST === '1') {
throw new Error('BLOCKED — no multi-hop packet found in first 15 rows (E2E_REQUIRE_PATH_TEST=1 requires it)');
}
const skipErr = new Error('SKIP: No multi-hop packet with byte breakdown found in first 15 rows — needs fixture');
skipErr.skip = true;
throw skipErr;
}
});
// Test: hex-strip color spans match the labeled byte rows (per-obs raw_hex).
// Regression #891: server-supplied breakdown was computed once from top-level
// raw_hex, so per-observation rendering had off-by-N highlights vs the labels.
await test('Packet detail hex strip Path range matches hop row count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
const rows = await page.$$('table tbody tr[data-action]');
let checked = 0;
for (let i = 0; i < Math.min(rows.length, 25) && checked < 3; i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const result = await page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
return { pathBytes, hopRows };
});
if (!result || (result.pathBytes === 0 && result.hopRows.length === 0)) continue;
checked++;
// Invariant checked by the assertions below: once either side is non-empty, the
// labeled table must contain hop rows, the hex-path span must hold at least one
// byte per hop, and the byte count must be an exact multiple of the hop count
// (bytes per hop == hash_size).
assert(result.hopRows.length > 0,
`row ${i}: hex-path span has ${result.pathBytes} bytes but no hop rows in the labeled table`);
assert(result.pathBytes >= result.hopRows.length,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — strip and labels disagree`);
assert(result.pathBytes % result.hopRows.length === 0,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — bytes/hops not divisible (hash_size violated)`);
console.log(` ✓ row ${i}: hex-path ${result.pathBytes} bytes / ${result.hopRows.length} hop rows (hash_size=${result.pathBytes / result.hopRows.length})`);
}
if (checked === 0) {
const skipErr = new Error('SKIP: no packet with rendered hex strip + hop rows found in first 25 rows');
skipErr.skip = true;
throw skipErr;
}
});
// Test: clicking a different observation row re-renders strip + breakdown consistently.
// Regression: observations of the same packet hash have different raw_hex (#882),
// so picking a different obs must recompute the byte ranges, not reuse the old ones.
await test('Packet detail switches consistently across observations', async () => {
await page.goto(BASE + '#/packets?groupByHash=1', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
let opened = false;
const groupRows = await page.$$('table tbody tr[data-action]');
for (let i = 0; i < Math.min(groupRows.length, 10); i++) {
await groupRows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const obsCount = await page.evaluate(() => {
return document.querySelectorAll('table.observations-table tbody tr, .obs-row').length;
});
if (obsCount >= 2) { opened = true; break; }
}
if (!opened) {
const skipErr = new Error('SKIP: no multi-observation packet found in first 10 group rows');
skipErr.skip = true;
throw skipErr;
}
async function snapshot() {
return page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
const rawHexParts = [...dump.querySelectorAll('span.hex-byte')].map(s => s.textContent.trim());
return { pathBytes, hopCount: hopRows.length, rawHexJoined: rawHexParts.join('|') };
});
}
const snapA = await snapshot();
assert(snapA, 'first snapshot must have hex dump + field table');
assert(snapA.hopCount === 0 || snapA.pathBytes >= snapA.hopCount,
`obs A inconsistent: hex-path ${snapA.pathBytes} bytes vs ${snapA.hopCount} hop rows`);
const switched = await page.evaluate(() => {
const obsRows = [...document.querySelectorAll('table.observations-table tbody tr, .obs-row')];
if (obsRows.length < 2) return false;
obsRows[1].click();
return true;
});
assert(switched, 'should click second observation row');
await page.waitForTimeout(500);
const snapB = await snapshot();
assert(snapB, 'second snapshot must have hex dump + field table');
assert(snapB.hopCount === 0 || snapB.pathBytes >= snapB.hopCount,
`obs B inconsistent: hex-path ${snapB.pathBytes} bytes vs ${snapB.hopCount} hop rows`);
console.log(` ✓ obs A: ${snapA.pathBytes} path bytes / ${snapA.hopCount} hops; obs B: ${snapB.pathBytes} / ${snapB.hopCount}`);
});
// Test: clicking the 🔍 Details button in the nodes side panel navigates to
// the full-screen node detail view. Regression: hash already === target,
// so location.hash assignment was a no-op and the panel stayed open.
await test('Nodes side panel Details button opens full-screen view', async () => {
await page.goto(BASE + '#/nodes', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr[data-action]', { timeout: 15000 });
await page.waitForTimeout(500);
// Open side panel
await page.click('table tbody tr[data-action]');
await page.waitForSelector('#nodesRight .node-detail-btn', { timeout: 5000 });
// Click Details
await page.click('#nodesRight .node-detail-btn');
// Wait for full-screen view to appear
await page.waitForSelector('.node-fullscreen', { timeout: 5000 });
const isFullScreen = await page.evaluate(() => !!document.querySelector('.node-fullscreen'));
assert(isFullScreen, 'Details button should open full-screen node view');
});
await browser.close();
// Summary
const passed = results.filter(r => r.pass).length;
const skipped = results.filter(r => r.skipped).length;
const passed = results.filter(r => r.pass && !r.skipped).length;
const failed = results.filter(r => !r.pass).length;
console.log(`\n${passed}/${results.length} tests passed${failed ? `, ${failed} failed` : ''}`);
console.log(`\n${passed}/${results.length} tests passed${skipped ? `, ${skipped} skipped` : ''}${failed ? `, ${failed} failed` : ''}`);
process.exit(failed > 0 ? 1 : 0);
}
+352 -17
@@ -690,6 +690,88 @@ console.log('\n=== haversineKm (hop-resolver.js) ===');
});
}
// ===== pickByAffinity — neighbor-graph + centroid scoring (#874) =====
console.log('\n=== pickByAffinity neighbor-graph scoring (#874) ===');
{
const ctx = makeSandbox();
ctx.IATA_COORDS_GEO = {};
loadInCtx(ctx, 'public/hop-resolver.js');
const HR = ctx.window.HopResolver;
// Two nodes sharing prefix "ab", hundreds of km apart.
// NodeSF is near San Francisco, NodeDEN is near Denver.
const nodeSF = { public_key: 'ab11111111111111', name: 'NodeSF', lat: 37.7, lon: -122.4 };
const nodeDEN = { public_key: 'ab22222222222222', name: 'NodeDEN', lat: 39.7, lon: -104.9 };
// A known neighbor of NodeSF (in the graph)
const nodeNeighbor = { public_key: 'cc33333333333333', name: 'SFNeighbor', lat: 37.8, lon: -122.3 };
// Another known node near Denver
const nodeDenNeighbor = { public_key: 'dd44444444444444', name: 'DENNeighbor', lat: 39.8, lon: -105.0 };
test('#874: graph edge scoring picks correct regional candidate (SF)', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'cc33333333333333', target: 'ab11111111111111', weight: 5 },
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 5 },
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// With graph edges, ab11 (NodeSF) has edge to SFNeighbor, ab22 (NodeDEN) has edge to DENNeighbor
// Prev=SFNeighbor, Next=DENNeighbor → both have score 5, but SFNeighbor edge only to ab11
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF because it has a graph edge to prev hop SFNeighbor');
});
test('#874: graph edge scoring — next hop breaks tie', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 8 },
// No edge from SFNeighbor to either ab node
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// Only ab22 (NodeDEN) has edge to DENNeighbor (next hop)
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because it has graph edge to next hop DENNeighbor');
});
test('#874: centroid fallback when no graph edges exist', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor]);
HR.setAffinity({ edges: [] }); // no edges at all
// Path: SFNeighbor → [ab??]
// SFNeighbor is at (37.8, -122.3), centroid is just that point
// NodeSF (37.7, -122.4) is ~14km away, NodeDEN (39.7, -104.9) is ~1500km away
const result = HR.resolve(['cc', 'ab'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF via centroid proximity to SFNeighbor');
});
test('#874: centroid uses average of prev+next positions', () => {
// Prev near SF, next near Denver → centroid is midpoint (~Nevada)
// NodeDEN is closer to Nevada midpoint than NodeSF
const nodeMid = { public_key: 'ee55555555555555', name: 'MidNode', lat: 38.5, lon: -114.0 };
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor, nodeMid]);
HR.setAffinity({ edges: [] });
// Path: SFNeighbor → [ab??] → DENNeighbor
// centroid = avg(37.8,-122.3, 39.8,-105.0) = (38.8, -113.65) — closer to Denver
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because centroid of SF+Denver neighbors is closer to Denver');
});
test('#874: fallback when no context at all', () => {
HR.init([nodeSF, nodeDEN]);
HR.setAffinity({ edges: [] });
// Single ambiguous hop, no origin/observer, no neighbors
const result = HR.resolve(['ab'], null, null, null, null);
assert.ok(result['ab'].ambiguous || result['ab'].name != null,
'Should resolve to some candidate without crashing');
});
}
// ===== SNR/RSSI Number casting =====
{
// These test the pattern used in observer-detail.js, home.js, traces.js, live.js
@@ -1722,6 +1804,128 @@ console.log('\n=== app.js: formatEngineBadge ===');
});
}
// ===== APP.JS: computeBreakdownRanges =====
console.log('\n=== app.js: computeBreakdownRanges ===');
{
const ctx = makeSandbox();
loadInCtx(ctx, 'public/roles.js');
loadInCtx(ctx, 'public/app.js');
const computeBreakdownRanges = ctx.computeBreakdownRanges;
function findRange(ranges, label) {
return ranges.find(r => r.label === label);
}
test('returns [] for empty hex', () => {
assert.deepEqual(computeBreakdownRanges('', 1, 5), []);
});
test('returns [] for too-short hex (< 2 bytes)', () => {
assert.deepEqual(computeBreakdownRanges('15', 1, 5), []);
});
test('FLOOD non-transport: 4-hop hash_size=1', () => {
// header=15, plb=04 → hash_size=1, hash_count=4
// bytes: 15 04 90 FA F9 10 6E 01 D9
const r = computeBreakdownRanges('150490FAF910 6E01D9'.replace(/\s/g,''), 1, 5);
assert.deepEqual(findRange(r, 'Header'), { start: 0, end: 0, label: 'Header' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 5, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 6, end: 8, label: 'Payload' });
assert.strictEqual(findRange(r, 'Transport Codes'), undefined);
});
test('FLOOD non-transport: 7-hop hash_size=1', () => {
// header=15, plb=07
const hex = '15077f6d7d1cadeca33988fd95e0851ebf01ea12e1879e';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 8, label: 'Path' });
const payload = findRange(r, 'Payload');
assert.strictEqual(payload.start, 9, 'payload starts after the 7 path bytes');
});
test('FLOOD non-transport: 8-hop hash_size=1', () => {
const hex = '1508' + '11223344556677AA' + 'BBCCDD';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 12, label: 'Payload' });
});
test('Direct advert: 0-hop, no Path range', () => {
// plb=00 → 0 hops; expect Path Length but NO Path range
const r = computeBreakdownRanges('1100AABBCCDD', 1, 4);
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('Transport route shifts path-length offset by 4', () => {
// route_type=0 (TRANSPORT_FLOOD): bytes 1..4 are Transport Codes
// header=14, transport=AABBCCDD, plb=02, hops=11 22, payload=99
const hex = '14AABBCCDD021122' + '99';
const r = computeBreakdownRanges(hex, 0, 5);
assert.deepEqual(findRange(r, 'Transport Codes'), { start: 1, end: 4, label: 'Transport Codes' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 5, end: 5, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 6, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=2 (plb top bits=01): 4 hops × 2 bytes', () => {
// plb = 01 0001 00 = 0x44 → hash_size=2, hash_count=4 → 8 path bytes
const hex = '15' + '44' + 'AABB' + 'CCDD' + 'EEFF' + '1122' + '9988';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 11, label: 'Payload' });
});
test('hash_size=3 (plb top bits=10): 2 hops × 3 bytes', () => {
// plb = 10 0000 10 = 0x82 → hash_size=3, hash_count=2 → 6 path bytes
const hex = '15' + '82' + 'AABBCC' + 'DDEEFF' + '99';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=4 (plb top bits=11): 2 hops × 4 bytes', () => {
// plb = 11 0000 10 = 0xC2 → hash_size=4, hash_count=2 → 8 path bytes
const hex = '15' + 'C2' + 'AABBCCDD' + 'EEFF1122' + '99887766';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 13, label: 'Payload' });
});
test('truncated path: not enough bytes → no Path range', () => {
// plb=04 says 4 hops but only 2 bytes remain
const hex = '1504AABB';
const r = computeBreakdownRanges(hex, 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('ADVERT (payload_type=4) with full record: PubKey/Timestamp/Signature/Flags', () => {
// header=11, plb=00 (direct advert)
// payload: 32 bytes pubkey + 4 bytes ts + 64 bytes sig + 1 byte flags
const pubkey = 'AB'.repeat(32);
const ts = '11223344';
const sig = 'CD'.repeat(64);
const flags = '00';
const hex = '1100' + pubkey + ts + sig + flags;
const r = computeBreakdownRanges(hex, 1, 4);
assert.deepEqual(findRange(r, 'PubKey'), { start: 2, end: 33, label: 'PubKey' });
assert.deepEqual(findRange(r, 'Timestamp'), { start: 34, end: 37, label: 'Timestamp' });
assert.deepEqual(findRange(r, 'Signature'), { start: 38, end: 101, label: 'Signature' });
assert.deepEqual(findRange(r, 'Flags'), { start: 102, end: 102, label: 'Flags' });
});
test('oversized path-length byte (plb=FF) with truncated input produces no Path range', () => {
// plb=FF → hash_size=4, hash_count=63 claims 252 path bytes, but only one byte of
// input remains, so no Path range is emitted. (A non-hex plb byte would parse to NaN
// and bail the same way; empty and 1-byte inputs are already covered above.)
const r = computeBreakdownRanges('15FF' + 'AA', 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
}
// ===== APP.JS: isTransportRoute + transportBadge =====
console.log('\n=== app.js: isTransportRoute + transportBadge ===');
{
@@ -5462,40 +5666,33 @@ console.log('\n=== packets.js: buildFieldTable hop count from path_len (#844) ==
loadInCtx(ftCtx, 'public/packets.js');
const { buildFieldTable } = ftCtx.window._packetsTestAPI;
test('#844: byte breakdown uses path_len hop count, not aggregated _parsedPath', () => {
test('#885: byte breakdown uses pathHops length (single source of truth)', () => {
// After #885 the byte breakdown agrees with the path pill: both render
// from the per-observation path_json. raw_hex is the underlying bytes
// for that same observation, so consistency is by construction.
// path_len = 0x42 → hash_size=2, hash_count=2
// raw_hex: header(11) + path_len(42) + hop0(41B1) + hop1(27D7) + pubkey(32 bytes)...
const pubkey = 'C0DEDAD4'.padEnd(64, '0'); // 32 bytes = 64 hex chars
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
// Pass aggregated pathHops with 7 hops (mismatched)
const pathHops = ['41B1', '5EB0', '1000', '2DD2', '52F8', '9535', '762B'];
// Per-obs path_json IS the source of truth — pass the 2 hops that match raw_hex.
const pathHops = ['41B1', '27D7'];
const html = buildFieldTable(pkt, {}, pathHops, {});
// Section header should say "2 hops", not "7 hops"
assert.ok(html.includes('Path (2 hops)'), 'Should show "Path (2 hops)" from path_len, got: ' +
(html.match(/Path \(\d+ hops\)/)?.[0] || 'no match'));
assert.ok(!html.includes('Path (7 hops)'), 'Should NOT show 7 hops from aggregated path');
// Should contain hop values from raw_hex
assert.ok(html.includes('Path (2 hops)'), 'Should show "Path (2 hops)"');
assert.ok(html.includes('41B1'), 'Should show hop 0 = 41B1');
assert.ok(html.includes('27D7'), 'Should show hop 1 = 27D7');
// Should NOT contain hops from aggregated path that aren't in raw_hex
assert.ok(!html.includes('5EB0'), 'Should NOT show aggregated hop 5EB0');
assert.ok(!html.includes('9535'), 'Should NOT show aggregated hop 9535');
});
test('#844: pubkey offset correct after 2-hop path (not after 7-hop)', () => {
test('#885: pubkey offset advances by hashSize * pathHops.length', () => {
const pubkey = 'C0DEDAD4'.padEnd(64, '0');
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
const html = buildFieldTable(pkt, { type: 'ADVERT', pubKey: pubkey }, ['41B1','5EB0','1000','2DD2','52F8','9535','762B'], {});
const html = buildFieldTable(pkt, { type: 'ADVERT', pubKey: pubkey }, ['41B1', '27D7'], {});
// Public Key should be at offset 6 (1 header + 1 path_len + 2*2 hops = 6)
// Not at offset 16 (1 + 1 + 2*7 = 16)
assert.ok(html.includes('>6<') || html.includes('"6"'),
'Public Key should be at offset 6, not 16');
'Public Key should be at offset 6');
});
test('#844: hashCountVal=0 (direct advert) skips Path section', () => {
@@ -6116,6 +6313,144 @@ console.log('\n=== analytics.js: renderCollisionsFromServer collision table ==='
});
}
// ===== Issue #866: Full-page obs-switch — hex + path must update per observation =====
{
console.log('\n=== Issue #866: Full-page observation switch ===');
const ctx866 = makeSandbox();
loadInCtx(ctx866, 'public/roles.js');
loadInCtx(ctx866, 'public/app.js');
loadInCtx(ctx866, 'public/packet-helpers.js');
test('#866: switching observation updates effectivePkt path_json', () => {
const pkt = { id: 1, hash: 'abc123', observer_id: 'obs-agg', path_json: '["A","B","C","D"]', raw_hex: '0484A1B1C1D1', route_type: 1, timestamp: '2026-01-01T00:00:00Z' };
const obs1 = { id: 10, observer_id: 'obs-1', path_json: '["A","B"]', snr: 5, rssi: -80, timestamp: '2026-01-01T00:01:00Z' };
const obs2 = { id: 20, observer_id: 'obs-2', path_json: '["A","B","C","D"]', snr: 8, rssi: -75, timestamp: '2026-01-01T00:02:00Z' };
// Simulate renderDetail logic: pick obs1
const eff1 = ctx866.clearParsedCache({...pkt, ...obs1, _isObservation: true});
const path1 = ctx866.getParsedPath(eff1);
assert.deepStrictEqual(path1, ['A', 'B']);
assert.strictEqual(eff1.observer_id, 'obs-1');
assert.strictEqual(eff1.snr, 5);
// Switch to obs2
const eff2 = ctx866.clearParsedCache({...pkt, ...obs2, _isObservation: true});
const path2 = ctx866.getParsedPath(eff2);
assert.deepStrictEqual(path2, ['A', 'B', 'C', 'D']);
assert.strictEqual(eff2.observer_id, 'obs-2');
assert.strictEqual(eff2.snr, 8);
});
test('#866: effectivePkt preserves raw_hex from packet when obs has none', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', path_json: '["AA"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs doesn't have raw_hex, so packet's raw_hex survives spread
assert.strictEqual(eff.raw_hex, '0482AABB');
});
test('#866: effectivePkt uses obs raw_hex when available (API now returns it)', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', raw_hex: '0441CC', path_json: '["CC"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs has raw_hex from API, should override
assert.strictEqual(eff.raw_hex, '0441CC');
});
test('#866: direction field carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', direction: 'rx', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', direction: 'tx', path_json: '[]', timestamp: '2026-01-01T00:00:00Z' };
const eff = {...pkt, ...obs, _isObservation: true};
assert.strictEqual(eff.direction, 'tx');
});
test('#866: resolved_path carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', resolved_path: '["aaa","bbb","ccc"]', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', resolved_path: '["aaa"]', path_json: '["AA"]', timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
const rp = ctx866.getResolvedPath(eff);
assert.deepStrictEqual(rp, ['aaa']);
});
test('#866: getPathLenOffset used for hop count cross-check', () => {
// Flood route: offset 1
assert.strictEqual(ctx866.getPathLenOffset(1), 1);
assert.strictEqual(ctx866.getPathLenOffset(2), 1);
// Transport route: offset 5
assert.strictEqual(ctx866.getPathLenOffset(0), 5);
assert.strictEqual(ctx866.getPathLenOffset(3), 5);
});
test('#866: URL hash should encode obs parameter for deep linking', () => {
// Simulate the URL construction pattern from renderDetail obs click
const pktHash = 'abc123def456';
const obsId = '42';
const url = `#/packets/${pktHash}?obs=${obsId}`;
assert.strictEqual(url, '#/packets/abc123def456?obs=42');
// Parse back
const qIdx = url.indexOf('?');
const qs = new URLSearchParams(url.substring(qIdx));
assert.strictEqual(qs.get('obs'), '42');
});
}
// ===== #872 — hop-display unreliable badge =====
{
console.log('\n--- #872: hop-display unreliable warning badge ---');
function makeHopDisplaySandbox() {
const sb = {
window: { addEventListener: () => {}, dispatchEvent: () => {} },
document: {
readyState: 'complete',
createElement: () => ({ id: '', textContent: '', innerHTML: '' }),
head: { appendChild: () => {} },
getElementById: () => null,
addEventListener: () => {},
querySelectorAll: () => [],
querySelector: () => null,
},
console,
Date, Math, Array, Object, String, Number, JSON, RegExp, Map, Set,
encodeURIComponent, parseInt, parseFloat, isNaN, Infinity, NaN, undefined,
setTimeout: () => {}, setInterval: () => {}, clearTimeout: () => {}, clearInterval: () => {},
};
sb.window.document = sb.document;
sb.self = sb.window;
sb.globalThis = sb.window;
const ctx = vm.createContext(sb);
const hopSrc = fs.readFileSync(__dirname + '/public/hop-display.js', 'utf8');
vm.runInContext(hopSrc, ctx);
return ctx;
}
const hopCtx = makeHopDisplaySandbox();
test('#872: unreliable hop renders warning badge, not strikethrough', () => {
const html = hopCtx.window.HopDisplay.renderHop('AABB', {
name: 'TestNode', pubkey: 'pk123', unreliable: true,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
// Must contain unreliable warning badge button
assert.ok(html.includes('hop-unreliable-btn'), 'should have unreliable badge button');
assert.ok(html.includes('⚠️'), 'should have ⚠️ icon');
assert.ok(html.includes('Unreliable name resolution'), 'should have tooltip text');
// Must NOT contain line-through in inline style (CSS class no longer has it)
assert.ok(!html.includes('line-through'), 'should not contain line-through');
// Should still have hop-unreliable class for subtle styling
assert.ok(html.includes('hop-unreliable'), 'should have hop-unreliable class');
});
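// Illustrative only: markup that would satisfy the assertions above might look like
//   <span class="hop hop-unreliable">TestNode
//     <button class="hop-unreliable-btn" title="Unreliable name resolution">⚠️</button>
//   </span>
// The tests only check for the class names, the warning icon, the tooltip text, and
// the absence of line-through styling; the exact structure is up to hop-display.js.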
test('#872: reliable hop does NOT render unreliable badge', () => {
const html = hopCtx.window.HopDisplay.renderHop('CCDD', {
name: 'GoodNode', pubkey: 'pk456', unreliable: false,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
assert.ok(!html.includes('hop-unreliable-btn'), 'should not have unreliable badge');
});
}
// ===== SUMMARY =====
Promise.allSettled(pendingTests).then(() => {
console.log(`\n${'═'.repeat(40)}`);
+57 -4
@@ -22,9 +22,9 @@ function assert(condition, msg) {
// ── Test nodes ──
// Two nodes share the same 1-byte prefix "ab"
const nodeA = { public_key: 'ab1111', name: 'NodeA', lat: 37.0, lon: -122.0 };
const nodeB = { public_key: 'ab2222', name: 'NodeB', lat: 38.0, lon: -123.0 };
const nodeC = { public_key: 'cd3333', name: 'NodeC', lat: 37.5, lon: -122.5 };
const nodeA = { public_key: 'ab1111', name: 'NodeA', role: 'repeater', lat: 37.0, lon: -122.0 };
const nodeB = { public_key: 'ab2222', name: 'NodeB', role: 'repeater', lat: 38.0, lon: -123.0 };
const nodeC = { public_key: 'cd3333', name: 'NodeC', role: 'repeater', lat: 37.5, lon: -122.5 };
console.log('\n=== HopResolver Affinity Tests ===\n');
@@ -88,12 +88,65 @@ assert(result5['ab'].name === 'NodeB', 'Should pick NodeB (highest affinity 0.9)
// Test 6: Unambiguous hops are not affected by affinity
console.log('\nTest 6: Unambiguous hops unaffected by affinity');
const nodeD = { public_key: 'ee4444', name: 'NodeD', lat: 36.0, lon: -121.0 };
const nodeD = { public_key: 'ee4444', name: 'NodeD', role: 'repeater', lat: 36.0, lon: -121.0 };
HopResolver.init([nodeA, nodeB, nodeC, nodeD]);
HopResolver.setAffinity({ edges: [] });
const result6 = HopResolver.resolve(['ee44'], null, null, null, null, null);
assert(result6['ee44'].name === 'NodeD', 'Unique prefix resolves directly — got: ' + result6['ee44'].name);
assert(!result6['ee44'].ambiguous, 'Should not be marked ambiguous');
// Test 7: lat=0 / lon=0 candidates are NOT excluded (equator/prime-meridian bug fix)
console.log('\nTest 7: lat=0 / lon=0 candidates are included in geo scoring');
const nodeEquator = { public_key: 'ab5555', name: 'EquatorNode', role: 'repeater', lat: 0, lon: 10 };
const nodeFar = { public_key: 'ab6666', name: 'FarNode', role: 'repeater', lat: 60, lon: 60 };
const anchorNearEq = { public_key: 'cd7777', name: 'AnchorEq', role: 'repeater', lat: 1, lon: 11 };
HopResolver.init([nodeEquator, nodeFar, anchorNearEq]);
HopResolver.setAffinity({});
// Anchor near equator — EquatorNode (0,10) should be geo-closest
const result7 = HopResolver.resolve(['cd77', 'ab'], 1.0, 11.0, null, null, null);
assert(result7['ab'].name === 'EquatorNode',
'lat=0 candidate should be included and win by geo — got: ' + result7['ab'].name);
// Test 8: lon=0 candidate is also included
console.log('\nTest 8: lon=0 candidate is included in geo scoring');
const nodePrime = { public_key: 'ab8888', name: 'PrimeMeridian', role: 'repeater', lat: 10, lon: 0 };
const anchorNearPM = { public_key: 'cd9999', name: 'AnchorPM', role: 'repeater', lat: 11, lon: 1 };
HopResolver.init([nodePrime, nodeFar, anchorNearPM]);
HopResolver.setAffinity({});
const result8 = HopResolver.resolve(['cd99', 'ab'], 11.0, 1.0, null, null, null);
assert(result8['ab'].name === 'PrimeMeridian',
'lon=0 candidate should be included and win by geo — got: ' + result8['ab'].name);
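// Sketch of the coordinate check Tests 7 and 8 guard against (illustrative only,
// not the actual HopResolver code): a truthiness test drops lat=0 / lon=0 nodes,
// while an explicit finiteness test keeps them.
const hasGeoTruthy = (n) => Boolean(n.lat && n.lon);                          // buggy: 0 is falsy, so equator/prime-meridian nodes vanish
const hasGeoFinite = (n) => Number.isFinite(n.lat) && Number.isFinite(n.lon); // keeps lat=0 / lon=0 candidates in geo scoring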
// ── Role filter tests (#935) ──
console.log('\nTest: Role filter — companions excluded from prefixIdx');
const companion = { public_key: 'ab9999', name: 'Companion1', role: 'companion', lat: 37.0, lon: -122.0 };
const sensor = { public_key: 'ab7777', name: 'Sensor1', role: 'sensor', lat: 37.0, lon: -122.0 };
const repeater = { public_key: 'ab1234', name: 'Repeater1', role: 'repeater', lat: 37.0, lon: -122.0 };
const roomSrv = { public_key: 'ff1234', name: 'RoomSrv1', role: 'room_server', lat: 37.0, lon: -122.0 };
HopResolver.init([companion, sensor, repeater, roomSrv]);
HopResolver.setAffinity({});
// Prefix 'ab' should only resolve to repeater (companion/sensor excluded)
const r1 = HopResolver.resolve(['ab12'], 0, 0, null, null, null);
assert(r1['ab12'] && r1['ab12'].name === 'Repeater1',
'prefix ab12 resolves to Repeater1 not companion — got: ' + (r1['ab12'] && r1['ab12'].name));
// Prefix 'ff' should resolve to room_server
const r2 = HopResolver.resolve(['ff12'], 0, 0, null, null, null);
assert(r2['ff12'] && r2['ff12'].name === 'RoomSrv1',
'prefix ff12 resolves to RoomSrv1 — got: ' + (r2['ff12'] && r2['ff12'].name));
// Prefix that only matches companion should return nothing
const r3 = HopResolver.resolve(['ab99'], 0, 0, null, null, null);
assert(!r3['ab99'] || !r3['ab99'].name,
'prefix ab99 (companion only) resolves to nothing — got: ' + (r3['ab99'] && r3['ab99'].name));
// pubkeyIdx should still have companion (full pubkey lookup)
console.log('\nTest: pubkeyIdx still includes all roles');
const fromServer = HopResolver.resolveFromServer(['ab99'], [companion.public_key]);
assert(fromServer['ab99'] && fromServer['ab99'].name === 'Companion1',
'resolveFromServer finds companion by full pubkey — got: ' + (fromServer['ab99'] && fromServer['ab99'].name));
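// Sketch of the role gate the assertions above exercise (hypothetical name; the
// allowlist is inferred from these tests and the real hop-resolver.js may accept
// more roles):
const isPathEligibleRole = (role) => role === 'repeater' || role === 'room_server';
// init() would build prefixIdx only from nodes passing this gate, while pubkeyIdx
// keeps every node so resolveFromServer() can still match a companion by full pubkey.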
console.log('\n' + (passed + failed) + ' tests, ' + passed + ' passed, ' + failed + ' failed\n');
process.exit(failed > 0 ? 1 : 0);