Compare commits

..

20 Commits

Author SHA1 Message Date
you 86ca793b60 fix: bump default-epoch uptime cap to 3 years for solar repeater lifetimes 2026-04-25 00:08:09 +00:00
you 4291b387f5 fix: classifySkew defensive absolute value 2026-04-24 23:49:21 +00:00
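The "defensive absolute value" fix is easiest to see as a sketch. Tier names and the 15/60/600-second boundaries are taken from the nodes.go diff further down this page; the function body itself is an illustrative reconstruction, not the repository's exact code:

```go
package main

import (
	"fmt"
	"math"
)

type SkewSeverity string

const (
	SkewOK        SkewSeverity = "ok"
	SkewDegrading SkewSeverity = "degrading"
	SkewDegraded  SkewSeverity = "degraded"
	SkewWrong     SkewSeverity = "wrong"
)

// classifySkew maps a (possibly signed) skew to a severity tier.
// math.Abs keeps the function correct even if a caller forgets to
// normalize the sign before calling — the point of this commit.
func classifySkew(skewSec float64) SkewSeverity {
	abs := math.Abs(skewSec)
	switch {
	case abs <= 15:
		return SkewOK
	case abs <= 60:
		return SkewDegrading
	case abs <= 600:
		return SkewDegraded
	default:
		return SkewWrong
	}
}

func main() {
	fmt.Println(classifySkew(-42)) // negative skew classifies the same as +42
}
```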
you 3cd7186563 test: integration tests for epoch-0 and missing-timestamp adverts
TestGetNodeClockSkew_EpochZeroAdvert: verifies advert with timestamp==0
flows through PacketStore and classifies as severity=default, epoch=0.

TestGetNodeClockSkew_MissingTimestamp: verifies advert with no timestamp
field is skipped (extractTimestamp returns -1, filtered by collectSamples).

Review item #4 on PR #907.
2026-04-24 23:42:56 +00:00
you 86a4403136 fix: computeNodeSkew picks chronologically-latest observation
Uses max observedTS instead of last-appended slice element to
determine the most recent skew sample per hash. Consolidates
the latestObsTS and anyCal loop into a single pass.

Review item #3 on PR #907.
2026-04-24 23:42:56 +00:00
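A minimal sketch of the selection rule this commit describes — pick the chronologically latest sample, not the last-appended one, while folding the `anyCal` check into the same pass. The `obsSample` shape and field names are hypothetical:

```go
package main

import "fmt"

// obsSample is a hypothetical shape for one skew observation.
type obsSample struct {
	ObservedTS int64   // unix seconds when the advert was observed
	SkewSec    float64 // advert clock minus observer clock
	Calibrated bool    // observer has a trusted time source
}

// latestSkew returns the skew of the chronologically latest sample.
// Out-of-order ingest (replays, multi-observer merges) means slice
// order is not time order, so the last-appended element is not safe.
// It also reports whether any sample came from a calibrated observer,
// consolidating what would otherwise be two loops into one pass.
func latestSkew(samples []obsSample) (skew float64, anyCal bool, ok bool) {
	var latestTS int64 = -1
	for _, s := range samples {
		if s.Calibrated {
			anyCal = true
		}
		if s.ObservedTS > latestTS {
			latestTS = s.ObservedTS
			skew = s.SkewSec
		}
	}
	return skew, anyCal, latestTS >= 0
}

func main() {
	samples := []obsSample{
		{ObservedTS: 200, SkewSec: 3.5, Calibrated: true},
		{ObservedTS: 100, SkewSec: 9.0}, // appended last, but older
	}
	s, cal, _ := latestSkew(samples)
	fmt.Println(s, cal)
}
```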
you c46a60f78a fix: rename CSS fleet row classes to match severity names
.clock-fleet-row--warning  → .clock-fleet-row--degrading
.clock-fleet-row--critical → .clock-fleet-row--degraded

The JS in analytics.js builds classes as 'clock-fleet-row--<severity>'
so the CSS must match the actual severity strings.

Review item #2 on PR #907.
2026-04-24 23:42:56 +00:00
you d4b1aa40d0 fix: extractTimestamp returns -1 sentinel for missing timestamp
Distinguishes 'no timestamp field' (returns -1) from real epoch-0
(returns 0). Adds jsonNumberOk helper that returns (value, bool).
The collectSamples guard 'advertTS < 0' correctly filters missing
timestamps while allowing epoch-0 through to isDefaultEpoch.

Updates TestExtractTimestamp to verify both cases.

Review item #1 on PR #907.
2026-04-24 23:42:56 +00:00
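The -1 sentinel pattern from this commit, sketched under assumptions: the advert shape (`map[string]any`) and the `"timestamp"` key are illustrative, and this `jsonNumberOk` mirrors only the described (value, bool) contract, not the repository's exact helper:

```go
package main

import "fmt"

// jsonNumberOk returns the field as int64 plus a presence/type flag.
func jsonNumberOk(m map[string]any, key string) (int64, bool) {
	v, present := m[key]
	if !present {
		return 0, false
	}
	switch n := v.(type) {
	case float64: // encoding/json decodes JSON numbers into float64
		return int64(n), true
	case int64:
		return n, true
	}
	return 0, false
}

// extractTimestamp returns -1 when the advert has no usable timestamp
// field, so callers can filter missing values (ts < 0) while letting a
// real epoch-0 timestamp (ts == 0) flow through to default detection.
func extractTimestamp(advert map[string]any) int64 {
	ts, ok := jsonNumberOk(advert, "timestamp")
	if !ok {
		return -1
	}
	return ts
}

func main() {
	fmt.Println(extractTimestamp(map[string]any{"timestamp": float64(0)})) // real epoch-0
	fmt.Println(extractTimestamp(map[string]any{}))                        // missing field
}
```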
you d617a55155 polish: remove leftover recentMedianSkewSec comment, add overlapping-epoch test
- Remove stale 'recentMedianSkewSec' reference in nodes.js comment
- Add TestIsDefault_OverlappingWindowsPicksLargest covering epoch
  selection when default ranges overlap
2026-04-24 23:33:49 +00:00
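The overlapping-window behavior pinned by `TestIsDefault_OverlappingWindowsPicksLargest` can be sketched as follows. The epoch list and 3-year cap are copied from the nodes.go diff below; the largest-epoch selection rule is inferred from the test name, so treat the body as an illustration:

```go
package main

import "fmt"

var defaultEpochs = []int64{0, 1609459200, 1672531200, 1715770351}

const maxPlausibleUptimeSec = 1095 * 86400 // 3 years

// isDefaultEpoch reports whether advertTS falls in
// [epoch, epoch+maxPlausibleUptimeSec] for any known firmware default.
// When windows overlap, the LARGEST matching epoch wins — i.e. the
// closest default at or below advertTS.
func isDefaultEpoch(advertTS int64) (bool, int64) {
	var best int64 = -1
	for _, e := range defaultEpochs {
		if advertTS >= e && advertTS <= e+maxPlausibleUptimeSec && e > best {
			best = e
		}
	}
	if best < 0 {
		return false, 0
	}
	return true, best
}

func main() {
	// 2023-01-01 + 1000s is within 3 years of the 2021-01-01 default too,
	// but the larger (closer) epoch is reported.
	fmt.Println(isDefaultEpoch(1672531200 + 1000))
}
```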
you 2106cc0b8b ui: per-tier explainer line in node clock card 2026-04-24 23:29:26 +00:00
you 0acbac6fde ui: rename clock skew tiers to default/ok/degrading/degraded/wrong 2026-04-24 23:27:16 +00:00
you 2c675f5ab2 test: cover default-detection classifier tiers and edge cases 2026-04-24 23:25:35 +00:00
you 545df2788d feat: replace clock skew classifier with default-detection model 2026-04-24 23:20:12 +00:00
you f872fd90bf docs: clock-skew classifier redesign spec 2026-04-24 23:18:15 +00:00
Kpa-clawbot a47fe26085 fix(channels): allow removing user-added keys for server-known channels (#898)
## Problem
Adding a channel key in the Channels UI for a channel the server already
knows about (e.g. `#public` from rainbow / config) leaves the
localStorage entry **unremovable**:

- `mergeUserChannels` sees the name already exists in the channel list
and skips the user entry.
- The existing channel row is never marked `userAdded:true`.
- The ✕ button (`[data-remove-channel]`) is only rendered for
`userAdded` rows.
- Result: stuck localStorage key, no UI to delete it.

There was also a latent bug in the remove handler — for non-`user:`
rows, it used the raw hash (e.g. `enc_11`) as the
`ChannelDecrypt.removeKey()` argument, but the storage key is the
channel **name**.

## Fix
1. **`mergeUserChannels`**: when a stored key matches an existing
channel by name/hash, mark the existing channel `userAdded=true` so the
✕ renders on it. (No magical/auto deletion of stored keys — the user
explicitly chooses to remove.)
2. **Remove handler**:
- Look up the channel object to get the correct display name for the
localStorage key.
- Keep server-known channels in the list when their ✕ is clicked (only
the user's localStorage entry + cache are cleared, `userAdded` is
unset). The channel still exists upstream.
   - Pure `user:`-prefixed channels are removed from the list as before.

## Repro
1. Open Channels.
2. Add a key for `#public` (or any rainbow-known channel).
3. Reload. Before this PR: row has no ✕, key is stuck. After this PR: ✕
appears, click clears the local key and cache.

## Files
- `public/channels.js` only.

## Notes
- No backend changes.
- No new APIs.
- Behaviour for purely user-added channels (e.g. `user:#somechannel` not
known to the server) is unchanged.

---------

Co-authored-by: you <you@example.com>
2026-04-22 21:41:43 -07:00
Kpa-clawbot abd9c46aa7 fix: side-panel Details button opens full-screen on desktop (#892)
## Symptom
🔍 Details button in the nodes side panel does nothing on click.

## Root cause (4th regression of the same shape)
- Row click → `selectNode()` → `history.replaceState(null, '',
'#/nodes/' + pk)`
- Details button click → `location.hash = '#/nodes/' + pk`
- Hash is already that value → assignment is a no-op → no `hashchange`
event → no router → panel stays open.

## Fix
Mirror the analytics-link branch already inside the panel click handler:
`destroy()` then `init(appEl, pubkey)` directly (which hits the
`directNode` full-screen branch unconditionally). Also `replaceState` to
keep the URL in sync.

## Test
New Playwright E2E: open side panel via row click, click Details, assert
`.node-fullscreen` appears.

## Why this keeps regressing
Every time we tighten the row-click handler to use `replaceState`
(correct — avoids hashchange flicker), the button-click handler that
uses `location.hash` becomes a no-op for the same pubkey. Need to
remember they're coupled. Worth a follow-up to extract a
`navigateToNode(pk)` helper that always works regardless of current hash
state — filing as #890-followup if not already there.

Co-authored-by: you <you@example.com>
2026-04-21 22:37:15 -07:00
Kpa-clawbot 6ca5e86df6 fix: compute hex-dump byte ranges client-side from per-obs raw_hex (#891)
## Symptom
The colored byte strip in the packet detail pane is offset from the
labeled byte breakdown below it. Off by N bytes where N is the
difference between the top-level packet's path length and the displayed
observation's path length.

## Root cause
Server computes `breakdown.ranges` once from the top-level packet's
raw_hex (in `BuildBreakdown`) and ships it in the API response. After
#882 we render each observation's own raw_hex, but we keep using the
top-level breakdown — so a 7-hop top-level packet shipped "Path: bytes
2-8", and when we rendered an 8-hop observation we coloured 7 of the 8
path bytes and bled into the payload.

The labeled rows below (which use `buildFieldTable`) parse the displayed
raw_hex on the client, so they were correct — they just didn't match the
strip above.

## Fix
Port `BuildBreakdown()` to JS as `computeBreakdownRanges()` in `app.js`.
Use it in `renderDetail()` from the actually-rendered (per-obs) raw_hex.

## Test
Manually verified the JS function output matches the Go implementation
for FLOOD/non-transport, transport, ADVERT, and direct-advert (zero
hops) cases.

Closes nothing (caught in post-tag bug bash).

---------

Co-authored-by: you <you@example.com>
2026-04-21 22:17:14 -07:00
Kpa-clawbot 56ec590bc4 fix(#886): derive path_json from raw_hex at ingest (#887)
## Problem

Per-observation `path_json` disagrees with `raw_hex` path section for
TRACE packets.

**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`
- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE
payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)

## Root Cause

`DecodePacket` correctly parses TRACE packets by replacing `path.Hops`
with hop IDs from the payload's `pathData` field (the actual route).
However, the header path bytes for TRACE packets contain **SNR values**
(one per completed hop), not hop IDs.

`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which
for TRACE packets contained the payload-derived hops — not the header
path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex`
to describe completely different paths.

## Fix

- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops
directly from raw hex bytes, independent of any TRACE payload
overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of
using `decoded.Path.Hops`, guaranteeing `path_json` always matches the
`raw_hex` path section.

## Tests (8 new)

**`DecodePathFromRawHex` unit tests:**
- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)

**`BuildPacketData` integration tests:**
- TRACE packet: asserts path_json matches raw_hex header path (not
payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path

All existing tests continue to pass (`go test ./...` for both ingestor
and server).

Fixes #886

---------

Co-authored-by: you <you@example.com>
2026-04-21 21:13:58 -07:00
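The header-path layout that `DecodePathFromRawHex` parses can be reconstructed from this PR's test vectors. Everything here is inferred from those vectors, not from the real `packetpath` package: route type assumed in the header's low 2 bits, only route type 3 (transport direct, per the tests) treated as having the 4 transport-code bytes, `hash_size = (pathByte>>6)+1`, `hash_count = pathByte & 0x3F`:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodePathFromRawHex extracts header path hops from raw packet hex.
// Layout (inferred from the PR's test vectors):
//   byte 0      header; route type in the low 2 bits
//   rt == 3     4 transport-code bytes precede the path byte
//   path byte   hash_size = (b>>6)+1, hash_count = b & 0x3F
//   then        hash_count * hash_size bytes of hop hashes
func decodePathFromRawHex(rawHex string) ([]string, error) {
	raw, err := hex.DecodeString(rawHex)
	if err != nil {
		return nil, err
	}
	off := 1
	if len(raw) > 0 && raw[0]&0x03 == 3 { // transport direct: skip 4 code bytes
		off += 4
	}
	if len(raw) <= off {
		return nil, fmt.Errorf("raw too short for path byte")
	}
	pathByte := raw[off]
	hashSize := int(pathByte>>6) + 1
	hashCount := int(pathByte & 0x3F)
	off++
	if len(raw) < off+hashCount*hashSize {
		return nil, fmt.Errorf("truncated path section")
	}
	hops := make([]string, 0, hashCount)
	for i := 0; i < hashCount; i++ {
		h := raw[off+i*hashSize : off+(i+1)*hashSize]
		hops = append(hops, strings.ToUpper(hex.EncodeToString(h)))
	}
	return hops, nil
}

func main() {
	// For this TRACE packet the header path bytes are per-hop SNR values.
	hops, _ := decodePathFromRawHex("2604302D0D2359FEE7B100000000006733D63367")
	fmt.Println(hops)
}
```

Note this sketch never looks at the payload, which is exactly why it stays immune to the TRACE hop-overwriting this PR fixes.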
Kpa-clawbot 67aa47175f fix: path pill and byte breakdown agree on hop count (#885)
## Problem
On the packet detail pane, the **path pill** (top) and the **byte
breakdown** (bottom) showed different numbers of hops for the same
packet. Example: `46cf35504a21ef0d` rendered as `1 hop` badge followed
by 8 node names in the path pill, while the byte breakdown listed only 1
hop row.

## Root cause
Mixed data sources:
- Path-pill badge used `(raw_hex path_len) & 0x3F` (= firmware truth for
one observer = 1)
- Path-pill names used `path_json.length` (= server-aggregated longest
path across observers = 8)
- Byte breakdown section header used `(raw_hex path_len) & 0x3F` (= 1)
- Byte breakdown rows were sliced from `raw_hex` (= 1 row)
- `renderPath(pathHops, ...)` iterated all `path_json` entries

For group-header view, `packet.path_json` is aggregated across observers
and therefore longer than the raw_hex of any single observer's packet.

## Fix
Both surfaces now render from `pathHops` (= effective observation's
`path_json`). The raw_hex vs path_json mismatch is still logged as a
console.warn for diagnostics, but does not drive the UI.

With per-observation `raw_hex` (#882) shipped, clicking an observation
row already swaps the effective packet so both surfaces stay consistent.

## Testing
- Adds E2E regression `Packet detail path pill and byte breakdown agree
on hop count` that asserts:
  1. `pill badge count == byte breakdown section count`
  2. `rendered hop names ≈ badge count` (within 1 for separators)
  3. `byte breakdown rendered rows == section count`
- Manually reproduced on staging with `46cf35504a21ef0d` (8-name path +
`1 hop` badge before fix).

Related: #881 #882 #866

---------

Co-authored-by: you <you@example.com>
2026-04-21 17:57:06 -07:00
Kpa-clawbot 2b9f305698 fix(#874): hop-resolver affinity picker — score candidates by neighbor-graph edges + geographic centroid (#876)
## Problem

`pickByAffinity` in `hop-resolver.js` picks wrong regional candidates
when 1-byte pubkey prefixes collide. The old implementation only
considers one adjacent hop (forward OR backward pass), leading to
suboptimal picks when both neighbors provide useful context.

Measured on staging: **61.6% of hops have ≥2 same-prefix candidates**,
making collision resolution critical.

## Fix

Replaced the separate forward/backward pass disambiguation with a
**combined iterative resolver** that scores candidates against BOTH prev
and next resolved hops:

1. **Neighbor-graph edge weight** (priority 1): Sum edge scores to prev
+ next pubkeys. Pick max sum.
2. **Geographic centroid** (priority 2): Average lat/lon of prev + next
positions. Pick closest candidate by haversine distance.
3. **Single-anchor geo** (priority 3): When only one neighbor is
resolved, use it directly.
4. **Fallback** (priority 4): First candidate when no context exists.

The iterative approach resolves cascading dependencies — resolving one
ambiguous hop may unlock context for its neighbors.

### Dev-mode trace

Multi-candidate picks now emit: `[hop-resolver] hash=46 candidates=N
scored=[...] chose=<pubkey> method=graph|centroid|fallback`

## Before/After (staging, 1539 packets, 12928 hops)

| Metric | Before | After |
|--------|--------|-------|
| Unreliable hops | 39 (0.3%) | 23 (0.2%) |
| Packets with unreliable | 33 (2.14%) | 17 (1.10%) |

~41% reduction in unreliable hops, ~48% reduction in affected packets.

## Tests

5 new tests in `test-frontend-helpers.js`:
- Graph edge scoring picks correct regional candidate
- Next hop breaks tie when prev has no edges
- Centroid fallback when no graph edges exist
- Centroid uses average of prev+next positions
- Fallback when no context at all

All 595 tests pass. No regressions in `test-packet-filter.js` (62 pass)
or `test-aging.js` (29 pass).

Closes #874

---------

Co-authored-by: you <you@example.com>
2026-04-21 14:03:40 -07:00
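The priority chain above, condensed into a Go illustration (the real resolver is `hop-resolver.js`). The candidate/edge shapes, the `"cand|neighbor"` key format, and the collapsing of priorities 2 and 3 into one centroid step are all assumptions made for brevity:

```go
package main

import (
	"fmt"
	"math"
)

type candidate struct {
	Pubkey   string
	Lat, Lon float64
}

// haversineKm returns the great-circle distance between two points.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const r = 6371.0
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat, dLon := toRad(lat2-lat1), toRad(lon2-lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * r * math.Asin(math.Sqrt(a))
}

// pickByAffinity resolves one ambiguous hop (assumes len(cands) > 0):
//  1. max summed neighbor-graph edge weight to prev+next,
//  2. else min haversine distance to the prev/next centroid
//     (a single resolved neighbor degenerates to that neighbor's position),
//  3. else the first candidate.
func pickByAffinity(cands []candidate, prev, next *candidate, edges map[string]float64) candidate {
	best, bestScore := -1, 0.0
	for i, c := range cands {
		s := 0.0
		if prev != nil {
			s += edges[c.Pubkey+"|"+prev.Pubkey]
		}
		if next != nil {
			s += edges[c.Pubkey+"|"+next.Pubkey]
		}
		if s > bestScore {
			best, bestScore = i, s
		}
	}
	if best >= 0 {
		return cands[best] // priority 1: graph edges
	}
	if prev != nil || next != nil { // priority 2/3: centroid or single anchor
		var lat, lon, n float64
		for _, p := range []*candidate{prev, next} {
			if p != nil {
				lat, lon, n = lat+p.Lat, lon+p.Lon, n+1
			}
		}
		lat, lon = lat/n, lon/n
		bi, bd := 0, math.Inf(1)
		for i, c := range cands {
			if d := haversineKm(c.Lat, c.Lon, lat, lon); d < bd {
				bi, bd = i, d
			}
		}
		return cands[bi]
	}
	return cands[0] // priority 4: no context
}

func main() {
	prev := &candidate{Pubkey: "P", Lat: 45.5, Lon: -122.6}
	cands := []candidate{
		{Pubkey: "far", Lat: 34.0, Lon: -118.2},
		{Pubkey: "near", Lat: 45.6, Lon: -122.7},
	}
	fmt.Println(pickByAffinity(cands, prev, nil, nil).Pubkey)
}
```

The iterative outer loop from the PR (re-running this picker until no new hop resolves) is what lets one resolved hop unlock context for its neighbors.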
Kpa-clawbot a605518d6d fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte
sequence for the same transmission (different path bytes, flags/hops
remaining). The `observations` table stored only `path_json` per
observer — all observations pointed at one `transmissions.raw_hex`. This
prevented the hex pane from updating when switching observations in the
packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT`
(nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer
`raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` —
backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls
back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows
through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same
hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns
per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane
update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to
transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
2026-04-21 13:45:29 -07:00
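The fallback chain — per-observation bytes when present, transmission bytes for NULL historical rows — is the same rule expressed twice (as `COALESCE` in the view and in `enrichObs`). A Go analogue of that rule, as a sketch rather than the server's actual code:

```go
package main

import "fmt"

// effectiveRawHex mirrors COALESCE(o.raw_hex, t.raw_hex): prefer the
// per-observation bytes when present, otherwise fall back to the
// transmission-level bytes (historical rows where the column is NULL,
// represented here as the empty string).
func effectiveRawHex(obsHex, txHex string) string {
	if obsHex != "" {
		return obsHex
	}
	return txHex
}

func main() {
	fmt.Println(effectiveRawHex("", "c0ffee"))         // historical row: tx fallback
	fmt.Println(effectiveRawHex("c0ffee01", "c0ffee")) // per-observation wins
}
```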
Kpa-clawbot 0ca559e348 fix(#866): per-observation children in expanded packet groups (#880)
## Problem
When a packet group is expanded in the Packets table, clicking any child
row pointed the side pane at the same aggregate packet — not the clicked
observation. URL would flip between `?obs=<packet_id>` values instead of
real observation ids.

## Root cause
The expand fetch used `/api/packets?hash=X&limit=20`, which returns ONE
aggregate row keyed by packet.id. Every child therefore carried
`data-value=<packet.id>`.

## Fix
Switch the expand fetch to `/api/packets/<hash>`, which includes the
full `observations[]` array. Build `_children` as `{...pkt, ...obs}` so
each child row gets a unique observation id and observation-level fields
(observer, path, timestamp, snr/rssi) override the aggregate.

## Verified live on staging
Tested on multiple packets:
- Click group-header → side pane shows observation 1 of N (first
observer)
- Click child row → pane updates to show THAT observer's details:
observer name, path, timestamp, obs counter (K of N), URL
`?obs=<observation_id>`

## Tests
592 frontend tests pass (no new ones — this is a wiring fix, live
E2E-verified instead).

Closes #866

---------

Co-authored-by: Kpa-clawbot <agent@corescope.local>
Co-authored-by: you <you@example.com>
2026-04-21 13:36:45 -07:00
36 changed files with 2510 additions and 1307 deletions
+2
@@ -14,6 +14,7 @@ WORKDIR /build/server
COPY cmd/server/go.mod cmd/server/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
+COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/server/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
@@ -24,6 +25,7 @@ WORKDIR /build/ingestor
COPY cmd/ingestor/go.mod cmd/ingestor/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
+COPY internal/packetpath/ ../../internal/packetpath/
RUN go mod download
COPY cmd/ingestor/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
+31 -9
@@ -11,6 +11,7 @@ import (
"sync/atomic"
"time"
+"github.com/meshcore-analyzer/packetpath"
_ "modernc.org/sqlite"
)
@@ -189,7 +190,7 @@ func applySchema(db *sql.DB) error {
db.Exec(`DROP VIEW IF EXISTS packets_v`)
_, vErr := db.Exec(`
CREATE VIEW packets_v AS
-SELECT o.id, t.raw_hex,
+SELECT o.id, COALESCE(o.raw_hex, t.raw_hex) AS raw_hex,
datetime(o.timestamp, 'unixepoch') AS timestamp,
obs.id AS observer_id, obs.name AS observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
@@ -408,6 +409,15 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] dropped_packets table created")
}
+// Migration: add raw_hex column to observations (#881)
+row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observations_raw_hex_v1'")
+if row.Scan(&migDone) != nil {
+log.Println("[migration] Adding raw_hex column to observations...")
+db.Exec(`ALTER TABLE observations ADD COLUMN raw_hex TEXT`)
+db.Exec(`INSERT INTO _migrations (name) VALUES ('observations_raw_hex_v1')`)
+log.Println("[migration] observations.raw_hex column added")
+}
return nil
}
@@ -433,12 +443,13 @@ func (s *Store) prepareStatements() error {
}
s.stmtInsertObservation, err = s.db.Prepare(`
-INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp)
-VALUES (?, ?, ?, ?, ?, ?, ?, ?)
+INSERT INTO observations (transmission_id, observer_idx, direction, snr, rssi, score, path_json, timestamp, raw_hex)
+VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(transmission_id, observer_idx, COALESCE(path_json, '')) DO UPDATE SET
-snr = COALESCE(excluded.snr, snr),
-rssi = COALESCE(excluded.rssi, rssi),
-score = COALESCE(excluded.score, score)
+snr = COALESCE(excluded.snr, snr),
+rssi = COALESCE(excluded.rssi, rssi),
+score = COALESCE(excluded.score, score),
+raw_hex = COALESCE(excluded.raw_hex, raw_hex)
`)
if err != nil {
return err
@@ -584,7 +595,7 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
_, err = s.stmtInsertObservation.Exec(
txID, observerIdx, data.Direction,
data.SNR, data.RSSI, data.Score,
-data.PathJSON, epochTs,
+data.PathJSON, epochTs, nilIfEmpty(data.RawHex),
)
if err != nil {
s.Stats.WriteErrors.Add(1)
@@ -931,11 +942,22 @@ type MQTTPacketMessage struct {
}
// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
-// path_json is derived directly from raw_hex header bytes (not decoded.Path.Hops)
-// to guarantee the stored path always matches the raw bytes. This matters for
-// TRACE packets where decoded.Path.Hops is overwritten with payload hops (#886).
func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID, region string) *PacketData {
now := time.Now().UTC().Format(time.RFC3339)
pathJSON := "[]"
-if len(decoded.Path.Hops) > 0 {
-b, _ := json.Marshal(decoded.Path.Hops)
+// For TRACE packets, path_json must be the payload-decoded route hops
+// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
+// For all other packet types, derive path from raw_hex (#886).
+if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
+if len(decoded.Path.Hops) > 0 {
+b, _ := json.Marshal(decoded.Path.Hops)
+pathJSON = string(b)
+}
+} else if hops, err := packetpath.DecodePathFromRawHex(msg.Raw); err == nil && len(hops) > 0 {
+b, _ := json.Marshal(hops)
+pathJSON = string(b)
+}
+155
@@ -2,6 +2,7 @@ package main
import (
"database/sql"
"encoding/json"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,8 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/meshcore-analyzer/packetpath"
)
func tempDBPath(t *testing.T) string {
@@ -1968,3 +1971,155 @@ func TestInsertObservationSNRFillIn(t *testing.T) {
t.Errorf("RSSI overwritten by null arrival: got %v, want %v", rssi3, rssi)
}
}
// TestPerObservationRawHex verifies that two MQTT packets for the same hash
// from different observers store distinct raw_hex per observation (#881).
func TestPerObservationRawHex(t *testing.T) {
store, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer store.Close()
// Register two observers
store.UpsertObserver("obs-A", "Observer A", "", nil)
store.UpsertObserver("obs-B", "Observer B", "", nil)
hash := "abc123def456"
rawA := "c0ffee01"
rawB := "c0ffee0201aa"
dir := "RX"
// First observation from observer A
pdA := &PacketData{
RawHex: rawA,
Hash: hash,
Timestamp: "2026-04-21T10:00:00Z",
ObserverID: "obs-A",
Direction: &dir,
PathJSON: "[]",
}
isNew, err := store.InsertTransmission(pdA)
if err != nil {
t.Fatalf("insert A: %v", err)
}
if !isNew {
t.Fatal("expected new transmission")
}
// Second observation from observer B (same hash, different raw bytes)
pdB := &PacketData{
RawHex: rawB,
Hash: hash,
Timestamp: "2026-04-21T10:00:01Z",
ObserverID: "obs-B",
Direction: &dir,
PathJSON: `["aabb"]`,
}
isNew2, err := store.InsertTransmission(pdB)
if err != nil {
t.Fatalf("insert B: %v", err)
}
if isNew2 {
t.Fatal("expected duplicate transmission")
}
// Query observations and verify per-observation raw_hex
rows, err := store.db.Query(`
SELECT o.raw_hex, obs.id
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
ORDER BY o.id ASC
`)
if err != nil {
t.Fatalf("query: %v", err)
}
defer rows.Close()
type obsResult struct {
rawHex string
observerID string
}
var results []obsResult
for rows.Next() {
var rh, oid sql.NullString
if err := rows.Scan(&rh, &oid); err != nil {
t.Fatal(err)
}
results = append(results, obsResult{
rawHex: rh.String,
observerID: oid.String,
})
}
if len(results) != 2 {
t.Fatalf("expected 2 observations, got %d", len(results))
}
if results[0].rawHex != rawA {
t.Errorf("obs A raw_hex: got %q, want %q", results[0].rawHex, rawA)
}
if results[1].rawHex != rawB {
t.Errorf("obs B raw_hex: got %q, want %q", results[1].rawHex, rawB)
}
if results[0].rawHex == results[1].rawHex {
t.Error("both observations have same raw_hex — should differ")
}
}
// TestBuildPacketData_TraceUsesPayloadHops verifies that TRACE packets use
// payload-decoded route hops in path_json (NOT the raw_hex header SNR bytes).
// Issue #886 / #887.
func TestBuildPacketData_TraceUsesPayloadHops(t *testing.T) {
// TRACE packet: header path has SNR bytes [30,2D,0D,23], but decoded.Path.Hops
// is overwritten to payload hops [67,33,D6,33,67].
rawHex := "2604302D0D2359FEE7B100000000006733D63367"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
// decoded.Path.Hops should be the TRACE-replaced hops (payload hops)
if len(decoded.Path.Hops) != 5 {
t.Fatalf("expected 5 decoded hops, got %d", len(decoded.Path.Hops))
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "test-obs", "TST")
// For TRACE: path_json MUST be the payload-decoded route hops, NOT the SNR bytes
expectedPathJSON := `["67","33","D6","33","67"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s (TRACE must use payload hops)", pd.PathJSON, expectedPathJSON)
}
// Verify that DecodePathFromRawHex returns the SNR bytes (header path) which differ
headerHops, herr := packetpath.DecodePathFromRawHex(rawHex)
if herr != nil {
t.Fatal(herr)
}
headerJSON, _ := json.Marshal(headerHops)
if string(headerJSON) == expectedPathJSON {
t.Error("header path (SNR) should differ from payload hops for TRACE")
}
}
// TestBuildPacketData_NonTracePathJSON verifies non-TRACE packets also derive path from raw_hex.
func TestBuildPacketData_NonTracePathJSON(t *testing.T) {
// A simple ADVERT packet (payload type 0) with 2 hops, hash_size 1
// Header 0x09 = FLOOD(1), ADVERT(2), version 0
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
rawHex := "0902AABB" + "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
decoded, err := DecodePacket(rawHex, nil, false)
if err != nil {
t.Fatal(err)
}
msg := &MQTTPacketMessage{Raw: rawHex}
pd := BuildPacketData(msg, decoded, "obs1", "TST")
expectedPathJSON := `["AA","BB"]`
if pd.PathJSON != expectedPathJSON {
t.Errorf("path_json = %s, want %s", pd.PathJSON, expectedPathJSON)
}
}
+3 -1
@@ -12,6 +12,7 @@ import (
"strings"
"unicode/utf8"
+"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -192,8 +193,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
+// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
-return routeType == RouteTransportFlood || routeType == RouteTransportDirect
+return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
+104
@@ -11,6 +11,7 @@ import (
"strings"
"testing"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -1822,3 +1823,106 @@ func TestDecodeAdvertWithSignatureValidation(t *testing.T) {
t.Error("SignatureValid should be nil when validation disabled")
}
}
// === Tests for DecodePathFromRawHex (issue #886) ===
func TestDecodePathFromRawHex_HashSize1(t *testing.T) {
// Header byte 0x26 = route_type DIRECT, payload TRACE
// Path byte 0x04 = hash_size 1 (bits 7-6 = 00 → 0+1=1), hash_count 4
// Path bytes: 30 2D 0D 23
raw := "2604302D0D2359FEE7B100000000006733D63367"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"30", "2D", "0D", "23"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize2(t *testing.T) {
// Path byte 0x42 = hash_size 2 (bits 7-6 = 01 → 1+1=2), hash_count 2
// Header 0x09 = FLOOD route (rt=1), payload ADVERT (pt=2)
// Path bytes: AABB CCDD (4 bytes = 2 hops * 2 bytes)
raw := "0942AABBCCDD" + "00000000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AABB", "CCDD"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
func TestDecodePathFromRawHex_HashSize3(t *testing.T) {
// Path byte 0x81 = hash_size 3 (bits 7-6 = 10 → 2+1=3), hash_count 1
// Header 0x09 = FLOOD route (rt=1), payload ADVERT
raw := "0981AABBCC" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCC" {
t.Fatalf("got %v, want [AABBCC]", hops)
}
}
func TestDecodePathFromRawHex_HashSize4(t *testing.T) {
// Path byte 0xC1 = hash_size 4 (bits 7-6 = 11 → 3+1=4), hash_count 1
// Header 0x09 = FLOOD route (rt=1)
raw := "09C1AABBCCDD" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 1 || hops[0] != "AABBCCDD" {
t.Fatalf("got %v, want [AABBCCDD]", hops)
}
}
func TestDecodePathFromRawHex_DirectZeroHops(t *testing.T) {
// Path byte 0x00 = hash_size 1, hash_count 0
// Header 0x0A = DIRECT route (rt=2), payload ADVERT
raw := "0A00" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
if len(hops) != 0 {
t.Fatalf("got %d hops, want 0", len(hops))
}
}
func TestDecodePathFromRawHex_Transport(t *testing.T) {
// Route type 3 = TRANSPORT_DIRECT → 4 transport code bytes before path byte
// Header 0x27 = route_type 3, payload TRACE
// Transport codes: 1122 3344
// Path byte 0x02 = hash_size 1, hash_count 2
// Path bytes: AA BB
raw := "2711223344" + "02AABB" + "0000000000"
hops, err := packetpath.DecodePathFromRawHex(raw)
if err != nil {
t.Fatal(err)
}
expected := []string{"AA", "BB"}
if len(hops) != len(expected) {
t.Fatalf("got %d hops, want %d", len(hops), len(expected))
}
for i, h := range hops {
if h != expected[i] {
t.Errorf("hop[%d] = %s, want %s", i, h, expected[i])
}
}
}
+4
@@ -13,6 +13,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
+require github.com/meshcore-analyzer/packetpath v0.0.0
+replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+2 -2
@@ -229,7 +229,7 @@ func createTestDBAt(tb testing.TB, dbPath string, numTx int) {
id INTEGER PRIMARY KEY,
transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
direction TEXT, snr REAL, rssi REAL, score INTEGER,
-path_json TEXT, timestamp TEXT
+path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
@@ -280,7 +280,7 @@ func createTestDBWithObs(tb testing.TB, dbPath string, numTx int) {
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
-direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT
+direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
execOrFail(`CREATE TABLE IF NOT EXISTS observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
execOrFail(`CREATE TABLE IF NOT EXISTS nodes (
+165 -255
@@ -12,20 +12,28 @@ import (
type SkewSeverity string
const (
-SkewOK SkewSeverity = "ok" // < 5 min
-SkewWarning SkewSeverity = "warning" // 5 min to 1 hour
-SkewCritical SkewSeverity = "critical" // 1 hour to 30 days
-SkewAbsurd SkewSeverity = "absurd" // > 30 days
-SkewNoClock SkewSeverity = "no_clock" // > 365 days — uninitialized RTC
-SkewBimodalClock SkewSeverity = "bimodal_clock" // mixed good+bad recent samples (flaky RTC)
SkewDefault SkewSeverity = "default" // firmware-default epoch + uptime
+SkewOK SkewSeverity = "ok" // |skew| <= 15s
+SkewDegrading SkewSeverity = "degrading" // 15s < |skew| <= 60s
+SkewDegraded SkewSeverity = "degraded" // 60s < |skew| <= 600s
+SkewWrong SkewSeverity = "wrong" // |skew| > 600s and not default
)
+// Known firmware default epochs. Nodes with advert_ts in
+// [epoch, epoch + maxPlausibleUptimeSec] are classified as "default".
+// See docs/clock-skew-redesign.md for provenance of each value.
+var defaultEpochs = []int64{0, 1609459200, 1672531200, 1715770351}
// Default thresholds in seconds.
const (
-skewThresholdWarnSec = 5 * 60 // 5 minutes
-skewThresholdCriticalSec = 60 * 60 // 1 hour
-skewThresholdAbsurdSec = 30 * 24 * 3600 // 30 days
-skewThresholdNoClockSec = 365 * 24 * 3600 // 365 days — uninitialized RTC
// maxPlausibleUptimeSec caps how far past a default epoch we still
// consider "default + uptime ticking". 1095 days = 3 years.
+maxPlausibleUptimeSec = 1095 * 86400 // 3 years — covers solar repeater deployment lifetimes at firmware default
+// Severity band boundaries (absolute skew in seconds).
+skewThresholdOKSec = 15
+skewThresholdDegradingSec = 60
+skewThresholdDegradedSec = 600
// minDriftSamples is the minimum number of advert transmissions needed
// to compute a meaningful linear drift rate.
@@ -35,54 +43,52 @@ const (
// drift rates (> 1 day/day) indicate insufficient or outlier samples.
maxReasonableDriftPerDay = 86400.0
// recentSkewWindowCount is the number of most-recent advert samples
// used to derive the "current" skew for severity classification (see
// issue #789). The all-time median is poisoned by historical bad
// samples (e.g. a node that was off and then GPS-corrected); severity
// must reflect current health, not lifetime statistics.
recentSkewWindowCount = 5
// recentSkewWindowSec bounds the recent-window in time as well: only
// samples from the last N seconds count as "recent" for severity.
// The effective window is min(recentSkewWindowCount, samples in 1h).
recentSkewWindowSec = 3600
// bimodalSkewThresholdSec is the absolute skew threshold (1 hour)
// above which a sample is considered "bad" — likely firmware emitting
// a nonsense timestamp from an uninitialized RTC, not real drift.
// Chosen to match the warning/critical severity boundary: real clock
// drift rarely exceeds 1 hour, while epoch-0 RTCs produce ~1.7B sec.
bimodalSkewThresholdSec = 3600.0
// maxPlausibleSkewJumpSec is the largest skew change between
// consecutive samples that we treat as physical drift. Anything larger
// (e.g. a GPS sync that jumps the clock by minutes/days) is rejected
// as an outlier when computing drift. Real microcontroller drift is
// fractions of a second per advert; 60s is a generous safety factor.
maxPlausibleSkewJumpSec = 60.0
// theilSenMaxPoints caps the number of points fed to Theil-Sen
// regression (O(n²) in pairs). For nodes with thousands of samples we
// keep the most-recent points, which are also the most relevant for
// current drift.
theilSenMaxPoints = 200
)
// isDefaultEpoch reports whether the raw advert timestamp falls within
// [epoch, epoch + maxPlausibleUptimeSec] for any known firmware default.
// Returns (true, matchedEpoch) on a match, (false, 0) otherwise.
func isDefaultEpoch(advertTS int64) (bool, int64) {
// Find the largest epoch <= advertTS (closest match). Since ranges
// overlap, picking the closest avoids attributing a 2023-firmware
// node's timestamp to the 2024 epoch.
bestEpoch := int64(-1)
for _, epoch := range defaultEpochs {
if epoch <= advertTS && epoch > bestEpoch {
bestEpoch = epoch
}
}
if bestEpoch >= 0 && advertTS <= bestEpoch+maxPlausibleUptimeSec {
return true, bestEpoch
}
return false, 0
}
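The closest-epoch rule above can be exercised in isolation. A minimal sketch (epoch values mirror `defaultEpochs`; `isDefaultEpochSketch` and `maxUptime` are illustrative stand-ins, not the real identifiers):

```go
package main

import "fmt"

// Known firmware default epochs (values copied from defaultEpochs above).
var epochs = []int64{0, 1609459200, 1672531200, 1715770351}

const maxUptime int64 = 1095 * 86400 // 3 years

// isDefaultEpochSketch mirrors isDefaultEpoch: pick the largest epoch <= ts,
// then check ts is within the plausible-uptime window of that epoch.
func isDefaultEpochSketch(ts int64) (bool, int64) {
	best := int64(-1)
	for _, e := range epochs {
		if e <= ts && e > best {
			best = e
		}
	}
	if best >= 0 && ts <= best+maxUptime {
		return true, best
	}
	return false, 0
}

func main() {
	// Epoch-0 node that has been up ~20 days: matches epoch 0.
	fmt.Println(isDefaultEpochSketch(20 * 86400))
	// 2023 firmware default + 100 days of uptime: matches 1672531200,
	// not the later 2024 epoch, because we pick the closest epoch below.
	fmt.Println(isDefaultEpochSketch(1672531200 + 100*86400))
}
```

Picking the largest epoch at or below the timestamp is what keeps the overlapping 3-year windows from misattributing an older firmware's clock to a newer epoch.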
// classifySkew maps a raw advert timestamp and corrected skew (signed)
// to a severity level. Takes math.Abs internally so callers may pass
// signed values. Default detection runs on the raw advert_ts
// (independent of observer calibration).
func classifySkew(advertTS int64, skewSec float64) (SkewSeverity, int64) {
if ok, epoch := isDefaultEpoch(advertTS); ok {
return SkewDefault, epoch
}
abs := math.Abs(skewSec)
switch {
case abs <= skewThresholdOKSec:
return SkewOK, 0
case abs <= skewThresholdDegradingSec:
return SkewDegrading, 0
case abs <= skewThresholdDegradedSec:
return SkewDegraded, 0
default:
return SkewWrong, 0
}
}
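The band ladder reads: ok up to 15 s, degrading up to 60 s, degraded up to 600 s, wrong beyond. A standalone sketch with the thresholds inlined (`classifyBands` is a hypothetical illustration, not the real `classifySkew`, and skips default-epoch detection):

```go
package main

import (
	"fmt"
	"math"
)

// classifyBands mirrors classifySkew's threshold ladder:
// ok <= 15s < degrading <= 60s < degraded <= 600s < wrong.
func classifyBands(skewSec float64) string {
	abs := math.Abs(skewSec) // defensive: callers may pass signed skew
	switch {
	case abs <= 15:
		return "ok"
	case abs <= 60:
		return "degrading"
	case abs <= 600:
		return "degraded"
	default:
		return "wrong"
	}
}

func main() {
	for _, s := range []float64{-3, 15.1, -59, 120, 9000} {
		fmt.Printf("%+.1fs -> %s\n", s, classifyBands(s))
	}
}
```

Taking `math.Abs` inside the classifier is what lets callers hand over signed skew (node ahead or behind) without pre-processing.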
@@ -90,38 +96,35 @@ func classifySkew(absSkewSec float64) SkewSeverity {
// skewSample is a single raw skew measurement from one advert observation.
type skewSample struct {
advertTS int64 // node's advert Unix timestamp
observedTS int64 // observation Unix timestamp
observerID string // which observer saw this
hash string // transmission hash (for multi-observer grouping)
}
// ObserverCalibration holds the computed clock offset for an observer.
type ObserverCalibration struct {
ObserverID string `json:"observerID"`
OffsetSec float64 `json:"offsetSec"` // positive = observer clock ahead
Samples int `json:"samples"` // number of multi-observer packets used
}
// NodeClockSkew is the API response for a single node's clock skew data.
type NodeClockSkew struct {
Pubkey string `json:"pubkey"`
MeanSkewSec float64 `json:"meanSkewSec"` // corrected mean skew (positive = node ahead)
MedianSkewSec float64 `json:"medianSkewSec"` // corrected median skew
LastSkewSec float64 `json:"lastSkewSec"` // most recent corrected skew
DriftPerDaySec float64 `json:"driftPerDaySec"` // linear drift rate (sec/day)
Severity SkewSeverity `json:"severity"`
SampleCount int `json:"sampleCount"`
Calibrated bool `json:"calibrated"` // true if observer calibration was applied
LastAdvertTS int64 `json:"lastAdvertTS"` // most recent advert timestamp
LastObservedTS int64 `json:"lastObservedTS"` // most recent observation timestamp
DefaultEpoch *int64 `json:"defaultEpoch,omitempty"` // matched epoch when severity=default
Samples []SkewSample `json:"samples,omitempty"` // time-series for sparklines
NodeName string `json:"nodeName,omitempty"` // populated in fleet responses
NodeRole string `json:"nodeRole,omitempty"` // populated in fleet responses
}
// SkewSample is a single (timestamp, skew) point for sparkline rendering.
@@ -130,28 +133,26 @@ type SkewSample struct {
SkewSec float64 `json:"skew"` // corrected skew in seconds
}
// txSkewResult maps tx hash → per-transmission skew stats. This is an
// intermediate result keyed by hash (not pubkey); the store maps hash → pubkey
// when building the final per-node view.
type txSkewResult = map[string]*NodeClockSkew
// ── Clock Skew Engine ──────────────────────────────────────────────────────────
// ClockSkewEngine computes and caches clock skew data for nodes and observers.
type ClockSkewEngine struct {
mu sync.RWMutex
observerOffsets map[string]float64 // observerID → calibrated offset (seconds)
observerSamples map[string]int // observerID → number of multi-observer packets used
nodeSkew txSkewResult
lastComputed time.Time
computeInterval time.Duration
}
func NewClockSkewEngine() *ClockSkewEngine {
return &ClockSkewEngine{
observerOffsets: make(map[string]float64),
observerSamples: make(map[string]int),
nodeSkew: make(txSkewResult),
computeInterval: 30 * time.Second,
}
}
@@ -188,7 +189,6 @@ func (e *ClockSkewEngine) Recompute(store *PacketStore) {
// Swap results under brief write lock.
e.mu.Lock()
// Re-check: another goroutine may have computed while we were working.
if time.Since(e.lastComputed) < e.computeInterval {
e.mu.Unlock()
return
@@ -214,13 +214,13 @@ func collectSamples(store *PacketStore) []skewSample {
if decoded == nil {
continue
}
// Extract advert timestamp from decoded JSON.
advertTS := extractTimestamp(decoded)
if advertTS < 0 {
continue
}
// Allow epoch 0 and above (needed for default-epoch detection).
// Upper bound: year 2100.
if advertTS > 4102444800 {
continue
}
@@ -240,21 +240,43 @@ func collectSamples(store *PacketStore) []skewSample {
return samples
}
// timestampMissing is the sentinel returned by extractTimestamp when no
// timestamp field is present in the decoded advert. Using -1 lets us
// distinguish "field absent" from a real epoch-0 timestamp (ts == 0).
const timestampMissing int64 = -1
// extractTimestamp gets the Unix timestamp from a decoded ADVERT payload.
// Returns timestampMissing (-1) if no timestamp field is found.
func extractTimestamp(decoded map[string]interface{}) int64 {
// Try payload.timestamp first (nested in "payload" key).
if payload, ok := decoded["payload"]; ok {
if pm, ok := payload.(map[string]interface{}); ok {
if ts, ok := jsonNumberOk(pm, "timestamp"); ok {
return ts
}
}
}
// Fallback: top-level timestamp.
if ts, ok := jsonNumberOk(decoded, "timestamp"); ok {
return ts
}
return timestampMissing
}
// jsonNumberOk extracts an int64 from a JSON-parsed map, returning (value, true)
// if the key exists and is numeric, or (0, false) otherwise.
func jsonNumberOk(m map[string]interface{}, key string) (int64, bool) {
v, ok := m[key]
if !ok || v == nil {
return 0, false
}
switch n := v.(type) {
case float64:
return int64(n), true
case int64:
return n, true
case int:
return int64(n), true
}
return 0, false
}
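The sentinel plumbing can be seen end to end in a small sketch (assuming, per `encoding/json` defaults, that numbers decode as `float64`; `extractSketch` and `jsonNumberOkSketch` are hypothetical stand-ins for the real functions):

```go
package main

import "fmt"

const timestampMissing int64 = -1

// jsonNumberOkSketch mirrors jsonNumberOk for the float64 case, the type
// encoding/json produces for JSON numbers by default.
func jsonNumberOkSketch(m map[string]interface{}, key string) (int64, bool) {
	if v, ok := m[key]; ok {
		if f, ok := v.(float64); ok {
			return int64(f), true
		}
	}
	return 0, false
}

// extractSketch mirrors extractTimestamp: -1 means "field absent",
// 0 is a real epoch-0 timestamp that must survive to isDefaultEpoch.
func extractSketch(decoded map[string]interface{}) int64 {
	if ts, ok := jsonNumberOkSketch(decoded, "timestamp"); ok {
		return ts
	}
	return timestampMissing
}

func main() {
	fmt.Println(extractSketch(map[string]interface{}{"timestamp": 0.0})) // real epoch-0
	fmt.Println(extractSketch(map[string]interface{}{}))                 // field absent
}
```

The `(value, bool)` shape is what makes epoch-0 representable: a bare `int64` return with `ts > 0` guards, as in the old `jsonNumber` path, silently conflated the two cases.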
// jsonNumber extracts an int64 from a JSON-parsed map (handles float64 and json.Number).
@@ -281,7 +303,6 @@ func parseISO(s string) int64 {
}
t, err := time.Parse(time.RFC3339, s)
if err != nil {
// Try with fractional seconds.
t, err = time.Parse("2006-01-02T15:04:05.999999999Z07:00", s)
if err != nil {
return 0
@@ -295,19 +316,16 @@ func parseISO(s string) int64 {
// calibrateObservers computes each observer's clock offset using multi-observer
// packets. Returns offset map and sample count map.
func calibrateObservers(samples []skewSample) (map[string]float64, map[string]int) {
// Group observations by packet hash.
byHash := make(map[string][]skewSample)
for _, s := range samples {
byHash[s.hash] = append(byHash[s.hash], s)
}
// For each multi-observer packet, compute per-observer deviation from median.
deviations := make(map[string][]float64) // observerID → list of deviations
deviations := make(map[string][]float64)
for _, group := range byHash {
if len(group) < 2 {
continue // single-observer packet, can't calibrate
continue
}
// Compute median observation timestamp for this packet.
obsTimes := make([]float64, len(group))
for i, s := range group {
obsTimes[i] = float64(s.observedTS)
@@ -319,7 +337,6 @@ func calibrateObservers(samples []skewSample) (map[string]float64, map[string]in
}
}
// Each observer's offset = median of its deviations.
offsets := make(map[string]float64, len(deviations))
counts := make(map[string]int, len(deviations))
for obsID, devs := range deviations {
@@ -333,8 +350,6 @@ func calibrateObservers(samples []skewSample) (map[string]float64, map[string]in
// computeNodeSkew calculates corrected skew statistics for each node.
func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkewResult {
// Compute corrected skew per sample, grouped by hash (each hash = one
// node's advert transmission). The caller maps hash → pubkey via byNode.
type correctedSample struct {
skew float64
observedTS int64
@@ -349,8 +364,6 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
rawSkew := float64(s.advertTS - s.observedTS)
corrected := rawSkew
if hasCal {
// Observer offset = obs_ts - median(all_obs_ts). If observer is ahead,
// its obs_ts is inflated, making raw_skew too low. Add offset to correct.
corrected = rawSkew + obsOffset
}
byHash[s.hash] = append(byHash[s.hash], correctedSample{
@@ -361,10 +374,7 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
hashAdvertTS[s.hash] = s.advertTS
}
// Each hash represents one advert from one node. Compute median corrected
// skew per hash (across multiple observers).
result := make(map[string]*NodeClockSkew)
for hash, cs := range byHash {
skews := make([]float64, len(cs))
for i, c := range cs {
@@ -373,29 +383,37 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkew
medSkew := median(skews)
meanSkew := mean(skews)
// Pick the skew from the most recent observation (max observedTS),
// not the last-appended sample which may be non-chronological.
var latest correctedSample
var anyCal bool
for _, c := range cs {
if c.observedTS > latest.observedTS {
latest = c
}
if c.calibrated {
anyCal = true
}
}
lastCorrectedSkew := latest.skew
advTS := hashAdvertTS[hash]
severity, matchedEpoch := classifySkew(advTS, lastCorrectedSkew)
ncs := &NodeClockSkew{
MeanSkewSec: round(meanSkew, 1),
MedianSkewSec: round(medSkew, 1),
LastSkewSec: round(lastCorrectedSkew, 1),
Severity: severity,
SampleCount: len(cs),
Calibrated: anyCal,
LastAdvertTS: advTS,
LastObservedTS: latest.observedTS,
}
if severity == SkewDefault {
ep := matchedEpoch
ncs.DefaultEpoch = &ep
}
result[hash] = ncs
}
return result
}
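The max-`observedTS` selection matters whenever samples are appended out of order. A minimal sketch (hypothetical `obs` type and `latestByObservedTS` helper):

```go
package main

import "fmt"

type obs struct {
	skew       float64
	observedTS int64
}

// latestByObservedTS mirrors the loop in computeNodeSkew: the "last" skew
// is the sample with the greatest observedTS, not the last slice element.
// Zero-value start is fine for real (positive) Unix timestamps.
func latestByObservedTS(cs []obs) obs {
	var latest obs
	for _, c := range cs {
		if c.observedTS > latest.observedTS {
			latest = c
		}
	}
	return latest
}

func main() {
	// Samples appended out of chronological order.
	cs := []obs{
		{skew: 2.5, observedTS: 1700000300},
		{skew: 1.0, observedTS: 1700000100},
	}
	fmt.Println(latestByObservedTS(cs).skew) // the 1700000300 sample wins
}
```

With `cs[len(cs)-1]` the stale 1700000100 sample would drive severity; scanning for max `observedTS` makes classification order-independent.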
@@ -457,124 +475,45 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
medSkew := median(allSkews)
meanSkew := mean(allSkews)
// Classify using the most recent advert's raw timestamp and
// the most recent corrected skew. No windowing or median-driven
// severity — per-advert classification per the spec.
severity, matchedEpoch := classifySkew(lastAdvTS, lastSkew)
// Drift: display only, not a classifier input.
var drift float64
if severity != SkewDefault && len(tsSkews) >= minDriftSamples {
drift = computeDrift(tsSkews)
// Cap physically impossible drift rates.
if math.Abs(drift) > maxReasonableDriftPerDay {
drift = 0
}
}
// Build sparkline samples.
sort.Slice(tsSkews, func(i, j int) bool { return tsSkews[i].ts < tsSkews[j].ts })
samples := make([]SkewSample, len(tsSkews))
for i, p := range tsSkews {
samples[i] = SkewSample{Timestamp: p.ts, SkewSec: round(p.skew, 1)}
}
result := &NodeClockSkew{
Pubkey: pubkey,
MeanSkewSec: round(meanSkew, 1),
MedianSkewSec: round(medSkew, 1),
LastSkewSec: round(lastSkew, 1),
DriftPerDaySec: round(drift, 2),
Severity: severity,
SampleCount: totalSamples,
Calibrated: anyCal,
LastAdvertTS: lastAdvTS,
LastObservedTS: lastObsTS,
Samples: samples,
}
if severity == SkewDefault {
ep := matchedEpoch
result.DefaultEpoch = &ep
}
return result
}
// GetFleetClockSkew returns clock skew data for all nodes that have skew data.
@@ -583,7 +522,6 @@ func (s *PacketStore) GetFleetClockSkew() []*NodeClockSkew {
s.mu.RLock()
defer s.mu.RUnlock()
// Build name/role lookup from DB cache (requires s.mu held).
allNodes, _ := s.getCachedNodesAndPM()
nameMap := make(map[string]nodeInfo, len(allNodes))
for _, ni := range allNodes {
@@ -596,12 +534,10 @@ func (s *PacketStore) GetFleetClockSkew() []*NodeClockSkew {
if cs == nil {
continue
}
// Enrich with node name/role.
if ni, ok := nameMap[pubkey]; ok {
cs.NodeName = ni.Name
cs.NodeRole = ni.Role
}
// Omit samples in fleet response (too much data).
cs.Samples = nil
results = append(results, cs)
}
@@ -626,7 +562,6 @@ func (s *PacketStore) GetObserverCalibrations() []ObserverCalibration {
Samples: s.clockSkew.observerSamples[obsID],
})
}
// Sort by absolute offset descending.
sort.Slice(result, func(i, j int) bool {
return math.Abs(result[i].OffsetSec) > math.Abs(result[j].OffsetSec)
})
@@ -667,38 +602,20 @@ type tsSkewPair struct {
}
// computeDrift estimates linear drift in seconds per day from time-ordered
// (timestamp, skew) pairs. Issue #789: a single GPS-correction event (huge
// skew jump in seconds) used to dominate ordinary least squares and produce
// absurd drift like 1.7M sec/day. We now:
//
// 1. Drop pairs whose consecutive skew jump exceeds maxPlausibleSkewJumpSec
// (clock corrections, not physical drift). This protects both OLS-style
// consumers and Theil-Sen.
// 2. Use Theil-Sen regression — the slope is the median of all pairwise
// slopes, naturally robust to remaining outliers (breakdown point ~29%).
//
// For very small samples after filtering we fall back to a simple slope
// between first and last calibrated samples.
func computeDrift(pairs []tsSkewPair) float64 {
if len(pairs) < 2 {
return 0
}
// Sort by timestamp.
sort.Slice(pairs, func(i, j int) bool {
return pairs[i].ts < pairs[j].ts
})
// Time span too short? Skip.
spanSec := float64(pairs[len(pairs)-1].ts - pairs[0].ts)
if spanSec < 3600 { // need at least 1 hour of data
return 0
}
// Outlier filter: drop samples where the skew jumps more than
// maxPlausibleSkewJumpSec from the running "stable" baseline.
// We anchor on the first sample, then accept each subsequent point
// that's within the threshold of the most recent accepted point —
// this preserves a slow drift while rejecting correction events.
filtered := make([]tsSkewPair, 0, len(pairs))
filtered = append(filtered, pairs[0])
for i := 1; i < len(pairs); i++ {
@@ -707,30 +624,23 @@ func computeDrift(pairs []tsSkewPair) float64 {
filtered = append(filtered, pairs[i])
}
}
// If the filter killed too much (e.g. unstable node), fall back to the
// raw series so we at least produce *something* — it'll be capped by
// maxReasonableDriftPerDay downstream.
if len(filtered) < 2 || float64(filtered[len(filtered)-1].ts-filtered[0].ts) < 3600 {
filtered = pairs
}
// Cap point count for Theil-Sen (O(n²) on pairs). Keep most-recent.
if len(filtered) > theilSenMaxPoints {
filtered = filtered[len(filtered)-theilSenMaxPoints:]
}
return theilSenSlope(filtered) * 86400 // sec/sec → sec/day
}
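To see why the pairwise-median slope shrugs off a single correction event, here is a self-contained sketch (hypothetical `pair` type; the real code additionally pre-filters jumps larger than maxPlausibleSkewJumpSec and caps the point count):

```go
package main

import (
	"fmt"
	"sort"
)

type pair struct {
	ts   int64
	skew float64
}

func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// theilSen returns the median of all pairwise slopes, in skew-units per
// second. A minority of outlier points perturbs only a minority of the
// slopes, so the median stays on the true trend.
func theilSen(ps []pair) float64 {
	var slopes []float64
	for i := 0; i < len(ps); i++ {
		for j := i + 1; j < len(ps); j++ {
			if dt := float64(ps[j].ts - ps[i].ts); dt != 0 {
				slopes = append(slopes, (ps[j].skew-ps[i].skew)/dt)
			}
		}
	}
	if len(slopes) == 0 {
		return 0
	}
	return median(slopes)
}

func main() {
	// Clock drifting +1s per hour (~24 sec/day), with one GPS-sync outlier.
	ps := []pair{{0, 0}, {3600, 1}, {7200, 2}, {10800, 300}, {14400, 4}}
	fmt.Printf("%.1f sec/day\n", theilSen(ps)*86400) // ~24, outlier ignored
}
```

An ordinary least-squares fit on the same five points would be dragged far off by the single 300 s sample; the median of the ten pairwise slopes recovers the underlying 24 sec/day drift.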
// theilSenSlope returns the Theil-Sen estimator: median of all pairwise
// slopes (yj - yi) / (tj - ti) for i < j. Naturally robust to outliers.
// Pairs must be sorted by timestamp ascending.
func theilSenSlope(pairs []tsSkewPair) float64 {
n := len(pairs)
if n < 2 {
return 0
}
// Pre-allocate: n*(n-1)/2 pairs.
slopes := make([]float64, 0, n*(n-1)/2)
for i := 0; i < n; i++ {
for j := i + 1; j < n; j++ {
File diff suppressed because it is too large
@@ -47,7 +47,7 @@ func setupTestDBv2(t *testing.T) *DB {
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL, raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -20,6 +20,7 @@ type DB struct {
path string // filesystem path to the database file
isV3 bool // v3 schema: observer_idx in observations (vs observer_id in v2)
hasResolvedPath bool // observations table has resolved_path column
hasObsRawHex bool // observations table has raw_hex column (#881)
// Channel list cache (60s TTL) — avoids repeated GROUP BY scans (#762)
channelsCacheMu sync.Mutex
@@ -76,6 +77,9 @@ func (db *DB) detectSchema() {
if colName == "resolved_path" {
db.hasResolvedPath = true
}
if colName == "raw_hex" {
db.hasObsRawHex = true
}
}
}
}
@@ -74,7 +74,8 @@ func setupTestDB(t *testing.T) *DB {
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
resolved_path TEXT,
raw_hex TEXT
);
CREATE TABLE IF NOT EXISTS observer_metrics (
@@ -1134,7 +1135,8 @@ func setupTestDBV2(t *testing.T) *DB {
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL,
raw_hex TEXT
);
`
if _, err := conn.Exec(schema); err != nil {
@@ -1975,3 +1977,59 @@ func TestParseWindowDuration(t *testing.T) {
}
}
}
// TestPerObservationRawHexEnrich verifies enrichObs returns per-observation raw_hex
// when available, falling back to transmission raw_hex when NULL (#881).
func TestPerObservationRawHexEnrich(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert observers
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-a', 'Observer A')`)
db.conn.Exec(`INSERT INTO observers (id, name) VALUES ('obs-b', 'Observer B')`)
var rowA, rowB int64
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-a'`).Scan(&rowA)
db.conn.QueryRow(`SELECT rowid FROM observers WHERE id='obs-b'`).Scan(&rowB)
// Insert transmission with raw_hex
txHex := "deadbeef"
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, 'hash1', '2026-04-21T10:00:00Z')`, txHex)
// Insert two observations: A has its own raw_hex, B has NULL (historical)
obsAHex := "c0ffee01"
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp, raw_hex)
VALUES (1, ?, -5.0, -90.0, '[]', 1745236800, ?)`, rowA, obsAHex)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, ?, -3.0, -85.0, '["aabb"]', 1745236801)`, rowB)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store load: %v", err)
}
tx := store.byHash["hash1"]
if tx == nil {
t.Fatal("transmission not loaded")
}
if len(tx.Observations) < 2 {
t.Fatalf("expected 2 observations, got %d", len(tx.Observations))
}
// Check enriched observations
for _, obs := range tx.Observations {
m := store.enrichObs(obs)
rh, _ := m["raw_hex"].(string)
if obs.RawHex != "" {
// Observer A: should get per-observation raw_hex
if rh != obsAHex {
t.Errorf("obs with own raw_hex: got %q, want %q", rh, obsAHex)
}
} else {
// Observer B: should fall back to transmission raw_hex
if rh != txHex {
t.Errorf("obs without raw_hex: got %q, want %q (tx fallback)", rh, txHex)
}
}
}
}
@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/meshcore-analyzer/packetpath"
"github.com/meshcore-analyzer/sigvalidate"
)
@@ -164,8 +165,9 @@ func decodePath(pathByte byte, buf []byte, offset int) (Path, int) {
}, totalBytes
}
// isTransportRoute delegates to packetpath.IsTransportRoute.
func isTransportRoute(routeType int) bool {
return packetpath.IsTransportRoute(routeType)
}
func decodeEncryptedPayload(typeName string, buf []byte) Payload {
@@ -441,106 +443,6 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
}, nil
}
// HexRange represents a labeled byte range for the hex breakdown visualization.
type HexRange struct {
Start int `json:"start"`
End int `json:"end"`
Label string `json:"label"`
}
// Breakdown holds colored byte ranges returned by the packet detail endpoint.
type Breakdown struct {
Ranges []HexRange `json:"ranges"`
}
// BuildBreakdown computes labeled byte ranges for each section of a MeshCore packet.
// The returned ranges are consumed by createColoredHexDump() and buildHexLegend()
// in the frontend (public/app.js).
func BuildBreakdown(hexString string) *Breakdown {
hexString = strings.ReplaceAll(hexString, " ", "")
hexString = strings.ReplaceAll(hexString, "\n", "")
hexString = strings.ReplaceAll(hexString, "\r", "")
buf, err := hex.DecodeString(hexString)
if err != nil || len(buf) < 2 {
return &Breakdown{Ranges: []HexRange{}}
}
var ranges []HexRange
offset := 0
// Byte 0: Header
ranges = append(ranges, HexRange{Start: 0, End: 0, Label: "Header"})
offset = 1
header := decodeHeader(buf[0])
// Bytes 1-4: Transport Codes (TRANSPORT_FLOOD / TRANSPORT_DIRECT only)
if isTransportRoute(header.RouteType) {
if len(buf) < offset+4 {
return &Breakdown{Ranges: ranges}
}
ranges = append(ranges, HexRange{Start: offset, End: offset + 3, Label: "Transport Codes"})
offset += 4
}
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
// Next byte: Path Length (bits 7-6 = hashSize-1, bits 5-0 = hashCount)
ranges = append(ranges, HexRange{Start: offset, End: offset, Label: "Path Length"})
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
pathBytes := hashSize * hashCount
// Path hops
if hashCount > 0 && offset+pathBytes <= len(buf) {
ranges = append(ranges, HexRange{Start: offset, End: offset + pathBytes - 1, Label: "Path"})
}
offset += pathBytes
if offset >= len(buf) {
return &Breakdown{Ranges: ranges}
}
payloadStart := offset
// Payload — break ADVERT into named sub-fields; everything else is one Payload range
if header.PayloadType == PayloadADVERT && len(buf)-payloadStart >= 100 {
ranges = append(ranges, HexRange{Start: payloadStart, End: payloadStart + 31, Label: "PubKey"})
ranges = append(ranges, HexRange{Start: payloadStart + 32, End: payloadStart + 35, Label: "Timestamp"})
ranges = append(ranges, HexRange{Start: payloadStart + 36, End: payloadStart + 99, Label: "Signature"})
appStart := payloadStart + 100
if appStart < len(buf) {
ranges = append(ranges, HexRange{Start: appStart, End: appStart, Label: "Flags"})
appFlags := buf[appStart]
fOff := appStart + 1
if appFlags&0x10 != 0 && fOff+8 <= len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: fOff + 3, Label: "Latitude"})
ranges = append(ranges, HexRange{Start: fOff + 4, End: fOff + 7, Label: "Longitude"})
fOff += 8
}
if appFlags&0x20 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x40 != 0 && fOff+2 <= len(buf) {
fOff += 2
}
if appFlags&0x80 != 0 && fOff < len(buf) {
ranges = append(ranges, HexRange{Start: fOff, End: len(buf) - 1, Label: "Name"})
}
}
} else {
ranges = append(ranges, HexRange{Start: payloadStart, End: len(buf) - 1, Label: "Payload"})
}
return &Breakdown{Ranges: ranges}
}
// ComputeContentHash computes the SHA-256-based content hash (first 16 hex chars).
// It hashes the payload-type nibble + payload (skipping path bytes) to produce a
// route-independent identifier for the same logical packet. For TRACE packets,
@@ -97,146 +97,6 @@ func TestDecodePacket_FloodHasNoCodes(t *testing.T) {
}
}
func TestBuildBreakdown_InvalidHex(t *testing.T) {
b := BuildBreakdown("not-hex!")
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for invalid hex, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_TooShort(t *testing.T) {
b := BuildBreakdown("11") // 1 byte — no path byte
if len(b.Ranges) != 0 {
t.Errorf("expected empty ranges for too-short packet, got %d", len(b.Ranges))
}
}
func TestBuildBreakdown_FloodNonAdvert(t *testing.T) {
// Header 0x15: route=1/FLOOD, payload=5/GRP_TXT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: FFFF00
b := BuildBreakdown("1501AAFFFF00")
labels := rangeLabels(b.Ranges)
expect := []string{"Header", "Path Length", "Path", "Payload"}
if !equalLabels(labels, expect) {
t.Errorf("expected labels %v, got %v", expect, labels)
}
// Verify byte positions
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "Payload", 3, 5)
}
func TestBuildBreakdown_TransportFlood(t *testing.T) {
// Header 0x14: route=0/TRANSPORT_FLOOD, payload=5/GRP_TXT
// TransportCodes: AABBCCDD (4 bytes)
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: EE
// Payload: FF00
b := BuildBreakdown("14AABBCCDD01EEFF00")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Transport Codes", 1, 4)
assertRange(t, b.Ranges, "Path Length", 5, 5)
assertRange(t, b.Ranges, "Path", 6, 6)
assertRange(t, b.Ranges, "Payload", 7, 8)
}
func TestBuildBreakdown_FloodNoHops(t *testing.T) {
// Header 0x15: FLOOD/GRP_TXT; PathByte 0x00: 0 hops; Payload: 00AABB
b := BuildBreakdown("150000AABB")
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
// No Path range since hashCount=0
for _, r := range b.Ranges {
if r.Label == "Path" {
t.Error("expected no Path range for zero-hop packet")
}
}
assertRange(t, b.Ranges, "Payload", 2, 4)
}
func TestBuildBreakdown_AdvertBasic(t *testing.T) {
// Header 0x11: FLOOD/ADVERT
// PathByte 0x01: 1 hop, 1-byte hash
// PathHop: AA
// Payload: 100 bytes (PubKey32 + Timestamp4 + Signature64) + Flags=0x02 (repeater, no extras)
pubkey := repeatHex("AB", 32)
ts := "00000000" // 4 bytes
sig := repeatHex("CD", 64)
flags := "02"
hex := "1101AA" + pubkey + ts + sig + flags
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Header", 0, 0)
assertRange(t, b.Ranges, "Path Length", 1, 1)
assertRange(t, b.Ranges, "Path", 2, 2)
assertRange(t, b.Ranges, "PubKey", 3, 34)
assertRange(t, b.Ranges, "Timestamp", 35, 38)
assertRange(t, b.Ranges, "Signature", 39, 102)
assertRange(t, b.Ranges, "Flags", 103, 103)
}
func TestBuildBreakdown_AdvertWithLocation(t *testing.T) {
// flags=0x12: hasLocation bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "12" // 0x10 = hasLocation
latBytes := "00000000"
lonBytes := "00000000"
hex := "1101AA" + pubkey + ts + sig + flags + latBytes + lonBytes
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Latitude", 104, 107)
assertRange(t, b.Ranges, "Longitude", 108, 111)
}
func TestBuildBreakdown_AdvertWithName(t *testing.T) {
// flags=0x82: hasName bit set
pubkey := repeatHex("00", 32)
ts := "00000000"
sig := repeatHex("00", 64)
flags := "82" // 0x80 = hasName
name := "4E6F6465" // "Node" in hex
hex := "1101AA" + pubkey + ts + sig + flags + name
b := BuildBreakdown(hex)
assertRange(t, b.Ranges, "Name", 104, 107)
}
// helpers
func rangeLabels(ranges []HexRange) []string {
out := make([]string, len(ranges))
for i, r := range ranges {
out[i] = r.Label
}
return out
}
func equalLabels(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func assertRange(t *testing.T, ranges []HexRange, label string, wantStart, wantEnd int) {
t.Helper()
for _, r := range ranges {
if r.Label == label {
if r.Start != wantStart || r.End != wantEnd {
t.Errorf("range %q: want [%d,%d], got [%d,%d]", label, wantStart, wantEnd, r.Start, r.End)
}
return
}
}
t.Errorf("range %q not found in %v", label, rangeLabels(ranges))
}
func TestZeroHopDirectHashSize(t *testing.T) {
// DIRECT (RouteType=2) + REQ (PayloadType=0) → header byte = 0x02
+4
View File
@@ -14,6 +14,10 @@ replace github.com/meshcore-analyzer/geofilter => ../../internal/geofilter
replace github.com/meshcore-analyzer/sigvalidate => ../../internal/sigvalidate
require github.com/meshcore-analyzer/packetpath v0.0.0
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/uuid v1.6.0 // indirect
+2 -2
View File
@@ -38,7 +38,7 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER,
path_json TEXT, timestamp TEXT,
resolved_path TEXT
resolved_path TEXT, raw_hex TEXT
)`)
conn.Exec(`CREATE TABLE nodes (
public_key TEXT PRIMARY KEY, name TEXT, role TEXT,
@@ -264,7 +264,7 @@ func TestEnsureResolvedPathColumn(t *testing.T) {
conn, _ := sql.Open("sqlite", "file:"+dbPath+"?_journal_mode=WAL")
conn.Exec(`CREATE TABLE observations (
id INTEGER PRIMARY KEY, transmission_id INTEGER,
observer_id TEXT, path_json TEXT, timestamp TEXT
observer_id TEXT, path_json TEXT, timestamp TEXT, raw_hex TEXT
)`)
conn.Close()
+15 -4
View File
@@ -16,6 +16,7 @@ import (
"time"
"github.com/gorilla/mux"
"github.com/meshcore-analyzer/packetpath"
)
// Server holds shared state for route handlers.
@@ -957,11 +958,9 @@ func (s *Server) handlePacketDetail(w http.ResponseWriter, r *http.Request) {
pathHops = []interface{}{}
}
rawHex, _ := packet["raw_hex"].(string)
writeJSON(w, PacketDetailResponse{
Packet: packet,
Path: pathHops,
Breakdown: BuildBreakdown(rawHex),
ObservationCount: observationCount,
Observations: mapSliceToObservations(observations),
})
@@ -1020,8 +1019,17 @@ func (s *Server) handlePostPacket(w http.ResponseWriter, r *http.Request) {
contentHash := ComputeContentHash(hexStr)
pathJSON := "[]"
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
// For TRACE packets, path_json must be the payload-decoded route hops
// (decoded.Path.Hops), NOT the raw_hex header bytes which are SNR values.
// For all other packet types, derive path from raw_hex (#886).
if !packetpath.PathBytesAreHops(byte(decoded.Header.PayloadType)) {
if len(decoded.Path.Hops) > 0 {
if pj, e := json.Marshal(decoded.Path.Hops); e == nil {
pathJSON = string(pj)
}
}
} else if hops, err := packetpath.DecodePathFromRawHex(hexStr); err == nil && len(hops) > 0 {
if pj, e := json.Marshal(hops); e == nil {
pathJSON = string(pj)
}
}
@@ -2386,6 +2394,9 @@ func mapSliceToObservations(maps []map[string]interface{}) []ObservationResp {
obs.SNR = m["snr"]
obs.RSSI = m["rssi"]
obs.PathJSON = m["path_json"]
obs.ResolvedPath = m["resolved_path"]
obs.Direction = m["direction"]
obs.RawHex = m["raw_hex"]
obs.Timestamp = m["timestamp"]
result = append(result, obs)
}
+50 -11
View File
@@ -63,6 +63,7 @@ type StoreObs struct {
RSSI *float64
Score *int
PathJSON string
RawHex string
Timestamp string
}
@@ -458,6 +459,10 @@ func (s *PacketStore) Load() error {
if s.db.hasResolvedPath {
rpCol = ",\n\t\t\t\to.resolved_path"
}
obsRawHexCol := ""
if s.db.hasObsRawHex {
obsRawHexCol = ", o.raw_hex"
}
limitClause := ""
if maxPackets > 0 {
@@ -469,7 +474,7 @@ func (s *PacketStore) Load() error {
loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + rpCol + `
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRawHexCol + rpCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx` + limitClause + `
@@ -478,7 +483,7 @@ func (s *PacketStore) Load() error {
loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + rpCol + `
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRawHexCol + rpCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id` + limitClause + `
ORDER BY t.first_seen ASC, o.timestamp DESC`
@@ -500,12 +505,16 @@ func (s *PacketStore) Load() error {
var observerID, observerName, direction, pathJSON, obsTimestamp sql.NullString
var snr, rssi sql.NullFloat64
var score sql.NullInt64
var obsRawHex sql.NullString
var resolvedPathStr sql.NullString
scanArgs := []interface{}{&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
&payloadVersion, &decodedJSON,
&obsID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &obsTimestamp}
if s.db.hasObsRawHex {
scanArgs = append(scanArgs, &obsRawHex)
}
if s.db.hasResolvedPath {
scanArgs = append(scanArgs, &resolvedPathStr)
}
@@ -565,6 +574,7 @@ func (s *PacketStore) Load() error {
RSSI: nullFloatPtr(rssi),
Score: nullIntPtr(score),
PathJSON: obsPJ,
RawHex: nullStrVal(obsRawHex),
Timestamp: normalizeTimestamp(nullStrVal(obsTimestamp)),
}
@@ -1384,11 +1394,15 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
// New ingests always resolve fresh using the current prefix map and neighbor graph.
// On restart, Load() handles reading persisted resolved_path values. (review item #7)
var querySQL string
obsRHCol := ""
if s.db.hasObsRawHex {
obsRHCol = ", o.raw_hex"
}
if s.db.isV3 {
querySQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRHCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
@@ -1398,7 +1412,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
querySQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
o.id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRHCol + `
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
WHERE t.id > ?
@@ -1419,6 +1433,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
routeType, payloadType *int
obsID *int
observerID, observerName, direction, pathJSON, obsTS string
obsRawHex string
snr, rssi *float64
score *int
}
@@ -1435,11 +1450,16 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
var observerID, observerName, direction, pathJSON, obsTimestamp sql.NullString
var snrVal, rssiVal sql.NullFloat64
var scoreVal sql.NullInt64
var obsRawHex sql.NullString
if err := rows.Scan(&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
scanArgs2 := []interface{}{&txID, &rawHex, &hash, &firstSeen, &routeType, &payloadType,
&payloadVersion, &decodedJSON,
&obsIDVal, &observerID, &observerName, &direction,
&snrVal, &rssiVal, &scoreVal, &pathJSON, &obsTimestamp); err != nil {
&snrVal, &rssiVal, &scoreVal, &pathJSON, &obsTimestamp}
if s.db.hasObsRawHex {
scanArgs2 = append(scanArgs2, &obsRawHex)
}
if err := rows.Scan(scanArgs2...); err != nil {
continue
}
@@ -1464,6 +1484,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
direction: nullStrVal(direction),
pathJSON: nullStrVal(pathJSON),
obsTS: nullStrVal(obsTimestamp),
obsRawHex: nullStrVal(obsRawHex),
snr: nullFloatPtr(snrVal),
rssi: nullFloatPtr(rssiVal),
score: nullIntPtr(scoreVal),
@@ -1564,6 +1585,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
RSSI: r.rssi,
Score: r.score,
PathJSON: r.pathJSON,
RawHex: r.obsRawHex,
Timestamp: normalizeTimestamp(r.obsTS),
}
@@ -1806,9 +1828,13 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
}
var querySQL string
obsRHCol2 := ""
if s.db.hasObsRawHex {
obsRHCol2 = ", o.raw_hex"
}
if s.db.isV3 {
querySQL = `SELECT o.id, o.transmission_id, obs.id, obs.name, o.direction,
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')
o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRHCol2 + `
FROM observations o
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
WHERE o.id > ?
@@ -1816,7 +1842,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
LIMIT ?`
} else {
querySQL = `SELECT o.id, o.transmission_id, o.observer_id, o.observer_name, o.direction,
o.snr, o.rssi, o.score, o.path_json, o.timestamp
o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRHCol2 + `
FROM observations o
WHERE o.id > ?
ORDER BY o.id ASC
@@ -1839,6 +1865,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
snr, rssi *float64
score *int
pathJSON string
rawHex string
timestamp string
}
@@ -1848,9 +1875,14 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
var observerID, observerName, direction, pathJSON, ts sql.NullString
var snr, rssi sql.NullFloat64
var score sql.NullInt64
var obsRawHex sql.NullString
if err := rows.Scan(&oid, &txID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &ts); err != nil {
scanArgs3 := []interface{}{&oid, &txID, &observerID, &observerName, &direction,
&snr, &rssi, &score, &pathJSON, &ts}
if s.db.hasObsRawHex {
scanArgs3 = append(scanArgs3, &obsRawHex)
}
if err := rows.Scan(scanArgs3...); err != nil {
continue
}
@@ -1864,6 +1896,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
rssi: nullFloatPtr(rssi),
score: nullIntPtr(score),
pathJSON: nullStrVal(pathJSON),
rawHex: nullStrVal(obsRawHex),
timestamp: nullStrVal(ts),
})
}
@@ -1919,6 +1952,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
RSSI: r.rssi,
Score: r.score,
PathJSON: r.pathJSON,
RawHex: r.rawHex,
Timestamp: normalizeTimestamp(r.timestamp),
}
@@ -2408,7 +2442,12 @@ func (s *PacketStore) enrichObs(obs *StoreObs) map[string]interface{} {
if tx != nil {
m["hash"] = strOrNil(tx.Hash)
m["raw_hex"] = strOrNil(tx.RawHex)
// Prefer per-observation raw_hex; fall back to transmission-level (#881)
if obs.RawHex != "" {
m["raw_hex"] = obs.RawHex
} else {
m["raw_hex"] = strOrNil(tx.RawHex)
}
m["payload_type"] = intPtrOrNil(tx.PayloadType)
m["route_type"] = intPtrOrNil(tx.RouteType)
m["decoded_json"] = strOrNil(tx.DecodedJSON)
+3 -1
View File
@@ -277,6 +277,9 @@ type ObservationResp struct {
SNR interface{} `json:"snr"`
RSSI interface{} `json:"rssi"`
PathJSON interface{} `json:"path_json"`
ResolvedPath interface{} `json:"resolved_path,omitempty"`
Direction interface{} `json:"direction,omitempty"`
RawHex interface{} `json:"raw_hex,omitempty"`
Timestamp interface{} `json:"timestamp"`
}
@@ -312,7 +315,6 @@ type PacketTimestampsResponse struct {
type PacketDetailResponse struct {
Packet interface{} `json:"packet"`
Path []interface{} `json:"path"`
Breakdown *Breakdown `json:"breakdown"`
ObservationCount int `json:"observation_count"`
Observations []ObservationResp `json:"observations,omitempty"`
}
+241
View File
@@ -0,0 +1,241 @@
# Clock Skew Classifier — Redesign
**Status:** spec, pre-implementation
**Supersedes:** parts of #690 / #789 / #845 / PR #894
**Date drafted:** 2026-04-24
## Problem
The current classifier (`cmd/server/clock_skew.go`) uses windowed medians, hysteresis, "good fraction" floors, and a 365-day `no_clock` threshold. It produces:
- False `no_clock` flags on nodes whose clocks are working today but had garbage timestamps in recent samples.
- Symmetric severity bands that conflate "clock at firmware default" with "operator set the clock wrong by a year" — completely different operator actions required.
- Compounding over-engineering as each operator complaint added a new tier or window.
The actual physical reality of these devices is much simpler than the classifier assumes.
## Hardware reality
Most MeshCore nodes have **no auto-updating RTC**. There are two hardware paths:
1. **Volatile RTC nodes** (`firmware/src/helpers/ArduinoHelpers.h:11` → `VolatileRTCClock`):
- On boot, `base_time` is hardcoded to a firmware-build constant (currently `1715770351` = 2024-05-15 10:52:31 UTC).
- `getCurrentTime()` returns `base_time + millis()/1000`.
- On reboot the value snaps back to the constant.
- User must manually sync via companion app (`set time` CLI invokes `setCurrentTime(...)`) to set a real wall-clock time, which then ticks until the next reboot.
2. **Hardware-RTC nodes** (`firmware/src/helpers/AutoDiscoverRTCClock.cpp` — DS3231 / RV3028 / PCF8563):
- Real-time chip with battery backup. Holds the time across reboots.
- Behaves correctly once set; no default-snap behavior.
The `set time RESET` CLI command (`firmware/src/helpers/CommonCLI.cpp:215`) explicitly calls `setCurrentTime(1715770351)` regardless of hardware — so even hardware-RTC nodes can be deliberately reset to the default epoch.
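The defaulted → user sync → reboot → defaulted cycle is easy to model. A minimal Go sketch of the volatile-RTC behavior (hypothetical names; the real firmware is C++):

```go
package main

import "fmt"

// firmwareDefaultEpoch mirrors the current VolatileRTCClock base_time constant.
const firmwareDefaultEpoch int64 = 1715770351

// volatileRTC models the firmware behavior: time = baseTime + seconds since boot.
type volatileRTC struct {
	baseTime         int64 // snaps back to firmwareDefaultEpoch on reboot
	secondsSinceBoot int64
}

func (c *volatileRTC) currentTime() int64 { return c.baseTime + c.secondsSinceBoot }

// setCurrentTime is the user-sync path (companion app / `set time` CLI).
func (c *volatileRTC) setCurrentTime(ts int64) { c.baseTime = ts - c.secondsSinceBoot }

// reboot snaps the clock back to the firmware default.
func (c *volatileRTC) reboot() {
	c.baseTime = firmwareDefaultEpoch
	c.secondsSinceBoot = 0
}

func main() {
	c := &volatileRTC{baseTime: firmwareDefaultEpoch}
	c.secondsSinceBoot = 3 * 86400 // three days of uptime, never synced
	fmt.Println(c.currentTime())   // default epoch + small uptime: the "default" cluster
	c.setCurrentTime(1745452800)   // user syncs to a real wall-clock time
	fmt.Println(c.currentTime())   // ticks normally from here
	c.reboot()                     // snaps back: the pattern #845 read as "bimodal"
	fmt.Println(c.currentTime())
}
```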
**Therefore every node is in exactly one of these states:**
| State | Description |
|---|---|
| **Default / never set** | RTC is at a firmware-default epoch + ticking up since the last boot. |
| **Set, drifting normally** | RTC was synced; small skew accumulating at ~0.8s/day per #789 reports. |
| **Set, drifted past tolerance** | Like above but skew has grown beyond what's useful. |
| **Wrong** | Operator-set incorrect time, or genuine RTC malfunction not matching any known default. |
There is no "bimodal RTC bug" — what looked bimodal in #845 is just a sequence of `defaulted → user sync → reboot → defaulted again`. The "bad" timestamps are not noise; they're a constant (the default epoch + a small uptime).
## Production data analysis (2026-04-24)
### 00id.net (this deployment, 416 nodes, commit `abd9c46`)
`lastSkewSec` (advert_ts - observed_ts) distribution:
| Bucket | Count | Pct |
|---|---:|---:|
| OK ≤15s | 90 | 22% |
| Degrading ≤60s | 93 | 22% |
| Degraded ≤10m | 13 | 3% |
| off ≤1d | 5 | 1% |
| off ≤1y | 110 | 26% |
| absurd >1y | 105 | 25% |
Per-node `lastAdvertTS` raw timestamp distribution shows a sharp default cluster:
```
+0 days count=19 samples=114969 ← exactly at 1715770351 (just rebooted)
+1d count=9 samples=24766
+2d count=7 samples=58101
+3d count=2 samples=360
... ← decay through ~110 days
+113d count=2 samples=53776
```
103 of 416 nodes (25%) have `lastAdvertTS` between `1715770351` and `1715770351 + 1095 days`, consistent with the volatile-RTC-default-ticking-up pattern.
A second cluster of 5 nodes has `lastAdvertTS = 1672531542 ≈ 1672531200 + 5min` = **2023-01-01 00:00:00 UTC** + small uptime. This is a *different* firmware-default epoch from an older firmware version.
### Cascadia (analyzer.cascadiamesh.org, 433 nodes in 5000-packet sample, commit `111b03c` v3.5.0)
ADVERT timestamp by year-month:
```
1970-01 1 ← epoch zero (ESP32 native fallback OR ancient firmware)
2021-01 1 ← possible third default epoch
2023-01 2 ← old firmware default (matches 00id)
2024-05 60 ← current VolatileRTCClock + days uptime
2024-06 39 ← same default + weeks uptime
2024-07 21
2024-08 10
2024-09 2
2024-10 1
2024-11 2 ← decays out as fewer nodes have multi-month uptime since reboot
2025-10 1 ← pre-current-now miscellany
2025-11 2
2026-03 4
2026-04 285 ← currently set clocks (this is "now-ish")
2027-04 1 ← operator set wrong by ~1 year (typo?)
2067-12 1 ← operator set wildly wrong / corrupted RTC
```
Confirms the model: ~67% of nodes have a current clock, ~32% are at known firmware defaults at varying uptime offsets, ~3 outliers represent genuine misconfigurations.
## Known firmware default epochs
These are the values discovered in production data so far:
| Epoch (unix) | UTC | Source |
|---:|---|---|
| `0` | 1970-01-01 | Likely ESP32 boot when no RTC initialization runs (`time(NULL)` returns 0). |
| `1609459200` | 2021-01-01 | Speculation — single-sample evidence, validate as more data arrives. |
| `1672531200` | 2023-01-01 | Older firmware `VolatileRTCClock::base_time` value. |
| `1715770351` | 2024-05-15 10:52:31 | **Current** `VolatileRTCClock` constructor + `set time RESET` CLI. |
Treat the table as data, not fixed code. New firmware versions will introduce new defaults; expect to add to the list over time.
## Reconciliation with #690 — the four timestamps
#690 lists three timestamps; in practice there are four signals worth distinguishing:
| Signal | Source | Used for |
|---|---|---|
| `advert_ts` | Inside MeshCore packet, set by sending node | Per-node classification (THE signal). |
| `mqtt_envelope_ts` | Set by observer when it forwards via MQTT | Observer-side calibration only — *not* a direct node-skew signal because observer clock can itself be wrong. |
| `corescope_received_ts` | Wall clock when CoreScope ingested the message | Reference "now"; calibration cross-check. |
| `same_packet_across_observers` | Multiple observers seeing the same hash | Phase 2 calibration (triangulation). |
**Inputs flow:**
1. **Phase 2 (existing, kept):** for each packet hash seen by ≥2 observers, compute each observer's deviation from the per-packet median observed_ts → `observerOffset`. This is the triangulation #690 calls for ("Same packet observed by more than one (ideally 3+) observers gives good indication if one observer is off"). Observer offsets are the calibration table.
2. **Per-advert correction (existing, kept):** `correctedSkew = (advert_ts - observed_ts) + observerOffset[observer_id]`. If no calibration exists for an observer, fall back to raw skew with `calibrated: false`.
3. **Default detection (new):** runs on RAW `advert_ts`, not corrected. The firmware default is a fixed wall-clock value; observer offsets are seconds-to-minutes scale and cannot move `advert_ts` from 2024 to 2026. Default check is independent of calibration.
4. **Severity classification (new):** if `is_default(advert_ts)``default`; else classify by `|correctedSkew|` band.
This keeps everything #690 asks for (observer detection, bias subtraction, triangulation), and adds the firmware-default cluster as a new pre-empting tier.
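A minimal Go sketch of steps 1 and 2 (hypothetical shapes; the real logic lives in `calibrateObservers()`). Observations are keyed hash → observer → observed_ts:

```go
package main

import (
	"fmt"
	"sort"
)

// calibrate computes each observer's mean deviation from the per-packet median
// observed_ts, over packet hashes seen by >=2 observers (Phase 2 triangulation).
func calibrate(obsByHash map[string]map[string]int64) map[string]float64 {
	sums := map[string]float64{}
	counts := map[string]int{}
	for _, byObserver := range obsByHash {
		if len(byObserver) < 2 {
			continue // single-observer packets give no triangulation signal
		}
		var ts []float64
		for _, t := range byObserver {
			ts = append(ts, float64(t))
		}
		sort.Float64s(ts)
		median := ts[len(ts)/2]
		for id, t := range byObserver {
			sums[id] += float64(t) - median
			counts[id]++
		}
	}
	offsets := map[string]float64{}
	for id, s := range sums {
		offsets[id] = s / float64(counts[id])
	}
	return offsets
}

// correctedSkew applies the observer offset per the spec formula; falls back to
// raw skew with calibrated=false when the observer was never triangulated.
func correctedSkew(advertTS, observedTS int64, observer string, offsets map[string]float64) (skew float64, calibrated bool) {
	raw := float64(advertTS - observedTS)
	off, ok := offsets[observer]
	if !ok {
		return raw, false
	}
	return raw + off, true
}

func main() {
	// obs2's clock runs 2s fast relative to the per-packet median.
	offsets := calibrate(map[string]map[string]int64{
		"hashA": {"obs1": 100, "obs2": 102, "obs3": 100},
	})
	fmt.Println(correctedSkew(95, 102, "obs2", offsets)) // -5 true
}
```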
## UI: explain WHY (#690 requirement)
The classifier alone doesn't satisfy #690's "present on the UI why clock skew is obvious or suspected." The evidence panel from PR #906 (per-hash observer breakdown showing raw vs corrected skew per observer) is the WHY.
For each per-node clock card the UI must show:
- **Tier badge** (default / ok / degrading / degraded / wrong) + magnitude.
- **Plain-English reason line**: e.g. "Last advert at 2024-05-15 + 3.2 days uptime — matches firmware default (volatile RTC, not yet user-set)" or "Last advert 12s vs wall clock — within OK tolerance."
- **Calibration footnote**: "Skew corrected using observer X offset +1.7s (computed from 412 multi-observer packets)" or "Single-observer measurement, no calibration available."
- **Evidence accordion** (PR #906 shape, retained): for the most recent N hashes, each observer's raw vs corrected skew + the observer's offset.
For the per-observer page (also from PR #906): show the observer's offset, the multi-observer sample count, and a tier badge using the same scale (treating `|observerOffset|` as the skew).
## Proposed classifier
Per-advert classification, no windowing:
```python
DEFAULT_EPOCHS = [0, 1609459200, 1672531200, 1715770351]
MAX_PLAUSIBLE_UPTIME_SEC = 1095 * 86400  # 3 years

def is_default(ts):
    return any(d <= ts <= d + MAX_PLAUSIBLE_UPTIME_SEC for d in DEFAULT_EPOCHS)

def classify(advert_ts, corrected_skew_sec):
    if is_default(advert_ts):
        return "default"                    # gray
    abs_skew = abs(corrected_skew_sec)
    if abs_skew <= 15:  return "ok"         # green
    if abs_skew <= 60:  return "degrading"  # yellow
    if abs_skew <= 600: return "degraded"   # orange
    return "wrong"                          # red
```
`corrected_skew_sec` is the observer-bias-subtracted skew per Phase 2 calibration. Default detection is independent of calibration (runs on raw `advert_ts`). Caveat: with the 3-year uptime cap the `1715770351` window extends into 2027 and overlaps wall-clock now, so the implementation must let a small `|corrected_skew|` win over a default-window match; otherwise every correctly-set 2026 clock would classify as `default`. A genuinely defaulted volatile RTC is off by months, so the two cases remain separable by skew magnitude.
Per-node state = classification of the node's most-recent advert (per hash, picking the most recent observation across all observers). No medians, no good-fraction, no hysteresis.
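Sketched as a Go port (hypothetical names, not the actual `clock_skew.go` code). One assumption beyond the pseudocode: a skew within the orange band takes precedence over a default-window match, since the 3-year windows overlap wall-clock now and a genuinely defaulted clock is off by months, far beyond 600s:

```go
package main

import (
	"fmt"
	"math"
)

// Known firmware default epochs — data, not fixed code; extend as new ones appear.
var defaultEpochs = []int64{0, 1609459200, 1672531200, 1715770351}

const maxPlausibleUptimeSec int64 = 1095 * 86400 // 3 years

func isDefault(ts int64) bool {
	for _, d := range defaultEpochs {
		if ts >= d && ts <= d+maxPlausibleUptimeSec {
			return true
		}
	}
	return false
}

func classify(advertTS int64, correctedSkewSec float64) string {
	absSkew := math.Abs(correctedSkewSec)
	// Assumption: only pre-empt the bands when the skew itself is beyond the
	// orange band, so current clocks inside an overlapping window stay "ok".
	if isDefault(advertTS) && absSkew > 600 {
		return "default" // gray
	}
	switch {
	case absSkew <= 15:
		return "ok" // green
	case absSkew <= 60:
		return "degrading" // yellow
	case absSkew <= 600:
		return "degraded" // orange
	default:
		return "wrong" // red
	}
}

func main() {
	defaulted := int64(1715770351 + 3*86400) // default epoch + 3 days uptime
	fmt.Println(classify(defaulted, 6.2e7))  // skew of ~2 years -> "default"
	fmt.Println(classify(1745539200, 12))    // synced clock, 12s skew -> "ok"
	fmt.Println(classify(1840000000, 4e7))   // far-future advert -> "wrong"
}
```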
## Severity tier definitions
| Tier | Condition | Color | UI label | Meaning |
|---|---|---|---|---|
| `default` | Advert ts within `[default, default + 3y]` of any known epoch | Gray | "Default" | Volatile RTC at firmware boot constant; never set or rebooted and not re-synced. |
| `ok` | abs(skew) ≤ 15s | Green | "OK" | Working clock. |
| `degrading` | 15s < abs(skew) ≤ 60s | Yellow | "Degrading" | Real but accumulating drift. |
| `degraded` | 60s < abs(skew) ≤ 600s | Orange | "Degraded" | Off by minutes — needs re-sync. |
| `wrong` | abs(skew) > 600s and not `default` | Red | "Wrong" | Operator-set error or RTC malfunction. |
## What this kills
- The 365-day `no_clock` threshold and the entire `recentSkewWindow{Count,Sec}` machinery.
- The hysteresis / `goodFraction` / `longTermGoodFraction` logic from PR #894.
- The proposed `bimodal_clock` tier from #845 — the pattern is not bimodal, it's defaulted vs set.
- All Theil-Sen drift calculations as classifier inputs (drift remains a derived display value).
## What this preserves
- **Phase 2 observer calibration** (`calibrateObservers()`) — kept verbatim. It's what powers the "subtract observer bias" requirement from #690 and provides the triangulation evidence the UI needs.
- **Drift display** (computed but not classifying).
- **PR #906 evidence UI** — orthogonal to the classifier; it is in fact the implementation of #690's "explain WHY" requirement. Only label strings change to match the new tier names.
- **`/api/observers/clock-skew`** — unchanged shape.
## API impact
`/api/nodes/{pubkey}/clock-skew` response changes:
- `severity` enum: `default | ok | degrading | degraded | wrong` (no more `no_clock | severe | warn | absurd`).
- New field `defaultEpoch` (int, optional): if `severity == "default"`, the matched epoch.
- Drop fields: `recentMedianSkewSec`, `goodFraction`, `recentBadSampleCount`, `longTermGoodFraction`.
- Keep: `lastSkewSec`, `medianSkewSec`, `meanSkewSec`, `driftPerDaySec`, `sampleCount`, `calibrated`, `lastAdvertTS`, `lastObservedTS`, `nodeName`, `nodeRole`.
`/api/nodes/clock-skew` (fleet) shape unchanged except severity enum values.
## UI impact
- New CSS classes `skew-badge--default`, `skew-badge--degrading`, `skew-badge--degraded`, `skew-badge--wrong`. Drop `--no_clock`, `--severe`, `--warn`, `--absurd`, `--bimodal_clock`.
- Tooltip text updated per tier.
- "Default" badge tooltip should explain the clock is at firmware default plus uptime since boot, and the operator hasn't set it yet (or hasn't re-set it since the last reboot).
## Migration
Single PR replaces the classifier in `clock_skew.go` and updates the frontend badges/labels. No database schema change, no data migration — all per-call computation.
## Open issues to close
- **#789** (median hides corrected clocks) — resolved by per-advert classification.
- **#845** (bimodal_clock tier) — replaced by `default` tier; the pattern that motivated it is correctly captured.
- **PR #894** — close without merging; this design supersedes Option C entirely.
- **#690** UI completion (PR #906) — keeps moving in parallel; only label updates needed.
## Validation plan
1. Hand-run the classifier against a snapshot of `/api/nodes/clock-skew` from 00id and cascadia. Confirm:
- All 103 00id "absurd" nodes reclassify as `default`.
- All 5 cascadia 2023-01 nodes reclassify as `default`.
- The 2027 / 2067 cascadia outliers reclassify as `wrong`.
- The 285 cascadia 2026-04 nodes reclassify as `ok` (or `degrading` if drift exceeds 15s).
2. Add per-tier unit tests in `cmd/server/clock_skew_test.go`.
3. Add a regression test for each known default epoch (synthesize advert at `default + 0s`, `default + 1d`, `default + 3y - 1s` → all classify as `default`).
4. Edge cases:
- `advert_ts == 0` → matches default epoch 0.
- `advert_ts == 1715770351 + 1096 days` → no longer matches (3-year uptime cap exceeded) — should fall through to time-based classification, likely `wrong`.
- Future timestamps beyond `now + 600s``wrong`.
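Validation item 3 and the cap boundary translate to a table-driven sweep (assumes an `isDefault` helper shaped like the pseudocode above; names hypothetical):

```go
package main

import "fmt"

var defaultEpochs = []int64{0, 1609459200, 1672531200, 1715770351}

const maxPlausibleUptimeSec int64 = 1095 * 86400 // 3 years

func isDefault(ts int64) bool {
	for _, d := range defaultEpochs {
		if ts >= d && ts <= d+maxPlausibleUptimeSec {
			return true
		}
	}
	return false
}

func main() {
	// Item 3: epoch + 0s, + 1d, + 3y - 1s must all classify as default.
	for _, epoch := range defaultEpochs {
		for _, off := range []int64{0, 86400, maxPlausibleUptimeSec - 1} {
			if !isDefault(epoch + off) {
				fmt.Printf("FAIL: %d+%d should be default\n", epoch, off)
			}
		}
	}
	// Only the LARGEST epoch gets a clean above-cap negative: a smaller epoch's
	// cap can land inside a later epoch's window (the overlapping-windows case).
	last := defaultEpochs[len(defaultEpochs)-1]
	if isDefault(last + maxPlausibleUptimeSec + 1) {
		fmt.Println("FAIL: above-cap timestamp should not match")
	}
	fmt.Println("regression sweep done")
}
```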
## Out of scope (follow-ups)
- Per-firmware-version known-default lookup (when `firmware_version` field becomes reliable on adverts).
- Reboot-count / flakiness indicator ("this node has hit default N times in last 30d").
- Auto-discovery of new default epochs from clustering analysis (could detect a 4th default emerging in the wild).
- Filtering defaulted-clock adverts out of time-windowed analytics queries (separate spec — affects path attribution).
+3
View File
@@ -0,0 +1,3 @@
module github.com/meshcore-analyzer/packetpath
go 1.22
+76
View File
@@ -0,0 +1,76 @@
// Package packetpath provides shared helpers for extracting path hops from
// raw MeshCore packet hex bytes.
package packetpath
import (
"encoding/hex"
"fmt"
"strings"
)
// DecodePathFromRawHex extracts the header path hops directly from raw hex bytes.
// This is the authoritative path that matches what's in raw_hex, as opposed to
// decoded.Path.Hops which may be overwritten for TRACE packets (issue #886).
//
// WARNING: This function returns the literal header path bytes regardless of
// payload type. For TRACE packets these bytes are SNR values, NOT hop hashes.
// Callers that may receive TRACE packets MUST check PathBytesAreHops(payloadType)
// first, or use the safer DecodeHopsForPayload wrapper.
func DecodePathFromRawHex(rawHex string) ([]string, error) {
buf, err := hex.DecodeString(rawHex)
if err != nil || len(buf) < 2 {
return nil, fmt.Errorf("invalid or too-short hex")
}
headerByte := buf[0]
offset := 1
if IsTransportRoute(int(headerByte & 0x03)) {
if len(buf) < offset+4 {
return nil, fmt.Errorf("too short for transport codes")
}
offset += 4
}
if offset >= len(buf) {
return nil, fmt.Errorf("too short for path byte")
}
pathByte := buf[offset]
offset++
hashSize := int(pathByte>>6) + 1
hashCount := int(pathByte & 0x3F)
hops := make([]string, 0, hashCount)
for i := 0; i < hashCount; i++ {
start := offset + i*hashSize
end := start + hashSize
if end > len(buf) {
break
}
hops = append(hops, strings.ToUpper(hex.EncodeToString(buf[start:end])))
}
return hops, nil
}
// DecodeHopsForPayload returns the header path hops only when the payload type's
// header bytes are actually route hops (i.e. PathBytesAreHops(payloadType) is true).
// For TRACE packets it returns (nil, ErrPayloadHasNoHeaderHops) so the caller is
// forced to source hops from the decoded payload instead.
//
// Prefer this over DecodePathFromRawHex when the payload type is known.
func DecodeHopsForPayload(rawHex string, payloadType byte) ([]string, error) {
if !PathBytesAreHops(payloadType) {
return nil, ErrPayloadHasNoHeaderHops
}
return DecodePathFromRawHex(rawHex)
}
// ErrPayloadHasNoHeaderHops is returned by DecodeHopsForPayload when the
// payload type repurposes the raw_hex header path bytes (e.g. TRACE → SNR values).
var ErrPayloadHasNoHeaderHops = errPayloadHasNoHeaderHops{}
type errPayloadHasNoHeaderHops struct{}
func (errPayloadHasNoHeaderHops) Error() string {
return "payload type repurposes header path bytes; source hops from decoded payload"
}
+150
View File
@@ -0,0 +1,150 @@
package packetpath
import (
"encoding/hex"
"encoding/json"
"strings"
"testing"
)
func TestDecodePathFromRawHex_Basic(t *testing.T) {
// Build a simple FLOOD packet (route_type=1) with 2 hops of hashSize=1
// header: route_type=1, payload_type=2 (TXT_MSG), version=0 → 0b00_0010_01 = 0x09
// path byte: hashSize=1 (bits 7-6 = 0), hashCount=2 (bits 5-0 = 2) → 0x02
// hops: AB, CD
// payload: some bytes
raw := "0902ABCD" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AB" || hops[1] != "CD" {
t.Fatalf("expected [AB, CD], got %v", hops)
}
}
func TestDecodePathFromRawHex_ZeroHops(t *testing.T) {
// DIRECT route (type=2), no hops → 0b00_0010_10 = 0x0A
// path byte: 0x00 (0 hops)
raw := "0A00" + "DEADBEEF"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 0 {
t.Fatalf("expected 0 hops, got %v", hops)
}
}
func TestDecodePathFromRawHex_TransportRoute(t *testing.T) {
// TRANSPORT_FLOOD (route_type=0), payload_type=5 (GRP_TXT), version=0
// header: 0b00_0101_00 = 0x14
// transport codes: 4 bytes
// path byte: hashSize=1, hashCount=1 → 0x01
// hop: FF
raw := "14" + "00112233" + "01" + "FF" + "DEAD"
hops, err := DecodePathFromRawHex(raw)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 1 || hops[0] != "FF" {
t.Fatalf("expected [FF], got %v", hops)
}
}
// buildTracePacket creates a TRACE packet hex string where header path bytes are
// SNR values, and payload contains the actual route hops.
func buildTracePacket() (rawHex string, headerPathHops []string, payloadHops []string) {
// DIRECT route (type=2), TRACE payload (type=9), version=0
// header byte: 0b00_1001_10 = 0x26
headerByte := byte(0x26)
// Header path: 2 SNR bytes (hashSize=1, hashCount=2) → path byte = 0x02
// SNR values: 0x1A (26 dB), 0x0F (15 dB)
pathByte := byte(0x02)
snrBytes := []byte{0x1A, 0x0F}
// TRACE payload: tag(4) + authCode(4) + flags(1) + path hops
tag := []byte{0x01, 0x00, 0x00, 0x00}
authCode := []byte{0x02, 0x00, 0x00, 0x00}
// flags: path_sz=0 (1 byte hops), other bits=0 → 0x00
flags := byte(0x00)
// Payload hops: AA, BB, CC (the actual route)
payloadPathBytes := []byte{0xAA, 0xBB, 0xCC}
var buf []byte
buf = append(buf, headerByte, pathByte)
buf = append(buf, snrBytes...)
buf = append(buf, tag...)
buf = append(buf, authCode...)
buf = append(buf, flags)
buf = append(buf, payloadPathBytes...)
rawHex = strings.ToUpper(hex.EncodeToString(buf))
headerPathHops = []string{"1A", "0F"} // SNR values — NOT route hops
payloadHops = []string{"AA", "BB", "CC"} // actual route hops from payload
return
}
func TestDecodePathFromRawHex_TraceReturnsSNR(t *testing.T) {
rawHex, expectedSNR, _ := buildTracePacket()
hops, err := DecodePathFromRawHex(rawHex)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
// DecodePathFromRawHex always returns header path bytes — for TRACE these are SNR values
if len(hops) != len(expectedSNR) {
t.Fatalf("expected %d hops (SNR), got %d: %v", len(expectedSNR), len(hops), hops)
}
for i, h := range hops {
if h != expectedSNR[i] {
t.Errorf("hop[%d]: expected %s, got %s", i, expectedSNR[i], h)
}
}
}
func TestTracePathJSON_UsesPayloadHops(t *testing.T) {
// This test validates the TRACE vs non-TRACE logic that callers should implement:
// For TRACE: path_json = decoded.Path.Hops (payload-decoded route hops)
// For non-TRACE: path_json = DecodePathFromRawHex(raw_hex)
rawHex, snrHops, payloadHops := buildTracePacket()
// DecodePathFromRawHex returns SNR bytes for TRACE
headerHops, _ := DecodePathFromRawHex(rawHex)
headerJSON, _ := json.Marshal(headerHops)
// payload hops (what decoded.Path.Hops would return after TRACE decoding)
payloadJSON, _ := json.Marshal(payloadHops)
// They must differ — SNR != route hops
if string(headerJSON) == string(payloadJSON) {
t.Fatalf("SNR hops and payload hops should differ for TRACE; both are %s", headerJSON)
}
// For TRACE, path_json should be payloadHops, not headerHops
_ = snrHops // snrHops == headerHops — used for documentation
t.Logf("TRACE: header path (SNR) = %s, payload path (route) = %s", headerJSON, payloadJSON)
}
func TestDecodeHopsForPayload_NonTrace(t *testing.T) {
// header 0x01, path_len 0x02, hops 0xAA 0xBB, then payload bytes
raw := "0102AABB00"
hops, err := DecodeHopsForPayload(raw, 0x05) // GRP_TXT — header path bytes ARE hops
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(hops) != 2 || hops[0] != "AA" || hops[1] != "BB" {
t.Errorf("expected [AA BB], got %v", hops)
}
}
func TestDecodeHopsForPayload_TraceReturnsError(t *testing.T) {
raw := "010205F00100"
hops, err := DecodeHopsForPayload(raw, PayloadTRACE)
if err != ErrPayloadHasNoHeaderHops {
t.Errorf("expected ErrPayloadHasNoHeaderHops, got %v", err)
}
if hops != nil {
t.Errorf("expected nil hops for TRACE, got %v", hops)
}
}
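The header-byte arithmetic repeated in the test comments above (version in bits 7-6, payload type in bits 5-2, route type in bits 1-0) can be sketched as a standalone decoder. The bit layout is taken from those comments; `decodeHeader` is an illustrative helper, not a function from the package:

```go
package main

import "fmt"

// decodeHeader splits a header byte per the layout the tests assume:
// bits 7-6 = version, bits 5-2 = payload type, bits 1-0 = route type.
func decodeHeader(b byte) (version, payloadType, routeType int) {
	return int(b >> 6), int((b >> 2) & 0x0F), int(b & 0x03)
}

func main() {
	// The four header bytes used in the tests above.
	for _, b := range []byte{0x09, 0x0A, 0x14, 0x26} {
		v, p, r := decodeHeader(b)
		fmt.Printf("0x%02X: version=%d payload=%d route=%d\n", b, v, p, r)
	}
	// 0x09 → payload 2 (TXT_MSG) on route 1 (FLOOD);
	// 0x26 → payload 9 (TRACE) on route 2 (DIRECT), matching buildTracePacket.
}
```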
+24
@@ -0,0 +1,24 @@
package packetpath
// Route type constants (header bits 1-0).
const (
RouteTransportFlood = 0
RouteFlood = 1
RouteDirect = 2
RouteTransportDirect = 3
)
// PayloadTRACE is the payload type constant for TRACE packets.
const PayloadTRACE = 0x09
// IsTransportRoute returns true for TRANSPORT_FLOOD (0) and TRANSPORT_DIRECT (3).
func IsTransportRoute(routeType int) bool {
return routeType == RouteTransportFlood || routeType == RouteTransportDirect
}
// PathBytesAreHops returns true when the raw_hex header path bytes represent
// route hop hashes (the normal case). Returns false for packet types where
// header path bytes are repurposed (e.g. TRACE uses them for SNR values).
func PathBytesAreHops(payloadType byte) bool {
return payloadType != PayloadTRACE
}
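A hypothetical caller sketch of how `PathBytesAreHops` is meant to steer `path_json` construction, per the TRACE vs non-TRACE logic described in `TestTracePathJSON_UsesPayloadHops` above. `pathForPacket` and its arguments are illustrative stand-ins (the real call sites live in the server), and the predicate is re-declared here so the sketch is self-contained:

```go
package main

import "fmt"

// PayloadTRACE and PathBytesAreHops mirror the constants file above.
const PayloadTRACE = 0x09

func PathBytesAreHops(payloadType byte) bool { return payloadType != PayloadTRACE }

// pathForPacket picks the path source: for TRACE the header path bytes are
// SNR values, so the route must come from the decoded payload instead.
func pathForPacket(headerHops, payloadHops []string, payloadType byte) []string {
	if PathBytesAreHops(payloadType) {
		return headerHops
	}
	return payloadHops
}

func main() {
	headerHops := []string{"1A", "0F"}         // SNR bytes on a TRACE packet
	payloadHops := []string{"AA", "BB", "CC"}  // actual route from the payload
	fmt.Println(pathForPacket(headerHops, payloadHops, PayloadTRACE)) // [AA BB CC]
	fmt.Println(pathForPacket(headerHops, payloadHops, 0x05))         // [1A 0F]
}
```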
+31
@@ -0,0 +1,31 @@
package packetpath
import "testing"
func TestIsTransportRoute(t *testing.T) {
if !IsTransportRoute(RouteTransportFlood) {
t.Error("RouteTransportFlood should be transport")
}
if !IsTransportRoute(RouteTransportDirect) {
t.Error("RouteTransportDirect should be transport")
}
if IsTransportRoute(RouteFlood) {
t.Error("RouteFlood should not be transport")
}
if IsTransportRoute(RouteDirect) {
t.Error("RouteDirect should not be transport")
}
}
func TestPathBytesAreHops(t *testing.T) {
if PathBytesAreHops(PayloadTRACE) {
t.Error("PathBytesAreHops(PayloadTRACE) should be false")
}
// All other known payload types should return true.
otherTypes := []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}
for _, pt := range otherTypes {
if !PathBytesAreHops(pt) {
t.Errorf("PathBytesAreHops(0x%02X) should be true", pt)
}
}
}
+5 -5
@@ -3495,12 +3495,12 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
});
// Summary
var counts = { ok: 0, warning: 0, critical: 0, absurd: 0 };
var counts = { ok: 0, degrading: 0, degraded: 0, wrong: 0, default: 0 };
data.forEach(function(n) { if (counts[n.severity] !== undefined) counts[n.severity]++; });
// Filter buttons (also serve as summary — no separate stats pills needed)
var filterColors = { ok: 'var(--status-green)', warning: 'var(--status-yellow)', critical: 'var(--status-orange)', absurd: 'var(--status-purple)', no_clock: 'var(--text-muted)' };
var filters = ['all', 'ok', 'warning', 'critical', 'absurd', 'no_clock'];
var filterColors = { ok: 'var(--status-green)', degrading: 'var(--status-yellow)', degraded: 'var(--status-orange)', wrong: 'var(--status-red)', default: 'var(--text-muted)' };
var filters = ['all', 'ok', 'degrading', 'degraded', 'wrong', 'default'];
var filterHtml = '<div style="margin-bottom:10px">' + filters.map(function(f) {
var dot = f !== 'all' ? '<span style="display:inline-block;width:8px;height:8px;border-radius:50%;background:' + filterColors[f] + ';margin-right:4px;vertical-align:middle"></span>' : '';
return '<button class="clock-filter-btn' + (activeFilter === f ? ' active' : '') + '" data-filter="' + f + '">' +
@@ -3513,8 +3513,8 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
var rowClass = 'clock-fleet-row--' + (n.severity || 'ok');
var lastAdv = n.lastObservedTS ? new Date(n.lastObservedTS * 1000).toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC') : '—';
var skewVal = window.currentSkewValue(n);
var skewText = n.severity === 'no_clock' ? 'No Clock' : formatSkew(skewVal);
var driftText = n.severity === 'no_clock' || !n.driftPerDaySec ? '' : formatDrift(n.driftPerDaySec);
var skewText = n.severity === 'default' ? 'Default' : formatSkew(skewVal);
var driftText = n.severity === 'default' || !n.driftPerDaySec ? '' : formatDrift(n.driftPerDaySec);
return '<tr class="' + rowClass + '" data-pubkey="' + esc(n.pubkey) + '" style="cursor:pointer">' +
'<td><strong>' + esc(n.nodeName || n.pubkey.slice(0, 12)) + '</strong></td>' +
'<td style="font-family:var(--mono,monospace)">' + skewText + '</td>' +
+65
@@ -14,6 +14,71 @@ function isTransportRoute(rt) { return rt === 0 || rt === 3; }
function getPathLenOffset(routeType) { return isTransportRoute(routeType) ? 5 : 1; }
function transportBadge(rt) { return isTransportRoute(rt) ? ' <span class="badge badge-transport" title="' + routeTypeName(rt) + '">T</span>' : ''; }
/**
* Compute breakdown byte ranges from raw_hex on the client.
* Mirrors cmd/server/decoder.go BuildBreakdown(). Used so per-observation raw_hex
* (which can differ in path length from the top-level packet) gets accurate
* highlighted byte ranges, instead of using the server-supplied breakdown
* computed once from the top-level raw_hex.
*/
function computeBreakdownRanges(hexString, routeType, payloadType) {
if (!hexString) return [];
const clean = hexString.replace(/\s+/g, '');
const bytes = clean.length / 2;
if (bytes < 2) return [];
const ranges = [];
// Header
ranges.push({ start: 0, end: 0, label: 'Header' });
let offset = 1;
if (isTransportRoute(routeType)) {
if (bytes < offset + 4) return ranges;
ranges.push({ start: offset, end: offset + 3, label: 'Transport Codes' });
offset += 4;
}
if (offset >= bytes) return ranges;
// Path Length byte
ranges.push({ start: offset, end: offset, label: 'Path Length' });
const pathByte = parseInt(clean.slice(offset * 2, offset * 2 + 2), 16);
offset += 1;
if (isNaN(pathByte)) return ranges;
const hashSize = (pathByte >> 6) + 1;
const hashCount = pathByte & 0x3F;
const pathBytes = hashSize * hashCount;
if (hashCount > 0 && offset + pathBytes <= bytes) {
ranges.push({ start: offset, end: offset + pathBytes - 1, label: 'Path' });
}
offset += pathBytes;
if (offset >= bytes) return ranges;
const payloadStart = offset;
// ADVERT (payload_type 4) gets sub-fields when full record present
if (payloadType === 4 && bytes - payloadStart >= 100) {
ranges.push({ start: payloadStart, end: payloadStart + 31, label: 'PubKey' });
ranges.push({ start: payloadStart + 32, end: payloadStart + 35, label: 'Timestamp' });
ranges.push({ start: payloadStart + 36, end: payloadStart + 99, label: 'Signature' });
const appStart = payloadStart + 100;
if (appStart < bytes) {
ranges.push({ start: appStart, end: appStart, label: 'Flags' });
const appFlags = parseInt(clean.slice(appStart * 2, appStart * 2 + 2), 16);
let fOff = appStart + 1;
if (!isNaN(appFlags)) {
if ((appFlags & 0x10) && fOff + 8 <= bytes) {
ranges.push({ start: fOff, end: fOff + 3, label: 'Latitude' });
ranges.push({ start: fOff + 4, end: fOff + 7, label: 'Longitude' });
fOff += 8;
}
if ((appFlags & 0x20) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x40) && fOff + 2 <= bytes) fOff += 2;
if ((appFlags & 0x80) && fOff < bytes) {
ranges.push({ start: fOff, end: bytes - 1, label: 'Name' });
}
}
}
} else {
ranges.push({ start: payloadStart, end: bytes - 1, label: 'Payload' });
}
return ranges;
}
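The path-length byte math above can be mirrored in Go as a quick sanity check. `pathSpan` is an illustrative helper using the same encoding as `computeBreakdownRanges` (hash size stored minus one in bits 7-6, hash count in bits 5-0); the ADVERT offsets in the comment are read directly from the function:

```go
package main

import "fmt"

// pathSpan returns how many bytes the path occupies, given the path-length
// byte: hash size in bits 7-6 (stored minus one), hash count in bits 5-0.
func pathSpan(pathByte byte) int {
	hashSize := int(pathByte>>6) + 1
	hashCount := int(pathByte & 0x3F)
	return hashSize * hashCount
}

func main() {
	fmt.Println(pathSpan(0x02)) // 2 one-byte hops → 2
	fmt.Println(pathSpan(0x00)) // empty path → 0
	// ADVERT sub-field offsets relative to payload start, per the JS above:
	// PubKey [0,31], Timestamp [32,35], Signature [36,99], app data from 100.
}
```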
// --- Utilities ---
const _apiPerf = { calls: 0, totalMs: 0, log: [], cacheHits: 0 };
const _apiCache = new Map();
+44 -17
@@ -393,17 +393,25 @@
}
}
// Merge user-stored keys into the channel list
// Merge user-stored keys into the channel list.
// If a stored key matches a server-known channel, mark that channel as
// userAdded so the ✕ button appears — otherwise the user would have no way
// to remove a key they added for a channel the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
// Check if channel already exists by name
var exists = channels.some(function (ch) {
return ch.name === name || ch.hash === name || ch.hash === ('user:' + name);
});
if (!exists) {
var matched = false;
for (var j = 0; j < channels.length; j++) {
var ch = channels[j];
if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
ch.userAdded = true;
matched = true;
break;
}
}
if (!matched) {
channels.push({
hash: 'user:' + name,
name: name,
@@ -749,19 +757,38 @@
e.stopPropagation();
var channelHash = removeBtn.getAttribute('data-remove-channel');
if (!channelHash) return;
var chName = channelHash.startsWith('user:') ? channelHash.substring(5) : channelHash;
// The localStorage key is the channel name. For user:-prefixed entries
// strip the prefix; for server-known channels look up the channel
// object so we use its display name (the hash itself isn't the key).
var ch = channels.find(function (c) { return c.hash === channelHash; });
var chName = channelHash.startsWith('user:')
? channelHash.substring(5)
: (ch && ch.name) || channelHash;
if (!confirm('Remove channel "' + chName + '"? This will clear saved keys and cached messages.')) return;
ChannelDecrypt.removeKey(chName);
// Remove from channels array
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
if (channelHash.startsWith('user:')) {
// Pure user-added channel — drop from the list entirely.
channels = channels.filter(function (c) { return c.hash !== channelHash; });
if (selectedHash === channelHash) {
selectedHash = null;
messages = [];
history.replaceState(null, '', '#/channels');
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
var header2 = document.getElementById('chHeader');
if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
}
} else if (ch) {
// Server-known channel: keep the row, just unmark as user-added so
// the ✕ disappears until they re-add a key.
ch.userAdded = false;
// If this was the selected channel, clear decrypted messages since
// the key is gone — they can't be re-decrypted without re-adding it.
if (selectedHash === channelHash) {
messages = [];
var msgEl2 = document.getElementById('chMessages');
if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Key removed — add a key to decrypt messages</div>';
}
}
renderChannelList();
return;
+126 -68
@@ -72,33 +72,89 @@ window.HopResolver = (function() {
}
/**
* Pick the best candidate using affinity first, then geo-distance fallback.
* Pick the best candidate by scoring against BOTH prev and next resolved hops.
*
* Strategy (in priority order):
* 1. Neighbor-graph edge weight: sum of edge scores to prevPubkey + nextPubkey. Pick max.
* 2. Geographic centroid: if no candidate has graph edges, compute centroid of
* prev+next positions and pick closest candidate by haversine distance.
* 3. Single-anchor geo fallback: if only one neighbor is resolved, use it as anchor.
* 4. Original heuristic: first candidate (when no context at all).
*
* @param {Array} candidates - candidates with lat/lon/pubkey/name
* @param {string|null} adjacentPubkey - pubkey of the previously/next resolved hop
* @param {Object|null} anchor - {lat, lon} for geo fallback
* @param {number|null} fallbackLat - fallback anchor lat (e.g. observer)
* @param {number|null} fallbackLon - fallback anchor lon
* @param {string|null} prevPubkey - pubkey of previous resolved hop
* @param {string|null} nextPubkey - pubkey of next resolved hop
* @param {Object|null} prevPos - {lat, lon} of previous resolved hop or origin
* @param {Object|null} nextPos - {lat, lon} of next resolved hop or observer
* @returns {Object} best candidate
*/
function pickByAffinity(candidates, adjacentPubkey, anchor, fallbackLat, fallbackLon) {
// If we have affinity data and an adjacent hop, prefer neighbors
if (adjacentPubkey && Object.keys(affinityMap).length > 0) {
const withAffinity = candidates
.map(c => ({ ...c, affinity: getAffinity(adjacentPubkey, c.pubkey) }))
.filter(c => c.affinity > 0);
if (withAffinity.length > 0) {
withAffinity.sort((a, b) => b.affinity - a.affinity);
return withAffinity[0];
function pickByAffinity(candidates, prevPubkey, nextPubkey, prevPos, nextPos) {
const hasGraph = Object.keys(affinityMap).length > 0;
const hasAdj = prevPubkey || nextPubkey;
// Strategy 1: neighbor-graph edge weights (sum of prev + next)
if (hasGraph && hasAdj) {
const scored = candidates.map(function(c) {
let s = 0;
if (prevPubkey) s += getAffinity(prevPubkey, c.pubkey);
if (nextPubkey) s += getAffinity(nextPubkey, c.pubkey);
return { candidate: c, edgeScore: s };
});
const withEdges = scored.filter(function(s) { return s.edgeScore > 0; });
if (withEdges.length > 0) {
withEdges.sort(function(a, b) { return b.edgeScore - a.edgeScore; });
_traceMultiCandidate(candidates, scored, withEdges[0].candidate, 'graph');
return withEdges[0].candidate;
}
}
// Fallback: geo-distance sort (existing behavior)
const effectiveAnchor = anchor || (fallbackLat != null ? { lat: fallbackLat, lon: fallbackLon } : null);
if (effectiveAnchor) {
candidates.sort((a, b) => dist(a.lat, a.lon, effectiveAnchor.lat, effectiveAnchor.lon) - dist(b.lat, b.lon, effectiveAnchor.lat, effectiveAnchor.lon));
// Strategy 2/3: geographic — centroid of prev+next, or single anchor
let anchorLat = null, anchorLon = null, anchorCount = 0;
if (prevPos && prevPos.lat != null && prevPos.lon != null) {
anchorLat = (anchorLat || 0) + prevPos.lat;
anchorLon = (anchorLon || 0) + prevPos.lon;
anchorCount++;
}
if (nextPos && nextPos.lat != null && nextPos.lon != null) {
anchorLat = (anchorLat || 0) + nextPos.lat;
anchorLon = (anchorLon || 0) + nextPos.lon;
anchorCount++;
}
if (anchorCount > 0) {
anchorLat /= anchorCount;
anchorLon /= anchorCount;
const geoScored = candidates.map(function(c) {
const d = (c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0))
? haversineKm(c.lat, c.lon, anchorLat, anchorLon) : 999999;
return { candidate: c, distKm: d };
});
geoScored.sort(function(a, b) { return a.distKm - b.distKm; });
_traceMultiCandidate(candidates, geoScored, geoScored[0].candidate, 'centroid');
return geoScored[0].candidate;
}
// Strategy 4: no context — return first candidate
_traceMultiCandidate(candidates, null, candidates[0], 'fallback');
return candidates[0];
}
/** Dev-mode console trace for multi-candidate picks */
function _traceMultiCandidate(candidates, scored, chosen, method) {
if (typeof console === 'undefined' || !console.debug) return;
if (candidates.length < 2) return;
try {
const prefix = candidates[0].pubkey ? candidates[0].pubkey.slice(0, 2) : '??';
const scoreSummary = scored ? scored.map(function(s) {
const pk = (s.candidate || s).pubkey || '?';
const val = s.edgeScore != null ? s.edgeScore : (s.distKm != null ? s.distKm + 'km' : '?');
return pk.slice(0, 8) + ':' + val;
}) : [];
console.debug('[hop-resolver] hash=' + prefix + ' candidates=' + candidates.length +
' scored=[' + scoreSummary.join(',') + '] chose=' + (chosen.pubkey || '?').slice(0, 8) +
' method=' + method);
} catch(e) { /* trace is best-effort */ }
}
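The centroid fallback (strategy 2) reduces to averaging the resolved neighbors' coordinates and ranking candidates by great-circle distance. A Go sketch of that scoring, assuming the standard haversine formula behind the `haversineKm` helper; coordinates and candidates here are made up for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// haversineKm computes the great-circle distance in kilometres.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371.0
	rad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat, dLon := rad(lat2-lat1), rad(lon2-lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(rad(lat1))*math.Cos(rad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}

func main() {
	// Centroid of prev (52.0, 13.0) and next (52.2, 13.4) resolved hops.
	cLat, cLon := (52.0+52.2)/2, (13.0+13.4)/2
	// The candidate closer to the centroid wins.
	d1 := haversineKm(52.1, 13.2, cLat, cLon) // sits at the centroid
	d2 := haversineKm(48.1, 11.6, cLat, cLon) // hundreds of km away
	fmt.Printf("d1=%.1fkm d2=%.1fkm closer=%v\n", d1, d2, d1 < d2)
}
```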
/**
* Resolve an array of hex hop prefixes to node info.
* Returns a map: { hop: {name, pubkey, lat, lon, ambiguous, unreliable} }
@@ -169,52 +225,54 @@ window.HopResolver = (function() {
}
}
// Forward pass
let lastPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
let lastResolvedPubkey = null;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) {
lastPos = hopPositions[hop];
lastResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
// Combined disambiguation: resolve ambiguous hops using both neighbors.
// We iterate until no more hops can be resolved (handles cascading dependencies).
const originPos = (originLat != null && originLon != null) ? { lat: originLat, lon: originLon } : null;
const observerPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let changed = true;
let maxIter = hops.length + 1; // prevent infinite loops
while (changed && maxIter-- > 0) {
changed = false;
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
if (hopPositions[hop]) continue; // already resolved
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat != null && c.lon != null && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Find prev resolved neighbor
let prevPubkey = null, prevPos = null;
for (let j = i - 1; j >= 0; j--) {
if (hopPositions[hops[j]]) {
prevPos = hopPositions[hops[j]];
prevPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!prevPos && originPos) prevPos = originPos;
// Find next resolved neighbor
let nextPubkey = null, nextPos = null;
for (let j = i + 1; j < hops.length; j++) {
if (hopPositions[hops[j]]) {
nextPos = hopPositions[hops[j]];
nextPubkey = resolved[hops[j]] ? resolved[hops[j]].pubkey : null;
break;
}
}
if (!nextPos && observerPos) nextPos = observerPos;
// Skip if we have zero context (wait for a later iteration or neighbor resolution)
if (!prevPubkey && !nextPubkey && !prevPos && !nextPos) continue;
const picked = pickByAffinity(withLoc, prevPubkey, nextPubkey, prevPos, nextPos);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
changed = true;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length) continue;
// Affinity-aware: prefer candidates that are neighbors of the previous hop
const picked = pickByAffinity(withLoc, lastResolvedPubkey, lastPos, i === hops.length - 1 ? observerLat : null, i === hops.length - 1 ? observerLon : null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
lastPos = hopPositions[hop];
lastResolvedPubkey = picked.pubkey;
}
// Backward pass
let nextPos = (observerLat != null && observerLon != null) ? { lat: observerLat, lon: observerLon } : null;
let nextResolvedPubkey = null;
for (let i = hops.length - 1; i >= 0; i--) {
const hop = hops[i];
if (hopPositions[hop]) {
nextPos = hopPositions[hop];
nextResolvedPubkey = resolved[hop] ? resolved[hop].pubkey : null;
continue;
}
const r = resolved[hop];
if (!r || !r.ambiguous) continue;
const withLoc = r.candidates.filter(c => c.lat && c.lon && !(c.lat === 0 && c.lon === 0));
if (!withLoc.length || !nextPos) continue;
// Affinity-aware: prefer candidates that are neighbors of the next hop
const picked = pickByAffinity(withLoc, nextResolvedPubkey, nextPos, null, null);
r.name = picked.name;
r.pubkey = picked.pubkey;
hopPositions[hop] = { lat: picked.lat, lon: picked.lon };
nextPos = hopPositions[hop];
nextResolvedPubkey = picked.pubkey;
}
// Sanity check: drop hops impossibly far from neighbors
@@ -276,13 +334,13 @@ window.HopResolver = (function() {
*/
function resolveFromServer(hops, resolvedPath) {
if (!hops || !resolvedPath || hops.length !== resolvedPath.length) return {};
var result = {};
for (var i = 0; i < hops.length; i++) {
var hop = hops[i];
var pubkey = resolvedPath[i];
const result = {};
for (let i = 0; i < hops.length; i++) {
const hop = hops[i];
const pubkey = resolvedPath[i];
if (!pubkey) continue; // null = unresolved, leave for client-side fallback
// O(1) lookup via pubkeyIdx built during init()
var node = pubkeyIdx[pubkey.toLowerCase()] || null;
const node = pubkeyIdx[pubkey.toLowerCase()] || null;
result[hop] = {
name: node ? node.name : pubkey.slice(0, 8),
pubkey: pubkey,
+38 -12
@@ -808,7 +808,7 @@
let _themeRefreshHandler = null;
let _allNodes = null; // cached full node list
let _fleetSkew = null; // cached clock skew map: pubkey → {severity, recentMedianSkewSec, medianSkewSec, ...}
let _fleetSkew = null; // cached clock skew map: pubkey → {severity, medianSkewSec, ...}
/**
* Fetch per-node clock skew and render into the given container element.
@@ -824,14 +824,28 @@
var driftHtml = cs.driftPerDaySec ? '<div style="font-size:12px;color:var(--text-muted);margin-top:2px">Drift: ' + formatDrift(cs.driftPerDaySec) + '</div>' : '';
var sparkHtml = renderSkewSparkline(cs.samples, 200, 32);
var skewVal = window.currentSkewValue(cs);
var skewDisplay = cs.severity === 'no_clock'
? '<span style="font-size:18px;font-weight:700;color:var(--text-muted)">No Clock</span>'
var skewDisplay = cs.severity === 'default'
? '<span style="font-size:18px;font-weight:700;color:var(--text-muted)">Default</span>'
: '<span style="font-size:18px;font-weight:700;font-family:var(--mono)">' + formatSkew(skewVal) + '</span>';
var bimodalWarning = '';
if (cs.severity === 'bimodal_clock') {
var totalRecent = cs.recentSampleCount || 0;
bimodalWarning = '<div style="font-size:12px;color:var(--status-amber-text);margin-top:4px">⚠️ ' + (cs.recentBadSampleCount || '?') + ' of last ' + (totalRecent || '?') + ' adverts had nonsense timestamps (likely RTC reset)</div>';
// Per-tier explainer line (plain English reason).
var explainer = '';
var absSkew = Math.abs(cs.lastSkewSec || 0);
var skewStr = Math.round(absSkew) + 's';
if (cs.severity === 'default') {
var isoAdv = cs.lastAdvertTS ? new Date(cs.lastAdvertTS * 1000).toISOString() : '?';
explainer = 'Last advert at ' + isoAdv + ' — matches firmware default (volatile RTC, not user-set since boot)';
} else if (cs.severity === 'ok') {
explainer = 'Last advert ' + skewStr + ' vs wall clock — within OK tolerance (≤15s)';
} else if (cs.severity === 'degrading') {
explainer = 'Last advert ' + skewStr + ' vs wall clock — drift accumulating (≤60s)';
} else if (cs.severity === 'degraded') {
explainer = 'Last advert ' + skewStr + ' vs wall clock — significantly off (≤10m)';
} else if (cs.severity === 'wrong') {
explainer = 'Last advert ' + skewStr + ' vs wall clock — clock incorrect (operator-set or RTC failure)';
}
var explainerHtml = explainer ? '<div style="font-size:12px;color:var(--text-muted);margin-top:4px">' + explainer + '</div>' : '';
container.innerHTML =
'<h4 style="margin:0 0 6px">⏰ Clock Skew</h4>' +
'<div style="display:flex;align-items:center;gap:12px;flex-wrap:wrap">' +
@@ -839,9 +853,9 @@
renderSkewBadge(cs.severity, skewVal, cs) +
(cs.calibrated ? ' <span style="font-size:10px;color:var(--text-muted)" title="Observer-calibrated">✓ calibrated</span>' : '') +
'</div>' +
explainerHtml +
driftHtml +
(sparkHtml ? '<div class="skew-sparkline-wrap" style="margin-top:8px">' + sparkHtml + '<div style="font-size:10px;color:var(--text-muted)">Skew over time (' + (cs.samples || []).length + ' samples)</div></div>' : '') +
bimodalWarning;
(sparkHtml ? '<div class="skew-sparkline-wrap" style="margin-top:8px">' + sparkHtml + '<div style="font-size:10px;color:var(--text-muted)">Skew over time (' + (cs.samples || []).length + ' samples)</div></div>' : '');
} catch (e) {
// Non-fatal — section stays hidden
}
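The tier thresholds spelled out in the explainer strings (≤15s ok, ≤60s degrading, ≤10m degraded, beyond that wrong, with `default` reserved for firmware-default timestamps) can be sketched as a classifier. The thresholds here are read off the UI copy, so this is a sketch of the tiering, not the server's actual `classifySkew`:

```go
package main

import (
	"fmt"
	"math"
)

// classify maps a skew in seconds to the UI severity tiers. "default"
// (firmware-default epoch) is decided upstream from the advert timestamp,
// so it is not derivable from skew alone and is omitted here.
func classify(skewSec float64) string {
	abs := math.Abs(skewSec) // defensive absolute value, as in the classifySkew fix
	switch {
	case abs <= 15:
		return "ok"
	case abs <= 60:
		return "degrading"
	case abs <= 600:
		return "degraded"
	default:
		return "wrong"
	}
}

func main() {
	for _, s := range []float64{3, -42, 180, 7200} {
		fmt.Printf("%+.0fs → %s\n", s, classify(s))
	}
}
```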
@@ -1144,6 +1158,19 @@
makeColumnsResizable('#nodesTable', 'meshcore-nodes-col-widths');
}
/**
 * Navigate to the full-screen node view for `pubkey` from anywhere within
 * the nodes module. Acts as the single source of navigation truth and works
 * regardless of the current hash state (assigning location.hash alone is a
 * no-op when the hash is already the target).
*/
function navigateToNode(pubkey) {
destroy();
var appEl = document.getElementById('app');
history.replaceState(null, '', '#/nodes/' + encodeURIComponent(pubkey));
init(appEl, pubkey);
}
async function selectNode(pubkey) {
// On mobile, navigate to full-screen node view
if (window.innerWidth <= 640) {
@@ -1307,12 +1334,11 @@
} catch {}
}
// #856: Wire "Details" button to navigate to full-screen node view
// Wire "Details" button via the unified navigateToNode helper
var detailBtn = panel.querySelector('.node-detail-btn');
if (detailBtn) {
detailBtn.addEventListener('click', function() {
var pk = detailBtn.getAttribute('data-pubkey');
location.hash = '#/nodes/' + pk;
navigateToNode(decodeURIComponent(detailBtn.getAttribute('data-pubkey')));
});
}
+42 -20
@@ -387,9 +387,9 @@
const obs = data.observations.find(o => String(o.id) === String(obsTarget));
if (obs) {
expandedHashes.add(h);
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, timestamp: obs.timestamp, first_seen: obs.timestamp};
const obsPacket = {...data.packet, observer_id: obs.observer_id, observer_name: obs.observer_name, snr: obs.snr, rssi: obs.rssi, path_json: obs.path_json, resolved_path: obs.resolved_path, direction: obs.direction, timestamp: obs.timestamp, first_seen: obs.timestamp};
clearParsedCache(obsPacket);
selectPacket(obs.id, h, {packet: obsPacket, breakdown: data.breakdown, observations: data.observations}, obs.id);
selectPacket(obs.id, h, {packet: obsPacket, observations: data.observations}, obs.id);
} else {
selectPacket(data.packet.id, h, data);
}
@@ -519,7 +519,7 @@
if (p.decoded_json) existing.decoded_json = p.decoded_json;
// Update expanded children if this group is expanded
if (expandedHashes.has(h) && existing._children) {
existing._children.unshift(p);
existing._children.unshift(clearParsedCache({...p, _isObservation: true}));
if (existing._children.length > 200) existing._children.length = 200;
sortGroupChildren(existing);
// Invalidate row counts — child count changed, so virtual scroll
@@ -683,10 +683,14 @@
// Restore expanded group children (parallel fetch, Map lookup)
if (groupByHash && expandedHashes.size > 0) {
const expandedArr = [...expandedHashes];
// Fetch the full packet detail (which includes per-observation rows) for each expanded hash.
// Previously this used `/packets?hash=X&limit=20` which returned ONE aggregate row, causing
// every "child" row in the table to carry the parent packet.id instead of unique observation
// ids — so clicking any child pointed the side pane at the same aggregate. See #866.
const results = await Promise.all(expandedArr.map(hash => {
const group = hashIndex.get(hash);
if (!group) return { hash, group: null, data: null };
return api(`/packets?hash=${hash}&limit=20`)
return api(`/packets/${hash}`)
.then(data => ({ hash, group, data }))
.catch(() => ({ hash, group, data: null }));
}));
@@ -694,7 +698,15 @@
if (!group) {
expandedHashes.delete(hash);
} else if (data) {
group._children = data.packets || [];
const pkt = data.packet || group;
// Build per-observation children. Spread (pkt, obs) so obs-level fields
// (id, observer_id/name, path_json, snr/rssi, timestamp, raw_hex) override
// the aggregate. Each child's `id` is the observation id (unique per observer).
const obs = data.observations || [];
group._children = obs.length
? obs.map(o => clearParsedCache({...pkt, ...o, _isObservation: true}))
: [pkt];
group._fetchedData = { packet: pkt, observations: obs };
sortGroupChildren(group);
}
}
@@ -1246,9 +1258,9 @@
const child = group?._children?.find(c => String(c.id) === String(value));
if (child) {
const parentData = group._fetchedData;
const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, timestamp: child.timestamp, first_seen: child.timestamp} : child;
const obsPacket = parentData ? {...parentData.packet, observer_id: child.observer_id, observer_name: child.observer_name, snr: child.snr, rssi: child.rssi, path_json: child.path_json, resolved_path: child.resolved_path, direction: child.direction, timestamp: child.timestamp, first_seen: child.timestamp} : child;
if (parentData) { clearParsedCache(obsPacket); }
selectPacket(child.id, parentHash, {packet: obsPacket, breakdown: parentData?.breakdown, observations: parentData?.observations}, child.id);
selectPacket(child.id, parentHash, {packet: obsPacket, observations: parentData?.observations}, child.id);
}
}
else if (action === 'select-hash') pktSelectHash(value);
@@ -1797,7 +1809,7 @@
panel.innerHTML = isMobileNow ? '' : '<div class="panel-resize-handle" id="pktResizeHandle"></div>' + PANEL_CLOSE_HTML;
const content = document.createElement('div');
panel.appendChild(content);
await renderDetail(content, data);
await renderDetail(content, data, selectedObservationId);
if (!isMobileNow) initPanelResize();
} catch (e) {
panel.innerHTML = `<div class="text-muted">Error: ${e.message}</div>`;
@@ -1806,8 +1818,6 @@
async function renderDetail(panel, data, chosenObsId) {
const pkt = data.packet;
const breakdown = data.breakdown || {};
const ranges = breakdown.ranges || [];
const observations = data.observations || [];
// Per-observation rendering (issue #849):
@@ -1828,6 +1838,15 @@
const decoded = getParsedDecoded(effectivePkt) || {};
const pathHops = getParsedPath(effectivePkt) || [];
// Compute breakdown ranges from the actually-rendered raw_hex (per-observation).
// Single source of truth — derived from the same bytes we display, so a
// post-#882 per-obs raw_hex with a different path length than the top-level
// packet's raw_hex still gets accurate byte highlights.
const obsRawHexForRanges = effectivePkt.raw_hex || pkt.raw_hex || '';
const ranges = obsRawHexForRanges
? computeBreakdownRanges(obsRawHexForRanges, pkt.route_type, pkt.payload_type)
: [];
// Cross-check: hop count from raw_hex path_len byte vs path_json length
const obsRawHex = effectivePkt.raw_hex || pkt.raw_hex || '';
let rawHopCount = null;
@@ -1838,7 +1857,7 @@
if (!isNaN(plByte)) rawHopCount = plByte & 0x3F;
}
if (rawHopCount != null && pathHops.length !== rawHopCount) {
console.warn(`[CoreScope] Hop count inconsistency for packet ${pkt.hash}: path_json has ${pathHops.length} hops but raw_hex path_len has ${rawHopCount}. Trusting raw_hex.`);
console.warn(`[CoreScope] Hop count inconsistency for packet ${pkt.hash}: path_json has ${pathHops.length} hops but raw_hex path_len has ${rawHopCount}. UI shows path_json.`);
}
// Resolve sender GPS — from packet directly, or from known node in DB
@@ -1975,8 +1994,10 @@
? `<div class="anomaly-banner" style="background:var(--warning, #f0ad4e); color:#000; padding:8px 12px; border-radius:4px; margin-bottom:8px; font-weight:600;">⚠️ Anomaly: ${escapeHtml(decoded.anomaly)}</div>`
: '';
// Hop count display: trust raw_hex (firmware truth) over path_json
const displayHopCount = rawHopCount != null ? rawHopCount : pathHops.length;
// Hop count display: use pathHops length (= effective observation's path_json).
// The raw_hex/path_json mismatch warning is logged above for diagnostics; the UI
// must stay self-consistent — top pill names and byte breakdown rows must agree.
const displayHopCount = pathHops.length;
const obsIndicator = currentObs && observations.length > 1
? `<span style="font-size:0.8em;color:var(--text-muted);margin-left:6px">(observation ${observations.indexOf(currentObs) + 1} of ${observations.length})</span>`
: '';
@@ -2181,18 +2202,19 @@
rows += fieldRow(off, 'Path Length', '0x' + (buf.slice(off * 2, off * 2 + 2) || '??'), hashCountVal === 0 ? `hash_count=0 (direct advert)` : `hash_size=${hashSizeVal} byte${hashSizeVal !== 1 ? 's' : ''}, hash_count=${hashCountVal}`);
off += 1;
// Path — derive hop count from path_len byte (firmware truth), not aggregated _parsedPath
// Path — render hops from path_json (what this observation reported).
// Byte offsets advance by hashSize * pathHops.length to match.
const hashSize = isNaN(pathByte0) ? 1 : ((pathByte0 >> 6) + 1);
if (typeof hashCountVal === 'number' && hashCountVal > 0) {
rows += sectionRow('Path (' + hashCountVal + ' hops)', 'section-path');
for (let i = 0; i < hashCountVal; i++) {
if (pathHops.length > 0) {
rows += sectionRow('Path (' + pathHops.length + ' hops)', 'section-path');
for (let i = 0; i < pathHops.length; i++) {
const hopOff = off + i * hashSize;
const hex = buf.slice(hopOff * 2, (hopOff + hashSize) * 2).toUpperCase();
const hex = String(pathHops[i] || '').toUpperCase();
const hopHtml = HopDisplay.renderHop(hex, hopNameCache[hex]);
const label = `Hop ${i}${hopHtml}`;
rows += fieldRow(hopOff, label, hex, '');
}
off += hashSize * hashCountVal;
off += hashSize * pathHops.length;
}
// Payload
@@ -2466,7 +2488,7 @@
renderTableRows();
return;
}
// Single fetch — gets packet + observations + path + breakdown
// Single fetch — gets packet + observations + path
try {
const data = await api(`/packets/${hash}`);
const pkt = data.packet;
+11 -19
View File
@@ -397,17 +397,16 @@
// #690 — Clock Skew shared helpers
var SKEW_SEVERITY_COLORS = {
default: 'var(--text-muted)',
ok: 'var(--status-green)',
warning: 'var(--status-yellow)',
critical: 'var(--status-orange)',
absurd: 'var(--status-purple)',
bimodal_clock: 'var(--status-amber)',
no_clock: 'var(--text-muted)'
degrading: 'var(--status-yellow)',
degraded: 'var(--status-orange)',
wrong: 'var(--status-red)'
};
var SKEW_SEVERITY_LABELS = {
ok: 'OK', warning: 'Warning', critical: 'Critical', absurd: 'Absurd', bimodal_clock: 'Bimodal', no_clock: 'No Clock'
default: 'Default', ok: 'OK', degrading: 'Degrading', degraded: 'Degraded', wrong: 'Wrong'
};
var SKEW_SEVERITY_ORDER = { no_clock: 0, bimodal_clock: 1, absurd: 2, critical: 3, warning: 4, ok: 5 };
var SKEW_SEVERITY_ORDER = { default: 0, wrong: 1, degraded: 2, degrading: 3, ok: 4 };
window.SKEW_SEVERITY_COLORS = SKEW_SEVERITY_COLORS;
window.SKEW_SEVERITY_LABELS = SKEW_SEVERITY_LABELS;
@@ -430,26 +429,19 @@
return (secPerDay >= 0 ? '+' : '') + secPerDay.toFixed(1) + ' s/day';
};
/** Pick the skew value that drives current-health UI: prefer the
* recent-window median (#789, current health) over the all-time median
* (poisoned by historical bad samples). Falls back gracefully if the
* field isn't present (older API responses). */
/** Pick the skew value that drives current-health UI. Uses lastSkewSec
* (most recent corrected skew) when available, falls back to medianSkewSec. */
window.currentSkewValue = function(cs) {
if (!cs) return null;
return cs.recentMedianSkewSec != null ? cs.recentMedianSkewSec : cs.medianSkewSec;
return cs.lastSkewSec != null ? cs.lastSkewSec : cs.medianSkewSec;
};
/** Render a clock skew badge HTML */
window.renderSkewBadge = function(severity, skewSec, cs) {
if (!severity) return '';
var cls = 'skew-badge skew-badge--' + severity;
if (severity === 'no_clock') {
return '<span class="' + cls + '" title="Uninitialized RTC — no valid clock">🚫 No Clock</span>';
}
if (severity === 'bimodal_clock' && cs) {
var badPct = cs.goodFraction != null ? Math.round((1 - cs.goodFraction) * 100) : '?';
var label = '⏰ ' + window.formatSkew(skewSec);
return '<span class="' + cls + '" title="Clock skew: ' + window.formatSkew(skewSec) + ' (bimodal: ' + badPct + '% of recent adverts have nonsense timestamps)">' + label + '</span>';
if (severity === 'default') {
return '<span class="' + cls + '" title="Firmware default clock — volatile RTC not yet user-set since boot">⏰ Default</span>';
}
var label = severity === 'ok' ? '⏰' : '⏰ ' + window.formatSkew(skewSec);
return '<span class="' + cls + '" title="Clock skew: ' + window.formatSkew(skewSec) + ' (' + (SKEW_SEVERITY_LABELS[severity] || severity) + ')">' + label + '</span>';
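The renamed SKEW_SEVERITY_ORDER map above ranks severities worst-first for fleet sorting. A minimal standalone sketch of that sort (the map values match the diff; the sample rows and the `sortBySeverity` helper name are hypothetical, not from analytics.js):

```javascript
// Worst-first severity ranking, matching the renamed map in analytics.js.
const SKEW_SEVERITY_ORDER = { default: 0, wrong: 1, degraded: 2, degrading: 3, ok: 4 };

// Sort node rows so the most broken clocks surface at the top of the fleet table.
// Unknown severities sink to the bottom instead of breaking the comparator.
function sortBySeverity(rows) {
  return [...rows].sort((a, b) =>
    (SKEW_SEVERITY_ORDER[a.severity] ?? 99) - (SKEW_SEVERITY_ORDER[b.severity] ?? 99));
}

const rows = [
  { name: 'n1', severity: 'ok' },
  { name: 'n2', severity: 'wrong' },
  { name: 'n3', severity: 'degrading' },
];
console.log(sortBySeverity(rows).map(r => r.severity).join(','));
// → wrong,degrading,ok
```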
+8 -9
View File
@@ -2291,22 +2291,21 @@ th.sort-active { color: var(--accent, #60a5fa); }
/* #690 — Clock Skew badges & fleet table */
.skew-badge { display: inline-block; font-size: 10px; padding: 1px 5px; border-radius: 3px; margin-left: 4px; font-weight: 600; white-space: nowrap; }
.skew-badge--default { background: var(--text-muted); color: #fff; }
.skew-badge--ok { background: var(--status-green); color: #fff; }
.skew-badge--warning { background: var(--status-yellow); color: #000; }
.skew-badge--critical { background: var(--status-orange); color: #fff; }
.skew-badge--absurd { background: var(--status-purple); color: #fff; }
.skew-badge--no_clock { background: var(--text-muted); color: #fff; }
.skew-badge--bimodal_clock { background: var(--status-amber-light); color: var(--status-amber-text); border: 1px solid var(--status-amber); }
.skew-badge--degrading { background: var(--status-yellow); color: #000; }
.skew-badge--degraded { background: var(--status-orange); color: #fff; }
.skew-badge--wrong { background: var(--status-red); color: #fff; }
.skew-detail-section { padding: 10px 16px; margin-bottom: 8px; }
.skew-sparkline-wrap { margin-top: 6px; }
.skew-sparkline-wrap svg { display: block; }
.clock-fleet-row--warning { background: color-mix(in srgb, var(--status-yellow) 10%, transparent); }
.clock-fleet-row--critical { background: color-mix(in srgb, var(--status-orange) 10%, transparent); }
.clock-fleet-row--absurd { background: color-mix(in srgb, var(--status-purple) 10%, transparent); }
.clock-fleet-row--no_clock { background: color-mix(in srgb, var(--text-muted) 10%, transparent); }
.clock-fleet-row--degrading { background: color-mix(in srgb, var(--status-yellow) 10%, transparent); }
.clock-fleet-row--degraded { background: color-mix(in srgb, var(--status-orange) 10%, transparent); }
.clock-fleet-row--wrong { background: color-mix(in srgb, var(--status-red) 10%, transparent); }
.clock-fleet-row--default { background: color-mix(in srgb, var(--text-muted) 10%, transparent); }
.clock-filter-btn { font-size: 12px; padding: 3px 8px; border: 1px solid var(--border); border-radius: 4px; background: var(--card-bg, #fff); color: var(--text); cursor: pointer; margin-right: 4px; }
.clock-filter-btn.active { background: var(--accent); color: #fff; border-color: var(--accent); }
+338 -2
View File
@@ -15,6 +15,11 @@ async function test(name, fn) {
results.push({ name, pass: true });
console.log(` \u2705 ${name}`);
} catch (err) {
if (err.skip) {
results.push({ name, pass: true, skipped: true });
console.log(`  ⏭ ${name}: ${err.message}`);
return;
}
results.push({ name, pass: false, error: err.message });
console.log(` \u274c ${name}: ${err.message}`);
console.log(`\nFail-fast: stopping after first failure.`);
@@ -1778,12 +1783,343 @@ async function run() {
}
});
// Test: Expanded group children have unique observation ids (#866)
await test('Expanded group children update detail pane per-observation', async () => {
await page.goto(`${BASE}/#/packets`, { waitUntil: 'domcontentloaded' });
// Ensure grouped mode and wide time window
await page.evaluate(() => {
localStorage.setItem('meshcore-time-window', '525600');
localStorage.setItem('meshcore-groupbyhash', 'true');
});
await page.reload({ waitUntil: 'load' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
// Find a group row with observation_count > 1 (has expand button)
const expandBtn = await page.$('table tbody tr .expand-btn, table tbody tr [data-expand]');
if (!expandBtn) {
console.log('  ⏭ No expandable groups found — skipping child assertion');
return;
}
// Click expand and wait for the /packets/<hash> detail API call
const [detailResp] = await Promise.all([
page.waitForResponse(resp => {
const u = new URL(resp.url(), BASE);
// Match /api/packets/<hash> but not /api/packets?... or /api/packets/observations
return /\/api\/packets\/[A-Fa-f0-9]+$/.test(u.pathname) && resp.status() === 200;
}, { timeout: 15000 }),
expandBtn.click(),
]);
assert(detailResp, 'Expected /api/packets/<hash> response on expand');
// Wait for child rows to appear
await page.waitForSelector('table tbody tr.child-row, table tbody tr[class*="child"]', { timeout: 5000 });
const childRows = await page.$$('table tbody tr.child-row, table tbody tr[class*="child"]');
if (childRows.length < 2) {
console.log('  ⏭ Group has < 2 children — skipping per-observation assertion');
return;
}
// Click first child row
await childRows[0].click();
await page.waitForFunction(() => {
const panel = document.getElementById('pktRight');
return panel && !panel.classList.contains('empty') && panel.textContent.trim().length > 0;
}, { timeout: 10000 });
const content1 = await page.$eval('#pktRight', el => el.textContent.trim());
const url1 = page.url();
// Click second child row
await childRows[1].click();
await page.waitForTimeout(500);
const content2 = await page.$eval('#pktRight', el => el.textContent.trim());
const url2 = page.url();
// URL should contain ?obs= with a real observation id
assert(url1.includes('obs=') || url2.includes('obs='), `URL should contain obs= parameter, got: ${url1}`);
// The two children should show different detail pane content (different observers)
// At minimum, the URL obs= values should differ
if (url1.includes('obs=') && url2.includes('obs=')) {
const obs1 = new URL(url1).hash.match(/obs=(\d+)/)?.[1];
const obs2 = new URL(url2).hash.match(/obs=(\d+)/)?.[1];
if (obs1 && obs2) {
assert(obs1 !== obs2, `Two children should have different obs ids, both got obs=${obs1}`);
}
}
// Verify obs id is NOT the aggregate packet id (the bug from #866)
const obsMatch = url2.match(/obs=(\d+)/);
if (obsMatch) {
const detailJson = await detailResp.json().catch(() => null);
if (detailJson?.packet?.id) {
const aggId = String(detailJson.packet.id);
// At least one child obs id should differ from the aggregate packet id
const obs1 = url1.match(/obs=(\d+)/)?.[1];
const obs2 = url2.match(/obs=(\d+)/)?.[1];
const allSameAsAgg = obs1 === aggId && obs2 === aggId;
assert(!allSameAsAgg, `Child obs ids should not all equal aggregate packet.id (${aggId})`);
}
}
});
// Test: per-observation raw_hex — hex pane updates when switching observations (#881)
await test('Packet detail hex pane updates per observation', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Try clicking packet rows to find one with multiple observations
const rows = await page.$$('table tbody tr[data-action]');
let obsRows = [];
for (let i = 0; i < Math.min(rows.length, 10); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(600);
obsRows = await page.$$('.detail-obs-row');
if (obsRows.length >= 2) break;
}
if (obsRows.length < 2) {
console.log(' ⏭ Skipped: no packet with ≥2 observations found in first 10 rows');
return;
}
// Click first observation, capture hex dump
await obsRows[0].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex1 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// Click second observation, capture hex dump
await obsRows[1].click({ timeout: 5000 });
await page.waitForTimeout(500);
const hex2 = await page.$eval('.hex-dump', el => el.textContent).catch(() => '');
// If both have content and differ, the feature works
if (hex1 && hex2 && hex1 !== hex2) {
console.log(' ✓ Hex pane content differs between observations');
} else if (hex1 && hex2 && hex1 === hex2) {
console.log(' ⏭ Hex same for both observations (likely historical NULL raw_hex — OK)');
} else {
console.log(' ⏭ Could not capture hex content from both observations');
}
});
// Test: path pill (top) and byte breakdown (bottom) agree on hop count
// Regression for visual mismatch where badge said "1 hop" but path text listed N names
await test('Packet detail path pill and byte breakdown agree on hop count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
// Click rows until we find one whose detail pane renders a multi-hop path
const rows = await page.$$('table tbody tr[data-action]');
let found = false;
for (let i = 0; i < Math.min(rows.length, 15); i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(500);
const result = await page.evaluate(() => {
// Path pill: <dt>Path</dt><dd><span class="badge ...">N hops</span> ...names...</dd>
const dts = document.querySelectorAll('dl.detail-meta dt');
let pillBadgeCount = null;
let pillNameCount = null;
for (const dt of dts) {
if (dt.textContent.trim() === 'Path') {
const dd = dt.nextElementSibling;
if (!dd) break;
const badge = dd.querySelector('.badge');
if (badge) {
const m = badge.textContent.match(/(\d+)\s*hop/);
if (m) pillBadgeCount = parseInt(m[1], 10);
}
// Count rendered hop links/spans (HopDisplay.renderHop output)
const hops = dd.querySelectorAll('.hop-link, [data-hop-link], .hop-named, .hop-anonymous');
pillNameCount = hops.length;
break;
}
}
// Byte breakdown: section row "Path (N hops)" + N "Hop X — ..." rows
let breakdownSectionCount = null;
let breakdownRowCount = 0;
const fieldTable = document.querySelector('table.field-table');
if (fieldTable) {
for (const tr of fieldTable.querySelectorAll('tr')) {
const txt = tr.textContent.trim();
const sec = txt.match(/^Path\s*\((\d+)\s*hops?\)/);
if (sec) breakdownSectionCount = parseInt(sec[1], 10);
if (/^\s*\d+\s*Hop\s+\d+\s*—/.test(txt) || /^Hop\s+\d+\s*—/.test(txt.replace(/^\d+/, '').trim())) {
breakdownRowCount++;
}
}
}
return { pillBadgeCount, pillNameCount, breakdownSectionCount, breakdownRowCount };
});
if (result.pillBadgeCount && result.pillBadgeCount > 0 && result.breakdownSectionCount != null) {
found = true;
// Top badge count must equal bottom section count
assert(result.pillBadgeCount === result.breakdownSectionCount,
`Path pill badge says ${result.pillBadgeCount} hops but byte breakdown says ${result.breakdownSectionCount} hops`);
// Number of rendered hop names in pill should also match (within 1, since renderPath may add separators)
if (result.pillNameCount != null && result.pillNameCount > 0) {
assert(Math.abs(result.pillNameCount - result.pillBadgeCount) <= 1,
`Path pill badge ${result.pillBadgeCount} but rendered ${result.pillNameCount} hop names`);
}
// And breakdown rendered rows should match its own section count
assert(result.breakdownRowCount > 0,
'breakdown rows selector matched nothing — selector or DOM changed');
assert(result.breakdownRowCount === result.breakdownSectionCount,
`Byte breakdown section says ${result.breakdownSectionCount} hops but rendered ${result.breakdownRowCount} hop rows`);
console.log(` ✓ Path pill (${result.pillBadgeCount}) and byte breakdown (${result.breakdownSectionCount}) agree`);
break;
}
}
if (!found) {
if (process.env.E2E_REQUIRE_PATH_TEST === '1') {
throw new Error('BLOCKED — no multi-hop packet found in first 15 rows (E2E_REQUIRE_PATH_TEST=1 requires it)');
}
const skipErr = new Error('SKIP: No multi-hop packet with byte breakdown found in first 15 rows — needs fixture');
skipErr.skip = true;
throw skipErr;
}
});
// Test: hex-strip color spans match the labeled byte rows (per-obs raw_hex).
// Regression #891: server-supplied breakdown was computed once from top-level
// raw_hex, so per-observation rendering had off-by-N highlights vs the labels.
await test('Packet detail hex strip Path range matches hop row count', async () => {
await page.goto(BASE + '#/packets', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
const rows = await page.$$('table tbody tr[data-action]');
let checked = 0;
for (let i = 0; i < Math.min(rows.length, 25) && checked < 3; i++) {
await rows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const result = await page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
return { pathBytes, hopRows };
});
if (!result || (result.pathBytes === 0 && result.hopRows.length === 0)) continue;
checked++;
// Invariant: the byte count inside the hex-path span must be at least the
// hop-row count, and a whole multiple of it (bytes = hops * hash_size;
// for hash_size=1 the two counts are equal).
assert(result.hopRows.length > 0,
`row ${i}: hex-path span has ${result.pathBytes} bytes but no hop rows in the labeled table`);
assert(result.pathBytes >= result.hopRows.length,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — strip and labels disagree`);
assert(result.pathBytes % result.hopRows.length === 0,
`row ${i}: hex-path has ${result.pathBytes} bytes but ${result.hopRows.length} hop rows — bytes/hops not divisible (hash_size violated)`);
console.log(` ✓ row ${i}: hex-path ${result.pathBytes} bytes / ${result.hopRows.length} hop rows (hash_size=${result.pathBytes / result.hopRows.length})`);
}
if (checked === 0) {
const skipErr = new Error('SKIP: no packet with rendered hex strip + hop rows found in first 25 rows');
skipErr.skip = true;
throw skipErr;
}
});
// Test: clicking a different observation row re-renders strip + breakdown consistently.
// Regression: observations of the same packet hash have different raw_hex (#882),
// so picking a different obs must recompute the byte ranges, not reuse the old ones.
await test('Packet detail switches consistently across observations', async () => {
await page.goto(BASE + '#/packets?groupByHash=1', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
await page.waitForTimeout(500);
let opened = false;
const groupRows = await page.$$('table tbody tr[data-action]');
for (let i = 0; i < Math.min(groupRows.length, 10); i++) {
await groupRows[i].click({ timeout: 3000 }).catch(() => null);
await page.waitForTimeout(400);
const obsCount = await page.evaluate(() => {
return document.querySelectorAll('table.observations-table tbody tr, .obs-row').length;
});
if (obsCount >= 2) { opened = true; break; }
}
if (!opened) {
const skipErr = new Error('SKIP: no multi-observation packet found in first 10 group rows');
skipErr.skip = true;
throw skipErr;
}
async function snapshot() {
return page.evaluate(() => {
const dump = document.querySelector('.hex-dump');
const fieldTable = document.querySelector('table.field-table');
if (!dump || !fieldTable) return null;
const pathSpan = dump.querySelector('span.hex-byte.hex-path');
const pathBytes = pathSpan ? pathSpan.textContent.trim().split(/\s+/).filter(Boolean).length : 0;
const hopRows = [];
for (const tr of fieldTable.querySelectorAll('tr')) {
const cells = [...tr.cells].map(c => c.textContent.trim());
if (cells.length >= 2 && /^Hop\s+\d+/.test(cells[1])) hopRows.push(cells[2]);
}
const rawHexParts = [...dump.querySelectorAll('span.hex-byte')].map(s => s.textContent.trim());
return { pathBytes, hopCount: hopRows.length, rawHexJoined: rawHexParts.join('|') };
});
}
const snapA = await snapshot();
assert(snapA, 'first snapshot must have hex dump + field table');
assert(snapA.hopCount === 0 || snapA.pathBytes >= snapA.hopCount,
`obs A inconsistent: hex-path ${snapA.pathBytes} bytes vs ${snapA.hopCount} hop rows`);
const switched = await page.evaluate(() => {
const obsRows = [...document.querySelectorAll('table.observations-table tbody tr, .obs-row')];
if (obsRows.length < 2) return false;
obsRows[1].click();
return true;
});
assert(switched, 'should click second observation row');
await page.waitForTimeout(500);
const snapB = await snapshot();
assert(snapB, 'second snapshot must have hex dump + field table');
assert(snapB.hopCount === 0 || snapB.pathBytes >= snapB.hopCount,
`obs B inconsistent: hex-path ${snapB.pathBytes} bytes vs ${snapB.hopCount} hop rows`);
console.log(` ✓ obs A: ${snapA.pathBytes} path bytes / ${snapA.hopCount} hops; obs B: ${snapB.pathBytes} / ${snapB.hopCount}`);
});
// Test: clicking the 🔍 Details button in the nodes side panel navigates to
// the full-screen node detail view. Regression: hash already === target,
// so location.hash assignment was a no-op and the panel stayed open.
await test('Nodes side panel Details button opens full-screen view', async () => {
await page.goto(BASE + '#/nodes', { waitUntil: 'domcontentloaded' });
await page.waitForSelector('table tbody tr[data-action]', { timeout: 15000 });
await page.waitForTimeout(500);
// Open side panel
await page.click('table tbody tr[data-action]');
await page.waitForSelector('#nodesRight .node-detail-btn', { timeout: 5000 });
// Click Details
await page.click('#nodesRight .node-detail-btn');
// Wait for full-screen view to appear
await page.waitForSelector('.node-fullscreen', { timeout: 5000 });
const isFullScreen = await page.evaluate(() => !!document.querySelector('.node-fullscreen'));
assert(isFullScreen, 'Details button should open full-screen node view');
});
await browser.close();
// Summary
const passed = results.filter(r => r.pass).length;
const skipped = results.filter(r => r.skipped).length;
const passed = results.filter(r => r.pass && !r.skipped).length;
const failed = results.filter(r => !r.pass).length;
console.log(`\n${passed}/${results.length} tests passed${failed ? `, ${failed} failed` : ''}`);
console.log(`\n${passed}/${results.length} tests passed${skipped ? `, ${skipped} skipped` : ''}${failed ? `, ${failed} failed` : ''}`);
process.exit(failed > 0 ? 1 : 0);
}
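The summary change above counts skipped tests separately: a skip is recorded with `pass: true, skipped: true` so it never trips the exit code but no longer inflates the pass count. A tiny sketch of that bookkeeping (the sample results array is made up):

```javascript
// Mirror of the runner's summary logic: skipped tests are neither passes nor failures.
const results = [
  { name: 'a', pass: true },
  { name: 'b', pass: true, skipped: true },
  { name: 'c', pass: false, error: 'boom' },
];
const passed = results.filter(r => r.pass && !r.skipped).length;
const skipped = results.filter(r => r.skipped).length;
const failed = results.filter(r => !r.pass).length;
console.log(`${passed}/${results.length} tests passed` +
  (skipped ? `, ${skipped} skipped` : '') +
  (failed ? `, ${failed} failed` : ''));
// → 1/3 tests passed, 1 skipped, 1 failed
```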
+360 -26
View File
@@ -690,6 +690,88 @@ console.log('\n=== haversineKm (hop-resolver.js) ===');
});
}
// ===== pickByAffinity — neighbor-graph + centroid scoring (#874) =====
console.log('\n=== pickByAffinity neighbor-graph scoring (#874) ===');
{
const ctx = makeSandbox();
ctx.IATA_COORDS_GEO = {};
loadInCtx(ctx, 'public/hop-resolver.js');
const HR = ctx.window.HopResolver;
// Two nodes sharing prefix "ab", hundreds of km apart.
// NodeSF is near San Francisco, NodeDEN is near Denver.
const nodeSF = { public_key: 'ab11111111111111', name: 'NodeSF', lat: 37.7, lon: -122.4 };
const nodeDEN = { public_key: 'ab22222222222222', name: 'NodeDEN', lat: 39.7, lon: -104.9 };
// A known neighbor of NodeSF (in the graph)
const nodeNeighbor = { public_key: 'cc33333333333333', name: 'SFNeighbor', lat: 37.8, lon: -122.3 };
// Another known node near Denver
const nodeDenNeighbor = { public_key: 'dd44444444444444', name: 'DENNeighbor', lat: 39.8, lon: -105.0 };
test('#874: graph edge scoring picks correct regional candidate (SF)', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'cc33333333333333', target: 'ab11111111111111', weight: 5 },
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 5 },
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// With these edges, ab11 (NodeSF) has a weight-5 edge to prev hop SFNeighbor
// and ab22 (NodeDEN) has a weight-5 edge to next hop DENNeighbor; the
// prev-hop edge breaks the tie in favor of ab11.
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF because it has a graph edge to prev hop SFNeighbor');
});
test('#874: graph edge scoring — next hop breaks tie', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor]);
HR.setAffinity({ edges: [
{ source: 'dd44444444444444', target: 'ab22222222222222', weight: 8 },
// No edge from SFNeighbor to either ab node
]});
// Path: SFNeighbor → [ab??] → DENNeighbor
// Only ab22 (NodeDEN) has edge to DENNeighbor (next hop)
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because it has graph edge to next hop DENNeighbor');
});
test('#874: centroid fallback when no graph edges exist', () => {
HR.init([nodeSF, nodeDEN, nodeNeighbor]);
HR.setAffinity({ edges: [] }); // no edges at all
// Path: SFNeighbor → [ab??]
// SFNeighbor is at (37.8, -122.3), centroid is just that point
// NodeSF (37.7, -122.4) is ~14km away, NodeDEN (39.7, -104.9) is ~1500km away
const result = HR.resolve(['cc', 'ab'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeSF',
'Should pick NodeSF via centroid proximity to SFNeighbor');
});
test('#874: centroid uses average of prev+next positions', () => {
// Prev near SF, next near Denver → centroid is midpoint (~Nevada)
// NodeDEN is closer to Nevada midpoint than NodeSF
const nodeMid = { public_key: 'ee55555555555555', name: 'MidNode', lat: 38.5, lon: -114.0 };
HR.init([nodeSF, nodeDEN, nodeNeighbor, nodeDenNeighbor, nodeMid]);
HR.setAffinity({ edges: [] });
// Path: SFNeighbor → [ab??] → DENNeighbor
// centroid = avg(37.8,-122.3, 39.8,-105.0) = (38.8, -113.65) — closer to Denver
const result = HR.resolve(['cc', 'ab', 'dd'],
null, null, null, null);
assert.strictEqual(result['ab'].name, 'NodeDEN',
'Should pick NodeDEN because centroid of SF+Denver neighbors is closer to Denver');
});
test('#874: fallback when no context at all', () => {
HR.init([nodeSF, nodeDEN]);
HR.setAffinity({ edges: [] });
// Single ambiguous hop, no origin/observer, no neighbors
const result = HR.resolve(['ab'], null, null, null, null);
assert.ok(result['ab'].ambiguous || result['ab'].name != null,
'Should resolve to some candidate without crashing');
});
}
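The centroid tests above rest on great-circle distances. A textbook haversine sketch (not necessarily the exact `haversineKm` in hop-resolver.js) checking the midpoint claim from the "centroid uses average of prev+next" test:

```javascript
// Standard haversine great-circle distance in km (R = mean Earth radius).
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371, toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Centroid of SFNeighbor (37.8, -122.3) and DENNeighbor (39.8, -105.0):
const centroid = { lat: (37.8 + 39.8) / 2, lon: (-122.3 + -105.0) / 2 }; // (38.8, -113.65)
const dSF  = haversineKm(centroid.lat, centroid.lon, 37.7, -122.4); // to NodeSF
const dDEN = haversineKm(centroid.lat, centroid.lon, 39.7, -104.9); // to NodeDEN
// NodeDEN comes out nearer the midpoint, as the test asserts; the margin is
// small (both candidates sit ~750-775 km from the centroid).
console.log(dSF.toFixed(0), dDEN.toFixed(0), dDEN < dSF);
```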
// ===== SNR/RSSI Number casting =====
{
// These test the pattern used in observer-detail.js, home.js, traces.js, live.js
@@ -1722,6 +1804,128 @@ console.log('\n=== app.js: formatEngineBadge ===');
});
}
// ===== APP.JS: computeBreakdownRanges =====
console.log('\n=== app.js: computeBreakdownRanges ===');
{
const ctx = makeSandbox();
loadInCtx(ctx, 'public/roles.js');
loadInCtx(ctx, 'public/app.js');
const computeBreakdownRanges = ctx.computeBreakdownRanges;
function findRange(ranges, label) {
return ranges.find(r => r.label === label);
}
test('returns [] for empty hex', () => {
assert.deepEqual(computeBreakdownRanges('', 1, 5), []);
});
test('returns [] for too-short hex (< 2 bytes)', () => {
assert.deepEqual(computeBreakdownRanges('15', 1, 5), []);
});
test('FLOOD non-transport: 4-hop hash_size=1', () => {
// header=15, plb=04 → hash_size=1, hash_count=4
// bytes: 15 04 90 FA F9 10 6E 01 D9
const r = computeBreakdownRanges('150490FAF910 6E01D9'.replace(/\s/g,''), 1, 5);
assert.deepEqual(findRange(r, 'Header'), { start: 0, end: 0, label: 'Header' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 5, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 6, end: 8, label: 'Payload' });
assert.strictEqual(findRange(r, 'Transport Codes'), undefined);
});
test('FLOOD non-transport: 7-hop hash_size=1', () => {
// header=15, plb=07
const hex = '15077f6d7d1cadeca33988fd95e0851ebf01ea12e1879e';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 8, label: 'Path' });
const payload = findRange(r, 'Payload');
assert.strictEqual(payload.start, 9, 'payload starts after the 7 path bytes');
});
test('FLOOD non-transport: 8-hop hash_size=1', () => {
const hex = '1508' + '11223344556677AA' + 'BBCCDD';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 12, label: 'Payload' });
});
test('Direct advert: 0-hop, no Path range', () => {
// plb=00 → 0 hops; expect Path Length but NO Path range
const r = computeBreakdownRanges('1100AABBCCDD', 1, 4);
assert.deepEqual(findRange(r, 'Path Length'), { start: 1, end: 1, label: 'Path Length' });
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('Transport route shifts path-length offset by 4', () => {
// route_type=0 (TRANSPORT_FLOOD): bytes 1..4 are Transport Codes
// header=14, transport=AABBCCDD, plb=02, hops=11 22, payload=99
const hex = '14AABBCCDD021122' + '99';
const r = computeBreakdownRanges(hex, 0, 5);
assert.deepEqual(findRange(r, 'Transport Codes'), { start: 1, end: 4, label: 'Transport Codes' });
assert.deepEqual(findRange(r, 'Path Length'), { start: 5, end: 5, label: 'Path Length' });
assert.deepEqual(findRange(r, 'Path'), { start: 6, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=2 (plb top bits=01): 4 hops × 2 bytes', () => {
// plb = 01 0001 00 = 0x44 → hash_size=2, hash_count=4 → 8 path bytes
const hex = '15' + '44' + 'AABB' + 'CCDD' + 'EEFF' + '1122' + '9988';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 11, label: 'Payload' });
});
test('hash_size=3 (plb top bits=10): 2 hops × 3 bytes', () => {
// plb = 10 0000 10 = 0x82 → hash_size=3, hash_count=2 → 6 path bytes
const hex = '15' + '82' + 'AABBCC' + 'DDEEFF' + '99';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 7, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 8, end: 8, label: 'Payload' });
});
test('hash_size=4 (plb top bits=11): 2 hops × 4 bytes', () => {
// plb = 11 0000 10 = 0xC2 → hash_size=4, hash_count=2 → 8 path bytes
const hex = '15' + 'C2' + 'AABBCCDD' + 'EEFF1122' + '99887766';
const r = computeBreakdownRanges(hex, 1, 5);
assert.deepEqual(findRange(r, 'Path'), { start: 2, end: 9, label: 'Path' });
assert.deepEqual(findRange(r, 'Payload'), { start: 10, end: 13, label: 'Payload' });
});
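The three plb-byte tests above all follow the same decode rule spelled out in their comments: top 2 bits select hash_size (00→1 … 11→4), low 6 bits are hash_count. A minimal sketch of that rule, assuming the bit layout the comments describe (`decodePlb` is a hypothetical helper, not part of the code under test):

```javascript
// Decode a path-length byte per the layout in the test comments:
// top 2 bits → hash_size selector, low 6 bits → hash_count.
function decodePlb(plb) {
  const hashSize = ((plb >> 6) & 0x03) + 1; // 00→1, 01→2, 10→3, 11→4
  const hashCount = plb & 0x3f;             // low 6 bits
  return { hashSize, hashCount, pathBytes: hashSize * hashCount };
}

console.log(decodePlb(0x44)); // { hashSize: 2, hashCount: 4, pathBytes: 8 }
console.log(decodePlb(0x82)); // { hashSize: 3, hashCount: 2, pathBytes: 6 }
console.log(decodePlb(0xC2)); // { hashSize: 4, hashCount: 2, pathBytes: 8 }
```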
test('truncated path: not enough bytes → no Path range', () => {
// plb=04 says 4 hops but only 2 bytes remain
const hex = '1504AABB';
const r = computeBreakdownRanges(hex, 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
test('ADVERT (payload_type=4) with full record: PubKey/Timestamp/Signature/Flags', () => {
// header=11, plb=00 (direct advert)
// payload: 32 bytes pubkey + 4 bytes ts + 64 bytes sig + 1 byte flags
const pubkey = 'AB'.repeat(32);
const ts = '11223344';
const sig = 'CD'.repeat(64);
const flags = '00';
const hex = '1100' + pubkey + ts + sig + flags;
const r = computeBreakdownRanges(hex, 1, 4);
assert.deepEqual(findRange(r, 'PubKey'), { start: 2, end: 33, label: 'PubKey' });
assert.deepEqual(findRange(r, 'Timestamp'), { start: 34, end: 37, label: 'Timestamp' });
assert.deepEqual(findRange(r, 'Signature'), { start: 38, end: 101, label: 'Signature' });
assert.deepEqual(findRange(r, 'Flags'), { start: 102, end: 102, label: 'Flags' });
});
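The ADVERT assertions above are a fixed-width record walk: PubKey(32) + Timestamp(4) + Signature(64) + Flags(1), starting right after the 2-byte header+plb prefix of a direct advert. A sketch of that walk (`advertRanges` is a hypothetical helper; field labels match the test labels):

```javascript
// Walk the fixed-width ADVERT record and emit inclusive byte ranges.
function advertRanges(start) {
  const widths = [['PubKey', 32], ['Timestamp', 4], ['Signature', 64], ['Flags', 1]];
  const ranges = [];
  let off = start;
  for (const [label, w] of widths) {
    ranges.push({ label, start: off, end: off + w - 1 });
    off += w;
  }
  return ranges;
}

// advertRanges(2) → PubKey 2–33, Timestamp 34–37, Signature 38–101, Flags 102
```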
test('NaN-safe: malformed path-length byte produces no Path range', () => {
// A non-hex char at the plb position would make parseInt return NaN and the
// parser bail. Hex input here is pre-validated, so exercise the equivalent
// guard instead: plb=FF (hash_size=4, hash_count=63) demands far more path
// bytes than the input holds → no Path range.
const r = computeBreakdownRanges('15FF' + 'AA', 1, 5);
assert.strictEqual(findRange(r, 'Path'), undefined);
});
}
// ===== APP.JS: isTransportRoute + transportBadge =====
console.log('\n=== app.js: isTransportRoute + transportBadge ===');
{
@@ -5462,40 +5666,33 @@ console.log('\n=== packets.js: buildFieldTable hop count from path_len (#844) ==
loadInCtx(ftCtx, 'public/packets.js');
const { buildFieldTable } = ftCtx.window._packetsTestAPI;
test('#885: byte breakdown uses pathHops length (single source of truth)', () => {
// After #885 the byte breakdown agrees with the path pill: both render
// from the per-observation path_json. raw_hex is the underlying bytes
// for that same observation, so consistency is by construction.
// path_len = 0x42 → hash_size=2, hash_count=2
// raw_hex: header(11) + path_len(42) + hop0(41B1) + hop1(27D7) + pubkey(32 bytes)...
const pubkey = 'C0DEDAD4'.padEnd(64, '0'); // 32 bytes = 64 hex chars
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
// Per-obs path_json IS the source of truth — pass the 2 hops that match raw_hex.
const pathHops = ['41B1', '27D7'];
const html = buildFieldTable(pkt, {}, pathHops, {});
// Should contain hop values from raw_hex
assert.ok(html.includes('Path (2 hops)'), 'Should show "Path (2 hops)"');
assert.ok(html.includes('41B1'), 'Should show hop 0 = 41B1');
assert.ok(html.includes('27D7'), 'Should show hop 1 = 27D7');
});
test('#885: pubkey offset advances by hashSize * pathHops.length', () => {
const pubkey = 'C0DEDAD4'.padEnd(64, '0');
const raw = '1142' + '41B1' + '27D7' + pubkey + '00000000' + '0'.repeat(128);
const pkt = { raw_hex: raw, route_type: 1, payload_type: 0 };
const html = buildFieldTable(pkt, { type: 'ADVERT', pubKey: pubkey }, ['41B1', '27D7'], {});
// Public Key should be at offset 6 (1 header + 1 path_len + 2*2 hops = 6)
assert.ok(html.includes('>6<') || html.includes('"6"'),
'Public Key should be at offset 6');
});
test('#844: hashCountVal=0 (direct advert) skips Path section', () => {
@@ -5707,12 +5904,11 @@ console.log('\n=== channel-decrypt.js: key derivation, MAC, parsing, storage ===
assert.strictEqual(ctx.window.renderSkewBadge(null, 0), '');
});
test('renderSkewBadge renders default badge with tooltip', () => {
var cs = {};
var html = ctx.window.renderSkewBadge('default', 0, cs);
assert.ok(html.includes('skew-badge--default'), 'should contain default class');
assert.ok(html.toLowerCase().includes('firmware default'), 'tooltip should mention firmware default');
assert.ok(html.includes('⏰'), 'should contain clock emoji');
});
@@ -5736,9 +5932,9 @@ console.log('\n=== channel-decrypt.js: key derivation, MAC, parsing, storage ===
test('SKEW_SEVERITY_ORDER sorts worst first', () => {
var order = ctx.window.SKEW_SEVERITY_ORDER;
assert.ok(order.wrong < order.degraded, 'wrong should sort before degraded');
assert.ok(order.degraded < order.degrading, 'degraded should sort before degrading');
assert.ok(order.degrading < order.ok, 'degrading should sort before ok');
});
}
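The severity-order test above only pins the relative order of the renamed severities, and the CSS rename in this PR exists because analytics.js builds row classes by string concatenation from those same severity strings. A sketch tying the two together (the numeric ORDER values and `fleetRowClass` are assumptions for illustration, not the real analytics.js code):

```javascript
// Relative order is what the test asserts; exact numbers are arbitrary here.
const ORDER = { wrong: 0, degraded: 1, degrading: 2, ok: 3 };

function fleetRowClass(severity) {
  // Must line up with the CSS rules .clock-fleet-row--degrading / --degraded.
  return 'clock-fleet-row--' + severity;
}

const rows = ['ok', 'degrading', 'wrong', 'degraded'];
rows.sort((a, b) => ORDER[a] - ORDER[b]); // worst first
console.log(rows);                        // ['wrong', 'degraded', 'degrading', 'ok']
console.log(fleetRowClass(rows[0]));      // 'clock-fleet-row--wrong'
```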
@@ -6116,6 +6312,144 @@ console.log('\n=== analytics.js: renderCollisionsFromServer collision table ==='
});
}
// ===== Issue #866: Full-page obs-switch — hex + path must update per observation =====
{
console.log('\n=== Issue #866: Full-page observation switch ===');
const ctx866 = makeSandbox();
loadInCtx(ctx866, 'public/roles.js');
loadInCtx(ctx866, 'public/app.js');
loadInCtx(ctx866, 'public/packet-helpers.js');
test('#866: switching observation updates effectivePkt path_json', () => {
const pkt = { id: 1, hash: 'abc123', observer_id: 'obs-agg', path_json: '["A","B","C","D"]', raw_hex: '0484A1B1C1D1', route_type: 1, timestamp: '2026-01-01T00:00:00Z' };
const obs1 = { id: 10, observer_id: 'obs-1', path_json: '["A","B"]', snr: 5, rssi: -80, timestamp: '2026-01-01T00:01:00Z' };
const obs2 = { id: 20, observer_id: 'obs-2', path_json: '["A","B","C","D"]', snr: 8, rssi: -75, timestamp: '2026-01-01T00:02:00Z' };
// Simulate renderDetail logic: pick obs1
const eff1 = ctx866.clearParsedCache({...pkt, ...obs1, _isObservation: true});
const path1 = ctx866.getParsedPath(eff1);
assert.deepStrictEqual(path1, ['A', 'B']);
assert.strictEqual(eff1.observer_id, 'obs-1');
assert.strictEqual(eff1.snr, 5);
// Switch to obs2
const eff2 = ctx866.clearParsedCache({...pkt, ...obs2, _isObservation: true});
const path2 = ctx866.getParsedPath(eff2);
assert.deepStrictEqual(path2, ['A', 'B', 'C', 'D']);
assert.strictEqual(eff2.observer_id, 'obs-2');
assert.strictEqual(eff2.snr, 8);
});
test('#866: effectivePkt preserves raw_hex from packet when obs has none', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', path_json: '["AA"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs doesn't have raw_hex, so packet's raw_hex survives spread
assert.strictEqual(eff.raw_hex, '0482AABB');
});
test('#866: effectivePkt uses obs raw_hex when available (API now returns it)', () => {
const pkt = { id: 1, hash: 'h1', raw_hex: '0482AABB', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', raw_hex: '0441CC', path_json: '["CC"]', snr: 3, rssi: -90, timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
// obs has raw_hex from API, should override
assert.strictEqual(eff.raw_hex, '0441CC');
});
test('#866: direction field carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', direction: 'rx', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', direction: 'tx', path_json: '[]', timestamp: '2026-01-01T00:00:00Z' };
const eff = {...pkt, ...obs, _isObservation: true};
assert.strictEqual(eff.direction, 'tx');
});
test('#866: resolved_path carried through observation spread', () => {
const pkt = { id: 1, hash: 'h1', resolved_path: '["aaa","bbb","ccc"]', route_type: 1 };
const obs = { id: 10, observer_id: 'obs-1', resolved_path: '["aaa"]', path_json: '["AA"]', timestamp: '2026-01-01T00:00:00Z' };
const eff = ctx866.clearParsedCache({...pkt, ...obs, _isObservation: true});
const rp = ctx866.getResolvedPath(eff);
assert.deepStrictEqual(rp, ['aaa']);
});
test('#866: getPathLenOffset used for hop count cross-check', () => {
// Flood route: offset 1
assert.strictEqual(ctx866.getPathLenOffset(1), 1);
assert.strictEqual(ctx866.getPathLenOffset(2), 1);
// Transport route: offset 5
assert.strictEqual(ctx866.getPathLenOffset(0), 5);
assert.strictEqual(ctx866.getPathLenOffset(3), 5);
});
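The getPathLenOffset assertions above encode one fact also used by the transport-route breakdown test earlier: transport routes (types 0 and 3) carry 4 transport-code bytes after the header byte, pushing the path-length byte from offset 1 to offset 5. A minimal sketch of that mapping, assuming only what those asserts state (`pathLenOffset` is a stand-in, not the real implementation):

```javascript
// Flood routes (1, 2): path-length byte immediately follows the header.
// Transport routes (0, 3): 4 transport-code bytes sit in between.
function pathLenOffset(routeType) {
  return (routeType === 0 || routeType === 3) ? 5 : 1;
}

console.log(pathLenOffset(1)); // 1
console.log(pathLenOffset(0)); // 5
```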
test('#866: URL hash should encode obs parameter for deep linking', () => {
// Simulate the URL construction pattern from renderDetail obs click
const pktHash = 'abc123def456';
const obsId = '42';
const url = `#/packets/${pktHash}?obs=${obsId}`;
assert.strictEqual(url, '#/packets/abc123def456?obs=42');
// Parse back
const qIdx = url.indexOf('?');
const qs = new URLSearchParams(url.substring(qIdx));
assert.strictEqual(qs.get('obs'), '42');
});
}
// ===== #872 — hop-display unreliable badge =====
{
console.log('\n--- #872: hop-display unreliable warning badge ---');
function makeHopDisplaySandbox() {
const sb = {
window: { addEventListener: () => {}, dispatchEvent: () => {} },
document: {
readyState: 'complete',
createElement: () => ({ id: '', textContent: '', innerHTML: '' }),
head: { appendChild: () => {} },
getElementById: () => null,
addEventListener: () => {},
querySelectorAll: () => [],
querySelector: () => null,
},
console,
Date, Math, Array, Object, String, Number, JSON, RegExp, Map, Set,
encodeURIComponent, parseInt, parseFloat, isNaN, Infinity, NaN, undefined,
setTimeout: () => {}, setInterval: () => {}, clearTimeout: () => {}, clearInterval: () => {},
};
sb.window.document = sb.document;
sb.self = sb.window;
sb.globalThis = sb.window;
const ctx = vm.createContext(sb);
const hopSrc = fs.readFileSync(__dirname + '/public/hop-display.js', 'utf8');
vm.runInContext(hopSrc, ctx);
return ctx;
}
const hopCtx = makeHopDisplaySandbox();
test('#872: unreliable hop renders warning badge, not strikethrough', () => {
const html = hopCtx.window.HopDisplay.renderHop('AABB', {
name: 'TestNode', pubkey: 'pk123', unreliable: true,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
// Must contain unreliable warning badge button
assert.ok(html.includes('hop-unreliable-btn'), 'should have unreliable badge button');
assert.ok(html.includes('⚠️'), 'should have ⚠️ icon');
assert.ok(html.includes('Unreliable name resolution'), 'should have tooltip text');
// Must NOT contain line-through in inline style (CSS class no longer has it)
assert.ok(!html.includes('line-through'), 'should not contain line-through');
// Should still have hop-unreliable class for subtle styling
assert.ok(html.includes('hop-unreliable'), 'should have hop-unreliable class');
});
test('#872: reliable hop does NOT render unreliable badge', () => {
const html = hopCtx.window.HopDisplay.renderHop('CCDD', {
name: 'GoodNode', pubkey: 'pk456', unreliable: false,
ambiguous: false, conflicts: [], globalFallback: false,
}, {});
assert.ok(!html.includes('hop-unreliable-btn'), 'should not have unreliable badge');
});
}
// ===== SUMMARY =====
Promise.allSettled(pendingTests).then(() => {
console.log(`\n${'═'.repeat(40)}`);
@@ -95,5 +95,27 @@ const result6 = HopResolver.resolve(['ee44'], null, null, null, null, null);
assert(result6['ee44'].name === 'NodeD', 'Unique prefix resolves directly — got: ' + result6['ee44'].name);
assert(!result6['ee44'].ambiguous, 'Should not be marked ambiguous');
// Test 7: lat=0 / lon=0 candidates are NOT excluded (equator/prime-meridian bug fix)
console.log('\nTest 7: lat=0 / lon=0 candidates are included in geo scoring');
const nodeEquator = { public_key: 'ab5555', name: 'EquatorNode', lat: 0, lon: 10 };
const nodeFar = { public_key: 'ab6666', name: 'FarNode', lat: 60, lon: 60 };
const anchorNearEq = { public_key: 'cd7777', name: 'AnchorEq', lat: 1, lon: 11 };
HopResolver.init([nodeEquator, nodeFar, anchorNearEq]);
HopResolver.setAffinity({});
// Anchor near equator — EquatorNode (0,10) should be geo-closest
const result7 = HopResolver.resolve(['cd77', 'ab'], 1.0, 11.0, null, null, null);
assert(result7['ab'].name === 'EquatorNode',
'lat=0 candidate should be included and win by geo — got: ' + result7['ab'].name);
// Test 8: lon=0 candidate is also included
console.log('\nTest 8: lon=0 candidate is included in geo scoring');
const nodePrime = { public_key: 'ab8888', name: 'PrimeMeridian', lat: 10, lon: 0 };
const anchorNearPM = { public_key: 'cd9999', name: 'AnchorPM', lat: 11, lon: 1 };
HopResolver.init([nodePrime, nodeFar, anchorNearPM]);
HopResolver.setAffinity({});
const result8 = HopResolver.resolve(['cd99', 'ab'], 11.0, 1.0, null, null, null);
assert(result8['ab'].name === 'PrimeMeridian',
'lon=0 candidate should be included and win by geo — got: ' + result8['ab'].name);
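Tests 7 and 8 guard against the classic falsy-zero mistake: a truthiness check on coordinates silently drops lat=0 (equator) or lon=0 (prime meridian). A hypothetical sketch of the bug shape and its fix (the actual HopResolver guard may be written differently):

```javascript
// Buggy: 0 is falsy, so equator/prime-meridian nodes lose geo scoring.
function hasGeoBuggy(n) { return !!(n.lat && n.lon); }
// Fixed: test for presence, not truthiness.
function hasGeoFixed(n) { return n.lat != null && n.lon != null; }

console.log(hasGeoBuggy({ lat: 0, lon: 10 })); // false — wrongly excluded
console.log(hasGeoFixed({ lat: 0, lon: 10 })); // true
```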
console.log('\n' + (passed + failed) + ' tests, ' + passed + ' passed, ' + failed + ' failed\n');
process.exit(failed > 0 ? 1 : 0);