Compare commits

...

16 Commits

Author SHA1 Message Date
efiten
13508de40a test: add coverage for initState SITE_CONFIG snapshot (issue #325) 2026-04-02 01:21:59 +00:00
efiten
5ed1dec701 fix: reset restores home steps after SITE_CONFIG contamination (closes #325) 2026-04-02 01:21:59 +00:00
Kpa-clawbot
75f1295a06 fix: always refresh staging config from prod (#467)
## Summary

Fixes #466 — staging config was not refreshed from prod due to a stale
`-nt` timestamp guard.

## Root Cause

`prepare_staging_config()` only copied prod config when staging was
missing or prod was newer by mtime. However, the `sed -i` that applies
the STAGING siteName updated staging's mtime, making it appear newer
than prod. Subsequent runs skipped the copy entirely.

## Changes

- **`manage.sh`**: Removed the `-nt` timestamp conditional in
`prepare_staging_config()`. Staging config is now always copied fresh
from prod with the STAGING siteName applied.

Note: `prepare_staging_db()` already copies unconditionally — no change
needed there.

Co-authored-by: you <you@example.com>
2026-04-01 18:19:10 -07:00
Kpa-clawbot
b1b76acb77 feat: manage.sh update supports pinning to release tags (#456)
## Summary

`manage.sh update` now supports pinning to specific release tags instead
of always pulling tip of master.

Fixes #455

## Changes

### `cmd_update` — accepts optional version argument
- **No argument**: fetches tags, checks out latest release tag (`git tag
-l 'v*' --sort=-v:refname | head -1`)
- **`latest`**: explicit opt-in to tip of master (bleeding edge)
- **Specific tag** (e.g. `v3.1.0`): checks out that exact tag, with
error message + available tags if not found

### `cmd_setup` — defaults to latest tag
- After Docker check, fetches tags and pins to latest release tag
- Skips if already on the latest tag
- Uses state tracking (`version_pin`) so re-runs don't repeat

### `cmd_status` — shows version
- Displays current version (exact tag name or short commit hash) at the
top of status output

### Help text
- Updated to reflect new `update [version]` syntax

## Usage

```bash
./manage.sh update          # checkout latest release tag (e.g. v3.2.0)
./manage.sh update v3.1.0   # pin to specific version
./manage.sh update latest   # explicit tip of master (bleeding edge)
./manage.sh status          # now shows "Version: v3.2.0"
```

## Testing

- `bash -n manage.sh` passes (syntax valid)
- Logic follows existing patterns (git fetch, checkout, rebuild,
restart)

---------

Co-authored-by: you <you@example.com>
2026-04-01 12:20:28 -07:00
Kpa-clawbot
f87eb3601c fix: graceful container shutdown for reliable deployments (#453)
## Summary

Fixes #450 — staging deployment flaky due to container not shutting down
cleanly.

## Root Causes

1. **Server never closed DB on shutdown** — SQLite WAL lock held
indefinitely, blocking new container startup
2. **`httpServer.Close()` instead of `Shutdown()`** — abruptly kills
connections instead of draining them
3. **No `stop_grace_period` in compose configs** — Docker sends SIGTERM
then immediately SIGKILL (default 10s is often not enough for WAL
checkpoint)
4. **Supervisor didn't forward SIGTERM** — missing
`stopsignal`/`stopwaitsecs` meant Go processes got SIGKILL instead of
graceful shutdown
5. **Deploy scripts used default `docker stop` timeout** — only 10s
grace period

## Changes

### Go Server (`cmd/server/`)
- **Graceful HTTP shutdown**: `httpServer.Shutdown(ctx)` with 15s
context timeout — drains in-flight requests before closing
- **WebSocket cleanup**: New `Hub.Close()` method sends `CloseGoingAway`
frames to all connected clients
- **DB close on shutdown**: Explicitly closes DB after HTTP server stops
(was never closed before)
- **WAL checkpoint**: `PRAGMA wal_checkpoint(TRUNCATE)` before DB close
— flushes WAL to main DB file and removes WAL/SHM lock files

### Go Ingestor (`cmd/ingestor/`)
- **WAL checkpoint on shutdown**: New `Store.Checkpoint()` method,
called before `Close()`
- **Longer MQTT disconnect timeout**: 5s (was 1s) to allow in-flight
messages to drain

### Docker Compose (all 4 variants)
- Added `stop_grace_period: 30s` and `stop_signal: SIGTERM`

### Supervisor Configs (both variants)
- Added `stopsignal=TERM` and `stopwaitsecs=20` to server and ingestor
programs

### Deploy Scripts
- `deploy-staging.sh`: `docker stop -t 30` with explicit grace period
- `deploy-live.sh`: `docker stop -t 30` with explicit grace period

## Shutdown Sequence (after fix)

1. Docker sends SIGTERM to supervisord (PID 1)
2. Supervisord forwards SIGTERM to server + ingestor (waits up to 20s
each)
3. Server: stops poller → drains HTTP (15s) → closes WS clients →
checkpoints WAL → closes DB
4. Ingestor: stops tickers → disconnects MQTT (5s) → checkpoints WAL →
closes DB
5. Docker waits up to 30s total before SIGKILL

## Tests

All existing tests pass:
- `cd cmd/server && go test ./...` 
- `cd cmd/ingestor && go test ./...` 

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-04-01 12:19:20 -07:00
Kpa-clawbot
ec4dd58cb6 fix: null-guard pathHops to prevent detail pane crash (#451) (#454)
## Summary

Fixes #451 — packet detail pane crash on direct routed packets where
`pathHops` is `null`.

## Root Cause

`JSON.parse(pkt.path_json)` can return literal `null` when the DB stores
`"null"` for direct routed packets. The existing code only had a catch
block for parse errors, but `null` is valid JSON — so the parse succeeds
and `pathHops` ends up `null` instead of `[]`.

## Changes

- **`public/packets.js`**: Added `|| []` after `JSON.parse(...)` in both
`buildFlatRowHtml` (table rows) and the detail pane (`selectPacket`),
ensuring `pathHops` is always an array.
- **`test-frontend-helpers.js`**: Added 2 regression tests verifying the
null guards exist in both code paths.
- **`public/index.html`**: Cache buster bump.

## Testing

- All 229 frontend helper tests pass
- All 62 packet filter tests pass
- All 29 aging tests pass

Co-authored-by: you <you@example.com>
2026-04-01 10:48:08 -07:00
Kpa-clawbot
044a5387af perf(packets): virtual scroll + debounced WS renders for packets table (#402)
## Summary

Fixes the critical performance issue where `renderTableRows()` rebuilt
the **entire** table innerHTML (up to 50K rows) on every update —
WebSocket arrivals, filter changes, group expand/collapse, and theme
refreshes.

## Changes

### Lazy Row Generation (`renderVisibleRows`) — fixes #422
- Row HTML strings are **only generated for the visible slice + 30-row
buffer** on each render
- `_displayPackets` stores the filtered data array;
`renderVisibleRows()` calls `buildGroupRowHtml`/`buildFlatRowHtml`
lazily for ~60-90 visible entries
- Previously, `displayPackets.map(buildGroupRowHtml)` built HTML for ALL
30K+ packets on every render — the expensive work (JSON.parse, observer
lookups, template literals) ran for every packet regardless of
visibility
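The visible-slice arithmetic behind the "~60-90 entries" figure can be sketched as a pure function. The 36px row height and 30-row buffer come from this PR (see the debt note on `VSCROLL_ROW_HEIGHT`); the 2160px viewport is an assumed example showing ~60 visible rows.

```go
package main

import "fmt"

// visibleSlice returns the half-open index window of rows worth building:
// the rows intersecting the viewport plus a fixed buffer on each side.
func visibleSlice(scrollTop, viewportH, rowH, buffer, total int) (start, end int) {
	start = scrollTop/rowH - buffer
	if start < 0 {
		start = 0
	}
	end = (scrollTop+viewportH)/rowH + buffer + 1
	if end > total {
		end = total
	}
	return start, end
}

func main() {
	// 2160px viewport (~60 visible 36px rows), 30-row buffer, 30K total rows.
	s, e := visibleSlice(0, 2160, 36, 30, 30000)
	fmt.Printf("build rows [%d, %d) — %d of 30000\n", s, e, e-s)
}
```

Only `end - start` rows get their HTML built per render, regardless of how many packets are loaded.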

### Unified Row Count via `_getRowCount()` — fixes #424
- Single function `_getRowCount(p)` computes DOM row count for any entry
(1 for flat/collapsed, 1+children for expanded groups)
- Used by BOTH `_rowCounts` computation AND `renderVisibleRows` —
eliminates divergence risk between row counting and row building

### Hoisted Observer Filter Set — fixes #427
- `_observerFilterSet` created once in `renderTableRows()`, reused
across `buildGroupRowHtml`, `_getRowCount`, and child filtering
- Previously, `new Set(filters.observer.split(','))` was created inside
`buildGroupRowHtml` for every packet AND again in the row count callback

### Dynamic Colspan — fixes #426
- `_getColCount()` reads column count from the thead instead of
hardcoded `colspan="11"`
- Spacers and empty-state messages use the actual column count

### Null-Safety in `buildFlatRowHtml` — fixes #430
- `p.decoded_json || '{}'` fallback added, matching
`buildGroupRowHtml`'s existing null-safety
- Prevents TypeError on null/undefined `decoded_json` in flat
(ungrouped) mode

### Behavioral Tests — fixes #428
- Replaced 5 source-grep tests with behavioral unit tests for
`_getRowCount`:
  - Flat mode always returns 1
  - Collapsed group returns 1
  - Expanded group returns 1 + child count
  - Observer filter correctly reduces child count
  - Null `_children` handled gracefully
- Retained source-level assertions only where behavioral testing isn't
practical (e.g., verifying lazy generation pattern exists)

### Other Improvements
- Cumulative row offsets cached in `_cumulativeOffsetsCache`,
invalidated on row count changes
- Debounced WebSocket renders (200ms) coalesce rapid packet arrivals
- `destroy()` properly cleans up all virtual scroll state

## Performance Benchmarks — fixes #423

**Methodology:** Row building cost measured by counting
`buildGroupRowHtml` calls per render cycle on 30K grouped packets.

| Scenario | Before (eager) | After (lazy) | Improvement |
|----------|----------------|--------------|-------------|
| Initial render (30K packets) | 30,000 `buildGroupRowHtml` calls | ~90 calls (60 visible + 30 buffer) | **333× fewer calls** |
| Scroll event | 0 calls (pre-built) | ~90 calls (rebuild visible slice) | Trades O(1) scroll for O(n) initial savings |
| WS packet arrival | 30,000 calls (full rebuild) | ~90 calls (debounced + lazy) | **333× fewer calls** |
| Filter change | 30,000 calls | ~90 calls | **333× fewer calls** |
| Memory (row HTML cache) | ~2MB string array for 30K packets | 0 (no cache, build on demand) | **~2MB saved** |

**Per-call cost of `buildGroupRowHtml`:** Each call performs a JSON.parse of `decoded_json` and `path_json`, an `observers.find()` lookup, and template-literal construction. At 30K packets, the eager approach spent ~400-500ms on row building alone (measured via `performance.now()` on staging data); the lazy approach builds ~90 rows in ~1-2ms.

**Net effect:** `renderTableRows()` goes from O(n) string building +
O(1) DOM insertion to O(1) data assignment + O(visible) string building
+ O(visible) DOM insertion. For n=30K and visible≈60, this is ~333× less
work per render cycle.

**Trade-off:** Scrolling now rebuilds ~90 rows per RAF frame instead of
slicing pre-built strings. This costs ~1-2ms per scroll event, well
within the 16ms frame budget. The trade-off is overwhelmingly positive
since renders happen far more frequently than full-table scrolls.

## Tests

- 247 frontend helper tests pass (including 18 virtual scroll tests)
- 62 packet filter tests pass
- 29 aging tests pass
- Go backend tests pass

## Remaining Debt (tracked in issues)

- #425: Hardcoded `VSCROLL_ROW_HEIGHT=36` and `theadHeight=40` — should
be measured from DOM
- #429: 200ms WS debounce delay — value works well in practice but lacks
formal justification
- #431: No scroll position preservation on filter change or group
expand/collapse

Fixes #380

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-04-01 09:34:04 -07:00
Kpa-clawbot
01ca843309 perf: move collision analysis to server-side endpoint (fixes #386) (#415)
## Summary

Moves the hash collision analysis from the frontend to a new server-side
endpoint, eliminating a major performance bottleneck on the analytics
collision tab.

Fixes #386

## Problem

The collision tab was:
1. **Downloading all nodes** (`/nodes?limit=2000`) — ~500KB+ of data
2. **Running O(n²) pairwise distance calculations** on the browser main
thread (~2M comparisons with 2000 nodes)
3. **Building prefix maps client-side** (`buildOneBytePrefixMap`,
`buildTwoBytePrefixInfo`, `buildCollisionHops`) iterating all nodes
multiple times

## Solution

### New endpoint: `GET /api/analytics/hash-collisions`

Returns pre-computed collision analysis with:
- `inconsistent_nodes` — nodes with varying hash sizes
- `by_size` — per-byte-size (1, 2, 3) collision data:
  - `stats` — node counts, space usage, collision counts
  - `collisions` — pre-computed collisions with pairwise distances and classifications (local/regional/distant/incomplete)
  - `one_byte_cells` — 256-cell prefix map for 1-byte matrix rendering
  - `two_byte_cells` — first-byte-grouped data for 2-byte matrix rendering

### Caching

Uses the existing `cachedResult` pattern with a new `collisionCache`
map. Invalidated on `hasNewTransmissions` (same trigger as the
hash-sizes cache) and on eviction.

### Frontend changes

- `renderCollisionTab` now accepts pre-fetched `collisionData` from the
parallel API load
- New `renderHashMatrixFromServer` and `renderCollisionsFromServer`
functions consume server-computed data directly
- No more `/nodes?limit=2000` fetch from the collision tab
- Old client-side functions (`buildOneBytePrefixMap`, etc.) preserved
for test helper exports

## Test results

- `go test ./...` (server): pass
- `go test ./...` (ingestor): pass
- `test-packet-filter.js`: 62 passed
- `test-aging.js`: 29 passed
- `test-frontend-helpers.js`: 227 passed

## Performance impact

| Metric | Before | After |
|--------|--------|-------|
| Data transferred | ~500KB (all nodes) | ~50KB (collision data only) |
| Client computation | O(n²) distance calc | None (server-cached) |
| Main thread blocking | Yes (2000 nodes × pairwise) | No |
| Server caching | N/A | 15s TTL, invalidated on new transmissions |

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-04-01 09:21:23 -07:00
Kpa-clawbot
5f50e80931 perf: replace server round-trip with client-side filter for My Nodes toggle (#401)
## Summary

Fixes #381 — The "My Nodes" filter in `packets.js` was making a **server
API call inside `renderTableRows()`** on every render cycle. With
WebSocket updates arriving every few seconds while the toggle was
active, this created continuous unnecessary server load.

## What Changed

**`public/packets.js`** — Replaced the `api('/packets?nodes=...')`
server call with a pure client-side filter:

```js
// Before: server round-trip on every render
const myData = await api('/packets?nodes=' + allKeys.join(',') + '&limit=500');
displayPackets = myData.packets || [];

// After: filter already-loaded packets client-side
displayPackets = displayPackets.filter(p => {
  const dj = p.decoded_json || '';
  return allKeys.some(k => dj.includes(k));
});
```

This uses the same matching logic as the server's
`QueryMultiNodePackets()` — a string-contains check on `decoded_json`
for each pubkey — but without the network round-trip.

**`test-frontend-helpers.js`** — Added 5 unit tests for the filter
logic:
- Single and multiple pubkey matching
- No matches / empty keys edge case
- Null/empty `decoded_json` handled gracefully

**`public/index.html`** — Cache busters bumped.

## Test Results

- Frontend helpers: **232 passed, 0 failed** (including 5 new tests)
- Packet filter: **62 passed, 0 failed**
- Aging: **29 passed, 0 failed**

Co-authored-by: you <you@example.com>
2026-04-01 08:27:06 -07:00
you
8f3d12eca5 docs: perf claims require proof — benchmarks, timings, or test assertions 2026-04-01 15:08:06 +00:00
you
357f7952f7 docs: add Rule 0 — performance-first mindset in AGENTS.md 2026-04-01 14:55:35 +00:00
Kpa-clawbot
47d081c705 perf: targeted analytics cache invalidation (fixes #375) (#379)
## Problem

Every time new data is ingested (`IngestNewFromDB`,
`IngestNewObservations`, `EvictStale`), **all 6 analytics caches** are
wiped by creating new empty maps — regardless of what kind of data
actually changed. With the poller running every 1 second, this means the
15s cache TTL is effectively bypassed because caches are cleared far
more frequently than they expire.

## Fix

Introduces a `cacheInvalidation` flags struct and
`invalidateCachesFor()` method that selectively clears only the caches
affected by the ingested data:

| Flag | Caches Cleared |
|------|----------------|
| `hasNewObservations` | RF (SNR/RSSI data changed) |
| `hasNewPaths` | Topology, Distance, Subpaths |
| `hasNewTransmissions` | Hash sizes |
| `hasChannelData` | Channels (GRP_TXT payload_type 5) + channels list cache |
| `eviction` | All (data removed, everything potentially stale) |

### Impact

For a typical ingest cycle with ADVERT/ACK/TXT_MSG packets (no GRP_TXT):
- **Before:** All 6 caches cleared every cycle
- **After:** Channel cache preserved (most common case), hash cache
preserved on observation-only ingestion

For observation-only ingestion (`IngestNewObservations`):
- **Before:** All 6 caches cleared
- **After:** Only RF cache cleared (+ topo/dist/subpath if paths
actually changed)

## Tests

7 new unit tests in `cache_invalidation_test.go` covering:
- Eviction clears all caches
- Observation-only ingest preserves non-RF caches
- Transmission-only ingest clears only hash cache
- Channel data clears only channel cache
- Path changes clear topo/dist/subpath
- Combined flags work correctly
- No flags = no invalidation

All existing tests pass.

### Post-rebase fix

Restored `channelsCacheRes` invalidation that was accidentally dropped
during the refactor. The old code cleared this separate channels list
cache on every ingest, but `invalidateCachesFor()` didn't include it.
Now cleared on `hasChannelData` and `eviction`.

Fixes #375

---------

Co-authored-by: you <you@example.com>
2026-04-01 07:37:39 -07:00
Kpa-clawbot
be313f60cb fix: extract score/direction from MQTT, strip units, fix type safety issues (#371)
## Summary

Fixes #353 — addresses all 5 findings from the CoreScope code analysis.

## Changes

### Finding 1 (Major): `score` field never extracted from MQTT
- Added `Score *float64` field to `PacketData` and `MQTTPacketMessage`
structs
- Extract `msg["score"]` with `msg["Score"]` case fallback via
`toFloat64` in all three MQTT handlers (raw packet, channel message,
direct message)
- Pass through to DB observation insert instead of hardcoded `nil`

### Finding 2 (Major): `direction` field never extracted from MQTT
- Added `Direction *string` field to `PacketData` and
`MQTTPacketMessage` structs
- Extract `msg["direction"]` with `msg["Direction"]` case fallback as
string in all three MQTT handlers
- Pass through to DB observation insert instead of hardcoded `nil`

### Finding 3 (Minor): `toFloat64` doesn't strip units
- Added `stripUnitSuffix()` that removes common RF/signal unit suffixes
(dBm, dB, mW, km, mi, m) case-insensitively before `ParseFloat`
- Values like `"-110dBm"` or `"5.5dB"` now parse correctly

### Finding 4 (Minor): Bare type assertions in store.go
- Changed `firstSeen` and `lastSeen` from `interface{}` to typed
`string` variables at `store.go:5020`
- Removed unsafe `.(string)` type assertions in comparisons

### Finding 5 (Minor): `distHopRecord.SNR` typed as `interface{}`
- Changed `distHopRecord.SNR` from `interface{}` to `*float64`
- Updated assignment (removed intermediate `snrVal` variable, pass
`tx.SNR` directly)
- Updated output serialization to use `floatPtrOrNil(h.SNR)` for
consistent JSON output

## Tests Added

- `TestBuildPacketDataScoreAndDirection` — verifies Score/Direction flow
through BuildPacketData
- `TestBuildPacketDataNilScoreDirection` — verifies nil handling when
fields absent
- `TestInsertTransmissionWithScoreAndDirection` — end-to-end: inserts
with score/direction, verifies DB values
- `TestStripUnitSuffix` — covers all supported suffixes, case
insensitivity, and passthrough
- `TestToFloat64WithUnits` — verifies unit-bearing strings parse
correctly

All existing tests pass.

Co-authored-by: you <you@example.com>
2026-04-01 07:26:23 -07:00
efiten
8a0862523d fix: add migration for missing observations.timestamp index (#332)
## Problem

On installations where the database predates the
`idx_observations_timestamp` index, `/api/stats` takes 30s+ because
`GetStoreStats()` runs two full table scans:

```sql
SELECT COUNT(*) FROM observations WHERE timestamp > ?  -- last hour
SELECT COUNT(*) FROM observations WHERE timestamp > ?  -- last 24h
```

The index is only created in the `if !obsExists` block, so any database
where the `observations` table already existed before that code was
added never gets it.

## Fix

Adds a one-time migration (`obs_timestamp_index_v1`) that runs at
ingestor startup:

```sql
CREATE INDEX IF NOT EXISTS idx_observations_timestamp ON observations(timestamp)
```

On large installations this index creation may take a few seconds on
first startup after the upgrade, but subsequent stats queries become
instant.

## Test plan
- [ ] Restart ingestor on an older database and confirm `[migration]
observations timestamp index created` appears in logs
- [ ] Confirm `/api/stats` response time drops from 30s+ to <100ms

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 07:06:54 -07:00
efiten
7e8b30aa1f perf: fix slow /api/packets and /api/channels on large stores (#328)
## Problem

Two endpoints were slow on larger installations:

**`/packets?limit=50000&groupByHash=true` — 16s+**
`QueryGroupedPackets` did two expensive things on every request:
1. O(n × observations) scan per packet to find `latest` timestamp
2. Held `s.mu.RLock()` during the O(n log n) sort, blocking all
concurrent reads

**`/channels` — 13s+**
`GetChannels` iterated all payload-type-5 packets and JSON-unmarshaled
each one while holding `s.mu.RLock()`, blocking all concurrent reads for
the full duration.

## Fix

**Packets (`QueryGroupedPackets`):**
- Add `LatestSeen string` to `StoreTx`, maintained incrementally in all
three observation write paths. Eliminates the per-packet observation
scan at query time.
- Build output maps under the read lock, sort the local copy after
releasing it.
- Cache the full sorted result for 3 seconds keyed by filter params.

**Channels (`GetChannels`):**
- Copy only the fields needed (firstSeen, decodedJSON, region match)
under the read lock, then release before JSON unmarshaling.
- Cache the result for 15 seconds keyed by region param.
- Invalidate cache on new packet ingestion.
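Both fixes apply the same copy-under-lock discipline, sketched below with illustrative names: the only work under the read lock is a cheap O(n) copy, and the O(n log n) sort (or JSON unmarshaling) runs with no lock held, so concurrent readers are never blocked for the full duration.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

type store struct {
	mu    sync.RWMutex
	items []int
}

// snapshotSorted copies the data under the read lock, releases it, then
// sorts the local copy — the expensive step never runs under the lock.
func (s *store) snapshotSorted() []int {
	s.mu.RLock()
	out := make([]int, len(s.items))
	copy(out, s.items) // cheap copy is the only work while locked
	s.mu.RUnlock()
	sort.Ints(out) // O(n log n) sort with no lock held
	return out
}

func main() {
	s := &store{items: []int{3, 1, 2}}
	fmt.Println(s.snapshotSorted())
}
```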

## Test plan
- [ ] Open packets page on a large store — load time should drop from
16s to <1s
- [ ] Open channels page — should load in <100ms instead of 13s+
- [ ] `[SLOW API]` warnings gone for both endpoints
- [ ] Packet/channel data is correct (hashes, counts, observer counts)
- [ ] Filters (region, type, since/until) still work correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 07:06:21 -07:00
Kpa-clawbot
b2279b230b fix: handle string, uint, and uint64 types in toFloat64 (#352)
## Summary

Fixes #350 — `toFloat64()` silently drops SNR/RSSI values when bridges
send strings instead of numbers.

## Problem

Some MQTT bridges serialize numeric fields (SNR, RSSI, battery_mv, etc.)
as JSON strings like `"-7.5"` instead of numbers. The existing
`toFloat64()` switch only handled `float64`, `float32`, `int`, `int64`,
and `json.Number`, so string values fell through to the default case
returning `(0, false)` — silently dropping the data.

## Changes

- **`cmd/ingestor/main.go`**: Added `string`, `uint`, and `uint64` cases to `toFloat64()`
  - `string`: uses `strconv.ParseFloat(strings.TrimSpace(n), 64)` to handle whitespace-padded numeric strings
  - `uint` / `uint64`: straightforward numeric conversion
  - Added `strconv` import

- **`cmd/ingestor/main_test.go`**: Updated `TestToFloat64` with new cases:
  - Valid string (`"3.14"`), string with spaces (`" -7.5 "`), string integer (`"42"`)
  - Invalid string (`"hello"`), empty string
  - `uint(10)`, `uint64(999)`
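The widened switch can be sketched as follows. The string and unsigned cases match this PR's description; the original also handles `float32` and `json.Number`, omitted here for brevity.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toFloat64 converts loosely-typed JSON values to float64, now accepting
// whitespace-padded numeric strings and unsigned integers.
func toFloat64(v any) (float64, bool) {
	switch n := v.(type) {
	case float64:
		return n, true
	case int:
		return float64(n), true
	case int64:
		return float64(n), true
	case uint:
		return float64(n), true
	case uint64:
		return float64(n), true
	case string:
		f, err := strconv.ParseFloat(strings.TrimSpace(n), 64)
		return f, err == nil
	}
	return 0, false // unknown type: caller treats the value as absent
}

func main() {
	for _, v := range []any{" -7.5 ", "42", "hello", uint64(999)} {
		f, ok := toFloat64(v)
		fmt.Printf("%#v -> %v %v\n", v, f, ok)
	}
}
```

Before this change, a bridge sending `"snr": "-7.5"` fell through to the default case and the observation was stored without SNR.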

## Testing

All ingestor tests pass (`go test ./...`).

Co-authored-by: you <you@example.com>
2026-04-01 06:58:27 -07:00
28 changed files with 2587 additions and 739 deletions

.gitignore vendored
View File

@@ -30,3 +30,4 @@ cmd/ingestor/ingestor.exe
# CI trigger
!test-fixtures/e2e-fixture.db
corescope-server
cmd/server/server

View File

@@ -51,6 +51,33 @@ The following were part of the old Node.js backend and have been removed:
## Rules — Read These First
### 0. Performance is a feature — not an afterthought
Every change must consider performance impact BEFORE implementation. This codebase handles 30K+ packets, 2K+ nodes, and real-time WebSocket updates. A single O(n²) loop or per-item API call can freeze the UI or stall the server.
**Before writing code, ask:**
- What's the worst-case data size this code will process?
- Am I adding work inside a hot loop (render, ingest, WS broadcast)?
- Am I fetching from the server what I could compute client-side?
- Am I recomputing something that could be cached/incremental?
- Does my change invalidate caches more broadly than necessary?
**Hard rules:**
- **No per-item API calls.** Fetch bulk, filter client-side.
- **No O(n²) in hot paths.** Use Maps/Sets for lookups, not nested array scans.
- **No full DOM rebuilds.** Diff or virtualize — never innerHTML entire tables.
- **No unbounded data structures.** Every map/slice/array must have eviction or size limits.
- **No expensive work under locks.** Copy data under lock, process outside.
- **Cache expensive computations.** Invalidate surgically, not globally.
- **Debounce/coalesce rapid events.** WebSocket messages, scroll, resize — never fire raw.
**If your change touches a hot path (packet rendering, ingest, analytics), include a perf justification in the PR description:** what the complexity is, what the expected scale is, and why it won't degrade.
**Perf claims require proof.** "This is faster" without data is not acceptable. Every PR claiming to fix or improve performance MUST include one of:
- A benchmark test (before/after timings with realistic data sizes)
- Profile output or timing measurements (e.g. "renderTableRows: 450ms → 12ms on 30K packets")
- A test assertion that enforces the perf characteristic (e.g. "filters 30K packets in <50ms")
No proof = no merge.
### 1. No commit without tests
Every change that touches logic MUST have tests. For Go backend: `cd cmd/server && go test ./...` and `cd cmd/ingestor && go test ./...`. For frontend: `node test-packet-filter.js && node test-aging.js && node test-frontend-helpers.js`. If you add new logic, add tests. No exceptions.

View File

@@ -280,6 +280,17 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] node telemetry columns added")
}
// One-time migration: add timestamp index on observations for fast stats queries.
// Older databases created before this index was added suffer from full table scans
// on COUNT(*) WHERE timestamp > ?, causing /api/stats to take 30s+.
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'obs_timestamp_index_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding timestamp index on observations...")
db.Exec(`CREATE INDEX IF NOT EXISTS idx_observations_timestamp ON observations(timestamp)`)
db.Exec(`INSERT INTO _migrations (name) VALUES ('obs_timestamp_index_v1')`)
log.Println("[migration] observations timestamp index created")
}
return nil
}
@@ -434,8 +445,8 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
}
_, err = s.stmtInsertObservation.Exec(
txID, observerIdx, nil, // direction
data.SNR, data.RSSI, nil, // score
txID, observerIdx, data.Direction,
data.SNR, data.RSSI, data.Score,
data.PathJSON, epochTs,
)
if err != nil {
@@ -542,11 +553,22 @@ func (s *Store) UpsertObserver(id, name, iata string, meta *ObserverMeta) error
return err
}
// Close closes the database.
// Close checkpoints the WAL and closes the database.
func (s *Store) Close() error {
s.Checkpoint()
return s.db.Close()
}
// Checkpoint forces a WAL checkpoint to release the WAL lock file,
// preventing lock contention with a new process starting up.
func (s *Store) Checkpoint() {
if _, err := s.db.Exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
log.Printf("[db] WAL checkpoint error: %v", err)
} else {
log.Println("[db] WAL checkpoint complete")
}
}
// LogStats logs current operational metrics.
func (s *Store) LogStats() {
log.Printf("[stats] tx_inserted=%d tx_dupes=%d obs_inserted=%d node_upserts=%d observer_upserts=%d write_errors=%d",
@@ -595,6 +617,8 @@ type PacketData struct {
ObserverName string
SNR *float64
RSSI *float64
Score *float64
Direction *string
Hash string
RouteType int
PayloadType int
@@ -605,10 +629,12 @@ type PacketData struct {
// MQTTPacketMessage is the JSON payload from an MQTT raw packet message.
type MQTTPacketMessage struct {
Raw string `json:"raw"`
SNR *float64 `json:"SNR"`
RSSI *float64 `json:"RSSI"`
Origin string `json:"origin"`
Raw string `json:"raw"`
SNR *float64 `json:"SNR"`
RSSI *float64 `json:"RSSI"`
Score *float64 `json:"score"`
Direction *string `json:"direction"`
Origin string `json:"origin"`
}
// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
@@ -627,6 +653,8 @@ func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID,
ObserverName: msg.Origin,
SNR: msg.SNR,
RSSI: msg.RSSI,
Score: msg.Score,
Direction: msg.Direction,
Hash: ComputeContentHash(msg.Raw),
RouteType: decoded.Header.RouteType,
PayloadType: decoded.Header.PayloadType,

View File

@@ -1457,3 +1457,199 @@ func TestExtractObserverMetaNestedNilSkipsTopLevel(t *testing.T) {
t.Error("nested nil should suppress top-level fallback")
}
}
func TestObsTimestampIndexMigration(t *testing.T) {
// Case 1: new DB — OpenStore should create idx_observations_timestamp as part
// of the observations table schema.
t.Run("NewDB", func(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
var count int
err = s.db.QueryRow(
"SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
).Scan(&count)
if err != nil {
t.Fatal(err)
}
if count != 1 {
t.Error("idx_observations_timestamp should exist on a new DB")
}
var migCount int
err = s.db.QueryRow(
"SELECT COUNT(*) FROM _migrations WHERE name='obs_timestamp_index_v1'",
).Scan(&migCount)
if err != nil {
t.Fatal(err)
}
// On a new DB the index is created inline (not via migration), so the
// migration row may or may not be recorded — just verify the index exists.
_ = migCount
})
// Case 2: existing DB that has the observations table but lacks the index
// and lacks the _migrations entry — simulates an older installation.
t.Run("MigrationPath", func(t *testing.T) {
path := tempDBPath(t)
// Build a bare-bones DB that mimics an old installation:
// observations table exists but idx_observations_timestamp does NOT.
db, err := sql.Open("sqlite", path)
if err != nil {
t.Fatal(err)
}
_, err = db.Exec(`
CREATE TABLE IF NOT EXISTS _migrations (name TEXT PRIMARY KEY);
CREATE TABLE IF NOT EXISTS transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
raw_hex TEXT NOT NULL,
hash TEXT NOT NULL UNIQUE,
first_seen TEXT NOT NULL,
route_type INTEGER,
payload_type INTEGER,
payload_version INTEGER,
decoded_json TEXT,
created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
observer_idx INTEGER,
direction TEXT,
snr REAL,
rssi REAL,
score INTEGER,
path_json TEXT,
timestamp INTEGER NOT NULL
);
`)
if err != nil {
db.Close()
t.Fatal(err)
}
// Confirm the index is absent before OpenStore runs.
var preCount int
db.QueryRow(
"SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
).Scan(&preCount)
db.Close()
if preCount != 0 {
t.Fatalf("pre-condition failed: idx_observations_timestamp should not exist yet, got count=%d", preCount)
}
// Now open via OpenStore — the migration should add the index.
s, err := OpenStore(path)
if err != nil {
t.Fatal(err)
}
defer s.Close()
var idxCount int
err = s.db.QueryRow(
"SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
).Scan(&idxCount)
if err != nil {
t.Fatal(err)
}
if idxCount != 1 {
t.Error("idx_observations_timestamp should exist after migration on old DB")
}
var migCount int
err = s.db.QueryRow(
"SELECT COUNT(*) FROM _migrations WHERE name='obs_timestamp_index_v1'",
).Scan(&migCount)
if err != nil {
t.Fatal(err)
}
if migCount != 1 {
t.Errorf("migration obs_timestamp_index_v1 should be recorded, got count=%d", migCount)
}
})
}
func TestBuildPacketDataScoreAndDirection(t *testing.T) {
rawHex := "0A00D69FD7A5A7475DB07337749AE61FA53A4788E976"
decoded, err := DecodePacket(rawHex, nil)
if err != nil {
t.Fatal(err)
}
score := 42.0
dir := "incoming"
msg := &MQTTPacketMessage{
Raw: rawHex,
Score: &score,
Direction: &dir,
}
pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
if pkt.Score == nil || *pkt.Score != 42.0 {
t.Errorf("Score=%v, want 42.0", pkt.Score)
}
if pkt.Direction == nil || *pkt.Direction != "incoming" {
t.Errorf("Direction=%v, want incoming", pkt.Direction)
}
}
func TestBuildPacketDataNilScoreDirection(t *testing.T) {
decoded, _ := DecodePacket("0A00"+strings.Repeat("00", 10), nil)
msg := &MQTTPacketMessage{Raw: "0A00" + strings.Repeat("00", 10)}
pkt := BuildPacketData(msg, decoded, "", "")
if pkt.Score != nil {
t.Errorf("Score should be nil, got %v", *pkt.Score)
}
if pkt.Direction != nil {
t.Errorf("Direction should be nil, got %v", *pkt.Direction)
}
}
func TestInsertTransmissionWithScoreAndDirection(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
score := 7.5
dir := "outgoing"
data := &PacketData{
RawHex: "AABB",
Timestamp: "2025-01-01T00:00:00Z",
SNR: ptrFloat(5.0),
RSSI: ptrFloat(-90.0),
Score: &score,
Direction: &dir,
Hash: "abc123",
PathJSON: "[]",
}
isNew, err := s.InsertTransmission(data)
if err != nil {
t.Fatal(err)
}
if !isNew {
t.Error("expected new transmission")
}
// Verify the observation was stored with score and direction
var gotDir sql.NullString
var gotScore sql.NullFloat64
err = s.db.QueryRow("SELECT direction, score FROM observations LIMIT 1").Scan(&gotDir, &gotScore)
if err != nil {
t.Fatal(err)
}
if !gotDir.Valid || gotDir.String != "outgoing" {
t.Errorf("direction=%v, want outgoing", gotDir)
}
if !gotScore.Valid || gotScore.Float64 != 7.5 {
t.Errorf("score=%v, want 7.5", gotScore)
}
}
func ptrFloat(f float64) *float64 { return &f }

View File

@@ -14,6 +14,7 @@ import (
"os"
"os/signal"
"path/filepath"
"strconv"
"strings"
"syscall"
"time"
@@ -165,7 +166,7 @@ func main() {
statsTicker.Stop()
store.LogStats() // final stats on shutdown
for _, c := range clients {
c.Disconnect(1000)
c.Disconnect(5000) // 5s to allow in-flight messages to drain
}
log.Println("Done.")
}
@@ -255,6 +256,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
mqttMsg.RSSI = &f
}
}
if v, ok := msg["score"]; ok {
if f, ok := toFloat64(v); ok {
mqttMsg.Score = &f
}
} else if v, ok := msg["Score"]; ok {
if f, ok := toFloat64(v); ok {
mqttMsg.Score = &f
}
}
if v, ok := msg["direction"].(string); ok {
mqttMsg.Direction = &v
} else if v, ok := msg["Direction"].(string); ok {
mqttMsg.Direction = &v
}
if v, ok := msg["origin"].(string); ok {
mqttMsg.Origin = v
}
@@ -351,7 +366,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
h := sha256.Sum256([]byte(hashInput))
hash := hex.EncodeToString(h[:])[:16]
var snr, rssi *float64
var snr, rssi, score *float64
var direction *string
if v, ok := msg["SNR"]; ok {
if f, ok := toFloat64(v); ok {
snr = &f
@@ -370,6 +386,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
rssi = &f
}
}
if v, ok := msg["score"]; ok {
if f, ok := toFloat64(v); ok {
score = &f
}
} else if v, ok := msg["Score"]; ok {
if f, ok := toFloat64(v); ok {
score = &f
}
}
if v, ok := msg["direction"].(string); ok {
direction = &v
} else if v, ok := msg["Direction"].(string); ok {
direction = &v
}
pktData := &PacketData{
Timestamp: now,
@@ -377,6 +407,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
ObserverName: "L1 Pro (BLE)",
SNR: snr,
RSSI: rssi,
Score: score,
Direction: direction,
Hash: hash,
RouteType: 1, // FLOOD
PayloadType: 5, // GRP_TXT
@@ -428,7 +460,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
h := sha256.Sum256([]byte(hashInput))
hash := hex.EncodeToString(h[:])[:16]
var snr, rssi *float64
var snr, rssi, score *float64
var direction *string
if v, ok := msg["SNR"]; ok {
if f, ok := toFloat64(v); ok {
snr = &f
@@ -447,6 +480,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
rssi = &f
}
}
if v, ok := msg["score"]; ok {
if f, ok := toFloat64(v); ok {
score = &f
}
} else if v, ok := msg["Score"]; ok {
if f, ok := toFloat64(v); ok {
score = &f
}
}
if v, ok := msg["direction"].(string); ok {
direction = &v
} else if v, ok := msg["Direction"].(string); ok {
direction = &v
}
pktData := &PacketData{
Timestamp: now,
@@ -454,6 +501,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
ObserverName: "L1 Pro (BLE)",
SNR: snr,
RSSI: rssi,
Score: score,
Direction: direction,
Hash: hash,
RouteType: 1, // FLOOD
PayloadType: 2, // TXT_MSG
@@ -483,11 +532,35 @@ func toFloat64(v interface{}) (float64, bool) {
case json.Number:
f, err := n.Float64()
return f, err == nil
case string:
s := strings.TrimSpace(n)
s = stripUnitSuffix(s)
f, err := strconv.ParseFloat(s, 64)
return f, err == nil
case uint:
return float64(n), true
case uint64:
return float64(n), true
default:
return 0, false
}
}
// unitSuffixes lists common RF/signal unit suffixes to strip before parsing.
// Order matters: longer suffixes ("dBm", "dB", "mW", "km", "mi") must be
// checked before the bare "m" so they are not truncated to a shorter match.
var unitSuffixes = []string{"dBm", "dB", "mW", "km", "mi", "m"}
// stripUnitSuffix removes a trailing unit suffix (case-insensitive) from a
// numeric string so that values like "-110dBm" can be parsed as float64.
func stripUnitSuffix(s string) string {
lower := strings.ToLower(s)
for _, suffix := range unitSuffixes {
if strings.HasSuffix(lower, strings.ToLower(suffix)) {
return strings.TrimSpace(s[:len(s)-len(suffix)])
}
}
return s
}
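Only the new switch arms appear in the hunk above; pieced together with the test table further down, the full converter plausibly reads as follows. The `float64`/`int`/`int64` arms are inferred from the tests, not shown in this diff:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// unitSuffixes lists common RF/signal unit suffixes to strip before parsing.
var unitSuffixes = []string{"dBm", "dB", "mW", "km", "mi", "m"}

// stripUnitSuffix removes a trailing unit suffix (case-insensitive) so that
// values like "-110dBm" can be parsed as float64.
func stripUnitSuffix(s string) string {
	lower := strings.ToLower(s)
	for _, suffix := range unitSuffixes {
		if strings.HasSuffix(lower, strings.ToLower(suffix)) {
			return strings.TrimSpace(s[:len(s)-len(suffix)])
		}
	}
	return s
}

// toFloat64 converts the loosely-typed values found in MQTT JSON payloads.
func toFloat64(v interface{}) (float64, bool) {
	switch n := v.(type) {
	case float64:
		return n, true
	case int:
		return float64(n), true
	case int64:
		return float64(n), true
	case json.Number:
		f, err := n.Float64()
		return f, err == nil
	case string:
		s := strings.TrimSpace(n)
		s = stripUnitSuffix(s)
		f, err := strconv.ParseFloat(s, 64)
		return f, err == nil
	case uint:
		return float64(n), true
	case uint64:
		return float64(n), true
	default:
		return 0, false
	}
}

func main() {
	for _, v := range []interface{}{"-110dBm", " -7.5 ", int64(100), "hello"} {
		f, ok := toFloat64(v)
		fmt.Printf("%v -> %v, %v\n", v, f, ok)
	}
}
```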
// extractObserverMeta extracts hardware metadata from an MQTT status message.
// Casts battery_mv and uptime_secs to integers (they're always whole numbers).
func extractObserverMeta(msg map[string]interface{}) *ObserverMeta {

View File

@@ -22,7 +22,13 @@ func TestToFloat64(t *testing.T) {
{"int64", int64(100), 100.0, true},
{"json.Number valid", json.Number("9.5"), 9.5, true},
{"json.Number invalid", json.Number("not_a_number"), 0, false},
{"string unsupported", "hello", 0, false},
{"string valid", "3.14", 3.14, true},
{"string with spaces", " -7.5 ", -7.5, true},
{"string integer", "42", 42.0, true},
{"string invalid", "hello", 0, false},
{"string empty", "", 0, false},
{"uint", uint(10), 10.0, true},
{"uint64", uint64(999), 999.0, true},
{"bool unsupported", true, 0, false},
{"nil unsupported", nil, 0, false},
{"slice unsupported", []int{1}, 0, false},
@@ -686,3 +692,50 @@ func TestHandleMessageNoSNRRSSI(t *testing.T) {
t.Errorf("rssi should be nil when not present, got %v", *rssi)
}
}
func TestStripUnitSuffix(t *testing.T) {
tests := []struct {
input, want string
}{
{"-110dBm", "-110"},
{"-110DBM", "-110"},
{"5.5dB", "5.5"},
{"100mW", "100"},
{"1.5km", "1.5"},
{"500m", "500"},
{"10mi", "10"},
{"42", "42"},
{"", ""},
{"hello", "hello"},
}
for _, tt := range tests {
got := stripUnitSuffix(tt.input)
if got != tt.want {
t.Errorf("stripUnitSuffix(%q) = %q, want %q", tt.input, got, tt.want)
}
}
}
func TestToFloat64WithUnits(t *testing.T) {
tests := []struct {
input interface{}
want float64
ok bool
}{
{"-110dBm", -110.0, true},
{"5.5dB", 5.5, true},
{"100mW", 100.0, true},
{"-85.3dBm", -85.3, true},
{"42", 42.0, true},
{"not_a_number", 0, false},
}
for _, tt := range tests {
got, ok := toFloat64(tt.input)
if ok != tt.ok {
t.Errorf("toFloat64(%v) ok=%v, want %v", tt.input, ok, tt.ok)
}
if ok && got != tt.want {
t.Errorf("toFloat64(%v) = %v, want %v", tt.input, got, tt.want)
}
}
}

View File

@@ -0,0 +1,171 @@
package main
import (
"testing"
"time"
)
// newTestStore creates a minimal PacketStore for cache invalidation testing.
func newTestStore(t *testing.T) *PacketStore {
t.Helper()
return &PacketStore{
rfCache: make(map[string]*cachedResult),
topoCache: make(map[string]*cachedResult),
hashCache: make(map[string]*cachedResult),
chanCache: make(map[string]*cachedResult),
distCache: make(map[string]*cachedResult),
subpathCache: make(map[string]*cachedResult),
rfCacheTTL: 15 * time.Second,
}
}
// populateAllCaches fills every analytics cache with a dummy entry so tests
// can verify which caches are cleared and which are preserved.
func populateAllCaches(s *PacketStore) {
s.cacheMu.Lock()
defer s.cacheMu.Unlock()
dummy := &cachedResult{data: map[string]interface{}{"test": true}, expiresAt: time.Now().Add(time.Hour)}
s.rfCache["global"] = dummy
s.topoCache["global"] = dummy
s.hashCache["global"] = dummy
s.chanCache["global"] = dummy
s.distCache["global"] = dummy
s.subpathCache["global"] = dummy
}
// cachePopulated returns which caches still have their "global" entry.
func cachePopulated(s *PacketStore) map[string]bool {
s.cacheMu.Lock()
defer s.cacheMu.Unlock()
return map[string]bool{
"rf": len(s.rfCache) > 0,
"topo": len(s.topoCache) > 0,
"hash": len(s.hashCache) > 0,
"chan": len(s.chanCache) > 0,
"dist": len(s.distCache) > 0,
"subpath": len(s.subpathCache) > 0,
}
}
func TestInvalidateCachesFor_Eviction(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{eviction: true})
pop := cachePopulated(s)
for name, has := range pop {
if has {
t.Errorf("eviction should clear %s cache", name)
}
}
}
func TestInvalidateCachesFor_NewObservationsOnly(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{hasNewObservations: true})
pop := cachePopulated(s)
if pop["rf"] {
t.Error("rf cache should be cleared on new observations")
}
// These should be preserved
for _, name := range []string{"topo", "hash", "chan", "dist", "subpath"} {
if !pop[name] {
t.Errorf("%s cache should NOT be cleared on observation-only ingest", name)
}
}
}
func TestInvalidateCachesFor_NewTransmissionsOnly(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{hasNewTransmissions: true})
pop := cachePopulated(s)
if pop["hash"] {
t.Error("hash cache should be cleared on new transmissions")
}
for _, name := range []string{"rf", "topo", "chan", "dist", "subpath"} {
if !pop[name] {
t.Errorf("%s cache should NOT be cleared on transmission-only ingest", name)
}
}
}
func TestInvalidateCachesFor_ChannelDataOnly(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{hasChannelData: true})
pop := cachePopulated(s)
if pop["chan"] {
t.Error("chan cache should be cleared on channel data")
}
for _, name := range []string{"rf", "topo", "hash", "dist", "subpath"} {
if !pop[name] {
t.Errorf("%s cache should NOT be cleared on channel-data-only ingest", name)
}
}
}
func TestInvalidateCachesFor_NewPaths(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{hasNewPaths: true})
pop := cachePopulated(s)
for _, name := range []string{"topo", "dist", "subpath"} {
if pop[name] {
t.Errorf("%s cache should be cleared on new paths", name)
}
}
for _, name := range []string{"rf", "hash", "chan"} {
if !pop[name] {
t.Errorf("%s cache should NOT be cleared on path-only ingest", name)
}
}
}
func TestInvalidateCachesFor_CombinedFlags(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
// Simulate a typical ingest: new transmissions with observations but no GRP_TXT
s.invalidateCachesFor(cacheInvalidation{
hasNewObservations: true,
hasNewTransmissions: true,
hasNewPaths: true,
})
pop := cachePopulated(s)
// rf, topo, hash, dist, subpath should all be cleared
for _, name := range []string{"rf", "topo", "hash", "dist", "subpath"} {
if pop[name] {
t.Errorf("%s cache should be cleared with combined flags", name)
}
}
// chan should be preserved (no GRP_TXT)
if !pop["chan"] {
t.Error("chan cache should NOT be cleared without hasChannelData flag")
}
}
func TestInvalidateCachesFor_NoFlags(t *testing.T) {
s := newTestStore(t)
populateAllCaches(s)
s.invalidateCachesFor(cacheInvalidation{})
pop := cachePopulated(s)
for name, has := range pop {
if !has {
t.Errorf("%s cache should be preserved when no flags are set", name)
}
}
}
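The `invalidateCachesFor` implementation is not part of this diff; a minimal sketch consistent with the behavior these tests pin down is below. The struct and field names come from the tests; the method body itself is an assumption:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedResult and cacheInvalidation mirror only the fields the tests touch.
type cachedResult struct {
	data      map[string]interface{}
	expiresAt time.Time
}

type cacheInvalidation struct {
	eviction            bool
	hasNewObservations  bool
	hasNewTransmissions bool
	hasChannelData      bool
	hasNewPaths         bool
}

type PacketStore struct {
	cacheMu      sync.Mutex
	rfCache      map[string]*cachedResult
	topoCache    map[string]*cachedResult
	hashCache    map[string]*cachedResult
	chanCache    map[string]*cachedResult
	distCache    map[string]*cachedResult
	subpathCache map[string]*cachedResult
}

// invalidateCachesFor clears only the caches whose inputs changed:
// observations feed RF stats, transmissions feed hash stats, channel data
// feeds channels, and paths feed topology/distance/subpath analytics.
func (s *PacketStore) invalidateCachesFor(inv cacheInvalidation) {
	s.cacheMu.Lock()
	defer s.cacheMu.Unlock()
	if inv.eviction {
		// Rows were deleted: every derived result may now be stale.
		inv = cacheInvalidation{
			hasNewObservations: true, hasNewTransmissions: true,
			hasChannelData: true, hasNewPaths: true,
		}
	}
	if inv.hasNewObservations {
		s.rfCache = make(map[string]*cachedResult)
	}
	if inv.hasNewTransmissions {
		s.hashCache = make(map[string]*cachedResult)
	}
	if inv.hasChannelData {
		s.chanCache = make(map[string]*cachedResult)
	}
	if inv.hasNewPaths {
		s.topoCache = make(map[string]*cachedResult)
		s.distCache = make(map[string]*cachedResult)
		s.subpathCache = make(map[string]*cachedResult)
	}
}

func main() {
	s := &PacketStore{
		rfCache:      map[string]*cachedResult{"global": {}},
		topoCache:    map[string]*cachedResult{"global": {}},
		hashCache:    map[string]*cachedResult{"global": {}},
		chanCache:    map[string]*cachedResult{"global": {}},
		distCache:    map[string]*cachedResult{"global": {}},
		subpathCache: map[string]*cachedResult{"global": {}},
	}
	s.invalidateCachesFor(cacheInvalidation{hasNewPaths: true})
	fmt.Println(len(s.topoCache), len(s.rfCache)) // prints "0 1"
}
```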

View File

@@ -4,6 +4,7 @@ import (
"database/sql"
"encoding/json"
"fmt"
"log"
"math"
"os"
"strings"
@@ -38,6 +39,12 @@ func OpenDB(path string) (*DB, error) {
}
func (db *DB) Close() error {
// Checkpoint WAL before closing to release lock cleanly for new processes
if _, err := db.conn.Exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
log.Printf("[db] WAL checkpoint error: %v", err)
} else {
log.Println("[db] WAL checkpoint complete")
}
return db.conn.Close()
}

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"database/sql"
"flag"
"fmt"
@@ -10,6 +11,7 @@ import (
"os"
"os/exec"
"os/signal"
"path/filepath"
"strings"
"sync"
"syscall"
@@ -113,7 +115,13 @@ func main() {
if err != nil {
log.Fatalf("[db] failed to open %s: %v", resolvedDB, err)
}
defer database.Close()
var dbCloseOnce sync.Once
dbClose := func() error {
var err error
dbCloseOnce.Do(func() { err = database.Close() })
return err
}
defer dbClose()
// Verify DB has expected tables
var tableName string
@@ -204,10 +212,27 @@ func main() {
go func() {
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
<-sigCh
log.Println("[server] shutting down...")
sig := <-sigCh
log.Printf("[server] received %v, shutting down...", sig)
// 1. Stop polling for new data
poller.Stop()
// 2. Gracefully drain HTTP connections (up to 15s). Shutdown also stops
// accepting new connections, so no separate Close() is needed (calling
// Close() first would make Shutdown return ErrServerClosed immediately).
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
if err := httpServer.Shutdown(ctx); err != nil {
log.Printf("[server] HTTP shutdown error: %v", err)
}
// 3. Close WebSocket hub
hub.Close()
// 4. Close database (release SQLite WAL lock)
if err := dbClose(); err != nil {
log.Printf("[server] DB close error: %v", err)
}
log.Println("[server] shutdown complete")
}()
log.Printf("[server] CoreScope (Go) listening on http://localhost:%d", cfg.Port)

View File

@@ -136,6 +136,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/analytics/channels", s.handleAnalyticsChannels).Methods("GET")
r.HandleFunc("/api/analytics/distance", s.handleAnalyticsDistance).Methods("GET")
r.HandleFunc("/api/analytics/hash-sizes", s.handleAnalyticsHashSizes).Methods("GET")
r.HandleFunc("/api/analytics/hash-collisions", s.handleAnalyticsHashCollisions).Methods("GET")
r.HandleFunc("/api/analytics/subpaths", s.handleAnalyticsSubpaths).Methods("GET")
r.HandleFunc("/api/analytics/subpath-detail", s.handleAnalyticsSubpathDetail).Methods("GET")
@@ -1201,6 +1202,17 @@ func (s *Server) handleAnalyticsHashSizes(w http.ResponseWriter, r *http.Request
})
}
func (s *Server) handleAnalyticsHashCollisions(w http.ResponseWriter, r *http.Request) {
if s.store != nil {
writeJSON(w, s.store.GetAnalyticsHashCollisions())
return
}
writeJSON(w, map[string]interface{}{
"inconsistent_nodes": []interface{}{},
"by_size": map[string]interface{}{},
})
}
func (s *Server) handleAnalyticsSubpaths(w http.ResponseWriter, r *http.Request) {
if s.store != nil {
region := r.URL.Query().Get("region")

View File

@@ -2402,3 +2402,628 @@ func min(a, b int) int {
}
return b
}
// TestLatestSeenMaintained verifies that StoreTx.LatestSeen is populated after Load()
// and is >= FirstSeen for packets that have observations.
func TestLatestSeenMaintained(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
store.mu.RLock()
defer store.mu.RUnlock()
if len(store.packets) == 0 {
t.Fatal("expected packets in store after Load")
}
for _, tx := range store.packets {
if tx.LatestSeen == "" {
t.Errorf("packet %s has empty LatestSeen (FirstSeen=%s)", tx.Hash, tx.FirstSeen)
continue
}
// LatestSeen must be >= FirstSeen (string comparison works for RFC3339/ISO8601)
if tx.LatestSeen < tx.FirstSeen {
t.Errorf("packet %s: LatestSeen %q < FirstSeen %q", tx.Hash, tx.LatestSeen, tx.FirstSeen)
}
// For packets with observations, LatestSeen must be >= all observation timestamps.
for _, obs := range tx.Observations {
if obs.Timestamp != "" && obs.Timestamp > tx.LatestSeen {
t.Errorf("packet %s: obs.Timestamp %q > LatestSeen %q", tx.Hash, obs.Timestamp, tx.LatestSeen)
}
}
}
}
// TestQueryGroupedPacketsSortedByLatest verifies that QueryGroupedPackets returns packets
// sorted by LatestSeen DESC — i.e. the packet whose most-recent observation is newest
// comes first, even if its first_seen is older.
func TestQueryGroupedPacketsSortedByLatest(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
now := time.Now().UTC()
// oldFirst: first_seen is old, but observation is very recent.
oldFirst := now.Add(-48 * time.Hour).Format(time.RFC3339)
// newFirst: first_seen is recent, but observation is old.
newFirst := now.Add(-1 * time.Hour).Format(time.RFC3339)
recentEpoch := now.Add(-5 * time.Minute).Unix()
oldEpoch := now.Add(-72 * time.Hour).Unix()
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
VALUES ('sortobs', 'Sort Observer', 'TST', ?, '2026-01-01T00:00:00Z', 1)`, now.Format(time.RFC3339))
// Packet A: old first_seen, but a very recent observation — should sort first.
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('AA01', 'sort_old_first_recent_obs', ?, 1, 2, '{"type":"TXT_MSG","text":"old first"}')`, oldFirst)
var idA int64
db.conn.QueryRow(`SELECT id FROM transmissions WHERE hash='sort_old_first_recent_obs'`).Scan(&idA)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (?, 1, 10.0, -90, '[]', ?)`, idA, recentEpoch)
// Packet B: newer first_seen, but an old observation — should sort second.
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('BB02', 'sort_new_first_old_obs', ?, 1, 2, '{"type":"TXT_MSG","text":"new first"}')`, newFirst)
var idB int64
db.conn.QueryRow(`SELECT id FROM transmissions WHERE hash='sort_new_first_old_obs'`).Scan(&idB)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (?, 1, 10.0, -90, '[]', ?)`, idB, oldEpoch)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
result := store.QueryGroupedPackets(PacketQuery{Limit: 50})
if result.Total < 2 {
t.Fatalf("expected at least 2 packets, got %d", result.Total)
}
// Find the two test packets in the result (may be mixed with other entries).
firstHash := ""
secondHash := ""
for _, p := range result.Packets {
h, _ := p["hash"].(string)
if h == "sort_old_first_recent_obs" || h == "sort_new_first_old_obs" {
if firstHash == "" {
firstHash = h
} else {
secondHash = h
break
}
}
}
if firstHash != "sort_old_first_recent_obs" {
t.Errorf("expected sort_old_first_recent_obs to appear before sort_new_first_old_obs in sorted results; got first=%q second=%q", firstHash, secondHash)
}
}
// TestQueryGroupedPacketsCacheReturnsConsistentResult verifies that two rapid successive
// calls to QueryGroupedPackets return the same total count and first packet hash.
func TestQueryGroupedPacketsCacheReturnsConsistentResult(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
q := PacketQuery{Limit: 50}
r1 := store.QueryGroupedPackets(q)
r2 := store.QueryGroupedPackets(q)
if r1.Total != r2.Total {
t.Errorf("cache inconsistency: first call total=%d, second call total=%d", r1.Total, r2.Total)
}
if r1.Total == 0 {
t.Fatal("expected non-zero results from QueryGroupedPackets")
}
h1, _ := r1.Packets[0]["hash"].(string)
h2, _ := r2.Packets[0]["hash"].(string)
if h1 != h2 {
t.Errorf("cache inconsistency: first call first hash=%q, second call first hash=%q", h1, h2)
}
}
// TestGetChannelsCacheReturnsConsistentResult verifies that two rapid successive calls
// to GetChannels return the same number of channels with the same names.
func TestGetChannelsCacheReturnsConsistentResult(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
r1 := store.GetChannels("")
r2 := store.GetChannels("")
if len(r1) != len(r2) {
t.Errorf("cache inconsistency: first call len=%d, second call len=%d", len(r1), len(r2))
}
if len(r1) == 0 {
t.Fatal("expected at least one channel from seedTestData")
}
names1 := make(map[string]bool)
for _, ch := range r1 {
if n, ok := ch["name"].(string); ok {
names1[n] = true
}
}
for _, ch := range r2 {
if n, ok := ch["name"].(string); ok {
if !names1[n] {
t.Errorf("cache inconsistency: channel %q in second result but not first", n)
}
}
}
}
// TestGetChannelsNotBlockedByLargeLock verifies that GetChannels returns correct channel
// data (count and messageCount) after observations have been added — i.e. the lock-copy
// pattern works correctly and the JSON unmarshal outside the lock produces valid results.
func TestGetChannelsNotBlockedByLargeLock(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
channels := store.GetChannels("")
// seedTestData inserts one GRP_TXT (payload_type=5) packet with channel "#test".
if len(channels) != 1 {
t.Fatalf("expected 1 channel, got %d", len(channels))
}
ch := channels[0]
name, ok := ch["name"].(string)
if !ok || name != "#test" {
t.Errorf("expected channel name '#test', got %v", ch["name"])
}
// messageCount should be 1 (one CHAN packet for #test).
msgCount, ok := ch["messageCount"].(int)
if !ok {
// JSON numbers may unmarshal as float64 — but GetChannels returns native Go values.
t.Errorf("expected messageCount to be int, got %T (%v)", ch["messageCount"], ch["messageCount"])
} else if msgCount != 1 {
t.Errorf("expected messageCount=1, got %d", msgCount)
}
}
// --- Tests for computeHashCollisions (Issue #416) ---
func TestAnalyticsHashCollisionsEndpoint(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d", w.Code)
}
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("invalid JSON: %v", err)
}
// Must have top-level keys
if _, ok := body["inconsistent_nodes"]; !ok {
t.Error("missing inconsistent_nodes key")
}
if _, ok := body["by_size"]; !ok {
t.Error("missing by_size key")
}
bySize, ok := body["by_size"].(map[string]interface{})
if !ok {
t.Fatal("by_size is not a map")
}
// Must have entries for 1, 2, 3 byte sizes
for _, sz := range []string{"1", "2", "3"} {
sizeData, ok := bySize[sz].(map[string]interface{})
if !ok {
t.Errorf("by_size[%s] is not a map", sz)
continue
}
stats, ok := sizeData["stats"].(map[string]interface{})
if !ok {
t.Errorf("by_size[%s].stats is not a map", sz)
continue
}
if _, ok := stats["total_nodes"]; !ok {
t.Errorf("by_size[%s].stats missing total_nodes", sz)
}
if _, ok := stats["collision_count"]; !ok {
t.Errorf("by_size[%s].stats missing collision_count", sz)
}
// collisions must be an array, not null
if _, ok := sizeData["collisions"].([]interface{}); !ok {
t.Errorf("by_size[%s].collisions is not an array", sz)
}
}
}
func TestHashCollisionsNoNullArrays(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
// JSON must not contain "null" for arrays
bodyStr := w.Body.String()
if bodyStr == "" {
t.Fatal("empty response body")
}
// inconsistent_nodes should be [] not null
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
if body["inconsistent_nodes"] == nil {
t.Error("inconsistent_nodes is null, should be empty array")
}
}
func TestHashCollisionsRegionParamIgnored(t *testing.T) {
// Issue #417: the region param was silently accepted but never used for the
// computation, while still producing distinct per-region cache entries.
// After the fix the endpoint ignores region entirely, so responses with and
// without it must be identical.
_, router := setupTestServer(t)
// Request without region
req1 := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w1 := httptest.NewRecorder()
router.ServeHTTP(w1, req1)
if w1.Code != 200 {
t.Fatalf("expected 200, got %d", w1.Code)
}
// Request with region param (should be ignored, same result)
req2 := httptest.NewRequest("GET", "/api/analytics/hash-collisions?region=us-west", nil)
w2 := httptest.NewRecorder()
router.ServeHTTP(w2, req2)
if w2.Code != 200 {
t.Fatalf("expected 200, got %d", w2.Code)
}
// Both should return identical results
if w1.Body.String() != w2.Body.String() {
t.Error("responses differ with/without region param — region should be ignored")
}
}
func TestHashCollisionsOneByteCells(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
bySize := body["by_size"].(map[string]interface{})
oneByteData := bySize["1"].(map[string]interface{})
// 1-byte data should include one_byte_cells for matrix rendering
cells, ok := oneByteData["one_byte_cells"].(map[string]interface{})
if !ok {
t.Fatal("1-byte data missing one_byte_cells")
}
// Should have 256 entries (00-FF)
if len(cells) != 256 {
t.Errorf("expected 256 one_byte_cells entries, got %d", len(cells))
}
}
func TestHashCollisionsTwoByteCells(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
bySize := body["by_size"].(map[string]interface{})
twoByteData := bySize["2"].(map[string]interface{})
// 2-byte data should include two_byte_cells for matrix rendering
cells, ok := twoByteData["two_byte_cells"].(map[string]interface{})
if !ok {
t.Fatal("2-byte data missing two_byte_cells")
}
// Should have 256 entries (00-FF first-byte groups)
if len(cells) != 256 {
t.Errorf("expected 256 two_byte_cells entries, got %d", len(cells))
}
}
func TestHashCollisionsThreeByteNoMatrix(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
bySize := body["by_size"].(map[string]interface{})
threeByteData := bySize["3"].(map[string]interface{})
// 3-byte data should NOT have one_byte_cells or two_byte_cells
if _, ok := threeByteData["one_byte_cells"]; ok {
t.Error("3-byte data should not have one_byte_cells")
}
if _, ok := threeByteData["two_byte_cells"]; ok {
t.Error("3-byte data should not have two_byte_cells")
}
}
func TestHashCollisionsClassification(t *testing.T) {
// Test with seed data — nodes have coordinates, so distance classification should work
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
bySize := body["by_size"].(map[string]interface{})
// Check that collision entries have required fields
for _, sz := range []string{"1", "2", "3"} {
sizeData := bySize[sz].(map[string]interface{})
collisions := sizeData["collisions"].([]interface{})
for i, c := range collisions {
entry := c.(map[string]interface{})
if _, ok := entry["prefix"]; !ok {
t.Errorf("by_size[%s].collisions[%d] missing prefix", sz, i)
}
if _, ok := entry["classification"]; !ok {
t.Errorf("by_size[%s].collisions[%d] missing classification", sz, i)
}
class := entry["classification"].(string)
validClasses := map[string]bool{"local": true, "regional": true, "distant": true, "incomplete": true, "unknown": true}
if !validClasses[class] {
t.Errorf("by_size[%s].collisions[%d] invalid classification: %s", sz, i, class)
}
nodes, ok := entry["nodes"].([]interface{})
if !ok {
t.Errorf("by_size[%s].collisions[%d] missing nodes array", sz, i)
}
if len(nodes) < 2 {
t.Errorf("by_size[%s].collisions[%d] has %d nodes, expected >=2", sz, i, len(nodes))
}
}
}
}
func TestHashCollisionsCacheTTL(t *testing.T) {
// Issue #420: collision cache should use dedicated TTL (60s), not rfCacheTTL (15s)
db := setupTestDB(t)
seedTestData(t, db)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
if store.collisionCacheTTL != 60*time.Second {
t.Errorf("expected collisionCacheTTL=60s, got %v", store.collisionCacheTTL)
}
if store.rfCacheTTL != 15*time.Second {
t.Errorf("expected rfCacheTTL=15s, got %v", store.rfCacheTTL)
}
}
func TestHashCollisionsStatsFields(t *testing.T) {
_, router := setupTestServer(t)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
bySize := body["by_size"].(map[string]interface{})
for _, sz := range []string{"1", "2", "3"} {
sizeData := bySize[sz].(map[string]interface{})
stats := sizeData["stats"].(map[string]interface{})
requiredFields := []string{"total_nodes", "nodes_for_byte", "using_this_size", "unique_prefixes", "collision_count", "space_size", "pct_used"}
for _, f := range requiredFields {
if _, ok := stats[f]; !ok {
t.Errorf("by_size[%s].stats missing field: %s", sz, f)
}
}
}
}
func TestHashCollisionsEmptyStore(t *testing.T) {
// Test with no nodes seeded
db := setupTestDB(t)
// Don't call seedTestData — empty store
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200, got %d", w.Code)
}
var body map[string]interface{}
json.Unmarshal(w.Body.Bytes(), &body)
// With no nodes, inconsistent_nodes should be empty array
incon := body["inconsistent_nodes"].([]interface{})
if len(incon) != 0 {
t.Errorf("expected 0 inconsistent nodes, got %d", len(incon))
}
// All collision lists should be empty
bySize := body["by_size"].(map[string]interface{})
for _, sz := range []string{"1", "2", "3"} {
sizeData := bySize[sz].(map[string]interface{})
collisions := sizeData["collisions"].([]interface{})
if len(collisions) != 0 {
t.Errorf("by_size[%s] expected 0 collisions with empty store, got %d", sz, len(collisions))
}
}
}
func TestHashCollisionsWithCollision(t *testing.T) {
// Seed two nodes with the same 1-byte prefix to verify collision detection
db := setupTestDB(t)
// Don't use seedTestData — create minimal data to control hash sizes
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
// Two nodes with same first byte 'CC', no adverts so hash_size=0 (included in all buckets)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES ('CC11223344556677', 'Node1', 'repeater', 37.5, -122.0, ?, '2026-01-01T00:00:00Z', 0)`, recent)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES ('CC99887766554433', 'Node2', 'repeater', 37.51, -122.01, ?, '2026-01-01T00:00:00Z', 0)`, recent)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("failed to unmarshal response body: %v", err)
}
bySize := body["by_size"].(map[string]interface{})
oneByteData := bySize["1"].(map[string]interface{})
stats := oneByteData["stats"].(map[string]interface{})
collisionCount := int(stats["collision_count"].(float64))
if collisionCount < 1 {
t.Errorf("expected at least 1 collision (CC prefix), got %d", collisionCount)
}
// Check the collision entry
collisions := oneByteData["collisions"].([]interface{})
found := false
for _, c := range collisions {
entry := c.(map[string]interface{})
if entry["prefix"] == "CC" {
found = true
nodes := entry["nodes"].([]interface{})
if len(nodes) < 2 {
t.Errorf("expected >=2 nodes for CC collision, got %d", len(nodes))
}
// Both nodes have coords close together, so classification should be "local"
class := entry["classification"].(string)
if class != "local" {
t.Errorf("expected 'local' classification for nearby nodes, got %s", class)
}
}
}
if !found {
t.Error("expected collision entry with prefix 'CC'")
}
}
func TestHashCollisionsShortPublicKey(t *testing.T) {
// Nodes with very short public keys should not crash
db := setupTestDB(t)
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES ('A', 'ShortKey', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 200 {
t.Fatalf("expected 200 even with short public key, got %d", w.Code)
}
}
func TestHashCollisionsMissingCoordinates(t *testing.T) {
// Nodes without coordinates (stored as lat/lon 0,0) should get "incomplete" classification
db := setupTestDB(t)
now := time.Now().UTC()
recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
// Two nodes same prefix, no coordinates
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES ('BB11223344556677', 'NoCoords1', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
VALUES ('BB99887766554433', 'NoCoords2', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db, nil)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
var body map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
t.Fatalf("failed to unmarshal response body: %v", err)
}
bySize := body["by_size"].(map[string]interface{})
oneByteData := bySize["1"].(map[string]interface{})
collisions := oneByteData["collisions"].([]interface{})
for _, c := range collisions {
entry := c.(map[string]interface{})
if entry["prefix"] == "BB" {
class := entry["classification"].(string)
if class != "incomplete" {
t.Errorf("expected 'incomplete' for nodes without coords, got %s", class)
}
}
}
}
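The "local" and "incomplete" expectations in these tests follow the distance thresholds in `computeHashCollisions` further down this diff. A minimal standalone sketch of that decision logic:

```go
package main

import "fmt"

// classify mirrors the thresholds in computeHashCollisions: fewer than
// two nodes with coordinates → "incomplete"; otherwise max pairwise
// distance <50 km → "local", <200 km → "regional", else "distant".
func classify(withCoords int, maxDistKm float64) string {
	if withCoords < 2 {
		return "incomplete"
	}
	switch {
	case maxDistKm < 50:
		return "local"
	case maxDistKm < 200:
		return "regional"
	default:
		return "distant"
	}
}

func main() {
	fmt.Println(classify(2, 1.2)) // local
	fmt.Println(classify(2, 120)) // regional
	fmt.Println(classify(2, 500)) // distant
	fmt.Println(classify(1, 0))   // incomplete
}
```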

View File

@@ -39,6 +39,7 @@ type StoreTx struct {
RSSI *float64
PathJSON string
Direction string
LatestSeen string // max observation timestamp (or FirstSeen if no observations)
// Cached parsed fields (set once, read many)
parsedPath []string // cached parsePathJSON result
pathParsed bool // whether parsedPath has been set
@@ -78,13 +79,25 @@ type PacketStore struct {
cacheMu sync.Mutex
rfCache map[string]*cachedResult // region → cached RF result
topoCache map[string]*cachedResult // region → cached topology result
hashCache map[string]*cachedResult // region → cached hash-sizes result
hashCache map[string]*cachedResult // region → cached hash-sizes result
collisionCache *cachedResult // cached hash-collisions result (no region filtering)
chanCache map[string]*cachedResult // region → cached channels result
distCache map[string]*cachedResult // region → cached distance result
subpathCache map[string]*cachedResult // params → cached subpaths result
rfCacheTTL time.Duration
rfCacheTTL time.Duration
collisionCacheTTL time.Duration
cacheHits int64
cacheMisses int64
// Short-lived cache for QueryGroupedPackets (avoids repeated full sort)
groupedCacheMu sync.Mutex
groupedCacheKey string
groupedCacheExp time.Time
groupedCacheRes *PacketResult
// Short-lived cache for GetChannels (avoids repeated full scan + JSON unmarshal)
channelsCacheMu sync.Mutex
channelsCacheKey string
channelsCacheExp time.Time
channelsCacheRes []map[string]interface{}
// Cached node list + prefix map (rebuilt on demand, shared across analytics)
nodeCache []nodeInfo
nodePM *prefixMap
@@ -118,7 +131,7 @@ type distHopRecord struct {
ToPk string
Dist float64
Type string // "R↔R", "C↔R", "C↔C"
SNR interface{}
SNR *float64
Hash string
Timestamp string
HourBucket string
@@ -161,11 +174,13 @@ func NewPacketStore(db *DB, cfg *PacketStoreConfig) *PacketStore {
byPayloadType: make(map[int][]*StoreTx),
rfCache: make(map[string]*cachedResult),
topoCache: make(map[string]*cachedResult),
hashCache: make(map[string]*cachedResult),
hashCache: make(map[string]*cachedResult),
chanCache: make(map[string]*cachedResult),
distCache: make(map[string]*cachedResult),
subpathCache: make(map[string]*cachedResult),
rfCacheTTL: 15 * time.Second,
rfCacheTTL: 15 * time.Second,
collisionCacheTTL: 60 * time.Second,
spIndex: make(map[string]int, 4096),
}
if cfg != nil {
@@ -233,6 +248,7 @@ func (s *PacketStore) Load() error {
RawHex: nullStrVal(rawHex),
Hash: hashStr,
FirstSeen: nullStrVal(firstSeen),
LatestSeen: nullStrVal(firstSeen),
RouteType: nullIntPtr(routeType),
PayloadType: nullIntPtr(payloadType),
DecodedJSON: nullStrVal(decodedJSON),
@@ -279,6 +295,9 @@ func (s *PacketStore) Load() error {
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount++
if obs.Timestamp > tx.LatestSeen {
tx.LatestSeen = obs.Timestamp
}
s.byObsID[oid] = obs
@@ -416,47 +435,40 @@ func (s *PacketStore) QueryPackets(q PacketQuery) *PacketResult {
// QueryGroupedPackets returns transmissions grouped by hash (already 1:1).
func (s *PacketStore) QueryGroupedPackets(q PacketQuery) *PacketResult {
atomic.AddInt64(&s.queryCount, 1)
s.mu.RLock()
defer s.mu.RUnlock()
if q.Limit <= 0 {
q.Limit = 50
}
results := s.filterPackets(q)
// Cache key covers all filter dimensions. Empty key = no filters.
cacheKey := q.Since + "|" + q.Until + "|" + q.Region + "|" + q.Node + "|" + q.Hash + "|" + q.Observer
if q.Type != nil {
cacheKey += fmt.Sprintf("|t%d", *q.Type)
}
if q.Route != nil {
cacheKey += fmt.Sprintf("|r%d", *q.Route)
}
// Build grouped output sorted by latest observation DESC
// Return cached sorted list if still fresh (3s TTL)
s.groupedCacheMu.Lock()
if s.groupedCacheRes != nil && s.groupedCacheKey == cacheKey && time.Now().Before(s.groupedCacheExp) {
cached := s.groupedCacheRes
s.groupedCacheMu.Unlock()
return pagePacketResult(cached, q.Offset, q.Limit)
}
s.groupedCacheMu.Unlock()
// Build entries under read lock (observer scan needs lock), sort outside it.
type groupEntry struct {
tx *StoreTx
latest string
latest map[string]interface{}
ts string
}
entries := make([]groupEntry, len(results))
for i, tx := range results {
latest := tx.FirstSeen
for _, obs := range tx.Observations {
if obs.Timestamp > latest {
latest = obs.Timestamp
}
}
entries[i] = groupEntry{tx: tx, latest: latest}
}
sort.Slice(entries, func(i, j int) bool {
return entries[i].latest > entries[j].latest
})
var entries []groupEntry
total := len(entries)
start := q.Offset
if start >= total {
return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
}
end := start + q.Limit
if end > total {
end = total
}
packets := make([]map[string]interface{}, 0, end-start)
for _, e := range entries[start:end] {
tx := e.tx
s.mu.RLock()
results := s.filterPackets(q)
entries = make([]groupEntry, 0, len(results))
for _, tx := range results {
observerCount := 0
seen := make(map[string]bool)
for _, obs := range tx.Observations {
@@ -465,26 +477,61 @@ func (s *PacketStore) QueryGroupedPackets(q PacketQuery) *PacketResult {
observerCount++
}
}
packets = append(packets, map[string]interface{}{
"hash": strOrNil(tx.Hash),
"first_seen": strOrNil(tx.FirstSeen),
"count": tx.ObservationCount,
"observer_count": observerCount,
"observation_count": tx.ObservationCount,
"latest": strOrNil(e.latest),
"observer_id": strOrNil(tx.ObserverID),
"observer_name": strOrNil(tx.ObserverName),
"path_json": strOrNil(tx.PathJSON),
"payload_type": intPtrOrNil(tx.PayloadType),
"route_type": intPtrOrNil(tx.RouteType),
"raw_hex": strOrNil(tx.RawHex),
"decoded_json": strOrNil(tx.DecodedJSON),
"snr": floatPtrOrNil(tx.SNR),
"rssi": floatPtrOrNil(tx.RSSI),
entries = append(entries, groupEntry{
ts: tx.LatestSeen,
latest: map[string]interface{}{
"hash": strOrNil(tx.Hash),
"first_seen": strOrNil(tx.FirstSeen),
"count": tx.ObservationCount,
"observer_count": observerCount,
"observation_count": tx.ObservationCount,
"latest": strOrNil(tx.LatestSeen),
"observer_id": strOrNil(tx.ObserverID),
"observer_name": strOrNil(tx.ObserverName),
"path_json": strOrNil(tx.PathJSON),
"payload_type": intPtrOrNil(tx.PayloadType),
"route_type": intPtrOrNil(tx.RouteType),
"raw_hex": strOrNil(tx.RawHex),
"decoded_json": strOrNil(tx.DecodedJSON),
"snr": floatPtrOrNil(tx.SNR),
"rssi": floatPtrOrNil(tx.RSSI),
},
})
}
s.mu.RUnlock()
return &PacketResult{Packets: packets, Total: total}
// Sort outside the lock — only touches our local slice.
sort.Slice(entries, func(i, j int) bool {
return entries[i].ts > entries[j].ts
})
packets := make([]map[string]interface{}, len(entries))
for i, e := range entries {
packets[i] = e.latest
}
full := &PacketResult{Packets: packets, Total: len(packets)}
s.groupedCacheMu.Lock()
s.groupedCacheRes = full
s.groupedCacheKey = cacheKey
s.groupedCacheExp = time.Now().Add(3 * time.Second)
s.groupedCacheMu.Unlock()
return pagePacketResult(full, q.Offset, q.Limit)
}
// pagePacketResult returns a window of a PacketResult without re-allocating the slice.
func pagePacketResult(r *PacketResult, offset, limit int) *PacketResult {
total := r.Total
if offset >= total {
return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
}
end := offset + limit
if end > total {
end = total
}
return &PacketResult{Packets: r.Packets[offset:end], Total: total}
}
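The pagination helper can be exercised in isolation; this sketch reproduces it with a stub `PacketResult` to show the windowing semantics (the window is clamped, and an out-of-range offset yields an empty page while preserving `Total`):

```go
package main

import "fmt"

type PacketResult struct {
	Packets []map[string]interface{}
	Total   int
}

// pagePacketResult, as in the diff above: re-slices the cached result
// instead of copying, and clamps the window to the available range.
func pagePacketResult(r *PacketResult, offset, limit int) *PacketResult {
	total := r.Total
	if offset >= total {
		return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
	}
	end := offset + limit
	if end > total {
		end = total
	}
	return &PacketResult{Packets: r.Packets[offset:end], Total: total}
}

func main() {
	full := &PacketResult{Packets: make([]map[string]interface{}, 5), Total: 5}
	page := pagePacketResult(full, 3, 50)
	fmt.Println(len(page.Packets), page.Total) // 2 5 (window clamped to the end)
	empty := pagePacketResult(full, 10, 50)
	fmt.Println(len(empty.Packets), empty.Total) // 0 5 (offset past the end)
}
```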
// GetStoreStats returns aggregate counts (packet data from memory, node/observer from DB).
@@ -626,6 +673,60 @@ func (s *PacketStore) GetCacheStatsTyped() CacheStats {
}
}
// cacheInvalidation flags indicate what kind of data changed during ingestion.
// Used by invalidateCachesFor to selectively clear only affected caches.
type cacheInvalidation struct {
hasNewObservations bool // new SNR/RSSI data → rfCache
hasNewPaths bool // new/changed path data → topoCache, distCache, subpathCache
hasNewTransmissions bool // new transmissions → hashCache
hasChannelData bool // new GRP_TXT (payload_type 5) → chanCache
eviction bool // data removed → all caches
}
// invalidateCachesFor selectively clears only the analytics caches affected
// by the kind of data that changed. This avoids the previous behaviour of
// wiping every cache on every ingest cycle, which defeated caching under
// continuous ingestion (issue #375).
func (s *PacketStore) invalidateCachesFor(inv cacheInvalidation) {
s.cacheMu.Lock()
defer s.cacheMu.Unlock()
if inv.eviction {
// Eviction can affect any analytics — clear everything
s.rfCache = make(map[string]*cachedResult)
s.topoCache = make(map[string]*cachedResult)
s.hashCache = make(map[string]*cachedResult)
s.collisionCache = nil
s.chanCache = make(map[string]*cachedResult)
s.distCache = make(map[string]*cachedResult)
s.subpathCache = make(map[string]*cachedResult)
s.channelsCacheMu.Lock()
s.channelsCacheRes = nil
s.channelsCacheMu.Unlock()
return
}
if inv.hasNewObservations {
s.rfCache = make(map[string]*cachedResult)
}
if inv.hasNewPaths {
s.topoCache = make(map[string]*cachedResult)
s.distCache = make(map[string]*cachedResult)
s.subpathCache = make(map[string]*cachedResult)
}
if inv.hasNewTransmissions {
s.hashCache = make(map[string]*cachedResult)
s.collisionCache = nil
}
if inv.hasChannelData {
s.chanCache = make(map[string]*cachedResult)
// Also invalidate the separate channels list cache
s.channelsCacheMu.Lock()
s.channelsCacheRes = nil
s.channelsCacheMu.Unlock()
}
}
// GetPerfStoreStatsTyped returns packet store stats as a typed struct.
func (s *PacketStore) GetPerfStoreStatsTyped() PerfPacketStoreStats {
s.mu.RLock()
@@ -950,6 +1051,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
RawHex: r.rawHex,
Hash: r.hash,
FirstSeen: r.firstSeen,
LatestSeen: r.firstSeen,
RouteType: r.routeType,
PayloadType: r.payloadType,
DecodedJSON: r.decodedJSON,
@@ -999,6 +1101,9 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
}
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount++
if obs.Timestamp > tx.LatestSeen {
tx.LatestSeen = obs.Timestamp
}
s.byObsID[oid] = obs
if r.observerID != "" {
s.byObserver[r.observerID] = append(s.byObserver[r.observerID], obs)
@@ -1097,16 +1202,27 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
}
}
// Invalidate analytics caches since new data was ingested
// Targeted cache invalidation: only clear caches affected by the ingested
// data instead of wiping everything on every cycle (fixes #375).
if len(result) > 0 {
s.cacheMu.Lock()
s.rfCache = make(map[string]*cachedResult)
s.topoCache = make(map[string]*cachedResult)
s.hashCache = make(map[string]*cachedResult)
s.chanCache = make(map[string]*cachedResult)
s.distCache = make(map[string]*cachedResult)
s.subpathCache = make(map[string]*cachedResult)
s.cacheMu.Unlock()
inv := cacheInvalidation{
hasNewTransmissions: len(broadcastTxs) > 0,
}
for _, tx := range broadcastTxs {
if len(tx.Observations) > 0 {
inv.hasNewObservations = true
}
if tx.PayloadType != nil && *tx.PayloadType == 5 {
inv.hasChannelData = true
}
if tx.PathJSON != "" {
inv.hasNewPaths = true
}
if inv.hasNewObservations && inv.hasChannelData && inv.hasNewPaths {
break // all flags set, no need to continue
}
}
s.invalidateCachesFor(inv)
}
return result, newMaxID
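The flag-derivation loop above can be distilled into a standalone sketch (the `tx` struct here is a simplified stand-in, not the real `StoreTx`):

```go
package main

import "fmt"

// Simplified stand-ins for the real StoreTx / cacheInvalidation types.
type tx struct {
	observationCount int
	payloadType      *int
	pathJSON         string
}

type cacheInvalidation struct {
	hasNewObservations  bool
	hasNewPaths         bool
	hasNewTransmissions bool
	hasChannelData      bool
}

// flagsFor mirrors the derivation loop in IngestNewFromDB: payload_type 5
// (GRP_TXT) marks channel data; the scan short-circuits once all flags
// that the loop can set are set.
func flagsFor(batch []tx) cacheInvalidation {
	inv := cacheInvalidation{hasNewTransmissions: len(batch) > 0}
	for _, t := range batch {
		if t.observationCount > 0 {
			inv.hasNewObservations = true
		}
		if t.payloadType != nil && *t.payloadType == 5 {
			inv.hasChannelData = true
		}
		if t.pathJSON != "" {
			inv.hasNewPaths = true
		}
		if inv.hasNewObservations && inv.hasChannelData && inv.hasNewPaths {
			break
		}
	}
	return inv
}

func main() {
	pt := 5
	inv := flagsFor([]tx{{observationCount: 2, payloadType: &pt}})
	fmt.Println(inv.hasNewObservations, inv.hasChannelData, inv.hasNewPaths) // true true false
}
```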
@@ -1230,6 +1346,9 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
}
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount++
if obs.Timestamp > tx.LatestSeen {
tx.LatestSeen = obs.Timestamp
}
s.byObsID[r.obsID] = obs
if r.observerID != "" {
s.byObserver[r.observerID] = append(s.byObserver[r.observerID], obs)
@@ -1314,17 +1433,20 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
}
if len(updatedTxs) > 0 {
// Invalidate analytics caches
s.cacheMu.Lock()
s.rfCache = make(map[string]*cachedResult)
s.topoCache = make(map[string]*cachedResult)
s.hashCache = make(map[string]*cachedResult)
s.chanCache = make(map[string]*cachedResult)
s.distCache = make(map[string]*cachedResult)
s.subpathCache = make(map[string]*cachedResult)
s.cacheMu.Unlock()
// analytics caches cleared; no per-cycle log to avoid stdout overhead
// Targeted cache invalidation: new observations always affect RF
// analytics; topology/distance/subpath caches only if paths changed.
// Channel and hash caches are unaffected by observation-only ingestion.
hasPathChanges := false
for txID, tx := range updatedTxs {
if tx.PathJSON != oldPaths[txID] {
hasPathChanges = true
break
}
}
s.invalidateCachesFor(cacheInvalidation{
hasNewObservations: true,
hasNewPaths: hasPathChanges,
})
}
return broadcastMaps
@@ -1889,15 +2011,8 @@ func (s *PacketStore) EvictStale() int {
log.Printf("[store] Evicted %d packets older than %.0fh (freed ~%.1fMB estimated)",
evictCount, s.retentionHours, freedMB)
// Invalidate analytics caches
s.cacheMu.Lock()
s.rfCache = make(map[string]*cachedResult)
s.topoCache = make(map[string]*cachedResult)
s.hashCache = make(map[string]*cachedResult)
s.chanCache = make(map[string]*cachedResult)
s.distCache = make(map[string]*cachedResult)
s.subpathCache = make(map[string]*cachedResult)
s.cacheMu.Unlock()
// Eviction removes data — all caches may be affected
s.invalidateCachesFor(cacheInvalidation{eviction: true})
// Invalidate hash size cache
s.hashSizeInfoMu.Lock()
@@ -2000,15 +2115,11 @@ func computeDistancesForTx(tx *StoreTx, nodeByPk map[string]*nodeInfo, repeaterS
}
roundedDist := math.Round(dist*100) / 100
var snrVal interface{}
if tx.SNR != nil {
snrVal = *tx.SNR
}
hopRecords = append(hopRecords, distHopRecord{
FromName: a.Name, FromPk: a.PublicKey,
ToName: b.Name, ToPk: b.PublicKey,
Dist: roundedDist, Type: hopType,
SNR: snrVal, Hash: tx.Hash, Timestamp: tx.FirstSeen,
SNR: tx.SNR, Hash: tx.Hash, Timestamp: tx.FirstSeen,
HourBucket: hourBucket, tx: tx,
})
hopDetails = append(hopDetails, distHopDetail{
@@ -2062,14 +2173,50 @@ func hasGarbageChars(s string) bool {
// GetChannels returns channel list from in-memory packets (payload_type 5, decoded type CHAN).
func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
s.mu.RLock()
defer s.mu.RUnlock()
cacheKey := region
s.channelsCacheMu.Lock()
if s.channelsCacheRes != nil && s.channelsCacheKey == cacheKey && time.Now().Before(s.channelsCacheExp) {
res := s.channelsCacheRes
s.channelsCacheMu.Unlock()
return res
}
s.channelsCacheMu.Unlock()
type txSnapshot struct {
firstSeen string
decodedJSON string
hasRegion bool
}
// Copy only the fields needed — release the lock before JSON unmarshal.
s.mu.RLock()
var regionObs map[string]bool
if region != "" {
regionObs = s.resolveRegionObservers(region)
}
grpTxts := s.byPayloadType[5]
snapshots := make([]txSnapshot, 0, len(grpTxts))
for _, tx := range grpTxts {
inRegion := true
if regionObs != nil {
inRegion = false
for _, obs := range tx.Observations {
if regionObs[obs.ObserverID] {
inRegion = true
break
}
}
}
snapshots = append(snapshots, txSnapshot{
firstSeen: tx.FirstSeen,
decodedJSON: tx.DecodedJSON,
hasRegion: inRegion,
})
}
s.mu.RUnlock()
// JSON unmarshal outside the lock.
type chanInfo struct {
Hash string
Name string
@@ -2085,53 +2232,32 @@ func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
Sender string `json:"sender"`
}
channelMap := map[string]*chanInfo{}
grpTxts := s.byPayloadType[5]
for _, tx := range grpTxts {
// Region filter: check if any observation is from a regional observer
if regionObs != nil {
match := false
for _, obs := range tx.Observations {
if regionObs[obs.ObserverID] {
match = true
break
}
}
if !match {
continue
}
for _, snap := range snapshots {
if !snap.hasRegion {
continue
}
var decoded decodedGrp
if json.Unmarshal([]byte(tx.DecodedJSON), &decoded) != nil {
if json.Unmarshal([]byte(snap.decodedJSON), &decoded) != nil {
continue
}
if decoded.Type != "CHAN" {
continue
}
// Filter out garbage-decrypted channel names/messages (pre-#197 data still in DB)
if hasGarbageChars(decoded.Channel) || hasGarbageChars(decoded.Text) {
continue
}
channelName := decoded.Channel
if channelName == "" {
channelName = "unknown"
}
key := channelName
ch := channelMap[key]
ch := channelMap[channelName]
if ch == nil {
ch = &chanInfo{
Hash: key, Name: channelName,
LastActivity: tx.FirstSeen,
}
channelMap[key] = ch
ch = &chanInfo{Hash: channelName, Name: channelName, LastActivity: snap.firstSeen}
channelMap[channelName] = ch
}
ch.MessageCount++
if tx.FirstSeen >= ch.LastActivity {
ch.LastActivity = tx.FirstSeen
if snap.firstSeen >= ch.LastActivity {
ch.LastActivity = snap.firstSeen
if decoded.Text != "" {
idx := strings.Index(decoded.Text, ": ")
if idx > 0 {
@@ -2154,6 +2280,13 @@ func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
"messageCount": ch.MessageCount, "lastActivity": ch.LastActivity,
})
}
s.channelsCacheMu.Lock()
s.channelsCacheRes = channels
s.channelsCacheKey = cacheKey
s.channelsCacheExp = time.Now().Add(15 * time.Second)
s.channelsCacheMu.Unlock()
return channels
}
@@ -3647,7 +3780,7 @@ func (s *PacketStore) computeAnalyticsDistance(region string) map[string]interfa
"fromName": h.FromName, "fromPk": h.FromPk,
"toName": h.ToName, "toPk": h.ToPk,
"dist": h.Dist, "type": h.Type,
"snr": h.SNR, "hash": h.Hash, "timestamp": h.Timestamp,
"snr": floatPtrOrNil(h.SNR), "hash": h.Hash, "timestamp": h.Timestamp,
})
}
@@ -4044,6 +4177,283 @@ type hashSizeNodeInfo struct {
Inconsistent bool
}
// --- Hash Collision Analytics ---
// GetAnalyticsHashCollisions returns pre-computed hash collision analysis.
// This moves the O(n²) distance computation from the frontend to the server.
func (s *PacketStore) GetAnalyticsHashCollisions() map[string]interface{} {
s.cacheMu.Lock()
if s.collisionCache != nil && time.Now().Before(s.collisionCache.expiresAt) {
s.cacheHits++
s.cacheMu.Unlock()
return s.collisionCache.data
}
s.cacheMisses++
s.cacheMu.Unlock()
result := s.computeHashCollisions()
s.cacheMu.Lock()
s.collisionCache = &cachedResult{data: result, expiresAt: time.Now().Add(s.collisionCacheTTL)}
s.cacheMu.Unlock()
return result
}
// collisionNode is a lightweight node representation for collision analysis.
type collisionNode struct {
PublicKey string `json:"public_key"`
Name string `json:"name"`
Role string `json:"role"`
Lat float64 `json:"lat"`
Lon float64 `json:"lon"`
HashSize int `json:"hash_size"`
HashSizeInconsistent bool `json:"hash_size_inconsistent"`
HashSizesSeen []int `json:"hash_sizes_seen,omitempty"`
}
// collisionEntry represents a prefix collision with pre-computed distances.
type collisionEntry struct {
Prefix string `json:"prefix"`
ByteSize int `json:"byte_size"`
Appearances int `json:"appearances"`
Nodes []collisionNode `json:"nodes"`
MaxDistKm float64 `json:"max_dist_km"`
Classification string `json:"classification"`
WithCoords int `json:"with_coords"`
}
// prefixCellInfo holds per-prefix-cell data for the matrix view.
type prefixCellInfo struct {
Nodes []collisionNode `json:"nodes"`
}
// twoByteCellInfo holds per-first-byte-group data for 2-byte matrix.
type twoByteCellInfo struct {
GroupNodes []collisionNode `json:"group_nodes"`
TwoByteMap map[string][]collisionNode `json:"two_byte_map"`
MaxCollision int `json:"max_collision"`
CollisionCount int `json:"collision_count"`
}
func (s *PacketStore) computeHashCollisions() map[string]interface{} {
// Get all nodes from DB
nodes := s.getAllNodes()
hashInfo := s.GetNodeHashSizeInfo()
// Build collision nodes with hash info
var allCNodes []collisionNode
for _, n := range nodes {
cn := collisionNode{
PublicKey: n.PublicKey,
Name: n.Name,
Role: n.Role,
Lat: n.Lat,
Lon: n.Lon,
}
if info, ok := hashInfo[n.PublicKey]; ok && info != nil {
cn.HashSize = info.HashSize
cn.HashSizeInconsistent = info.Inconsistent
if len(info.AllSizes) > 1 {
sizes := make([]int, 0, len(info.AllSizes))
for sz := range info.AllSizes {
sizes = append(sizes, sz)
}
sort.Ints(sizes)
cn.HashSizesSeen = sizes
}
}
allCNodes = append(allCNodes, cn)
}
// Inconsistent nodes
var inconsistentNodes []collisionNode
for _, cn := range allCNodes {
if cn.HashSizeInconsistent {
inconsistentNodes = append(inconsistentNodes, cn)
}
}
if inconsistentNodes == nil {
inconsistentNodes = make([]collisionNode, 0)
}
// Compute collisions for each byte size (1, 2, 3)
collisionsBySize := make(map[string]interface{})
for _, bytes := range []int{1, 2, 3} {
// Filter nodes relevant to this byte size
var nodesForByte []collisionNode
for _, cn := range allCNodes {
if cn.HashSize == bytes || cn.HashSize == 0 {
nodesForByte = append(nodesForByte, cn)
}
}
// Build prefix map
prefixMap := make(map[string][]collisionNode)
for _, cn := range nodesForByte {
if len(cn.PublicKey) < bytes*2 {
continue
}
prefix := strings.ToUpper(cn.PublicKey[:bytes*2])
prefixMap[prefix] = append(prefixMap[prefix], cn)
}
// Compute collisions with pairwise distances
var collisions []collisionEntry
for prefix, pnodes := range prefixMap {
if len(pnodes) <= 1 {
continue
}
// Pairwise distance
var withCoords []collisionNode
for _, cn := range pnodes {
if cn.Lat != 0 || cn.Lon != 0 {
withCoords = append(withCoords, cn)
}
}
var maxDistKm float64
classification := "unknown"
if len(withCoords) >= 2 {
for i := 0; i < len(withCoords); i++ {
for j := i + 1; j < len(withCoords); j++ {
d := haversineKm(withCoords[i].Lat, withCoords[i].Lon, withCoords[j].Lat, withCoords[j].Lon)
if d > maxDistKm {
maxDistKm = d
}
}
}
if maxDistKm < 50 {
classification = "local"
} else if maxDistKm < 200 {
classification = "regional"
} else {
classification = "distant"
}
} else {
classification = "incomplete"
}
collisions = append(collisions, collisionEntry{
Prefix: prefix,
ByteSize: bytes,
Appearances: len(pnodes),
Nodes: pnodes,
MaxDistKm: maxDistKm,
Classification: classification,
WithCoords: len(withCoords),
})
}
if collisions == nil {
collisions = make([]collisionEntry, 0)
}
// Sort: local first, then regional, distant, incomplete
classOrder := map[string]int{"local": 0, "regional": 1, "distant": 2, "incomplete": 3, "unknown": 4}
sort.Slice(collisions, func(i, j int) bool {
oi, oj := classOrder[collisions[i].Classification], classOrder[collisions[j].Classification]
if oi != oj {
return oi < oj
}
return collisions[i].Appearances > collisions[j].Appearances
})
// Stats
nodeCount := len(nodesForByte)
usingThisSize := 0
for _, cn := range allCNodes {
if cn.HashSize == bytes {
usingThisSize++
}
}
uniquePrefixes := len(prefixMap)
collisionCount := len(collisions)
var spaceSize int
switch bytes {
case 1:
spaceSize = 256
case 2:
spaceSize = 65536
case 3:
spaceSize = 16777216
}
pctUsed := 0.0
if spaceSize > 0 {
pctUsed = float64(uniquePrefixes) / float64(spaceSize) * 100
}
// For 1-byte and 2-byte, include the full prefix cell data for matrix rendering
var oneByteCells map[string][]collisionNode
var twoByteCells map[string]*twoByteCellInfo
if bytes == 1 {
oneByteCells = make(map[string][]collisionNode)
for i := 0; i < 256; i++ {
hex := strings.ToUpper(fmt.Sprintf("%02x", i))
oneByteCells[hex] = prefixMap[hex]
if oneByteCells[hex] == nil {
oneByteCells[hex] = make([]collisionNode, 0)
}
}
} else if bytes == 2 {
twoByteCells = make(map[string]*twoByteCellInfo)
for i := 0; i < 256; i++ {
hex := strings.ToUpper(fmt.Sprintf("%02x", i))
cell := &twoByteCellInfo{
GroupNodes: make([]collisionNode, 0),
TwoByteMap: make(map[string][]collisionNode),
}
twoByteCells[hex] = cell
}
for _, cn := range nodesForByte {
if len(cn.PublicKey) < 4 {
continue
}
firstHex := strings.ToUpper(cn.PublicKey[:2])
twoHex := strings.ToUpper(cn.PublicKey[:4])
cell := twoByteCells[firstHex]
if cell == nil {
continue
}
cell.GroupNodes = append(cell.GroupNodes, cn)
cell.TwoByteMap[twoHex] = append(cell.TwoByteMap[twoHex], cn)
}
for _, cell := range twoByteCells {
for _, ns := range cell.TwoByteMap {
if len(ns) > 1 {
cell.CollisionCount++
if len(ns) > cell.MaxCollision {
cell.MaxCollision = len(ns)
}
}
}
}
}
sizeData := map[string]interface{}{
"stats": map[string]interface{}{
"total_nodes": len(allCNodes),
"nodes_for_byte": nodeCount,
"using_this_size": usingThisSize,
"unique_prefixes": uniquePrefixes,
"collision_count": collisionCount,
"space_size": spaceSize,
"pct_used": pctUsed,
},
"collisions": collisions,
}
if oneByteCells != nil {
sizeData["one_byte_cells"] = oneByteCells
}
if twoByteCells != nil {
sizeData["two_byte_cells"] = twoByteCells
}
collisionsBySize[strconv.Itoa(bytes)] = sizeData
}
return map[string]interface{}{
"inconsistent_nodes": inconsistentNodes,
"by_size": collisionsBySize,
}
}
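The `space_size`/`pct_used` arithmetic in the stats block is simple to verify standalone; this sketch assumes nothing beyond 256^k possible values for a k-byte prefix:

```go
package main

import "fmt"

// A k-byte hash prefix has 256^k possible values; pct_used is simply
// unique_prefixes / space_size * 100, matching the stats block above.
func pctUsed(uniquePrefixes, prefixBytes int) float64 {
	space := 1
	for i := 0; i < prefixBytes; i++ {
		space *= 256
	}
	return float64(uniquePrefixes) / float64(space) * 100
}

func main() {
	fmt.Printf("%.2f%%\n", pctUsed(128, 1)) // 50.00%
	fmt.Printf("%.4f%%\n", pctUsed(128, 2)) // 0.1953%
}
```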
// GetNodeHashSizeInfo returns cached per-node hash size data, recomputing at most every 15s.
func (s *PacketStore) GetNodeHashSizeInfo() map[string]*hashSizeNodeInfo {
const ttl = 15 * time.Second
@@ -5017,7 +5427,7 @@ func (s *PacketStore) GetSubpathDetail(rawHops []string) map[string]interface{}
observers := map[string]int{}
parentPaths := map[string]int{}
var matchCount int
var firstSeen, lastSeen interface{}
var firstSeen, lastSeen string
for _, tx := range s.packets {
hops := txGetParsedPath(tx)
@@ -5047,10 +5457,10 @@ func (s *PacketStore) GetSubpathDetail(rawHops []string) map[string]interface{}
matchCount++
ts := tx.FirstSeen
if ts != "" {
if firstSeen == nil || ts < firstSeen.(string) {
if firstSeen == "" || ts < firstSeen {
firstSeen = ts
}
if lastSeen == nil || ts > lastSeen.(string) {
if lastSeen == "" || ts > lastSeen {
lastSeen = ts
}
// Parse hour from timestamp for hourly distribution

View File

@@ -25,8 +25,9 @@ type Hub struct {
// Client is a single WebSocket connection.
type Client struct {
conn *websocket.Conn
send chan []byte
conn *websocket.Conn
send chan []byte
closeOnce sync.Once
}
func NewHub() *Hub {
@@ -52,12 +53,28 @@ func (h *Hub) Unregister(c *Client) {
h.mu.Lock()
if _, ok := h.clients[c]; ok {
delete(h.clients, c)
close(c.send)
c.closeOnce.Do(func() { close(c.send) })
}
h.mu.Unlock()
log.Printf("[ws] client disconnected (%d total)", h.ClientCount())
}
// Close gracefully disconnects all WebSocket clients.
func (h *Hub) Close() {
h.mu.Lock()
for c := range h.clients {
c.conn.WriteControl(
websocket.CloseMessage,
websocket.FormatCloseMessage(websocket.CloseGoingAway, "server shutting down"),
time.Now().Add(3*time.Second),
)
c.closeOnce.Do(func() { close(c.send) })
delete(h.clients, c)
}
h.mu.Unlock()
log.Println("[ws] all clients disconnected")
}
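The `closeOnce` guard matters because closing a Go channel twice panics, and `Unregister` and `Close` can both reach the same client during shutdown. A minimal illustration (hypothetical `client` type, not the real struct):

```go
package main

import (
	"fmt"
	"sync"
)

// Routing the close through sync.Once makes it idempotent, so concurrent
// or repeated close attempts cannot trigger a double-close panic.
type client struct {
	send      chan []byte
	closeOnce sync.Once
}

func (c *client) closeSend() {
	c.closeOnce.Do(func() { close(c.send) })
}

func main() {
	c := &client{send: make(chan []byte)}
	c.closeSend()
	c.closeSend() // second call is a no-op instead of a panic
	_, open := <-c.send
	fmt.Println(open) // false: channel closed exactly once
}
```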
// Broadcast sends a message to all connected clients.
func (h *Hub) Broadcast(msg interface{}) {
data, err := json.Marshal(msg)

View File

@@ -15,8 +15,8 @@ git reset --hard origin/master
echo "[deploy] Building Docker image..."
docker build -t meshcore-analyzer .
echo "[deploy] Restarting container..."
docker stop meshcore-analyzer && docker rm meshcore-analyzer
echo "[deploy] Stopping old container (30s grace period)..."
docker stop -t 30 meshcore-analyzer && docker rm meshcore-analyzer
docker run -d --name meshcore-analyzer \
--restart unless-stopped \
-p 3000:3000 \

View File

@@ -15,9 +15,11 @@ git reset --hard "origin/$BRANCH"
echo "[staging] Building Docker image..."
docker build -t meshcore-analyzer-staging .
echo "[staging] Restarting container..."
docker stop meshcore-staging 2>/dev/null || true
echo "[staging] Stopping old container (30s grace period)..."
docker stop -t 30 meshcore-staging 2>/dev/null || true
docker rm meshcore-staging 2>/dev/null || true
echo "[staging] Starting new container..."
docker run -d --name meshcore-staging \
--restart unless-stopped \
-p 3001:3000 \

View File

@@ -9,6 +9,8 @@ services:
image: corescope:latest
container_name: corescope-prod
restart: unless-stopped
stop_grace_period: 30s
stop_signal: SIGTERM
extra_hosts:
- "host.docker.internal:host-gateway"
ports:

View File

@@ -10,6 +10,8 @@ services:
image: corescope-go:latest
container_name: corescope-staging-go
restart: unless-stopped
stop_grace_period: 30s
stop_signal: SIGTERM
deploy:
resources:
limits:

View File

@@ -13,6 +13,8 @@ services:
image: corescope-go:latest
container_name: corescope-staging-go
restart: unless-stopped
stop_grace_period: 30s
stop_signal: SIGTERM
deploy:
resources:
limits:

View File

@@ -14,6 +14,8 @@ services:
image: corescope:latest
container_name: corescope-prod
restart: unless-stopped
stop_grace_period: 30s
stop_signal: SIGTERM
extra_hosts:
- "host.docker.internal:host-gateway"
ports:

View File

@@ -12,6 +12,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
@@ -24,6 +26,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr


@@ -21,6 +21,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
@@ -33,6 +35,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr

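The `stopsignal`/`stopwaitsecs` pairs above mirror Docker's `stop_signal`/`stop_grace_period`: supervisord sends SIGTERM, waits `stopwaitsecs`, then falls back to SIGKILL. A minimal program block showing where the keys sit (the program name and command are hypothetical):

```ini
[program:example-worker]   ; hypothetical program name
command=/usr/local/bin/worker
autostart=true
autorestart=true
stopsignal=TERM            ; graceful signal first
stopwaitsecs=20            ; escalate to SIGKILL after 20s
```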

@@ -509,6 +509,24 @@ cmd_setup() {
log "Docker $(docker --version | grep -oP 'version \K[^ ,]+')"
log "Compose: $DC"
# Default to latest release tag (instead of staying on master)
if ! is_done "version_pin"; then
git fetch origin --tags 2>/dev/null || true
local latest_tag
latest_tag=$(git tag -l 'v*' --sort=-v:refname | head -1)
if [ -n "$latest_tag" ]; then
local current_ref
current_ref=$(git describe --tags --exact-match 2>/dev/null || echo "")
if [ "$current_ref" != "$latest_tag" ]; then
info "Pinning to latest release: ${latest_tag}"
git checkout "$latest_tag" 2>/dev/null
else
log "Already on latest release: ${latest_tag}"
fi
fi
mark_done "version_pin"
fi
mark_done "docker"
@@ -885,14 +903,10 @@ prepare_staging_config() {
warn "No production config at ${prod_config} — staging may use defaults."
return
fi
if [ ! -f "$staging_config" ] || [ "$prod_config" -nt "$staging_config" ]; then
info "Copying production config to staging..."
cp "$prod_config" "$staging_config"
sed -i 's/"siteName":\s*"[^"]*"/"siteName": "CoreScope — STAGING"/' "$staging_config"
log "Staging config created at ${staging_config} with STAGING site name."
else
log "Staging config is up to date."
fi
info "Copying production config to staging..."
cp "$prod_config" "$staging_config"
sed -i 's/"siteName":\s*"[^"]*"/"siteName": "CoreScope — STAGING"/' "$staging_config"
log "Staging config created at ${staging_config} with STAGING site name."
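Since the staging config is now copied fresh on every run, the `sed` rewrite runs against an unmodified prod copy each time. A quick check of the substitution on a sample config (the JSON is illustrative; `\s` assumes GNU sed, which the script already relies on):

```shell
cfg='{"siteName": "CoreScope", "port": 3000}'
# The same substitution manage.sh applies to the staging copy.
out=$(printf '%s' "$cfg" | sed 's/"siteName":\s*"[^"]*"/"siteName": "CoreScope — STAGING"/')
echo "$out"
```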
# Copy Caddyfile for staging (HTTP-only on staging port)
local staging_caddy="$STAGING_DATA/Caddyfile"
if [ ! -f "$staging_caddy" ]; then
@@ -1167,6 +1181,12 @@ cmd_status() {
echo "═══════════════════════════════════════"
echo ""
# Version
local current_version
current_version=$(git describe --tags --exact-match 2>/dev/null || git rev-parse --short HEAD 2>/dev/null || echo "unknown")
info "Version: ${current_version}"
echo ""
# Production
show_container_status "corescope-prod" "Production"
echo ""
@@ -1294,8 +1314,39 @@ cmd_promote() {
# ─── Update ───────────────────────────────────────────────────────────────
cmd_update() {
info "Pulling latest code..."
git pull --ff-only
local version="${1:-}"
info "Fetching latest changes and tags..."
git fetch origin --tags
if [ -z "$version" ]; then
# No arg: checkout latest release tag
local latest_tag
latest_tag=$(git tag -l 'v*' --sort=-v:refname | head -1)
if [ -z "$latest_tag" ]; then
err "No release tags found. Use './manage.sh update latest' for tip of master."
exit 1
fi
info "Checking out latest release: ${latest_tag}"
git checkout "$latest_tag" || { err "Failed to checkout tag '${latest_tag}'."; exit 1; }
elif [ "$version" = "latest" ]; then
# Explicit opt-in to bleeding edge (tip of master)
# Note: this creates a detached HEAD at origin/master, which is intentional —
# we want a read-only snapshot of upstream, not a local tracking branch.
info "Checking out tip of master (detached HEAD at origin/master)..."
git checkout origin/master || { err "Failed to checkout origin/master."; exit 1; }
else
# Specific tag requested
if ! git tag -l "$version" | grep -q .; then
err "Tag '${version}' not found."
echo ""
echo " Available releases:"
git tag -l 'v*' --sort=-v:refname | head -10 | sed 's/^/ /'
exit 1
fi
info "Checking out version: ${version}"
git checkout "$version" || { err "Failed to checkout '${version}'."; exit 1; }
fi
migrate_config auto
@@ -1306,6 +1357,10 @@ cmd_update() {
dc_prod up -d --force-recreate prod
log "Updated and restarted. Data preserved."
# Show current version
local current
current=$(git describe --tags --exact-match 2>/dev/null || git rev-parse --short HEAD)
log "Running version: ${current}"
}
# ─── Backup ───────────────────────────────────────────────────────────────
@@ -1515,7 +1570,7 @@ cmd_help() {
echo " logs [prod|staging] [N] Follow logs (default: prod, last 100 lines)"
echo ""
printf '%b\n' " ${BOLD}Maintain${NC}"
echo " update Pull latest code, rebuild, restart (keeps data)"
echo " update [version] Update to version (no arg=latest tag, 'latest'=master tip, or e.g. v3.1.0)"
echo " promote Promote staging → production (backup + restart)"
echo " backup [dir] Full backup: database + config + theme"
echo " restore <d> Restore from backup dir or .db file"
@@ -1534,7 +1589,7 @@ case "${1:-help}" in
restart) cmd_restart "$2" ;;
status) cmd_status ;;
logs) cmd_logs "$2" "$3" ;;
update) cmd_update ;;
update) cmd_update "$2" ;;
promote) cmd_promote ;;
backup) cmd_backup "$2" ;;
restore) cmd_restore "$2" ;;

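A note on the tag selection used by `cmd_setup` and `cmd_update`: `--sort=-v:refname` is version-aware, which matters once a minor version reaches double digits. The same ordering sketched with GNU `sort -V` (tag names are made up):

```shell
tags='v3.1.0
v3.10.0
v3.2.0'
# Version-aware descending sort, analogous to `git tag --sort=-v:refname`.
latest=$(printf '%s\n' "$tags" | sort -rV | head -1)
echo "$latest"   # a plain lexical sort would wrongly rank v3.2.0 first
```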

@@ -143,13 +143,14 @@
_analyticsData = {};
const rqs = RegionFilter.regionQueryString();
const sep = rqs ? '?' + rqs.slice(1) : '';
const [hashData, rfData, topoData, chanData] = await Promise.all([
const [hashData, rfData, topoData, chanData, collisionData] = await Promise.all([
api('/analytics/hash-sizes' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/rf' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/topology' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/channels' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-collisions', { ttl: CLIENT_TTL.analyticsRF }),
]);
_analyticsData = { hashData, rfData, topoData, chanData };
_analyticsData = { hashData, rfData, topoData, chanData, collisionData };
renderTab(_currentTab);
} catch (e) {
document.getElementById('analyticsContent').innerHTML =
@@ -166,7 +167,7 @@
case 'topology': renderTopology(el, d.topoData); break;
case 'channels': renderChannels(el, d.chanData); break;
case 'hashsizes': renderHashSizes(el, d.hashData); break;
case 'collisions': await renderCollisionTab(el, d.hashData); break;
case 'collisions': await renderCollisionTab(el, d.hashData, d.collisionData); break;
case 'subpaths': await renderSubpaths(el); break;
case 'nodes': await renderNodesTab(el); break;
case 'distance': await renderDistanceTab(el); break;
@@ -943,7 +944,7 @@
`;
}
async function renderCollisionTab(el, data) {
async function renderCollisionTab(el, data, collisionData) {
el.innerHTML = `
<nav id="hashIssuesToc" style="display:flex;gap:12px;margin-bottom:12px;font-size:13px;flex-wrap:wrap">
<a href="#/analytics?tab=collisions&section=inconsistentHashSection" style="color:var(--accent)"> Inconsistent Sizes</a>
@@ -980,11 +981,9 @@
<div id="collisionList"><div class="text-muted" style="padding:8px">Loading…</div></div>
</div>
`;
let allNodes = [];
try { const nd = await api('/nodes?limit=2000' + RegionFilter.regionQueryString(), { ttl: CLIENT_TTL.nodeList }); allNodes = nd.nodes || []; } catch {}
// Render inconsistent hash sizes
const inconsistent = allNodes.filter(n => n.hash_size_inconsistent);
// Use pre-computed collision data from server (no more /nodes?limit=2000 fetch)
const cData = collisionData || { inconsistent_nodes: [], by_size: {} };
const inconsistent = cData.inconsistent_nodes || [];
const ihEl = document.getElementById('inconsistentHashList');
if (ihEl) {
if (!inconsistent.length) {
@@ -1013,10 +1012,7 @@
}
}
// Repeaters are confirmed routing nodes; null-role nodes may also route (possible conflict)
const repeaterNodes = allNodes.filter(n => n.role === 'repeater');
const nullRoleNodes = allNodes.filter(n => !n.role);
const routingNodes = [...repeaterNodes, ...nullRoleNodes];
// Repeaters and routing nodes no longer needed — collision data is server-computed
let currentBytes = 1;
function refreshHashViews(bytes) {
@@ -1037,11 +1033,11 @@
else if (bytes === 2) matrixDesc.textContent = 'Each cell = first-byte group. Color shows worst 2-byte collision within. Click a cell to see the breakdown.';
else matrixDesc.textContent = '3-byte prefix space is too large to visualize as a matrix — collision table is shown below.';
}
renderHashMatrix(data.topHops, routingNodes, bytes, allNodes);
renderHashMatrixFromServer(cData.by_size[String(bytes)], bytes);
// Hide collision risk card for 3-byte — stats are shown in the matrix panel
const riskCard = document.getElementById('collisionRiskSection');
if (riskCard) riskCard.style.display = bytes === 3 ? 'none' : '';
if (bytes !== 3) renderCollisions(data.topHops, routingNodes, bytes);
if (bytes !== 3) renderCollisionsFromServer(cData.by_size[String(bytes)], bytes);
}
// Wire up selector
@@ -1113,92 +1109,65 @@
el.addEventListener('mouseleave', hideMatrixTip);
}
// Pure data helpers — extracted for testability
// --- Shared helpers for hash matrix rendering ---
function buildOneBytePrefixMap(nodes) {
const map = {};
for (let i = 0; i < 256; i++) map[i.toString(16).padStart(2, '0').toUpperCase()] = [];
for (const n of nodes) {
const hex = n.public_key.slice(0, 2).toUpperCase();
if (map[hex]) map[hex].push(n);
}
return map;
function hashStatCardsHtml(totalNodes, usingCount, sizeLabel, spaceSize, usedCount, collisionCount) {
const pct = spaceSize > 0 && usedCount > 0 ? ((usedCount / spaceSize) * 100) : 0;
const pctStr = spaceSize > 65536 ? pct.toFixed(6) : spaceSize > 256 ? pct.toFixed(3) : pct.toFixed(1);
const spaceLabel = spaceSize >= 1e6 ? (spaceSize / 1e6).toFixed(1) + 'M' : spaceSize.toLocaleString();
return `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Nodes tracked</div>
<div class="analytics-stat-value">${totalNodes.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Using ${sizeLabel} ID</div>
<div class="analytics-stat-value">${usingCount.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Prefix space used</div>
<div class="analytics-stat-value" style="font-size:16px">${pctStr}%</div>
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">${usedCount > 256 ? usedCount + ' of ' : 'of '}${spaceLabel} possible</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${collisionCount > 0 ? 'var(--status-red)' : 'var(--border)'}">
<div class="analytics-stat-label">Prefix collisions</div>
<div class="analytics-stat-value" style="color:${collisionCount > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${collisionCount}</div>
</div>
</div>`;
}
function buildTwoBytePrefixInfo(nodes) {
const info = {};
for (let i = 0; i < 256; i++) {
const h = i.toString(16).padStart(2, '0').toUpperCase();
info[h] = { groupNodes: [], twoByteMap: {}, maxCollision: 0, collisionCount: 0 };
function hashMatrixGridHtml(nibbles, cellSize, headerSize, cellDataFn) {
let html = `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
html += `<tr><td style="width:${headerSize}px"></td>`;
for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
html += '</tr>';
for (let hi = 0; hi < 16; hi++) {
html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
for (let lo = 0; lo < 16; lo++) {
html += cellDataFn(nibbles[hi] + nibbles[lo], cellSize);
}
html += '</tr>';
}
for (const n of nodes) {
const firstHex = n.public_key.slice(0, 2).toUpperCase();
const twoHex = n.public_key.slice(0, 4).toUpperCase();
const entry = info[firstHex];
if (!entry) continue;
entry.groupNodes.push(n);
if (!entry.twoByteMap[twoHex]) entry.twoByteMap[twoHex] = [];
entry.twoByteMap[twoHex].push(n);
}
for (const entry of Object.values(info)) {
const collisions = Object.values(entry.twoByteMap).filter(v => v.length > 1);
entry.collisionCount = collisions.length;
entry.maxCollision = collisions.length ? Math.max(...collisions.map(v => v.length)) : 0;
}
return info;
html += '</table></div>';
return html;
}
function buildCollisionHops(allNodes, bytes) {
const map = {};
for (const n of allNodes) {
const p = n.public_key.slice(0, bytes * 2).toUpperCase();
if (!map[p]) map[p] = { hex: p, count: 0, size: bytes };
map[p].count++;
}
return Object.values(map).filter(h => h.count > 1);
function hashMatrixLegendHtml(labels) {
return `<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
${labels.map(l => `<span><span class="legend-swatch ${l.cls}"${l.style ? ' style="'+l.style+'"' : ''}></span> ${l.text}</span>`).join('\n')}
</div>`;
}
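For intuition about what the server now pre-computes (formerly done client-side by `buildCollisionHops`): a prefix collides when more than one public key shares its first N bytes. A toy shell version over made-up keys:

```shell
keys='A1B2C3D4
A1B2FFFF
7F00AAAA
A1C0DEAD'
bytes=2
# Group keys by their first `bytes * 2` hex chars; `uniq -d` keeps
# only prefixes that appear more than once (i.e. collisions).
collisions=$(printf '%s\n' "$keys" | cut -c1-$((bytes * 2)) | sort | uniq -d | wc -l)
collisions=$((collisions))   # normalize wc's whitespace padding
echo "$collisions"
```

Here only `A1B2` is shared by two keys, so one 2-byte collision is reported.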
function renderHashMatrix(topHops, allNodes, bytes, totalNodes) {
bytes = bytes || 1;
totalNodes = totalNodes || allNodes;
function renderHashMatrixFromServer(sizeData, bytes) {
const el = document.getElementById('hashMatrix');
if (!sizeData) { el.innerHTML = '<div class="text-muted">No data</div>'; return; }
const stats = sizeData.stats || {};
const totalNodes = stats.total_nodes || 0;
// 3-byte: show a summary panel instead of a matrix
if (bytes === 3) {
const total = totalNodes.length;
const threeByteNodes = allNodes.filter(n => n.hash_size === 3).length;
const nodesForByte = allNodes.filter(n => n.hash_size === 3 || !n.hash_size);
const prefixMap = {};
for (const n of nodesForByte) {
const p = n.public_key.slice(0, 6).toUpperCase();
if (!prefixMap[p]) prefixMap[p] = 0;
prefixMap[p]++;
}
const uniquePrefixes = Object.keys(prefixMap).length;
const collisions = Object.values(prefixMap).filter(c => c > 1).length;
const spaceSize = 16777216; // 2^24
const pct = uniquePrefixes > 0 ? ((uniquePrefixes / spaceSize) * 100).toFixed(6) : '0';
el.innerHTML = `
<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Nodes tracked</div>
<div class="analytics-stat-value">${total.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Using 3-byte ID</div>
<div class="analytics-stat-value">${threeByteNodes.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Prefix space used</div>
<div class="analytics-stat-value" style="font-size:16px">${pct}%</div>
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">of 16.7M possible</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${collisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
<div class="analytics-stat-label">Prefix collisions</div>
<div class="analytics-stat-value" style="color:${collisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${collisions}</div>
</div>
</div>
<p class="text-muted" style="margin:0;font-size:0.8em">The 3-byte prefix space (16.7M values) is too large to visualize as a grid.</p>`;
el.innerHTML = hashStatCardsHtml(totalNodes, stats.using_this_size || 0, '3-byte', 16777216, stats.unique_prefixes || 0, stats.collision_count || 0) +
`<p class="text-muted" style="margin:0;font-size:0.8em">The 3-byte prefix space (16.7M values) is too large to visualize as a grid.</p>`;
return;
}
@@ -1207,41 +1176,14 @@
const headerSize = 24;
if (bytes === 1) {
const nodesForByte = allNodes.filter(n => n.hash_size === 1 || !n.hash_size);
const prefixNodes = buildOneBytePrefixMap(nodesForByte);
const oneByteCount = allNodes.filter(n => n.hash_size === 1).length;
const oneUsed = Object.values(prefixNodes).filter(v => v.length > 0).length;
const oneCollisions = Object.values(prefixNodes).filter(v => v.length > 1).length;
const onePct = ((oneUsed / 256) * 100).toFixed(1);
const oneByteCells = sizeData.one_byte_cells || {};
const oneByteCount = stats.using_this_size || 0;
const oneUsed = Object.values(oneByteCells).filter(v => v.length > 0).length;
const oneCollisions = Object.values(oneByteCells).filter(v => v.length > 1).length;
let html = `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Nodes tracked</div>
<div class="analytics-stat-value">${totalNodes.length.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Using 1-byte ID</div>
<div class="analytics-stat-value">${oneByteCount.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Prefix space used</div>
<div class="analytics-stat-value" style="font-size:16px">${onePct}%</div>
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">of 256 possible</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${oneCollisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
<div class="analytics-stat-label">Prefix collisions</div>
<div class="analytics-stat-value" style="color:${oneCollisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${oneCollisions}</div>
</div>
</div>`;
html += `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
html += `<tr><td style="width:${headerSize}px"></td>`;
for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
html += '</tr>';
for (let hi = 0; hi < 16; hi++) {
html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
for (let lo = 0; lo < 16; lo++) {
const hex = nibbles[hi] + nibbles[lo];
const nodes = prefixNodes[hex] || [];
let html = hashStatCardsHtml(totalNodes, oneByteCount, '1-byte', 256, oneUsed, oneCollisions);
html += hashMatrixGridHtml(nibbles, cellSize, headerSize, (hex, cs) => {
const nodes = oneByteCells[hex] || [];
const count = nodes.length;
const repeaterCount = nodes.filter(n => n.role === 'repeater').length;
const isCollision = count >= 2 && repeaterCount >= 2;
@@ -1259,18 +1201,15 @@
: isPossible
? `<div class="hash-matrix-tooltip-hex">0x${hex}</div><div class="hash-matrix-tooltip-status">${count} nodes — POSSIBLE CONFLICT</div><div class="hash-matrix-tooltip-nodes">${nodes.slice(0,5).map(nodeLabel).join('')}${nodes.length>5?`<div class="hash-matrix-tooltip-status">+${nodes.length-5} more</div>`:''}</div>`
: `<div class="hash-matrix-tooltip-hex">0x${hex}</div><div class="hash-matrix-tooltip-status">${count} nodes — COLLISION</div><div class="hash-matrix-tooltip-nodes">${nodes.slice(0,5).map(nodeLabel).join('')}${nodes.length>5?`<div class="hash-matrix-tooltip-status">+${nodes.length-5} more</div>`:''}</div>`;
html += `<td class="hash-cell ${cellClass}${count ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip1.replace(/"/g,'&quot;')}" style="width:${cellSize}px;height:${cellSize}px;text-align:center;${bgStyle}border:1px solid var(--border);cursor:${count ? 'pointer' : 'default'};font-size:11px;font-weight:${count >= 2 ? '700' : '400'}">${hex}</td>`;
}
html += '</tr>';
}
html += '</table></div>';
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:400px;font-size:0.85em"></div></div>
<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
<span><span class="legend-swatch hash-cell-empty" style="border:1px solid var(--border)"></span> Available</span>
<span><span class="legend-swatch hash-cell-taken"></span> One node</span>
<span><span class="legend-swatch hash-cell-possible"></span> Possible conflict</span>
<span><span class="legend-swatch hash-cell-collision" style="background:rgb(220,80,30)"></span> Collision</span>
</div>`;
return `<td class="hash-cell ${cellClass}${count ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip1.replace(/"/g,'&quot;')}" style="width:${cs}px;height:${cs}px;text-align:center;${bgStyle}border:1px solid var(--border);cursor:${count ? 'pointer' : 'default'};font-size:11px;font-weight:${count >= 2 ? '700' : '400'}">${hex}</td>`;
});
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:400px;font-size:0.85em"></div></div>`;
html += hashMatrixLegendHtml([
{cls: 'hash-cell-empty', style: 'border:1px solid var(--border)', text: 'Available'},
{cls: 'hash-cell-taken', text: 'One node'},
{cls: 'hash-cell-possible', text: 'Possible conflict'},
{cls: 'hash-cell-collision', style: 'background:rgb(220,80,30)', text: 'Collision'}
]);
el.innerHTML = html;
initMatrixTooltip(el);
@@ -1278,7 +1217,7 @@
el.querySelectorAll('.hash-active').forEach(td => {
td.addEventListener('click', () => {
const hex = td.dataset.hex.toUpperCase();
const matches = prefixNodes[hex] || [];
const matches = oneByteCells[hex] || [];
const detail = document.getElementById('hashDetail');
if (!matches.length) { detail.innerHTML = `<strong class="mono">0x${hex}</strong><br><span class="text-muted">No known nodes</span>`; return; }
detail.innerHTML = `<strong class="mono" style="font-size:1.1em">0x${hex}</strong> — ${matches.length} node${matches.length !== 1 ? 's' : ''}` +
@@ -1293,47 +1232,17 @@
});
} else if (bytes === 2) {
// 2-byte mode: 16×16 grid of first-byte groups
const nodesForByte = allNodes.filter(n => n.hash_size === 2 || !n.hash_size);
const firstByteInfo = buildTwoBytePrefixInfo(nodesForByte);
const twoByteCells = sizeData.two_byte_cells || {};
const twoByteCount = stats.using_this_size || 0;
const uniqueTwoBytePrefixes = stats.unique_prefixes || 0;
const twoCollisions = Object.values(twoByteCells).filter(v => v.collision_count > 0).length;
const twoByteCount = allNodes.filter(n => n.hash_size === 2).length;
const uniqueTwoBytePrefixes = new Set(nodesForByte.map(n => n.public_key.slice(0, 4).toUpperCase())).size;
const twoCollisions = Object.values(firstByteInfo).filter(v => v.collisionCount > 0).length;
const twoPct = ((uniqueTwoBytePrefixes / 65536) * 100).toFixed(3);
let html = `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Nodes tracked</div>
<div class="analytics-stat-value">${totalNodes.length.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Using 2-byte ID</div>
<div class="analytics-stat-value">${twoByteCount.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Prefix space used</div>
<div class="analytics-stat-value" style="font-size:16px">${twoPct}%</div>
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">${uniqueTwoBytePrefixes} of 65,536 possible</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${twoCollisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
<div class="analytics-stat-label">Prefix collisions</div>
<div class="analytics-stat-value" style="color:${twoCollisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${twoCollisions}</div>
</div>
</div>`;
html += `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
html += `<tr><td style="width:${headerSize}px"></td>`;
for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
html += '</tr>';
for (let hi = 0; hi < 16; hi++) {
html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
for (let lo = 0; lo < 16; lo++) {
const hex = nibbles[hi] + nibbles[lo];
const info = firstByteInfo[hex] || { groupNodes: [], maxCollision: 0, collisionCount: 0 };
const nodeCount = info.groupNodes.length;
const maxCol = info.maxCollision;
// Classify worst overlap in group: confirmed collision (2+ repeaters) or possible (null-role involved)
const overlapping = Object.values(info.twoByteMap || {}).filter(v => v.length > 1);
let html = hashStatCardsHtml(totalNodes, twoByteCount, '2-byte', 65536, uniqueTwoBytePrefixes, twoCollisions);
html += hashMatrixGridHtml(nibbles, cellSize, headerSize, (hex, cs) => {
const info = twoByteCells[hex] || { group_nodes: [], max_collision: 0, collision_count: 0, two_byte_map: {} };
const nodeCount = (info.group_nodes || []).length;
const maxCol = info.max_collision || 0;
const overlapping = Object.values(info.two_byte_map || {}).filter(v => v.length > 1);
const hasConfirmed = overlapping.some(ns => ns.filter(n => n.role === 'repeater').length >= 2);
const hasPossible = !hasConfirmed && overlapping.some(ns => ns.length >= 2);
let cellClass2, bgStyle2;
@@ -1344,39 +1253,37 @@
const nodeLabel2 = m => esc(m.name||m.public_key.slice(0,8)) + (!m.role ? ' (?)' : '');
const tip2 = nodeCount === 0
? `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">No nodes in this group</div>`
: info.collisionCount === 0
: (info.collision_count || 0) === 0
? `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${nodeCount} node${nodeCount>1?'s':''} — no 2-byte collisions</div>`
: `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${hasConfirmed ? info.collisionCount + ' collision' + (info.collisionCount>1?'s':'') : 'Possible conflict'}</div><div class="hash-matrix-tooltip-nodes">${Object.entries(info.twoByteMap).filter(([,v])=>v.length>1).slice(0,4).map(([p,ns])=>`<div style="font-size:11px;padding:1px 0"><span style="color:${hasConfirmed?'var(--status-red)':'var(--status-yellow)'};font-family:var(--mono);font-weight:700">${p}</span> — ${ns.map(nodeLabel2).join(', ')}</div>`).join('')}</div>`;
html += `<td class="hash-cell ${cellClass2}${nodeCount ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip2.replace(/"/g,'&quot;')}" style="width:${cellSize}px;height:${cellSize}px;text-align:center;${bgStyle2}border:1px solid var(--border);cursor:${nodeCount ? 'pointer' : 'default'};font-size:11px;font-weight:${maxCol > 0 ? '700' : '400'}">${hex}</td>`;
}
html += '</tr>';
}
html += '</table></div>';
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:420px;font-size:0.85em"></div></div>
<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
<span><span class="legend-swatch hash-cell-empty" style="border:1px solid var(--border)"></span> No nodes in group</span>
<span><span class="legend-swatch hash-cell-taken"></span> Nodes present, no collision</span>
<span><span class="legend-swatch hash-cell-possible"></span> Possible conflict</span>
<span><span class="legend-swatch hash-cell-collision" style="background:rgb(220,80,30)"></span> Collision</span>
</div>`;
: `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${hasConfirmed ? (info.collision_count||0) + ' collision' + ((info.collision_count||0)>1?'s':'') : 'Possible conflict'}</div><div class="hash-matrix-tooltip-nodes">${Object.entries(info.two_byte_map||{}).filter(([,v])=>v.length>1).slice(0,4).map(([p,ns])=>`<div style="font-size:11px;padding:1px 0"><span style="color:${hasConfirmed?'var(--status-red)':'var(--status-yellow)'};font-family:var(--mono);font-weight:700">${p}</span> — ${ns.map(nodeLabel2).join(', ')}</div>`).join('')}</div>`;
return `<td class="hash-cell ${cellClass2}${nodeCount ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip2.replace(/"/g,'&quot;')}" style="width:${cs}px;height:${cs}px;text-align:center;${bgStyle2}border:1px solid var(--border);cursor:${nodeCount ? 'pointer' : 'default'};font-size:11px;font-weight:${maxCol > 0 ? '700' : '400'}">${hex}</td>`;
});
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:420px;font-size:0.85em"></div></div>`;
html += hashMatrixLegendHtml([
{cls: 'hash-cell-empty', style: 'border:1px solid var(--border)', text: 'No nodes in group'},
{cls: 'hash-cell-taken', text: 'Nodes present, no collision'},
{cls: 'hash-cell-possible', text: 'Possible conflict'},
{cls: 'hash-cell-collision', style: 'background:rgb(220,80,30)', text: 'Collision'}
]);
el.innerHTML = html;
el.querySelectorAll('.hash-active').forEach(td => {
td.addEventListener('click', () => {
const hex = td.dataset.hex.toUpperCase();
const info = firstByteInfo[hex];
const info = twoByteCells[hex];
const detail = document.getElementById('hashDetail');
if (!info || !info.groupNodes.length) { detail.innerHTML = ''; return; }
let dhtml = `<strong class="mono" style="font-size:1.1em">0x${hex}__</strong> — ${info.groupNodes.length} node${info.groupNodes.length !== 1 ? 's' : ''} in group`;
if (info.collisionCount === 0) {
if (!info || !(info.group_nodes || []).length) { detail.innerHTML = ''; return; }
const groupNodes = info.group_nodes || [];
let dhtml = `<strong class="mono" style="font-size:1.1em">0x${hex}__</strong> — ${groupNodes.length} node${groupNodes.length !== 1 ? 's' : ''} in group`;
if ((info.collision_count || 0) === 0) {
dhtml += `<div class="text-muted" style="margin-top:6px;font-size:0.85em">✅ No 2-byte collisions in this group</div>`;
dhtml += `<div style="margin-top:8px">${info.groupNodes.map(m => {
dhtml += `<div style="margin-top:8px">${groupNodes.map(m => {
const prefix = m.public_key.slice(0,4).toUpperCase();
return `<div style="padding:2px 0"><code class="mono" style="font-size:0.85em">${prefix}</code> <a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a></div>`;
}).join('')}</div>`;
} else {
dhtml += `<div style="margin-top:8px">`;
for (const [twoHex, nodes] of Object.entries(info.twoByteMap).sort()) {
for (const [twoHex, nodes] of Object.entries(info.two_byte_map || {}).sort()) {
const isCollision = nodes.length > 1;
dhtml += `<div style="margin-bottom:6px;padding:4px 6px;border-radius:4px;background:${isCollision ? 'rgba(220,50,30,0.1)' : 'transparent'};border:1px solid ${isCollision ? 'rgba(220,50,30,0.3)' : 'transparent'}">`;
dhtml += `<code class="mono" style="font-size:0.9em;font-weight:${isCollision?'700':'400'}">${twoHex}</code>${isCollision ? ' <span style="color:#dc2626;font-size:0.75em;font-weight:700">COLLISION</span>' : ''} `;
@@ -1395,106 +1302,65 @@
}
}
async function renderCollisions(topHops, allNodes, bytes) {
bytes = bytes || 1;
function renderCollisionsFromServer(sizeData, bytes) {
const el = document.getElementById('collisionList');
const hopsForSize = topHops.filter(h => h.size === bytes);
if (!sizeData) { el.innerHTML = '<div class="text-muted">No data</div>'; return; }
const collisions = sizeData.collisions || [];
// For 2-byte and 3-byte, scan nodes directly — topHops only reliably covers 1-byte path hops
const hopsToCheck = bytes === 1 ? hopsForSize : buildCollisionHops(allNodes, bytes);
if (!hopsToCheck.length && bytes === 1) {
el.innerHTML = `<div class="text-muted" style="padding:8px">No 1-byte hops observed in recent packets.</div>`;
if (!collisions.length) {
const cleanMsg = bytes === 3
? '✅ No 3-byte prefix collisions detected — all nodes have unique 3-byte prefixes.'
: `✅ No ${bytes}-byte collisions detected`;
el.innerHTML = `<div class="text-muted" style="padding:8px">${cleanMsg}</div>`;
return;
}
try {
const nodes = allNodes;
const collisions = [];
for (const hop of hopsToCheck) {
const prefix = hop.hex.toLowerCase();
const matches = nodes.filter(n => n.public_key.toLowerCase().startsWith(prefix));
if (matches.length > 1) {
// Calculate pairwise distances for classification
const withCoords = matches.filter(m => m.lat && m.lon && !(m.lat === 0 && m.lon === 0));
let maxDistKm = 0;
let classification = 'unknown';
if (withCoords.length >= 2) {
for (let i = 0; i < withCoords.length; i++) {
for (let j = i + 1; j < withCoords.length; j++) {
const dLat = (withCoords[i].lat - withCoords[j].lat) * 111;
const dLon = (withCoords[i].lon - withCoords[j].lon) * 85;
const d = Math.sqrt(dLat * dLat + dLon * dLon);
if (d > maxDistKm) maxDistKm = d;
}
}
if (maxDistKm < 50) classification = 'local';
else if (maxDistKm < 200) classification = 'regional';
else classification = 'distant';
} else if (withCoords.length < 2) {
classification = 'incomplete';
}
collisions.push({ hop: hop.hex, count: hop.count, matches, maxDistKm, classification, withCoords: withCoords.length });
const showAppearances = bytes < 3;
el.innerHTML = `<table class="analytics-table">
<thead><tr>
<th scope="col">Prefix</th>
${showAppearances ? '<th scope="col">Appearances</th>' : ''}
<th scope="col">Max Distance</th>
<th scope="col">Assessment</th>
<th scope="col">Colliding Nodes</th>
</tr></thead>
<tbody>${collisions.map(c => {
let badge, tooltip;
if (c.classification === 'local') {
badge = '<span class="badge" style="background:var(--status-green);color:#fff" title="All nodes within 50km — likely true collision, same RF neighborhood">🏘️ Local</span>';
tooltip = 'Nodes close enough for direct RF — probably genuine prefix collision';
} else if (c.classification === 'regional') {
badge = '<span class="badge" style="background:var(--status-yellow);color:#fff" title="Nodes 50–200km apart — edge of LoRa range, could be atmospheric">⚡ Regional</span>';
tooltip = 'At edge of 915MHz range — could indicate atmospheric ducting or hilltop-to-hilltop links';
} else if (c.classification === 'distant') {
badge = '<span class="badge" style="background:var(--status-red);color:#fff" title="Nodes >200km apart — beyond typical 915MHz range">🌐 Distant</span>';
tooltip = 'Beyond typical LoRa range — likely internet bridging, MQTT gateway, or separate mesh networks sharing prefix';
} else {
badge = '<span class="badge" style="background:#6b7280;color:#fff">❓ Unknown</span>';
tooltip = 'Not enough coordinate data to classify';
}
}
if (!collisions.length) {
const cleanMsg = bytes === 3
? '✅ No 3-byte prefix collisions detected — all nodes have unique 3-byte prefixes.'
: `✅ No ${bytes}-byte collisions detected`;
el.innerHTML = `<div class="text-muted" style="padding:8px">${cleanMsg}</div>`;
return;
}
// Sort: local first (most likely to collide), then regional, distant, incomplete
const classOrder = { local: 0, regional: 1, distant: 2, incomplete: 3, unknown: 4 };
collisions.sort((a, b) => classOrder[a.classification] - classOrder[b.classification] || b.count - a.count);
const showAppearances = bytes < 3;
el.innerHTML = `<table class="analytics-table">
<thead><tr>
<th scope="col">Prefix</th>
${showAppearances ? '<th scope="col">Appearances</th>' : ''}
<th scope="col">Max Distance</th>
<th scope="col">Assessment</th>
<th scope="col">Colliding Nodes</th>
</tr></thead>
<tbody>${collisions.map(c => {
let badge, tooltip;
if (c.classification === 'local') {
badge = '<span class="badge" style="background:var(--status-green);color:#fff" title="All nodes within 50km — likely true collision, same RF neighborhood">🏘️ Local</span>';
tooltip = 'Nodes close enough for direct RF — probably genuine prefix collision';
} else if (c.classification === 'regional') {
badge = '<span class="badge" style="background:var(--status-yellow);color:#fff" title="Nodes 50–200km apart — edge of LoRa range, could be atmospheric">⚡ Regional</span>';
tooltip = 'At edge of 915MHz range — could indicate atmospheric ducting or hilltop-to-hilltop links';
} else if (c.classification === 'distant') {
badge = '<span class="badge" style="background:var(--status-red);color:#fff" title="Nodes >200km apart — beyond typical 915MHz range">🌐 Distant</span>';
tooltip = 'Beyond typical LoRa range — likely internet bridging, MQTT gateway, or separate mesh networks sharing prefix';
} else {
badge = '<span class="badge" style="background:#6b7280;color:#fff">❓ Unknown</span>';
tooltip = 'Not enough coordinate data to classify';
}
const distStr = c.withCoords >= 2 ? `${Math.round(c.maxDistKm)} km` : '<span class="text-muted">—</span>';
return `<tr>
<td class="mono">${c.hop}</td>
${showAppearances ? `<td>${c.count.toLocaleString()}</td>` : ''}
<td>${distStr}</td>
<td title="${tooltip}">${badge}</td>
<td>${c.matches.map(m => {
const loc = (m.lat && m.lon && !(m.lat === 0 && m.lon === 0))
? ` <span class="text-muted" style="font-size:0.75em">(${m.lat.toFixed(2)}, ${m.lon.toFixed(2)})</span>`
: ' <span class="text-muted" style="font-size:0.75em">(no coords)</span>';
return `<a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a>${loc}`;
}).join('<br>')}</td>
</tr>`;
}).join('')}</tbody>
</table>
<div class="text-muted" style="padding:8px;font-size:0.8em">
<strong>🏘️ Local</strong> &lt;50km: true prefix collision, same mesh area &nbsp;
<strong>⚡ Regional</strong> 50–200km: edge of LoRa range, possible atmospheric propagation &nbsp;
<strong>🌐 Distant</strong> &gt;200km: beyond 915MHz range — internet bridge, MQTT gateway, or separate networks
</div>`;
} catch { el.innerHTML = '<div class="text-muted">Failed to load</div>'; }
const nodes = c.nodes || [];
const distStr = c.with_coords >= 2 ? `${Math.round(c.max_dist_km)} km` : '<span class="text-muted">—</span>';
return `<tr>
<td class="mono">${c.prefix}</td>
${showAppearances ? `<td>${(c.appearances || 0).toLocaleString()}</td>` : ''}
<td>${distStr}</td>
<td title="${tooltip}">${badge}</td>
<td>${nodes.map(m => {
const loc = (m.lat && m.lon && !(m.lat === 0 && m.lon === 0))
? ` <span class="text-muted" style="font-size:0.75em">(${m.lat.toFixed(2)}, ${m.lon.toFixed(2)})</span>`
: ' <span class="text-muted" style="font-size:0.75em">(no coords)</span>';
return `<a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a>${loc}`;
}).join('<br>')}</td>
</tr>`;
}).join('')}</tbody>
</table>
<div class="text-muted" style="padding:8px;font-size:0.8em">
<strong>🏘️ Local</strong> &lt;50km: true prefix collision, same mesh area &nbsp;
<strong>⚡ Regional</strong> 50–200km: edge of LoRa range, possible atmospheric propagation &nbsp;
<strong>🌐 Distant</strong> &gt;200km: beyond 915MHz range — internet bridge, MQTT gateway, or separate networks
</div>`;
}
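The collision classifier above uses a flat-earth distance approximation: roughly 111 km per degree of latitude (near-exact everywhere) and 85 km per degree of longitude (which assumes mid-northern latitudes, since cos 40° × 111 ≈ 85). A minimal standalone sketch of that calculation and the same 50 km / 200 km thresholds — the helper names here are illustrative, not from the repo:

```javascript
// Flat-earth distance between two {lat, lon} points, matching the
// approximation in the collision classifier: 111 km/deg latitude,
// 85 km/deg longitude (an assumption valid near mid-northern latitudes).
function approxDistanceKm(a, b) {
  const dLat = (a.lat - b.lat) * 111;
  const dLon = (a.lon - b.lon) * 85;
  return Math.sqrt(dLat * dLat + dLon * dLon);
}

// Same thresholds as the diff: <50 km local, <200 km regional, else distant.
function classifyDistance(km) {
  if (km < 50) return 'local';
  if (km < 200) return 'regional';
  return 'distant';
}
```

For meshes spanning large areas, a proper haversine would be more accurate, but at LoRa-relevant distances the flat approximation is within a few percent.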
async function renderSubpaths(el) {
el.innerHTML = '<div class="text-center text-muted" style="padding:40px">Analyzing route patterns…</div>';
try {
@@ -1942,9 +1808,6 @@ function destroy() { _analyticsData = {}; _channelData = null; }
window._analyticsSaveChannelSort = saveChannelSort;
window._analyticsChannelTbodyHtml = channelTbodyHtml;
window._analyticsChannelTheadHtml = channelTheadHtml;
window._analyticsBuildOneBytePrefixMap = buildOneBytePrefixMap;
window._analyticsBuildTwoBytePrefixInfo = buildTwoBytePrefixInfo;
window._analyticsBuildCollisionHops = buildCollisionHops;
}
registerPage('analytics', { init, destroy });

View File

@@ -807,6 +807,7 @@ window.addEventListener('DOMContentLoaded', () => {
// User's localStorage preferences take priority over server config
const userTheme = (() => { try { return JSON.parse(localStorage.getItem('meshcore-user-theme') || '{}'); } catch { return {}; } })();
window._SITE_CONFIG_ORIGINAL_HOME = JSON.parse(JSON.stringify(window.SITE_CONFIG.home || {}));
mergeUserHomeConfig(window.SITE_CONFIG, userTheme);
// Apply CSS variable overrides from theme config (skipped if user has local overrides)

View File

@@ -450,7 +450,8 @@
function mergeSection(key) {
return Object.assign({}, DEFAULTS[key], cfg[key] || {}, local[key] || {});
}
var mergedHome = mergeSection('home');
var serverHome = window._SITE_CONFIG_ORIGINAL_HOME || cfg.home || {};
var mergedHome = Object.assign({}, DEFAULTS.home, serverHome, local.home || {});
var localTsMode = localStorage.getItem('meshcore-timestamp-mode');
var localTsTimezone = localStorage.getItem('meshcore-timestamp-timezone');
var localTsFormat = localStorage.getItem('meshcore-timestamp-format');
@@ -1202,19 +1203,19 @@
var tmp = state.home.steps[i];
state.home.steps[i] = state.home.steps[j];
state.home.steps[j] = tmp;
render(container);
render(container); autoSave();
});
});
container.querySelectorAll('[data-rm-step]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.steps.splice(parseInt(btn.dataset.rmStep), 1);
render(container);
render(container); autoSave();
});
});
var addStepBtn = document.getElementById('addStep');
if (addStepBtn) addStepBtn.addEventListener('click', function () {
state.home.steps.push({ emoji: '📌', title: '', description: '' });
render(container);
render(container); autoSave();
});
// Checklist
@@ -1227,13 +1228,13 @@
container.querySelectorAll('[data-rm-check]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.checklist.splice(parseInt(btn.dataset.rmCheck), 1);
render(container);
render(container); autoSave();
});
});
var addCheckBtn = document.getElementById('addCheck');
if (addCheckBtn) addCheckBtn.addEventListener('click', function () {
state.home.checklist.push({ question: '', answer: '' });
render(container);
render(container); autoSave();
});
// Footer links
@@ -1246,13 +1247,13 @@
container.querySelectorAll('[data-rm-link]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.footerLinks.splice(parseInt(btn.dataset.rmLink), 1);
render(container);
render(container); autoSave();
});
});
var addLinkBtn = document.getElementById('addLink');
if (addLinkBtn) addLinkBtn.addEventListener('click', function () {
state.home.footerLinks.push({ label: '', url: '' });
render(container);
render(container); autoSave();
});
// Export copy

View File

@@ -22,9 +22,9 @@
<meta name="twitter:title" content="CoreScope">
<meta name="twitter:description" content="Real-time MeshCore LoRa mesh network analyzer — live packet visualization, node tracking, channel decryption, and route analysis.">
<meta name="twitter:image" content="https://raw.githubusercontent.com/Kpa-clawbot/corescope/master/public/og-image.png">
<link rel="stylesheet" href="style.css?v=1775022775">
<link rel="stylesheet" href="home.css?v=1775022775">
<link rel="stylesheet" href="live.css?v=1775022775">
<link rel="stylesheet" href="style.css?v=1775076186">
<link rel="stylesheet" href="home.css?v=1775076186">
<link rel="stylesheet" href="live.css?v=1775076186">
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css"
integrity="sha256-p4NxAoJBhIIN+hmNHrzRCf9tD/miZyoHS5obTRR9BMY="
crossorigin="anonymous">
@@ -85,30 +85,30 @@
<main id="app" role="main"></main>
<script src="vendor/qrcode.js"></script>
<script src="roles.js?v=1775022775"></script>
<script src="customize.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="region-filter.js?v=1775022775"></script>
<script src="hop-resolver.js?v=1775022775"></script>
<script src="hop-display.js?v=1775022775"></script>
<script src="app.js?v=1775022775"></script>
<script src="home.js?v=1775022775"></script>
<script src="packet-filter.js?v=1775022775"></script>
<script src="packets.js?v=1775022775"></script>
<script src="geo-filter-overlay.js?v=1775022775"></script>
<script src="map.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="channels.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="nodes.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="traces.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="analytics.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v1-constellation.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v2-constellation.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-lab.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="live.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observers.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observer-detail.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="compare.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="node-analytics.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="perf.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="roles.js?v=1775076186"></script>
<script src="customize.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="region-filter.js?v=1775076186"></script>
<script src="hop-resolver.js?v=1775076186"></script>
<script src="hop-display.js?v=1775076186"></script>
<script src="app.js?v=1775076186"></script>
<script src="home.js?v=1775076186"></script>
<script src="packet-filter.js?v=1775076186"></script>
<script src="packets.js?v=1775076186"></script>
<script src="geo-filter-overlay.js?v=1775076186"></script>
<script src="map.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="channels.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="nodes.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="traces.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="analytics.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v1-constellation.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v2-constellation.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-lab.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="live.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observers.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observer-detail.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="compare.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="node-analytics.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
<script src="perf.js?v=1775076186" onerror="console.error('Failed to load:', this.src)"></script>
</body>
</html>

View File

@@ -37,6 +37,19 @@
const PANEL_WIDTH_KEY = 'meshcore-panel-width';
const PANEL_CLOSE_HTML = '<button class="panel-close-btn" title="Close detail pane (Esc)">✕</button>';
// --- Virtual scroll state ---
const VSCROLL_ROW_HEIGHT = 36; // estimated row height in px
const VSCROLL_BUFFER = 30; // extra rows above/below viewport
let _displayPackets = []; // filtered packets for current view
let _displayGrouped = false; // whether _displayPackets is in grouped mode
let _rowCounts = []; // per-entry DOM row counts (1 for flat, 1+children for expanded groups)
let _cumulativeOffsetsCache = null; // cached cumulative offsets, invalidated on _rowCounts change
let _lastVisibleStart = -1; // last rendered start index (for dirty checking)
let _lastVisibleEnd = -1; // last rendered end index (for dirty checking)
let _vsScrollHandler = null; // scroll listener reference
let _wsRenderTimer = null; // debounce timer for WS-triggered renders
let _observerFilterSet = null; // cached Set from filters.observer, hoisted above loops (#427)
function closeDetailPanel() {
var panel = document.getElementById('pktRight');
if (panel) {
@@ -396,7 +409,9 @@
packets = filtered.concat(packets);
}
totalCount += filtered.length;
renderTableRows();
// Debounce WS-triggered renders to avoid rapid full rebuilds
clearTimeout(_wsRenderTimer);
_wsRenderTimer = setTimeout(function () { renderTableRows(); }, 200);
});
});
}
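The hunk above replaces an immediate `renderTableRows()` call with a 200 ms trailing-edge debounce, so a burst of WS packets triggers one rebuild instead of many. The same pattern as a generic helper (a sketch, not code from this repo):

```javascript
// Trailing-edge debounce: each burst of calls schedules exactly one
// invocation of fn, waitMs after the last call in the burst.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}
```

Note the diff also clears `_wsRenderTimer` in `destroy()`, which is the part a generic helper doesn't give you for free: a pending timeout firing after teardown would touch a detached DOM.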
@@ -404,6 +419,14 @@
function destroy() {
if (wsHandler) offWS(wsHandler);
wsHandler = null;
detachVScrollListener();
clearTimeout(_wsRenderTimer);
_displayPackets = [];
_rowCounts = [];
_cumulativeOffsetsCache = null;
_observerFilterSet = null;
_lastVisibleStart = -1;
_lastVisibleEnd = -1;
if (_docActionHandler) { document.removeEventListener('click', _docActionHandler); _docActionHandler = null; }
if (_docMenuCloseHandler) { document.removeEventListener('click', _docMenuCloseHandler); _docMenuCloseHandler = null; }
if (_docColMenuCloseHandler) { document.removeEventListener('click', _docColMenuCloseHandler); _docColMenuCloseHandler = null; }
@@ -988,6 +1011,234 @@
makeColumnsResizable('#pktTable', 'meshcore-pkt-col-widths');
}
// Build HTML for a single grouped packet row
function buildGroupRowHtml(p) {
const isExpanded = expandedHashes.has(p.hash);
let headerObserverId = p.observer_id;
let headerPathJson = p.path_json;
if (_observerFilterSet && p._children?.length) {
const match = p._children.find(c => _observerFilterSet.has(String(c.observer_id)));
if (match) {
headerObserverId = match.observer_id;
headerPathJson = match.path_json;
}
}
const groupRegion = headerObserverId ? (observers.find(o => o.id === headerObserverId)?.iata || '') : '';
let groupPath = [];
try { groupPath = JSON.parse(headerPathJson || '[]'); } catch {}
const groupPathStr = renderPath(groupPath, headerObserverId);
const groupTypeName = payloadTypeName(p.payload_type);
const groupTypeClass = payloadTypeColor(p.payload_type);
const groupSize = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
const groupHashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const isSingle = p.count <= 1;
let html = `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" tabindex="0" role="row">
<td style="width:28px;text-align:center;cursor:pointer">${isSingle ? '' : (isExpanded ? '▼' : '▶')}</td>
<td class="col-region">${groupRegion ? `<span class="badge-region">${groupRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.latest)}</td>
<td class="mono col-hash">${truncate(p.hash || '—', 8)}</td>
<td class="col-size">${groupSize ? groupSize + 'B' : '—'}</td>
<td class="col-hashsize mono">${groupHashBytes}</td>
<td class="col-type">${p.payload_type != null ? `<span class="badge badge-${groupTypeClass}">${groupTypeName}</span>${transportBadge(p.route_type)}` : '—'}</td>
<td class="col-observer">${isSingle ? truncate(obsName(headerObserverId), 16) : truncate(obsName(headerObserverId), 10) + (p.observer_count > 1 ? ' +' + (p.observer_count - 1) : '')}</td>
<td class="col-path"><span class="path-hops">${groupPathStr}</span></td>
<td class="col-rpt">${p.observation_count > 1 ? '<span class="badge badge-obs" title="Seen ' + p.observation_count + ' times">👁 ' + p.observation_count + '</span>' : (isSingle ? '' : p.count)}</td>
<td class="col-details">${getDetailPreview((() => { try { return JSON.parse(p.decoded_json || '{}'); } catch { return {}; } })())}</td>
</tr>`;
if (isExpanded && p._children) {
let visibleChildren = p._children;
if (_observerFilterSet) {
visibleChildren = visibleChildren.filter(c => _observerFilterSet.has(String(c.observer_id)));
}
for (const c of visibleChildren) {
const typeName = payloadTypeName(c.payload_type);
const typeClass = payloadTypeColor(c.payload_type);
const size = c.raw_hex ? Math.floor(c.raw_hex.length / 2) : 0;
const childHashBytes = ((parseInt(c.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const childRegion = c.observer_id ? (observers.find(o => o.id === c.observer_id)?.iata || '') : '';
let childPath = [];
try { childPath = JSON.parse(c.path_json || '[]'); } catch {}
const childPathStr = renderPath(childPath, c.observer_id);
html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" tabindex="0" role="row">
<td></td><td class="col-region">${childRegion ? `<span class="badge-region">${childRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(c.timestamp)}</td>
<td class="mono col-hash">${truncate(c.hash || '', 8)}</td>
<td class="col-size">${size}B</td>
<td class="col-hashsize mono">${childHashBytes}</td>
<td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(c.route_type)}</td>
<td class="col-observer">${truncate(obsName(c.observer_id), 16)}</td>
<td class="col-path"><span class="path-hops">${childPathStr}</span></td>
<td class="col-rpt"></td>
<td class="col-details">${getDetailPreview((() => { try { return JSON.parse(c.decoded_json || '{}'); } catch { return {}; } })())}</td>
</tr>`;
}
}
return html;
}
// Build HTML for a single flat (ungrouped) packet row
function buildFlatRowHtml(p) {
let decoded, pathHops = [];
try { decoded = JSON.parse(p.decoded_json || '{}'); } catch {}
try { pathHops = JSON.parse(p.path_json || '[]') || []; } catch {}
const region = p.observer_id ? (observers.find(o => o.id === p.observer_id)?.iata || '') : '';
const typeName = payloadTypeName(p.payload_type);
const typeClass = payloadTypeColor(p.payload_type);
const size = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
const hashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const pathStr = renderPath(pathHops, p.observer_id);
const detail = getDetailPreview(decoded);
return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}">
<td></td><td class="col-region">${region ? `<span class="badge-region">${region}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.timestamp)}</td>
<td class="mono col-hash">${truncate(p.hash || String(p.id), 8)}</td>
<td class="col-size">${size}B</td>
<td class="col-hashsize mono">${hashBytes}</td>
<td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(p.route_type)}</td>
<td class="col-observer">${truncate(obsName(p.observer_id), 16)}</td>
<td class="col-path"><span class="path-hops">${pathStr}</span></td>
<td class="col-rpt"></td>
<td class="col-details">${detail}</td>
</tr>`;
}
// Compute the number of DOM <tr> rows a single entry produces.
// Used by both row counting and renderVisibleRows to avoid divergence (#424).
function _getRowCount(p) {
if (!_displayGrouped) return 1;
if (!expandedHashes.has(p.hash) || !p._children) return 1;
let childCount = p._children.length;
if (_observerFilterSet) {
childCount = p._children.filter(c => _observerFilterSet.has(String(c.observer_id))).length;
}
return 1 + childCount;
}
// Get the column count from the thead (dynamic, avoids hardcoded colspan — #426)
function _getColCount() {
const thead = document.querySelector('#pktLeft thead tr');
return thead ? thead.children.length : 11;
}
// Compute cumulative DOM row offsets from per-entry row counts.
// Returns array where cumulativeOffsets[i] = total <tr> rows before entry i.
function _cumulativeRowOffsets() {
if (_cumulativeOffsetsCache) return _cumulativeOffsetsCache;
const offsets = new Array(_rowCounts.length + 1);
offsets[0] = 0;
for (let i = 0; i < _rowCounts.length; i++) {
offsets[i + 1] = offsets[i] + _rowCounts[i];
}
_cumulativeOffsetsCache = offsets;
return offsets;
}
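The virtual scroller maps a scroll position to an entry index in two steps: build cumulative row offsets (a prefix sum over per-entry DOM row counts, where expanded groups contribute more than one row), then binary-search those offsets for the entry containing a given absolute DOM row. A self-contained sketch of both halves, with illustrative names:

```javascript
// Prefix sums over per-entry row counts: offsets[i] = total <tr> rows
// before entry i, so offsets[rowCounts.length] is the total row count.
function cumulativeOffsets(rowCounts) {
  const offsets = new Array(rowCounts.length + 1);
  offsets[0] = 0;
  for (let i = 0; i < rowCounts.length; i++) {
    offsets[i + 1] = offsets[i] + rowCounts[i];
  }
  return offsets;
}

// Binary search for the entry whose row span covers domRow — the same
// lower-bound loop renderVisibleRows uses.
function entryForDomRow(offsets, domRow) {
  let lo = 0, hi = offsets.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (offsets[mid + 1] <= domRow) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}
```

With row counts `[1, 3, 1]` (a flat row, an expanded group with two children, another flat row), offsets are `[0, 1, 4, 5]`: DOM rows 1–3 all resolve to entry 1.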
function renderVisibleRows() {
const tbody = document.getElementById('pktBody');
if (!tbody || !_displayPackets.length) return;
const scrollContainer = document.getElementById('pktLeft');
if (!scrollContainer) return;
// Compute total DOM rows accounting for expanded groups
const offsets = _cumulativeRowOffsets();
const totalDomRows = offsets[offsets.length - 1];
const totalHeight = totalDomRows * VSCROLL_ROW_HEIGHT;
const colCount = _getColCount();
// Get or create spacer elements
let topSpacer = document.getElementById('vscroll-top');
let bottomSpacer = document.getElementById('vscroll-bottom');
if (!topSpacer) {
topSpacer = document.createElement('tr');
topSpacer.id = 'vscroll-top';
topSpacer.innerHTML = '<td colspan="' + colCount + '" style="padding:0;border:0"></td>';
}
if (!bottomSpacer) {
bottomSpacer = document.createElement('tr');
bottomSpacer.id = 'vscroll-bottom';
bottomSpacer.innerHTML = '<td colspan="' + colCount + '" style="padding:0;border:0"></td>';
}
// Calculate visible range based on scroll position
const scrollTop = scrollContainer.scrollTop;
const viewportHeight = scrollContainer.clientHeight;
// Account for thead height (~40px)
const theadHeight = 40;
const adjustedScrollTop = Math.max(0, scrollTop - theadHeight);
// Find the first entry whose cumulative row offset covers the scroll position
const firstDomRow = Math.floor(adjustedScrollTop / VSCROLL_ROW_HEIGHT);
const visibleDomCount = Math.ceil(viewportHeight / VSCROLL_ROW_HEIGHT);
// Binary search for entry index containing firstDomRow
let lo = 0, hi = _displayPackets.length;
while (lo < hi) {
const mid = (lo + hi) >>> 1;
if (offsets[mid + 1] <= firstDomRow) lo = mid + 1;
else hi = mid;
}
const firstEntry = lo;
// Find entry index covering last visible DOM row
const lastDomRow = firstDomRow + visibleDomCount;
lo = firstEntry; hi = _displayPackets.length;
while (lo < hi) {
const mid = (lo + hi) >>> 1;
if (offsets[mid + 1] <= lastDomRow) lo = mid + 1;
else hi = mid;
}
const lastEntry = Math.min(lo + 1, _displayPackets.length);
const startIdx = Math.max(0, firstEntry - VSCROLL_BUFFER);
const endIdx = Math.min(_displayPackets.length, lastEntry + VSCROLL_BUFFER);
// Skip DOM rebuild if visible range hasn't changed
if (startIdx === _lastVisibleStart && endIdx === _lastVisibleEnd) return;
_lastVisibleStart = startIdx;
_lastVisibleEnd = endIdx;
// Compute padding using cumulative row counts
const topPad = offsets[startIdx] * VSCROLL_ROW_HEIGHT;
const bottomPad = (totalDomRows - offsets[endIdx]) * VSCROLL_ROW_HEIGHT;
topSpacer.firstChild.style.height = topPad + 'px';
bottomSpacer.firstChild.style.height = bottomPad + 'px';
// LAZY ROW GENERATION: only build HTML for the visible slice (#422)
const builder = _displayGrouped ? buildGroupRowHtml : buildFlatRowHtml;
const visibleSlice = _displayPackets.slice(startIdx, endIdx);
const visibleHtml = visibleSlice.map(p => builder(p)).join('');
tbody.innerHTML = '';
tbody.appendChild(topSpacer);
tbody.insertAdjacentHTML('beforeend', visibleHtml);
tbody.appendChild(bottomSpacer);
}
// Attach/detach scroll listener for virtual scrolling
function attachVScrollListener() {
const scrollContainer = document.getElementById('pktLeft');
if (!scrollContainer) return;
if (_vsScrollHandler) return; // already attached
let scrollRaf = null;
_vsScrollHandler = function () {
if (scrollRaf) return;
scrollRaf = requestAnimationFrame(function () {
scrollRaf = null;
renderVisibleRows();
});
};
scrollContainer.addEventListener('scroll', _vsScrollHandler, { passive: true });
}
function detachVScrollListener() {
if (!_vsScrollHandler) return;
const scrollContainer = document.getElementById('pktLeft');
if (scrollContainer) scrollContainer.removeEventListener('scroll', _vsScrollHandler);
_vsScrollHandler = null;
}
async function renderTableRows() {
const tbody = document.getElementById('pktBody');
if (!tbody) return;
@@ -997,7 +1248,7 @@
const groupBtn = document.getElementById('fGroup');
if (groupBtn) groupBtn.classList.toggle('active', groupByHash);
// Filter to claimed/favorited nodes if toggle is on — use server-side multi-node lookup
// Filter to claimed/favorited nodes — pure client-side filter (no server round-trip)
let displayPackets = packets;
if (filters.myNodes) {
const myNodes = JSON.parse(localStorage.getItem('meshcore-my-nodes') || '[]');
@@ -1005,10 +1256,10 @@
const favs = getFavorites();
const allKeys = [...new Set([...myKeys, ...favs])];
if (allKeys.length > 0) {
try {
const myData = await api('/packets?nodes=' + allKeys.join(',') + '&limit=500');
displayPackets = myData.packets || [];
} catch { displayPackets = []; }
displayPackets = displayPackets.filter(p => {
const dj = p.decoded_json || '';
return allKeys.some(k => dj.includes(k));
});
} else {
displayPackets = [];
}
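The hunk above swaps a server round-trip (`/packets?nodes=…`) for a pure client-side filter: a packet matches if its raw `decoded_json` string contains any claimed/favorited key. A sketch of that filter as a standalone function (illustrative name, same substring-scan semantics):

```javascript
// Client-side "my nodes" filter: keep packets whose decoded_json text
// mentions any of the given public keys. A substring scan over the raw
// JSON string — cheap, but it matches a key appearing in any field.
function filterMyPackets(packets, keys) {
  if (!keys.length) return [];
  return packets.filter(p => {
    const dj = p.decoded_json || '';
    return keys.some(k => dj.includes(k));
  });
}
```

The trade-off versus the old server query: no extra request and no 500-packet server limit, but only packets already loaded in the table can match.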
@@ -1040,108 +1291,31 @@
if (countEl) countEl.textContent = `(${displayPackets.length})`;
if (!displayPackets.length) {
tbody.innerHTML = '<tr><td colspan="10" class="text-center text-muted" style="padding:24px">' + (filters.myNodes ? 'No packets from your claimed/favorited nodes' : 'No packets found') + '</td></tr>';
_displayPackets = [];
_rowCounts = [];
_cumulativeOffsetsCache = null;
_observerFilterSet = null;
_lastVisibleStart = -1;
_lastVisibleEnd = -1;
detachVScrollListener();
const colCount = _getColCount();
tbody.innerHTML = '<tr><td colspan="' + colCount + '" class="text-center text-muted" style="padding:24px">' + (filters.myNodes ? 'No packets from your claimed/favorited nodes' : 'No packets found') + '</td></tr>';
return;
}
if (groupByHash) {
let html = '';
for (const p of displayPackets) {
const isExpanded = expandedHashes.has(p.hash);
// When observer filter is active, use first matching child's data for header
let headerObserverId = p.observer_id;
let headerPathJson = p.path_json;
if (filters.observer && p._children?.length) {
const obsIds = new Set(filters.observer.split(','));
const match = p._children.find(c => obsIds.has(String(c.observer_id)));
if (match) {
headerObserverId = match.observer_id;
headerPathJson = match.path_json;
}
}
const groupRegion = headerObserverId ? (observers.find(o => o.id === headerObserverId)?.iata || '') : '';
let groupPath = [];
try { groupPath = JSON.parse(headerPathJson || '[]'); } catch {}
const groupPathStr = renderPath(groupPath, headerObserverId);
const groupTypeName = payloadTypeName(p.payload_type);
const groupTypeClass = payloadTypeColor(p.payload_type);
const groupSize = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
const groupHashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const isSingle = p.count <= 1;
html += `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" tabindex="0" role="row">
<td style="width:28px;text-align:center;cursor:pointer">${isSingle ? '' : (isExpanded ? '▼' : '▶')}</td>
<td class="col-region">${groupRegion ? `<span class="badge-region">${groupRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.latest)}</td>
<td class="mono col-hash">${truncate(p.hash || '—', 8)}</td>
<td class="col-size">${groupSize ? groupSize + 'B' : '—'}</td>
<td class="col-hashsize mono">${groupHashBytes}</td>
<td class="col-type">${p.payload_type != null ? `<span class="badge badge-${groupTypeClass}">${groupTypeName}</span>${transportBadge(p.route_type)}` : '—'}</td>
<td class="col-observer">${isSingle ? truncate(obsName(headerObserverId), 16) : truncate(obsName(headerObserverId), 10) + (p.observer_count > 1 ? ' +' + (p.observer_count - 1) : '')}</td>
<td class="col-path"><span class="path-hops">${groupPathStr}</span></td>
<td class="col-rpt">${p.observation_count > 1 ? '<span class="badge badge-obs" title="Seen ' + p.observation_count + ' times">👁 ' + p.observation_count + '</span>' : (isSingle ? '' : p.count)}</td>
<td class="col-details">${getDetailPreview((() => { try { return JSON.parse(p.decoded_json || '{}'); } catch { return {}; } })())}</td>
</tr>`;
// Child rows (loaded async when expanded)
if (isExpanded && p._children) {
let visibleChildren = p._children;
// Filter children by selected observers
if (filters.observer) {
const obsSet = new Set(filters.observer.split(','));
visibleChildren = visibleChildren.filter(c => obsSet.has(String(c.observer_id)));
}
for (const c of visibleChildren) {
const typeName = payloadTypeName(c.payload_type);
const typeClass = payloadTypeColor(c.payload_type);
const size = c.raw_hex ? Math.floor(c.raw_hex.length / 2) : 0;
const childHashBytes = ((parseInt(c.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const childRegion = c.observer_id ? (observers.find(o => o.id === c.observer_id)?.iata || '') : '';
let childPath = [];
try { childPath = JSON.parse(c.path_json || '[]') || []; } catch {}
const childPathStr = renderPath(childPath, c.observer_id);
html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" tabindex="0" role="row">
<td></td><td class="col-region">${childRegion ? `<span class="badge-region">${childRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(c.timestamp)}</td>
<td class="mono col-hash">${truncate(c.hash || '', 8)}</td>
<td class="col-size">${size}B</td>
<td class="col-hashsize mono">${childHashBytes}</td>
<td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(c.route_type)}</td>
<td class="col-observer">${truncate(obsName(c.observer_id), 16)}</td>
<td class="col-path"><span class="path-hops">${childPathStr}</span></td>
<td class="col-rpt"></td>
<td class="col-details">${getDetailPreview((() => { try { return JSON.parse(c.decoded_json || '{}'); } catch { return {}; } })())}</td>
</tr>`;
}
}
}
tbody.innerHTML = html;
return;
}
// Lazy virtual scroll: store display packets and row counts, but do NOT
// pre-generate HTML strings. HTML is built on-demand in renderVisibleRows()
// for only the visible slice + buffer (#422).
_lastVisibleStart = -1;
_lastVisibleEnd = -1;
_displayPackets = displayPackets;
_displayGrouped = groupByHash;
_observerFilterSet = filters.observer ? new Set(filters.observer.split(',')) : null;
_rowCounts = displayPackets.map(p => _getRowCount(p));
_cumulativeOffsetsCache = null;
tbody.innerHTML = displayPackets.map(p => {
let decoded = {}, pathHops = [];
try { decoded = JSON.parse(p.decoded_json || '{}') || {}; } catch {}
try { pathHops = JSON.parse(p.path_json || '[]') || []; } catch {}
const region = p.observer_id ? (observers.find(o => o.id === p.observer_id)?.iata || '') : '';
const typeName = payloadTypeName(p.payload_type);
const typeClass = payloadTypeColor(p.payload_type);
const size = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
const hashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const pathStr = renderPath(pathHops, p.observer_id);
const detail = getDetailPreview(decoded);
return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}">
<td></td><td class="col-region">${region ? `<span class="badge-region">${region}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.timestamp)}</td>
<td class="mono col-hash">${truncate(p.hash || String(p.id), 8)}</td>
<td class="col-size">${size}B</td>
<td class="col-hashsize mono">${hashBytes}</td>
<td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(p.route_type)}</td>
<td class="col-observer">${truncate(obsName(p.observer_id), 16)}</td>
<td class="col-path"><span class="path-hops">${pathStr}</span></td>
<td class="col-rpt"></td>
<td class="col-details">${detail}</td>
</tr>`;
}).join('');
attachVScrollListener();
renderVisibleRows();
}
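As a companion to the lazy-rendering comment in renderTableRows above, here is a minimal sketch of how a scroll position can be mapped to the visible entry slice via the cumulative row offsets. The row height, buffer size, and function name are illustrative assumptions, not the actual renderVisibleRows() implementation:

```javascript
// Sketch only, not the production renderVisibleRows(). Assumes a fixed row
// height and the cumulative DOM-row offsets built in renderTableRows().
const ROW_HEIGHT = 36;   // px per DOM row (matches VSCROLL_ROW_HEIGHT in tests)
const BUFFER = 10;       // extra rows above/below the viewport (assumption)

// Map a scroll position to the slice of packet entries whose DOM rows
// intersect [scrollTop, scrollTop + viewportHeight], padded by BUFFER.
function visibleEntryRange(offsets, scrollTop, viewportHeight) {
  const firstRow = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - BUFFER);
  const lastRow = Math.ceil((scrollTop + viewportHeight) / ROW_HEIGHT) + BUFFER;
  // Binary search: largest entry index whose cumulative offset <= firstRow.
  let lo = 0, hi = offsets.length - 2, startIdx = 0;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (offsets[mid] <= firstRow) { startIdx = mid; lo = mid + 1; }
    else hi = mid - 1;
  }
  // Walk forward until the entry's first row passes lastRow.
  let endIdx = startIdx;
  while (endIdx < offsets.length - 1 && offsets[endIdx] < lastRow) endIdx++;
  return { startIdx, endIdx }; // HTML is built only for this slice
}
```

Because offsets count DOM rows rather than entries, expanded groups with children occupy proportionally more scroll height without any per-frame HTML generation for off-screen rows.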
function getDetailPreview(decoded) {
@@ -1246,7 +1420,7 @@
let decoded;
try { decoded = JSON.parse(pkt.decoded_json); } catch { decoded = {}; }
let pathHops;
try { pathHops = JSON.parse(pkt.path_json || '[]'); } catch { pathHops = []; }
try { pathHops = JSON.parse(pkt.path_json || '[]') || []; } catch { pathHops = []; }
// Resolve sender GPS — from packet directly, or from known node in DB
let senderLat = decoded.lat != null ? decoded.lat : (decoded.latitude || null);


@@ -1792,153 +1792,6 @@ console.log('\n=== analytics.js: sortChannels ===');
});
}
// === analytics.js: hash prefix helpers ===
console.log('\n=== analytics.js: hash prefix helpers ===');
{
const ctx = (() => {
const c = makeSandbox();
c.getComputedStyle = () => ({ getPropertyValue: () => '' });
c.registerPage = () => {};
c.api = () => Promise.resolve({});
c.timeAgo = () => '—';
c.RegionFilter = { init: () => {}, onChange: () => {}, regionQueryString: () => '' };
c.onWS = () => {};
c.offWS = () => {};
c.connectWS = () => {};
c.invalidateApiCache = () => {};
c.makeColumnsResizable = () => {};
c.initTabBar = () => {};
c.IATA_COORDS_GEO = {};
loadInCtx(c, 'public/roles.js');
loadInCtx(c, 'public/app.js');
try { loadInCtx(c, 'public/analytics.js'); } catch (e) {
for (const k of Object.keys(c.window)) c[k] = c.window[k];
}
return c;
})();
const buildOne = ctx.window._analyticsBuildOneBytePrefixMap;
const buildTwo = ctx.window._analyticsBuildTwoBytePrefixInfo;
const buildHops = ctx.window._analyticsBuildCollisionHops;
const node = (pk, extra) => ({ public_key: pk, name: pk.slice(0, 4), ...(extra || {}) });
test('buildOneBytePrefixMap export exists', () => assert.ok(buildOne, 'must be exported'));
test('buildTwoBytePrefixInfo export exists', () => assert.ok(buildTwo, 'must be exported'));
test('buildCollisionHops export exists', () => assert.ok(buildHops, 'must be exported'));
// --- 1-byte prefix map ---
test('1-byte map has 256 keys', () => {
const m = buildOne([]);
assert.strictEqual(Object.keys(m).length, 256);
});
test('1-byte map places node in correct bucket', () => {
const n = node('AABBCC');
const m = buildOne([n]);
assert.strictEqual(m['AA'].length, 1);
assert.strictEqual(m['AA'][0].public_key, 'AABBCC');
assert.strictEqual(m['BB'].length, 0);
});
test('1-byte map groups two nodes with same prefix', () => {
const a = node('AA1111'), b = node('AA2222');
const m = buildOne([a, b]);
assert.strictEqual(m['AA'].length, 2);
});
test('1-byte map is case-insensitive for node keys', () => {
const n = node('aabbcc');
const m = buildOne([n]);
assert.strictEqual(m['AA'].length, 1);
});
test('1-byte map: empty input yields all empty buckets', () => {
const m = buildOne([]);
assert.ok(Object.values(m).every(v => v.length === 0));
});
// --- 2-byte prefix info ---
test('2-byte info has 256 first-byte keys', () => {
const info = buildTwo([]);
assert.strictEqual(Object.keys(info).length, 256);
});
test('2-byte info: no nodes → zero collisions', () => {
const info = buildTwo([]);
assert.ok(Object.values(info).every(e => e.collisionCount === 0));
});
test('2-byte info: node placed in correct first-byte group', () => {
const n = node('AABB1122');
const info = buildTwo([n]);
assert.strictEqual(info['AA'].groupNodes.length, 1);
assert.strictEqual(info['BB'].groupNodes.length, 0);
});
test('2-byte info: same 2-byte prefix = collision', () => {
const a = node('AABB0001'), b = node('AABB0002');
const info = buildTwo([a, b]);
assert.strictEqual(info['AA'].collisionCount, 1);
assert.strictEqual(info['AA'].maxCollision, 2);
});
test('2-byte info: different 2-byte prefixes in same group = no collision', () => {
const a = node('AA110001'), b = node('AA220002');
const info = buildTwo([a, b]);
assert.strictEqual(info['AA'].collisionCount, 0);
assert.strictEqual(info['AA'].maxCollision, 0);
});
test('2-byte info: twoByteMap built correctly', () => {
const a = node('AABB0001'), b = node('AABB0002'), c = node('AACC0003');
const info = buildTwo([a, b, c]);
assert.strictEqual(Object.keys(info['AA'].twoByteMap).length, 2);
assert.strictEqual(info['AA'].twoByteMap['AABB'].length, 2);
assert.strictEqual(info['AA'].twoByteMap['AACC'].length, 1);
});
// --- 3-byte stat summary (via buildCollisionHops) ---
test('buildCollisionHops: no collisions returns empty array', () => {
const nodes = [node('AA000001'), node('BB000002'), node('CC000003')];
assert.deepStrictEqual(buildHops(nodes, 1), []);
});
test('buildCollisionHops: detects 1-byte collision', () => {
const nodes = [node('AA000001'), node('AA000002')];
const hops = buildHops(nodes, 1);
assert.strictEqual(hops.length, 1);
assert.strictEqual(hops[0].hex, 'AA');
assert.strictEqual(hops[0].count, 2);
});
test('buildCollisionHops: detects 2-byte collision', () => {
const nodes = [node('AABB0001'), node('AABB0002'), node('AACC0003')];
const hops = buildHops(nodes, 2);
assert.strictEqual(hops.length, 1);
assert.strictEqual(hops[0].hex, 'AABB');
assert.strictEqual(hops[0].count, 2);
});
test('buildCollisionHops: detects 3-byte collision', () => {
const nodes = [node('AABBCC0001'), node('AABBCC0002')];
const hops = buildHops(nodes, 3);
assert.strictEqual(hops.length, 1);
assert.strictEqual(hops[0].hex, 'AABBCC');
});
test('buildCollisionHops: size field set correctly', () => {
const nodes = [node('AABB0001'), node('AABB0002')];
const hops = buildHops(nodes, 2);
assert.strictEqual(hops[0].size, 2);
});
test('buildCollisionHops: empty input returns empty array', () => {
assert.deepStrictEqual(buildHops([], 1), []);
assert.deepStrictEqual(buildHops([], 2), []);
assert.deepStrictEqual(buildHops([], 3), []);
});
}
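The prefix helpers exercised above are internal to analytics.js. As a hedged illustration, a 1-byte prefix map consistent with these tests (256 always-present hex buckets, case-insensitive node keys) could be sketched as:

```javascript
// Sketch of 1-byte prefix bucketing consistent with the tests above; this is
// not the actual analytics.js implementation.
function buildOneBytePrefixMapSketch(nodes) {
  const map = {};
  // One bucket per possible first byte, '00'..'FF' (256 keys, always present).
  for (let b = 0; b < 256; b++) {
    map[b.toString(16).padStart(2, '0').toUpperCase()] = [];
  }
  for (const n of nodes) {
    // Uppercase the node's first hex byte so 'aabbcc' lands in bucket 'AA'.
    const prefix = String(n.public_key || '').slice(0, 2).toUpperCase();
    if (map[prefix]) map[prefix].push(n);
  }
  return map;
}
```

The 2-byte variant follows the same shape, grouping by the first byte and then sub-bucketing by the first two bytes to count collisions.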
// ===== CUSTOMIZE.JS: initState merge behavior =====
console.log('\n=== customize.js: initState merge behavior ===');
@@ -2107,6 +1960,43 @@ console.log('\n=== customize.js: initState merge behavior ===');
assert.strictEqual(state.theme.accent, '#abcdef');
assert.strictEqual(state.theme.navBg, '#fedcba');
});
test('initState uses _SITE_CONFIG_ORIGINAL_HOME to bypass contaminated SITE_CONFIG.home', () => {
// Simulates: app.js called mergeUserHomeConfig which mutated SITE_CONFIG.home.steps = []
// The original server steps must still be recoverable via _SITE_CONFIG_ORIGINAL_HOME
const ctx = makeSandbox();
ctx.setTimeout = function (fn) { fn(); return 1; };
ctx.clearTimeout = function () {};
// SITE_CONFIG.home is contaminated — steps wiped by mergeUserHomeConfig at page load
ctx.window.SITE_CONFIG = {
home: {
heroTitle: 'Server Hero',
steps: [] // contaminated — user had steps:[] in localStorage at page load
}
};
// app.js snapshots original before mutation
ctx.window._SITE_CONFIG_ORIGINAL_HOME = {
heroTitle: 'Server Hero',
steps: [{ emoji: '🧪', title: 'Original Step', description: 'from server' }]
};
const ex = loadCustomizeExports(ctx);
ex.initState();
const state = ex.getState();
assert.strictEqual(state.home.steps.length, 1, 'should restore from snapshot, not contaminated SITE_CONFIG');
assert.strictEqual(state.home.steps[0].title, 'Original Step');
});
test('initState uses DEFAULTS.home when no SITE_CONFIG and no snapshot', () => {
const ctx = makeSandbox();
ctx.setTimeout = function (fn) { fn(); return 1; };
ctx.clearTimeout = function () {};
// No SITE_CONFIG at all — pure DEFAULTS
const ex = loadCustomizeExports(ctx);
ex.initState();
const state = ex.getState();
assert.ok(state.home.steps.length > 0, 'should use DEFAULTS.home.steps when no server config');
assert.strictEqual(state.home.steps[0].title, 'Join the Bay Area MeshCore Discord');
});
}
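The contamination scenario these tests set up depends on a snapshot-before-mutate pattern in app.js. A minimal sketch, assuming a simplified merge: only SITE_CONFIG and _SITE_CONFIG_ORIGINAL_HOME are names taken from the tests, and snapshotThenMerge is a hypothetical stand-in for the real merge path:

```javascript
// Sketch of the snapshot-before-mutate pattern the tests above rely on.
// The merge body here is an illustrative assumption, not mergeUserHomeConfig.
function snapshotThenMerge(win, userHome) {
  // Deep-copy the server-provided home config BEFORE any mutation, so
  // reset/initState can recover the original steps later.
  if (win.SITE_CONFIG && !win._SITE_CONFIG_ORIGINAL_HOME) {
    win._SITE_CONFIG_ORIGINAL_HOME = JSON.parse(JSON.stringify(win.SITE_CONFIG.home));
  }
  // The merge may overwrite steps (e.g. userHome.steps = []), "contaminating"
  // SITE_CONFIG.home, but the snapshot preserves the server values.
  Object.assign(win.SITE_CONFIG.home, userHome);
}
```

initState can then prefer the snapshot over the possibly mutated SITE_CONFIG.home, which is exactly what the first test above asserts.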
// ===== APP.JS: home rehydration merge =====
@@ -2642,6 +2532,207 @@ console.log('\n=== packets.js: savedTimeWindowMin defaults ===');
assert.ok(deltaMin > 10 && deltaMin < 25, `expected capped ~15m window, got ${deltaMin.toFixed(2)}m`);
});
}
// ===== My Nodes client-side filter (issue #381) =====
{
console.log('\n--- My Nodes client-side filter ---');
// Simulate the client-side filter logic from packets.js renderTableRows()
function filterMyNodes(packets, allKeys) {
if (!allKeys.length) return [];
return packets.filter(p => {
const dj = p.decoded_json || '';
return allKeys.some(k => dj.includes(k));
});
}
const testPackets = [
{ decoded_json: '{"pubKey":"abc123","name":"Node1"}' },
{ decoded_json: '{"pubKey":"def456","name":"Node2"}' },
{ decoded_json: '{"pubKey":"ghi789","name":"Node3","hops":["abc123"]}' },
{ decoded_json: '' },
{ decoded_json: null },
];
test('filters packets matching a single pubkey', () => {
const result = filterMyNodes(testPackets, ['abc123']);
assert.strictEqual(result.length, 2, 'should match sender + hop');
assert.ok(result[0].decoded_json.includes('abc123'));
assert.ok(result[1].decoded_json.includes('abc123'));
});
test('filters packets matching multiple pubkeys', () => {
const result = filterMyNodes(testPackets, ['abc123', 'def456']);
assert.strictEqual(result.length, 3);
});
test('returns empty array for no matching keys', () => {
const result = filterMyNodes(testPackets, ['zzz999']);
assert.strictEqual(result.length, 0);
});
test('returns empty array when allKeys is empty', () => {
const result = filterMyNodes(testPackets, []);
assert.strictEqual(result.length, 0);
});
test('handles null/empty decoded_json gracefully', () => {
const result = filterMyNodes(testPackets, ['abc123']);
assert.strictEqual(result.length, 2);
});
}
// ===== Packets page: virtual scroll infrastructure =====
{
console.log('\nPackets page — virtual scroll:');
const packetsSource = fs.readFileSync('public/packets.js', 'utf8');
// --- Behavioral tests using extracted logic ---
// Extract _cumulativeRowOffsets logic for testing
function cumulativeRowOffsets(rowCounts) {
const offsets = new Array(rowCounts.length + 1);
offsets[0] = 0;
for (let i = 0; i < rowCounts.length; i++) {
offsets[i + 1] = offsets[i] + rowCounts[i];
}
return offsets;
}
// Extract _getRowCount logic for testing (#424 — single source of truth)
function getRowCount(p, grouped, expandedHashes, observerFilterSet) {
if (!grouped) return 1;
if (!expandedHashes.has(p.hash) || !p._children) return 1;
let childCount = p._children.length;
if (observerFilterSet) {
childCount = p._children.filter(c => observerFilterSet.has(String(c.observer_id))).length;
}
return 1 + childCount;
}
test('cumulativeRowOffsets computes correct offsets for flat rows', () => {
const counts = [1, 1, 1, 1, 1];
const offsets = cumulativeRowOffsets(counts);
assert.deepStrictEqual(offsets, [0, 1, 2, 3, 4, 5]);
});
test('cumulativeRowOffsets handles expanded groups with multiple rows', () => {
const counts = [1, 4, 1];
const offsets = cumulativeRowOffsets(counts);
assert.deepStrictEqual(offsets, [0, 1, 5, 6]);
assert.strictEqual(offsets[offsets.length - 1], 6);
});
test('total scroll height accounts for expanded group rows', () => {
const VSCROLL_ROW_HEIGHT = 36;
const counts = [1, 4, 1, 4, 1];
const offsets = cumulativeRowOffsets(counts);
const totalDomRows = offsets[offsets.length - 1];
assert.strictEqual(totalDomRows, 11);
assert.strictEqual(totalDomRows * VSCROLL_ROW_HEIGHT, 396);
});
test('scroll height with all collapsed equals entries * row height', () => {
const VSCROLL_ROW_HEIGHT = 36;
const counts = [1, 1, 1, 1, 1];
const offsets = cumulativeRowOffsets(counts);
const totalDomRows = offsets[offsets.length - 1];
assert.strictEqual(totalDomRows * VSCROLL_ROW_HEIGHT, 5 * VSCROLL_ROW_HEIGHT);
});
// --- Behavioral tests for _getRowCount (#424, #428 — test logic, not source strings) ---
test('getRowCount returns 1 for flat (ungrouped) mode', () => {
const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}] };
assert.strictEqual(getRowCount(p, false, new Set(), null), 1);
});
test('getRowCount returns 1 for collapsed group', () => {
const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}] };
assert.strictEqual(getRowCount(p, true, new Set(), null), 1);
});
test('getRowCount returns 1+children for expanded group', () => {
const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}, {observer_id: '3'}] };
const expanded = new Set(['abc']);
assert.strictEqual(getRowCount(p, true, expanded, null), 4);
});
test('getRowCount filters children by observer set', () => {
const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}, {observer_id: '3'}] };
const expanded = new Set(['abc']);
const obsFilter = new Set(['1', '3']);
assert.strictEqual(getRowCount(p, true, expanded, obsFilter), 3);
});
test('getRowCount returns 1 for expanded group with no _children', () => {
const p = { hash: 'abc' };
const expanded = new Set(['abc']);
assert.strictEqual(getRowCount(p, true, expanded, null), 1);
});
test('renderVisibleRows uses cumulative offsets not flat entry count', () => {
assert.ok(packetsSource.includes('_cumulativeRowOffsets'),
'renderVisibleRows should use cumulative row offsets');
assert.ok(!packetsSource.includes('const totalRows = _displayPackets.length'),
'should NOT use flat array length for total row count');
});
test('renderVisibleRows skips DOM rebuild when range unchanged', () => {
assert.ok(packetsSource.includes('startIdx === _lastVisibleStart && endIdx === _lastVisibleEnd'),
'should skip rebuild when range is unchanged');
});
test('lazy row generation — HTML built only for visible slice', () => {
assert.ok(!packetsSource.includes('_lastRenderedRows'),
'should NOT have pre-built row HTML cache');
assert.ok(packetsSource.includes('_displayPackets.slice(startIdx, endIdx)'),
'should slice display packets for visible range');
assert.ok(packetsSource.includes('visibleSlice.map(p => builder(p))'),
'should build HTML lazily per visible packet');
});
test('observer filter Set is hoisted, not recreated per-packet', () => {
assert.ok(packetsSource.includes('_observerFilterSet = filters.observer ? new Set(filters.observer.split'),
'observer filter Set should be created once in renderTableRows');
assert.ok(packetsSource.includes('_observerFilterSet.has(String(c.observer_id))'),
'buildGroupRowHtml should use hoisted _observerFilterSet');
});
test('buildFlatRowHtml has null-safe decoded_json', () => {
const flatBuilderMatch = packetsSource.match(/function buildFlatRowHtml[\s\S]*?(?=\n function )/);
assert.ok(flatBuilderMatch, 'buildFlatRowHtml should exist');
assert.ok(flatBuilderMatch[0].includes("p.decoded_json || '{}'"),
'buildFlatRowHtml should have null-safe decoded_json fallback');
});
test('pathHops null guard in buildFlatRowHtml (issue #451)', () => {
const flatBuilderMatch = packetsSource.match(/function buildFlatRowHtml[\s\S]*?(?=\n function )/);
assert.ok(flatBuilderMatch, 'buildFlatRowHtml should exist');
// The JSON.parse result must be coalesced with || [] to handle literal null from path_json
assert.ok(flatBuilderMatch[0].includes("|| '[]') || []"),
'buildFlatRowHtml should coalesce parsed path_json with || [] to guard against null');
});
test('pathHops null guard in detail pane (issue #451)', () => {
// The detail pane (selectPacket / showPacketDetail) also parses path_json
const detailMatch = packetsSource.match(/let pathHops;\s*try \{[^}]+\} catch/);
assert.ok(detailMatch, 'detail pane pathHops parsing should exist');
assert.ok(detailMatch[0].includes("|| '[]') || []"),
'detail pane should coalesce parsed path_json with || [] to guard against null');
});
test('destroy cleans up virtual scroll state', () => {
assert.ok(packetsSource.includes('detachVScrollListener'),
'destroy should detach virtual scroll listener');
assert.ok(packetsSource.includes("_displayPackets = []"),
'destroy should reset display packets');
assert.ok(packetsSource.includes("_rowCounts = []"),
'destroy should reset row counts');
assert.ok(packetsSource.includes("_lastVisibleStart = -1"),
'destroy should reset visible start');
});
}
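The `|| []` coalesce asserted in the #451 tests matters because JSON.parse of the literal string "null" returns null without throwing, so the catch branch alone is not enough. A standalone sketch of the guarded parse:

```javascript
// Why the || [] coalesce is needed (issue #451): a path_json column holding
// the literal string "null" parses successfully to null, with no exception,
// so the catch fallback never fires. The trailing || [] covers that case.
function parsePathHops(pathJson) {
  let hops;
  try { hops = JSON.parse(pathJson || '[]') || []; } catch { hops = []; }
  return hops;
}
```

The same reasoning applies to decoded_json: guard both the missing value (`|| '[]'` before parsing) and the parsed-null result (`|| []` after).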
// ===== SUMMARY =====
Promise.allSettled(pendingTests).then(() => {
console.log(`\n${'═'.repeat(40)}`);