Mirror of https://github.com/Kpa-clawbot/meshcore-analyzer.git (synced 2026-05-14 00:55:05 +00:00)
4ed272761ba94d99912a08e31533ee484ad3f1cd
37 Commits
fb744d895f |
fix(#1143): structural pubkey attribution via from_pubkey column (#1152)
Fixes #1143.

## Summary

Replaces the structurally unsound `decoded_json LIKE '%pubkey%'` (and `OR LIKE '%name%'`) attribution path with an exact-match lookup on a dedicated, indexed `transmissions.from_pubkey` column. This closes both holes documented in #1143:

- **Hole 1** — same-name false positives via `OR LIKE '%name%'`
- **Hole 2a** — adversarial spoofing: a malicious node names itself with another node's pubkey and gets attributed to the victim
- **Hole 2b** — accidental false positive when any free-text field (path elements, channel names, message bodies) contains a 64-char hex substring matching a real pubkey
- **Perf** — query now uses an index instead of a full-table scan against `LIKE '%substring%'`

## TDD

Two-commit history shows red-then-green:

| Commit | Status | Purpose |
|---|---|---|
| `7f0f08e` | RED — tests assertion-fail on master behaviour | Adversarial fixtures + spec |
| `59327db` | GREEN — schema + ingestor + server + migration | Implementation |

The red commit's test schema includes the new column so the file compiles, but the production code still uses LIKE — the assertions fail because the malicious / same-name / free-text rows are returned. The green commit changes the query plus adds the migration/ingest path.

## Changes

### Schema
- new column `transmissions.from_pubkey TEXT`
- new index `idx_transmissions_from_pubkey`

### Ingestor (`cmd/ingestor/`)
- `PacketData.FromPubkey` populated from decoded ADVERT `pubKey` at write time. Cheap — already parsing `decoded_json`. Non-ADVERTs stay NULL.
- `stmtInsertTransmission` writes the column.
- Migration `from_pubkey_v1` ALTERs legacy DBs to add the column + index.
- Bonus: rewrote the recipe in the gated one-shot `advert_count_unique_v1` migration to use `from_pubkey` (already marked done on existing DBs; kept correct for fresh installs).

### Server (`cmd/server/`)
- `ensureFromPubkeyColumn` mirrors the ingestor migration so the server can boot against a DB the ingestor has never touched (e2e fixture, fresh installs).
- `backfillFromPubkeyAsync` runs **after** HTTP starts. Scans `WHERE from_pubkey IS NULL AND payload_type = 4` in 5000-row chunks with a 100ms yield between chunks. Cannot block boot even on prod-sized DBs (100K+ transmissions). Queries handle NULL gracefully (return empty for that pubkey, same as today's unknown-pubkey path).
- All in-scope LIKE call sites switched to exact match:

| Site | Before | After |
|---|---|---|
| `buildPacketWhere` (was db.go:582) | `decoded_json LIKE '%pubkey%'` | `from_pubkey = ?` |
| `buildTransmissionWhere` (was db.go:626) | `t.decoded_json LIKE '%pubkey%'` | `t.from_pubkey = ?` |
| `GetRecentTransmissionsForNode` (was db.go:910) | `LIKE '%pubkey%' OR LIKE '%name%'` | `t.from_pubkey = ?` |
| `QueryMultiNodePackets` (was db.go:1785) | `decoded_json LIKE '%pubkey%' OR ...` | `t.from_pubkey IN (?, ?, ...)` |
| `advert_count_unique_v1` (was ingestor/db.go:257) | `decoded_json LIKE '%' \|\| nodes.public_key \|\| '%'` | `t.from_pubkey = nodes.public_key` |

`GetRecentTransmissionsForNode` signature simplifies: the `name` parameter is gone (it was only ever used for the legacy `OR LIKE '%name%'` fallback). Sole caller in `routes.go:1243` updated.

### Tests
- `cmd/server/from_pubkey_attribution_test.go` — adversarial fixtures + Hole 1/2a/2b/QueryMultiNodePackets exact-match assertions, EXPLAIN QUERY PLAN index check, migration backfill correctness.
- `cmd/ingestor/from_pubkey_test.go` — write-time correctness (BuildPacketData populates FromPubkey for ADVERT only; InsertTransmission persists it; non-ADVERTs stay NULL).
- Existing test schemas (server v2, server v3, coverage) get the new column **plus a SQLite trigger** that auto-populates `from_pubkey` from `decoded_json` on ADVERT inserts. This means existing fixtures (which only seed `decoded_json`) keep attributing correctly without per-test edits.
- `seedTestData`'s ADVERTs explicitly set `from_pubkey`.

## Performance — index is used

```
$ EXPLAIN QUERY PLAN SELECT id FROM transmissions WHERE from_pubkey = ?
SEARCH transmissions USING INDEX idx_transmissions_from_pubkey (from_pubkey=?)
```

Asserted in `TestFromPubkeyIndexUsed`.

## Migration approach

- **Sync at boot**: `ALTER TABLE transmissions ADD COLUMN from_pubkey TEXT` is a metadata-only operation in SQLite — microseconds regardless of table size. `CREATE INDEX IF NOT EXISTS idx_transmissions_from_pubkey` is **not** metadata-only: it scans the table once. Empirically a few hundred ms on a 100K-row table; expect a few seconds on a 10M-row table (one-time cost, blocking boot during that window). Subsequent boots no-op via `IF NOT EXISTS`. If this boot delay becomes an operational concern at prod scale we can defer the `CREATE INDEX` to a goroutine — for now a few-second one-time delay is acceptable.
- **Async**: row-level backfill of legacy NULL ADVERTs (chunked 5000 / 100ms yield). On a 100K-ADVERT prod DB, this completes in seconds in the background; HTTP is fully available throughout.
- **Safety**: queries handle NULL gracefully — a node whose ADVERTs haven't backfilled yet returns empty, identical to today's behaviour for unknown pubkeys. No half-state regression.

## Out of scope (intentionally)

The free-text `LIKE` paths the issue explicitly leaves alone (e.g. user-typed packet search) are untouched. Only the pubkey-attribution sites get the column treatment.

## Cycle-3 review fixes

| Finding | Status | Commit |
|---|---|---|
| **M1c** — async-contract test was tautological (test's own `go`, not production's) | Fixed | `23ace71` (red) → `a05b50c` (green) |
| **m1c** — package-global atomic resets unsafe under `t.Parallel()` | Fixed (`// DO NOT t.Parallel` comment + `Reset()` helper) | rolled into `23ace71` / `241ec69` |
| **m2c** — `/api/healthz` read 3 atomics non-atomically (torn snapshot) | Fixed (single RWMutex-guarded snapshot + race test) | `241ec69` |
| **n3c.m1** — vestigial OR-scaffolding in `QueryMultiNodePackets` | Fixed (cleanup) | `5a53ceb` |
| **n3c.m2** — verify PR body language about `ALTER` vs `CREATE INDEX` | Verified accurate (already corrected in cycle 2) | (no change) |
| **n3c.m3** — `json.Unmarshal` per row in backfill → could use SQL `json_extract` | **Deferred as known followup** — pure perf optimization (current per-row Unmarshal is correct, just slower); SQL rewrite would unwind the chunked-yield architecture and is non-trivial. Acceptable for one-time backfill at boot on legacy DBs. | |

### M1c implementation detail

`startFromPubkeyBackfill(dbPath, chunkSize, yieldDuration)` is now the single production entry point used by `main.go`. It internally does `go backfillFromPubkeyAsync(...)`. The test calls `startFromPubkeyBackfill` (no `go` prefix) and asserts the dispatch returns within 50ms — so if anyone removes the `go` keyword inside the wrapper, the test fails.

**Manually verified**: removing the `go` keyword causes `TestBackfillFromPubkey_DoesNotBlockBoot` to fail with "backfill dispatch took ~1s (>50ms): not async — would block boot."

### m2c implementation detail

`fromPubkeyBackfillTotal/Processed/Done` are now plain `int64`/`bool` package globals guarded by a single `sync.RWMutex`. `fromPubkeyBackfillSnapshot()` returns all three under one RLock. `TestHealthzFromPubkeyBackfillConsistentSnapshot` races a writer (lock-step total/processed updates with periodic done flips) against 8 readers hammering `/api/healthz`, asserting `processed<=total` and `(done => processed==total)` on every response. Verified the test catches torn reads (manually injected a 3-RLock implementation; test failed within milliseconds with "processed>total" and "done=true but processed!=total" errors).

---------

Co-authored-by: openclaw-bot <bot@openclaw.local>
Co-authored-by: openclaw-bot <bot@openclaw.dev>
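The exact-match lookup and the boot-time migration described above can be sketched in a few lines. This is a minimal illustration against a `database/sql` handle; it reuses the column, index, and helper names from the summary (`ensureFromPubkeyColumn`), but the function bodies are assumptions, not the PR's actual code.

```go
package sketch

import (
	"database/sql"
	"strings"
)

// ensureFromPubkeyColumn adds the from_pubkey column and its index if missing.
// ALTER TABLE ... ADD COLUMN is metadata-only in SQLite; CREATE INDEX scans the
// table once and is a no-op on later boots thanks to IF NOT EXISTS.
func ensureFromPubkeyColumn(db *sql.DB) error {
	if _, err := db.Exec(`ALTER TABLE transmissions ADD COLUMN from_pubkey TEXT`); err != nil {
		// Column may already exist; SQLite reports "duplicate column name".
		if !strings.Contains(err.Error(), "duplicate column name") {
			return err
		}
	}
	_, err := db.Exec(`CREATE INDEX IF NOT EXISTS idx_transmissions_from_pubkey
	                   ON transmissions(from_pubkey)`)
	return err
}

// transmissionIDsForPubkey is the exact-match replacement for the old
// decoded_json LIKE '%pubkey%' scan. Rows with NULL from_pubkey never match,
// which preserves the "unknown pubkey returns empty" behaviour.
func transmissionIDsForPubkey(db *sql.DB, pubkey string) ([]int64, error) {
	rows, err := db.Query(`SELECT id FROM transmissions WHERE from_pubkey = ?`, pubkey)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```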
fc57433f27 |
fix(analytics): merge channel buckets by hash byte; reject rainbow-table mismatches (closes #978) (#980)
## Summary

Closes #978 — analytics channels duplicated by encrypted/decrypted split + rainbow-table collisions.

## Root cause

Two distinct bugs in `computeAnalyticsChannels` (`cmd/server/store.go`):

1. **Encrypted/decrypted split**: The grouping key included the decoded channel name (`hash + "_" + channel`), so packets from observers that could decrypt a channel created a separate bucket from packets where decryption failed. Same physical channel, two entries.
2. **Rainbow-table collisions**: Some observers' lookup tables map hash bytes to wrong channel names. E.g., hash `72` incorrectly claimed to be `#wardriving` (real hash is `129`). This created ghost 1-message entries.

## Fix

1. **Always group by hash byte alone** (drop `_channel` suffix from `chKey`). When any packet decrypts successfully, upgrade the bucket's display name from placeholder (`chN`) to the real name (first-decrypter-wins for stability).
2. **Validate channel names** against the firmware hash invariant: `SHA256(SHA256("#name")[:16])[0] == channelHash`. Mismatches are treated as encrypted (placeholder name, no trust in decoded channel). Guard is in the analytics handler (not the ingestor) to avoid breaking other surfaces that use the decoded field for display.

## Verification (e2e-fixture.db)

| Metric | BEFORE | AFTER |
|--------|--------|-------|
| Total channels | 22 | 19 |
| Duplicate hash bytes | 3 (hashes 217, 202, 17) | 0 |

## Tests added

- `TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted` — same hash, mixed encrypted/decrypted → ONE bucket
- `TestComputeAnalyticsChannels_RejectsRainbowTableMismatch` — hash 72 claimed as `#wardriving` (real=129) → rejected, stays `ch72`
- `TestChannelNameMatchesHash` — unit test for hash validation helper
- `TestIsPlaceholderName` — unit test for placeholder detection

Anti-tautology gate: both main tests fail when their respective fix lines are reverted.

Co-authored-by: you <you@example.com>
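The firmware hash invariant quoted above is cheap to check directly. A minimal sketch, assuming the channel name is passed with its leading `#` and that the placeholder form is literally `ch` followed by the decimal hash byte; the helper names are illustrative, not the project's exact identifiers.

```go
package sketch

import "crypto/sha256"

// channelNameMatchesHash checks the firmware invariant described above:
// hash the full channel name (including '#'), truncate the digest to 16
// bytes, hash again, and compare the first byte to the observed hash byte.
func channelNameMatchesHash(name string, channelHash byte) bool {
	first := sha256.Sum256([]byte(name))
	second := sha256.Sum256(first[:16])
	return second[0] == channelHash
}

// isPlaceholderName reports whether a display name looks like the synthetic
// "chN" placeholder used for channels no observer could decrypt (assumption:
// placeholder is "ch" plus decimal digits, e.g. "ch72").
func isPlaceholderName(name string) bool {
	if len(name) < 3 || name[:2] != "ch" {
		return false
	}
	for _, r := range name[2:] {
		if r < '0' || r > '9' {
			return false
		}
	}
	return true
}
```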
568de4b441 |
fix(observers): exclude soft-deleted observers from /api/observers and totalObservers (#954)
## Bug

`/api/observers` returned soft-deleted (inactive=1) observers. Operators saw stale observers in the UI even after the auto-prune marked them inactive on schedule. Reproduced on staging: 14 observers older than 14 days returned by the API; all of them had `inactive=1` in the DB.

## Root cause

`DB.GetObservers()` (`cmd/server/db.go:974`) ran `SELECT ... FROM observers ORDER BY last_seen DESC` with no WHERE filter. The `RemoveStaleObservers` path correctly soft-deletes by setting `inactive=1`, but the read path didn't honor it. `statsRow` (`cmd/server/db.go:234`) had the same bug — `totalObservers` count included soft-deleted rows.

## Fix

Add `WHERE inactive IS NULL OR inactive = 0` to both:

```go
// GetObservers
"SELECT ... FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC"

// statsRow.TotalObservers
"SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0"
```

`NULL` check preserves backward compatibility with rows from before the `inactive` migration.

## Tests

Added regression `TestGetObservers_ExcludesInactive`:

- Seed two observers, mark one inactive, assert `GetObservers()` returns only the other.
- **Anti-tautology gate verified**: reverting the WHERE clause causes the test to fail with `expected 1 observer, got 2` and `inactive observer obs2 should be excluded`.

`go test ./...` passes (19.6s).

## Out of scope

- `GetObserverByID` lookup at line 1009 still returns inactive observers — this is intentional, so an old deep link to `/observers/<id>` shows "inactive" rather than 404.
- Frontend may also have its own caching layer; this fix is server-side only.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
5678874128 |
fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)
Fixes #935 ## Problem `buildPrefixMap()` indexed ALL nodes regardless of role, causing companions/sensors to appear as repeater hops when their pubkey prefix collided with a path-hop hash byte. ## Fix ### Server (`cmd/server/store.go`) - Added `canAppearInPath(role string) bool` — allowlist of roles that can forward packets (repeater, room_server, room) - `buildPrefixMap` now skips nodes that fail this check ### Client (`public/hop-resolver.js`) - Added matching `canAppearInPath(role)` helper - `init()` now only populates `prefixIdx` for path-eligible nodes - `pubkeyIdx` remains complete — `resolveFromServer()` still resolves any node type by full pubkey (for server-confirmed `resolved_path` arrays) ## Tests - `cmd/server/prefix_map_role_test.go`: 7 new tests covering role filtering in prefix map and resolveWithContext - `test-hop-resolver-affinity.js`: 4 new tests verifying client-side role filter + pubkeyIdx completeness - All existing tests updated to include `Role: "repeater"` where needed - `go test ./cmd/server/...` — PASS - `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing centroid failure unrelated to this change) ## Commits 1. `fix: filter prefix map to only repeater/room roles (#935)` — server implementation 2. `test: prefix map role filter coverage (#935)` — server tests 3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` — client implementation 4. `test: hop-resolver role filter coverage (#935)` — client tests --------- Co-authored-by: you <you@example.com> |
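A rough sketch of the role allowlist and the filtered prefix index described above, assuming the role strings listed in the summary; the real `buildPrefixMap` indexes multiple prefix lengths and lives on the store, so treat this as illustration only.

```go
package sketch

import "strings"

// canAppearInPath reports whether a node role is allowed to show up as a
// path hop. Only forwarding roles qualify per the PR summary; the exact role
// strings here are assumptions taken from that summary.
func canAppearInPath(role string) bool {
	switch strings.ToLower(strings.TrimSpace(role)) {
	case "repeater", "room_server", "room":
		return true
	default:
		return false
	}
}

type node struct {
	PublicKey string
	Role      string
}

// buildPrefixIndex indexes only path-eligible nodes by 2-char pubkey prefix,
// so a companion's pubkey prefix can no longer shadow a repeater hop.
// (Simplified: the real code indexes several prefix lengths.)
func buildPrefixIndex(nodes []node) map[string][]node {
	idx := make(map[string][]node)
	for _, n := range nodes {
		if !canAppearInPath(n.Role) || len(n.PublicKey) < 2 {
			continue
		}
		key := n.PublicKey[:2]
		idx[key] = append(idx[key], n)
	}
	return idx
}
```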
a605518d6d |
fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte sequence for the same transmission (different path bytes, flags/hops remaining). The `observations` table stored only `path_json` per observer — all observations pointed at one `transmissions.raw_hex`. This prevented the hex pane from updating when switching observations in the packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion added in `test-e2e-playwright.js:1794` for hex pane update on observation switch

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
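The server-side half of this change is a simple preference rule. A minimal sketch with stand-in types; the SQL view does the equivalent with `COALESCE(o.raw_hex, t.raw_hex)`, and the real project structs carry many more fields.

```go
package sketch

// Stand-ins for the project's StoreTx / StoreObs shapes.
type storeTx struct{ RawHex string }
type storeObs struct{ RawHex string } // empty for historical rows (NULL in DB)

// effectiveRawHex prefers the observer-specific bytes and falls back to the
// transmission-level hex, mirroring the enrichObs behaviour described above.
func effectiveRawHex(obs storeObs, tx storeTx) string {
	if obs.RawHex != "" {
		return obs.RawHex
	}
	return tx.RawHex
}
```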
886aabf0ae |
fix(#827): /api/packets/{hash} falls back to DB when in-memory store misses (#831)
Closes #827. ## Problem `/api/packets/{hash}` only consulted the in-memory `PacketStore`. When a packet aged out of memory, the handler 404'd — even though SQLite still had it and `/api/nodes/{pubkey}` `recentAdverts` (which reads from the DB) was actively surfacing the hash. Net effect: the **Analyze →** link on older adverts in the node detail page led to a dead "Not found". Two-store inconsistency: DB has the packet, in-memory doesn't, node detail surfaces it from DB → packet detail can't serve it. ## Fix In `handlePacketDetail`: - After in-memory miss, fall back to `db.GetPacketByHash` (already existed) for hash lookups, and `db.GetTransmissionByID` for numeric IDs. - Track when the result came from the DB; if so and the store has no observations, populate from DB via a new `db.GetObservationsForHash` so the response shows real observations instead of the misleading `observation_count = 1` fallback. ## Tests - `TestPacketDetailFallsBackToDBWhenStoreMisses` — insert a packet directly into the DB after `store.Load()`, confirm store doesn't have it, assert 200 + populated observations. - `TestPacketDetail404WhenAbsentFromBoth` — neither store nor DB → 404 (no false positives). - `TestPacketDetailPrefersStoreOverDB` — both have it; store result wins (no double-fetch). - `TestHandlePacketDetailNoStore` updated: it previously asserted the old buggy 404 behavior; now asserts the correct DB-fallback 200. All `go test ./... -run "PacketDetail|Packet|GetPacket"` and the full `cmd/server` suite pass. ## Out of scope The `/api/packets?hash=` filter is the live in-memory list endpoint and intentionally store-only for performance. Not touched here — happy to file a follow-up if you'd rather harmonise. ## Repro context Verified against prod with a recently-adverting repeater whose recent advert hash lives in `recentAdverts` (DB) but had been evicted from the in-memory store; pre-fix 404, post-fix 200 with full observations. Co-authored-by: you <you@example.com> |
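The store-first, DB-fallback flow can be summarised in a few lines. Types and method names below are stand-ins for the handler's real dependencies; only the ordering (store hit wins, miss in both means 404, DB hit re-attaches real observations) is taken from the description above.

```go
package sketch

import "errors"

var errNotFound = errors.New("not found")

type packet struct {
	Hash         string
	Observations []string
}

type memStore interface {
	GetByHash(hash string) (*packet, bool)
}

type database interface {
	GetPacketByHash(hash string) (*packet, error)
	GetObservationsForHash(hash string) ([]string, error)
}

// lookupPacket prefers the in-memory store; on a miss it falls back to SQLite
// and re-attaches observations so the response does not report the misleading
// single-observation fallback.
func lookupPacket(store memStore, db database, hash string) (*packet, error) {
	if p, ok := store.GetByHash(hash); ok {
		return p, nil // store wins, no double fetch
	}
	p, err := db.GetPacketByHash(hash)
	if err != nil || p == nil {
		return nil, errNotFound // absent from both store and DB -> 404
	}
	if len(p.Observations) == 0 {
		if obs, obsErr := db.GetObservationsForHash(hash); obsErr == nil {
			p.Observations = obs
		}
	}
	return p, nil
}
```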
9e90548637 |
perf(#800): remove per-StoreTx ResolvedPath, replace with membership index + on-demand decode (#806)
## Summary

Remove `ResolvedPath []*string` field from `StoreTx` and `StoreObs` structs, replacing it with a compact membership index + on-demand SQL decode. This eliminates the dominant heap cost identified in profiling (#791, #799).

**Spec:** #800 (consolidated from two rounds of expert + implementer review on #799)

Closes #800
Closes #791

## Design

### Removed
- `StoreTx.ResolvedPath []*string`
- `StoreObs.ResolvedPath []*string`
- `TransmissionResp.ResolvedPath`, `ObservationResp.ResolvedPath` struct fields

### Added

| Structure | Purpose | Est. cost at 1M obs |
|---|---|---:|
| `resolvedPubkeyIndex map[uint64][]int` | FNV-1a(pubkey) → []txID forward index | 50–120 MB |
| `resolvedPubkeyReverse map[int][]uint64` | txID → []hashes for clean removal | ~40 MB |
| `apiResolvedPathLRU` (10K entries) | FIFO cache for on-demand API decode | ~2 MB |

### Decode-window discipline

`resolved_path` JSON decoded once per packet. Consumers fed in order, temp slice dropped — never stored on struct:

1. `addToByNode` — relay node indexing
2. `touchRelayLastSeen` — relay liveness DB updates
3. `byPathHop` resolved-key entries
4. `resolvedPubkeyIndex` + reverse insert
5. WebSocket broadcast map (raw JSON bytes)
6. Persist batch (raw JSON bytes for SQL UPDATE)

### Collision safety

When the forward index returns candidates, a batched SQL query confirms exact pubkey presence using `LIKE '%"pubkey"%'` on the `resolved_path` column.

### Feature flag

`useResolvedPathIndex` (default `true`). Off-path is conservative: all candidates kept, index not consulted. For one-release rollback safety.

## Files changed

| File | Changes |
|---|---|
| `resolved_index.go` | **New** — index structures, LRU cache, on-demand SQL helpers, collision safety |
| `store.go` | Remove RP fields, decode-window discipline in Load/Ingest, on-demand txToMap/obsToMap/enrichObs, eviction cleanup via SQL, memory accounting update |
| `types.go` | Remove RP fields from TransmissionResp/ObservationResp |
| `routes.go` | Replace `nodeInResolvedPath` with `nodeInResolvedPathViaIndex`, remove RP from mapSlice helpers |
| `neighbor_persist.go` | Refactor backfill: reverse-map removal → forward+reverse insert → LRU invalidation |

## Tests added (27 new)

**Unit:**
- `TestStoreTx_ResolvedPathFieldAbsent` — reflection guard
- `TestResolvedPubkeyIndex_BuildFromLoad` — forward+reverse consistency
- `TestResolvedPubkeyIndex_HashCollision` — SQL collision safety
- `TestResolvedPubkeyIndex_IngestUpdate` — maps reflect new ingests
- `TestResolvedPubkeyIndex_RemoveOnEvict` — clean removal via reverse map
- `TestResolvedPubkeyIndex_PerObsCoverage` — non-best obs pubkeys indexed
- `TestAddToByNode_WithoutResolvedPathField`
- `TestTouchRelayLastSeen_WithoutResolvedPathField`
- `TestWebSocketBroadcast_IncludesResolvedPath`
- `TestBackfill_InvalidatesLRU`
- `TestEviction_ByNodeCleanup_OnDemandSQL`
- `TestExtractResolvedPubkeys`, `TestMergeResolvedPubkeys`
- `TestResolvedPubkeyHash_Deterministic`
- `TestLRU_EvictionOnFull`

**Endpoint:**
- `TestPathsThroughNode_NilResolvedPathFallback`
- `TestPacketsAPI_OnDemandResolvedPath`
- `TestPacketsAPI_OnDemandResolvedPath_LRUHit`
- `TestPacketsAPI_OnDemandResolvedPath_Empty`

**Feature flag:**
- `TestFeatureFlag_OffPath_PreservesOldBehavior`
- `TestFeatureFlag_Toggle_NoStateLeak`

**Concurrency:**
- `TestReverseMap_NoLeakOnPartialFailure`
- `TestDecodeWindow_LockHoldTimeBounded`
- `TestLivePolling_LRUUnderConcurrentIngest`

**Regression:**
- `TestRepeaterLiveness_StillAccurate`

**Benchmarks:**
- `BenchmarkLoad_BeforeAfter`
- `BenchmarkResolvedPubkeyIndex_Memory`
- `BenchmarkPathsThroughNode_Latency`
- `BenchmarkLivePolling_UnderIngest`

## Benchmark results

```
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=50K    429ms   103MB   777K allocs
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=500K  4205ms   896MB  7.67M allocs
BenchmarkLoad_BeforeAfter                           65ms    20MB   202K allocs
BenchmarkPathsThroughNode_Latency                  3.9µs      0B      0 allocs
BenchmarkLivePolling_UnderIngest                   5.4µs    545B      7 allocs
```

Key: per-obs `[]*string` overhead completely eliminated. At 1M obs with 3 hops average, this saves ~72 bytes/obs × 1M = ~68 MB just from the slice headers + pointers, plus the JSON-decoded string data (~900 MB at scale per profiling).

## Design choices

- **FNV-1a instead of xxhash**: stdlib availability, no external dependency. Performance is equivalent for this use case (pubkey strings are short).
- **FIFO LRU instead of true LRU**: simpler implementation, adequate for the access pattern (mostly sequential obs IDs from live polling).
- **Grouped packets view omits resolved_path**: cold path, not worth SQL round-trip per page render.
- **Backfill pending check uses reverse-map presence** instead of per-obs field: if a tx has any indexed pubkeys, its observations are considered resolved.

Closes #807

---------

Co-authored-by: you <you@example.com>
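A minimal sketch of the forward/reverse membership index, assuming FNV-1a over the pubkey string as described; the SQL collision confirmation and the LRU are omitted, and the names are illustrative rather than the contents of `resolved_index.go`.

```go
package sketch

import "hash/fnv"

// resolvedIndex maps hash(pubkey) -> transmission IDs whose resolved_path
// contains that pubkey, plus a reverse map so eviction can undo inserts.
type resolvedIndex struct {
	forward map[uint64][]int // hash(pubkey) -> txIDs
	reverse map[int][]uint64 // txID -> hashes inserted for it
}

func newResolvedIndex() *resolvedIndex {
	return &resolvedIndex{forward: map[uint64][]int{}, reverse: map[int][]uint64{}}
}

func pubkeyHash(pk string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(pk))
	return h.Sum64()
}

// add records that txID's resolved path contains these pubkeys.
func (ix *resolvedIndex) add(txID int, pubkeys []string) {
	for _, pk := range pubkeys {
		h := pubkeyHash(pk)
		ix.forward[h] = append(ix.forward[h], txID)
		ix.reverse[txID] = append(ix.reverse[txID], h)
	}
}

// remove undoes add for an evicted transmission using the reverse map.
func (ix *resolvedIndex) remove(txID int) {
	for _, h := range ix.reverse[txID] {
		kept := ix.forward[h][:0] // filter in place
		for _, id := range ix.forward[h] {
			if id != txID {
				kept = append(kept, id)
			}
		}
		if len(kept) == 0 {
			delete(ix.forward, h)
		} else {
			ix.forward[h] = kept
		}
	}
	delete(ix.reverse, txID)
}

// candidates returns txIDs that may contain pubkey; callers still confirm via
// an exact SQL check on resolved_path because FNV hashes can collide.
func (ix *resolvedIndex) candidates(pubkey string) []int {
	return ix.forward[pubkeyHash(pubkey)]
}
```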
0e286d85fd |
fix: channel query performance — add channel_hash column, SQL-level filtering (#762) (#763)
## Problem

Channel API endpoints scan the entire DB — 2.4s for the channel list, 30s for messages.

## Fix

- Added `channel_hash` column to transmissions (populated on ingest, backfilled on startup)
- `GetChannels()` rewritten to GROUP BY channel_hash (one row per channel vs scanning every packet)
- `GetChannelMessages()` filters by channel_hash at SQL level with proper LIMIT/OFFSET
- 60s cache for channel list
- Index: `idx_tx_channel_hash` for fast lookups

Expected: 2.4s → <100ms for list, 30s → <500ms for messages.

Fixes #762

---------

Co-authored-by: you <you@example.com>
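A sketch of what the rewritten channel-list query could look like with the new column, assuming the table and column names given in the summary; the real query selects more fields and feeds the 60s cache.

```go
package sketch

import "database/sql"

type channelRow struct {
	ChannelHash string
	Messages    int64
}

// getChannels aggregates one row per channel via the indexed channel_hash
// column instead of scanning every packet.
func getChannels(db *sql.DB) ([]channelRow, error) {
	rows, err := db.Query(`
		SELECT channel_hash, COUNT(*) AS messages
		FROM transmissions
		WHERE channel_hash IS NOT NULL
		GROUP BY channel_hash
		ORDER BY messages DESC`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var out []channelRow
	for rows.Next() {
		var r channelRow
		if err := rows.Scan(&r.ChannelHash, &r.Messages); err != nil {
			return nil, err
		}
		out = append(out, r)
	}
	return out, rows.Err()
}
```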
aa84ce1e6a |
fix: correct hash_size detection for transport routes and zero-hop adverts (#747)
## Summary

Fixes #744
Fixes #722

Three bugs in hash_size computation caused zero-hop adverts to incorrectly report `hash_size=1`, masking nodes that actually use multi-byte hashes.

## Bugs Fixed

### 1. Wrong path byte offset for transport routes (`computeNodeHashSizeInfo`)

Transport routes (types 0 and 3) have 4 transport code bytes before the path byte. The code read the path byte from offset 1 (byte index `RawHex[2:4]`) for all route types. For transport routes, the correct offset is 5 (`RawHex[10:12]`).

### 2. Missing RouteTransportDirect skip (`computeNodeHashSizeInfo`)

Zero-hop adverts from `RouteDirect` (type 2) were correctly skipped, but `RouteTransportDirect` (type 3) zero-hop adverts were not. Both have locally-generated path bytes with unreliable hash_size bits.

### 3. Zero-hop adverts not skipped in analytics (`computeAnalyticsHashSizes`)

`computeAnalyticsHashSizes()` unconditionally overwrote a node's `hashSize` with whatever the latest advert reported. A zero-hop direct advert with `hash_size=1` could overwrite a previously-correct `hash_size=2` from a multi-hop flood advert. Fix: skip hash_size update for zero-hop direct/transport-direct adverts while still counting the packet and updating `lastSeen`.

## Tests Added

- `TestHashSizeTransportRoutePathByteOffset` — verifies transport routes read path byte at offset 5, regular flood reads at offset 1
- `TestHashSizeTransportDirectZeroHopSkipped` — verifies both RouteDirect and RouteTransportDirect zero-hop adverts are skipped
- `TestAnalyticsHashSizesZeroHopSkip` — verifies analytics hash_size is not overwritten by zero-hop adverts
- Fixed 3 existing tests (`FlipFlop`, `Dominant`, `LatestWins`) that used route_type 0 (TransportFlood) header bytes without proper transport code padding

## Complexity

All changes are O(1) per packet — no new loops or data structures. The additional offset computation and zero-hop check are constant-time operations within the existing packet scan loop.

Co-authored-by: you <you@example.com>
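The offset rule is easiest to see in code. A sketch assuming the route-type numbering given above (0 and 3 transport, 2 direct) and a hypothetical `pathByte` helper; the real `computeNodeHashSizeInfo` also extracts the hash_size bits from the byte, which is omitted here.

```go
package sketch

import "encoding/hex"

const (
	routeTransportFlood  = 0
	routeFlood           = 1
	routeDirect          = 2
	routeTransportDirect = 3
)

// pathByte returns the path byte of a raw packet, or ok=false if the packet
// is too short to contain one. Transport routes carry 4 transport code bytes
// between the header byte and the path byte, so the offset moves from 1 to 5.
func pathByte(rawHex string, routeType int) (b byte, ok bool) {
	raw, err := hex.DecodeString(rawHex)
	if err != nil {
		return 0, false
	}
	offset := 1 // header byte, then path byte
	if routeType == routeTransportFlood || routeType == routeTransportDirect {
		offset = 5 // header byte + 4 transport code bytes
	}
	if len(raw) <= offset {
		return 0, false
	}
	return raw[offset], true
}

// skipForHashSize mirrors the zero-hop rule: locally generated path bytes on
// direct and transport-direct adverts carry unreliable hash_size bits.
func skipForHashSize(routeType, hopCount int) bool {
	return hopCount == 0 && (routeType == routeDirect || routeType == routeTransportDirect)
}
```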
e893a1b3c4 |
fix: index relay hops in byNode for liveness tracking (#708)
## Problem Nodes that only appear as relay hops in packet paths (via `resolved_path`) were never indexed in `byNode`, so `last_heard` was never computed for them. This made relay-only nodes show as dead/stale even when actively forwarding traffic. Fixes #660 ## Root Cause `indexByNode()` only indexed pubkeys from decoded JSON fields (`pubKey`, `destPubKey`, `srcPubKey`). Relay nodes appearing in `resolved_path` were ignored entirely. ## Fix `indexByNode()` now also iterates: 1. `ResolvedPath` entries from each observation 2. `tx.ResolvedPath` (best observation's resolved path, used for DB-loaded packets) A per-call `indexed` set prevents double-indexing when the same pubkey appears in both decoded JSON and resolved path. Extracted `addToByNode()` helper to deduplicate the nodeHashes/byNode append logic. ## Scope **Phase 1 only** — server-side in-memory indexing. No DB changes, no ingestor changes. This makes `last_heard` reflect relay activity with zero risk to persistence. ## Tests 5 new test cases in `TestIndexByNodeResolvedPath`: - Resolved path pubkeys from observations get indexed - Null entries in resolved path are skipped - Relay-only nodes (no decoded JSON match) appear in `byNode` - Dedup between decoded JSON and resolved path - `tx.ResolvedPath` indexed when observations are empty All existing tests pass unchanged. ## Complexity O(observations × path_length) per packet — typically 1-3 observations × 1-3 hops. No hot-path regression. --------- Co-authored-by: you <you@example.com> |
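A compact sketch of the dedup-guarded indexing described above, with simplified stand-ins for `StoreTx` and the `byNode` map; only the dedup and nil-skip behaviour is taken from the summary.

```go
package sketch

type tx struct {
	Hash         string
	DecodedKeys  []string  // pubkeys pulled from decoded JSON
	ResolvedPath []*string // best observation's resolved path (nil entries possible)
}

// indexByNode adds the packet hash under every pubkey that references it,
// covering both decoded-JSON pubkeys and relay hops from the resolved path,
// without double-indexing the same pubkey.
func indexByNode(byNode map[string][]string, t tx) {
	indexed := map[string]bool{}
	add := func(pk string) {
		if pk == "" || indexed[pk] {
			return
		}
		indexed[pk] = true
		byNode[pk] = append(byNode[pk], t.Hash)
	}
	for _, pk := range t.DecodedKeys {
		add(pk)
	}
	for _, hop := range t.ResolvedPath {
		if hop != nil { // null entries in resolved_path are skipped
			add(*hop)
		}
	}
}
```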
fcad49594b |
fix: include path.hopsCompleted in TRACE WebSocket broadcasts (#695)
## Summary Fixes #683 — TRACE packets on the live map were showing the full path instead of distinguishing completed vs remaining hops. ## Root Cause Both WebSocket broadcast builders in `store.go` constructed the `decoded` map with only `header` and `payload` keys — `path` was never included. The frontend reads `decoded.path.hopsCompleted` to split trace routes into solid (completed) and dashed (remaining) segments, but that field was always `undefined`. ## Fix For TRACE packets (payload type 9), call `DecodePacket()` on the raw hex during broadcast and include the resulting `Path` struct in `decoded["path"]`. This populates `hopsCompleted` which the frontend already knows how to consume. Both broadcast builders are patched: - `IngestNewFromDB()` — new transmissions path (~line 1419) - `IngestNewObservations()` — new observations path (~line 1680) TRACE packets are infrequent, so the per-packet decode overhead is negligible. ## Testing - Added `TestIngestTraceBroadcastIncludesPath` — verifies that TRACE broadcast maps include `decoded.path` with correct `hopsCompleted` value - All existing tests pass (`cmd/server` + `cmd/ingestor`) Co-authored-by: you <you@example.com> |
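A sketch of the broadcast-builder change, with the decoder passed in as a function and the `Path` shape reduced to the fields the frontend needs; names here are illustrative, not the project's decoder API.

```go
package sketch

const payloadTypeTrace = 9

type path struct {
	HopsCompleted int      `json:"hopsCompleted"`
	Hops          []string `json:"hops"`
}

// buildBroadcastDecoded builds the "decoded" map for a WebSocket broadcast.
// For TRACE packets it decodes the raw hex and attaches the path so the
// frontend can read decoded.path.hopsCompleted and split solid vs dashed
// route segments.
func buildBroadcastDecoded(payloadType int, rawHex string,
	header, payload map[string]any, decode func(string) (*path, error)) map[string]any {

	d := map[string]any{"header": header, "payload": payload}
	if payloadType == payloadTypeTrace {
		if p, err := decode(rawHex); err == nil && p != nil {
			d["path"] = p
		}
	}
	return d
}
```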
088b4381c3 |
Fix: Hash Stats 'By Repeaters' includes non-repeater nodes (#654)
## Summary The "By Repeaters" section on the Hash Stats analytics page was counting **all** node types (companions, room servers, sensors, etc.) instead of only repeaters. This made the "By Repeaters" distribution identical to "Multi-Byte Hash Adopters", defeating the purpose of the breakdown. Fixes #652 ## Root Cause `computeAnalyticsHashSizes()` in `cmd/server/store.go` built its `byNode` map from advert packet data without cross-referencing node roles from the node store. Both `distributionByRepeaters` and `multiByteNodes` consumed this unfiltered map. ## Changes ### `cmd/server/store.go` - Build a `nodeRoleByPK` lookup map from `getCachedNodesAndPM()` at the start of the function - Store `role` in each `byNode` entry when processing advert packets - **`distributionByRepeaters`**: filter to only count nodes whose role contains "repeater" - **`multiByteNodes`**: include `role` field in output so the frontend can filter/group by node type ### `cmd/server/coverage_test.go` - Add `TestHashSizesDistributionByRepeatersFiltersRole`: verifies that companion nodes are excluded from `distributionByRepeaters` but included in `multiByteNodes` with correct role ### `cmd/server/routes_test.go` - Fix `TestHashAnalyticsZeroHopAdvert`: invalidate node cache after DB insert so role lookup works - Fix `TestAnalyticsHashSizeSameNameDifferentPubkey`: insert node records as repeaters + invalidate cache ## Testing All `cmd/server` tests pass (68 insertions, 3 deletions across 3 files). Co-authored-by: you <you@example.com> |
6ae62ce535 |
perf: make txToMap observations lazy via ExpandObservations flag (#595)
## Summary `txToMap()` previously always allocated observation sub-maps for every packet, even though the `/api/packets` handler immediately stripped them via `delete(p, "observations")` unless `expand=observations` was requested. A typical page of 50 packets with ~5 observations each caused 300+ unnecessary map allocations per request. ## Changes - **`txToMap`**: Add variadic `includeObservations bool` parameter. Observations are only built when `true` is passed, eliminating allocations when they'd just be discarded. - **`PacketQuery`**: Add `ExpandObservations bool` field to thread the caller's intent through the query pipeline. - **`routes.go`**: Set `ExpandObservations` based on `expand=observations` query param. Removed the post-hoc `delete(p, "observations")` loop — observations are simply never created when not requested. - **Single-packet lookups** (`GetPacketByID`, `GetPacketByHash`): Always pass `true` since detail views need observations. - **Multi-node/analytics queries**: Default (no flag) = no observations, matching prior behavior. ## Testing - Added `TestTxToMapLazyObservations` covering all three cases: no flag, `false`, and `true`. - All existing tests pass (`go test ./...`). ## Perf Impact Eliminates ~250 observation map allocations per /api/packets request (at default page size of 50 with ~5 observations each). This is a constant-factor improvement per request — no algorithmic complexity change. Fixes #374 Co-authored-by: you <you@example.com> |
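The variadic flag pattern in a few lines, with stand-in types; list callers pass nothing, detail and `expand=observations` callers pass `true`.

```go
package sketch

type obs struct{ Observer string }

type storedTx struct {
	Hash         string
	Observations []obs
}

// txToMap only builds observation sub-maps when explicitly requested, so the
// default list path allocates nothing it would immediately discard.
func txToMap(t storedTx, includeObservations ...bool) map[string]any {
	m := map[string]any{"hash": t.Hash}
	if len(includeObservations) > 0 && includeObservations[0] {
		list := make([]map[string]any, 0, len(t.Observations))
		for _, o := range t.Observations {
			list = append(list, map[string]any{"observer": o.Observer})
		}
		m["observations"] = list
	}
	return m
}
```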
cd470dffbe |
perf: batch observation fetching to eliminate N+1 API calls on sort change (#586)
## Summary
Fixes the N+1 API call pattern when changing observation sort mode on
the packets page. Previously, switching sort to Path or Time fired
individual `/api/packets/{hash}` requests for **every**
multi-observation group without cached children — potentially 100+
concurrent requests.
## Changes
### Backend: Batch observations endpoint
- **New endpoint:** `POST /api/packets/observations` accepts `{"hashes":
["h1", "h2", ...]}` and returns all observations keyed by hash in a
single response
- Capped at 200 hashes per request to prevent abuse
- 4 test cases covering empty input, invalid JSON, too-many-hashes, and
valid requests
### Frontend: Use batch endpoint
- `packets.js` sort change handler now collects all hashes needing
observation data and sends a single POST request instead of N individual
GETs
- Same behavior, single round-trip
## Performance
- **Before:** Changing sort with 100 visible groups → 100 concurrent API
requests, browser connection queueing (6 per host), several seconds of
lag
- **After:** Single POST request regardless of group count, response
time proportional to store lookup (sub-millisecond per hash in memory)
Fixes #389
---------
Co-authored-by: you <you@example.com>
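A sketch of the batch handler, assuming the request/response shapes described above and a store lookup passed in as a function; error messages and field names are illustrative.

```go
package sketch

import (
	"encoding/json"
	"net/http"
)

const maxBatchHashes = 200 // cap per request to prevent abuse

type batchRequest struct {
	Hashes []string `json:"hashes"`
}

// handleBatchObservations answers POST /api/packets/observations with all
// observations keyed by hash in a single response.
func handleBatchObservations(lookup func(hash string) []map[string]any) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req batchRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid JSON", http.StatusBadRequest)
			return
		}
		if len(req.Hashes) == 0 {
			http.Error(w, "empty hashes", http.StatusBadRequest)
			return
		}
		if len(req.Hashes) > maxBatchHashes {
			http.Error(w, "too many hashes", http.StatusBadRequest)
			return
		}
		out := make(map[string][]map[string]any, len(req.Hashes))
		for _, h := range req.Hashes {
			out[h] = lookup(h) // in-memory store lookup per hash
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(out)
	}
}
```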
45d8116880 |
perf: query only matching node locations in handleObservers (#579)
## Summary `handleObservers()` in `routes.go` was calling `GetNodeLocations()` which fetches ALL nodes from the DB just to match ~10 observer IDs against node public keys. With 500+ nodes this is wasteful. ## Changes - **`db.go`**: Added `GetNodeLocationsByKeys(keys []string)` — queries only the rows matching the given public keys using a parameterized `WHERE LOWER(public_key) IN (?, ?, ...)` clause. - **`routes.go`**: `handleObservers` now collects observer IDs and calls the targeted method instead of the full-table scan. - **`coverage_test.go`**: Added `TestGetNodeLocationsByKeys` covering known key, empty keys, and unknown key cases. ## Performance With ~10 observers and 500+ nodes, the query goes from scanning all 500 rows to fetching only ~10. The original `GetNodeLocations()` is preserved for any other callers. Fixes #378 Co-authored-by: you <you@example.com> |
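A sketch of the targeted query, assuming `latitude`/`longitude` column names (the summary only names `public_key`); the placeholder-building pattern for the parameterized `IN (...)` clause is the relevant part.

```go
package sketch

import (
	"database/sql"
	"strings"
)

type nodeLocation struct {
	PublicKey string
	Lat, Lon  float64
}

// getNodeLocationsByKeys fetches only the rows matching the given public keys
// instead of scanning the whole nodes table.
func getNodeLocationsByKeys(db *sql.DB, keys []string) ([]nodeLocation, error) {
	if len(keys) == 0 {
		return nil, nil
	}
	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(keys)), ",")
	args := make([]any, len(keys))
	for i, k := range keys {
		args[i] = strings.ToLower(k)
	}
	rows, err := db.Query(
		`SELECT public_key, latitude, longitude FROM nodes
		 WHERE LOWER(public_key) IN (`+placeholders+`)`, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var out []nodeLocation
	for rows.Next() {
		var n nodeLocation
		if err := rows.Scan(&n.PublicKey, &n.Lat, &n.Lon); err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, rows.Err()
}
```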
02004c5912 |
perf: incremental distance index update on path changes (#576)
## Summary Replace full `buildDistanceIndex()` rebuild with incremental `removeTxFromDistanceIndex`/`addTxToDistanceIndex` for only the transmissions whose paths actually changed during `IngestNewObservations`. ## Problem When any transmission's best path changed during observation ingestion, the **entire distance index was rebuilt** — iterating all 30K+ packets, resolving all hops, and computing haversine distances. This `O(total_packets × avg_hops)` operation ran under a write lock, blocking all API readers. A 30-second debounce (`distRebuildInterval`) was added in #557 to mitigate this, but it only delayed the pain — the full rebuild still happened, just less frequently. ## Fix - Added `removeTxFromDistanceIndex(tx)` — filters out all `distHopRecord` and `distPathRecord` entries for a specific transmission - Added `addTxToDistanceIndex(tx)` — computes and appends new distance records for a single transmission - In `IngestNewObservations`, changed path-change handling to call remove+add for each affected tx instead of marking dirty and waiting for a full rebuild - Removed `distDirty`, `distLast`, and `distRebuildInterval` since incremental updates are cheap enough to apply immediately ## Complexity - **Before:** `O(total_packets × avg_hops)` per rebuild (30K+ packets) - **After:** `O(changed_txs × avg_hops + total_dist_records)` — the remove is a linear scan of the distance slices, but only for affected txs; the add is `O(hops)` per changed tx The remove scan over `distHops`/`distPaths` slices is linear in slice length, but this is still far cheaper than the full rebuild which also does JSON parsing, hop resolution, and haversine math for every packet. ## Tests - Updated `TestDistanceRebuildDebounce` → `TestDistanceIncrementalUpdate` to verify incremental behavior and check for duplicate path records - All existing tests pass (`go test ./...` in both `cmd/server` and `cmd/ingestor`) Fixes #365 --------- Co-authored-by: you <you@example.com> |
ef30031e2e |
perf: cache resolveRegionObservers with 30s TTL (#575)
## Summary Cache `resolveRegionObservers()` results with a 30-second TTL to eliminate repeated database queries for region→observer ID mappings. ## Problem `resolveRegionObservers()` queried the database on every call despite the observers table changing infrequently (~20 rows). It's called from 10+ hot paths including `filterPackets()`, `GetChannels()`, and multiple analytics compute functions. When analytics caches are cold, parallel requests each hit the DB independently. ## Solution - Added a dedicated `regionObsMu` mutex + `regionObsCache` map with 30s TTL - Uses a separate mutex (not `s.mu`) to avoid deadlocks — callers already hold `s.mu.RLock()` - Cache is lazily populated per-region and fully invalidated after TTL expires - Follows the same pattern as `getCachedNodesAndPM()` (30s TTL, on-demand rebuild) ## Changes - **`cmd/server/store.go`**: Added `regionObsMu`, `regionObsCache`, `regionObsCacheTime` fields; rewrote `resolveRegionObservers()` to check cache first; added `fetchAndCacheRegionObs()` helper - **`cmd/server/coverage_test.go`**: Added `TestResolveRegionObserversCaching` — verifies cache population, cache hits, and nil handling for unknown regions ## Testing - All existing Go tests pass (`go test ./...`) - New test verifies caching behavior (population, hits, nil for unknown regions) Fixes #362 --------- Co-authored-by: you <you@example.com> |
d4f2c3ac66 |
perf: index subpath detail lookups instead of scanning all packets (#571)
## Summary `GetSubpathDetail()` iterated ALL packets to find those containing a specific subpath — `O(packets × hops × subpath_length)`. With 30K+ packets this caused user-visible latency on every subpath detail click. ## Changes ### `cmd/server/store.go` - Added `spTxIndex map[string][]*StoreTx` alongside existing `spIndex` — tracks which transmissions contain each subpath key - Extended `addTxToSubpathIndexFull()` and `removeTxFromSubpathIndexFull()` to maintain both indexes simultaneously - Original `addTxToSubpathIndex()`/`removeTxFromSubpathIndex()` wrappers preserved for backward compatibility - `buildSubpathIndex()` now populates both `spIndex` and `spTxIndex` during `Load()` - All incremental update sites (ingest, path change, eviction) use the `Full` variants - `GetSubpathDetail()` rewritten: direct `O(1)` map lookup on `spTxIndex[key]` instead of scanning all packets ### `cmd/server/coverage_test.go` - Added `TestSubpathTxIndexPopulated`: verifies `spTxIndex` is populated, counts match `spIndex`, and `GetSubpathDetail` returns correct results for both existing and non-existent subpaths ## Complexity - **Before:** `O(total_packets × avg_hops × subpath_length)` per request - **After:** `O(matched_txs)` per request (direct map lookup) ## Tests All tests pass: `cmd/server` (4.6s), `cmd/ingestor` (25.6s) Fixes #358 --------- Co-authored-by: you <you@example.com> |
37300bf5c8 |
fix: cap prefix map at 8 chars to cut memory ~10x (#570)
## Summary `buildPrefixMap()` was generating map entries for every prefix length from 2 to `len(pubkey)` (up to 64 chars), creating ~31 entries per node. With 500 nodes that's ~15K map entries; with 1K+ nodes it balloons to 31K+. ## Changes **`cmd/server/store.go`:** - Added `maxPrefixLen = 8` constant — MeshCore path hops use 2–6 char prefixes, 8 gives headroom - Capped the prefix generation loop at `maxPrefixLen` instead of `len(pk)` - Added full pubkey as a separate map entry when key is longer than `maxPrefixLen`, ensuring exact-match lookups (used by `resolveWithContext`) still work **`cmd/server/coverage_test.go`:** - Added `TestPrefixMapCap` with subtests for: - Short prefix resolution still works - Full pubkey exact-match resolution still works - Intermediate prefixes beyond the cap correctly return nil - Short keys (≤8 chars) have all prefix entries - Map size is bounded ## Impact - Map entries per node: ~31 → ~8 (one per prefix length 2–8, plus one full-key entry) - Total map size for 500 nodes: ~15K entries → ~4K entries (~75% reduction) - No behavioral change for path hop resolution (2–6 char prefixes) - No behavioral change for exact pubkey lookups ## Tests All existing tests pass: - `cmd/server`: ✅ - `cmd/ingestor`: ✅ Fixes #364 --------- Co-authored-by: you <you@example.com> |
588fba226d |
perf: track max transmission/observation IDs incrementally (#569)
## Summary Replace O(n) map iteration in `MaxTransmissionID()` and `MaxObservationID()` with O(1) field lookups. ## What Changed - Added `maxTxID` and `maxObsID` fields to `PacketStore` - Updated `Load()`, `IngestNewFromDB()`, and `IngestNewObservations()` to track max IDs incrementally as entries are added - `MaxTransmissionID()` and `MaxObservationID()` now return the tracked field directly instead of iterating the entire map ## Performance Before: O(n) iteration over 30K+ map entries under a read lock After: O(1) field return ## Tests - Added `TestMaxTransmissionIDIncremental` verifying the incremental field matches brute-force iteration over the maps - All existing tests pass (`cmd/server` and `cmd/ingestor`) Fixes #356 Co-authored-by: you <you@example.com> |
81ef51cc5c |
fix: debounce distance index rebuild to prevent CPU hot loop (#557)
## Problem

On busy meshes (325K+ transmissions, 50 observers), the distance index rebuild runs on **every ingest poll** (~1s interval), computing haversine distances for 1M+ hop records. Each rebuild takes 2-3 seconds but new observations arrive faster than it can finish, creating a CPU hot loop that starves the HTTP server.

Discovered on the Cascadia Mesh instance where `corescope-server` was consuming 15 minutes of CPU time in 10 minutes of uptime, the API was completely unresponsive, and health checks were timing out.

### Server logs showing the hot loop:

```
[store] Built distance index: 1797778 hop records, 207072 path records
[store] Built distance index: 1797806 hop records, 207075 path records
[store] Built distance index: 1797811 hop records, 207075 path records
[store] Built distance index: 1797820 hop records, 207075 path records
```

Every 2 seconds, nonstop.

## Root Cause

`IngestNewObservations` calls `buildDistanceIndex()` synchronously whenever `pickBestObservation` selects a longer path. With 50 observers sending observations every second, paths change on nearly every poll cycle, triggering a full rebuild each time.

## Fix

- Mark distance index dirty on path changes instead of rebuilding inline
- Rebuild at most every **30 seconds** (configurable via `distLast` timer)
- Set `distLast` after initial `Load()` to prevent immediate re-rebuild on first ingest
- Distance data is at most 30s stale — acceptable for an analytics view

## Testing

- `go build`, `go vet`, `go test` all pass
- No behavioral change for the initial load or the analytics API response shape
- Distance data freshness goes from real-time to 30s max staleness

---------

Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: you <you@example.com>
709e5a4776 |
fix: observer filter drops groups in grouped packets view (#464) (#531)
## Summary - When `groupByHash=true`, each group only carries its representative (best-path) `observer_id`. The client-side filter was checking only that field, silently dropping groups that were seen by the selected observer but had a different representative. - `loadPackets` now passes the `observer` param to the server so `filterPackets`/`buildGroupedWhere` do the correct "any observation matches" check. - Client-side observer filter in `renderTableRows` is skipped for grouped mode (server already filtered correctly). - Both `db.go` and `store.go` observer filtering extended to support comma-separated IDs (multi-select UI). ## Test plan - [ ] Set an observer filter on the Packets screen with grouping enabled — all groups that have **any** observation from the selected observer(s) should appear, not just groups where that observer is the representative - [ ] Multi-select two observers — groups seen by either should appear - [ ] Toggle to flat (ungrouped) mode — per-observation filter still works correctly - [ ] Existing grouped packets tests pass: `cd cmd/server && go test ./...` Fixes #464 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com> Co-authored-by: you <you@example.com> |
c173ab7e80 |
perf: skip JSON parse in indexByNode when no pubkey fields present (#376) (#499)
## Summary - `indexByNode` was calling `json.Unmarshal` for every packet during `Load()` and `IngestNewFromDB()`, even channel messages and other payloads that can never contain node pubkey fields - All three target fields (`"pubKey"`, `"destPubKey"`, `"srcPubKey"`) share the common substring `"ubKey"` — added a `strings.Contains` pre-check that skips the JSON parse entirely for packets that don't match - At 30K+ packets on startup, this eliminates the majority of `json.Unmarshal` calls in `indexByNode` (channel messages, status packets, etc. all bypass it) ## Test plan - [x] 5 new subtests in `TestIndexByNodePreCheck`: ADVERT with pubKey indexed, destPubKey indexed, channel message skipped, empty JSON skipped, duplicate hash deduped - [x] All existing Go tests pass - [x] Deploy to staging and verify node-filtered packet queries still work correctly Closes #376 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com> Co-authored-by: you <you@example.com> |
77d8f35a04 |
feat: implement packet store eviction/aging to prevent OOM (#273)
## Summary
The in-memory `PacketStore` had **no eviction or aging** — it grew
unbounded until OOM killed the process. At ~3K packets/hour and ~5KB per
packet (not the 450 bytes previously estimated), an 8GB VM would OOM in
a few days.
## Changes
### Time-based eviction
- Configurable via `config.json`: `"packetStore": { "retentionHours": 24
}`
- Packets older than the retention window are evicted from the head of
the sorted slice
### Memory-based cap
- Configurable via `"packetStore": { "maxMemoryMB": 1024 }`
- Hard ceiling — evicts oldest packets when estimated memory exceeds the
cap
### Index cleanup
When a `StoreTx` is evicted, ALL associated data is removed from:
- `byHash`, `byTxID`, `byObsID`, `byObserver`, `byNode`, `byPayloadType`
- `nodeHashes`, `distHops`, `distPaths`, `spIndex`
### Periodic execution
- Background ticker runs eviction every 60 seconds
- Analytics caches and hash size cache are invalidated after eviction
### Stats fixes
- `estimatedMB` now uses ~5KB/packet + ~500B/observation (was 430B +
200B)
- `evicted` counter reflects actual evictions (was hardcoded to 0)
- Removed fake `maxPackets: 2386092` and `maxMB: 1024` from stats
### Config example
```json
{
"packetStore": {
"retentionHours": 24,
"maxMemoryMB": 1024
}
}
```
Both values default to 0 (unlimited) for backward compatibility.
## Tests
- 7 new tests in `eviction_test.go` covering time-based, memory-based,
index cleanup, thread safety, config parsing, and no-op when disabled
- All existing tests pass unchanged
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
712fa15a8c |
fix: force single SQLite connection in test DBs to prevent in-memory table visibility issues
SQLite :memory: databases create separate databases per connection. When the connection pool opens multiple connections (e.g. poller goroutine vs main test goroutine), tables created on one connection are invisible to others. Setting MaxOpenConns(1) ensures all queries use the same in-memory database, fixing TestPollerBroadcastsMultipleObservations. |
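The whole fix is one connection-pool setting. A minimal sketch, assuming the modernc.org/sqlite driver (the commit does not state which driver is in use):

```go
package sketch

import (
	"database/sql"

	_ "modernc.org/sqlite" // driver choice is an assumption for this sketch
)

// openTestDB opens an in-memory SQLite database with the pool capped at one
// connection. Every connection to ":memory:" otherwise gets its own private
// database, so a poller goroutine would see none of the test's tables.
func openTestDB() (*sql.DB, error) {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1) // all goroutines share the single in-memory database
	return db, nil
}
```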
ab03b142f5 |
fix: per-observation WS broadcast for live view starburst — fixes #237
IngestNewFromDB now broadcasts one message per observation (not per transmission). IngestNewObservations also broadcasts late arrivals. Tests verify multi-observer packets produce multiple WS messages. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
f5d0ce066b |
refactor: remove packets_v SQL fallbacks — store handles all queries (#220)
* refactor: remove all packets_v SQL fallbacks — store handles all queries Remove DB fallback paths from all route handlers. The in-memory PacketStore now handles all packet/node/analytics queries. Handlers return empty results or 404 when no store is available instead of falling back to direct DB queries. - Remove else-DB branches from handlePacketDetail, handleNodeHealth, handleNodeAnalytics, handleBulkHealth, handlePacketTimestamps, etc. - Remove unused DB methods (GetPacketByHash, GetTransmissionByID, GetPacketByID, GetObservationsForHash, GetTimestamps, GetNodeHealth, GetNodeAnalytics, GetBulkHealth, etc.) - Remove packets_v VIEW creation from schema - Update tests for new behavior (no-store returns 404/empty, not 500) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * fix: address PR #220 review comments Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --------- Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com> Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> Co-authored-by: KpaBap <kpabap@gmail.com> |
54cbc648e0 |
feat: decode telemetry from adverts — battery voltage + temperature on nodes
Sensor nodes embed telemetry (battery_mv, temperature_c) in their advert appdata after the null-terminated name. This commit adds decoding and storage for both the Go ingestor and Node.js backend. Changes: - decoder.go/decoder.js: Parse telemetry bytes from advert appdata (battery_mv as uint16 LE millivolts, temperature_c as int16 LE /100) - db.go/db.js: Add battery_mv INTEGER and temperature_c REAL columns to nodes and inactive_nodes tables, with migration for existing DBs - main.go/server.js: Update node telemetry on advert processing - server db.go: Include battery_mv/temperature_c in node API responses - Tests: Decoder telemetry tests (positive, negative temp, no telemetry), DB migration test, node telemetry update test, server API shape tests Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
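A sketch of the appdata decode, assuming battery comes first after the name terminator followed by temperature; the commit gives the field encodings (battery_mv as uint16 LE millivolts, temperature_c as int16 LE divided by 100) but not the exact framing, so treat the offsets as assumptions.

```go
package sketch

import "encoding/binary"

type telemetry struct {
	BatteryMv    int
	TemperatureC float64
}

// decodeAdvertTelemetry skips the null-terminated name in an advert's appdata
// and, if enough bytes remain, reads battery millivolts and temperature.
// Returns ok=false when the advert carries no telemetry.
func decodeAdvertTelemetry(appdata []byte) (telemetry, bool) {
	i := 0
	for i < len(appdata) && appdata[i] != 0 {
		i++ // scan past the name
	}
	i++ // skip the null terminator itself
	if i+4 > len(appdata) {
		return telemetry{}, false // no telemetry bytes present
	}
	mv := binary.LittleEndian.Uint16(appdata[i : i+2])
	raw := int16(binary.LittleEndian.Uint16(appdata[i+2 : i+4]))
	return telemetry{
		BatteryMv:    int(mv),
		TemperatureC: float64(raw) / 100.0,
	}, true
}
```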
f374a4a775 |
fix: enforce consistent types between Go ingestor writes and server reads
Schema: - observers.noise_floor: INTEGER → REAL (dBm has decimals) - battery_mv, uptime_secs remain INTEGER (always whole numbers) Ingestor write side (cmd/ingestor/db.go): - UpsertObserver now accepts ObserverMeta with battery_mv (int), uptime_secs (int64), noise_floor (float64) - COALESCE preserves existing values when meta is nil - Added migration: cast integer noise_floor values to REAL Ingestor MQTT handler (cmd/ingestor/main.go — already updated): - extractObserverMeta extracts hardware fields from status messages - battery_mv/uptime_secs cast via math.Round to int on write Server read side (cmd/server/db.go): - Observer.BatteryMv: *float64 → *int (matches INTEGER storage) - Observer.UptimeSecs: *float64 → *int64 (matches INTEGER storage) - Observer.NoiseFloor: *float64 (unchanged, matches REAL storage) - GetObservers/GetObserverByID: use sql.NullInt64 intermediaries for battery_mv/uptime_secs, sql.NullFloat64 for noise_floor Proto (proto/observer.proto — already correct): - battery_mv: int32, uptime_secs: int64, noise_floor: double Tests: - TestUpsertObserverWithMeta: verifies correct SQLite types via typeof() - TestUpsertObserverMetaPreservesExisting: nil-meta preserves values - TestExtractObserverMeta: float-to-int rounding, empty message - TestSchemaNoiseFloorIsReal: PRAGMA table_info validation - TestObserverTypeConsistency: server reads typed values correctly - TestObserverTypesInGetObservers: list endpoint type consistency Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
6dacdef6b8 |
perf: precompute subpath index at ingest time, fixes #168
The subpaths analytics endpoint iterated ALL packets on every cold query, taking ~900ms. The TTL cache only masked the problem. Fix: maintain a precomputed raw-hop subpath index (map[string]int) that is built once during Load() and incrementally updated during IngestNewFromDB() and IngestNewObservations(). At query time the fast path iterates only unique raw subpaths (typically a few thousand entries) instead of all packets (30K+), resolves hop prefixes to names, and merges counts. Region-filtered queries still fall back to the O(N) path since they require per-transmission observer checks. Expected cold-hit improvement: ~900ms → <5ms for the common no-region case. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
f1cb840b5a |
fix: prepend to byPayloadType in IngestNewFromDB to preserve newest-first order
IngestNewFromDB was appending new transmissions to byPayloadType slices, breaking the newest-first ordering established by Load(). This caused GetChannelMessages (which iterates backwards assuming newest-first) to place newly ingested messages at the wrong position, making them invisible when returning the latest messages from the tail. Changed append to prepend, matching the existing s.packets prepend pattern on line 881. Added regression test. fixes #198 Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
2435f2eaaf |
fix: observation timestamps, leaked fields, perf path normalization
- #178: Use strftime ISO 8601 format instead of datetime() for observation timestamps in all SQL queries (v3 + v2 views). Add normalizeTimestamp() helper for non-v3 paths that may store space-separated timestamps. - #179: Strip internal fields (decoded_json, direction, payload_type, raw_hex, route_type, score, created_at) from ObservationResp. Only expose id, transmission_id, observer_id, observer_name, snr, rssi, path_json, timestamp — matching Node.js parity. - #180: Remove _parsedDecoded and _parsedPath from node detail recentAdverts response. These internal/computed fields were leaking to the API. Updated golden shapes.json accordingly. - #181: Use mux route template (GetPathTemplate) for perf stats path normalization, converting {param} to :param for Node.js parity. Fallback to hex regex for unmatched routes. Compile regexes once at package level instead of per-request. fixes #178, fixes #179, fixes #180, fixes #181 Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
df63efa78d |
fix: poll new observations for existing transmissions (fixes #174)
The poller only queried WHERE t.id > sinceID, which missed new observations added to transmissions already in the store. The trace page was correct because it always queries the DB directly. Add IngestNewObservations() that polls observations by o.id watermark, adds them to existing StoreTx entries, re-picks best observation, and invalidates analytics caches. The Poller now tracks both lastTxID and lastObsID watermarks. Includes tests for v3, v2, dedup, best-path re-pick, and GetMaxObservationID. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
95afaf2f0d |
fix(go): add nested packet field to WS broadcast, fixes #162
The frontend packets.js filters WS messages with m.data?.packet and extracts m.data.packet for live rendering. Node's server.js includes a packet sub-object (packet: fullPacket) in the broadcast data, but Go's IngestNewFromDB built the data flat without a nested packet field. This caused the Go staging packets page to never live-update via WS even though messages were being sent — they were silently filtered out by packets.js. Fix: build the packet fields map separately, then create the broadcast map with both top-level fields (for live.js) and nested packet (for packets.js). Also fixes the fallback DB-direct poller path. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
6ec00a5aab |
test(go): fix failing test + add 18 tests, coverage 91.5% → 92.3%
Fix TestStoreGetAnalyticsChannelsNumericHash: corrected observation transmission_id references (3,4,5 → 4,5,6) and changed assertion to only check numeric-hash channels (seed data CHAN lacks channelHash). New tests covering gaps: - resolvePayloadTypeName unknown type fallback - Cache hit paths for Topology, HashSizes, Channels - GetChannelMessages edge cases (empty, offset, default limit) - filterPackets: empty region, since/until, hash-only fast path, observer+type combo, node filter - GetNodeHashSizeInfo: short hex, bad hex, bad JSON, missing pubKey, public_key field, flip-flop inconsistency detection - handleResolveHops: empty param, empty hop skip, nonexistent prefix - handleObservers error path (closed DB) - handleAnalyticsChannels DB fallback (no store) - GetChannelMessages dedup/repeats - transmissionsForObserver from-slice filter path - GetPerfStoreStats advertByObserver count - handleAudioLabBuckets query error path Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
1457795e3e |
fix: analytics channels + uniqueNodes mismatch
fixes #154: Go analytics channels showed single 'ch?' because channelHash is a JSON number (from decoder.js) but the Go struct declared it as string. json.Unmarshal failed on every packet. Changed to interface{} with proper type conversion. Also fixed chKey to use hash (not name) for grouping, matching Node.js. fixes #155: uniqueNodes in topology analytics used hop resolution count (phantom hops inflated it). Both Node.js and Go now use db.getStats().totalNodes (7-day active window), matching /api/stats. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
4101dc1715 |
test(go): push Go server test coverage from 77.9% to 91.4% (#145)
Add comprehensive coverage_test.go targeting all major coverage gaps: - buildPacketWhere: all 8 filter types (0% -> covered) - QueryMultiNodePackets: DB + store paths (0% -> covered) - IngestNewFromDB: v3/v2 schema, dedup, defaults (0% -> covered) - MaxTransmissionID: loaded + empty store (0% -> covered) - handleBulkHealth DB fallback (24.4% -> covered) - handleAnalyticsRF DB fallback (11.8% -> covered) - handlePackets multi-node path (63.5% -> covered) - handlePacketDetail no-store fallback (71.1% -> covered) - handleAnalyticsChannels DB fallback (57.1% -> covered) - detectSchema v2 path (30.8% -> covered) - Store channel queries, analytics, helpers - Prefix map resolve with GPS preference - wsOrStatic, Poller, perfMiddleware slow queries - Helper functions: pathLen, floatPtrOrNil, nilIfEmpty, etc. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |