Mirror of https://github.com/Kpa-clawbot/meshcore-analyzer.git
Synced 2026-05-12 04:54:42 +00:00 (eaeb65b426a7ac80e7d8e2c32aaa4b5ca0ba1521)

116 Commits
83881e6b71 | fix(#688): auto-discover hashtag channels from message text (#1071)

## Summary

Auto-discovers previously-unknown hashtag channels by scanning decoded channel message text for `#name` mentions and surfacing them via `GetChannels`. Workflow (per the issue):

1. New channel message arrives on a known channel
2. Decoded text is scanned for `#hashtag` mentions
3. Any mention that doesn't match an existing channel is surfaced as a discovered channel (`discovered: true`, `messageCount: 0`)
4. Future traffic on that channel will populate the entry once it has its own packets

## Changes

- `cmd/server/discovered_channels.go` — new file. `extractHashtagsFromText` parses `#name` mentions from free text, deduped, order-preserving. Trailing punctuation is excluded by the character class.
- `cmd/server/store.go` — `GetChannels` now scans CHAN packet text for hashtags after building the primary channel map, and appends any unseen hashtag mentions as discovered entries.
- `cmd/server/discovered_channels_test.go` — new tests covering parser edge cases (single, multi, dedup, punctuation, none, bare `#`) and end-to-end discovery via `GetChannels`.

## TDD

- Red: `34f1817` — stub returns `nil`, both new tests fail on assertion (verified).
- Green: `d27b3ed` — real implementation, full `cmd/server` test suite passes (21.7s).

## Notes

- Discovered channels carry `messageCount: 0` and `lastActivity` set to the most recent mention's `firstSeen`, so they sort naturally alongside real channels.
- Names are matched against existing entries by both `#name` and bare `name` so a channel that already has decoded traffic isn't double-listed.
- The existing `channelsCache` (15s) covers the new code path; no separate invalidation needed since the source data (`byPayloadType[5]`) drives both maps.

Fixes #688

---------

Co-authored-by: corescope-bot <bot@corescope.local>
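The PR does not quote the parser itself; a minimal sketch of what an order-preserving, deduplicating hashtag extractor could look like (the regex, signature, and example text here are illustrative assumptions, not the actual `discovered_channels.go` code):

```go
package main

import (
	"fmt"
	"regexp"
)

// hashtagRe matches "#" followed by word-ish characters; trailing punctuation
// such as "," or "." falls outside the character class and is excluded.
var hashtagRe = regexp.MustCompile(`#[A-Za-z0-9_-]+`)

// extractHashtagsFromText returns the distinct #hashtag mentions found in
// text, in first-appearance order. A bare "#" never matches.
func extractHashtagsFromText(text string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, m := range hashtagRe.FindAllString(text, -1) {
		if !seen[m] {
			seen[m] = true
			out = append(out, m)
		}
	}
	return out
}

func main() {
	fmt.Println(extractHashtagsFromText("meet on #sfba, not #test. #sfba again"))
	// Output: [#sfba #test]
}
```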
d144764d38 | fix(analytics): multiByteCapability missing under region filter → all rows 'unknown' (#1049)

## Bug

`https://meshcore.meshat.se/#/analytics`:

- Unfiltered → 0 adopter rows show "unknown" (correct).
- Region filter `JKG` → 14 rows show "unknown" (wrong — same nodes, all confirmed when unfiltered).

Multi-byte capability is a property of the NODE, derived from its own adverts (the full pubkey is in the advert payload, no prefix collision risk). The observing region should only control which nodes appear in the analytics list — it must not change a node's cap evidence.

## Root cause

`PacketStore.GetAnalyticsHashSizes(region)` only attached `result["multiByteCapability"]` when `region == ""`. Under any region filter the field was absent. The frontend (`public/analytics.js:1011`) does `data.multiByteCapability || []`, so every adopter row falls through the merge with no cap status and renders as "unknown".

## Fix

Always populate `multiByteCapability`. When a region filter is active, source the global adopter hash-size set from a no-region compute pass so out-of-region observers' adverts still count as evidence.

## TDD

- Red commit (`0968137`): adds `cmd/server/multibyte_region_filter_test.go`, asserts that `GetAnalyticsHashSizes("JKG")` returns a populated `multiByteCapability` with Node A as `confirmed`. Fails on the assertion (field missing) before the fix.
- Green commit (`6616730`): always compute capability against the global advert dataset.

## Files changed

- `cmd/server/store.go` — `GetAnalyticsHashSizes`: drop the `region == ""` gate, always populate `multiByteCapability`.
- `cmd/server/multibyte_region_filter_test.go` — new red→green test.

## Verification

```
go test ./... -count=1   # all server tests pass (21s)
```

---------

Co-authored-by: clawbot <bot@corescope.local>
9f55ef802b | fix(#804): attribute analytics by repeater home region, not observer (#1025)

Fixes #804.

## Problem

Analytics filtered region purely by **observer** region: a multi-byte repeater whose home is PDX would leak into SJC results whenever its flood adverts were relayed past an SJC observer. Per-node groupings (`multiByteNodes`, `distributionByRepeaters`) inherited the same bug.

## Fix

Two new helpers in `cmd/server/store.go`:

- `iataMatchesRegion(iata, regionParam)` — case-insensitive IATA→region match using the existing `normalizeRegionCodes` parser.
- `computeNodeHomeRegions()` — derives each node's HOME IATA from its zero-hop DIRECT adverts. Path byte for those packets is set locally on the originating radio and the packet has not been relayed, so the observer that hears it must be in direct RF range. Plurality vote when zero-hop adverts span multiple regions.

`computeAnalyticsHashSizes` now applies these in two ways:

1. **Observer-region filter is relaxed for ADVERT packets** when the originator's home region matches the requested region. A flood advert from a PDX repeater that's only heard by an SJC observer still attributes to PDX.
2. **Per-node grouping** (`multiByteNodes`, `distributionByRepeaters`) excludes nodes whose HOME region disagrees with the requested region. Falls back to the observer-region filter when home is unknown.

Adds `attributionMethod` to the response (`"observer"` or `"repeater"`) so operators can tell which method was applied.

## Backwards compatibility

- No region filter requested → behavior unchanged (`attributionMethod` is `"observer"`).
- Region filter requested but no zero-hop direct adverts seen for a node → falls back to the prior observer-region check for that node.
- Operators without IATA-tagged observers see no change.

## TDD

- **Red commit** (`c35d349`): adds `TestIssue804_AnalyticsAttributesByRepeaterRegion` with three subtests (PDX leak into SJC, attributionMethod field present, SJC leak into PDX). Compiles, runs, fails on assertions.
- **Green commit** (`11b157f`): the implementation. All subtests pass, full `cmd/server` package green.

## Files changed

- `cmd/server/store.go` — helpers + analytics filter logic (+236/-51)
- `cmd/server/issue804_repeater_region_test.go` — new test (+147)

---------

Co-authored-by: CoreScope Bot <bot@corescope.local>
Co-authored-by: openclaw-bot <bot@openclaw.local>
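The home-region derivation described above boils down to counting which region's observers hear a node's zero-hop direct adverts and taking the most common one. A small illustrative sketch of that plurality vote (the types and map shapes are assumptions, not the actual `computeNodeHomeRegions` code):

```go
package main

import "fmt"

// zeroHopAdvert records one zero-hop DIRECT advert: which node sent it and
// the IATA region of the observer that heard it directly.
type zeroHopAdvert struct {
	NodePubkey   string
	ObserverIATA string
}

// computeHomeRegions assigns each node the region that heard the plurality
// of its zero-hop adverts (ties resolved arbitrarily in this sketch).
func computeHomeRegions(adverts []zeroHopAdvert) map[string]string {
	votes := make(map[string]map[string]int) // pubkey -> region -> count
	for _, a := range adverts {
		if votes[a.NodePubkey] == nil {
			votes[a.NodePubkey] = make(map[string]int)
		}
		votes[a.NodePubkey][a.ObserverIATA]++
	}
	home := make(map[string]string)
	for pk, regions := range votes {
		best, bestN := "", 0
		for region, n := range regions {
			if n > bestN {
				best, bestN = region, n
			}
		}
		home[pk] = best
	}
	return home
}

func main() {
	adverts := []zeroHopAdvert{
		{"c0dedad4", "PDX"}, {"c0dedad4", "PDX"}, {"c0dedad4", "SJC"},
	}
	fmt.Println(computeHomeRegions(adverts)["c0dedad4"]) // PDX
}
```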
a56ee5c4fe | feat(analytics): selectable timeframes via ?window/?from/?to (#842) (#1018)

## Summary

Selectable analytics timeframes (#842). Adds backend support for `?window=1h|24h|7d|30d` and `?from=&to=` on the three main analytics endpoints (`/api/analytics/rf`, `/api/analytics/topology`, `/api/analytics/channels`), and a time-window picker in the Analytics page UI that drives them. Default behavior with no query params is unchanged.

## TDD trail

- Red: `bbab04d` — adds `TimeWindow` + `ParseTimeWindow` stub and tests; tests fail on assertions because the stub returns the zero window.
- Green: `75d27f9` — implements `ParseTimeWindow`, threads `TimeWindow` through `compute*` loops + caches, wires HTTP handlers, adds frontend picker + E2E.

## Backend changes

- `cmd/server/time_window.go` — full `ParseTimeWindow` (`?window=` aliases + `?from=/&to=` RFC3339 absolute range; invalid input → zero window for backwards compatibility).
- `cmd/server/store.go` — new `GetAnalytics{RF,Topology,Channels}WithWindow` wrappers; `compute*` loops skip transmissions whose `FirstSeen` (or per-obs `Timestamp` for the region+observer slice) falls outside the window. Cache key composes `region|window` so different windows do not poison each other.
- `cmd/server/routes.go` — handlers call `ParseTimeWindow(r)` and dispatch to the `*WithWindow` methods.

## Frontend changes

- `public/analytics.js` — new `<select id="analyticsTimeWindow">` rendered under the region filter (All / 1h / 24h / 7d / 30d). Selecting an option triggers `loadAnalytics()` which appends `&window=…` to every analytics fetch.

## Tests

- `cmd/server/time_window_test.go` — covers all aliases, absolute range, no-params backwards compatibility, `Includes()` bounds, and `CacheKey()` distinctness.
- `cmd/server/topology_dedup_test.go`, `cmd/server/channel_analytics_test.go` — updated callers to pass `TimeWindow{}`.

## E2E (rule 18)

`test-e2e-playwright.js:592-611` — opens `/#/analytics`, asserts the picker is rendered with a `24h` option, then asserts that selecting `24h` triggers a network request to `/api/analytics/rf?…window=24h`.

## Backwards compatibility

No params → zero `TimeWindow` → original code paths (no filter, region-only cache key). Verified by `TestParseTimeWindow_NoParams_BackwardsCompatible` and by the existing analytics tests still passing unchanged on `_wt-fix-842`.

Fixes #842

---------

Co-authored-by: you <you@example.com>
Co-authored-by: corescope-bot <bot@corescope>
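A rough sketch of how a `?window=`/`?from=`/`?to=` parser like the one described could behave (the `TimeWindow` shape, field names, and fallback behavior below are assumptions inferred from the description, not the actual `time_window.go` code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// TimeWindow is a half-open [From, To) range; the zero value means "no filter".
type TimeWindow struct {
	From, To time.Time
}

func (w TimeWindow) IsZero() bool { return w.From.IsZero() && w.To.IsZero() }

// parseTimeWindow resolves ?window= aliases or an absolute ?from=&to= RFC3339
// range. Invalid input falls back to the zero window so existing callers keep
// their old, unfiltered behavior.
func parseTimeWindow(r *http.Request, now time.Time) TimeWindow {
	q := r.URL.Query()
	aliases := map[string]time.Duration{
		"1h": time.Hour, "24h": 24 * time.Hour,
		"7d": 7 * 24 * time.Hour, "30d": 30 * 24 * time.Hour,
	}
	if d, ok := aliases[q.Get("window")]; ok {
		return TimeWindow{From: now.Add(-d), To: now}
	}
	from, errF := time.Parse(time.RFC3339, q.Get("from"))
	to, errT := time.Parse(time.RFC3339, q.Get("to"))
	if errF == nil && errT == nil && to.After(from) {
		return TimeWindow{From: from, To: to}
	}
	return TimeWindow{} // zero window: no filtering
}

func main() {
	r, _ := http.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
	w := parseTimeWindow(r, time.Now())
	fmt.Println(w.IsZero(), w.To.Sub(w.From)) // false 24h0m0s
}
```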
e86b5a3a0c | feat: show multi-byte hash support indicator on map markers (#1002)

## Summary

Show 2-byte hash support indicator on map markers. Fixes #903.

## What changed

### Backend (`cmd/server/store.go`, `cmd/server/routes.go`)

- **`EnrichNodeWithMultiByte()`** — new enrichment function that adds `multi_byte_status` (confirmed/suspected/unknown), `multi_byte_evidence` (advert/path), and `multi_byte_max_hash_size` fields to node API responses
- **`GetMultiByteCapMap()`** — cached (15s TTL) map of pubkey → `MultiByteCapEntry`, reusing the existing `computeMultiByteCapability()` logic that combines advert-based and path-hop-based evidence
- Wired into both `/api/nodes` (list) and `/api/nodes/{pubkey}` (detail) endpoints

### Frontend (`public/map.js`)

- Added **"Multi-byte support"** checkbox in the map Display controls section
- When toggled on, repeater markers change color:
  - 🟢 Green (`#27ae60`) — **confirmed** (advertised with hash_size ≥ 2)
  - 🟡 Yellow (`#f39c12`) — **suspected** (seen as hop in multi-byte path)
  - 🔴 Red (`#e74c3c`) — **unknown** (no multi-byte evidence)
- Popup tooltip shows multi-byte status and evidence for repeaters
- State persisted in localStorage (`meshcore-map-multibyte-overlay`)

## TDD

- Red commit: `2f49cbc` — failing test for `EnrichNodeWithMultiByte`
- Green commit: `4957782` — implementation + passing tests

## Performance

- `GetMultiByteCapMap()` uses a 15s TTL cache (same pattern as `GetNodeHashSizeInfo`)
- Enrichment is O(n) over nodes, no per-item API calls
- Frontend color override is computed inline during existing marker render loop — no additional DOM rebuilds

---------

Co-authored-by: you <you@example.com>
564d93d6aa | fix: dedup topology analytics by resolved pubkey (#998)

## Fix topology analytics double-counting repeaters/pairs (#909)

### Problem

`computeAnalyticsTopology()` aggregates by raw hop hex string. When firmware emits variable-length path hashes (1-3 bytes per hop), the same physical node appears multiple times with different prefix lengths (e.g. `"07"`, `"0735bc"`, `"0735bc6d"` all referring to the same node). This inflates repeater counts and creates duplicate pair entries.

### Solution

Added a confidence-gated dedup pass after frequency counting:

1. **For each hop prefix**, check if it resolves unambiguously (exactly 1 candidate in the prefix map)
2. **Unambiguous prefixes** → group by resolved pubkey, sum counts, keep longest prefix as display identifier
3. **Ambiguous prefixes** (multiple candidates for that prefix) → left as separate entries (conservative)
4. **Same treatment for pairs**: canonicalize by sorted pubkey pair

### Addressing @efiten's collision concern

At scale (~2000+ repeaters), 1-byte prefixes (256 buckets) WILL collide. This fix explicitly checks the prefix map candidate count. Ambiguous prefixes (where `len(pm.m[hop]) > 1`) are never merged — they remain as separate entries. Only prefixes with a single matching node are eligible for dedup.

### TDD

- **Red commit**: `4dbf9c0` — added 3 failing tests
- **Green commit**: `d6cae9a` — implemented dedup, all tests pass

### Tests added

- `TestTopologyDedup_RepeatersMergeByPubkey` — verifies entries with different prefix lengths for same node merge to single entry with summed count
- `TestTopologyDedup_AmbiguousPrefixNotMerged` — verifies colliding short prefix stays separate from unambiguous longer prefix
- `TestTopologyDedup_PairsMergeByPubkey` — verifies pair entries merge by resolved pubkey pair

Fixes #909

---------

Co-authored-by: you <you@example.com>
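The confidence gate above can be pictured as a merge pass keyed on the prefix map's candidate count. A simplified sketch (data shapes and names are illustrative, not the actual `computeAnalyticsTopology` code):

```go
package main

import "fmt"

type hopEntry struct {
	Prefix string // raw hop hex as seen on air
	Count  int
}

// dedupByResolvedPubkey merges hop entries whose prefix resolves to exactly
// one known node; ambiguous prefixes (more than one candidate) stay untouched.
func dedupByResolvedPubkey(entries []hopEntry, prefixMap map[string][]string) []hopEntry {
	merged := make(map[string]*hopEntry) // resolved pubkey -> merged entry
	var out []hopEntry
	for _, e := range entries {
		candidates := prefixMap[e.Prefix]
		if len(candidates) != 1 {
			out = append(out, e) // ambiguous or unknown: keep as-is
			continue
		}
		pk := candidates[0]
		m, ok := merged[pk]
		if !ok {
			c := e
			merged[pk] = &c
			continue
		}
		m.Count += e.Count
		if len(e.Prefix) > len(m.Prefix) {
			m.Prefix = e.Prefix // keep the longest prefix as the display identifier
		}
	}
	for _, m := range merged {
		out = append(out, *m)
	}
	return out
}

func main() {
	pm := map[string][]string{
		"07":     {"0735bc6d..."},
		"0735bc": {"0735bc6d..."},
		"c0":     {"c0dedad4...", "c0ffeec7..."}, // collision: never merged
	}
	in := []hopEntry{{"07", 5}, {"0735bc", 2}, {"c0", 9}}
	fmt.Println(dedupByResolvedPubkey(in, pm))
}
```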
b7c280c20a | fix: drop/filter packets with null hash or timestamp (closes #871) (#993)

## Summary

Closes #871

The `/api/packets` endpoint could return packets with `null` hash or timestamp fields. This was caused by legacy data in SQLite (rows with empty `hash` or `NULL`/empty `first_seen`) predating the ingestor's existing validation guard (`if hash == "" { return false, nil }` at `cmd/ingestor/db.go:610`).

## Root Cause

`cmd/server/store.go` `filterPackets()` had no data-integrity guard. Legacy rows with empty `hash` or `first_seen` were loaded into the in-memory store and returned verbatim. The `strOrNil("")` helper then serialized these as JSON `null`.

## Fix

Added a data-integrity predicate at the top of `filterPackets`'s scan callback (`cmd/server/store.go:2278`):

```go
if tx.Hash == "" || tx.FirstSeen == "" {
    return false
}
```

This filters bad legacy rows at query time. The write path (ingestor) already rejects empty hashes, so no new bad data enters.

## TDD Evidence

- **Red commit:** `15774c3` — test `TestIssue871_NoNullHashOrTimestamp` asserts no packet in API response has null/empty hash or timestamp
- **Green commit:** `281fd6f` — adds the filter guard, test passes

## Testing

- `go test ./...` in `cmd/server` passes (full suite)
- Client-side defensive filter from PR #868 remains as defense-in-depth

---------

Co-authored-by: you <you@example.com>
fc57433f27 | fix(analytics): merge channel buckets by hash byte; reject rainbow-table mismatches (closes #978) (#980)

## Summary

Closes #978 — analytics channels duplicated by encrypted/decrypted split + rainbow-table collisions.

## Root cause

Two distinct bugs in `computeAnalyticsChannels` (`cmd/server/store.go`):

1. **Encrypted/decrypted split**: The grouping key included the decoded channel name (`hash + "_" + channel`), so packets from observers that could decrypt a channel created a separate bucket from packets where decryption failed. Same physical channel, two entries.
2. **Rainbow-table collisions**: Some observers' lookup tables map hash bytes to wrong channel names. E.g., hash `72` incorrectly claimed to be `#wardriving` (real hash is `129`). This created ghost 1-message entries.

## Fix

1. **Always group by hash byte alone** (drop `_channel` suffix from `chKey`). When any packet decrypts successfully, upgrade the bucket's display name from placeholder (`chN`) to the real name (first-decrypter-wins for stability).
2. **Validate channel names** against the firmware hash invariant: `SHA256(SHA256("#name")[:16])[0] == channelHash`. Mismatches are treated as encrypted (placeholder name, no trust in decoded channel). Guard is in the analytics handler (not the ingestor) to avoid breaking other surfaces that use the decoded field for display.

## Verification (e2e-fixture.db)

| Metric | BEFORE | AFTER |
|--------|--------|-------|
| Total channels | 22 | 19 |
| Duplicate hash bytes | 3 (hashes 217, 202, 17) | 0 |

## Tests added

- `TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted` — same hash, mixed encrypted/decrypted → ONE bucket
- `TestComputeAnalyticsChannels_RejectsRainbowTableMismatch` — hash 72 claimed as `#wardriving` (real=129) → rejected, stays `ch72`
- `TestChannelNameMatchesHash` — unit test for hash validation helper
- `TestIsPlaceholderName` — unit test for placeholder detection

Anti-tautology gate: both main tests fail when their respective fix lines are reverted.

Co-authored-by: you <you@example.com>
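Taking the invariant exactly as the commit states it, a validation helper could look roughly like this (the formula is quoted from the commit text; the function name and example values are assumptions):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// channelNameMatchesHash checks the invariant described in the commit:
// the first byte of SHA256 over the first 16 bytes of SHA256("#name")
// must equal the observed channel hash byte.
func channelNameMatchesHash(name string, channelHash byte) bool {
	inner := sha256.Sum256([]byte(name)) // name includes the leading '#'
	outer := sha256.Sum256(inner[:16])
	return outer[0] == channelHash
}

func main() {
	// Per the commit, 129 is the real hash byte for "#wardriving" and 72 was a
	// rainbow-table mismatch, so only the first check is expected to pass.
	fmt.Println(channelNameMatchesHash("#wardriving", 129))
	fmt.Println(channelNameMatchesHash("#wardriving", 72))
}
```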
e460932668 | fix(store): apply retentionHours cutoff in Load() to prevent OOM on cold start (#917)

## Problem

`Load()` loaded all transmissions from the DB regardless of `retentionHours`, so `buildSubpathIndex()` processed the full DB history on every startup. On a DB with ~280K paths this produces ~13.5M subpath index entries, OOM-killing the process before it ever starts listening — causing a supervisord crash loop with no useful error message.

## Fix

Apply the same `retentionHours` cutoff to `Load()`'s SQL that `EvictStale()` already uses at runtime. Both conditions (`retentionHours` window and `maxPackets` cap) are combined with AND so neither safety limit is bypassed. Startup now builds indexes only over the retention window, making startup time and memory proportional to recent activity rather than total DB history.

## Docs

- `config.example.json`: adds `retentionHours` to the `packetStore` block with recommended value `168` (7 days) and a warning about `0` on large DBs
- `docs/user-guide/configuration.md`: documents the field and adds an explicit OOM warning

## Test plan

- [x] `cd cmd/server && go test ./... -run TestRetentionLoad` — covers the retention-filtered load: verifies packets outside the window are excluded, and that `retentionHours: 0` still loads everything
- [x] Deploy on an instance with a large DB (>100K paths) and `retentionHours: 168` — server reaches "listening" in seconds instead of OOM-crashing
- [x] Verify `config.example.json` has `retentionHours: 168` in the `packetStore` block
- [x] Verify `docs/user-guide/configuration.md` documents the field and warning

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Kpa-clawbot <kpaclawbot@outlook.com>
54f7f9d35b | feat: path-prefix candidate inspector with map view (#944) (#945)

## feat: path-prefix candidate inspector with map view (#944)

Implements the locked spec from #944: a beam-search-based path prefix inspector that enumerates candidate full-pubkey paths from short hex prefixes and scores them.

### Server (`cmd/server/path_inspect.go`)

- **`POST /api/paths/inspect`** — accepts 1-64 hex prefixes (1-3 bytes, uniform length per request)
- Beam search (width 20) over cached `prefixMap` + `NeighborGraph`
- Per-hop scoring: edge weight (35%), GPS plausibility (20%), recency (15%), prefix selectivity (30%)
- Geometric mean aggregation with 0.05 floor per hop
- Speculative threshold: score < 0.7
- Score cache: 30s TTL, keyed by (prefixes, observer, window)
- Cold-start: synchronous NeighborGraph rebuild with 2s hard timeout → 503 `{retry:true}`
- Body limit: 4096 bytes via `http.MaxBytesReader`
- Zero SQL queries in handler hot path
- Request validation: rejects empty, odd-length, >3 bytes, mixed lengths, >64 hops

### Frontend (`public/path-inspector.js`)

- New page under Tools route with input field (comma/space separated hex prefixes)
- Client-side validation with error feedback
- Results table: rank, score (color-coded speculative), path names, per-hop evidence (collapsed)
- "Show on Map" button calls `drawPacketRoute` (one path at a time, clears prior)
- Deep link: `#/tools/path-inspector?prefixes=2c,a1,f4`

### Nav reorganization

- `Traces` nav item renamed to `Tools`
- Backward-compat: `#/traces/<hash>` redirects to `#/tools/trace/<hash>`
- Tools sub-routing dispatches to traces or path-inspector

### Store changes

- Added `LastSeen time.Time` to `nodeInfo` struct, populated from `nodes.last_seen`
- Added `inspectMu` + `inspectCache` fields to `PacketStore`

### Tests

- **Go unit tests** (`path_inspect_test.go`): scoreHop components, beam width cap, speculative flag, all validation error cases, valid request integration
- **Frontend tests** (`test-path-inspector.js`): parse comma/space/mixed, validation (empty, odd, >3 bytes, mixed lengths, invalid hex, valid)
- Anti-tautology gate verified: removing beam pruning fails width test; removing validation fails reject tests

### CSS

- `--path-inspector-speculative` variable in both themes (amber, WCAG AA on both dark/light backgrounds)
- All colors via CSS variables (no hardcoded hex in production code)

Closes #944

---------

Co-authored-by: you <you@example.com>
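The scoring rules listed above (weighted per-hop score, geometric-mean aggregation with a floor, speculative cutoff) can be sketched compactly. The weights, the 0.05 floor, and the 0.7 threshold come from the commit text; the function shapes and evidence fields are assumptions:

```go
package main

import (
	"fmt"
	"math"
)

// hopEvidence holds normalized (0..1) evidence components for one hop.
type hopEvidence struct {
	EdgeWeight, GPSPlausibility, Recency, PrefixSelectivity float64
}

// scoreHop combines the components with the weights given in the commit:
// 35% edge weight, 20% GPS plausibility, 15% recency, 30% prefix selectivity.
func scoreHop(h hopEvidence) float64 {
	return 0.35*h.EdgeWeight + 0.20*h.GPSPlausibility + 0.15*h.Recency + 0.30*h.PrefixSelectivity
}

// scorePath aggregates per-hop scores with a geometric mean, flooring each
// hop at 0.05 so a single unknown hop cannot zero out the whole candidate.
func scorePath(hops []hopEvidence) float64 {
	if len(hops) == 0 {
		return 0
	}
	logSum := 0.0
	for _, h := range hops {
		s := math.Max(scoreHop(h), 0.05)
		logSum += math.Log(s)
	}
	return math.Exp(logSum / float64(len(hops)))
}

func main() {
	path := []hopEvidence{
		{0.9, 0.8, 0.7, 0.95},
		{0.2, 0.5, 0.9, 0.40},
	}
	score := scorePath(path)
	fmt.Printf("score=%.2f speculative=%v\n", score, score < 0.7)
}
```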
5678874128 | fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)

Fixes #935

## Problem

`buildPrefixMap()` indexed ALL nodes regardless of role, causing companions/sensors to appear as repeater hops when their pubkey prefix collided with a path-hop hash byte.

## Fix

### Server (`cmd/server/store.go`)

- Added `canAppearInPath(role string) bool` — allowlist of roles that can forward packets (repeater, room_server, room)
- `buildPrefixMap` now skips nodes that fail this check

### Client (`public/hop-resolver.js`)

- Added matching `canAppearInPath(role)` helper
- `init()` now only populates `prefixIdx` for path-eligible nodes
- `pubkeyIdx` remains complete — `resolveFromServer()` still resolves any node type by full pubkey (for server-confirmed `resolved_path` arrays)

## Tests

- `cmd/server/prefix_map_role_test.go`: 7 new tests covering role filtering in prefix map and resolveWithContext
- `test-hop-resolver-affinity.js`: 4 new tests verifying client-side role filter + pubkeyIdx completeness
- All existing tests updated to include `Role: "repeater"` where needed
- `go test ./cmd/server/...` — PASS
- `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing centroid failure unrelated to this change)

## Commits

1. `fix: filter prefix map to only repeater/room roles (#935)` — server implementation
2. `test: prefix map role filter coverage (#935)` — server tests
3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` — client implementation
4. `test: hop-resolver role filter coverage (#935)` — client tests

---------

Co-authored-by: you <you@example.com>
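A role allowlist like the one named above is essentially a switch; a hedged sketch (the role strings are taken from the commit text, everything else is illustrative):

```go
package main

import "fmt"

// canAppearInPath reports whether a node with the given role can forward
// packets and therefore belongs in the path-hop prefix index.
func canAppearInPath(role string) bool {
	switch role {
	case "repeater", "room_server", "room":
		return true
	default:
		return false // companions, sensors, etc. never appear as hops
	}
}

func main() {
	fmt.Println(canAppearInPath("repeater"), canAppearInPath("companion")) // true false
}
```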
a605518d6d | fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)

## Problem

Each MeshCore observer receives a physically distinct over-the-air byte sequence for the same transmission (different path bytes, flags/hops remaining). The `observations` table stored only `path_json` per observer — all observations pointed at one `transmissions.raw_hex`. This prevented the hex pane from updating when switching observations in the packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
a371d35bfd | feat(#847): dedupe Top Longest Hops by pair + add obs count and SNR cues (#848)

## Problem

The "Top 20 Longest Hops" RF analytics card shows the same repeater pair filling most slots because the query sorts raw hop records by distance with no pair deduplication. A single long link observed 12+ times dominates the leaderboard.

## Fix

Dedupe by unordered `(pk1, pk2)` pair. Per pair, keep the max-distance record and compute reliability metrics:

| Column | Description |
|--------|-------------|
| **Obs** | Total observations of this link |
| **Best SNR** | Maximum SNR seen (dB) |
| **Median SNR** | Median SNR across all observations (dB) |

Tooltip on each row shows the timestamp of the best observation.

### Before

| # | From | To | Distance | Type | SNR | Packet |
|---|------|----|----------|------|-----|--------|
| 1 | NodeX | NodeY | 200 mi | R↔R | 5 dB | abc… |
| 2 | NodeX | NodeY | 199 mi | R↔R | 6 dB | def… |
| 3 | NodeX | NodeY | 198 mi | R↔R | 4 dB | ghi… |

### After

| # | From | To | Distance | Type | Obs | Best SNR | Median SNR | Packet |
|---|------|----|----------|------|-----|----------|------------|--------|
| 1 | NodeX | NodeY | 200 mi | R↔R | 12 | 8.0 dB | 5.2 dB | abc… |
| 2 | NodeA | NodeB | 150 mi | C↔R | 3 | 6.5 dB | 6.5 dB | jkl… |

## Changes

- **`cmd/server/store.go`**: Group `filteredHops` by unordered pair key, accumulate obs count / best SNR / median SNR per group, sort by max distance, take top 20
- **`cmd/server/types.go`**: Update `DistanceHop` struct — replace `SNR` with `BestSnr`, `MedianSnr`, add `ObsCount`
- **`public/analytics.js`**: Replace single SNR column with Obs, Best SNR, Median SNR; add row tooltip with best observation timestamp
- **`cmd/server/store_tophops_test.go`**: 3 unit tests — basic dedupe, reverse-pair merge, nil SNR edge case

## Test Coverage

- `TestDedupeTopHopsByPair`: 5 records on pair (A,B) + 1 on (C,D) → 2 results, correct obsCount/dist/bestSnr/medianSnr
- `TestDedupeTopHopsReversePairMerges`: (B,A) and (A,B) merge into one entry
- `TestDedupeTopHopsNilSNR`: all-nil SNR records → bestSnr and medianSnr both nil
- Existing `TestAnalyticsRFEndpoint` and `TestAnalyticsRFWithRegion` still pass

Closes #847

---------

Co-authored-by: you <you@example.com>
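A possible shape for the unordered pair key and the per-pair SNR aggregation (illustrative only; the field names do not necessarily match the real `DistanceHop` struct):

```go
package main

import (
	"fmt"
	"sort"
)

type hopRecord struct {
	From, To   string
	DistanceMi float64
	SNR        *float64 // nil when the observation carried no SNR
}

// pairKey builds an order-independent key so (A,B) and (B,A) share one bucket.
func pairKey(a, b string) string {
	if b < a {
		a, b = b, a
	}
	return a + "|" + b
}

// median returns the median of the values, or nil if there are none.
func median(vals []float64) *float64 {
	if len(vals) == 0 {
		return nil
	}
	sort.Float64s(vals)
	mid := len(vals) / 2
	m := vals[mid]
	if len(vals)%2 == 0 {
		m = (vals[mid-1] + vals[mid]) / 2
	}
	return &m
}

func main() {
	s1, s2 := 5.0, 8.0
	recs := []hopRecord{{"A", "B", 200, &s1}, {"B", "A", 199, &s2}}
	snrs := map[string][]float64{}
	for _, r := range recs {
		if r.SNR != nil {
			snrs[pairKey(r.From, r.To)] = append(snrs[pairKey(r.From, r.To)], *r.SNR)
		}
	}
	fmt.Println(*median(snrs[pairKey("A", "B")])) // 6.5
}
```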
7f024b7aa7 | fix(#673): replace raw JSON text search with byNode index for node packet queries (#803)

## Summary

Fixes #673

- GRP_TXT packets whose message text contains a node's pubkey were incorrectly counted as packets for that node, inflating packet counts and type breakdowns
- Two code paths in `store.go` used `strings.Contains` on the full `DecodedJSON` blob — this matched pubkeys appearing anywhere in the JSON, including inside chat message text
- `filterPackets` slow path (combined node + other filters): replaced substring search with a hash-set membership check against `byNode[nodePK]`
- `GetNodeAnalytics`: removed the full-packet-scan + text search branch entirely; always uses the `byNode` index (which already covers `pubKey`/`destPubKey`/`srcPubKey` via structured field indexing)

## Test Plan

- [x] `TestGetNodeAnalytics_ExcludesGRPTXTWithPubkeyInText` — verifies a GRP_TXT packet with the node's pubkey in its text field is not counted in that node's analytics
- [x] `TestFilterPackets_NodeQueryDoesNotMatchChatText` — verifies the combined-filter slow path of `filterPackets` returns only the indexed ADVERT, not the chat packet

Both tests were written as failing tests against the buggy code and pass after the fix.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
d7fe24e2db | Fix channel filter on Packets page (UI + API) — #812 (#816)

Closes #812

## Root causes

**Server (`/api/packets?channel=…` returned identical totals):** The handler in `cmd/server/routes.go` never read the `channel` query parameter into `PacketQuery`, so it was silently ignored by both the SQLite path (`db.go::buildTransmissionWhere`) and the in-memory path (`store.go::filterPackets`). The codebase already had everything else in place — the `channel_hash` column with an index from #762, decoded `channel` / `channelHashHex` fields on each packet — it just wasn't wired up.

**UI (`/#/packets` had no channel filter):** `public/packets.js` rendered observer / type / time-window / region filters but no channel control, and didn't read `?channel=` from the URL.

## Fix

### Server

- New `Channel` field on `PacketQuery`; `handlePackets` reads `r.URL.Query().Get("channel")`.
- DB path filters by the indexed `channel_hash` column (exact match).
- In-memory path: helper `packetMatchesChannel` matches `decoded.channel` (plaintext, e.g. `#test`, `public`) or `enc_<HEX>` against `channelHashHex` for undecryptable GRP_TXT. Uses cached `ParsedDecoded()` so it's O(1) after first parse. Fast-path index guards and the grouped-cache key updated to include channel.
- Regression test (`channel_filter_test.go`): `channel=#test` returns ≥1 GRP_TXT packet and fewer than baseline; `channel=nonexistentchannel` returns `total=0`.

### UI

- New `<select id="fChannel">` populated from `/api/channels`.
- Round-trips via `?channel=…` on the URL hash (read on init, written on change).
- Pre-seeds the current value as an option so encrypted hashes not in `/api/channels` still display as selected on reload.
- On change, calls `loadPackets()` so the server-side filter applies before pagination.

## Perf

Filter adds at most one cached map lookup per packet (DB path uses indexed column, store path uses `ParsedDecoded()` cache). Staging baseline 149–190 ms for `?channel=#test&limit=50`; the new comparison is negligible. Target ≤ 500 ms preserved.

## Tests

`cd cmd/server && go test ./... -count=1 -timeout 120s` → PASS.

---------

Co-authored-by: you <you@example.com>
9e90548637 | perf(#800): remove per-StoreTx ResolvedPath, replace with membership index + on-demand decode (#806)

## Summary

Remove `ResolvedPath []*string` field from `StoreTx` and `StoreObs` structs, replacing it with a compact membership index + on-demand SQL decode. This eliminates the dominant heap cost identified in profiling (#791, #799).

**Spec:** #800 (consolidated from two rounds of expert + implementer review on #799)

Closes #800
Closes #791

## Design

### Removed

- `StoreTx.ResolvedPath []*string`
- `StoreObs.ResolvedPath []*string`
- `TransmissionResp.ResolvedPath`, `ObservationResp.ResolvedPath` struct fields

### Added

| Structure | Purpose | Est. cost at 1M obs |
|---|---|---:|
| `resolvedPubkeyIndex map[uint64][]int` | FNV-1a(pubkey) → []txID forward index | 50–120 MB |
| `resolvedPubkeyReverse map[int][]uint64` | txID → []hashes for clean removal | ~40 MB |
| `apiResolvedPathLRU` (10K entries) | FIFO cache for on-demand API decode | ~2 MB |

### Decode-window discipline

`resolved_path` JSON decoded once per packet. Consumers fed in order, temp slice dropped — never stored on struct:

1. `addToByNode` — relay node indexing
2. `touchRelayLastSeen` — relay liveness DB updates
3. `byPathHop` resolved-key entries
4. `resolvedPubkeyIndex` + reverse insert
5. WebSocket broadcast map (raw JSON bytes)
6. Persist batch (raw JSON bytes for SQL UPDATE)

### Collision safety

When the forward index returns candidates, a batched SQL query confirms exact pubkey presence using `LIKE '%"pubkey"%'` on the `resolved_path` column.

### Feature flag

`useResolvedPathIndex` (default `true`). Off-path is conservative: all candidates kept, index not consulted. For one-release rollback safety.

## Files changed

| File | Changes |
|---|---|
| `resolved_index.go` | **New** — index structures, LRU cache, on-demand SQL helpers, collision safety |
| `store.go` | Remove RP fields, decode-window discipline in Load/Ingest, on-demand txToMap/obsToMap/enrichObs, eviction cleanup via SQL, memory accounting update |
| `types.go` | Remove RP fields from TransmissionResp/ObservationResp |
| `routes.go` | Replace `nodeInResolvedPath` with `nodeInResolvedPathViaIndex`, remove RP from mapSlice helpers |
| `neighbor_persist.go` | Refactor backfill: reverse-map removal → forward+reverse insert → LRU invalidation |

## Tests added (27 new)

**Unit:**

- `TestStoreTx_ResolvedPathFieldAbsent` — reflection guard
- `TestResolvedPubkeyIndex_BuildFromLoad` — forward+reverse consistency
- `TestResolvedPubkeyIndex_HashCollision` — SQL collision safety
- `TestResolvedPubkeyIndex_IngestUpdate` — maps reflect new ingests
- `TestResolvedPubkeyIndex_RemoveOnEvict` — clean removal via reverse map
- `TestResolvedPubkeyIndex_PerObsCoverage` — non-best obs pubkeys indexed
- `TestAddToByNode_WithoutResolvedPathField`
- `TestTouchRelayLastSeen_WithoutResolvedPathField`
- `TestWebSocketBroadcast_IncludesResolvedPath`
- `TestBackfill_InvalidatesLRU`
- `TestEviction_ByNodeCleanup_OnDemandSQL`
- `TestExtractResolvedPubkeys`, `TestMergeResolvedPubkeys`
- `TestResolvedPubkeyHash_Deterministic`
- `TestLRU_EvictionOnFull`

**Endpoint:**

- `TestPathsThroughNode_NilResolvedPathFallback`
- `TestPacketsAPI_OnDemandResolvedPath`
- `TestPacketsAPI_OnDemandResolvedPath_LRUHit`
- `TestPacketsAPI_OnDemandResolvedPath_Empty`

**Feature flag:**

- `TestFeatureFlag_OffPath_PreservesOldBehavior`
- `TestFeatureFlag_Toggle_NoStateLeak`

**Concurrency:**

- `TestReverseMap_NoLeakOnPartialFailure`
- `TestDecodeWindow_LockHoldTimeBounded`
- `TestLivePolling_LRUUnderConcurrentIngest`

**Regression:**

- `TestRepeaterLiveness_StillAccurate`

**Benchmarks:**

- `BenchmarkLoad_BeforeAfter`
- `BenchmarkResolvedPubkeyIndex_Memory`
- `BenchmarkPathsThroughNode_Latency`
- `BenchmarkLivePolling_UnderIngest`

## Benchmark results

```
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=50K   429ms   103MB  777K allocs
BenchmarkResolvedPubkeyIndex_Memory/pubkeys=500K  4205ms  896MB  7.67M allocs
BenchmarkLoad_BeforeAfter                         65ms    20MB   202K allocs
BenchmarkPathsThroughNode_Latency                 3.9µs   0B     0 allocs
BenchmarkLivePolling_UnderIngest                  5.4µs   545B   7 allocs
```

Key: per-obs `[]*string` overhead completely eliminated. At 1M obs with 3 hops average, this saves ~72 bytes/obs × 1M = ~68 MB just from the slice headers + pointers, plus the JSON-decoded string data (~900 MB at scale per profiling).

## Design choices

- **FNV-1a instead of xxhash**: stdlib availability, no external dependency. Performance is equivalent for this use case (pubkey strings are short).
- **FIFO LRU instead of true LRU**: simpler implementation, adequate for the access pattern (mostly sequential obs IDs from live polling).
- **Grouped packets view omits resolved_path**: cold path, not worth a SQL round-trip per page render.
- **Backfill pending check uses reverse-map presence** instead of a per-obs field: if a tx has any indexed pubkeys, its observations are considered resolved.

Closes #807

---------

Co-authored-by: you <you@example.com>
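The forward/reverse membership index reduces to hashing each resolved pubkey and mapping the hash to transmission IDs, with the reverse map used for clean removal on eviction. A condensed sketch with simplified types (the real index also does the SQL collision confirmation, which is omitted here):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

type resolvedIndex struct {
	forward map[uint64][]int // FNV-1a(pubkey) -> txIDs that contain it
	reverse map[int][]uint64 // txID -> hashes, for clean removal on evict
}

func newResolvedIndex() *resolvedIndex {
	return &resolvedIndex{forward: map[uint64][]int{}, reverse: map[int][]uint64{}}
}

func hashPubkey(pk string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(pk))
	return h.Sum64()
}

// add indexes every resolved pubkey of one transmission.
func (ri *resolvedIndex) add(txID int, resolvedPath []string) {
	for _, pk := range resolvedPath {
		h := hashPubkey(pk)
		ri.forward[h] = append(ri.forward[h], txID)
		ri.reverse[txID] = append(ri.reverse[txID], h)
	}
}

// candidates returns txIDs that may contain pk; callers must confirm the hit
// (e.g. by re-reading resolved_path) because distinct pubkeys can collide.
func (ri *resolvedIndex) candidates(pk string) []int {
	return ri.forward[hashPubkey(pk)]
}

func main() {
	ri := newResolvedIndex()
	ri.add(1, []string{"c0dedad4", "0735bc6d"})
	ri.add(2, []string{"c0ffeec7"})
	fmt.Println(ri.candidates("c0dedad4")) // [1]
}
```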
a8e1cea683 | fix: use payload type bits only in content hash (not full header byte) (#787)

## Problem

The firmware computes packet content hash as:

```
SHA256(payload_type_byte + [path_len for TRACE] + payload)
```

Where `payload_type_byte = (header >> 2) & 0x0F` — just the payload type bits (2-5). CoreScope was using the **full header byte** in its hash computation, which includes route type bits (0-1) and version bits (6-7). This meant the same logical packet produced different content hashes depending on route type — breaking dedup and packet lookup.

**Firmware reference:** `Packet.cpp::calculatePacketHash()` uses `getPayloadType()` which returns `(header >> PH_TYPE_SHIFT) & PH_TYPE_MASK`.

## Fix

- Extract only payload type bits: `payloadType := (headerByte >> 2) & 0x0F`
- Include `path_len` byte in hash for TRACE packets (matching firmware behavior)
- Applied to both `cmd/server/decoder.go` and `cmd/ingestor/decoder.go`

## Tests Added

- **Route type independence:** Same payload with FLOOD vs DIRECT route types produces identical hash
- **TRACE path_len inclusion:** TRACE packets with different `path_len` produce different hashes
- **Firmware compatibility:** Hash output matches manual computation of firmware algorithm

## Migration Impact

Existing packets in the DB have content hashes computed with the old (incorrect) formula. Options:

1. **Recompute hashes** via migration (recommended for clean state)
2. **Dual lookup** — check both old and new hash on queries (backward compat)
3. **Accept the break** — old hashes become stale, new packets get correct hashes

Recommend option 1 (migration) as a follow-up. The volume of affected packets depends on how many distinct route types were seen for the same logical packet.

Fixes #786

---------

Co-authored-by: you <you@example.com>
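Following the formula quoted above, a content-hash helper might be sketched as below. The bit extraction and the TRACE path_len inclusion are taken from the commit text; the function signature, the `isTrace` flag, and the example header values are illustrative assumptions:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// contentHash follows SHA256(payload_type_byte + [path_len for TRACE] + payload),
// using only header bits 2-5 so the hash is independent of route type and version.
func contentHash(header byte, isTrace bool, pathLen byte, payload []byte) [32]byte {
	payloadType := (header >> 2) & 0x0F
	buf := []byte{payloadType}
	if isTrace {
		buf = append(buf, pathLen) // TRACE additionally hashes the path_len byte
	}
	buf = append(buf, payload...)
	return sha256.Sum256(buf)
}

func main() {
	payload := []byte{0xde, 0xad, 0xbe, 0xef}
	// Same payload type bits, different route-type bits (0-1): hashes now match.
	a := contentHash(0x05, false, 0, payload) // route bits 01, payload type 1
	b := contentHash(0x06, false, 0, payload) // route bits 10, payload type 1
	fmt.Println(a == b) // true
}
```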
d596becca3 | feat: bounded cold load — limit Load() by memory budget (#790)

## Implements #748 M1 — Bounded Cold Load

### Problem

`Load()` pulls the ENTIRE database into RAM before eviction runs. On a 1GB database, this means 3+ GB peak memory at startup, regardless of `maxMemoryMB`. This is the root cause of #743 (OOM on 2GB VMs).

### Solution

Calculate the maximum number of transmissions that fit within the `maxMemoryMB` budget and use a SQL subquery LIMIT to load only the newest packets.

**Two-phase approach** (avoids the JOIN-LIMIT row count problem):

```sql
SELECT ... FROM transmissions t
LEFT JOIN observations o ON ...
WHERE t.id IN (SELECT id FROM transmissions ORDER BY first_seen DESC LIMIT ?)
ORDER BY t.first_seen ASC, o.timestamp DESC
```

### Changes

- **`estimateStoreTxBytesTypical(numObs)`** — estimates memory cost of a typical transmission without needing an actual `StoreTx` instance. Used for budget calculation.
- **Budget calculation in `Load()`** — `maxPackets = (maxMemoryMB * 1048576) / avgBytesPerPacket` with a floor of 1000 packets.
- **Subquery LIMIT** — loads only the newest N transmissions when bounded.
- **`oldestLoaded` tracking** — records the oldest packet timestamp in memory so future SQL fallback queries (M2+) know where in-memory data ends.
- **Perf stats** — `oldestLoaded` exposed in `/api/perf/store-stats`.
- **Logging** — bounded loads show `Loaded X/Y transmissions (limited by ZMB budget)`.

### When `maxMemoryMB=0` (unlimited)

Behavior is completely unchanged — no LIMIT clause, all packets loaded.

### Tests (6 new)

| Test | Validates |
|------|-----------|
| `TestBoundedLoad_LimitedMemory` | With 1MB budget, loads fewer than total (hits 1000 minimum) |
| `TestBoundedLoad_NewestFirst` | Loaded packets are the newest, not oldest |
| `TestBoundedLoad_OldestLoadedSet` | `oldestLoaded` matches first packet's `FirstSeen` |
| `TestBoundedLoad_UnlimitedWithZero` | `maxMemoryMB=0` loads all packets |
| `TestBoundedLoad_AscendingOrder` | Packets remain in ascending `first_seen` order after bounded load |
| `TestEstimateStoreTxBytesTypical` | Estimate grows with observation count, exceeds floor |

Plus benchmarks: `BenchmarkLoad_Bounded` vs `BenchmarkLoad_Unlimited`.

### Perf justification

On a 5000-transmission test DB with 1MB budget:

- Bounded: loads 1000 packets (the minimum) in ~1.3s
- The subquery uses SQLite's index on `first_seen` — O(N log N) for the LIMIT, then indexed JOIN for observations
- No full table scan needed when bounded

### Next milestones

- **M2**: Packet list/search SQL fallback (uses `oldestLoaded` boundary)
- **M3**: Node analytics SQL fallback
- **M4-M5**: Remaining endpoint fallbacks + live-only memory store

---------

Co-authored-by: you <you@example.com>
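The budget arithmetic described above is simple enough to show inline; a sketch assuming a hypothetical `avgBytesPerPacket` estimate (the 1000-packet floor and the MiB conversion come from the commit text):

```go
package main

import "fmt"

// maxPacketsForBudget converts a maxMemoryMB budget into a LIMIT for the
// bounded cold load; 0 means unlimited, and bounded loads never go below 1000.
func maxPacketsForBudget(maxMemoryMB, avgBytesPerPacket int) int {
	if maxMemoryMB == 0 {
		return 0 // unlimited: caller omits the LIMIT clause entirely
	}
	n := (maxMemoryMB * 1048576) / avgBytesPerPacket
	if n < 1000 {
		n = 1000
	}
	return n
}

func main() {
	fmt.Println(maxPacketsForBudget(512, 4096)) // 131072
	fmt.Println(maxPacketsForBudget(1, 4096))   // 1000 (floor)
	fmt.Println(maxPacketsForBudget(0, 4096))   // 0 (unlimited)
}
```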
6a648dea11 | fix: multi-byte adopters — all node types, role column, advert precedence (#754) (#767)

## Fix: Multi-Byte Adopters Table — Three Bugs (#754)

### Bug 1: Companions in "Unknown"

`computeMultiByteCapability()` was repeater-only. Extended to classify **all node types** (companions, rooms, sensors). A companion advertising with 2-byte hash is now correctly "Confirmed".

### Bug 2: No Role Column

Added a **Role** column to the merged Multi-Byte Hash Adopters table, color-coded using `ROLE_COLORS` from `roles.js`. Users can now distinguish repeaters from companions without clicking through to node detail.

### Bug 3: Data Source Disagreement

When adopter data (from `computeAnalyticsHashSizes`) shows `hashSize >= 2` but capability only found path evidence ("Suspected"), the advert-based adopter data now takes precedence → "Confirmed". The adopter hash sizes are passed into `computeMultiByteCapability()` as an additional confirmed evidence source.

### Changes

- `cmd/server/store.go`: Extended capability to all node types, accept adopter hash sizes, prioritize advert evidence
- `public/analytics.js`: Added Role column with color-coded badges
- `cmd/server/multibyte_capability_test.go`: 3 new tests (companion confirmed, role populated, adopter precedence)

### Tests

- All 10 multi-byte capability tests pass
- All 544 frontend helper tests pass
- All 62 packet filter tests pass
- All 29 aging tests pass

---------

Co-authored-by: you <you@example.com>
401fd070f8 | fix: improve trackedBytes accuracy for memory estimation (#751)

## Problem

Fixes #743 — High memory usage / OOM with relatively small dataset.

`trackedBytes` severely undercounted actual per-packet memory because it only tracked base struct sizes and string field lengths, missing major allocations:

| Structure | Untracked Cost | Scale Impact |
|-----------|---------------|--------------|
| `spTxIndex` (O(path²) subpath entries) | 40 bytes × path combos | 50-150MB |
| `ResolvedPath` on observations | 24 bytes × elements | ~25MB |
| Per-tx maps (`obsKeys`, `observerSet`) | 200 bytes/tx flat | ~11MB |
| `byPathHop` index entries | 50 bytes/hop | 20-40MB |

This caused eviction to trigger too late (or not at all), leading to OOM.

## Fix

Expanded `estimateStoreTxBytes` and `estimateStoreObsBytes` to account for:

- **Per-tx maps**: +200 bytes flat for `obsKeys` + `observerSet` map headers
- **Path hop index**: +50 bytes per hop in `byPathHop`
- **Subpath index**: +40 bytes × `hops*(hops-1)/2` combinations for `spTxIndex`
- **Resolved paths**: +24 bytes per `ResolvedPath` element on observations

Updated the existing `TestEstimateStoreTxBytes` to match the new formula. All existing eviction tests continue to pass — the eviction logic itself is unchanged.

Also exposed `avgBytesPerPacket` in the perf API (`/api/perf`) so operators can monitor per-packet memory costs.

## Performance

Benchmark confirms negligible overhead (called on every insert):

```
BenchmarkEstimateStoreTxBytes   159M ops  7.5 ns/op  0 B/op  0 allocs
BenchmarkEstimateStoreObsBytes  1B ops    1.0 ns/op  0 B/op  0 allocs
```

## Tests

- 6 new tests in `tracked_bytes_test.go`:
  - Reasonable value ranges for different packet sizes
  - 10-hop packets estimate significantly more than 2-hop (subpath cost)
  - Observations with `ResolvedPath` estimate more than without
  - 15 observations estimate >10x a single observation
  - `trackedBytes` matches sum of individual estimates after batch insert
  - Eviction triggers correctly with improved estimates
- 2 benchmarks confirming sub-10ns estimate cost
- Updated existing `TestEstimateStoreTxBytes` for new formula
- Full test suite passes

---------

Co-authored-by: you <you@example.com>
a815e70975 | feat: Clock skew detection — backend computation (M1) (#746)

## Summary

Implements **Milestone 1** of #690 — backend clock skew computation for nodes and observers.

## What's New

### Clock Skew Engine (`clock_skew.go`)

**Phase 1 — Raw Skew Calculation:** For every ADVERT observation: `raw_skew = advert_timestamp - observation_timestamp`

**Phase 2 — Observer Calibration:** Same packet seen by multiple observers → compute each observer's clock offset as the median deviation from the per-packet median observation timestamp. This identifies observers with their own clock drift.

**Phase 3 — Corrected Node Skew:** `corrected_skew = raw_skew + observer_offset` — compensates for observer clock error.

**Phase 4 — Trend Analysis:** Linear regression over time-ordered skew samples estimates drift rate in seconds/day. Detects crystal drift vs stable offset vs sudden jumps.

### Severity Classification

| Level | Threshold | Meaning |
|-------|-----------|---------|
| ✅ OK | < 5 min | Normal |
| ⚠️ Warning | 5 min – 1 hour | Clock drifting |
| 🔴 Critical | 1 hour – 30 days | Likely no time source |
| 🟣 Absurd | > 30 days | Firmware default or epoch 0 |

### New API Endpoints

- `GET /api/nodes/{pubkey}/clock-skew` — per-node skew data (mean, median, last, drift, severity)
- `GET /api/observers/clock-skew` — observer calibration offsets
- Clock skew also included in `GET /api/nodes/{pubkey}/analytics` response as `clockSkew` field

### Performance

- 30-second compute cache avoids reprocessing on every request
- Operates on in-memory `byPayloadType[ADVERT]` index — no DB queries
- O(n) in total ADVERT observations, O(m log m) for median calculations

## Tests

15 unit tests covering:

- Severity classification at all thresholds
- Median/mean math helpers
- ISO timestamp parsing
- Timestamp extraction from decoded JSON (nested and top-level)
- Observer calibration with single and multi-observer scenarios
- Observer offset correction direction (verified the sign is `+obsOffset`)
- Drift estimation: stable, linear, insufficient data, short time span
- JSON number extraction edge cases

## What's NOT in This PR

- No UI changes (M2–M4)
- No customizer integration (M5)
- Thresholds are hardcoded constants (will be configurable in M5)

Implements #690 M1.

---------

Co-authored-by: you <you@example.com>
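The severity thresholds in the table above map directly onto a small classifier over the absolute skew. A sketch (the level names and cutoffs come from the table; the function itself is illustrative, not the actual `clock_skew.go` code):

```go
package main

import (
	"fmt"
	"time"
)

// classifySkew maps an absolute clock skew onto the severity levels described
// in the commit: OK (<5m), Warning (5m-1h), Critical (1h-30d), Absurd (>30d).
func classifySkew(skew time.Duration) string {
	if skew < 0 {
		skew = -skew
	}
	switch {
	case skew < 5*time.Minute:
		return "ok"
	case skew < time.Hour:
		return "warning"
	case skew < 30*24*time.Hour:
		return "critical"
	default:
		return "absurd" // firmware default clock or epoch 0
	}
}

func main() {
	fmt.Println(classifySkew(30 * time.Second))     // ok
	fmt.Println(classifySkew(20 * time.Minute))     // warning
	fmt.Println(classifySkew(48 * time.Hour))       // critical
	fmt.Println(classifySkew(400 * 24 * time.Hour)) // absurd
}
```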
aa84ce1e6a | fix: correct hash_size detection for transport routes and zero-hop adverts (#747)

## Summary

Fixes #744
Fixes #722

Three bugs in hash_size computation caused zero-hop adverts to incorrectly report `hash_size=1`, masking nodes that actually use multi-byte hashes.

## Bugs Fixed

### 1. Wrong path byte offset for transport routes (`computeNodeHashSizeInfo`)

Transport routes (types 0 and 3) have 4 transport code bytes before the path byte. The code read the path byte from offset 1 (byte index `RawHex[2:4]`) for all route types. For transport routes, the correct offset is 5 (`RawHex[10:12]`).

### 2. Missing RouteTransportDirect skip (`computeNodeHashSizeInfo`)

Zero-hop adverts from `RouteDirect` (type 2) were correctly skipped, but `RouteTransportDirect` (type 3) zero-hop adverts were not. Both have locally-generated path bytes with unreliable hash_size bits.

### 3. Zero-hop adverts not skipped in analytics (`computeAnalyticsHashSizes`)

`computeAnalyticsHashSizes()` unconditionally overwrote a node's `hashSize` with whatever the latest advert reported. A zero-hop direct advert with `hash_size=1` could overwrite a previously-correct `hash_size=2` from a multi-hop flood advert. Fix: skip the hash_size update for zero-hop direct/transport-direct adverts while still counting the packet and updating `lastSeen`.

## Tests Added

- `TestHashSizeTransportRoutePathByteOffset` — verifies transport routes read the path byte at offset 5, regular flood reads at offset 1
- `TestHashSizeTransportDirectZeroHopSkipped` — verifies both RouteDirect and RouteTransportDirect zero-hop adverts are skipped
- `TestAnalyticsHashSizesZeroHopSkip` — verifies analytics hash_size is not overwritten by zero-hop adverts
- Fixed 3 existing tests (`FlipFlop`, `Dominant`, `LatestWins`) that used route_type 0 (TransportFlood) header bytes without proper transport code padding

## Complexity

All changes are O(1) per packet — no new loops or data structures. The additional offset computation and zero-hop check are constant-time operations within the existing packet scan loop.

Co-authored-by: you <you@example.com>
84f03f4f41 | fix: hide undecryptable channel messages by default (#727) (#728)

## Problem

Channels page shows 53K 'Unknown' messages — undecryptable GRP_TXT packets with no content. Pure noise.

## Fix

- Backend: channels API filters out undecrypted messages by default
- `?includeEncrypted=true` param to include them
- Frontend: 'Show encrypted' toggle in channels sidebar
- Unknown channels grayed out with '(no key)' label
- Toggle persists in localStorage

Fixes #727

---------

Co-authored-by: you <you@example.com>
65482ff6f6 | fix: cache invalidation tuning — 7% → 50-80% hit rate (#721)

## Cache Invalidation Tuning — 7% → 50-80% Hit Rate

Fixes #720

### Problem

Server-side cache hit rate was 7% (48 hits / 631 misses over 4.7 days). Root causes from the [cache audit report](https://github.com/Kpa-clawbot/CoreScope/issues/720):

1. **`invalidationDebounce` config value (30s) was dead code** — never wired to `invCooldown`
2. **`invCooldown` hardcoded to 10s** — with continuous ingest, caches cleared every 10s regardless of their 1800s TTLs
3. **`collisionCache` cleared on every `hasNewTransmissions`** — hash collisions are structural (depend on node count), not per-packet

### Changes

| Change | File | Impact |
|--------|------|--------|
| Wire `invalidationDebounce` from config → `invCooldown` | `store.go` | Config actually works now |
| Default `invCooldown` 10s → 300s (5 min) | `store.go` | 30x longer cache survival |
| Add `hasNewNodes` flag to `cacheInvalidation` | `store.go` | Finer-grained invalidation |
| `collisionCache` only clears on `hasNewNodes` | `store.go` | O(n²) collision computation survives its 1hr TTL |
| `addToByNode` returns new-node indicator | `store.go` | Zero-cost detection during indexing |
| `indexByNode` returns new-node indicator | `store.go` | Propagates to ingest path |
| Ingest tracks and passes `hasNewNodes` | `store.go` | End-to-end wiring |

### Tests Added

| Test | What it verifies |
|------|-----------------|
| `TestInvCooldownFromConfig` | Config value wired to `invCooldown`; default is 300s |
| `TestCollisionCacheNotClearedByTransmissions` | `hasNewTransmissions` alone does NOT clear `collisionCache` |
| `TestCollisionCacheClearedByNewNodes` | `hasNewNodes` DOES clear `collisionCache` |
| `TestCacheSurvivesMultipleIngestCyclesWithinCooldown` | 5 rapid ingest cycles don't clear any caches during cooldown |
| `TestNewNodesAccumulatedDuringCooldown` | `hasNewNodes` accumulated in `pendingInv` and applied after cooldown |
| `BenchmarkAnalyticsLatencyCacheHitVsMiss` | 100% hit rate with rate-limited invalidation |

All 200+ existing tests pass. Both benchmarks show 100% hit rate.

### Performance Justification

- **Before:** Effective cache lifetime = `min(TTL, invCooldown)` = 10s. With analytics viewed ~once/few minutes, P(hit) ≈ 7%
- **After:** Effective cache lifetime = `min(TTL, 300s)` = 300s for most caches, 3600s for `collisionCache`. Expected hit rate 50-80%
- **Complexity:** All changes are O(1) — `addToByNode` already checked `nodeHashes[pubkey] == nil`, we just return the result
- **Benchmark proof:** `BenchmarkAnalyticsLatencyCacheHitVsMiss` → 100% hit rate, 269ns/op

Co-authored-by: you <you@example.com>
f95aa49804 | fix: exclude TRACE packets from multi-byte capability suspected detection (#715)

## Summary

Exclude TRACE packets (payload_type 8) from the "suspected" multi-byte capability inference logic. TRACE packets carry hash size in their own flags — forwarding repeaters read it from the TRACE header, not their compile-time `PATH_HASH_SIZE`. Pre-1.14 repeaters can forward multi-byte TRACEs without actually supporting multi-byte hashes, creating false positives.

Fixes #714

## Changes

### `cmd/server/store.go`

- In `computeMultiByteCapability()`, skip packets with `payload_type == 8` (TRACE) when scanning `byPathHop` for suspected multi-byte nodes
- "Confirmed" detection (from adverts) is unaffected

### `cmd/server/multibyte_capability_test.go`

- `TestMultiByteCapability_TraceExcluded`: TRACE packet with 2-byte path does NOT mark repeater as suspected
- `TestMultiByteCapability_NonTraceStillSuspected`: Non-TRACE packet with 2-byte path still marks as suspected
- `TestMultiByteCapability_ConfirmedUnaffectedByTraceExclusion`: Confirmed status from advert unaffected by TRACE exclusion

## Testing

All 7 multi-byte capability tests pass. Full `cmd/server` and `cmd/ingestor` test suites pass.

Co-authored-by: you <you@example.com>
4a7e20a8cb | fix: redesign memory eviction — self-accounting trackedBytes, watermarks, safety cap (#711)

## Problem

`HeapAlloc`-based eviction cascades on large databases — evicts down to near-zero packets because Go runtime overhead exceeds `maxMemoryMB` even with an empty packet store.

## Fix (per Carmack spec on #710)

1. **Self-accounting `trackedBytes`** — running counter maintained on insert/evict, computed from actual struct sizes. No `runtime.ReadMemStats`.
2. **High/low watermark hysteresis** (100%/85%) — evict to 85% of budget, don't re-trigger until 100% crossed again.
3. **25% per-pass safety cap** — never evict more than a quarter of packets in one cycle.
4. **Oldest-first** — evict from sorted head, O(1) candidate selection.

`maxMemoryMB` now means packet store budget, not total process heap.

Fixes #710

Co-authored-by: you <you@example.com>
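The watermark scheme above can be sketched as a small loop around a tracked-bytes counter; the 100%/85% watermarks and the 25% per-pass cap come from the commit, while the store shape is illustrative:

```go
package main

import "fmt"

type boundedStore struct {
	budgetBytes  int64
	trackedBytes int64
	packets      []int64 // per-packet byte estimates, oldest first
}

// maybeEvict runs only once the high watermark (100% of budget) is crossed,
// evicts oldest-first down toward the low watermark (85%), and never removes
// more than 25% of packets in a single pass.
func (s *boundedStore) maybeEvict() int {
	if s.trackedBytes < s.budgetBytes {
		return 0 // below the high watermark: nothing to do
	}
	low := s.budgetBytes * 85 / 100
	maxEvict := len(s.packets) / 4
	evicted := 0
	for s.trackedBytes > low && evicted < maxEvict {
		s.trackedBytes -= s.packets[0]
		s.packets = s.packets[1:]
		evicted++
	}
	return evicted
}

func main() {
	s := &boundedStore{budgetBytes: 1000}
	for i := 0; i < 20; i++ {
		s.packets = append(s.packets, 60)
		s.trackedBytes += 60
	}
	fmt.Println(s.maybeEvict(), s.trackedBytes) // 5 900 (capped at 25% of 20 packets)
}
```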
e893a1b3c4 | fix: index relay hops in byNode for liveness tracking (#708)

## Problem

Nodes that only appear as relay hops in packet paths (via `resolved_path`) were never indexed in `byNode`, so `last_heard` was never computed for them. This made relay-only nodes show as dead/stale even when actively forwarding traffic.

Fixes #660

## Root Cause

`indexByNode()` only indexed pubkeys from decoded JSON fields (`pubKey`, `destPubKey`, `srcPubKey`). Relay nodes appearing in `resolved_path` were ignored entirely.

## Fix

`indexByNode()` now also iterates:

1. `ResolvedPath` entries from each observation
2. `tx.ResolvedPath` (best observation's resolved path, used for DB-loaded packets)

A per-call `indexed` set prevents double-indexing when the same pubkey appears in both decoded JSON and resolved path. Extracted `addToByNode()` helper to deduplicate the nodeHashes/byNode append logic.

## Scope

**Phase 1 only** — server-side in-memory indexing. No DB changes, no ingestor changes. This makes `last_heard` reflect relay activity with zero risk to persistence.

## Tests

5 new test cases in `TestIndexByNodeResolvedPath`:

- Resolved path pubkeys from observations get indexed
- Null entries in resolved path are skipped
- Relay-only nodes (no decoded JSON match) appear in `byNode`
- Dedup between decoded JSON and resolved path
- `tx.ResolvedPath` indexed when observations are empty

All existing tests pass unchanged.

## Complexity

O(observations × path_length) per packet — typically 1-3 observations × 1-3 hops. No hot-path regression.

---------

Co-authored-by: you <you@example.com>
ef8bce5002 | feat: repeater multi-byte capability inference table (#706)

## Summary

Adds a new "Repeater Multi-Byte Capability" section to the Hash Stats analytics tab that classifies each repeater's ability to handle multi-byte hash prefixes (firmware >= v1.14).

Fixes #689

## What Changed

### Backend (`cmd/server/store.go`)

- New `computeMultiByteCapability()` method that infers capability for each repeater using two evidence sources:
  - **Confirmed** (100% reliable): node has advertised with `hash_size >= 2`, leveraging existing `computeNodeHashSizeInfo()` data
  - **Suspected** (<100%): node's prefix appears as a hop in packets with multi-byte path headers, using the `byPathHop` index. Prefix collisions mean this isn't definitive.
  - **Unknown**: no multi-byte evidence — could be pre-1.14 or 1.14+ with default settings
- Extended `/api/analytics/hash-sizes` response with `multiByteCapability` array

### Frontend (`public/analytics.js`)

- New `renderMultiByteCapability()` function on the Hash Stats tab
- Color-coded table: green confirmed, yellow suspected, gray unknown
- Filter buttons to show all/confirmed/suspected/unknown
- Column sorting by name, role, status, evidence, max hash size, last seen
- Clickable rows link to node detail pages

### Tests (`cmd/server/multibyte_capability_test.go`)

- `TestMultiByteCapability_Confirmed`: advert with hash_size=2 → confirmed
- `TestMultiByteCapability_Suspected`: path appearance only → suspected
- `TestMultiByteCapability_Unknown`: 1-byte advert only → unknown
- `TestMultiByteCapability_PrefixCollision`: two nodes sharing prefix, one confirmed via advert, other correctly marked suspected (not confirmed)

## Performance

- `computeMultiByteCapability()` runs once per cache cycle (15s TTL via hash-sizes cache)
- Leverages existing `GetNodeHashSizeInfo()` cache (also 15s TTL) — no redundant advert scanning
- Path hop scan is O(repeaters × prefix lengths) lookups in the `byPathHop` map, with early break on first match per prefix
- Only computed for global (non-regional) requests to avoid unnecessary work

---------

Co-authored-by: you <you@example.com>
922ebe54e7 | BYOP Advert signature validation (#686)

For BYOP mode in the packet analyzer, perform signature validation on advert packets and display whether it succeeded. This is added because we observed many corrupted advert packets that would be easily detectable as such if signature validation checks were performed.

At present this MR only adds this status in BYOP mode, so there is minimal impact to the application and no performance penalty from performing these checks on all packets. Moving forward it probably makes sense to do these checks on all advert packets so that corrupt packets can be ignored in several contexts (node lists, for example). Let me know what you think and I can adjust as needed.

---------

Co-authored-by: you <you@example.com>
2e1a4a2e0d | fix: handle companion nodes without adverts in My Mesh health cards (#696)

## Summary

Fixes #665 — companion nodes claimed in "My Mesh" showed "Could not load data" because they never sent an advert, so they had no `nodes` table entry, causing the health API to return 404.

## Three-Layer Fix

### 1. API Resilience (`cmd/server/store.go`)

`GetNodeHealth()` now falls back to building a partial response from the in-memory packet store when `GetNodeByPubkey()` returns nil. Returns a synthetic node stub (`role: "unknown"`, `name: "Unknown"`) with whatever stats exist from packets, instead of returning nil → 404.

### 2. Ingestor Cleanup (`cmd/ingestor/main.go`)

Removed phantom sender node creation that used `"sender-" + name` as the pubkey. Channel messages don't carry the sender's real pubkey, so these synthetic entries were unreachable from the claiming/health flow — they just polluted the nodes table with unmatchable keys.

### 3. Frontend UX (`public/home.js`)

The catch block in `loadMyNodes()` now distinguishes 404 (node not in DB yet) from other errors:

- **404**: Shows 📡 "Waiting for first advert — this node has been seen in channel messages but hasn't advertised yet"
- **Other errors**: Shows ❓ "Could not load data" (unchanged)

## Tests

- Added `TestNodeHealthPartialFromPackets` — verifies a node with packets but no DB entry returns 200 with synthetic node stub and stats
- Updated `TestHandleMessageChannelMessage` — verifies channel messages no longer create phantom sender nodes
- All existing tests pass (`cmd/server`, `cmd/ingestor`)

Co-authored-by: you <you@example.com>
|
|
fcad49594b |
fix: include path.hopsCompleted in TRACE WebSocket broadcasts (#695)
## Summary

Fixes #683 — TRACE packets on the live map were showing the full path instead of distinguishing completed vs remaining hops.

## Root Cause

Both WebSocket broadcast builders in `store.go` constructed the `decoded` map with only `header` and `payload` keys — `path` was never included. The frontend reads `decoded.path.hopsCompleted` to split trace routes into solid (completed) and dashed (remaining) segments, but that field was always `undefined`.

## Fix

For TRACE packets (payload type 9), call `DecodePacket()` on the raw hex during broadcast and include the resulting `Path` struct in `decoded["path"]`. This populates `hopsCompleted`, which the frontend already knows how to consume.

Both broadcast builders are patched:
- `IngestNewFromDB()` — new transmissions path (~line 1419)
- `IngestNewObservations()` — new observations path (~line 1680)

TRACE packets are infrequent, so the per-packet decode overhead is negligible.

## Testing
- Added `TestIngestTraceBroadcastIncludesPath` — verifies that TRACE broadcast maps include `decoded.path` with the correct `hopsCompleted` value
- All existing tests pass (`cmd/server` + `cmd/ingestor`)

Co-authored-by: you <you@example.com> |
||
|
|
22bf33700e |
Fix: filter path-hop candidates by resolved_path to prevent prefix collisions (#658)
## Problem
The "Paths Through This Node" API endpoint (`/api/nodes/{pubkey}/paths`)
returns unrelated packets when two nodes share a hex prefix. For
example, querying paths for "Kpa Roof Solar" (`c0dedad4...`) returns 316
packets that actually belong to "C0ffee SF" (`C0FFEEC7...`) because both
share the `c0` prefix in the `byPathHop` index.
Fixes #655
## Root Cause
`handleNodePaths()` in `routes.go` collects candidates from the
`byPathHop` index using 2-char and 4-char hex prefixes for speed, but
never verifies that the target node actually appears in each candidate's
resolved path. The broad index lookup is intentional, but the
**post-filter was missing**.
## Fix
Added `nodeInResolvedPath()` helper in `store.go` that checks whether a
transmission's `resolved_path` (from the neighbor affinity graph via
`resolveWithContext`) contains the target node's full pubkey. The
filter:
- **Includes** packets where `resolved_path` contains the target node's
full pubkey
- **Excludes** packets where `resolved_path` resolved to a different
node (prefix collision)
- **Excludes** packets where `resolved_path` is nil/empty (ambiguous —
avoids false positives)
The check examines both the best observation's resolved_path
(`tx.ResolvedPath`) and all individual observations, so packets are
included if *any* observation resolved the target.
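For illustration, a rough sketch of the post-filter's shape, using simplified stand-in types rather than the real `StoreTx`/observation structs; a nil or empty resolved path is treated as no evidence.

```go
package main

import "strings"

// Simplified stand-ins for the real transmission/observation types (sketch only).
type obsSketch struct{ ResolvedPath []string }
type txSketch struct {
	ResolvedPath []string // best observation's resolved path
	Observations []obsSketch
}

// nodeInResolvedPathSketch reports whether any resolved path on the
// transmission contains the target node's full pubkey.
func nodeInResolvedPathSketch(tx *txSketch, target string) bool {
	contains := func(path []string) bool {
		for _, hop := range path {
			if strings.EqualFold(hop, target) {
				return true
			}
		}
		return false
	}
	if contains(tx.ResolvedPath) {
		return true
	}
	for _, o := range tx.Observations {
		if contains(o.ResolvedPath) {
			return true
		}
	}
	return false
}
```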
## Tests
- `TestNodeInResolvedPath` — unit test for the helper with 5 cases
(match, different node, nil, all-nil elements, match in observation
only)
- `TestNodePathsPrefixCollisionFilter` — integration test: two nodes
sharing `aa` prefix, verifies the collision packet is excluded from one
and included for the other
- Updated test DB schema to include `resolved_path` column and seed data
with resolved pubkeys
- All existing tests pass (165 additions, 8 modifications)
## Performance
No impact on hot paths. The filter runs once per API call on the
already-collected candidate set (typically small). `nodeInResolvedPath`
is O(observations × hops) per candidate — negligible since observations
per transmission are typically 1–5.
---------
Co-authored-by: you <you@example.com>
|
||
|
|
088b4381c3 |
Fix: Hash Stats 'By Repeaters' includes non-repeater nodes (#654)
## Summary

The "By Repeaters" section on the Hash Stats analytics page was counting **all** node types (companions, room servers, sensors, etc.) instead of only repeaters. This made the "By Repeaters" distribution identical to "Multi-Byte Hash Adopters", defeating the purpose of the breakdown.

Fixes #652

## Root Cause

`computeAnalyticsHashSizes()` in `cmd/server/store.go` built its `byNode` map from advert packet data without cross-referencing node roles from the node store. Both `distributionByRepeaters` and `multiByteNodes` consumed this unfiltered map.

## Changes

### `cmd/server/store.go`
- Build a `nodeRoleByPK` lookup map from `getCachedNodesAndPM()` at the start of the function
- Store `role` in each `byNode` entry when processing advert packets
- **`distributionByRepeaters`**: filter to only count nodes whose role contains "repeater"
- **`multiByteNodes`**: include a `role` field in the output so the frontend can filter/group by node type

### `cmd/server/coverage_test.go`
- Add `TestHashSizesDistributionByRepeatersFiltersRole`: verifies that companion nodes are excluded from `distributionByRepeaters` but included in `multiByteNodes` with the correct role

### `cmd/server/routes_test.go`
- Fix `TestHashAnalyticsZeroHopAdvert`: invalidate the node cache after the DB insert so the role lookup works
- Fix `TestAnalyticsHashSizeSameNameDifferentPubkey`: insert node records as repeaters + invalidate the cache

## Testing

All `cmd/server` tests pass (68 insertions, 3 deletions across 3 files).

Co-authored-by: you <you@example.com> |
||
|
|
30e7e9ae3c |
docs: document lock ordering for cacheMu and channelsCacheMu (#624)
## Summary

Documents the lock ordering for all five mutexes in `PacketStore` (`store.go`) to prevent future deadlocks.

## What changed

Added a comment block above the `PacketStore` struct documenting:
- All 5 mutexes (`mu`, `cacheMu`, `channelsCacheMu`, `groupedCacheMu`, `regionObsMu`)
- What each mutex guards
- The required acquisition order (numbered 1–5)
- The nesting relationships that exist today (`cacheMu → channelsCacheMu` in `invalidateCachesFor` and `rebuildAnalyticsCaches`)
- Confirmation that no reverse ordering exists (no deadlock risk)

## Verification
- Grepped all lock acquisition sites to confirm no reverse nesting exists
- `go build ./...` passes — documentation-only change

Fixes #413

---------

Co-authored-by: you <you@example.com> |
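For readers who don't want to open `store.go`, a condensed sketch of what such a comment block looks like; the ordering follows the listing above and the per-mutex guard descriptions are approximations, not the verbatim comment.

```go
// PacketStore lock ordering (sketch; see store.go for the authoritative comment):
//
//	1. mu              guards core packet data and secondary indexes
//	2. cacheMu         guards analytics caches
//	3. channelsCacheMu guards the channels cache
//	4. groupedCacheMu  guards the grouped packet list cache
//	5. regionObsMu     guards the region->observer cache
//
// Acquire in ascending order only. The only nesting today is
// cacheMu -> channelsCacheMu (invalidateCachesFor, rebuildAnalyticsCaches);
// no code path takes these locks in the reverse order.
type PacketStore struct {
	// mutexes and fields elided in this sketch
}
```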
||
|
|
05fbcb09dd |
fix: wire cacheTTL.analyticsHashSizes config to collision cache (#420) (#622)
## Summary

Fixes #420 — wires `cacheTTL` config values to server-side cache durations that were previously hardcoded.

## Problem

`collisionCacheTTL` was hardcoded at 60s in `store.go`. The config has `cacheTTL.analyticsHashSizes: 3600` (1 hour) but it was never read — the `/api/config/cache` endpoint just passed the raw map to the client without applying values server-side.

## Changes
- **`store.go`**: Add a `cacheTTLSec()` helper to safely extract duration values from the `cacheTTL` config map. `NewPacketStore` now accepts an optional `cacheTTL` map (variadic, backward-compatible) and wires:
  - `cacheTTL.analyticsHashSizes` → `collisionCacheTTL`
  - `cacheTTL.analyticsRF` → `rfCacheTTL`
- **Default changed**: `collisionCacheTTL` default raised from 60s → 3600s (1 hour). Hash collision computation is expensive and the data changes rarely — 60s was causing unnecessary recomputation.
- **`main.go`**: Pass `cfg.CacheTTL` to `NewPacketStore`.
- **Tests**: Added `TestCacheTTLFromConfig` and `TestCacheTTLDefaults` in `eviction_test.go`. Updated the existing `TestHashCollisionsCacheTTL` for the new default.

## Audit of other cacheTTL values

The remaining `cacheTTL` keys (`stats`, `nodeDetail`, `nodeHealth`, `nodeList`, `bulkHealth`, `networkStatus`, `observers`, `channels`, `channelMessages`, `analyticsTopology`, `analyticsChannels`, `analyticsSubpaths`, `analyticsSubpathDetail`, `nodeAnalytics`, `nodeSearch`, `invalidationDebounce`) are **client-side only** — served via `/api/config/cache` and consumed by the frontend. They don't have corresponding server-side caches to wire to. The only server-side caches (`rfCache`, `topoCache`, `hashCache`, `chanCache`, `distCache`, `subpathCache`, `collisionCache`) all use either `rfCacheTTL` or `collisionCacheTTL`, both now configurable.

## Complexity

O(1) config lookup at store init time. No hot-path impact.

Co-authored-by: you <you@example.com> |
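A hedged sketch of the helper's shape (the actual signature in `store.go` may differ); JSON config numbers decode as `float64`, which is the main case to guard.

```go
package main

import "time"

// cacheTTLSecSketch extracts a TTL (in seconds) from a generic config map,
// falling back to def when the key is missing or not a positive number.
func cacheTTLSecSketch(cfg map[string]interface{}, key string, def time.Duration) time.Duration {
	if cfg == nil {
		return def
	}
	switch n := cfg[key].(type) {
	case float64: // JSON numbers arrive as float64
		if n > 0 {
			return time.Duration(n) * time.Second
		}
	case int:
		if n > 0 {
			return time.Duration(n) * time.Second
		}
	}
	return def
}

// Example wiring (value from this PR's description):
//   collisionCacheTTL := cacheTTLSecSketch(cfg, "analyticsHashSizes", 3600*time.Second)
```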
||
|
|
767c8a5a3e |
perf: async chunked backfill — HTTP serves within 2 minutes (#612) (#614)
## Summary

Adds two config knobs for controlling backfill scope and neighbor graph data retention, plus removes the dead synchronous backfill function.

## Changes

### Config knobs

#### `resolvedPath.backfillHours` (default: 24)
Controls how far back (in hours) the async backfill scans for observations with a NULL `resolved_path`. Transmissions with `first_seen` older than this window are skipped, reducing startup time for instances with large historical datasets.

#### `neighborGraph.maxAgeDays` (default: 30)
Controls the maximum age of `neighbor_edges` entries. Edges with `last_seen` older than this are pruned from both SQLite and the in-memory graph. Pruning runs on startup (after a 4-minute stagger) and every 24 hours thereafter.

### Dead code removal
- Removed the synchronous `backfillResolvedPaths` function that was replaced by the async version.

### Implementation details
- `backfillResolvedPathsAsync` now accepts a `backfillHours` parameter and filters by `tx.FirstSeen`
- `NeighborGraph.PruneOlderThan(cutoff)` removes stale edges from the in-memory graph
- `PruneNeighborEdges(conn, graph, maxAgeDays)` prunes both the DB and the in-memory graph
- The periodic pruning ticker follows the same pattern as metrics pruning (24h interval, staggered start)
- Graceful shutdown stops the edge prune ticker

### Config example
Both knobs added to `config.example.json` with `_comment` fields.

## Tests
- Config default/override tests for both knobs
- `TestGraphPruneOlderThan` — in-memory edge pruning
- `TestPruneNeighborEdgesDB` — SQLite + in-memory pruning together
- `TestBackfillRespectsHourWindow` — verifies old transmissions are excluded by the backfill window

---------

Co-authored-by: you <you@example.com> |
||
|
|
6ae62ce535 |
perf: make txToMap observations lazy via ExpandObservations flag (#595)
## Summary

`txToMap()` previously always allocated observation sub-maps for every packet, even though the `/api/packets` handler immediately stripped them via `delete(p, "observations")` unless `expand=observations` was requested. A typical page of 50 packets with ~5 observations each caused 300+ unnecessary map allocations per request.

## Changes
- **`txToMap`**: Add a variadic `includeObservations bool` parameter. Observations are only built when `true` is passed, eliminating allocations when they'd just be discarded.
- **`PacketQuery`**: Add an `ExpandObservations bool` field to thread the caller's intent through the query pipeline.
- **`routes.go`**: Set `ExpandObservations` based on the `expand=observations` query param. Removed the post-hoc `delete(p, "observations")` loop — observations are simply never created when not requested.
- **Single-packet lookups** (`GetPacketByID`, `GetPacketByHash`): Always pass `true` since detail views need observations.
- **Multi-node/analytics queries**: Default (no flag) = no observations, matching prior behavior.

## Testing
- Added `TestTxToMapLazyObservations` covering all three cases: no flag, `false`, and `true`.
- All existing tests pass (`go test ./...`).

## Perf Impact

Eliminates ~250 observation map allocations per `/api/packets` request (at the default page size of 50 with ~5 observations each). This is a constant-factor improvement per request — no algorithmic complexity change.

Fixes #374

Co-authored-by: you <you@example.com> |
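A sketch of the variadic-flag pattern, with local stand-in types rather than the real `StoreTx` fields:

```go
// Stand-in types for the sketch; not the real store structs.
type obsLite struct{ Observer string }
type txLite struct {
	Hash         string
	FirstSeen    string
	Observations []obsLite
}

// txToMapSketch only allocates the observation sub-maps when the caller
// explicitly asks for them, so list endpoints pay nothing for data they discard.
func txToMapSketch(tx *txLite, includeObservations ...bool) map[string]interface{} {
	m := map[string]interface{}{
		"hash":      tx.Hash,
		"firstSeen": tx.FirstSeen,
	}
	if len(includeObservations) > 0 && includeObservations[0] {
		obs := make([]map[string]interface{}, 0, len(tx.Observations))
		for _, o := range tx.Observations {
			obs = append(obs, map[string]interface{}{"observer": o.Observer})
		}
		m["observations"] = obs
	}
	return m
}
```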
||
|
|
6e2f79c0ad |
perf: optimize QueryGroupedPackets — cache observer count, defer map construction (#594)
## Summary
Optimizes `QueryGroupedPackets()` in `store.go` to eliminate two major
inefficiencies on every grouped packet list request:
### Changes
1. **Cache `UniqueObserverCount` on `StoreTx`** — Instead of iterating
all observations to count unique observers on every query
(O(total_observations) per request), we now track unique observers at
ingest time via an `observerSet` map and pre-computed
`UniqueObserverCount` field. This is updated incrementally as
observations arrive.
2. **Defer map construction until after pagination** — Previously,
`map[string]interface{}` was built for ALL 30K+ filtered results before
sorting and paginating. Now the grouped cache stores sorted `[]*StoreTx`
pointers (lightweight), and `groupedTxsToPage()` builds maps only for
the requested page (typically 50 items). This eliminates ~30K map
allocations per cache miss.
3. **Lighter cache footprint** — The grouped cache now stores
`[]*StoreTx` instead of `*PacketResult` with pre-built maps, reducing
memory pressure and GC work.
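A sketch of the incremental counter from change 1 above (field names follow the description; the surrounding struct is simplified):

```go
// Sketch: per-transmission unique-observer tracking, updated at ingest time.
type txCounted struct {
	observerSet         map[string]struct{}
	UniqueObserverCount int
}

// addObservation keeps the count current in O(1) per observation, so queries
// no longer re-count observers on every request.
func (t *txCounted) addObservation(observerID string) {
	if t.observerSet == nil {
		t.observerSet = make(map[string]struct{})
	}
	if _, seen := t.observerSet[observerID]; !seen {
		t.observerSet[observerID] = struct{}{}
		t.UniqueObserverCount++
	}
}
```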
### Complexity
- Observer counting: O(1) per query (was O(total_observations))
- Map construction: O(page_size) per query (was O(n) where n = all
filtered results)
- Sort remains O(n log n) on cache miss, but the cache (3s TTL) absorbs
repeated requests
### Testing
- `cd cmd/server && go test ./...` — all tests pass
- `cd cmd/ingestor && go build ./...` — builds clean
Fixes #370
---------
Co-authored-by: you <you@example.com>
|
||
|
|
45991eca09 |
perf: combine chained filterPackets passes into single scan (#592)
## Summary

Combines the chained `filterTxSlice` calls in `filterPackets()` into a single pass over the packet slice.

## Problem

When multiple filter parameters are specified (e.g., `type=4&route=1&since=...&until=...`), each filter created a new intermediate `[]*StoreTx` slice. With N filters, this meant N separate scans and N-1 unnecessary allocations.

## Fix

All filter predicates (type, route, observer, hash, since, until, region, node) are pre-computed before the loop, then evaluated in a single `filterTxSlice` call. This eliminates all intermediate allocations.

**Preserved behavior:**
- Fast-path index lookups for hash-only and observer-only queries remain unchanged
- Node-only fast-path via the `byNode` index preserved
- All existing filter semantics maintained (same comparison operators, same null checks)

**Complexity:** Single `O(n)` pass regardless of how many filters are active, vs the previous `O(n * k)` where k = number of active filters (each pass is O(n) but allocates).

## Testing

All existing tests pass (`cd cmd/server && go test ./...`).

Fixes #373

Co-authored-by: you <you@example.com> |
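A minimal sketch of the single-pass shape (a generic helper for illustration; the real `filterTxSlice` works on `[]*StoreTx` with concrete predicates):

```go
// Sketch: evaluate all pre-built predicates in one scan, no intermediate slices.
func filterSinglePass[T any](items []T, preds []func(T) bool) []T {
	out := make([]T, 0, len(items))
outer:
	for _, it := range items {
		for _, p := range preds {
			if !p(it) {
				continue outer
			}
		}
		out = append(out, it)
	}
	return out
}
```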
||
|
|
76c42556a2 |
perf: sort snrVals/rssiVals once in computeAnalyticsRF (#591)
## Summary

Sort `snrVals` and `rssiVals` once upfront in `computeAnalyticsRF()` and read min/max/median directly from the sorted slices, instead of copying and sorting per stat call.

## Changes
- Sort both slices once before computing stats (2 sorts total instead of 4+ copy+sorts)
- Read `min` from `sorted[0]`, `max` from `sorted[len-1]`, `median` from `sorted[len/2]`
- Remove the now-unused `sortedF64` and `medianF64` helper closures

## Performance impact

With 100K+ observations, this eliminates multiple O(n log n) copy+sort operations. Previously each call to `medianF64` did a full copy + sort, and `minF64`/`maxF64` did O(n) scans on the unsorted array. Now: 2 in-place sorts total, O(1) lookups for min/max/median.

Fixes #366

Co-authored-by: you <you@example.com> |
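The pattern, as a small sketch (not the literal diff):

```go
package main

import "sort"

// rfStatsSketch sorts once in place, then min/max/median are O(1) reads.
func rfStatsSketch(vals []float64) (min, max, median float64, ok bool) {
	if len(vals) == 0 {
		return 0, 0, 0, false
	}
	sort.Float64s(vals)
	return vals[0], vals[len(vals)-1], vals[len(vals)/2], true
}
```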
||
|
|
6f8378a31c |
perf: batch-remove from secondary indexes in EvictStale (#590)
## Summary

`EvictStale()` was doing O(n) linear scans per evicted item to remove entries from the secondary indexes (`byObserver`, `byPayloadType`, `byNode`). Evicting 1000 packets from an observer with 50K observations meant 1000 × 50K = 50M comparisons — all under a write lock.

## Fix

Replace per-item removal with batch single-pass filtering:
1. **Collect phase**: Walk evicted packets once, building sets of evicted tx IDs, observation IDs, and affected index keys
2. **Filter phase**: For each affected index slice, do a single pass keeping only non-evicted entries

**Before**: O(evicted_count × index_slice_size) per index — quadratic in practice
**After**: O(evicted_count + index_slice_size) per affected key — linear

## Changes
- `cmd/server/store.go`: Restructured the `EvictStale()` eviction loop into the collect + batch-filter pattern

## Testing
- All existing tests pass (`cd cmd/server && go test ./...`)

Fixes #368

Co-authored-by: you <you@example.com> |
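A sketch of the filter phase for one secondary index, with types simplified to `map[key][]txID`:

```go
// Sketch: single-pass removal of evicted IDs from one index slice,
// reusing the backing array instead of deleting items one by one.
func batchRemoveFromIndex(index map[string][]string, evicted map[string]struct{}, affectedKeys []string) {
	for _, key := range affectedKeys {
		slice := index[key]
		kept := slice[:0]
		for _, id := range slice {
			if _, gone := evicted[id]; !gone {
				kept = append(kept, id)
			}
		}
		if len(kept) == 0 {
			delete(index, key)
		} else {
			index[key] = kept
		}
	}
}
```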
||
|
|
56115ee0a4 |
perf: use byNode index in QueryMultiNodePackets instead of full scan (#589)
## Summary

`QueryMultiNodePackets()` was scanning ALL packets with `strings.Contains` on JSON blobs — O(packets × pubkeys × json_length). With 30K+ packets and multiple pubkeys, this caused noticeable latency on `/api/packets?nodes=...`.

## Fix

Replace the full scan with lookups into the existing `byNode` index, which already maps pubkeys to their transmissions. Merge results with hash-based deduplication, then apply time filters.

**Before:** O(N × P × J) where N = all packets, P = pubkeys, J = avg JSON length
**After:** O(M × P) where M = packets per pubkey (typically small), plus O(R log R) sort for pagination correctness

Results are sorted by `FirstSeen` after merging to maintain the oldest-first ordering expected by the pagination logic.

Fixes #357

Co-authored-by: you <you@example.com> |
||
|
|
321d1cf913 |
perf: apply time filter early in GetNodeAnalytics to avoid full packet scan (#588)
## Problem
`GetNodeAnalytics()` in `store.go` scans ALL 30K+ packets doing
`strings.Contains` on every JSON blob when the node has a name, then
filters by time range *after* the full scan. This is `O(packets ×
json_length)` on every `/api/nodes/{pubkey}/analytics` request.
## Fix
Move the `fromISO` time check inside the scan loop so old packets are
skipped **before** the expensive `strings.Contains` matching. For the
non-name path (indexed-only), the time filter is also applied inline,
eliminating the separate `allPkts` intermediate slice.
### Before
1. Scan all packets → collect matches (including old ones) → `allPkts`
2. Filter `allPkts` by time → `packets`
### After
1. Scan packets, skip `tx.FirstSeen <= fromISO` immediately → `packets`
This avoids `strings.Contains` calls on packets outside the requested
time window (typically 7 days out of months of data).
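As a sketch, the reordered loop looks roughly like this (stand-in types; the real scan matches node names against the packet JSON):

```go
package main

import "strings"

type pktLite struct{ FirstSeen, RawJSON string }

// scanRecentMatches does the cheap timestamp comparison before the expensive
// substring match; ISO-8601 strings in the same format compare lexicographically.
func scanRecentMatches(pkts []*pktLite, fromISO, needle string) []*pktLite {
	var out []*pktLite
	for _, tx := range pkts {
		if tx.FirstSeen <= fromISO {
			continue // outside the requested window: skip before string matching
		}
		if strings.Contains(tx.RawJSON, needle) {
			out = append(out, tx)
		}
	}
	return out
}
```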
## Complexity
- **Before:** `O(total_packets × avg_json_length)` for name matching
- **After:** `O(recent_packets × avg_json_length)` — only packets within
the time window are string-matched
## Testing
- `cd cmd/server && go test ./...` — all tests pass
Fixes #367
Co-authored-by: you <you@example.com>
|
||
|
|
790a713ba9 |
perf: combine 4 subpath API calls into single bulk endpoint (#587)
## Summary
Consolidates the 4 parallel `/api/analytics/subpaths` calls in the Route
Patterns tab into a single `/api/analytics/subpaths-bulk` endpoint,
eliminating 3 redundant server-side scans of the subpath index on cache
miss.
## Changes
### Backend (`cmd/server/routes.go`, `cmd/server/store.go`)
- New `GET
/api/analytics/subpaths-bulk?groups=2-2:50,3-3:30,4-4:20,5-8:15`
endpoint
- Groups format: `minLen-maxLen:limit` comma-separated
- `GetAnalyticsSubpathsBulk()` iterates `spIndex` once, bucketing
entries into per-group accumulators by hop length
- Hop name resolution is done once per raw hop and shared across groups
- Results are cached per-group for compatibility with existing
single-key cache lookups
- Region-filtered queries fall back to individual
`GetAnalyticsSubpaths()` calls (region filtering requires
per-transmission observer checks)
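For reference, a hedged sketch of parsing the `groups` format described above (helper name and error wording are illustrative, not the real handler's):

```go
package main

import (
	"fmt"
	"strings"
)

type subpathGroup struct{ MinLen, MaxLen, Limit int }

// parseGroupsParamSketch parses comma-separated "minLen-maxLen:limit" entries,
// e.g. "2-2:50,3-3:30,4-4:20,5-8:15".
func parseGroupsParamSketch(raw string) ([]subpathGroup, error) {
	var groups []subpathGroup
	for _, part := range strings.Split(raw, ",") {
		var g subpathGroup
		if _, err := fmt.Sscanf(part, "%d-%d:%d", &g.MinLen, &g.MaxLen, &g.Limit); err != nil {
			return nil, fmt.Errorf("invalid group %q (want minLen-maxLen:limit)", part)
		}
		groups = append(groups, g)
	}
	return groups, nil
}
```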
### Frontend (`public/analytics.js`)
- `renderSubpaths()` now makes 1 API call instead of 4
- Response shape: `{ results: [{ subpaths, totalPaths }, ...] }` —
destructured into the same `[d2, d3, d4, d5]` variables
### Tests (`cmd/server/routes_test.go`)
- `TestAnalyticsSubpathsBulk`: validates 3-group response shape, missing
params error, invalid format error
## Performance
- **Before:** 4 API calls → 4 scans of `spIndex` + 4× hop resolution on
cache miss
- **After:** 1 API call → 1 scan of `spIndex` + 1× hop resolution
(shared cache)
- Cache miss cost reduced by ~75% for this tab
- No change on cache hit (individual group caching still works)
Fixes #398
Co-authored-by: you <you@example.com>
|
||
|
|
f3d5d1e021 |
perf: resolve hops from in-memory prefix map instead of N+1 DB queries (#577)
## Summary

Replace N+1 per-hop DB queries in `handleResolveHops` with O(1) lookups against the in-memory prefix map that already exists in the packet store.

## Problem

Each hop in the `resolve-hops` API triggered a separate `SELECT ... LIKE ?` query against the nodes table. With 10 hops, that's 10 DB round-trips — unnecessary when `getCachedNodesAndPM()` already maintains an in-memory prefix map that can resolve hops instantly.

## Changes
- **routes.go**: Replace the per-hop DB query loop with `pm.m[hopLower]` lookups from the prefix map. Convert `nodeInfo` → `HopCandidate` inline. Remove unused `rows`/`sql.Scan` code.
- **store.go**: Add an `InvalidateNodeCache()` method to force a prefix map rebuild (needed by tests that insert nodes after store initialization).
- **routes_test.go**: Give `TestResolveHopsAmbiguous` a proper store so hops resolve via the prefix map.
- **resolve_context_test.go**: Call `InvalidateNodeCache()` after inserting test nodes. Fix the confidence assertion — with GPS candidates and no affinity context, `resolveWithContext` correctly returns `gps_preference` (previously masked because the prefix map didn't have the test nodes).

## Complexity

O(1) per hop lookup via hash map vs O(n) DB scan per hop. No hot-path impact — this endpoint is called on-demand, not in a render loop.

Fixes #369

---------

Co-authored-by: you <you@example.com> |
||
|
|
02004c5912 |
perf: incremental distance index update on path changes (#576)
## Summary

Replace the full `buildDistanceIndex()` rebuild with incremental `removeTxFromDistanceIndex`/`addTxToDistanceIndex` calls for only the transmissions whose paths actually changed during `IngestNewObservations`.

## Problem

When any transmission's best path changed during observation ingestion, the **entire distance index was rebuilt** — iterating all 30K+ packets, resolving all hops, and computing haversine distances. This `O(total_packets × avg_hops)` operation ran under a write lock, blocking all API readers. A 30-second debounce (`distRebuildInterval`) was added in #557 to mitigate this, but it only delayed the pain — the full rebuild still happened, just less frequently.

## Fix
- Added `removeTxFromDistanceIndex(tx)` — filters out all `distHopRecord` and `distPathRecord` entries for a specific transmission
- Added `addTxToDistanceIndex(tx)` — computes and appends new distance records for a single transmission
- In `IngestNewObservations`, changed path-change handling to call remove+add for each affected tx instead of marking dirty and waiting for a full rebuild
- Removed `distDirty`, `distLast`, and `distRebuildInterval` since incremental updates are cheap enough to apply immediately

## Complexity
- **Before:** `O(total_packets × avg_hops)` per rebuild (30K+ packets)
- **After:** `O(changed_txs × avg_hops + total_dist_records)` — the remove is a linear scan of the distance slices, but only for affected txs; the add is `O(hops)` per changed tx

The remove scan over the `distHops`/`distPaths` slices is linear in slice length, but this is still far cheaper than the full rebuild, which also does JSON parsing, hop resolution, and haversine math for every packet.

## Tests
- Updated `TestDistanceRebuildDebounce` → `TestDistanceIncrementalUpdate` to verify incremental behavior and check for duplicate path records
- All existing tests pass (`go test ./...` in both `cmd/server` and `cmd/ingestor`)

Fixes #365

---------

Co-authored-by: you <you@example.com> |
||
|
|
ef30031e2e |
perf: cache resolveRegionObservers with 30s TTL (#575)
## Summary

Cache `resolveRegionObservers()` results with a 30-second TTL to eliminate repeated database queries for region→observer ID mappings.

## Problem

`resolveRegionObservers()` queried the database on every call despite the observers table changing infrequently (~20 rows). It's called from 10+ hot paths including `filterPackets()`, `GetChannels()`, and multiple analytics compute functions. When analytics caches are cold, parallel requests each hit the DB independently.

## Solution
- Added a dedicated `regionObsMu` mutex + `regionObsCache` map with a 30s TTL
- Uses a separate mutex (not `s.mu`) to avoid deadlocks — callers already hold `s.mu.RLock()`
- The cache is lazily populated per region and fully invalidated after the TTL expires
- Follows the same pattern as `getCachedNodesAndPM()` (30s TTL, on-demand rebuild)

## Changes
- **`cmd/server/store.go`**: Added `regionObsMu`, `regionObsCache`, `regionObsCacheTime` fields; rewrote `resolveRegionObservers()` to check the cache first; added a `fetchAndCacheRegionObs()` helper
- **`cmd/server/coverage_test.go`**: Added `TestResolveRegionObserversCaching` — verifies cache population, cache hits, and nil handling for unknown regions

## Testing
- All existing Go tests pass (`go test ./...`)
- The new test verifies caching behavior (population, hits, nil for unknown regions)

Fixes #362

---------

Co-authored-by: you <you@example.com> |
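A sketch of the dedicated-mutex TTL cache pattern (fields and fetch signature simplified):

```go
package main

import (
	"sync"
	"time"
)

// regionObsCacheSketch: a small TTL cache with its own mutex, so callers that
// already hold the store's main lock can use it without lock-nesting concerns.
type regionObsCacheSketch struct {
	mu    sync.Mutex
	data  map[string][]string // region -> observer IDs
	stamp time.Time
	ttl   time.Duration // e.g. 30 * time.Second
}

func (c *regionObsCacheSketch) get(region string, fetch func(string) []string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Drop the whole cache once the TTL expires.
	if c.data == nil || time.Since(c.stamp) > c.ttl {
		c.data = make(map[string][]string)
		c.stamp = time.Now()
	}
	// Lazily populate one region at a time; the DB is hit only on a miss.
	if ids, ok := c.data[region]; ok {
		return ids
	}
	ids := fetch(region)
	c.data[region] = ids
	return ids
}
```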
||
|
|
67511ed6a7 |
perf: combine GetStoreStats into 2 concurrent queries instead of 5 sequential (#574)
## Summary

`GetStoreStats()` ran 5 sequential DB queries on every call. This combines them into **2 concurrent queries**:
1. **Node/observer counts** — single query using subqueries: `SELECT (SELECT COUNT(*) FROM nodes WHERE ...), (SELECT COUNT(*) FROM nodes), (SELECT COUNT(*) FROM observers)`
2. **Observation counts** — single query using conditional aggregation: `SUM(CASE WHEN timestamp > ? THEN 1 ELSE 0 END)` scoped to the 24h window, avoiding a full table scan for the 1h count

Both queries run concurrently via goroutines + `sync.WaitGroup`.

## What changed
- `cmd/server/store.go`: Rewrote `GetStoreStats()` — 5 sequential `QueryRow` calls → 2 concurrent combined queries
- Error handling now propagates query errors instead of silently ignoring them

## Performance justification
- **Before:** 5 sequential round-trips to SQLite, with 2 potentially expensive `COUNT(*)` scans on the `observations` table
- **After:** 2 concurrent round-trips; the observation query scans the 24h window once instead of separately scanning for 1h and 24h
- The 10s cache (`statsTTL`) remains, so this fires at most once per 10s — but when it does fire, it's ~2.5x fewer round-trips and the observation scan is halved

## Tests
- `go test ./...` passes for both `cmd/server` and `cmd/ingestor`

Fixes #363

---------

Co-authored-by: you <you@example.com> |
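A sketch of the concurrency shape (SQL abbreviated; the combined statements in `store.go` are the authority):

```go
package main

import (
	"database/sql"
	"sync"
	"time"
)

// statsConcurrentlySketch issues the two combined queries in parallel and
// joins them with a WaitGroup; errors are propagated instead of ignored.
func statsConcurrentlySketch(db *sql.DB, cutoff1h, cutoff24h time.Time) (nodes, observers, obs24h, obs1h int, err error) {
	var wg sync.WaitGroup
	var errNodes, errObs error

	wg.Add(2)
	go func() {
		defer wg.Done()
		errNodes = db.QueryRow(
			`SELECT (SELECT COUNT(*) FROM nodes), (SELECT COUNT(*) FROM observers)`,
		).Scan(&nodes, &observers)
	}()
	go func() {
		defer wg.Done()
		// Conditional aggregation: one scan of the 24h window yields both counts.
		errObs = db.QueryRow(
			`SELECT COUNT(*), COALESCE(SUM(CASE WHEN timestamp > ? THEN 1 ELSE 0 END), 0)
			   FROM observations WHERE timestamp > ?`,
			cutoff1h, cutoff24h,
		).Scan(&obs24h, &obs1h)
	}()
	wg.Wait()

	if errNodes != nil {
		return 0, 0, 0, 0, errNodes
	}
	return nodes, observers, obs24h, obs1h, errObs
}
```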
||
|
|
d4f2c3ac66 |
perf: index subpath detail lookups instead of scanning all packets (#571)
## Summary

`GetSubpathDetail()` iterated ALL packets to find those containing a specific subpath — `O(packets × hops × subpath_length)`. With 30K+ packets this caused user-visible latency on every subpath detail click.

## Changes

### `cmd/server/store.go`
- Added `spTxIndex map[string][]*StoreTx` alongside the existing `spIndex` — tracks which transmissions contain each subpath key
- Extended `addTxToSubpathIndexFull()` and `removeTxFromSubpathIndexFull()` to maintain both indexes simultaneously
- The original `addTxToSubpathIndex()`/`removeTxFromSubpathIndex()` wrappers are preserved for backward compatibility
- `buildSubpathIndex()` now populates both `spIndex` and `spTxIndex` during `Load()`
- All incremental update sites (ingest, path change, eviction) use the `Full` variants
- `GetSubpathDetail()` rewritten: a direct `O(1)` map lookup on `spTxIndex[key]` instead of scanning all packets

### `cmd/server/coverage_test.go`
- Added `TestSubpathTxIndexPopulated`: verifies `spTxIndex` is populated, counts match `spIndex`, and `GetSubpathDetail` returns correct results for both existing and non-existent subpaths

## Complexity
- **Before:** `O(total_packets × avg_hops × subpath_length)` per request
- **After:** `O(matched_txs)` per request (direct map lookup)

## Tests

All tests pass: `cmd/server` (4.6s), `cmd/ingestor` (25.6s)

Fixes #358

---------

Co-authored-by: you <you@example.com> |
||
|
|
37300bf5c8 |
fix: cap prefix map at 8 chars to cut memory ~10x (#570)
## Summary `buildPrefixMap()` was generating map entries for every prefix length from 2 to `len(pubkey)` (up to 64 chars), creating ~31 entries per node. With 500 nodes that's ~15K map entries; with 1K+ nodes it balloons to 31K+. ## Changes **`cmd/server/store.go`:** - Added `maxPrefixLen = 8` constant — MeshCore path hops use 2–6 char prefixes, 8 gives headroom - Capped the prefix generation loop at `maxPrefixLen` instead of `len(pk)` - Added full pubkey as a separate map entry when key is longer than `maxPrefixLen`, ensuring exact-match lookups (used by `resolveWithContext`) still work **`cmd/server/coverage_test.go`:** - Added `TestPrefixMapCap` with subtests for: - Short prefix resolution still works - Full pubkey exact-match resolution still works - Intermediate prefixes beyond the cap correctly return nil - Short keys (≤8 chars) have all prefix entries - Map size is bounded ## Impact - Map entries per node: ~31 → ~8 (one per prefix length 2–8, plus one full-key entry) - Total map size for 500 nodes: ~15K entries → ~4K entries (~75% reduction) - No behavioral change for path hop resolution (2–6 char prefixes) - No behavioral change for exact pubkey lookups ## Tests All existing tests pass: - `cmd/server`: ✅ - `cmd/ingestor`: ✅ Fixes #364 --------- Co-authored-by: you <you@example.com> |