Mirror of https://github.com/Kpa-clawbot/meshcore-analyzer.git
Synced 2026-05-11 21:24:42 +00:00
76d89e65788cd034fc8d98ce3102225d7e0fcd69
266 Commits
76d89e6578
fix(ingestor): exclude path_json='[]' rows from backfill WHERE (#1119) (#1121)
## Summary

`BackfillPathJSONAsync` re-selected observations whose `path_json` was already `'[]'`, rewrote them to `'[]'`, and looped forever. The `len(batch) == 0` exit condition was never reached, the migration marker was never recorded, and the ingestor sustained 2–3 MB/s WAL writes at idle (76% of CPU in `sqlite.Exec` per pprof).

## Fix

Drop `'[]'` from the WHERE clause:

```diff
 WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
-  AND (o.path_json IS NULL OR o.path_json = '' OR o.path_json = '[]')
+  AND (o.path_json IS NULL OR o.path_json = '')
```

`'[]'` is the "already attempted, no hops" sentinel (still written at line 994 of `cmd/ingestor/db.go` when `DecodePathFromRawHex` returns no hops). Excluding it from the WHERE clause lets the loop terminate after one full pass and allows the migration marker `backfill_path_json_from_raw_hex_v1` to be recorded.

## TDD

- **Red commit** (`19f8004`): `TestBackfillPathJSONAsync_BracketRowsTerminate` — seeds 100 observations with `path_json='[]'` and a `raw_hex` that decodes to zero hops, then asserts the migration marker is written within 5s. Fails on master with *"backfill never recorded migration marker within 5s — infinite loop on path_json='[]' rows"*.
- **Green commit** (`7019100`): WHERE-clause fix, plus an updated expectation for row 1 of `TestBackfillPathJsonFromRawHex` (the pre-seeded `'[]'` row is now correctly skipped instead of being re-decoded).
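The termination property of the fix can be sketched as a pure selection predicate — a hypothetical helper for illustration only; the real predicate lives in the SQL WHERE clause:

```go
package main

import "fmt"

// needsBackfill mirrors the fixed WHERE clause. Rows with an empty/NULL
// path_json are candidates; the "[]" sentinel means "already attempted,
// no hops" and must be excluded so the batch loop can reach
// len(batch) == 0 and terminate.
func needsBackfill(rawHex, pathJSON string) bool {
	if rawHex == "" { // WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
		return false
	}
	return pathJSON == "" // NULL/'' only; '[]' is excluded after the fix
}

func main() {
	// The sentinel row that previously looped forever is now skipped.
	fmt.Println(needsBackfill("26022ff8", "[]")) // false
	fmt.Println(needsBackfill("26022ff8", ""))   // true
}
```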
## Test results

```
ok      github.com/corescope/ingestor   49.656s
```

## Acceptance criteria from #1119

- [x] Backfill terminates within 1 polling cycle of having no progress to make
- [x] Migration marker `backfill_path_json_from_raw_hex_v1` written after termination
- [x] On restart, backfill recognizes migration done and exits immediately (existing behavior — the migration check at the top of `BackfillPathJSONAsync` was always correct; the bug was that the marker never got written)
- [x] Test: seed DB with N observations all having `path_json = '[]'` → backfill runs once → no UPDATEs issued, migration marker written
- [ ] Disk write rate on idle staging drops from 2–3 MB/s to <100 KB/s — to be verified by the user post-deploy

Fixes #1119.

---------

Co-authored-by: OpenClaw Bot <bot@openclaw.local>
45f2607f75
perf(ingestor): group commit observation INSERTs by time window (M1, refs #1115) (#1117)
## Summary

Implements **M1 from #1115**: batches observation/transmission INSERTs into a single SQLite `BEGIN/COMMIT` window instead of fsyncing per packet. At ~250 obs/sec this drops the WAL fsync rate from ~20/s to ~1/s and eliminates the `obs-persist skipped` / `SQLITE_BUSY` log spam that the issue documents.

This is a **partial fix** — it ships the group-commit mechanism. Acceptance items 6–7 (measured fsync rate / measured `obs-persist skipped` rate at staging steady-state) require post-deploy observation, and M2 (per-`tx_hash` observation buffering) is intentionally deferred. The issue stays open for the user to verify on staging.

> Partial fix for #1115 — does not auto-close.

Refs #1115.

## Mechanism

- `Store` gains an active `*sql.Tx`, a `pendingRows` counter, `gcMu`, and the `groupCommitMs` / `groupCommitMaxRows` knobs. `SetGroupCommit(ms, maxRows)` enables the mode; `FlushGroupTx()` commits the in-flight tx.
- `InsertTransmission` lazily opens a tx on the first call after each flush, then issues all writes through `tx.Stmt()` bindings of the existing prepared statements. With `MaxOpenConns(1)` the connection is already serialized; `gcMu` serializes group-commit state without contention.
- A goroutine in `cmd/ingestor/main.go` calls `FlushGroupTx()` every `groupCommitMs` ms. `pendingRows >= groupCommitMaxRows` triggers an eager flush. `Close()` flushes before the WAL checkpoint so no rows are lost on graceful shutdown.
- `groupCommitMs == 0` short-circuits to the legacy per-call auto-commit path (statements bound to `s.db`, no tx) — current behavior preserved byte-for-byte for operators who opt out.

## Config

Two new optional fields (ingestor-only), both documented in `config.example.json`:

| Field | Default | Effect |
|---|---|---|
| `groupCommitMs` | `1000` | Flush window in ms. `0` disables batching (legacy per-packet auto-commit). |
| `groupCommitMaxRows` | `1000` | Safety cap; when exceeded the queue flushes immediately to bound memory and the crash-loss window. |

No DB schema change. No required config change on upgrade.

## Tests (TDD red → green visible in commits)

`cmd/ingestor/group_commit_test.go` — three assertions, written first as the red commit:

- `TestGroupCommit_BatchesInsertsIntoOneTx` — 50 `InsertTransmission` calls inside a wide window produce **0** commits until `FlushGroupTx`, then exactly **1**; all 50 rows visible after the flush. (This is the spec's "50 observations → 1 SQLite write transaction" assertion.)
- `TestGroupCommit_Disabled` — `groupCommitMs=0` keeps every insert immediately visible and `GroupCommitFlushes` never advances. (The spec's "groupCommitMs=0 reverts to per-packet behavior" assertion.)
- `TestGroupCommit_MaxRowsForcesEarlyFlush` — cap=3, 7 inserts → 2 auto-flushes from the cap + 1 final manual flush = 3 total.

Red commit: `e2b0370` (stubs `SetGroupCommit` / `FlushGroupTx` so the tests compile and fail on **assertions**, not import errors). Green commit: `73f3559`. Full ingestor suite (`go test ./...` in `cmd/ingestor`) stays green, ~49 s.

## Performance

This PR is the perf change itself. A local micro-test (the new `TestGroupCommit_BatchesInsertsIntoOneTx`) shows the structural property: 50 inserts → 1 commit. The fsync-rate measurement called out in the M1 acceptance criteria (`~20/s → ~1/s` at 250 obs/sec) requires staging deployment to confirm — that's the remaining open item that keeps #1115 open after this merges.

No hot-path regressions: when `groupCommitMs > 0` we acquire one mutex per insert (uncontended in the steady state — the connection was already single-threaded via `MaxOpenConns(1)`). When `groupCommitMs == 0` the code path is identical to before, plus one nil-tx check.

## What this PR does NOT do (per spec)

- Does not collapse "30 observations of one packet" into 1 row write — that's M2.
- Does not eliminate dual-writer contention with `cmd/server`'s `resolved_path` writes.
- Does not change observation ordering or live broadcast latency.

---------

Co-authored-by: corescope-bot <bot@corescope.local>
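The flush policy described above can be sketched with the SQLite tx replaced by a commit counter, so the batching behavior is testable in isolation — names here are illustrative, not the PR's exact API:

```go
package main

import (
	"fmt"
	"sync"
)

// groupCommitter models the group-commit state: pending rows accumulate
// until a timed or manual Flush, or until the maxRows safety cap fires.
// maxRows == 0 disables batching (legacy per-insert auto-commit).
type groupCommitter struct {
	mu      sync.Mutex
	pending int
	maxRows int
	Commits int // stand-in for actual BEGIN/COMMIT cycles
}

func (g *groupCommitter) Insert() {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.maxRows == 0 { // legacy per-packet auto-commit path
		g.Commits++
		return
	}
	g.pending++
	if g.pending >= g.maxRows { // safety cap: eager flush
		g.flushLocked()
	}
}

// Flush is what the periodic goroutine would call every groupCommitMs.
func (g *groupCommitter) Flush() {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.flushLocked()
}

func (g *groupCommitter) flushLocked() {
	if g.pending == 0 {
		return // nothing in flight; no empty commits
	}
	g.Commits++
	g.pending = 0
}

func main() {
	g := &groupCommitter{maxRows: 1000}
	for i := 0; i < 50; i++ {
		g.Insert()
	}
	fmt.Println(g.Commits) // 0 — nothing committed yet
	g.Flush()
	fmt.Println(g.Commits) // 1 — 50 inserts, one commit
}
```

The cap=3, 7-inserts case from `TestGroupCommit_MaxRowsForcesEarlyFlush` falls out of the same policy: two eager flushes plus one final manual flush.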
5fa3b56ccb
fix(#662): GetRepeaterRelayInfo also looks up byPathHop by 1-byte prefix (#1086)
## Summary

Partial fix for #662. `GetRepeaterRelayInfo` was reporting "never observed as relay hop" / `RelayCount24h=0` for nodes that clearly DO have packets passing through them — visible on the same node detail page in the "Paths seen through node" view.

## Root cause

The `byPathHop` index is keyed by **both**:

- the full resolved pubkey (populated when neighbor-affinity resolution succeeds), and
- the raw 1-byte hop prefix from the wire (e.g. `"a3"`)

`GetRepeaterRelayInfo` only looked up the full-pubkey key. Many ingested non-advert packets only carry the raw 1-byte hop — so any repeater whose path appearances are all raw-hop entries returned 0, even though the path-listing endpoint (which prefix-matches) renders them. Example node: an `a3…` repeater on staging has ~dozens of paths through it in the UI but the relay-info function returns 0.

## Fix

Look up under both keys (full pubkey + 1-byte prefix) and de-dup by tx ID before counting.

## Trade-off

The 1-byte prefix CAN over-count when multiple nodes share a first byte. This trades a possible over-count for clearly false zeros. The richer disambiguation done by the path-listing endpoint (resolved-path SQL post-filter via `confirmResolvedPathContains`) is out of scope for this partial fix — adding it here would mean disk I/O inside what is currently a pure in-memory lookup. Worth a follow-up if over-counting shows up in practice.

## TDD

- Red commit (`test: failing test for relay-info prefix-hop mismatch`): adds `TestRepeaterRelayActivity_PrefixHop`, which builds a non-advert packet with `PathJSON: ["a3"]`, indexes it via `addTxToPathHopIndex`, then asserts `RelayCount24h>=1` for the full pubkey starting with `a3…`. Fails on the assertion (got 0), not a build error.
- Green commit (`fix: GetRepeaterRelayInfo also looks up byPathHop by 1-byte prefix`): the lookup change. All five `TestRepeaterRelayActivity_*` tests pass.

## Scope

This is a **partial** fix — it addresses the read-side prefix mismatch only. Issue #662 is a 4-axis epic (also covering ingest indexing consistency, UI surfacing, and schema). Leaving #662 open.

---------

Co-authored-by: corescope-bot <bot@corescope>
Co-authored-by: clawbot <clawbot@users.noreply.github.com>
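The dual-key lookup with tx-ID de-dup can be sketched as follows — a minimal model, assuming the index maps keys to tx IDs (the real index holds richer entries):

```go
package main

import "fmt"

// relayTxIDs sketches the fixed lookup: merge index entries under the
// full pubkey with entries under its 1-byte hex prefix, de-duplicated by
// tx ID so a packet indexed under both keys is counted once.
func relayTxIDs(byPathHop map[string][]string, pubkey string) []string {
	keys := []string{pubkey}
	if len(pubkey) >= 2 {
		keys = append(keys, pubkey[:2]) // raw 1-byte wire prefix, e.g. "a3"
	}
	seen := map[string]bool{}
	var out []string
	for _, k := range keys {
		for _, tx := range byPathHop[k] {
			if !seen[tx] {
				seen[tx] = true
				out = append(out, tx)
			}
		}
	}
	return out
}

func main() {
	idx := map[string][]string{
		"a3cafe": {"tx1"},        // resolved full-pubkey entries
		"a3":     {"tx1", "tx2"}, // raw 1-byte hop entries
	}
	// Before the fix only "tx1" was counted; raw-hop-only "tx2" was missed.
	fmt.Println(relayTxIDs(idx, "a3cafe")) // [tx1 tx2]
}
```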
136e1d23c8
feat(#730): foreign-advert detection — flag instead of silent drop (#1084)
## Summary

**Partial fix for #730 (M1 only — M2 frontend and M3 alerting deferred).**

Today the ingestor **silently drops** ADVERTs whose GPS lies outside the configured `geo_filter` polygon. That's the wrong default for an analytics tool — operators get zero visibility into bridged or leaked meshes. This PR makes the new default **flag, don't drop**: foreign adverts are stored, the node row is tagged `foreign_advert=1`, and the API surfaces `"foreign": true` so dashboards / map overlays can be built on top.

## Behavior

| Mode | What happens to an ADVERT outside `geo_filter` |
|---|---|
| (default) flag | Stored, marked `foreign_advert=1`, exposed via API |
| drop (legacy) | Silently dropped (preserves old behavior for ops who want it) |

## What's done (M1 — Backend)

- ingestor stores foreign adverts instead of dropping them
- `nodes.foreign_advert` column added (migration)
- `/api/nodes` and `/api/nodes/{pk}` expose the `foreign: true` field
- Config: `geofilter.action: "flag"|"drop"` (default `flag`)
- Tests + config docs

## What's NOT done (deferred to M2 + M3)

- **M2 — Frontend:** map overlay showing foreign adverts as distinct markers, foreign-advert filter on the packets/nodes pages, dedicated foreign-advert dashboard
- **M3 — Alerting:** time-series detection of bridging events, alert when the foreign-advert rate spikes, identify bridge entry-point nodes

Issue #730 remains open for M2 and M3.

---------

Co-authored-by: corescope-bot <bot@corescope>
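The behavior table can be sketched as a pure decision function — the point-in-polygon test and value names are stand-ins for illustration, not the PR's exact API:

```go
package main

import "fmt"

// advertAction models the flag-vs-drop decision for an incoming ADVERT.
func advertAction(insideGeoFilter bool, action string) (store, foreign bool) {
	if insideGeoFilter {
		return true, false // normal advert: stored, not foreign
	}
	if action == "drop" {
		return false, false // legacy mode: silently dropped
	}
	return true, true // default "flag": stored and tagged foreign_advert=1
}

func main() {
	store, foreign := advertAction(false, "flag")
	fmt.Println(store, foreign) // true true — flagged, not dropped
}
```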
3ab404b545
feat(node-battery): voltage trend chart + /api/nodes/{pubkey}/battery (#663) (#1082)
## Summary

Closes #663 (Phase 2 + 3 partial — time-series tracking + thresholds for nodes that are also observers). Adds a per-node battery voltage trend chart and a `/api/nodes/{pubkey}/battery` endpoint, sourced from the existing `observer_metrics.battery_mv` samples populated by observer status messages. No new ingest or schema changes — purely surfaces data we were already collecting.

## Scope (TDD red→green)

- **RED commit:** test(node-battery) — DB query, endpoint shape (200/404/no-data), and config getters all asserted.
- **GREEN commit:** feat(node-battery) — implementation only.

## Changes

### Backend

- `cmd/server/node_battery.go` (new):
  - `DB.GetNodeBatteryHistory(pubkey, since)` — pulls `(timestamp, battery_mv)` rows from `observer_metrics WHERE LOWER(observer_id) = LOWER(public_key) AND battery_mv IS NOT NULL`. The case-insensitive join tolerates historical pubkey casing variation (observers persist uppercase, nodes lowercase in this DB).
  - `Server.handleNodeBattery` — `GET /api/nodes/{pubkey}/battery?days=N` (default 7, max 365). Returns `{public_key, days, samples[], latest_mv, latest_ts, status, thresholds}`.
  - `Config.LowBatteryMv()` / `CriticalBatteryMv()` — defaults 3300 / 3000 mV.
- `cmd/server/config.go` — `BatteryThresholds *BatteryThresholdsConfig` field.
- `cmd/server/routes.go` — route registration alongside the existing `/health`, `/analytics`.

### Frontend

- `public/node-analytics.js` — new "Battery Voltage" chart card with a status badge (🔋 OK / ⚠️ Low / 🪫 Critical / No data). Renders dashed threshold lines at `lowMv` and `criticalMv`, with an empty-state message when no samples fall in the window.

### Config

- `config.example.json` — `batteryThresholds: { lowMv: 3300, criticalMv: 3000 }` with a `_comment` per the Config Documentation Rule.

## Status semantics

| latest_mv | status |
|---|---|
| no samples in window | `unknown` |
| `>= lowMv` | `ok` |
| `< lowMv`, `>= critMv` | `low` |
| `< criticalMv` | `critical` |

## What this PR does NOT do (deferred)

The issue's full Phase 1 (writing decoded sensor advert telemetry into `nodes.battery_mv` / `temperature_c` from the server-side decoder) and Phase 4 (firmware/active polling for repeaters without observers) are out of scope here. This PR delivers the requested Phase 2/3 surfacing for the data path that already lands rows: `observer_metrics`. Repeaters that are also observers (i.e. publish status to MQTT) will get a voltage trend immediately; pure passive nodes won't until Phase 1 lands.

## Tests

- `TestGetNodeBatteryHistory_FromObserverMetrics` — case-insensitive join, NULL skipping, ordering.
- `TestNodeBatteryEndpoint` — full happy path with thresholds + status.
- `TestNodeBatteryEndpoint_NoData` — 200 + status=unknown.
- `TestNodeBatteryEndpoint_404` — unknown node.
- `TestBatteryThresholds_ConfigOverride` — config getters + defaults.

`cd cmd/server && go test ./...` — green.

## Performance

The endpoint is per-pubkey (called once on analytics page open) and indexed by the `(observer_id, timestamp)` PK on `observer_metrics`. No hot-path impact.

---------

Co-authored-by: bot <bot@corescope>
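The status-semantics table can be sketched directly — a minimal model, with `hasSamples` standing in for "any `battery_mv` rows in the requested window":

```go
package main

import "fmt"

// batteryStatus maps the latest voltage sample to a status per the table.
func batteryStatus(latestMv, lowMv, criticalMv int, hasSamples bool) string {
	switch {
	case !hasSamples:
		return "unknown" // no samples in window
	case latestMv < criticalMv:
		return "critical"
	case latestMv < lowMv:
		return "low"
	default:
		return "ok" // >= lowMv
	}
}

func main() {
	// Default thresholds: lowMv=3300, criticalMv=3000.
	fmt.Println(batteryStatus(3450, 3300, 3000, true)) // ok
	fmt.Println(batteryStatus(3100, 3300, 3000, true)) // low
}
```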
f33801ecb4
feat(repeater): usefulness score — traffic axis (#672) (#1079)
## Summary

Implements the **Traffic axis** of the repeater usefulness score (#672). Does NOT close #672 — the Bridge, Coverage, and Redundancy axes are deferred to follow-up PRs.

Adds `usefulness_score` (0..1) to repeater/room node API responses, representing what fraction of non-advert traffic passes through this repeater as a relay hop.

## Why traffic-axis-first

The issue proposes a 4-axis composite (Bridge, Coverage, Traffic, Redundancy). Bridge/Coverage/Redundancy require betweenness centrality and neighbor-graph infrastructure (#773 Neighbor Graph V2). The Traffic axis can ship independently using existing path-hop data.

## Remaining work for #672

- Bridge axis (betweenness centrality — depends on #773)
- Coverage axis (observer reach comparison)
- Redundancy axis (node-removal simulation — depends on #687)
- Composite score combining all 4 axes

Partial fix for #672.

---------

Co-authored-by: meshcore-bot <bot@meshcore.local>
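The Traffic axis as described reduces to a ratio; a minimal sketch, with the two counters standing in for the real path-hop aggregation:

```go
package main

import "fmt"

// trafficScore returns the fraction of non-advert traffic relayed via a
// node, clamped to 0 when no traffic was observed (avoids a 0/0 NaN).
func trafficScore(relayedViaNode, totalNonAdvert int) float64 {
	if totalNonAdvert == 0 {
		return 0
	}
	return float64(relayedViaNode) / float64(totalNonAdvert)
}

func main() {
	fmt.Println(trafficScore(30, 120)) // 0.25
}
```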
d05e468598
feat(memlimit): GOMEMLIMIT support, derive from packetStore.maxMemoryMB (#836) (#1077)
## Summary

Implements **part 1** of #836 — `GOMEMLIMIT` support so the Go runtime self-throttles GC under cgroup memory pressure instead of getting SIGKILLed. (Parts 2 & 3 — bounded cold-load batching + README ops docs — land in follow-up PRs.)

## Behavior

On startup `cmd/server/main.go` now calls `applyMemoryLimit(maxMemoryMB, envSet)`:

| Condition | Action | Log |
|---|---|---|
| `GOMEMLIMIT` env set | Honor the runtime's parse, do nothing | `[memlimit] using GOMEMLIMIT from environment (...)` |
| env unset, `packetStore.maxMemoryMB > 0` | `debug.SetMemoryLimit(maxMB * 1.5 MiB)` | `[memlimit] derived from packetStore.maxMemoryMB=512 → 768 MiB (1.5x headroom)` |
| env unset, `maxMemoryMB == 0` | No-op | `[memlimit] no soft memory limit set ... recommend setting one to avoid container OOM-kill` |

The 1.5x headroom covers Go's NextGC trigger at ~2× live heap (per the #836 heap profile: 680 MB live → 1.38 GB NextGC).

## Tests (TDD red→green visible in commit history)

- `TestApplyMemoryLimit_FromEnv` — env wins, the function does not override
- `TestApplyMemoryLimit_DerivedFromMaxMemoryMB` — verifies the bytes computation + that `debug.SetMemoryLimit` is actually applied at runtime
- `TestApplyMemoryLimit_None` — no env, no config → reports `"none"`, no side effect

Red commit: `7de3c62` (assertion failures, builds clean)
Green commit: `454516d`

## Config docs

`config.example.json` `packetStore._comment_gomemlimit` documents the env/derived/override behavior.

## Out of scope

- Cold-load transient bounding (item 2 in #836)
- README container-size table (item 3)
- QA §1.1 rewrite

Closes part 1 of #836.

---------

Co-authored-by: corescope-bot <bot@corescope>
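The decision table can be sketched as a pure function that returns the limit in bytes (0 = no-op) instead of calling `debug.SetMemoryLimit`, so the policy is testable without touching the runtime — names here are illustrative:

```go
package main

import "fmt"

// deriveMemLimit picks a soft memory limit per the behavior table:
// an explicit GOMEMLIMIT env always wins; otherwise derive 1.5x the
// configured packet-store budget; otherwise no-op.
func deriveMemLimit(envSet bool, maxMemoryMB int64) int64 {
	if envSet {
		return 0 // GOMEMLIMIT env wins; the runtime already parsed it
	}
	if maxMemoryMB <= 0 {
		return 0 // nothing configured: warn-and-no-op case
	}
	return (maxMemoryMB * 3 / 2) << 20 // 1.5x headroom, MiB → bytes
}

func main() {
	fmt.Println(deriveMemLimit(false, 512) >> 20) // 768 — MiB, matches the log line
}
```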
45f30fcadc
feat(repeater): liveness detection — distinguish actively relaying from advert-only (#662) (#1073)
## Summary

Implements repeater liveness detection per #662 — distinguishes a repeater that is **actively relaying traffic** from one that is **alive but idle** (only sending its own adverts).

## Approach

The backend already maintains a `byPathHop` index keyed by lowercase hop/pubkey for every transmission. Decode-window writes also key it by **resolved pubkey** for relay hops. We just weren't surfacing it.

`GetRepeaterRelayInfo(pubkey, windowHours)`:

- Reads `byPathHop[pubkey]`.
- Skips packets whose `payload_type == 4` (advert) — a self-advert proves liveness, not relaying.
- Returns the most recent `FirstSeen` as `lastRelayed`, plus `relayActive` (within window) and the `windowHours` actually used.

## Three states (per issue)

| State | Indicator | Condition |
|---|---|---|
| 🟢 Relaying | green | `last_relayed` within `relayActiveHours` |
| 🟡 Alive (idle) | yellow | repeater is in the DB but `relay_active=false` (no recent path-hop appearance, or none ever) |
| ⚪ Stale | existing | falls out of the existing `getNodeStatus` logic |

## API

- `GET /api/nodes` — repeater/room rows now include `last_relayed` (omitted if never observed) and `relay_active`.
- `GET /api/nodes/{pubkey}` — same fields plus `relay_window_hours`.

## Config

New optional field under `healthThresholds`:

```json
"healthThresholds": { ..., "relayActiveHours": 24 }
```

Default 24h. Documented in `config.example.json`.

## Frontend

The node detail page gains a **Last Relayed** row for repeaters/rooms with the 🟢/🟡 state badge. A tooltip explains the distinction from "Last Heard".

## TDD

- **Red commit** `4445f91`: `repeater_liveness_test.go` + a stub `GetRepeaterRelayInfo` returning zero. The Active and Stale tests fail on assertion (LastRelayed empty / mismatched). Idle and IgnoresAdverts already match the desired behavior under the stub. Compiles, runs, fails on assertions — not on imports.
- **Green commit** `5fcfb57`: implementation. All four tests pass. Full `cmd/server` suite green (~22s).

## Performance

`O(N)` over `byPathHop[pubkey]` per call. The index is bounded by store eviction; a single repeater has at most a few hundred entries on real data. The `/api/nodes` loop adds one map read + scan per repeater row — negligible against the existing enrichment work.

## Limitations (per issue body)

1. Observer coverage gaps — if no observer hears a repeater's relay, it'll show as idle even when actively relaying. This is inherent to passive observation.
2. Low-traffic networks — a repeater in a quiet area legitimately shows idle. The 🟡 indicator copy makes that explicit ("alive (idle)").
3. Hash collisions are mitigated by the existing `resolveWithContext` path before pubkeys land in `byPathHop`.

Fixes #662

---------

Co-authored-by: clawbot <bot@corescope.local>
83881e6b71
fix(#688): auto-discover hashtag channels from message text (#1071)
## Summary

Auto-discovers previously-unknown hashtag channels by scanning decoded channel message text for `#name` mentions and surfacing them via `GetChannels`.

Workflow (per the issue):

1. A new channel message arrives on a known channel
2. The decoded text is scanned for `#hashtag` mentions
3. Any mention that doesn't match an existing channel is surfaced as a discovered channel (`discovered: true`, `messageCount: 0`)
4. Future traffic on that channel will populate the entry once it has its own packets

## Changes

- `cmd/server/discovered_channels.go` — new file. `extractHashtagsFromText` parses `#name` mentions from free text, deduped, order-preserving. Trailing punctuation is excluded by the character class.
- `cmd/server/store.go` — `GetChannels` now scans CHAN packet text for hashtags after building the primary channel map, and appends any unseen hashtag mentions as discovered entries.
- `cmd/server/discovered_channels_test.go` — new tests covering parser edge cases (single, multi, dedup, punctuation, none, bare `#`) and end-to-end discovery via `GetChannels`.

## TDD

- Red: `34f1817` — stub returns `nil`, both new tests fail on assertion (verified).
- Green: `d27b3ed` — real implementation, full `cmd/server` test suite passes (21.7s).

## Notes

- Discovered channels carry `messageCount: 0` and `lastActivity` set to the most recent mention's `firstSeen`, so they sort naturally alongside real channels.
- Names are matched against existing entries by both `#name` and bare `name`, so a channel that already has decoded traffic isn't double-listed.
- The existing `channelsCache` (15s) covers the new code path; no separate invalidation is needed since the source data (`byPayloadType[5]`) drives both maps.

Fixes #688

---------

Co-authored-by: corescope-bot <bot@corescope.local>
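The parser behavior (deduped, order-preserving, trailing punctuation excluded) can be sketched like this — the character class is an assumption for illustration; the exact class in `extractHashtagsFromText` may differ:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// hashtagRe approximates the described character class: letters, digits,
// '_' and '-', so trailing punctuation like '.' or '!' is excluded.
var hashtagRe = regexp.MustCompile(`#([A-Za-z0-9_-]+)`)

// extractHashtags returns #name mentions, deduped and order-preserving,
// lowercased for matching against existing channel entries.
func extractHashtags(text string) []string {
	seen := map[string]bool{}
	var out []string
	for _, m := range hashtagRe.FindAllStringSubmatch(text, -1) {
		name := strings.ToLower(m[1])
		if !seen[name] {
			seen[name] = true
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(extractHashtags("join #MeshTalk! then #dev-chat. #meshtalk again, # alone"))
	// [meshtalk dev-chat]
}
```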
d144764d38
fix(analytics): multiByteCapability missing under region filter → all rows 'unknown' (#1049)
## Bug

`https://meshcore.meshat.se/#/analytics`:

- Unfiltered → 0 adopter rows show "unknown" (correct).
- Region filter `JKG` → 14 rows show "unknown" (wrong — same nodes, all confirmed when unfiltered).

Multi-byte capability is a property of the NODE, derived from its own adverts (the full pubkey is in the advert payload, no prefix collision risk). The observing region should only control which nodes appear in the analytics list — it must not change a node's cap evidence.

## Root cause

`PacketStore.GetAnalyticsHashSizes(region)` only attached `result["multiByteCapability"]` when `region == ""`. Under any region filter the field was absent. The frontend (`public/analytics.js:1011`) does `data.multiByteCapability || []`, so every adopter row falls through the merge with no cap status and renders as "unknown".

## Fix

Always populate `multiByteCapability`. When a region filter is active, source the global adopter hash-size set from a no-region compute pass so out-of-region observers' adverts still count as evidence.

## TDD

- Red commit (`0968137`): adds `cmd/server/multibyte_region_filter_test.go`, which asserts that `GetAnalyticsHashSizes("JKG")` returns a populated `multiByteCapability` with Node A as `confirmed`. Fails on the assertion (field missing) before the fix.
- Green commit (`6616730`): always compute capability against the global advert dataset.

## Files changed

- `cmd/server/store.go` — `GetAnalyticsHashSizes`: drop the `region == ""` gate, always populate `multiByteCapability`.
- `cmd/server/multibyte_region_filter_test.go` — new red→green test.

## Verification

```
go test ./... -count=1   # all server tests pass (21s)
```

---------

Co-authored-by: clawbot <bot@corescope.local>
227f375b4a
test(ingestor): regression test for observer metadata persistence (#1044) (#1047)
Adds an end-to-end test proving that `extractObserverMeta` + `UpsertObserver` correctly store model, firmware, battery_mv, noise_floor, and uptime_secs from a real MQTT status payload. The test passes — it confirms the code path works.

#1044 was caused by upstream observers not including metadata fields in their status payloads (older `meshcoretomqtt` client versions), not by a code bug.

Closes #1044

Co-authored-by: meshcore-bot <bot@meshcore.local>
c9301fee9c
fix(ingestor): extract per-hop SNR for TRACE packets at ingest time (#1028)
## Problem

PR #1007 added per-hop SNR extraction (`snrValues`) for TRACE packets to `cmd/server/decoder.go`. That code path is only hit by the on-demand re-decode endpoint (packet detail). The actual ingest pipeline runs `cmd/ingestor/decoder.go`, decodes the packet once, and persists `decoded_json` into SQLite. The server then serves `decoded_json` as-is for list/feed queries.

Net effect: `snrValues` never appears in any production response, because the ingestor's decoder was never updated. Confirmed empirically: `strings /app/corescope-ingestor | grep snrVal` returns nothing.

## Fix

Port the SNR extraction logic from `cmd/server/decoder.go` (lines 410–422) into `cmd/ingestor/decoder.go`. For TRACE packets, the header path bytes are int8 SNR values in quarter-dB encoding; extract them into `payload.SNRValues` **before** `path.Hops` is overwritten with payload-derived hop IDs. Also adds the matching `SNRValues []float64` field to the ingestor's `Payload` struct so it serializes into `decoded_json`.

## TDD

- **Red commit** (`6ae4c07`): adds `TestDecodeTraceExtractsSNRValues` + the `SNRValues` field stub. Compiles, fails on assertion (`len(SNRValues)=0, want 2`).
- **Green commit** (`4a4f3f3`): adds the extraction loop. Test passes.

Test packet: `26022FF8116A23A80000000001C0DE1000DEDE`

- header `0x26` = TRACE + DIRECT
- pathByte `0x02` = hash_size 1, hash_count 2
- header path `2F F8` → SNR `[int8(0x2F)/4, int8(0xF8)/4]` = `[11.75, -2.0]`

## Files

- `cmd/ingestor/decoder.go` — `+16` (field + extraction)
- `cmd/ingestor/decoder_test.go` — `+29` (red test)

## Out of scope

- `cmd/server/decoder.go` is already correct (PR #1007). Untouched.
- Backfill of historical `decoded_json` rows. New TRACE packets get SNR; old rows do not until re-decoded.

---------

Co-authored-by: corescope-bot <bot@corescope.local>
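The quarter-dB decoding is small enough to sketch in full, using the header path bytes from the test packet above:

```go
package main

import "fmt"

// snrFromPathBytes decodes TRACE per-hop SNR: each header path byte is
// an int8 in quarter-dB units (so 0xF8 = -8 quarter-dB = -2.0 dB).
func snrFromPathBytes(path []byte) []float64 {
	out := make([]float64, len(path))
	for i, b := range path {
		out[i] = float64(int8(b)) / 4
	}
	return out
}

func main() {
	// Header path bytes from the test packet 26022FF8…: 2F F8.
	fmt.Println(snrFromPathBytes([]byte{0x2F, 0xF8})) // [11.75 -2]
}
```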
9f55ef802b
fix(#804): attribute analytics by repeater home region, not observer (#1025)
Fixes #804.

## Problem

Analytics filtered region purely by **observer** region: a multi-byte repeater whose home is PDX would leak into SJC results whenever its flood adverts were relayed past an SJC observer. Per-node groupings (`multiByteNodes`, `distributionByRepeaters`) inherited the same bug.

## Fix

Two new helpers in `cmd/server/store.go`:

- `iataMatchesRegion(iata, regionParam)` — case-insensitive IATA→region match using the existing `normalizeRegionCodes` parser.
- `computeNodeHomeRegions()` — derives each node's HOME IATA from its zero-hop DIRECT adverts. The path byte for those packets is set locally on the originating radio and the packet has not been relayed, so the observer that hears it must be in direct RF range. Plurality vote when zero-hop adverts span multiple regions.

`computeAnalyticsHashSizes` now applies these in two ways:

1. **The observer-region filter is relaxed for ADVERT packets** when the originator's home region matches the requested region. A flood advert from a PDX repeater that's only heard by an SJC observer still attributes to PDX.
2. **Per-node grouping** (`multiByteNodes`, `distributionByRepeaters`) excludes nodes whose HOME region disagrees with the requested region. Falls back to the observer-region filter when home is unknown.

Adds `attributionMethod` to the response (`"observer"` or `"repeater"`) so operators can tell which method was applied.

## Backwards compatibility

- No region filter requested → behavior unchanged (`attributionMethod` is `"observer"`).
- Region filter requested but no zero-hop direct adverts seen for a node → falls back to the prior observer-region check for that node.
- Operators without IATA-tagged observers see no change.

## TDD

- **Red commit** (`c35d349`): adds `TestIssue804_AnalyticsAttributesByRepeaterRegion` with three subtests (PDX leak into SJC, attributionMethod field present, SJC leak into PDX). Compiles, runs, fails on assertions.
- **Green commit** (`11b157f`): the implementation. All subtests pass; the full `cmd/server` package is green.

## Files changed

- `cmd/server/store.go` — helpers + analytics filter logic (+236/-51)
- `cmd/server/issue804_repeater_region_test.go` — new test (+147)

---------

Co-authored-by: CoreScope Bot <bot@corescope.local>
Co-authored-by: openclaw-bot <bot@openclaw.local>
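The plurality vote inside `computeNodeHomeRegions` can be sketched as follows — the first-seen-order tie-break here is an assumption for illustration; the real tie-break may differ:

```go
package main

import "fmt"

// homeRegion votes over the observer IATAs of a node's zero-hop DIRECT
// adverts. found == false means the node has no zero-hop adverts, so the
// caller falls back to the observer-region filter.
func homeRegion(zeroHopObserverIATAs []string) (string, bool) {
	counts := map[string]int{}
	best, bestN := "", 0
	for _, iata := range zeroHopObserverIATAs {
		counts[iata]++
		if counts[iata] > bestN {
			best, bestN = iata, counts[iata]
		}
	}
	return best, bestN > 0
}

func main() {
	home, ok := homeRegion([]string{"PDX", "SJC", "PDX"})
	fmt.Println(home, ok) // PDX true
}
```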
1f4969c1a6
fix(#770): treat region 'All' as no-filter + document region behavior (#1026)
## Summary

Fixes #770 — selecting "All" in the region filter dropdown produced an empty channel list.

## Root cause

`normalizeRegionCodes` (cmd/server/db.go) treated any non-empty input as a literal IATA code. The frontend region filter labels its catch-all option **"All"**; while `region-filter.js` normally sends an empty string when "All" is selected, any code path that ends up sending `?region=All` (deep-link URLs, manual queries, future callers) caused the function to return `["ALL"]`. Downstream queries then filtered observers for `iata = 'ALL'`, which never matches anything → empty response.

## Fix

`normalizeRegionCodes` now treats `All` / `ALL` / `all` (case-insensitive, with optional whitespace, mixed in CSV) as equivalent to an empty value, returning `nil` to signal "no filter". Real IATA codes (`SJC`, `PDX`, `sjc,PDX` → `[SJC PDX]`) still pass through unchanged.

This is a defensive server-side fix: a single chokepoint that all region-aware endpoints already flow through (channels, packets, analytics, encrypted channels, observer ID resolution).

## Documentation

Expanded `_comment_regions` in `config.example.json` to explain:

- How IATA codes are resolved (payload > topic > source config — set in #1012)
- What the `regions` map controls (display labels) vs runtime-discovered codes
- That observers without an IATA tag only appear under "All Regions"
- That the `All` sentinel is server-side safe

## TDD

- **Red commit** (`4f65bf4`): `cmd/server/region_filter_test.go` — `TestNormalizeRegionCodes_AllIsNoFilter` asserts `All` / `ALL` / `all` / `""` / `"All,"` all collapse to `nil`. Compiles, runs, fails on assertion (`got [ALL], want nil`). The companion test `TestNormalizeRegionCodes_RealCodesPreserved` locks in that `sjc,PDX` still returns `[SJC PDX]`.
- **Green commit** (`c9fb965`): two-line change in `normalizeRegionCodes` + docs update.

## Verification

```
$ go test -run TestNormalizeRegionCodes -count=1 ./cmd/server
ok      github.com/corescope/server     0.023s

$ go test -count=1 ./cmd/server
ok      github.com/corescope/server     21.454s
```

Full suite green; no existing region tests regressed.

Fixes #770

---------

Co-authored-by: Kpa-clawbot <bot@corescope>
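The fixed normalization can be sketched like this — a minimal model; dropping `All` tokens from a mixed CSV (rather than, say, wiping the whole filter) is an assumption for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRegions collapses the "All" sentinel (any case, optional
// whitespace) to nil = "no filter"; real IATA codes are uppercased and
// preserved in order.
func normalizeRegions(raw string) []string {
	var out []string
	for _, part := range strings.Split(raw, ",") {
		code := strings.ToUpper(strings.TrimSpace(part))
		if code == "" || code == "ALL" {
			continue // sentinel / empty token: contributes no filter
		}
		out = append(out, code)
	}
	return out // nil when every token was empty or "All"
}

func main() {
	fmt.Println(normalizeRegions("All") == nil) // true — no filter
	fmt.Println(normalizeRegions("sjc,PDX"))    // [SJC PDX]
}
```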
b06adf9f2a
feat: /api/backup — one-click SQLite database export (#474) (#1022)
## Summary

Implements `GET /api/backup` — one-click SQLite database export per #474. Operators can now grab a complete, consistent snapshot of the analyzer DB with a single authenticated request — no SSH, no scripts, no DB tooling.

## Endpoint

```
GET /api/backup
X-API-Key: <key>        # required

→ 200 OK
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="corescope-backup-<unix>.db"
<body: complete SQLite database file>
```

## Approach

Uses SQLite's `VACUUM INTO 'path'` to produce an atomic, defragmented copy of the database into a fresh file:

- **Consistent**: VACUUM INTO runs at read isolation — the snapshot reflects a single point in time even while the ingestor is writing to the WAL.
- **Non-blocking**: writers continue uninterrupted; we never hold a write lock.
- **Works on read-only connections**: verified manually against a WAL-mode source DB (a `mode=ro` connection successfully produces a snapshot).
- **No corruption risk**: even if the live on-disk DB has issues, VACUUM INTO surfaces what the server can read rather than copying broken pages byte-for-byte.

The snapshot is staged in `os.MkdirTemp(...)` and removed after the response body is fully streamed (deferred cleanup). The requesting client IP is logged for audit.

The issue suggested an alternative in-memory rebuild path; `VACUUM INTO` is simpler, faster, and produces a strictly more accurate copy of what the server actually sees, so going with it.

## Security

- Mounted under the `requireAPIKey` middleware — the same gate as other admin endpoints (`/api/admin/prune`, `/api/perf/reset`).
- Returns 401 without a valid `X-API-Key` header.
- Returns 403 if no API key is configured server-side.
- `X-Content-Type-Options: nosniff` is set on the response.

## TDD

- **Red** (`99548f2`): `cmd/server/backup_test.go` adds `TestBackupRequiresAPIKey` + `TestBackupReturnsValidSQLiteSnapshot`. A stub handler returns 200 with no body so the tests fail on assertions (Content-Type / Content-Disposition / SQLite magic header), not on import or build errors.
- **Green** (`837b2fe`): the real implementation lands; both tests pass; the full `go test ./...` suite stays green.

## Files

- `cmd/server/backup.go` — handler implementation
- `cmd/server/backup_test.go` — red-then-green tests
- `cmd/server/routes.go` — route registration under `requireAPIKey`
- `cmd/server/openapi.go` — OpenAPI metadata so `/api/openapi` advertises the endpoint

## Out of scope (follow-ups)

- Rate limiting (the issue suggested 1 req/min). Not added here — an admin-key-gated endpoint with a fast snapshot path is acceptable for v1; happy to add a token-bucket limiter in a follow-up if operators report hammering.
- UI button to trigger the download (frontend work — separate PR).

Fixes #474

---------

Co-authored-by: corescope-bot <bot@corescope.local>
||
|
|
51b9fed15e |
feat(roles): /#/roles page + /api/analytics/roles endpoint (Fixes #818) (#1023)
## Summary Implements `/#/roles` per QA #809 §5.4 / issue #818. The page previously showed "Page not yet implemented." ### Backend - New `GET /api/analytics/roles` returns `{ totalNodes, roles: [{ role, nodeCount, withSkew, meanAbsSkewSec, medianAbsSkewSec, okCount, warningCount, criticalCount, absurdCount, noClockCount }] }`. - Pure `computeRoleAnalytics(nodesByPubkey, skewByPubkey)` does the bucketing/aggregation — no store/lock dependency, fully unit-testable. - Roles are normalised (lowercased + trimmed; empty bucketed as `unknown`). ### Frontend - New `public/roles-page.js` renders a distribution table: count, share, distribution bar, w/ skew, median |skew|, mean |skew|, severity breakdown (OK / Warning / Critical / Absurd / No-clock). - Registered as the `roles` page in the SPA router and linked from the main nav. - Auto-refreshes every 60 s, with a manual refresh button. ### Tests (TDD) - **Red commit** (`9726d5b`): two assertion-failing tests against a stub `computeRoleAnalytics` that returns an empty result. Compiles, runs, fails on `TotalNodes = 0, want 5` and `len(Roles) = 0, want 1`. - **Green commit** (`7efb76a`): full implementation, route wiring, frontend page + nav, plus E2E test in `test-e2e-playwright.js` covering both the empty-state contract (no "Page not yet implemented" placeholder) and the populated-table case (header columns, body rows, API response shape). ### Verification - `go test ./cmd/server/...` green. - Local server with the e2e fixture: `GET /api/analytics/roles` returns `{"totalNodes":200,"roles":[{"role":"repeater","nodeCount":168,...},{"role":"room","nodeCount":23,...},{"role":"companion","nodeCount":9,...}]}`. Fixes #818 --------- Co-authored-by: corescope-bot <bot@corescope> |
||
|
|
a56ee5c4fe |
feat(analytics): selectable timeframes via ?window/?from/?to (#842) (#1018)
## Summary Selectable analytics timeframes (#842). Adds backend support for `?window=1h|24h|7d|30d` and `?from=&to=` on the three main analytics endpoints (`/api/analytics/rf`, `/api/analytics/topology`, `/api/analytics/channels`), and a time-window picker in the Analytics page UI that drives them. Default behavior with no query params is unchanged. ## TDD trail - Red: `bbab04d` — adds `TimeWindow` + `ParseTimeWindow` stub and tests; tests fail on assertions because the stub returns the zero window. - Green: `75d27f9` — implements `ParseTimeWindow`, threads `TimeWindow` through `compute*` loops + caches, wires HTTP handlers, adds frontend picker + E2E. ## Backend changes - `cmd/server/time_window.go` — full `ParseTimeWindow` (`?window=` aliases + `?from=/&to=` RFC3339 absolute range; invalid input → zero window for backwards compatibility). - `cmd/server/store.go` — new `GetAnalytics{RF,Topology,Channels}WithWindow` wrappers; `compute*` loops skip transmissions whose `FirstSeen` (or per-obs `Timestamp` for the region+observer slice) falls outside the window. Cache key composes `region|window` so different windows do not poison each other. - `cmd/server/routes.go` — handlers call `ParseTimeWindow(r)` and dispatch to the `*WithWindow` methods. ## Frontend changes - `public/analytics.js` — new `<select id="analyticsTimeWindow">` rendered under the region filter (All / 1h / 24h / 7d / 30d). Selecting an option triggers `loadAnalytics()` which appends `&window=…` to every analytics fetch. ## Tests - `cmd/server/time_window_test.go` — covers all aliases, absolute range, no-params backwards compatibility, `Includes()` bounds, and `CacheKey()` distinctness. - `cmd/server/topology_dedup_test.go`, `cmd/server/channel_analytics_test.go` — updated callers to pass `TimeWindow{}`. 
## E2E (rule 18) `test-e2e-playwright.js:592-611` — opens `/#/analytics`, asserts the picker is rendered with a `24h` option, then asserts that selecting `24h` triggers a network request to `/api/analytics/rf?…window=24h`. ## Backwards compatibility No params → zero `TimeWindow` → original code paths (no filter, region-only cache key). Verified by `TestParseTimeWindow_NoParams_BackwardsCompatible` and by the existing analytics tests still passing unchanged on `_wt-fix-842`. Fixes #842 --------- Co-authored-by: you <you@example.com> Co-authored-by: corescope-bot <bot@corescope> |
||
|
|
df69a17718 |
feat(#772): short pubkey-prefix URLs for mesh sharing (#1016)
## Summary Fixes #772 — adds a short-URL form for node detail pages so operators can paste node links into a mesh chat without bringing along a 64-hex-char public key. ## Approach **Pubkey-prefix resolution** (no allocator, no lookup table). - The SPA hash route `#/nodes/<key>` already accepts whatever pubkey-shaped string the user pastes; the front end forwards it to `GET /api/nodes/<key>`. - When that lookup misses **and** the path is 8..63 hex chars, the backend now calls `DB.GetNodeByPrefix` and: - returns the matching node when exactly one node has that prefix, - returns **409 Conflict** when multiple nodes share the prefix (with a "use a longer prefix" hint), - falls through to the existing 404 otherwise. - 8 hex chars = 32 bits of entropy, which is enough for fleets in the low thousands. Operators can extend to 10–12 chars if collisions become common. - The full-screen node detail card gets a new **📡 Copy short URL** button that copies `…/#/nodes/<first 8 hex chars>`. ### Why not an opaque ID table (`/s/<id>`)? Considered and rejected: - Needs persistence + an allocator + cleanup story. - IDs aren't self-describing — operators can't sanity-check them. - IDs don't survive a DB rebuild. - 32 bits of pubkey already buys us collision resistance with zero moving parts. If the directory grows past the point where 8-char prefixes routinely collide, we can extend the minimum length without changing the URL shape. ## Changes - `cmd/server/db.go` — new `GetNodeByPrefix(prefix)` returning `(node, ambiguous, error)`. Validates hex; rejects <8 chars; `LIMIT 2` to detect collisions cheaply. - `cmd/server/routes.go` — `handleNodeDetail` falls back to prefix resolution; canonicalizes pubkey downstream; emits 409 on ambiguity; honors blacklist on the resolved pubkey. - `public/nodes.js` — adds **📡 Copy short URL** button + handler on the full-screen node detail card. - `cmd/server/short_url_test.go` — Go tests (red-then-green). 
- `test-e2e-playwright.js` — E2E: navigates via prefix-only URL and asserts the new button surfaces. ## TDD evidence - Red commit: `2dea97a` — tests added with a stub `GetNodeByPrefix` returning `(nil, false, nil)`. All four assertions failed (assertion failures, not build errors): expected node got nil; expected ambiguous=true got false; route 404 vs expected 200/409. - Green commit: `9b8f146` — implementation lands; `go test ./...` passes locally in `cmd/server`. ## Compatibility - Existing 64-char pubkey URLs are untouched (exact lookup runs first). - Blacklist is enforced both on the raw input and on the resolved pubkey. - No new config knobs. ## What I did **not** touch - `cmd/server/db_test.go`, other route tests — unchanged. - Packet-detail short URLs (issue scopes nodes; revisit in a follow-up if asked). Fixes #772 --------- Co-authored-by: clawbot <bot@corescope.local> |
||
|
|
5e01de0d52 |
fix: make path_json backfill async to unblock MQTT startup (#1013)
## Summary **P0 fix**: The `path_json` backfill migration (PR #983) ran synchronously in `applySchema`, blocking the ingestor main goroutine. On staging (~502K observations), MQTT never connected — no new packets ingested for 15+ hours. ## Fix Extract the backfill into `BackfillPathJSONAsync()` — a method on `*Store` that launches the work in a background goroutine. Called from `main.go` before MQTT connect, it runs concurrently without blocking subscription. **Pattern**: identical to `backfillResolvedPathsAsync` in the server (same lesson learned). ## Safety - Idempotent: checks `_migrations` table, skips if already recorded - Only touches `path_json IS NULL` rows — no conflict with live ingest (new observations get `path_json` at write time) - Panic-recovered goroutine with start/completion logging - Batched (1000 rows per iteration) to avoid memory pressure ## TDD - **Red commit**: `c6e1375` — test asserts `BackfillPathJSONAsync` method exists + OpenStore doesn't block - **Green commit**: `015871f` — implements async method, all tests pass ## Files changed - `cmd/ingestor/db.go` — removed sync backfill from `applySchema`, added `BackfillPathJSONAsync()` - `cmd/ingestor/main.go` — call `store.BackfillPathJSONAsync()` after store creation - `cmd/ingestor/db_test.go` — new async tests + updated existing test to use async API --------- Co-authored-by: you <you@example.com> |
||
|
|
b0e4d2fa18 |
feat: add optional MQTT region field (#788) (#1012)
## Summary
Add optional `region` field to MQTT source config and JSON payload,
enabling publishers to explicitly provide region data without relying
solely on topic path structure.
## Changes
- **`MQTTSource.Region`** — new optional config field. When set, acts as
default region for all messages from that source (useful when a broker
serves a single region).
- **`MQTTPacketMessage.Region`** — new optional JSON payload field.
Publishers can include `"region": "PDX"` in their MQTT messages.
- **`PacketData.Region`** — carries the resolved region through to
storage.
- **Priority resolution**: payload `region` > topic-derived region >
source config `region`
- Observer IATA is updated with the effective region on every packet.
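The priority-resolution rule above reduces to a small fallback chain. `resolveRegion` is illustrative; the real code threads the result through `PacketData.Region`.

```go
package main

// Sketch of the resolution order: payload region > topic-derived region
// > per-source config default.

import "fmt"

func resolveRegion(payloadRegion, topicRegion, sourceRegion string) string {
	if payloadRegion != "" {
		return payloadRegion
	}
	if topicRegion != "" {
		return topicRegion
	}
	return sourceRegion // may be "" when the source sets no default
}

func main() {
	fmt.Println(resolveRegion("PDX", "SEA", "LAX")) // payload wins
	fmt.Println(resolveRegion("", "SEA", "LAX"))    // topic next
	fmt.Println(resolveRegion("", "", "LAX"))       // source default last
}
```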
## Config example
```json
{
"mqttSources": [
{
"name": "cascadia",
"broker": "tcp://cascadia-broker:1883",
"topics": ["meshcore/#"],
"region": "PDX"
}
]
}
```
## Payload example
```json
{"raw": "0a1b2c...", "SNR": 5.2, "region": "PDX"}
```
## TDD
- Red commit: `980304c` (tests fail at compile — fields don't exist)
- Green commit: `4caf88b` (implementation, all tests pass)
## Unblocks
- #804, #770, #730 (all depend on region being available on
observations)
Fixes #788
---------
Co-authored-by: you <you@example.com>
|
||
|
|
c186129d47 |
feat: parse and display per-hop SNR values for TRACE packets (#1007)
## Summary Parse and display per-hop SNR values from TRACE packets in the Packet Byte Breakdown panel. ## Changes ### Backend (`cmd/server/decoder.go`) - Added `SNRValues []float64` field to Payload struct (`json:"snrValues,omitempty"`) - In the TRACE-specific block, extract SNR from header path bytes before they're overwritten with route hops - Each header path byte is `int8(SNR_dB * 4.0)` per firmware — decode by dividing by 4.0 ### Frontend (`public/packets.js`) - Added "SNR Path" section in `buildFieldTable()` showing per-hop SNR values in dB when packet type is TRACE - Added TRACE-specific payload rendering (trace tag, auth code, flags with hash_size, route hops) ## TDD - Red commit: `4dba4e8` — test asserts `Payload.SNRValues` field (compile fails, field doesn't exist) - Green commit: `5a496bd` — implementation passes all tests ## Testing - `go test ./...` passes (all existing + 2 new TRACE SNR tests) - No frontend test changes needed (no existing TRACE UI tests; rendering is additive) Fixes #979 --------- Co-authored-by: you <you@example.com> |
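The per-hop encoding quoted above (`int8(SNR_dB * 4.0)` per firmware) inverts to a one-line decode. The field name `SNRValues` matches the PR; the helper itself is illustrative.

```go
package main

// Decode TRACE header path bytes into per-hop SNR values in dB.

import "fmt"

func decodeSNRPath(pathBytes []byte) []float64 {
	out := make([]float64, 0, len(pathBytes))
	for _, b := range pathBytes {
		// Reinterpret as signed before scaling: 0xF3 is int8 -13, i.e. -3.25 dB.
		out = append(out, float64(int8(b))/4.0)
	}
	return out
}

func main() {
	// 0x15 = 21 → 5.25 dB; 0xF3 = -13 → -3.25 dB
	fmt.Println(decodeSNRPath([]byte{0x15, 0xF3}))
}
```

The signed reinterpretation is the easy-to-miss step: treating the byte as unsigned would turn every negative SNR into a nonsense positive value.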
||
|
|
153308134e |
feat: add global observer IATA whitelist config (#1001)
## Summary
Adds a global `observerIATAWhitelist` config field that restricts which
observer IATA regions are processed by the ingestor.
## Problem
Operators running regional instances (e.g., Sweden) want to ensure only
observers physically in their region contribute data. The existing
per-source `iataFilter` only filters packet messages but still allows
status messages through, meaning observers from other regions appear in
the database.
## Solution
New top-level config field `observerIATAWhitelist`:
- When non-empty, **all** messages (status + packets) from observers
outside the whitelist are silently dropped
- Case-insensitive matching
- Empty list = all regions allowed (fully backwards compatible)
- Lazy O(1) lookup via cached uppercase set (same pattern as
`observerBlacklist`)
### Config example
```json
{
"observerIATAWhitelist": ["ARN", "GOT"]
}
```
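The lazy O(1) lookup described above folds the whitelist slice into an uppercase set once, after which every message check is a single map hit. Method names mirror the PR; the exact struct layout is an assumption.

```go
package main

// Sketch of the cached-uppercase-set pattern (same shape as
// observerBlacklist, per the PR).

import (
	"fmt"
	"strings"
	"sync"
)

type Config struct {
	ObserverIATAWhitelist []string

	once sync.Once
	set  map[string]struct{}
}

func (c *Config) IsObserverIATAAllowed(iata string) bool {
	c.once.Do(func() {
		c.set = make(map[string]struct{}, len(c.ObserverIATAWhitelist))
		for _, code := range c.ObserverIATAWhitelist {
			c.set[strings.ToUpper(strings.TrimSpace(code))] = struct{}{}
		}
	})
	if len(c.set) == 0 {
		return true // empty whitelist = all regions allowed
	}
	_, ok := c.set[strings.ToUpper(iata)]
	return ok
}

func main() {
	cfg := &Config{ObserverIATAWhitelist: []string{"ARN", "GOT"}}
	fmt.Println(cfg.IsObserverIATAAllowed("arn")) // case-insensitive → true
	fmt.Println(cfg.IsObserverIATAAllowed("PDX")) // dropped → false
	open := &Config{}
	fmt.Println(open.IsObserverIATAAllowed("PDX")) // empty list allows all
}
```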
## TDD
- **Red commit:** `f19c2b2` — tests for `ObserverIATAWhitelist` field
and `IsObserverIATAAllowed` method (build fails)
- **Green commit:** `782f516` — implementation + integration test
## Files changed
- `cmd/ingestor/config.go` — new field, new method
`IsObserverIATAAllowed`
- `cmd/ingestor/main.go` — whitelist check in `handleMessage` before
status processing
- `cmd/ingestor/config_test.go` — unit tests for config parsing and
matching
- `cmd/ingestor/main_test.go` — integration test for handleMessage
filtering
Fixes #914
---------
Co-authored-by: you <you@example.com>
|
||
|
|
e86b5a3a0c |
feat: show multi-byte hash support indicator on map markers (#1002)
## Summary Show 2-byte hash support indicator on map markers. Fixes #903. ## What changed ### Backend (`cmd/server/store.go`, `cmd/server/routes.go`) - **`EnrichNodeWithMultiByte()`** — new enrichment function that adds `multi_byte_status` (confirmed/suspected/unknown), `multi_byte_evidence` (advert/path), and `multi_byte_max_hash_size` fields to node API responses - **`GetMultiByteCapMap()`** — cached (15s TTL) map of pubkey → `MultiByteCapEntry`, reusing the existing `computeMultiByteCapability()` logic that combines advert-based and path-hop-based evidence - Wired into both `/api/nodes` (list) and `/api/nodes/{pubkey}` (detail) endpoints ### Frontend (`public/map.js`) - Added **"Multi-byte support"** checkbox in the map Display controls section - When toggled on, repeater markers change color: - 🟢 Green (`#27ae60`) — **confirmed** (advertised with hash_size ≥ 2) - 🟡 Yellow (`#f39c12`) — **suspected** (seen as hop in multi-byte path) - 🔴 Red (`#e74c3c`) — **unknown** (no multi-byte evidence) - Popup tooltip shows multi-byte status and evidence for repeaters - State persisted in localStorage (`meshcore-map-multibyte-overlay`) ## TDD - Red commit: `2f49cbc` — failing test for `EnrichNodeWithMultiByte` - Green commit: `4957782` — implementation + passing tests ## Performance - `GetMultiByteCapMap()` uses a 15s TTL cache (same pattern as `GetNodeHashSizeInfo`) - Enrichment is O(n) over nodes, no per-item API calls - Frontend color override is computed inline during existing marker render loop — no additional DOM rebuilds --------- Co-authored-by: you <you@example.com> |
||
|
|
2e3a94b86d |
chore(db): one-time cleanup of legacy packets with empty hash or null timestamp (closes #994) (#997)
## Summary One-time startup migration that deletes legacy packets (transmissions + observations) with empty hash or empty `first_seen` timestamp. This is the write-side cleanup following #993's read-side filter. ### Migration: `cleanup_legacy_null_hash_ts` - Checks `_migrations` table for marker - If not present: deletes observations referencing bad transmissions, then deletes the transmissions themselves - Logs count of deleted rows - Records marker for idempotency ### TDD - **Red commit:** `b1a24a1` — test asserts migration deletes bad rows (fails without implementation) - **Green commit:** `2b94522` — implements the migration, all tests pass Fixes #994 --------- Co-authored-by: you <you@example.com> |
||
|
|
564d93d6aa |
fix: dedup topology analytics by resolved pubkey (#998)
## Fix topology analytics double-counting repeaters/pairs (#909) ### Problem `computeAnalyticsTopology()` aggregates by raw hop hex string. When firmware emits variable-length path hashes (1-3 bytes per hop), the same physical node appears multiple times with different prefix lengths (e.g. `"07"`, `"0735bc"`, `"0735bc6d"` all referring to the same node). This inflates repeater counts and creates duplicate pair entries. ### Solution Added a confidence-gated dedup pass after frequency counting: 1. **For each hop prefix**, check if it resolves unambiguously (exactly 1 candidate in the prefix map) 2. **Unambiguous prefixes** → group by resolved pubkey, sum counts, keep longest prefix as display identifier 3. **Ambiguous prefixes** (multiple candidates for that prefix) → left as separate entries (conservative) 4. **Same treatment for pairs**: canonicalize by sorted pubkey pair ### Addressing @efiten's collision concern At scale (~2000+ repeaters), 1-byte prefixes (256 buckets) WILL collide. This fix explicitly checks the prefix map candidate count. Ambiguous prefixes (where `len(pm.m[hop]) > 1`) are never merged — they remain as separate entries. Only prefixes with a single matching node are eligible for dedup. ### TDD - **Red commit**: `4dbf9c0` — added 3 failing tests - **Green commit**: `d6cae9a` — implemented dedup, all tests pass ### Tests added - `TestTopologyDedup_RepeatersMergeByPubkey` — verifies entries with different prefix lengths for same node merge to single entry with summed count - `TestTopologyDedup_AmbiguousPrefixNotMerged` — verifies colliding short prefix stays separate from unambiguous longer prefix - `TestTopologyDedup_PairsMergeByPubkey` — verifies pair entries merge by resolved pubkey pair Fixes #909 --------- Co-authored-by: you <you@example.com> |
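The confidence-gated dedup pass above can be sketched over plain maps: a hop prefix merges into a pubkey bucket only when the prefix map holds exactly one candidate; ambiguous prefixes stay as-is. `dedupRepeaters` and its inputs are illustrative, not the real `computeAnalyticsTopology` types.

```go
package main

// Sketch: merge hop-prefix counts by resolved pubkey, gated on
// unambiguous resolution.

import "fmt"

func dedupRepeaters(counts map[string]int, prefixMap map[string][]string) map[string]int {
	merged := map[string]int{}   // pubkey (or unresolved prefix) → count
	display := map[string]string{} // pubkey → longest prefix seen
	for prefix, n := range counts {
		candidates := prefixMap[prefix]
		if len(candidates) != 1 {
			merged[prefix] += n // ambiguous or unknown: left separate (conservative)
			continue
		}
		pk := candidates[0]
		merged[pk] += n
		if len(prefix) > len(display[pk]) {
			display[pk] = prefix // longest prefix becomes the entry's label
		}
	}
	_ = display // the real code reports this as the display identifier
	return merged
}

func main() {
	node := "0735bc6d00000000"
	counts := map[string]int{"07": 3, "0735bc": 2, "0735bc6d": 5}
	prefixMap := map[string][]string{
		"07":       {node, "07ffff0000000000"}, // collides → never merged
		"0735bc":   {node},
		"0735bc6d": {node},
	}
	out := dedupRepeaters(counts, prefixMap)
	fmt.Println(out[node], out["07"]) // 7 3
}
```

Note how the colliding 1-byte prefix `"07"` keeps its own bucket, which is exactly the behavior the review comment about ~2000-repeater fleets asked for.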
||
|
|
b7c280c20a |
fix: drop/filter packets with null hash or timestamp (closes #871) (#993)
## Summary Closes #871 The `/api/packets` endpoint could return packets with `null` hash or timestamp fields. This was caused by legacy data in SQLite (rows with empty `hash` or `NULL`/empty `first_seen`) predating the ingestor's existing validation guard (`if hash == "" { return false, nil }` at `cmd/ingestor/db.go:610`). ## Root Cause `cmd/server/store.go` `filterPackets()` had no data-integrity guard. Legacy rows with empty `hash` or `first_seen` were loaded into the in-memory store and returned verbatim. The `strOrNil("")` helper then serialized these as JSON `null`. ## Fix Added a data-integrity predicate at the top of `filterPackets`'s scan callback (`cmd/server/store.go:2278`): ```go if tx.Hash == "" || tx.FirstSeen == "" { return false } ``` This filters bad legacy rows at query time. The write path (ingestor) already rejects empty hashes, so no new bad data enters. ## TDD Evidence - **Red commit:** `15774c3` — test `TestIssue871_NoNullHashOrTimestamp` asserts no packet in API response has null/empty hash or timestamp - **Green commit:** `281fd6f` — adds the filter guard, test passes ## Testing - `go test ./...` in `cmd/server` passes (full suite) - Client-side defensive filter from PR #868 remains as defense-in-depth --------- Co-authored-by: you <you@example.com> |
||
|
|
d43c95a4bb |
fix(ingestor): warn when TRACE payload decode fails but observation stored (closes #889) (#992)
## Summary Closes #889. When a TRACE packet's payload is too short to decode (< 9 bytes), `decodeTrace` returns an error in `Payload.Error` but the observation is still stored with empty `Path.Hops`. Previously this was completely silent — no log, no anomaly flag, no indication the row is degraded. This fix populates `DecodedPacket.Anomaly` with the decode error message (e.g., `"TRACE payload decode failed: too short"`) so operators and downstream consumers can identify degraded observations. ## TDD Commit History 1. **Red commit** `04e0165` — failing test asserting `Anomaly` is set when TRACE payload decode fails 2. **Green commit** `d3e72d1` — 3-line fix in `decoder.go` line 601-603: check `payload.Error != ""` for TRACE packets and set anomaly ## What Changed `cmd/ingestor/decoder.go` (lines 601-603): Added a check before the existing TRACE path-parsing block. If `payload.Error` is non-empty for a TRACE packet, `anomaly` is set to `"TRACE payload decode failed: <error>"`. `cmd/ingestor/decoder_test.go`: Added `TestDecodeTracePayloadFailSetsAnomaly` — constructs a TRACE packet with a 4-byte payload (too short), asserts the packet is still returned (observation stored) and `Anomaly` is populated. ## Verification - `go build ./...` ✓ - `go test ./...` ✓ (all pass including new test) - Anti-tautology: reverting the fix causes the new test to fail (asserts `pkt.Anomaly == ""` → error) --------- Co-authored-by: you <you@example.com> |
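The 3-line guard described above amounts to this. Type names follow the PR; the exact fields in `cmd/ingestor/decoder.go` are assumptions.

```go
package main

// Sketch: when a TRACE payload fails to decode, surface the error as an
// anomaly instead of storing a silently degraded row.

import "fmt"

const payloadTypeTrace = 9

type Payload struct{ Error string }

type DecodedPacket struct {
	PayloadType int
	Payload     Payload
	Anomaly     string
}

func flagTraceDecodeFailure(pkt *DecodedPacket) {
	if pkt.PayloadType == payloadTypeTrace && pkt.Payload.Error != "" {
		pkt.Anomaly = "TRACE payload decode failed: " + pkt.Payload.Error
	}
}

func main() {
	pkt := &DecodedPacket{
		PayloadType: payloadTypeTrace,
		Payload:     Payload{Error: "too short"},
	}
	flagTraceDecodeFailure(pkt)
	fmt.Println(pkt.Anomaly) // TRACE payload decode failed: too short
}
```

The observation is still stored either way; the anomaly just makes the degradation visible downstream.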
||
|
|
dd2f044f2b |
fix: cache RW SQLite connection + dedup DBConfig (closes #921) (#982)
Closes #921 ## Summary Follow-up to #920 (incremental auto-vacuum). Addresses both items from the adversarial review: ### 1. RW connection caching Previously, every call to `openRW(dbPath)` opened a new SQLite RW connection and closed it after use. This happened in: - `runIncrementalVacuum` (~4x/hour) - `PruneOldPackets`, `PruneOldMetrics`, `RemoveStaleObservers` - `buildAndPersistEdges`, `PruneNeighborEdges` - All neighbor persist operations Now a single `*sql.DB` handle (with `MaxOpenConns(1)`) is cached process-wide via `cachedRW(dbPath)`. The underlying connection pool manages serialization. The original `openRW()` function is retained for one-shot test usage. ### 2. DBConfig dedup `DBConfig` was defined identically in both `cmd/server/config.go` and `cmd/ingestor/config.go`. Extracted to `internal/dbconfig/` as a shared package; both binaries now use a type alias (`type DBConfig = dbconfig.DBConfig`). ## Tests added | Test | File | |------|------| | `TestCachedRW_ReturnsSameHandle` | `cmd/server/rw_cache_test.go` | | `TestCachedRW_100Calls_SingleConnection` | `cmd/server/rw_cache_test.go` | | `TestGetIncrementalVacuumPages_Default` | `internal/dbconfig/dbconfig_test.go` | | `TestGetIncrementalVacuumPages_Configured` | `internal/dbconfig/dbconfig_test.go` | ## Verification ``` ok github.com/corescope/server 20.069s ok github.com/corescope/ingestor 47.117s ok github.com/meshcore-analyzer/dbconfig 0.003s ``` Both binaries build cleanly. 100 sequential `cachedRW()` calls return the same handle with exactly 1 entry in the cache map. --------- Co-authored-by: you <you@example.com> |
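The process-wide handle cache can be sketched with the opener injected, so the caching logic is visible without a SQLite driver. The real `cachedRW` opens a `*sql.DB` with `MaxOpenConns(1)`; the `handle` type here stands in for it.

```go
package main

// Sketch: one cached handle per DB path, reused across all callers
// instead of open/close churn on every prune/vacuum/persist call.

import (
	"fmt"
	"sync"
)

type handle struct{ path string }

var (
	rwMu    sync.Mutex
	rwCache = map[string]*handle{}
)

func cachedRW(path string, open func(string) *handle) *handle {
	rwMu.Lock()
	defer rwMu.Unlock()
	if h, ok := rwCache[path]; ok {
		return h // reuse the existing handle
	}
	h := open(path)
	rwCache[path] = h
	return h
}

func main() {
	opens := 0
	open := func(p string) *handle { opens++; return &handle{path: p} }
	var first *handle
	for i := 0; i < 100; i++ {
		h := cachedRW("/data/analyzer.db", open)
		if first == nil {
			first = h
		}
		if h != first {
			panic("different handle returned")
		}
	}
	fmt.Println(opens, len(rwCache)) // 1 1
}
```

With `MaxOpenConns(1)` on the real `*sql.DB`, serialization of writers moves into the connection pool instead of ad-hoc open/close.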
||
|
|
58484ad924 |
feat(ingestor): backfill observations.path_json from raw_hex (closes #888) (#983)
## Summary Adds an idempotent startup migration to the ingestor that backfills `observations.path_json` from per-observation `raw_hex` (added in #882). **Approach: Server-side migration (Option B)** — runs automatically at startup, chunked in batches of 1000, tracked via `_migrations` table. Chosen over a standalone script because: 1. Follows existing migration pattern (channel_hash, last_packet_at, etc.) 2. Zero operator action required — just deploy 3. Idempotent — safe to restart mid-migration (uncommitted rows get picked up next run) ## What it does - Selects observations where `raw_hex` is populated but `path_json` is NULL/empty/`[]` - Excludes TRACE packets (`payload_type = 9`) at the SQL level — their header bytes are SNR values, not hops - Decodes hops via `packetpath.DecodePathFromRawHex` (reuses existing helper) - Updates `path_json` with the decoded JSON array - Marks rows with undecoded/empty hops as `'[]'` to prevent infinite re-scanning - Records `backfill_path_json_from_raw_hex_v1` in `_migrations` when complete ## Safety - **Never overwrites** existing non-empty `path_json` — only fills where missing - **Batched** (1000 rows per iteration) — won't OOM on large DBs - **TRACE-safe** — excluded at query level per `packetpath.PathBytesAreHops` semantics ## Test `TestBackfillPathJsonFromRawHex` — creates synthetic observations with: - Empty path_json + valid raw_hex → verifies backfill populates correctly - NULL path_json → verifies backfill populates - Existing path_json → verifies NO overwrite - TRACE packet → verifies skip Anti-tautology: test asserts specific decoded values (`["AABB","CCDD"]`) from known raw_hex input, not just "something changed." Closes #888 Co-authored-by: you <you@example.com> |
||
|
|
fc57433f27 |
fix(analytics): merge channel buckets by hash byte; reject rainbow-table mismatches (closes #978) (#980)
## Summary Closes #978 — analytics channels duplicated by encrypted/decrypted split + rainbow-table collisions. ## Root cause Two distinct bugs in `computeAnalyticsChannels` (`cmd/server/store.go`): 1. **Encrypted/decrypted split**: The grouping key included the decoded channel name (`hash + "_" + channel`), so packets from observers that could decrypt a channel created a separate bucket from packets where decryption failed. Same physical channel, two entries. 2. **Rainbow-table collisions**: Some observers' lookup tables map hash bytes to wrong channel names. E.g., hash `72` incorrectly claimed to be `#wardriving` (real hash is `129`). This created ghost 1-message entries. ## Fix 1. **Always group by hash byte alone** (drop `_channel` suffix from `chKey`). When any packet decrypts successfully, upgrade the bucket's display name from placeholder (`chN`) to the real name (first-decrypter-wins for stability). 2. **Validate channel names** against the firmware hash invariant: `SHA256(SHA256("#name")[:16])[0] == channelHash`. Mismatches are treated as encrypted (placeholder name, no trust in decoded channel). Guard is in the analytics handler (not the ingestor) to avoid breaking other surfaces that use the decoded field for display. ## Verification (e2e-fixture.db) | Metric | BEFORE | AFTER | |--------|--------|-------| | Total channels | 22 | 19 | | Duplicate hash bytes | 3 (hashes 217, 202, 17) | 0 | ## Tests added - `TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted` — same hash, mixed encrypted/decrypted → ONE bucket - `TestComputeAnalyticsChannels_RejectsRainbowTableMismatch` — hash 72 claimed as `#wardriving` (real=129) → rejected, stays `ch72` - `TestChannelNameMatchesHash` — unit test for hash validation helper - `TestIsPlaceholderName` — unit test for placeholder detection Anti-tautology gate: both main tests fail when their respective fix lines are reverted. Co-authored-by: you <you@example.com> |
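The firmware hash invariant quoted above is checkable in a few lines of stdlib Go. `channelNameMatchesHash` mirrors the helper named in the tests; treat the exact spelling as an assumption.

```go
package main

// The invariant: SHA256(SHA256("#name")[:16])[0] must equal the packet's
// channel hash byte. Names failing this are rainbow-table mismatches.

import (
	"crypto/sha256"
	"fmt"
)

func channelHashByte(name string) byte {
	inner := sha256.Sum256([]byte(name)) // name includes the leading '#'
	outer := sha256.Sum256(inner[:16])   // hash the first 16 bytes only
	return outer[0]
}

func channelNameMatchesHash(name string, hash byte) bool {
	return channelHashByte(name) == hash
}

func main() {
	h := channelHashByte("#wardriving")
	fmt.Println(channelNameMatchesHash("#wardriving", h))   // self-consistent → true
	fmt.Println(channelNameMatchesHash("#wardriving", h+1)) // mismatch → rejected
}
```

A name that fails this check is demoted to a placeholder (`chN`) rather than trusted, which is exactly how the ghost `#wardriving` entries were eliminated.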
||
|
|
5aa8f795cd |
feat(ingestor): per-source MQTT connect timeout (#931) (#977)
## Summary Per-source MQTT connect timeout, correctly targeting the `WaitTimeout` startup gate (#931). ## What changed - Added `connectTimeoutSec` field to `MQTTSource` struct (per-source, not global) — `config.go:24` - Added `ConnectTimeoutOrDefault()` helper returning configured value or 30 (default from #926) — `config.go:29` - Replaced hardcoded `WaitTimeout(30 * time.Second)` with `WaitTimeout(time.Duration(connectTimeout) * time.Second)` — `main.go:173` - Updated `config.example.json` with field at source level - Unit tests for default (30) and custom values ## Why this supersedes #976 PR #976 made paho's `SetConnectTimeout` (per-TCP-dial, was 10s) configurable via a **global** `mqttConnectTimeoutSeconds` field. Issue #931 explicitly references the **30s timeout** — which is `WaitTimeout(30s)`, the startup gate from #926. It also requests **per-source** config, not global. This PR targets the correct timeout at the correct granularity. ## Live verification (Rule 18) Two sources pointed at unreachable brokers: - `fast` (`connectTimeoutSec: 5`): timed out in 5s ✅ - `default` (unset): timed out in 30s ✅ ``` 19:00:35 MQTT [fast] connect timeout: 5s 19:00:40 MQTT [fast] initial connection timed out — retrying in background 19:00:40 MQTT [default] connect timeout: 30s 19:01:10 MQTT [default] initial connection timed out — retrying in background ``` Closes #931 Supersedes #976 Co-authored-by: you <you@example.com> |
||
|
|
1e7c187521 |
fix(ingestor): address review BLOCKERs from PR #926 (goroutine leak + guard semantics) [v2] (#974)
## fix(ingestor): address review BLOCKERs from PR #926 (goroutine leak + guard semantics) Supersedes #970. Rebased onto current master to resolve merge conflicts. ### Changes (same as #970) - **BL1 (goroutine leak):** Call `client.Disconnect(0)` on the error path after `Connect()` fails with `ConnectRetry=true`, preventing Paho's internal retry goroutines from leaking. - **BL2 (guard semantics):** Use `connectedCount == 0` instead of `len(clients) == 0` to detect zero-connected state, since timed-out clients are appended to the slice. - **Tests:** `TestBL1_GoroutineLeakOnHardFailure` and `TestBL2_ZeroConnectedFatals` covering both blockers. ### Context - Fixes blockers raised in review of #926 - Related: #910 (original hang bug) Co-authored-by: you <you@example.com> |
||
|
|
4b8d8143f4 |
feat(server): explicit CORS policy with configurable origin allowlist (#883) (#971)
## Summary Adds explicit CORS policy support to the CoreScope API server, closing #883. ### Problem The API relied on browser same-origin defaults with no way for operators to configure cross-origin access. Operators running dashboards or third-party frontends on different origins had no supported way to make API calls. ### Solution **New config option:** `corsAllowedOrigins` (string array, default `[]`) **Middleware behavior:** | Config | Behavior | |--------|----------| | `[]` (default) | No `Access-Control-*` headers added — browsers enforce same-origin. **Preserves current behavior.** | | `["https://dashboard.example.com"]` | Echoes matching `Origin`, sets `Allow-Methods`/`Allow-Headers` | | `["*"]` | Sets `Access-Control-Allow-Origin: *` (explicit opt-in only) | **Headers set when origin matches:** - `Access-Control-Allow-Origin: <origin>` (or `*`) - `Access-Control-Allow-Methods: GET, POST, OPTIONS` - `Access-Control-Allow-Headers: Content-Type, X-API-Key` - `Vary: Origin` (non-wildcard only) **Preflight handling:** `OPTIONS` → `204 No Content` with CORS headers (or `403` if origin not in allowlist). 
### Config example ```json { "corsAllowedOrigins": ["https://dashboard.example.com", "https://monitor.internal"] } ``` ### Files changed | File | Change | |------|--------| | `cmd/server/cors.go` | New CORS middleware | | `cmd/server/cors_test.go` | 7 unit tests covering all branches | | `cmd/server/config.go` | `CORSAllowedOrigins` field | | `cmd/server/routes.go` | Wire middleware before all routes | ### Testing **Unit tests (7):** - Default config → no CORS headers - Allowlist match → headers present with `Vary: Origin` - Allowlist miss → no CORS headers - Preflight allowed → 204 with headers - Preflight rejected → 403 - Wildcard → `*` without `Vary` - No `Origin` header → pass-through **Live verification (Rule 18):** ``` # Default (empty corsAllowedOrigins): $ curl -I -H "Origin: https://evil.example" localhost:19883/api/health HTTP/1.1 200 OK # No Access-Control-* headers ✓ # With corsAllowedOrigins: ["https://good.example"]: $ curl -I -H "Origin: https://good.example" localhost:19884/api/health Access-Control-Allow-Origin: https://good.example Access-Control-Allow-Methods: GET, POST, OPTIONS Access-Control-Allow-Headers: Content-Type, X-API-Key Vary: Origin ✓ $ curl -I -H "Origin: https://evil.example" localhost:19884/api/health # No Access-Control-* headers ✓ $ curl -I -X OPTIONS -H "Origin: https://good.example" localhost:19884/api/health HTTP/1.1 204 No Content Access-Control-Allow-Origin: https://good.example ✓ ``` Closes #883 Co-authored-by: you <you@example.com> |
||
|
|
3364eed303 |
feat: separate "Last Status Update" from "Last Packet Observation" for observers (v3 rebase) (#969)
Rebased version of #968 (which was itself a rebase of #905) — resolves merge conflict with #906 (clock-skew UI) that landed on master. ## Conflict resolution **`public/observers.js`** — master (#906) added "Clock Offset" column to observer table; #968 split "Last Seen" into "Last Status" + "Last Packet" columns. Combined both: the table now has Status | Name | Region | Last Status | Last Packet | Packets | Packets/Hour | Clock Offset | Uptime. ## What this PR adds (unchanged from #968/#905) - `last_packet_at` column in observers DB table - Separate "Last Status Update" and "Last Packet Observation" display in observers list and detail page - Server-side migration to add the column automatically - Backfill heuristic for existing data - Tests for ingestor and server ## Verification - All Go tests pass (`cmd/server`, `cmd/ingestor`) - Frontend tests pass (`test-packets.js`, `test-hash-color.js`) - Built server, hit `/api/observers` — `last_packet_at` field present in JSON - Observer table header has all 9 columns including both Last Packet and Clock Offset ## Prior PRs - #905 — original (conflicts with master) - #968 — first rebase (conflicts after #906 landed) - This PR — second rebase, resolves #906 conflict Supersedes #968. Closes #905. --------- Co-authored-by: you <you@example.com> |
||
|
|
d65122491e |
fix(ingestor): unblock startup when one of multiple MQTT sources is unreachable (#926)
## Summary

- With `ConnectRetry=true`, paho's `token.Wait()` only returns on success — it blocks forever for unreachable brokers, stalling the entire startup loop before any other source connects
- Switches to `token.WaitTimeout(30s)`: on timeout the client is still tracked so `ConnectRetry` keeps retrying in background; `OnConnect` fires and subscribes when it eventually connects
- Adds `TestMQTTConnectRetryTimeoutDoesNotBlock` to confirm `WaitTimeout` returns within deadline for unreachable brokers (regression guard for this exact failure mode)

Fixes #910

## Test plan

- [x] Two MQTT sources configured, one unreachable: ingestor reaches `Running` status and ingests from the reachable source immediately on startup
- [x] Unreachable source logs `initial connection timed out — retrying in background` and reconnects automatically when the broker comes back
- [x] Single source, reachable: behaviour unchanged (`Running — 1 MQTT source(s) connected`)
- [x] Single source, unreachable: `Running — 0 MQTT source(s) connected, 1 retrying in background`; ingestion starts once broker is available
- [x] `go test ./...` passes (excluding pre-existing `TestOpenStoreInvalidPath` failure on master)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com> |
||
|
|
40c3aa13f9 |
fix(paths): exclude false-positive paths from short-prefix collisions (#930)
Fixes #929

## Summary

- `handleNodePaths` pulls candidates from `byPathHop` using 2-char and 4-char prefix keys (e.g. `"7a"` for a node using 1-byte adverts)
- When two nodes share the same short prefix, paths through the *other* node are included as candidates
- The `resolved_path` post-filter covers decoded packets but falls through conservatively (`inIndex = true`) when `resolved_path` is NULL, letting false positives reach the response

**Fix:** during the aggregation phase (which already calls `resolveHop` per hop), add a `containsTarget` check. If every hop resolves to a different node's pubkey, skip the path. Packets confirmed via the full-pubkey index key or via SQL bypass the check. Unresolvable hops are kept conservatively.

## Test plan

- [x] `TestHandleNodePaths_PrefixCollisionExclusion`: two nodes sharing `"7a"` prefix; verifies the path with no `resolved_path` (false positive) is excluded and the SQL-confirmed path (true positive) is included
- [x] Full test suite: `go test github.com/corescope/server` — all pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com> |
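The `containsTarget` decision rule described in the fix can be sketched in isolation. The shape below is illustrative (the `resolveFn` type and exact signature are assumptions; the real check runs inside the aggregation phase using `resolveHop`), but the logic matches the description: keep on a confirmed target hop or an unresolvable hop, drop only when every hop provably resolves elsewhere:

```go
package main

import "strings"

// resolveFn maps a short hop prefix (e.g. "7a") to a full pubkey, or ""
// when the hop cannot be resolved. Illustrative stand-in for resolveHop.
type resolveFn func(hopPrefix string) string

// containsTarget reports whether a candidate path may belong to target.
// It returns true when any hop resolves to the target pubkey (confirmed)
// or fails to resolve at all (conservative keep). Only when every hop
// resolves to some *other* node's pubkey is the path a provable false
// positive and skipped.
func containsTarget(hops []string, target string, resolve resolveFn) bool {
	if len(hops) == 0 {
		return true // nothing to disprove; keep conservatively
	}
	for _, h := range hops {
		pk := resolve(h)
		if pk == "" {
			return true // unresolvable hop: keep conservatively
		}
		if strings.EqualFold(pk, target) {
			return true // path really passes through the target node
		}
	}
	return false // every hop resolved to a different node: skip this path
}
```

Per the PR, paths confirmed via the full-pubkey index key or via SQL would bypass this check entirely.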
||
|
|
b47587f031 |
feat(#690): expose observer skew + per-hash evidence in clock UI (#906)
## Summary

UI completion of #690 — surfaces observer clock skew and per-hash evidence that the backend already computes but did not yet expose in the frontend.

**Not related to #845/PR #894** (bimodal detection) — this is the UI surface for the original #690 scope.

## Changes

### Backend: per-hash evidence in node clock-skew API (commit 1)

- Extended `GET /api/nodes/{pubkey}/clock-skew` to return `recentHashEvidence` (most recent 10 hashes with per-observer raw/corrected skew and observer offset) and `calibrationSummary` (total/calibrated/uncalibrated counts).
- Evidence is cached during `ClockSkewEngine.Recompute()` — the route handler is cheap.
- Fleet endpoint omits evidence to keep the payload small.

### Frontend: observer list page — clock offset column (commit 2)

- Added "Clock Offset" column to observers table.
- Fetches `/api/observers/clock-skew` once on page load, joins by ObserverID.
- Color-coded severity badge + sample count tooltip.
- Singleton observers show "—" not "0".

### Frontend: observer-detail clock card (commit 3)

- Added clock offset card mirroring node clock card style.
- Shows: offset value, sample count, severity badge.
- Inline explainer describing how offset is computed from multi-observer packets.

### Frontend: node clock card evidence panel (commit 4)

- Collapsible "Evidence" section in existing node clock skew card.
- Per-hash breakdown: observer count, median corrected skew, per-observer raw/corrected/offset.
- Calibration summary line and plain-English severity reason at top.

## Test Results

```
go test ./... (cmd/server)   — PASS (19.3s)
go test ./... (cmd/ingestor) — PASS (31.6s)
Frontend helpers: 610 passed, 0 failed
```

New test: `TestNodeClockSkew_EvidencePayload` — 3-observer scenario verifying per-hash array shape, corrected = raw + offset math, and median.

No frontend JS smoke test added — no existing test harness for clock/observer rendering. Noted for future.

## Screenshots

Screenshots TBD

## Perf justification

Evidence is computed inside the existing `Recompute()` cycle (already O(n) on samples). The `hashEvidence` map adds ~32 bytes per sample of memory. Evidence is stripped from fleet responses. The per-node endpoint returns at most 10 evidence entries — a bounded payload.

---------

Co-authored-by: you <you@example.com> |
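The "corrected = raw + offset math, and median" check that `TestNodeClockSkew_EvidencePayload` verifies is plain arithmetic and can be sketched directly. Field and function names below are hypothetical (the real evidence structs are not shown in this PR); only the formula comes from the text:

```go
package main

import "sort"

// obsSkew is a hypothetical per-observer evidence row: the raw skew of the
// node's timestamp as seen by one observer, plus that observer's own
// calibrated clock offset. Real field names may differ.
type obsSkew struct {
	RawMs    float64 // raw skew measured against the observer's clock
	OffsetMs float64 // observer clock offset applied as a correction
}

// medianCorrectedSkew applies corrected = raw + offset per observer, then
// takes the median across observers of the same packet hash.
func medianCorrectedSkew(rows []obsSkew) float64 {
	if len(rows) == 0 {
		return 0
	}
	corrected := make([]float64, len(rows))
	for i, r := range rows {
		corrected[i] = r.RawMs + r.OffsetMs
	}
	sort.Float64s(corrected)
	mid := len(corrected) / 2
	if len(corrected)%2 == 1 {
		return corrected[mid]
	}
	return (corrected[mid-1] + corrected[mid]) / 2
}
```

The median (rather than mean) keeps a single badly calibrated observer from dominating the per-hash estimate.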
||
|
|
b3a9677c52 |
feat(ingestor + server): observerBlacklist config (#962) (#963)
## Summary

Implements `observerBlacklist` config — mirrors the existing `nodeBlacklist` pattern for observers. Drop observers by pubkey at ingest, with defense-in-depth filtering on the server side.

Closes #962

## Changes

### Ingestor (`cmd/ingestor/`)

- **`config.go`**: Added `ObserverBlacklist []string` field + `IsObserverBlacklisted()` method (case-insensitive, whitespace-trimmed)
- **`main.go`**: Early return in `handleMessage` when `parts[2]` (observer ID from MQTT topic) matches blacklist — before status handling, before IATA filter. No UpsertObserver, no observations, no metrics insert. Log line: `observer <pubkey-short> blacklisted, dropping`

### Server (`cmd/server/`)

- **`config.go`**: Same `ObserverBlacklist` field + `IsObserverBlacklisted()` with `sync.Once` cached set (same pattern as `nodeBlacklist`)
- **`routes.go`**: Defense-in-depth filtering in `handleObservers` (skip blacklisted in list) and `handleObserverDetail` (404 for blacklisted ID)
- **`main.go`**: Startup `softDeleteBlacklistedObservers()` marks matching rows `inactive=1` so historical data is hidden
- **`neighbor_persist.go`**: `softDeleteBlacklistedObservers()` implementation

### Tests

- `cmd/ingestor/observer_blacklist_test.go`: config method tests (case-insensitive, empty, nil)
- `cmd/server/observer_blacklist_test.go`: config tests + HTTP handler tests (list excludes blacklisted, detail returns 404, no-blacklist passes all, concurrent safety)

## Config

```json
{
  "observerBlacklist": [
    "EE550DE547D7B94848A952C98F585881FCF946A128E72905E95517475F83CFB1"
  ]
}
```

## Verification (Rule 18 — actual server output)

**Before blacklist** (no config):

```
Total: 31
DUBLIN in list: True
```

**After blacklist** (DUBLIN Observer pubkey in `observerBlacklist`):

```
[observer-blacklist] soft-deleted 1 blacklisted observer(s)
Total: 30
DUBLIN in list: False
```

Detail endpoint for blacklisted observer returns **404**. All existing tests pass (`go test ./...` for both server and ingestor).

---------

Co-authored-by: you <you@example.com> |
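The `IsObserverBlacklisted()` method with a `sync.Once`-cached set, as described above, can be sketched like this. The field and method names follow the PR's description; the body is an illustrative sketch, not the project's actual code:

```go
package main

import (
	"strings"
	"sync"
)

// Config sketches the cached-lookup pattern described above: the blacklist
// slice from config.json is normalized into a set exactly once, then every
// lookup is an O(1) map hit. Safe for concurrent use because sync.Once
// guarantees the set is fully built before any lookup reads it.
type Config struct {
	ObserverBlacklist []string

	blOnce sync.Once
	blSet  map[string]bool
}

// IsObserverBlacklisted reports whether pubkey is blacklisted, comparing
// case-insensitively and ignoring surrounding whitespace.
func (c *Config) IsObserverBlacklisted(pubkey string) bool {
	c.blOnce.Do(func() {
		c.blSet = make(map[string]bool, len(c.ObserverBlacklist))
		for _, e := range c.ObserverBlacklist {
			e = strings.ToUpper(strings.TrimSpace(e))
			if e != "" {
				c.blSet[e] = true
			}
		}
	})
	return c.blSet[strings.ToUpper(strings.TrimSpace(pubkey))]
}
```

Normalizing at build time and at lookup time keeps the comparison symmetric regardless of how the pubkey was cased in config or on the wire.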
||
|
|
e1a1be1735 |
fix(server): add observers.inactive column at startup if missing (root cause of CI flake) (#961)
## The actual root cause

PR #954 added `WHERE inactive IS NULL OR inactive = 0` to the server's observer queries, but the `inactive` column is only added by the **ingestor** migration (`cmd/ingestor/db.go:344-354`). When the server runs against a DB the ingestor never touched (e.g. the e2e fixture), the column doesn't exist:

```
$ sqlite3 test-fixtures/e2e-fixture.db "SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0;"
Error: no such column: inactive
```

The server's `db.QueryRow().Scan()` swallows that error → `totalObservers` stays 0 → `/api/observers` returns empty → map test fails with "No map markers/overlays found". This explains all the failing CI runs since #954 merged.

PR #957 (freshen fixture) helped with the `nodes` time-rot but couldn't fix the missing-column problem. PR #960 (freshen observers) added the right timestamps but the column was still missing. PR #959 (data-loaded in finally) fixed a different real bug. None of those touched the actual mechanism.

## Fix

Mirror the existing `ensureResolvedPathColumn` pattern: add `ensureObserverInactiveColumn` that runs at server startup, checks if the column exists via `PRAGMA table_info`, adds it with `ALTER TABLE observers ADD COLUMN inactive INTEGER DEFAULT 0` if missing. Wired into `cmd/server/main.go` immediately after `ensureResolvedPathColumn`.

## Verification

End-to-end on a freshened fixture:

```
$ sqlite3 /tmp/e2e-verify.db "PRAGMA table_info(observers);" | grep inactive
(no output — column absent)

$ ./cs-fixed -port 13702 -db /tmp/e2e-verify.db -public public &
[store] Added inactive column to observers

$ curl 'http://localhost:13702/api/observers'
returned=31   # was 0 before fix
```

`go test ./...` passes (19.8s).

## Lessons

I should have run `sqlite3 fixture "SELECT ... WHERE inactive ..."` directly the first time the map test failed after #954 instead of writing four "fix" PRs that didn't address the actual mechanism. Apologies for the wild goose chase.
Co-authored-by: Kpa-clawbot <bot@example.invalid> |
||
|
|
568de4b441 |
fix(observers): exclude soft-deleted observers from /api/observers and totalObservers (#954)
## Bug

`/api/observers` returned soft-deleted (inactive=1) observers. Operators saw stale observers in the UI even after the auto-prune marked them inactive on schedule.

Reproduced on staging: 14 observers older than 14 days returned by the API; all of them had `inactive=1` in the DB.

## Root cause

`DB.GetObservers()` (`cmd/server/db.go:974`) ran `SELECT ... FROM observers ORDER BY last_seen DESC` with no WHERE filter. The `RemoveStaleObservers` path correctly soft-deletes by setting `inactive=1`, but the read path didn't honor it.

`statsRow` (`cmd/server/db.go:234`) had the same bug — `totalObservers` count included soft-deleted rows.

## Fix

Add `WHERE inactive IS NULL OR inactive = 0` to both:

```go
// GetObservers
"SELECT ... FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC"

// statsRow.TotalObservers
"SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0"
```

The `NULL` check preserves backward compatibility with rows from before the `inactive` migration.

## Tests

Added regression `TestGetObservers_ExcludesInactive`:

- Seed two observers, mark one inactive, assert `GetObservers()` returns only the other.
- **Anti-tautology gate verified**: reverting the WHERE clause causes the test to fail with `expected 1 observer, got 2` and `inactive observer obs2 should be excluded`.

`go test ./...` passes (19.6s).

## Out of scope

- `GetObserverByID` lookup at line 1009 still returns inactive observers — this is intentional, so an old deep link to `/observers/<id>` shows "inactive" rather than 404.
- Frontend may also have its own caching layer; this fix is server-side only.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
Co-authored-by: KpaBap <kpabap@gmail.com> |
||
|
|
57e272494d |
feat(server): /api/healthz readiness endpoint gated on store load (#955) (#956)
## Summary

Fixes RCA #2 from #955: the HTTP listener and `/api/stats` go live before background goroutines (pickBestObservation, neighbor graph build) finish, causing CI readiness checks to pass prematurely.

## Changes

1. **`cmd/server/healthz.go`** — New `GET /api/healthz` endpoint:
   - Returns `503 {"ready":false,"reason":"loading"}` while background init is running
   - Returns `200 {"ready":true,"loadedTx":N,"loadedObs":N}` once ready
2. **`cmd/server/main.go`** — Added `sync.WaitGroup` tracking pickBestObservation and neighbor graph build goroutines. A coordinator goroutine sets `readiness.Store(1)` when all complete. `backfillResolvedPathsAsync` is NOT gated (async by design, can take 20+ min).
3. **`cmd/server/routes.go`** — Wired `/api/healthz` before system endpoints.
4. **`.github/workflows/deploy.yml`** — CI wait-for-ready loop now polls `/api/healthz` instead of `/api/stats`.
5. **`cmd/server/healthz_test.go`** — Tests for 503-before-ready, 200-after-ready, JSON shape, and anti-tautology gate.

## Rule 18 Verification

Built and ran against `test-fixtures/e2e-fixture.db` (499 tx):

- With the small fixture DB, init completes in <300ms so both immediate and delayed curls return 200
- Unit tests confirm 503 behavior when `readiness=0` (simulating slow init)
- On production DBs with 100K+ txs, the 503 window would be 5-15s (pickBestObservation processes in 5000-tx chunks with 10ms yields)

## Test Results

```
=== RUN TestHealthzNotReady --- PASS
=== RUN TestHealthzReady --- PASS
=== RUN TestHealthzAntiTautology --- PASS
ok github.com/corescope/server 19.662s (full suite)
```

Co-authored-by: you <you@example.com> |
||
|
|
6345c6fb05 |
fix(ingestor): observability + bounded backoff for MQTT reconnect (#947) (#949)
## Summary

Fixes #947 — MQTT ingestor silently stalls after `pingresp not received` disconnect due to paho's default 10-minute reconnect backoff and zero observability of reconnect attempts.

## Changes

### `cmd/ingestor/main.go`

- **Extract `buildMQTTOpts()`** — encapsulates MQTT client option construction for testability
- **`SetMaxReconnectInterval(30s)`** — bounds paho's default 10-minute exponential backoff (source: `options.go:137` in `paho.mqtt.golang@v1.5.0`)
- **`SetConnectTimeout(10s)`** — prevents stuck connect attempts from blocking reconnect cycle
- **`SetWriteTimeout(10s)`** — prevents stuck publish writes
- **`SetReconnectingHandler`** — logs `MQTT [<tag>] reconnecting to <broker>` on every reconnect attempt, giving operators visibility into retry behavior
- **Enhanced `SetConnectionLostHandler`** — now includes broker address in log line for multi-source disambiguation

### `cmd/ingestor/mqtt_opts_test.go` (new)

- Tests verify `MaxReconnectInterval`, `ConnectTimeout`, `WriteTimeout` are set correctly
- Tests verify credential and TLS configuration
- Anti-tautology: tests fail if timing settings are removed from `buildMQTTOpts()`

## Operator impact

After this change, a pingresp disconnect produces:

```
MQTT [staging] disconnected from tcp://broker:1883: pingresp not received, disconnecting
MQTT [staging] reconnecting to tcp://broker:1883
MQTT [staging] reconnecting to tcp://broker:1883
MQTT [staging] connected to tcp://broker:1883
MQTT [staging] subscribed to meshcore/#
```

Max gap between disconnect and first reconnect attempt: ~30s (was up to 10 minutes).

---------

Co-authored-by: you <you@example.com> |
||
|
|
e460932668 |
fix(store): apply retentionHours cutoff in Load() to prevent OOM on cold start (#917)
## Problem

`Load()` loaded all transmissions from the DB regardless of `retentionHours`, so `buildSubpathIndex()` processed the full DB history on every startup. On a DB with ~280K paths this produces ~13.5M subpath index entries, OOM-killing the process before it ever starts listening — causing a supervisord crash loop with no useful error message.

## Fix

Apply the same `retentionHours` cutoff to `Load()`'s SQL that `EvictStale()` already uses at runtime. Both conditions (`retentionHours` window and `maxPackets` cap) are combined with AND so neither safety limit is bypassed.

Startup now builds indexes only over the retention window, making startup time and memory proportional to recent activity rather than total DB history.

## Docs

- `config.example.json`: adds `retentionHours` to the `packetStore` block with recommended value `168` (7 days) and a warning about `0` on large DBs
- `docs/user-guide/configuration.md`: documents the field and adds an explicit OOM warning

## Test plan

- [x] `cd cmd/server && go test ./... -run TestRetentionLoad` — covers the retention-filtered load: verifies packets outside the window are excluded, and that `retentionHours: 0` still loads everything
- [x] Deploy on an instance with a large DB (>100K paths) and `retentionHours: 168` — server reaches "listening" in seconds instead of OOM-crashing
- [x] Verify `config.example.json` has `retentionHours: 168` in the `packetStore` block
- [x] Verify `docs/user-guide/configuration.md` documents the field and warning

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Kpa-clawbot <kpaclawbot@outlook.com> |
||
|
|
aeae7813bc |
fix: enable SQLite incremental auto-vacuum so DB shrinks after retention (#919) (#920)
Closes #919

## Summary

Enables SQLite incremental auto-vacuum so the database file actually shrinks after the retention reaper deletes old data. Previously, `DELETE` operations freed pages internally but never returned disk space to the OS.

## Changes

### 1. Auto-vacuum on new databases

- `PRAGMA auto_vacuum = INCREMENTAL` set via DSN pragma before `journal_mode(WAL)` in the ingestor's `OpenStoreWithInterval`
- Must be set before any tables are created; DSN ordering ensures this

### 2. Post-reaper incremental vacuum

- `PRAGMA incremental_vacuum(N)` runs after every retention reaper cycle (packets, metrics, observers, neighbor edges)
- N defaults to 1024 pages, configurable via `db.incrementalVacuumPages`
- Noop on `auto_vacuum=NONE` databases (safe before migration)
- Added to both server and ingestor

### 3. Opt-in full VACUUM for existing databases

- Startup check logs a clear warning if `auto_vacuum != INCREMENTAL`
- `db.vacuumOnStartup: true` config triggers one-time `PRAGMA auto_vacuum = INCREMENTAL; VACUUM`
- Logs start/end time for operator visibility

### 4. Documentation

- `docs/user-guide/configuration.md`: retention section notes that lowering retention doesn't immediately shrink the DB
- `docs/user-guide/database.md`: new guide covering WAL, auto-vacuum, migration, manual VACUUM

### 5. Tests

- `TestNewDBHasIncrementalAutoVacuum` — fresh DB gets `auto_vacuum=2`
- `TestExistingDBHasAutoVacuumNone` — old DB stays at `auto_vacuum=0`
- `TestVacuumOnStartupMigratesDB` — full VACUUM sets `auto_vacuum=2`
- `TestIncrementalVacuumReducesFreelist` — DELETE + vacuum shrinks freelist
- `TestCheckAutoVacuumLogs` — handles both modes without panic
- `TestConfigIncrementalVacuumPages` — config defaults and overrides

## Migration path for existing databases

1. On startup, CoreScope logs: `[db] auto_vacuum=NONE — DB needs one-time VACUUM...`
2. Set `db.vacuumOnStartup: true` in config.json
3. Restart — VACUUM runs (blocks startup, minutes on large DBs)
4. Remove `vacuumOnStartup` after migration

## Test results

```
ok github.com/corescope/server   19.448s
ok github.com/corescope/ingestor 30.682s
```

---------

Co-authored-by: you <you@example.com> |
||
|
|
54f7f9d35b |
feat: path-prefix candidate inspector with map view (#944) (#945)
## feat: path-prefix candidate inspector with map view (#944)

Implements the locked spec from #944: a beam-search-based path prefix inspector that enumerates candidate full-pubkey paths from short hex prefixes and scores them.

### Server (`cmd/server/path_inspect.go`)

- **`POST /api/paths/inspect`** — accepts 1-64 hex prefixes (1-3 bytes, uniform length per request)
- Beam search (width 20) over cached `prefixMap` + `NeighborGraph`
- Per-hop scoring: edge weight (35%), GPS plausibility (20%), recency (15%), prefix selectivity (30%)
- Geometric mean aggregation with 0.05 floor per hop
- Speculative threshold: score < 0.7
- Score cache: 30s TTL, keyed by (prefixes, observer, window)
- Cold-start: synchronous NeighborGraph rebuild with 2s hard timeout → 503 `{retry:true}`
- Body limit: 4096 bytes via `http.MaxBytesReader`
- Zero SQL queries in handler hot path
- Request validation: rejects empty, odd-length, >3 bytes, mixed lengths, >64 hops

### Frontend (`public/path-inspector.js`)

- New page under Tools route with input field (comma/space separated hex prefixes)
- Client-side validation with error feedback
- Results table: rank, score (color-coded speculative), path names, per-hop evidence (collapsed)
- "Show on Map" button calls `drawPacketRoute` (one path at a time, clears prior)
- Deep link: `#/tools/path-inspector?prefixes=2c,a1,f4`

### Nav reorganization

- `Traces` nav item renamed to `Tools`
- Backward-compat: `#/traces/<hash>` redirects to `#/tools/trace/<hash>`
- Tools sub-routing dispatches to traces or path-inspector

### Store changes

- Added `LastSeen time.Time` to `nodeInfo` struct, populated from `nodes.last_seen`
- Added `inspectMu` + `inspectCache` fields to `PacketStore`

### Tests

- **Go unit tests** (`path_inspect_test.go`): scoreHop components, beam width cap, speculative flag, all validation error cases, valid request integration
- **Frontend tests** (`test-path-inspector.js`): parse comma/space/mixed, validation (empty, odd, >3 bytes, mixed lengths, invalid hex, valid)
- Anti-tautology gate verified: removing beam pruning fails width test; removing validation fails reject tests

### CSS

- `--path-inspector-speculative` variable in both themes (amber, WCAG AA on both dark/light backgrounds)
- All colors via CSS variables (no hardcoded hex in production code)

Closes #944

---------

Co-authored-by: you <you@example.com> |
||
|
|
5678874128 |
fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)
Fixes #935

## Problem

`buildPrefixMap()` indexed ALL nodes regardless of role, causing companions/sensors to appear as repeater hops when their pubkey prefix collided with a path-hop hash byte.

## Fix

### Server (`cmd/server/store.go`)

- Added `canAppearInPath(role string) bool` — allowlist of roles that can forward packets (repeater, room_server, room)
- `buildPrefixMap` now skips nodes that fail this check

### Client (`public/hop-resolver.js`)

- Added matching `canAppearInPath(role)` helper
- `init()` now only populates `prefixIdx` for path-eligible nodes
- `pubkeyIdx` remains complete — `resolveFromServer()` still resolves any node type by full pubkey (for server-confirmed `resolved_path` arrays)

## Tests

- `cmd/server/prefix_map_role_test.go`: 7 new tests covering role filtering in prefix map and resolveWithContext
- `test-hop-resolver-affinity.js`: 4 new tests verifying client-side role filter + pubkeyIdx completeness
- All existing tests updated to include `Role: "repeater"` where needed
- `go test ./cmd/server/...` — PASS
- `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing centroid failure unrelated to this change)

## Commits

1. `fix: filter prefix map to only repeater/room roles (#935)` — server implementation
2. `test: prefix map role filter coverage (#935)` — server tests
3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` — client implementation
4. `test: hop-resolver role filter coverage (#935)` — client tests

--------- Co-authored-by: you <you@example.com> |
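The `canAppearInPath(role string) bool` signature and its allowlist (repeater, room_server, room) are stated in the PR; the body below is a sketch of one plausible implementation. Treating unknown or empty roles as ineligible, and normalizing case/whitespace, are this sketch's assumptions rather than confirmed project behavior:

```go
package main

import "strings"

// canAppearInPath reports whether a node role can forward packets and thus
// legitimately appear as a path hop. Only the allowlisted forwarding roles
// pass; companions, sensors, and unknown roles are excluded so a prefix
// collision cannot surface them as phantom repeater hops.
func canAppearInPath(role string) bool {
	switch strings.ToLower(strings.TrimSpace(role)) {
	case "repeater", "room_server", "room":
		return true
	default:
		return false
	}
}
```

Using an allowlist rather than a blocklist means a newly introduced role defaults to "cannot appear in paths" until someone deliberately adds it, which is the safer failure mode for this bug class.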
||
|
|
6ca5e86df6 |
fix: compute hex-dump byte ranges client-side from per-obs raw_hex (#891)
## Symptom

The colored byte strip in the packet detail pane is offset from the labeled byte breakdown below it. Off by N bytes, where N is the difference between the top-level packet's path length and the displayed observation's path length.

## Root cause

The server computes `breakdown.ranges` once from the top-level packet's raw_hex (in `BuildBreakdown`) and ships it in the API response. After #882 we render each observation's own raw_hex, but we kept using the top-level breakdown — so a 7-hop top-level packet shipped "Path: bytes 2-8", and when we rendered an 8-hop observation we coloured 7 of the 8 path bytes and bled into the payload.

The labeled rows below (which use `buildFieldTable`) parse the displayed raw_hex on the client, so they were correct — they just didn't match the strip above.

## Fix

Port `BuildBreakdown()` to JS as `computeBreakdownRanges()` in `app.js`. Use it in `renderDetail()` from the actually-rendered (per-obs) raw_hex.

## Test

Manually verified the JS function output matches the Go implementation for FLOOD/non-transport, transport, ADVERT, and direct-advert (zero hops) cases.

Closes nothing (caught in post-tag bug bash).

---------

Co-authored-by: you <you@example.com> |
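The core of the fix is that byte ranges must be derived from the bytes actually on screen. The sketch below illustrates that idea for just the path section. The packet layout it assumes (header byte at offset 0, path length at offset 1, path bytes following) is inferred only from the "Path: bytes 2-8" for a 7-hop packet example above; the real `BuildBreakdown`/`computeBreakdownRanges` handle more cases (transport routes shift the path-length offset) and may differ in detail:

```go
package main

import "encoding/hex"

// pathRange computes the half-open [start, end) byte range of the path
// section from the raw bytes being displayed, rather than reusing ranges
// computed from a different observation's raw_hex. Assumed layout:
// byte 0 = header, byte 1 = path length, then the path bytes.
func pathRange(rawHex string) (start, end int, ok bool) {
	raw, err := hex.DecodeString(rawHex)
	if err != nil || len(raw) < 2 {
		return 0, 0, false
	}
	pathLen := int(raw[1])
	if 2+pathLen > len(raw) {
		return 0, 0, false // declared path longer than the packet: malformed
	}
	return 2, 2 + pathLen, true // zero-hop packets yield the empty range [2, 2)
}
```

Because each observation can carry a different path length, computing this per displayed raw_hex is what keeps the colored strip aligned with the labeled field table beneath it.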
||
|
|
56ec590bc4 |
fix(#886): derive path_json from raw_hex at ingest (#887)
## Problem

Per-observation `path_json` disagrees with the `raw_hex` path section for TRACE packets.

**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`

- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)

## Root Cause

`DecodePacket` correctly parses TRACE packets by replacing `path.Hops` with hop IDs from the payload's `pathData` field (the actual route). However, the header path bytes for TRACE packets contain **SNR values** (one per completed hop), not hop IDs.

`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which for TRACE packets contained the payload-derived hops — not the header path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex` to describe completely different paths.

## Fix

- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops directly from raw hex bytes, independent of any TRACE payload overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of using `decoded.Path.Hops`, guaranteeing `path_json` always matches the `raw_hex` path section.

## Tests (8 new)

**`DecodePathFromRawHex` unit tests:**

- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)

**`BuildPacketData` integration tests:**

- TRACE packet: asserts path_json matches raw_hex header path (not payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path

All existing tests continue to pass (`go test ./...` for both ingestor and server).

Fixes #886

---------

Co-authored-by: you <you@example.com> |
||
|
|
a605518d6d |
fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte sequence for the same transmission (different path bytes, flags/hops remaining). The `observations` table stored only `path_json` per observer — all observations pointed at one `transmissions.raw_hex`. This prevented the hex pane from updating when switching observations in the packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com> |
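The server-side half of the `COALESCE(o.raw_hex, t.raw_hex)` fallback, described above as "`enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex`", reduces to a two-line preference rule. The function name below is illustrative; only the precedence comes from the PR:

```go
package main

// effectiveRawHex mirrors COALESCE(o.raw_hex, t.raw_hex) on the Go side:
// prefer the observation's own over-the-air bytes when present, otherwise
// fall back to the transmission-level hex (historical rows where the new
// nullable column was never populated).
func effectiveRawHex(obsRawHex, txRawHex string) string {
	if obsRawHex != "" {
		return obsRawHex
	}
	return txRawHex
}
```

Keeping the same precedence in both the SQL view and the Go enrichment path means old rows render identically whichever layer serves them.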
||
|
|
42ff5a291b |
fix(#866): full-page obs-switch — update hex + path + direction per observation (#870)
## Problem

On `/#/packets/<hash>?obs=<id>`, clicking a different observation updated summary fields (Observer, SNR/RSSI, Timestamp) but **not** hex payload or path details. Sister bug to #849 (fixed in #851 for the detail dialog).

## Root Causes

| Cause | Impact |
|-------|--------|
| `selectPacket` called `renderDetail` without `selectedObservationId` | Initial render missed observation context on some code paths |
| `ObservationResp` missing `direction`, `resolved_path`, `raw_hex` | Frontend obs-switch lost direction and resolved_path context |
| `obsPacket` construction omitted `direction` field | Direction not preserved when switching observations |

## Fix

- `selectPacket` explicitly passes `selectedObservationId` to `renderDetail`
- `ObservationResp` gains `Direction`, `ResolvedPath`, `RawHex` fields
- `mapSliceToObservations` copies the three new fields
- `obsPacket` spreads include `direction` from the observation

## Tests

7 new tests in `test-frontend-helpers.js`:

- Observation switch updates `effectivePkt` path
- `raw_hex` preserved from packet when obs has none
- `raw_hex` from obs overrides when API provides it
- `direction` carried through observation spread
- `resolved_path` carried through observation spread
- `getPathLenOffset` cross-check for transport routes
- URL hash `?obs=` round-trip encoding

All 584 frontend + 62 filter + 29 aging tests pass. Go server tests pass.

Fixes #866

Co-authored-by: you <you@example.com> |