Commit Graph

58 Commits

you 1f84b8e477 feat: server-side hop resolution at ingest — resolved_path (#555)
Implements server-side hop prefix resolution at ingest time with a
persisted neighbor graph, replacing client-side HopResolver for path
resolution.

## Changes

### M1: Persisted neighbor graph (neighbor_edges table)
- New SQLite table neighbor_edges (node_a, node_b, count, last_seen)
- Load from SQLite on startup → build in-memory NeighborGraph
- First-run backfill: scan all packets, extract edges per ADVERT/non-ADVERT rules
- Incremental edge upserts during ingest (both in-memory and SQLite)
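A minimal sketch of the incremental SQLite upsert (the helper name is hypothetical; it assumes a UNIQUE constraint on `(node_a, node_b)` and canonical pair ordering):

```go
package store

import (
	"database/sql"
	"time"
)

// upsertNeighborEdge bumps count and last_seen for one neighbor pair,
// inserting the row on first sight. Assumes UNIQUE(node_a, node_b).
func upsertNeighborEdge(db *sql.DB, a, b string, seenAt time.Time) error {
	if a > b {
		a, b = b, a // canonical ordering so (A,B) and (B,A) share one row
	}
	_, err := db.Exec(`
		INSERT INTO neighbor_edges (node_a, node_b, count, last_seen)
		VALUES (?, ?, 1, ?)
		ON CONFLICT(node_a, node_b)
		DO UPDATE SET count = count + 1, last_seen = excluded.last_seen`,
		a, b, seenAt.UTC().Format(time.RFC3339))
	return err
}
```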

### M2: resolved_path column on observations
- ALTER TABLE observations ADD COLUMN resolved_path TEXT
- Resolve hop prefixes at ingest using resolveWithContext with 4-tier priority
  (affinity → geo → GPS → first match)
- Store as JSON array of full 64-char lowercase hex pubkeys (null for unresolved)
- Cold startup backfill for observations without resolved_path
- ResolvedPath field added to StoreObs and StoreTx structs
- Propagated through pickBestObservation to transmission level

### M3: API and WebSocket broadcast
- resolved_path included in all packet/observation API responses (omitempty)
- Included in WebSocket broadcast messages per observation
- TransmissionResp and ObservationResp types updated

### Call site migration
- All 7 pm.resolve() call sites migrated to pm.resolveWithContext() with
  the persisted graph (store.go: distance index, topology, subpaths,
  subpath detail; routes.go: node paths)
- pm.resolve() retained for test compatibility but no longer used in prod

### Schema compatibility
- DB.hasResolvedPath flag detects column presence at startup
- SQL queries dynamically include/exclude resolved_path column
- Graceful degradation when column doesn't exist (tests, old DBs)

Fixes #555
2026-04-04 04:48:43 +00:00
Kpa-clawbot 9a39198d92 fix: only count repeaters in hash collision analysis (#441) (#548)
Fixes #441

## Summary

Hash collision analysis was including ALL node types, inflating
collision counts with irrelevant data. Per MeshCore firmware analysis,
**only repeaters matter for collision analysis** — they're the only role
that forwards packets and appears in routing `path[]` arrays.

## Root Causes Fixed

1. **`hash_size==0` nodes counted in all buckets** — nodes with unknown
hash size were included via `cn.HashSize == bytes || cn.HashSize == 0`,
polluting every bucket
2. **Non-repeater roles included** — companions, rooms, sensors, and
observers were counted even though their hash collisions never cause
routing ambiguity

## Fix

Changed `computeHashCollisions()` filter from:
```go
// Before: include everything except companions
if cn.HashSize == bytes && cn.Role != "companion" {
```
To:
```go
// After: only include repeaters (per firmware analysis)
if cn.HashSize == bytes && cn.Role == "repeater" {
```

## Why only repeaters?

From [MeshCore firmware
analysis](https://github.com/Kpa-clawbot/CoreScope/issues/441#issuecomment-4185218547):
- Only repeaters override `allowPacketForward()` to return `true`
- Only repeaters append their hash to `path[]` during relay
- Companions, rooms, sensors, observers never forward packets
- Cross-role collisions are benign (companion silently drops, real
repeater still forwards)

## Tests
- `TestHashCollisionsOnlyRepeaters` — verifies companions, rooms,
sensors, and hash_size==0 nodes are all excluded

---------

Co-authored-by: you <you@example.com>
2026-04-03 14:23:13 -07:00
Kpa-clawbot 59bff5462c fix: rate-limit cache invalidation to prevent 0% hit rate (#533) (#546)
## Summary

Fixes #533 — server cache hit rate always 0%.

## Root Cause

`invalidateCachesFor()` is called at the end of every
`IngestNewFromDB()` and `IngestNewObservations()` cycle (~2-5s). Since
new data arrives continuously, caches are cleared faster than any
analytics request can hit them, resulting in a permanent 0% cache hit
rate. The cache TTL (15s/60s) is irrelevant because entries are evicted
by invalidation long before they expire.

## Fix

Rate-limit cache invalidation with a 10-second cooldown:

- First call after cooldown goes through immediately
- Subsequent calls during cooldown accumulate dirty flags in
`pendingInv`
- Next call after cooldown merges pending + current flags and applies
them
- Eviction bypasses cooldown (data removal requires immediate clearing)

Analytics data may be at most ~10s stale, which is acceptable for a
dashboard.
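
A minimal sketch of the cooldown logic, using the field names listed under Changes below; the flag set shown and the locking are illustrative assumptions:

```go
package store

import (
	"sync"
	"time"
)

// cacheInvalidation is a trimmed-down stand-in for the dirty-flag struct;
// only two non-eviction flags are shown here.
type cacheInvalidation struct {
	hasNewObservations bool
	hasNewPaths        bool
	eviction           bool
}

// PacketStore here is a stand-in showing only the fields the cooldown needs.
type PacketStore struct {
	mu              sync.Mutex
	lastInvalidated time.Time
	invCooldown     time.Duration // e.g. 10 * time.Second
	pendingInv      cacheInvalidation
}

func (s *PacketStore) invalidateCachesFor(inv cacheInvalidation) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if inv.eviction {
		// Data removal bypasses the cooldown and clears immediately.
		s.applyCacheInvalidation(inv)
		s.lastInvalidated = time.Now()
		return
	}
	if time.Since(s.lastInvalidated) < s.invCooldown {
		// Cooling down: just accumulate dirty flags, clear nothing yet.
		s.pendingInv.hasNewObservations = s.pendingInv.hasNewObservations || inv.hasNewObservations
		s.pendingInv.hasNewPaths = s.pendingInv.hasNewPaths || inv.hasNewPaths
		return
	}
	// Cooldown expired: merge pending + current flags and apply once.
	inv.hasNewObservations = inv.hasNewObservations || s.pendingInv.hasNewObservations
	inv.hasNewPaths = inv.hasNewPaths || s.pendingInv.hasNewPaths
	s.pendingInv = cacheInvalidation{}
	s.applyCacheInvalidation(inv)
	s.lastInvalidated = time.Now()
}

// applyCacheInvalidation clears only the caches the flags mark dirty (elided).
func (s *PacketStore) applyCacheInvalidation(inv cacheInvalidation) {}
```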

## Changes

- **`store.go`**: Added `lastInvalidated`, `pendingInv`, `invCooldown`
fields. Refactored `invalidateCachesFor()` to rate-limit non-eviction
invalidation. Extracted `applyCacheInvalidation()` helper.
- **`cache_invalidation_test.go`**: Added 4 new tests:
  - `TestInvalidationRateLimited` — verifies caches survive during cooldown
  - `TestInvalidationCooldownAccumulatesFlags` — verifies flag merging
  - `TestEvictionBypassesCooldown` — verifies eviction always clears immediately
  - `BenchmarkCacheHitDuringIngestion` — confirms 100% hit rate during rapid ingestion (was 0%)

## Perf Proof

```
BenchmarkCacheHitDuringIngestion-16    3467889    1018 ns/op    100.0 hit%
```

Before: 0% hit rate under continuous ingestion. After: 100% hit rate
during cooldown periods.

Co-authored-by: you <you@example.com>
2026-04-03 13:53:58 -07:00
Kpa-clawbot 8c1cd8a9fe perf: track advert pubkeys incrementally, eliminate per-request JSON parsing (#360) (#544)
## Summary

`GetPerfStoreStats()` and `GetPerfStoreStatsTyped()` iterated **all**
ADVERT packets and called `json.Unmarshal` on each one — under a read
lock — on every `/api/perf` and `/api/health` request. With 5K+ adverts,
each health check triggered thousands of JSON parses.

## Fix

Added a refcounted `advertPubkeys map[string]int` to `PacketStore` that
tracks distinct pubkeys incrementally during `Load()`,
`IngestNewFromDB()`, and eviction. The perf/health handlers now just
read `len(s.advertPubkeys)` — O(1) with zero allocations.
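
A rough sketch of the refcounting (method names are hypothetical; the real code hangs the map off `PacketStore`):

```go
package store

// advertPubkeys: pubkey -> number of ADVERT packets currently in the store.
type advertTracker struct {
	advertPubkeys map[string]int
}

// onAdvertAdded is called from Load()/IngestNewFromDB() for each ADVERT.
func (t *advertTracker) onAdvertAdded(pubkey string) {
	if pubkey == "" {
		return
	}
	t.advertPubkeys[pubkey]++
}

// onAdvertEvicted decrements the refcount; the pubkey only leaves the
// distinct count once its last packet is evicted.
func (t *advertTracker) onAdvertEvicted(pubkey string) {
	n := t.advertPubkeys[pubkey]
	switch {
	case n <= 0:
		return
	case n == 1:
		delete(t.advertPubkeys, pubkey)
	default:
		t.advertPubkeys[pubkey] = n - 1
	}
}

// distinctAdvertPubkeys is the O(1), zero-allocation read used by /api/perf.
func (t *advertTracker) distinctAdvertPubkeys() int {
	return len(t.advertPubkeys)
}
```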

## Benchmark Results (5K adverts, 200 distinct pubkeys)

| Method | ns/op | allocs/op |
|--------|-------|-----------|
| `GetPerfStoreStatsTyped` | **78** | **0** |
| `GetPerfStoreStats` | **2,565** | **9** |

Before this change, both methods performed O(N) JSON unmarshals per
call.

## Tests Added

- `TestAdvertPubkeyTracking` — verifies incremental tracking through
add/evict lifecycle
- `TestAdvertPubkeyPublicKeyField` — covers the `public_key` JSON field
variant
- `TestAdvertPubkeyNonAdvert` — ensures non-ADVERT packets don't affect
count
- `BenchmarkGetPerfStoreStats` — 5K adverts benchmark
- `BenchmarkGetPerfStoreStatsTyped` — 5K adverts benchmark

Fixes #360

---------

Co-authored-by: you <you@example.com>
2026-04-03 13:51:13 -07:00
Kpa-clawbot 9b9f396af5 perf: replace O(n²) observation dedup with map-based O(n) (#355) (#543)
## Summary

Fixes #355 — replaces O(n²) observation dedup in `Load()`,
`IngestNewFromDB()`, and `IngestNewObservations()` with an O(1)
map-based lookup.

## Changes

- Added `obsKeys map[string]bool` field to `StoreTx` for O(1) dedup
keyed on `observerID + "|" + pathJSON`
- Replaced all 3 linear-scan dedup sites in `store.go` with map lookups
- Lazy-init `obsKeys` for transmissions created before this change (in
`IngestNewFromDB` and `IngestNewObservations`)
- Added regression test (`TestObsDedupCorrectness`) verifying dedup
correctness
- Added nil-map safety test (`TestObsDedupNilMapSafety`)
- Added benchmark comparing map vs linear scan
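
A minimal sketch of the dedup key and lazy init (the function name is illustrative):

```go
package store

// markObservation reports whether the (observer, path) pair is new for a
// transmission, lazily initializing obsKeys for StoreTx values created
// before the field existed.
func markObservation(obsKeys map[string]bool, observerID, pathJSON string) (map[string]bool, bool) {
	if obsKeys == nil {
		obsKeys = make(map[string]bool) // lazy init for pre-existing transmissions
	}
	key := observerID + "|" + pathJSON
	if obsKeys[key] {
		return obsKeys, false // duplicate observation — skip
	}
	obsKeys[key] = true
	return obsKeys, true
}
```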

## Benchmark Results (ARM64, 16 cores)

| Observations | Map (O(1)) | Linear (O(n)) | Speedup |
|---|---|---|---|
| 10 | 34 ns/op | 41 ns/op | 1.2x |
| 50 | 34 ns/op | 186 ns/op | 5.5x |
| 100 | 34 ns/op | 361 ns/op | 10.6x |
| 500 | 34 ns/op | 4,903 ns/op | **146x** |

Map lookup is constant time regardless of observation count. The linear
scan degrades quadratically — at 500 observations per transmission
(realistic for popular packets seen by many observers), the old code is
146x slower per dedup check.

All existing tests pass.

---------

Co-authored-by: you <you@example.com>
2026-04-03 13:33:26 -07:00
efiten 709e5a4776 fix: observer filter drops groups in grouped packets view (#464) (#531)
## Summary

- When `groupByHash=true`, each group only carries its representative
(best-path) `observer_id`. The client-side filter was checking only that
field, silently dropping groups that were seen by the selected observer
but had a different representative.
- `loadPackets` now passes the `observer` param to the server so
`filterPackets`/`buildGroupedWhere` do the correct "any observation
matches" check.
- Client-side observer filter in `renderTableRows` is skipped for
grouped mode (server already filtered correctly).
- Both `db.go` and `store.go` observer filtering extended to support
comma-separated IDs (multi-select UI).

## Test plan

- [ ] Set an observer filter on the Packets screen with grouping enabled
— all groups that have **any** observation from the selected observer(s)
should appear, not just groups where that observer is the representative
- [ ] Multi-select two observers — groups seen by either should appear
- [ ] Toggle to flat (ungrouped) mode — per-observation filter still
works correctly
- [ ] Existing grouped packets tests pass: `cd cmd/server && go test
./...`

Fixes #464

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: you <you@example.com>
2026-04-03 09:22:37 -07:00
Kpa-clawbot 5151030697 feat: affinity-aware hop resolution (#482) — milestone 4 (#511)
## Summary

Milestone 4 of #482: adds affinity-aware hop resolution to improve
disambiguation accuracy across all hop resolution in the app.

### What changed

**Backend — `prefixMap.resolveWithContext()` (store.go)**

New method that applies a 4-tier disambiguation priority when multiple
nodes match a hop prefix:

| Priority | Strategy | When it wins |
|----------|----------|-------------|
| 1 | **Affinity graph score** | Neighbor graph has data, score ratio ≥ 3× runner-up |
| 2 | **Geographic proximity** | Context nodes have GPS, pick closest candidate |
| 3 | **GPS preference** | At least one candidate has coordinates |
| 4 | **First match** | No signal — current naive fallback |

The existing `resolve()` method is unchanged for backward compatibility.
New callers that have context (originator, observer, adjacent hops) can
use `resolveWithContext()` for better results.
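
A rough sketch of the tier cascade (types, helper names, and the non-affinity details are illustrative assumptions; only the ≥3× runner-up rule comes from the table above):

```go
package store

import "math"

// candidate is a trimmed stand-in for a node matching a hop prefix.
type candidate struct {
	pubkey   string
	hasGPS   bool
	lat, lon float64
	affinity float64 // neighbor-graph score against the resolution context
}

func resolveWithContextSketch(cands []candidate, ctxLat, ctxLon float64, ctxHasGPS bool) string {
	if len(cands) == 0 {
		return ""
	}
	if len(cands) == 1 {
		return cands[0].pubkey
	}
	// Tier 1: affinity graph score — wins when the best score is ≥ 3× the runner-up.
	bestIdx, best, runnerUp := -1, -1.0, -1.0
	for i, c := range cands {
		if c.affinity > best {
			runnerUp, best, bestIdx = best, c.affinity, i
		} else if c.affinity > runnerUp {
			runnerUp = c.affinity
		}
	}
	if best > 0 && (runnerUp <= 0 || best >= 3*runnerUp) {
		return cands[bestIdx].pubkey
	}
	// Tier 2: geographic proximity to the context node (originator/observer).
	if ctxHasGPS {
		closestIdx, closestDist := -1, math.MaxFloat64
		for i, c := range cands {
			if c.hasGPS {
				if d := geoDistApprox(ctxLat, ctxLon, c.lat, c.lon); d < closestDist {
					closestIdx, closestDist = i, d
				}
			}
		}
		if closestIdx >= 0 {
			return cands[closestIdx].pubkey
		}
	}
	// Tier 3: prefer any candidate that at least has coordinates.
	for _, c := range cands {
		if c.hasGPS {
			return c.pubkey
		}
	}
	// Tier 4: first match — the original naive fallback.
	return cands[0].pubkey
}

// geoDistApprox: equirectangular approximation in km (the real helper may differ).
func geoDistApprox(lat1, lon1, lat2, lon2 float64) float64 {
	x := (lon2 - lon1) * math.Pi / 180 * math.Cos((lat1+lat2)/2*math.Pi/180)
	y := (lat2 - lat1) * math.Pi / 180
	return 6371 * math.Sqrt(x*x+y*y)
}
```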

**API — `handleResolveHops` (routes.go)**

Enhanced `/api/resolve-hops` endpoint:
- New query params: `from_node`, `observer` — provide context for
affinity scoring
- New response fields on `HopCandidate`: `affinityScore` (float,
0.0–1.0)
- New response fields on `HopResolution`: `bestCandidate` (pubkey when
confident), `confidence` (one of `unique_prefix`, `neighbor_affinity`,
`ambiguous`)
- Backward compatible: without context params, behavior is identical to
before (just adds `confidence` field)

**Types (types.go)**
- `HopCandidate.AffinityScore *float64`
- `HopResolution.BestCandidate *string`
- `HopResolution.Confidence string`

### Tests

- 7 unit tests for `resolveWithContext` covering all 4 priority tiers +
edge cases
- 2 unit tests for `geoDistApprox`
- 4 API tests for enhanced `/api/resolve-hops` response shape
- All existing tests pass (no regressions)

### Impact

This improves ALL hop resolution across the app — analytics, route
display, subpath analysis, and any future feature that resolves hop
prefixes. The affinity graph (from M1/M2) now feeds directly into
disambiguation decisions.

Part of #482

---------

Co-authored-by: you <you@example.com>
2026-04-02 22:28:07 -07:00
efiten c173ab7e80 perf: skip JSON parse in indexByNode when no pubkey fields present (#376) (#499)
## Summary
- `indexByNode` was calling `json.Unmarshal` for every packet during
`Load()` and `IngestNewFromDB()`, even channel messages and other
payloads that can never contain node pubkey fields
- All three target fields (`"pubKey"`, `"destPubKey"`, `"srcPubKey"`)
share the common substring `"ubKey"` — added a `strings.Contains`
pre-check that skips the JSON parse entirely for packets that don't
match
- At 30K+ packets on startup, this eliminates the majority of
`json.Unmarshal` calls in `indexByNode` (channel messages, status
packets, etc. all bypass it)
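
A minimal sketch of the pre-check described above (struct and function names are illustrative):

```go
package store

import (
	"encoding/json"
	"strings"
)

// nodePubkeys skips the JSON parse for payloads that cannot contain any of
// the target fields: "pubKey", "destPubKey", and "srcPubKey" all share the
// substring "ubKey".
func nodePubkeys(decodedJSON string) []string {
	if !strings.Contains(decodedJSON, "ubKey") {
		return nil // channel messages, status packets, etc. bail out here
	}
	var fields struct {
		PubKey     string `json:"pubKey"`
		DestPubKey string `json:"destPubKey"`
		SrcPubKey  string `json:"srcPubKey"`
	}
	if err := json.Unmarshal([]byte(decodedJSON), &fields); err != nil {
		return nil
	}
	var keys []string
	for _, k := range []string{fields.PubKey, fields.DestPubKey, fields.SrcPubKey} {
		if k != "" {
			keys = append(keys, k)
		}
	}
	return keys
}
```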

## Test plan
- [x] 5 new subtests in `TestIndexByNodePreCheck`: ADVERT with pubKey
indexed, destPubKey indexed, channel message skipped, empty JSON
skipped, duplicate hash deduped
- [x] All existing Go tests pass
- [x] Deploy to staging and verify node-filtered packet queries still
work correctly

Closes #376

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: you <you@example.com>
2026-04-02 17:44:02 -07:00
Jukka Väisänen 4664c90db4 fix: skip zero-hop adverts when checking node hash size (#493)
Fixes the issue of router IDs flapping between 1-byte and multi-byte hash
sizes, as described in https://github.com/Kpa-clawbot/CoreScope/issues/303,
with a minimal patch plus test coverage.

This fix is critical for regions using multi-byte IDs.

Closes https://github.com/Kpa-clawbot/CoreScope/issues/303

---------

Co-authored-by: you <you@example.com>
2026-04-03 00:33:20 +00:00
Kpa-clawbot 6712da7d7c fix: add region filtering to hash-collisions endpoint (#477)
## Summary

The `/api/analytics/hash-collisions` endpoint always returned global
results, ignoring the active region filter. Every other analytics
endpoint (RF, topology, hash-sizes, channels, distance, subpaths)
respected the `?region=` query parameter — this was the only one that
didn't.

Fixes #438

## Changes

### Backend (`cmd/server/`)

- **routes.go**: Extract `region` query param and pass to
  `GetAnalyticsHashCollisions(region)`
- **store.go**:
  - `collisionCache` changed from `*cachedResult` →
    `map[string]*cachedResult` (keyed by region, `""` = global) — consistent
    with `rfCache`, `topoCache`, etc.
  - `GetAnalyticsHashCollisions(region)` and `computeHashCollisions(region)`
    now accept a region parameter
  - When region is specified, resolves regional observers, scans packets for
    nodes seen by those observers, and filters the node list before computing
    collisions
  - Cache invalidation updated to clear the map (not set to nil)
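
A minimal sketch of the per-region keying (the `cachedResult` field names and method names are illustrative assumptions):

```go
package store

import (
	"sync"
	"time"
)

// cachedResult stands in for the existing TTL-cache entry type.
type cachedResult struct {
	data       any
	computedAt time.Time
}

type collisionCache struct {
	mu      sync.Mutex
	entries map[string]*cachedResult // keyed by region, "" = global
	ttl     time.Duration
}

// get returns a fresh cached result for the region, or nil on miss/expiry.
func (c *collisionCache) get(region string) any {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[region]; ok && time.Since(e.computedAt) < c.ttl {
		return e.data
	}
	return nil
}

// put stores a freshly computed result under its region key.
func (c *collisionCache) put(region string, data any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[region] = &cachedResult{data: data, computedAt: time.Now()}
}

// invalidate clears the map rather than setting it to nil, so later puts
// never hit a nil map.
func (c *collisionCache) invalidate() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries = make(map[string]*cachedResult)
}
```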

### Frontend (`public/`)

- **analytics.js**: The hash-collisions fetch was missing `+ sep` (the
region query string). All other fetches in the same `Promise.all` block
had it — this was simply overlooked in PR #415.
- **index.html**: Cache busters bumped

### Tests (`cmd/server/routes_test.go`)

- `TestHashCollisionsRegionParamIgnored` → renamed to
`TestHashCollisionsRegionParam` with updated comments reflecting that
region is now accepted (with no configured regional observers, results
match global — which the test verifies)

## Performance

No new hot-path work. Region filtering adds one scan of `s.packets`
(same as every other region-filtered analytics endpoint) only when
`?region=` is provided. Results are cached per-region with the existing
60s TTL. Without `?region=`, behavior is unchanged.

Co-authored-by: you <you@example.com>
2026-04-01 23:27:34 -07:00
Kpa-clawbot 01ca843309 perf: move collision analysis to server-side endpoint (fixes #386) (#415)
## Summary

Moves the hash collision analysis from the frontend to a new server-side
endpoint, eliminating a major performance bottleneck on the analytics
collision tab.

Fixes #386

## Problem

The collision tab was:
1. **Downloading all nodes** (`/nodes?limit=2000`) — ~500KB+ of data
2. **Running O(n²) pairwise distance calculations** on the browser main
thread (~2M comparisons with 2000 nodes)
3. **Building prefix maps client-side** (`buildOneBytePrefixMap`,
`buildTwoBytePrefixInfo`, `buildCollisionHops`) iterating all nodes
multiple times

## Solution

### New endpoint: `GET /api/analytics/hash-collisions`

Returns pre-computed collision analysis with:
- `inconsistent_nodes` — nodes with varying hash sizes
- `by_size` — per-byte-size (1, 2, 3) collision data:
  - `stats` — node counts, space usage, collision counts
  - `collisions` — pre-computed collisions with pairwise distances and
    classifications (local/regional/distant/incomplete)
  - `one_byte_cells` — 256-cell prefix map for 1-byte matrix rendering
  - `two_byte_cells` — first-byte-grouped data for 2-byte matrix rendering

### Caching

Uses the existing `cachedResult` pattern with a new `collisionCache`
map. Invalidated on `hasNewTransmissions` (same trigger as the
hash-sizes cache) and on eviction.

### Frontend changes

- `renderCollisionTab` now accepts pre-fetched `collisionData` from the
parallel API load
- New `renderHashMatrixFromServer` and `renderCollisionsFromServer`
functions consume server-computed data directly
- No more `/nodes?limit=2000` fetch from the collision tab
- Old client-side functions (`buildOneBytePrefixMap`, etc.) preserved
for test helper exports

## Test results

- `go test ./...` (server): pass
- `go test ./...` (ingestor): pass
- `test-packet-filter.js`: 62 passed
- `test-aging.js`: 29 passed
- `test-frontend-helpers.js`: 227 passed

## Performance impact

| Metric | Before | After |
|--------|--------|-------|
| Data transferred | ~500KB (all nodes) | ~50KB (collision data only) |
| Client computation | O(n²) distance calc | None (server-cached) |
| Main thread blocking | Yes (2000 nodes × pairwise) | No |
| Server caching | N/A | 15s TTL, invalidated on new transmissions |

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-04-01 09:21:23 -07:00
Kpa-clawbot 47d081c705 perf: targeted analytics cache invalidation (fixes #375) (#379)
## Problem

Every time new data is ingested (`IngestNewFromDB`,
`IngestNewObservations`, `EvictStale`), **all 6 analytics caches** are
wiped by creating new empty maps — regardless of what kind of data
actually changed. With the poller running every 1 second, this means the
15s cache TTL is effectively bypassed because caches are cleared far
more frequently than they expire.

## Fix

Introduces a `cacheInvalidation` flags struct and
`invalidateCachesFor()` method that selectively clears only the caches
affected by the ingested data:

| Flag | Caches Cleared |
|------|----------------|
| `hasNewObservations` | RF (SNR/RSSI data changed) |
| `hasNewPaths` | Topology, Distance, Subpaths |
| `hasNewTransmissions` | Hash sizes |
| `hasChannelData` | Channels (GRP_TXT payload_type 5) + channels list cache |
| `eviction` | All (data removed, everything potentially stale) |
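
A minimal sketch of the flag-to-cache mapping above (the cache container and field names are illustrative assumptions):

```go
package store

// analyticsCaches is a stand-in for the six analytics caches.
type analyticsCaches struct {
	rf, topo, dist, subpath, hash, channels map[string]any
}

type cacheInvalidation struct {
	hasNewObservations  bool
	hasNewPaths         bool
	hasNewTransmissions bool
	hasChannelData      bool
	eviction            bool
}

func fresh() map[string]any { return make(map[string]any) }

// invalidateCachesFor clears only the caches affected by the ingested data.
func (c *analyticsCaches) invalidateCachesFor(inv cacheInvalidation) {
	if inv.eviction {
		// Data was removed — everything is potentially stale.
		c.rf, c.topo, c.dist = fresh(), fresh(), fresh()
		c.subpath, c.hash, c.channels = fresh(), fresh(), fresh()
		return
	}
	if inv.hasNewObservations {
		c.rf = fresh() // SNR/RSSI data changed
	}
	if inv.hasNewPaths {
		c.topo, c.dist, c.subpath = fresh(), fresh(), fresh()
	}
	if inv.hasNewTransmissions {
		c.hash = fresh()
	}
	if inv.hasChannelData {
		c.channels = fresh() // the separate channels list cache is cleared here too
	}
}
```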

### Impact

For a typical ingest cycle with ADVERT/ACK/TXT_MSG packets (no GRP_TXT):
- **Before:** All 6 caches cleared every cycle
- **After:** Channel cache preserved (most common case), hash cache
preserved on observation-only ingestion

For observation-only ingestion (`IngestNewObservations`):
- **Before:** All 6 caches cleared
- **After:** Only RF cache cleared (+ topo/dist/subpath if paths
actually changed)

## Tests

7 new unit tests in `cache_invalidation_test.go` covering:
- Eviction clears all caches
- Observation-only ingest preserves non-RF caches
- Transmission-only ingest clears only hash cache
- Channel data clears only channel cache
- Path changes clear topo/dist/subpath
- Combined flags work correctly
- No flags = no invalidation

All existing tests pass.

### Post-rebase fix

Restored `channelsCacheRes` invalidation that was accidentally dropped
during the refactor. The old code cleared this separate channels list
cache on every ingest, but `invalidateCachesFor()` didn't include it.
Now cleared on `hasChannelData` and `eviction`.

Fixes #375

---------

Co-authored-by: you <you@example.com>
2026-04-01 07:37:39 -07:00
Kpa-clawbot be313f60cb fix: extract score/direction from MQTT, strip units, fix type safety issues (#371)
## Summary

Fixes #353 — addresses all 5 findings from the CoreScope code analysis.

## Changes

### Finding 1 (Major): `score` field never extracted from MQTT
- Added `Score *float64` field to `PacketData` and `MQTTPacketMessage`
structs
- Extract `msg["score"]` with `msg["Score"]` case fallback via
`toFloat64` in all three MQTT handlers (raw packet, channel message,
direct message)
- Pass through to DB observation insert instead of hardcoded `nil`

### Finding 2 (Major): `direction` field never extracted from MQTT
- Added `Direction *string` field to `PacketData` and
`MQTTPacketMessage` structs
- Extract `msg["direction"]` with `msg["Direction"]` case fallback as
string in all three MQTT handlers
- Pass through to DB observation insert instead of hardcoded `nil`

### Finding 3 (Minor): `toFloat64` doesn't strip units
- Added `stripUnitSuffix()` that removes common RF/signal unit suffixes
(dBm, dB, mW, km, mi, m) case-insensitively before `ParseFloat`
- Values like `"-110dBm"` or `"5.5dB"` now parse correctly
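
A minimal sketch of the suffix stripping (the suffix list comes from the bullet above; the longest-match-first ordering is an assumption so "dBm" wins over "dB"):

```go
package ingest

import (
	"strconv"
	"strings"
)

// stripUnitSuffix removes common RF/signal unit suffixes before parsing.
func stripUnitSuffix(s string) string {
	s = strings.TrimSpace(s)
	for _, suffix := range []string{"dbm", "db", "mw", "km", "mi", "m"} {
		if strings.HasSuffix(strings.ToLower(s), suffix) {
			return strings.TrimSpace(s[:len(s)-len(suffix)])
		}
	}
	return s
}

// toFloat64Units shows the intended effect: "-110dBm" and "5.5dB" now parse.
func toFloat64Units(s string) (float64, error) {
	return strconv.ParseFloat(stripUnitSuffix(s), 64)
}
```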

### Finding 4 (Minor): Bare type assertions in store.go
- Changed `firstSeen` and `lastSeen` from `interface{}` to typed
`string` variables at `store.go:5020`
- Removed unsafe `.(string)` type assertions in comparisons

### Finding 5 (Minor): `distHopRecord.SNR` typed as `interface{}`
- Changed `distHopRecord.SNR` from `interface{}` to `*float64`
- Updated assignment (removed intermediate `snrVal` variable, pass
`tx.SNR` directly)
- Updated output serialization to use `floatPtrOrNil(h.SNR)` for
consistent JSON output

## Tests Added

- `TestBuildPacketDataScoreAndDirection` — verifies Score/Direction flow
through BuildPacketData
- `TestBuildPacketDataNilScoreDirection` — verifies nil handling when
fields absent
- `TestInsertTransmissionWithScoreAndDirection` — end-to-end: inserts
with score/direction, verifies DB values
- `TestStripUnitSuffix` — covers all supported suffixes, case
insensitivity, and passthrough
- `TestToFloat64WithUnits` — verifies unit-bearing strings parse
correctly

All existing tests pass.

Co-authored-by: you <you@example.com>
2026-04-01 07:26:23 -07:00
efiten 7e8b30aa1f perf: fix slow /api/packets and /api/channels on large stores (#328)
## Problem

Two endpoints were slow on larger installations:

**`/packets?limit=50000&groupByHash=true` — 16s+**
`QueryGroupedPackets` did two expensive things on every request:
1. O(n × observations) scan per packet to find `latest` timestamp
2. Held `s.mu.RLock()` during the O(n log n) sort, blocking all
concurrent reads

**`/channels` — 13s+**
`GetChannels` iterated all payload-type-5 packets and JSON-unmarshaled
each one while holding `s.mu.RLock()`, blocking all concurrent reads for
the full duration.

## Fix

**Packets (`QueryGroupedPackets`):**
- Add `LatestSeen string` to `StoreTx`, maintained incrementally in all
three observation write paths. Eliminates the per-packet observation
scan at query time.
- Build output maps under the read lock, sort the local copy after
releasing it.
- Cache the full sorted result for 3 seconds keyed by filter params.

**Channels (`GetChannels`):**
- Copy only the fields needed (firstSeen, decodedJSON, region match)
under the read lock, then release before JSON unmarshaling.
- Cache the result for 15 seconds keyed by region param.
- Invalidate cache on new packet ingestion.
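
Both fixes follow the same pattern: copy what the response needs under the read lock, release the lock, then do the expensive work on the local copy. A rough sketch for the grouped-packets case (names are illustrative, not the real API):

```go
package store

import (
	"sort"
	"sync"
)

type groupRow struct {
	Hash       string
	LatestSeen string // maintained incrementally on every observation write
}

// queryGroupedSketch copies rows under the read lock, then sorts the local
// copy after releasing it so concurrent reads are not blocked by the sort.
func queryGroupedSketch(mu *sync.RWMutex, groups map[string]*groupRow) []groupRow {
	mu.RLock()
	out := make([]groupRow, 0, len(groups))
	for _, g := range groups {
		out = append(out, *g) // copy only what the response needs
	}
	mu.RUnlock()

	// O(n log n) sort happens outside the lock, newest first.
	sort.Slice(out, func(i, j int) bool { return out[i].LatestSeen > out[j].LatestSeen })
	return out
}
```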

## Test plan
- [ ] Open packets page on a large store — load time should drop from
16s to <1s
- [ ] Open channels page — should load in <100ms instead of 13s+
- [ ] `[SLOW API]` warnings gone for both endpoints
- [ ] Packet/channel data is correct (hashes, counts, observer counts)
- [ ] Filters (region, type, since/until) still work correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-01 07:06:21 -07:00
Kpa-clawbot 738d5fef39 fix: poller uses store max IDs to prevent replaying entire DB
When GetMaxTransmissionID() fails silently (e.g., corrupted DB returns 0
from COALESCE), the poller starts from ID 0 and replays the entire
database over WebSocket — broadcasting thousands of old packets per second.

Fix: after querying the DB, use the in-memory store's MaxTransmissionID
and MaxObservationID as a floor. Since Load() already read the full DB
successfully, the store has the correct max IDs.
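
A minimal sketch of the floor (function name is illustrative):

```go
package poller

// startingTxID picks the poll cursor: the DB's MAX(id) result, floored by
// the in-memory store's max. If the DB query silently returns 0 (e.g. a
// corrupted DB), the store value prevents replaying the whole table.
func startingTxID(dbMaxID, storeMaxID int64) int64 {
	if storeMaxID > dbMaxID {
		return storeMaxID
	}
	return dbMaxID
}
```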

Root cause discovered on staging: DB corruption caused MAX(id) query to
fail, returning 0. Poller log showed 'starting from transmission ID 0'
followed by 1000-2000 broadcasts per tick walking through 76K rows.

Also adds MaxObservationID() to PacketStore for observation cursor safety.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-31 23:28:56 -07:00
Kpa-clawbot 4cdc554b40 fix: use latest advert for node hash size instead of historical mode (#341)
## Summary

Fixes #303 — Repeater hash stats now reflect the **latest advert**
instead of the historical mode (most frequent).

When a node is reconfigured (e.g. from 1-byte to 2-byte hash size), the
analytics and node detail pages now show the updated value immediately
after the next advert is received.

## Changes

### cmd/server/store.go

1. **computeNodeHashSizeInfo** — Changed hash size determination from
statistical mode to latest advert. The most recent advert in
chronological order now determines hash_size. The hash_sizes_seen and
hash_size_inconsistent tracking is preserved for multi-byte analytics.

2. **computeAnalyticsHashSizes** — Two fixes:
- **byNode keyed by pubKey** instead of name, so same-name nodes with
different public keys are counted separately in distributionByRepeaters.
- **Zero-hop adverts included** — advert originator tracking now happens
before the hops check, so zero-hop adverts contribute to per-node stats.
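
A minimal sketch of the "latest advert wins" rule from item 1 above, assuming ISO 8601 timestamps so lexical comparison matches chronological order (types and names are illustrative):

```go
package store

type advert struct {
	Timestamp string // ISO 8601, so string order == chronological order
	HashSize  int
}

// latestAdvertHashSize keeps the hash size of the most recent advert while
// still recording every size seen for inconsistency tracking.
func latestAdvertHashSize(adverts []advert) (size int, seen map[int]bool) {
	seen = make(map[int]bool)
	latest := ""
	for _, a := range adverts {
		seen[a.HashSize] = true
		if a.Timestamp >= latest {
			latest, size = a.Timestamp, a.HashSize
		}
	}
	return size, seen
}
```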

### cmd/server/routes_test.go

Added 4 new tests:
- TestGetNodeHashSizeInfoLatestWins — 4 historical 1-byte adverts + 1
recent 2-byte advert → hash size should be 2 (not 1 from mode)
- TestGetNodeHashSizeInfoNoAdverts — node with no ADVERT packets →
graceful nil, no crash
- TestAnalyticsHashSizeSameNameDifferentPubkey — two nodes named
"SameName" with different pubkeys → counted as 2 separate entries
- Updated TestGetNodeHashSizeInfoDominant comment to reflect new
behavior

## Context

Community report from contributor @kizniche: after reconfiguring a
repeater from 1-byte to 2-byte hash and sending a flood advert, the
analytics page still showed 1-byte. Root cause was the mode-based
computation which required many new adverts to shift the majority. The
upstream firmware bug causing stale path bytes
(meshcore-dev/MeshCore#2154) has been fixed, making the latest advert
reliable.

## Testing

- `go vet ./...` — clean
- `go test ./... -count=1` — all tests pass (including 4 new ones)
- `cmd/ingestor` tests — pass

---------

Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-31 22:26:32 -07:00
Kpa-clawbot b51ced8655 Wire channel region filtering end-to-end
Pass region through channel message routes, apply DB/store filtering, normalize IATA at read and write boundaries, and add regression coverage for routes/server/ingestor.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-30 23:03:56 -07:00
efiten 568e3904ba fix: use dominant (most common) hash size instead of last-seen (#285)
## Problem

Repeaters with 2-byte adverts occasionally appear as 1-byte on the map
and in stats.

**Root cause:** `computeNodeHashSizeInfo()` sets `HashSize` by
overwriting on every packet (`ni.HashSize = hs`), so the last advert
processed wins — regardless of how many previous packets correctly
showed 2-byte.

When a node sends an ADVERT directly (no relay hops), the path byte
encodes `hashCount=0`. Some firmware sets the full path byte to `0x00`
in this case, which decodes as `hashSize=1` even if the node normally
uses 2-byte hashes. If this packet happens to be the last one iterated,
the node shows as 1-byte.

## Fix

Compute the **mode** (most frequent hash size) across all observed
adverts instead of using the last-seen value. On a tie, prefer the
larger value.

```go
// Tally how often each hash size was observed across the node's adverts.
counts := make(map[int]int, len(ni.AllSizes))
for _, hs := range ni.Seq {
    counts[hs]++
}
// Pick the most frequent size; on a tie, prefer the larger value.
best, bestCount := 1, 0
for hs, cnt := range counts {
    if cnt > bestCount || (cnt == bestCount && hs > best) {
        best = hs
        bestCount = cnt
    }
}
ni.HashSize = best
```

A node with 4× hashSize=2 and 1× hashSize=1 now correctly reports
`HashSize=2`.

## Test

`TestGetNodeHashSizeInfoDominant`: seeds 5 adverts (4× 2-byte, 1×
1-byte) and asserts `HashSize=2`.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 16:26:10 -07:00
Kpa-clawbot 77d8f35a04 feat: implement packet store eviction/aging to prevent OOM (#273)
## Summary

The in-memory `PacketStore` had **no eviction or aging** — it grew
unbounded until OOM killed the process. At ~3K packets/hour and ~5KB per
packet (not the 450 bytes previously estimated), an 8GB VM would OOM in
a few days.

## Changes

### Time-based eviction
- Configurable via `config.json`: `"packetStore": { "retentionHours": 24
}`
- Packets older than the retention window are evicted from the head of
the sorted slice
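
A minimal sketch of the time-based pass, assuming the slice is kept oldest-first as elsewhere in the store (field names are illustrative):

```go
package store

import "time"

type packet struct {
	FirstSeen time.Time
}

// evictOlderThan walks from the head until it reaches the first packet
// inside the retention window; everything before that index is evicted.
func evictOlderThan(packets []*packet, retention time.Duration, now time.Time) (kept []*packet, evicted int) {
	cutoff := now.Add(-retention)
	i := 0
	for i < len(packets) && packets[i].FirstSeen.Before(cutoff) {
		i++
	}
	return packets[i:], i
}
```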

### Memory-based cap
- Configurable via `"packetStore": { "maxMemoryMB": 1024 }`
- Hard ceiling — evicts oldest packets when estimated memory exceeds the
cap

### Index cleanup
When a `StoreTx` is evicted, ALL associated data is removed from:
- `byHash`, `byTxID`, `byObsID`, `byObserver`, `byNode`, `byPayloadType`
- `nodeHashes`, `distHops`, `distPaths`, `spIndex`

### Periodic execution
- Background ticker runs eviction every 60 seconds
- Analytics caches and hash size cache are invalidated after eviction

### Stats fixes
- `estimatedMB` now uses ~5KB/packet + ~500B/observation (was 430B +
200B)
- `evicted` counter reflects actual evictions (was hardcoded to 0)
- Removed fake `maxPackets: 2386092` and `maxMB: 1024` from stats

### Config example
```json
{
  "packetStore": {
    "retentionHours": 24,
    "maxMemoryMB": 1024
  }
}
```

Both values default to 0 (unlimited) for backward compatibility.

## Tests
- 7 new tests in `eviction_test.go` covering time-based, memory-based,
index cleanup, thread safety, config parsing, and no-op when disabled
- All existing tests pass unchanged

Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-03-30 03:42:11 +00:00
Kpa-clawbot a6364c92f4 fix: packets-per-hour counts unique transmissions, not observations (#274)
## Problem

The RF analytics `packetsPerHour` chart was counting **observations**
instead of **unique transmissions** per hour. With ~34 observations per
transmission on average, the chart showed ~5,645 packets/hr instead of
the correct ~163/hr.

**Evidence from prod API:**
- `packetsPerHour` total: 1,580,620 (sum of all hourly counts)
- `totalPackets`: 45,764
- That's a ~34× inflation — exactly the observations-per-transmission
ratio

## Root Cause

In `store.go`, the `hourBuckets[hr]++` counter was inside the
observations loop (both regional and non-regional paths). Other counters
like `packetSizes` and `typeBuckets` already deduplicate by hash —
`hourBuckets` was the only one that didn't.

## Fix

Added a `seenHourHash` map (keyed by `hash|hour`) to deduplicate. Each
unique transmission is counted once per hour bucket, matching how packet
sizes and payload types already work.

Both the regional observer path and the non-regional path are fixed. The
legacy path (transmissions without observations) was already correct
since it iterates per-transmission.

Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
2026-03-29 20:16:25 -07:00
Kpa-clawbot 8c63200679 feat: hash size distribution by repeaters (Go server) (#264)
## Summary

Adds `distributionByRepeaters` to the `/api/analytics/hash-sizes`
endpoint in the **Go server**.

### Problem
PR #263 implemented this feature in the deprecated Node.js server
(server.js). All backend changes should go in the Go server at
`cmd/server/`.

### Solution
- For each hash size (1, 2, 3), count how many unique repeaters (nodes)
advertise packets with that hash size
- Uses the existing `byNode` map already computed in
`computeAnalyticsHashSizes()`
- Added to both the live response and the empty/fallback response in
routes.go
- Frontend changes from PR #263 (`public/analytics.js`) already render
this field — no frontend changes needed

### Response shape
```json
{
  "distributionByRepeaters": { "1": 42, "2": 7, "3": 2 },
  ...existing fields...
}
```

### Testing
- All Go server tests pass
- Replaces PR #263 (which modified the wrong server)

Closes #263

---------

Co-authored-by: you <you@example.com>
2026-03-29 15:18:40 -07:00
Kpa-clawbot ab03b142f5 fix: per-observation WS broadcast for live view starburst — fixes #237
IngestNewFromDB now broadcasts one message per observation (not per
transmission). IngestNewObservations also broadcasts late arrivals.
Tests verify multi-observer packets produce multiple WS messages.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-29 08:32:37 -07:00
efiten 3f54632b07 fix: cache /stats and GetNodeHashSizeInfo to eliminate slow API calls
- /api/stats: 10s server-side cache — was running 5 SQLite COUNT queries
  on every call, taking ~1500ms with 28 concurrent WS clients polling every 15s
- GetNodeHashSizeInfo: 15s cache — was doing a full O(n) scan + JSON unmarshal
  of all advert packets in memory on every /nodes request, taking ~1200ms

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-29 07:09:05 -07:00
efiten 380b1b1e28 fix: address review — observation ordering, stale comments, affected query functions
- Load() SQL: keep o.timestamp DESC (consistent with IngestNewFromDB) so
  pickBestObservation tie-breaking is identical on both load paths
- GetTimestamps: scan from tail instead of head (was breaking on first item
  assuming it was the newest, now correctly reads from newest end)
- QueryMultiNodePackets: apply same DESC/ASC tail-read pagination as
  QueryPackets (was sorting for ASC and assuming DESC as-is)
- GetNodeHealth recentPackets: read from tail to return 20 newest items
  (was reading from head = 20 oldest items)
- Remove stale "Prepend (newest first)" comments, replace with accurate
  "oldest-first; new items go to tail" wording

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:54:40 -07:00
efiten 03cfd114da perf: eliminate O(n) slice prepend on every packet ingest
s.packets and s.byPayloadType[t] were prepended on every new packet
to maintain newest-first order, copying the entire slice each time.
With 2-3M packets in memory this meant ~24MB of pointer copies per
ingest cycle, causing sustained high CPU and GC pressure.

Fix: store both slices oldest-first (append to tail). Load() SQL
changed to ASC ordering. QueryPackets DESC pagination now reads from
the tail in O(page_size) with no sort; GetChannelMessages switches
from reverse-iteration to forward-iteration.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:54:40 -07:00
Kpa-clawbot 8a813ed87c perf: precompute distance analytics at ingest time, fixes #169
Replace expensive per-request distance computation (1.2s cold) with
precomputed distance index built during Load() and incrementally
updated on IngestNewFromDB/IngestNewObservations.

- Add distHopRecord/distPathRecord types for precomputed hop distances
- buildDistanceIndex() iterates all packets once during Load(), computing
  haversine distances and storing results in distHops/distPaths slices
- computeDistancesForTx() handles per-packet distance computation,
  shared between full rebuild and incremental ingest
- IngestNewFromDB appends distance records for new packets (no rebuild)
- IngestNewObservations triggers full rebuild only if paths changed
- computeAnalyticsDistance() now aggregates from precomputed records
  instead of re-iterating all packets with JSON parsing + haversine

Cold request path: ~10-20ms (filter + sort precomputed records)
vs previous: ~1.2s (iterate 30K+ packets, parse JSON, resolve hops,
compute haversine for each).
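
For reference, the standard haversine great-circle distance that the precomputed records are based on (shown as a self-contained sketch; the in-repo helper may differ in naming):

```go
package store

import "math"

// haversineKm returns the great-circle distance between two lat/lon points in km.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371.0
	toRad := func(deg float64) float64 { return deg * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}
```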

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 22:54:03 -07:00
Kpa-clawbot 9ebfd40aa0 fix: filter garbage channel names from /api/channels, fixes #201
Channels with garbage-decrypted names (pre-#197 data still in DB) are now
filtered at the API level using the same non-printable character heuristic
from #197. Applied in both Node.js server.js and Go server (store.go, db.go).
No data is deleted — only filtered from API responses.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 22:49:45 -07:00
Kpa-clawbot 6dacdef6b8 perf: precompute subpath index at ingest time, fixes #168
The subpaths analytics endpoint iterated ALL packets on every cold query,
taking ~900ms.  The TTL cache only masked the problem.

Fix: maintain a precomputed raw-hop subpath index (map[string]int) that
is built once during Load() and incrementally updated during
IngestNewFromDB() and IngestNewObservations().

At query time the fast path iterates only unique raw subpaths (typically
a few thousand entries) instead of all packets (30K+), resolves hop
prefixes to names, and merges counts.  Region-filtered queries still
fall back to the O(N) path since they require per-transmission observer
checks.

Expected cold-hit improvement: ~900ms → <5ms for the common no-region
case.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 22:44:39 -07:00
Kpa-clawbot f1cb840b5a fix: prepend to byPayloadType in IngestNewFromDB to preserve newest-first order
IngestNewFromDB was appending new transmissions to byPayloadType slices,
breaking the newest-first ordering established by Load(). This caused
GetChannelMessages (which iterates backwards assuming newest-first) to
place newly ingested messages at the wrong position, making them invisible
when returning the latest messages from the tail.

Changed append to prepend, matching the existing s.packets prepend pattern
on line 881. Added regression test.

fixes #198

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 21:56:54 -07:00
Kpa-clawbot 3cd87d766e feat: in-memory store.GetNodeAnalytics + _parsedPath in txToMap
#195 — /api/nodes/:pubkey/analytics was hitting SQL (packets_v view)
for all queries. Added store.GetNodeAnalytics(pubkey, days) that uses
the byNode[pubkey] index + text search through decoded_json, computing
all analytics (timeline, SNR trend, type breakdown, observer coverage,
hop distribution, peer interactions, uptime heatmap, computed stats)
entirely in-memory. Route handler now uses store path when available,
falling back to SQL only when store is nil.

#196 — recentPackets from /api/nodes/:pubkey/health were missing the
_parsedPath field that Node.js includes (lazy-cached parsed path_json
array). Added _parsedPath to txToMap() output using txGetParsedPath(),
matching the Node.js packet shape.

fixes #195, fixes #196

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 21:53:21 -07:00
Kpa-clawbot 47ee63ed55 fix: #191 #192 #193 #194 — repeater-only collision matrix, expand=observations, store-based node health, goRuntime in perf
#191: Hash collision matrix now filters to role=repeater only (routing-relevant)
#192: expand=observations in /api/packets now returns full observation details (txToMap includes observations, stripped by default)
#193: /api/nodes/:pubkey/health uses in-memory PacketStore when available instead of slow SQL queries
#194: goRuntime (heapMB, sysMB, numGoroutine, numGC, gcPauseMs) restored in /api/perf response

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 21:25:19 -07:00
Kpa-clawbot 77988ded3e fix: #184-#189 — sanitize names, packetsLast24h, ReadMemStats cache, dup name indicator, heatmap warning
#184: Strip non-printable chars (<0x20 except tab/newline) from ADVERT
names in Go server decoder, Go ingestor decoder, and Node decoder.js.

#185: Add visual (N) badge next to node names when multiple nodes share
the same display name (case-insensitive). Shows in list, side pane, and
full detail page with 'also known as' links to other keys.

#186: Add packetsLast24h field to /api/stats response.

#187 #188: Cache runtime.ReadMemStats() with 5s TTL in Go server.

#189: Temporarily patch HTMLCanvasElement.prototype.getContext during
L.heatLayer().addTo(map) to pass { willReadFrequently: true }, preventing
Chrome console warning about canvas readback performance.

Tests: 10 new tests for buildDupNameMap + dupNameBadge (143 total frontend).
Cache busters bumped.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 20:50:08 -07:00
Kpa-clawbot 2435f2eaaf fix: observation timestamps, leaked fields, perf path normalization
- #178: Use strftime ISO 8601 format instead of datetime() for observation
  timestamps in all SQL queries (v3 + v2 views). Add normalizeTimestamp()
  helper for non-v3 paths that may store space-separated timestamps.

- #179: Strip internal fields (decoded_json, direction, payload_type,
  raw_hex, route_type, score, created_at) from ObservationResp. Only
  expose id, transmission_id, observer_id, observer_name, snr, rssi,
  path_json, timestamp — matching Node.js parity.

- #180: Remove _parsedDecoded and _parsedPath from node detail
  recentAdverts response. These internal/computed fields were leaking
  to the API. Updated golden shapes.json accordingly.

- #181: Use mux route template (GetPathTemplate) for perf stats path
  normalization, converting {param} to :param for Node.js parity.
  Fallback to hex regex for unmatched routes. Compile regexes once at
  package level instead of per-request.

fixes #178, fixes #179, fixes #180, fixes #181

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 18:09:36 -07:00
Kpa-clawbot df63efa78d fix: poll new observations for existing transmissions (fixes #174)
The poller only queried WHERE t.id > sinceID, which missed new
observations added to transmissions already in the store. The trace
page was correct because it always queries the DB directly.

Add IngestNewObservations() that polls observations by o.id watermark,
adds them to existing StoreTx entries, re-picks best observation, and
invalidates analytics caches. The Poller now tracks both lastTxID and
lastObsID watermarks.

Includes tests for v3, v2, dedup, best-path re-pick, and
GetMaxObservationID.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 17:26:26 -07:00
Kpa-clawbot 59c225593f fix(go): resolve 6 backend issues — #165 #168 #169 #170 #171 #172
#165 — build_time in API: already implemented (BuildTime ldflags in
Dockerfile.go, main.go, StatsResponse, HealthResponse)

#168 — subpaths API slow: cache (subpathCache with TTL) and invalidation
already in place; verified working

#169 — distance API slow: cache (distCache with TTL) and invalidation
already in place; verified working

#170 — audio-lab/buckets: in-memory store path already implemented,
matching Node.js pktStore.packets iteration with type grouping and
size-distributed sampling

#171 — channels stale latest message: add companion bridge handling to
Go ingestor for meshcore/message/channel/<n> and meshcore/message/direct/<id>
MQTT topics. Stores decoded channel messages with type CHAN in decoded_json,
enabling the channels endpoint to find them. Also handles direct messages.

#172 — packets page not live-updating: add missing direction field to WS
broadcast packet map for full parity with txToMap/Node.js fullPacket shape.
WS broadcast shape verified correct (type, data.packet structure, timestamp,
payload_type, observer_id all present).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 16:06:24 -07:00
Kpa-clawbot 64bf3744e2 fix: channels stale latest message from observation-timestamp ordering, fixes #171
db.GetChannels() queried packets_v (observation-level rows) ordered by
observation timestamp and always overwrote lastMessage. When an older
message had a later re-observation, it would overwrite the correct
latest message with stale data.

Fix: query transmissions table directly (one row per unique message)
ordered by first_seen. This ensures lastMessage always reflects the
most recently sent message, not the most recently observed one.

Also fix db.GetChannelMessages() to use first_seen ordering with
schema-aware queries (v2/v3), and add missing distCache/subpathCache
invalidation on packet ingestion.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 16:01:54 -07:00
Kpa-clawbot e300228874 perf: add TTL cache for subpaths API + build timestamp in stats/health
- Add 15s TTL cache to GetAnalyticsSubpaths with composite key (region|minLen|maxLen|limit),
  matching the existing cache pattern used by RF, topology, hash, channel, and distance analytics.
  Cache hits return instantly vs 900ms+ computation. fixes #168

- Add BuildTime to /api/stats and /api/health responses, injected via ldflags at build time.
  Dockerfile.go now accepts BUILD_TIME build arg. fixes #165

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 15:44:06 -07:00
Kpa-clawbot f54a10f04d fix(go): add timestamp field to WS broadcast packet — fixes #172
The Go server's WebSocket broadcast included first_seen but not
timestamp in the nested packet object. The frontend packets.js
filters on m.data.packet and reads p.timestamp for row insertion
and sorting. Without this field, live-updating silently failed
(rows inserted with undefined latest, breaking display).

Mirrors the pattern already used in txToMap() (store.go:1168)
which correctly emits both first_seen and timestamp.

Also updates websocket_test.go to assert timestamp presence
in broadcast data to prevent regression.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 15:40:57 -07:00
Kpa-clawbot 75b6dc5d9f perf: add TTL cache for /api/analytics/distance endpoint
The distance analytics endpoint was recomputing haversine distances on
every request (~1.22s). Add a 15s TTL cache following the same pattern
used by RF, topology, hash-sizes, and channels analytics endpoints.
Include distCache in cache stats size calculation.

fixes #169

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 15:38:43 -07:00
Kpa-clawbot f55a3454aa feat(go): replace map[string]interface{} with typed Go structs in route handlers
Phase 1: Create cmd/server/types.go with ~80 typed response structs
matching all proto definitions. Every API response shape is now a
compile-time checked struct.

Phase 2: Rewire all route handlers in routes.go to construct typed
structs instead of map[string]interface{} for response building:
- /api/stats -> StatsResponse
- /api/health -> HealthResponse
- /api/perf -> PerfResponse
- /api/config/* -> typed config responses
- /api/nodes/* -> NodeListResponse, NodeDetailResponse, etc.
- /api/packets/* -> PacketListResponse, PacketDetailResponse
- /api/analytics/* -> RFAnalyticsResponse, TopologyResponse, etc.
- /api/observers/* -> ObserverListResponse, ObserverResp
- /api/channels/* -> ChannelListResponse, ChannelMessagesResponse
- /api/traces/* -> TraceResponse
- /api/resolve-hops -> ResolveHopsResponse
- /api/iata-coords -> IataCoordsResponse (typed IataCoord)
- /api/audio-lab/buckets -> AudioLabBucketsResponse
- WebSocket broadcast -> WSMessage struct
- SlowQuery tracking -> SlowQuery struct (was map)

Phase 3 (partial): Add typed store/db methods:
- PacketStore.GetCacheStatsTyped() -> CacheStats
- PacketStore.GetPerfStoreStatsTyped() -> PerfPacketStoreStats
- DB.GetDBSizeStatsTyped() -> SqliteStats

Remaining map usage is in store/db data flow (PacketResult.Packets
still uses maps) — these will be addressed in a follow-up.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 15:17:21 -07:00
Kpa-clawbot fc494962d1 fix(go): add broadcast diagnostic logging, rebuild fixes stale deployment
The Go staging packets page wasn't live-updating because the deployed
binary was stale (built before the #162 fix). Rebuilding from current
source fixed the issue — broadcasts now fire correctly.

Added two permanent diagnostic log lines:
- [poller] IngestNewFromDB: logs when new transmissions are found
- [broadcast] sending N packets to M clients: logs each broadcast batch

These log lines make it easy to verify the broadcast pipeline is working
and would have caught this stale-deployment issue immediately.

Verified on VM: WS clients receive packets with nested 'packet' field,
all Go tests pass.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 14:15:19 -07:00
Kpa-clawbot 95afaf2f0d fix(go): add nested packet field to WS broadcast, fixes #162
The frontend packets.js filters WS messages with m.data?.packet and
extracts m.data.packet for live rendering. Node's server.js includes
a packet sub-object (packet: fullPacket) in the broadcast data, but
Go's IngestNewFromDB built the data flat without a nested packet field.

This caused the Go staging packets page to never live-update via WS
even though messages were being sent — they were silently filtered out
by packets.js.

Fix: build the packet fields map separately, then create the broadcast
map with both top-level fields (for live.js) and nested packet (for
packets.js). Also fixes the fallback DB-direct poller path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 13:31:37 -07:00
Kpa-clawbot 8f712774ae fix: WS broadcast missing decoded fields + cache analytics endpoints
#156: The Go WebSocket broadcast from IngestNewFromDB was missing the
'decoded' field (with header.payloadTypeName) that live.js needs to
display packet types. Added decoded object with payloadTypeName
resolution, plus observer_id, observer_name, snr, rssi, path_json,
and observation_count fields to match the Node.js broadcast shape.

#157-160: Analytics endpoints (hash-sizes, topology, channels) were
recomputing everything on every request. Added:
- TTL caching (15s) for topology, hash-sizes, and channels endpoints
  (matching the existing RF cache pattern)
- Cached node list + prefix map shared across analytics (30s TTL)
- Lazy-cached parsed path JSON on StoreTx (parse once, read many)
- Cache invalidation on new data ingestion
- Global payloadTypeNames map (avoids per-call allocation)

Fixes #156, fixes #157, fixes #158, fixes #159, fixes #160

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 12:17:26 -07:00
Kpa-clawbot 1457795e3e fix: analytics channels + uniqueNodes mismatch
fixes #154: Go analytics channels showed single 'ch?' because
channelHash is a JSON number (from decoder.js) but the Go struct
declared it as string. json.Unmarshal failed on every packet.
Changed to interface{} with proper type conversion. Also fixed
chKey to use hash (not name) for grouping, matching Node.js.

fixes #155: uniqueNodes in topology analytics used hop resolution
count (phantom hops inflated it). Both Node.js and Go now use
db.getStats().totalNodes (7-day active window), matching /api/stats.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 12:07:50 -07:00
Kpa-clawbot 2f5404edc3 fix: close last parity gaps in /api/perf and /api/nodes/:pubkey
- db.go: Add freelistMB (PRAGMA freelist_count * page_size) and walPages
  (PRAGMA wal_checkpoint(PASSIVE)) to GetDBSizeStats
- store.go: Add advertByObserver count to GetPerfStoreStats indexes
  (count distinct pubkeys with ADVERT observations)
- db.go: Add getObservationsForTransmissions helper; enrich
  GetRecentTransmissionsForNode results with observations array,
  _parsedPath, and _parsedDecoded
- db_test.go: Add second ADVERT with different hash_size to seed data
  so hash_sizes_seen is populated; enrich decoded_json with full
  ADVERT fields; update count assertions for new seed row

fixes #151, fixes #152

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 11:57:35 -07:00
Kpa-clawbot 979b649028 fix(go): add missing packetStore fields to /api/perf (inMemory, maxPackets, etc.)
Frontend reads ps.inMemory.toLocaleString() which crashed because
the Go response was missing inMemory, sqliteOnly, maxPackets, maxMB,
evicted, inserts, queries fields. Added all + atomic counters.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 11:39:16 -07:00
Kpa-clawbot 5bb5bea444 fix(go): channels null arrays + hash size enrichment on nodes
- Fix #148: channels endpoint returned null for msgLengths when no
  decrypted messages exist. Initialize msgLengths as make([]int, 0)
  in store path and guard channels slice in DB fallback path.

- Fix #149: nodes endpoint always returned hash_size=null and
  hash_size_inconsistent=false. Add GetNodeHashSizeInfo() to
  PacketStore that scans advert packets to compute per-node hash
  size, flip-flop detection, and sizes_seen. Enrich nodes in both
  handleNodes and handleNodeDetail with computed hash data.

fixes #148, fixes #149

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 11:21:41 -07:00
Kpa-clawbot 8a0f731452 fix: topology uniqueNodes counts only real nodes, not hop prefixes
The Go analytics topology endpoint was counting every unique hop string
from packet paths (including unresolved 1-byte hex prefixes) as a unique
node, inflating the count from ~540 to 6502. Now resolves each hop via
the prefix map and deduplicates by public key, matching the Node.js
behavior.

fixes #146

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 10:50:32 -07:00
Kpa-clawbot 93dbe0e909 fix(go): add runtime stats to /api/perf and /api/health, fixes #143
- /api/perf: add goRuntime (heap, GC, goroutines, CPU), packetStore
  stats (totalLoaded, observations, index sizes, estimatedMB),
  sqlite stats (dbSizeMB, walSizeMB, row counts), real RF cache
  hit/miss tracking, and endpoint sorting by total time spent
- /api/health: add memory.heapMB, goRuntime (goroutines, gcPauses,
  numCPU), real packetStore packet count and estimatedMB, real
  cache stats from RF cache; remove hardcoded-zero eventLoop
- store.go: add cacheHits/cacheMisses tracking in GetAnalyticsRF,
  GetPerfStoreStats() and GetCacheStats() methods
- db.go: add path field to DB struct, GetDBSizeStats() for file
  sizes and row counts
- Tests: verify new fields in health/perf endpoints, add
  TestGetDBSizeStats, wire up PacketStore in test server setup

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 10:45:00 -07:00
Kpa-clawbot d7172961f4 fix(go): analytics endpoints parity — fixes #134, #135, #136, #137, #138, #140, #142
Implement all analytics endpoints from in-memory PacketStore instead of
returning stubs/empty data. Each handler now matches the Node.js response
shape field-by-field.

Endpoints fixed:
- /api/analytics/topology (#135): full hop distribution, top repeaters,
  top pairs, hops-vs-SNR, per-observer reachability, cross-observer
  comparison, best path analysis
- /api/analytics/distance (#137): haversine distance computation,
  category stats (R↔R, C↔R, C↔C), distance histogram, top hops/paths,
  distance over time
- /api/analytics/hash-sizes (#136): hash size distribution from raw_hex
  path byte parsing, hourly breakdown, top hops, multi-byte node tracking
- /api/analytics/hash-issues (#138): hash-sizes data now populated so
  frontend collision tab can compute inconsistent sizes and collision risk
- /api/analytics/route-patterns (#134): subpaths and subpath-detail now
  compute from in-memory store with hop resolution
- /api/nodes/bulk-health (#140): switched from N per-node SQL queries to
  in-memory PacketStore lookups with observer stats
- /api/channels (#142): response shape already correct via GetChannels;
  analytics/channels now returns topSenders, channelTimeline, msgLengths
- /api/analytics/channels: full channel analytics with sender tracking,
  timeline, and message length distribution

All handlers fall back to DB/stubs when store is nil (test compat).
All 42+ existing Go tests pass. go vet clean.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 10:23:11 -07:00