Commit Graph

24 Commits

Author SHA1 Message Date
Kpa-clawbot 136e1d23c8 feat(#730): foreign-advert detection — flag instead of silent drop (#1084)
## Summary

**Partial fix for #730 (M1 only — M2 frontend and M3 alerting
deferred).**

Today the ingestor **silently drops** ADVERTs whose GPS lies outside the
configured `geo_filter` polygon. That's the wrong default for an
analytics tool — operators get zero visibility into bridged or leaked
meshes.

This PR makes the new default **flag, don't drop**: foreign adverts are
stored, the node row is tagged `foreign_advert=1`, and the API surfaces
`"foreign": true` so dashboards / map overlays can be built on top.

## Behavior

| Mode | What happens to an ADVERT outside `geo_filter` |
|---|---|
| (default) flag | Stored, marked `foreign_advert=1`, exposed via API |
| drop (legacy) | Silently dropped (preserves old behavior for ops who want it) |

## What's done (M1 — Backend)
- ingestor stores foreign adverts instead of dropping
- `nodes.foreign_advert` column added (migration)
- `/api/nodes` and `/api/nodes/{pk}` expose `foreign: true` field
- Config: `geofilter.action: "flag"|"drop"` (default `flag`)
- Tests + config docs
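
The flag/drop decision can be sketched as a small branch in the ingest path. A minimal illustration with hypothetical names (`handleAdvert`, `Advert`), not the actual ingestor code:

```go
package main

import "fmt"

// Advert is an illustrative stand-in for a decoded ADVERT packet.
type Advert struct {
	Lat, Lon float64
	Foreign  bool // maps to the nodes.foreign_advert column
}

// handleAdvert sketches the new default: adverts outside the geo_filter
// polygon are stored and tagged rather than dropped. Only action == "drop"
// preserves the legacy silent-drop behavior.
func handleAdvert(a Advert, insidePolygon bool, action string) (bool, Advert) {
	if insidePolygon {
		return true, a // in-polygon adverts are unaffected
	}
	if action == "drop" { // legacy opt-in behavior
		return false, a
	}
	a.Foreign = true // default "flag": store and tag
	return true, a
}

func main() {
	stored, a := handleAdvert(Advert{Lat: 52.0, Lon: 4.0}, false, "flag")
	fmt.Println(stored, a.Foreign)
}
```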

## What's NOT done (deferred to M2 + M3)

- **M2 — Frontend:** Map overlay showing foreign adverts as distinct
markers, foreign-advert filter on packets/nodes pages, dedicated
foreign-advert dashboard
- **M3 — Alerting:** Time-series detection of bridging events, alert
when foreign advert rate spikes, identify bridge entry-point nodes

Issue #730 remains open for M2 and M3.

---------

Co-authored-by: corescope-bot <bot@corescope>
2026-05-05 01:58:52 -07:00
Kpa-clawbot 3364eed303 feat: separate "Last Status Update" from "Last Packet Observation" for observers (v3 rebase) (#969)
Rebased version of #968 (which was itself a rebase of #905) — resolves
merge conflict with #906 (clock-skew UI) that landed on master.

## Conflict resolution

**`public/observers.js`** — master (#906) added "Clock Offset" column to
observer table; #968 split "Last Seen" into "Last Status" + "Last
Packet" columns. Combined both: the table now has Status | Name | Region
| Last Status | Last Packet | Packets | Packets/Hour | Clock Offset |
Uptime.

## What this PR adds (unchanged from #968/#905)

- `last_packet_at` column in observers DB table
- Separate "Last Status Update" and "Last Packet Observation" display in
observers list and detail page
- Server-side migration to add the column automatically
- Backfill heuristic for existing data
- Tests for ingestor and server

## Verification

- All Go tests pass (`cmd/server`, `cmd/ingestor`)
- Frontend tests pass (`test-packets.js`, `test-hash-color.js`)
- Built server, hit `/api/observers` — `last_packet_at` field present in
JSON
- Observer table header has all 9 columns including both Last Packet and
Clock Offset

## Prior PRs

- #905 — original (conflicts with master)
- #968 — first rebase (conflicts after #906 landed)
- This PR — second rebase, resolves #906 conflict

Supersedes #968. Closes #905.

---------

Co-authored-by: you <you@example.com>
2026-05-02 12:03:42 -07:00
Kpa-clawbot 568de4b441 fix(observers): exclude soft-deleted observers from /api/observers and totalObservers (#954)
## Bug

`/api/observers` returned soft-deleted (inactive=1) observers. Operators
saw stale observers in the UI even after the auto-prune marked them
inactive on schedule. Reproduced on staging: 14 observers older than 14
days returned by the API; all of them had `inactive=1` in the DB.

## Root cause

`DB.GetObservers()` (`cmd/server/db.go:974`) ran `SELECT ... FROM
observers ORDER BY last_seen DESC` with no WHERE filter. The
`RemoveStaleObservers` path correctly soft-deletes by setting
`inactive=1`, but the read path didn't honor it.

`statsRow` (`cmd/server/db.go:234`) had the same bug — `totalObservers`
count included soft-deleted rows.

## Fix

Add `WHERE inactive IS NULL OR inactive = 0` to both:

```go
// GetObservers
"SELECT ... FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC"

// statsRow.TotalObservers
"SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0"
```

The `NULL` check preserves backward compatibility with rows created before
the `inactive` migration.
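
The predicate the `WHERE` clause encodes can be spelled out in Go to make the NULL case explicit (`isActive` is a hypothetical helper for illustration, not code from this PR):

```go
package main

import "fmt"

// isActive mirrors `inactive IS NULL OR inactive = 0`: rows created before
// the `inactive` migration carry NULL (nil here) and must still count as
// active; only an explicit inactive=1 soft-deletes.
func isActive(inactive *int) bool {
	return inactive == nil || *inactive == 0
}

func main() {
	one, zero := 1, 0
	// pre-migration row, active row, soft-deleted row
	fmt.Println(isActive(nil), isActive(&zero), isActive(&one))
}
```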

## Tests

Added regression `TestGetObservers_ExcludesInactive`:
- Seed two observers, mark one inactive, assert `GetObservers()` returns
only the other.
- **Anti-tautology gate verified**: reverting the WHERE clause causes
the test to fail with `expected 1 observer, got 2` and `inactive
observer obs2 should be excluded`.

`go test ./...` passes (19.6s).

## Out of scope

- `GetObserverByID` lookup at line 1009 still returns inactive observers
— this is intentional, so an old deep link to `/observers/<id>` shows
"inactive" rather than 404.
- Frontend may also have its own caching layer; this fix is server-side
only.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
2026-05-01 17:51:08 +00:00
Kpa-clawbot a605518d6d fix(#881): per-observation raw_hex — each observer sees different bytes on air (#882)
## Problem

Each MeshCore observer receives a physically distinct over-the-air byte
sequence for the same transmission (different path bytes, flags/hops
remaining). The `observations` table stored only `path_json` per
observer — all observations pointed at one `transmissions.raw_hex`. This
prevented the hex pane from updating when switching observations in the
packet detail view.

## Changes

| Layer | Change |
|-------|--------|
| **Schema** | `ALTER TABLE observations ADD COLUMN raw_hex TEXT` (nullable). Migration: `observations_raw_hex_v1` |
| **Ingestor** | `stmtInsertObservation` now stores per-observer `raw_hex` from MQTT payload |
| **View** | `packets_v` uses `COALESCE(o.raw_hex, t.raw_hex)` — backward compatible with NULL historical rows |
| **Server** | `enrichObs` prefers `obs.RawHex` when non-empty, falls back to `tx.RawHex` |
| **Frontend** | No changes — `effectivePkt.raw_hex` already flows through `renderDetail` |
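
The server-side fallback described above can be sketched as follows (`effectiveRawHex` is a hypothetical name; the PR's actual logic lives in `enrichObs` and the `packets_v` COALESCE):

```go
package main

import "fmt"

// effectiveRawHex mirrors COALESCE(o.raw_hex, t.raw_hex): prefer the
// per-observation bytes, fall back to the transmission-level bytes for
// historical rows where the new column is still NULL/empty.
func effectiveRawHex(obsRawHex, txRawHex string) string {
	if obsRawHex != "" {
		return obsRawHex
	}
	return txRawHex
}

func main() {
	fmt.Println(effectiveRawHex("11ee", "11ff")) // per-observer bytes win
	fmt.Println(effectiveRawHex("", "11ff"))     // historical row falls back
}
```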

## Tests

- **Ingestor**: `TestPerObservationRawHex` — two MQTT packets for same
hash from different observers → both stored with distinct raw_hex
- **Server**: `TestPerObservationRawHexEnrich` — enrichObs returns
per-obs raw_hex when present, tx fallback when NULL
- **E2E**: Playwright assertion in `test-e2e-playwright.js` for hex pane
update on observation switch

E2E assertion added: `test-e2e-playwright.js:1794`

## Scope

- Historical observations: raw_hex stays NULL, UI falls back to
transmission raw_hex silently
- No backfill, no path_json reconstruction, no frontend changes

Closes #881

---------

Co-authored-by: you <you@example.com>
2026-04-21 13:45:29 -07:00
Kpa-clawbot 0e286d85fd fix: channel query performance — add channel_hash column, SQL-level filtering (#762) (#763)
## Problem
Channel API endpoints scan the entire DB — 2.4s for the channel list, 30s
for messages.

## Fix
- Added `channel_hash` column to transmissions (populated on ingest,
backfilled on startup)
- `GetChannels()` rewritten to GROUP BY channel_hash (one row per channel
vs scanning every packet)
- `GetChannelMessages()` filters by channel_hash at SQL level with
proper LIMIT/OFFSET
- 60s cache for channel list
- Index: `idx_tx_channel_hash` for fast lookups

Expected: 2.4s → <100ms for list, 30s → <500ms for messages.

Fixes #762

---------

Co-authored-by: you <you@example.com>
2026-04-16 00:09:36 -07:00
Kpa-clawbot 71be54f085 feat: DB-backed channel messages for full history (#725 M1) (#726)
## Summary

Switches channel API endpoints to query SQLite instead of the in-memory
packet store, giving users access to the full message history.

Implements #725 (M1 only — DB-backed channel messages). Does NOT close
#725 — M2-M5 (custom channels, PSK, persistence, retroactive decryption)
remain.

## Problem

Channel endpoints (`/api/channels`, `/api/channels/{hash}/messages`)
preferred the in-memory packet store when available. The store is
bounded by `packetStore.maxMemoryMB` — typically showing only recent
messages. The SQLite database has the complete history (weeks/months of
channel messages) but was only used as a fallback when the store was nil
(never in production).

## Fix

Reversed the preference order: DB first, in-memory store fallback.
Region filtering added to the DB path.
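
The reversed preference reads roughly like this sketch (all names illustrative; the real handlers query SQLite and the bounded PacketStore):

```go
package main

import "fmt"

// source is an illustrative stand-in for either the DB or the in-memory
// store: it returns (messages, ok).
type source func() ([]string, bool)

// channelMessages sketches the reversed order: the DB (full history) is
// consulted first; the bounded in-memory store is only a fallback.
func channelMessages(db, store source) []string {
	if db != nil {
		if msgs, ok := db(); ok {
			return msgs
		}
	}
	if store != nil {
		if msgs, ok := store(); ok {
			return msgs
		}
	}
	return nil
}

func main() {
	db := func() ([]string, bool) { return []string{"full", "history"}, true }
	store := func() ([]string, bool) { return []string{"recent"}, true }
	fmt.Println(channelMessages(db, store)) // DB wins when available
	fmt.Println(channelMessages(nil, store))
}
```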

Co-authored-by: you <you@example.com>
2026-04-12 23:22:52 -07:00
Kpa-clawbot 22bf33700e Fix: filter path-hop candidates by resolved_path to prevent prefix collisions (#658)
## Problem

The "Paths Through This Node" API endpoint (`/api/nodes/{pubkey}/paths`)
returns unrelated packets when two nodes share a hex prefix. For
example, querying paths for "Kpa Roof Solar" (`c0dedad4...`) returns 316
packets that actually belong to "C0ffee SF" (`C0FFEEC7...`) because both
share the `c0` prefix in the `byPathHop` index.

Fixes #655

## Root Cause

`handleNodePaths()` in `routes.go` collects candidates from the
`byPathHop` index using 2-char and 4-char hex prefixes for speed, but
never verifies that the target node actually appears in each candidate's
resolved path. The broad index lookup is intentional, but the
**post-filter was missing**.

## Fix

Added `nodeInResolvedPath()` helper in `store.go` that checks whether a
transmission's `resolved_path` (from the neighbor affinity graph via
`resolveWithContext`) contains the target node's full pubkey. The
filter:

- **Includes** packets where `resolved_path` contains the target node's
full pubkey
- **Excludes** packets where `resolved_path` resolved to a different
node (prefix collision)
- **Excludes** packets where `resolved_path` is nil/empty (ambiguous —
avoids false positives)

The check examines both the best observation's resolved_path
(`tx.ResolvedPath`) and all individual observations, so packets are
included if *any* observation resolved the target.
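
A minimal sketch of that post-filter, with illustrative types (the real `nodeInResolvedPath` operates on the store's transmission/observation structs):

```go
package main

import "fmt"

// Minimal stand-ins for the store types; field names are illustrative.
type obs struct{ ResolvedPath []string }
type tx struct {
	ResolvedPath []string // best observation's resolved path
	Observations []obs
}

// nodeInResolvedPath sketches the filter: a candidate passes only if some
// resolved path actually contains the target's full pubkey. Nil or empty
// paths are ambiguous and excluded, avoiding false positives.
func nodeInResolvedPath(t tx, pubkey string) bool {
	match := func(path []string) bool {
		for _, hop := range path {
			if hop == pubkey {
				return true
			}
		}
		return false
	}
	if match(t.ResolvedPath) {
		return true
	}
	for _, o := range t.Observations {
		if match(o.ResolvedPath) {
			return true
		}
	}
	return false
}

func main() {
	target := "c0dedad4ffffffff"
	collision := tx{ResolvedPath: []string{"C0FFEEC7aaaaaaaa"}} // shares "c0" prefix, different node
	hit := tx{Observations: []obs{{ResolvedPath: []string{target}}}}
	fmt.Println(nodeInResolvedPath(collision, target), nodeInResolvedPath(hit, target))
}
```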

## Tests

- `TestNodeInResolvedPath` — unit test for the helper with 5 cases
(match, different node, nil, all-nil elements, match in observation
only)
- `TestNodePathsPrefixCollisionFilter` — integration test: two nodes
sharing `aa` prefix, verifies the collision packet is excluded from one
and included for the other
- Updated test DB schema to include `resolved_path` column and seed data
with resolved pubkeys
- All existing tests pass (165 additions, 8 modifications)

## Performance

No impact on hot paths. The filter runs once per API call on the
already-collected candidate set (typically small). `nodeInResolvedPath`
is O(observations × hops) per candidate — negligible since observations
per transmission are typically 1–5.

---------

Co-authored-by: you <you@example.com>
2026-04-07 21:24:00 -07:00
Kpa-clawbot 232770a858 feat(rf-health): M2 — airtime, error rate, battery charts with delta computation (#605)
## M2: Airtime + Channel Quality + Battery Charts

Implements M2 of #600 — server-side delta computation and three new
charts in the RF Health detail view.

### Backend Changes

**Delta computation** for cumulative counters (`tx_air_secs`,
`rx_air_secs`, `recv_errors`):
- Computes per-interval deltas between consecutive samples
- **Reboot handling:** detects counter reset (current < previous), skips
that delta, records reboot timestamp
- **Gap handling:** if time between samples > 2× interval, inserts null
(no interpolation)
- Returns `tx_airtime_pct` and `rx_airtime_pct` as percentages
(delta_secs / interval_secs × 100)
- Returns `recv_error_rate` as delta_errors / (delta_recv +
delta_errors) × 100
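
The delta rules above can be sketched as follows; `computeDeltas` here is a simplified stand-in for the server implementation (timestamps as unix seconds, nil meaning a null delta):

```go
package main

import "fmt"

type sample struct {
	T   int64   // unix seconds, ingestor wall clock
	Cum float64 // cumulative counter, e.g. tx_air_secs
}

// computeDeltas sketches the M2 rules for one cumulative counter:
//   - gap (spacing > 2x the nominal interval): nil delta, no interpolation
//   - reboot (current < previous, i.e. counter reset): skip the delta and
//     record the reboot timestamp
//   - otherwise: the delta is the raw counter difference for the interval
func computeDeltas(samples []sample, intervalSecs int64) (deltas []*float64, reboots []int64) {
	for i := 1; i < len(samples); i++ {
		prev, cur := samples[i-1], samples[i]
		if cur.T-prev.T > 2*intervalSecs {
			deltas = append(deltas, nil) // gap: don't interpolate
			continue
		}
		if cur.Cum < prev.Cum {
			deltas = append(deltas, nil) // reboot: counter reset
			reboots = append(reboots, cur.T)
			continue
		}
		d := cur.Cum - prev.Cum
		deltas = append(deltas, &d)
	}
	return deltas, reboots
}

func main() {
	// normal delta, then a counter reset, then a 900s gap at 300s interval
	s := []sample{{0, 10}, {300, 16}, {600, 2}, {1500, 5}}
	deltas, reboots := computeDeltas(s, 300)
	fmt.Println(len(deltas), reboots)
}
```

A percentage such as `tx_airtime_pct` would then be `delta / interval_secs * 100` on the non-nil entries.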

**`resolution` query param** on `/api/observers/{id}/metrics`:
- `5m` (default) — raw samples
- `1h` — hourly aggregates (GROUP BY hour with AVG/MAX)
- `1d` — daily aggregates

**Schema additions:**
- `packets_sent` and `packets_recv` columns added to `observer_metrics`
(migration)
- Ingestor parses these fields from MQTT stats messages

**API response** now includes:
- `tx_airtime_pct`, `rx_airtime_pct`, `recv_error_rate` (computed
deltas)
- `reboots` array with timestamps of detected reboots
- `is_reboot_sample` flag on affected samples

### Frontend Changes

Three new charts in the RF Health detail view, stacked vertically below
noise floor:

1. **Airtime chart** — TX (red) + RX (blue) as separate SVG lines,
Y-axis 0-100%, direct labels at endpoints
2. **Error Rate chart** — `recv_error_rate` line, shown only when data
exists
3. **Battery chart** — voltage line with 3.3V low reference, shown only
when battery_mv > 0

All charts:
- Share X-axis and time range (aligned vertically)
- Reboot markers as vertical hairlines spanning all charts
- Direct labels on data (no legends)
- Resolution auto-selected: `1h` for 7d/30d ranges
- Charts hidden when no data exists

### Tests

- `TestComputeDeltas`: normal deltas, reboot detection, gap detection
- `TestGetObserverMetricsResolution`: 5m/1h/1d downsampling verification
- Updated `TestGetObserverMetrics` for new API signature

---------

Co-authored-by: you <you@example.com>
2026-04-04 23:17:17 -07:00
Kpa-clawbot 6f35d4d417 feat: RF Health Dashboard M1 — observer metrics + small multiples grid (#604)
## RF Health Dashboard — M1: Observer Metrics Storage, API & Small
Multiples Grid

Implements M1 of #600.

### What this does

Adds a complete RF health monitoring pipeline: MQTT stats ingestion →
SQLite storage → REST API → interactive dashboard with small multiples
grid.

### Backend Changes

**Ingestor (`cmd/ingestor/`)**
- New `observer_metrics` table via migration system (`_migrations`
pattern)
- Parse `tx_air_secs`, `rx_air_secs`, `recv_errors` from MQTT status
messages (same pattern as existing `noise_floor` and `battery_mv`)
- `INSERT OR REPLACE` with timestamps rounded to nearest 5-min interval
boundary (using ingestor wall clock, not observer timestamps)
- Missing fields stored as NULLs — partial data is always better than no
data
- Configurable retention pruning: `retention.metricsDays` (default 30),
runs on startup + every 24h

**Server (`cmd/server/`)**
- `GET /api/observers/{id}/metrics?since=...&until=...` — per-observer
time-series data
- `GET /api/observers/metrics/summary?window=24h` — fleet summary with
current NF, avg/max NF, sample count
- `parseWindowDuration()` supports `1h`, `24h`, `3d`, `7d`, `30d` etc.
- Server-side metrics retention pruning (same config, staggered 2min
after packet prune)

### Frontend Changes

**RF Health tab (`public/analytics.js`, `public/style.css`)**
- Small multiples grid showing all observers simultaneously — anomalies
pop out visually
- Per-observer cell: name, current NF value, battery voltage, sparkline,
avg/max stats
- NF status coloring: warning (amber) at ≥-100 dBm, critical (red) at
≥-85 dBm — text color only, no background fills
- Click any cell → expanded detail view with full noise floor line chart
- Reference lines with direct text labels (`-100 warning`, `-85
critical`) — not color bands
- Min/max points labeled directly on the chart
- Time range selector: preset buttons (1h/3h/6h/12h/24h/3d/7d/30d) +
custom from/to datetime picker
- Deep linking: `#/analytics?tab=rf-health&observer=...&range=...`
- All charts use SVG, matching existing analytics.js patterns
- Responsive: 3-4 columns on desktop, 1 on mobile

### Design Decisions (from spec)
- Labels directly on data, not in legends
- Reference lines with text labels, not color bands
- Small multiples grid, not card+accordion (Tufte: instant visual fleet
comparison)
- Ingestor wall clock for all timestamps (observer clocks may drift)

### Tests Added

**Ingestor tests:**
- `TestRoundToInterval` — 5 cases for rounding to 5-min boundaries
- `TestInsertMetrics` — basic insertion with all fields
- `TestInsertMetricsIdempotent` — INSERT OR REPLACE deduplication
- `TestInsertMetricsNullFields` — partial data with NULLs
- `TestPruneOldMetrics` — retention pruning
- `TestExtractObserverMetaNewFields` — parsing tx_air_secs, rx_air_secs,
recv_errors

**Server tests:**
- `TestGetObserverMetrics` — time-series query with since/until filters,
NULL handling
- `TestGetMetricsSummary` — fleet summary aggregation
- `TestObserverMetricsAPIEndpoints` — DB query verification
- `TestMetricsAPIEndpoints` — HTTP endpoint response shape
- `TestParseWindowDuration` — duration parsing for h/d formats

### Test Results
```
cd cmd/ingestor && go test ./... → PASS (26s)
cd cmd/server && go test ./... → PASS (5s)
```

### What's NOT in this PR (deferred to M2+)
- Server-side delta computation for cumulative counters
- Airtime charts (TX/RX percentage lines)
- Channel quality chart (recv_error_rate)
- Battery voltage chart
- Reboot detection and chart annotations
- Resolution downsampling (1h, 1d aggregates)
- Pattern detection / automated diagnosis

---------

Co-authored-by: you <you@example.com>
2026-04-04 22:21:35 -07:00
efiten b1d89d7d9f fix: apply region filter in GetNodes — was silently ignored (#496) (#497)
## Summary
- `db.GetNodes` accepted a `region` param from the HTTP handler but
never used it — every region-filter selection was silently ignored and
all nodes were always returned
- Added a subquery filtering `nodes.public_key` against ADVERT
transmissions (payload_type=4) observed by observers with matching IATA
codes
- Handles both v2 (`observer_id TEXT`) and v3 (`observer_idx INT`)
schemas

## Test plan
- [x] 4 new subtests added to `TestGetNodesFiltering`: SJC (1 node), SFO
(1 node), SJC,SFO multi (1 node deduped), AMS unknown (0 nodes)
- [x] All existing Go tests still pass
- [x] Deploy to staging, open `/nodes`, select a region in the filter
bar — only nodes observed by observers in that region should appear

Closes #496

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: you <you@example.com>
2026-04-02 17:49:57 -07:00
Kpa-clawbot b51ced8655 Wire channel region filtering end-to-end
Pass region through channel message routes, apply DB/store filtering, normalize IATA at read and write boundaries, and add regression coverage for routes/server/ingestor.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-30 23:03:56 -07:00
Kpa-clawbot 5aa4fbb600 chore: normalize all files to LF line endings 2026-03-30 22:52:46 -07:00
you 0f70cd1ac0 feat: make health thresholds configurable in hours
Change healthThresholds config from milliseconds to hours for readability.
Config keys: infraDegradedHours, infraSilentHours, nodeDegradedHours, nodeSilentHours.
Defaults: infra degraded 24h, silent 72h; node degraded 1h, silent 24h.

- Config stored in hours, converted to ms at comparison time
- /api/config/client sends ms to frontend (backward compatible)
- Frontend tooltips use dynamic thresholds instead of hardcoded strings
- Added healthThresholds section to config.example.json
- Updated Go and Node.js servers, tests
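
The hours-to-milliseconds conversion at comparison time is trivial but worth pinning down (a sketch; the actual config plumbing differs):

```go
package main

import "fmt"

// hoursToMs converts a threshold stored in hours (the config unit) to the
// milliseconds used at comparison time and sent to the frontend.
func hoursToMs(hours float64) int64 {
	return int64(hours * 60 * 60 * 1000)
}

func main() {
	// defaults: infraDegradedHours=24, infraSilentHours=72
	fmt.Println(hoursToMs(24), hoursToMs(72))
}
```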
2026-03-29 09:50:32 -07:00
you 712fa15a8c fix: force single SQLite connection in test DBs to prevent in-memory table visibility issues
SQLite :memory: databases create separate databases per connection.
When the connection pool opens multiple connections (e.g. poller goroutine
vs main test goroutine), tables created on one connection are invisible
to others. Setting MaxOpenConns(1) ensures all queries use the same
in-memory database, fixing TestPollerBroadcastsMultipleObservations.
2026-03-29 08:32:37 -07:00
Kpa-clawbot f5d0ce066b refactor: remove packets_v SQL fallbacks — store handles all queries (#220)
* refactor: remove all packets_v SQL fallbacks — store handles all queries

Remove DB fallback paths from all route handlers. The in-memory
PacketStore now handles all packet/node/analytics queries. Handlers
return empty results or 404 when no store is available instead of
falling back to direct DB queries.

- Remove else-DB branches from handlePacketDetail, handleNodeHealth,
  handleNodeAnalytics, handleBulkHealth, handlePacketTimestamps, etc.
- Remove unused DB methods (GetPacketByHash, GetTransmissionByID,
  GetPacketByID, GetObservationsForHash, GetTimestamps, GetNodeHealth,
  GetNodeAnalytics, GetBulkHealth, etc.)
- Remove packets_v VIEW creation from schema
- Update tests for new behavior (no-store returns 404/empty, not 500)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address PR #220 review comments

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
2026-03-28 15:25:56 -07:00
Kpa-clawbot 54cbc648e0 feat: decode telemetry from adverts — battery voltage + temperature on nodes
Sensor nodes embed telemetry (battery_mv, temperature_c) in their advert
appdata after the null-terminated name. This commit adds decoding and
storage for both the Go ingestor and Node.js backend.

Changes:
- decoder.go/decoder.js: Parse telemetry bytes from advert appdata
  (battery_mv as uint16 LE millivolts, temperature_c as int16 LE /100)
- db.go/db.js: Add battery_mv INTEGER and temperature_c REAL columns
  to nodes and inactive_nodes tables, with migration for existing DBs
- main.go/server.js: Update node telemetry on advert processing
- server db.go: Include battery_mv/temperature_c in node API responses
- Tests: Decoder telemetry tests (positive, negative temp, no telemetry),
  DB migration test, node telemetry update test, server API shape tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 12:07:42 -07:00
Kpa-clawbot f374a4a775 fix: enforce consistent types between Go ingestor writes and server reads
Schema:
- observers.noise_floor: INTEGER → REAL (dBm has decimals)
- battery_mv, uptime_secs remain INTEGER (always whole numbers)

Ingestor write side (cmd/ingestor/db.go):
- UpsertObserver now accepts ObserverMeta with battery_mv (int),
  uptime_secs (int64), noise_floor (float64)
- COALESCE preserves existing values when meta is nil
- Added migration: cast integer noise_floor values to REAL

Ingestor MQTT handler (cmd/ingestor/main.go — already updated):
- extractObserverMeta extracts hardware fields from status messages
- battery_mv/uptime_secs cast via math.Round to int on write

Server read side (cmd/server/db.go):
- Observer.BatteryMv: *float64 → *int (matches INTEGER storage)
- Observer.UptimeSecs: *float64 → *int64 (matches INTEGER storage)
- Observer.NoiseFloor: *float64 (unchanged, matches REAL storage)
- GetObservers/GetObserverByID: use sql.NullInt64 intermediaries
  for battery_mv/uptime_secs, sql.NullFloat64 for noise_floor

Proto (proto/observer.proto — already correct):
- battery_mv: int32, uptime_secs: int64, noise_floor: double

Tests:
- TestUpsertObserverWithMeta: verifies correct SQLite types via typeof()
- TestUpsertObserverMetaPreservesExisting: nil-meta preserves values
- TestExtractObserverMeta: float-to-int rounding, empty message
- TestSchemaNoiseFloorIsReal: PRAGMA table_info validation
- TestObserverTypeConsistency: server reads typed values correctly
- TestObserverTypesInGetObservers: list endpoint type consistency

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 11:22:14 -07:00
Kpa-clawbot 2435f2eaaf fix: observation timestamps, leaked fields, perf path normalization
- #178: Use strftime ISO 8601 format instead of datetime() for observation
  timestamps in all SQL queries (v3 + v2 views). Add normalizeTimestamp()
  helper for non-v3 paths that may store space-separated timestamps.

- #179: Strip internal fields (decoded_json, direction, payload_type,
  raw_hex, route_type, score, created_at) from ObservationResp. Only
  expose id, transmission_id, observer_id, observer_name, snr, rssi,
  path_json, timestamp — matching Node.js parity.

- #180: Remove _parsedDecoded and _parsedPath from node detail
  recentAdverts response. These internal/computed fields were leaking
  to the API. Updated golden shapes.json accordingly.

- #181: Use mux route template (GetPathTemplate) for perf stats path
  normalization, converting {param} to :param for Node.js parity.
  Fallback to hex regex for unmatched routes. Compile regexes once at
  package level instead of per-request.

fixes #178, fixes #179, fixes #180, fixes #181
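
A minimal sketch of what the #178 `normalizeTimestamp` helper plausibly does, assuming the stored form is `YYYY-MM-DD HH:MM:SS` (the real helper's suffix handling may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTimestamp converts a space-separated SQLite timestamp
// ("2026-03-27 18:09:36") to the ISO 8601 form the API should emit
// ("2026-03-27T18:09:36Z"). Already-normalized input passes through.
func normalizeTimestamp(ts string) string {
	ts = strings.Replace(ts, " ", "T", 1)
	if !strings.HasSuffix(ts, "Z") {
		ts += "Z"
	}
	return ts
}

func main() {
	fmt.Println(normalizeTimestamp("2026-03-27 18:09:36"))
}
```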

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 18:09:36 -07:00
Kpa-clawbot 64bf3744e2 fix: channels stale latest message from observation-timestamp ordering, fixes #171
db.GetChannels() queried packets_v (observation-level rows) ordered by
observation timestamp and always overwrote lastMessage. When an older
message had a later re-observation, it would overwrite the correct
latest message with stale data.

Fix: query transmissions table directly (one row per unique message)
ordered by first_seen. This ensures lastMessage always reflects the
most recently sent message, not the most recently observed one.

Also fix db.GetChannelMessages() to use first_seen ordering with
schema-aware queries (v2/v3), and add missing distCache/subpathCache
invalidation on packet ingestion.
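
The fixed selection logic can be sketched in miniature (illustrative types; the real query does this in SQL against the transmissions table, ordered by `first_seen`):

```go
package main

import "fmt"

type transmission struct {
	FirstSeen int64 // when the message was first sent/seen
	Text      string
}

// latestMessage sketches the fix: pick the message with the greatest
// first_seen (one row per unique message), so a late re-observation of an
// old message can no longer overwrite the true latest message.
func latestMessage(txs []transmission) string {
	best := transmission{FirstSeen: -1}
	for _, t := range txs {
		if t.FirstSeen > best.FirstSeen {
			best = t
		}
	}
	return best.Text
}

func main() {
	txs := []transmission{
		{FirstSeen: 100, Text: "old message, re-observed later"},
		{FirstSeen: 200, Text: "actual latest"},
	}
	fmt.Println(latestMessage(txs))
}
```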

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 16:01:54 -07:00
Kpa-clawbot 2f5404edc3 fix: close last parity gaps in /api/perf and /api/nodes/:pubkey
- db.go: Add freelistMB (PRAGMA freelist_count * page_size) and walPages
  (PRAGMA wal_checkpoint(PASSIVE)) to GetDBSizeStats
- store.go: Add advertByObserver count to GetPerfStoreStats indexes
  (count distinct pubkeys with ADVERT observations)
- db.go: Add getObservationsForTransmissions helper; enrich
  GetRecentTransmissionsForNode results with observations array,
  _parsedPath, and _parsedDecoded
- db_test.go: Add second ADVERT with different hash_size to seed data
  so hash_sizes_seen is populated; enrich decoded_json with full
  ADVERT fields; update count assertions for new seed row

fixes #151, fixes #152

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 11:57:35 -07:00
Kpa-clawbot 93dbe0e909 fix(go): add runtime stats to /api/perf and /api/health, fixes #143
- /api/perf: add goRuntime (heap, GC, goroutines, CPU), packetStore
  stats (totalLoaded, observations, index sizes, estimatedMB),
  sqlite stats (dbSizeMB, walSizeMB, row counts), real RF cache
  hit/miss tracking, and endpoint sorting by total time spent
- /api/health: add memory.heapMB, goRuntime (goroutines, gcPauses,
  numCPU), real packetStore packet count and estimatedMB, real
  cache stats from RF cache; remove hardcoded-zero eventLoop
- store.go: add cacheHits/cacheMisses tracking in GetAnalyticsRF,
  GetPerfStoreStats() and GetCacheStats() methods
- db.go: add path field to DB struct, GetDBSizeStats() for file
  sizes and row counts
- Tests: verify new fields in health/perf endpoints, add
  TestGetDBSizeStats, wire up PacketStore in test server setup

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 10:45:00 -07:00
Kpa-clawbot 5c68605f2c feat(go-server): full API parity with Node.js server
Performance:
- QueryGroupedPackets: 8s → <100ms (transmissions table, not packets_v VIEW)

Field parity:
- /api/stats: totalNodes uses 7-day window, added totalNodesAllTime
- /api/stats: role counts filtered by 7-day (matching Node.js)
- /api/nodes: role counts use all-time (matching Node.js)
- /api/packets/:id: path field returns parsed path_json hops
- /api/packets: added multi-node filter (?nodes=pk1,pk2)
- /api/observers: packetsLastHour, lat, lon, nodeRole computed
- /api/observers/:id: packetsLastHour computed
- /api/nodes/bulk-health: per-node stats from SQL

Tests updated with dynamic timestamps for 7-day filter compat.
All tests pass, go vet clean.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 02:11:33 -07:00
Kpa-clawbot e18a73e1f2 feat: Go server API parity with Node.js — response shapes, perf, computed fields
- Packets query rewired from packets_v VIEW (9s) to direct table joins (~50ms)
- Packet response: added first_seen, observation_count; removed created_at, score
- Node response: added last_heard, hash_size, hash_size_inconsistent
- Schema-aware v2/v3 detection for observer_idx vs observer_id
- All Go tests passing

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 01:50:46 -07:00
Kpa-clawbot e89c2bfe1f test: add comprehensive Go test coverage for ingestor (80%) and server (90%)
- ingestor: add config_test.go (LoadConfig, env overrides, legacy MQTT)
- ingestor: add main_test.go (toFloat64, firstNonEmpty, handleMessage, advertRole)
- ingestor: extend decoder_test.go (short buffer errors, edge cases, all payload types)
- ingestor: extend db_test.go (empty hash, timestamp updates, BuildPacketData, schema)
- server: add config_test.go (LoadConfig, LoadTheme, health thresholds, ResolveDBPath)
- server: add helpers_test.go (writeJSON/Error, queryInt, mergeMap, round, percentile, spaHandler)
- server: extend db_test.go (all query functions, filters, channel messages, node health)
- server: extend routes_test.go (all endpoints, error paths, analytics, observer analytics)
- server: extend websocket_test.go (multi-client, buffer full, poller cycle)

Coverage: ingestor 48% -> 80%, server 52% -> 90%

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-27 00:07:44 -07:00