Compare commits

...

41 Commits

Author SHA1 Message Date
you 37a2b71fc7 fix: add ensureLastPacketAtColumn server-side migration
Mirror the ensureObserverInactiveColumn pattern (PR #961) for the
last_packet_at column added by the ingestor migration. Without this,
the server SELECTs last_packet_at but never adds it — causing 500
errors on /api/observers when running against DBs the ingestor has
not yet touched (e.g. the e2e fixture).

Adds TestEnsureLastPacketAtColumn for correctness + idempotency.
2026-05-02 17:48:15 +00:00
you 9ba45e6fb4 fix: add last_packet_at to ObserverResp (types.go extracted after PR #905) 2026-05-02 17:04:48 +00:00
you 33198b8012 fix: address PR #905 review — migration error handling, backfill heuristic, test comment
1. Migration ALTER error no longer swallowed: check error from ALTER TABLE
   and return if it fails (unless column already exists). Migration is not
   marked complete on failure.

2. Backfill heuristic fixed: use observations table JOIN instead of
   packet_count > 0, since UpsertObserver sets packet_count = 1 on INSERT
   even for status-only observers.

3. Test clarifying comment: document that InsertTransmission uses
   data.Timestamp (not time.Now()) as source-of-truth for last_packet_at,
   so the hardcoded assertion is correct.
2026-05-02 17:01:32 +00:00
you 60893b6418 fix: bump obs-table min-width to 720px for new Last Packet column
The addition of the Last Packet column brings the table to 8 columns.
The previous min-width of 640px was already tight for 7 columns; 720px prevents
cramped rendering and makes horizontal scrolling kick in at an appropriate
width on narrow viewports.
2026-05-02 17:01:32 +00:00
you 7ac36178d6 test: add last_packet_at tests for ingestor and server
- Ingestor: verify last_packet_at is NULL after UpsertObserver (status path),
  set after InsertTransmission, and unchanged by subsequent UpsertObserver calls
- Server: verify last_packet_at reads back through GetObservers and GetObserverByID
2026-05-02 17:01:32 +00:00
you f56f1368be feat(ui): show separate Last Status and Last Packet columns for observers
- observers.js: rename 'Last Seen' column to 'Last Status', add 'Last Packet'
  column with a warning badge when no packets observed or packets lag behind
  status by >10min
- observer-detail.js: add 'Last Status Update' and 'Last Packet Observation'
  stat cards with relative + absolute timestamps
2026-05-02 17:01:19 +00:00
you cfb7b7297d feat: add last_packet_at column to observers
Add a new 'last_packet_at' column to the observers table that is only
bumped when an actual packet observation lands (InsertTransmission path),
while 'last_seen' continues to be bumped on both status updates and packets.

This allows the UI to distinguish between an observer that is alive
(sending status pings) vs one that is actively forwarding packets.

Schema migration backfills last_packet_at = last_seen for observers
with packet_count > 0. Server API now returns last_packet_at in the
Observer JSON response.
2026-05-02 17:01:19 +00:00
Kpa-clawbot c67f3347ce fix(ui): add GRP_DATA (type 6) to filter dropdown + color tables (#965)
## Bug

Packet type 6 (`PAYLOAD_TYPE_GRP_DATA` per `firmware/src/Packet.h:25`)
was missing from three frontend lookup tables:
- `public/app.js:7` — `PAYLOAD_COLORS` had no entry for 6 → badge color
fell back to `unknown` (grey)
- `public/packets.js:29` — `TYPE_NAMES` (used by the Packets page
type-filter dropdown) had no entry for 6 → "Group Data" missing from the
menu
- `public/roles.js:17,24` — `TYPE_COLORS` and `TYPE_BADGE_MAP` had no
`GRP_DATA` entry → no dedicated CSS class

The packet detail page already handled it (via `PAYLOAD_TYPES` in
`app.js:6` which had `6: 'Group Data'`) so individual GRP_DATA packets
render correctly. The gap was only in the filter UI + badge styling.

## Fix

Add the missing entry in each table. 4 lines across 3 files.

- `app.js`: add `6: 'grp-data'` to `PAYLOAD_COLORS`
- `packets.js`: add `6:'Group Data'` to `TYPE_NAMES`
- `roles.js`: add `GRP_DATA: '#8b5cf6'` to `TYPE_COLORS` and `GRP_DATA:
'grp-data'` to `TYPE_BADGE_MAP`

Color choice `#8b5cf6` (violet) — distinct from GRP_TXT's blue but
visually adjacent so operators read them as related types.

## Verification (rule 18 + 19)

Built server locally, served the JS files, grepped the rendered output:

```
$ curl -s http://localhost:13900/packets.js | grep TYPE_NAMES
const TYPE_NAMES = { ... 5:'Channel Msg', 6:'Group Data', 7:'Anon Req' ... };

$ curl -s http://localhost:13900/app.js | grep PAYLOAD_TYPES
const PAYLOAD_TYPES = { ... 5: 'Channel Msg', 6: 'Group Data', 7: 'Anon Req' ... };

$ curl -s http://localhost:13900/roles.js | grep GRP_DATA
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', GRP_DATA: '#8b5cf6', ...
ADVERT: 'advert', GRP_TXT: 'grp-txt', GRP_DATA: 'grp-data', ...
```

Frontend tests pass: `test-packets.js` 82/82, `test-hash-color.js`
32/32.

## Out of scope

Consolidating the duplicated PAYLOAD_TYPES / TYPE_NAMES tables into a
single source of truth is a separate cleanup. Two parallel name maps
remain a footgun (this is the second time a new type has been added to
one but not the other).

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-02 09:55:09 -07:00
Kpa-clawbot b3a9677c52 feat(ingestor + server): observerBlacklist config (#962) (#963)
## Summary

Implements `observerBlacklist` config — mirrors the existing
`nodeBlacklist` pattern for observers. Drop observers by pubkey at
ingest, with defense-in-depth filtering on the server side.

Closes #962

## Changes

### Ingestor (`cmd/ingestor/`)
- **`config.go`**: Added `ObserverBlacklist []string` field +
`IsObserverBlacklisted()` method (case-insensitive, whitespace-trimmed)
- **`main.go`**: Early return in `handleMessage` when `parts[2]`
(observer ID from MQTT topic) matches blacklist — before status
handling, before IATA filter. No UpsertObserver, no observations, no
metrics insert. Log line: `observer <pubkey-short> blacklisted,
dropping`

### Server (`cmd/server/`)
- **`config.go`**: Same `ObserverBlacklist` field +
`IsObserverBlacklisted()` with `sync.Once` cached set (same pattern as
`nodeBlacklist`)
- **`routes.go`**: Defense-in-depth filtering in `handleObservers` (skip
blacklisted in list) and `handleObserverDetail` (404 for blacklisted ID)
- **`main.go`**: Startup `softDeleteBlacklistedObservers()` marks
matching rows `inactive=1` so historical data is hidden
- **`neighbor_persist.go`**: `softDeleteBlacklistedObservers()`
implementation
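The startup soft-delete plausibly reduces to one UPDATE with a placeholder per blacklisted pubkey. A sketch under that assumption (the helper name and exact SQL are guesses, not the code from `neighbor_persist.go`):

```go
package main

import (
	"fmt"
	"strings"
)

// softDeleteSQL (illustrative name) builds the statement that marks
// blacklisted observers inactive=1, normalizing each pubkey the same
// way the lookup does (lowercased, whitespace-trimmed).
func softDeleteSQL(blacklist []string) (string, []any) {
	if len(blacklist) == 0 {
		return "", nil
	}
	ph := make([]string, len(blacklist))
	args := make([]any, len(blacklist))
	for i, pk := range blacklist {
		ph[i] = "?"
		args[i] = strings.ToLower(strings.TrimSpace(pk))
	}
	q := "UPDATE observers SET inactive = 1 WHERE lower(id) IN (" +
		strings.Join(ph, ",") + ")"
	return q, args
}

func main() {
	q, args := softDeleteSQL([]string{"OBS1", " obs2 "})
	fmt.Println(q)
	fmt.Println(args)
}
```

Soft-deleting (rather than DELETEing) keeps historical rows in place, which is what lets the list/detail filtering above hide them while preserving data.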

### Tests
- `cmd/ingestor/observer_blacklist_test.go`: config method tests
(case-insensitive, empty, nil)
- `cmd/server/observer_blacklist_test.go`: config tests + HTTP handler
tests (list excludes blacklisted, detail returns 404, no-blacklist
passes all, concurrent safety)

## Config

```json
{
  "observerBlacklist": [
    "EE550DE547D7B94848A952C98F585881FCF946A128E72905E95517475F83CFB1"
  ]
}
```

## Verification (Rule 18 — actual server output)

**Before blacklist** (no config):
```
Total: 31
DUBLIN in list: True
```

**After blacklist** (DUBLIN Observer pubkey in `observerBlacklist`):
```
[observer-blacklist] soft-deleted 1 blacklisted observer(s)
Total: 30
DUBLIN in list: False
```

Detail endpoint for blacklisted observer returns **404**.

All existing tests pass (`go test ./...` for both server and ingestor).

---------

Co-authored-by: you <you@example.com>
2026-05-01 23:11:27 -07:00
Kpa-clawbot 707228ad91 ci: update go-server-coverage.json [skip ci] 2026-05-02 02:14:13 +00:00
Kpa-clawbot 8d379baf5e ci: update go-ingestor-coverage.json [skip ci] 2026-05-02 02:14:12 +00:00
Kpa-clawbot 3b436c768b ci: update frontend-tests.json [skip ci] 2026-05-02 02:14:11 +00:00
Kpa-clawbot 6d49cf939c ci: update frontend-coverage.json [skip ci] 2026-05-02 02:14:09 +00:00
Kpa-clawbot 8d39b33111 ci: update e2e-tests.json [skip ci] 2026-05-02 02:14:08 +00:00
Kpa-clawbot e1a1be1735 fix(server): add observers.inactive column at startup if missing (root cause of CI flake) (#961)
## The actual root cause

PR #954 added `WHERE inactive IS NULL OR inactive = 0` to the server's
observer queries, but the `inactive` column is only added by the
**ingestor** migration (`cmd/ingestor/db.go:344-354`). When the server
runs against a DB the ingestor never touched (e.g. the e2e fixture), the
column doesn't exist:

```
$ sqlite3 test-fixtures/e2e-fixture.db "SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0;"
Error: no such column: inactive
```

The server's `db.QueryRow().Scan()` swallows that error →
`totalObservers` stays 0 → `/api/observers` returns empty → map test
fails with "No map markers/overlays found".

This explains all the failing CI runs since #954 merged. PR #957
(freshen fixture) helped with the `nodes` time-rot but couldn't fix the
missing-column problem. PR #960 (freshen observers) added the right
timestamps but the column was still missing. PR #959 (data-loaded in
finally) fixed a different real bug. None of those touched the actual
mechanism.

## Fix

Mirror the existing `ensureResolvedPathColumn` pattern: add
`ensureObserverInactiveColumn` that runs at server startup, checks if
the column exists via `PRAGMA table_info`, adds it with `ALTER TABLE
observers ADD COLUMN inactive INTEGER DEFAULT 0` if missing.

Wired into `cmd/server/main.go` immediately after
`ensureResolvedPathColumn`.
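The core decision of the ensure-column pattern can be sketched as a pure function over the column names that `PRAGMA table_info` reports (names here are illustrative; the real helper also executes the PRAGMA and the ALTER against the live `*sql.DB`):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureColumnSQL returns the ALTER TABLE statement to run, or "" when
// the column already exists. `existing` is the list of column names
// read from PRAGMA table_info(<table>); SQLite column names are
// case-insensitive, hence EqualFold.
func ensureColumnSQL(existing []string, table, column, decl string) string {
	for _, c := range existing {
		if strings.EqualFold(c, column) {
			return ""
		}
	}
	return fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s", table, column, decl)
}

func main() {
	cols := []string{"id", "name", "last_seen"}
	fmt.Println(ensureColumnSQL(cols, "observers", "inactive", "INTEGER DEFAULT 0"))
	// Second call simulates a DB where the migration already ran: no-op.
	fmt.Println(ensureColumnSQL(append(cols, "inactive"), "observers", "inactive", "INTEGER DEFAULT 0"))
}
```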

## Verification

End-to-end on a freshened fixture:

```
$ sqlite3 /tmp/e2e-verify.db "PRAGMA table_info(observers);" | grep inactive
(no output — column absent)

$ ./cs-fixed -port 13702 -db /tmp/e2e-verify.db -public public &
[store] Added inactive column to observers

$ curl 'http://localhost:13702/api/observers'
returned=31    # was 0 before fix
```

`go test ./...` passes (19.8s).

## Lessons

I should have run `sqlite3 fixture "SELECT ... WHERE inactive ..."`
directly the first time the map test failed after #954 instead of
writing four "fix" PRs that didn't address the actual mechanism.
Apologies for the wild goose chase.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 19:04:23 -07:00
Kpa-clawbot b97fe5758c fix(ci): freshen observer timestamps so RemoveStaleObservers doesn't prune them on startup (#960)
## Bug

Master CI failing on `Map page loads with markers: No map
markers/overlays found` since #954 (observer filter) merged.

## Root cause chain

1. Fixture has 31 observers, all dated `2026-03-26` to `2026-03-29` (33+
days old)
2. PR #957's `tools/freshen-fixture.sh` shifts `nodes`, `transmissions`,
`neighbor_edges` timestamps but NOT `observers.last_seen`
3. Server startup runs `RemoveStaleObservers(14)` per
`cmd/server/main.go:382` — marks all 33-day-old observers `inactive=1`
4. PR #954's `GetObservers` filter then excludes them
5. `/api/observers` returns 0 → map has no observer markers → test
asserts >0 → fails

Server log line confirms: `[db] transmissions=499 observations=500
nodes=200 observers=0`

## Fix

Extend `freshen-fixture.sh` to also shift `observers.last_seen` (same
algorithm — preserve relative ordering, max anchored to now). Also
defensively clear any stale `inactive=1` flags from prior failed runs.
The `inactive` column may not exist on a fresh fixture (server adds via
migration); script silently no-ops if column absent.

## Verification

```
$ bash tools/freshen-fixture.sh /tmp/test.db
nodes: min=2026-05-01T11:07:29Z max=2026-05-01T18:49:02Z
observers: count=31 max=2026-05-01T18:49:02Z
```

After: 31 observers, oldest 3 days old, within the 14d retention window.
Server's startup prune won't touch them.

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 16:55:25 -07:00
Kpa-clawbot 568de4b441 fix(observers): exclude soft-deleted observers from /api/observers and totalObservers (#954)
## Bug

`/api/observers` returned soft-deleted (inactive=1) observers. Operators
saw stale observers in the UI even after the auto-prune marked them
inactive on schedule. Reproduced on staging: 14 observers older than 14
days returned by the API; all of them had `inactive=1` in the DB.

## Root cause

`DB.GetObservers()` (`cmd/server/db.go:974`) ran `SELECT ... FROM
observers ORDER BY last_seen DESC` with no WHERE filter. The
`RemoveStaleObservers` path correctly soft-deletes by setting
`inactive=1`, but the read path didn't honor it.

`statsRow` (`cmd/server/db.go:234`) had the same bug — `totalObservers`
count included soft-deleted rows.

## Fix

Add `WHERE inactive IS NULL OR inactive = 0` to both:

```go
// GetObservers
"SELECT ... FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC"

// statsRow.TotalObservers
"SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0"
```

`NULL` check preserves backward compatibility with rows from before the
`inactive` migration.

## Tests

Added regression `TestGetObservers_ExcludesInactive`:
- Seed two observers, mark one inactive, assert `GetObservers()` returns
only the other.
- **Anti-tautology gate verified**: reverting the WHERE clause causes
the test to fail with `expected 1 observer, got 2` and `inactive
observer obs2 should be excluded`.

`go test ./...` passes (19.6s).

## Out of scope

- `GetObserverByID` lookup at line 1009 still returns inactive observers
— this is intentional, so an old deep link to `/observers/<id>` shows
"inactive" rather than 404.
- Frontend may also have its own caching layer; this fix is server-side
only.

---------

Co-authored-by: Kpa-clawbot <bot@example.invalid>
Co-authored-by: you <you@example.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
2026-05-01 17:51:08 +00:00
Kpa-clawbot 04c8558768 fix(spa): data-loaded setAttribute in finally so it fires on errors (#959)
## Bug

PR #958 added `data-loaded="true"` attributes for E2E sync, but placed
the `setAttribute` call inside the `try` block of `loadNodes()` /
`loadPackets()` / `loadNodes()` (map). When the API call failed (e.g.
`/api/observers` returns 500, or any other exception), the `catch`
swallowed the error and `setAttribute` was never reached. E2E tests then
waited 15s for `[data-loaded="true"]` and timed out.

This blocked PR #954 CI repeatedly with `Map page loads with markers:
page.waitForSelector: Timeout 15000ms exceeded`.

## Fix

Move `setAttribute('data-loaded', 'true')` to a `finally` block in all
three handlers (`map.js`, `nodes.js`, `packets.js`). The attribute now
fires on both success and error paths, so E2E tests proceed (test still
asserts on the actual rendered state — markers, rows, etc — so an empty
page still fails the right assertion, just much faster).

Removed the duplicate setAttribute calls inside the try blocks (the
finally is the single source of truth now).

## Verification

- `node test-packets.js` 82/82 
- `node test-hash-color.js` 32/32 
- Code reading: each `finally` runs after either success or catch, sets
the same attribute on the same container element.

## Why CI didn't catch this on #958

The PR #958 tests passed because the staging fixture happened to load
successfully when those tests ran. The flake only manifests when an
upstream fetch fails (e.g. observer API returning unexpected shape,
network blip, server still warming).

Co-authored-by: Kpa-clawbot <bot@example.invalid>
2026-05-01 10:49:21 -07:00
Kpa-clawbot 52b5ae86d6 ci: update go-server-coverage.json [skip ci] 2026-05-01 15:17:33 +00:00
Kpa-clawbot 8397f2bb1c ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 15:17:32 +00:00
Kpa-clawbot ed65498281 ci: update frontend-tests.json [skip ci] 2026-05-01 15:17:31 +00:00
Kpa-clawbot c53af5cf66 ci: update frontend-coverage.json [skip ci] 2026-05-01 15:17:30 +00:00
Kpa-clawbot 9f606600e2 ci: update e2e-tests.json [skip ci] 2026-05-01 15:17:28 +00:00
Kpa-clawbot 053aef1994 fix(spa): decouple navigate() from theme fetch + add data-loaded sync attributes (#955) (#958)
## Summary

Fixes the chained async init race identified in RCA #3 of #955.

`navigate()` (which dispatches page handlers and fetches data) was gated
behind `/api/config/theme` resolving via `.finally()`. Tests use
`waitUntil: 'domcontentloaded'` which returns BEFORE theme fetch
resolves, creating a race condition where 3+ serial network requests
must complete before any DOM rows appear.

## Changes

### Decouple navigate() from theme fetch (public/app.js)
- Move `navigate()` call out of the theme fetch `.finally()` block
- Call it immediately on DOMContentLoaded — theme is purely cosmetic and
applies in parallel

### Add data-loaded sync attributes (public/nodes.js, map.js,
packets.js)
- Set `data-loaded="true"` on the container element after each page's
data fetch resolves and DOM renders
- Nodes: set on `#nodesLeft` after `loadNodes()` renders rows
- Map: set on `#leaflet-map` after `renderMarkers()` completes
- Packets: set on `#pktLeft` after `loadPackets()` renders rows

### Update E2E tests (test-e2e-playwright.js)
- Add `await page.waitForSelector('[data-loaded="true"]', { timeout:
15000 })` before row/marker assertions
- Increase map marker timeout from 3s to 8s as additional safety margin
- Tests now synchronize on data readiness rather than racing DOM
appearance

## Verification

- Spun up local server on port 13586 with e2e-fixture.db
- Confirmed navigate() is called immediately (not gated on theme)
- Confirmed data-loaded attributes are present in served JS
- API returns data correctly (2 nodes from fixture)

Closes #955 (RCA #3)

Co-authored-by: you <you@example.com>
2026-05-01 08:07:08 -07:00
Kpa-clawbot 7aef3c355c fix(ci): freshen fixture timestamps before E2E to avoid time-based filter exclusion (#955) (#957)
## Problem

The E2E fixture DB (`test-fixtures/e2e-fixture.db`) has static
timestamps from March 29, 2026. The map page applies a default
`lastHeard=30d` filter, so once the fixture ages past 30 days all nodes
are excluded from `/api/nodes?lastHeard=30d` — causing the "Map page
loads with markers" test to fail deterministically.

This started blocking all CI on ~April 28, 2026 (30 days after March
29).

Closes #955 (RCA #1: time-based fixture rot)

## Fix

Added `tools/freshen-fixture.sh` — a small script that shifts all
`last_seen`/`first_seen` timestamps forward so the newest is near
`now()`, preserving relative ordering between nodes. Runs in CI before
the Go server starts. Does **not** modify the checked-in fixture (no
binary blob churn).

## Verification

```
$ cp test-fixtures/e2e-fixture.db /tmp/fix4.db
$ bash tools/freshen-fixture.sh /tmp/fix4.db
Fixture timestamps freshened in /tmp/fix4.db
nodes: min=2026-05-01T07:10:00Z max=2026-05-01T14:51:33Z

$ ./corescope-server -port 13585 -db /tmp/fix4.db -public public &
$ curl -s "http://localhost:13585/api/nodes?limit=200&lastHeard=30d" | jq '{total, count: (.nodes | length)}'
{
  "total": 200,
  "count": 200
}
```

All 200 nodes returned with the 30-day filter after freshening (vs 0
without the fix).

Co-authored-by: you <you@example.com>
2026-05-01 08:06:19 -07:00
Kpa-clawbot 9ac484607f ci: update go-server-coverage.json [skip ci] 2026-05-01 15:05:20 +00:00
Kpa-clawbot b562de32ff ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 15:05:19 +00:00
Kpa-clawbot 6f0c58c94a ci: update frontend-tests.json [skip ci] 2026-05-01 15:05:18 +00:00
Kpa-clawbot 7d1c679f4f ci: update frontend-coverage.json [skip ci] 2026-05-01 15:05:17 +00:00
Kpa-clawbot ead08c721d ci: update e2e-tests.json [skip ci] 2026-05-01 15:05:16 +00:00
Kpa-clawbot 57e272494d feat(server): /api/healthz readiness endpoint gated on store load (#955) (#956)
## Summary

Fixes RCA #2 from #955: the HTTP listener and `/api/stats` go live
before background goroutines (pickBestObservation, neighbor graph build)
finish, causing CI readiness checks to pass prematurely.

## Changes

1. **`cmd/server/healthz.go`** — New `GET /api/healthz` endpoint:
- Returns `503 {"ready":false,"reason":"loading"}` while background init is running
- Returns `200 {"ready":true,"loadedTx":N,"loadedObs":N}` once ready

2. **`cmd/server/main.go`** — Added `sync.WaitGroup` tracking
pickBestObservation and neighbor graph build goroutines. A coordinator
goroutine sets `readiness.Store(1)` when all complete.
`backfillResolvedPathsAsync` is NOT gated (async by design, can take 20+
min).

3. **`cmd/server/routes.go`** — Wired `/api/healthz` before system
endpoints.

4. **`.github/workflows/deploy.yml`** — CI wait-for-ready loop now polls
`/api/healthz` instead of `/api/stats`.

5. **`cmd/server/healthz_test.go`** — Tests for 503-before-ready,
200-after-ready, JSON shape, and anti-tautology gate.

## Rule 18 Verification

Built and ran against `test-fixtures/e2e-fixture.db` (499 tx):
- With the small fixture DB, init completes in <300ms so both immediate
and delayed curls return 200
- Unit tests confirm 503 behavior when `readiness=0` (simulating slow
init)
- On production DBs with 100K+ txs, the 503 window would be 5-15s
(pickBestObservation processes in 5000-tx chunks with 10ms yields)

## Test Results

```
=== RUN   TestHealthzNotReady    --- PASS
=== RUN   TestHealthzReady       --- PASS  
=== RUN   TestHealthzAntiTautology --- PASS
ok  github.com/corescope/server  19.662s (full suite)
```

Co-authored-by: you <you@example.com>
2026-05-01 07:55:57 -07:00
Kpa-clawbot d870a693d0 ci: update go-server-coverage.json [skip ci] 2026-05-01 09:36:22 +00:00
Kpa-clawbot d9904cc138 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 09:36:21 +00:00
Kpa-clawbot 7aa59eabde ci: update frontend-tests.json [skip ci] 2026-05-01 09:36:20 +00:00
Kpa-clawbot ac7d2b64f7 ci: update frontend-coverage.json [skip ci] 2026-05-01 09:36:19 +00:00
Kpa-clawbot fd3bf1a892 ci: update e2e-tests.json [skip ci] 2026-05-01 09:36:18 +00:00
Kpa-clawbot f16afe7fdf ci: update go-server-coverage.json [skip ci] 2026-05-01 09:34:29 +00:00
Kpa-clawbot ed66e54e57 ci: update go-ingestor-coverage.json [skip ci] 2026-05-01 09:34:28 +00:00
Kpa-clawbot 22079a1fc4 ci: update frontend-tests.json [skip ci] 2026-05-01 09:34:27 +00:00
Kpa-clawbot 232882d308 ci: update frontend-coverage.json [skip ci] 2026-05-01 09:34:25 +00:00
Kpa-clawbot fb640bcfc3 ci: update e2e-tests.json [skip ci] 2026-05-01 09:34:25 +00:00
29 changed files with 880 additions and 26 deletions
+1 -1
@@ -1 +1 @@
-{"schemaVersion":1,"label":"frontend coverage","message":"36.12%","color":"red"}
+{"schemaVersion":1,"label":"frontend coverage","message":"40.21%","color":"red"}
+4 -1
@@ -176,6 +176,9 @@ jobs:
- name: Instrument frontend JS for coverage
run: sh scripts/instrument-frontend.sh
- name: Freshen fixture timestamps
run: bash tools/freshen-fixture.sh test-fixtures/e2e-fixture.db
- name: Start Go server with fixture DB
run: |
fuser -k 13581/tcp 2>/dev/null || true
@@ -183,7 +186,7 @@ jobs:
./corescope-server -port 13581 -db test-fixtures/e2e-fixture.db -public public-instrumented &
echo $! > .server.pid
for i in $(seq 1 30); do
-if curl -sf http://localhost:13581/api/stats > /dev/null 2>&1; then
+if curl -sf http://localhost:13581/api/healthz > /dev/null 2>&1; then
echo "Server ready after ${i}s"
break
fi
+28
@@ -7,6 +7,7 @@ import (
"log"
"os"
"strings"
"sync"
"github.com/meshcore-analyzer/geofilter"
)
@@ -42,6 +43,15 @@ type Config struct {
GeoFilter *GeoFilterConfig `json:"geo_filter,omitempty"`
ValidateSignatures *bool `json:"validateSignatures,omitempty"`
DB *DBConfig `json:"db,omitempty"`
// ObserverBlacklist is a list of observer public keys to drop at ingest.
// Messages from blacklisted observers are silently discarded — no DB writes,
// no UpsertObserver, no observations, no metrics.
ObserverBlacklist []string `json:"observerBlacklist,omitempty"`
// obsBlacklistSetCached is the lazily-built lowercase set for O(1) lookups.
obsBlacklistSetCached map[string]bool
obsBlacklistOnce sync.Once
}
// GeoFilterConfig is an alias for the shared geofilter.Config type.
@@ -114,6 +124,24 @@ func (c *Config) ObserverDaysOrDefault() int {
return 14
}
// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
if c == nil || len(c.ObserverBlacklist) == 0 {
return false
}
c.obsBlacklistOnce.Do(func() {
m := make(map[string]bool, len(c.ObserverBlacklist))
for _, pk := range c.ObserverBlacklist {
trimmed := strings.ToLower(strings.TrimSpace(pk))
if trimmed != "" {
m[trimmed] = true
}
}
c.obsBlacklistSetCached = m
})
return c.obsBlacklistSetCached[strings.ToLower(strings.TrimSpace(id))]
}
// LoadConfig reads configuration from a JSON file, with env var overrides.
// If the config file does not exist, sensible defaults are used (zero-config startup).
func LoadConfig(path string) (*Config, error) {
+27 -4
@@ -116,7 +116,8 @@ func applySchema(db *sql.DB) error {
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL,
-inactive INTEGER DEFAULT 0
+inactive INTEGER DEFAULT 0,
+last_packet_at TEXT DEFAULT NULL
);
CREATE INDEX IF NOT EXISTS idx_nodes_last_seen ON nodes(last_seen);
@@ -421,6 +422,28 @@ func applySchema(db *sql.DB) error {
log.Println("[migration] observations.raw_hex column added")
}
// Migration: add last_packet_at column to observers (#last-packet-at)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observers_last_packet_at_v1'")
if row.Scan(&migDone) != nil {
log.Println("[migration] Adding last_packet_at column to observers...")
_, alterErr := db.Exec(`ALTER TABLE observers ADD COLUMN last_packet_at TEXT DEFAULT NULL`)
if alterErr != nil && !strings.Contains(alterErr.Error(), "duplicate column") {
return fmt.Errorf("observers last_packet_at ALTER: %w", alterErr)
}
// Backfill: set last_packet_at = last_seen only for observers that actually have
// observation rows (packet_count alone is unreliable — UpsertObserver sets it to 1
// on INSERT even for status-only observers).
res, err := db.Exec(`UPDATE observers SET last_packet_at = last_seen
WHERE last_packet_at IS NULL
AND rowid IN (SELECT DISTINCT observer_idx FROM observations WHERE observer_idx IS NOT NULL)`)
if err == nil {
n, _ := res.RowsAffected()
log.Printf("[migration] Backfilled last_packet_at for %d observers with packets", n)
}
db.Exec(`INSERT INTO _migrations (name) VALUES ('observers_last_packet_at_v1')`)
log.Println("[migration] observers.last_packet_at column added")
}
return nil
}
@@ -504,7 +527,7 @@ func (s *Store) prepareStatements() error {
return err
}
-s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ? WHERE rowid = ?")
+s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ?, last_packet_at = ? WHERE rowid = ?")
if err != nil {
return err
}
@@ -583,9 +606,9 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
err := s.stmtGetObserverRowid.QueryRow(data.ObserverID).Scan(&rowid)
if err == nil {
observerIdx = &rowid
-// Update observer last_seen on every packet to prevent
+// Update observer last_seen and last_packet_at on every packet to prevent
// low-traffic observers from appearing offline (#463)
-_, _ = s.stmtUpdateObserverLastSeen.Exec(now, rowid)
+_, _ = s.stmtUpdateObserverLastSeen.Exec(now, now, rowid)
}
}
+55
@@ -569,6 +569,61 @@ func TestInsertTransmissionUpdatesObserverLastSeen(t *testing.T) {
}
}
func TestLastPacketAtUpdatedOnPacketOnly(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
t.Fatal(err)
}
defer s.Close()
// Insert observer via status path — last_packet_at should be NULL
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAt sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if lastPacketAt.Valid {
t.Fatalf("expected last_packet_at to be NULL after UpsertObserver, got %s", lastPacketAt.String)
}
// Insert a packet from this observer — last_packet_at should be set
data := &PacketData{
RawHex: "0A00D69F",
Timestamp: "2026-04-24T12:00:00Z",
ObserverID: "obs1",
Hash: "lastpackettest123456",
RouteType: 2,
PayloadType: 2,
PathJSON: "[]",
DecodedJSON: `{"type":"TXT_MSG"}`,
}
if _, err := s.InsertTransmission(data); err != nil {
t.Fatal(err)
}
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
if !lastPacketAt.Valid {
t.Fatal("expected last_packet_at to be non-NULL after InsertTransmission")
}
// InsertTransmission uses `now = data.Timestamp || time.Now()`, so last_packet_at
// should match the packet's Timestamp when provided (same source-of-truth as last_seen).
if lastPacketAt.String != "2026-04-24T12:00:00Z" {
t.Errorf("expected last_packet_at=2026-04-24T12:00:00Z, got %s", lastPacketAt.String)
}
// UpsertObserver again (status path) — last_packet_at should NOT change
if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
t.Fatal(err)
}
var lastPacketAtAfterStatus sql.NullString
s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAtAfterStatus)
if !lastPacketAtAfterStatus.Valid || lastPacketAtAfterStatus.String != lastPacketAt.String {
t.Errorf("UpsertObserver should not change last_packet_at; expected %s, got %v", lastPacketAt.String, lastPacketAtAfterStatus)
}
}
func TestEndToEndIngest(t *testing.T) {
s, err := OpenStore(tempDBPath(t))
if err != nil {
+7
@@ -240,6 +240,13 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
return
}
// Observer blacklist: drop ALL messages from blacklisted observers before any
// DB writes (status, metrics, packets). Trumps IATA filter.
if len(parts) > 2 && cfg.IsObserverBlacklisted(parts[2]) {
log.Printf("MQTT [%s] observer %.8s blacklisted, dropping", tag, parts[2])
return
}
// Status topic: meshcore/<region>/<observer_id>/status
// IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
// is region-independent and should be accepted from all observers regardless of
+43
@@ -0,0 +1,43 @@
package main
import (
"testing"
)
func TestIngestorIsObserverBlacklisted(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"OBS1", "obs2"},
}
tests := []struct {
id string
want bool
}{
{"OBS1", true},
{"obs1", true},
{"OBS2", true},
{"obs3", false},
{"", false},
}
for _, tt := range tests {
got := cfg.IsObserverBlacklisted(tt.id)
if got != tt.want {
t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
}
}
}
func TestIngestorIsObserverBlacklistedEmpty(t *testing.T) {
cfg := &Config{}
if cfg.IsObserverBlacklisted("anything") {
t.Error("empty blacklist should not match")
}
}
func TestIngestorIsObserverBlacklistedNil(t *testing.T) {
var cfg *Config
if cfg.IsObserverBlacklisted("anything") {
t.Error("nil config should not match")
}
}
+35
@@ -72,6 +72,15 @@ type Config struct {
DebugAffinity bool `json:"debugAffinity,omitempty"`
// ObserverBlacklist is a list of observer public keys to exclude from API
// responses (defense in depth — ingestor drops at ingest, server filters
// any that slipped through from a prior unblocked window).
ObserverBlacklist []string `json:"observerBlacklist,omitempty"`
// obsBlacklistSetCached is the lazily-built set version of ObserverBlacklist.
obsBlacklistSetCached map[string]bool
obsBlacklistOnce sync.Once
ResolvedPath *ResolvedPathConfig `json:"resolvedPath,omitempty"`
NeighborGraph *NeighborGraphConfig `json:"neighborGraph,omitempty"`
}
@@ -404,3 +413,29 @@ func (c *Config) IsBlacklisted(pubkey string) bool {
}
return c.blacklistSet()[strings.ToLower(strings.TrimSpace(pubkey))]
}
// obsBlacklistSet lazily builds and caches the observerBlacklist as a set for O(1) lookups.
func (c *Config) obsBlacklistSet() map[string]bool {
c.obsBlacklistOnce.Do(func() {
if len(c.ObserverBlacklist) == 0 {
return
}
m := make(map[string]bool, len(c.ObserverBlacklist))
for _, pk := range c.ObserverBlacklist {
trimmed := strings.ToLower(strings.TrimSpace(pk))
if trimmed != "" {
m[trimmed] = true
}
}
c.obsBlacklistSetCached = m
})
return c.obsBlacklistSetCached
}
// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
if c == nil || len(c.ObserverBlacklist) == 0 {
return false
}
return c.obsBlacklistSet()[strings.ToLower(strings.TrimSpace(id))]
}
+2 -1
@@ -35,7 +35,8 @@ func setupTestDBv2(t *testing.T) *DB {
CREATE TABLE observers (
id TEXT PRIMARY KEY, name TEXT, iata TEXT, last_seen TEXT, first_seen TEXT,
packet_count INTEGER DEFAULT 0, model TEXT, firmware TEXT,
client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL
client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL,
inactive INTEGER DEFAULT 0
);
CREATE TABLE transmissions (
id INTEGER PRIMARY KEY AUTOINCREMENT, raw_hex TEXT NOT NULL,
+7 -6
@@ -170,6 +170,7 @@ type Observer struct {
BatteryMv *int `json:"battery_mv"`
UptimeSecs *int64 `json:"uptime_secs"`
NoiseFloor *float64 `json:"noise_floor"`
LastPacketAt *string `json:"last_packet_at"`
}
// Transmission represents a row from the transmissions table.
@@ -231,7 +232,7 @@ func (db *DB) GetStats() (*Stats, error) {
sevenDaysAgo := time.Now().Add(-7 * 24 * time.Hour).Format(time.RFC3339)
db.conn.QueryRow("SELECT COUNT(*) FROM nodes WHERE last_seen > ?", sevenDaysAgo).Scan(&s.TotalNodes)
db.conn.QueryRow("SELECT COUNT(*) FROM nodes").Scan(&s.TotalNodesAllTime)
db.conn.QueryRow("SELECT COUNT(*) FROM observers").Scan(&s.TotalObservers)
db.conn.QueryRow("SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0").Scan(&s.TotalObservers)
oneHourAgo := time.Now().Add(-1 * time.Hour).Unix()
db.conn.QueryRow("SELECT COUNT(*) FROM observations WHERE timestamp > ?", oneHourAgo).Scan(&s.PacketsLastHour)
@@ -970,9 +971,9 @@ func (db *DB) getObservationsForTransmissions(txIDs []int) map[int][]map[string]
return result
}
// GetObservers returns all observers sorted by last_seen DESC.
// GetObservers returns active observers (not soft-deleted) sorted by last_seen DESC.
func (db *DB) GetObservers() ([]Observer, error) {
rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers ORDER BY last_seen DESC")
rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC")
if err != nil {
return nil, err
}
@@ -983,7 +984,7 @@ func (db *DB) GetObservers() ([]Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor); err != nil {
if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt); err != nil {
continue
}
if batteryMv.Valid {
@@ -1006,8 +1007,8 @@ func (db *DB) GetObserverByID(id string) (*Observer, error) {
var o Observer
var batteryMv, uptimeSecs sql.NullInt64
var noiseFloor sql.NullFloat64
err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers WHERE id = ?", id).
Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor)
err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE id = ?", id).
Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt)
if err != nil {
return nil, err
}
+76 -2
@@ -48,7 +48,9 @@ func setupTestDB(t *testing.T) *DB {
radio TEXT,
battery_mv INTEGER,
uptime_secs INTEGER,
noise_floor REAL
noise_floor REAL,
inactive INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
@@ -355,6 +357,35 @@ func TestGetObservers(t *testing.T) {
if observers[0].ID != "obs1" {
t.Errorf("expected obs1 first (most recent), got %s", observers[0].ID)
}
// last_packet_at should be nil since seedTestData doesn't set it
if observers[0].LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt for obs1 from seed, got %v", *observers[0].LastPacketAt)
}
}
// Regression: GetObservers must exclude soft-deleted (inactive=1) rows.
// Stale observers were appearing in /api/observers despite the auto-prune
// marking them inactive, because the SELECT query had no WHERE filter.
func TestGetObservers_ExcludesInactive(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Mark obs2 inactive — soft delete simulating a stale-observer prune.
if _, err := db.conn.Exec(`UPDATE observers SET inactive = 1 WHERE id = ?`, "obs2"); err != nil {
t.Fatalf("update inactive: %v", err)
}
observers, err := db.GetObservers()
if err != nil {
t.Fatal(err)
}
if len(observers) != 1 {
t.Errorf("expected 1 observer (obs1) after marking obs2 inactive, got %d", len(observers))
}
for _, o := range observers {
if o.ID == "obs2" {
t.Errorf("inactive observer obs2 should be excluded")
}
}
}
func TestGetObserverByID(t *testing.T) {
@@ -369,6 +400,48 @@ func TestGetObserverByID(t *testing.T) {
if obs.ID != "obs1" {
t.Errorf("expected obs1, got %s", obs.ID)
}
// Verify last_packet_at is nil by default
if obs.LastPacketAt != nil {
t.Errorf("expected nil LastPacketAt, got %v", *obs.LastPacketAt)
}
}
func TestGetObserverLastPacketAt(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
// Set last_packet_at for obs1
ts := "2026-04-24T12:00:00Z"
if _, err := db.conn.Exec(`UPDATE observers SET last_packet_at = ? WHERE id = ?`, ts, "obs1"); err != nil {
t.Fatalf("set last_packet_at: %v", err)
}
// Verify via GetObservers
observers, err := db.GetObservers()
if err != nil {
t.Fatal(err)
}
var obs1 *Observer
for i := range observers {
if observers[i].ID == "obs1" {
obs1 = &observers[i]
break
}
}
if obs1 == nil {
t.Fatal("obs1 not found")
}
if obs1.LastPacketAt == nil || *obs1.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObservers, got %v", ts, obs1.LastPacketAt)
}
// Verify via GetObserverByID
obs, err := db.GetObserverByID("obs1")
if err != nil {
t.Fatal(err)
}
if obs.LastPacketAt == nil || *obs.LastPacketAt != ts {
t.Errorf("expected LastPacketAt=%s via GetObserverByID, got %v", ts, obs.LastPacketAt)
}
}
func TestGetObserverByIDNotFound(t *testing.T) {
@@ -1109,7 +1182,8 @@ func setupTestDBV2(t *testing.T) *DB {
iata TEXT,
last_seen TEXT,
first_seen TEXT,
packet_count INTEGER DEFAULT 0
packet_count INTEGER DEFAULT 0,
last_packet_at TEXT DEFAULT NULL
);
CREATE TABLE transmissions (
+43
@@ -0,0 +1,43 @@
package main
import (
"encoding/json"
"net/http"
"sync/atomic"
)
// readiness tracks whether background init goroutines have completed.
// Set to 1 once store.Load, pickBestObservation, and neighbor graph build are done.
var readiness atomic.Int32
// handleHealthz returns 200 when the server is ready to serve queries,
// or 503 while background initialization is still running.
func (s *Server) handleHealthz(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
if readiness.Load() == 0 {
w.WriteHeader(http.StatusServiceUnavailable)
json.NewEncoder(w).Encode(map[string]interface{}{
"ready": false,
"reason": "loading",
})
return
}
var loadedTx, loadedObs int
if s.store != nil {
s.store.mu.RLock()
loadedTx = len(s.store.packets)
for _, p := range s.store.packets {
loadedObs += len(p.Observations)
}
s.store.mu.RUnlock()
}
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]interface{}{
"ready": true,
"loadedTx": loadedTx,
"loadedObs": loadedObs,
})
}
+80
@@ -0,0 +1,80 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestHealthzNotReady(t *testing.T) {
// Ensure readiness is 0 (not ready)
readiness.Store(0)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code != http.StatusServiceUnavailable {
t.Fatalf("expected 503, got %d", w.Code)
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("invalid JSON: %v", err)
}
if resp["ready"] != false {
t.Fatalf("expected ready=false, got %v", resp["ready"])
}
if resp["reason"] != "loading" {
t.Fatalf("expected reason=loading, got %v", resp["reason"])
}
}
func TestHealthzReady(t *testing.T) {
readiness.Store(1)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w.Code)
}
var resp map[string]interface{}
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("invalid JSON: %v", err)
}
if resp["ready"] != true {
t.Fatalf("expected ready=true, got %v", resp["ready"])
}
if _, ok := resp["loadedTx"]; !ok {
t.Fatal("missing loadedTx field")
}
if _, ok := resp["loadedObs"]; !ok {
t.Fatal("missing loadedObs field")
}
}
func TestHealthzAntiTautology(t *testing.T) {
// When readiness is 0, must NOT return 200
readiness.Store(0)
defer readiness.Store(0)
srv := &Server{store: &PacketStore{}}
req := httptest.NewRequest("GET", "/api/healthz", nil)
w := httptest.NewRecorder()
srv.handleHealthz(w, req)
if w.Code == http.StatusOK {
t.Fatal("anti-tautology: handler returned 200 when readiness=0; gating is broken")
}
}
+32
@@ -174,6 +174,27 @@ func main() {
database.hasResolvedPath = true // detectSchema ran before column was added; fix the flag
}
// Ensure observers.inactive column exists (PR #954 filters on it; ingestor migration
// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
if err := ensureObserverInactiveColumn(dbPath); err != nil {
log.Printf("[store] warning: could not add observers.inactive column: %v", err)
}
// Ensure observers.last_packet_at column exists (PR #905 reads it; ingestor migration
// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
if err := ensureLastPacketAtColumn(dbPath); err != nil {
log.Printf("[store] warning: could not add observers.last_packet_at column: %v", err)
}
// Soft-delete observers that are in the blacklist (mark inactive=1) so
// historical data from a prior unblocked window is hidden too.
if len(cfg.ObserverBlacklist) > 0 {
softDeleteBlacklistedObservers(dbPath, cfg.ObserverBlacklist)
}
// WaitGroup for background init steps that gate /api/healthz readiness.
var initWg sync.WaitGroup
// Load or build neighbor graph
if neighborEdgesTableExists(database.conn) {
store.graph = loadNeighborEdgesFromDB(database.conn)
@@ -181,7 +202,9 @@ func main() {
} else {
log.Printf("[neighbor] no persisted edges found, will build in background...")
store.graph = NewNeighborGraph() // empty graph — gets populated by background goroutine
initWg.Add(1)
go func() {
defer initWg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("[neighbor] graph build panic recovered: %v", r)
@@ -205,7 +228,9 @@ func main() {
// API serves best-effort data until this completes (~10s for 100K txs).
// Processes in chunks of 5000, releasing the lock between chunks so API
// handlers remain responsive.
initWg.Add(1)
go func() {
defer initWg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("[store] pickBestObservation panic recovered: %v", r)
@@ -233,6 +258,13 @@ func main() {
log.Printf("[store] initial pickBestObservation complete (%d transmissions)", totalPackets)
}()
// Mark server ready once all background init completes.
go func() {
initWg.Wait()
readiness.Store(1)
log.Printf("[server] readiness: ready=true (background init complete)")
}()
// WebSocket hub
hub := NewHub()
+112
@@ -281,6 +281,118 @@ func ensureResolvedPathColumn(dbPath string) error {
return nil
}
// ensureObserverInactiveColumn adds the inactive column to observers if missing.
// The column was originally added by ingestor migration (cmd/ingestor/db.go:344) to
// support soft-delete via RemoveStaleObservers + filtered reads (PR #954). When the
// server starts against a DB that was never touched by the ingestor (e.g. the e2e
// fixture), the column is missing and read queries that filter on it (GetObservers,
// GetStats) silently fail with "no such column: inactive" — leaving /api/observers
// returning empty.
func ensureObserverInactiveColumn(dbPath string) error {
rw, err := openRW(dbPath)
if err != nil {
return err
}
defer rw.Close()
rows, err := rw.Query("PRAGMA table_info(observers)")
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "inactive" {
return nil // already exists
}
}
_, err = rw.Exec("ALTER TABLE observers ADD COLUMN inactive INTEGER DEFAULT 0")
if err != nil {
return fmt.Errorf("add inactive column: %w", err)
}
log.Println("[store] Added inactive column to observers")
return nil
}
// ensureLastPacketAtColumn adds the last_packet_at column to observers if missing.
// The column was originally added by ingestor migration (observers_last_packet_at_v1)
// to track the most recent packet observation time separately from status updates.
// When the server starts against a DB that was never touched by the ingestor (e.g.
// the e2e fixture), the column is missing and read queries that reference it
// (GetObservers, GetObserverByID) fail with "no such column: last_packet_at".
func ensureLastPacketAtColumn(dbPath string) error {
rw, err := openRW(dbPath)
if err != nil {
return err
}
defer rw.Close()
rows, err := rw.Query("PRAGMA table_info(observers)")
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
return nil // already exists
}
}
_, err = rw.Exec("ALTER TABLE observers ADD COLUMN last_packet_at TEXT")
if err != nil {
return fmt.Errorf("add last_packet_at column: %w", err)
}
log.Println("[store] Added last_packet_at column to observers")
return nil
}
// softDeleteBlacklistedObservers marks observers matching the blacklist as
// inactive=1 so they are hidden from API responses. Runs once at startup.
func softDeleteBlacklistedObservers(dbPath string, blacklist []string) {
rw, err := openRW(dbPath)
if err != nil {
log.Printf("[observer-blacklist] warning: could not open DB for soft-delete: %v", err)
return
}
defer rw.Close()
placeholders := make([]string, 0, len(blacklist))
args := make([]interface{}, 0, len(blacklist))
for _, pk := range blacklist {
trimmed := strings.TrimSpace(pk)
if trimmed == "" {
continue
}
placeholders = append(placeholders, "LOWER(?)")
args = append(args, trimmed)
}
if len(placeholders) == 0 {
return
}
query := "UPDATE observers SET inactive = 1 WHERE LOWER(id) IN (" + strings.Join(placeholders, ",") + ") AND (inactive IS NULL OR inactive = 0)"
result, err := rw.Exec(query, args...)
if err != nil {
log.Printf("[observer-blacklist] warning: soft-delete failed: %v", err)
return
}
if n, _ := result.RowsAffected(); n > 0 {
log.Printf("[observer-blacklist] soft-deleted %d blacklisted observer(s)", n)
}
}
// resolvePathForObs resolves hop prefixes to full pubkeys for an observation.
// Returns nil if path is empty.
func resolvePathForObs(pathJSON, observerID string, tx *StoreTx, pm *prefixMap, graph *NeighborGraph) []*string {
+59
@@ -538,3 +538,62 @@ func TestOpenRW_BusyTimeout(t *testing.T) {
t.Errorf("expected busy_timeout=5000, got %d", timeout)
}
}
func TestEnsureLastPacketAtColumn(t *testing.T) {
// Create a temp DB with observers table missing last_packet_at
dir := t.TempDir()
dbPath := dir + "/test.db"
db, err := sql.Open("sqlite", dbPath)
if err != nil {
t.Fatal(err)
}
_, err = db.Exec(`CREATE TABLE observers (
id TEXT PRIMARY KEY,
name TEXT,
last_seen TEXT,
lat REAL,
lon REAL,
inactive INTEGER DEFAULT 0
)`)
if err != nil {
t.Fatal(err)
}
db.Close()
// First call: should add the column
if err := ensureLastPacketAtColumn(dbPath); err != nil {
t.Fatalf("first call failed: %v", err)
}
// Verify column exists
db2, err := sql.Open("sqlite", dbPath)
if err != nil {
t.Fatal(err)
}
defer db2.Close()
var found bool
rows, err := db2.Query("PRAGMA table_info(observers)")
if err != nil {
t.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var cid int
var colName string
var colType sql.NullString
var notNull, pk int
var dflt sql.NullString
if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
found = true
}
}
if !found {
t.Fatal("last_packet_at column not found after migration")
}
// Idempotency: second call should succeed without error
if err := ensureLastPacketAtColumn(dbPath); err != nil {
t.Fatalf("idempotent call failed: %v", err)
}
}
+159
@@ -0,0 +1,159 @@
package main
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestConfigIsObserverBlacklisted(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"OBS1", "obs2", " Obs3 "},
}
tests := []struct {
id string
want bool
}{
{"OBS1", true},
{"obs1", true}, // case-insensitive
{"OBS2", true},
{"Obs3", true}, // whitespace trimmed
{"obs4", false},
{"", false},
}
for _, tt := range tests {
got := cfg.IsObserverBlacklisted(tt.id)
if got != tt.want {
t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
}
}
}
func TestConfigIsObserverBlacklistedEmpty(t *testing.T) {
cfg := &Config{}
if cfg.IsObserverBlacklisted("anything") {
t.Error("empty blacklist should not match anything")
}
}
func TestConfigIsObserverBlacklistedNil(t *testing.T) {
var cfg *Config
if cfg.IsObserverBlacklisted("anything") {
t.Error("nil config should not match anything")
}
}
func TestObserverBlacklistFiltersHandleObservers(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('goodobs', 'GoodObs', 'SFO', datetime('now'))")
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")
cfg := &Config{
ObserverBlacklist: []string{"badobs"},
}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp ObserverListResponse
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
for _, obs := range resp.Observers {
if obs.ID == "badobs" {
t.Error("blacklisted observer should not appear in observers list")
}
}
foundGood := false
for _, obs := range resp.Observers {
if obs.ID == "goodobs" {
foundGood = true
}
}
if !foundGood {
t.Error("non-blacklisted observer should appear in observers list")
}
}
func TestObserverBlacklistFiltersObserverDetail(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")
cfg := &Config{
ObserverBlacklist: []string{"badobs"},
}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers/badobs", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Errorf("expected 404 for blacklisted observer detail, got %d", w.Code)
}
}
func TestNoObserverBlacklistPassesAll(t *testing.T) {
db := setupTestDB(t)
db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('someobs', 'SomeObs', 'SFO', datetime('now'))")
cfg := &Config{}
srv := NewServer(db, cfg, NewHub())
srv.RegisterRoutes(setupTestRouter(srv))
req := httptest.NewRequest("GET", "/api/observers", nil)
w := httptest.NewRecorder()
srv.router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w.Code)
}
var resp ObserverListResponse
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("failed to parse response: %v", err)
}
foundSome := false
for _, obs := range resp.Observers {
if obs.ID == "someobs" {
foundSome = true
}
}
if !foundSome {
t.Error("without blacklist, observer should appear")
}
}
func TestObserverBlacklistConcurrent(t *testing.T) {
cfg := &Config{
ObserverBlacklist: []string{"AA", "BB", "CC"},
}
done := make(chan struct{})
for i := 0; i < 50; i++ {
go func() {
defer func() { done <- struct{}{} }()
for j := 0; j < 100; j++ {
cfg.IsObserverBlacklisted("AA")
cfg.IsObserverBlacklisted("DD")
}
}()
}
for i := 0; i < 50; i++ {
<-done
}
}
+16
@@ -118,6 +118,9 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/config/map", s.handleConfigMap).Methods("GET")
r.HandleFunc("/api/config/geo-filter", s.handleConfigGeoFilter).Methods("GET")
// Readiness endpoint (gated on background init completion)
r.HandleFunc("/api/healthz", s.handleHealthz).Methods("GET")
// System endpoints
r.HandleFunc("/api/health", s.handleHealth).Methods("GET")
r.HandleFunc("/api/stats", s.handleStats).Methods("GET")
@@ -1929,6 +1932,10 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
result := make([]ObserverResp, 0, len(observers))
for _, o := range observers {
// Defense in depth: skip observers that are in the blacklist
if s.cfg != nil && s.cfg.IsObserverBlacklisted(o.ID) {
continue
}
plh := 0
if c, ok := pktCounts[o.ID]; ok {
plh = c
@@ -1948,6 +1955,7 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
ClientVersion: o.ClientVersion, Radio: o.Radio,
BatteryMv: o.BatteryMv, UptimeSecs: o.UptimeSecs,
NoiseFloor: o.NoiseFloor,
LastPacketAt: o.LastPacketAt,
PacketsLastHour: plh,
Lat: lat, Lon: lon, NodeRole: nodeRole,
})
@@ -1960,6 +1968,13 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
id := mux.Vars(r)["id"]
// Defense in depth: reject blacklisted observer
if s.cfg != nil && s.cfg.IsObserverBlacklisted(id) {
writeError(w, 404, "Observer not found")
return
}
obs, err := s.db.GetObserverByID(id)
if err != nil || obs == nil {
writeError(w, 404, "Observer not found")
@@ -1982,6 +1997,7 @@ func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
ClientVersion: obs.ClientVersion, Radio: obs.Radio,
BatteryMv: obs.BatteryMv, UptimeSecs: obs.UptimeSecs,
NoiseFloor: obs.NoiseFloor,
LastPacketAt: obs.LastPacketAt,
PacketsLastHour: plh,
})
}
+1
@@ -859,6 +859,7 @@ type ObserverResp struct {
BatteryMv interface{} `json:"battery_mv"`
UptimeSecs interface{} `json:"uptime_secs"`
NoiseFloor interface{} `json:"noise_floor"`
LastPacketAt interface{} `json:"last_packet_at"`
PacketsLastHour int `json:"packetsLastHour"`
Lat interface{} `json:"lat"`
Lon interface{} `json:"lon"`
+5 -4
@@ -4,7 +4,7 @@
// --- Route/Payload name maps ---
const ROUTE_TYPES = { 0: 'TRANSPORT_FLOOD', 1: 'FLOOD', 2: 'DIRECT', 3: 'TRANSPORT_DIRECT' };
const PAYLOAD_TYPES = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 6: 'Group Data', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 10: 'Multipart', 11: 'Control', 15: 'Raw Custom' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 7: 'anon-req', 8: 'path', 9: 'trace' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 6: 'grp-data', 7: 'anon-req', 8: 'path', 9: 'trace' };
function routeTypeName(n) { return ROUTE_TYPES[n] || 'UNKNOWN'; }
function payloadTypeName(n) { return PAYLOAD_TYPES[n] || 'UNKNOWN'; }
@@ -965,10 +965,11 @@ window.addEventListener('DOMContentLoaded', () => {
}).catch(() => {
window.SITE_CONFIG = { timestamps: { defaultMode: 'ago', timezone: 'local', formatPreset: 'iso', customFormat: '', allowCustomFormat: false } };
if (window._customizerV2) window._customizerV2.init(window.SITE_CONFIG);
}).finally(() => {
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});
// Navigate immediately — don't gate data-fetching pages on cosmetic theme fetch
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});
/**
+5
@@ -595,6 +595,11 @@
// Only fitBounds on subsequent data refreshes if user hasn't manually panned
} catch (e) {
console.error('Map load error:', e);
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
// Otherwise an api() failure leaves the test waiting forever.
var mapContainer = document.getElementById('leaflet-map');
if (mapContainer) mapContainer.setAttribute('data-loaded', 'true');
}
}
+4
@@ -955,6 +955,10 @@
console.error('Failed to load nodes:', e);
const tbody = document.getElementById('nodesBody');
if (tbody) tbody.innerHTML = '<tr><td colspan="6" class="text-center" style="padding:24px;color:var(--error,#ef4444)"><div role="alert" aria-live="polite">Failed to load nodes. Please try again.</div></td></tr>';
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
var nodesContainer = document.getElementById('nodesLeft') || document.getElementById('nodesBody');
if (nodesContainer) nodesContainer.setAttribute('data-loaded', 'true');
}
}
+8
@@ -150,6 +150,14 @@
<div class="stat-label">First Seen</div>
<div class="stat-value" style="font-size:0.85em">${obs.first_seen ? new Date(obs.first_seen).toLocaleDateString() : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Status Update</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_seen ? timeAgo(obs.last_seen) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_seen).toLocaleString() + '</span>' : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Packet Observation</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_packet_at ? timeAgo(obs.last_packet_at) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_packet_at).toLocaleString() + '</span>' : '<span style="color:var(--text-muted)">never</span>'}</div>
</div>
</div>
<div class="mono" style="font-size:0.75em;color:var(--text-muted);margin-bottom:20px;word-break:break-all">
ID: ${obs.id}
+13 -1
@@ -75,6 +75,17 @@
return { cls: 'health-red', label: 'Offline' };
}
function packetBadge(o) {
if (!o.last_packet_at) return '<span title="No packets ever observed">📡⚠ never</span>';
const pktAgo = Date.now() - new Date(o.last_packet_at).getTime();
const statusAgo = o.last_seen ? Date.now() - new Date(o.last_seen).getTime() : Infinity;
const gap = pktAgo - statusAgo;
if (gap > 600000) {
return `<span title="Last packet ${timeAgo(o.last_packet_at)} — status is newer by ${Math.round(gap/60000)}min. Observer may be alive but not forwarding packets.">📡⚠ ${timeAgo(o.last_packet_at)}</span>`;
}
return timeAgo(o.last_packet_at);
}
function uptimeStr(firstSeen) {
if (!firstSeen) return '—';
const ms = Date.now() - new Date(firstSeen).getTime();
@@ -123,7 +134,7 @@
<div class="obs-table-scroll"><table class="data-table obs-table" id="obsTable">
<caption class="sr-only">Observer status and statistics</caption>
<thead><tr>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Seen</th>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Status</th><th scope="col">Last Packet</th>
<th scope="col">Packets</th><th scope="col">Packets/Hour</th><th scope="col">Uptime</th>
</tr></thead>
<tbody>${filtered.map(o => {
@@ -134,6 +145,7 @@
<td class="mono">${o.name || o.id}</td>
<td>${o.iata ? `<span class="badge-region">${o.iata}</span>` : '—'}</td>
<td>${timeAgo(o.last_seen)}</td>
<td>${packetBadge(o)}</td>
<td>${(o.packet_count || 0).toLocaleString()}</td>
<td>${sparkBar(o.packetsLastHour || 0, maxPktsHr)}</td>
<td>${uptimeStr(o.first_seen)}</td>
+5 -1
@@ -26,7 +26,7 @@
let observers = [];
let observerMap = new Map(); // id → observer for O(1) lookups (#383)
let regionMap = {};
const TYPE_NAMES = { 0:'Request', 1:'Response', 2:'Direct Msg', 3:'ACK', 4:'Advert', 5:'Channel Msg', 7:'Anon Req', 8:'Path', 9:'Trace', 11:'Control' };
const TYPE_NAMES = { 0:'Request', 1:'Response', 2:'Direct Msg', 3:'ACK', 4:'Advert', 5:'Channel Msg', 6:'Group Data', 7:'Anon Req', 8:'Path', 9:'Trace', 11:'Control' };
function typeName(t) { return TYPE_NAMES[t] ?? `Type ${t}`; }
const isMobile = window.innerWidth <= 1024;
const PACKET_LIMIT = isMobile ? 1000 : 50000;
@@ -748,6 +748,10 @@
console.error('Failed to load packets:', e);
const tbody = document.getElementById('pktBody');
if (tbody) tbody.innerHTML = '<tr><td colspan="' + _getColCount() + '" class="text-center" style="padding:24px;color:var(--error,#ef4444)"><div role="alert" aria-live="polite">Failed to load packets. Please try again.</div></td></tr>';
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
var pktContainer = document.getElementById('pktLeft') || document.getElementById('pktBody');
if (pktContainer) pktContainer.setAttribute('data-loaded', 'true');
}
}
+2 -2
@@ -15,14 +15,14 @@
};
window.TYPE_COLORS = {
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', TXT_MSG: '#f59e0b', ACK: '#6b7280',
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', GRP_DATA: '#8b5cf6', TXT_MSG: '#f59e0b', ACK: '#6b7280',
REQUEST: '#a855f7', RESPONSE: '#06b6d4', TRACE: '#ec4899', PATH: '#14b8a6',
ANON_REQ: '#f43f5e', UNKNOWN: '#6b7280'
};
// Badge CSS class name mapping
const TYPE_BADGE_MAP = {
ADVERT: 'advert', GRP_TXT: 'grp-txt', TXT_MSG: 'txt-msg', ACK: 'ack',
ADVERT: 'advert', GRP_TXT: 'grp-txt', GRP_DATA: 'grp-data', TXT_MSG: 'txt-msg', ACK: 'ack',
REQUEST: 'req', RESPONSE: 'response', TRACE: 'trace', PATH: 'path',
ANON_REQ: 'anon-req', UNKNOWN: 'unknown'
};
+1 -1
@@ -1558,7 +1558,7 @@ tr[data-hops]:hover { background: rgba(59,130,246,0.1); }
/* #20 — Observers table horizontal scroll on mobile */
.obs-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
.obs-table-scroll .obs-table { min-width: 640px; }
.obs-table-scroll .obs-table { min-width: 720px; }
/* #206 — Analytics/Compare tables scroll wrappers on mobile */
.analytics-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
+7 -2
@@ -211,6 +211,7 @@ async function run() {
// Test 2: Nodes page loads with data
await test('Nodes page loads with data', async () => {
await page.goto(`${BASE}/#/nodes`, { waitUntil: 'domcontentloaded' });
await page.waitForSelector('[data-loaded="true"]', { timeout: 15000 });
await page.waitForSelector('table tbody tr');
const headers = await page.$$eval('th', els => els.map(e => e.textContent.trim()));
for (const col of ['Name', 'Public Key', 'Role']) {
@@ -236,6 +237,7 @@ async function run() {
// Test: Node side panel Details link navigates to full detail page (#778)
await test('Node side panel Details link navigates', async () => {
await page.goto(`${BASE}/#/nodes`, { waitUntil: 'domcontentloaded' });
await page.waitForSelector('[data-loaded="true"]', { timeout: 15000 });
await page.waitForSelector('table tbody tr');
await page.click('table tbody tr');
await page.waitForSelector('.node-detail');
@@ -257,6 +259,7 @@ async function run() {
// Test: Nodes page has WebSocket auto-update listener (#131)
await test('Nodes page has WebSocket auto-update', async () => {
await page.goto(`${BASE}/#/nodes`, { waitUntil: 'domcontentloaded' });
+await page.waitForSelector('[data-loaded="true"]', { timeout: 15000 });
await page.waitForSelector('table tbody tr');
// The live dot in navbar indicates WS connection status
const liveDot = await page.$('#liveDot');
@@ -282,11 +285,12 @@ async function run() {
// Test 3: Map page loads with markers
await test('Map page loads with markers', async () => {
await page.goto(`${BASE}/#/map`, { waitUntil: 'domcontentloaded' });
+await page.waitForSelector('[data-loaded="true"]', { timeout: 15000 });
await page.waitForSelector('.leaflet-container');
await page.waitForSelector('.leaflet-tile-loaded');
// Wait for markers/overlays to render (may not exist with empty DB)
try {
-await page.waitForSelector('.leaflet-marker-icon, .leaflet-interactive, circle, .marker-cluster, .leaflet-marker-pane > *, .leaflet-overlay-pane svg path, .leaflet-overlay-pane svg circle', { timeout: 3000 });
+await page.waitForSelector('.leaflet-marker-icon, .leaflet-interactive, circle, .marker-cluster, .leaflet-marker-pane > *, .leaflet-overlay-pane svg path, .leaflet-overlay-pane svg circle', { timeout: 8000 });
} catch (_) {
// No markers with empty DB — assertion below handles it
}
@@ -362,7 +366,7 @@ async function run() {
await page.waitForSelector('.leaflet-container');
// Wait for markers (may not exist with empty DB)
try {
-await page.waitForSelector('.leaflet-marker-icon, .leaflet-interactive', { timeout: 3000 });
+await page.waitForSelector('.leaflet-marker-icon, .leaflet-interactive', { timeout: 8000 });
} catch (_) {
// No markers with empty DB
}
@@ -394,6 +398,7 @@ async function run() {
await page.goto(`${BASE}/#/packets`, { waitUntil: 'domcontentloaded' });
await page.evaluate(() => localStorage.setItem('meshcore-time-window', '525600'));
await page.reload({ waitUntil: 'load' });
+await page.waitForSelector('[data-loaded="true"]', { timeout: 15000 });
await page.waitForSelector('table tbody tr', { timeout: 15000 });
const rowsBefore = await page.$$('table tbody tr');
assert(rowsBefore.length > 0, 'No packets visible');
@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# freshen-fixture.sh — Shift all timestamps in the fixture DB to be relative to now.
# Preserves the relative ordering between timestamps.
# Usage: bash tools/freshen-fixture.sh <path-to-fixture.db>
set -euo pipefail
DB="${1:?Usage: freshen-fixture.sh <path-to-fixture.db>}"
if [ ! -f "$DB" ]; then
echo "ERROR: DB file not found: $DB" >&2
exit 1
fi
# Find the max timestamp across all time columns, compute offset, shift everything forward.
sqlite3 "$DB" <<'SQL'
-- Shift all timestamps forward so the newest is ~now, preserving relative ordering.
-- Use strftime to produce RFC3339 format (T separator + Z suffix) for correct comparison.
UPDATE nodes SET last_seen = strftime('%Y-%m-%dT%H:%M:%SZ', last_seen,
(SELECT printf('+%d seconds', CAST((julianday('now') - julianday(MAX(last_seen))) * 86400 AS INTEGER)) FROM nodes)
) WHERE last_seen IS NOT NULL;
UPDATE transmissions SET first_seen = strftime('%Y-%m-%dT%H:%M:%SZ', first_seen,
(SELECT printf('+%d seconds', CAST((julianday('now') - julianday(MAX(first_seen))) * 86400 AS INTEGER)) FROM transmissions)
) WHERE first_seen IS NOT NULL;
-- Observers: shift last_seen too so they don't get auto-pruned by RemoveStaleObservers
-- on server startup (default 14d threshold marks all >14d observers inactive=1, which
-- the /api/observers filter then excludes — leaving the map page with no observer markers).
UPDATE observers SET last_seen = strftime('%Y-%m-%dT%H:%M:%SZ', last_seen,
(SELECT printf('+%d seconds', CAST((julianday('now') - julianday(MAX(last_seen))) * 86400 AS INTEGER)) FROM observers)
) WHERE last_seen IS NOT NULL;
SQL
# Defensive: clear any stale inactive=1 flags. Column may not exist on fresh fixtures
# (added by server migration on first startup); silently no-op if missing.
sqlite3 "$DB" "UPDATE observers SET inactive = 0 WHERE inactive = 1;" 2>/dev/null || true
# neighbor_edges may not exist in all fixture versions
sqlite3 "$DB" "UPDATE neighbor_edges SET last_seen = strftime('%Y-%m-%dT%H:%M:%SZ', last_seen, (SELECT printf('+%d seconds', CAST((julianday('now') - julianday(MAX(last_seen))) * 86400 AS INTEGER)) FROM neighbor_edges)) WHERE last_seen IS NOT NULL;" 2>/dev/null || true
echo "Fixture timestamps freshened in $DB"
sqlite3 "$DB" "SELECT 'nodes: min=' || MIN(last_seen) || ' max=' || MAX(last_seen) FROM nodes;"
sqlite3 "$DB" "SELECT 'observers: count=' || COUNT(*) || ' max=' || MAX(last_seen) FROM observers;"
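The julianday offset arithmetic used throughout freshen-fixture.sh can be sanity-checked in isolation. A minimal sketch against a throwaway SQLite DB — the `events` table and `ts` column are illustrative names, not the real fixture schema — verifies that the shift lands the newest timestamp near now while preserving the gap between rows:

```shell
#!/usr/bin/env bash
# Sketch: exercise the same shift pattern as freshen-fixture.sh on a toy DB.
# "events"/"ts" are made-up names; the real fixture tables are nodes,
# transmissions, observers, and neighbor_edges.
set -euo pipefail
DB="$(mktemp)"
sqlite3 "$DB" <<'SQL'
CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT);
INSERT INTO events (ts) VALUES
  ('2020-01-01T00:00:00Z'),
  ('2020-01-03T12:00:00Z'); -- newest row, 2.5 days after the first
-- Same pattern as the script: one offset for the whole table, applied to every row.
UPDATE events SET ts = strftime('%Y-%m-%dT%H:%M:%SZ', ts,
  (SELECT printf('+%d seconds', CAST((julianday('now') - julianday(MAX(ts))) * 86400 AS INTEGER)) FROM events)
) WHERE ts IS NOT NULL;
SQL
# Relative gap preserved: the rows are still exactly 2.5 days (216000 s) apart.
GAP="$(sqlite3 "$DB" "SELECT strftime('%s', MAX(ts)) - strftime('%s', MIN(ts)) FROM events;")"
# The newest row now sits within a few seconds of 'now'.
NEAR="$(sqlite3 "$DB" "SELECT (strftime('%s','now') - strftime('%s', MAX(ts))) BETWEEN 0 AND 5 FROM events;")"
echo "gap=${GAP}s near_now=${NEAR}"
rm -f "$DB"
```

The pattern works because the offset subquery is uncorrelated: SQLite evaluates it once before any row is modified, so every row shifts by the same whole number of seconds and the relative ordering cannot change.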