Commit Graph

1375 Commits

Author SHA1 Message Date
Kpa-clawbot
dfe383cc51 fix: node detail panel Details/Analytics links don't navigate (#779)
Fixes #778

## Problem

The Details and Analytics links in the node side panel don't navigate
when clicked. This is a regression from #739 (desktop node deep
linking).

**Root cause:** When a node is selected, `selectNode()` uses
`history.replaceState()` to set the URL to `#/nodes/{pubkey}`. The
Details link has `href="#/nodes/{pubkey}"` — the same hash. Clicking an
anchor with the same hash as the current URL doesn't fire the
`hashchange` event, so the SPA router never triggers navigation.

## Fix

Added a click handler on the `nodesRight` panel that intercepts clicks
on `.btn-primary` navigation links:

1. `e.preventDefault()` to stop the default anchor behavior
2. If the current hash already matches the target, temporarily clear it
via `replaceState`
3. Set `location.hash` to the target, which fires `hashchange` and
triggers the SPA router

This handles both the Details link (`#/nodes/{pubkey}`) and the
Analytics link (`#/nodes/{pubkey}/analytics`).
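
The interception logic can be sketched as follows. The `nodesRight` panel and `.btn-primary` class are from the PR; the split into a small helper plus browser wiring is illustrative, not the PR's actual code:

```javascript
// Clicking an anchor whose hash equals location.hash does not fire
// `hashchange`, so the current hash must be cleared before re-setting it.
function needsHashReset(currentHash, targetHash) {
  return currentHash === targetHash;
}

// Browser wiring (illustrative sketch):
//   nodesRight.addEventListener('click', (e) => {
//     const link = e.target.closest('.btn-primary');
//     if (!link) return;
//     e.preventDefault();
//     const target = new URL(link.href).hash; // e.g. "#/nodes/abc123"
//     if (needsHashReset(location.hash, target)) {
//       // Temporarily clear the hash without adding a history entry.
//       history.replaceState(null, '', location.pathname + location.search);
//     }
//     location.hash = target; // now fires `hashchange` → SPA router runs
//   });
```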

## Testing

- All frontend helper tests pass (552/552)
- All packet filter tests pass (62/62)
- All aging tests pass (29/29)
- Go server tests pass

---------

Co-authored-by: you <you@example.com>
2026-04-16 23:21:05 -07:00
you
fa348efe2a fix: force-remove staging container before deploy — handles both compose and docker-run containers
The deploy step used only 'docker compose down' which can't remove
containers created via 'docker run'. Now explicitly stops+removes
the named container first, then runs compose down as cleanup.

Permanent fix for the recurring CI deploy failure.
2026-04-17 05:08:32 +00:00
Kpa-clawbot
a9a18ff051 fix: neighbor graph slider persists to localStorage, default 0.7 (#776)
## Summary

The neighbor graph min score slider didn't persist its value to
localStorage, resetting to 0.10 on every page load. This was a poor
default for most use cases.

## Changes

- **Default changed from 0.10 to 0.70** — more useful starting point
that filters out low-confidence edges
- **localStorage persistence** — slider value saved on change, restored
on page load
- **3 new tests** in `test-frontend-helpers.js` verifying default value,
load behavior, and save behavior
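
The persistence pattern can be sketched as below. The storage key name is an assumption; only the 0.7 default is from the PR:

```javascript
// Illustrative sketch — the actual key name in the PR may differ.
const NG_MIN_SCORE_KEY = 'ngMinScore';
const NG_MIN_SCORE_DEFAULT = 0.7;

function loadMinScore(storage) {
  const raw = storage.getItem(NG_MIN_SCORE_KEY);
  const v = raw === null ? NaN : parseFloat(raw);
  // Fall back to the default on missing or corrupt values.
  return Number.isFinite(v) && v >= 0 && v <= 1 ? v : NG_MIN_SCORE_DEFAULT;
}

function saveMinScore(storage, value) {
  storage.setItem(NG_MIN_SCORE_KEY, String(value));
}
```

In the browser, `storage` is `window.localStorage`; passing it in keeps the helpers testable.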

## Testing

- `node test-frontend-helpers.js` — 547 passed, 0 failed
- `node test-packet-filter.js` — 62 passed, 0 failed
- `node test-aging.js` — 29 passed, 0 failed

---------

Co-authored-by: you <you@example.com>
2026-04-16 22:04:51 -07:00
Kpa-clawbot
ceea136e97 feat: observer graph representation (M1+M2) (#774)
## Summary

Fixes #753 — Milestones M1 and M2: Observer nodes in the neighbor graph
are now correctly labeled, colored, and filterable.

### M1: Label + color observers

**Backend** (`cmd/server/neighbor_api.go`):
- `buildNodeInfoMap()` now queries the `observers` table after building
from `nodes`
- Observer-only pubkeys (not already in the map as repeaters etc.) get
`role: "observer"` and their name from the observers table
- Observer-repeaters keep their repeater role (not overwritten)

**Frontend**:
- CSS variable `--role-observer: #8b5cf6` added to `:root`
- `ROLE_COLORS.observer` was already defined in `roles.js`

### M2: Observer filter checkbox (default unchecked)

**Frontend** (`public/analytics.js`):
- Observer checkbox added to the role filter section, **unchecked by
default**
- Observers create hub-and-spoke patterns (one observer can have 100+
edges) that drown out the actual repeater topology — hiding them by
default keeps the graph clean
- Fixed `applyNGFilters()` which previously always showed observers
regardless of checkbox state

### Tests

- Backend: `TestBuildNodeInfoMap_ObserverEnrichment` — verifies
observer-only pubkeys get name+role from observers table, and
observer-repeaters keep their repeater role
- All existing Go tests pass
- All frontend helper tests pass (544/544)

---------

Co-authored-by: you <you@example.com>
2026-04-16 21:35:14 -07:00
you
99dc4f805a fix: E2E neighbor test — use hash evaluation instead of page.goto for reliable SPA navigation
page.goto with hash-only change may not reliably trigger hashchange in
Playwright, causing the mobile full-screen node view to never render.
Use page.evaluate to set location.hash directly, which guarantees the
SPA router fires. Also increase timeout from 10s to 15s for CI margin.
2026-04-17 03:45:16 +00:00
Kpa-clawbot
ba7cd0fba7 fix: clock skew sanity checks — filter epoch-0, cap drift, min samples (#769)
Nodes with dead RTCs show -690d skew and -3 billion s/day drift. Fix:

1. **No Clock severity**: |skew| > 365d → `no_clock`, skip drift
2. **Drift cap**: |drift| > 86400 s/day → nil (physically impossible)
3. **Min samples**: < 5 samples → no drift regression
4. **Frontend**: 'No Clock' badge, '–' for unreliable drift

Fixes the crazy stats on the Clock Health fleet view.

---------

Co-authored-by: you <you@example.com>
2026-04-16 08:10:47 -07:00
Kpa-clawbot
6a648dea11 fix: multi-byte adopters — all node types, role column, advert precedence (#754) (#767)
## Fix: Multi-Byte Adopters Table — Three Bugs (#754)

### Bug 1: Companions in "Unknown"
`computeMultiByteCapability()` was repeater-only. Extended to classify
**all node types** (companions, rooms, sensors). A companion advertising
with 2-byte hash is now correctly "Confirmed".

### Bug 2: No Role Column
Added a **Role** column to the merged Multi-Byte Hash Adopters table,
color-coded using `ROLE_COLORS` from `roles.js`. Users can now
distinguish repeaters from companions without clicking through to node
detail.

### Bug 3: Data Source Disagreement
When adopter data (from `computeAnalyticsHashSizes`) shows `hashSize >=
2` but capability only found path evidence ("Suspected"), the
advert-based adopter data now takes precedence → "Confirmed". The
adopter hash sizes are passed into `computeMultiByteCapability()` as an
additional confirmed evidence source.

### Changes
- `cmd/server/store.go`: Extended capability to all node types, accept
adopter hash sizes, prioritize advert evidence
- `public/analytics.js`: Added Role column with color-coded badges
- `cmd/server/multibyte_capability_test.go`: 3 new tests (companion
confirmed, role populated, adopter precedence)

### Tests
- All 10 multi-byte capability tests pass
- All 544 frontend helper tests pass
- All 62 packet filter tests pass
- All 29 aging tests pass

---------

Co-authored-by: you <you@example.com>
2026-04-16 00:51:38 -07:00
Kpa-clawbot
29157742eb feat: show collision details in Hash Usage Matrix for all hash sizes (#758)
## Summary

Shows which prefixes are colliding in the Hash Usage Matrix, making the
"PREFIX COLLISIONS: N" count actionable.

Fixes #757

## Changes

### Frontend (`public/analytics.js`)
- **Clickable collision count**: When collisions > 0, the stat card is
clickable and scrolls to the collision details section. Shows a `▼`
indicator.
- **3-byte collision table**: The collision risk section and
`renderCollisionsFromServer` now render for all hash sizes including
3-byte (was previously hidden/skipped for 3-byte).
- **Helpful hint**: 3-byte panel now says "See collision details below"
when collisions exist.

### Backend (`cmd/server/collision_details_test.go`)
- Test that collision details include correct prefix and node
name/pubkey pairs
- Test that collision details are empty when no collisions exist

### Frontend Tests (`test-frontend-helpers.js`)
- Test clickable stat card renders `onclick` and `cursor:pointer` when
collisions > 0
- Test non-clickable card when collisions = 0
- Test collision table renders correct node links (`#/nodes/{pubkey}`)
- Test no-collision message renders correctly

## What was already there

The backend already returned full collision details (prefix, nodes with
pubkeys/names/coords, distance classification) in the `hash-collisions`
API. The frontend already had `renderCollisionsFromServer` rendering a
rich table with node links. The gap was:
1. The 3-byte tab hid the collision risk section entirely
2. No visual affordance to navigate from the stat count to the details

## Perf justification

No new computation — collision data was already computed and returned by
the API. The only change is rendering it for 3-byte (same as
1-byte/2-byte). The collision list is already limited by the backend
sort+slice pattern.

---------

Co-authored-by: you <you@example.com>
2026-04-16 00:18:25 -07:00
Kpa-clawbot
ed19a19473 fix: correct field table offsets for transport routes (#766)
## Summary

Fixes #765 — packet detail field table showed wrong byte offsets for
transport routes.

## Problem

`buildFieldTable()` hardcoded `path_length` at byte 1 for ALL packet
types. For `TRANSPORT_FLOOD` (route_type=0) and `TRANSPORT_DIRECT`
(route_type=3), transport codes occupy bytes 1-4, pushing `path_length`
to byte 5.

This caused:
- Wrong offset numbers in the field table for transport packets
- Transport codes displayed AFTER path length (wrong byte order)
- `Advertised Hash Size` row referenced wrong byte

## Fix

- Use dynamic `offset` tracking that accounts for transport codes
- Render transport code rows before path length (matching actual wire
format)
- Store `pathLenOffset` for correct reference in ADVERT payload section
- Reuse already-parsed `pathByte0` for hash size calculation in path
section
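
The offset logic can be sketched as a pure function. Route-type values (0 = TRANSPORT_FLOOD, 3 = TRANSPORT_DIRECT) are from the PR; the helper name and row shape are illustrative:

```javascript
// Compute field-table byte offsets for a given route type.
function fieldOffsets(routeType) {
  let offset = 1; // byte 0 is the header byte
  const rows = [];
  if (routeType === 0 || routeType === 3) {
    // Transport routes carry 4 transport-code bytes before the path length.
    rows.push({ field: 'transport_codes', offset, length: 4 });
    offset += 4;
  }
  rows.push({ field: 'path_length', offset, length: 1 });
  return rows;
}
```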

## Tests

Added 4 regression tests in `test-frontend-helpers.js`:
- FLOOD (route_type=1): path_length at byte 1, no transport codes
- TRANSPORT_FLOOD (route_type=0): transport codes at bytes 1-4,
path_length at byte 5
- TRANSPORT_DIRECT (route_type=3): same offsets as TRANSPORT_FLOOD
- Field table row order matches byte layout for transport routes

All existing tests pass (538 frontend helpers, 62 packet filter, 29
aging).

Co-authored-by: you <you@example.com>
2026-04-16 00:15:56 -07:00
copelaje
d27a7a653e fix case on channel key so Public decode/display works right (#761)
Simple change. Before this change Public wasn't showing up in the
channels display due to the case issue.
2026-04-16 00:14:47 -07:00
Kpa-clawbot
0e286d85fd fix: channel query performance — add channel_hash column, SQL-level filtering (#762) (#763)
## Problem
Channel API endpoints scan entire DB — 2.4s for channel list, 30s for
messages.

## Fix
- Added `channel_hash` column to transmissions (populated on ingest,
backfilled on startup)
- `GetChannels()` rewritten to GROUP BY `channel_hash` (one row per channel
vs scanning every packet)
- `GetChannelMessages()` filters by channel_hash at SQL level with
proper LIMIT/OFFSET
- 60s cache for channel list
- Index: `idx_tx_channel_hash` for fast lookups

Expected: 2.4s → <100ms for list, 30s → <500ms for messages.

Fixes #762

---------

Co-authored-by: you <you@example.com>
2026-04-16 00:09:36 -07:00
Kpa-clawbot
bffcbdaa0b feat: add channel UX — visible button, hint, status feedback (#760)
## Fixes #759

The "Add Channel" input was a bare text field with no visible submit
button and no feedback — users didn't know how to submit or whether it
worked.

### Changes

**`public/channels.js`**
- Replaced bare `<input>` with structured form: label, input + button
row, hint text, status div
- Added `showAddStatus()` helper for visual feedback during/after
channel add
- Status messages: loading → success (with decrypted message count) /
warning (no messages) / error
- Auto-hide status after 5 seconds
- Fallback click handler on the `+` button for browsers that don't fire
form submit

**`public/style.css`**
- `.ch-add-form` — form container
- `.ch-add-label` — bold 13px label
- `.ch-add-row` — flex row for input + button
- `.ch-add-btn` — 32×32 accent-colored submit button
- `.ch-add-hint` — muted helper text
- `.ch-add-status` — feedback with success/warn/error/loading variants

**`test-channel-add-ux.js`** — 20 tests validating HTML structure, CSS
classes, and feedback logic

### Before / After

**Before:** Bare input field, no button, no hint, no feedback  
**After:** Labeled section with visible `+` button, format hint, and
status messages showing decryption results

---------

Co-authored-by: you <you@example.com>
2026-04-15 18:54:35 -07:00
Kpa-clawbot
3bdf72b4cf feat: clock skew UI — node badges, detail sparkline, fleet analytics (#690 M2+M3) (#752)
## Summary

Frontend visualizations for clock skew detection.

Implements #690 M2 and M3. Does NOT close #690 — M4+M5 remain.

### M2: Node badges + detail sparkline
- Severity badges (green/yellow/orange/red) on node list next to each
node
- Node detail: Clock Skew section with current value, severity, drift
rate
- Inline SVG sparkline showing skew history, color-coded by severity
zones

### M3: Fleet analytics view
- 'Clock Health' section on Analytics page
- Sortable table: Name | Skew | Severity | Drift | Last Advert
- Filter buttons by severity (OK/Warning/Critical/Absurd)
- Summary stats: X nodes OK, Y warning, Z critical
- Color-coded rows

### Changes
- `public/nodes.js` — badge rendering + detail section
- `public/analytics.js` — fleet clock health view
- `public/roles.js` — severity color helpers
- `public/style.css` — badge + sparkline + fleet table styles
- `cmd/server/clock_skew.go` — added fleet summary endpoint
- `cmd/server/routes.go` — wired fleet endpoint
- `test-frontend-helpers.js` — 11 new tests

---------

Co-authored-by: you <you@example.com>
2026-04-15 15:25:50 -07:00
Kpa-clawbot
401fd070f8 fix: improve trackedBytes accuracy for memory estimation (#751)
## Problem

Fixes #743 — High memory usage / OOM with relatively small dataset.

`trackedBytes` severely undercounted actual per-packet memory because it
only tracked base struct sizes and string field lengths, missing major
allocations:

| Structure | Untracked Cost | Scale Impact |
|-----------|----------------|--------------|
| `spTxIndex` (O(path²) subpath entries) | 40 bytes × path combos | 50-150MB |
| `ResolvedPath` on observations | 24 bytes × elements | ~25MB |
| Per-tx maps (`obsKeys`, `observerSet`) | 200 bytes/tx flat | ~11MB |
| `byPathHop` index entries | 50 bytes/hop | 20-40MB |

This caused eviction to trigger too late (or not at all), leading to
OOM.

## Fix

Expanded `estimateStoreTxBytes` and `estimateStoreObsBytes` to account
for:

- **Per-tx maps**: +200 bytes flat for `obsKeys` + `observerSet` map
headers
- **Path hop index**: +50 bytes per hop in `byPathHop`
- **Subpath index**: +40 bytes × `hops*(hops-1)/2` combinations for
`spTxIndex`
- **Resolved paths**: +24 bytes per `ResolvedPath` element on
observations
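
The added terms amount to simple arithmetic. The real implementation is Go in `cmd/server/store.go`; this JavaScript rendering is illustrative, with `base` standing in for the pre-existing struct/string estimate:

```javascript
// Illustrative rendering of the expanded estimates (constants from the PR).
function estimateTxBytes(base, hops) {
  let bytes = base;
  bytes += 200;                           // obsKeys + observerSet map headers
  bytes += 50 * hops;                     // byPathHop index entries
  bytes += 40 * (hops * (hops - 1)) / 2;  // spTxIndex subpath combinations
  return bytes;
}

function estimateObsBytes(base, resolvedPathLen) {
  return base + 24 * resolvedPathLen;     // ResolvedPath elements
}
```

The quadratic `spTxIndex` term is why long-path packets dominated the untracked memory.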

Updated the existing `TestEstimateStoreTxBytes` to match new formula.
All existing eviction tests continue to pass — the eviction logic itself
is unchanged.

Also exposed `avgBytesPerPacket` in the perf API (`/api/perf`) so
operators can monitor per-packet memory costs.

## Performance

Benchmark confirms negligible overhead (called on every insert):

```
BenchmarkEstimateStoreTxBytes    159M ops    7.5 ns/op    0 B/op    0 allocs
BenchmarkEstimateStoreObsBytes   1B ops      1.0 ns/op    0 B/op    0 allocs
```

## Tests

- 6 new tests in `tracked_bytes_test.go`:
  - Reasonable value ranges for different packet sizes
  - 10-hop packets estimate significantly more than 2-hop (subpath cost)
  - Observations with `ResolvedPath` estimate more than without
  - 15 observations estimate >10x a single observation
  - `trackedBytes` matches sum of individual estimates after batch insert
  - Eviction triggers correctly with improved estimates
- 2 benchmarks confirming sub-10ns estimate cost
- Updated existing `TestEstimateStoreTxBytes` for new formula
- Full test suite passes

---------

Co-authored-by: you <you@example.com>
2026-04-15 07:53:32 -07:00
Kpa-clawbot
1b315bf6d0 feat: PSK channels, channel removal, message caching (#725 M3+M4+M5) (#750)
## Summary

Implements milestones M3, M4, and M5 from #725 — all client-side, zero
server changes.

### M3: PSK channel support

The channel input field now accepts both `#channelname` (hashtag
derivation) and raw 32-char hex keys (PSK). Auto-detection: if input
starts with `#`, derive key via SHA-256; otherwise validate as hex and
store directly. Same decrypt pipeline — `ChannelDecrypt.decrypt()` takes
key bytes regardless of source.

Input placeholder updated to: `#LongFast or paste hex key`
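
The auto-detection step can be sketched as below. The function name and return shape are illustrative; the `#` prefix and 32-hex-char rules are from the PR:

```javascript
// Classify channel input: "#name" → hashtag derivation, 32 hex chars → PSK.
function classifyChannelInput(input) {
  const s = input.trim();
  if (s.startsWith('#') && s.length > 1) {
    return { kind: 'hashtag', name: s };
  }
  if (/^[0-9a-fA-F]{32}$/.test(s)) {
    // 32 hex chars = 16 key bytes (AES-128), stored directly.
    return { kind: 'psk', keyHex: s.toLowerCase() };
  }
  return { kind: 'invalid' };
}
```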

### M4: Channel removal

User-added channels now show a ✕ button on hover. Click → confirm dialog
→ removes:
- Key from localStorage (`ChannelDecrypt.removeKey()`)
- Cached messages from localStorage
(`ChannelDecrypt.clearChannelCache()`)
- Channel entry from sidebar

If the removed channel was selected, the view resets to the empty state.

### M5: localStorage message caching with delta fetch

After client-side decryption, results are cached in localStorage keyed
by channel name:

```
{ messages: [...], lastTimestamp: "...", count: N, ts: Date.now() }
```

On subsequent visits:
1. **Instant render** — cached messages displayed immediately via
`onCacheHit` callback
2. **Delta fetch** — only packets newer than `lastTimestamp` are fetched
and decrypted
3. **Merge** — new messages merged with cache, deduplicated by
`packetHash`
4. **Cache invalidation** — if total candidate count changes, full
re-decrypt triggered
5. **Size limit** — max 1000 messages cached per channel (most recent
kept)
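
The merge step can be sketched as follows. `deduplicateAndMerge` and `packetHash` are names from the PR; the timestamp-based ordering and cap handling here are illustrative:

```javascript
const CACHE_MAX_MESSAGES = 1000;

// Merge freshly decrypted messages into the cached set, dropping
// duplicates by packetHash and keeping only the most recent entries.
function deduplicateAndMerge(cached, fresh) {
  const seen = new Set(cached.map((m) => m.packetHash)); // O(n) dedup
  const merged = cached.concat(fresh.filter((m) => !seen.has(m.packetHash)));
  merged.sort((a, b) => a.timestamp - b.timestamp);
  return merged.slice(-CACHE_MAX_MESSAGES); // most recent kept
}
```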

### Performance

- Delta fetch avoids re-decrypting the full history on every page load
- Cache-first rendering provides instant UI response
- `deduplicateAndMerge()` uses a hash set for O(n) dedup
- 1000-message cap prevents localStorage quota issues

### Tests (24 new)

- M3: hex key detection (valid/invalid patterns)
- M3: key derivation round-trip, channel hash computation
- M3: PSK key storage and retrieval
- M4: channel removal clears both key and cache
- M5: cache size limit enforcement (1200 → 1000 stored)
- M5: cache stores count and lastTimestamp
- M5: clearChannelCache works independently
- All existing tests pass (523 frontend helpers, 62 packet filter)

### Files changed

| File | Change |
|------|--------|
| `public/channel-decrypt.js` | `removeKey()` now clears cache; `clearChannelCache()`; `setCache()` with count + size limit |
| `public/channels.js` | Extracted `decryptCandidates()`, `deduplicateAndMerge()`; delta fetch logic; remove button handler; cache-first rendering |
| `public/style.css` | `.ch-remove-btn` styles (hover-reveal ✕) |
| `test-channel-decrypt-m345.js` | 24 new tests |

Implements #725

Co-authored-by: you <you@example.com>
2026-04-14 23:23:02 -07:00
Kpa-clawbot
a815e70975 feat: Clock skew detection — backend computation (M1) (#746)
## Summary

Implements **Milestone 1** of #690 — backend clock skew computation for
nodes and observers.

## What's New

### Clock Skew Engine (`clock_skew.go`)

**Phase 1 — Raw Skew Calculation:**
For every ADVERT observation: `raw_skew = advert_timestamp -
observation_timestamp`

**Phase 2 — Observer Calibration:**
Same packet seen by multiple observers → compute each observer's clock
offset as the median deviation from the per-packet median observation
timestamp. This identifies observers with their own clock drift.

**Phase 3 — Corrected Node Skew:**
`corrected_skew = raw_skew + observer_offset` — compensates for observer
clock error.

**Phase 4 — Trend Analysis:**
Linear regression over time-ordered skew samples estimates drift rate in
seconds/day. Detects crystal drift vs stable offset vs sudden jumps.

### Severity Classification

| Level | Threshold | Meaning |
|-------|-----------|---------|
| 🟢 OK | < 5 min | Normal |
| ⚠️ Warning | 5 min – 1 hour | Clock drifting |
| 🔴 Critical | 1 hour – 30 days | Likely no time source |
| 🟣 Absurd | > 30 days | Firmware default or epoch 0 |
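
The thresholds in the table reduce to a small classifier. The real implementation is Go with hardcoded constants per the PR; this JavaScript sketch is illustrative:

```javascript
const MIN = 60, HOUR = 3600, DAY = 86400; // seconds

// Classify |corrected skew| against the thresholds above.
function classifySkew(skewSeconds) {
  const abs = Math.abs(skewSeconds);
  if (abs < 5 * MIN) return 'ok';
  if (abs < HOUR) return 'warning';
  if (abs <= 30 * DAY) return 'critical';
  return 'absurd';
}
```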

### New API Endpoints

- `GET /api/nodes/{pubkey}/clock-skew` — per-node skew data (mean,
median, last, drift, severity)
- `GET /api/observers/clock-skew` — observer calibration offsets
- Clock skew also included in `GET /api/nodes/{pubkey}/analytics`
response as `clockSkew` field

### Performance

- 30-second compute cache avoids reprocessing on every request
- Operates on in-memory `byPayloadType[ADVERT]` index — no DB queries
- O(n) in total ADVERT observations, O(m log m) for median calculations

## Tests

15 unit tests covering:
- Severity classification at all thresholds
- Median/mean math helpers
- ISO timestamp parsing
- Timestamp extraction from decoded JSON (nested and top-level)
- Observer calibration with single and multi-observer scenarios
- Observer offset correction direction (verified the sign is
`+obsOffset`)
- Drift estimation: stable, linear, insufficient data, short time span
- JSON number extraction edge cases

## What's NOT in This PR

- No UI changes (M2–M4)
- No customizer integration (M5)
- Thresholds are hardcoded constants (will be configurable in M5)

Implements #690 M1.

---------

Co-authored-by: you <you@example.com>
2026-04-14 23:22:35 -07:00
Kpa-clawbot
aa84ce1e6a fix: correct hash_size detection for transport routes and zero-hop adverts (#747)
## Summary

Fixes #744
Fixes #722

Three bugs in hash_size computation caused zero-hop adverts to
incorrectly report `hash_size=1`, masking nodes that actually use
multi-byte hashes.

## Bugs Fixed

### 1. Wrong path byte offset for transport routes
(`computeNodeHashSizeInfo`)

Transport routes (types 0 and 3) have 4 transport code bytes before the
path byte. The code read the path byte from offset 1 (byte index
`RawHex[2:4]`) for all route types. For transport routes, the correct
offset is 5 (`RawHex[10:12]`).

### 2. Missing RouteTransportDirect skip (`computeNodeHashSizeInfo`)

Zero-hop adverts from `RouteDirect` (type 2) were correctly skipped, but
`RouteTransportDirect` (type 3) zero-hop adverts were not. Both have
locally-generated path bytes with unreliable hash_size bits.

### 3. Zero-hop adverts not skipped in analytics
(`computeAnalyticsHashSizes`)

`computeAnalyticsHashSizes()` unconditionally overwrote a node's
`hashSize` with whatever the latest advert reported. A zero-hop direct
advert with `hash_size=1` could overwrite a previously-correct
`hash_size=2` from a multi-hop flood advert.

Fix: skip hash_size update for zero-hop direct/transport-direct adverts
while still counting the packet and updating `lastSeen`.

## Tests Added

- `TestHashSizeTransportRoutePathByteOffset` — verifies transport routes
read path byte at offset 5, regular flood reads at offset 1
- `TestHashSizeTransportDirectZeroHopSkipped` — verifies both
RouteDirect and RouteTransportDirect zero-hop adverts are skipped
- `TestAnalyticsHashSizesZeroHopSkip` — verifies analytics hash_size is
not overwritten by zero-hop adverts
- Fixed 3 existing tests (`FlipFlop`, `Dominant`, `LatestWins`) that
used route_type 0 (TransportFlood) header bytes without proper transport
code padding

## Complexity

All changes are O(1) per packet — no new loops or data structures. The
additional offset computation and zero-hop check are constant-time
operations within the existing packet scan loop.

Co-authored-by: you <you@example.com>
2026-04-14 23:04:26 -07:00
Kpa-clawbot
2aea01f10c fix: merge repeater+observer into single map marker (#745)
## Problem

When a node acts as both a repeater and an observer (same public key —
common with powered repeaters running MQTT clients), the map shows two
separate markers at the same location: a repeater rectangle and an
observer star. This causes visual clutter and both markers get pushed
out from the real location by the deconfliction algorithm.

## Solution

Detect combined repeater+observer nodes by matching node `public_key`
against observer `id`. When matched:

- **Label mode (hash labels on):** The repeater label gets a gold ★
appended inside the rectangle
- **Icon mode (hash labels off):** The repeater diamond gets a small
star overlay in the top-right corner of the SVG
- **Popup:** Shows both REPEATER and OBSERVER badges
- **Observer markers:** Skipped when the observer is already represented
as a combined node marker
- **Legend count:** Observer count excludes combined nodes (shows
standalone observers only)

## Performance

- Observer lookup uses a `Map` keyed by lowercase pubkey — O(1) per node
check
- Legend count uses a `Set` of node pubkeys — O(n+m) instead of O(n×m)
- No additional API calls; uses existing `observers` array already
fetched
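
The detection can be sketched as below. Field names (`public_key`, observer `id`) follow the PR description; the function name and return shape are illustrative:

```javascript
// Detect combined repeater+observer nodes by case-insensitive pubkey match.
function findCombinedNodes(nodes, observers) {
  const observerById = new Map(
    observers.map((o) => [o.id.toLowerCase(), o]) // O(1) per-node lookup
  );
  const combined = new Set();
  for (const n of nodes) {
    const key = n.public_key.toLowerCase();
    if (observerById.has(key)) combined.add(key);
  }
  // Legend counts only standalone observers, excluding combined nodes.
  const standalone = observers.filter((o) => !combined.has(o.id.toLowerCase()));
  return { combined, standaloneCount: standalone.length };
}
```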

## Testing

- All 523 frontend helper tests pass
- All 62 packet filter tests pass
- Visual: combined nodes show as single marker with star indicator

Fixes #719

---------

Co-authored-by: you <you@example.com>
2026-04-14 22:47:28 -07:00
efiten
b7c2cb070c docs: geofilter manual + config.example.json entry (#734)
## Summary

- Add missing `geo_filter` block to `config.example.json` with polygon
example, `bufferKm`, and inline `_comment`
- Add `docs/user-guide/geofilter.md`: full operator guide covering
config schema, GeoFilter Builder workflow, and prune script as one-time
migration tool
- Add Geographic filtering section to `docs/user-guide/configuration.md`
with link to the full guide

Closes #669 (M1: documentation)

## Test plan

- [x] `config.example.json` parses cleanly (no JSON errors)
- [x] `docs/user-guide/geofilter.md` renders correctly in GitHub preview
- [x] Link from `configuration.md` to `geofilter.md` resolves

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:43:19 -07:00
efiten
1de80a9eaf feat: serve geofilter builder from app, link from customizer (#735)
## Summary

Part of #669 — M2: Link the builder from the app.

- **`public/geofilter-builder.html`** — the existing
`tools/geofilter-builder.html` is now served by the static file server
at `/geofilter-builder.html`. Additions vs the original: a `← CoreScope`
back-link in the header, inline code comments explaining the output
format, and a help bar below the output panel with paste instructions
and a link to the documentation.
- **`public/customize-v2.js`** — adds a "Tools" section at the bottom of
the Export tab with a `🗺️ GeoFilter Builder →` link and a one-line
description.
- **`docs/user-guide/customization.md`** — documents the new GeoFilter
Builder entry in the Export tab.

> **Note:** `tools/geofilter-builder.html` is kept as-is for
local/offline use. The `public/` copy is what the server serves.

> **Depends on:** #734 (M1 docs) for `docs/user-guide/geofilter.md` —
the link in the help bar references that file. Can be merged
independently; the link still works once M1 lands.

## Test plan

- [x] Open the app, go to Customizer → Export tab — "Tools" section
appears with GeoFilter Builder link
- [x] Click the link — opens `/geofilter-builder.html` in a new tab
- [x] Builder loads the Leaflet map, draw 3+ points — JSON output
appears
- [x] Copy button works, output is valid `{ "geo_filter": { ... } }`
JSON
- [x] `← CoreScope` back-link navigates to `/`
- [x] Help bar shows paste instructions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:42:27 -07:00
efiten
e6ace95059 fix: desktop node click updates URL hash, deep link opens split panel (#676) (#739)
## Problem

Clicking a node on desktop opened the side panel but never updated the
URL hash, making nodes non-shareable/bookmarkable on desktop. Loading
`#/nodes/{pubkey}` directly on desktop also incorrectly showed the
full-screen mobile view.

## Changes

- `selectNode()` on desktop: adds `history.replaceState(null, '',
'#/nodes/' + pubkey)` so the URL updates on every click
- `init()`: full-screen path is now gated to `window.innerWidth <= 640`
(mobile only); desktop with a `routeParam` falls through to the split
panel and calls `selectNode()` to pre-select the node
- Deselect (Escape / close button): also calls `history.replaceState`
back to `#/nodes`

## Test plan

- [x] Desktop: click a node → URL updates to `#/nodes/{pubkey}`, split
panel opens
- [x] Desktop: copy URL, open in new tab → split panel opens with that
node selected (not full-screen)
- [x] Desktop: press Escape → URL reverts to `#/nodes`
- [x] Mobile (≤640px): clicking a node still navigates to full-screen
view

Closes #676

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:38:06 -07:00
efiten
f605d4ce7e fix: serialize filter params in URL hash for deep linking (#682) (#740)
## Problem

Applying packet filters (hash, node, observer, Wireshark expression) did
not update the URL hash, so filtered views could not be shared or
bookmarked.

## Changes

**`buildPacketsQuery()`** — extended to include:
- `hash=` from `filters.hash`
- `node=` from `filters.node`  
- `observer=` from `filters.observer`
- `filter=` from `filters._filterExpr` (Wireshark expression string)

**`updatePacketsUrl()`** — now called on every filter change:
- hash input (debounced)
- observer multi-select change
- node autocomplete select and clear
- Wireshark filter input (on valid expression or clear)

**URL restore on load** — `getHashParams()` now reads `hash`, `node`,
`observer`, `filter` and restores them into `filters` before the DOM is
built. Input fields pick up values from `filters` as before. Wireshark
expression is also recompiled and `filter-active` class applied.
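
The serialization step can be sketched with `URLSearchParams`. The filter field names are from the PR; the helper itself is illustrative:

```javascript
// Serialize the active filter state into hash-fragment query params.
function buildFilterParams(filters) {
  const params = new URLSearchParams();
  if (filters.hash) params.set('hash', filters.hash);
  if (filters.node) params.set('node', filters.node);
  if (filters.observer) params.set('observer', filters.observer);
  if (filters._filterExpr) params.set('filter', filters._filterExpr);
  return params.toString(); // percent-encodes, e.g. "=" → "%3D"
}
```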

## Test plan

- [ ] Type in hash filter → URL updates with `&hash=...`
- [ ] Copy URL, open in new tab → hash filter is pre-filled
- [ ] Select an observer → URL updates with `&observer=...`
- [ ] Select a node filter → URL updates with `&node=...`
- [ ] Type `type=ADVERT` in Wireshark filter → URL updates with
`&filter=type%3DADVERT`
- [ ] Load that URL → filter expression restored and active

Closes #682

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:37:33 -07:00
Kpa-clawbot
84f03f4f41 fix: hide undecryptable channel messages by default (#727) (#728)
## Problem

Channels page shows 53K 'Unknown' messages — undecryptable GRP_TXT
packets with no content. Pure noise.

## Fix

- Backend: channels API filters out undecrypted messages by default
- `?includeEncrypted=true` param to include them
- Frontend: 'Show encrypted' toggle in channels sidebar
- Unknown channels grayed out with '(no key)' label
- Toggle persists in localStorage

Fixes #727

---------

Co-authored-by: you <you@example.com>
2026-04-13 19:40:20 +00:00
Kpa-clawbot
8158631d02 feat: client-side channel decryption — add custom channels in browser (#725 M2) (#733)
## Summary

Pure client-side channel decryption. Users can add custom hashtag
channels or PSK channels directly in the browser. **The server never
sees the keys.**

Implements #725 M2 (revised). Does NOT close #725.

## How it works

1. User types `#channelname` or pastes a hex PSK in the channels sidebar
2. Browser derives key (`SHA256("#name")[:16]`) using Web Crypto API
3. Key stored in **localStorage** — never sent to the server
4. Browser fetches encrypted GRP_TXT packets via existing API
5. Browser decrypts client-side: AES-128-ECB + HMAC-SHA256 MAC
verification
6. Decrypted messages cached in localStorage
7. Progressive rendering — newest messages first, chunk-based

## Security

- Keys never leave the browser
- No new API endpoints
- No server-side changes whatsoever
- Channel interest partially observable via hash-based API requests
(documented, acceptable tradeoff)

## Changes

- `public/channels.js` — client-side decrypt module + UI integration
(+307 lines)
- `public/index.html` — no new script (inline in channels.js IIFE)
- `public/style.css` — add-channel input styling

---------

Co-authored-by: you <you@example.com>
2026-04-13 12:28:41 -07:00
Kpa-clawbot
14367488e2 fix: TRACE path_json uses path_sz from flags byte, not header hash_size (#732)
## Summary

TRACE packets encode their route hash size in the flags byte (`flags &
0x03`), not the header path byte. The decoder was using `path.HashSize`
from the header, which could be wrong or zero for direct-route TRACEs,
producing incorrect hop counts in `path_json`.

## Protocol Note

Per firmware, TRACE packets are **always direct-routed** (route_type 2 =
DIRECT, or 3 = TRANSPORT_DIRECT). FLOOD-routed TRACEs (route_type 1) are
anomalous — firmware explicitly rejects TRACE via flood. The decoder
handles these gracefully without crashing.

## Changes

**`cmd/server/decoder.go` and `cmd/ingestor/decoder.go`:**
- Read `pathSz` from TRACE flags byte: `(traceFlags & 0x03) + 1`
(0→1byte, 1→2byte, 2→3byte)
- Use `pathSz` instead of `path.HashSize` for splitting TRACE payload
path data into hops
- Update `path.HashSize` to reflect the actual TRACE path size
- Added `HopsCompleted` field to ingestor `Path` struct for parity with
server
- Updated comments to clarify TRACE is always direct-routed per firmware
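The flags-byte decoding above can be sketched as follows. The `(traceFlags & 0x03) + 1` mapping is quoted from the change description; the helper names are illustrative, and dropping a trailing partial hash on uneven payloads is an assumption of this sketch, not necessarily the decoder's actual policy.

```go
package main

import "fmt"

// tracePathSize extracts the route hash size from a TRACE packet's flags
// byte: the low two bits encode size minus one, so 0→1, 1→2, 2→3 bytes per hop.
func tracePathSize(traceFlags byte) int {
	return int(traceFlags&0x03) + 1
}

// splitHops slices TRACE payload path data into per-hop hashes of pathSz
// bytes each; a trailing partial hash is dropped rather than treated as a hop.
func splitHops(pathData []byte, pathSz int) [][]byte {
	hops := make([][]byte, 0, len(pathData)/pathSz)
	for i := 0; i+pathSz <= len(pathData); i += pathSz {
		hops = append(hops, pathData[i:i+pathSz])
	}
	return hops
}

func main() {
	fmt.Println(tracePathSize(0x01))                      // 2
	fmt.Println(len(splitHops([]byte{1, 2, 3, 4, 5}, 2))) // 2
}
```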

**`cmd/server/decoder_test.go` — 5 new tests:**
- `TraceFlags1_TwoBytePathSz`: flags=1 → 2-byte hashes via DIRECT route
- `TraceFlags2_ThreeBytePathSz`: flags=2 → 3-byte hashes via DIRECT
route
- `TracePathSzUnevenPayload`: payload not evenly divisible by path_sz
- `TraceTransportDirect`: route_type=3 with transport codes + TRACE path
parsing
- `TraceFloodRouteGraceful`: anomalous FLOOD+TRACE handled without crash

All existing TRACE tests (flags=0, 1-byte hashes) continue to pass.

Fixes #731

---------

Co-authored-by: you <you@example.com>
2026-04-13 08:20:09 -07:00
Kpa-clawbot
71be54f085 feat: DB-backed channel messages for full history (#725 M1) (#726)
## Summary

Switches channel API endpoints to query SQLite instead of the in-memory
packet store, giving users access to the full message history.

Implements #725 (M1 only — DB-backed channel messages). Does NOT close
#725 — M2-M5 (custom channels, PSK, persistence, retroactive decryption)
remain.

## Problem

Channel endpoints (`/api/channels`, `/api/channels/{hash}/messages`)
preferred the in-memory packet store when available. The store is
bounded by `packetStore.maxMemoryMB` — typically showing only recent
messages. The SQLite database has the complete history (weeks/months of
channel messages) but was only used as a fallback when the store was nil
(never in production).

## Fix

Reversed the preference order: DB first, in-memory store fallback.
Region filtering added to the DB path.

Co-authored-by: you <you@example.com>
2026-04-12 23:22:52 -07:00
Kpa-clawbot
c233c14156 feat: CLI tool to decrypt and export hashtag channel messages (#724)
## Summary

Adds `corescope-decrypt` — a standalone CLI tool that decrypts and
exports MeshCore hashtag channel messages from a CoreScope SQLite
database.

### What it does

MeshCore hashtag channels use symmetric encryption with keys derived
from the channel name. The CoreScope ingestor stores **all** GRP_TXT
packets, even those it can't decrypt. This tool enables retroactive
decryption — decrypt historical messages for any channel whose name you
learn after the fact.

### Architecture

- **`internal/channel/`** — Shared crypto package extracted from
ingestor logic:
  - `DeriveKey()` — `SHA-256("#name")[:16]`
  - `ChannelHash()` — 1-byte packet filter (`SHA-256(key)[0]`)
  - `Decrypt()` — HMAC-SHA256 MAC verify + AES-128-ECB
  - `ParsePlaintext()` — timestamp + flags + "sender: message" parsing

- **`cmd/decrypt/`** — CLI binary with three output formats:
  - `--format json` — Full metadata (observers, path, raw hex)
  - `--format html` — Self-contained interactive viewer with search/sort
  - `--format irc` (or `log`) — Plain-text IRC-style log, greppable

### Usage

```bash
# JSON export
corescope-decrypt --channel "#wardriving" --db meshcore.db

# Interactive HTML viewer
corescope-decrypt --channel wardriving --db meshcore.db --format html --output wardriving.html

# Greppable log
corescope-decrypt --channel "#wardriving" --db meshcore.db --format irc | grep "KE6QR"

# From Docker
docker exec corescope-prod /app/corescope-decrypt --channel "#wardriving" --db /app/data/meshcore.db
```

### Build & deployment

- Statically linked (`CGO_ENABLED=0`) — zero dependencies
- Added to Dockerfile (available at `/app/corescope-decrypt` in
container)
- CI: builds and tests in go-test job
- CI: attaches linux/amd64 and linux/arm64 binaries to GitHub Releases
on tags

### Testing

- `internal/channel/` — 9 tests: key derivation, encrypt/decrypt
round-trip, MAC rejection, wrong-channel rejection, plaintext parsing
- `cmd/decrypt/` — 7 tests: payload extraction, channel hash
consistency, all 3 output formats, JSON parseability, fixture DB
integration
- Verified against real fixture DB: successfully decrypts 17
`#wardriving` messages

### Limitations

- Hashtag channels only (name-derived keys). Custom PSK channels not
supported.
- No DM decryption (asymmetric, per-peer keys).
- Read-only database access.

Fixes #723

---------

Co-authored-by: you <you@example.com>
v3.5.2
2026-04-12 22:07:41 -07:00
Kpa-clawbot
65482ff6f6 fix: cache invalidation tuning — 7% → 50-80% hit rate (#721)
## Cache Invalidation Tuning — 7% → 50-80% Hit Rate

Fixes #720

### Problem

Server-side cache hit rate was 7% (48 hits / 631 misses over 4.7 days).
Root causes from the [cache audit
report](https://github.com/Kpa-clawbot/CoreScope/issues/720):

1. **`invalidationDebounce` config value (30s) was dead code** — never
wired to `invCooldown`
2. **`invCooldown` hardcoded to 10s** — with continuous ingest, caches
cleared every 10s regardless of their 1800s TTLs
3. **`collisionCache` cleared on every `hasNewTransmissions`** — hash
collisions are structural (depend on node count), not per-packet

### Changes

| Change | File | Impact |
|--------|------|--------|
| Wire `invalidationDebounce` from config → `invCooldown` | `store.go` |
Config actually works now |
| Default `invCooldown` 10s → 300s (5 min) | `store.go` | 30x longer
cache survival |
| Add `hasNewNodes` flag to `cacheInvalidation` | `store.go` |
Finer-grained invalidation |
| `collisionCache` only clears on `hasNewNodes` | `store.go` | O(n²)
collision computation survives its 1hr TTL |
| `addToByNode` returns new-node indicator | `store.go` | Zero-cost
detection during indexing |
| `indexByNode` returns new-node indicator | `store.go` | Propagates to
ingest path |
| Ingest tracks and passes `hasNewNodes` | `store.go` | End-to-end
wiring |

### Tests Added

| Test | What it verifies |
|------|-----------------|
| `TestInvCooldownFromConfig` | Config value wired to `invCooldown`;
default is 300s |
| `TestCollisionCacheNotClearedByTransmissions` | `hasNewTransmissions`
alone does NOT clear `collisionCache` |
| `TestCollisionCacheClearedByNewNodes` | `hasNewNodes` DOES clear
`collisionCache` |
| `TestCacheSurvivesMultipleIngestCyclesWithinCooldown` | 5 rapid ingest
cycles don't clear any caches during cooldown |
| `TestNewNodesAccumulatedDuringCooldown` | `hasNewNodes` accumulated in
`pendingInv` and applied after cooldown |
| `BenchmarkAnalyticsLatencyCacheHitVsMiss` | 100% hit rate with
rate-limited invalidation |

All 200+ existing tests pass. Both benchmarks show 100% hit rate.

### Performance Justification

- **Before:** Effective cache lifetime = `min(TTL, invCooldown)` = 10s.
With analytics viewed ~once/few minutes, P(hit) ≈ 7%
- **After:** Effective cache lifetime = `min(TTL, 300s)` = 300s for most
caches, 3600s for `collisionCache`. Expected hit rate 50-80%
- **Complexity:** All changes are O(1) — `addToByNode` already checked
`nodeHashes[pubkey] == nil`, we just return the result
- **Benchmark proof:** `BenchmarkAnalyticsLatencyCacheHitVsMiss` → 100%
hit rate, 269ns/op

Co-authored-by: you <you@example.com>
2026-04-12 18:09:23 -07:00
Kpa-clawbot
7af91f7ef6 fix: perf page shows tracked memory instead of heap allocation (#718)
## Summary

The perf page "Memory Used" tile displayed `estimatedMB` (Go
`runtime.HeapAlloc`), which includes all Go runtime allocations — not
just packet store data. This made the displayed value misleading: it
showed ~2.4GB heap when only ~833MB was actual tracked packet data.

## Changes

### Frontend (`public/perf.js`)
- Primary tile now shows `trackedMB` as **"Tracked Memory"** — the
self-accounted packet store memory
- Added separate **"Heap (debug)"** tile showing `estimatedMB` for
runtime visibility

### Backend
- **`types.go`**: Added `TrackedMB` field to `HealthPacketStoreStats`
struct
- **`routes.go`**: Populate `TrackedMB` in `/health` endpoint response
from `GetPerfStoreStatsTyped()`
- **`routes_test.go`**: Assert `trackedMB` exists in health endpoint's
`packetStore`
- **`testdata/golden/shapes.json`**: Updated shape fixture with new
field

### What was already correct
- `/api/perf/stats` already exposed both `estimatedMB` and `trackedMB`
- `trackedMemoryMB()` method already existed in store.go
- Eviction logic already used `trackedBytes` (not HeapAlloc)

## Testing
- All Go tests pass (`go test ./... -count=1`)
- No frontend logic changes beyond template string field swap

Fixes #717

Co-authored-by: you <you@example.com>
2026-04-12 12:40:17 -07:00
Kpa-clawbot
f95aa49804 fix: exclude TRACE packets from multi-byte capability suspected detection (#715)
## Summary

Exclude TRACE packets (payload_type 8) from the "suspected" multi-byte
capability inference logic. TRACE packets carry hash size in their own
flags — forwarding repeaters read it from the TRACE header, not their
compile-time `PATH_HASH_SIZE`. Pre-1.14 repeaters can forward multi-byte
TRACEs without actually supporting multi-byte hashes, creating false
positives.

Fixes #714

## Changes

### `cmd/server/store.go`
- In `computeMultiByteCapability()`, skip packets with `payload_type ==
8` (TRACE) when scanning `byPathHop` for suspected multi-byte nodes
- "Confirmed" detection (from adverts) is unaffected

### `cmd/server/multibyte_capability_test.go`
- `TestMultiByteCapability_TraceExcluded`: TRACE packet with 2-byte path
does NOT mark repeater as suspected
- `TestMultiByteCapability_NonTraceStillSuspected`: Non-TRACE packet
with 2-byte path still marks as suspected
- `TestMultiByteCapability_ConfirmedUnaffectedByTraceExclusion`:
Confirmed status from advert unaffected by TRACE exclusion

## Testing

All 7 multi-byte capability tests pass. Full `cmd/server` and
`cmd/ingestor` test suites pass.

Co-authored-by: you <you@example.com>
2026-04-12 00:11:20 -07:00
Kpa-clawbot
45623672d9 fix: integrate multi-byte capability into adopters table, fix filter buttons (#712) (#713)
## Summary

Fixes #712 — Multi-byte capability filter buttons broken + needs
integration with Hash Adopters.

### Changes

**M1: Fix filter buttons breaking after first click**
- Root cause: `section.replaceWith(newSection)` replaced the entire DOM
node, but the event listener was attached to the old node. After
replacement, clicks went unhandled.
- Fix: Instead of replacing the whole section, only swap the table
content inside a stable `#mbAdoptersTableWrap` div. The event listener
on `#mbAdoptersSection` persists across filter changes.
- Button active state is now toggled via `classList.toggle` instead of
full DOM rebuild.

**M2: Better button labels**
- Changed from icon-only counts (`76`) to descriptive labels: `Confirmed (76)`, `⚠️ Suspected (81)`, `Unknown (223)`

**M3: Integrate with Multi-Byte Hash Adopters**
- Merged capability status into the existing adopters table as a new
"Status" column
- Removed the separate "Repeater Multi-Byte Capability" section
- Filter buttons now apply to the integrated table
- Nodes without capability data default to Unknown
- Capability data is looked up by pubkey from the existing
`multiByteCapability` API response (no backend changes needed)

### Performance

- No new API calls — capability data already exists in the hash sizes
response
- Filter toggle is O(n) where n = number of adopter nodes (typically
<500)
- Event delegation on stable parent — no listener re-attachment needed

### Tests

- Updated existing `renderMultiByteCapability` tests for new label
format
- Added 5 new tests for `renderMultiByteAdopters`: empty state, status
integration, text labels with counts, unknown default, Status column
presence
- All 507 frontend tests pass, all Go tests pass

Co-authored-by: you <you@example.com>
2026-04-11 23:07:44 -07:00
Kpa-clawbot
4a7e20a8cb fix: redesign memory eviction — self-accounting trackedBytes, watermarks, safety cap (#711)
## Problem

`HeapAlloc`-based eviction cascades on large databases — evicts down to
near-zero packets because Go runtime overhead exceeds `maxMemoryMB` even
with an empty packet store.

## Fix (per Carmack spec on #710)

1. **Self-accounting `trackedBytes`** — running counter maintained on
insert/evict, computed from actual struct sizes. No
`runtime.ReadMemStats`.
2. **High/low watermark hysteresis** (100%/85%) — evict to 85% of
budget, don't re-trigger until 100% crossed again.
3. **25% per-pass safety cap** — never evict more than a quarter of
packets in one cycle.
4. **Oldest-first** — evict from sorted head, O(1) candidate selection.

`maxMemoryMB` now means packet store budget, not total process heap.
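The four points above compose into a simple eviction plan: do nothing below the high watermark, otherwise walk the oldest-first list until the low watermark (85% of budget) is reached or the 25% per-pass cap is hit. The percentages come from the spec above; the function shape and names are illustrative.

```go
package main

import "fmt"

// evictPlan returns how many oldest-first packets to drop, given the
// self-accounted byte counter and per-packet sizes ordered oldest first.
// High watermark = budget; low watermark = 85% of budget; at most 25% of
// packets evicted in one pass.
func evictPlan(sizes []int, trackedBytes, budget int) int {
	if trackedBytes <= budget { // high watermark not crossed: no eviction
		return 0
	}
	low := budget * 85 / 100
	maxEvict := len(sizes) / 4 // 25% per-pass safety cap
	n := 0
	for _, sz := range sizes {
		if trackedBytes <= low || n >= maxEvict {
			break
		}
		trackedBytes -= sz
		n++
	}
	return n
}

func main() {
	sizes := []int{100, 100, 100, 100, 100, 100, 100, 100} // oldest first
	fmt.Println(evictPlan(sizes, 800, 1000))  // 0
	fmt.Println(evictPlan(sizes, 1100, 1000)) // 2
}
```

Because the counter is maintained on insert/evict, no `runtime.ReadMemStats` call is needed on this path.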

Fixes #710

Co-authored-by: you <you@example.com>
2026-04-11 23:06:48 -07:00
Kpa-clawbot
7e0b904d09 fix: refresh live feed relative timestamps every 10s (#709)
## Summary

Fixes #701 — Live feed timestamps showed stale relative times (e.g. "2s
ago" never updated to "5m ago").

## Root Cause

`formatLiveTimestampHtml()` was called once when each feed item was
created and never refreshed. The dedup path (when a duplicate hash moves
an item to the top) also didn't update the timestamp.

## Changes

### `public/live.js`
- **`data-ts` attribute on `.feed-time` spans**: All three feed item
creation paths (VCR replay, `addFeedItemDOM`, `addFeedItem`) now store
the packet timestamp as `data-ts` on the `.feed-time` span element
- **10-second refresh interval**: A `setInterval` queries all
`.feed-time[data-ts]` elements and re-renders their content via
`formatLiveTimestampHtml()`, keeping relative times accurate
- **Dedup path timestamp update**: When a duplicate hash observation
moves an existing feed item to the top, the `.feed-time` span is updated
with the new observation's timestamp
- **Cleanup**: The interval is cleared on page teardown alongside other
intervals

### `test-live.js`
- 3 new tests: formatting idempotency, numeric timestamp acceptance,
`data-ts` round-trip correctness

## Performance

- The refresh interval runs every 10s, iterating over at most 25
`.feed-time` DOM elements (feed is capped at 25 items via `while
(feed.children.length > 25)`). Negligible overhead.
- Uses `querySelectorAll` with attribute selector — O(n) where n ≤ 25.

## Testing

- All 3 new tests pass
- All pre-existing test suites pass (70 live.js tests, 62 packet-filter,
501 frontend-helpers)
- 8 pre-existing failures in `test-live.js` are unrelated
(`getParsedDecoded` missing from sandbox)

Co-authored-by: you <you@example.com>
2026-04-11 21:30:38 -07:00
Kpa-clawbot
e893a1b3c4 fix: index relay hops in byNode for liveness tracking (#708)
## Problem

Nodes that only appear as relay hops in packet paths (via
`resolved_path`) were never indexed in `byNode`, so `last_heard` was
never computed for them. This made relay-only nodes show as dead/stale
even when actively forwarding traffic.

Fixes #660

## Root Cause

`indexByNode()` only indexed pubkeys from decoded JSON fields (`pubKey`,
`destPubKey`, `srcPubKey`). Relay nodes appearing in `resolved_path`
were ignored entirely.

## Fix

`indexByNode()` now also iterates:
1. `ResolvedPath` entries from each observation
2. `tx.ResolvedPath` (best observation's resolved path, used for
DB-loaded packets)

A per-call `indexed` set prevents double-indexing when the same pubkey
appears in both decoded JSON and resolved path.

Extracted `addToByNode()` helper to deduplicate the nodeHashes/byNode
append logic.
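The indexing fix can be sketched as one pass over both pubkey sources with a per-call dedup set. The decoded field names (`pubKey`, `destPubKey`, `srcPubKey`) are quoted from the description above; the flattened function signature is an illustrative stand-in for the store's real structures.

```go
package main

import "fmt"

// indexPubkeys collects every pubkey a packet mentions: decoded-JSON
// fields plus relay hops from the resolved path, deduplicated so a node
// appearing in both sources is indexed exactly once.
func indexPubkeys(decoded map[string]string, resolvedPath []string) []string {
	seen := map[string]bool{}
	var out []string
	add := func(pk string) {
		if pk != "" && !seen[pk] { // skip null entries and duplicates
			seen[pk] = true
			out = append(out, pk)
		}
	}
	for _, field := range []string{"pubKey", "destPubKey", "srcPubKey"} {
		add(decoded[field])
	}
	for _, hop := range resolvedPath { // relay-only nodes get indexed too
		add(hop)
	}
	return out
}

func main() {
	got := indexPubkeys(
		map[string]string{"pubKey": "aa11"},
		[]string{"bb22", "aa11", ""}, // relay hop, duplicate, null entry
	)
	fmt.Println(got) // [aa11 bb22]
}
```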

## Scope

**Phase 1 only** — server-side in-memory indexing. No DB changes, no
ingestor changes. This makes `last_heard` reflect relay activity with
zero risk to persistence.

## Tests

5 new test cases in `TestIndexByNodeResolvedPath`:
- Resolved path pubkeys from observations get indexed
- Null entries in resolved path are skipped
- Relay-only nodes (no decoded JSON match) appear in `byNode`
- Dedup between decoded JSON and resolved path
- `tx.ResolvedPath` indexed when observations are empty

All existing tests pass unchanged.

## Complexity

O(observations × path_length) per packet — typically 1-3 observations ×
1-3 hops. No hot-path regression.

---------

Co-authored-by: you <you@example.com>
2026-04-11 21:25:42 -07:00
Kpa-clawbot
fcba2a9f3d fix: set PRAGMA busy_timeout on all RW SQLite connections (#707)
## Problem

`SQLITE_BUSY` contention between the ingestor and server's async
persistence goroutine drops `resolved_path` and `neighbor_edges`
updates. The DSN parameter `_busy_timeout=10000` may not be honored by
the modernc/sqlite driver.

## Fix

- **`openRW()` now sets `PRAGMA busy_timeout = 5000`** after opening the
connection, guaranteeing SQLite retries for up to 5 seconds before
returning `SQLITE_BUSY`
- **Refactored `PruneOldPackets` and `PruneOldMetrics`** to use
`openRW()` instead of duplicating connection setup — all RW connections
now get consistent busy_timeout handling
- Added test verifying the pragma is set correctly
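A minimal sketch of the `openRW()` step: execute the pragma on the freshly opened connection so the setting is guaranteed regardless of whether the driver honors DSN parameters. Driver registration is omitted; the helper names are illustrative.

```go
package main

import (
	"database/sql"
	"fmt"
)

// busyTimeoutPragma builds the statement executed right after opening a
// read-write connection.
func busyTimeoutPragma(ms int) string {
	return fmt.Sprintf("PRAGMA busy_timeout = %d", ms)
}

// applyBusyTimeout runs the pragma on an open connection, making SQLite
// retry for up to ms milliseconds before returning SQLITE_BUSY.
func applyBusyTimeout(db *sql.DB, ms int) error {
	_, err := db.Exec(busyTimeoutPragma(ms))
	return err
}

func main() {
	fmt.Println(busyTimeoutPragma(5000)) // PRAGMA busy_timeout = 5000
}
```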

## Changes

| File | Change |
|------|--------|
| `cmd/server/neighbor_persist.go` | `openRW()` sets `PRAGMA
busy_timeout = 5000` after open |
| `cmd/server/db.go` | `PruneOldPackets` and `PruneOldMetrics` use
`openRW()` instead of inline `sql.Open` |
| `cmd/server/neighbor_persist_test.go` | `TestOpenRW_BusyTimeout`
verifies pragma is set |

## Performance

No performance impact — `PRAGMA busy_timeout` is a connection-level
setting with zero overhead on uncontended writes. Under contention, it
converts immediate `SQLITE_BUSY` failures into brief retries (up to 5s),
which is strictly better than dropping data.

Fixes #705

---------

Co-authored-by: you <you@example.com>
2026-04-11 21:25:23 -07:00
you
c6a0f91b07 fix: add internal/sigvalidate to Dockerfile for both server and ingestor builds
PR #686 added internal/sigvalidate/ with replace directives in both
go.mod files but didn't update the Dockerfile to COPY it into the
Docker build context. go mod download fails with 'no such file'.
2026-04-12 04:14:56 +00:00
Kpa-clawbot
ef8bce5002 feat: repeater multi-byte capability inference table (#706)
## Summary

Adds a new "Repeater Multi-Byte Capability" section to the Hash Stats
analytics tab that classifies each repeater's ability to handle
multi-byte hash prefixes (firmware >= v1.14).

Fixes #689

## What Changed

### Backend (`cmd/server/store.go`)
- New `computeMultiByteCapability()` method that infers capability for
each repeater using two evidence sources:
- **Confirmed** (100% reliable): node has advertised with `hash_size >=
2`, leveraging existing `computeNodeHashSizeInfo()` data
- **Suspected** (<100%): node's prefix appears as a hop in packets with
multi-byte path headers, using the `byPathHop` index. Prefix collisions
mean this isn't definitive.
- **Unknown**: no multi-byte evidence — could be pre-1.14 or 1.14+ with
default settings
- Extended `/api/analytics/hash-sizes` response with
`multiByteCapability` array

### Frontend (`public/analytics.js`)
- New `renderMultiByteCapability()` function on the Hash Stats tab
- Color-coded table: green confirmed, yellow suspected, gray unknown
- Filter buttons to show all/confirmed/suspected/unknown
- Column sorting by name, role, status, evidence, max hash size, last
seen
- Clickable rows link to node detail pages

### Tests (`cmd/server/multibyte_capability_test.go`)
- `TestMultiByteCapability_Confirmed`: advert with hash_size=2 →
confirmed
- `TestMultiByteCapability_Suspected`: path appearance only → suspected
- `TestMultiByteCapability_Unknown`: 1-byte advert only → unknown
- `TestMultiByteCapability_PrefixCollision`: two nodes sharing prefix,
one confirmed via advert, other correctly marked suspected (not
confirmed)

## Performance

- `computeMultiByteCapability()` runs once per cache cycle (15s TTL via
hash-sizes cache)
- Leverages existing `GetNodeHashSizeInfo()` cache (also 15s TTL) — no
redundant advert scanning
- Path hop scan is O(repeaters × prefix lengths) lookups in the
`byPathHop` map, with early break on first match per prefix
- Only computed for global (non-regional) requests to avoid unnecessary
work

---------

Co-authored-by: you <you@example.com>
2026-04-11 21:02:54 -07:00
copelaje
922ebe54e7 BYOP Advert signature validation (#686)
For BYOP mode in the packet analyzer, perform signature validation on
advert packets and display whether successful or not. This is added as
we observed many corrupted advert packets that would be easily
detectable as such if signature validation checks were performed.

At present this MR is just to add this status in BYOP mode so there is
minimal impact to the application and no performance penalty for having
to perform these checks on all packets. Moving forward it probably makes
sense to do these checks on all advert packets so that corrupt packets
can be ignored in several contexts (like node lists for example).

Let me know what you think and I can adjust as needed.

---------

Co-authored-by: you <you@example.com>
2026-04-12 04:02:17 +00:00
Kpa-clawbot
26c47df814 fix: entrypoint .env support + deployment docs for bare docker run (#704)
## Summary

Fixes #702 — `.env` file `DISABLE_MOSQUITTO`/`DISABLE_CADDY` ignored
when using `docker run`.

## Changes

### Entrypoint sources `/app/data/.env`
The entrypoint now sources `/app/data/.env` (if present) before the
`DISABLE_*` checks. This works regardless of how the container is
started — `docker run`, compose, or `manage.sh`.

```bash
if [ -f /app/data/.env ]; then
  set -a
  . /app/data/.env
  set +a
fi
```

### `DISABLE_CADDY` added to compose files
Both `docker-compose.yml` and `docker-compose.staging.yml` now forward
`DISABLE_CADDY` to the container environment (was missing — only
`DISABLE_MOSQUITTO` was wired).

### Deployment docs updated
- `docs/deployment.md`: bare `docker run` is now the primary/recommended
approach with a full parameter reference table
- Documents the `/app/data/.env` convenience feature
- Compose and `manage.sh` marked as legacy alternatives
- `DISABLE_CADDY` added to the environment variable reference

### README quick start updated
Shows the full `docker run` command with `--restart`, ports, and
volumes. Includes HTTPS variant. Documents `-e` flags and `.env` file.

### v3.5.0 release notes
Updated the env var documentation to mention the `.env` file support.

## Testing
- All Go server tests pass
- All Go ingestor tests pass
- No logic changes to Go code — entrypoint shell script + docs only

---------

Co-authored-by: you <you@example.com>
2026-04-11 20:43:16 -07:00
Kpa-clawbot
bc22dbdb14 feat: DragManager — core drag mechanics (#608 M1) (#697)
## Summary

Implements M1 of the draggable panels spec from #608: the `DragManager`
class with core drag mechanics.

Fixes #608 (M1: DragManager core drag mechanics)

## What's New

### `public/drag-manager.js` (~215 lines)
- **State machine:** `IDLE → PENDING → DRAGGING → IDLE`
- **5px dead zone** on `.panel-header` to disambiguate click vs drag —
prevents hijacking corner toggle and close button clicks
- **Pointer events** with `setPointerCapture` for reliable tracking
- **`transform: translate()`** during drag — zero layout reflow
- **Snap-to-edge** on release: 20px threshold snaps to 12px margin
- **Z-index management** — dragged panel comes to front (counter from
1000)
- **`_detachFromCorner()`** — transitions panel from M0 corner CSS to
fixed positioning
- **Escape key** cancels drag and reverts to pre-drag position
- **`restorePositions()`** — applies saved viewport percentages on init
- **`handleResize()`** — clamps dragged panels inside viewport on window
resize
- **`enable()`/`disable()`** — responsive gate control

### `public/live.js` integration
- Instantiates `DragManager` after `initPanelPositions()`
- Registers `liveFeed`, `liveLegend`, `liveNodeDetail` panels
- **Responsive gate:** `matchMedia('(pointer: fine) and (min-width:
768px)')` — disables drag on touch/small screens, reverts to M0 corner
toggle
- **Resize clamping** debounced at 200ms

### `public/live.css` additions
- `cursor: grab/grabbing` on `.panel-header` (desktop only via `@media
(pointer: fine)`)
- `.is-dragging` class: opacity 0.92, elevated box-shadow, `will-change:
transform`, transitions disabled
- `[data-dragged="true"]` disables corner transition animations
- `prefers-reduced-motion` support

### Persistence
- **Format:** `panel-drag-{id}` → `{ xPct, yPct }` (viewport
percentages)
- **Survives resize:** positions recalculated from percentages
- **Corner toggle still works:** clicking corner button after drag
clears drag state (handled by existing M0 code)

## Tests

14 new unit tests in `test-drag-manager.js`:
- State machine transitions (IDLE → PENDING → DRAGGING → IDLE)
- Dead zone enforcement
- Button click guard (no drag on button pointerdown)
- Snap-to-edge behavior
- Position persistence as viewport percentages
- Restore from localStorage
- Resize clamping
- Disable/enable

## Performance

- `transform: translate()` during drag — compositor-only, no layout
reflow
- `will-change: transform` only during active drag (`.is-dragging`),
removed on drop
- `localStorage` write only on `pointerup`, never during `pointermove`
- Resize handler debounced at 200ms
- Single `style.transform` assignment per pointermove frame — negligible
cost

---------

Co-authored-by: you <you@example.com>
2026-04-11 20:41:35 -07:00
Kpa-clawbot
9917d50622 fix: resolve neighbor graph duplicate entries from different prefix lengths (#699)
## Problem

The neighbor graph creates separate entries for the same physical node
when observed with different prefix lengths. For example, a 1-byte
prefix `B0` (ambiguous, unresolved) and a 2-byte prefix `B05B` (resolved
to Busbee) appear as two separate neighbors of the same node.

Fixes #698

## Solution

### Part 1: Post-build resolution pass (Phase 1.5)

New function `resolveAmbiguousEdges(pm, graph)` in `neighbor_graph.go`:
- Called after `BuildFromStore()` completes the full graph, before any
API use
- Iterates all ambiguous edges and attempts resolution via
`resolveWithContext` with full graph context
- Only accepts high-confidence resolutions (`neighbor_affinity`,
`geo_proximity`, `unique_prefix`) — rejects
`first_match`/`gps_preference` fallbacks to avoid false positives
- Merges with existing resolved edges (count accumulation, max LastSeen)
or updates in-place
- Phase 1 edge collection loop is **unchanged**

### Part 2: API-layer dedup (defense-in-depth)

New function `dedupPrefixEntries()` in `neighbor_api.go`:
- Scans neighbor response for unresolved prefix entries matching
resolved pubkey entries
- Merges counts, timestamps, and observers; removes the unresolved entry
- O(n²) on ~50 neighbors per node — negligible cost

### Performance

Phase 1.5 runs O(ambiguous_edges × candidates). Per Carmack's analysis:
~50ms at 2K nodes on the 5-min rebuild cycle. Hot ingest path untouched.

## Tests

9 new tests in `neighbor_dedup_test.go`:

1. **Geo proximity resolution** — ambiguous edge resolved when candidate
has GPS near context node
2. **Merge with existing** — ambiguous edge merged into existing
resolved edge (count accumulation)
3. **No-match preservation** — ambiguous edge left as-is when prefix has
no candidates
4. **API dedup** — unresolved prefix merged with resolved pubkey in
response
5. **Integration** — node with both 1-byte and 2-byte prefix
observations shows single neighbor entry
6. **Phase 1 regression** — non-ambiguous edge collection unchanged
7. **LastSeen preservation** — merge keeps higher LastSeen timestamp
8. **No-match dedup** — API dedup doesn't merge non-matching prefixes
9. **Benchmark** — Phase 1.5 with 500+ edges

All existing tests pass (server + ingestor).

---------

Co-authored-by: you <you@example.com>
2026-04-10 11:19:54 -07:00
Kpa-clawbot
2e1a4a2e0d fix: handle companion nodes without adverts in My Mesh health cards (#696)
## Summary

Fixes #665 — companion nodes claimed in "My Mesh" showed "Could not load
data" because they never sent an advert, so they had no `nodes` table
entry, causing the health API to return 404.

## Three-Layer Fix

### 1. API Resilience (`cmd/server/store.go`)
`GetNodeHealth()` now falls back to building a partial response from the
in-memory packet store when `GetNodeByPubkey()` returns nil. Returns a
synthetic node stub (`role: "unknown"`, `name: "Unknown"`) with whatever
stats exist from packets, instead of returning nil → 404.

### 2. Ingestor Cleanup (`cmd/ingestor/main.go`)
Removed phantom sender node creation that used `"sender-" + name` as the
pubkey. Channel messages don't carry the sender's real pubkey, so these
synthetic entries were unreachable from the claiming/health flow — they
just polluted the nodes table with unmatchable keys.

### 3. Frontend UX (`public/home.js`)
The catch block in `loadMyNodes()` now distinguishes 404 (node not in DB
yet) from other errors:
- **404**: Shows 📡 "Waiting for first advert — this node has been seen
in channel messages but hasn't advertised yet"
- **Other errors**: Shows "Could not load data" (unchanged)

## Tests
- Added `TestNodeHealthPartialFromPackets` — verifies a node with
packets but no DB entry returns 200 with synthetic node stub and stats
- Updated `TestHandleMessageChannelMessage` — verifies channel messages
no longer create phantom sender nodes
- All existing tests pass (`cmd/server`, `cmd/ingestor`)

Co-authored-by: you <you@example.com>
2026-04-09 20:03:52 -07:00
Kpa-clawbot
fcad49594b fix: include path.hopsCompleted in TRACE WebSocket broadcasts (#695)
## Summary

Fixes #683 — TRACE packets on the live map were showing the full path
instead of distinguishing completed vs remaining hops.

## Root Cause

Both WebSocket broadcast builders in `store.go` constructed the
`decoded` map with only `header` and `payload` keys — `path` was never
included. The frontend reads `decoded.path.hopsCompleted` to split trace
routes into solid (completed) and dashed (remaining) segments, but that
field was always `undefined`.

## Fix

For TRACE packets (payload type 9), call `DecodePacket()` on the raw hex
during broadcast and include the resulting `Path` struct in
`decoded["path"]`. This populates `hopsCompleted` which the frontend
already knows how to consume.

Both broadcast builders are patched:
- `IngestNewFromDB()` — new transmissions path (~line 1419)
- `IngestNewObservations()` — new observations path (~line 1680)

TRACE packets are infrequent, so the per-packet decode overhead is
negligible.

## Testing

- Added `TestIngestTraceBroadcastIncludesPath` — verifies that TRACE
broadcast maps include `decoded.path` with correct `hopsCompleted` value
- All existing tests pass (`cmd/server` + `cmd/ingestor`)

Co-authored-by: you <you@example.com>
2026-04-09 20:02:46 -07:00
Kpa-clawbot
a1e1e0bd2f fix: bottom-positioned panels overlap VCR bar (#693)
Fixes #685

## Problem

Corner positioning CSS (from PR #608) sets `bottom: 12px` for
bottom-positioned panels (`bl`, `br`), but the VCR bar at the bottom of
the live page is ~50px tall. This causes the legend (and any
bottom-positioned panel) to overlap the VCR controls.

## Fix

Changed `bottom: 12px` → `bottom: 58px` for both
`.live-overlay[data-position="bl"]` and
`.live-overlay[data-position="br"]`, matching the legend's original
`bottom: 58px` value that properly clears the VCR bar.

The VCR bar height is fixed (`.vcr-bar` class with consistent padding),
so a hardcoded value is appropriate here.

## Testing

- All existing tests pass (`npm test` — 13/13)
- CSS-only change, no logic affected

Co-authored-by: you <you@example.com>
2026-04-09 20:02:18 -07:00
efiten
34e7366d7c test: add RouteTransportDirect zero-hop cases to ingestor decoder tests (#684)
## Summary

Closes the symmetry gap flagged as a nit in PR #653 review:

> The ingestor decoder tests omit `RouteTransportDirect` zero-hop tests
— only the server decoder has those. Since the logic is identical, this
is not a blocker, but adding them would make the test suites symmetric.

- Adds `TestZeroHopTransportDirectHashSize` — `pathByte=0x00`, expects
`HashSize=0`
- Adds `TestZeroHopTransportDirectHashSizeWithNonZeroUpperBits` —
`pathByte=0xC0` (hash_size bits set, hash_count=0), expects `HashSize=0`

Both mirror the equivalent tests already present in
`cmd/server/decoder_test.go`.

## Test plan

- [ ] `cd cmd/ingestor && go test -run TestZeroHopTransportDirect -v` →
both new tests pass
- [ ] `cd cmd/ingestor && go test ./...` → no regressions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-09 17:36:34 -07:00
you
111b03cea1 docs: lead with pre-built Docker image as the headline v3.5.0 2026-04-08 07:22:07 +00:00
you
34c56d203e docs: promote API docs to own section with live analyzer.00id.net links, fix transition section 2026-04-08 07:21:11 +00:00
you
cc9f25e5c8 docs: fix release notes — bind mount for caddy-data, no personal paths, add Caddyfile example 2026-04-08 07:20:02 +00:00
you
2e33eb7050 docs: add HTTPS/Caddyfile mount to release notes and upgrade steps 2026-04-08 07:14:15 +00:00
you
6dd0957507 docs: use v3.5.0 tag in release notes, :latest requires git tag 2026-04-08 07:05:58 +00:00