Commit Graph

151 Commits

Author SHA1 Message Date
you
bb409a2e00 fix: nodes list shows actual last heard time, not just last advert
Server now computes last_heard from in-memory packet store (all traffic
types) and includes it in /api/nodes response. Client prefers last_heard
over DB last_seen for display, sort, filter, and status calculation.

Fixes inconsistency where list showed '5d ago' but side pane showed
'26m ago' for the same node.
2026-03-23 20:32:56 +00:00
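A minimal sketch of the idea behind this commit, with illustrative field names (the real packet-store shape in server.js may differ): derive a per-node last_heard from every in-memory packet, then have the client prefer it over the DB last_seen.

```js
// Sketch (hypothetical field names): newest received_at per sender across
// all traffic types, not just ADVERTs.
function computeLastHeard(packets) {
  const lastHeard = new Map(); // pubkey -> most recent received_at (ms)
  for (const pkt of packets) {
    const pk = pkt.sender_pubkey;
    if (!pk) continue;
    if (!lastHeard.has(pk) || pkt.received_at > lastHeard.get(pk)) {
      lastHeard.set(pk, pkt.received_at);
    }
  }
  return lastHeard;
}

// Client side: prefer the fresher timestamp for display, sort, filter, status.
const effectiveLastSeen = (node) =>
  Math.max(node.last_heard ?? 0, node.last_seen ?? 0);
```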
you
49e6d6d0b7 fix: advert flags are a 4-bit enum type, not individual bit flags
ADV_TYPE_ROOM=3 (0b0011) was misread as chat+repeater because the decoder
treated the lower nibble as individual bit flags. Now the lower nibble
(type & 0x0F) is decoded as an enum (0=none, 1=chat, 2=repeater, 3=room,
4=sensor).

Includes startup backfill: scans all adverts and fixes any node roles
in the DB that were incorrectly set to 'repeater' when they should be
'room'. Logs count of fixed nodes on startup.
2026-03-23 19:20:13 +00:00
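An illustrative before/after of the decoding change; constant and function names here are hypothetical, not the actual decoder's.

```js
// Before (wrong): lower nibble treated as independent bit flags, so
// ADV_TYPE_ROOM (3 = 0b0011) looked like chat|repeater.
// After (sketch): lower nibble read as a single enum value.
const ADV_TYPES = ['none', 'chat', 'repeater', 'room', 'sensor'];

function decodeAdvertType(flagsByte) {
  const type = flagsByte & 0x0f; // 4-bit enum, not a bitfield
  return ADV_TYPES[type] ?? `unknown(${type})`;
}

decodeAdvertType(0x03); // 'room' (previously misread as chat+repeater)
```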
you
d97f35534f fix: smarter hash inconsistency detection — only flag flip-flopping, not upgrades
A node going 1B→2B and staying 2B is a firmware upgrade, not a bug.
Only flag as inconsistent when hash sizes flip-flop (2+ transitions in
the chronological advert sequence). Single transition = clean upgrade.
2026-03-23 16:55:16 +00:00
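A sketch of the flip-flop check described above, assuming adverts are walked oldest-first; the real implementation and field names may differ.

```js
// A node is flagged only when its hash sizes flip-flop. One transition
// (e.g. 1 -> 2 and staying at 2) is treated as a clean firmware upgrade.
function isHashSizeInconsistent(advertsOldestFirst) {
  let transitions = 0;
  let prev = null;
  for (const adv of advertsOldestFirst) {
    if (prev !== null && adv.hash_size !== prev) transitions++;
    prev = adv.hash_size;
  }
  return transitions >= 2; // 2+ transitions = flip-flopping
}
```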
you
f7a4ad2b73 feat: detect and flag inconsistent hash sizes across adverts
Tracks all hash_size values seen per node. If a node has sent adverts
with different hash sizes, flags it as hash_size_inconsistent with a
yellow ⚠️ badge on both side pane and detail page. Tooltip mentions
likely firmware bug (pre-1.14.1). Stats row shows all sizes seen.
2026-03-23 16:39:02 +00:00
you
7bc8524404 revert: hash_size back to newest-first, not max
Repeaters can switch hash sizes (e.g. buggy 1.14 firmware emitting 0x00
path bytes). Latest advert is the correct current state.
2026-03-23 16:37:07 +00:00
you
f33f7bc5a8 fix: hash_size uses max across all adverts, not just newest
Some nodes emit adverts with varying path byte values (e.g. 0x00 and 0x40).
Taking the first/newest was unreliable. Now takes the maximum hash_size
seen across all adverts for each node.
2026-03-23 16:33:39 +00:00
you
3ac91874ba fix: include hash_size in node detail API endpoint
Was only included in the /api/nodes list endpoint but missing from
/api/nodes/:pubkey detail endpoint. Reads from _hashSizeMap.
2026-03-23 16:22:54 +00:00
you
26d28d88d1 fix: theme.json goes next to config.json, log location on startup
- Search order: app dir first (next to config.json), then data/ dir
- Startup log: '[theme] Loaded from ...' or 'No theme.json found. Place it next to config.json'
- README updated: 'put it next to config.json' instead of confusing data/ path
2026-03-23 05:13:17 +00:00
you
4a5cd9fef4 fix: config.json load order — bind-mount first, data/ fallback
Broke MQTT by reading example config from data/ instead of the real
bind-mounted /app/config.json. Now checks app dir first.
2026-03-23 05:03:03 +00:00
you
664a3dde62 fix: hot-load config.json and theme.json from data/ volume
- Both loadConfigFile() and loadThemeFile() check data/ dir first, then app dir
- Theme endpoint re-reads both files on every request — edit the file, refresh the page
- No container restart, no symlinks, no extra mounts needed
- Just edit /app/data/theme.json (or config.json) and it's live
2026-03-23 04:44:30 +00:00
you
a2cc30fa2f fix: remove POST /api/config/theme and server save/load buttons
Server theme is admin-only: download theme.json, place it on the server manually.
No unauthenticated write endpoint.
2026-03-23 04:34:10 +00:00
you
f29317332c feat: server-side theme save/load via theme.json
- Server reads from theme.json (separate from config.json), hot-loads on every request
- POST /api/config/theme writes theme.json directly — no manual file editing
- GET /api/config/theme now merges: defaults → config.json → theme.json
- Also returns themeDark and typeColors (were missing from API)
- Customizer: replaced 'merge into config.json' instructions with Save/Load Server buttons
- JSON export moved to collapsible details section
- theme.json added to .gitignore (instance-specific)
2026-03-23 04:31:08 +00:00
you
0ed96539db feat: config-driven customization system (Phase 1)
Add GET /api/config/theme endpoint serving branding, theme colors,
node colors, and home page content from config.json with sensible
defaults so unconfigured instances look identical to before.

Client-side (app.js):
- Fetch theme config on page load, before first render
- Override CSS variables from theme.* on document root
- Override ROLE_COLORS/ROLE_STYLE from nodeColors.*
- Replace nav brand text, logo, favicon from branding.*
- Store config in window.SITE_CONFIG for other pages

Home page (home.js):
- Hero title/subtitle from config.home
- Steps and checklist from config.home
- Footer links from config.home.footerLinks
- Chooser welcome text uses configured siteName

Config example updated with all available theme options.

No default appearance changes — all overrides are optional.
2026-03-23 00:37:48 +00:00
you
d82b08def7 Fix: sort regional candidates by distance to IATA center
Without sender GPS (channel texts etc.), the forward pass had no
anchor and just took candidates[0] — random order. Now regional
candidates are sorted by distKm to observer IATA center before
disambiguation. Closest to region center = default pick.
2026-03-22 23:59:33 +00:00
you
b7573df53e Fix: move /api/iata-coords route after app initialization
Route was at line 16 before express app was created — caused
'Cannot access app before initialization' crash on startup.
2026-03-22 22:26:15 +00:00
you
bd43936e8b Show observer IATA regions in packet, node, and live detail views
- packets.js: obsName() now shows IATA code next to observer name, e.g. 'EW-SFC-DR01 (SFO)'
- packets.js: hop conflicts in field table show distance (e.g. '37km')
- nodes.js: both full and sidebar detail views show 'Regions: SJC, OAK, SFO' badges and per-observer IATA
- live.js: node detail panel shows regions in 'Heard By' heading and per-observer IATA
- server.js: /api/nodes/:pubkey/health now returns iata field for each observer
- Bump cache busters
2026-03-22 22:21:01 +00:00
you
f25e96036d Client-side regional hop filtering (#117)
HopResolver now mirrors server-side layered regional filtering:
- init() accepts observers list + IATA coords
- resolve() accepts observerId, looks up IATA, filters candidates
  by haversine distance (300km radius) to IATA center
- Candidates include regional, filterMethod, distKm fields
- Packet detail view passes observer_id to resolve()

New endpoint: GET /api/iata-coords returns airport coordinates
for client-side use.

Fixes: conflict badges showing "0 conflicts" in packet detail
because client-side resolver had no regional filtering.
2026-03-22 22:18:01 +00:00
you
90881f0676 Regional hop filtering: layered geo + observer approach (#117)
Layer 1 (GPS, bridge-proof): Nodes with lat/lon are checked via
haversine distance to the observer IATA center. Only nodes within
300km are considered regional. Bridged WA nodes appearing in SJC
MQTT feeds are correctly rejected because their GPS coords are
1100km+ from SJC.

Layer 2 (observer-based, fallback): Nodes without GPS fall back to
_advertByObserver index — were they seen by a regional observer?
Less precise but still useful for nodes that never sent ADVERTs
with coordinates.

Layer 3: Global fallback, flagged.

New module: iata-coords.js with 60+ IATA airport coordinates +
haversine distance function.

API response now includes filterMethod (geo/observer/none) and
distKm per conflict candidate.

Tests: 22 unit tests (haversine, boundaries, cross-regional
collision sim, layered fallback, bridge rejection).
2026-03-22 22:09:43 +00:00
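A sketch of the Layer 1 geo filter, under the assumptions stated in the commit (300 km radius, haversine to the observer's IATA center); function and field names are illustrative, not the exact iata-coords.js API.

```js
// Keep only candidates whose GPS position is within radiusKm of the
// observer's IATA center; attach distKm for the API response.
const EARTH_RADIUS_KM = 6371;

function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

function geoFilterCandidates(candidates, iataCenter, radiusKm = 300) {
  return candidates
    .filter((c) => c.lat != null && c.lon != null)
    .map((c) => ({
      ...c,
      distKm: haversineKm(c.lat, c.lon, iataCenter.lat, iataCenter.lon),
    }))
    .filter((c) => c.distKm <= radiusKm);
}
```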
you
2fc07ef697 Fix #117: Regional filtering for repeater ID resolution
1-byte (and 2-byte) hop IDs match many nodes globally. Previously
resolve-hops picked candidates from anywhere, causing cross-regional
false paths (e.g. Eugene packet showing Vancouver repeaters).

Fix: Use observer IATA to determine packet region. Filter candidates
to nodes seen by observers in the same IATA region via the existing
_advertByObserver index. Fall back to global only if zero regional
candidates exist (flagged as globalFallback).

API changes to /api/resolve-hops response:
- conflicts[]: all candidates with regional flag per hop
- totalGlobal/totalRegional: candidate counts
- globalFallback: true when no regional candidates found
- region: packet IATA region in top-level response

UI changes:
- Conflict count badge (⚠3) instead of bare ⚠
- Tooltip shows regional vs global candidates
- Unreliable hops shown with strikethrough + opacity
- Global fallback hops shown with red dashed underline
2026-03-22 21:38:24 +00:00
you
443a078168 Audio Lab page (Milestone 1): Packet Jukebox
New #/audio-lab page for understanding and debugging audio sonification.

Server: GET /api/audio-lab/buckets — returns representative packets
bucketed by type (up to 8 per type spanning size range).

Client: Left sidebar with collapsible type sections, right panel with:
- Controls: Play, Loop, Speed (0.25x-4x), BPM, Volume, Voice select
- Packet Data: type, sizes, hops, obs count, hex dump with sampled
  bytes highlighted
- Sound Mapping: computed instrument, scale, filter, volume, voices,
  pan — shows exactly why it sounds the way it does
- Note Sequence: table of sampled bytes → MIDI → freq → duration → gap
- Byte Visualizer: bar chart of payload bytes, sampled ones colored

Enables MeshAudio automatically on first play. Mobile responsive.
2026-03-22 19:19:45 +00:00
you
193e63a834 Fix realistic mode: all WS broadcasts include hash + raw_hex
Secondary broadcast paths (ADVERT, GRP_TXT, TXT_MSG, TRACE, API)
were missing hash field. Without hash, realistic mode's buffer
check (if pkt.hash) failed and packets fell through to
animatePacket individually — causing duplicate feed items and
duplicate sonification.

Also added missing addFeedItem call in animateRealisticPropagation
so the feed shows consolidated entries in realistic mode.
2026-03-22 18:37:53 +00:00
you
aa35337a5e fix: WS broadcast had null packet when observation was deduped
getById() returns null for deduped observations (not stored in byId).
Client filters on m.data.packet being truthy, so all deduped packets
were silently dropped from WS. Fallback to transmission or raw pktData.
2026-03-22 04:38:07 +00:00
you
81751fd3af Drop legacy packets + paths tables on startup, remove dead code
Migration runs automatically on next startup — drops paths first (FK to
packets), then packets. Removes insertPacket(), insertPath(), all
prepared statements and references to both tables. Server-side type/
observer filtering also removed (client does it in-memory).

Saves ~2M rows (paths) + full packets table worth of disk.
2026-03-22 00:45:27 +00:00
you
59062946be Fix multi-select filters (AND→OR) and add localStorage persistence
- Server: support comma-separated type filter values (OR logic)
- Server: add observer_id filtering to /api/packets endpoint
- Client: fix type and observer filters to use OR logic for multi-select
- Client: persist observer and type filter selections to localStorage
- Keys: meshcore-observer-filter, meshcore-type-filter
2026-03-22 00:34:38 +00:00
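A rough sketch of the client-side OR logic and persistence; only the two localStorage key names come from the commit, everything else is illustrative.

```js
// OR logic across a multi-select: a packet passes if it matches ANY selected
// type and ANY selected observer (empty selection = no filtering).
function matchesFilters(pkt, selectedTypes, selectedObservers) {
  const typeOk = selectedTypes.length === 0 || selectedTypes.includes(pkt.type);
  const obsOk =
    selectedObservers.length === 0 || selectedObservers.includes(pkt.observer_id);
  return typeOk && obsOk;
}

// Persist selections under the keys named in the commit.
function persistFilters(selectedObservers, selectedTypes) {
  localStorage.setItem('meshcore-observer-filter', JSON.stringify(selectedObservers));
  localStorage.setItem('meshcore-type-filter', JSON.stringify(selectedTypes));
}
```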
you
5c25c6adf6 Disable auto-seeding on empty DB - require --seed flag or SEED_DB=true
Previously db.seed() ran unconditionally on startup and would populate
a fresh database with fake test data. Now seeding only triggers when
explicitly requested via --seed CLI flag or SEED_DB=true env var.

The seed functionality remains available for developers:
  node server.js --seed
  SEED_DB=true node server.js
  node db.js  (direct run still seeds)
2026-03-21 23:00:01 +00:00
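A minimal sketch of the opt-in guard; the exact flag parsing in server.js may differ.

```js
// Seed only when explicitly requested via CLI flag or environment variable.
const shouldSeed =
  process.argv.includes('--seed') || process.env.SEED_DB === 'true';

if (shouldSeed) {
  db.seed(); // populate a fresh database with test data, only when asked
}
```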
you
734e5570c1 fix: header row shows first observer's path, not longest path
Removed server-side longest-path override in /api/packets/:id that
replaced the transmission's path_json with the longest observation
path. The header should always reflect the first observer's path.
Individual observation paths are available in the observations array.
2026-03-21 22:58:37 +00:00
you
8a18e091e0 fix: use packet hash instead of sender_timestamp for channel message dedup
Device clocks on MeshCore nodes are wildly inaccurate (off by hours or
epoch-near values like 4). The channel messages endpoint was using
sender_timestamp as part of the deduplication key, which could cause
messages to fail deduplication or incorrectly collide.

Changed dedupe key from sender:timestamp to sender:hash, which is the
correct unique identifier for a transmission.

Also added TIMESTAMP-AUDIT.md documenting all device timestamp usage.
2026-03-21 21:53:38 +00:00
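A sketch of the dedupe-key change, with illustrative field names: sender plus packet hash instead of the untrustworthy sender_timestamp.

```js
function dedupeChannelMessages(messages) {
  const seen = new Set();
  const out = [];
  for (const msg of messages) {
    const key = `${msg.sender}:${msg.hash}`; // was `${msg.sender}:${msg.sender_timestamp}`
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(msg);
  }
  return out;
}
```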
you
3a1b0b544b refactor: migrate all SQL queries from packets table to packets_v view (transmissions+observations)
- Create packets_v SQL view joining transmissions+observations to match old packets schema
- Replace all SELECT FROM packets with packets_v in db.js, packet-store.js, server.js
- Update countPackets/countRecentPackets to query observations directly
- Update seed() to use insertTransmission instead of insertPacket
- Remove insertPacket from exports (no longer called)
- Keep packets table schema intact (not dropped yet, pending testing)
2026-03-21 20:38:58 +00:00
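A rough illustration of the compatibility-view idea only — the column names below are invented; the real packets_v definition in db.js joins the actual transmissions/observations schemas.

```js
// Hypothetical sketch, assuming a better-sqlite3-style handle.
db.exec(`
  CREATE VIEW IF NOT EXISTS packets_v AS
  SELECT
    o.id           AS id,
    t.hash         AS hash,
    t.path_json    AS path_json,
    t.decoded_json AS decoded_json,
    o.observer_id  AS observer_id,
    o.received_at  AS received_at,
    o.snr          AS snr,
    o.rssi         AS rssi
  FROM observations o
  JOIN transmissions t ON t.hash = o.transmission_hash
`);
```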
you
016c4091db perf: disable SQLite auto-checkpoint, use manual PASSIVE checkpoint
SQLite WAL auto-checkpoint (every 1000 pages/4MB) was causing 200ms+
event loop spikes on a 381MB database. This is synchronous I/O that
blocks the Node.js event loop unpredictably.

Fix: disable auto-checkpoint, run PASSIVE (non-blocking) checkpoint
every 5 minutes. PASSIVE won't stall readers or writers — it only
checkpoints pages that aren't currently in use.
2026-03-21 20:19:22 +00:00
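A sketch of the checkpoint change, assuming a better-sqlite3-style handle: turn off the automatic checkpoint, then run a non-blocking PASSIVE checkpoint on a timer.

```js
db.pragma('journal_mode = WAL');
db.pragma('wal_autocheckpoint = 0'); // no more surprise synchronous checkpoints

setInterval(() => {
  try {
    db.pragma('wal_checkpoint(PASSIVE)'); // only checkpoints pages not in use
  } catch (err) {
    console.error('[db] passive checkpoint failed:', err.message);
  }
}, 5 * 60 * 1000); // every 5 minutes
```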
you
0ee8992e09 perf: eliminate synchronous blocking during startup pre-warm
Previous pre-warm called computeAllSubpaths() synchronously (500ms+)
directly in the setTimeout callback, then sequentially warmed 8 more
endpoints. Any browser request arriving during this 1.5s+ window
waited the full duration (user saw 4816ms for a 3-hop resolve-hops).

Fix: ALL pre-warm now goes through self-HTTP-requests which yield the
event loop between each computation. Delayed to 5s so initial page
load requests complete first (they populate cache on-demand).

Removed the sync computeAllSubpaths() call and inline subpath cache
population — the /api/analytics/subpaths endpoint handles this itself.
2026-03-21 20:07:27 +00:00
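A sketch of the self-HTTP pre-warm loop; the endpoint list and PORT variable are illustrative, and it assumes a Node version with global fetch.

```js
// Pre-warm by requesting our own endpoints sequentially, so the event loop
// yields between heavy computations instead of blocking inside one callback.
const WARM_ENDPOINTS = ['/api/nodes', '/api/observers', '/api/analytics/distance'];

setTimeout(async () => {
  for (const path of WARM_ENDPOINTS) {
    try {
      await fetch(`http://127.0.0.1:${PORT}${path}`); // each await yields the loop
    } catch (err) {
      console.warn('[prewarm] failed', path, err.message);
    }
  }
}, 5000); // delayed 5s so initial page-load requests complete first
```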
you
a3d94e74c6 perf: node paths endpoint uses disambiguateHops with prefix index
Replaced inline resolveHopsInternal (allNodes.filter per hop) with
shared disambiguateHops (prefix-indexed). Also uses _parsedPath cache
and per-request disambig cache. /api/nodes/:pubkey/paths was 560ms cold,
should now be much faster.
2026-03-21 19:59:11 +00:00
you
a16bd34a1f perf: revert 200ms gap (made it worse), warm at 100ms instead of 1s
200ms gaps meant clients hit cold caches themselves (worse).
Pre-warm should be fast and immediate — start at 100ms after listen,
setImmediate between endpoints to yield but not delay.
2026-03-21 19:54:10 +00:00
you
e95da27555 perf: 200ms gap between pre-warm requests to drain client queue
setImmediate wasn't enough — each analytics computation blocks for
200-400ms synchronously. Adding a 200ms setTimeout between pre-warm
requests gives pending client requests a window to complete between
the heavy computations.
2026-03-21 19:53:02 +00:00
you
88e4287b3e perf: shared cached node list + cached path JSON parse
- 8 separate 'SELECT * FROM nodes' queries replaced with getCachedNodes()
  (refreshes every 30s, prefix index built once and reused)
- Region-filtered subpaths + master subpaths use _parsedPath cache
- Eliminates repeated SQLite queries + prefix index rebuilds across
  back-to-back analytics endpoint calls
2026-03-21 19:49:54 +00:00
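A sketch of the shared 30-second node cache; buildPrefixIndex is a stand-in for the real prefix-index construction, and the query shape is illustrative.

```js
let _nodesCache = { at: 0, nodes: null, prefixIndex: null };

function getCachedNodes() {
  const now = Date.now();
  if (!_nodesCache.nodes || now - _nodesCache.at > 30_000) {
    const nodes = db.prepare('SELECT * FROM nodes').all();
    _nodesCache = { at: now, nodes, prefixIndex: buildPrefixIndex(nodes) };
  }
  return _nodesCache; // reused across back-to-back analytics calls
}
```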
you
961be4fd80 perf: yield event loop between pre-warm requests via setImmediate 2026-03-21 19:38:59 +00:00
you
95185a381d perf: add observers, nodes, distance to startup pre-warm
These endpoints were missing from the sequential pre-warm,
causing cold cache hits when clients connect before warm completes.
observers was 3s cold, distance was 600ms cold.
2026-03-21 19:36:46 +00:00
you
74fd97f761 Optimize analytics endpoints: prefix maps, cached JSON.parse, reduced scans
- Topology: replace O(N) allNodes.filter with prefix map + hop cache for resolveHop
- Topology: use _parsedPath cached JSON.parse for path_json (3 call sites)
- Topology: build observer map from already-filtered packets instead of second full scan
- Hash-sizes: prefix map for hop resolution instead of allNodes.find per hop
- Hash-sizes: use _parsedPath and _parsedDecoded cached parses
- Channels: use _parsedDecoded cached parse for decoded_json
2026-03-21 19:35:06 +00:00
you
9867b62872 Optimize /api/analytics/distance cold cache performance
- Build prefix map for O(1) hop resolution instead of O(N) linear scan per hop
- Cache resolved hops to avoid re-resolving same hex prefix across packets
- Pre-compute repeater set for O(1) role lookups
- Cache parsed path_json/decoded_json on packet objects (_parsedPath/_parsedDecoded)
2026-03-21 19:22:37 +00:00
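A sketch of the prefix-map idea used by these perf commits: index nodes by the first pubkey byte so resolving a hop is a lookup plus a short candidate scan instead of filtering all nodes. Names are illustrative.

```js
function buildPrefixIndex(nodes) {
  const byPrefix = new Map(); // '3f' -> [node, ...]
  for (const node of nodes) {
    const prefix = node.pubkey.slice(0, 2).toLowerCase();
    if (!byPrefix.has(prefix)) byPrefix.set(prefix, []);
    byPrefix.get(prefix).push(node);
  }
  return byPrefix;
}

function resolveHop(hopHex, byPrefix, cache) {
  if (cache.has(hopHex)) return cache.get(hopHex); // same prefix resolved once
  const candidates = (byPrefix.get(hopHex.slice(0, 2).toLowerCase()) || [])
    .filter((n) => n.pubkey.toLowerCase().startsWith(hopHex.toLowerCase()));
  cache.set(hopHex, candidates);
  return candidates;
}
```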
you
0fb3553762 perf: precompute hash_size map at startup, update incrementally
The hash_size computation was scanning all 19K+ packets with JSON.parse
on every /api/nodes request, blocking the event loop for hundreds of ms.
Event loop p95 was 236ms, max 1732ms.

Now computed once at startup and updated incrementally on each new packet.
/api/nodes just does a Map.get per node instead of full scan.
2026-03-21 19:00:28 +00:00
you
8811efdb24 fix: hash_size must use newest ADVERT, not oldest
Packets array is sorted newest-first. The previous 'last-wins'
approach (unconditional set) gave the OLDEST packet's hash_size.
Switched to first-wins (!has guard) which correctly uses the
newest ADVERT since we iterate newest-first.

Verified: Kpa Roof Solar has 1-byte ADVERTs (old firmware) and
2-byte ADVERTs (new firmware) interleaved. Newest are 2-byte.
2026-03-21 18:52:06 +00:00
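A sketch of the interaction this commit and the two below wrestle with, with illustrative names: the packet array is iterated newest-first, so a first-wins guard keeps the newest ADVERT's hash_size per node.

```js
function buildHashSizeMap(packetsNewestFirst) {
  const hashSizeMap = new Map();
  for (const pkt of packetsNewestFirst) {
    if (pkt.type !== 'ADVERT') continue;
    if (hashSizeMap.has(pkt.sender_pubkey)) continue; // first hit = newest ADVERT
    hashSizeMap.set(pkt.sender_pubkey, pkt.hash_size);
  }
  return hashSizeMap;
}
```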
you
e50ce03414 fix: Pass 2 hash_size was overwriting ADVERT-derived values
Pass 1 correctly uses last-wins for ADVERT packets. But Pass 2
(fallback for nodes without ADVERTs) was also unconditionally
overwriting, so a stale 1-byte non-ADVERT packet would clobber
the correct 2-byte value from Pass 1.

Restored the !hashSizeMap.has() guard on Pass 2 only — it should
only fill gaps, never override ADVERT-derived hash_size.
2026-03-21 18:50:50 +00:00
you
ee58e648a6 Fix hash_size using first-seen-wins instead of last-seen-wins
The hashSizeMap was guarded by !hashSizeMap.has(pk), meaning the oldest
ADVERT determined a node's hash_size permanently. If a node upgraded
firmware from 1-byte to 2-byte hash prefix, the stale value persisted.

Remove the guard so newer packets overwrite older ones (last-seen-wins).
2026-03-21 18:43:48 +00:00
you
2170dd7743 security: require API key for POST /api/packets and /api/perf/reset
- New config.apiKey field — when set, POST endpoints require X-Api-Key header
- If apiKey not configured, endpoints remain open (dev/local mode)
- GET endpoints and /api/decode (read-only) remain public
- Closes the packet injection attack surface
2026-03-21 18:40:06 +00:00
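A sketch of the opt-in key check as Express middleware; the middleware shape and handler names (handlePostPacket, handlePerfReset) are hypothetical.

```js
function requireApiKey(req, res, next) {
  if (!config.apiKey) return next(); // not configured: open dev/local mode
  if (req.get('X-Api-Key') === config.apiKey) return next();
  res.status(401).json({ error: 'invalid or missing X-Api-Key' });
}

app.post('/api/packets', requireApiKey, handlePostPacket);
app.post('/api/perf/reset', requireApiKey, handlePerfReset);
```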
you
804c39504c feat: View on Map buttons for distance leaderboard hops and paths
- 🗺️ button on each top hop row → opens map with from/to markers + line
- 🗺️ button on each top path row → opens map with full multi-hop route
- Server now includes fromPk/toPk in topPaths hops for map resolution
- Uses existing drawPacketRoute() via sessionStorage handoff
2026-03-21 17:56:44 +00:00
you
27914fbd62 fix: cap max hop distance at 300km, link to node detail not analytics
- 1000km filter was too generous for LoRa (record ~250km)
- Uhuru kwa watu 📡 ↔ Bay Area hops at 880km were obviously wrong
- Node links in leaderboard now go to #/nodes/:pk (detail) not /analytics
2026-03-21 17:45:51 +00:00
you
5720d0d948 Add Distance/Range analytics tab
New /api/analytics/distance endpoint that:
- Resolves path hops to nodes with valid GPS coordinates
- Calculates haversine distances between consecutive hops
- Separates stats by link type: R↔R, C↔R, C↔C
- Returns top longest hops, longest paths, category stats, histogram, time series
- Filters out invalid GPS (null, 0/0) and sanity-checks >1000km
- Supports region filtering and caching

New Distance tab in analytics UI with:
- Summary cards (total hops, paths, avg/max distance)
- Link type breakdown table
- Distance histogram
- Average distance over time sparkline
- Top 20 longest hops leaderboard
- Top 10 longest multi-hop paths table
2026-03-21 17:33:33 +00:00
you
ca4aa72574 fix: region filter nodes by ADVERT observers, not data packets
The previous approach matched nodes via data packet hashes seen by
regional observers — but mesh packets propagate everywhere, so nearly
every node matched every region (550/558).

New approach: _advertByObserver index tracks which observers saw each
node's ADVERT packets. ADVERTs are local broadcasts that indicate
physical presence, so they're the correct signal for geographic filtering.

Also fixes role counts to reflect filtered results, not global totals.
2026-03-21 08:31:55 +00:00
you
63a525ecc1 fix: stack overflow in /analytics/rf — replace Math.min/max spread
Math.min(...arr) and Math.max(...arr) blow the call stack when arr
has tens of thousands of elements. Replaced with simple for-loop
arrMin/arrMax helpers.
2026-03-21 08:26:34 +00:00
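A sketch of the loop-based helpers: Math.min(...arr) passes every element as a call argument, which overflows the stack for very large arrays.

```js
function arrMin(arr) {
  let min = Infinity;
  for (const v of arr) if (v < min) min = v;
  return min;
}

function arrMax(arr) {
  let max = -Infinity;
  for (const v of arr) if (v > max) max = v;
  return max;
}
```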
you
4ffeb8204e fix: analytics RF stats respect region filter
- Separate region filtering from SNR filtering in /api/analytics/rf
- totalAllPackets now shows regional observation count (was global)
- Add totalTransmissions (unique hashes in regional set)
- Payload types and packet sizes use all regional data, not just SNR-filtered
- Signal stats (SNR, RSSI, scatter) use SNR-filtered subset
- Handle empty SNR/RSSI arrays gracefully (no Infinity/-Infinity)
2026-03-21 08:01:30 +00:00
you
c2acf40951 fix: revert broken SQL region filtering for nodes — use in-memory index
The subagent used a non-existent column (sender_key) in the SQL join.
Reverted to the same byObserver + _nodeHashIndex approach used by
bulk-health and network-status endpoints.
2026-03-21 07:15:22 +00:00