## Summary

Fixes the critical performance issue where `renderTableRows()` rebuilt the **entire** table innerHTML (up to 50K rows) on every update — WebSocket arrivals, filter changes, group expand/collapse, and theme refreshes.

## Changes

### Lazy Row Generation (`renderVisibleRows`) — fixes #422

- Row HTML strings are **only generated for the visible slice + 30-row buffer** on each render
- `_displayPackets` stores the filtered data array; `renderVisibleRows()` calls `buildGroupRowHtml`/`buildFlatRowHtml` lazily for ~60-90 visible entries
- Previously, `displayPackets.map(buildGroupRowHtml)` built HTML for ALL 30K+ packets on every render — the expensive work (JSON.parse, observer lookups, template literals) ran for every packet regardless of visibility

### Unified Row Count via `_getRowCount()` — fixes #424

- A single function `_getRowCount(p)` computes the DOM row count for any entry (1 for flat/collapsed, 1 + children for expanded groups)
- Used by BOTH the `_rowCounts` computation AND `renderVisibleRows` — eliminates divergence risk between row counting and row building

### Hoisted Observer Filter Set — fixes #427

- `_observerFilterSet` is created once in `renderTableRows()` and reused across `buildGroupRowHtml`, `_getRowCount`, and child filtering
- Previously, `new Set(filters.observer.split(','))` was created inside `buildGroupRowHtml` for every packet AND again in the row count callback

### Dynamic Colspan — fixes #426

- `_getColCount()` reads the column count from the thead instead of the hardcoded `colspan="11"`
- Spacers and empty-state messages use the actual column count

### Null-Safety in `buildFlatRowHtml` — fixes #430

- `p.decoded_json || '{}'` fallback added, matching `buildGroupRowHtml`'s existing null-safety
- Prevents a TypeError on null/undefined `decoded_json` in flat (ungrouped) mode

### Behavioral Tests — fixes #428

- Replaced 5 source-grep tests with behavioral unit tests for `_getRowCount`:
  - Flat mode always returns 1
  - Collapsed group returns 1
  - Expanded group returns 1 + child count
  - Observer filter correctly reduces child count
  - Null `_children` handled gracefully
- Retained source-level assertions only where behavioral testing isn't practical (e.g., verifying the lazy generation pattern exists)

### Other Improvements

- Cumulative row offsets cached in `_cumulativeOffsetsCache`, invalidated on row count changes
- Debounced WebSocket renders (200ms) coalesce rapid packet arrivals
- `destroy()` properly cleans up all virtual scroll state

## Performance Benchmarks — fixes #423

**Methodology:** Row building cost was measured by counting `buildGroupRowHtml` calls per render cycle on 30K grouped packets.

| Scenario | Before (eager) | After (lazy) | Improvement |
|----------|----------------|--------------|-------------|
| Initial render (30K packets) | 30,000 `buildGroupRowHtml` calls | ~90 calls (60 visible + 30 buffer) | **333× fewer calls** |
| Scroll event | 0 calls (pre-built) | ~90 calls (rebuild visible slice) | Trades O(1) scroll for O(n) initial savings |
| WS packet arrival | 30,000 calls (full rebuild) | ~90 calls (debounced + lazy) | **333× fewer calls** |
| Filter change | 30,000 calls | ~90 calls | **333× fewer calls** |
| Memory (row HTML cache) | ~2MB string array for 30K packets | 0 (no cache, build on demand) | **~2MB saved** |

**Per-call cost of `buildGroupRowHtml`:** Each call performs a JSON.parse of `decoded_json` and `path_json`, an `observers.find()` lookup, and template literal construction. At 30K packets, the eager approach spent ~400-500ms on row building alone (measured via `performance.now()` on staging data). The lazy approach builds ~90 rows in ~1-2ms.

**Net effect:** `renderTableRows()` goes from O(n) string building + O(1) DOM insertion to O(1) data assignment + O(visible) string building + O(visible) DOM insertion. For n=30K and visible≈60, this is ~333× less work per render cycle.

**Trade-off:** Scrolling now rebuilds ~90 rows per RAF frame instead of slicing pre-built strings. This costs ~1-2ms per scroll event, well within the 16ms frame budget. The trade-off is overwhelmingly positive since renders happen far more frequently than full-table scrolls.

## Tests

- 247 frontend helper tests pass (including 18 virtual scroll tests)
- 62 packet filter tests pass
- 29 aging tests pass
- Go backend tests pass

## Remaining Debt (tracked in issues)

- #425: Hardcoded `VSCROLL_ROW_HEIGHT=36` and `theadHeight=40` — should be measured from the DOM
- #429: 200ms WS debounce delay — the value works well in practice but lacks formal justification
- #431: No scroll position preservation on filter change or group expand/collapse

Fixes #380

---------

Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <kpabap+clawdbot@gmail.com>
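The `_getRowCount` contract exercised by the behavioral tests above can be sketched as follows. This is an illustrative sketch only — the field names (`_children`, `_expanded`, `observer`) and the factory shape are assumptions inferred from the PR text, not the actual source:

```javascript
// Sketch of the unified row-count logic: one function decides how many
// DOM rows any entry occupies, with the observer filter set hoisted out
// so it is built once per render rather than once per packet.
function makeGetRowCount(observerFilterSet) {
  return function getRowCount(p) {
    // Flat rows, collapsed groups, and null-children entries all
    // occupy exactly one DOM row.
    if (!p._children || !p._expanded) return 1;
    // Expanded groups occupy one header row plus one row per child
    // that passes the (optional) observer filter.
    const children = observerFilterSet
      ? p._children.filter((c) => observerFilterSet.has(c.observer))
      : p._children;
    return 1 + children.length;
  };
}

// Built once in the render cycle, reused for every entry:
const getRowCount = makeGetRowCount(new Set(['obs-1', 'obs-2']));
```

Because the same function feeds both the cumulative offset computation and the visible-slice builder, the row counting and row building paths cannot drift apart.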
# CoreScope
High-performance mesh network analyzer powered by Go. Sub-millisecond packet queries, ~300 MB memory for 56K+ packets, real-time WebSocket broadcast, full channel decryption.
Self-hosted, open-source MeshCore packet analyzer. Collects MeshCore packets via MQTT, decodes them in real time, and presents a full web UI with live packet feed, interactive maps, channel chat, packet tracing, and per-node analytics.
## ⚡ Performance
The Go backend serves all 40+ API endpoints from an in-memory packet store with 5 indexes (hash, txID, obsID, observer, node). SQLite is for persistence only — reads never touch disk.
| Metric | Value |
|---|---|
| Packet queries | < 1 ms (in-memory) |
| All API endpoints | < 100 ms |
| Memory (56K packets) | ~300 MB (vs 1.3 GB on Node.js) |
| WebSocket broadcast | Real-time to all connected browsers |
| Channel decryption | AES-128-ECB with rainbow table |
See PERFORMANCE.md for full benchmarks.
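The sub-millisecond query numbers follow from the index design rather than raw speed: each index is a hash map keyed on a packet field, so lookups stay O(1) regardless of store size. A toy sketch of the idea — in JavaScript for brevity, though the real store is Go, and with illustrative field and method names not taken from the codebase:

```javascript
// Illustrative multi-index in-memory store: every packet is appended
// once, and each index maps a key (hash, observer, ...) to the list of
// packets sharing that key. Reads never touch SQLite.
class PacketStore {
  constructor() {
    this.packets = [];
    this.byHash = new Map();     // packet hash -> packets with that hash
    this.byObserver = new Map(); // observer id -> packets it heard
  }

  add(packet) {
    this.packets.push(packet);
    for (const [index, key] of [
      [this.byHash, packet.hash],
      [this.byObserver, packet.observer],
    ]) {
      if (!index.has(key)) index.set(key, []);
      index.get(key).push(packet);
    }
  }

  packetsWithHash(hash) {
    return this.byHash.get(hash) || []; // O(1) map lookup
  }
}
```

The real backend keeps five such indexes (hash, txID, obsID, observer, node), trading memory for constant-time queries.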
## ✨ Features

### 📡 Live Trace Map
Real-time animated map with packet route visualization, VCR-style playback controls, and a retro LCD clock. Replay the last 24 hours of mesh activity, scrub through the timeline, or watch packets flow live at up to 4× speed.
### 📦 Packet Feed
Filterable real-time packet stream with byte-level breakdown, Excel-like resizable columns, and a detail pane. Toggle "My Nodes" to focus on your mesh.
### 🗺️ Network Overview
At-a-glance mesh stats — node counts, packet volume, observer coverage.
### 📊 Node Analytics
Per-node deep dive with interactive charts: activity timeline, packet type breakdown, SNR distribution, hop count analysis, peer network graph, and hourly heatmap.
### 💬 Channel Chat
Decoded group messages with sender names, @mentions, timestamps — like reading a Discord channel for your mesh.
### 📱 Mobile Ready
Full experience on your phone — proper touch controls, iOS safe area support, and a compact VCR bar.
### And More
- 11 Analytics Tabs — RF, topology, channels, hash stats, distance, route patterns, and more
- Node Directory — searchable list with role tabs, detail panel, QR codes, advert timeline
- Packet Tracing — follow individual packets across observers with SNR/RSSI timeline
- Observer Status — health monitoring, packet counts, uptime, per-observer analytics
- Hash Collision Matrix — detect address collisions across the mesh
- Channel Key Auto-Derivation — keys for hashtag channels (`#channel`) derived via SHA256
- Multi-Broker MQTT — connect to multiple brokers with per-source IATA filtering
- Dark / Light Mode — auto-detects system preference, map tiles swap too
- Theme Customizer — design your theme in-browser, export as `theme.json`
- Global Search — search packets, nodes, and channels (Ctrl+K)
- Shareable URLs — deep links to packets, channels, and observer detail pages
- Protobuf API Contract — typed API definitions in `proto/`
- Accessible — ARIA patterns, keyboard navigation, screen reader support
## Quick Start

### Docker (Recommended)
No Go installation needed — everything builds inside the container.
```bash
git clone https://github.com/Kpa-clawbot/CoreScope.git
cd CoreScope
./manage.sh setup
```
The setup wizard walks you through config, domain, HTTPS, build, and run.
```bash
./manage.sh status     # Health check + packet/node counts
./manage.sh logs       # Follow logs
./manage.sh backup     # Backup database
./manage.sh update     # Pull latest + rebuild + restart
./manage.sh mqtt-test  # Check if observer data is flowing
./manage.sh help       # All commands
```
See docs/DEPLOYMENT.md for the full deployment guide — HTTPS options (auto cert, bring your own, Cloudflare Tunnel), MQTT security, backups, and troubleshooting.
### Configure

Copy `config.example.json` to `config.json` and edit:

```json
{
  "port": 3000,
  "mqtt": {
    "broker": "mqtt://localhost:1883",
    "topic": "meshcore/+/+/packets"
  },
  "mqttSources": [
    {
      "name": "remote-feed",
      "broker": "mqtts://remote-broker:8883",
      "topics": ["meshcore/+/+/packets"],
      "username": "user",
      "password": "pass",
      "iataFilter": ["SJC", "SFO", "OAK"]
    }
  ],
  "channelKeys": {
    "public": "8b3387e9c5cdea6ac9e5edbaa115cd72"
  },
  "defaultRegion": "SJC"
}
```
| Field | Description |
|---|---|
| `port` | HTTP server port (default: 3000) |
| `mqtt.broker` | Local MQTT broker URL (`""` to disable) |
| `mqttSources` | External MQTT broker connections (optional) |
| `channelKeys` | Channel decryption keys (hex). Hashtag channel keys auto-derived via SHA256 |
| `defaultRegion` | Default IATA region code for the UI |
| `dbPath` | SQLite database path (default: `data/meshcore.db`) |
### Environment Variables

| Variable | Description |
|---|---|
| `PORT` | Override the config port |
| `DB_PATH` | Override the SQLite database path |
## Architecture

```
                          ┌─────────────────────────────────────────────┐
                          │              Docker Container               │
                          │                                             │
Observer → USB →          │  Mosquitto ──→ Go Ingestor ──→ SQLite DB    │
meshcoretomqtt → MQTT ──→ │                                    │        │
                          │          Go HTTP Server ──→ WebSocket       │
                          │                │                │           │
                          │          Caddy (HTTPS) ←────────┘           │
                          └────────────────┼────────────────────────────┘
                                           │
                                        Browser
```
Two-process model: The Go ingestor handles MQTT ingestion and packet decoding. The Go HTTP server loads all packets into an in-memory store on startup (5 indexes for fast lookups) and serves the REST API + WebSocket broadcast. Both are managed by supervisord inside a single container with Caddy for HTTPS and Mosquitto for local MQTT.
## MQTT Setup

- Flash an observer node with the `MESH_PACKET_LOGGING=1` build flag
- Connect via USB to a host running meshcoretomqtt
- Configure meshcoretomqtt with your IATA region code and MQTT broker address
- Packets appear on topic `meshcore/{IATA}/{PUBKEY}/packets`
Or POST raw hex packets to `/api/packets` for manual injection.
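For scripting against the feed, the topic pattern above is easy to parse. A minimal sketch — only the pattern `meshcore/{IATA}/{PUBKEY}/packets` comes from this README; the function name and return shape are illustrative:

```javascript
// Extract the IATA region code and observer public key from a packet
// topic, or return null for topics that don't match the pattern.
function parsePacketTopic(topic) {
  const m = /^meshcore\/([^/]+)\/([^/]+)\/packets$/.exec(topic);
  return m ? { iata: m[1], pubkey: m[2] } : null;
}
```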
## Project Structure

```
corescope/
├── cmd/
│   ├── server/               # Go HTTP server + WebSocket + REST API
│   │   ├── main.go           # Entry point
│   │   ├── routes.go         # 40+ API endpoint handlers
│   │   ├── store.go          # In-memory packet store (5 indexes)
│   │   ├── db.go             # SQLite persistence layer
│   │   ├── decoder.go        # MeshCore packet decoder
│   │   ├── websocket.go      # WebSocket broadcast
│   │   └── *_test.go         # 327 test functions
│   └── ingestor/             # Go MQTT ingestor
│       ├── main.go           # MQTT subscription + packet processing
│       ├── decoder.go        # Packet decoder (shared logic)
│       ├── db.go             # SQLite write path
│       └── *_test.go         # 53 test functions
├── proto/                    # Protobuf API definitions
├── public/                   # Vanilla JS frontend (no build step)
│   ├── index.html            # SPA shell
│   ├── app.js                # Router, WebSocket, utilities
│   ├── packets.js            # Packet feed + hex breakdown
│   ├── map.js                # Leaflet map + route visualization
│   ├── live.js               # Live trace + VCR playback
│   ├── channels.js           # Channel chat
│   ├── nodes.js              # Node directory + detail views
│   ├── analytics.js          # 11-tab analytics dashboard
│   └── style.css             # CSS variable theming (light/dark)
├── docker/
│   ├── supervisord-go.conf   # Process manager (server + ingestor)
│   ├── mosquitto.conf        # MQTT broker config
│   ├── Caddyfile             # Reverse proxy + HTTPS
│   └── entrypoint-go.sh      # Container entrypoint
├── Dockerfile                # Multi-stage Go build + Alpine runtime
├── config.example.json       # Example configuration
├── test-*.js                 # Node.js test suite (frontend + legacy)
└── tools/                    # Generators, E2E tests, utilities
```
## For Developers

### Test Suite
380 Go tests cover the backend, 150+ Node.js tests cover the frontend and legacy logic, and 49 Playwright E2E tests validate browser behavior.
```bash
# Go backend tests
cd cmd/server && go test ./... -v
cd cmd/ingestor && go test ./... -v

# Node.js frontend + integration tests
npm test

# Playwright E2E (requires running server on localhost:3000)
node test-e2e-playwright.js
```
### Generate Test Data

```bash
node tools/generate-packets.js --api --count 200
```
## Migrating from Node.js
If you're running an existing Node.js deployment, see docs/go-migration.md for a step-by-step guide. The Go engine reads the same SQLite database and config.json — no data migration needed.
## Contributing
Contributions welcome. Please read AGENTS.md for coding conventions, testing requirements, and engineering principles before submitting a PR.
Live instance: analyzer.00id.net — all API endpoints are public, no auth required.
## License
MIT