## Summary
Observers that stop actively sending data now get removed after a
configurable retention period (default 14 days).
Previously, observers remained in the `observers` table forever. This
meant nodes that were once observers for an instance but are no longer
connected (even if still active in the mesh elsewhere) would continue
appearing in the observer list indefinitely.
## Key Design Decisions
- **Active data requirement**: `last_seen` is only updated when the
observer itself sends packets (via `stmtUpdateObserverLastSeen`). Being
seen by another node does NOT update this field. So an observer must
actively send data to stay listed.
- **Default: 14 days** — observers not seen in 14 days are removed
- **`-1` = keep forever** — for users who want observers to never be
removed
- **`0` = use default (14 days)** — same as not setting the field
- **Runs on startup + daily ticker** — staggered 3 minutes after metrics
prune to avoid DB contention
## Changes
| File | Change |
|------|--------|
| `cmd/ingestor/config.go` | Add `ObserverDays` to `RetentionConfig`, add `ObserverDaysOrDefault()` |
| `cmd/ingestor/db.go` | Add `RemoveStaleObservers()` — deletes observers with `last_seen` before cutoff |
| `cmd/ingestor/main.go` | Wire up startup + daily ticker for observer retention |
| `cmd/server/config.go` | Add `ObserverDays` to `RetentionConfig`, add `ObserverDaysOrDefault()` |
| `cmd/server/db.go` | Add `RemoveStaleObservers()` (server-side, uses read-write connection) |
| `cmd/server/main.go` | Wire up startup + daily ticker, shutdown cleanup |
| `cmd/server/routes.go` | Admin prune API now also removes stale observers |
| `config.example.json` | Add `observerDays: 14` with documentation |
| `cmd/ingestor/coverage_boost_test.go` | 4 tests: basic removal, empty store, keep forever (-1), default (0→14) |
| `cmd/server/config_test.go` | 4 tests: `ObserverDaysOrDefault` edge cases |
## Config Example
```json
{
  "retention": {
    "nodeDays": 7,
    "observerDays": 14,
    "packetDays": 30,
    "_comment": "observerDays: -1 = keep forever, 0 = use default (14)"
  }
}
```
## Admin API
The `/api/admin/prune` endpoint now also removes stale observers (using
`observerDays` from config) and reports `observers_removed` in the
response alongside `packets_deleted`.
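An illustrative response shape (the field names `packets_deleted` and `observers_removed` come from this change; the counts and any surrounding fields are made up):

```json
{
  "packets_deleted": 1240,
  "observers_removed": 3
}
```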
## Test Plan
- [x] `TestRemoveStaleObservers` — old observer removed, recent observer
kept
- [x] `TestRemoveStaleObserversNone` — empty store, no errors
- [x] `TestRemoveStaleObserversKeepForever` — `-1` keeps even year-old
observers
- [x] `TestRemoveStaleObserversDefault` — `0` defaults to 14 days
- [x] `TestObserverDaysOrDefault` (ingestor) —
nil/zero/positive/keep-forever
- [x] `TestObserverDaysOrDefault` (server) —
nil/zero/positive/keep-forever
- [x] Both binaries compile cleanly (`go build`)
- [ ] Manual: verify observer count decreases after retention period on
a live instance
# CoreScope

High-performance mesh network analyzer powered by Go. Sub-millisecond packet queries, ~300 MB memory for 56K+ packets, real-time WebSocket broadcast, full channel decryption.

Self-hosted, open-source MeshCore packet analyzer. Collects MeshCore packets via MQTT, decodes them in real time, and presents a full web UI with live packet feed, interactive maps, channel chat, packet tracing, and per-node analytics.

## ⚡ Performance
The Go backend serves all 40+ API endpoints from an in-memory packet store with 5 indexes (hash, txID, obsID, observer, node). SQLite is for persistence only — reads never touch disk.
| Metric | Value |
|---|---|
| Packet queries | < 1 ms (in-memory) |
| All API endpoints | < 100 ms |
| Memory (56K packets) | ~300 MB (vs 1.3 GB on Node.js) |
| WebSocket broadcast | Real-time to all connected browsers |
| Channel decryption | AES-128-ECB with rainbow table |
See PERFORMANCE.md for full benchmarks.
## ✨ Features

### 📡 Live Trace Map

Real-time animated map with packet route visualization, VCR-style playback controls, and a retro LCD clock. Replay the last 24 hours of mesh activity, scrub through the timeline, or watch packets flow live at up to 4× speed.

### 📦 Packet Feed

Filterable real-time packet stream with byte-level breakdown, Excel-like resizable columns, and a detail pane. Toggle "My Nodes" to focus on your mesh.

### 🗺️ Network Overview

At-a-glance mesh stats — node counts, packet volume, observer coverage.

### 📊 Node Analytics

Per-node deep dive with interactive charts: activity timeline, packet type breakdown, SNR distribution, hop count analysis, peer network graph, and hourly heatmap.

### 💬 Channel Chat

Decoded group messages with sender names, @mentions, timestamps — like reading a Discord channel for your mesh.

### 📱 Mobile Ready

Full experience on your phone — proper touch controls, iOS safe area support, and a compact VCR bar.

### And More
- 11 Analytics Tabs — RF, topology, channels, hash stats, distance, route patterns, and more
- Node Directory — searchable list with role tabs, detail panel, QR codes, advert timeline
- Packet Tracing — follow individual packets across observers with SNR/RSSI timeline
- Observer Status — health monitoring, packet counts, uptime, per-observer analytics
- Hash Collision Matrix — detect address collisions across the mesh
- Channel Key Auto-Derivation — hashtag channel (`#channel`) keys derived via SHA256
- Multi-Broker MQTT — connect to multiple brokers with per-source IATA filtering
- Dark / Light Mode — auto-detects system preference, map tiles swap too
- Theme Customizer — design your theme in-browser, export as `theme.json`
- Global Search — search packets, nodes, and channels (Ctrl+K)
- Shareable URLs — deep links to packets, channels, and observer detail pages
- Protobuf API Contract — typed API definitions in `proto/`
- Accessible — ARIA patterns, keyboard navigation, screen reader support
## Quick Start

### Pre-built Image (Recommended)

No build step required — just run:

```bash
docker run -d --name corescope \
  --restart=unless-stopped \
  -p 80:80 -p 1883:1883 \
  -v /your/data:/app/data \
  ghcr.io/kpa-clawbot/corescope:latest
```

Open http://localhost — done. No config file needed; CoreScope starts with sensible defaults.

For HTTPS with a custom domain, add `-p 443:443` and mount your Caddyfile:

```bash
docker run -d --name corescope \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 -p 1883:1883 \
  -v /your/data:/app/data \
  -v /your/Caddyfile:/etc/caddy/Caddyfile:ro \
  -v /your/caddy-data:/data/caddy \
  ghcr.io/kpa-clawbot/corescope:latest
```

Disable built-in services with `-e DISABLE_MOSQUITTO=true` or `-e DISABLE_CADDY=true`, or drop a `.env` file in your data volume. See docs/deployment.md for the full reference.
### Build from Source

```bash
git clone https://github.com/Kpa-clawbot/CoreScope.git
cd CoreScope
./manage.sh setup
```

The setup wizard walks you through config, domain, HTTPS, build, and run.

```bash
./manage.sh status     # Health check + packet/node counts
./manage.sh logs       # Follow logs
./manage.sh backup     # Backup database
./manage.sh update     # Pull latest + rebuild + restart
./manage.sh mqtt-test  # Check if observer data is flowing
./manage.sh help       # All commands
```
## Configure

Copy `config.example.json` to `config.json` and edit:

```json
{
  "port": 3000,
  "mqtt": {
    "broker": "mqtt://localhost:1883",
    "topic": "meshcore/+/+/packets"
  },
  "mqttSources": [
    {
      "name": "remote-feed",
      "broker": "mqtts://remote-broker:8883",
      "topics": ["meshcore/+/+/packets"],
      "username": "user",
      "password": "pass",
      "iataFilter": ["SJC", "SFO", "OAK"]
    }
  ],
  "channelKeys": {
    "public": "8b3387e9c5cdea6ac9e5edbaa115cd72"
  },
  "defaultRegion": "SJC"
}
```
| Field | Description |
|---|---|
| `port` | HTTP server port (default: 3000) |
| `mqtt.broker` | Local MQTT broker URL (`""` to disable) |
| `mqttSources` | External MQTT broker connections (optional) |
| `channelKeys` | Channel decryption keys (hex). Hashtag channels auto-derived via SHA256 |
| `defaultRegion` | Default IATA region code for the UI |
| `dbPath` | SQLite database path (default: `data/meshcore.db`) |
### Environment Variables

| Variable | Description |
|---|---|
| `PORT` | Override config port |
| `DB_PATH` | Override SQLite database path |
## Architecture

```text
                          ┌─────────────────────────────────────────────┐
                          │              Docker Container               │
                          │                                             │
Observer → USB →          │  Mosquitto ──→ Go Ingestor ──→ SQLite DB    │
meshcoretomqtt → MQTT ──→ │                                             │
                          │        Go HTTP Server ──→ WebSocket         │
                          │              │                │             │
                          │        Caddy (HTTPS) ←────────┘             │
                          └──────────────────────┼──────────────────────┘
                                                 │
                                              Browser
```

**Two-process model:** The Go ingestor handles MQTT ingestion and packet decoding. The Go HTTP server loads all packets into an in-memory store on startup (5 indexes for fast lookups) and serves the REST API + WebSocket broadcast. Both are managed by supervisord inside a single container with Caddy for HTTPS and Mosquitto for local MQTT.
## MQTT Setup

- Flash an observer node with the `MESH_PACKET_LOGGING=1` build flag
- Connect via USB to a host running meshcoretomqtt
- Configure meshcoretomqtt with your IATA region code and MQTT broker address
- Packets appear on topic `meshcore/{IATA}/{PUBKEY}/packets`

Or POST raw hex packets to `/api/packets` for manual injection.
## Project Structure

```text
corescope/
├── cmd/
│   ├── server/              # Go HTTP server + WebSocket + REST API
│   │   ├── main.go          # Entry point
│   │   ├── routes.go        # 40+ API endpoint handlers
│   │   ├── store.go         # In-memory packet store (5 indexes)
│   │   ├── db.go            # SQLite persistence layer
│   │   ├── decoder.go       # MeshCore packet decoder
│   │   ├── websocket.go     # WebSocket broadcast
│   │   └── *_test.go        # 327 test functions
│   └── ingestor/            # Go MQTT ingestor
│       ├── main.go          # MQTT subscription + packet processing
│       ├── decoder.go       # Packet decoder (shared logic)
│       ├── db.go            # SQLite write path
│       └── *_test.go        # 53 test functions
├── proto/                   # Protobuf API definitions
├── public/                  # Vanilla JS frontend (no build step)
│   ├── index.html           # SPA shell
│   ├── app.js               # Router, WebSocket, utilities
│   ├── packets.js           # Packet feed + hex breakdown
│   ├── map.js               # Leaflet map + route visualization
│   ├── live.js              # Live trace + VCR playback
│   ├── channels.js          # Channel chat
│   ├── nodes.js             # Node directory + detail views
│   ├── analytics.js         # 11-tab analytics dashboard
│   └── style.css            # CSS variable theming (light/dark)
├── docker/
│   ├── supervisord-go.conf  # Process manager (server + ingestor)
│   ├── mosquitto.conf       # MQTT broker config
│   ├── Caddyfile            # Reverse proxy + HTTPS
│   └── entrypoint-go.sh     # Container entrypoint
├── Dockerfile               # Multi-stage Go build + Alpine runtime
├── config.example.json      # Example configuration
├── test-*.js                # Node.js test suite (frontend + legacy)
└── tools/                   # Generators, E2E tests, utilities
```
## For Developers

### Test Suite

380 Go tests covering the backend, plus 150+ Node.js tests for the frontend and legacy logic, plus 49 Playwright E2E tests for browser validation.

```bash
# Go backend tests
cd cmd/server && go test ./... -v
cd cmd/ingestor && go test ./... -v

# Node.js frontend + integration tests
npm test

# Playwright E2E (requires running server on localhost:3000)
node test-e2e-playwright.js
```

### Generate Test Data

```bash
node tools/generate-packets.js --api --count 200
```
## Migrating from Node.js

If you're running an existing Node.js deployment, see docs/go-migration.md for a step-by-step guide. The Go engine reads the same SQLite database and `config.json` — no data migration needed.
## Contributing

Contributions welcome. Please read AGENTS.md for coding conventions, testing requirements, and engineering principles before submitting a PR.

**Live instance:** analyzer.00id.net — all API endpoints are public, no auth required.

**API documentation:** CoreScope auto-generates an OpenAPI 3.0 spec. Browse the interactive Swagger UI at `/api/docs` or fetch the machine-readable spec at `/api/spec`.
## License

MIT