Compare commits

...

15 Commits

Author SHA1 Message Date
KpaBap
d538d2f3e7 Merge branch 'master' into rename/corescope-migration 2026-03-28 16:21:57 -07:00
Kpa-clawbot
5f5eae07b0 Merge pull request #222 from efiten/pr/perf-fix
perf: eliminate O(n) slice prepend on every packet ingest
2026-03-28 16:01:08 -07:00
efiten
380b1b1e28 fix: address review — observation ordering, stale comments, affected query functions
- Load() SQL: keep o.timestamp DESC (consistent with IngestNewFromDB) so
  pickBestObservation tie-breaking is identical on both load paths
- GetTimestamps: scan from tail instead of head (was breaking on first item
  assuming it was the newest, now correctly reads from newest end)
- QueryMultiNodePackets: apply same DESC/ASC tail-read pagination as
  QueryPackets (was sorting for ASC and assuming DESC as-is)
- GetNodeHealth recentPackets: read from tail to return 20 newest items
  (was reading from head = 20 oldest items)
- Remove stale "Prepend (newest first)" comments, replace with accurate
  "oldest-first; new items go to tail" wording

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:54:40 -07:00
efiten
03cfd114da perf: eliminate O(n) slice prepend on every packet ingest
s.packets and s.byPayloadType[t] were prepended on every new packet
to maintain newest-first order, copying the entire slice each time.
With 2-3M packets in memory this meant ~24MB of pointer copies per
ingest cycle, causing sustained high CPU and GC pressure.

Fix: store both slices oldest-first (append to tail). Load() SQL
changed to ASC ordering. QueryPackets DESC pagination now reads from
the tail in O(page_size) with no sort; GetChannelMessages switches
from reverse-iteration to forward-iteration.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-28 15:54:40 -07:00
Kpa-clawbot
df90de77a7 Merge pull request #219 from Kpa-clawbot/fix/hashchannels-derivation
fix: port hashChannels key derivation to Go ingestor (fixes #218)
2026-03-28 15:34:43 -07:00
copilot-swe-agent[bot]
7b97c532a1 test: fix env isolation and comment accuracy in channel key tests
Agent-Logs-Url: https://github.com/Kpa-clawbot/meshcore-analyzer/sessions/38b3e96f-861b-4929-8134-b1b9de39a7fc

Co-authored-by: KpaBap <746025+KpaBap@users.noreply.github.com>
2026-03-28 15:27:26 -07:00
Kpa-clawbot
e0c2d37041 fix: port hashChannels key derivation to Go ingestor (fixes #218)
Add HashChannels config field and deriveHashtagChannelKey() to the Go
ingestor, matching the Node.js server-helpers.js algorithm:
SHA-256(channelName) -> first 32 hex chars (16 bytes AES-128 key).

Merge priority preserved: rainbow (lowest) -> derived -> explicit (highest).

Tests include cross-language vectors validated against Node.js output
and merge priority / normalization / skip-explicit coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 15:27:26 -07:00
Kpa-clawbot
f5d0ce066b refactor: remove packets_v SQL fallbacks — store handles all queries (#220)
* refactor: remove all packets_v SQL fallbacks — store handles all queries

Remove DB fallback paths from all route handlers. The in-memory
PacketStore now handles all packet/node/analytics queries. Handlers
return empty results or 404 when no store is available instead of
falling back to direct DB queries.

- Remove else-DB branches from handlePacketDetail, handleNodeHealth,
  handleNodeAnalytics, handleBulkHealth, handlePacketTimestamps, etc.
- Remove unused DB methods (GetPacketByHash, GetTransmissionByID,
  GetPacketByID, GetObservationsForHash, GetTimestamps, GetNodeHealth,
  GetNodeAnalytics, GetBulkHealth, etc.)
- Remove packets_v VIEW creation from schema
- Update tests for new behavior (no-store returns 404/empty, not 500)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: address PR #220 review comments

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: KpaBap <kpabap@gmail.com>
2026-03-28 15:25:56 -07:00
Kpa-clawbot
202d0d87d7 ci: Add pull_request trigger to CI workflow
- Add pull_request trigger for PRs against master
- Add "if: github.event_name == 'push'" to build/deploy/publish jobs
- Test jobs (go-test, node-test) now run on both push and PRs
- Build/deploy/publish only run on push to master

This fixes the chicken-and-egg problem where branch protection requires
CI checks but CI doesn't run on PRs. Now PRs get test validation before
merge while keeping production deployments only on master pushes.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 15:15:35 -07:00
Kpa-clawbot
99d2e67eb1 Rename Phase 1: MeshCore Analyzer -> CoreScope (backend + infra)
Reviewed by Kobayashi (gpt-5.3-codex). All comments addressed.
2026-03-28 14:45:24 -07:00
KpaBap
8a458c7c2a Merge pull request #227 from Kpa-clawbot/rename/corescope-frontend
rename: MeshCore Analyzer → CoreScope (frontend + .squad)
2026-03-28 14:39:06 -07:00
Kpa-clawbot
66b3c05da3 fix: remove stray backtick in template literal
Fixes malformed template literal in test assertion message that would cause a syntax error.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 14:37:27 -07:00
Kpa-clawbot
71ec5e6fca rename: MeshCore Analyzer → CoreScope (frontend + .squad)
Phase 1 of the CoreScope rename — frontend display strings and
squad agent metadata only.

index.html:
- <title>, og:title, twitter:title → CoreScope
- Brand text span → CoreScope
- og:image/twitter:image URLs → corescope repo (placeholder)
- Cache busters bumped

public/*.js headers (19 files):
- All file header comments updated

public/*.css headers:
- style.css, home.css updated

JavaScript strings:
- app.js: GitHub URL → corescope
- home.js: 3 fallback siteName references
- customize.js: default siteName + heroTitle

Tests:
- test-e2e-playwright.js: title assertion → corescope
- test-frontend-helpers.js: GitHub URL constant
- benchmark.js: header string
- test-all.sh: header string

.squad:
- team.md, casting/history.json
- All 7 agent charters + 5 history files

NOT renamed (intentional):
- localStorage keys (meshcore-*)
- CSS classes (.meshcore-marker)
- Window globals (_meshcore*)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 14:03:32 -07:00
Kpa-clawbot
1453fb6492 docs: add CoreScope rename migration guide
Documents what existing users need to update when the rename
from MeshCore Analyzer to CoreScope lands:
- Git remote URL update
- Docker image/container name changes
- Config branding.siteName (if customized)
- CI/CD references (if applicable)
- Confirms data dirs, MQTT, browser state unchanged

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 13:51:41 -07:00
Kpa-clawbot
5cc6064e11 fix: Dockerfile .git-commit COPY fails on legacy builder — use RUN default
The glob trick COPY .git-commi[t] only works with BuildKit.
manage.sh uses legacy docker build. Just create a default via RUN.
Commit hash comes through --build-arg ldflags anyway.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-28 13:36:37 -07:00
50 changed files with 777 additions and 1370 deletions

View File

@@ -8,6 +8,13 @@ on:
- 'LICENSE'
- '.gitignore'
- 'docs/**'
pull_request:
branches: [master]
paths-ignore:
- '**.md'
- 'LICENSE'
- '.gitignore'
- 'docs/**'
concurrency:
group: deploy
@@ -270,6 +277,7 @@ jobs:
# ───────────────────────────────────────────────────────────────
build:
name: "🏗️ Build Docker Image"
if: github.event_name == 'push'
needs: [go-test]
runs-on: self-hosted
steps:
@@ -294,6 +302,7 @@ jobs:
# ───────────────────────────────────────────────────────────────
deploy:
name: "🚀 Deploy Staging"
if: github.event_name == 'push'
needs: [build]
runs-on: self-hosted
steps:
@@ -338,6 +347,7 @@ jobs:
# ───────────────────────────────────────────────────────────────
publish:
name: "📝 Publish Badges & Summary"
if: github.event_name == 'push'
needs: [deploy]
runs-on: self-hosted
steps:

View File

@@ -1,10 +1,10 @@
# Bishop — Tester
-Unit tests, Playwright E2E, coverage gates, and quality assurance for MeshCore Analyzer.
+Unit tests, Playwright E2E, coverage gates, and quality assurance for CoreScope.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Node.js native test runner, Playwright, c8 + nyc (coverage), supertest
**User:** User

View File

@@ -2,7 +2,7 @@
## Project Context
-MeshCore Analyzer has 14 test files, 4,290 lines of test code. Backend coverage 85%+, frontend 42%+. Tests use Node.js native runner, Playwright for E2E, c8/nyc for coverage, supertest for API routes. vm.createContext pattern used for testing frontend helpers in Node.js.
+CoreScope has 14 test files, 4,290 lines of test code. Backend coverage 85%+, frontend 42%+. Tests use Node.js native runner, Playwright for E2E, c8/nyc for coverage, supertest for API routes. vm.createContext pattern used for testing frontend helpers in Node.js.
User: User

View File

@@ -1,10 +1,10 @@
# Hicks — Backend Dev
-Server, decoder, packet-store, SQLite, API, MQTT, WebSocket, and performance for MeshCore Analyzer.
+Server, decoder, packet-store, SQLite, API, MQTT, WebSocket, and performance for CoreScope.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Node.js 18+, Express 5, SQLite (better-sqlite3), MQTT (mqtt), WebSocket (ws)
**User:** User

View File

@@ -2,7 +2,7 @@
## Project Context
-MeshCore Analyzer is a real-time LoRa mesh packet analyzer. Node.js + Express + SQLite backend, vanilla JS SPA frontend. Custom decoder.js fixes path_length bug from upstream library. In-memory packet store provides O(1) lookups for 30K+ packets. TTL response cache achieves 7,000× speedup on bulk health endpoint.
+CoreScope is a real-time LoRa mesh packet analyzer. Node.js + Express + SQLite backend, vanilla JS SPA frontend. Custom decoder.js fixes path_length bug from upstream library. In-memory packet store provides O(1) lookups for 30K+ packets. TTL response cache achieves 7,000× speedup on bulk health endpoint.
User: User

View File

@@ -1,10 +1,10 @@
# Kobayashi — Lead
-Architecture, code review, and decision-making for MeshCore Analyzer.
+Architecture, code review, and decision-making for CoreScope.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Node.js 18+, Express 5, SQLite, vanilla JS frontend, Leaflet, WebSocket, MQTT
**User:** User

View File

@@ -2,7 +2,7 @@
## Project Context
-MeshCore Analyzer is a real-time LoRa mesh packet analyzer. Node.js + Express + SQLite backend, vanilla JS SPA frontend with Leaflet maps, WebSocket live feed, MQTT ingestion. Production at v2.6.0, ~18K lines, 85%+ backend test coverage.
+CoreScope is a real-time LoRa mesh packet analyzer. Node.js + Express + SQLite backend, vanilla JS SPA frontend with Leaflet maps, WebSocket live feed, MQTT ingestion. Production at v2.6.0, ~18K lines, 85%+ backend test coverage.
User: User

View File

@@ -1,10 +1,10 @@
# Newt — Frontend Dev
-Vanilla JS UI, Leaflet maps, live visualization, theming, and all public/ modules for MeshCore Analyzer.
+Vanilla JS UI, Leaflet maps, live visualization, theming, and all public/ modules for CoreScope.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Vanilla HTML/CSS/JavaScript (ES5/6), Leaflet maps, WebSocket, Canvas animations
**User:** User

View File

@@ -2,7 +2,7 @@
## Project Context
-MeshCore Analyzer is a real-time LoRa mesh packet analyzer with a vanilla JS SPA frontend. 22 frontend modules, Leaflet maps, WebSocket live feed, VCR playback, Canvas animations, theme customizer with CSS variables. No build step, no framework. ES5/6 for broad browser support.
+CoreScope is a real-time LoRa mesh packet analyzer with a vanilla JS SPA frontend. 22 frontend modules, Leaflet maps, WebSocket live feed, VCR playback, Canvas animations, theme customizer with CSS variables. No build step, no framework. ES5/6 for broad browser support.
User: User

View File

@@ -4,7 +4,7 @@ Tracks the work queue and keeps the team moving. Always on the roster.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**User:** User
## Responsibilities

View File

@@ -1,10 +1,10 @@
# Ripley — Support Engineer
-Deep knowledge of every frontend behavior, API response, and user-facing feature in MeshCore Analyzer. Fields community questions, triages bug reports, and explains "why does X look like Y."
+Deep knowledge of every frontend behavior, API response, and user-facing feature in CoreScope. Fields community questions, triages bug reports, and explains "why does X look like Y."
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Vanilla JS frontend (public/*.js), Node.js backend, SQLite, WebSocket, MQTT
**User:** Kpa-clawbot

View File

@@ -1,7 +1,7 @@
# Ripley — Support Engineer History
## Core Context
-- Project: MeshCore Analyzer — real-time LoRa mesh packet analyzer
+- Project: CoreScope — real-time LoRa mesh packet analyzer
- User: Kpa-clawbot
- Joined the team 2026-03-27 to handle community support and triage

View File

@@ -1,10 +1,10 @@
# Scribe — Session Logger
-Silent agent that maintains decisions, logs, and cross-agent context for MeshCore Analyzer.
+Silent agent that maintains decisions, logs, and cross-agent context for CoreScope.
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**User:** User
## Responsibilities

View File

@@ -5,7 +5,7 @@
"universe": "aliens",
"created_at": "2026-03-26T04:22:08Z",
"agents": ["Kobayashi", "Hicks", "Newt", "Bishop"],
-"reason": "Initial team casting for MeshCore Analyzer project"
+"reason": "Initial team casting for CoreScope project"
}
]
}

View File

@@ -1,8 +1,8 @@
-# Squad — MeshCore Analyzer
+# Squad — CoreScope
## Project Context
-**Project:** MeshCore Analyzer — Real-time LoRa mesh packet analyzer
+**Project:** CoreScope — Real-time LoRa mesh packet analyzer
**Stack:** Node.js 18+, Express 5, SQLite (better-sqlite3), vanilla JS frontend, Leaflet maps, WebSocket (ws), MQTT (mqtt)
**User:** User
**Description:** Self-hosted alternative to analyzer.letsmesh.net. Ingests MeshCore mesh network packets via MQTT, decodes with custom parser (decoder.js), stores in SQLite with in-memory indexing (packet-store.js), and serves a rich SPA with live visualization, packet analysis, node analytics, channel chat, observer health, and theme customizer. ~18K lines, 14 test files, 85%+ backend coverage. Production at v2.6.0.

View File

@@ -148,7 +148,7 @@ async function benchmarkEndpoints(port, endpoints, nocache = false) {
}
async function run() {
-console.log(`\nMeshCore Analyzer Benchmark — ${RUNS} runs per endpoint`);
+console.log(`\nCoreScope Benchmark — ${RUNS} runs per endpoint`);
console.log('Launching servers...\n');
// Launch both servers

View File

@@ -26,13 +26,14 @@ type MQTTLegacy struct {
// Config holds the ingestor configuration, compatible with the Node.js config.json format.
type Config struct {
-DBPath string `json:"dbPath"`
-MQTT *MQTTLegacy `json:"mqtt,omitempty"`
-MQTTSources []MQTTSource `json:"mqttSources,omitempty"`
-LogLevel string `json:"logLevel,omitempty"`
-ChannelKeysPath string `json:"channelKeysPath,omitempty"`
-ChannelKeys map[string]string `json:"channelKeys,omitempty"`
-Retention *RetentionConfig `json:"retention,omitempty"`
+DBPath string `json:"dbPath"`
+MQTT *MQTTLegacy `json:"mqtt,omitempty"`
+MQTTSources []MQTTSource `json:"mqttSources,omitempty"`
+LogLevel string `json:"logLevel,omitempty"`
+ChannelKeysPath string `json:"channelKeysPath,omitempty"`
+ChannelKeys map[string]string `json:"channelKeys,omitempty"`
+HashChannels []string `json:"hashChannels,omitempty"`
+Retention *RetentionConfig `json:"retention,omitempty"`
}
// RetentionConfig controls how long stale nodes are kept before being moved to inactive_nodes.

View File

@@ -512,34 +512,64 @@ func firstNonEmpty(vals ...string) string {
return ""
}
+// deriveHashtagChannelKey derives an AES-128 key from a channel name.
+// Same algorithm as Node.js: SHA-256(channelName) → first 32 hex chars (16 bytes).
+func deriveHashtagChannelKey(channelName string) string {
+h := sha256.Sum256([]byte(channelName))
+return hex.EncodeToString(h[:16])
+}
// loadChannelKeys loads channel decryption keys from config and/or a JSON file.
// Priority: CHANNEL_KEYS_PATH env var > cfg.ChannelKeysPath > channel-rainbow.json next to config.
+// Merge priority: rainbow (lowest) → derived from hashChannels → explicit config (highest).
func loadChannelKeys(cfg *Config, configPath string) map[string]string {
keys := make(map[string]string)
-// Determine file path for rainbow keys
+// 1. Rainbow table keys (lowest priority)
keysPath := os.Getenv("CHANNEL_KEYS_PATH")
if keysPath == "" {
keysPath = cfg.ChannelKeysPath
}
if keysPath == "" {
// Default: look for channel-rainbow.json next to config file
keysPath = filepath.Join(filepath.Dir(configPath), "channel-rainbow.json")
}
+rainbowCount := 0
if data, err := os.ReadFile(keysPath); err == nil {
var fileKeys map[string]string
if err := json.Unmarshal(data, &fileKeys); err == nil {
for k, v := range fileKeys {
keys[k] = v
}
-log.Printf("Loaded %d channel keys from %s", len(fileKeys), keysPath)
+rainbowCount = len(fileKeys)
+log.Printf("Loaded %d channel keys from %s", rainbowCount, keysPath)
} else {
log.Printf("Warning: failed to parse channel keys file %s: %v", keysPath, err)
}
}
-// Merge inline config keys (override file keys)
+// 2. Derived keys from hashChannels (middle priority)
+derivedCount := 0
+for _, raw := range cfg.HashChannels {
+trimmed := strings.TrimSpace(raw)
+if trimmed == "" {
+continue
+}
+channelName := trimmed
+if !strings.HasPrefix(channelName, "#") {
+channelName = "#" + channelName
+}
+// Skip if explicit config already has this key
+if _, exists := cfg.ChannelKeys[channelName]; exists {
+continue
+}
+keys[channelName] = deriveHashtagChannelKey(channelName)
+derivedCount++
+}
+if derivedCount > 0 {
+log.Printf("[channels] %d derived from hashChannels", derivedCount)
+}
+// 3. Explicit config keys (highest priority — overrides rainbow + derived)
for k, v := range cfg.ChannelKeys {
keys[k] = v
}

View File

@@ -3,6 +3,8 @@ package main
import (
"encoding/json"
"math"
"os"
"path/filepath"
"testing"
"time"
)
@@ -492,3 +494,132 @@ func TestAdvertRole(t *testing.T) {
})
}
}
func TestDeriveHashtagChannelKey(t *testing.T) {
// Test vectors validated against Node.js server-helpers.js
tests := []struct {
name string
want string
}{
{"#General", "649af2cab73ed5a890890a5485a0c004"},
{"#test", "9cd8fcf22a47333b591d96a2b848b73f"},
{"#MeshCore", "dcf73f393fa217f6b28fcec6ffc411ad"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := deriveHashtagChannelKey(tt.name)
if got != tt.want {
t.Errorf("deriveHashtagChannelKey(%q) = %q, want %q", tt.name, got, tt.want)
}
})
}
// Deterministic
k1 := deriveHashtagChannelKey("#foo")
k2 := deriveHashtagChannelKey("#foo")
if k1 != k2 {
t.Error("deriveHashtagChannelKey should be deterministic")
}
// Returns 32-char hex string (16 bytes)
if len(k1) != 32 {
t.Errorf("key length = %d, want 32", len(k1))
}
// Different inputs → different keys
k3 := deriveHashtagChannelKey("#bar")
if k1 == k3 {
t.Error("different inputs should produce different keys")
}
}
func TestLoadChannelKeysMergePriority(t *testing.T) {
dir := t.TempDir()
cfgPath := filepath.Join(dir, "config.json")
// Create a rainbow file with two keys: #rainbow (unique) and #override (to be overridden)
rainbowPath := filepath.Join(dir, "channel-rainbow.json")
t.Setenv("CHANNEL_KEYS_PATH", rainbowPath)
rainbow := map[string]string{
"#rainbow": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"#override": "rainbow_value_should_be_overridden",
}
rainbowJSON, err := json.Marshal(rainbow)
if err != nil {
t.Fatal(err)
}
if err := os.WriteFile(rainbowPath, rainbowJSON, 0o644); err != nil {
t.Fatal(err)
}
cfg := &Config{
HashChannels: []string{"General", "#override"},
ChannelKeys: map[string]string{"#override": "explicit_wins"},
}
keys := loadChannelKeys(cfg, cfgPath)
// Rainbow key loaded
if keys["#rainbow"] != "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" {
t.Errorf("rainbow key missing or wrong: %q", keys["#rainbow"])
}
// HashChannels derived #General
expected := deriveHashtagChannelKey("#General")
if keys["#General"] != expected {
t.Errorf("#General = %q, want %q (derived)", keys["#General"], expected)
}
// Explicit config wins over both rainbow and derived
if keys["#override"] != "explicit_wins" {
t.Errorf("#override = %q, want explicit_wins", keys["#override"])
}
}
func TestLoadChannelKeysHashChannelsNormalization(t *testing.T) {
t.Setenv("CHANNEL_KEYS_PATH", "")
dir := t.TempDir()
cfgPath := filepath.Join(dir, "config.json")
cfg := &Config{
HashChannels: []string{
"NoPound", // should become #NoPound
"#HasPound", // stays #HasPound
" Spaced ", // trimmed → #Spaced
"", // skipped
},
}
keys := loadChannelKeys(cfg, cfgPath)
if _, ok := keys["#NoPound"]; !ok {
t.Error("should derive key for #NoPound (auto-prefixed)")
}
if _, ok := keys["#HasPound"]; !ok {
t.Error("should derive key for #HasPound")
}
if _, ok := keys["#Spaced"]; !ok {
t.Error("should derive key for #Spaced (trimmed)")
}
if len(keys) != 3 {
t.Errorf("expected 3 keys, got %d", len(keys))
}
}
func TestLoadChannelKeysSkipExplicit(t *testing.T) {
t.Setenv("CHANNEL_KEYS_PATH", "")
dir := t.TempDir()
cfgPath := filepath.Join(dir, "config.json")
cfg := &Config{
HashChannels: []string{"General"},
ChannelKeys: map[string]string{"#General": "my_explicit_key"},
}
keys := loadChannelKeys(cfg, cfgPath)
// Explicit key should win — hashChannels derivation should be skipped
if keys["#General"] != "my_explicit_key" {
t.Errorf("#General = %q, want my_explicit_key", keys["#General"])
}
}

View File

@@ -46,14 +46,6 @@ func setupTestDBv2(t *testing.T) *DB {
observer_id TEXT, observer_name TEXT, direction TEXT,
snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp INTEGER NOT NULL
);
CREATE VIEW packets_v AS
SELECT o.id, t.raw_hex,
strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch') AS timestamp,
o.observer_id, o.observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
t.payload_type, t.payload_version, o.path_json, t.decoded_json, t.created_at
FROM observations o
JOIN transmissions t ON t.id = o.transmission_id;
`
if _, err := conn.Exec(schema); err != nil {
t.Fatal(err)
@@ -551,8 +543,8 @@ func TestHandlePacketDetailNoStore(t *testing.T) {
req := httptest.NewRequest("GET", "/api/packets/abc123def4567890", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
-if w.Code != 200 {
-t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
+if w.Code != 404 {
+t.Fatalf("expected 404 (no store), got %d: %s", w.Code, w.Body.String())
}
})
@@ -560,8 +552,8 @@ func TestHandlePacketDetailNoStore(t *testing.T) {
req := httptest.NewRequest("GET", "/api/packets/1", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
-if w.Code != 200 {
-t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
+if w.Code != 404 {
+t.Fatalf("expected 404 (no store), got %d: %s", w.Code, w.Body.String())
}
})
@@ -1475,8 +1467,8 @@ func TestHandleObserverAnalyticsNoStore(t *testing.T) {
req := httptest.NewRequest("GET", "/api/observers/obs1/analytics", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
-if w.Code != 200 {
-t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
+if w.Code != 503 {
+t.Fatalf("expected 503, got %d: %s", w.Code, w.Body.String())
}
}
@@ -3272,20 +3264,6 @@ func TestHandlePacketDetailWithStoreAllPaths(t *testing.T) {
// --- Additional DB function coverage ---
func TestDBGetTimestamps(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
ts, err := db.GetTimestamps("2000-01-01")
if err != nil {
t.Fatal(err)
}
if len(ts) < 1 {
t.Error("expected >=1 timestamps")
}
}
func TestDBGetNewTransmissionsSince(t *testing.T) {
db := setupTestDB(t)
defer db.Close()

View File

@@ -120,14 +120,14 @@ func (db *DB) scanTransmissionRow(rows *sql.Rows) map[string]interface{} {
// Node represents a row from the nodes table.
type Node struct {
-PublicKey string `json:"public_key"`
-Name *string `json:"name"`
-Role *string `json:"role"`
-Lat *float64 `json:"lat"`
-Lon *float64 `json:"lon"`
-LastSeen *string `json:"last_seen"`
-FirstSeen *string `json:"first_seen"`
-AdvertCount int `json:"advert_count"`
+PublicKey string `json:"public_key"`
+Name *string `json:"name"`
+Role *string `json:"role"`
+Lat *float64 `json:"lat"`
+Lon *float64 `json:"lon"`
+LastSeen *string `json:"last_seen"`
+FirstSeen *string `json:"first_seen"`
+AdvertCount int `json:"advert_count"`
BatteryMv *int `json:"battery_mv"`
TemperatureC *float64 `json:"temperature_c"`
}
@@ -162,7 +162,7 @@ type Transmission struct {
CreatedAt *string `json:"created_at"`
}
-// Observation (from packets_v view).
+// Observation (observation-level data).
type Observation struct {
ID int `json:"id"`
RawHex *string `json:"raw_hex"`
@@ -435,7 +435,7 @@ func (db *DB) QueryGroupedPackets(q PacketQuery) (*PacketResult, error) {
w = "WHERE " + strings.Join(where, " AND ")
}
-// Count total transmissions (fast — queries transmissions directly, not packets_v)
+// Count total transmissions (fast — queries transmissions directly, not a VIEW)
var total int
if len(where) == 0 {
db.conn.QueryRow("SELECT COUNT(*) FROM transmissions").Scan(&total)
@@ -628,18 +628,6 @@ func (db *DB) resolveNodePubkey(nodeIDOrName string) string {
return pk
}
// GetPacketByID fetches a single packet/observation.
func (db *DB) GetPacketByID(id int) (map[string]interface{}, error) {
rows, err := db.conn.Query("SELECT id, raw_hex, timestamp, observer_id, observer_name, direction, snr, rssi, score, hash, route_type, payload_type, payload_version, path_json, decoded_json, created_at FROM packets_v WHERE id = ?", id)
if err != nil {
return nil, err
}
defer rows.Close()
if rows.Next() {
return scanPacketRow(rows), nil
}
return nil, nil
}
// GetTransmissionByID fetches from transmissions table with observer data.
func (db *DB) GetTransmissionByID(id int) (map[string]interface{}, error) {
@@ -673,24 +661,6 @@ func (db *DB) GetPacketByHash(hash string) (map[string]interface{}, error) {
return nil, nil
}
// GetObservationsForHash returns all observations for a given hash.
func (db *DB) GetObservationsForHash(hash string) ([]map[string]interface{}, error) {
rows, err := db.conn.Query(`SELECT id, raw_hex, timestamp, observer_id, observer_name, direction, snr, rssi, score, hash, route_type, payload_type, payload_version, path_json, decoded_json, created_at
FROM packets_v WHERE hash = ? ORDER BY timestamp DESC`, strings.ToLower(hash))
if err != nil {
return nil, err
}
defer rows.Close()
result := make([]map[string]interface{}, 0)
for rows.Next() {
p := scanPacketRow(rows)
if p != nil {
result = append(result, p)
}
}
return result, nil
}
// GetNodes returns filtered, paginated node list.
func (db *DB) GetNodes(limit, offset int, role, search, before, lastHeard, sortBy, region string) ([]map[string]interface{}, int, map[string]int, error) {
@@ -798,30 +768,6 @@ func (db *DB) GetNodeByPubkey(pubkey string) (map[string]interface{}, error) {
return nil, nil
}
// GetRecentPacketsForNode returns recent packets referencing a node.
func (db *DB) GetRecentPacketsForNode(pubkey string, name string, limit int) ([]map[string]interface{}, error) {
if limit <= 0 {
limit = 20
}
pk := "%" + pubkey + "%"
np := "%" + name + "%"
rows, err := db.conn.Query(`SELECT id, raw_hex, timestamp, observer_id, observer_name, direction, snr, rssi, score, hash, route_type, payload_type, payload_version, path_json, decoded_json, created_at
FROM packets_v WHERE decoded_json LIKE ? OR decoded_json LIKE ?
ORDER BY timestamp DESC LIMIT ?`, pk, np, limit)
if err != nil {
return nil, err
}
defer rows.Close()
packets := make([]map[string]interface{}, 0)
for rows.Next() {
p := scanPacketRow(rows)
if p != nil {
packets = append(packets, p)
}
}
return packets, nil
}
// GetRecentTransmissionsForNode returns recent transmissions referencing a node (Node.js-compatible shape).
func (db *DB) GetRecentTransmissionsForNode(pubkey string, name string, limit int) ([]map[string]interface{}, error) {
@@ -1045,103 +991,6 @@ func (db *DB) GetDistinctIATAs() ([]string, error) {
return codes, nil
}
// GetNodeHealth returns health info for a node (observers, stats, recent packets).
func (db *DB) GetNodeHealth(pubkey string) (map[string]interface{}, error) {
node, err := db.GetNodeByPubkey(pubkey)
if err != nil || node == nil {
return nil, err
}
name := ""
if n, ok := node["name"]; ok && n != nil {
name = fmt.Sprintf("%v", n)
}
pk := "%" + pubkey + "%"
np := "%" + name + "%"
whereClause := "decoded_json LIKE ? OR decoded_json LIKE ?"
if name == "" {
whereClause = "decoded_json LIKE ?"
np = pk
}
todayStart := time.Now().UTC().Truncate(24 * time.Hour).Format(time.RFC3339)
// Observers
observerSQL := fmt.Sprintf(`SELECT observer_id, observer_name, AVG(snr) as avgSnr, AVG(rssi) as avgRssi, COUNT(*) as packetCount
FROM packets_v WHERE (%s) AND observer_id IS NOT NULL GROUP BY observer_id ORDER BY packetCount DESC`, whereClause)
oRows, err := db.conn.Query(observerSQL, pk, np)
if err != nil {
return nil, err
}
defer oRows.Close()
observers := make([]map[string]interface{}, 0)
for oRows.Next() {
var obsID, obsName sql.NullString
var avgSnr, avgRssi sql.NullFloat64
var pktCount int
oRows.Scan(&obsID, &obsName, &avgSnr, &avgRssi, &pktCount)
observers = append(observers, map[string]interface{}{
"observer_id": nullStr(obsID),
"observer_name": nullStr(obsName),
"avgSnr": nullFloat(avgSnr),
"avgRssi": nullFloat(avgRssi),
"packetCount": pktCount,
})
}
// Stats
var packetsToday, totalPackets int
db.conn.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM packets_v WHERE (%s) AND timestamp > ?", whereClause), pk, np, todayStart).Scan(&packetsToday)
db.conn.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM packets_v WHERE (%s)", whereClause), pk, np).Scan(&totalPackets)
var avgSnr sql.NullFloat64
db.conn.QueryRow(fmt.Sprintf("SELECT AVG(snr) FROM packets_v WHERE (%s)", whereClause), pk, np).Scan(&avgSnr)
var lastHeard sql.NullString
db.conn.QueryRow(fmt.Sprintf("SELECT MAX(timestamp) FROM packets_v WHERE (%s)", whereClause), pk, np).Scan(&lastHeard)
// Avg hops
hRows, _ := db.conn.Query(fmt.Sprintf("SELECT path_json FROM packets_v WHERE (%s) AND path_json IS NOT NULL", whereClause), pk, np)
totalHops, hopCount := 0, 0
if hRows != nil {
defer hRows.Close()
for hRows.Next() {
var pj sql.NullString
hRows.Scan(&pj)
if pj.Valid {
var hops []interface{}
if json.Unmarshal([]byte(pj.String), &hops) == nil {
totalHops += len(hops)
hopCount++
}
}
}
}
avgHops := 0
if hopCount > 0 {
avgHops = int(math.Round(float64(totalHops) / float64(hopCount)))
}
// Recent packets
recentPackets, _ := db.GetRecentTransmissionsForNode(pubkey, name, 20)
return map[string]interface{}{
"node": node,
"observers": observers,
"stats": map[string]interface{}{
"totalTransmissions": totalPackets,
"totalObservations": totalPackets,
"totalPackets": totalPackets,
"packetsToday": packetsToday,
"avgSnr": nullFloat(avgSnr),
"avgHops": avgHops,
"lastHeard": nullStr(lastHeard),
},
"recentPackets": recentPackets,
}, nil
}
// GetNetworkStatus returns overall network health status.
func (db *DB) GetNetworkStatus(healthThresholds HealthThresholds) (map[string]interface{}, error) {
@@ -1190,10 +1039,28 @@ func (db *DB) GetNetworkStatus(healthThresholds HealthThresholds) (map[string]in
}, nil
}
// GetTraces returns observations for a hash.
// GetTraces returns observations for a hash using direct table queries.
func (db *DB) GetTraces(hash string) ([]map[string]interface{}, error) {
rows, err := db.conn.Query(`SELECT observer_id, observer_name, timestamp, snr, rssi, path_json
FROM packets_v WHERE hash = ? ORDER BY timestamp ASC`, strings.ToLower(hash))
var querySQL string
if db.isV3 {
querySQL = `SELECT obs.id AS observer_id, obs.name AS observer_name,
strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch') AS timestamp,
o.snr, o.rssi, o.path_json
FROM observations o
JOIN transmissions t ON t.id = o.transmission_id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
WHERE t.hash = ?
ORDER BY o.timestamp ASC`
} else {
querySQL = `SELECT o.observer_id, o.observer_name,
strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch') AS timestamp,
o.snr, o.rssi, o.path_json
FROM observations o
JOIN transmissions t ON t.id = o.transmission_id
WHERE t.hash = ?
ORDER BY o.timestamp ASC`
}
rows, err := db.conn.Query(querySQL, strings.ToLower(hash))
if err != nil {
return nil, err
}
@@ -1219,7 +1086,7 @@ func (db *DB) GetTraces(hash string) ([]map[string]interface{}, error) {
}
// GetChannels returns channel list from GRP_TXT packets.
// Queries transmissions directly (not packets_v) to avoid observation-level
// Queries transmissions directly (not a VIEW) to avoid observation-level
// duplicates that could cause stale lastMessage when an older message has
// a later re-observation timestamp.
func (db *DB) GetChannels() ([]map[string]interface{}, error) {
@@ -1435,31 +1302,7 @@ func (db *DB) GetChannelMessages(channelHash string, limit, offset int) ([]map[s
return messages, total, nil
}
// GetTimestamps returns packet timestamps since a given time.
func (db *DB) GetTimestamps(since string) ([]string, error) {
rows, err := db.conn.Query("SELECT timestamp FROM packets_v WHERE timestamp > ? ORDER BY timestamp ASC", since)
if err != nil {
return nil, err
}
defer rows.Close()
var timestamps []string
for rows.Next() {
var ts string
rows.Scan(&ts)
timestamps = append(timestamps, ts)
}
if timestamps == nil {
timestamps = []string{}
}
return timestamps, nil
}
// GetNodeCountsForPacket returns observation count for a hash.
func (db *DB) GetObservationCount(hash string) int {
var count int
db.conn.QueryRow("SELECT COUNT(*) FROM packets_v WHERE hash = ?", strings.ToLower(hash)).Scan(&count)
return count
}
// GetNewTransmissionsSince returns new transmissions after a given ID for WebSocket polling.
func (db *DB) GetNewTransmissionsSince(lastID int, limit int) ([]map[string]interface{}, error) {

View File

@@ -73,16 +73,6 @@ func setupTestDB(t *testing.T) *DB {
timestamp INTEGER NOT NULL
);
CREATE VIEW packets_v AS
SELECT o.id, t.raw_hex,
strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch') AS timestamp,
obs.id AS observer_id, obs.name AS observer_name,
o.direction, o.snr, o.rssi, o.score, t.hash, t.route_type,
t.payload_type, t.payload_version, o.path_json, t.decoded_json,
t.created_at
FROM observations o
JOIN transmissions t ON t.id = o.transmission_id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx;
`
if _, err := conn.Exec(schema); err != nil {
t.Fatal(err)
@@ -569,51 +559,6 @@ func TestGetNewTransmissionsSince(t *testing.T) {
}
}
func TestGetObservationsForHash(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
obs, err := db.GetObservationsForHash("abc123def4567890")
if err != nil {
t.Fatal(err)
}
if len(obs) != 2 {
t.Errorf("expected 2 observations, got %d", len(obs))
}
}
func TestGetPacketByIDFound(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
pkt, err := db.GetPacketByID(1)
if err != nil {
t.Fatal(err)
}
if pkt == nil {
t.Fatal("expected packet, got nil")
}
if pkt["hash"] != "abc123def4567890" {
t.Errorf("expected hash abc123def4567890, got %v", pkt["hash"])
}
}
func TestGetPacketByIDNotFound(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
pkt, err := db.GetPacketByID(9999)
if err != nil {
t.Fatal(err)
}
if pkt != nil {
t.Error("expected nil for nonexistent packet ID")
}
}
func TestGetTransmissionByIDFound(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -656,34 +601,6 @@ func TestGetPacketByHashNotFound(t *testing.T) {
}
}
func TestGetRecentPacketsForNode(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
packets, err := db.GetRecentPacketsForNode("aabbccdd11223344", "TestRepeater", 20)
if err != nil {
t.Fatal(err)
}
if len(packets) == 0 {
t.Error("expected packets for TestRepeater")
}
}
func TestGetRecentPacketsForNodeDefaultLimit(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
packets, err := db.GetRecentPacketsForNode("aabbccdd11223344", "TestRepeater", 0)
if err != nil {
t.Fatal(err)
}
if packets == nil {
t.Error("expected non-nil result")
}
}
func TestGetObserverIdsForRegion(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -733,46 +650,6 @@ func TestGetObserverIdsForRegion(t *testing.T) {
})
}
func TestGetNodeHealth(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
t.Run("found", func(t *testing.T) {
result, err := db.GetNodeHealth("aabbccdd11223344")
if err != nil {
t.Fatal(err)
}
if result == nil {
t.Fatal("expected result, got nil")
}
node, ok := result["node"].(map[string]interface{})
if !ok {
t.Fatal("expected node object")
}
if node["name"] != "TestRepeater" {
t.Errorf("expected TestRepeater, got %v", node["name"])
}
stats, ok := result["stats"].(map[string]interface{})
if !ok {
t.Fatal("expected stats object")
}
if stats["totalPackets"] == nil {
t.Error("expected totalPackets in stats")
}
})
t.Run("not found", func(t *testing.T) {
result, err := db.GetNodeHealth("nonexistent")
if err != nil {
t.Fatal(err)
}
if result != nil {
t.Error("expected nil for nonexistent node")
}
})
}
func TestGetChannelMessages(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -815,48 +692,6 @@ func TestGetChannelMessages(t *testing.T) {
})
}
func TestGetTimestamps(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
t.Run("with results", func(t *testing.T) {
ts, err := db.GetTimestamps("2020-01-01")
if err != nil {
t.Fatal(err)
}
if len(ts) == 0 {
t.Error("expected timestamps")
}
})
t.Run("no results", func(t *testing.T) {
ts, err := db.GetTimestamps("2099-01-01")
if err != nil {
t.Fatal(err)
}
if len(ts) != 0 {
t.Errorf("expected 0 timestamps, got %d", len(ts))
}
})
}
func TestGetObservationCount(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
count := db.GetObservationCount("abc123def4567890")
if count != 2 {
t.Errorf("expected 2, got %d", count)
}
count = db.GetObservationCount("nonexistent")
if count != 0 {
t.Errorf("expected 0 for nonexistent, got %d", count)
}
}
func TestBuildPacketWhereFilters(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -1280,29 +1115,6 @@ func TestOpenDBInvalidPath(t *testing.T) {
}
}
func TestGetNodeHealthNoName(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert a node without a name
db.conn.Exec(`INSERT INTO observers (id, name, iata) VALUES ('obs1', 'Observer One', 'SJC')`)
db.conn.Exec(`INSERT INTO nodes (public_key, role, last_seen, first_seen, advert_count)
VALUES ('deadbeef12345678', 'repeater', '2026-01-15T10:00:00Z', '2026-01-01T00:00:00Z', 5)`)
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
VALUES ('DDEE', 'deadbeefhash1234', '2026-01-15T10:05:00Z', 1, 4,
'{"pubKey":"deadbeef12345678","type":"ADVERT"}')`)
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
VALUES (1, 1, 11.0, -91, '["dd"]', 1736935500)`)
result, err := db.GetNodeHealth("deadbeef12345678")
if err != nil {
t.Fatal(err)
}
if result == nil {
t.Fatal("expected result, got nil")
}
}
func TestGetChannelMessagesObserverFallback(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
@@ -1383,20 +1195,6 @@ func TestQueryGroupedPacketsWithFilters(t *testing.T) {
}
}
func TestGetTracesEmpty(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
seedTestData(t, db)
traces, err := db.GetTraces("nonexistenthash1")
if err != nil {
t.Fatal(err)
}
if len(traces) != 0 {
t.Errorf("expected 0 traces, got %d", len(traces))
}
}
func TestNullHelpers(t *testing.T) {
// nullStr
if nullStr(sql.NullString{Valid: false}) != nil {
@@ -1474,9 +1272,11 @@ func TestNodeTelemetryFields(t *testing.T) {
db := setupTestDB(t)
defer db.Close()
// Insert node with telemetry data
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c)
VALUES ('pk_telem1', 'SensorNode', 'sensor', 37.0, -122.0, '2026-01-01T00:00:00Z', '2026-01-01T00:00:00Z', 5, 3700, 28.5)`)
// Test via GetNodeByPubkey
node, err := db.GetNodeByPubkey("pk_telem1")
if err != nil {
t.Fatal(err)
@@ -1491,6 +1291,7 @@ func TestNodeTelemetryFields(t *testing.T) {
t.Errorf("temperature_c=%v, want 28.5", node["temperature_c"])
}
// Test via GetNodes
nodes, _, _, err := db.GetNodes(50, 0, "sensor", "", "", "", "", "")
if err != nil {
t.Fatal(err)
@@ -1502,6 +1303,7 @@ func TestNodeTelemetryFields(t *testing.T) {
t.Errorf("GetNodes battery_mv=%v, want 3700", nodes[0]["battery_mv"])
}
// Test node without telemetry — fields should be nil
db.conn.Exec(`INSERT INTO nodes (public_key, name, role, last_seen, first_seen, advert_count)
VALUES ('pk_notelem', 'PlainNode', 'repeater', '2026-01-01T00:00:00Z', '2026-01-01T00:00:00Z', 3)`)
node2, _ := db.GetNodeByPubkey("pk_notelem")

File diff suppressed because it is too large

View File

@@ -18,7 +18,9 @@ func setupTestServer(t *testing.T) (*Server, *mux.Router) {
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db)
store.Load()
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
@@ -722,6 +724,9 @@ func TestNodePathsFound(t *testing.T) {
if body["paths"] == nil {
t.Error("expected paths in response")
}
if got, ok := body["totalTransmissions"].(float64); !ok || got < 1 {
t.Errorf("expected totalTransmissions >= 1, got %v", body["totalTransmissions"])
}
}
func TestNodePathsNotFound(t *testing.T) {
@@ -832,6 +837,9 @@ func TestObserverAnalytics(t *testing.T) {
if body["recentPackets"] == nil {
t.Error("expected recentPackets")
}
if recent, ok := body["recentPackets"].([]interface{}); !ok || len(recent) == 0 {
t.Errorf("expected non-empty recentPackets, got %v", body["recentPackets"])
}
})
t.Run("custom days", func(t *testing.T) {
@@ -1251,6 +1259,11 @@ func TestNodeAnalyticsNoNameNode(t *testing.T) {
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
@@ -1282,6 +1295,11 @@ func TestNodeHealthForNoNameNode(t *testing.T) {
cfg := &Config{Port: 3000}
hub := NewHub()
srv := NewServer(db, cfg, hub)
store := NewPacketStore(db)
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
srv.store = store
router := mux.NewRouter()
srv.RegisterRoutes(router)
@@ -1521,8 +1539,6 @@ func TestHandlerErrorPaths(t *testing.T) {
router := mux.NewRouter()
srv.RegisterRoutes(router)
// Drop the view to force query errors
db.conn.Exec("DROP VIEW IF EXISTS packets_v")
t.Run("stats error", func(t *testing.T) {
db.conn.Exec("DROP TABLE IF EXISTS transmissions")
@@ -1563,7 +1579,7 @@ func TestHandlerErrorTraces(t *testing.T) {
router := mux.NewRouter()
srv.RegisterRoutes(router)
db.conn.Exec("DROP VIEW IF EXISTS packets_v")
db.conn.Exec("DROP TABLE IF EXISTS observations")
req := httptest.NewRequest("GET", "/api/traces/abc123def4567890", nil)
w := httptest.NewRecorder()
@@ -1697,13 +1713,12 @@ func TestHandlerErrorTimestamps(t *testing.T) {
router := mux.NewRouter()
srv.RegisterRoutes(router)
db.conn.Exec("DROP VIEW IF EXISTS packets_v")
// Without a store, timestamps returns empty 200
req := httptest.NewRequest("GET", "/api/packets/timestamps?since=2020-01-01", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 500 {
t.Errorf("expected 500 for timestamps error, got %d", w.Code)
if w.Code != 200 {
t.Errorf("expected 200 for timestamps without store, got %d", w.Code)
}
}
@@ -1740,8 +1755,8 @@ func TestHandlerErrorBulkHealth(t *testing.T) {
req := httptest.NewRequest("GET", "/api/nodes/bulk-health", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 500 {
t.Errorf("expected 500, got %d", w.Code)
if w.Code != 200 {
t.Errorf("expected 200, got %d", w.Code)
}
}
@@ -1876,7 +1891,9 @@ func TestGetNodeHashSizeInfoFlipFlop(t *testing.T) {
db := setupTestDB(t)
seedTestData(t, db)
store := NewPacketStore(db)
store.Load()
if err := store.Load(); err != nil {
t.Fatalf("store.Load failed: %v", err)
}
pk := "abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890"
db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, role) VALUES (?, 'TestNode', 'repeater')", pk)
@@ -1934,7 +1951,17 @@ for _, field := range arrayFields {
if body[field] == nil {
t.Errorf("field %q is null, expected []", field)
}
}
}
func TestObserverAnalyticsNoStore(t *testing.T) {
_, router := setupNoStoreServer(t)
req := httptest.NewRequest("GET", "/api/observers/obs1/analytics", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != 503 {
t.Fatalf("expected 503, got %d", w.Code)
}
}
func min(a, b int) int {
if a < b {

View File

@@ -62,7 +62,7 @@ type StoreObs struct {
type PacketStore struct {
mu sync.RWMutex
db *DB
packets []*StoreTx // sorted by first_seen DESC
packets []*StoreTx // sorted by first_seen ASC (oldest first; newest at tail)
byHash map[string]*StoreTx // hash → *StoreTx
byTxID map[int]*StoreTx // transmission_id → *StoreTx
byObsID map[int]*StoreObs // observation_id → *StoreObs
@@ -176,7 +176,7 @@ func (s *PacketStore) Load() error {
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
LEFT JOIN observers obs ON obs.rowid = o.observer_idx
ORDER BY t.first_seen DESC, o.timestamp DESC`
ORDER BY t.first_seen ASC, o.timestamp DESC`
} else {
loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
t.payload_type, t.payload_version, t.decoded_json,
@@ -184,7 +184,7 @@ func (s *PacketStore) Load() error {
o.snr, o.rssi, o.score, o.path_json, o.timestamp
FROM transmissions t
LEFT JOIN observations o ON o.transmission_id = t.id
ORDER BY t.first_seen DESC, o.timestamp DESC`
ORDER BY t.first_seen ASC, o.timestamp DESC`
}
rows, err := s.db.conn.Query(loadSQL)
@@ -368,28 +368,32 @@ func (s *PacketStore) QueryPackets(q PacketQuery) *PacketResult {
results := s.filterPackets(q)
total := len(results)
if q.Order == "ASC" {
sorted := make([]*StoreTx, len(results))
copy(sorted, results)
sort.Slice(sorted, func(i, j int) bool {
return sorted[i].FirstSeen < sorted[j].FirstSeen
})
results = sorted
}
// Paginate
// results is oldest-first (ASC). For DESC (default) read backwards from the tail;
// for ASC read forwards. Both are O(page_size) — no sort copy needed.
start := q.Offset
if start >= len(results) {
if start >= total {
return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
}
end := start + q.Limit
if end > len(results) {
end = len(results)
pageSize := q.Limit
if start+pageSize > total {
pageSize = total - start
}
packets := make([]map[string]interface{}, 0, end-start)
for _, tx := range results[start:end] {
packets = append(packets, txToMap(tx))
packets := make([]map[string]interface{}, 0, pageSize)
if q.Order == "ASC" {
for _, tx := range results[start : start+pageSize] {
packets = append(packets, txToMap(tx))
}
} else {
// DESC: newest items are at the tail; page 0 = last pageSize items reversed
endIdx := total - start
startIdx := endIdx - pageSize
if startIdx < 0 {
startIdx = 0
}
for i := endIdx - 1; i >= startIdx; i-- {
packets = append(packets, txToMap(results[i]))
}
}
return &PacketResult{Packets: packets, Total: total}
}
@@ -719,15 +723,16 @@ func (s *PacketStore) GetTimestamps(since string) []string {
s.mu.RLock()
defer s.mu.RUnlock()
// packets sorted newest first — scan from start until older than since
// packets sorted oldest-first — scan from tail until we reach items older than since
var result []string
for _, tx := range s.packets {
for i := len(s.packets) - 1; i >= 0; i-- {
tx := s.packets[i]
if tx.FirstSeen <= since {
break
}
result = append(result, tx.FirstSeen)
}
// Reverse to get ASC order
// result is currently newest-first; reverse to return ASC order
for i, j := 0, len(result)-1; i < j; i, j = i+1, j-1 {
result[i], result[j] = result[j], result[i]
}
@@ -777,23 +782,30 @@ func (s *PacketStore) QueryMultiNodePackets(pubkeys []string, limit, offset int,
total := len(filtered)
if order == "ASC" {
sort.Slice(filtered, func(i, j int) bool {
return filtered[i].FirstSeen < filtered[j].FirstSeen
})
}
// filtered is oldest-first (built by iterating s.packets forward).
// Apply same DESC/ASC pagination logic as QueryPackets.
if offset >= total {
return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
}
end := offset + limit
if end > total {
end = total
pageSize := limit
if offset+pageSize > total {
pageSize = total - offset
}
packets := make([]map[string]interface{}, 0, end-offset)
for _, tx := range filtered[offset:end] {
packets = append(packets, txToMap(tx))
packets := make([]map[string]interface{}, 0, pageSize)
if order == "ASC" {
for _, tx := range filtered[offset : offset+pageSize] {
packets = append(packets, txToMap(tx))
}
} else {
endIdx := total - offset
startIdx := endIdx - pageSize
if startIdx < 0 {
startIdx = 0
}
for i := endIdx - 1; i >= startIdx; i-- {
packets = append(packets, txToMap(filtered[i]))
}
}
return &PacketResult{Packets: packets, Total: total}
}
@@ -926,15 +938,14 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
DecodedJSON: r.decodedJSON,
}
s.byHash[r.hash] = tx
// Prepend (newest first)
s.packets = append([]*StoreTx{tx}, s.packets...)
s.packets = append(s.packets, tx) // oldest-first; new items go to tail
s.byTxID[r.txID] = tx
s.indexByNode(tx)
if tx.PayloadType != nil {
pt := *tx.PayloadType
// Prepend to maintain newest-first order (matches Load ordering)
// Append to maintain oldest-first order (matches Load ordering)
// so GetChannelMessages reverse iteration stays correct
s.byPayloadType[pt] = append([]*StoreTx{tx}, s.byPayloadType[pt]...)
s.byPayloadType[pt] = append(s.byPayloadType[pt], tx)
}
if _, exists := broadcastTxs[r.txID]; !exists {
@@ -1079,8 +1090,6 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
s.cacheMu.Unlock()
}
log.Printf("[poller] IngestNewFromDB: found %d new txs, maxID %d->%d", len(result), sinceID, newMaxID)
return result, newMaxID
}
@@ -1263,8 +1272,7 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) int {
s.subpathCache = make(map[string]*cachedResult)
s.cacheMu.Unlock()
log.Printf("[poller] IngestNewObservations: updated %d existing txs, maxObsID %d->%d",
len(updatedTxs), sinceObsID, newMaxObsID)
// analytics caches cleared; no per-cycle log to avoid stdout overhead
}
return newMaxObsID
@@ -1888,7 +1896,7 @@ func (s *PacketStore) GetChannelMessages(channelHash string, limit, offset int)
msgMap := map[string]*msgEntry{}
var msgOrder []string
// Iterate type-5 packets oldest-first (byPayloadType is in load order = newest first)
// Iterate type-5 packets oldest-first (byPayloadType is ASC = oldest first)
type decodedMsg struct {
Type string `json:"type"`
Channel string `json:"channel"`
@@ -1899,8 +1907,7 @@ func (s *PacketStore) GetChannelMessages(channelHash string, limit, offset int)
}
grpTxts := s.byPayloadType[5]
for i := len(grpTxts) - 1; i >= 0; i-- {
tx := grpTxts[i]
for _, tx := range grpTxts {
if tx.DecodedJSON == "" {
continue
}
@@ -4069,13 +4076,13 @@ func (s *PacketStore) GetNodeHealth(pubkey string) (map[string]interface{}, erro
lhVal = lastHeard
}
// Recent packets (up to 20, newest first — packets are already sorted DESC)
// Recent packets (up to 20, newest first — read from tail of oldest-first slice)
recentLimit := 20
if len(packets) < recentLimit {
recentLimit = len(packets)
}
recentPackets := make([]map[string]interface{}, 0, recentLimit)
for i := 0; i < recentLimit; i++ {
for i := len(packets) - 1; i >= len(packets)-recentLimit; i-- {
p := txToMap(packets[i])
delete(p, "observations")
recentPackets = append(recentPackets, p)

docs/rename-migration.md Normal file
View File

@@ -0,0 +1,101 @@
# CoreScope Migration Guide
MeshCore Analyzer has been renamed to **CoreScope**. This document covers what you need to update.
## What Changed
- **Repository name**: `meshcore-analyzer` → `corescope`
- **Docker image name**: `meshcore-analyzer:latest` → `corescope:latest`
- **Docker container prefixes**: `meshcore-*` → `corescope-*`
- **Default site name**: "MeshCore Analyzer" → "CoreScope"
## What Did NOT Change
- **Data directories** — `~/meshcore-data/` stays as-is
- **Database filename** — `meshcore.db` is unchanged
- **MQTT topics** — `meshcore/#` topics are protocol-level and unchanged
- **Browser state** — Favorites, localStorage keys, and settings are preserved
- **Config file format** — `config.json` structure is the same
---
## 1. Git Remote Update
Update your local clone to point to the new repository URL:
```bash
git remote set-url origin https://github.com/Kpa-clawbot/corescope.git
git pull
```
## 2. Docker (manage.sh) Users
Rebuild with the new image name:
```bash
./manage.sh stop
git pull
./manage.sh setup
```
The new image is `corescope:latest`. You can clean up the old image:
```bash
docker rmi meshcore-analyzer:latest
```
## 3. Docker Compose Users
Rebuild containers with the new names:
```bash
docker compose down
git pull
docker compose build
docker compose up -d
```
Container names change from `meshcore-*` to `corescope-*`. Old containers are removed by `docker compose down`.
## 4. Data Directories
**No action required.** The data directory `~/meshcore-data/` and database file `meshcore.db` are unchanged. Your existing data carries over automatically.
## 5. Config
If you customized `branding.siteName` in your `config.json`, update it to your preferred name. Otherwise the new default "CoreScope" applies automatically.
No other config keys changed.
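For reference, a minimal sketch of the relevant key. Only `branding.siteName` is confirmed by this guide; the surrounding structure is illustrative:

```json
{
  "branding": {
    "siteName": "My Mesh Dashboard"
  }
}
```

Leaving the key out entirely picks up the new "CoreScope" default.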
## 6. MQTT
**No action required.** MQTT topics (`meshcore/#`) are protocol-level and are not affected by the rename.
## 7. Browser
**No action required.** Bookmarks/favorites will continue to work at the same host and port. localStorage keys are unchanged, so your settings and preferences are preserved.
## 8. CI/CD
If you have custom CI/CD pipelines that reference:
- The old repository URL (`meshcore-analyzer`)
- The old Docker image name (`meshcore-analyzer:latest`)
- Old container names (`meshcore-*`)
Update those references to use the new names.
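One way to find lingering references is a recursive grep over your pipeline repo — a sketch, so adjust the patterns and paths to your setup:

```shell
# List files that still reference the old repo/image/container names.
# Run from your CI/CD repo root. The trailing hyphen in 'meshcore-' matches
# image and container names but deliberately skips MQTT topics (meshcore/#),
# which are unchanged.
grep -rl --exclude-dir=.git -e 'meshcore-analyzer' -e 'meshcore-' .
```

Each file it prints needs its references updated to `corescope`.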
---
## Summary Checklist
| Item | Action Required? | What to Do |
|------|-----------------|------------|
| Git remote | ✅ Yes | `git remote set-url origin …corescope.git` |
| Docker image | ✅ Yes | Rebuild; optionally `docker rmi` old image |
| Docker Compose | ✅ Yes | `docker compose down`, `docker compose build`, `docker compose up -d` |
| Data directories | ❌ No | Unchanged |
| Config | ⚠️ Maybe | Only if you customized `branding.siteName` |
| MQTT | ❌ No | Topics unchanged |
| Browser | ❌ No | Settings preserved |
| CI/CD | ⚠️ Maybe | Update if referencing old repo/image names |

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — analytics.js (v2 — full nerd mode) === */
/* === CoreScope — analytics.js (v2 — full nerd mode) === */
'use strict';
(function () {

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — app.js === */
/* === CoreScope — app.js === */
'use strict';
// --- Route/Payload name maps ---
@@ -109,7 +109,7 @@ function formatVersionBadge(version, commit, engine) {
if (!version && !commit && !engine) return '';
var port = (typeof location !== 'undefined' && location.port) || '';
var isProd = !port || port === '80' || port === '443';
var GH = 'https://github.com/Kpa-clawbot/meshcore-analyzer';
var GH = 'https://github.com/Kpa-clawbot/corescope';
var parts = [];
if (version && isProd) {
var vTag = version.charAt(0) === 'v' ? version : 'v' + version;

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — audio-lab.js === */
/* === CoreScope — audio-lab.js === */
/* Audio Lab: Packet Jukebox for sound debugging & understanding */
'use strict';

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — channels.js === */
/* === CoreScope — channels.js === */
'use strict';
(function () {

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — compare.js === */
/* === CoreScope — compare.js === */
/* Observer packet comparison — Fixes #129 */
'use strict';

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — customize.js === */
/* === CoreScope — customize.js === */
/* Tools → Customization: visual config builder with live preview & JSON export */
'use strict';
@@ -9,7 +9,7 @@
const DEFAULTS = {
branding: {
siteName: 'MeshCore Analyzer',
siteName: 'CoreScope',
tagline: 'Real-time MeshCore LoRa mesh network analyzer',
logoUrl: '',
faviconUrl: ''
@@ -45,7 +45,7 @@
ANON_REQ: '#f43f5e'
},
home: {
heroTitle: 'MeshCore Analyzer',
heroTitle: 'CoreScope',
heroSubtitle: 'Find your nodes to start monitoring them.',
steps: [
{ emoji: '💬', title: 'Join the Bay Area MeshCore Discord', description: 'The community Discord is the best place to get help and find local mesh enthusiasts.' },

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — home.css === */
/* === CoreScope — home.css === */
/* Override #app overflow:hidden for home page scrolling */
#app:has(.home-hero), #app:has(.home-chooser) { overflow-y: auto; }

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — home.js (My Mesh Dashboard) === */
/* === CoreScope — home.js (My Mesh Dashboard) === */
'use strict';
(function () {
@@ -39,7 +39,7 @@
function showChooser(container) {
container.innerHTML = `
<section class="home-chooser">
<h1>Welcome to ${escapeHtml(window.SITE_CONFIG?.branding?.siteName || 'MeshCore Analyzer')}</h1>
<h1>Welcome to ${escapeHtml(window.SITE_CONFIG?.branding?.siteName || 'CoreScope')}</h1>
<p>How familiar are you with MeshCore?</p>
<div class="chooser-options">
<button class="chooser-btn new" id="chooseNew">
@@ -63,7 +63,7 @@
const myNodes = getMyNodes();
const hasNodes = myNodes.length > 0;
const homeCfg = window.SITE_CONFIG?.home || null;
const siteName = window.SITE_CONFIG?.branding?.siteName || 'MeshCore Analyzer';
const siteName = window.SITE_CONFIG?.branding?.siteName || 'CoreScope';
container.innerHTML = `
<section class="home-hero">
@@ -324,7 +324,7 @@
loadMyNodes();
// Update title if no nodes left
const h1 = document.querySelector('.home-hero h1');
if (h1 && !getMyNodes().length) h1.textContent = 'MeshCore Analyzer';
if (h1 && !getMyNodes().length) h1.textContent = 'CoreScope';
});
});

View File

@@ -1,4 +1,4 @@
/* === MeshCore Analyzer — hop-display.js === */
/* === CoreScope — hop-display.js === */
/* Shared hop rendering with conflict info for all pages */
'use strict';

View File

@@ -5,12 +5,12 @@
<link rel="icon" href="favicon.ico" type="image/x-icon">
<link rel="icon" href="favicon.svg" type="image/svg+xml">
<meta name="viewport" content="width=device-width, initial-scale=1.0, viewport-fit=cover">
<title>MeshCore Analyzer</title>
<title>CoreScope</title>
<!-- Open Graph / Discord embed -->
<meta property="og:title" content="MeshCore Analyzer">
<meta property="og:title" content="CoreScope">
<meta property="og:description" content="Real-time MeshCore LoRa mesh network analyzer — live packet visualization, node tracking, channel decryption, route analysis, and deep mesh analytics.">
<meta property="og:image" content="https://raw.githubusercontent.com/Kpa-clawbot/meshcore-analyzer/master/public/og-image.png">
<meta property="og:image" content="https://raw.githubusercontent.com/Kpa-clawbot/corescope/master/public/og-image.png">
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="630">
<meta property="og:url" content="https://analyzer.00id.net">
@@ -19,12 +19,12 @@
<!-- Twitter Card -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="MeshCore Analyzer">
<meta name="twitter:title" content="CoreScope">
<meta name="twitter:description" content="Real-time MeshCore LoRa mesh network analyzer — live packet visualization, node tracking, channel decryption, and route analysis.">
<meta name="twitter:image" content="https://raw.githubusercontent.com/Kpa-clawbot/meshcore-analyzer/master/public/og-image.png">
<link rel="stylesheet" href="style.css?v=1774690966">
<link rel="stylesheet" href="home.css?v=1774690966">
<link rel="stylesheet" href="live.css?v=1774690966">
<meta name="twitter:image" content="https://raw.githubusercontent.com/Kpa-clawbot/corescope/master/public/og-image.png">
<link rel="stylesheet" href="style.css?v=1774731523">
<link rel="stylesheet" href="home.css?v=1774731523">
<link rel="stylesheet" href="live.css?v=1774731523">
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css"
integrity="sha256-p4NxAoJBhIIN+hmNHrzRCf9tD/miZyoHS5obTRR9BMY="
crossorigin="anonymous">
@@ -40,7 +40,7 @@
<div class="nav-left">
<a href="#/" class="nav-brand">
<span class="brand-icon">🍄</span>
<span class="brand-text">MeshCore Analyzer</span>
<span class="brand-text">CoreScope</span>
<span class="live-dot" id="liveDot" title="WebSocket connected" aria-label="WebSocket connected"></span>
</a>
<div class="nav-links">
@@ -81,29 +81,29 @@
<main id="app" role="main"></main>
<script src="vendor/qrcode.js"></script>
<script src="roles.js?v=1774690966"></script>
<script src="customize.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
<script src="region-filter.js?v=1774690966"></script>
-<script src="hop-resolver.js?v=1774690966"></script>
-<script src="hop-display.js?v=1774690966"></script>
-<script src="app.js?v=1774690966"></script>
-<script src="home.js?v=1774690966"></script>
-<script src="packet-filter.js?v=1774690966"></script>
-<script src="packets.js?v=1774690966"></script>
-<script src="map.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="channels.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="nodes.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="traces.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="analytics.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="audio.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="audio-v1-constellation.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="audio-v2-constellation.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="audio-lab.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="live.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="observers.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="observer-detail.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="compare.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="node-analytics.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
-<script src="perf.js?v=1774690966" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="roles.js?v=1774731523"></script>
+<script src="customize.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="region-filter.js?v=1774731523"></script>
+<script src="hop-resolver.js?v=1774731523"></script>
+<script src="hop-display.js?v=1774731523"></script>
+<script src="app.js?v=1774731523"></script>
+<script src="home.js?v=1774731523"></script>
+<script src="packet-filter.js?v=1774731523"></script>
+<script src="packets.js?v=1774731523"></script>
+<script src="map.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="channels.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="nodes.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="traces.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="analytics.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="audio.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="audio-v1-constellation.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="audio-v2-constellation.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="audio-lab.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="live.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="observers.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="observer-detail.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="compare.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="node-analytics.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
+<script src="perf.js?v=1774731523" onerror="console.error('Failed to load:', this.src)"></script>
 </body>
 </html>

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — map.js === */
+/* === CoreScope — map.js === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — node-analytics.js === */
+/* === CoreScope — node-analytics.js === */
 'use strict';
 (function () {
 const PAYLOAD_LABELS = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 11: 'Control' };

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — nodes.js === */
+/* === CoreScope — nodes.js === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — observer-detail.js === */
+/* === CoreScope — observer-detail.js === */
 'use strict';
 (function () {
 const PAYLOAD_LABELS = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 11: 'Control' };

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — observers.js === */
+/* === CoreScope — observers.js === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — packets.js === */
+/* === CoreScope — packets.js === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — perf.js === */
+/* === CoreScope — perf.js === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — region-filter.js (shared region filter component) === */
+/* === CoreScope — region-filter.js (shared region filter component) === */
 'use strict';
 (function () {

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — roles.js (shared config module) === */
+/* === CoreScope — roles.js (shared config module) === */
 'use strict';
 /*

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — style.css === */
+/* === CoreScope — style.css === */
 :root {
 --nav-bg: #0f0f23;

@@ -1,4 +1,4 @@
-/* === MeshCore Analyzer — traces.js === */
+/* === CoreScope — traces.js === */
 'use strict';
 (function () {

@@ -3,7 +3,7 @@
 set -e
 echo "═══════════════════════════════════════"
-echo " MeshCore Analyzer — Test Suite"
+echo " CoreScope — Test Suite"
 echo "═══════════════════════════════════════"
 echo ""

@@ -44,7 +44,7 @@ async function run() {
 await page.goto(BASE, { waitUntil: 'domcontentloaded' });
 await page.waitForSelector('nav, .navbar, .nav, [class*="nav"]');
 const title = await page.title();
-assert(title.toLowerCase().includes('meshcore'), `Title "${title}" doesn't contain MeshCore`);
+assert(title.toLowerCase().includes('corescope'), `Title "${title}" doesn't contain CoreScope`);
 const nav = await page.$('nav, .navbar, .nav, [class*="nav"]');
 assert(nav, 'Nav bar not found');
 });

@@ -1304,7 +1304,7 @@ console.log('\n=== app.js: formatVersionBadge ===');
 loadInCtx(ctx, 'public/app.js');
 return ctx;
 }
-const GH = 'https://github.com/Kpa-clawbot/meshcore-analyzer';
+const GH = 'https://github.com/Kpa-clawbot/corescope';
 test('returns empty string when all args missing', () => {
 const { formatVersionBadge } = makeBadgeSandbox('');