Mirror of https://github.com/Kpa-clawbot/meshcore-analyzer.git (synced 2026-04-25 07:12:06 +00:00)

Compare commits — 30 commits
Commits (SHA1; author and date columns did not survive extraction):

- 2af4259eca
- bf2e721dd7
- f20431d816
- f9cfad9cd4
- 96d0bbe487
- 6712da7d7c
- 6aef83c82a
- 9f14c74b3e
- 0b8b1e91a6
- c678555e75
- 623ebc879b
- 0b1924d401
- 0f502370c5
- e47c39ffda
- 1499a55ba7
- f71e117cdd
- 75f1295a06
- b1b76acb77
- f87eb3601c
- ec4dd58cb6
- 044a5387af
- 01ca843309
- 5f50e80931
- 8f3d12eca5
- 357f7952f7
- 47d081c705
- be313f60cb
- 8a0862523d
- 7e8b30aa1f
- b2279b230b
.github/workflows/deploy.yml (vendored) — 6 changed lines
@@ -246,6 +246,12 @@ jobs:
        with:
          node-version: '22'

      - name: Free disk space
        run: |
          docker system prune -af 2>/dev/null || true
          docker builder prune -af 2>/dev/null || true
          df -h /

      - name: Build Go Docker image
        run: |
          echo "${GITHUB_SHA::7}" > .git-commit
.gitignore (vendored) — 1 changed line
@@ -30,3 +30,4 @@ cmd/ingestor/ingestor.exe
# CI trigger
!test-fixtures/e2e-fixture.db
corescope-server
cmd/server/server
AGENTS.md — 39 changed lines
@@ -33,7 +33,7 @@ public/ — Frontend (vanilla JS, one file per page) — ACTIVE, NOT
style.css — Main styles, CSS variables for theming
live.css — Live page styles
home.css — Home page styles
index.html — SPA shell, script/style tags with cache busters
index.html — SPA shell, script/style tags with __BUST__ placeholder (auto-replaced at server startup)
test-fixtures/ — Real data SQLite fixture from staging (used for E2E tests)
scripts/ — Tooling (coverage collector, fixture capture, frontend instrumentation)
```
@@ -51,18 +51,41 @@ The following were part of the old Node.js backend and have been removed:

## Rules — Read These First

### 0. Performance is a feature — not an afterthought
Every change must consider performance impact BEFORE implementation. This codebase handles 30K+ packets, 2K+ nodes, and real-time WebSocket updates. A single O(n²) loop or per-item API call can freeze the UI or stall the server.

**Before writing code, ask:**
- What's the worst-case data size this code will process?
- Am I adding work inside a hot loop (render, ingest, WS broadcast)?
- Am I fetching from the server what I could compute client-side?
- Am I recomputing something that could be cached/incremental?
- Does my change invalidate caches more broadly than necessary?

**Hard rules:**
- **No per-item API calls.** Fetch bulk, filter client-side.
- **No O(n²) in hot paths.** Use Maps/Sets for lookups, not nested array scans.
- **No full DOM rebuilds.** Diff or virtualize — never innerHTML entire tables.
- **No unbounded data structures.** Every map/slice/array must have eviction or size limits.
- **No expensive work under locks.** Copy data under lock, process outside.
- **Cache expensive computations.** Invalidate surgically, not globally.
- **Debounce/coalesce rapid events.** WebSocket messages, scroll, resize — never fire raw.

**If your change touches a hot path (packet rendering, ingest, analytics), include a perf justification in the PR description:** what the complexity is, what the expected scale is, and why it won't degrade.

**Perf claims require proof.** "This is faster" without data is not acceptable. Every PR claiming to fix or improve performance MUST include one of:
- A benchmark test (before/after timings with realistic data sizes)
- Profile output or timing measurements (e.g. "renderTableRows: 450ms → 12ms on 30K packets")
- A test assertion that enforces the perf characteristic (e.g. "filters 30K packets in <50ms")

No proof = no merge.
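The "no expensive work under locks" hard rule can be sketched in Go. The type and field names here are illustrative only, not the project's actual structures:

```go
package main

import (
	"fmt"
	"sync"
)

// PacketLog is a hypothetical stand-in for any mutex-guarded hot structure.
type PacketLog struct {
	mu      sync.Mutex
	packets []int
}

// TotalBytes copies the slice under the lock, then does the (potentially
// expensive) aggregation outside it, so writers are blocked only for the copy.
func (p *PacketLog) TotalBytes() int {
	p.mu.Lock()
	snapshot := make([]int, len(p.packets))
	copy(snapshot, p.packets) // cheap: lock held only for the copy
	p.mu.Unlock()

	total := 0
	for _, size := range snapshot { // expensive work happens lock-free
		total += size
	}
	return total
}

func main() {
	p := &PacketLog{packets: []int{120, 64, 256}}
	fmt.Println(p.TotalBytes()) // prints 440
}
```

The copy costs O(n) but is a plain memmove; holding the lock through the aggregation instead would serialize every writer behind it.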
### 1. No commit without tests
Every change that touches logic MUST have tests. For Go backend: `cd cmd/server && go test ./...` and `cd cmd/ingestor && go test ./...`. For frontend: `node test-packet-filter.js && node test-aging.js && node test-frontend-helpers.js`. If you add new logic, add tests. No exceptions.

### 2. No commit without browser validation
After pushing, verify the change works in an actual browser. Use `browser profile=openclaw` against the running instance. Take a screenshot if the change is visual. If you can't validate it, say so — don't claim it works.

### 3. Cache busters — ALWAYS bump them
Every time you change a `.js` or `.css` file in `public/`, bump the cache buster in `index.html`. This has caused 7 separate production regressions. Use:
```bash
NEWV=$(date +%s) && sed -i "s/v=[0-9]*/v=$NEWV/g" public/index.html
```
Do this in the SAME commit as the code change, not as a follow-up.

### 3. Cache busters are automatic — do NOT manually edit them
Cache busters are injected automatically by the Go server at startup. The `__BUST__` placeholder in `index.html` is replaced with a Unix timestamp when the server reads the file. No manual bumping needed — every server restart picks up new asset versions. Do NOT replace `__BUST__` with hardcoded timestamps.
### 4. Verify API response shape before building UI
Before writing client code that consumes an API endpoint, check what the endpoint ACTUALLY returns. Use `curl` or check the server code. Don't assume fields exist — grouped packets (`groupByHash=true`) have different fields than raw packets. This has caused multiple breakages.
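Rule 4 in practice: list the top-level keys of the real response before coding against it. The endpoint path below is a placeholder, not a documented route:

```shell
# Against a running instance (example path — check the server code for real routes):
#   curl -s 'http://localhost:8080/api/packets?groupByHash=true' | jq '.[0] | keys'

# The same jq inspection works on a captured sample:
echo '{"hash":"ab12","count":3,"first_seen":"2025-01-01T00:00:00Z"}' \
  | jq -r 'keys_unsorted | join(",")'
# -> hash,count,first_seen
```

Comparing the key list for grouped vs. raw responses makes the field mismatch visible before any UI code is written.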
@@ -324,7 +347,7 @@ One logical change per commit. Each commit is deployable. Each commit has its te

| Pitfall | Times it happened | Prevention |
|---------|-------------------|------------|
| Forgot cache busters | 7 | Always bump in same commit |
| Forgot cache busters | 7 | Now automatic — `__BUST__` replaced at server startup |
| Grouped packets missing fields | 3 | curl the actual API first |
| last_seen vs last_heard mismatch | 4 | Always use `last_heard \|\| last_seen` |
| CSS selectors don't match SVG | 2 | Manipulate SVG in JS after generation |
@@ -36,8 +36,9 @@ type Store struct {
    stmtUpsertNode           *sql.Stmt
    stmtIncrementAdvertCount *sql.Stmt
    stmtUpsertObserver       *sql.Stmt
    stmtGetObserverRowid     *sql.Stmt
    stmtUpdateNodeTelemetry  *sql.Stmt
    stmtGetObserverRowid       *sql.Stmt
    stmtUpdateObserverLastSeen *sql.Stmt
    stmtUpdateNodeTelemetry    *sql.Stmt
}

// OpenStore opens or creates a SQLite DB at the given path, applying the

@@ -280,6 +281,17 @@ func applySchema(db *sql.DB) error {
        log.Println("[migration] node telemetry columns added")
    }

    // One-time migration: add timestamp index on observations for fast stats queries.
    // Older databases created before this index was added suffer from full table scans
    // on COUNT(*) WHERE timestamp > ?, causing /api/stats to take 30s+.
    row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'obs_timestamp_index_v1'")
    if row.Scan(&migDone) != nil {
        log.Println("[migration] Adding timestamp index on observations...")
        db.Exec(`CREATE INDEX IF NOT EXISTS idx_observations_timestamp ON observations(timestamp)`)
        db.Exec(`INSERT INTO _migrations (name) VALUES ('obs_timestamp_index_v1')`)
        log.Println("[migration] observations timestamp index created")
    }

    return nil
}

@@ -358,6 +370,11 @@ func (s *Store) prepareStatements() error {
        return err
    }

    s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ? WHERE rowid = ?")
    if err != nil {
        return err
    }

    s.stmtUpdateNodeTelemetry, err = s.db.Prepare(`
        UPDATE nodes SET
            battery_mv = COALESCE(?, battery_mv),

@@ -417,13 +434,16 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
        s.Stats.DuplicateTransmissions.Add(1)
    }

    // Resolve observer_idx
    // Resolve observer_idx and update last_seen
    var observerIdx *int64
    if data.ObserverID != "" {
        var rowid int64
        err := s.stmtGetObserverRowid.QueryRow(data.ObserverID).Scan(&rowid)
        if err == nil {
            observerIdx = &rowid
            // Update observer last_seen on every packet to prevent
            // low-traffic observers from appearing offline (#463)
            _, _ = s.stmtUpdateObserverLastSeen.Exec(now, rowid)
        }
    }

@@ -434,8 +454,8 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
    }

    _, err = s.stmtInsertObservation.Exec(
        txID, observerIdx, nil, // direction
        data.SNR, data.RSSI, nil, // score
        txID, observerIdx, data.Direction,
        data.SNR, data.RSSI, data.Score,
        data.PathJSON, epochTs,
    )
    if err != nil {

@@ -542,11 +562,22 @@ func (s *Store) UpsertObserver(id, name, iata string, meta *ObserverMeta) error
    return err
}

// Close closes the database.
// Close checkpoints the WAL and closes the database.
func (s *Store) Close() error {
    s.Checkpoint()
    return s.db.Close()
}

// Checkpoint forces a WAL checkpoint to release the WAL lock file,
// preventing lock contention with a new process starting up.
func (s *Store) Checkpoint() {
    if _, err := s.db.Exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
        log.Printf("[db] WAL checkpoint error: %v", err)
    } else {
        log.Println("[db] WAL checkpoint complete")
    }
}

// LogStats logs current operational metrics.
func (s *Store) LogStats() {
    log.Printf("[stats] tx_inserted=%d tx_dupes=%d obs_inserted=%d node_upserts=%d observer_upserts=%d write_errors=%d",

@@ -595,6 +626,8 @@ type PacketData struct {
    ObserverName string
    SNR          *float64
    RSSI         *float64
    Score        *float64
    Direction    *string
    Hash         string
    RouteType    int
    PayloadType  int

@@ -605,10 +638,12 @@ type PacketData struct {

// MQTTPacketMessage is the JSON payload from an MQTT raw packet message.
type MQTTPacketMessage struct {
    Raw    string   `json:"raw"`
    SNR    *float64 `json:"SNR"`
    RSSI   *float64 `json:"RSSI"`
    Origin string   `json:"origin"`
    Raw       string   `json:"raw"`
    SNR       *float64 `json:"SNR"`
    RSSI      *float64 `json:"RSSI"`
    Score     *float64 `json:"score"`
    Direction *string  `json:"direction"`
    Origin    string   `json:"origin"`
}

// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.
@@ -627,6 +662,8 @@ func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID,
        ObserverName: msg.Origin,
        SNR:          msg.SNR,
        RSSI:         msg.RSSI,
        Score:        msg.Score,
        Direction:    msg.Direction,
        Hash:         ComputeContentHash(msg.Raw),
        RouteType:    decoded.Header.RouteType,
        PayloadType:  decoded.Header.PayloadType,
@@ -516,6 +516,56 @@ func TestInsertTransmissionWithObserver(t *testing.T) {
    }
}

// #463: Verify that inserting a packet updates the observer's last_seen,
// so low-traffic observers don't incorrectly appear offline.
func TestInsertTransmissionUpdatesObserverLastSeen(t *testing.T) {
    s, err := OpenStore(tempDBPath(t))
    if err != nil {
        t.Fatal(err)
    }
    defer s.Close()

    // Insert observer with an old last_seen
    if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
        t.Fatal(err)
    }
    // Backdate last_seen to 2 hours ago
    oldTime := "2026-03-24T22:00:00Z"
    s.db.Exec("UPDATE observers SET last_seen = ? WHERE id = ?", oldTime, "obs1")

    // Verify it was backdated
    var lastSeenBefore string
    s.db.QueryRow("SELECT last_seen FROM observers WHERE id = ?", "obs1").Scan(&lastSeenBefore)
    if lastSeenBefore != oldTime {
        t.Fatalf("expected last_seen=%s, got %s", oldTime, lastSeenBefore)
    }

    // Insert a packet from this observer
    data := &PacketData{
        RawHex:      "0A00D69F",
        Timestamp:   "2026-03-25T01:00:00Z",
        ObserverID:  "obs1",
        Hash:        "lastseentest123456",
        RouteType:   2,
        PayloadType: 2,
        PathJSON:    "[]",
        DecodedJSON: `{"type":"TXT_MSG"}`,
    }
    if _, err := s.InsertTransmission(data); err != nil {
        t.Fatal(err)
    }

    // Verify last_seen was updated
    var lastSeenAfter string
    s.db.QueryRow("SELECT last_seen FROM observers WHERE id = ?", "obs1").Scan(&lastSeenAfter)
    if lastSeenAfter == oldTime {
        t.Error("observer last_seen was NOT updated after packet insertion — low-traffic observers will appear offline")
    }
    if lastSeenAfter != "2026-03-25T01:00:00Z" {
        t.Errorf("expected last_seen=2026-03-25T01:00:00Z, got %s", lastSeenAfter)
    }
}

func TestEndToEndIngest(t *testing.T) {
    s, err := OpenStore(tempDBPath(t))
    if err != nil {

@@ -1457,3 +1507,199 @@ func TestExtractObserverMetaNestedNilSkipsTopLevel(t *testing.T) {
        t.Error("nested nil should suppress top-level fallback")
    }
}

func TestObsTimestampIndexMigration(t *testing.T) {
    // Case 1: new DB — OpenStore should create idx_observations_timestamp as part
    // of the observations table schema.
    t.Run("NewDB", func(t *testing.T) {
        s, err := OpenStore(tempDBPath(t))
        if err != nil {
            t.Fatal(err)
        }
        defer s.Close()

        var count int
        err = s.db.QueryRow(
            "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
        ).Scan(&count)
        if err != nil {
            t.Fatal(err)
        }
        if count != 1 {
            t.Error("idx_observations_timestamp should exist on a new DB")
        }

        var migCount int
        err = s.db.QueryRow(
            "SELECT COUNT(*) FROM _migrations WHERE name='obs_timestamp_index_v1'",
        ).Scan(&migCount)
        if err != nil {
            t.Fatal(err)
        }
        // On a new DB the index is created inline (not via migration), so the
        // migration row may or may not be recorded — just verify the index exists.
        _ = migCount
    })

    // Case 2: existing DB that has the observations table but lacks the index
    // and lacks the _migrations entry — simulates an older installation.
    t.Run("MigrationPath", func(t *testing.T) {
        path := tempDBPath(t)

        // Build a bare-bones DB that mimics an old installation:
        // observations table exists but idx_observations_timestamp does NOT.
        db, err := sql.Open("sqlite", path)
        if err != nil {
            t.Fatal(err)
        }
        _, err = db.Exec(`
            CREATE TABLE IF NOT EXISTS _migrations (name TEXT PRIMARY KEY);
            CREATE TABLE IF NOT EXISTS transmissions (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                raw_hex TEXT NOT NULL,
                hash TEXT NOT NULL UNIQUE,
                first_seen TEXT NOT NULL,
                route_type INTEGER,
                payload_type INTEGER,
                payload_version INTEGER,
                decoded_json TEXT,
                created_at TEXT DEFAULT (datetime('now'))
            );
            CREATE TABLE IF NOT EXISTS observations (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
                observer_idx INTEGER,
                direction TEXT,
                snr REAL,
                rssi REAL,
                score INTEGER,
                path_json TEXT,
                timestamp INTEGER NOT NULL
            );
        `)
        if err != nil {
            db.Close()
            t.Fatal(err)
        }
        // Confirm the index is absent before OpenStore runs.
        var preCount int
        db.QueryRow(
            "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
        ).Scan(&preCount)
        db.Close()
        if preCount != 0 {
            t.Fatalf("pre-condition failed: idx_observations_timestamp should not exist yet, got count=%d", preCount)
        }

        // Now open via OpenStore — the migration should add the index.
        s, err := OpenStore(path)
        if err != nil {
            t.Fatal(err)
        }
        defer s.Close()

        var idxCount int
        err = s.db.QueryRow(
            "SELECT COUNT(*) FROM sqlite_master WHERE type='index' AND name='idx_observations_timestamp'",
        ).Scan(&idxCount)
        if err != nil {
            t.Fatal(err)
        }
        if idxCount != 1 {
            t.Error("idx_observations_timestamp should exist after migration on old DB")
        }

        var migCount int
        err = s.db.QueryRow(
            "SELECT COUNT(*) FROM _migrations WHERE name='obs_timestamp_index_v1'",
        ).Scan(&migCount)
        if err != nil {
            t.Fatal(err)
        }
        if migCount != 1 {
            t.Errorf("migration obs_timestamp_index_v1 should be recorded, got count=%d", migCount)
        }
    })
}

func TestBuildPacketDataScoreAndDirection(t *testing.T) {
    rawHex := "0A00D69FD7A5A7475DB07337749AE61FA53A4788E976"
    decoded, err := DecodePacket(rawHex, nil)
    if err != nil {
        t.Fatal(err)
    }

    score := 42.0
    dir := "incoming"
    msg := &MQTTPacketMessage{
        Raw:       rawHex,
        Score:     &score,
        Direction: &dir,
    }

    pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
    if pkt.Score == nil || *pkt.Score != 42.0 {
        t.Errorf("Score=%v, want 42.0", pkt.Score)
    }
    if pkt.Direction == nil || *pkt.Direction != "incoming" {
        t.Errorf("Direction=%v, want incoming", pkt.Direction)
    }
}

func TestBuildPacketDataNilScoreDirection(t *testing.T) {
    decoded, _ := DecodePacket("0A00"+strings.Repeat("00", 10), nil)
    msg := &MQTTPacketMessage{Raw: "0A00" + strings.Repeat("00", 10)}
    pkt := BuildPacketData(msg, decoded, "", "")

    if pkt.Score != nil {
        t.Errorf("Score should be nil, got %v", *pkt.Score)
    }
    if pkt.Direction != nil {
        t.Errorf("Direction should be nil, got %v", *pkt.Direction)
    }
}

func TestInsertTransmissionWithScoreAndDirection(t *testing.T) {
    s, err := OpenStore(tempDBPath(t))
    if err != nil {
        t.Fatal(err)
    }
    defer s.Close()

    score := 7.5
    dir := "outgoing"
    data := &PacketData{
        RawHex:    "AABB",
        Timestamp: "2025-01-01T00:00:00Z",
        SNR:       ptrFloat(5.0),
        RSSI:      ptrFloat(-90.0),
        Score:     &score,
        Direction: &dir,
        Hash:      "abc123",
        PathJSON:  "[]",
    }

    isNew, err := s.InsertTransmission(data)
    if err != nil {
        t.Fatal(err)
    }
    if !isNew {
        t.Error("expected new transmission")
    }

    // Verify the observation was stored with score and direction
    var gotDir sql.NullString
    var gotScore sql.NullFloat64
    err = s.db.QueryRow("SELECT direction, score FROM observations LIMIT 1").Scan(&gotDir, &gotScore)
    if err != nil {
        t.Fatal(err)
    }
    if !gotDir.Valid || gotDir.String != "outgoing" {
        t.Errorf("direction=%v, want outgoing", gotDir)
    }
    if !gotScore.Valid || gotScore.Float64 != 7.5 {
        t.Errorf("score=%v, want 7.5", gotScore)
    }
}

func ptrFloat(f float64) *float64 { return &f }
@@ -14,6 +14,7 @@ import (
    "os"
    "os/signal"
    "path/filepath"
    "strconv"
    "strings"
    "syscall"
    "time"

@@ -165,7 +166,7 @@ func main() {
    statsTicker.Stop()
    store.LogStats() // final stats on shutdown
    for _, c := range clients {
        c.Disconnect(1000)
        c.Disconnect(5000) // 5s to allow in-flight messages to drain
    }
    log.Println("Done.")
}

@@ -255,6 +256,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
            mqttMsg.RSSI = &f
        }
    }
    if v, ok := msg["score"]; ok {
        if f, ok := toFloat64(v); ok {
            mqttMsg.Score = &f
        }
    } else if v, ok := msg["Score"]; ok {
        if f, ok := toFloat64(v); ok {
            mqttMsg.Score = &f
        }
    }
    if v, ok := msg["direction"].(string); ok {
        mqttMsg.Direction = &v
    } else if v, ok := msg["Direction"].(string); ok {
        mqttMsg.Direction = &v
    }
    if v, ok := msg["origin"].(string); ok {
        mqttMsg.Origin = v
    }

@@ -351,7 +366,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
    h := sha256.Sum256([]byte(hashInput))
    hash := hex.EncodeToString(h[:])[:16]

    var snr, rssi *float64
    var snr, rssi, score *float64
    var direction *string
    if v, ok := msg["SNR"]; ok {
        if f, ok := toFloat64(v); ok {
            snr = &f

@@ -370,6 +386,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
            rssi = &f
        }
    }
    if v, ok := msg["score"]; ok {
        if f, ok := toFloat64(v); ok {
            score = &f
        }
    } else if v, ok := msg["Score"]; ok {
        if f, ok := toFloat64(v); ok {
            score = &f
        }
    }
    if v, ok := msg["direction"].(string); ok {
        direction = &v
    } else if v, ok := msg["Direction"].(string); ok {
        direction = &v
    }

    pktData := &PacketData{
        Timestamp: now,

@@ -377,6 +407,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
        ObserverName: "L1 Pro (BLE)",
        SNR:          snr,
        RSSI:         rssi,
        Score:        score,
        Direction:    direction,
        Hash:         hash,
        RouteType:    1, // FLOOD
        PayloadType:  5, // GRP_TXT

@@ -428,7 +460,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
    h := sha256.Sum256([]byte(hashInput))
    hash := hex.EncodeToString(h[:])[:16]

    var snr, rssi *float64
    var snr, rssi, score *float64
    var direction *string
    if v, ok := msg["SNR"]; ok {
        if f, ok := toFloat64(v); ok {
            snr = &f

@@ -447,6 +480,20 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
            rssi = &f
        }
    }
    if v, ok := msg["score"]; ok {
        if f, ok := toFloat64(v); ok {
            score = &f
        }
    } else if v, ok := msg["Score"]; ok {
        if f, ok := toFloat64(v); ok {
            score = &f
        }
    }
    if v, ok := msg["direction"].(string); ok {
        direction = &v
    } else if v, ok := msg["Direction"].(string); ok {
        direction = &v
    }

    pktData := &PacketData{
        Timestamp: now,

@@ -454,6 +501,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
        ObserverName: "L1 Pro (BLE)",
        SNR:          snr,
        RSSI:         rssi,
        Score:        score,
        Direction:    direction,
        Hash:         hash,
        RouteType:    1, // FLOOD
        PayloadType:  2, // TXT_MSG

@@ -483,11 +532,35 @@ func toFloat64(v interface{}) (float64, bool) {
    case json.Number:
        f, err := n.Float64()
        return f, err == nil
    case string:
        s := strings.TrimSpace(n)
        s = stripUnitSuffix(s)
        f, err := strconv.ParseFloat(s, 64)
        return f, err == nil
    case uint:
        return float64(n), true
    case uint64:
        return float64(n), true
    default:
        return 0, false
    }
}

// unitSuffixes lists common RF/signal unit suffixes to strip before parsing.
var unitSuffixes = []string{"dBm", "dB", "mW", "km", "mi", "m"}

// stripUnitSuffix removes a trailing unit suffix (case-insensitive) from a
// numeric string so that values like "-110dBm" can be parsed as float64.
func stripUnitSuffix(s string) string {
    lower := strings.ToLower(s)
    for _, suffix := range unitSuffixes {
        if strings.HasSuffix(lower, strings.ToLower(suffix)) {
            return strings.TrimSpace(s[:len(s)-len(suffix)])
        }
    }
    return s
}

// extractObserverMeta extracts hardware metadata from an MQTT status message.
// Casts battery_mv and uptime_secs to integers (they're always whole numbers).
func extractObserverMeta(msg map[string]interface{}) *ObserverMeta {

@@ -22,7 +22,13 @@ func TestToFloat64(t *testing.T) {
    {"int64", int64(100), 100.0, true},
    {"json.Number valid", json.Number("9.5"), 9.5, true},
    {"json.Number invalid", json.Number("not_a_number"), 0, false},
    {"string unsupported", "hello", 0, false},
    {"string valid", "3.14", 3.14, true},
    {"string with spaces", "  -7.5  ", -7.5, true},
    {"string integer", "42", 42.0, true},
    {"string invalid", "hello", 0, false},
    {"string empty", "", 0, false},
    {"uint", uint(10), 10.0, true},
    {"uint64", uint64(999), 999.0, true},
    {"bool unsupported", true, 0, false},
    {"nil unsupported", nil, 0, false},
    {"slice unsupported", []int{1}, 0, false},

@@ -686,3 +692,50 @@ func TestHandleMessageNoSNRRSSI(t *testing.T) {
        t.Errorf("rssi should be nil when not present, got %v", *rssi)
    }
}

func TestStripUnitSuffix(t *testing.T) {
    tests := []struct {
        input, want string
    }{
        {"-110dBm", "-110"},
        {"-110DBM", "-110"},
        {"5.5dB", "5.5"},
        {"100mW", "100"},
        {"1.5km", "1.5"},
        {"500m", "500"},
        {"10mi", "10"},
        {"42", "42"},
        {"", ""},
        {"hello", "hello"},
    }
    for _, tt := range tests {
        got := stripUnitSuffix(tt.input)
        if got != tt.want {
            t.Errorf("stripUnitSuffix(%q) = %q, want %q", tt.input, got, tt.want)
        }
    }
}

func TestToFloat64WithUnits(t *testing.T) {
    tests := []struct {
        input interface{}
        want  float64
        ok    bool
    }{
        {"-110dBm", -110.0, true},
        {"5.5dB", 5.5, true},
        {"100mW", 100.0, true},
        {"-85.3dBm", -85.3, true},
        {"42", 42.0, true},
        {"not_a_number", 0, false},
    }
    for _, tt := range tests {
        got, ok := toFloat64(tt.input)
        if ok != tt.ok {
            t.Errorf("toFloat64(%v) ok=%v, want %v", tt.input, ok, tt.ok)
        }
        if ok && got != tt.want {
            t.Errorf("toFloat64(%v) = %v, want %v", tt.input, got, tt.want)
        }
    }
}
cmd/server/cache_invalidation_test.go (new file) — 171 lines
@@ -0,0 +1,171 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
// newTestStore creates a minimal PacketStore for cache invalidation testing.
|
||||
func newTestStore(t *testing.T) *PacketStore {
|
||||
t.Helper()
|
||||
return &PacketStore{
|
||||
rfCache: make(map[string]*cachedResult),
|
||||
topoCache: make(map[string]*cachedResult),
|
||||
hashCache: make(map[string]*cachedResult),
|
||||
chanCache: make(map[string]*cachedResult),
|
||||
distCache: make(map[string]*cachedResult),
|
||||
subpathCache: make(map[string]*cachedResult),
|
||||
rfCacheTTL: 15 * time.Second,
|
||||
}
|
||||
}
|
||||
|
||||
// populateAllCaches fills every analytics cache with a dummy entry so tests
|
||||
// can verify which caches are cleared and which are preserved.
|
||||
func populateAllCaches(s *PacketStore) {
|
||||
s.cacheMu.Lock()
|
||||
defer s.cacheMu.Unlock()
|
||||
dummy := &cachedResult{data: map[string]interface{}{"test": true}, expiresAt: time.Now().Add(time.Hour)}
|
||||
s.rfCache["global"] = dummy
|
||||
s.topoCache["global"] = dummy
|
||||
s.hashCache["global"] = dummy
|
||||
s.chanCache["global"] = dummy
|
||||
s.distCache["global"] = dummy
|
||||
s.subpathCache["global"] = dummy
|
||||
}
|
||||
|
||||
// cachePopulated returns which caches still have their "global" entry.
|
||||
func cachePopulated(s *PacketStore) map[string]bool {
|
||||
s.cacheMu.Lock()
|
||||
defer s.cacheMu.Unlock()
|
||||
return map[string]bool{
|
||||
"rf": len(s.rfCache) > 0,
|
||||
"topo": len(s.topoCache) > 0,
|
||||
"hash": len(s.hashCache) > 0,
|
||||
"chan": len(s.chanCache) > 0,
|
||||
"dist": len(s.distCache) > 0,
|
||||
"subpath": len(s.subpathCache) > 0,
|
||||
}
|
||||
}
|
||||
|
||||
func TestInvalidateCachesFor_Eviction(t *testing.T) {
|
||||
s := newTestStore(t)
|
||||
populateAllCaches(s)
|
||||
|
||||
s.invalidateCachesFor(cacheInvalidation{eviction: true})
|
||||
|
||||
pop := cachePopulated(s)
|
||||
for name, has := range pop {
|
||||
if has {
|
||||
t.Errorf("eviction should clear %s cache", name)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestInvalidateCachesFor_NewObservationsOnly(t *testing.T) {
|
||||
s := newTestStore(t)
|
||||
populateAllCaches(s)
|
||||
|
||||
s.invalidateCachesFor(cacheInvalidation{hasNewObservations: true})
|
||||
|
||||
pop := cachePopulated(s)
|
||||
if pop["rf"] {
|
||||
t.Error("rf cache should be cleared on new observations")
|
||||
}
|
||||
// These should be preserved
|
||||
	for _, name := range []string{"topo", "hash", "chan", "dist", "subpath"} {
		if !pop[name] {
			t.Errorf("%s cache should NOT be cleared on observation-only ingest", name)
		}
	}
}

func TestInvalidateCachesFor_NewTransmissionsOnly(t *testing.T) {
	s := newTestStore(t)
	populateAllCaches(s)

	s.invalidateCachesFor(cacheInvalidation{hasNewTransmissions: true})

	pop := cachePopulated(s)
	if pop["hash"] {
		t.Error("hash cache should be cleared on new transmissions")
	}
	for _, name := range []string{"rf", "topo", "chan", "dist", "subpath"} {
		if !pop[name] {
			t.Errorf("%s cache should NOT be cleared on transmission-only ingest", name)
		}
	}
}

func TestInvalidateCachesFor_ChannelDataOnly(t *testing.T) {
	s := newTestStore(t)
	populateAllCaches(s)

	s.invalidateCachesFor(cacheInvalidation{hasChannelData: true})

	pop := cachePopulated(s)
	if pop["chan"] {
		t.Error("chan cache should be cleared on channel data")
	}
	for _, name := range []string{"rf", "topo", "hash", "dist", "subpath"} {
		if !pop[name] {
			t.Errorf("%s cache should NOT be cleared on channel-data-only ingest", name)
		}
	}
}

func TestInvalidateCachesFor_NewPaths(t *testing.T) {
	s := newTestStore(t)
	populateAllCaches(s)

	s.invalidateCachesFor(cacheInvalidation{hasNewPaths: true})

	pop := cachePopulated(s)
	for _, name := range []string{"topo", "dist", "subpath"} {
		if pop[name] {
			t.Errorf("%s cache should be cleared on new paths", name)
		}
	}
	for _, name := range []string{"rf", "hash", "chan"} {
		if !pop[name] {
			t.Errorf("%s cache should NOT be cleared on path-only ingest", name)
		}
	}
}

func TestInvalidateCachesFor_CombinedFlags(t *testing.T) {
	s := newTestStore(t)
	populateAllCaches(s)

	// Simulate a typical ingest: new transmissions with observations but no GRP_TXT
	s.invalidateCachesFor(cacheInvalidation{
		hasNewObservations:  true,
		hasNewTransmissions: true,
		hasNewPaths:         true,
	})

	pop := cachePopulated(s)
	// rf, topo, hash, dist, subpath should all be cleared
	for _, name := range []string{"rf", "topo", "hash", "dist", "subpath"} {
		if pop[name] {
			t.Errorf("%s cache should be cleared with combined flags", name)
		}
	}
	// chan should be preserved (no GRP_TXT)
	if !pop["chan"] {
		t.Error("chan cache should NOT be cleared without hasChannelData flag")
	}
}

func TestInvalidateCachesFor_NoFlags(t *testing.T) {
	s := newTestStore(t)
	populateAllCaches(s)

	s.invalidateCachesFor(cacheInvalidation{})

	pop := cachePopulated(s)
	for name, has := range pop {
		if !has {
			t.Errorf("%s cache should be preserved when no flags are set", name)
		}
	}
}

@@ -4,6 +4,7 @@ import (
	"database/sql"
	"encoding/json"
	"fmt"
	"log"
	"math"
	"os"
	"strings"
@@ -38,6 +39,12 @@ func OpenDB(path string) (*DB, error) {
}

func (db *DB) Close() error {
	// Checkpoint WAL before closing to release lock cleanly for new processes
	if _, err := db.conn.Exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
		log.Printf("[db] WAL checkpoint error: %v", err)
	} else {
		log.Println("[db] WAL checkpoint complete")
	}
	return db.conn.Close()
}

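The `Close` hunk above checkpoints the WAL before closing so a successor process can open the database cleanly. A minimal sketch of the same control flow, with plain funcs standing in for the `*sql.DB` calls (hypothetical parameters, introduced only so the sketch runs without a SQLite driver):

```go
package main

import "log"

// checkpointAndClose mirrors the Close path above: truncate the WAL so the
// next process can open the database file without replaying the log, then
// close the connection. A checkpoint failure is logged but never blocks
// the close itself.
func checkpointAndClose(exec func(query string) error, closeFn func() error) error {
	if err := exec("PRAGMA wal_checkpoint(TRUNCATE)"); err != nil {
		log.Printf("[db] WAL checkpoint error: %v", err)
	}
	return closeFn()
}
```

The order matters: the checkpoint must run on the still-open connection, and the connection is closed regardless of checkpoint outcome.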
@@ -6,6 +6,7 @@ import (
	"net/http/httptest"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

@@ -326,6 +327,84 @@ func TestSpaHandler(t *testing.T) {
			t.Errorf("expected no-cache header for .html, got %s", cc)
		}
	})

	t.Run("root path serves index.html", func(t *testing.T) {
		req := httptest.NewRequest("GET", "/", nil)
		w := httptest.NewRecorder()
		handler.ServeHTTP(w, req)

		if w.Code != 200 {
			t.Errorf("expected 200, got %d", w.Code)
		}
		body := w.Body.String()
		if body != "<html>SPA</html>" {
			t.Errorf("expected SPA index.html content, got %s", body)
		}
		ct := w.Header().Get("Content-Type")
		if ct != "text/html; charset=utf-8" {
			t.Errorf("expected text/html content type, got %s", ct)
		}
	})

	t.Run("/index.html serves pre-processed content", func(t *testing.T) {
		req := httptest.NewRequest("GET", "/index.html", nil)
		w := httptest.NewRecorder()
		handler.ServeHTTP(w, req)

		if w.Code != 200 {
			t.Errorf("expected 200, got %d", w.Code)
		}
		body := w.Body.String()
		if body != "<html>SPA</html>" {
			t.Errorf("expected SPA index.html content, got %s", body)
		}
	})
}

func TestSpaHandlerCacheBust(t *testing.T) {
	dir := t.TempDir()
	htmlWithBust := `<html><script src="app.js?v=__BUST__"></script><link href="style.css?v=__BUST__"></html>`
	os.WriteFile(filepath.Join(dir, "index.html"), []byte(htmlWithBust), 0644)

	fs := http.FileServer(http.Dir(dir))
	handler := spaHandler(dir, fs)

	t.Run("__BUST__ is replaced with a Unix timestamp", func(t *testing.T) {
		req := httptest.NewRequest("GET", "/", nil)
		w := httptest.NewRecorder()
		handler.ServeHTTP(w, req)

		body := w.Body.String()
		if strings.Contains(body, "__BUST__") {
			t.Errorf("__BUST__ placeholder was not replaced in response: %s", body)
		}
		// Verify it was replaced with digits (Unix timestamp)
		if !strings.Contains(body, "v=") {
			t.Errorf("expected v= query params in response, got: %s", body)
		}
	})

	t.Run("SPA fallback also has busted values", func(t *testing.T) {
		req := httptest.NewRequest("GET", "/nonexistent/route", nil)
		w := httptest.NewRecorder()
		handler.ServeHTTP(w, req)

		body := w.Body.String()
		if strings.Contains(body, "__BUST__") {
			t.Errorf("__BUST__ placeholder was not replaced in SPA fallback: %s", body)
		}
	})

	t.Run("/index.html also has busted values", func(t *testing.T) {
		req := httptest.NewRequest("GET", "/index.html", nil)
		w := httptest.NewRecorder()
		handler.ServeHTTP(w, req)

		body := w.Body.String()
		if strings.Contains(body, "__BUST__") {
			t.Errorf("__BUST__ placeholder was not replaced for /index.html: %s", body)
		}
	})
}

func TestWriteJSON(t *testing.T) {
@@ -345,3 +424,29 @@ func TestWriteJSON(t *testing.T) {
		t.Errorf("expected 'value', got %v", body["key"])
	}
}

func TestHaversineKm(t *testing.T) {
	// Same point should be 0
	if d := haversineKm(37.0, -122.0, 37.0, -122.0); d != 0 {
		t.Errorf("same point: expected 0, got %f", d)
	}

	// SF to LA ~559km
	d := haversineKm(37.7749, -122.4194, 34.0522, -118.2437)
	if d < 550 || d > 570 {
		t.Errorf("SF to LA: expected ~559km, got %f", d)
	}

	// Symmetry
	d1 := haversineKm(37.7749, -122.4194, 34.0522, -118.2437)
	d2 := haversineKm(34.0522, -118.2437, 37.7749, -122.4194)
	if d1 != d2 {
		t.Errorf("not symmetric: %f vs %f", d1, d2)
	}

	// Oslo to Stockholm ~415km (old Euclidean dLat*111, dLon*85 would give ~627km)
	d = haversineKm(59.9, 10.7, 59.3, 18.0)
	if d < 400 || d > 430 {
		t.Errorf("Oslo to Stockholm: expected ~415km, got %f", d)
	}
}

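`TestHaversineKm` above pins the expected distances but the function under test is elsewhere in the server package; the values it checks match the standard haversine great-circle formula. A sketch of one such implementation (an assumption — the repo's own `haversineKm` may differ in details such as the Earth radius constant):

```go
package main

import "math"

// haversineKm returns the great-circle distance in kilometres between two
// lat/lon points, using the haversine formula with a mean Earth radius.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371.0
	toRad := func(deg float64) float64 { return deg * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}
```

Unlike the old flat-Earth approximation the test comment mentions (`dLat*111, dLon*85`), this accounts for longitude lines converging at high latitudes, which is why the Oslo-Stockholm expectation drops from ~627 km to ~415 km.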
@@ -1,6 +1,7 @@
package main

import (
	"context"
	"database/sql"
	"flag"
	"fmt"
@@ -12,6 +13,7 @@ import (
	"os/signal"
	"path/filepath"
	"strings"
	"sync"
	"syscall"
	"time"

@@ -113,7 +115,13 @@ func main() {
	if err != nil {
		log.Fatalf("[db] failed to open %s: %v", resolvedDB, err)
	}
	defer database.Close()
	var dbCloseOnce sync.Once
	dbClose := func() error {
		var err error
		dbCloseOnce.Do(func() { err = database.Close() })
		return err
	}
	defer dbClose()

	// Verify DB has expected tables
	var tableName string
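The `dbClose` wrapper above exists because two paths can close the database: the deferred cleanup in `main` and the signal handler further down. A minimal sketch of the pattern in isolation:

```go
package main

import "sync"

// onceCloser wraps a close function so it runs at most once, no matter how
// many call sites reach it (deferred cleanup, signal handler, ...); every
// later call just returns the first call's error.
func onceCloser(closeFn func() error) func() error {
	var once sync.Once
	var err error
	return func() error {
		once.Do(func() { err = closeFn() })
		return err
	}
}
```

This is safer than a boolean guard: `sync.Once` also makes concurrent calls race-free, which matters here because the signal-handler goroutine may call it while `main` unwinds.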
@@ -204,10 +212,27 @@ func main() {
	go func() {
		sigCh := make(chan os.Signal, 1)
		signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
		<-sigCh
		log.Println("[server] shutting down...")
		sig := <-sigCh
		log.Printf("[server] received %v, shutting down...", sig)

		// 1. Stop accepting new WebSocket/poll data
		poller.Stop()
		httpServer.Close()

		// 2. Gracefully drain HTTP connections (up to 15s)
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		if err := httpServer.Shutdown(ctx); err != nil {
			log.Printf("[server] HTTP shutdown error: %v", err)
		}

		// 3. Close WebSocket hub
		hub.Close()

		// 4. Close database (release SQLite WAL lock)
		if err := dbClose(); err != nil {
			log.Printf("[server] DB close error: %v", err)
		}
		log.Println("[server] shutdown complete")
	}()

	log.Printf("[server] CoreScope (Go) listening on http://localhost:%d", cfg.Port)
@@ -217,11 +242,35 @@ func main() {
}

// spaHandler serves static files, falling back to index.html for SPA routes.
// It reads index.html once at creation time and replaces the __BUST__ placeholder
// with a Unix timestamp so browsers fetch fresh JS/CSS after each server restart.
func spaHandler(root string, fs http.Handler) http.Handler {
	// Pre-process index.html: replace __BUST__ with a cache-bust timestamp
	indexPath := filepath.Join(root, "index.html")
	rawHTML, err := os.ReadFile(indexPath)
	if err != nil {
		log.Printf("[static] warning: could not read index.html for cache-bust: %v", err)
		rawHTML = []byte("<!DOCTYPE html><html><body><h1>CoreScope</h1><p>index.html not found</p></body></html>")
	}
	bustValue := fmt.Sprintf("%d", time.Now().Unix())
	indexHTML := []byte(strings.ReplaceAll(string(rawHTML), "__BUST__", bustValue))
	log.Printf("[static] cache-bust value: %s", bustValue)

	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Serve pre-processed index.html for root and /index.html
		if r.URL.Path == "/" || r.URL.Path == "/index.html" {
			w.Header().Set("Content-Type", "text/html; charset=utf-8")
			w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
			w.Write(indexHTML)
			return
		}

		path := filepath.Join(root, r.URL.Path)
		if _, err := os.Stat(path); os.IsNotExist(err) {
			http.ServeFile(w, r, filepath.Join(root, "index.html"))
			// SPA fallback — serve pre-processed index.html
			w.Header().Set("Content-Type", "text/html; charset=utf-8")
			w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
			w.Write(indexHTML)
			return
		}
		// Disable caching for JS/CSS/HTML

cmd/server/perfstats_race_test.go (Normal file, 95 lines)
@@ -0,0 +1,95 @@
package main

import (
	"sync"
	"testing"
	"time"
)

// TestPerfStatsConcurrentAccess verifies that concurrent writes and reads
// to PerfStats do not trigger data races. Run with: go test -race
func TestPerfStatsConcurrentAccess(t *testing.T) {
	ps := NewPerfStats()

	var wg sync.WaitGroup
	const goroutines = 50
	const iterations = 200

	// Concurrent writers (simulating perfMiddleware)
	for i := 0; i < goroutines; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := 0; j < iterations; j++ {
				ms := float64(j) * 0.5
				key := "/api/test"
				if id%2 == 0 {
					key = "/api/other"
				}

				ps.mu.Lock()
				ps.Requests++
				ps.TotalMs += ms
				if _, ok := ps.Endpoints[key]; !ok {
					ps.Endpoints[key] = &EndpointPerf{Recent: make([]float64, 0, 100)}
				}
				ep := ps.Endpoints[key]
				ep.Count++
				ep.TotalMs += ms
				if ms > ep.MaxMs {
					ep.MaxMs = ms
				}
				ep.Recent = append(ep.Recent, ms)
				if len(ep.Recent) > 100 {
					ep.Recent = ep.Recent[1:]
				}
				if ms > 50 {
					ps.SlowQueries = append(ps.SlowQueries, SlowQuery{
						Path: key,
						Ms:   ms,
						Time: time.Now().UTC().Format(time.RFC3339),
					})
					if len(ps.SlowQueries) > 50 {
						ps.SlowQueries = ps.SlowQueries[1:]
					}
				}
				ps.mu.Unlock()
			}
		}(i)
	}

	// Concurrent readers (simulating handlePerf / handleHealth)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iterations; j++ {
				ps.mu.Lock()
				_ = ps.Requests
				_ = ps.TotalMs
				for _, ep := range ps.Endpoints {
					_ = ep.Count
					_ = ep.MaxMs
					c := make([]float64, len(ep.Recent))
					copy(c, ep.Recent)
				}
				s := make([]SlowQuery, len(ps.SlowQueries))
				copy(s, ps.SlowQueries)
				ps.mu.Unlock()
			}
		}()
	}

	wg.Wait()

	// Verify consistency
	ps.mu.Lock()
	defer ps.mu.Unlock()
	expectedRequests := int64(goroutines * iterations)
	if ps.Requests != expectedRequests {
		t.Errorf("expected %d requests, got %d", expectedRequests, ps.Requests)
	}
	if len(ps.Endpoints) == 0 {
		t.Error("expected endpoints to be populated")
	}
}
@@ -42,6 +42,7 @@ type Server struct {

// PerfStats tracks request performance.
type PerfStats struct {
	mu        sync.Mutex
	Requests  int64
	TotalMs   float64
	Endpoints map[string]*EndpointPerf
@@ -136,6 +137,7 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
	r.HandleFunc("/api/analytics/channels", s.handleAnalyticsChannels).Methods("GET")
	r.HandleFunc("/api/analytics/distance", s.handleAnalyticsDistance).Methods("GET")
	r.HandleFunc("/api/analytics/hash-sizes", s.handleAnalyticsHashSizes).Methods("GET")
	r.HandleFunc("/api/analytics/hash-collisions", s.handleAnalyticsHashCollisions).Methods("GET")
	r.HandleFunc("/api/analytics/subpaths", s.handleAnalyticsSubpaths).Methods("GET")
	r.HandleFunc("/api/analytics/subpath-detail", s.handleAnalyticsSubpathDetail).Methods("GET")

@@ -161,10 +163,7 @@ func (s *Server) perfMiddleware(next http.Handler) http.Handler {
		next.ServeHTTP(w, r)
		ms := float64(time.Since(start).Microseconds()) / 1000.0

		s.perfStats.Requests++
		s.perfStats.TotalMs += ms

		// Normalize key: prefer mux route template (like Node.js req.route.path)
		// Normalize key outside lock (no shared state needed)
		key := r.URL.Path
		if route := mux.CurrentRoute(r); route != nil {
			if tmpl, err := route.GetPathTemplate(); err == nil {
@@ -174,6 +173,11 @@
		if key == r.URL.Path {
			key = perfHexFallback.ReplaceAllString(key, ":id")
		}

		s.perfStats.mu.Lock()
		s.perfStats.Requests++
		s.perfStats.TotalMs += ms

		if _, ok := s.perfStats.Endpoints[key]; !ok {
			s.perfStats.Endpoints[key] = &EndpointPerf{Recent: make([]float64, 0, 100)}
		}
@@ -199,6 +203,7 @@ func (s *Server) perfMiddleware(next http.Handler) http.Handler {
				s.perfStats.SlowQueries = s.perfStats.SlowQueries[1:]
			}
		}
		s.perfStats.mu.Unlock()
	})
}

@@ -364,7 +369,8 @@ func (s *Server) handleHealth(w http.ResponseWriter, r *http.Request) {
		lastPauseMs = float64(m.PauseNs[(m.NumGC+255)%256]) / 1e6
	}

	// Build slow queries list
	// Build slow queries list (copy under lock)
	s.perfStats.mu.Lock()
	recentSlow := make([]SlowQuery, 0)
	sliceEnd := s.perfStats.SlowQueries
	if len(sliceEnd) > 5 {
@@ -373,6 +379,10 @@
	for _, sq := range sliceEnd {
		recentSlow = append(recentSlow, sq)
	}
	perfRequests := s.perfStats.Requests
	perfTotalMs := s.perfStats.TotalMs
	perfSlowCount := len(s.perfStats.SlowQueries)
	s.perfStats.mu.Unlock()

	writeJSON(w, HealthResponse{
		Status: "ok",
@@ -402,9 +412,9 @@
			EstimatedMB: pktEstMB,
		},
		Perf: HealthPerfStats{
			TotalRequests: int(s.perfStats.Requests),
			AvgMs:         safeAvg(s.perfStats.TotalMs, float64(s.perfStats.Requests)),
			SlowQueries:   len(s.perfStats.SlowQueries),
			TotalRequests: int(perfRequests),
			AvgMs:         safeAvg(perfTotalMs, float64(perfRequests)),
			SlowQueries:   perfSlowCount,
			RecentSlow:    recentSlow,
		},
	})
@@ -464,22 +474,50 @@ func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
}

func (s *Server) handlePerf(w http.ResponseWriter, r *http.Request) {
	// Endpoint performance summary
	// Copy perfStats under lock to avoid data races
	s.perfStats.mu.Lock()
	type epSnapshot struct {
		path    string
		count   int
		totalMs float64
		maxMs   float64
		recent  []float64
	}
	epSnapshots := make([]epSnapshot, 0, len(s.perfStats.Endpoints))
	for path, ep := range s.perfStats.Endpoints {
		recentCopy := make([]float64, len(ep.Recent))
		copy(recentCopy, ep.Recent)
		epSnapshots = append(epSnapshots, epSnapshot{path, ep.Count, ep.TotalMs, ep.MaxMs, recentCopy})
	}
	uptimeSec := int(time.Since(s.perfStats.StartedAt).Seconds())
	totalRequests := s.perfStats.Requests
	totalMs := s.perfStats.TotalMs
	slowQueries := make([]SlowQuery, 0)
	sliceEnd := s.perfStats.SlowQueries
	if len(sliceEnd) > 20 {
		sliceEnd = sliceEnd[len(sliceEnd)-20:]
	}
	for _, sq := range sliceEnd {
		slowQueries = append(slowQueries, sq)
	}
	s.perfStats.mu.Unlock()

	// Process snapshots outside lock
	type epEntry struct {
		path string
		data *EndpointStatsResp
	}
	var entries []epEntry
	for path, ep := range s.perfStats.Endpoints {
		sorted := sortedCopy(ep.Recent)
	for _, snap := range epSnapshots {
		sorted := sortedCopy(snap.recent)
		d := &EndpointStatsResp{
			Count: ep.Count,
			AvgMs: safeAvg(ep.TotalMs, float64(ep.Count)),
			Count: snap.count,
			AvgMs: safeAvg(snap.totalMs, float64(snap.count)),
			P50Ms: round(percentile(sorted, 0.5), 1),
			P95Ms: round(percentile(sorted, 0.95), 1),
			MaxMs: round(ep.MaxMs, 1),
			MaxMs: round(snap.maxMs, 1),
		}
		entries = append(entries, epEntry{path, d})
		entries = append(entries, epEntry{snap.path, d})
	}
	// Sort by total time spent (count * avg) descending, matching Node.js
	sort.Slice(entries, func(i, j int) bool {
@@ -520,22 +558,10 @@ func (s *Server) handlePerf(w http.ResponseWriter, r *http.Request) {
		sqliteStats = &ss
	}

	uptimeSec := int(time.Since(s.perfStats.StartedAt).Seconds())

	// Convert slow queries
	slowQueries := make([]SlowQuery, 0)
	sliceEnd := s.perfStats.SlowQueries
	if len(sliceEnd) > 20 {
		sliceEnd = sliceEnd[len(sliceEnd)-20:]
	}
	for _, sq := range sliceEnd {
		slowQueries = append(slowQueries, sq)
	}

	writeJSON(w, PerfResponse{
		Uptime:        uptimeSec,
		TotalRequests: s.perfStats.Requests,
		AvgMs:         safeAvg(s.perfStats.TotalMs, float64(s.perfStats.Requests)),
		TotalRequests: totalRequests,
		AvgMs:         safeAvg(totalMs, float64(totalRequests)),
		Endpoints:     summary,
		SlowQueries:   slowQueries,
		Cache:         perfCS,
@@ -559,7 +585,13 @@ func (s *Server) handlePerf(w http.ResponseWriter, r *http.Request) {
}

func (s *Server) handlePerfReset(w http.ResponseWriter, r *http.Request) {
	s.perfStats = NewPerfStats()
	s.perfStats.mu.Lock()
	s.perfStats.Requests = 0
	s.perfStats.TotalMs = 0
	s.perfStats.Endpoints = make(map[string]*EndpointPerf)
	s.perfStats.SlowQueries = make([]SlowQuery, 0)
	s.perfStats.StartedAt = time.Now()
	s.perfStats.mu.Unlock()
	writeJSON(w, OkResp{Ok: true})
}

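The `handlePerf` rewrite above follows a snapshot-under-lock pattern: copy the shared counters and slices while holding the mutex, then do the slower work (sorting, percentiles, JSON encoding) outside it. A reduced sketch of the pattern, with field names simplified from `PerfStats`:

```go
package main

import "sync"

// counters is a cut-down stand-in for PerfStats.
type counters struct {
	mu       sync.Mutex
	requests int64
	recent   []float64
}

// snapshot copies everything a reader needs while holding the lock.
// The slice is deep-copied so later reads never alias data that a
// concurrent writer may append to or re-slice.
func (c *counters) snapshot() (int64, []float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	recentCopy := make([]float64, len(c.recent))
	copy(recentCopy, c.recent)
	return c.requests, recentCopy
}
```

The deep copy is the easy part to forget: returning `c.recent` directly would release the lock but still share the backing array with writers, which is exactly what `go test -race` flags.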
@@ -1201,6 +1233,18 @@ func (s *Server) handleAnalyticsHashSizes(w http.ResponseWriter, r *http.Request
|
||||
})
|
||||
}
|
||||
|
||||
func (s *Server) handleAnalyticsHashCollisions(w http.ResponseWriter, r *http.Request) {
|
||||
if s.store != nil {
|
||||
region := r.URL.Query().Get("region")
|
||||
writeJSON(w, s.store.GetAnalyticsHashCollisions(region))
|
||||
return
|
||||
}
|
||||
writeJSON(w, map[string]interface{}{
|
||||
"inconsistent_nodes": []interface{}{},
|
||||
"by_size": map[string]interface{}{},
|
||||
})
|
||||
}
|
||||
|
||||
func (s *Server) handleAnalyticsSubpaths(w http.ResponseWriter, r *http.Request) {
|
||||
if s.store != nil {
|
||||
region := r.URL.Query().Get("region")
|
||||
|
||||
@@ -2402,3 +2402,628 @@ func min(a, b int) int {
|
||||
}
|
||||
return b
|
||||
}
|
||||
|
||||
// TestLatestSeenMaintained verifies that StoreTx.LatestSeen is populated after Load()
|
||||
// and is >= FirstSeen for packets that have observations.
|
||||
func TestLatestSeenMaintained(t *testing.T) {
|
||||
db := setupTestDB(t)
|
||||
defer db.Close()
|
||||
seedTestData(t, db)
|
||||
|
||||
store := NewPacketStore(db, nil)
|
||||
if err := store.Load(); err != nil {
|
||||
t.Fatalf("store.Load failed: %v", err)
|
||||
}
|
||||
|
||||
store.mu.RLock()
|
||||
defer store.mu.RUnlock()
|
||||
|
||||
if len(store.packets) == 0 {
|
||||
t.Fatal("expected packets in store after Load")
|
||||
}
|
||||
|
||||
for _, tx := range store.packets {
|
||||
if tx.LatestSeen == "" {
|
||||
t.Errorf("packet %s has empty LatestSeen (FirstSeen=%s)", tx.Hash, tx.FirstSeen)
|
||||
continue
|
||||
}
|
||||
// LatestSeen must be >= FirstSeen (string comparison works for RFC3339/ISO8601)
|
||||
if tx.LatestSeen < tx.FirstSeen {
|
||||
t.Errorf("packet %s: LatestSeen %q < FirstSeen %q", tx.Hash, tx.LatestSeen, tx.FirstSeen)
|
||||
}
|
||||
// For packets with observations, LatestSeen must be >= all observation timestamps.
|
||||
for _, obs := range tx.Observations {
|
||||
if obs.Timestamp != "" && obs.Timestamp > tx.LatestSeen {
|
||||
t.Errorf("packet %s: obs.Timestamp %q > LatestSeen %q", tx.Hash, obs.Timestamp, tx.LatestSeen)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestQueryGroupedPacketsSortedByLatest verifies that QueryGroupedPackets returns packets
|
||||
// sorted by LatestSeen DESC — i.e. the packet whose most-recent observation is newest
|
||||
// comes first, even if its first_seen is older.
|
||||
func TestQueryGroupedPacketsSortedByLatest(t *testing.T) {
|
||||
db := setupTestDB(t)
|
||||
defer db.Close()
|
||||
|
||||
now := time.Now().UTC()
|
||||
// oldFirst: first_seen is old, but observation is very recent.
|
||||
oldFirst := now.Add(-48 * time.Hour).Format(time.RFC3339)
|
||||
// newFirst: first_seen is recent, but observation is old.
|
||||
newFirst := now.Add(-1 * time.Hour).Format(time.RFC3339)
|
||||
recentEpoch := now.Add(-5 * time.Minute).Unix()
|
||||
oldEpoch := now.Add(-72 * time.Hour).Unix()
|
||||
|
||||
db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
|
||||
VALUES ('sortobs', 'Sort Observer', 'TST', ?, '2026-01-01T00:00:00Z', 1)`, now.Format(time.RFC3339))
|
||||
|
||||
// Packet A: old first_seen, but a very recent observation — should sort first.
|
||||
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
|
||||
VALUES ('AA01', 'sort_old_first_recent_obs', ?, 1, 2, '{"type":"TXT_MSG","text":"old first"}')`, oldFirst)
|
||||
var idA int64
|
||||
db.conn.QueryRow(`SELECT id FROM transmissions WHERE hash='sort_old_first_recent_obs'`).Scan(&idA)
|
||||
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
|
||||
VALUES (?, 1, 10.0, -90, '[]', ?)`, idA, recentEpoch)
|
||||
|
||||
// Packet B: newer first_seen, but an old observation — should sort second.
|
||||
db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
|
||||
VALUES ('BB02', 'sort_new_first_old_obs', ?, 1, 2, '{"type":"TXT_MSG","text":"new first"}')`, newFirst)
|
||||
var idB int64
|
||||
db.conn.QueryRow(`SELECT id FROM transmissions WHERE hash='sort_new_first_old_obs'`).Scan(&idB)
|
||||
db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
|
||||
VALUES (?, 1, 10.0, -90, '[]', ?)`, idB, oldEpoch)
|
||||
|
||||
store := NewPacketStore(db, nil)
|
||||
if err := store.Load(); err != nil {
|
||||
t.Fatalf("store.Load failed: %v", err)
|
||||
}
|
||||
|
||||
result := store.QueryGroupedPackets(PacketQuery{Limit: 50})
|
||||
if result.Total < 2 {
|
||||
t.Fatalf("expected at least 2 packets, got %d", result.Total)
|
||||
}
|
||||
|
||||
// Find the two test packets in the result (may be mixed with other entries).
|
||||
firstHash := ""
|
||||
secondHash := ""
|
||||
for _, p := range result.Packets {
|
||||
h, _ := p["hash"].(string)
|
||||
if h == "sort_old_first_recent_obs" || h == "sort_new_first_old_obs" {
|
||||
if firstHash == "" {
|
||||
firstHash = h
|
||||
} else {
|
||||
secondHash = h
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if firstHash != "sort_old_first_recent_obs" {
|
||||
t.Errorf("expected sort_old_first_recent_obs to appear before sort_new_first_old_obs in sorted results; got first=%q second=%q", firstHash, secondHash)
|
||||
}
|
||||
}
|
||||
|
||||
// TestQueryGroupedPacketsCacheReturnsConsistentResult verifies that two rapid successive
|
||||
// calls to QueryGroupedPackets return the same total count and first packet hash.
|
||||
func TestQueryGroupedPacketsCacheReturnsConsistentResult(t *testing.T) {
|
||||
db := setupTestDB(t)
|
||||
defer db.Close()
|
||||
seedTestData(t, db)
|
||||
|
||||
store := NewPacketStore(db, nil)
|
||||
if err := store.Load(); err != nil {
|
||||
t.Fatalf("store.Load failed: %v", err)
|
||||
}
|
||||
|
||||
q := PacketQuery{Limit: 50}
|
||||
r1 := store.QueryGroupedPackets(q)
|
||||
r2 := store.QueryGroupedPackets(q)
|
||||
|
||||
if r1.Total != r2.Total {
|
||||
t.Errorf("cache inconsistency: first call total=%d, second call total=%d", r1.Total, r2.Total)
|
||||
}
|
||||
if r1.Total == 0 {
|
||||
t.Fatal("expected non-zero results from QueryGroupedPackets")
|
||||
}
|
||||
h1, _ := r1.Packets[0]["hash"].(string)
|
||||
h2, _ := r2.Packets[0]["hash"].(string)
|
||||
if h1 != h2 {
|
||||
t.Errorf("cache inconsistency: first call first hash=%q, second call first hash=%q", h1, h2)
|
||||
}
|
||||
}
|
||||
|
||||
// TestGetChannelsCacheReturnsConsistentResult verifies that two rapid successive calls
|
||||
// to GetChannels return the same number of channels with the same names.
|
||||
func TestGetChannelsCacheReturnsConsistentResult(t *testing.T) {
|
||||
db := setupTestDB(t)
|
||||
defer db.Close()
|
||||
seedTestData(t, db)
|
||||
|
||||
store := NewPacketStore(db, nil)
|
||||
if err := store.Load(); err != nil {
|
||||
t.Fatalf("store.Load failed: %v", err)
|
||||
}
|
||||
|
||||
r1 := store.GetChannels("")
|
||||
r2 := store.GetChannels("")
|
||||
|
||||
if len(r1) != len(r2) {
|
||||
t.Errorf("cache inconsistency: first call len=%d, second call len=%d", len(r1), len(r2))
|
||||
}
|
||||
if len(r1) == 0 {
|
||||
t.Fatal("expected at least one channel from seedTestData")
|
||||
}
|
||||
|
||||
names1 := make(map[string]bool)
|
||||
for _, ch := range r1 {
|
||||
if n, ok := ch["name"].(string); ok {
|
||||
names1[n] = true
|
||||
}
|
||||
}
|
||||
for _, ch := range r2 {
|
||||
if n, ok := ch["name"].(string); ok {
|
||||
if !names1[n] {
|
||||
t.Errorf("cache inconsistency: channel %q in second result but not first", n)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestGetChannelsNotBlockedByLargeLock verifies that GetChannels returns correct channel
|
||||
// data (count and messageCount) after observations have been added — i.e. the lock-copy
|
||||
// pattern works correctly and the JSON unmarshal outside the lock produces valid results.
|
||||
func TestGetChannelsNotBlockedByLargeLock(t *testing.T) {
|
||||
db := setupTestDB(t)
|
||||
defer db.Close()
|
||||
seedTestData(t, db)
|
||||
|
||||
store := NewPacketStore(db, nil)
|
||||
if err := store.Load(); err != nil {
|
||||
t.Fatalf("store.Load failed: %v", err)
|
||||
}
|
||||
|
||||
channels := store.GetChannels("")
|
||||
|
||||
// seedTestData inserts one GRP_TXT (payload_type=5) packet with channel "#test".
|
||||
if len(channels) != 1 {
|
||||
t.Fatalf("expected 1 channel, got %d", len(channels))
|
||||
}
|
||||
|
||||
ch := channels[0]
|
||||
name, ok := ch["name"].(string)
|
||||
if !ok || name != "#test" {
|
||||
t.Errorf("expected channel name '#test', got %v", ch["name"])
|
||||
}
|
||||
|
||||
// messageCount should be 1 (one CHAN packet for #test).
|
||||
msgCount, ok := ch["messageCount"].(int)
|
||||
if !ok {
|
||||
// JSON numbers may unmarshal as float64 — but GetChannels returns native Go values.
|
||||
t.Errorf("expected messageCount to be int, got %T (%v)", ch["messageCount"], ch["messageCount"])
|
||||
} else if msgCount != 1 {
|
||||
t.Errorf("expected messageCount=1, got %d", msgCount)
|
||||
}
|
||||
}
|
||||
|
||||
// --- Tests for computeHashCollisions (Issue #416) ---
|
||||
|
||||
func TestAnalyticsHashCollisionsEndpoint(t *testing.T) {
|
||||
_, router := setupTestServer(t)
|
||||
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
|
||||
w := httptest.NewRecorder()
|
||||
router.ServeHTTP(w, req)
|
||||
|
||||
if w.Code != 200 {
|
||||
t.Fatalf("expected 200, got %d", w.Code)
|
||||
}
|
||||
var body map[string]interface{}
|
||||
if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
|
||||
t.Fatalf("invalid JSON: %v", err)
|
||||
}
|
||||
|
||||
// Must have top-level keys
|
||||
if _, ok := body["inconsistent_nodes"]; !ok {
|
||||
t.Error("missing inconsistent_nodes key")
|
||||
}
|
||||
if _, ok := body["by_size"]; !ok {
|
||||
t.Error("missing by_size key")
|
||||
}
|
||||
|
||||
bySize, ok := body["by_size"].(map[string]interface{})
|
||||
if !ok {
|
||||
t.Fatal("by_size is not a map")
|
||||
}
|
||||
// Must have entries for 1, 2, 3 byte sizes
|
||||
for _, sz := range []string{"1", "2", "3"} {
|
||||
sizeData, ok := bySize[sz].(map[string]interface{})
|
||||
if !ok {
|
||||
t.Errorf("by_size[%s] is not a map", sz)
|
||||
continue
|
||||
}
|
||||
stats, ok := sizeData["stats"].(map[string]interface{})
|
||||
if !ok {
|
||||
t.Errorf("by_size[%s].stats is not a map", sz)
|
||||
continue
|
||||
}
|
||||
if _, ok := stats["total_nodes"]; !ok {
|
||||
t.Errorf("by_size[%s].stats missing total_nodes", sz)
|
||||
}
|
||||
if _, ok := stats["collision_count"]; !ok {
|
||||
t.Errorf("by_size[%s].stats missing collision_count", sz)
|
||||
}
|
||||
// collisions must be an array, not null
|
||||
collisions, ok := sizeData["collisions"].([]interface{})
|
||||
if !ok {
|
||||
t.Errorf("by_size[%s].collisions is not an array", sz)
|
||||
}
|
||||
_ = collisions
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashCollisionsNoNullArrays(t *testing.T) {
|
||||
_, router := setupTestServer(t)
|
||||
req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
|
||||
w := httptest.NewRecorder()
|
||||
router.ServeHTTP(w, req)
|
||||
|
||||
// JSON must not contain "null" for arrays
|
||||
bodyStr := w.Body.String()
|
||||
if bodyStr == "" {
|
||||
t.Fatal("empty response body")
|
||||
}
|
||||
// inconsistent_nodes should be [] not null
|
||||
var body map[string]interface{}
|
||||
json.Unmarshal(w.Body.Bytes(), &body)
|
||||
if body["inconsistent_nodes"] == nil {
|
||||
t.Error("inconsistent_nodes is null, should be empty array")
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashCollisionsRegionParam(t *testing.T) {
	// Issue #438: region param should be accepted and used for filtering.
	// With no region observers configured, results should be identical to global.
	_, router := setupTestServer(t)

	// Request without region
	req1 := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w1 := httptest.NewRecorder()
	router.ServeHTTP(w1, req1)
	if w1.Code != 200 {
		t.Fatalf("expected 200, got %d", w1.Code)
	}

	// Request with region param (no observers for this region, so falls back to global)
	req2 := httptest.NewRequest("GET", "/api/analytics/hash-collisions?region=us-west", nil)
	w2 := httptest.NewRecorder()
	router.ServeHTTP(w2, req2)
	if w2.Code != 200 {
		t.Fatalf("expected 200, got %d", w2.Code)
	}

	// With no region observers configured, both should return identical results
	if w1.Body.String() != w2.Body.String() {
		t.Error("responses differ with/without region param when no region observers configured")
	}
}

func TestHashCollisionsOneByteCells(t *testing.T) {
	_, router := setupTestServer(t)
	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})
	oneByteData := bySize["1"].(map[string]interface{})

	// 1-byte data should include one_byte_cells for matrix rendering
	cells, ok := oneByteData["one_byte_cells"].(map[string]interface{})
	if !ok {
		t.Fatal("1-byte data missing one_byte_cells")
	}
	// Should have 256 entries (00-FF)
	if len(cells) != 256 {
		t.Errorf("expected 256 one_byte_cells entries, got %d", len(cells))
	}
}

func TestHashCollisionsTwoByteCells(t *testing.T) {
	_, router := setupTestServer(t)
	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})
	twoByteData := bySize["2"].(map[string]interface{})

	// 2-byte data should include two_byte_cells for matrix rendering
	cells, ok := twoByteData["two_byte_cells"].(map[string]interface{})
	if !ok {
		t.Fatal("2-byte data missing two_byte_cells")
	}
	// Should have 256 entries (00-FF first-byte groups)
	if len(cells) != 256 {
		t.Errorf("expected 256 two_byte_cells entries, got %d", len(cells))
	}
}

func TestHashCollisionsThreeByteNoMatrix(t *testing.T) {
	_, router := setupTestServer(t)
	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})
	threeByteData := bySize["3"].(map[string]interface{})

	// 3-byte data should NOT have one_byte_cells or two_byte_cells
	if _, ok := threeByteData["one_byte_cells"]; ok {
		t.Error("3-byte data should not have one_byte_cells")
	}
	if _, ok := threeByteData["two_byte_cells"]; ok {
		t.Error("3-byte data should not have two_byte_cells")
	}
}

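The per-size stats these tests assert (`space_size`, `unique_prefixes`, `pct_used`) follow directly from the prefix length: an n-byte prefix has a keyspace of 256^n. A minimal sketch of that arithmetic — the function name and signature are ours for illustration, not the server's actual implementation:

```go
package main

import "fmt"

// prefixStats sketches how the per-size stats could be derived: for an
// n-byte prefix the keyspace is 256^n, and pct_used is the share of that
// space occupied by distinct prefixes.
func prefixStats(prefixes map[string]bool, nBytes int) (spaceSize int, pctUsed float64) {
	spaceSize = 1
	for i := 0; i < nBytes; i++ {
		spaceSize *= 256
	}
	pctUsed = float64(len(prefixes)) / float64(spaceSize) * 100
	return
}

func main() {
	// Three distinct 1-byte prefixes out of 256 possible values.
	prefixes := map[string]bool{"CC": true, "BB": true, "A0": true}
	space, pct := prefixStats(prefixes, 1)
	fmt.Println(space, pct) // 256 1.171875
}
```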
func TestHashCollisionsClassification(t *testing.T) {
	// Test with seed data — nodes have coordinates, so distance classification should work
	_, router := setupTestServer(t)
	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})

	// Check that collision entries have required fields
	for _, sz := range []string{"1", "2", "3"} {
		sizeData := bySize[sz].(map[string]interface{})
		collisions := sizeData["collisions"].([]interface{})
		for i, c := range collisions {
			entry := c.(map[string]interface{})
			if _, ok := entry["prefix"]; !ok {
				t.Errorf("by_size[%s].collisions[%d] missing prefix", sz, i)
			}
			if _, ok := entry["classification"]; !ok {
				t.Errorf("by_size[%s].collisions[%d] missing classification", sz, i)
			}
			class := entry["classification"].(string)
			validClasses := map[string]bool{"local": true, "regional": true, "distant": true, "incomplete": true, "unknown": true}
			if !validClasses[class] {
				t.Errorf("by_size[%s].collisions[%d] invalid classification: %s", sz, i, class)
			}
			nodes, ok := entry["nodes"].([]interface{})
			if !ok {
				t.Errorf("by_size[%s].collisions[%d] missing nodes array", sz, i)
			}
			if len(nodes) < 2 {
				t.Errorf("by_size[%s].collisions[%d] has %d nodes, expected >=2", sz, i, len(nodes))
			}
		}
	}
}

func TestHashCollisionsCacheTTL(t *testing.T) {
	// Issue #420: collision cache should use dedicated TTL (60s), not rfCacheTTL (15s)
	db := setupTestDB(t)
	seedTestData(t, db)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}

	if store.collisionCacheTTL != 60*time.Second {
		t.Errorf("expected collisionCacheTTL=60s, got %v", store.collisionCacheTTL)
	}
	if store.rfCacheTTL != 15*time.Second {
		t.Errorf("expected rfCacheTTL=15s, got %v", store.rfCacheTTL)
	}
}

func TestHashCollisionsStatsFields(t *testing.T) {
	_, router := setupTestServer(t)
	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})

	for _, sz := range []string{"1", "2", "3"} {
		sizeData := bySize[sz].(map[string]interface{})
		stats := sizeData["stats"].(map[string]interface{})

		requiredFields := []string{"total_nodes", "nodes_for_byte", "using_this_size", "unique_prefixes", "collision_count", "space_size", "pct_used"}
		for _, f := range requiredFields {
			if _, ok := stats[f]; !ok {
				t.Errorf("by_size[%s].stats missing field: %s", sz, f)
			}
		}
	}
}

func TestHashCollisionsEmptyStore(t *testing.T) {
	// Test with no nodes seeded
	db := setupTestDB(t)
	// Don't call seedTestData — empty store
	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != 200 {
		t.Fatalf("expected 200, got %d", w.Code)
	}

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)

	// With no nodes, inconsistent_nodes should be empty array
	incon := body["inconsistent_nodes"].([]interface{})
	if len(incon) != 0 {
		t.Errorf("expected 0 inconsistent nodes, got %d", len(incon))
	}

	// All collision lists should be empty
	bySize := body["by_size"].(map[string]interface{})
	for _, sz := range []string{"1", "2", "3"} {
		sizeData := bySize[sz].(map[string]interface{})
		collisions := sizeData["collisions"].([]interface{})
		if len(collisions) != 0 {
			t.Errorf("by_size[%s] expected 0 collisions with empty store, got %d", sz, len(collisions))
		}
	}
}

func TestHashCollisionsWithCollision(t *testing.T) {
	// Seed two nodes with the same 1-byte prefix to verify collision detection
	db := setupTestDB(t)
	// Don't use seedTestData — create minimal data to control hash sizes
	now := time.Now().UTC()
	recent := now.Add(-1 * time.Hour).Format(time.RFC3339)

	// Two nodes with same first byte 'CC', no adverts so hash_size=0 (included in all buckets)
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES ('CC11223344556677', 'Node1', 'repeater', 37.5, -122.0, ?, '2026-01-01T00:00:00Z', 0)`, recent)
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES ('CC99887766554433', 'Node2', 'repeater', 37.51, -122.01, ?, '2026-01-01T00:00:00Z', 0)`, recent)

	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})
	oneByteData := bySize["1"].(map[string]interface{})
	stats := oneByteData["stats"].(map[string]interface{})

	collisionCount := int(stats["collision_count"].(float64))
	if collisionCount < 1 {
		t.Errorf("expected at least 1 collision (CC prefix), got %d", collisionCount)
	}

	// Check the collision entry
	collisions := oneByteData["collisions"].([]interface{})
	found := false
	for _, c := range collisions {
		entry := c.(map[string]interface{})
		if entry["prefix"] == "CC" {
			found = true
			nodes := entry["nodes"].([]interface{})
			if len(nodes) < 2 {
				t.Errorf("expected >=2 nodes for CC collision, got %d", len(nodes))
			}
			// Both nodes have coords close together, so classification should be "local"
			class := entry["classification"].(string)
			if class != "local" {
				t.Errorf("expected 'local' classification for nearby nodes, got %s", class)
			}
		}
	}
	if !found {
		t.Error("expected collision entry with prefix 'CC'")
	}
}

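The collision detection exercised above reduces to grouping public keys by their first n hex characters (two per prefix byte) and reporting prefixes shared by two or more keys. A standalone sketch of that grouping — the function is illustrative, not the server's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// collidingPrefixes groups public keys by their first prefixBytes bytes
// (2*prefixBytes hex characters) and returns prefixes shared by >=2 keys.
func collidingPrefixes(keys []string, prefixBytes int) []string {
	n := prefixBytes * 2 // hex chars per byte
	groups := map[string]int{}
	for _, k := range keys {
		if len(k) < n {
			continue // skip keys shorter than the prefix
		}
		groups[strings.ToUpper(k[:n])]++
	}
	var out []string
	for p, c := range groups {
		if c >= 2 {
			out = append(out, p)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Same keys as the test above: two share the 'CC' first byte.
	keys := []string{"CC11223344556677", "CC99887766554433", "BB00000000000000"}
	fmt.Println(collidingPrefixes(keys, 1)) // [CC]
}
```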
func TestHashCollisionsShortPublicKey(t *testing.T) {
	// Nodes with very short public keys should not crash
	db := setupTestDB(t)
	now := time.Now().UTC()
	recent := now.Add(-1 * time.Hour).Format(time.RFC3339)

	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES ('A', 'ShortKey', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)

	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != 200 {
		t.Fatalf("expected 200 even with short public key, got %d", w.Code)
	}
}

func TestHashCollisionsMissingCoordinates(t *testing.T) {
	// Nodes without coordinates should get "incomplete" classification
	db := setupTestDB(t)
	now := time.Now().UTC()
	recent := now.Add(-1 * time.Hour).Format(time.RFC3339)

	// Two nodes same prefix, no coordinates
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES ('BB11223344556677', 'NoCoords1', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES ('BB99887766554433', 'NoCoords2', 'repeater', 0, 0, ?, '2026-01-01T00:00:00Z', 1)`, recent)

	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest("GET", "/api/analytics/hash-collisions", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	var body map[string]interface{}
	json.Unmarshal(w.Body.Bytes(), &body)
	bySize := body["by_size"].(map[string]interface{})
	oneByteData := bySize["1"].(map[string]interface{})
	collisions := oneByteData["collisions"].([]interface{})

	for _, c := range collisions {
		entry := c.(map[string]interface{})
		if entry["prefix"] == "BB" {
			class := entry["classification"].(string)
			if class != "incomplete" {
				t.Errorf("expected 'incomplete' for nodes without coords, got %s", class)
			}
		}
	}
}

@@ -39,6 +39,7 @@ type StoreTx struct {
 	RSSI      *float64
 	PathJSON  string
 	Direction string
+	LatestSeen string // max observation timestamp (or FirstSeen if no observations)
 	// Cached parsed fields (set once, read many)
 	parsedPath []string // cached parsePathJSON result
 	pathParsed bool     // whether parsedPath has been set
@@ -78,13 +79,25 @@ type PacketStore struct {
 	cacheMu   sync.Mutex
 	rfCache   map[string]*cachedResult // region → cached RF result
 	topoCache map[string]*cachedResult // region → cached topology result
-	hashCache map[string]*cachedResult // region → cached hash-sizes result
+	hashCache      map[string]*cachedResult // region → cached hash-sizes result
+	collisionCache map[string]*cachedResult // cached hash-collisions result keyed by region ("" = global)
 	chanCache    map[string]*cachedResult // region → cached channels result
 	distCache    map[string]*cachedResult // region → cached distance result
 	subpathCache map[string]*cachedResult // params → cached subpaths result
-	rfCacheTTL   time.Duration
+	rfCacheTTL        time.Duration
+	collisionCacheTTL time.Duration
 	cacheHits   int64
 	cacheMisses int64
+	// Short-lived cache for QueryGroupedPackets (avoids repeated full sort)
+	groupedCacheMu  sync.Mutex
+	groupedCacheKey string
+	groupedCacheExp time.Time
+	groupedCacheRes *PacketResult
+	// Short-lived cache for GetChannels (avoids repeated full scan + JSON unmarshal)
+	channelsCacheMu  sync.Mutex
+	channelsCacheKey string
+	channelsCacheExp time.Time
+	channelsCacheRes []map[string]interface{}
 	// Cached node list + prefix map (rebuilt on demand, shared across analytics)
 	nodeCache []nodeInfo
 	nodePM    *prefixMap
@@ -118,7 +131,7 @@ type distHopRecord struct {
 	ToPk       string
 	Dist       float64
 	Type       string // "R↔R", "C↔R", "C↔C"
-	SNR        interface{}
+	SNR        *float64
 	Hash       string
 	Timestamp  string
 	HourBucket string
@@ -161,11 +174,14 @@ func NewPacketStore(db *DB, cfg *PacketStoreConfig) *PacketStore {
 		byPayloadType: make(map[int][]*StoreTx),
 		rfCache:       make(map[string]*cachedResult),
 		topoCache:     make(map[string]*cachedResult),
-		hashCache:     make(map[string]*cachedResult),
+		hashCache:      make(map[string]*cachedResult),
+		collisionCache: make(map[string]*cachedResult),
 		chanCache:    make(map[string]*cachedResult),
 		distCache:    make(map[string]*cachedResult),
 		subpathCache: make(map[string]*cachedResult),
-		rfCacheTTL:   15 * time.Second,
+		rfCacheTTL:        15 * time.Second,
+		collisionCacheTTL: 60 * time.Second,
 		spIndex: make(map[string]int, 4096),
 	}
 	if cfg != nil {
@@ -233,6 +249,7 @@ func (s *PacketStore) Load() error {
 			RawHex:      nullStrVal(rawHex),
 			Hash:        hashStr,
 			FirstSeen:   nullStrVal(firstSeen),
+			LatestSeen:  nullStrVal(firstSeen),
 			RouteType:   nullIntPtr(routeType),
 			PayloadType: nullIntPtr(payloadType),
 			DecodedJSON: nullStrVal(decodedJSON),
@@ -279,6 +296,9 @@ func (s *PacketStore) Load() error {
 
 		tx.Observations = append(tx.Observations, obs)
 		tx.ObservationCount++
+		if obs.Timestamp > tx.LatestSeen {
+			tx.LatestSeen = obs.Timestamp
+		}
 
 		s.byObsID[oid] = obs
 
@@ -416,47 +436,40 @@ func (s *PacketStore) QueryPackets(q PacketQuery) *PacketResult {
 // QueryGroupedPackets returns transmissions grouped by hash (already 1:1).
 func (s *PacketStore) QueryGroupedPackets(q PacketQuery) *PacketResult {
 	atomic.AddInt64(&s.queryCount, 1)
-	s.mu.RLock()
-	defer s.mu.RUnlock()
-
 	if q.Limit <= 0 {
 		q.Limit = 50
 	}
 
-	results := s.filterPackets(q)
+	// Cache key covers all filter dimensions. Empty key = no filters.
+	cacheKey := q.Since + "|" + q.Until + "|" + q.Region + "|" + q.Node + "|" + q.Hash + "|" + q.Observer
+	if q.Type != nil {
+		cacheKey += fmt.Sprintf("|t%d", *q.Type)
+	}
+	if q.Route != nil {
+		cacheKey += fmt.Sprintf("|r%d", *q.Route)
+	}
 
-	// Build grouped output sorted by latest observation DESC
+	// Return cached sorted list if still fresh (3s TTL)
+	s.groupedCacheMu.Lock()
+	if s.groupedCacheRes != nil && s.groupedCacheKey == cacheKey && time.Now().Before(s.groupedCacheExp) {
+		cached := s.groupedCacheRes
+		s.groupedCacheMu.Unlock()
+		return pagePacketResult(cached, q.Offset, q.Limit)
+	}
+	s.groupedCacheMu.Unlock()
+
+	// Build entries under read lock (observer scan needs lock), sort outside it.
 	type groupEntry struct {
-		tx     *StoreTx
-		latest string
+		latest map[string]interface{}
+		ts     string
 	}
-	entries := make([]groupEntry, len(results))
-	for i, tx := range results {
-		latest := tx.FirstSeen
-		for _, obs := range tx.Observations {
-			if obs.Timestamp > latest {
-				latest = obs.Timestamp
-			}
-		}
-		entries[i] = groupEntry{tx: tx, latest: latest}
-	}
-	sort.Slice(entries, func(i, j int) bool {
-		return entries[i].latest > entries[j].latest
-	})
+	var entries []groupEntry
 
-	total := len(entries)
-	start := q.Offset
-	if start >= total {
-		return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
-	}
-	end := start + q.Limit
-	if end > total {
-		end = total
-	}
-
-	packets := make([]map[string]interface{}, 0, end-start)
-	for _, e := range entries[start:end] {
-		tx := e.tx
+	s.mu.RLock()
+	results := s.filterPackets(q)
+	entries = make([]groupEntry, 0, len(results))
+	for _, tx := range results {
 		observerCount := 0
 		seen := make(map[string]bool)
 		for _, obs := range tx.Observations {
@@ -465,26 +478,61 @@ func (s *PacketStore) QueryGroupedPackets(q PacketQuery) *PacketResult {
 				observerCount++
 			}
 		}
-		packets = append(packets, map[string]interface{}{
-			"hash":              strOrNil(tx.Hash),
-			"first_seen":        strOrNil(tx.FirstSeen),
-			"count":             tx.ObservationCount,
-			"observer_count":    observerCount,
-			"observation_count": tx.ObservationCount,
-			"latest":            strOrNil(e.latest),
-			"observer_id":       strOrNil(tx.ObserverID),
-			"observer_name":     strOrNil(tx.ObserverName),
-			"path_json":         strOrNil(tx.PathJSON),
-			"payload_type":      intPtrOrNil(tx.PayloadType),
-			"route_type":        intPtrOrNil(tx.RouteType),
-			"raw_hex":           strOrNil(tx.RawHex),
-			"decoded_json":      strOrNil(tx.DecodedJSON),
-			"snr":               floatPtrOrNil(tx.SNR),
-			"rssi":              floatPtrOrNil(tx.RSSI),
+		entries = append(entries, groupEntry{
+			ts: tx.LatestSeen,
+			latest: map[string]interface{}{
+				"hash":              strOrNil(tx.Hash),
+				"first_seen":        strOrNil(tx.FirstSeen),
+				"count":             tx.ObservationCount,
+				"observer_count":    observerCount,
+				"observation_count": tx.ObservationCount,
+				"latest":            strOrNil(tx.LatestSeen),
+				"observer_id":       strOrNil(tx.ObserverID),
+				"observer_name":     strOrNil(tx.ObserverName),
+				"path_json":         strOrNil(tx.PathJSON),
+				"payload_type":      intPtrOrNil(tx.PayloadType),
+				"route_type":        intPtrOrNil(tx.RouteType),
+				"raw_hex":           strOrNil(tx.RawHex),
+				"decoded_json":      strOrNil(tx.DecodedJSON),
+				"snr":               floatPtrOrNil(tx.SNR),
+				"rssi":              floatPtrOrNil(tx.RSSI),
+			},
 		})
 	}
+	s.mu.RUnlock()
 
-	return &PacketResult{Packets: packets, Total: total}
+	// Sort outside the lock — only touches our local slice.
+	sort.Slice(entries, func(i, j int) bool {
+		return entries[i].ts > entries[j].ts
+	})
+
+	packets := make([]map[string]interface{}, len(entries))
+	for i, e := range entries {
+		packets[i] = e.latest
+	}
+
+	full := &PacketResult{Packets: packets, Total: len(packets)}
+
+	s.groupedCacheMu.Lock()
+	s.groupedCacheRes = full
+	s.groupedCacheKey = cacheKey
+	s.groupedCacheExp = time.Now().Add(3 * time.Second)
+	s.groupedCacheMu.Unlock()
+
+	return pagePacketResult(full, q.Offset, q.Limit)
 }
 
+// pagePacketResult returns a window of a PacketResult without re-allocating the slice.
+func pagePacketResult(r *PacketResult, offset, limit int) *PacketResult {
+	total := r.Total
+	if offset >= total {
+		return &PacketResult{Packets: []map[string]interface{}{}, Total: total}
+	}
+	end := offset + limit
+	if end > total {
+		end = total
+	}
+	return &PacketResult{Packets: r.Packets[offset:end], Total: total}
+}
+
 // GetStoreStats returns aggregate counts (packet data from memory, node/observer from DB).
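The windowing in `pagePacketResult` above is plain offset/limit clamping over a slice, returning a subslice view rather than a copy. The same logic can be sketched on a plain int slice (the generic helper name is ours, the real function works on `PacketResult`):

```go
package main

import "fmt"

// pageSlice mirrors the pagePacketResult windowing: clamp the [offset,
// offset+limit) window to the slice bounds and return a view without copying.
func pageSlice(items []int, offset, limit int) []int {
	if offset >= len(items) {
		return []int{}
	}
	end := offset + limit
	if end > len(items) {
		end = len(items)
	}
	return items[offset:end]
}

func main() {
	items := []int{10, 20, 30, 40, 50}
	fmt.Println(pageSlice(items, 1, 2)) // [20 30]
	fmt.Println(pageSlice(items, 4, 3)) // [50]  (limit clamped to the end)
	fmt.Println(pageSlice(items, 9, 2)) // []    (offset past the end)
}
```

Returning a subslice is cheap but keeps the backing array alive, which is fine here because the cached full result is retained anyway until its TTL expires.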
@@ -626,6 +674,60 @@ func (s *PacketStore) GetCacheStatsTyped() CacheStats {
 	}
 }
 
+// cacheInvalidation flags indicate what kind of data changed during ingestion.
+// Used by invalidateCachesFor to selectively clear only affected caches.
+type cacheInvalidation struct {
+	hasNewObservations  bool // new SNR/RSSI data → rfCache
+	hasNewPaths         bool // new/changed path data → topoCache, distCache, subpathCache
+	hasNewTransmissions bool // new transmissions → hashCache
+	hasChannelData      bool // new GRP_TXT (payload_type 5) → chanCache
+	eviction            bool // data removed → all caches
+}
+
+// invalidateCachesFor selectively clears only the analytics caches affected
+// by the kind of data that changed. This avoids the previous behaviour of
+// wiping every cache on every ingest cycle, which defeated caching under
+// continuous ingestion (issue #375).
+func (s *PacketStore) invalidateCachesFor(inv cacheInvalidation) {
+	s.cacheMu.Lock()
+	defer s.cacheMu.Unlock()
+
+	if inv.eviction {
+		// Eviction can affect any analytics — clear everything
+		s.rfCache = make(map[string]*cachedResult)
+		s.topoCache = make(map[string]*cachedResult)
+		s.hashCache = make(map[string]*cachedResult)
+		s.collisionCache = make(map[string]*cachedResult)
+		s.chanCache = make(map[string]*cachedResult)
+		s.distCache = make(map[string]*cachedResult)
+		s.subpathCache = make(map[string]*cachedResult)
+		s.channelsCacheMu.Lock()
+		s.channelsCacheRes = nil
+		s.channelsCacheMu.Unlock()
+		return
+	}
+
+	if inv.hasNewObservations {
+		s.rfCache = make(map[string]*cachedResult)
+	}
+	if inv.hasNewPaths {
+		s.topoCache = make(map[string]*cachedResult)
+		s.distCache = make(map[string]*cachedResult)
+		s.subpathCache = make(map[string]*cachedResult)
+	}
+	if inv.hasNewTransmissions {
+		s.hashCache = make(map[string]*cachedResult)
+		s.collisionCache = make(map[string]*cachedResult)
+	}
+	if inv.hasChannelData {
+		s.chanCache = make(map[string]*cachedResult)
+		// Also invalidate the separate channels list cache
+		s.channelsCacheMu.Lock()
+		s.channelsCacheRes = nil
+		s.channelsCacheMu.Unlock()
+	}
+}
+
 // GetPerfStoreStatsTyped returns packet store stats as a typed struct.
 func (s *PacketStore) GetPerfStoreStatsTyped() PerfPacketStoreStats {
 	s.mu.RLock()
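The selective-invalidation idea added above can be reduced to a tiny standalone demonstration: carry boolean flags for what changed and clear only the matching caches. The types and cache names here are stand-ins for the store's real maps:

```go
package main

import "fmt"

// caches and changed are illustrative stand-ins for the store's cache maps
// and the cacheInvalidation flags.
type caches struct {
	rf, topo, hash map[string]int
}

type changed struct {
	observations, paths, transmissions bool
}

// invalidate clears only the caches whose corresponding flag is set.
func (c *caches) invalidate(ch changed) {
	if ch.observations {
		c.rf = map[string]int{}
	}
	if ch.paths {
		c.topo = map[string]int{}
	}
	if ch.transmissions {
		c.hash = map[string]int{}
	}
}

func main() {
	c := &caches{
		rf:   map[string]int{"global": 1},
		topo: map[string]int{"global": 1},
		hash: map[string]int{"global": 1},
	}
	// Observation-only ingest: RF cache cleared, topology and hash kept warm.
	c.invalidate(changed{observations: true})
	fmt.Println(len(c.rf), len(c.topo), len(c.hash)) // 0 1 1
}
```

Keeping unaffected caches warm is the whole point: under continuous ingestion, a blanket wipe would mean every analytics request recomputes from scratch.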
@@ -950,6 +1052,7 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
 			RawHex:      r.rawHex,
 			Hash:        r.hash,
 			FirstSeen:   r.firstSeen,
+			LatestSeen:  r.firstSeen,
 			RouteType:   r.routeType,
 			PayloadType: r.payloadType,
 			DecodedJSON: r.decodedJSON,
@@ -999,6 +1102,9 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
 		}
 		tx.Observations = append(tx.Observations, obs)
 		tx.ObservationCount++
+		if obs.Timestamp > tx.LatestSeen {
+			tx.LatestSeen = obs.Timestamp
+		}
 		s.byObsID[oid] = obs
 		if r.observerID != "" {
 			s.byObserver[r.observerID] = append(s.byObserver[r.observerID], obs)
@@ -1097,16 +1203,27 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
 		}
 	}
 
-	// Invalidate analytics caches since new data was ingested
+	// Targeted cache invalidation: only clear caches affected by the ingested
+	// data instead of wiping everything on every cycle (fixes #375).
 	if len(result) > 0 {
-		s.cacheMu.Lock()
-		s.rfCache = make(map[string]*cachedResult)
-		s.topoCache = make(map[string]*cachedResult)
-		s.hashCache = make(map[string]*cachedResult)
-		s.chanCache = make(map[string]*cachedResult)
-		s.distCache = make(map[string]*cachedResult)
-		s.subpathCache = make(map[string]*cachedResult)
-		s.cacheMu.Unlock()
+		inv := cacheInvalidation{
+			hasNewTransmissions: len(broadcastTxs) > 0,
+		}
+		for _, tx := range broadcastTxs {
+			if len(tx.Observations) > 0 {
+				inv.hasNewObservations = true
+			}
+			if tx.PayloadType != nil && *tx.PayloadType == 5 {
+				inv.hasChannelData = true
+			}
+			if tx.PathJSON != "" {
+				inv.hasNewPaths = true
+			}
+			if inv.hasNewObservations && inv.hasChannelData && inv.hasNewPaths {
+				break // all flags set, no need to continue
+			}
+		}
+		s.invalidateCachesFor(inv)
 	}
 
 	return result, newMaxID
@@ -1230,6 +1347,9 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
 		}
 		tx.Observations = append(tx.Observations, obs)
 		tx.ObservationCount++
+		if obs.Timestamp > tx.LatestSeen {
+			tx.LatestSeen = obs.Timestamp
+		}
 		s.byObsID[r.obsID] = obs
 		if r.observerID != "" {
 			s.byObserver[r.observerID] = append(s.byObserver[r.observerID], obs)
@@ -1314,17 +1434,20 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
 	}
 
 	if len(updatedTxs) > 0 {
-		// Invalidate analytics caches
-		s.cacheMu.Lock()
-		s.rfCache = make(map[string]*cachedResult)
-		s.topoCache = make(map[string]*cachedResult)
-		s.hashCache = make(map[string]*cachedResult)
-		s.chanCache = make(map[string]*cachedResult)
-		s.distCache = make(map[string]*cachedResult)
-		s.subpathCache = make(map[string]*cachedResult)
-		s.cacheMu.Unlock()
-
-		// analytics caches cleared; no per-cycle log to avoid stdout overhead
+		// Targeted cache invalidation: new observations always affect RF
+		// analytics; topology/distance/subpath caches only if paths changed.
+		// Channel and hash caches are unaffected by observation-only ingestion.
+		hasPathChanges := false
+		for txID, tx := range updatedTxs {
+			if tx.PathJSON != oldPaths[txID] {
+				hasPathChanges = true
+				break
+			}
+		}
+		s.invalidateCachesFor(cacheInvalidation{
+			hasNewObservations: true,
+			hasNewPaths:        hasPathChanges,
+		})
 	}
 
 	return broadcastMaps
@@ -1889,15 +2012,8 @@ func (s *PacketStore) EvictStale() int {
 	log.Printf("[store] Evicted %d packets older than %.0fh (freed ~%.1fMB estimated)",
 		evictCount, s.retentionHours, freedMB)
 
-	// Invalidate analytics caches
-	s.cacheMu.Lock()
-	s.rfCache = make(map[string]*cachedResult)
-	s.topoCache = make(map[string]*cachedResult)
-	s.hashCache = make(map[string]*cachedResult)
-	s.chanCache = make(map[string]*cachedResult)
-	s.distCache = make(map[string]*cachedResult)
-	s.subpathCache = make(map[string]*cachedResult)
-	s.cacheMu.Unlock()
+	// Eviction removes data — all caches may be affected
+	s.invalidateCachesFor(cacheInvalidation{eviction: true})
 
 	// Invalidate hash size cache
 	s.hashSizeInfoMu.Lock()
@@ -2000,15 +2116,11 @@ func computeDistancesForTx(tx *StoreTx, nodeByPk map[string]*nodeInfo, repeaterS
 	}
 
 	roundedDist := math.Round(dist*100) / 100
-	var snrVal interface{}
-	if tx.SNR != nil {
-		snrVal = *tx.SNR
-	}
 	hopRecords = append(hopRecords, distHopRecord{
 		FromName: a.Name, FromPk: a.PublicKey,
 		ToName: b.Name, ToPk: b.PublicKey,
 		Dist: roundedDist, Type: hopType,
-		SNR: snrVal, Hash: tx.Hash, Timestamp: tx.FirstSeen,
+		SNR: tx.SNR, Hash: tx.Hash, Timestamp: tx.FirstSeen,
 		HourBucket: hourBucket, tx: tx,
 	})
 	hopDetails = append(hopDetails, distHopDetail{
@@ -2062,14 +2174,50 @@ func hasGarbageChars(s string) bool {
 
 // GetChannels returns channel list from in-memory packets (payload_type 5, decoded type CHAN).
 func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
-	s.mu.RLock()
-	defer s.mu.RUnlock()
+	cacheKey := region
+
+	s.channelsCacheMu.Lock()
+	if s.channelsCacheRes != nil && s.channelsCacheKey == cacheKey && time.Now().Before(s.channelsCacheExp) {
+		res := s.channelsCacheRes
+		s.channelsCacheMu.Unlock()
+		return res
+	}
+	s.channelsCacheMu.Unlock()
+
+	type txSnapshot struct {
+		firstSeen   string
+		decodedJSON string
+		hasRegion   bool
+	}
+
+	// Copy only the fields needed — release the lock before JSON unmarshal.
+	s.mu.RLock()
 	var regionObs map[string]bool
 	if region != "" {
 		regionObs = s.resolveRegionObservers(region)
 	}
+	grpTxts := s.byPayloadType[5]
+	snapshots := make([]txSnapshot, 0, len(grpTxts))
+	for _, tx := range grpTxts {
+		inRegion := true
+		if regionObs != nil {
+			inRegion = false
+			for _, obs := range tx.Observations {
+				if regionObs[obs.ObserverID] {
+					inRegion = true
+					break
+				}
+			}
+		}
+		snapshots = append(snapshots, txSnapshot{
+			firstSeen:   tx.FirstSeen,
+			decodedJSON: tx.DecodedJSON,
+			hasRegion:   inRegion,
+		})
+	}
+	s.mu.RUnlock()
 
+	// JSON unmarshal outside the lock.
 	type chanInfo struct {
 		Hash string
 		Name string
@@ -2085,53 +2233,32 @@ func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
 		Sender string `json:"sender"`
 	}
 	channelMap := map[string]*chanInfo{}
 
-	grpTxts := s.byPayloadType[5]
-	for _, tx := range grpTxts {
-
-		// Region filter: check if any observation is from a regional observer
-		if regionObs != nil {
-			match := false
-			for _, obs := range tx.Observations {
-				if regionObs[obs.ObserverID] {
-					match = true
-					break
-				}
-			}
-			if !match {
-				continue
-			}
+	for _, snap := range snapshots {
+		if !snap.hasRegion {
+			continue
 		}
 
 		var decoded decodedGrp
-		if json.Unmarshal([]byte(tx.DecodedJSON), &decoded) != nil {
+		if json.Unmarshal([]byte(snap.decodedJSON), &decoded) != nil {
 			continue
 		}
 		if decoded.Type != "CHAN" {
 			continue
 		}
 		// Filter out garbage-decrypted channel names/messages (pre-#197 data still in DB)
 		if hasGarbageChars(decoded.Channel) || hasGarbageChars(decoded.Text) {
			continue
 		}
 
 		channelName := decoded.Channel
 		if channelName == "" {
 			channelName = "unknown"
 		}
-		key := channelName
 
-		ch := channelMap[key]
+		ch := channelMap[channelName]
 		if ch == nil {
-			ch = &chanInfo{
-				Hash: key, Name: channelName,
-				LastActivity: tx.FirstSeen,
-			}
-			channelMap[key] = ch
+			ch = &chanInfo{Hash: channelName, Name: channelName, LastActivity: snap.firstSeen}
+			channelMap[channelName] = ch
 		}
 		ch.MessageCount++
-		if tx.FirstSeen >= ch.LastActivity {
-			ch.LastActivity = tx.FirstSeen
+		if snap.firstSeen >= ch.LastActivity {
+			ch.LastActivity = snap.firstSeen
 			if decoded.Text != "" {
 				idx := strings.Index(decoded.Text, ": ")
 				if idx > 0 {
@@ -2154,6 +2281,13 @@ func (s *PacketStore) GetChannels(region string) []map[string]interface{} {
|
||||
"messageCount": ch.MessageCount, "lastActivity": ch.LastActivity,
|
||||
})
|
||||
}
|
||||
|
||||
s.channelsCacheMu.Lock()
|
||||
s.channelsCacheRes = channels
|
||||
s.channelsCacheKey = cacheKey
|
||||
s.channelsCacheExp = time.Now().Add(15 * time.Second)
|
||||
s.channelsCacheMu.Unlock()
|
||||
|
||||
return channels
|
||||
}
|
||||
|
||||
@@ -3647,7 +3781,7 @@ func (s *PacketStore) computeAnalyticsDistance(region string) map[string]interfa
            "fromName": h.FromName, "fromPk": h.FromPk,
            "toName": h.ToName, "toPk": h.ToPk,
            "dist": h.Dist, "type": h.Type,
            "snr": h.SNR, "hash": h.Hash, "timestamp": h.Timestamp,
            "snr": floatPtrOrNil(h.SNR), "hash": h.Hash, "timestamp": h.Timestamp,
        })
    }

@@ -4044,6 +4178,332 @@ type hashSizeNodeInfo struct {
    Inconsistent bool
}
// --- Hash Collision Analytics ---

// GetAnalyticsHashCollisions returns pre-computed hash collision analysis.
// This moves the O(n²) distance computation from the frontend to the server.
func (s *PacketStore) GetAnalyticsHashCollisions(region string) map[string]interface{} {
    s.cacheMu.Lock()
    if cached, ok := s.collisionCache[region]; ok && time.Now().Before(cached.expiresAt) {
        s.cacheHits++
        s.cacheMu.Unlock()
        return cached.data
    }
    s.cacheMisses++
    s.cacheMu.Unlock()

    result := s.computeHashCollisions(region)

    s.cacheMu.Lock()
    s.collisionCache[region] = &cachedResult{data: result, expiresAt: time.Now().Add(s.collisionCacheTTL)}
    s.cacheMu.Unlock()

    return result
}

// collisionNode is a lightweight node representation for collision analysis.
type collisionNode struct {
    PublicKey            string  `json:"public_key"`
    Name                 string  `json:"name"`
    Role                 string  `json:"role"`
    Lat                  float64 `json:"lat"`
    Lon                  float64 `json:"lon"`
    HashSize             int     `json:"hash_size"`
    HashSizeInconsistent bool    `json:"hash_size_inconsistent"`
    HashSizesSeen        []int   `json:"hash_sizes_seen,omitempty"`
}

// collisionEntry represents a prefix collision with pre-computed distances.
type collisionEntry struct {
    Prefix         string          `json:"prefix"`
    ByteSize       int             `json:"byte_size"`
    Appearances    int             `json:"appearances"`
    Nodes          []collisionNode `json:"nodes"`
    MaxDistKm      float64         `json:"max_dist_km"`
    Classification string          `json:"classification"`
    WithCoords     int             `json:"with_coords"`
}

// prefixCellInfo holds per-prefix-cell data for the matrix view.
type prefixCellInfo struct {
    Nodes []collisionNode `json:"nodes"`
}

// twoByteCellInfo holds per-first-byte-group data for 2-byte matrix.
type twoByteCellInfo struct {
    GroupNodes     []collisionNode            `json:"group_nodes"`
    TwoByteMap     map[string][]collisionNode `json:"two_byte_map"`
    MaxCollision   int                        `json:"max_collision"`
    CollisionCount int                        `json:"collision_count"`
}

func (s *PacketStore) computeHashCollisions(region string) map[string]interface{} {
    // Get all nodes from DB
    nodes := s.getAllNodes()
    hashInfo := s.GetNodeHashSizeInfo()

    // If region is specified, filter to only nodes seen by regional observers
    if region != "" {
        regionObs := s.resolveRegionObservers(region)
        if regionObs != nil {
            s.mu.RLock()
            regionNodePKs := make(map[string]bool)
            for _, tx := range s.packets {
                match := false
                for _, obs := range tx.Observations {
                    if regionObs[obs.ObserverID] {
                        match = true
                        break
                    }
                }
                if !match {
                    continue
                }
                // Collect node public keys from advert packets
                if tx.DecodedJSON != "" {
                    var d map[string]interface{}
                    if json.Unmarshal([]byte(tx.DecodedJSON), &d) == nil {
                        if pk, ok := d["pubKey"].(string); ok && pk != "" {
                            regionNodePKs[pk] = true
                        }
                        if pk, ok := d["public_key"].(string); ok && pk != "" {
                            regionNodePKs[pk] = true
                        }
                    }
                }
                // Include observers themselves as nodes in the region
                for _, obs := range tx.Observations {
                    if obs.ObserverID != "" {
                        regionNodePKs[obs.ObserverID] = true
                    }
                }
            }
            s.mu.RUnlock()

            // Filter nodes to only those seen in the region
            filtered := make([]nodeInfo, 0, len(regionNodePKs))
            for _, n := range nodes {
                if regionNodePKs[n.PublicKey] {
                    filtered = append(filtered, n)
                }
            }
            nodes = filtered
        }
    }

    // Build collision nodes with hash info
    var allCNodes []collisionNode
    for _, n := range nodes {
        cn := collisionNode{
            PublicKey: n.PublicKey,
            Name:      n.Name,
            Role:      n.Role,
            Lat:       n.Lat,
            Lon:       n.Lon,
        }
        if info, ok := hashInfo[n.PublicKey]; ok && info != nil {
            cn.HashSize = info.HashSize
            cn.HashSizeInconsistent = info.Inconsistent
            if len(info.AllSizes) > 1 {
                sizes := make([]int, 0, len(info.AllSizes))
                for sz := range info.AllSizes {
                    sizes = append(sizes, sz)
                }
                sort.Ints(sizes)
                cn.HashSizesSeen = sizes
            }
        }
        allCNodes = append(allCNodes, cn)
    }

    // Inconsistent nodes
    var inconsistentNodes []collisionNode
    for _, cn := range allCNodes {
        if cn.HashSizeInconsistent {
            inconsistentNodes = append(inconsistentNodes, cn)
        }
    }
    if inconsistentNodes == nil {
        inconsistentNodes = make([]collisionNode, 0)
    }

    // Compute collisions for each byte size (1, 2, 3)
    collisionsBySize := make(map[string]interface{})
    for _, bytes := range []int{1, 2, 3} {
        // Filter nodes relevant to this byte size
        var nodesForByte []collisionNode
        for _, cn := range allCNodes {
            if cn.HashSize == bytes || cn.HashSize == 0 {
                nodesForByte = append(nodesForByte, cn)
            }
        }

        // Build prefix map
        prefixMap := make(map[string][]collisionNode)
        for _, cn := range nodesForByte {
            if len(cn.PublicKey) < bytes*2 {
                continue
            }
            prefix := strings.ToUpper(cn.PublicKey[:bytes*2])
            prefixMap[prefix] = append(prefixMap[prefix], cn)
        }

        // Compute collisions with pairwise distances
        var collisions []collisionEntry
        for prefix, pnodes := range prefixMap {
            if len(pnodes) <= 1 {
                continue
            }
            // Pairwise distance
            var withCoords []collisionNode
            for _, cn := range pnodes {
                if cn.Lat != 0 || cn.Lon != 0 {
                    withCoords = append(withCoords, cn)
                }
            }
            var maxDistKm float64
            classification := "unknown"
            if len(withCoords) >= 2 {
                for i := 0; i < len(withCoords); i++ {
                    for j := i + 1; j < len(withCoords); j++ {
                        d := haversineKm(withCoords[i].Lat, withCoords[i].Lon, withCoords[j].Lat, withCoords[j].Lon)
                        if d > maxDistKm {
                            maxDistKm = d
                        }
                    }
                }
                if maxDistKm < 50 {
                    classification = "local"
                } else if maxDistKm < 200 {
                    classification = "regional"
                } else {
                    classification = "distant"
                }
            } else {
                classification = "incomplete"
            }
            collisions = append(collisions, collisionEntry{
                Prefix:         prefix,
                ByteSize:       bytes,
                Appearances:    len(pnodes),
                Nodes:          pnodes,
                MaxDistKm:      maxDistKm,
                Classification: classification,
                WithCoords:     len(withCoords),
            })
        }
        if collisions == nil {
            collisions = make([]collisionEntry, 0)
        }

        // Sort: local first, then regional, distant, incomplete
        classOrder := map[string]int{"local": 0, "regional": 1, "distant": 2, "incomplete": 3, "unknown": 4}
        sort.Slice(collisions, func(i, j int) bool {
            oi, oj := classOrder[collisions[i].Classification], classOrder[collisions[j].Classification]
            if oi != oj {
                return oi < oj
            }
            return collisions[i].Appearances > collisions[j].Appearances
        })

        // Stats
        nodeCount := len(nodesForByte)
        usingThisSize := 0
        for _, cn := range allCNodes {
            if cn.HashSize == bytes {
                usingThisSize++
            }
        }
        uniquePrefixes := len(prefixMap)
        collisionCount := len(collisions)
        var spaceSize int
        switch bytes {
        case 1:
            spaceSize = 256
        case 2:
            spaceSize = 65536
        case 3:
            spaceSize = 16777216
        }
        pctUsed := 0.0
        if spaceSize > 0 {
            pctUsed = float64(uniquePrefixes) / float64(spaceSize) * 100
        }

        // For 1-byte and 2-byte, include the full prefix cell data for matrix rendering
        var oneByteCells map[string][]collisionNode
        var twoByteCells map[string]*twoByteCellInfo
        if bytes == 1 {
            oneByteCells = make(map[string][]collisionNode)
            for i := 0; i < 256; i++ {
                hex := strings.ToUpper(fmt.Sprintf("%02x", i))
                oneByteCells[hex] = prefixMap[hex]
                if oneByteCells[hex] == nil {
                    oneByteCells[hex] = make([]collisionNode, 0)
                }
            }
        } else if bytes == 2 {
            twoByteCells = make(map[string]*twoByteCellInfo)
            for i := 0; i < 256; i++ {
                hex := strings.ToUpper(fmt.Sprintf("%02x", i))
                cell := &twoByteCellInfo{
                    GroupNodes: make([]collisionNode, 0),
                    TwoByteMap: make(map[string][]collisionNode),
                }
                twoByteCells[hex] = cell
            }
            for _, cn := range nodesForByte {
                if len(cn.PublicKey) < 4 {
                    continue
                }
                firstHex := strings.ToUpper(cn.PublicKey[:2])
                twoHex := strings.ToUpper(cn.PublicKey[:4])
                cell := twoByteCells[firstHex]
                if cell == nil {
                    continue
                }
                cell.GroupNodes = append(cell.GroupNodes, cn)
                cell.TwoByteMap[twoHex] = append(cell.TwoByteMap[twoHex], cn)
            }
            for _, cell := range twoByteCells {
                for _, ns := range cell.TwoByteMap {
                    if len(ns) > 1 {
                        cell.CollisionCount++
                        if len(ns) > cell.MaxCollision {
                            cell.MaxCollision = len(ns)
                        }
                    }
                }
            }
        }

        sizeData := map[string]interface{}{
            "stats": map[string]interface{}{
                "total_nodes":     len(allCNodes),
                "nodes_for_byte":  nodeCount,
                "using_this_size": usingThisSize,
                "unique_prefixes": uniquePrefixes,
                "collision_count": collisionCount,
                "space_size":      spaceSize,
                "pct_used":        pctUsed,
            },
            "collisions": collisions,
        }
        if oneByteCells != nil {
            sizeData["one_byte_cells"] = oneByteCells
        }
        if twoByteCells != nil {
            sizeData["two_byte_cells"] = twoByteCells
        }
        collisionsBySize[strconv.Itoa(bytes)] = sizeData
    }

    return map[string]interface{}{
        "inconsistent_nodes": inconsistentNodes,
        "by_size":            collisionsBySize,
    }
}

// GetNodeHashSizeInfo returns cached per-node hash size data, recomputing at most every 15s.
func (s *PacketStore) GetNodeHashSizeInfo() map[string]*hashSizeNodeInfo {
    const ttl = 15 * time.Second
@@ -5017,7 +5477,7 @@ func (s *PacketStore) GetSubpathDetail(rawHops []string) map[string]interface{}
    observers := map[string]int{}
    parentPaths := map[string]int{}
    var matchCount int
    var firstSeen, lastSeen interface{}
    var firstSeen, lastSeen string

    for _, tx := range s.packets {
        hops := txGetParsedPath(tx)
@@ -5047,10 +5507,10 @@ func (s *PacketStore) GetSubpathDetail(rawHops []string) map[string]interface{}
        matchCount++
        ts := tx.FirstSeen
        if ts != "" {
            if firstSeen == nil || ts < firstSeen.(string) {
            if firstSeen == "" || ts < firstSeen {
                firstSeen = ts
            }
            if lastSeen == nil || ts > lastSeen.(string) {
            if lastSeen == "" || ts > lastSeen {
                lastSeen = ts
            }
            // Parse hour from timestamp for hourly distribution
@@ -25,8 +25,9 @@ type Hub struct {

// Client is a single WebSocket connection.
type Client struct {
    conn *websocket.Conn
    send chan []byte
    conn      *websocket.Conn
    send      chan []byte
    closeOnce sync.Once
}

func NewHub() *Hub {
@@ -52,12 +53,28 @@ func (h *Hub) Unregister(c *Client) {
    h.mu.Lock()
    if _, ok := h.clients[c]; ok {
        delete(h.clients, c)
        close(c.send)
        c.closeOnce.Do(func() { close(c.send) })
    }
    h.mu.Unlock()
    log.Printf("[ws] client disconnected (%d total)", h.ClientCount())
}

// Close gracefully disconnects all WebSocket clients.
func (h *Hub) Close() {
    h.mu.Lock()
    for c := range h.clients {
        c.conn.WriteControl(
            websocket.CloseMessage,
            websocket.FormatCloseMessage(websocket.CloseGoingAway, "server shutting down"),
            time.Now().Add(3*time.Second),
        )
        c.closeOnce.Do(func() { close(c.send) })
        delete(h.clients, c)
    }
    h.mu.Unlock()
    log.Println("[ws] all clients disconnected")
}

// Broadcast sends a message to all connected clients.
func (h *Hub) Broadcast(msg interface{}) {
    data, err := json.Marshal(msg)
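The `closeOnce` guard exists because both `Unregister` and the new `Close` may try to close the same `send` channel, and closing an already-closed channel panics at runtime. A minimal sketch of the idempotent-close pattern (illustrative `client`/`shutdown` names, not the hub's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// client mirrors the guard the hunk adds: two shutdown paths may race,
// and sync.Once turns the second close into a no-op instead of a panic.
type client struct {
	send      chan []byte
	closeOnce sync.Once
}

func (c *client) shutdown() {
	c.closeOnce.Do(func() { close(c.send) })
}

func main() {
	c := &client{send: make(chan []byte)}
	c.shutdown()
	c.shutdown() // safe: without the Once this would panic ("close of closed channel")
	_, open := <-c.send
	fmt.Println(open) // false — channel is closed exactly once
}
```

An alternative is to route all closes through a single owner goroutine, but `sync.Once` is the smallest change when two independent paths already exist.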
@@ -15,8 +15,8 @@ git reset --hard origin/master
echo "[deploy] Building Docker image..."
docker build -t meshcore-analyzer .

echo "[deploy] Restarting container..."
docker stop meshcore-analyzer && docker rm meshcore-analyzer
echo "[deploy] Stopping old container (30s grace period)..."
docker stop -t 30 meshcore-analyzer && docker rm meshcore-analyzer
docker run -d --name meshcore-analyzer \
    --restart unless-stopped \
    -p 3000:3000 \

@@ -15,9 +15,11 @@ git reset --hard "origin/$BRANCH"
echo "[staging] Building Docker image..."
docker build -t meshcore-analyzer-staging .

echo "[staging] Restarting container..."
docker stop meshcore-staging 2>/dev/null || true
echo "[staging] Stopping old container (30s grace period)..."
docker stop -t 30 meshcore-staging 2>/dev/null || true
docker rm meshcore-staging 2>/dev/null || true

echo "[staging] Starting new container..."
docker run -d --name meshcore-staging \
    --restart unless-stopped \
    -p 3001:3000 \
@@ -9,6 +9,8 @@ services:
    image: corescope:latest
    container_name: corescope-prod
    restart: unless-stopped
    stop_grace_period: 30s
    stop_signal: SIGTERM
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:

@@ -10,6 +10,8 @@ services:
    image: corescope-go:latest
    container_name: corescope-staging-go
    restart: unless-stopped
    stop_grace_period: 30s
    stop_signal: SIGTERM
    deploy:
      resources:
        limits:

@@ -13,6 +13,8 @@ services:
    image: corescope-go:latest
    container_name: corescope-staging-go
    restart: unless-stopped
    stop_grace_period: 30s
    stop_signal: SIGTERM
    deploy:
      resources:
        limits:

@@ -14,6 +14,8 @@ services:
    image: corescope:latest
    container_name: corescope-prod
    restart: unless-stopped
    stop_grace_period: 30s
    stop_signal: SIGTERM
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:

@@ -12,6 +12,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
@@ -24,6 +26,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr

@@ -21,6 +21,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
@@ -33,6 +35,8 @@ autostart=true
autorestart=true
startretries=10
startsecs=2
stopsignal=TERM
stopwaitsecs=20
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
79
manage.sh
@@ -509,6 +509,24 @@ cmd_setup() {

    log "Docker $(docker --version | grep -oP 'version \K[^ ,]+')"
    log "Compose: $DC"

    # Default to latest release tag (instead of staying on master)
    if ! is_done "version_pin"; then
        git fetch origin --tags 2>/dev/null || true
        local latest_tag
        latest_tag=$(git tag -l 'v*' --sort=-v:refname | head -1)
        if [ -n "$latest_tag" ]; then
            local current_ref
            current_ref=$(git describe --tags --exact-match 2>/dev/null || echo "")
            if [ "$current_ref" != "$latest_tag" ]; then
                info "Pinning to latest release: ${latest_tag}"
                git checkout "$latest_tag" 2>/dev/null
            else
                log "Already on latest release: ${latest_tag}"
            fi
        fi
        mark_done "version_pin"
    fi

    mark_done "docker"

@@ -885,14 +903,10 @@ prepare_staging_config() {
        warn "No production config at ${prod_config} — staging may use defaults."
        return
    fi
    if [ ! -f "$staging_config" ] || [ "$prod_config" -nt "$staging_config" ]; then
        info "Copying production config to staging..."
        cp "$prod_config" "$staging_config"
        sed -i 's/"siteName":\s*"[^"]*"/"siteName": "CoreScope — STAGING"/' "$staging_config"
        log "Staging config created at ${staging_config} with STAGING site name."
    else
        log "Staging config is up to date."
    fi
    info "Copying production config to staging..."
    cp "$prod_config" "$staging_config"
    sed -i 's/"siteName":\s*"[^"]*"/"siteName": "CoreScope — STAGING"/' "$staging_config"
    log "Staging config created at ${staging_config} with STAGING site name."
    # Copy Caddyfile for staging (HTTP-only on staging port)
    local staging_caddy="$STAGING_DATA/Caddyfile"
    if [ ! -f "$staging_caddy" ]; then
@@ -1167,6 +1181,12 @@ cmd_status() {
    echo "═══════════════════════════════════════"
    echo ""

    # Version
    local current_version
    current_version=$(git describe --tags --exact-match 2>/dev/null || git rev-parse --short HEAD 2>/dev/null || echo "unknown")
    info "Version: ${current_version}"
    echo ""

    # Production
    show_container_status "corescope-prod" "Production"
    echo ""
@@ -1294,8 +1314,39 @@ cmd_promote() {
# ─── Update ───────────────────────────────────────────────────────────────

cmd_update() {
    info "Pulling latest code..."
    git pull --ff-only
    local version="${1:-}"

    info "Fetching latest changes and tags..."
    git fetch origin --tags

    if [ -z "$version" ]; then
        # No arg: checkout latest release tag
        local latest_tag
        latest_tag=$(git tag -l 'v*' --sort=-v:refname | head -1)
        if [ -z "$latest_tag" ]; then
            err "No release tags found. Use './manage.sh update latest' for tip of master."
            exit 1
        fi
        info "Checking out latest release: ${latest_tag}"
        git checkout "$latest_tag" || { err "Failed to checkout tag '${latest_tag}'."; exit 1; }
    elif [ "$version" = "latest" ]; then
        # Explicit opt-in to bleeding edge (tip of master)
        # Note: this creates a detached HEAD at origin/master, which is intentional —
        # we want a read-only snapshot of upstream, not a local tracking branch.
        info "Checking out tip of master (detached HEAD at origin/master)..."
        git checkout origin/master || { err "Failed to checkout origin/master."; exit 1; }
    else
        # Specific tag requested
        if ! git tag -l "$version" | grep -q .; then
            err "Tag '${version}' not found."
            echo ""
            echo "  Available releases:"
            git tag -l 'v*' --sort=-v:refname | head -10 | sed 's/^/    /'
            exit 1
        fi
        info "Checking out version: ${version}"
        git checkout "$version" || { err "Failed to checkout '${version}'."; exit 1; }
    fi

    migrate_config auto

@@ -1306,6 +1357,10 @@ cmd_update() {
    dc_prod up -d --force-recreate prod

    log "Updated and restarted. Data preserved."
    # Show current version
    local current
    current=$(git describe --tags --exact-match 2>/dev/null || git rev-parse --short HEAD)
    log "Running version: ${current}"
}

# ─── Backup ───────────────────────────────────────────────────────────────
@@ -1515,7 +1570,7 @@ cmd_help() {
    echo "  logs [prod|staging] [N]  Follow logs (default: prod, last 100 lines)"
    echo ""
    printf '%b\n' "  ${BOLD}Maintain${NC}"
    echo "  update                   Pull latest code, rebuild, restart (keeps data)"
    echo "  update [version]         Update to version (no arg=latest tag, 'latest'=master tip, or e.g. v3.1.0)"
    echo "  promote                  Promote staging → production (backup + restart)"
    echo "  backup [dir]             Full backup: database + config + theme"
    echo "  restore <d>              Restore from backup dir or .db file"
@@ -1534,7 +1589,7 @@ case "${1:-help}" in
    restart)  cmd_restart "$2" ;;
    status)   cmd_status ;;
    logs)     cmd_logs "$2" "$3" ;;
    update)   cmd_update ;;
    update)   cmd_update "$2" ;;
    promote)  cmd_promote ;;
    backup)   cmd_backup "$2" ;;
    restore)  cmd_restore "$2" ;;
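`git tag -l 'v*' --sort=-v:refname | head -1` works because `-v:refname` compares version components numerically, so `v3.10.0` sorts above `v3.2.0` where plain lexical order would get it backwards. A rough Go equivalent of that ordering (simplified to `vMAJOR.MINOR.PATCH`; `latestTag` is an illustrative helper, not project code):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// latestTag returns the highest tag by numeric version order, mimicking
// `git tag -l 'v*' --sort=-v:refname | head -1`. Pre-release suffixes and
// malformed tags are ignored for brevity.
func latestTag(tags []string) string {
	parse := func(t string) [3]int {
		var v [3]int
		parts := strings.SplitN(strings.TrimPrefix(t, "v"), ".", 3)
		for i := 0; i < len(parts) && i < 3; i++ {
			v[i], _ = strconv.Atoi(parts[i])
		}
		return v
	}
	sort.Slice(tags, func(i, j int) bool {
		a, b := parse(tags[i]), parse(tags[j])
		for k := 0; k < 3; k++ {
			if a[k] != b[k] {
				return a[k] > b[k] // descending, like --sort=-v:refname
			}
		}
		return false
	})
	if len(tags) == 0 {
		return ""
	}
	return tags[0]
}

func main() {
	// Lexically "v3.2.0" > "v3.10.0"; version sort picks v3.10.0.
	fmt.Println(latestTag([]string{"v3.1.0", "v3.10.0", "v3.2.0"})) // v3.10.0
}
```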
@@ -1,6 +1,6 @@
{
    "name": "meshcore-analyzer",
    "version": "3.2.0",
    "version": "3.3.0",
    "description": "Community-run alternative to the closed-source `analyzer.letsmesh.net`. MQTT packet collection + open-source web analyzer for the Bay Area MeshCore mesh.",
    "main": "index.js",
    "scripts": {
@@ -143,13 +143,14 @@
    _analyticsData = {};
    const rqs = RegionFilter.regionQueryString();
    const sep = rqs ? '?' + rqs.slice(1) : '';
    const [hashData, rfData, topoData, chanData] = await Promise.all([
    const [hashData, rfData, topoData, chanData, collisionData] = await Promise.all([
        api('/analytics/hash-sizes' + sep, { ttl: CLIENT_TTL.analyticsRF }),
        api('/analytics/rf' + sep, { ttl: CLIENT_TTL.analyticsRF }),
        api('/analytics/topology' + sep, { ttl: CLIENT_TTL.analyticsRF }),
        api('/analytics/channels' + sep, { ttl: CLIENT_TTL.analyticsRF }),
        api('/analytics/hash-collisions' + sep, { ttl: CLIENT_TTL.analyticsRF }),
    ]);
    _analyticsData = { hashData, rfData, topoData, chanData };
    _analyticsData = { hashData, rfData, topoData, chanData, collisionData };
    renderTab(_currentTab);
    } catch (e) {
        document.getElementById('analyticsContent').innerHTML =
@@ -166,7 +167,7 @@
    case 'topology': renderTopology(el, d.topoData); break;
    case 'channels': renderChannels(el, d.chanData); break;
    case 'hashsizes': renderHashSizes(el, d.hashData); break;
    case 'collisions': await renderCollisionTab(el, d.hashData); break;
    case 'collisions': await renderCollisionTab(el, d.hashData, d.collisionData); break;
    case 'subpaths': await renderSubpaths(el); break;
    case 'nodes': await renderNodesTab(el); break;
    case 'distance': await renderDistanceTab(el); break;
@@ -943,7 +944,7 @@
    `;
}

async function renderCollisionTab(el, data) {
async function renderCollisionTab(el, data, collisionData) {
    el.innerHTML = `
        <nav id="hashIssuesToc" style="display:flex;gap:12px;margin-bottom:12px;font-size:13px;flex-wrap:wrap">
            <a href="#/analytics?tab=collisions&section=inconsistentHashSection" style="color:var(--accent)">⚠️ Inconsistent Sizes</a>
@@ -980,11 +981,9 @@
        <div id="collisionList"><div class="text-muted" style="padding:8px">Loading…</div></div>
    </div>
    `;
    let allNodes = [];
    try { const nd = await api('/nodes?limit=2000' + RegionFilter.regionQueryString(), { ttl: CLIENT_TTL.nodeList }); allNodes = nd.nodes || []; } catch {}

    // Render inconsistent hash sizes
    const inconsistent = allNodes.filter(n => n.hash_size_inconsistent);
    // Use pre-computed collision data from server (no more /nodes?limit=2000 fetch)
    const cData = collisionData || { inconsistent_nodes: [], by_size: {} };
    const inconsistent = cData.inconsistent_nodes || [];
    const ihEl = document.getElementById('inconsistentHashList');
    if (ihEl) {
        if (!inconsistent.length) {
@@ -1013,10 +1012,7 @@
        }
    }

    // Repeaters are confirmed routing nodes; null-role nodes may also route (possible conflict)
    const repeaterNodes = allNodes.filter(n => n.role === 'repeater');
    const nullRoleNodes = allNodes.filter(n => !n.role);
    const routingNodes = [...repeaterNodes, ...nullRoleNodes];
    // Repeaters and routing nodes no longer needed — collision data is server-computed

    let currentBytes = 1;
    function refreshHashViews(bytes) {
@@ -1037,11 +1033,11 @@
        else if (bytes === 2) matrixDesc.textContent = 'Each cell = first-byte group. Color shows worst 2-byte collision within. Click a cell to see the breakdown.';
        else matrixDesc.textContent = '3-byte prefix space is too large to visualize as a matrix — collision table is shown below.';
    }
    renderHashMatrix(data.topHops, routingNodes, bytes, allNodes);
    renderHashMatrixFromServer(cData.by_size[String(bytes)], bytes);
    // Hide collision risk card for 3-byte — stats are shown in the matrix panel
    const riskCard = document.getElementById('collisionRiskSection');
    if (riskCard) riskCard.style.display = bytes === 3 ? 'none' : '';
    if (bytes !== 3) renderCollisions(data.topHops, routingNodes, bytes);
    if (bytes !== 3) renderCollisionsFromServer(cData.by_size[String(bytes)], bytes);
    }

    // Wire up selector
@@ -1113,92 +1109,65 @@
    el.addEventListener('mouseleave', hideMatrixTip);
}

// Pure data helpers — extracted for testability
// --- Shared helpers for hash matrix rendering ---

function buildOneBytePrefixMap(nodes) {
    const map = {};
    for (let i = 0; i < 256; i++) map[i.toString(16).padStart(2, '0').toUpperCase()] = [];
    for (const n of nodes) {
        const hex = n.public_key.slice(0, 2).toUpperCase();
        if (map[hex]) map[hex].push(n);
    }
    return map;
function hashStatCardsHtml(totalNodes, usingCount, sizeLabel, spaceSize, usedCount, collisionCount) {
    const pct = spaceSize > 0 && usedCount > 0 ? ((usedCount / spaceSize) * 100) : 0;
    const pctStr = spaceSize > 65536 ? pct.toFixed(6) : spaceSize > 256 ? pct.toFixed(3) : pct.toFixed(1);
    const spaceLabel = spaceSize >= 1e6 ? (spaceSize / 1e6).toFixed(1) + 'M' : spaceSize.toLocaleString();
    return `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
        <div class="analytics-stat-card" style="flex:1;min-width:110px">
            <div class="analytics-stat-label">Nodes tracked</div>
            <div class="analytics-stat-value">${totalNodes.toLocaleString()}</div>
        </div>
        <div class="analytics-stat-card" style="flex:1;min-width:110px">
            <div class="analytics-stat-label">Using ${sizeLabel} ID</div>
            <div class="analytics-stat-value">${usingCount.toLocaleString()}</div>
        </div>
        <div class="analytics-stat-card" style="flex:1;min-width:110px">
            <div class="analytics-stat-label">Prefix space used</div>
            <div class="analytics-stat-value" style="font-size:16px">${pctStr}%</div>
            <div style="font-size:10px;color:var(--text-muted);margin-top:2px">${usedCount > 256 ? usedCount + ' of ' : 'of '}${spaceLabel} possible</div>
        </div>
        <div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${collisionCount > 0 ? 'var(--status-red)' : 'var(--border)'}">
            <div class="analytics-stat-label">Prefix collisions</div>
            <div class="analytics-stat-value" style="color:${collisionCount > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${collisionCount}</div>
        </div>
    </div>`;
}

function buildTwoBytePrefixInfo(nodes) {
    const info = {};
    for (let i = 0; i < 256; i++) {
        const h = i.toString(16).padStart(2, '0').toUpperCase();
        info[h] = { groupNodes: [], twoByteMap: {}, maxCollision: 0, collisionCount: 0 };
function hashMatrixGridHtml(nibbles, cellSize, headerSize, cellDataFn) {
    let html = `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
    html += `<tr><td style="width:${headerSize}px"></td>`;
    for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
    html += '</tr>';
    for (let hi = 0; hi < 16; hi++) {
        html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
        for (let lo = 0; lo < 16; lo++) {
            html += cellDataFn(nibbles[hi] + nibbles[lo], cellSize);
        }
        html += '</tr>';
    }
    for (const n of nodes) {
        const firstHex = n.public_key.slice(0, 2).toUpperCase();
        const twoHex = n.public_key.slice(0, 4).toUpperCase();
        const entry = info[firstHex];
        if (!entry) continue;
        entry.groupNodes.push(n);
        if (!entry.twoByteMap[twoHex]) entry.twoByteMap[twoHex] = [];
        entry.twoByteMap[twoHex].push(n);
    }
    for (const entry of Object.values(info)) {
        const collisions = Object.values(entry.twoByteMap).filter(v => v.length > 1);
        entry.collisionCount = collisions.length;
        entry.maxCollision = collisions.length ? Math.max(...collisions.map(v => v.length)) : 0;
    }
    return info;
    html += '</table></div>';
    return html;
}

function buildCollisionHops(allNodes, bytes) {
    const map = {};
    for (const n of allNodes) {
        const p = n.public_key.slice(0, bytes * 2).toUpperCase();
        if (!map[p]) map[p] = { hex: p, count: 0, size: bytes };
        map[p].count++;
    }
    return Object.values(map).filter(h => h.count > 1);
function hashMatrixLegendHtml(labels) {
    return `<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
        ${labels.map(l => `<span><span class="legend-swatch ${l.cls}"${l.style ? ' style="'+l.style+'"' : ''}></span> ${l.text}</span>`).join('\n')}
    </div>`;
}

function renderHashMatrix(topHops, allNodes, bytes, totalNodes) {
    bytes = bytes || 1;
    totalNodes = totalNodes || allNodes;
function renderHashMatrixFromServer(sizeData, bytes) {
    const el = document.getElementById('hashMatrix');
    if (!sizeData) { el.innerHTML = '<div class="text-muted">No data</div>'; return; }
    const stats = sizeData.stats || {};
    const totalNodes = stats.total_nodes || 0;

    // 3-byte: show a summary panel instead of a matrix
    if (bytes === 3) {
        const total = totalNodes.length;
        const threeByteNodes = allNodes.filter(n => n.hash_size === 3).length;
        const nodesForByte = allNodes.filter(n => n.hash_size === 3 || !n.hash_size);
        const prefixMap = {};
        for (const n of nodesForByte) {
            const p = n.public_key.slice(0, 6).toUpperCase();
            if (!prefixMap[p]) prefixMap[p] = 0;
|
||||
prefixMap[p]++;
|
||||
}
|
||||
const uniquePrefixes = Object.keys(prefixMap).length;
|
||||
const collisions = Object.values(prefixMap).filter(c => c > 1).length;
|
||||
const spaceSize = 16777216; // 2^24
|
||||
const pct = uniquePrefixes > 0 ? ((uniquePrefixes / spaceSize) * 100).toFixed(6) : '0';
|
||||
el.innerHTML = `
|
||||
<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Nodes tracked</div>
|
||||
<div class="analytics-stat-value">${total.toLocaleString()}</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Using 3-byte ID</div>
|
||||
<div class="analytics-stat-value">${threeByteNodes.toLocaleString()}</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Prefix space used</div>
|
||||
<div class="analytics-stat-value" style="font-size:16px">${pct}%</div>
|
||||
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">of 16.7M possible</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${collisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
|
||||
<div class="analytics-stat-label">Prefix collisions</div>
|
||||
<div class="analytics-stat-value" style="color:${collisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${collisions}</div>
|
||||
</div>
|
||||
</div>
|
||||
<p class="text-muted" style="margin:0;font-size:0.8em">The 3-byte prefix space (16.7M values) is too large to visualize as a grid.</p>`;
|
||||
el.innerHTML = hashStatCardsHtml(totalNodes, stats.using_this_size || 0, '3-byte', 16777216, stats.unique_prefixes || 0, stats.collision_count || 0) +
|
||||
`<p class="text-muted" style="margin:0;font-size:0.8em">The 3-byte prefix space (16.7M values) is too large to visualize as a grid.</p>`;
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -1207,41 +1176,14 @@
|
||||
const headerSize = 24;
|
||||
|
||||
if (bytes === 1) {
|
||||
const nodesForByte = allNodes.filter(n => n.hash_size === 1 || !n.hash_size);
|
||||
const prefixNodes = buildOneBytePrefixMap(nodesForByte);
|
||||
const oneByteCount = allNodes.filter(n => n.hash_size === 1).length;
|
||||
const oneUsed = Object.values(prefixNodes).filter(v => v.length > 0).length;
|
||||
const oneCollisions = Object.values(prefixNodes).filter(v => v.length > 1).length;
|
||||
const onePct = ((oneUsed / 256) * 100).toFixed(1);
|
||||
const oneByteCells = sizeData.one_byte_cells || {};
|
||||
const oneByteCount = stats.using_this_size || 0;
|
||||
const oneUsed = Object.values(oneByteCells).filter(v => v.length > 0).length;
|
||||
const oneCollisions = Object.values(oneByteCells).filter(v => v.length > 1).length;
|
||||
|
||||
let html = `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Nodes tracked</div>
|
||||
<div class="analytics-stat-value">${totalNodes.length.toLocaleString()}</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Using 1-byte ID</div>
|
||||
<div class="analytics-stat-value">${oneByteCount.toLocaleString()}</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px">
|
||||
<div class="analytics-stat-label">Prefix space used</div>
|
||||
<div class="analytics-stat-value" style="font-size:16px">${onePct}%</div>
|
||||
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">of 256 possible</div>
|
||||
</div>
|
||||
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${oneCollisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
|
||||
<div class="analytics-stat-label">Prefix collisions</div>
|
||||
<div class="analytics-stat-value" style="color:${oneCollisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${oneCollisions}</div>
|
||||
</div>
|
||||
</div>`;
|
||||
html += `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
|
||||
html += `<tr><td style="width:${headerSize}px"></td>`;
|
||||
for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
|
||||
html += '</tr>';
|
||||
for (let hi = 0; hi < 16; hi++) {
|
||||
html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
|
||||
for (let lo = 0; lo < 16; lo++) {
|
||||
const hex = nibbles[hi] + nibbles[lo];
|
||||
const nodes = prefixNodes[hex] || [];
|
||||
let html = hashStatCardsHtml(totalNodes, oneByteCount, '1-byte', 256, oneUsed, oneCollisions);
|
||||
html += hashMatrixGridHtml(nibbles, cellSize, headerSize, (hex, cs) => {
|
||||
const nodes = oneByteCells[hex] || [];
|
||||
const count = nodes.length;
|
||||
const repeaterCount = nodes.filter(n => n.role === 'repeater').length;
|
||||
const isCollision = count >= 2 && repeaterCount >= 2;
|
||||
@@ -1259,18 +1201,15 @@
|
||||
: isPossible
|
||||
? `<div class="hash-matrix-tooltip-hex">0x${hex}</div><div class="hash-matrix-tooltip-status">${count} nodes — POSSIBLE CONFLICT</div><div class="hash-matrix-tooltip-nodes">${nodes.slice(0,5).map(nodeLabel).join('')}${nodes.length>5?`<div class="hash-matrix-tooltip-status">+${nodes.length-5} more</div>`:''}</div>`
|
||||
: `<div class="hash-matrix-tooltip-hex">0x${hex}</div><div class="hash-matrix-tooltip-status">${count} nodes — COLLISION</div><div class="hash-matrix-tooltip-nodes">${nodes.slice(0,5).map(nodeLabel).join('')}${nodes.length>5?`<div class="hash-matrix-tooltip-status">+${nodes.length-5} more</div>`:''}</div>`;
|
||||
html += `<td class="hash-cell ${cellClass}${count ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip1.replace(/"/g,'&quot;')}" style="width:${cellSize}px;height:${cellSize}px;text-align:center;${bgStyle}border:1px solid var(--border);cursor:${count ? 'pointer' : 'default'};font-size:11px;font-weight:${count >= 2 ? '700' : '400'}">${hex}</td>`;
}
html += '</tr>';
}
html += '</table></div>';
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:400px;font-size:0.85em"></div></div>
<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
<span><span class="legend-swatch hash-cell-empty" style="border:1px solid var(--border)"></span> Available</span>
<span><span class="legend-swatch hash-cell-taken"></span> One node</span>
<span><span class="legend-swatch hash-cell-possible"></span> Possible conflict</span>
<span><span class="legend-swatch hash-cell-collision" style="background:rgb(220,80,30)"></span> Collision</span>
</div>`;
return `<td class="hash-cell ${cellClass}${count ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip1.replace(/"/g,'&quot;')}" style="width:${cs}px;height:${cs}px;text-align:center;${bgStyle}border:1px solid var(--border);cursor:${count ? 'pointer' : 'default'};font-size:11px;font-weight:${count >= 2 ? '700' : '400'}">${hex}</td>`;
});
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:400px;font-size:0.85em"></div></div>`;
html += hashMatrixLegendHtml([
{cls: 'hash-cell-empty', style: 'border:1px solid var(--border)', text: 'Available'},
{cls: 'hash-cell-taken', text: 'One node'},
{cls: 'hash-cell-possible', text: 'Possible conflict'},
{cls: 'hash-cell-collision', style: 'background:rgb(220,80,30)', text: 'Collision'}
]);
el.innerHTML = html;

initMatrixTooltip(el);
@@ -1278,7 +1217,7 @@
el.querySelectorAll('.hash-active').forEach(td => {
td.addEventListener('click', () => {
const hex = td.dataset.hex.toUpperCase();
const matches = prefixNodes[hex] || [];
const matches = oneByteCells[hex] || [];
const detail = document.getElementById('hashDetail');
if (!matches.length) { detail.innerHTML = `<strong class="mono">0x${hex}</strong><br><span class="text-muted">No known nodes</span>`; return; }
detail.innerHTML = `<strong class="mono" style="font-size:1.1em">0x${hex}</strong> — ${matches.length} node${matches.length !== 1 ? 's' : ''}` +
@@ -1293,47 +1232,17 @@
});

} else if (bytes === 2) {
// 2-byte mode: 16×16 grid of first-byte groups
const nodesForByte = allNodes.filter(n => n.hash_size === 2 || !n.hash_size);
const firstByteInfo = buildTwoBytePrefixInfo(nodesForByte);
const twoByteCells = sizeData.two_byte_cells || {};
const twoByteCount = stats.using_this_size || 0;
const uniqueTwoBytePrefixes = stats.unique_prefixes || 0;
const twoCollisions = Object.values(twoByteCells).filter(v => v.collision_count > 0).length;

const twoByteCount = allNodes.filter(n => n.hash_size === 2).length;
const uniqueTwoBytePrefixes = new Set(nodesForByte.map(n => n.public_key.slice(0, 4).toUpperCase())).size;
const twoCollisions = Object.values(firstByteInfo).filter(v => v.collisionCount > 0).length;
const twoPct = ((uniqueTwoBytePrefixes / 65536) * 100).toFixed(3);

let html = `<div style="display:flex;gap:12px;flex-wrap:wrap;margin-bottom:12px">
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Nodes tracked</div>
<div class="analytics-stat-value">${totalNodes.length.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Using 2-byte ID</div>
<div class="analytics-stat-value">${twoByteCount.toLocaleString()}</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px">
<div class="analytics-stat-label">Prefix space used</div>
<div class="analytics-stat-value" style="font-size:16px">${twoPct}%</div>
<div style="font-size:10px;color:var(--text-muted);margin-top:2px">${uniqueTwoBytePrefixes} of 65,536 possible</div>
</div>
<div class="analytics-stat-card" style="flex:1;min-width:110px;border-color:${twoCollisions > 0 ? 'var(--status-red)' : 'var(--border)'}">
<div class="analytics-stat-label">Prefix collisions</div>
<div class="analytics-stat-value" style="color:${twoCollisions > 0 ? 'var(--status-red)' : 'var(--status-green)'}">${twoCollisions}</div>
</div>
</div>`;
html += `<div style="display:flex;gap:16px;flex-wrap:wrap"><div class="hash-matrix-scroll"><table class="hash-matrix-table" style="border-collapse:collapse;font-size:12px;font-family:monospace">`;
html += `<tr><td style="width:${headerSize}px"></td>`;
for (const n of nibbles) html += `<td style="width:${cellSize}px;text-align:center;padding:2px 0;font-weight:bold;color:var(--text-muted)">${n}</td>`;
html += '</tr>';
for (let hi = 0; hi < 16; hi++) {
html += `<tr><td style="text-align:right;padding-right:4px;font-weight:bold;color:var(--text-muted)">${nibbles[hi]}</td>`;
for (let lo = 0; lo < 16; lo++) {
const hex = nibbles[hi] + nibbles[lo];
const info = firstByteInfo[hex] || { groupNodes: [], maxCollision: 0, collisionCount: 0 };
const nodeCount = info.groupNodes.length;
const maxCol = info.maxCollision;
// Classify worst overlap in group: confirmed collision (2+ repeaters) or possible (null-role involved)
const overlapping = Object.values(info.twoByteMap || {}).filter(v => v.length > 1);
let html = hashStatCardsHtml(totalNodes, twoByteCount, '2-byte', 65536, uniqueTwoBytePrefixes, twoCollisions);
html += hashMatrixGridHtml(nibbles, cellSize, headerSize, (hex, cs) => {
const info = twoByteCells[hex] || { group_nodes: [], max_collision: 0, collision_count: 0, two_byte_map: {} };
const nodeCount = (info.group_nodes || []).length;
const maxCol = info.max_collision || 0;
const overlapping = Object.values(info.two_byte_map || {}).filter(v => v.length > 1);
const hasConfirmed = overlapping.some(ns => ns.filter(n => n.role === 'repeater').length >= 2);
const hasPossible = !hasConfirmed && overlapping.some(ns => ns.length >= 2);
let cellClass2, bgStyle2;
@@ -1344,39 +1253,37 @@
const nodeLabel2 = m => esc(m.name||m.public_key.slice(0,8)) + (!m.role ? ' (?)' : '');
const tip2 = nodeCount === 0
? `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">No nodes in this group</div>`
: info.collisionCount === 0
: (info.collision_count || 0) === 0
? `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${nodeCount} node${nodeCount>1?'s':''} — no 2-byte collisions</div>`
: `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${hasConfirmed ? info.collisionCount + ' collision' + (info.collisionCount>1?'s':'') : 'Possible conflict'}</div><div class="hash-matrix-tooltip-nodes">${Object.entries(info.twoByteMap).filter(([,v])=>v.length>1).slice(0,4).map(([p,ns])=>`<div style="font-size:11px;padding:1px 0"><span style="color:${hasConfirmed?'var(--status-red)':'var(--status-yellow)'};font-family:var(--mono);font-weight:700">${p}</span> — ${ns.map(nodeLabel2).join(', ')}</div>`).join('')}</div>`;
html += `<td class="hash-cell ${cellClass2}${nodeCount ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip2.replace(/"/g,'&quot;')}" style="width:${cellSize}px;height:${cellSize}px;text-align:center;${bgStyle2}border:1px solid var(--border);cursor:${nodeCount ? 'pointer' : 'default'};font-size:11px;font-weight:${maxCol > 0 ? '700' : '400'}">${hex}</td>`;
}
html += '</tr>';
}
html += '</table></div>';
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:420px;font-size:0.85em"></div></div>
<div style="margin-top:8px;font-size:0.8em;display:flex;gap:16px;align-items:center;flex-wrap:wrap">
<span><span class="legend-swatch hash-cell-empty" style="border:1px solid var(--border)"></span> No nodes in group</span>
<span><span class="legend-swatch hash-cell-taken"></span> Nodes present, no collision</span>
<span><span class="legend-swatch hash-cell-possible"></span> Possible conflict</span>
<span><span class="legend-swatch hash-cell-collision" style="background:rgb(220,80,30)"></span> Collision</span>
</div>`;
: `<div class="hash-matrix-tooltip-hex">0x${hex}__</div><div class="hash-matrix-tooltip-status">${hasConfirmed ? (info.collision_count||0) + ' collision' + ((info.collision_count||0)>1?'s':'') : 'Possible conflict'}</div><div class="hash-matrix-tooltip-nodes">${Object.entries(info.two_byte_map||{}).filter(([,v])=>v.length>1).slice(0,4).map(([p,ns])=>`<div style="font-size:11px;padding:1px 0"><span style="color:${hasConfirmed?'var(--status-red)':'var(--status-yellow)'};font-family:var(--mono);font-weight:700">${p}</span> — ${ns.map(nodeLabel2).join(', ')}</div>`).join('')}</div>`;
return `<td class="hash-cell ${cellClass2}${nodeCount ? ' hash-active' : ''}" data-hex="${hex}" data-tip="${tip2.replace(/"/g,'&quot;')}" style="width:${cs}px;height:${cs}px;text-align:center;${bgStyle2}border:1px solid var(--border);cursor:${nodeCount ? 'pointer' : 'default'};font-size:11px;font-weight:${maxCol > 0 ? '700' : '400'}">${hex}</td>`;
});
html += `<div id="hashDetail" style="flex:1;min-width:200px;max-width:420px;font-size:0.85em"></div></div>`;
html += hashMatrixLegendHtml([
{cls: 'hash-cell-empty', style: 'border:1px solid var(--border)', text: 'No nodes in group'},
{cls: 'hash-cell-taken', text: 'Nodes present, no collision'},
{cls: 'hash-cell-possible', text: 'Possible conflict'},
{cls: 'hash-cell-collision', style: 'background:rgb(220,80,30)', text: 'Collision'}
]);
el.innerHTML = html;

el.querySelectorAll('.hash-active').forEach(td => {
td.addEventListener('click', () => {
const hex = td.dataset.hex.toUpperCase();
const info = firstByteInfo[hex];
const info = twoByteCells[hex];
const detail = document.getElementById('hashDetail');
if (!info || !info.groupNodes.length) { detail.innerHTML = ''; return; }
let dhtml = `<strong class="mono" style="font-size:1.1em">0x${hex}__</strong> — ${info.groupNodes.length} node${info.groupNodes.length !== 1 ? 's' : ''} in group`;
if (info.collisionCount === 0) {
if (!info || !(info.group_nodes || []).length) { detail.innerHTML = ''; return; }
const groupNodes = info.group_nodes || [];
let dhtml = `<strong class="mono" style="font-size:1.1em">0x${hex}__</strong> — ${groupNodes.length} node${groupNodes.length !== 1 ? 's' : ''} in group`;
if ((info.collision_count || 0) === 0) {
dhtml += `<div class="text-muted" style="margin-top:6px;font-size:0.85em">✅ No 2-byte collisions in this group</div>`;
dhtml += `<div style="margin-top:8px">${info.groupNodes.map(m => {
dhtml += `<div style="margin-top:8px">${groupNodes.map(m => {
const prefix = m.public_key.slice(0,4).toUpperCase();
return `<div style="padding:2px 0"><code class="mono" style="font-size:0.85em">${prefix}</code> <a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a></div>`;
}).join('')}</div>`;
} else {
dhtml += `<div style="margin-top:8px">`;
for (const [twoHex, nodes] of Object.entries(info.twoByteMap).sort()) {
for (const [twoHex, nodes] of Object.entries(info.two_byte_map || {}).sort()) {
const isCollision = nodes.length > 1;
dhtml += `<div style="margin-bottom:6px;padding:4px 6px;border-radius:4px;background:${isCollision ? 'rgba(220,50,30,0.1)' : 'transparent'};border:1px solid ${isCollision ? 'rgba(220,50,30,0.3)' : 'transparent'}">`;
dhtml += `<code class="mono" style="font-size:0.9em;font-weight:${isCollision?'700':'400'}">${twoHex}</code>${isCollision ? ' <span style="color:#dc2626;font-size:0.75em;font-weight:700">COLLISION</span>' : ''} `;
@@ -1395,106 +1302,65 @@
}
}

async function renderCollisions(topHops, allNodes, bytes) {
bytes = bytes || 1;
function renderCollisionsFromServer(sizeData, bytes) {
const el = document.getElementById('collisionList');
const hopsForSize = topHops.filter(h => h.size === bytes);
if (!sizeData) { el.innerHTML = '<div class="text-muted">No data</div>'; return; }
const collisions = sizeData.collisions || [];

// For 2-byte and 3-byte, scan nodes directly — topHops only reliably covers 1-byte path hops
const hopsToCheck = bytes === 1 ? hopsForSize : buildCollisionHops(allNodes, bytes);

if (!hopsToCheck.length && bytes === 1) {
el.innerHTML = `<div class="text-muted" style="padding:8px">No 1-byte hops observed in recent packets.</div>`;
if (!collisions.length) {
const cleanMsg = bytes === 3
? '✅ No 3-byte prefix collisions detected — all nodes have unique 3-byte prefixes.'
: `✅ No ${bytes}-byte collisions detected`;
el.innerHTML = `<div class="text-muted" style="padding:8px">${cleanMsg}</div>`;
return;
}
try {
const nodes = allNodes;
const collisions = [];
for (const hop of hopsToCheck) {
const prefix = hop.hex.toLowerCase();
const matches = nodes.filter(n => n.public_key.toLowerCase().startsWith(prefix));
if (matches.length > 1) {
// Calculate pairwise distances for classification
const withCoords = matches.filter(m => m.lat && m.lon && !(m.lat === 0 && m.lon === 0));
let maxDistKm = 0;
let classification = 'unknown';
if (withCoords.length >= 2) {
for (let i = 0; i < withCoords.length; i++) {
for (let j = i + 1; j < withCoords.length; j++) {
const dLat = (withCoords[i].lat - withCoords[j].lat) * 111;
const dLon = (withCoords[i].lon - withCoords[j].lon) * 85;
const d = Math.sqrt(dLat * dLat + dLon * dLon);
if (d > maxDistKm) maxDistKm = d;
}
}
if (maxDistKm < 50) classification = 'local';
else if (maxDistKm < 200) classification = 'regional';
else classification = 'distant';
} else if (withCoords.length < 2) {
classification = 'incomplete';
}
collisions.push({ hop: hop.hex, count: hop.count, matches, maxDistKm, classification, withCoords: withCoords.length });

const showAppearances = bytes < 3;
el.innerHTML = `<table class="analytics-table">
<thead><tr>
<th scope="col">Prefix</th>
${showAppearances ? '<th scope="col">Appearances</th>' : ''}
<th scope="col">Max Distance</th>
<th scope="col">Assessment</th>
<th scope="col">Colliding Nodes</th>
</tr></thead>
<tbody>${collisions.map(c => {
let badge, tooltip;
if (c.classification === 'local') {
badge = '<span class="badge" style="background:var(--status-green);color:#fff" title="All nodes within 50km — likely true collision, same RF neighborhood">🏘️ Local</span>';
tooltip = 'Nodes close enough for direct RF — probably genuine prefix collision';
} else if (c.classification === 'regional') {
badge = '<span class="badge" style="background:var(--status-yellow);color:#fff" title="Nodes 50–200km apart — edge of LoRa range, could be atmospheric">⚡ Regional</span>';
tooltip = 'At edge of 915MHz range — could indicate atmospheric ducting or hilltop-to-hilltop links';
} else if (c.classification === 'distant') {
badge = '<span class="badge" style="background:var(--status-red);color:#fff" title="Nodes >200km apart — beyond typical 915MHz range">🌐 Distant</span>';
tooltip = 'Beyond typical LoRa range — likely internet bridging, MQTT gateway, or separate mesh networks sharing prefix';
} else {
badge = '<span class="badge" style="background:#6b7280;color:#fff">❓ Unknown</span>';
tooltip = 'Not enough coordinate data to classify';
}
}
if (!collisions.length) {
const cleanMsg = bytes === 3
? '✅ No 3-byte prefix collisions detected — all nodes have unique 3-byte prefixes.'
: `✅ No ${bytes}-byte collisions detected`;
el.innerHTML = `<div class="text-muted" style="padding:8px">${cleanMsg}</div>`;
return;
}

// Sort: local first (most likely to collide), then regional, distant, incomplete
const classOrder = { local: 0, regional: 1, distant: 2, incomplete: 3, unknown: 4 };
collisions.sort((a, b) => classOrder[a.classification] - classOrder[b.classification] || b.count - a.count);

const showAppearances = bytes < 3;
el.innerHTML = `<table class="analytics-table">
<thead><tr>
<th scope="col">Prefix</th>
${showAppearances ? '<th scope="col">Appearances</th>' : ''}
<th scope="col">Max Distance</th>
<th scope="col">Assessment</th>
<th scope="col">Colliding Nodes</th>
</tr></thead>
<tbody>${collisions.map(c => {
let badge, tooltip;
if (c.classification === 'local') {
badge = '<span class="badge" style="background:var(--status-green);color:#fff" title="All nodes within 50km — likely true collision, same RF neighborhood">🏘️ Local</span>';
tooltip = 'Nodes close enough for direct RF — probably genuine prefix collision';
} else if (c.classification === 'regional') {
badge = '<span class="badge" style="background:var(--status-yellow);color:#fff" title="Nodes 50–200km apart — edge of LoRa range, could be atmospheric">⚡ Regional</span>';
tooltip = 'At edge of 915MHz range — could indicate atmospheric ducting or hilltop-to-hilltop links';
} else if (c.classification === 'distant') {
badge = '<span class="badge" style="background:var(--status-red);color:#fff" title="Nodes >200km apart — beyond typical 915MHz range">🌐 Distant</span>';
tooltip = 'Beyond typical LoRa range — likely internet bridging, MQTT gateway, or separate mesh networks sharing prefix';
} else {
badge = '<span class="badge" style="background:#6b7280;color:#fff">❓ Unknown</span>';
tooltip = 'Not enough coordinate data to classify';
}
const distStr = c.withCoords >= 2 ? `${Math.round(c.maxDistKm)} km` : '<span class="text-muted">—</span>';
return `<tr>
<td class="mono">${c.hop}</td>
${showAppearances ? `<td>${c.count.toLocaleString()}</td>` : ''}
<td>${distStr}</td>
<td title="${tooltip}">${badge}</td>
<td>${c.matches.map(m => {
const loc = (m.lat && m.lon && !(m.lat === 0 && m.lon === 0))
? ` <span class="text-muted" style="font-size:0.75em">(${m.lat.toFixed(2)}, ${m.lon.toFixed(2)})</span>`
: ' <span class="text-muted" style="font-size:0.75em">(no coords)</span>';
return `<a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a>${loc}`;
}).join('<br>')}</td>
</tr>`;
}).join('')}</tbody>
</table>
<div class="text-muted" style="padding:8px;font-size:0.8em">
<strong>🏘️ Local</strong> <50km: true prefix collision, same mesh area
<strong>⚡ Regional</strong> 50–200km: edge of LoRa range, possible atmospheric propagation
<strong>🌐 Distant</strong> >200km: beyond 915MHz range — internet bridge, MQTT gateway, or separate networks
</div>`;
} catch { el.innerHTML = '<div class="text-muted">Failed to load</div>'; }
const nodes = c.nodes || [];
const distStr = c.with_coords >= 2 ? `${Math.round(c.max_dist_km)} km` : '<span class="text-muted">—</span>';
return `<tr>
<td class="mono">${c.prefix}</td>
${showAppearances ? `<td>${(c.appearances || 0).toLocaleString()}</td>` : ''}
<td>${distStr}</td>
<td title="${tooltip}">${badge}</td>
<td>${nodes.map(m => {
const loc = (m.lat && m.lon && !(m.lat === 0 && m.lon === 0))
? ` <span class="text-muted" style="font-size:0.75em">(${m.lat.toFixed(2)}, ${m.lon.toFixed(2)})</span>`
: ' <span class="text-muted" style="font-size:0.75em">(no coords)</span>';
return `<a href="#/nodes/${encodeURIComponent(m.public_key)}" class="analytics-link">${esc(m.name || m.public_key.slice(0,12))}</a>${loc}`;
}).join('<br>')}</td>
</tr>`;
}).join('')}</tbody>
</table>
<div class="text-muted" style="padding:8px;font-size:0.8em">
<strong>🏘️ Local</strong> <50km: true prefix collision, same mesh area
<strong>⚡ Regional</strong> 50–200km: edge of LoRa range, possible atmospheric propagation
<strong>🌐 Distant</strong> >200km: beyond 915MHz range — internet bridge, MQTT gateway, or separate networks
</div>`;
}

async function renderSubpaths(el) {
el.innerHTML = '<div class="text-center text-muted" style="padding:40px">Analyzing route patterns…</div>';
try {
@@ -1622,9 +1488,9 @@
for (let i = 0; i < data.nodes.length - 1; i++) {
const a = data.nodes[i], b = data.nodes[i+1];
if (a.lat && a.lon && b.lat && b.lon && !(a.lat===0&&a.lon===0) && !(b.lat===0&&b.lon===0)) {
const dLat = (a.lat - b.lat) * 111;
const dLon = (a.lon - b.lon) * 85;
const km = Math.sqrt(dLat*dLat + dLon*dLon);
const km = window.HopResolver && window.HopResolver.haversineKm
? window.HopResolver.haversineKm(a.lat, a.lon, b.lat, b.lon)
: (() => { const R=6371, dLat=(b.lat-a.lat)*Math.PI/180, dLon=(b.lon-a.lon)*Math.PI/180, h=Math.sin(dLat/2)**2+Math.cos(a.lat*Math.PI/180)*Math.cos(b.lat*Math.PI/180)*Math.sin(dLon/2)**2; return R*2*Math.atan2(Math.sqrt(h),Math.sqrt(1-h)); })();
total += km;
const cls = km > 200 ? 'color:var(--status-red);font-weight:bold' : km > 50 ? 'color:var(--status-yellow)' : 'color:var(--status-green)';
dists.push(`<div style="padding:2px 0"><span style="${cls}">${km < 1 ? (km*1000).toFixed(0)+'m' : km.toFixed(1)+'km'}</span> <span class="text-muted">${esc(a.name)} → ${esc(b.name)}</span></div>`);
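The hunk above swaps the old planar approximation (degrees × 111 km for latitude, × 85 km for longitude) for `window.HopResolver.haversineKm`, with an inline haversine fallback. A standalone sketch of the two formulas for comparison — `haversineKm` and `planarKm` here are illustrative names, not the repo's actual exports:

```javascript
// Great-circle distance in km between two lat/lon points (haversine),
// matching the inline fallback in the diff above.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius, km
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return R * 2 * Math.atan2(Math.sqrt(h), Math.sqrt(1 - h));
}

// The old planar shortcut being removed (111 km/deg lat, ~85 km/deg lon --
// the 85 is only accurate near ~40° latitude).
function planarKm(lat1, lon1, lat2, lon2) {
  const dLat = (lat1 - lat2) * 111;
  const dLon = (lon1 - lon2) * 85;
  return Math.sqrt(dLat * dLat + dLon * dLon);
}

console.log(Math.round(haversineKm(45.5, -122.7, 47.6, -122.3))); // ≈ 235 (km)
```

At mid-latitudes and short ranges the two agree closely; haversine removes the latitude-dependent error in the fixed 85 km/deg longitude factor.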
@@ -1942,9 +1808,6 @@ function destroy() { _analyticsData = {}; _channelData = null; }
|
||||
window._analyticsSaveChannelSort = saveChannelSort;
|
||||
window._analyticsChannelTbodyHtml = channelTbodyHtml;
|
||||
window._analyticsChannelTheadHtml = channelTheadHtml;
|
||||
window._analyticsBuildOneBytePrefixMap = buildOneBytePrefixMap;
|
||||
window._analyticsBuildTwoBytePrefixInfo = buildTwoBytePrefixInfo;
|
||||
window._analyticsBuildCollisionHops = buildCollisionHops;
|
||||
}
|
||||
|
||||
registerPage('analytics', { init, destroy });
|
||||
|
||||
@@ -807,6 +807,7 @@ window.addEventListener('DOMContentLoaded', () => {

// User's localStorage preferences take priority over server config
const userTheme = (() => { try { return JSON.parse(localStorage.getItem('meshcore-user-theme') || '{}'); } catch { return {}; } })();
window._SITE_CONFIG_ORIGINAL_HOME = JSON.parse(JSON.stringify(window.SITE_CONFIG.home || {}));
mergeUserHomeConfig(window.SITE_CONFIG, userTheme);

// Apply CSS variable overrides from theme config (skipped if user has local overrides)

@@ -274,6 +274,9 @@
for (let i = 0; i < str.length; i++) h = ((h << 5) - h + str.charCodeAt(i)) | 0;
return Math.abs(h);
}
function formatHashHex(hash) {
return typeof hash === 'number' ? '0x' + hash.toString(16).toUpperCase().padStart(2, '0') : hash;
}
function getChannelColor(hash) { return CHANNEL_COLORS[hashCode(String(hash)) % CHANNEL_COLORS.length]; }
function getSenderColor(name) {
const isDark = document.documentElement.getAttribute('data-theme') === 'dark' ||
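The new `formatHashHex` helper and the existing `hashCode`/`getChannelColor` pair can be exercised together; `CHANNEL_COLORS` here is a stand-in palette, not the repo's actual list:

```javascript
// Stand-in palette; the real CHANNEL_COLORS lives elsewhere in channels.js.
const CHANNEL_COLORS = ['#ef4444', '#f59e0b', '#10b981', '#3b82f6'];

// Simple string hash (shift-and-subtract), kept in 32-bit range via | 0.
function hashCode(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) h = ((h << 5) - h + str.charCodeAt(i)) | 0;
  return Math.abs(h);
}

// Numeric channel hashes render as zero-padded hex; strings pass through.
function formatHashHex(hash) {
  return typeof hash === 'number'
    ? '0x' + hash.toString(16).toUpperCase().padStart(2, '0')
    : hash;
}

// Deterministic color: the same channel hash always maps to the same color.
function getChannelColor(hash) {
  return CHANNEL_COLORS[hashCode(String(hash)) % CHANNEL_COLORS.length];
}
```

The point of hashing the stringified channel hash is stability across renders and sessions: no color state needs to be stored.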
@@ -659,7 +662,7 @@
});

el.innerHTML = sorted.map(ch => {
const name = ch.name || `Channel ${ch.hash}`;
const name = ch.name || `Channel ${formatHashHex(ch.hash)}`;
const color = getChannelColor(ch.hash);
const time = ch.lastActivityMs ? formatSecondsAgo(Math.floor((Date.now() - ch.lastActivityMs) / 1000)) : '';
const preview = ch.lastSender && ch.lastMessage
@@ -688,7 +691,7 @@
history.replaceState(null, '', `#/channels/${encodeURIComponent(hash)}`);
renderChannelList();
const ch = channels.find(c => c.hash === hash);
const name = ch?.name || `Channel ${hash}`;
const name = ch?.name || `Channel ${formatHashHex(hash)}`;
const header = document.getElementById('chHeader');
header.querySelector('.ch-header-text').textContent = `${name} — ${ch?.messageCount || 0} messages`;


@@ -450,7 +450,8 @@
function mergeSection(key) {
return Object.assign({}, DEFAULTS[key], cfg[key] || {}, local[key] || {});
}
var mergedHome = mergeSection('home');
var serverHome = window._SITE_CONFIG_ORIGINAL_HOME || cfg.home || {};
var mergedHome = Object.assign({}, DEFAULTS.home, serverHome, local.home || {});
var localTsMode = localStorage.getItem('meshcore-timestamp-mode');
var localTsTimezone = localStorage.getItem('meshcore-timestamp-timezone');
var localTsFormat = localStorage.getItem('meshcore-timestamp-format');
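The rewritten `mergedHome` line relies on `Object.assign` precedence: defaults first, then the pristine server config, then local user overrides, with later sources winning. A minimal sketch with hypothetical values (the field names are illustrative, not the real config schema):

```javascript
// Precedence in the new mergedHome line: defaults < server config < local
// user overrides. Object.assign copies left to right, so later sources win.
const DEFAULTS = { home: { title: 'CoreScope', accent: '#0af', steps: [] } };
const serverHome = { title: 'My Mesh', accent: '#f60' };   // pristine server config
const localHome = { accent: '#3b82f6' };                   // user's localStorage

const mergedHome = Object.assign({}, DEFAULTS.home, serverHome, localHome);
```

Capturing `window._SITE_CONFIG_ORIGINAL_HOME` matters because the earlier `mergeUserHomeConfig` call mutates `window.SITE_CONFIG.home` in place; without the pristine copy, local overrides would be merged on top of an already-overridden object.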
@@ -1202,19 +1203,19 @@
var tmp = state.home.steps[i];
state.home.steps[i] = state.home.steps[j];
state.home.steps[j] = tmp;
render(container);
render(container); autoSave();
});
});
container.querySelectorAll('[data-rm-step]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.steps.splice(parseInt(btn.dataset.rmStep), 1);
render(container);
render(container); autoSave();
});
});
var addStepBtn = document.getElementById('addStep');
if (addStepBtn) addStepBtn.addEventListener('click', function () {
state.home.steps.push({ emoji: '📌', title: '', description: '' });
render(container);
render(container); autoSave();
});

// Checklist
@@ -1227,13 +1228,13 @@
container.querySelectorAll('[data-rm-check]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.checklist.splice(parseInt(btn.dataset.rmCheck), 1);
render(container);
render(container); autoSave();
});
});
var addCheckBtn = document.getElementById('addCheck');
if (addCheckBtn) addCheckBtn.addEventListener('click', function () {
state.home.checklist.push({ question: '', answer: '' });
render(container);
render(container); autoSave();
});

// Footer links
@@ -1246,13 +1247,13 @@
container.querySelectorAll('[data-rm-link]').forEach(function (btn) {
btn.addEventListener('click', function () {
state.home.footerLinks.splice(parseInt(btn.dataset.rmLink), 1);
render(container);
render(container); autoSave();
});
});
var addLinkBtn = document.getElementById('addLink');
if (addLinkBtn) addLinkBtn.addEventListener('click', function () {
state.home.footerLinks.push({ label: '', url: '' });
render(container);
render(container); autoSave();
});

// Export copy

@@ -203,5 +203,5 @@ window.HopResolver = (function() {
return nodesList.length > 0;
}

return { init: init, resolve: resolve, ready: ready };
return { init: init, resolve: resolve, ready: ready, haversineKm: haversineKm };
})();

@@ -22,9 +22,9 @@
<meta name="twitter:title" content="CoreScope">
<meta name="twitter:description" content="Real-time MeshCore LoRa mesh network analyzer — live packet visualization, node tracking, channel decryption, and route analysis.">
<meta name="twitter:image" content="https://raw.githubusercontent.com/Kpa-clawbot/corescope/master/public/og-image.png">
<link rel="stylesheet" href="style.css?v=1775022775">
<link rel="stylesheet" href="home.css?v=1775022775">
<link rel="stylesheet" href="live.css?v=1775022775">
<link rel="stylesheet" href="style.css?v=__BUST__">
<link rel="stylesheet" href="home.css?v=__BUST__">
<link rel="stylesheet" href="live.css?v=__BUST__">
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css"
integrity="sha256-p4NxAoJBhIIN+hmNHrzRCf9tD/miZyoHS5obTRR9BMY="
crossorigin="anonymous">
@@ -85,30 +85,30 @@
<main id="app" role="main"></main>

<script src="vendor/qrcode.js"></script>
<script src="roles.js?v=1775022775"></script>
<script src="customize.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="region-filter.js?v=1775022775"></script>
<script src="hop-resolver.js?v=1775022775"></script>
<script src="hop-display.js?v=1775022775"></script>
<script src="app.js?v=1775022775"></script>
<script src="home.js?v=1775022775"></script>
<script src="packet-filter.js?v=1775022775"></script>
<script src="packets.js?v=1775022775"></script>
<script src="geo-filter-overlay.js?v=1775022775"></script>
<script src="map.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="channels.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="nodes.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="traces.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="analytics.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v1-constellation.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v2-constellation.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-lab.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="live.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observers.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observer-detail.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="compare.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="node-analytics.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="perf.js?v=1775022775" onerror="console.error('Failed to load:', this.src)"></script>
<script src="roles.js?v=__BUST__"></script>
<script src="customize.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="region-filter.js?v=__BUST__"></script>
<script src="hop-resolver.js?v=__BUST__"></script>
<script src="hop-display.js?v=__BUST__"></script>
<script src="app.js?v=__BUST__"></script>
<script src="home.js?v=__BUST__"></script>
<script src="packet-filter.js?v=__BUST__"></script>
<script src="packets.js?v=__BUST__"></script>
<script src="geo-filter-overlay.js?v=__BUST__"></script>
<script src="map.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="channels.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="nodes.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="traces.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="analytics.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v1-constellation.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v2-constellation.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-lab.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="live.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observers.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observer-detail.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="compare.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="node-analytics.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="perf.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
</body>
</html>

164
public/live.js
@@ -10,6 +10,7 @@
let nodeData = {};
let packetCount = 0;
let activeAnims = 0;
const MAX_CONCURRENT_ANIMS = 20;
let nodeActivity = {};
let recentPaths = [];
let showGhostHops = localStorage.getItem('live-ghost-hops') !== 'false';
@@ -368,12 +369,17 @@
}
}

function updateVCRClock(tsMs) {
function vcrFormatTime(tsMs) {
const d = new Date(tsMs);
const hh = String(d.getHours()).padStart(2, '0');
const mm = String(d.getMinutes()).padStart(2, '0');
const ss = String(d.getSeconds()).padStart(2, '0');
drawLcdText(`${hh}:${mm}:${ss}`, statusGreen());
const utc = typeof getTimestampTimezone === 'function' && getTimestampTimezone() === 'utc';
const hh = String(utc ? d.getUTCHours() : d.getHours()).padStart(2, '0');
const mm = String(utc ? d.getUTCMinutes() : d.getMinutes()).padStart(2, '0');
const ss = String(utc ? d.getUTCSeconds() : d.getSeconds()).padStart(2, '0');
return `${hh}:${mm}:${ss}`;
}

function updateVCRClock(tsMs) {
drawLcdText(vcrFormatTime(tsMs), statusGreen());
}

function updateVCRLcd() {
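The factored-out formatter can be sketched in isolation; here the timezone preference is passed as a plain `utc` flag instead of calling the repo's `getTimestampTimezone()` helper:

```javascript
// UTC-aware HH:MM:SS formatting, as factored out into vcrFormatTime above.
// The `utc` flag stands in for getTimestampTimezone() === 'utc'.
function vcrFormatTime(tsMs, utc) {
  const d = new Date(tsMs);
  const hh = String(utc ? d.getUTCHours() : d.getHours()).padStart(2, '0');
  const mm = String(utc ? d.getUTCMinutes() : d.getMinutes()).padStart(2, '0');
  const ss = String(utc ? d.getUTCSeconds() : d.getSeconds()).padStart(2, '0');
  return `${hh}:${mm}:${ss}`;
}
```

Extracting this lets the VCR clock, the timeline hover tooltip, and the touch tooltip all agree on one timezone rule, where previously the tooltips used `toLocaleTimeString` and always showed local time.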
@@ -1060,8 +1066,7 @@
const rect = timelineEl.getBoundingClientRect();
const pct = (e.clientX - rect.left) / rect.width;
const ts = Date.now() - VCR.timelineScope + pct * VCR.timelineScope;
const d = new Date(ts);
timeTooltip.textContent = d.toLocaleTimeString([], {hour:'2-digit',minute:'2-digit',second:'2-digit'});
timeTooltip.textContent = vcrFormatTime(ts);
timeTooltip.style.left = (e.clientX - rect.left) + 'px';
timeTooltip.classList.remove('hidden');
});
@@ -1074,8 +1079,7 @@
const rect = timelineEl.getBoundingClientRect();
const pct = Math.max(0, Math.min(1, (touch.clientX - rect.left) / rect.width));
const ts = Date.now() - VCR.timelineScope + pct * VCR.timelineScope;
const d = new Date(ts);
timeTooltip.textContent = d.toLocaleTimeString([], {hour:'2-digit',minute:'2-digit',second:'2-digit'});
timeTooltip.textContent = vcrFormatTime(ts);
timeTooltip.style.left = (touch.clientX - rect.left) + 'px';
timeTooltip.classList.remove('hidden');
});
@@ -1581,6 +1585,7 @@
window._livePruneStaleNodes = pruneStaleNodes;
window._liveNodeMarkers = function() { return nodeMarkers; };
window._liveNodeData = function() { return nodeData; };
window._vcrFormatTime = vcrFormatTime;

async function replayRecent() {
try {
@@ -1843,6 +1848,7 @@

function animatePath(hopPositions, typeName, color, rawHex, onHop) {
if (!animLayer || !pathsLayer) return;
if (activeAnims >= MAX_CONCURRENT_ANIMS) return;
activeAnims++;
document.getElementById('liveAnimCount').textContent = activeAnims;
let hopIndex = 0;
@@ -1850,9 +1856,11 @@
function nextHop() {
if (hopIndex >= hopPositions.length) {
activeAnims = Math.max(0, activeAnims - 1);
document.getElementById('liveAnimCount').textContent = activeAnims;
const countEl = document.getElementById('liveAnimCount');
if (countEl) countEl.textContent = activeAnims;
return;
}
if (!animLayer) return;
// Audio hook: notify per-hop callback
if (onHop) try { onHop(hopIndex, hopPositions.length, hopPositions[hopIndex]); } catch (e) {}
const hp = hopPositions[hopIndex];
@@ -1864,12 +1872,22 @@
radius: 3, fillColor: '#94a3b8', fillOpacity: 0.35, color: '#94a3b8', weight: 1, opacity: 0.5
}).addTo(animLayer);
let pulseUp = true;
const pulseTimer = setInterval(() => {
if (!animLayer.hasLayer(ghost)) { clearInterval(pulseTimer); return; }
ghost.setStyle({ fillOpacity: pulseUp ? 0.6 : 0.25, opacity: pulseUp ? 0.7 : 0.4 });
pulseUp = !pulseUp;
}, 600);
setTimeout(() => { clearInterval(pulseTimer); if (animLayer.hasLayer(ghost)) animLayer.removeLayer(ghost); }, 3000);
let lastPulseTime = performance.now();
const pulseExpiry = lastPulseTime + 3000;
function ghostPulse(now) {
if (!animLayer || !animLayer.hasLayer(ghost)) return;
if (now >= pulseExpiry) {
if (animLayer && animLayer.hasLayer(ghost)) animLayer.removeLayer(ghost);
return;
}
if (now - lastPulseTime >= 600) {
lastPulseTime = now;
ghost.setStyle({ fillOpacity: pulseUp ? 0.6 : 0.25, opacity: pulseUp ? 0.7 : 0.4 });
pulseUp = !pulseUp;
}
requestAnimationFrame(ghostPulse);
}
requestAnimationFrame(ghostPulse);
}
} else {
pulseNode(hp.key, hp.pos, typeName);
@@ -1913,20 +1931,30 @@
}).addTo(animLayer);

let r = 2, op = 0.9;
const iv = setInterval(() => {
r += 1.5; op -= 0.03;
if (op <= 0) {
clearInterval(iv);
let lastPulse = performance.now();
const pulseStart = lastPulse;
function animatePulse(now) {
if (now - pulseStart > 2000) {
try { animLayer.removeLayer(ring); } catch {}
return;
}
try {
ring.setRadius(r);
ring.setStyle({ opacity: op, weight: Math.max(0.3, 3 - r * 0.04) });
} catch { clearInterval(iv); }
}, 26);
// Safety cleanup — never let a ring live longer than 2s
setTimeout(() => { clearInterval(iv); try { animLayer.removeLayer(ring); } catch {} }, 2000);
const elapsed = now - lastPulse;
if (elapsed >= 26) {
const ticks = Math.min(Math.floor(elapsed / 26), 4);
r += 1.5 * ticks; op -= 0.03 * ticks;
lastPulse = now;
if (op <= 0) {
try { animLayer.removeLayer(ring); } catch {}
return;
}
try {
ring.setRadius(r);
ring.setStyle({ opacity: op, weight: Math.max(0.3, 3 - r * 0.04) });
} catch { return; }
}
requestAnimationFrame(animatePulse);
}
requestAnimationFrame(animatePulse);

const baseColor = marker._baseColor || '#6b7280';
const baseSize = marker._baseSize || 6;
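The recurring pattern in these hunks is a `setInterval` to `requestAnimationFrame` conversion: accumulate elapsed time, run a capped number of update ticks per frame, so a long frame gap (e.g. a backgrounded tab) does not replay an unbounded burst of updates. The core of that logic, extracted as a sketch (the 26 ms interval and cap of 4 mirror the ring-pulse hunk):

```javascript
// Frame-tick accumulator for rAF loops: given a frame timestamp, returns
// how many fixed-interval ticks to run, capped per frame so a long gap
// (backgrounded tab) doesn't replay an unbounded number of updates.
function makeTicker(intervalMs, maxTicksPerFrame) {
  let last = null;
  return function ticksFor(now) {
    if (last === null) { last = now; return 0; } // first frame just anchors the clock
    const elapsed = now - last;
    if (elapsed < intervalMs) return 0;
    last = now;
    return Math.min(Math.floor(elapsed / intervalMs), maxTicksPerFrame);
  };
}
```

Inside an rAF callback one would run the per-tick update `ticksFor(now)` times, matching the `Math.min(Math.floor(elapsed / 26), 4)` expression in the diff. Note the diff's simpler variant resets `last = now` rather than carrying the remainder, trading a little timing drift for simplicity.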
@@ -2239,43 +2267,61 @@
radius: 3.5, fillColor: '#fff', fillOpacity: 1, color: color, weight: 1.5
}).addTo(animLayer);

const interval = setInterval(() => {
step++;
const lat = from[0] + latStep * step;
const lon = from[1] + lonStep * step;
currentCoords.push([lat, lon]);
line.setLatLngs(currentCoords);
contrail.setLatLngs(currentCoords);
dot.setLatLng([lat, lon]);

if (step >= steps) {
clearInterval(interval);
if (animLayer) animLayer.removeLayer(dot);

recentPaths.push({ line, glowLine: contrail, time: Date.now() });
while (recentPaths.length > 5) {
const old = recentPaths.shift();
if (pathsLayer) { pathsLayer.removeLayer(old.line); pathsLayer.removeLayer(old.glowLine); }
let lastStep = performance.now();
function animateLine(now) {
const elapsed = now - lastStep;
if (elapsed >= 33) {
const ticks = Math.min(Math.floor(elapsed / 33), 4);
lastStep = now;
for (let t = 0; t < ticks && step < steps; t++) {
step++;
const lat = from[0] + latStep * step;
const lon = from[1] + lonStep * step;
currentCoords.push([lat, lon]);
}
const lastPt = currentCoords[currentCoords.length - 1];
line.setLatLngs(currentCoords);
contrail.setLatLngs(currentCoords);
dot.setLatLng(lastPt);

setTimeout(() => {
let fadeOp = mainOpacity;
const fi = setInterval(() => {
fadeOp -= 0.1;
if (fadeOp <= 0) {
clearInterval(fi);
if (pathsLayer) { pathsLayer.removeLayer(line); pathsLayer.removeLayer(contrail); }
recentPaths = recentPaths.filter(p => p.line !== line);
} else {
line.setStyle({ opacity: fadeOp });
contrail.setStyle({ opacity: fadeOp * 0.15 });
if (step >= steps) {
if (animLayer) animLayer.removeLayer(dot);

recentPaths.push({ line, glowLine: contrail, time: Date.now() });
while (recentPaths.length > 5) {
const old = recentPaths.shift();
if (pathsLayer) { pathsLayer.removeLayer(old.line); pathsLayer.removeLayer(old.glowLine); }
}

setTimeout(() => {
let fadeOp = mainOpacity;
let lastFade = performance.now();
function animateFade(now) {
const fadeElapsed = now - lastFade;
if (fadeElapsed >= 52) {
const fadeTicks = Math.min(Math.floor(fadeElapsed / 52), 4);
lastFade = now;
fadeOp -= 0.1 * fadeTicks;
if (fadeOp <= 0) {
if (pathsLayer) { pathsLayer.removeLayer(line); pathsLayer.removeLayer(contrail); }
recentPaths = recentPaths.filter(p => p.line !== line);
return;
}
line.setStyle({ opacity: fadeOp });
contrail.setStyle({ opacity: fadeOp * 0.15 });
}
requestAnimationFrame(animateFade);
}
}, 52);
}, 800);
requestAnimationFrame(animateFade);
}, 800);

if (onComplete) onComplete();
if (onComplete) onComplete();
return;
}
}
}, 33);
requestAnimationFrame(animateLine);
}
requestAnimationFrame(animateLine);
}

function showHeatMap() {

@@ -10,6 +10,8 @@
let targetNodeKey = null;
let observers = [];
let filters = { repeater: true, companion: true, room: true, sensor: true, observer: true, lastHeard: '30d', neighbors: false, clusters: false, hashLabels: localStorage.getItem('meshcore-map-hash-labels') !== 'false', statusFilter: localStorage.getItem('meshcore-map-status-filter') || 'all' };
let selectedReferenceNode = null; // pubkey of the reference node for neighbor filtering
let neighborPubkeys = null; // Set of pubkeys that are direct neighbors of selected node
let wsHandler = null;
let heatLayer = null;
let geoFilterLayer = null;
@@ -108,6 +110,8 @@
<fieldset class="mc-section">
<legend class="mc-label">Filters</legend>
<label for="mcNeighbors"><input type="checkbox" id="mcNeighbors"> Show direct neighbors</label>
<div id="mcNeighborRef" style="display:none;font-size:11px;color:var(--text-muted);margin-top:2px;padding-left:20px;">Ref: <span id="mcNeighborRefName">—</span></div>
<div id="mcNeighborHint" style="display:none;font-size:11px;color:var(--text-muted);margin-top:2px;padding-left:20px;">Click a node marker to set the reference node</div>
</fieldset>
<fieldset class="mc-section">
<legend class="mc-label">Last Heard</legend>
@@ -207,7 +211,19 @@
const heatEl = document.getElementById('mcHeatmap');
if (localStorage.getItem('meshcore-map-heatmap') === 'true') { heatEl.checked = true; }
heatEl.addEventListener('change', e => { localStorage.setItem('meshcore-map-heatmap', e.target.checked); toggleHeatmap(e.target.checked); });
document.getElementById('mcNeighbors').addEventListener('change', e => { filters.neighbors = e.target.checked; renderMarkers(); });
document.getElementById('mcNeighbors').addEventListener('change', e => {
filters.neighbors = e.target.checked;
const hintEl = document.getElementById('mcNeighborHint');
const refEl = document.getElementById('mcNeighborRef');
if (e.target.checked && !selectedReferenceNode) {
hintEl.style.display = 'block';
refEl.style.display = 'none';
} else {
hintEl.style.display = 'none';
refEl.style.display = selectedReferenceNode ? 'block' : 'none';
}
renderMarkers();
});

// Hash Labels toggle
const hashLabelEl = document.getElementById('mcHashLabels');
@@ -646,6 +662,11 @@
const status = getNodeStatus(role, lastMs);
if (status !== filters.statusFilter) return false;
}
// Neighbor filter: show only the reference node and its direct neighbors
if (filters.neighbors && selectedReferenceNode && neighborPubkeys) {
const pk = n.public_key;
if (pk !== selectedReferenceNode && !neighborPubkeys.has(pk)) return false;
}
return true;
});

@@ -724,6 +745,43 @@
</div>`;
}

async function selectReferenceNode(pubkey, name) {
selectedReferenceNode = pubkey;
neighborPubkeys = new Set();
try {
const data = await api('/nodes/' + pubkey + '/paths');
const paths = data.paths || [];
for (const p of paths) {
const hops = p.hops || [];
// Find the reference node in the path; direct neighbors are adjacent hops
for (let i = 0; i < hops.length; i++) {
if (hops[i].pubkey === pubkey) {
if (i > 0 && hops[i - 1].pubkey) neighborPubkeys.add(hops[i - 1].pubkey);
if (i < hops.length - 1 && hops[i + 1].pubkey) neighborPubkeys.add(hops[i + 1].pubkey);
}
}
// (Redundant block removed — the main loop above already handles first/last hops)
}
} catch (e) {
console.warn('Failed to fetch neighbor paths for', pubkey, '— neighbor filter may be incomplete:', e);
neighborPubkeys = new Set();
}
// Update sidebar UI
const refEl = document.getElementById('mcNeighborRef');
const refNameEl = document.getElementById('mcNeighborRefName');
const hintEl = document.getElementById('mcNeighborHint');
if (refEl) { refEl.style.display = 'block'; }
if (refNameEl) { refNameEl.textContent = name || pubkey.slice(0, 8); }
if (hintEl) { hintEl.style.display = 'none'; }
// Auto-enable the neighbors filter
filters.neighbors = true;
const cb = document.getElementById('mcNeighbors');
if (cb) cb.checked = true;
renderMarkers();
}
// Expose for popup onclick
window._mapSelectRefNode = selectReferenceNode;

function buildPopup(node) {
const key = node.public_key ? truncate(node.public_key, 16) : '—';
const loc = (node.lat && node.lon) ? `${node.lat.toFixed(5)}, ${node.lon.toFixed(5)}` : '—';
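The adjacent-hop logic inside `selectReferenceNode` is the interesting part: a node's direct neighbors are the hops immediately before and after it in every observed path. Extracted as a pure function (the `paths` shape follows the diff; the `/nodes/:pubkey/paths` response layout is an assumption):

```javascript
// Adjacent-hop neighbor extraction, as in selectReferenceNode above: for
// every path containing the reference pubkey, the hops immediately before
// and after it are direct neighbors. Returns a Set of neighbor pubkeys.
function extractNeighbors(paths, pubkey) {
  const neighbors = new Set();
  for (const p of paths) {
    const hops = p.hops || [];
    for (let i = 0; i < hops.length; i++) {
      if (hops[i].pubkey === pubkey) {
        if (i > 0 && hops[i - 1].pubkey) neighbors.add(hops[i - 1].pubkey);
        if (i < hops.length - 1 && hops[i + 1].pubkey) neighbors.add(hops[i + 1].pubkey);
      }
    }
  }
  return neighbors;
}
```

A Set keeps membership checks O(1), which is what the marker filter (`neighborPubkeys.has(pk)`) relies on when culling non-neighbors.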
@@ -749,7 +807,10 @@
<dt style="color:var(--text-muted);float:left;clear:left;width:80px;padding:2px 0;">Adverts</dt>
<dd style="margin-left:88px;padding:2px 0;">${node.advert_count || 0}</dd>
</dl>
<div style="margin-top:8px;clear:both;"><a href="#/nodes/${node.public_key}" style="color:var(--accent);font-size:12px;">View Node →</a></div>
<div style="margin-top:8px;clear:both;">
<a href="#/nodes/${node.public_key}" style="color:var(--accent);font-size:12px;">View Node →</a>
${node.public_key ? ` · <a href="#" onclick="event.preventDefault();window._mapSelectRefNode('${safeEsc(node.public_key.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/</g, '\\x3c'))}','${safeEsc((node.name || 'Unknown').replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/</g, '\\x3c'))}')" style="color:var(--accent);font-size:12px;">Show Neighbors</a>` : ''}
</div>
</div>`;
}

@@ -775,6 +836,9 @@
routeLayer = null;
if (heatLayer) { heatLayer = null; }
geoFilterLayer = null;
selectedReferenceNode = null;
neighborPubkeys = null;
delete window._mapSelectRefNode;
}

function toggleHeatmap(on) {

@@ -228,11 +228,39 @@
loadNodes();
// Auto-refresh when ADVERT packets arrive via WebSocket (fixes #131)
wsHandler = debouncedOnWS(function (msgs) {
if (msgs.some(isAdvertMessage)) {
_allNodes = null;
const advertMsgs = msgs.filter(isAdvertMessage);
if (!advertMsgs.length) return;

if (!_allNodes) {
invalidateApiCache('/nodes');
loadNodes(true);
return;
}

let needReload = false;
for (const m of advertMsgs) {
const payload = m.data && m.data.decoded && m.data.decoded.payload;
const pubKey = payload && (payload.pubKey || payload.public_key);
if (!pubKey) { needReload = true; break; }

const existing = _allNodes.find(n => n.public_key === pubKey);
if (existing) {
if (payload.name) existing.name = payload.name;
if (payload.lat != null) existing.lat = payload.lat;
if (payload.lon != null) existing.lon = payload.lon;
const ts = m.data.packet && (m.data.packet.timestamp || m.data.packet.first_seen);
if (ts) existing.last_seen = ts;
} else {
needReload = true;
break;
}
}

if (needReload) {
_allNodes = null;
invalidateApiCache('/nodes');
}
loadNodes(true);
}, 5000);
}

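The WS handler above avoids a full `/nodes` refetch by patching the cached list in place when it can, and only invalidating the cache for unknown nodes. The per-advert decision can be isolated as a pure function (field shapes here follow the diff; treat them as assumptions about the ADVERT payload):

```javascript
// In-place cache update from an ADVERT payload, as in the nodes.js WS
// handler above: patch the matching cached node, or report that a full
// reload is needed when the node is unknown or the payload lacks a key.
function applyAdvert(allNodes, payload, ts) {
  const pubKey = payload && (payload.pubKey || payload.public_key);
  if (!pubKey) return { needReload: true };
  const existing = allNodes.find(n => n.public_key === pubKey);
  if (!existing) return { needReload: true }; // brand-new node: refetch the list
  if (payload.name) existing.name = payload.name;
  if (payload.lat != null) existing.lat = payload.lat;   // != null keeps lat/lon 0 valid
  if (payload.lon != null) existing.lon = payload.lon;
  if (ts) existing.last_seen = ts;
  return { needReload: false };
}
```

The `!= null` checks matter for coordinates: a node at latitude 0 would be dropped by a plain truthiness test.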
@@ -929,4 +957,6 @@

// Test hooks
window._nodesIsAdvertMessage = isAdvertMessage;
window._nodesGetAllNodes = function() { return _allNodes; };
window._nodesSetAllNodes = function(n) { _allNodes = n; };
})();

@@ -8,7 +8,7 @@
// Resolve observer_id to friendly name from loaded observers list
function obsName(id) {
if (!id) return '—';
const o = observers.find(ob => ob.id === id);
const o = observerMap.get(id);
if (!o) return id;
return o.iata ? `${o.name} (${o.iata})` : o.name;
}
@@ -21,6 +21,7 @@
let packetsPaused = false;
let pauseBuffer = [];
let observers = [];
let observerMap = new Map(); // id → observer for O(1) lookups (#383)
let regionMap = {};
const TYPE_NAMES = { 0:'Request', 1:'Response', 2:'Direct Msg', 3:'ACK', 4:'Advert', 5:'Channel Msg', 7:'Anon Req', 8:'Path', 9:'Trace', 11:'Control' };
function typeName(t) { return TYPE_NAMES[t] ?? `Type ${t}`; }
@@ -37,6 +38,19 @@
const PANEL_WIDTH_KEY = 'meshcore-panel-width';
const PANEL_CLOSE_HTML = '<button class="panel-close-btn" title="Close detail pane (Esc)">✕</button>';

// --- Virtual scroll state ---
const VSCROLL_ROW_HEIGHT = 36; // estimated row height in px
const VSCROLL_BUFFER = 30; // extra rows above/below viewport
let _displayPackets = []; // filtered packets for current view
let _displayGrouped = false; // whether _displayPackets is in grouped mode
let _rowCounts = []; // per-entry DOM row counts (1 for flat, 1+children for expanded groups)
let _cumulativeOffsetsCache = null; // cached cumulative offsets, invalidated on _rowCounts change
let _lastVisibleStart = -1; // last rendered start index (for dirty checking)
let _lastVisibleEnd = -1; // last rendered end index (for dirty checking)
let _vsScrollHandler = null; // scroll listener reference
let _wsRenderTimer = null; // debounce timer for WS-triggered renders
let _observerFilterSet = null; // cached Set from filters.observer, hoisted above loops (#427)

function closeDetailPanel() {
var panel = document.getElementById('pktRight');
if (panel) {
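The virtual-scroll state above centers on `_rowCounts` and `_cumulativeOffsetsCache`: per-entry row counts (expanded groups contribute more than one DOM row) are turned into cumulative pixel offsets once, so the visible window can be located without re-summing on every scroll event. A minimal sketch of that computation, assuming the 36 px row-height constant from the diff (a linear scan stands in for whatever search the real scroller uses):

```javascript
// Cumulative pixel offsets for the virtual scroller: offsets[i] is the top
// of entry i, offsets[rowCounts.length] is the total content height.
const VSCROLL_ROW_HEIGHT = 36;

function cumulativeOffsets(rowCounts) {
  const offsets = new Array(rowCounts.length + 1);
  offsets[0] = 0;
  for (let i = 0; i < rowCounts.length; i++) {
    offsets[i + 1] = offsets[i] + rowCounts[i] * VSCROLL_ROW_HEIGHT;
  }
  return offsets;
}

// First entry whose bottom edge is below scrollTop (linear scan for brevity;
// the precomputed offsets also admit a binary search).
function firstVisible(offsets, scrollTop) {
  for (let i = 0; i < offsets.length - 1; i++) {
    if (offsets[i + 1] > scrollTop) return i;
  }
  return Math.max(0, offsets.length - 2);
}
```

Caching the offsets array and invalidating only when `_rowCounts` changes (e.g. a group is expanded) keeps the scroll handler allocation-free in the common case.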
@@ -336,7 +350,7 @@
if (filters.hash && p.hash !== filters.hash) return false;
if (RegionFilter.getRegionParam()) {
const selectedRegions = RegionFilter.getRegionParam().split(',');
const obs = observers.find(o => o.id === p.observer_id);
const obs = observerMap.get(p.observer_id);
if (!obs || !selectedRegions.includes(obs.iata)) return false;
}
if (filters.node && !(p.decoded_json || '').includes(filters.node)) return false;
@@ -396,7 +410,9 @@
packets = filtered.concat(packets);
}
totalCount += filtered.length;
renderTableRows();
// Debounce WS-triggered renders to avoid rapid full rebuilds
clearTimeout(_wsRenderTimer);
_wsRenderTimer = setTimeout(function () { renderTableRows(); }, 200);
});
});
}
@@ -404,6 +420,14 @@
function destroy() {
  if (wsHandler) offWS(wsHandler);
  wsHandler = null;
  detachVScrollListener();
  clearTimeout(_wsRenderTimer);
  _displayPackets = [];
  _rowCounts = [];
  _cumulativeOffsetsCache = null;
  _observerFilterSet = null;
  _lastVisibleStart = -1;
  _lastVisibleEnd = -1;
  if (_docActionHandler) { document.removeEventListener('click', _docActionHandler); _docActionHandler = null; }
  if (_docMenuCloseHandler) { document.removeEventListener('click', _docMenuCloseHandler); _docMenuCloseHandler = null; }
  if (_docColMenuCloseHandler) { document.removeEventListener('click', _docColMenuCloseHandler); _docColMenuCloseHandler = null; }
@@ -416,6 +440,7 @@
  hopNameCache = {};
  totalCount = 0;
  observers = [];
  observerMap = new Map();
  directPacketId = null;
  directPacketHash = null;
  groupByHash = true;
@@ -427,6 +452,7 @@
  try {
    const data = await api('/observers', { ttl: CLIENT_TTL.observers });
    observers = data.observers || [];
    observerMap = new Map(observers.map(o => [o.id, o]));
  } catch {}
}

@@ -673,7 +699,7 @@
    obsTrigger.textContent = 'All Observers ▾';
  } else if (selectedObservers.size === 1) {
    const id = [...selectedObservers][0];
    const o = observers.find(x => String(x.id) === id);
    const o = observerMap.get(id) || observerMap.get(Number(id));
    obsTrigger.textContent = (o ? (o.name || o.id) : id) + ' ▾';
  } else {
    obsTrigger.textContent = selectedObservers.size + ' Observers ▾';
@@ -988,6 +1014,233 @@
  makeColumnsResizable('#pktTable', 'meshcore-pkt-col-widths');
}

// Build HTML for a single grouped packet row
function buildGroupRowHtml(p) {
  const isExpanded = expandedHashes.has(p.hash);
  let headerObserverId = p.observer_id;
  let headerPathJson = p.path_json;
  if (_observerFilterSet && p._children?.length) {
    const match = p._children.find(c => _observerFilterSet.has(String(c.observer_id)));
    if (match) {
      headerObserverId = match.observer_id;
      headerPathJson = match.path_json;
    }
  }
  const groupRegion = headerObserverId ? (observerMap.get(headerObserverId)?.iata || '') : '';
  let groupPath = [];
  try { groupPath = JSON.parse(headerPathJson || '[]'); } catch {}
  const groupPathStr = renderPath(groupPath, headerObserverId);
  const groupTypeName = payloadTypeName(p.payload_type);
  const groupTypeClass = payloadTypeColor(p.payload_type);
  const groupSize = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
  const groupHashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
  const isSingle = p.count <= 1;
  let html = `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" tabindex="0" role="row">
    <td style="width:28px;text-align:center;cursor:pointer">${isSingle ? '' : (isExpanded ? '▼' : '▶')}</td>
    <td class="col-region">${groupRegion ? `<span class="badge-region">${groupRegion}</span>` : '—'}</td>
    <td class="col-time">${renderTimestampCell(p.latest)}</td>
    <td class="mono col-hash">${truncate(p.hash || '—', 8)}</td>
    <td class="col-size">${groupSize ? groupSize + 'B' : '—'}</td>
    <td class="col-hashsize mono">${groupHashBytes}</td>
    <td class="col-type">${p.payload_type != null ? `<span class="badge badge-${groupTypeClass}">${groupTypeName}</span>${transportBadge(p.route_type)}` : '—'}</td>
    <td class="col-observer">${isSingle ? truncate(obsName(headerObserverId), 16) : truncate(obsName(headerObserverId), 10) + (p.observer_count > 1 ? ' +' + (p.observer_count - 1) : '')}</td>
    <td class="col-path"><span class="path-hops">${groupPathStr}</span></td>
    <td class="col-rpt">${p.observation_count > 1 ? '<span class="badge badge-obs" title="Seen ' + p.observation_count + ' times">👁 ' + p.observation_count + '</span>' : (isSingle ? '' : p.count)}</td>
    <td class="col-details">${getDetailPreview((() => { try { return JSON.parse(p.decoded_json || '{}'); } catch { return {}; } })())}</td>
  </tr>`;
  if (isExpanded && p._children) {
    let visibleChildren = p._children;
    if (_observerFilterSet) {
      visibleChildren = visibleChildren.filter(c => _observerFilterSet.has(String(c.observer_id)));
    }
    for (const c of visibleChildren) {
      const typeName = payloadTypeName(c.payload_type);
      const typeClass = payloadTypeColor(c.payload_type);
      const size = c.raw_hex ? Math.floor(c.raw_hex.length / 2) : 0;
      const childHashBytes = ((parseInt(c.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
      const childRegion = c.observer_id ? (observerMap.get(c.observer_id)?.iata || '') : '';
      let childPath = [];
      try { childPath = JSON.parse(c.path_json || '[]'); } catch {}
      const childPathStr = renderPath(childPath, c.observer_id);
      html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" tabindex="0" role="row">
        <td></td><td class="col-region">${childRegion ? `<span class="badge-region">${childRegion}</span>` : '—'}</td>
        <td class="col-time">${renderTimestampCell(c.timestamp)}</td>
        <td class="mono col-hash">${truncate(c.hash || '', 8)}</td>
        <td class="col-size">${size}B</td>
        <td class="col-hashsize mono">${childHashBytes}</td>
        <td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(c.route_type)}</td>
        <td class="col-observer">${truncate(obsName(c.observer_id), 16)}</td>
        <td class="col-path"><span class="path-hops">${childPathStr}</span></td>
        <td class="col-rpt"></td>
        <td class="col-details">${getDetailPreview((() => { try { return JSON.parse(c.decoded_json || '{}'); } catch { return {}; } })())}</td>
      </tr>`;
    }
  }
  return html;
}

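The "hash size" cell above derives from the top two bits of the second header byte: `(byte >> 6) + 1` yields a value in 1..4. A standalone check of that bit extraction — `hashBytesFromRawHex` is an illustrative name, and the header layout is assumed to match the table code:

```javascript
// Extract the hash-size field: hex chars 2..4 are the second header byte;
// its top two bits (>> 6) encode size-1, so add 1 to get 1..4 bytes.
function hashBytesFromRawHex(rawHex) {
  const headerByte = parseInt((rawHex || '').slice(2, 4), 16) || 0;
  return (headerByte >> 6) + 1;
}

console.log(hashBytesFromRawHex('11c0ff')); // 0xc0 >> 6 = 3 → prints 4
console.log(hashBytesFromRawHex('0000'));   // 0x00 >> 6 = 0 → prints 1
```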
// Build HTML for a single flat (ungrouped) packet row
function buildFlatRowHtml(p) {
  let decoded, pathHops = [];
  try { decoded = JSON.parse(p.decoded_json || '{}'); } catch {}
  try { pathHops = JSON.parse(p.path_json || '[]') || []; } catch {}
  const region = p.observer_id ? (observerMap.get(p.observer_id)?.iata || '') : '';
  const typeName = payloadTypeName(p.payload_type);
  const typeClass = payloadTypeColor(p.payload_type);
  const size = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
  const hashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
  const pathStr = renderPath(pathHops, p.observer_id);
  const detail = getDetailPreview(decoded);
  return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}">
    <td></td><td class="col-region">${region ? `<span class="badge-region">${region}</span>` : '—'}</td>
    <td class="col-time">${renderTimestampCell(p.timestamp)}</td>
    <td class="mono col-hash">${truncate(p.hash || String(p.id), 8)}</td>
    <td class="col-size">${size}B</td>
    <td class="col-hashsize mono">${hashBytes}</td>
    <td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(p.route_type)}</td>
    <td class="col-observer">${truncate(obsName(p.observer_id), 16)}</td>
    <td class="col-path"><span class="path-hops">${pathStr}</span></td>
    <td class="col-rpt"></td>
    <td class="col-details">${detail}</td>
  </tr>`;
}

// Compute the number of DOM <tr> rows a single entry produces.
// Used by both row counting and renderVisibleRows to avoid divergence (#424).
function _getRowCount(p) {
  if (!_displayGrouped) return 1;
  if (!expandedHashes.has(p.hash) || !p._children) return 1;
  let childCount = p._children.length;
  if (_observerFilterSet) {
    childCount = p._children.filter(c => _observerFilterSet.has(String(c.observer_id))).length;
  }
  return 1 + childCount;
}

// Get the column count from the thead (dynamic, avoids hardcoded colspan — #426)
function _getColCount() {
  const thead = document.querySelector('#pktLeft thead tr');
  return thead ? thead.children.length : 11;
}

// Compute cumulative DOM row offsets from per-entry row counts.
// Returns array where cumulativeOffsets[i] = total <tr> rows before entry i.
function _cumulativeRowOffsets() {
  if (_cumulativeOffsetsCache) return _cumulativeOffsetsCache;
  const offsets = new Array(_rowCounts.length + 1);
  offsets[0] = 0;
  for (let i = 0; i < _rowCounts.length; i++) {
    offsets[i + 1] = offsets[i] + _rowCounts[i];
  }
  _cumulativeOffsetsCache = offsets;
  return offsets;
}

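The prefix-sum plus binary-search lookup that `renderVisibleRows` performs over these offsets can be exercised on its own. An illustrative sketch (helper names are not from the repo):

```javascript
// Prefix sums: offsets[i] = total DOM rows before entry i.
function cumulative(rowCounts) {
  const offsets = new Array(rowCounts.length + 1);
  offsets[0] = 0;
  for (let i = 0; i < rowCounts.length; i++) offsets[i + 1] = offsets[i] + rowCounts[i];
  return offsets;
}

// Binary search for the entry whose row span contains absolute row index `row`.
function entryForRow(offsets, row) {
  let lo = 0, hi = offsets.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (offsets[mid + 1] <= row) lo = mid + 1; // entry mid ends at or before row
    else hi = mid;
  }
  return lo;
}

// Entries expanding to 1/3/1/5 rows → offsets [0,1,4,5,10].
const offsets = cumulative([1, 3, 1, 5]);
console.log(entryForRow(offsets, 0)); // prints 0
console.log(entryForRow(offsets, 2)); // row 2 lies inside entry 1 → prints 1
console.log(entryForRow(offsets, 5)); // prints 3
```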
function renderVisibleRows() {
  const tbody = document.getElementById('pktBody');
  if (!tbody || !_displayPackets.length) return;

  const scrollContainer = document.getElementById('pktLeft');
  if (!scrollContainer) return;

  // Compute total DOM rows accounting for expanded groups
  const offsets = _cumulativeRowOffsets();
  const totalDomRows = offsets[offsets.length - 1];
  const totalHeight = totalDomRows * VSCROLL_ROW_HEIGHT;
  const colCount = _getColCount();

  // Get or create spacer elements
  let topSpacer = document.getElementById('vscroll-top');
  let bottomSpacer = document.getElementById('vscroll-bottom');
  if (!topSpacer) {
    topSpacer = document.createElement('tr');
    topSpacer.id = 'vscroll-top';
    topSpacer.innerHTML = '<td colspan="' + colCount + '" style="padding:0;border:0"></td>';
  }
  if (!bottomSpacer) {
    bottomSpacer = document.createElement('tr');
    bottomSpacer.id = 'vscroll-bottom';
    bottomSpacer.innerHTML = '<td colspan="' + colCount + '" style="padding:0;border:0"></td>';
  }

  // Calculate visible range based on scroll position
  const scrollTop = scrollContainer.scrollTop;
  const viewportHeight = scrollContainer.clientHeight;
  // Account for thead height (~40px)
  const theadHeight = 40;
  const adjustedScrollTop = Math.max(0, scrollTop - theadHeight);

  // Find the first entry whose cumulative row offset covers the scroll position
  const firstDomRow = Math.floor(adjustedScrollTop / VSCROLL_ROW_HEIGHT);
  const visibleDomCount = Math.ceil(viewportHeight / VSCROLL_ROW_HEIGHT);

  // Binary search for entry index containing firstDomRow
  let lo = 0, hi = _displayPackets.length;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (offsets[mid + 1] <= firstDomRow) lo = mid + 1;
    else hi = mid;
  }
  const firstEntry = lo;

  // Find entry index covering last visible DOM row
  const lastDomRow = firstDomRow + visibleDomCount;
  lo = firstEntry; hi = _displayPackets.length;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (offsets[mid + 1] <= lastDomRow) lo = mid + 1;
    else hi = mid;
  }
  const lastEntry = Math.min(lo + 1, _displayPackets.length);

  const startIdx = Math.max(0, firstEntry - VSCROLL_BUFFER);
  const endIdx = Math.min(_displayPackets.length, lastEntry + VSCROLL_BUFFER);

  // Skip DOM rebuild if visible range hasn't changed
  if (startIdx === _lastVisibleStart && endIdx === _lastVisibleEnd) return;
  _lastVisibleStart = startIdx;
  _lastVisibleEnd = endIdx;

  // Compute padding using cumulative row counts
  const topPad = offsets[startIdx] * VSCROLL_ROW_HEIGHT;
  const bottomPad = (totalDomRows - offsets[endIdx]) * VSCROLL_ROW_HEIGHT;

  topSpacer.firstChild.style.height = topPad + 'px';
  bottomSpacer.firstChild.style.height = bottomPad + 'px';

  // LAZY ROW GENERATION: only build HTML for the visible slice (#422)
  const builder = _displayGrouped ? buildGroupRowHtml : buildFlatRowHtml;
  const visibleSlice = _displayPackets.slice(startIdx, endIdx);
  const visibleHtml = visibleSlice.map(p => builder(p)).join('');
  tbody.innerHTML = '';
  tbody.appendChild(topSpacer);
  tbody.insertAdjacentHTML('beforeend', visibleHtml);
  tbody.appendChild(bottomSpacer);
}

// Attach/detach scroll listener for virtual scrolling
function attachVScrollListener() {
  const scrollContainer = document.getElementById('pktLeft');
  if (!scrollContainer) return;
  if (_vsScrollHandler) return; // already attached
  let scrollRaf = null;
  _vsScrollHandler = function () {
    if (scrollRaf) return;
    scrollRaf = requestAnimationFrame(function () {
      scrollRaf = null;
      renderVisibleRows();
    });
  };
  scrollContainer.addEventListener('scroll', _vsScrollHandler, { passive: true });
}

function detachVScrollListener() {
  if (!_vsScrollHandler) return;
  const scrollContainer = document.getElementById('pktLeft');
  if (scrollContainer) scrollContainer.removeEventListener('scroll', _vsScrollHandler);
  _vsScrollHandler = null;
}

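The scroll handler above coalesces bursts of scroll events into at most one render per animation frame via the `scrollRaf` guard. The same idea can be demonstrated outside the browser with a stubbed `requestAnimationFrame` (everything here is an illustrative sketch; the browser supplies the real rAF):

```javascript
// Stub rAF: callbacks queue up and run when we "flush a frame".
const frameQueue = [];
const requestAnimationFrame = (cb) => { frameQueue.push(cb); return frameQueue.length; };
const flushFrame = () => { frameQueue.splice(0).forEach(cb => cb(Date.now())); };

let renders = 0;
let scheduled = false;
function onScroll() {
  if (scheduled) return;          // a render is already queued for this frame
  scheduled = true;
  requestAnimationFrame(() => {
    scheduled = false;
    renders++;                    // stands in for renderVisibleRows()
  });
}

for (let i = 0; i < 10; i++) onScroll(); // burst of scroll events
flushFrame();
console.log(renders); // prints 1 — ten events, one render
```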
async function renderTableRows() {
  const tbody = document.getElementById('pktBody');
  if (!tbody) return;
@@ -997,7 +1250,7 @@
  const groupBtn = document.getElementById('fGroup');
  if (groupBtn) groupBtn.classList.toggle('active', groupByHash);

  // Filter to claimed/favorited nodes if toggle is on — use server-side multi-node lookup
  // Filter to claimed/favorited nodes — pure client-side filter (no server round-trip)
  let displayPackets = packets;
  if (filters.myNodes) {
    const myNodes = JSON.parse(localStorage.getItem('meshcore-my-nodes') || '[]');
@@ -1005,10 +1258,10 @@
    const favs = getFavorites();
    const allKeys = [...new Set([...myKeys, ...favs])];
    if (allKeys.length > 0) {
      try {
        const myData = await api('/packets?nodes=' + allKeys.join(',') + '&limit=500');
        displayPackets = myData.packets || [];
      } catch { displayPackets = []; }
      displayPackets = displayPackets.filter(p => {
        const dj = p.decoded_json || '';
        return allKeys.some(k => dj.includes(k));
      });
    } else {
      displayPackets = [];
    }
@@ -1040,108 +1293,31 @@
  if (countEl) countEl.textContent = `(${displayPackets.length})`;

  if (!displayPackets.length) {
    tbody.innerHTML = '<tr><td colspan="10" class="text-center text-muted" style="padding:24px">' + (filters.myNodes ? 'No packets from your claimed/favorited nodes' : 'No packets found') + '</td></tr>';
    _displayPackets = [];
    _rowCounts = [];
    _cumulativeOffsetsCache = null;
    _observerFilterSet = null;
    _lastVisibleStart = -1;
    _lastVisibleEnd = -1;
    detachVScrollListener();
    const colCount = _getColCount();
    tbody.innerHTML = '<tr><td colspan="' + colCount + '" class="text-center text-muted" style="padding:24px">' + (filters.myNodes ? 'No packets from your claimed/favorited nodes' : 'No packets found') + '</td></tr>';
    return;
  }

  if (groupByHash) {
    let html = '';
    for (const p of displayPackets) {
      const isExpanded = expandedHashes.has(p.hash);
      // When observer filter is active, use first matching child's data for header
      let headerObserverId = p.observer_id;
      let headerPathJson = p.path_json;
      if (filters.observer && p._children?.length) {
        const obsIds = new Set(filters.observer.split(','));
        const match = p._children.find(c => obsIds.has(String(c.observer_id)));
        if (match) {
          headerObserverId = match.observer_id;
          headerPathJson = match.path_json;
        }
      }
      const groupRegion = headerObserverId ? (observers.find(o => o.id === headerObserverId)?.iata || '') : '';
      let groupPath = [];
      try { groupPath = JSON.parse(headerPathJson || '[]'); } catch {}
      const groupPathStr = renderPath(groupPath, headerObserverId);
      const groupTypeName = payloadTypeName(p.payload_type);
      const groupTypeClass = payloadTypeColor(p.payload_type);
      const groupSize = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
      const groupHashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
      const isSingle = p.count <= 1;
      html += `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" tabindex="0" role="row">
        <td style="width:28px;text-align:center;cursor:pointer">${isSingle ? '' : (isExpanded ? '▼' : '▶')}</td>
        <td class="col-region">${groupRegion ? `<span class="badge-region">${groupRegion}</span>` : '—'}</td>
        <td class="col-time">${renderTimestampCell(p.latest)}</td>
        <td class="mono col-hash">${truncate(p.hash || '—', 8)}</td>
        <td class="col-size">${groupSize ? groupSize + 'B' : '—'}</td>
        <td class="col-hashsize mono">${groupHashBytes}</td>
        <td class="col-type">${p.payload_type != null ? `<span class="badge badge-${groupTypeClass}">${groupTypeName}</span>${transportBadge(p.route_type)}` : '—'}</td>
        <td class="col-observer">${isSingle ? truncate(obsName(headerObserverId), 16) : truncate(obsName(headerObserverId), 10) + (p.observer_count > 1 ? ' +' + (p.observer_count - 1) : '')}</td>
        <td class="col-path"><span class="path-hops">${groupPathStr}</span></td>
        <td class="col-rpt">${p.observation_count > 1 ? '<span class="badge badge-obs" title="Seen ' + p.observation_count + ' times">👁 ' + p.observation_count + '</span>' : (isSingle ? '' : p.count)}</td>
        <td class="col-details">${getDetailPreview((() => { try { return JSON.parse(p.decoded_json || '{}'); } catch { return {}; } })())}</td>
      </tr>`;
      // Child rows (loaded async when expanded)
      if (isExpanded && p._children) {
        let visibleChildren = p._children;
        // Filter children by selected observers
        if (filters.observer) {
          const obsSet = new Set(filters.observer.split(','));
          visibleChildren = visibleChildren.filter(c => obsSet.has(String(c.observer_id)));
        }
        for (const c of visibleChildren) {
          const typeName = payloadTypeName(c.payload_type);
          const typeClass = payloadTypeColor(c.payload_type);
          const size = c.raw_hex ? Math.floor(c.raw_hex.length / 2) : 0;
          const childHashBytes = ((parseInt(c.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
          const childRegion = c.observer_id ? (observers.find(o => o.id === c.observer_id)?.iata || '') : '';
          let childPath = [];
          try { childPath = JSON.parse(c.path_json || '[]'); } catch {}
          const childPathStr = renderPath(childPath, c.observer_id);
          html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" tabindex="0" role="row">
            <td></td><td class="col-region">${childRegion ? `<span class="badge-region">${childRegion}</span>` : '—'}</td>
            <td class="col-time">${renderTimestampCell(c.timestamp)}</td>
            <td class="mono col-hash">${truncate(c.hash || '', 8)}</td>
            <td class="col-size">${size}B</td>
            <td class="col-hashsize mono">${childHashBytes}</td>
            <td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(c.route_type)}</td>
            <td class="col-observer">${truncate(obsName(c.observer_id), 16)}</td>
            <td class="col-path"><span class="path-hops">${childPathStr}</span></td>
            <td class="col-rpt"></td>
            <td class="col-details">${getDetailPreview((() => { try { return JSON.parse(c.decoded_json); } catch { return {}; } })())}</td>
          </tr>`;
        }
      }
    }
    tbody.innerHTML = html;
    return;
  }
  // Lazy virtual scroll: store display packets and row counts, but do NOT
  // pre-generate HTML strings. HTML is built on-demand in renderVisibleRows()
  // for only the visible slice + buffer (#422).
  _lastVisibleStart = -1;
  _lastVisibleEnd = -1;
  _displayPackets = displayPackets;
  _displayGrouped = groupByHash;
  _observerFilterSet = filters.observer ? new Set(filters.observer.split(',')) : null;
  _rowCounts = displayPackets.map(p => _getRowCount(p));
  _cumulativeOffsetsCache = null;

  tbody.innerHTML = displayPackets.map(p => {
    let decoded, pathHops = [];
    try { decoded = JSON.parse(p.decoded_json); } catch {}
    try { pathHops = JSON.parse(p.path_json || '[]'); } catch {}

    const region = p.observer_id ? (observers.find(o => o.id === p.observer_id)?.iata || '') : '';
    const typeName = payloadTypeName(p.payload_type);
    const typeClass = payloadTypeColor(p.payload_type);
    const size = p.raw_hex ? Math.floor(p.raw_hex.length / 2) : 0;
    const hashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
    const pathStr = renderPath(pathHops, p.observer_id); const detail = getDetailPreview(decoded);

    return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}">
      <td></td><td class="col-region">${region ? `<span class="badge-region">${region}</span>` : '—'}</td>
      <td class="col-time">${renderTimestampCell(p.timestamp)}</td>
      <td class="mono col-hash">${truncate(p.hash || String(p.id), 8)}</td>
      <td class="col-size">${size}B</td>
      <td class="col-hashsize mono">${hashBytes}</td>
      <td class="col-type"><span class="badge badge-${typeClass}">${typeName}</span>${transportBadge(p.route_type)}</td>
      <td class="col-observer">${truncate(obsName(p.observer_id), 16)}</td>
      <td class="col-path"><span class="path-hops">${pathStr}</span></td>
      <td class="col-rpt"></td>
      <td class="col-details">${detail}</td>
    </tr>`;
  }).join('');
  attachVScrollListener();
  renderVisibleRows();
}

function getDetailPreview(decoded) {
@@ -1246,7 +1422,7 @@
  let decoded;
  try { decoded = JSON.parse(pkt.decoded_json); } catch { decoded = {}; }
  let pathHops;
  try { pathHops = JSON.parse(pkt.path_json || '[]'); } catch { pathHops = []; }
  try { pathHops = JSON.parse(pkt.path_json || '[]') || []; } catch { pathHops = []; }

  // Resolve sender GPS — from packet directly, or from known node in DB
  let senderLat = decoded.lat != null ? decoded.lat : (decoded.latitude || null);

123
test-anim-perf.js
Normal file
@@ -0,0 +1,123 @@
/**
 * test-anim-perf.js — Performance benchmark for animation timer management
 *
 * Demonstrates that the rAF + concurrency-cap approach keeps active animation
 * count bounded, whereas the old setInterval approach accumulated without limit.
 *
 * Run: node test-anim-perf.js
 */

'use strict';

let passed = 0, failed = 0;
function assert(cond, msg) {
  if (cond) { console.log(` ✅ ${msg}`); passed++; }
  else { console.log(` ❌ ${msg}`); failed++; }
}

// ---------------------------------------------------------------------------
// Simulate OLD behaviour: setInterval-based, no concurrency cap
// ---------------------------------------------------------------------------
function simulateOldModel(packetsPerSec, hopsPerPacket, durationSec) {
  // Each hop spawns 3 intervals (pulse 26ms, line 33ms, fade 52ms).
  // Pulse lasts ~2s, line ~0.66s, fade ~0.8s+0.4s ≈ 1.2s
  // At any moment, timers from the last ~2s of packets are still alive.
  const intervalLifetimes = [2.0, 0.66, 1.2]; // seconds each interval lives
  let maxConcurrent = 0;
  // Walk through time in 0.1s steps
  const dt = 0.1;
  const spawns = []; // {time, lifetime}
  for (let t = 0; t < durationSec; t += dt) {
    // Spawn timers for packets arriving in this window
    const pktsInWindow = packetsPerSec * dt;
    for (let p = 0; p < pktsInWindow; p++) {
      for (let h = 0; h < hopsPerPacket; h++) {
        for (const lt of intervalLifetimes) {
          spawns.push({ time: t, lifetime: lt });
        }
      }
    }
    // Count alive timers
    const alive = spawns.filter(s => t < s.time + s.lifetime).length;
    if (alive > maxConcurrent) maxConcurrent = alive;
  }
  return maxConcurrent;
}

// ---------------------------------------------------------------------------
// Simulate NEW behaviour: rAF + MAX_CONCURRENT_ANIMS cap
// ---------------------------------------------------------------------------
function simulateNewModel(packetsPerSec, hopsPerPacket, durationSec) {
  const MAX_CONCURRENT_ANIMS = 20;
  let activeAnims = 0;
  let maxConcurrent = 0;
  const anims = []; // {endTime}
  const dt = 0.1;
  for (let t = 0; t < durationSec; t += dt) {
    // Expire finished animations
    while (anims.length && anims[0].endTime <= t) {
      anims.shift();
      activeAnims--;
    }
    // Try to start new animations
    const pktsInWindow = packetsPerSec * dt;
    for (let p = 0; p < pktsInWindow; p++) {
      if (activeAnims >= MAX_CONCURRENT_ANIMS) break; // cap reached — drop
      activeAnims++;
      // rAF animation lifetime: longest is pulse ~2s
      anims.push({ endTime: t + 2.0 });
    }
    // Sort by endTime so expiry works
    anims.sort((a, b) => a.endTime - b.endTime);
    if (activeAnims > maxConcurrent) maxConcurrent = activeAnims;
  }
  return maxConcurrent;
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

console.log('\n=== Animation timer accumulation: old vs new ===');

// Scenario: 5 pkts/sec, 3 hops each, 30 seconds
const oldPeak30s = simulateOldModel(5, 3, 30);
const newPeak30s = simulateNewModel(5, 3, 30);
console.log(` Old model (30s @ 5pkt/s×3hops): peak ${oldPeak30s} concurrent timers`);
console.log(` New model (30s @ 5pkt/s×3hops): peak ${newPeak30s} concurrent animations`);
assert(oldPeak30s > 100, `old model accumulates >100 timers (got ${oldPeak30s})`);
assert(newPeak30s <= 20, `new model stays ≤20 (got ${newPeak30s})`);

// Scenario: 5 minutes sustained
const oldPeak5m = simulateOldModel(5, 3, 300);
const newPeak5m = simulateNewModel(5, 3, 300);
console.log(` Old model (5min @ 5pkt/s×3hops): peak ${oldPeak5m} concurrent timers`);
console.log(` New model (5min @ 5pkt/s×3hops): peak ${newPeak5m} concurrent animations`);
assert(oldPeak5m > 100, `old model at 5min still unbounded (got ${oldPeak5m})`);
assert(newPeak5m <= 20, `new model at 5min still ≤20 (got ${newPeak5m})`);

// Scenario: burst — 20 pkts/sec for 10s
const oldBurst = simulateOldModel(20, 3, 10);
const newBurst = simulateNewModel(20, 3, 10);
console.log(` Old model (burst 20pkt/s×3hops, 10s): peak ${oldBurst} concurrent timers`);
console.log(` New model (burst 20pkt/s×3hops, 10s): peak ${newBurst} concurrent animations`);
assert(oldBurst > 200, `old model under burst >200 timers (got ${oldBurst})`);
assert(newBurst <= 20, `new model under burst stays ≤20 (got ${newBurst})`);

console.log('\n=== drawAnimatedLine frame-drop catch-up ===');

// Read the source and verify catch-up logic exists
const fs = require('fs');
const src = fs.readFileSync(__dirname + '/public/live.js', 'utf8');

// Extract the animateLine function body
const lineMatch = src.match(/function animateLine\(now\)\s*\{[\s\S]*?requestAnimationFrame\(animateLine\)/);
assert(lineMatch && /Math\.min\(Math\.floor\(elapsed\s*\/\s*33\)/.test(lineMatch[0]),
  'drawAnimatedLine catches up on frame drops (multi-tick per frame)');

const fadeMatch = src.match(/function animateFade\(now\)\s*\{[\s\S]*?requestAnimationFrame\(animateFade\)/);
assert(fadeMatch && /Math\.min\(Math\.floor\(fadeElapsed\s*\/\s*52\)/.test(fadeMatch[0]),
  'animateFade catches up on frame drops (multi-tick per frame)');

console.log(`\n${passed} passed, ${failed} failed\n`);
process.exit(failed ? 1 : 0);
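The runtime counterpart of the cap the benchmark models can be sketched as a small admission check: start an animation only while fewer than the maximum are active, otherwise drop it. This is an illustrative sketch of the pattern, not the repo's `live.js` code; `tryStartAnim` and its release callback are hypothetical names:

```javascript
const MAX_CONCURRENT_ANIMS = 20;
let active = 0;
const dropped = [];

// Admit an animation if under the cap; hand the caller a release function
// to invoke when the animation finishes. Over-cap requests are dropped.
function tryStartAnim(id, onAdmit) {
  if (active >= MAX_CONCURRENT_ANIMS) { dropped.push(id); return false; }
  active++;
  onAdmit(() => { active--; });
  return true;
}

// 30 requests with no completions in between: 20 start, 10 are dropped.
const releases = [];
for (let i = 0; i < 30; i++) tryStartAnim(i, (release) => releases.push(release));
console.log(active, dropped.length); // prints "20 10"
releases.forEach(r => r());
console.log(active); // prints 0
```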
@@ -564,6 +564,40 @@ console.log('\n=== hop-resolver.js ===');
  });
}

// ===== haversineKm exposed from HopResolver (issue #433) =====
console.log('\n=== haversineKm (hop-resolver.js) ===');
{
  const ctx = makeSandbox();
  ctx.IATA_COORDS_GEO = {};
  loadInCtx(ctx, 'public/hop-resolver.js');
  const HR = ctx.window.HopResolver;

  test('haversineKm is exported', () => {
    assert.strictEqual(typeof HR.haversineKm, 'function');
  });

  test('haversineKm same point = 0', () => {
    assert.strictEqual(HR.haversineKm(37.0, -122.0, 37.0, -122.0), 0);
  });

  test('haversineKm SF to LA ~559km', () => {
    // San Francisco (37.7749, -122.4194) to Los Angeles (34.0522, -118.2437)
    const d = HR.haversineKm(37.7749, -122.4194, 34.0522, -118.2437);
    assert.ok(d > 550 && d < 570, `Expected ~559km, got ${d}`);
  });

  test('haversineKm differs from old Euclidean approximation', () => {
    // The old code used dLat*111, dLon*85 which is inaccurate at high latitudes
    // Oslo (59.9, 10.7) to Stockholm (59.3, 18.0)
    const haversine = HR.haversineKm(59.9, 10.7, 59.3, 18.0);
    const dLat = (59.9 - 59.3) * 111;
    const dLon = (10.7 - 18.0) * 85;
    const euclidean = Math.sqrt(dLat*dLat + dLon*dLon);
    // Haversine should give ~415km, Euclidean ~627km (wrong because dLon*85 is wrong at 60° latitude)
    assert.ok(Math.abs(haversine - euclidean) > 50, `Expected significant difference, haversine=${haversine.toFixed(1)}, euclidean=${euclidean.toFixed(1)}`);
  });
}

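For reference, a minimal great-circle implementation that satisfies the expectations in the tests above (an illustrative sketch — the repo's actual version lives in `public/hop-resolver.js` and may differ in detail):

```javascript
// Haversine great-circle distance in kilometres.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius, km
  const toRad = (d) => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  // Haversine of the central angle between the two points
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

console.log(haversineKm(37.0, -122.0, 37.0, -122.0)); // prints 0
const d = haversineKm(37.7749, -122.4194, 34.0522, -118.2437);
console.log(d > 550 && d < 570); // SF→LA ≈ 559 km → prints true
```

Unlike the flat `dLat*111 / dLon*85` approximation, the longitude term here is scaled by `cos(latitude)`, which is what makes it accurate at high latitudes.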
// ===== SNR/RSSI Number casting =====
{
  // These test the pattern used in observer-detail.js, home.js, traces.js, live.js
@@ -966,6 +1000,66 @@ console.log('\n=== live.js: pruneStaleNodes ===');
  });
}

// ===== live.js: vcrFormatTime respects UTC/local setting =====
console.log('\n=== live.js: vcrFormatTime UTC/local ===');
{
  function makeLiveSandboxForVcr() {
    const ctx = makeSandbox();
    ctx.L = { map: () => ({ on: () => {}, setView: () => {}, addLayer: () => {}, remove: () => {} }), tileLayer: () => ({ addTo: () => {} }), layerGroup: () => ({ addTo: () => {}, clearLayers: () => {}, addLayer: () => {} }), circleMarker: () => ({ addTo: () => {}, remove: () => {}, setStyle: () => {}, getLatLng: () => ({}), on: () => {} }), Polyline: function() { return { addTo: () => {}, remove: () => {} }; }, Control: { extend: () => function() { return { addTo: () => {} }; } } };
    ctx.Chart = function() { return { destroy: () => {}, update: () => {} }; };
    ctx.navigator = {};
    ctx.visualViewport = null;
    ctx.document.documentElement = { getAttribute: () => null, setAttribute: () => {} };
    ctx.document.body = { appendChild: () => {}, removeChild: () => {}, contains: () => false };
    ctx.document.querySelector = () => null;
    ctx.document.querySelectorAll = () => [];
    ctx.document.createElementNS = () => ctx.document.createElement();
    ctx.cancelAnimationFrame = () => {};
    ctx.IATA_COORDS_GEO = {};
    loadInCtx(ctx, 'public/roles.js');
    try { loadInCtx(ctx, 'public/live.js'); } catch (e) {
      for (const k of Object.keys(ctx.window)) ctx[k] = ctx.window[k];
    }
    return ctx;
  }

  test('vcrFormatTime is exposed as window._vcrFormatTime', () => {
    const ctx = makeLiveSandboxForVcr();
    assert.strictEqual(typeof ctx.window._vcrFormatTime, 'function', '_vcrFormatTime must be exposed');
  });

  test('vcrFormatTime uses UTC hours when timezone is utc', () => {
    const ctx = makeLiveSandboxForVcr();
    const fn = ctx.window._vcrFormatTime;
    assert.ok(fn, '_vcrFormatTime must be exposed');
    // Force UTC mode
    ctx.getTimestampTimezone = () => 'utc';
    // Use a known timestamp: 2024-01-15 14:30:45 UTC = different local time in most zones
    const tsMs = Date.UTC(2024, 0, 15, 14, 30, 45);
    const result = fn(tsMs);
    assert.strictEqual(result, '14:30:45', 'UTC mode must show UTC hours 14:30:45');
  });

  test('vcrFormatTime uses local hours when timezone is local', () => {
    const ctx = makeLiveSandboxForVcr();
    const fn = ctx.window._vcrFormatTime;
    assert.ok(fn, '_vcrFormatTime must be exposed');
    ctx.getTimestampTimezone = () => 'local';
    const d = new Date(2024, 0, 15, 9, 5, 3); // local time
    const expected = String(d.getHours()).padStart(2,'0') + ':' + String(d.getMinutes()).padStart(2,'0') + ':' + String(d.getSeconds()).padStart(2,'0');
    assert.strictEqual(fn(d.getTime()), expected, 'local mode must use local hours');
  });

  test('vcrFormatTime zero-pads single-digit hours, minutes, seconds', () => {
    const ctx = makeLiveSandboxForVcr();
    const fn = ctx.window._vcrFormatTime;
    assert.ok(fn, '_vcrFormatTime must be exposed');
    ctx.getTimestampTimezone = () => 'utc';
    const tsMs = Date.UTC(2024, 0, 15, 3, 5, 7); // 03:05:07 UTC
    assert.strictEqual(fn(tsMs), '03:05:07');
  });
}

// ===== NODES.JS: isAdvertMessage + auto-update logic =====
|
||||
console.log('\n=== nodes.js: isAdvertMessage ===');
|
||||
{
|
||||
@@ -1181,6 +1275,61 @@ console.log('\n=== nodes.js: WS handler runtime behavior ===');
    assert.ok(env.getApiCalls() > 0, 'api called because _allNodes was reset to null');
  });

  test('ADVERT for known node upserts in-place without API fetch', () => {
    const env = makeNodesWsSandbox();
    // Pre-populate _allNodes with a known node
    assert.ok(typeof env.ctx.window._nodesSetAllNodes === 'function', '_nodesSetAllNodes must be exposed');
    env.ctx.window._nodesSetAllNodes([
      { public_key: 'aabbccddeeff00112233445566778899aabbccddeeff00112233445566778899', name: 'OldName', role: 'repeater', lat: null, lon: null, last_seen: '2024-01-01T00:00:00Z' }
    ]);
    env.resetCounters();

    env.sendWS({
      type: 'packet',
      data: {
        packet: { payload_type: 4, timestamp: '2024-06-01T12:00:00Z' },
        decoded: {
          header: { payloadTypeName: 'ADVERT' },
          payload: { type: 'ADVERT', pubKey: 'aabbccddeeff00112233445566778899aabbccddeeff00112233445566778899', name: 'NewName', lat: 50.85, lon: 4.35 }
        }
      }
    });
    env.fireTimers();

    assert.strictEqual(env.getApiCalls(), 0, 'known node upsert must NOT trigger API fetch');
    assert.strictEqual(env.getInvalidated().length, 0, 'no cache invalidation for known node upsert');
    const nodes = env.ctx.window._nodesGetAllNodes();
    assert.ok(nodes, '_nodesGetAllNodes must be exposed');
    assert.strictEqual(nodes[0].name, 'NewName', 'name must be updated in place');
    assert.strictEqual(nodes[0].lat, 50.85, 'lat must be updated in place');
    assert.strictEqual(nodes[0].lon, 4.35, 'lon must be updated in place');
    assert.strictEqual(nodes[0].last_seen, '2024-06-01T12:00:00Z', 'last_seen must be updated from packet timestamp');
  });

  test('ADVERT for unknown node falls back to full reload', () => {
    const env = makeNodesWsSandbox();
    env.ctx.window._nodesSetAllNodes([
      { public_key: 'aabbccddeeff00112233445566778899aabbccddeeff00112233445566778899', name: 'ExistingNode', role: 'repeater' }
    ]);
    env.resetCounters();

    // Send ADVERT from a pubKey NOT in _allNodes
    env.sendWS({
      type: 'packet',
      data: {
        packet: { payload_type: 4 },
        decoded: {
          header: { payloadTypeName: 'ADVERT' },
          payload: { type: 'ADVERT', pubKey: 'ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff', name: 'BrandNewNode' }
        }
      }
    });
    env.fireTimers();

    assert.ok(env.getApiCalls() > 0, 'unknown node must trigger full reload');
    assert.ok(env.getInvalidated().includes('/nodes'), 'cache must be invalidated for unknown node');
  });

  test('scroll position and selection preserved during WS-triggered refresh', () => {
    const env = makeNodesWsSandbox();
    // Simulate scrolled panel state — WS handler should not touch scroll or rebuild panel
@@ -1792,153 +1941,6 @@ console.log('\n=== analytics.js: sortChannels ===');
  });
}

// === analytics.js: hash prefix helpers ===
console.log('\n=== analytics.js: hash prefix helpers ===');
{
  const ctx = (() => {
    const c = makeSandbox();
    c.getComputedStyle = () => ({ getPropertyValue: () => '' });
    c.registerPage = () => {};
    c.api = () => Promise.resolve({});
    c.timeAgo = () => '—';
    c.RegionFilter = { init: () => {}, onChange: () => {}, regionQueryString: () => '' };
    c.onWS = () => {};
    c.offWS = () => {};
    c.connectWS = () => {};
    c.invalidateApiCache = () => {};
    c.makeColumnsResizable = () => {};
    c.initTabBar = () => {};
    c.IATA_COORDS_GEO = {};
    loadInCtx(c, 'public/roles.js');
    loadInCtx(c, 'public/app.js');
    try { loadInCtx(c, 'public/analytics.js'); } catch (e) {
      for (const k of Object.keys(c.window)) c[k] = c.window[k];
    }
    return c;
  })();

  const buildOne = ctx.window._analyticsBuildOneBytePrefixMap;
  const buildTwo = ctx.window._analyticsBuildTwoBytePrefixInfo;
  const buildHops = ctx.window._analyticsBuildCollisionHops;

  const node = (pk, extra) => ({ public_key: pk, name: pk.slice(0, 4), ...(extra || {}) });

  test('buildOneBytePrefixMap exports exist', () => assert.ok(buildOne, 'must be exported'));
  test('buildTwoBytePrefixInfo exports exist', () => assert.ok(buildTwo, 'must be exported'));
  test('buildCollisionHops exports exist', () => assert.ok(buildHops, 'must be exported'));

  // --- 1-byte prefix map ---
  test('1-byte map has 256 keys', () => {
    const m = buildOne([]);
    assert.strictEqual(Object.keys(m).length, 256);
  });

  test('1-byte map places node in correct bucket', () => {
    const n = node('AABBCC');
    const m = buildOne([n]);
    assert.strictEqual(m['AA'].length, 1);
    assert.strictEqual(m['AA'][0].public_key, 'AABBCC');
    assert.strictEqual(m['BB'].length, 0);
  });

  test('1-byte map groups two nodes with same prefix', () => {
    const a = node('AA1111'), b = node('AA2222');
    const m = buildOne([a, b]);
    assert.strictEqual(m['AA'].length, 2);
  });

  test('1-byte map is case-insensitive for node keys', () => {
    const n = node('aabbcc');
    const m = buildOne([n]);
    assert.strictEqual(m['AA'].length, 1);
  });

  test('1-byte map: empty input yields all empty buckets', () => {
    const m = buildOne([]);
    assert.ok(Object.values(m).every(v => v.length === 0));
  });

  // --- 2-byte prefix info ---
  test('2-byte info has 256 first-byte keys', () => {
    const info = buildTwo([]);
    assert.strictEqual(Object.keys(info).length, 256);
  });

  test('2-byte info: no nodes → zero collisions', () => {
    const info = buildTwo([]);
    assert.ok(Object.values(info).every(e => e.collisionCount === 0));
  });

  test('2-byte info: node placed in correct first-byte group', () => {
    const n = node('AABB1122');
    const info = buildTwo([n]);
    assert.strictEqual(info['AA'].groupNodes.length, 1);
    assert.strictEqual(info['BB'].groupNodes.length, 0);
  });

  test('2-byte info: same 2-byte prefix = collision', () => {
    const a = node('AABB0001'), b = node('AABB0002');
    const info = buildTwo([a, b]);
    assert.strictEqual(info['AA'].collisionCount, 1);
    assert.strictEqual(info['AA'].maxCollision, 2);
  });

  test('2-byte info: different 2-byte prefixes in same group = no collision', () => {
    const a = node('AA110001'), b = node('AA220002');
    const info = buildTwo([a, b]);
    assert.strictEqual(info['AA'].collisionCount, 0);
    assert.strictEqual(info['AA'].maxCollision, 0);
  });

  test('2-byte info: twoByteMap built correctly', () => {
    const a = node('AABB0001'), b = node('AABB0002'), c = node('AACC0003');
    const info = buildTwo([a, b, c]);
    assert.strictEqual(Object.keys(info['AA'].twoByteMap).length, 2);
    assert.strictEqual(info['AA'].twoByteMap['AABB'].length, 2);
    assert.strictEqual(info['AA'].twoByteMap['AACC'].length, 1);
  });

  // --- 3-byte stat summary (via buildCollisionHops) ---
  test('buildCollisionHops: no collisions returns empty array', () => {
    const nodes = [node('AA000001'), node('BB000002'), node('CC000003')];
    assert.deepStrictEqual(buildHops(nodes, 1), []);
  });

  test('buildCollisionHops: detects 1-byte collision', () => {
    const nodes = [node('AA000001'), node('AA000002')];
    const hops = buildHops(nodes, 1);
    assert.strictEqual(hops.length, 1);
    assert.strictEqual(hops[0].hex, 'AA');
    assert.strictEqual(hops[0].count, 2);
  });

  test('buildCollisionHops: detects 2-byte collision', () => {
    const nodes = [node('AABB0001'), node('AABB0002'), node('AACC0003')];
    const hops = buildHops(nodes, 2);
    assert.strictEqual(hops.length, 1);
    assert.strictEqual(hops[0].hex, 'AABB');
    assert.strictEqual(hops[0].count, 2);
  });

  test('buildCollisionHops: detects 3-byte collision', () => {
    const nodes = [node('AABBCC0001'), node('AABBCC0002')];
    const hops = buildHops(nodes, 3);
    assert.strictEqual(hops.length, 1);
    assert.strictEqual(hops[0].hex, 'AABBCC');
  });

  test('buildCollisionHops: size field set correctly', () => {
    const nodes = [node('AABB0001'), node('AABB0002')];
    const hops = buildHops(nodes, 2);
    assert.strictEqual(hops[0].size, 2);
  });

  test('buildCollisionHops: empty input returns empty array', () => {
    assert.deepStrictEqual(buildHops([], 1), []);
    assert.deepStrictEqual(buildHops([], 2), []);
    assert.deepStrictEqual(buildHops([], 3), []);
  });
}

// ===== CUSTOMIZE.JS: initState merge behavior =====
console.log('\n=== customize.js: initState merge behavior ===');
@@ -2107,6 +2109,43 @@ console.log('\n=== customize.js: initState merge behavior ===');
    assert.strictEqual(state.theme.accent, '#abcdef');
    assert.strictEqual(state.theme.navBg, '#fedcba');
  });

  test('initState uses _SITE_CONFIG_ORIGINAL_HOME to bypass contaminated SITE_CONFIG.home', () => {
    // Simulates: app.js called mergeUserHomeConfig which mutated SITE_CONFIG.home.steps = []
    // The original server steps must still be recoverable via _SITE_CONFIG_ORIGINAL_HOME
    const ctx = makeSandbox();
    ctx.setTimeout = function (fn) { fn(); return 1; };
    ctx.clearTimeout = function () {};
    // SITE_CONFIG.home is contaminated — steps wiped by mergeUserHomeConfig at page load
    ctx.window.SITE_CONFIG = {
      home: {
        heroTitle: 'Server Hero',
        steps: [] // contaminated — user had steps:[] in localStorage at page load
      }
    };
    // app.js snapshots original before mutation
    ctx.window._SITE_CONFIG_ORIGINAL_HOME = {
      heroTitle: 'Server Hero',
      steps: [{ emoji: '🧪', title: 'Original Step', description: 'from server' }]
    };
    const ex = loadCustomizeExports(ctx);
    ex.initState();
    const state = ex.getState();
    assert.strictEqual(state.home.steps.length, 1, 'should restore from snapshot, not contaminated SITE_CONFIG');
    assert.strictEqual(state.home.steps[0].title, 'Original Step');
  });

  test('initState uses DEFAULTS.home when no SITE_CONFIG and no snapshot', () => {
    const ctx = makeSandbox();
    ctx.setTimeout = function (fn) { fn(); return 1; };
    ctx.clearTimeout = function () {};
    // No SITE_CONFIG at all — pure DEFAULTS
    const ex = loadCustomizeExports(ctx);
    ex.initState();
    const state = ex.getState();
    assert.ok(state.home.steps.length > 0, 'should use DEFAULTS.home.steps when no server config');
    assert.strictEqual(state.home.steps[0].title, 'Join the Bay Area MeshCore Discord');
  });
}

// ===== APP.JS: home rehydration merge =====
@@ -2642,6 +2681,358 @@ console.log('\n=== packets.js: savedTimeWindowMin defaults ===');
    assert.ok(deltaMin > 10 && deltaMin < 25, `expected capped ~15m window, got ${deltaMin.toFixed(2)}m`);
  });
}

// ===== My Nodes client-side filter (issue #381) =====
{
  console.log('\n--- My Nodes client-side filter ---');

  // Simulate the client-side filter logic from packets.js renderTableRows()
  function filterMyNodes(packets, allKeys) {
    if (!allKeys.length) return [];
    return packets.filter(p => {
      const dj = p.decoded_json || '';
      return allKeys.some(k => dj.includes(k));
    });
  }

  const testPackets = [
    { decoded_json: '{"pubKey":"abc123","name":"Node1"}' },
    { decoded_json: '{"pubKey":"def456","name":"Node2"}' },
    { decoded_json: '{"pubKey":"ghi789","name":"Node3","hops":["abc123"]}' },
    { decoded_json: '' },
    { decoded_json: null },
  ];

  test('filters packets matching a single pubkey', () => {
    const result = filterMyNodes(testPackets, ['abc123']);
    assert.strictEqual(result.length, 2, 'should match sender + hop');
    assert.ok(result[0].decoded_json.includes('abc123'));
    assert.ok(result[1].decoded_json.includes('abc123'));
  });

  test('filters packets matching multiple pubkeys', () => {
    const result = filterMyNodes(testPackets, ['abc123', 'def456']);
    assert.strictEqual(result.length, 3);
  });

  test('returns empty array for no matching keys', () => {
    const result = filterMyNodes(testPackets, ['zzz999']);
    assert.strictEqual(result.length, 0);
  });

  test('returns empty array when allKeys is empty', () => {
    const result = filterMyNodes(testPackets, []);
    assert.strictEqual(result.length, 0);
  });

  test('handles null/empty decoded_json gracefully', () => {
    const result = filterMyNodes(testPackets, ['abc123']);
    assert.strictEqual(result.length, 2);
  });
}

// ===== Packets page: virtual scroll infrastructure =====
{
  console.log('\nPackets page — virtual scroll:');
  const packetsSource = fs.readFileSync('public/packets.js', 'utf8');

  // --- Behavioral tests using extracted logic ---

  // Extract _cumulativeRowOffsets logic for testing
  function cumulativeRowOffsets(rowCounts) {
    const offsets = new Array(rowCounts.length + 1);
    offsets[0] = 0;
    for (let i = 0; i < rowCounts.length; i++) {
      offsets[i + 1] = offsets[i] + rowCounts[i];
    }
    return offsets;
  }

  // Extract _getRowCount logic for testing (#424 — single source of truth)
  function getRowCount(p, grouped, expandedHashes, observerFilterSet) {
    if (!grouped) return 1;
    if (!expandedHashes.has(p.hash) || !p._children) return 1;
    let childCount = p._children.length;
    if (observerFilterSet) {
      childCount = p._children.filter(c => observerFilterSet.has(String(c.observer_id))).length;
    }
    return 1 + childCount;
  }

  test('cumulativeRowOffsets computes correct offsets for flat rows', () => {
    const counts = [1, 1, 1, 1, 1];
    const offsets = cumulativeRowOffsets(counts);
    assert.deepStrictEqual(offsets, [0, 1, 2, 3, 4, 5]);
  });

  test('cumulativeRowOffsets handles expanded groups with multiple rows', () => {
    const counts = [1, 4, 1];
    const offsets = cumulativeRowOffsets(counts);
    assert.deepStrictEqual(offsets, [0, 1, 5, 6]);
    assert.strictEqual(offsets[offsets.length - 1], 6);
  });

  test('total scroll height accounts for expanded group rows', () => {
    const VSCROLL_ROW_HEIGHT = 36;
    const counts = [1, 4, 1, 4, 1];
    const offsets = cumulativeRowOffsets(counts);
    const totalDomRows = offsets[offsets.length - 1];
    assert.strictEqual(totalDomRows, 11);
    assert.strictEqual(totalDomRows * VSCROLL_ROW_HEIGHT, 396);
  });

  test('scroll height with all collapsed equals entries * row height', () => {
    const VSCROLL_ROW_HEIGHT = 36;
    const counts = [1, 1, 1, 1, 1];
    const offsets = cumulativeRowOffsets(counts);
    const totalDomRows = offsets[offsets.length - 1];
    assert.strictEqual(totalDomRows * VSCROLL_ROW_HEIGHT, 5 * VSCROLL_ROW_HEIGHT);
  });

  // --- Behavioral tests for _getRowCount (#424, #428 — test logic, not source strings) ---

  test('getRowCount returns 1 for flat (ungrouped) mode', () => {
    const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}] };
    assert.strictEqual(getRowCount(p, false, new Set(), null), 1);
  });

  test('getRowCount returns 1 for collapsed group', () => {
    const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}] };
    assert.strictEqual(getRowCount(p, true, new Set(), null), 1);
  });

  test('getRowCount returns 1+children for expanded group', () => {
    const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}, {observer_id: '3'}] };
    const expanded = new Set(['abc']);
    assert.strictEqual(getRowCount(p, true, expanded, null), 4);
  });

  test('getRowCount filters children by observer set', () => {
    const p = { hash: 'abc', _children: [{observer_id: '1'}, {observer_id: '2'}, {observer_id: '3'}] };
    const expanded = new Set(['abc']);
    const obsFilter = new Set(['1', '3']);
    assert.strictEqual(getRowCount(p, true, expanded, obsFilter), 3);
  });

  test('getRowCount returns 1 for expanded group with no _children', () => {
    const p = { hash: 'abc' };
    const expanded = new Set(['abc']);
    assert.strictEqual(getRowCount(p, true, expanded, null), 1);
  });

  test('renderVisibleRows uses cumulative offsets not flat entry count', () => {
    assert.ok(packetsSource.includes('_cumulativeRowOffsets'),
      'renderVisibleRows should use cumulative row offsets');
    assert.ok(!packetsSource.includes('const totalRows = _displayPackets.length'),
      'should NOT use flat array length for total row count');
  });

  test('renderVisibleRows skips DOM rebuild when range unchanged', () => {
    assert.ok(packetsSource.includes('startIdx === _lastVisibleStart && endIdx === _lastVisibleEnd'),
      'should skip rebuild when range is unchanged');
  });

  test('lazy row generation — HTML built only for visible slice', () => {
    assert.ok(!packetsSource.includes('_lastRenderedRows'),
      'should NOT have pre-built row HTML cache');
    assert.ok(packetsSource.includes('_displayPackets.slice(startIdx, endIdx)'),
      'should slice display packets for visible range');
    assert.ok(packetsSource.includes('visibleSlice.map(p => builder(p))'),
      'should build HTML lazily per visible packet');
  });

  test('observer filter Set is hoisted, not recreated per-packet', () => {
    assert.ok(packetsSource.includes('_observerFilterSet = filters.observer ? new Set(filters.observer.split'),
      'observer filter Set should be created once in renderTableRows');
    assert.ok(packetsSource.includes('_observerFilterSet.has(String(c.observer_id))'),
      'buildGroupRowHtml should use hoisted _observerFilterSet');
  });

  test('buildFlatRowHtml has null-safe decoded_json', () => {
    const flatBuilderMatch = packetsSource.match(/function buildFlatRowHtml[\s\S]*?(?=\n function )/);
    assert.ok(flatBuilderMatch, 'buildFlatRowHtml should exist');
    assert.ok(flatBuilderMatch[0].includes("p.decoded_json || '{}'"),
      'buildFlatRowHtml should have null-safe decoded_json fallback');
  });

  test('pathHops null guard in buildFlatRowHtml (issue #451)', () => {
    const flatBuilderMatch = packetsSource.match(/function buildFlatRowHtml[\s\S]*?(?=\n function )/);
    assert.ok(flatBuilderMatch, 'buildFlatRowHtml should exist');
    // The JSON.parse result must be coalesced with || [] to handle literal null from path_json
    assert.ok(flatBuilderMatch[0].includes("|| '[]') || []"),
      'buildFlatRowHtml should coalesce parsed path_json with || [] to guard against null');
  });

  test('pathHops null guard in detail pane (issue #451)', () => {
    // The detail pane (selectPacket / showPacketDetail) also parses path_json
    const detailMatch = packetsSource.match(/let pathHops;\s*try \{[^}]+\} catch/);
    assert.ok(detailMatch, 'detail pane pathHops parsing should exist');
    assert.ok(detailMatch[0].includes("|| '[]') || []"),
      'detail pane should coalesce parsed path_json with || [] to guard against null');
  });

  test('destroy cleans up virtual scroll state', () => {
    assert.ok(packetsSource.includes('detachVScrollListener'),
      'destroy should detach virtual scroll listener');
    assert.ok(packetsSource.includes("_displayPackets = []"),
      'destroy should reset display packets');
    assert.ok(packetsSource.includes("_rowCounts = []"),
      'destroy should reset row counts');
    assert.ok(packetsSource.includes("_lastVisibleStart = -1"),
      'destroy should reset visible start');
  });
}

// ===== live.js: nextHop null guards =====
console.log('\n=== live.js: nextHop null guards ===');
{
  const liveSource = fs.readFileSync('public/live.js', 'utf8');

  test('nextHop guards animLayer null before use', () => {
    assert.ok(liveSource.includes('if (!animLayer) return;'),
      'nextHop must return early when animLayer is null (post-destroy)');
  });

  test('nextHop setInterval guards animLayer null', () => {
    assert.ok(liveSource.includes('if (!animLayer || !animLayer.hasLayer(ghost))'),
      'setInterval in nextHop must guard animLayer null');
  });

  test('nextHop setTimeout guards animLayer null', () => {
    assert.ok(liveSource.includes('if (animLayer && animLayer.hasLayer(ghost)) animLayer.removeLayer(ghost)'),
      'setTimeout in nextHop must guard animLayer null');
  });

  test('nextHop guards liveAnimCount element null', () => {
    assert.ok(liveSource.includes('const countEl = document.getElementById(\'liveAnimCount\')'),
      'nextHop must null-check liveAnimCount element');
    assert.ok(liveSource.includes('if (countEl) countEl.textContent = activeAnims'),
      'nextHop must conditionally update liveAnimCount');
  });
}

// === channels.js: formatHashHex (#465) ===
console.log('\n=== channels.js: formatHashHex (issue #465) ===');
{
  const chSource = fs.readFileSync('public/channels.js', 'utf8');

  test('formatHashHex exists in channels.js', () => {
    assert.ok(chSource.includes('function formatHashHex('), 'formatHashHex function must exist');
  });

  test('channel fallback name uses formatHashHex', () => {
    assert.ok(chSource.includes('formatHashHex(ch.hash)'), 'renderChannelList must format hash as hex');
    assert.ok(chSource.includes('formatHashHex(hash)'), 'selectChannel must format hash as hex');
  });

  test('formatHashHex produces correct hex output', () => {
    // Extract and evaluate the function
    const match = chSource.match(/function formatHashHex\(hash\)\s*\{[^}]+\}/);
    assert.ok(match, 'should extract formatHashHex');
    const ctx = vm.createContext({});
    vm.runInContext(match[0], ctx);
    const fmt = vm.runInContext('formatHashHex', ctx);
    assert.strictEqual(fmt(10), '0x0A');
    assert.strictEqual(fmt(255), '0xFF');
    assert.strictEqual(fmt(0), '0x00');
    assert.strictEqual(fmt(1), '0x01');
    assert.strictEqual(fmt('LongFast'), 'LongFast'); // string hash passes through
  });
}

// ===== MAP NEIGHBOR FILTER LOGIC =====
{
  console.log('\n--- Map neighbor filter logic ---');

  // NOTE: applyNeighborFilter is a hand-written copy of the filter logic from
  // public/map.js _renderMarkersInner. The real code is browser-only (depends on
  // Leaflet, DOM, closure state) and cannot be imported directly in Node.
  // If the filter logic in map.js changes, update this copy to match.
  function applyNeighborFilter(nodes, filters, selectedReferenceNode, neighborPubkeys) {
    return nodes.filter(n => {
      if (!n.lat || !n.lon) return false;
      if (!filters[n.role || 'companion']) return false;
      if (filters.neighbors && selectedReferenceNode && neighborPubkeys) {
        const pk = n.public_key;
        if (pk !== selectedReferenceNode && !neighborPubkeys.has(pk)) return false;
      }
      return true;
    });
  }

  const testNodes = [
    { public_key: 'aaa', lat: 1, lon: 1, role: 'repeater', name: 'NodeA' },
    { public_key: 'bbb', lat: 2, lon: 2, role: 'repeater', name: 'NodeB' },
    { public_key: 'ccc', lat: 3, lon: 3, role: 'companion', name: 'NodeC' },
    { public_key: 'ddd', lat: 4, lon: 4, role: 'repeater', name: 'NodeD' },
  ];
  const baseFilters = { repeater: true, companion: true, room: true, sensor: true, neighbors: false };

  test('neighbor filter off shows all nodes', () => {
    const result = applyNeighborFilter(testNodes, baseFilters, null, null);
    assert.strictEqual(result.length, 4);
  });

  test('neighbor filter on with no reference shows all nodes', () => {
    const f = { ...baseFilters, neighbors: true };
    const result = applyNeighborFilter(testNodes, f, null, null);
    assert.strictEqual(result.length, 4);
  });

  test('neighbor filter on with reference and neighbors filters correctly', () => {
    const f = { ...baseFilters, neighbors: true };
    const neighborSet = new Set(['bbb', 'ccc']);
    const result = applyNeighborFilter(testNodes, f, 'aaa', neighborSet);
    assert.strictEqual(result.length, 3); // aaa (ref) + bbb + ccc (neighbors)
    const pks = result.map(n => n.public_key);
    assert.ok(pks.includes('aaa'), 'reference node should be included');
    assert.ok(pks.includes('bbb'), 'neighbor bbb should be included');
    assert.ok(pks.includes('ccc'), 'neighbor ccc should be included');
    assert.ok(!pks.includes('ddd'), 'non-neighbor ddd should be excluded');
  });

  test('neighbor filter on with reference and empty neighbors shows only reference', () => {
    const f = { ...baseFilters, neighbors: true };
    const neighborSet = new Set();
    const result = applyNeighborFilter(testNodes, f, 'aaa', neighborSet);
    assert.strictEqual(result.length, 1);
    assert.strictEqual(result[0].public_key, 'aaa');
  });

  test('neighbor filter respects role filter', () => {
    const f = { ...baseFilters, neighbors: true, companion: false };
    const neighborSet = new Set(['bbb', 'ccc']);
    const result = applyNeighborFilter(testNodes, f, 'aaa', neighborSet);
    assert.strictEqual(result.length, 2); // aaa + bbb (ccc is companion, filtered out)
    const pks = result.map(n => n.public_key);
    assert.ok(!pks.includes('ccc'), 'companion ccc should be filtered by role');
  });

  // Test path parsing for neighbor extraction
  test('neighbor extraction from paths data', () => {
    const refPubkey = 'aaa';
    const paths = [
      { hops: [{ pubkey: 'bbb' }, { pubkey: 'aaa' }, { pubkey: 'ccc' }] },
      { hops: [{ pubkey: 'aaa' }, { pubkey: 'ddd' }] },
      { hops: [{ pubkey: 'eee' }, { pubkey: 'aaa' }] },
    ];
    const neighborSet = new Set();
    for (const p of paths) {
      const hops = p.hops || [];
      for (let i = 0; i < hops.length; i++) {
        if (hops[i].pubkey === refPubkey) {
          if (i > 0 && hops[i - 1].pubkey) neighborSet.add(hops[i - 1].pubkey);
          if (i < hops.length - 1 && hops[i + 1].pubkey) neighborSet.add(hops[i + 1].pubkey);
        }
      }
    }
    assert.ok(neighborSet.has('bbb'), 'bbb is adjacent in path 1');
    assert.ok(neighborSet.has('ccc'), 'ccc is adjacent in path 1');
    assert.ok(neighborSet.has('ddd'), 'ddd is adjacent in path 2');
    assert.ok(neighborSet.has('eee'), 'eee is adjacent in path 3');
    assert.strictEqual(neighborSet.size, 4);
  });
}

// ===== SUMMARY =====
Promise.allSettled(pendingTests).then(() => {
  console.log(`\n${'═'.repeat(40)}`);

78
test-live-anims.js
Normal file
@@ -0,0 +1,78 @@
/* Unit tests for live.js animation system — verifies rAF migration and concurrency cap */
'use strict';
const fs = require('fs');
const assert = require('assert');

const src = fs.readFileSync('public/live.js', 'utf8');

let passed = 0, failed = 0;
function test(name, fn) {
  try { fn(); passed++; console.log(` ✅ ${name}`); }
  catch (e) { failed++; console.log(` ❌ ${name}: ${e.message}`); }
}

console.log('\n=== Animation interval elimination ===');

test('pulseNode does not use setInterval', () => {
  // Extract pulseNode function body
  const pulseStart = src.indexOf('function pulseNode(');
  const nextFn = src.indexOf('\n function ', pulseStart + 1);
  const body = src.substring(pulseStart, nextFn);
  assert.ok(!body.includes('setInterval'), 'pulseNode still uses setInterval');
  assert.ok(body.includes('requestAnimationFrame'), 'pulseNode should use requestAnimationFrame');
});

test('drawAnimatedLine does not use setInterval', () => {
  const drawStart = src.indexOf('function drawAnimatedLine(');
  const nextFn = src.indexOf('\n function ', drawStart + 1);
  const body = src.substring(drawStart, nextFn);
  assert.ok(!body.includes('setInterval'), 'drawAnimatedLine still uses setInterval');
  assert.ok(body.includes('requestAnimationFrame'), 'drawAnimatedLine should use requestAnimationFrame');
});

test('ghost hop pulse does not use setInterval', () => {
  // Ghost pulse is inside animatePath
  const animStart = src.indexOf('function animatePath(');
  const animEnd = src.indexOf('\n function ', animStart + 1);
  const body = src.substring(animStart, animEnd);
  assert.ok(!body.includes('setInterval'), 'animatePath still uses setInterval');
});

console.log('\n=== Concurrency cap ===');

test('MAX_CONCURRENT_ANIMS is defined', () => {
  assert.ok(src.includes('MAX_CONCURRENT_ANIMS'), 'MAX_CONCURRENT_ANIMS constant not found');
});

test('MAX_CONCURRENT_ANIMS is set to 20', () => {
  const match = src.match(/MAX_CONCURRENT_ANIMS\s*=\s*(\d+)/);
  assert.ok(match, 'Could not parse MAX_CONCURRENT_ANIMS value');
  assert.strictEqual(parseInt(match[1]), 20);
});

test('animatePath checks MAX_CONCURRENT_ANIMS before proceeding', () => {
  const animStart = src.indexOf('function animatePath(');
  // Check that within the first 300 chars of the function, we check the cap
  const snippet = src.substring(animStart, animStart + 300);
  assert.ok(snippet.includes('activeAnims >= MAX_CONCURRENT_ANIMS'), 'animatePath should check activeAnims against cap');
});

console.log('\n=== Safety: no stale setInterval in animation functions ===');

test('no setInterval remains in animation hot path', () => {
  // The only acceptable setIntervals are the UI ones (timeline, clock, prune, rate counter)
  // Count total setInterval occurrences
  const matches = src.match(/setInterval\(/g) || [];
  // Count known OK ones: _timelineRefreshInterval, _lcdClockInterval, _pruneInterval, _rateCounterInterval
  const okPatterns = ['_timelineRefreshInterval', '_lcdClockInterval', '_pruneInterval', '_rateCounterInterval'];
  let okCount = 0;
  for (const p of okPatterns) {
    if (src.includes(p + ' = setInterval') || src.includes(p + '= setInterval')) okCount++;
  }
  // Allow some non-animation setIntervals (the 4 UI ones above)
  assert.ok(matches.length <= okCount + 1,
    `Found ${matches.length} setInterval calls, expected at most ${okCount + 1} (non-animation). Some animation setIntervals may remain.`);
});

console.log(`\n${passed} passed, ${failed} failed\n`);
process.exit(failed > 0 ? 1 : 0);