mirror of https://github.com/Kpa-clawbot/meshcore-analyzer.git
synced 2026-05-13 18:23:07 +00:00

Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | 0be8b897bc |  |

@@ -1 +1 @@
-{"schemaVersion":1,"label":"e2e tests","message":"93 passed","color":"brightgreen"}
+{"schemaVersion":1,"label":"e2e tests","message":"82 passed","color":"brightgreen"}

@@ -1 +1 @@
-{"schemaVersion":1,"label":"frontend coverage","message":"40.01%","color":"red"}
+{"schemaVersion":1,"label":"frontend coverage","message":"37.26%","color":"red"}

@@ -79,12 +79,6 @@ jobs:
           go test ./...
           echo "--- Decrypt CLI tests passed ---"

-      - name: Run JS unit tests (packet-filter)
-        run: |
-          set -e
-          node test-packet-filter.js
-          node test-channel-decrypt-insecure-context.js
-
       - name: Verify proto syntax
         run: |
           set -e

@@ -182,9 +176,6 @@ jobs:
      - name: Instrument frontend JS for coverage
        run: sh scripts/instrument-frontend.sh

-      - name: Freshen fixture timestamps
-        run: bash tools/freshen-fixture.sh test-fixtures/e2e-fixture.db
-
      - name: Start Go server with fixture DB
        run: |
          fuser -k 13581/tcp 2>/dev/null || true

@@ -192,7 +183,7 @@ jobs:
          ./corescope-server -port 13581 -db test-fixtures/e2e-fixture.db -public public-instrumented &
          echo $! > .server.pid
          for i in $(seq 1 30); do
-            if curl -sf http://localhost:13581/api/healthz > /dev/null 2>&1; then
+            if curl -sf http://localhost:13581/api/stats > /dev/null 2>&1; then
              echo "Server ready after ${i}s"
              break
            fi

@@ -368,7 +359,7 @@ jobs:
  # ───────────────────────────────────────────────────────────────
  deploy:
    name: "🚀 Deploy Staging"
-    if: github.event_name == 'push'
+    if: false # disabled: staging VM offline, manual deploy required
    needs: [build-and-publish]
    runs-on: [self-hosted, meshcore-runner-2]
    steps:

@@ -457,7 +448,7 @@ jobs:
  publish:
    name: "📝 Publish Badges & Summary"
    if: github.event_name == 'push'
-    needs: [deploy]
+    needs: [build-and-publish]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code

@@ -15,7 +15,6 @@ COPY cmd/server/go.mod cmd/server/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
-COPY internal/dbconfig/ ../../internal/dbconfig/
RUN go mod download
COPY cmd/server/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \

@@ -27,7 +26,6 @@ COPY cmd/ingestor/go.mod cmd/ingestor/go.sum ./
COPY internal/geofilter/ ../../internal/geofilter/
COPY internal/sigvalidate/ ../../internal/sigvalidate/
COPY internal/packetpath/ ../../internal/packetpath/
-COPY internal/dbconfig/ ../../internal/dbconfig/
RUN go mod download
COPY cmd/ingestor/ ./
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \

@@ -1,207 +0,0 @@
# v3.6.0 - The Forensics

CoreScope just got eyes everywhere. This release drops **path inspection**, **color-by-hash markers**, **clock skew detection**, **full channel encryption**, an **observer graph**, and a pile of robustness fixes that make your mesh network feel like it's being watched by someone who actually cares.

134 commits, 105 PRs merged, 18K+ lines added. Here's what shipped.

---

## 🚀 New Features

### Path-Prefix Candidate Inspector (#944, #945)

The marquee feature. Click any path segment and CoreScope opens an interactive inspector showing every candidate node that could match that hop prefix - plotted on a map with scoring by neighbor-graph affinity and geographic centroid. Ambiguous hops? Now you can see *why* they're ambiguous and pick the right one.

**Why you'll love it:** No more guessing which `0xA3` is the real repeater. The inspector lays out every candidate, scores them, and lets you drill in visually.

### Color-by-Hash Packet Markers (#948, #951)

Every packet type gets a vivid, hash-derived color - on the live feed, map polylines, and flying-packet animations. Bright fill with dark outline for contrast. No more monochrome blobs - you can visually track packet flows by color at a glance.

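The idea is easy to picture with a sketch like this - a hypothetical Go helper, not the frontend's actual palette code, assuming only that the hue is derived deterministically from a hash of the packet-type name:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hueForType hashes a packet-type name and folds it onto the HSL color
// wheel, so every type gets a stable, vivid hue with no hand-kept table.
func hueForType(name string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(name))
	return h.Sum32() % 360
}

func main() {
	// e.g. "TXT_MSG" and "ADVERT" land on different, stable hues
	fmt.Printf("hsl(%d, 90%%, 55%%)\n", hueForType("TXT_MSG"))
}
```
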
### Node Filter on Live Page (#924, #771)

Filter the live packet stream to show only traffic flowing through a specific node. Pick a repeater, see exactly what it's carrying. That simple.

### Clock Skew Detection (#746, #752, #828, #850)

Full pipeline: the backend computes drift using Theil-Sen regression with outlier rejection (#828), and the UI shows per-node badges, detail sparklines, and fleet-wide analytics (#752). Bimodal clock severity (#850) surfaces flaky-RTC nodes that toggle between accurate and drifted - instead of hiding them as "No Clock."

**Why you'll love it:** Nodes with bad clocks silently corrupt your timeline. Now they glow red before they ruin your analysis.

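For intuition, here is a minimal Theil-Sen sketch in Go - illustrative names only, without the outlier rejection and minimum-sample guards the real pipeline adds (#769, #828):

```go
package clockskew

import "sort"

// theilSenSlope estimates clock drift as the median of all pairwise slopes
// between (time, skew) samples; the median makes it robust to outliers.
func theilSenSlope(ts, skews []float64) float64 {
	var slopes []float64
	for i := range ts {
		for j := i + 1; j < len(ts); j++ {
			if dt := ts[j] - ts[i]; dt != 0 {
				slopes = append(slopes, (skews[j]-skews[i])/dt)
			}
		}
	}
	if len(slopes) == 0 {
		return 0
	}
	sort.Float64s(slopes)
	return slopes[len(slopes)/2] // median (middle element)
}
```
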
### Observer Graph (M1+M2) (#774)

Observers are now first-class graph citizens. CoreScope builds a neighbor graph from observation overlaps, scores hop-resolver candidates by graph edges (#876), and uses geographic centroid for tiebreaking. The observer topology is visible and queryable.

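The tiebreak itself can be as small as this sketch (hypothetical helper; plain lat/lon averaging is an acceptable centroid at the city-scale regions a mesh covers):

```go
package observergraph

// centroid averages candidate coordinates; the candidate closest to this
// point wins ties that graph-edge scoring alone can't break.
func centroid(pts [][2]float64) (lat, lon float64) {
	if len(pts) == 0 {
		return 0, 0
	}
	for _, p := range pts {
		lat += p[0]
		lon += p[1]
	}
	n := float64(len(pts))
	return lat / n, lon / n
}
```
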
### Channel Encryption - Full Stack (#726, #733, #750, #760)

Three milestones landed as one: DB-backed channel message history (#726), client-side PSK decryption in the browser (#733), and PSK channel management with add/remove UX and message caching (#750). Add a channel key in the UI, and CoreScope decrypts messages client-side - no server-side key storage. The add-channel button (#760) makes it dead simple.

**Why you'll love it:** Encrypted channels are no longer black boxes. Add your PSK, see the messages, search history - all without exposing keys to the server.

### Hash Collision Inspector (#758)

The Hash Usage Matrix now shows collision details for all hash sizes. When two nodes share a prefix, you see exactly who collides and at what size.

### Geofilter Builder - In-App (#735, #900)

The geofilter polygon builder is now served directly from CoreScope with a full docs page (#900). No more hunting for external tools. Link from the customizer, draw your polygon, done.

### Node Blacklist (#742)

`nodeBlacklist` in config hides abusive or troll nodes from all views. They're gone.

### Observer Retention (#764)

Stale observers are automatically pruned after a configurable number of days. Your observer list stays clean without manual intervention.

### Advert Signature Validation (#794)

Corrupt packets with invalid advert signatures are now rejected at ingest. Bad data never hits your store.

### Bounded Cold Load (#790)

`Load()` now respects a memory budget - no more OOM on cold start with a fat database. Combined with retention-hours cutoff (#917), cold start is safe on constrained hardware.

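A budgeted load reduces to a loop like this sketch - the `Packet` type, names, and per-packet size estimate are hypothetical, not CoreScope's actual `Load()`:

```go
package store

type Packet struct{ RawHex string }

// loadBounded keeps pulling rows until a rough in-memory estimate would
// exceed the configured budget, then stops instead of OOMing.
func loadBounded(rows []Packet, budgetBytes int) []Packet {
	used := 0
	out := make([]Packet, 0, len(rows))
	for _, p := range rows {
		used += len(p.RawHex) + 64 // crude per-packet overhead guess
		if used > budgetBytes {
			break
		}
		out = append(out, p)
	}
	return out
}
```
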
### Multi-Arch Docker Images (#869)

Official images now publish `amd64` + `arm64` in a single multi-arch manifest. Raspberry Pi operators: pull and run. No special tags needed.

### /nodes Detail Panel + Search (#868)

The nodes detail panel ships with search improvements (#862) - find nodes fast, see their full detail in a slide-out panel.

### Deduplicated Top Longest Hops (#848)

Longest hops are now deduplicated by pair with observation count and SNR cues. No more seeing the same link 47 times.

---

## 🔥 Performance Wins

### StoreTx ResolvedPath Elimination (#806)

The per-transaction `ResolvedPath` computation is gone - replaced by a membership index with on-demand decode. This was one of the hottest paths in the ingestor.

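The shape of the replacement is roughly this sketch (names hypothetical): record which transmissions each node appears in, and decode the actual path bytes only when a caller asks:

```go
package store

// byNodeIndex maps a node hash prefix to the transmissions it appears in.
// Ingest only appends IDs; full path decoding happens on demand at query
// time instead of on every store transaction.
type byNodeIndex map[string][]int64

func (ix byNodeIndex) add(hop string, txID int64) {
	ix[hop] = append(ix[hop], txID)
}
```
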
### Node Packet Queries (#803)

Raw JSON text search for node packets replaced with a proper `byNode` index (#673). Night and day.

### Channel Query Performance (#762, #763)

New `channel_hash` column enables SQL-level channel filtering. No more full-table scan to find messages in a channel.

### SQLite Auto-Vacuum (#919, #920)

Incremental auto-vacuum enabled - the database file actually shrinks after retention pruning. No more 2GB database holding 200MB of live data.

### Retention-Hours Cutoff on Load (#917)

`Load()` now applies `retentionHours` at read time, preventing OOM when the DB has more history than memory allows.

---

## 🛡️ Security & Robustness

### MQTT Reconnect with Bounded Backoff (#947, #949)

The ingestor now reconnects to MQTT brokers with exponential backoff, observability logging, and bounded retry. No more silent disconnects that kill your data stream.

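The retry cadence is the standard capped-exponential shape, sketched below with illustrative constants rather than the ingestor's actual values:

```go
package ingest

import "time"

// backoffDelay doubles the wait per failed attempt and caps it, so a broker
// outage produces bounded, predictable reconnect pressure.
func backoffDelay(attempt int) time.Duration {
	const max = 5 * time.Minute
	d := time.Second << uint(attempt) // 1s, 2s, 4s, ...
	if d <= 0 || d > max {            // guard shift overflow, apply cap
		return max
	}
	return d
}
```
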
---

## 🐛 Bugs Squashed

This release exterminates **40+ bugs** — from protocol-level hash mismatches to pixel-level CSS breakage. Operators told us what hurt; we listened.

- **Path inspector "Show on Map" missed origin and first hop** (#950) - map view now includes all hops
- **Content hash used full header byte** (#787) - content hashing now uses payload type bits only, fixing hash collisions between packets that differ only in header flags
- **Encrypted channel deep links showed broken UI** (#825, #826, #815) - deep links to encrypted channels now show a lock message instead of broken UI when you don't have the key
- **Geofilter longitude wrapping** (#925) - geofilter builder wraps longitude to [-180, 180]; southern hemisphere polygons no longer invert (see the sketch after this list)
- **Hash filter bypasses saved region filter** (#939) - hash lookups now skip the geo filter as intended
- **Companion-as-repeater excluded from path hops** (#935, #936) - non-repeater nodes no longer pollute hop resolution
- **Customize panel re-renders while typing** (#927) - text fields keep focus during config changes
- **Per-observation raw_hex** (#881, #882) - each observer's hex dump now shows what *that observer* actually received
- **Per-observation children in packet groups** (#866, #880) - expanded groups show per-obs data, not cross-observer aggregates
- **Full-page obs-switch** (#866, #870) - switching observers updates hex, path, and direction correctly
- **Packet detail shows wrong observation** (#849, #851) - clicking a specific observation opens *that* observation
- **Byte breakdown hop count** (#844, #846) - derived from `path_len`, not aggregated `_parsedPath`
- **Transport-route path_len offset** (#852, #853) - correct offset calculation + CSS variable fix
- **Packets/hour chart bars + x-axis** (#858, #865) - bars render correctly, x-axis labels properly decimated
- **Channel timeline capped to top 8** (#860, #864) - no more 47-channel chart spaghetti
- **Reachability row opacity removed** (#859, #863) - clean rows without misleading gradient
- **Sticky table headers on mobile** (#861, #867) - restored after regression
- **Map popup 'Show Neighbors' on iOS Safari** (#840, #841) - link actually works now
- **Node detail Recent Packets invisible text** (#829, #830) - CSS fix
- **/api/packets/{hash} falls back to DB** (#827, #831) - when the in-memory store misses, the DB catches it
- **IATA filter bypass for status messages** (#694, #802) - status packets no longer filtered out by airport codes
- **Desktop node click URL hash** (#676, #739) - clicking a node updates the URL for deep linking
- **Filter params in URL hash** (#682, #740) - all filter state serialized for shareable links
- **Hide undecryptable channel messages** (#727, #728) - clean default view
- **TRACE path_json uses path_sz** (#732) - correct field from flags byte, not header hash_size
- **Multi-byte adopters** (#754, #767) - all node types, role column, advert precedence
- **Channel key case sensitivity** (#761) - Public decode works correctly
- **Transport route field offsets** (#766) - correct offsets in field table
- **Clock skew sanity checks** (#769) - filter epoch-0, cap drift, require minimum samples
- **Neighbor graph slider persistence** (#776) - default 0.7, persisted to localStorage
- **Node detail panel navigation** (#779, #785) - Details/Analytics links actually navigate
- **Channel key removal** (#898) - user-added keys for server-known channels can be removed
- **Side-panel Details on desktop** (#892) - opens full-screen correctly
- **Hex-dump byte ranges client-side** (#891) - computed from per-obs raw_hex
- **path_json derived from raw_hex at ingest** (#886, #887) - single source of truth
- **Path pill and byte breakdown hop agreement** (#885) - they match now
- **Mobile close button + toolbar scroll** (#797, #805) - accessible and scrollable
- **/health.recentPackets resolved_path fallback** (#810, #821) - falls back to longest sibling observation
- **Channel filter on Packets page** (#812, #816) - UI and API both fixed
- **Clock-skew section in side panel** (#813, #814) - renders correctly
- **Real RSS in /api/stats** (#832, #835) - surface actual RSS alongside tracked store bytes
- **Hash size detection for transport routes + zero-hop adverts** (#747) - correct detection
- **Repeater+observer merged map marker** (#745) - single marker, not two overlapping
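As referenced in the geofilter item above, the longitude fix boils down to a normalization like this sketch (hypothetical helper, not the builder's actual code):

```go
package geofilter

import "math"

// wrapLon normalizes any longitude into [-180, 180) so polygons that cross
// the antimeridian, or arrive as 0..360 values, don't invert.
func wrapLon(lon float64) float64 {
	lon = math.Mod(lon+180, 360)
	if lon < 0 {
		lon += 360
	}
	return lon - 180
}
```
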
---

## 🎨 UI Polish

- QA findings applied across the board (#832, #833, #836, #837, #838) - dozens of small UX fixes from a systematic QA pass

---

## 📦 Upgrading

```bash
git pull
docker compose down
docker compose build prod
docker compose up -d prod
```

Your existing `config.json` works as-is. New optional config keys (example below):

- `nodeBlacklist` - array of node hashes to hide
- `observerRetentionDays` - days before stale observers are pruned
- `memoryBudgetMB` - cap on in-memory packet store
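For example, a minimal `config.json` fragment using all three (values are illustrative, and the node hash is a placeholder):

```json
{
  "nodeBlacklist": ["deadbeefcafe"],
  "observerRetentionDays": 14,
  "memoryBudgetMB": 512
}
```
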
### Verify

```bash
curl -s http://localhost/api/health | jq .version
# "3.6.0"
```

---

## 🙏 External Contributors

- **#735** ([@efiten](https://github.com/efiten)) - Serve geofilter builder from app, link from customizer
- **#739** ([@efiten](https://github.com/efiten)) - Desktop node click updates URL hash for deep linking
- **#740** ([@efiten](https://github.com/efiten)) - Serialize filter params in URL hash for shareable links
- **#742** ([@Joel-Claw](https://github.com/Joel-Claw)) - Add nodeBlacklist config to hide abusive/troll nodes
- **#761** ([@copelaje](https://github.com/copelaje)) - Fix channel key case sensitivity for Public decode
- **#764** ([@Joel-Claw](https://github.com/Joel-Claw)) - Add observer retention - prune stale observers after configurable days
- **#802** ([@efiten](https://github.com/efiten)) - Bypass IATA filter for status messages, fill SNR on duplicate observations
- **#803** ([@efiten](https://github.com/efiten)) - Replace raw JSON text search with byNode index for node packet queries
- **#805** ([@efiten](https://github.com/efiten)) - Mobile close button accessible + toolbar scrollable
- **#900** ([@efiten](https://github.com/efiten)) - App-served geofilter docs page
- **#917** ([@efiten](https://github.com/efiten)) - Apply retentionHours cutoff in Load() to prevent OOM on cold start
- **#924** ([@efiten](https://github.com/efiten)) - Node filter on live page - show only traffic through a specific node
- **#925** ([@efiten](https://github.com/efiten)) - Fix geobuilder longitude wrapping for southern hemisphere polygons
- **#927** ([@efiten](https://github.com/efiten)) - Skip customize panel re-render while text field has focus

---

## ⚠️ Breaking Changes

**None.** All API endpoints remain backwards-compatible. New fields are additive only.

---

## 📊 By the Numbers

| Stat | Count |
|------|-------|
| Commits | 134 |
| PRs merged | 105 |
| Lines added | 18,480 |
| Lines removed | 1,632 |
| Files changed | 110 |
| Contributors | 4 |

---

*Previous release: [v3.5.2](https://github.com/Kpa-clawbot/CoreScope/releases/tag/v3.5.2)*

@@ -7,9 +7,7 @@ import (
	"log"
	"os"
	"strings"
-	"sync"

-	"github.com/meshcore-analyzer/dbconfig"
	"github.com/meshcore-analyzer/geofilter"
)

@@ -22,17 +20,6 @@ type MQTTSource struct {
	RejectUnauthorized *bool    `json:"rejectUnauthorized,omitempty"`
	Topics             []string `json:"topics"`
	IATAFilter         []string `json:"iataFilter,omitempty"`
-	ConnectTimeoutSec  int      `json:"connectTimeoutSec,omitempty"`
	Region             string   `json:"region,omitempty"`
}

-// ConnectTimeoutOrDefault returns the per-source connect timeout in seconds,
-// or 30 if not set (matching the WaitTimeout default from #926).
-func (s MQTTSource) ConnectTimeoutOrDefault() int {
-	if s.ConnectTimeoutSec > 0 {
-		return s.ConnectTimeoutSec
-	}
-	return 30
-}

// MQTTLegacy is the old single-broker config format.

@@ -54,26 +41,6 @@ type Config struct {
	Metrics            *MetricsConfig   `json:"metrics,omitempty"`
	GeoFilter          *GeoFilterConfig `json:"geo_filter,omitempty"`
	ValidateSignatures *bool            `json:"validateSignatures,omitempty"`
	DB                 *DBConfig        `json:"db,omitempty"`

	// ObserverIATAWhitelist restricts which observer IATA regions are processed.
	// When non-empty, only observers whose IATA code (from the MQTT topic) matches
	// one of these entries are accepted. Case-insensitive. An empty list means all
	// IATA codes are allowed. This applies globally, unlike the per-source iataFilter.
	ObserverIATAWhitelist []string `json:"observerIATAWhitelist,omitempty"`

	// obsIATAWhitelistCached is the lazily-built uppercase set for O(1) lookups.
	obsIATAWhitelistCached map[string]bool
	obsIATAWhitelistOnce   sync.Once

	// ObserverBlacklist is a list of observer public keys to drop at ingest.
	// Messages from blacklisted observers are silently discarded — no DB writes,
	// no UpsertObserver, no observations, no metrics.
	ObserverBlacklist []string `json:"observerBlacklist,omitempty"`

	// obsBlacklistSetCached is the lazily-built lowercase set for O(1) lookups.
	obsBlacklistSetCached map[string]bool
	obsBlacklistOnce      sync.Once
}

// GeoFilterConfig is an alias for the shared geofilter.Config type.

@@ -91,17 +58,6 @@ type MetricsConfig struct {
	SampleIntervalSec int `json:"sampleIntervalSec"`
}

// DBConfig is the shared SQLite vacuum/maintenance config (#919, #921).
type DBConfig = dbconfig.DBConfig

// IncrementalVacuumPages returns the configured pages per vacuum or 1024 default.
func (c *Config) IncrementalVacuumPages() int {
	if c.DB != nil && c.DB.IncrementalVacuumPages > 0 {
		return c.DB.IncrementalVacuumPages
	}
	return 1024
}

// ShouldValidateSignatures returns true (default) unless explicitly disabled.
func (c *Config) ShouldValidateSignatures() bool {
	if c.ValidateSignatures != nil {

@@ -143,43 +99,6 @@ func (c *Config) ObserverDaysOrDefault() int {
	return 14
}

// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
	if c == nil || len(c.ObserverBlacklist) == 0 {
		return false
	}
	c.obsBlacklistOnce.Do(func() {
		m := make(map[string]bool, len(c.ObserverBlacklist))
		for _, pk := range c.ObserverBlacklist {
			trimmed := strings.ToLower(strings.TrimSpace(pk))
			if trimmed != "" {
				m[trimmed] = true
			}
		}
		c.obsBlacklistSetCached = m
	})
	return c.obsBlacklistSetCached[strings.ToLower(strings.TrimSpace(id))]
}

// IsObserverIATAAllowed returns true if the given IATA code is permitted.
// When ObserverIATAWhitelist is empty, all codes are allowed.
func (c *Config) IsObserverIATAAllowed(iata string) bool {
	if c == nil || len(c.ObserverIATAWhitelist) == 0 {
		return true
	}
	c.obsIATAWhitelistOnce.Do(func() {
		m := make(map[string]bool, len(c.ObserverIATAWhitelist))
		for _, code := range c.ObserverIATAWhitelist {
			trimmed := strings.ToUpper(strings.TrimSpace(code))
			if trimmed != "" {
				m[trimmed] = true
			}
		}
		c.obsIATAWhitelistCached = m
	})
	return c.obsIATAWhitelistCached[strings.ToUpper(strings.TrimSpace(iata))]
}

// LoadConfig reads configuration from a JSON file, with env var overrides.
// If the config file does not exist, sensible defaults are used (zero-config startup).
func LoadConfig(path string) (*Config, error) {

@@ -284,113 +284,3 @@ func TestLoadConfigWithAllFields(t *testing.T) {
		t.Errorf("iataFilter=%v", src.IATAFilter)
	}
}

func TestConnectTimeoutOrDefault(t *testing.T) {
	// Default when unset
	s := MQTTSource{}
	if got := s.ConnectTimeoutOrDefault(); got != 30 {
		t.Errorf("default: got %d, want 30", got)
	}

	// Custom value
	s.ConnectTimeoutSec = 5
	if got := s.ConnectTimeoutOrDefault(); got != 5 {
		t.Errorf("custom: got %d, want 5", got)
	}

	// Zero treated as unset
	s.ConnectTimeoutSec = 0
	if got := s.ConnectTimeoutOrDefault(); got != 30 {
		t.Errorf("zero: got %d, want 30", got)
	}
}

func TestConnectTimeoutFromJSON(t *testing.T) {
	dir := t.TempDir()
	cfgPath := dir + "/config.json"
	os.WriteFile(cfgPath, []byte(`{"mqttSources":[{"name":"s1","broker":"tcp://b:1883","topics":["#"],"connectTimeoutSec":5}]}`), 0644)
	cfg, err := LoadConfig(cfgPath)
	if err != nil {
		t.Fatal(err)
	}
	if got := cfg.MQTTSources[0].ConnectTimeoutOrDefault(); got != 5 {
		t.Errorf("from JSON: got %d, want 5", got)
	}
}

func TestObserverIATAWhitelist(t *testing.T) {
	// Config with whitelist set
	cfg := Config{
		ObserverIATAWhitelist: []string{"ARN", "got"},
	}

	// Matching (case-insensitive)
	if !cfg.IsObserverIATAAllowed("ARN") {
		t.Error("ARN should be allowed")
	}
	if !cfg.IsObserverIATAAllowed("arn") {
		t.Error("arn (lowercase) should be allowed")
	}
	if !cfg.IsObserverIATAAllowed("GOT") {
		t.Error("GOT should be allowed")
	}

	// Non-matching
	if cfg.IsObserverIATAAllowed("SJC") {
		t.Error("SJC should NOT be allowed")
	}

	// Empty string not allowed
	if cfg.IsObserverIATAAllowed("") {
		t.Error("empty IATA should NOT be allowed")
	}
}

func TestObserverIATAWhitelistEmpty(t *testing.T) {
	// No whitelist = allow all
	cfg := Config{}
	if !cfg.IsObserverIATAAllowed("SJC") {
		t.Error("with no whitelist, all IATAs should be allowed")
	}
	if !cfg.IsObserverIATAAllowed("") {
		t.Error("with no whitelist, even empty IATA should be allowed")
	}
}

func TestObserverIATAWhitelistJSON(t *testing.T) {
	json := `{
		"dbPath": "test.db",
		"observerIATAWhitelist": ["ARN", "GOT"]
	}`
	tmp := t.TempDir() + "/config.json"
	os.WriteFile(tmp, []byte(json), 0644)
	cfg, err := LoadConfig(tmp)
	if err != nil {
		t.Fatal(err)
	}
	if len(cfg.ObserverIATAWhitelist) != 2 {
		t.Fatalf("expected 2 entries, got %d", len(cfg.ObserverIATAWhitelist))
	}
	if !cfg.IsObserverIATAAllowed("ARN") {
		t.Error("ARN should be allowed after loading from JSON")
	}
}

func TestMQTTSourceRegionField(t *testing.T) {
	dir := t.TempDir()
	cfgPath := filepath.Join(dir, "config.json")
	os.WriteFile(cfgPath, []byte(`{
		"dbPath": "/tmp/test.db",
		"mqttSources": [
			{"name": "cascadia", "broker": "tcp://localhost:1883", "topics": ["meshcore/#"], "region": "PDX"}
		]
	}`), 0o644)

	cfg, err := LoadConfig(cfgPath)
	if err != nil {
		t.Fatal(err)
	}
	if cfg.MQTTSources[0].Region != "PDX" {
		t.Fatalf("expected region PDX, got %q", cfg.MQTTSources[0].Region)
	}
}

+5 -198

@@ -8,7 +8,6 @@ import (
	"os"
	"path/filepath"
	"strings"
	"sync"
	"sync/atomic"
	"time"

@@ -45,7 +44,6 @@ type Store struct {
	stmtUpsertMetrics *sql.Stmt

	sampleIntervalSec int
	backfillWg        sync.WaitGroup
}

// OpenStore opens or creates a SQLite DB at the given path, applying the
@@ -61,7 +59,7 @@ func OpenStoreWithInterval(dbPath string, sampleIntervalSec int) (*Store, error)
		return nil, fmt.Errorf("creating data dir: %w", err)
	}

-	db, err := sql.Open("sqlite", dbPath+"?_pragma=auto_vacuum(INCREMENTAL)&_pragma=journal_mode(WAL)&_pragma=foreign_keys(ON)&_pragma=busy_timeout(5000)")
+	db, err := sql.Open("sqlite", dbPath+"?_pragma=journal_mode(WAL)&_pragma=foreign_keys(ON)&_pragma=busy_timeout(5000)")
	if err != nil {
		return nil, fmt.Errorf("opening db: %w", err)
	}

@@ -87,9 +85,6 @@ func OpenStoreWithInterval(dbPath string, sampleIntervalSec int) (*Store, error)
}

func applySchema(db *sql.DB) error {
-	// auto_vacuum=INCREMENTAL is set via DSN pragma (must be before journal_mode).
-	// Logging of current mode is handled by CheckAutoVacuum — no duplicate log here.

	schema := `
	CREATE TABLE IF NOT EXISTS nodes (
		public_key TEXT PRIMARY KEY,

@@ -118,8 +113,7 @@ func applySchema(db *sql.DB) error {
		battery_mv INTEGER,
		uptime_secs INTEGER,
		noise_floor REAL,
-		inactive INTEGER DEFAULT 0,
-		last_packet_at TEXT DEFAULT NULL
+		inactive INTEGER DEFAULT 0
	);

	CREATE INDEX IF NOT EXISTS idx_nodes_last_seen ON nodes(last_seen);

@@ -424,45 +418,6 @@ func applySchema(db *sql.DB) error {
	log.Println("[migration] observations.raw_hex column added")
}

// Migration: add last_packet_at column to observers (#last-packet-at)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'observers_last_packet_at_v1'")
if row.Scan(&migDone) != nil {
	log.Println("[migration] Adding last_packet_at column to observers...")
	_, alterErr := db.Exec(`ALTER TABLE observers ADD COLUMN last_packet_at TEXT DEFAULT NULL`)
	if alterErr != nil && !strings.Contains(alterErr.Error(), "duplicate column") {
		return fmt.Errorf("observers last_packet_at ALTER: %w", alterErr)
	}
	// Backfill: set last_packet_at = last_seen only for observers that actually have
	// observation rows (packet_count alone is unreliable — UpsertObserver sets it to 1
	// on INSERT even for status-only observers).
	res, err := db.Exec(`UPDATE observers SET last_packet_at = last_seen
		WHERE last_packet_at IS NULL
		AND rowid IN (SELECT DISTINCT observer_idx FROM observations WHERE observer_idx IS NOT NULL)`)
	if err == nil {
		n, _ := res.RowsAffected()
		log.Printf("[migration] Backfilled last_packet_at for %d observers with packets", n)
	}
	db.Exec(`INSERT INTO _migrations (name) VALUES ('observers_last_packet_at_v1')`)
	log.Println("[migration] observers.last_packet_at column added")
}

// Migration: backfill observations.path_json from raw_hex (#888)
// NOTE: This runs ASYNC via BackfillPathJSONAsync() to avoid blocking MQTT startup.
// See staging outage where ~502K rows blocked ingest for 15+ hours.

// One-time cleanup: delete legacy packets with empty hash or empty first_seen (#994)
row = db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'cleanup_legacy_null_hash_ts'")
if row.Scan(&migDone) != nil {
	log.Println("[migration] Cleaning up legacy packets with empty hash/timestamp...")
	db.Exec(`DELETE FROM observations WHERE transmission_id IN (SELECT id FROM transmissions WHERE hash = '' OR first_seen = '')`)
	res, err := db.Exec(`DELETE FROM transmissions WHERE hash = '' OR first_seen = ''`)
	if err == nil {
		deleted, _ := res.RowsAffected()
		log.Printf("[migration] deleted %d legacy packets with empty hash/timestamp", deleted)
	}
	db.Exec(`INSERT INTO _migrations (name) VALUES ('cleanup_legacy_null_hash_ts')`)
}

return nil
}

@@ -546,7 +501,7 @@ func (s *Store) prepareStatements() error {
		return err
	}

-	s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ?, last_packet_at = ? WHERE rowid = ?")
+	s.stmtUpdateObserverLastSeen, err = s.db.Prepare("UPDATE observers SET last_seen = ? WHERE rowid = ?")
	if err != nil {
		return err
	}

@@ -625,9 +580,9 @@ func (s *Store) InsertTransmission(data *PacketData) (bool, error) {
	err := s.stmtGetObserverRowid.QueryRow(data.ObserverID).Scan(&rowid)
	if err == nil {
		observerIdx = &rowid
-		// Update observer last_seen and last_packet_at on every packet to prevent
+		// Update observer last_seen on every packet to prevent
		// low-traffic observers from appearing offline (#463)
-		_, _ = s.stmtUpdateObserverLastSeen.Exec(now, now, rowid)
+		_, _ = s.stmtUpdateObserverLastSeen.Exec(now, rowid)
	}
}

@@ -756,7 +711,6 @@ func (s *Store) UpsertObserver(id, name, iata string, meta *ObserverMeta) error

// Close checkpoints the WAL and closes the database.
func (s *Store) Close() error {
	s.backfillWg.Wait()
	s.Checkpoint()
	return s.db.Close()
}

@@ -834,58 +788,6 @@ func (s *Store) PruneOldMetrics(retentionDays int) (int64, error) {
	return n, nil
}

// CheckAutoVacuum inspects the current auto_vacuum mode and logs a warning
// if not INCREMENTAL. Performs opt-in full VACUUM if db.vacuumOnStartup is set (#919).
func (s *Store) CheckAutoVacuum(cfg *Config) {
	var autoVacuum int
	if err := s.db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
		log.Printf("[db] warning: could not read auto_vacuum: %v", err)
		return
	}

	if autoVacuum == 2 {
		log.Printf("[db] auto_vacuum=INCREMENTAL")
		return
	}

	modes := map[int]string{0: "NONE", 1: "FULL", 2: "INCREMENTAL"}
	mode := modes[autoVacuum]
	if mode == "" {
		mode = fmt.Sprintf("UNKNOWN(%d)", autoVacuum)
	}

	log.Printf("[db] auto_vacuum=%s — DB needs one-time VACUUM to enable incremental auto-vacuum. "+
		"Set db.vacuumOnStartup: true in config to migrate (will block startup for several minutes on large DBs). "+
		"See https://github.com/Kpa-clawbot/CoreScope/issues/919", mode)

	if cfg.DB != nil && cfg.DB.VacuumOnStartup {
		// WARNING: Full VACUUM creates a temporary copy of the entire DB file.
		// Requires ~2× the DB file size in free disk space or it will fail.
		log.Printf("[db] vacuumOnStartup=true — starting one-time full VACUUM (ensure 2x DB size free disk space)...")
		start := time.Now()

		if _, err := s.db.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
			log.Printf("[db] VACUUM failed: could not set auto_vacuum: %v", err)
			return
		}
		if _, err := s.db.Exec("VACUUM"); err != nil {
			log.Printf("[db] VACUUM failed: %v", err)
			return
		}

		elapsed := time.Since(start)
		log.Printf("[db] VACUUM complete in %v — auto_vacuum is now INCREMENTAL", elapsed.Round(time.Millisecond))
	}
}

// RunIncrementalVacuum returns free pages to the OS (#919).
// Safe to call on auto_vacuum=NONE databases (noop).
func (s *Store) RunIncrementalVacuum(pages int) {
	if _, err := s.db.Exec(fmt.Sprintf("PRAGMA incremental_vacuum(%d)", pages)); err != nil {
		log.Printf("[vacuum] incremental_vacuum error: %v", err)
	}
}

// Checkpoint forces a WAL checkpoint to release the WAL lock file,
// preventing lock contention with a new process starting up.
func (s *Store) Checkpoint() {

@@ -896,92 +798,6 @@ func (s *Store) Checkpoint() {
	}
}

// BackfillPathJSONAsync launches the path_json backfill in a background goroutine.
// It processes observations with NULL/empty path_json that have raw_hex available,
// decoding hop paths and updating the column. Safe to run concurrently with ingest
// because new observations get path_json at write time; this only touches NULL rows.
// Idempotent: skips if migration already recorded.
func (s *Store) BackfillPathJSONAsync() {
	s.backfillWg.Add(1)
	go func() {
		defer s.backfillWg.Done()
		defer func() {
			if r := recover(); r != nil {
				log.Printf("[backfill] path_json async panic recovered: %v", r)
			}
		}()

		var migDone int
		row := s.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'")
		if row.Scan(&migDone) == nil {
			return // already done
		}

		log.Println("[backfill] Starting async path_json backfill from raw_hex...")
		updated := 0
		errored := false
		const batchSize = 1000
		batchNum := 0
		for {
			rows, err := s.db.Query(`
				SELECT o.id, o.raw_hex
				FROM observations o
				JOIN transmissions t ON o.transmission_id = t.id
				WHERE o.raw_hex IS NOT NULL AND o.raw_hex != ''
				AND (o.path_json IS NULL OR o.path_json = '' OR o.path_json = '[]')
				AND t.payload_type != 9
				LIMIT ?`, batchSize)
			if err != nil {
				log.Printf("[backfill] path_json query error: %v", err)
				errored = true
				break
			}
			type pendingRow struct {
				id     int64
				rawHex string
			}
			var batch []pendingRow
			for rows.Next() {
				var r pendingRow
				if err := rows.Scan(&r.id, &r.rawHex); err == nil {
					batch = append(batch, r)
				}
			}
			rows.Close()
			if len(batch) == 0 {
				break
			}
			for _, r := range batch {
				hops, err := packetpath.DecodePathFromRawHex(r.rawHex)
				if err != nil || len(hops) == 0 {
					if _, execErr := s.db.Exec(`UPDATE observations SET path_json = '[]' WHERE id = ?`, r.id); execErr != nil {
						log.Printf("[backfill] write error (id=%d): %v", r.id, execErr)
					}
					continue
				}
				b, _ := json.Marshal(hops)
				if _, execErr := s.db.Exec(`UPDATE observations SET path_json = ? WHERE id = ?`, string(b), r.id); execErr != nil {
					log.Printf("[backfill] write error (id=%d): %v", r.id, execErr)
				} else {
					updated++
				}
			}
			batchNum++
			if batchNum%50 == 0 {
				log.Printf("[backfill] progress: %d observations updated so far (%d batches)", updated, batchNum)
			}
			// Throttle: yield to ingest writers between batches
			time.Sleep(50 * time.Millisecond)
		}
		log.Printf("[backfill] Async path_json backfill complete: %d observations updated", updated)
		if !errored {
			s.db.Exec(`INSERT INTO _migrations (name) VALUES ('backfill_path_json_from_raw_hex_v1')`)
		} else {
			log.Printf("[backfill] NOT recording migration due to errors — will retry on next restart")
		}
	}()
}

// LogStats logs current operational metrics.
func (s *Store) LogStats() {
	log.Printf("[stats] tx_inserted=%d tx_dupes=%d obs_inserted=%d node_upserts=%d observer_upserts=%d write_errors=%d sig_drops=%d",

@@ -1105,7 +921,6 @@ type PacketData struct {
	PathJSON    string
	DecodedJSON string
	ChannelHash string // grouping key for channel queries (#762)
-	Region      string // observer region: payload > topic > source config (#788)
}

// nilIfEmpty returns nil for empty strings (for nullable DB columns).

@@ -1124,7 +939,6 @@ type MQTTPacketMessage struct {
	Score     *float64 `json:"score"`
	Direction *string  `json:"direction"`
	Origin    string   `json:"origin"`
-	Region    string   `json:"region,omitempty"` // optional region override (#788)
}

// BuildPacketData constructs a PacketData from a decoded packet and MQTT message.

@@ -1164,13 +978,6 @@ func BuildPacketData(msg *MQTTPacketMessage, decoded *DecodedPacket, observerID,
		DecodedJSON: PayloadJSON(&decoded.Payload),
	}

-	// Region priority: payload field > topic-derived parameter (#788)
-	if msg.Region != "" {
-		pd.Region = msg.Region
-	} else {
-		pd.Region = region
-	}

	// Populate channel_hash for fast channel queries (#762)
	if decoded.Header.PayloadType == PayloadGRP_TXT {
		if decoded.Payload.Type == "CHAN" && decoded.Payload.Channel != "" {

@@ -569,61 +569,6 @@ func TestInsertTransmissionUpdatesObserverLastSeen(t *testing.T) {
	}
}

func TestLastPacketAtUpdatedOnPacketOnly(t *testing.T) {
	s, err := OpenStore(tempDBPath(t))
	if err != nil {
		t.Fatal(err)
	}
	defer s.Close()

	// Insert observer via status path — last_packet_at should be NULL
	if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
		t.Fatal(err)
	}

	var lastPacketAt sql.NullString
	s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
	if lastPacketAt.Valid {
		t.Fatalf("expected last_packet_at to be NULL after UpsertObserver, got %s", lastPacketAt.String)
	}

	// Insert a packet from this observer — last_packet_at should be set
	data := &PacketData{
		RawHex:      "0A00D69F",
		Timestamp:   "2026-04-24T12:00:00Z",
		ObserverID:  "obs1",
		Hash:        "lastpackettest123456",
		RouteType:   2,
		PayloadType: 2,
		PathJSON:    "[]",
		DecodedJSON: `{"type":"TXT_MSG"}`,
	}
	if _, err := s.InsertTransmission(data); err != nil {
		t.Fatal(err)
	}

	s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAt)
	if !lastPacketAt.Valid {
		t.Fatal("expected last_packet_at to be non-NULL after InsertTransmission")
	}
	// InsertTransmission uses `now = data.Timestamp || time.Now()`, so last_packet_at
	// should match the packet's Timestamp when provided (same source-of-truth as last_seen).
	if lastPacketAt.String != "2026-04-24T12:00:00Z" {
		t.Errorf("expected last_packet_at=2026-04-24T12:00:00Z, got %s", lastPacketAt.String)
	}

	// UpsertObserver again (status path) — last_packet_at should NOT change
	if err := s.UpsertObserver("obs1", "Observer1", "SJC", nil); err != nil {
		t.Fatal(err)
	}

	var lastPacketAtAfterStatus sql.NullString
	s.db.QueryRow("SELECT last_packet_at FROM observers WHERE id = ?", "obs1").Scan(&lastPacketAtAfterStatus)
	if !lastPacketAtAfterStatus.Valid || lastPacketAtAfterStatus.String != lastPacketAt.String {
		t.Errorf("UpsertObserver should not change last_packet_at; expected %s, got %v", lastPacketAt.String, lastPacketAtAfterStatus)
	}
}

func TestEndToEndIngest(t *testing.T) {
	s, err := OpenStore(tempDBPath(t))
	if err != nil {

@@ -2178,392 +2123,3 @@ func TestBuildPacketData_NonTracePathJSON(t *testing.T) {
		t.Errorf("path_json = %s, want %s", pd.PathJSON, expectedPathJSON)
	}
}

// --- Issue #888: Backfill path_json from raw_hex ---

func TestBackfillPathJsonFromRawHex(t *testing.T) {
	dbPath := tempDBPath(t)
	s, err := OpenStore(dbPath)
	if err != nil {
		t.Fatal(err)
	}

	// Insert a transmission with payload_type != TRACE (e.g. 0x01)
	// raw_hex: header 0x05 (route FLOOD, payload 0x01), path byte 0x42 (hash_size=2, count=2),
	// hops: AABB, CCDD, then some payload bytes
	rawHex := "0542AABBCCDD0000000000000000000000000000"
	s.db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'h1', '2025-01-01T00:00:00Z', 1)`, rawHex)

	// Insert observation with raw_hex but empty path_json
	s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1000, ?, '[]')`, rawHex)
	// Insert observation with raw_hex and NULL path_json
	s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1001, ?, NULL)`, rawHex)
	// Insert observation with existing path_json (should NOT be overwritten)
	s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (1, 1002, ?, '["XX","YY"]')`, rawHex)

	// Insert a TRACE transmission (payload_type = 0x09) — should be skipped
	traceRaw := "2604302D0D2359FEE7B100000000006733D63367"
	s.db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'h2', '2025-01-01T00:00:00Z', 9)`, traceRaw)
	s.db.Exec(`INSERT INTO observations (transmission_id, timestamp, raw_hex, path_json) VALUES (2, 1003, ?, '[]')`, traceRaw)

	// Remove the migration marker so it runs again on reopen
	s.db.Exec(`DELETE FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'`)
	s.Close()

	// Reopen — backfill is now async, must trigger explicitly
	s2, err := OpenStore(dbPath)
	if err != nil {
		t.Fatal(err)
	}
	defer s2.Close()

	// Trigger async backfill and wait for completion
	s2.BackfillPathJSONAsync()
	deadline := time.Now().Add(10 * time.Second)
	var migCount int
	for time.Now().Before(deadline) {
		s2.db.QueryRow("SELECT COUNT(*) FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&migCount)
		if migCount == 1 {
			break
		}
		time.Sleep(50 * time.Millisecond)
	}
	if migCount != 1 {
		t.Fatalf("migration not recorded")
	}

	// Row 1 (was '[]') should now have decoded hops
	var pj1 string
	s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 1").Scan(&pj1)
	if pj1 != `["AABB","CCDD"]` {
		t.Errorf("row 1 path_json = %q, want %q", pj1, `["AABB","CCDD"]`)
	}

	// Row 2 (was NULL) should now have decoded hops
	var pj2 string
	s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 2").Scan(&pj2)
	if pj2 != `["AABB","CCDD"]` {
		t.Errorf("row 2 path_json = %q, want %q", pj2, `["AABB","CCDD"]`)
	}

	// Row 3 (had existing data) should NOT be overwritten
	var pj3 string
	s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 3").Scan(&pj3)
	if pj3 != `["XX","YY"]` {
		t.Errorf("row 3 path_json = %q, want %q (should not be overwritten)", pj3, `["XX","YY"]`)
	}

	// Row 4 (TRACE) should NOT be updated
	var pj4 string
	s2.db.QueryRow("SELECT path_json FROM observations WHERE id = 4").Scan(&pj4)
	if pj4 != "[]" {
		t.Errorf("row 4 (TRACE) path_json = %q, want %q (should be skipped)", pj4, "[]")
	}
}

func TestCleanupLegacyNullHashTimestamp(t *testing.T) {
	path := tempDBPath(t)

	// Create a bare-bones DB with legacy bad data
	db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
	if err != nil {
		t.Fatal(err)
	}
	db.Exec(`CREATE TABLE IF NOT EXISTS transmissions (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		raw_hex TEXT NOT NULL,
		hash TEXT NOT NULL,
		first_seen TEXT NOT NULL,
		route_type INTEGER,
		payload_type INTEGER,
		payload_version INTEGER,
		decoded_json TEXT,
		created_at TEXT DEFAULT (datetime('now')),
		channel_hash TEXT DEFAULT NULL
	)`)
	db.Exec(`CREATE TABLE IF NOT EXISTS observations (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
		observer_idx INTEGER,
		direction TEXT,
		snr REAL,
		rssi REAL,
		score INTEGER,
		path_json TEXT,
		timestamp INTEGER NOT NULL
	)`)
	db.Exec(`CREATE TABLE IF NOT EXISTS _migrations (name TEXT PRIMARY KEY)`)
	db.Exec(`CREATE TABLE IF NOT EXISTS nodes (public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL, last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0, battery_mv INTEGER, temperature_c REAL)`)
	db.Exec(`CREATE TABLE IF NOT EXISTS observers (id TEXT PRIMARY KEY, name TEXT, iata TEXT, last_seen TEXT, first_seen TEXT, packet_count INTEGER DEFAULT 0, model TEXT, firmware TEXT, client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL, inactive INTEGER DEFAULT 0, last_packet_at TEXT DEFAULT NULL)`)

	// Insert good transmission
	db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (1, 'aabb', 'abc123', '2024-01-01T00:00:00Z')`)
	db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (1, 1, 1704067200)`)

	// Insert bad: empty hash
	db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (2, 'ccdd', '', '2024-01-01T00:00:00Z')`)
	db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (2, 1, 1704067200)`)

	// Insert bad: empty first_seen
	db.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (3, 'eeff', 'def456', '')`)
	db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp) VALUES (3, 2, 1704067200)`)

	db.Close()

	// Now open via OpenStore which should run the migration
	s, err := OpenStore(path)
	if err != nil {
		t.Fatal(err)
	}
	defer s.Close()

	// Good transmission should remain
	var count int
	s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 1").Scan(&count)
	if count != 1 {
		t.Error("good transmission should not be deleted")
	}

	// Bad transmissions should be gone
	s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 2").Scan(&count)
	if count != 0 {
		t.Errorf("transmission with empty hash should be deleted, got count=%d", count)
	}
	s.db.QueryRow("SELECT COUNT(*) FROM transmissions WHERE id = 3").Scan(&count)
	if count != 0 {
		t.Errorf("transmission with empty first_seen should be deleted, got count=%d", count)
	}

	// Observations for bad transmissions should be gone
	s.db.QueryRow("SELECT COUNT(*) FROM observations WHERE transmission_id IN (2, 3)").Scan(&count)
	if count != 0 {
		t.Errorf("observations for bad transmissions should be deleted, got count=%d", count)
	}

	// Observation for good transmission should remain
	s.db.QueryRow("SELECT COUNT(*) FROM observations WHERE transmission_id = 1").Scan(&count)
	if count != 1 {
		t.Error("observation for good transmission should remain")
	}

	// Migration marker should exist
	var migCount int
	s.db.QueryRow("SELECT COUNT(*) FROM _migrations WHERE name = 'cleanup_legacy_null_hash_ts'").Scan(&migCount)
	if migCount != 1 {
		t.Error("migration marker cleanup_legacy_null_hash_ts should be recorded")
	}

	// Idempotent: opening again should not error
	s.Close()
	s2, err := OpenStore(path)
	if err != nil {
		t.Fatal("second open should not fail:", err)
	}
	s2.Close()
}

func TestBuildPacketDataRegionFromPayload(t *testing.T) {
	msg := &MQTTPacketMessage{Raw: "0102030405060708", Region: "PDX"}
	decoded := &DecodedPacket{
		Header: Header{RouteType: 1, PayloadType: 3},
	}
	pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
	// When payload has region, it should override the topic-derived region
	if pkt.Region != "PDX" {
		t.Fatalf("expected region PDX from payload, got %q", pkt.Region)
	}
}

func TestBuildPacketDataRegionFallsBackToTopic(t *testing.T) {
	msg := &MQTTPacketMessage{Raw: "0102030405060708"}
	decoded := &DecodedPacket{
		Header: Header{RouteType: 1, PayloadType: 3},
	}
	pkt := BuildPacketData(msg, decoded, "obs1", "SJC")
	if pkt.Region != "SJC" {
		t.Fatalf("expected region SJC from topic, got %q", pkt.Region)
	}
}

// TestBackfillPathJSONAsync verifies that the path_json backfill does NOT block
// OpenStore from returning. MQTT connect happens immediately after OpenStore;
// if the backfill is synchronous, MQTT would be delayed indefinitely on large DBs.
// This test creates pending backfill rows, opens the store, and asserts that
// OpenStore returns before the migration is recorded — proving async execution.
func TestBackfillPathJSONAsync(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "async_test.db")

	// Bootstrap schema manually so we can insert test data BEFORE OpenStore
	db, err := sql.Open("sqlite", dbPath+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
	if err != nil {
		t.Fatal(err)
	}
	// Create tables manually (minimal schema for this test)
	_, err = db.Exec(`
	CREATE TABLE _migrations (name TEXT PRIMARY KEY);
	CREATE TABLE transmissions (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		raw_hex TEXT NOT NULL,
		hash TEXT NOT NULL UNIQUE,
		first_seen TEXT NOT NULL,
		route_type INTEGER,
		payload_type INTEGER,
		payload_version INTEGER,
		decoded_json TEXT,
		created_at TEXT DEFAULT (datetime('now')),
		channel_hash TEXT
	);
	CREATE TABLE observers (
		id TEXT PRIMARY KEY,
		name TEXT,
		iata TEXT,
		last_seen TEXT,
		first_seen TEXT,
		packet_count INTEGER DEFAULT 0,
		model TEXT,
		firmware TEXT,
		client_version TEXT,
		radio TEXT,
		battery_mv INTEGER,
		uptime_secs INTEGER,
		noise_floor REAL,
		inactive INTEGER DEFAULT 0,
		last_packet_at TEXT
	);
	CREATE TABLE nodes (
		public_key TEXT PRIMARY KEY,
		name TEXT, role TEXT, lat REAL, lon REAL,
		last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0,
		battery_mv INTEGER, temperature_c REAL
	);
	CREATE TABLE inactive_nodes (
		public_key TEXT PRIMARY KEY,
		name TEXT, role TEXT, lat REAL, lon REAL,
		last_seen TEXT, first_seen TEXT, advert_count INTEGER DEFAULT 0,
		battery_mv INTEGER, temperature_c REAL
	);
	CREATE TABLE observations (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
		observer_idx INTEGER,
		direction TEXT,
		snr REAL, rssi REAL, score INTEGER,
		path_json TEXT,
		timestamp INTEGER NOT NULL,
		raw_hex TEXT
	);
	CREATE UNIQUE INDEX idx_observations_dedup ON observations(transmission_id, observer_idx, COALESCE(path_json, ''));
	CREATE INDEX idx_observations_transmission_id ON observations(transmission_id);
	CREATE INDEX idx_observations_observer_idx ON observations(observer_idx);
	CREATE INDEX idx_observations_timestamp ON observations(timestamp);
	CREATE TABLE observer_metrics (
		observer_id TEXT NOT NULL,
		timestamp TEXT NOT NULL,
		noise_floor REAL, tx_air_secs INTEGER, rx_air_secs INTEGER,
		recv_errors INTEGER, battery_mv INTEGER,
		packets_sent INTEGER, packets_recv INTEGER,
		PRIMARY KEY (observer_id, timestamp)
	);
	CREATE TABLE dropped_packets (
		id INTEGER PRIMARY KEY AUTOINCREMENT,
		hash TEXT, raw_hex TEXT, reason TEXT NOT NULL,
		observer_id TEXT, observer_name TEXT,
		node_pubkey TEXT, node_name TEXT,
		dropped_at DATETIME DEFAULT CURRENT_TIMESTAMP
	);
	`)
	if err != nil {
		t.Fatal("bootstrap schema:", err)
	}

	// Mark all migrations as done EXCEPT the path_json backfill
	for _, m := range []string{
		"advert_count_unique_v1", "noise_floor_real_v1", "node_telemetry_v1",
		"obs_timestamp_index_v1", "observer_metrics_v1", "observer_metrics_ts_idx",
		"observers_inactive_v1", "observer_metrics_packets_v1", "channel_hash_v1",
		"dropped_packets_v1", "observations_raw_hex_v1", "observers_last_packet_at_v1",
		"cleanup_legacy_null_hash_ts",
	} {
		db.Exec(`INSERT INTO _migrations (name) VALUES (?)`, m)
	}

	// Insert a transmission + observations with NULL path_json and valid raw_hex
	// raw_hex "0102AABBCCDD0000" has 2-hop path decodable by packetpath
	rawHex := "41020304AABBCCDD05060708"
	_, err = db.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, payload_type) VALUES (?, 'hash1', '2025-01-01T00:00:00Z', 4)`, rawHex)
	if err != nil {
		t.Fatal("insert tx:", err)
	}
	// Insert 100 observations needing backfill
	for i := 0; i < 100; i++ {
		_, err = db.Exec(`INSERT INTO observations (transmission_id, observer_idx, timestamp, raw_hex, path_json) VALUES (1, ?, ?, ?, NULL)`,
			i+1, 1700000000+i, rawHex)
		if err != nil {
			// dedup index might fire — use unique observer_idx
			t.Fatalf("insert obs %d: %v", i, err)
		}
	}
	db.Close()

	// Now open store via OpenStore — this must return QUICKLY (non-blocking)
	start := time.Now()
	store, err := OpenStoreWithInterval(dbPath, 300)
	elapsed := time.Since(start)
	if err != nil {
		t.Fatal("OpenStore:", err)
	}
	defer store.Close()

	// OpenStore must return in under 2 seconds (backfill is no longer in applySchema)
	if elapsed > 2*time.Second {
		t.Fatalf("OpenStore blocked for %v — backfill must not run in applySchema", elapsed)
	}

	// Backfill must NOT be recorded yet — it hasn't been triggered
	var done int
	err = store.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&done)
	if err == nil {
		t.Fatal("migration recorded during OpenStore — backfill must be async via BackfillPathJSONAsync()")
	}

	// Now trigger the async backfill (simulates what main.go does after OpenStore)
	store.BackfillPathJSONAsync()

	// Wait for backfill to complete (should be very fast with 100 rows)
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		err = store.db.QueryRow("SELECT 1 FROM _migrations WHERE name = 'backfill_path_json_from_raw_hex_v1'").Scan(&done)
		if err == nil {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}
	if err != nil {
		t.Fatal("backfill never completed within 10s")
	}

	// Verify backfill actually worked — observations should have non-NULL path_json
	var nullCount int
	store.db.QueryRow("SELECT COUNT(*) FROM observations WHERE path_json IS NULL").Scan(&nullCount)
	if nullCount > 0 {
		t.Errorf("backfill left %d observations with NULL path_json", nullCount)
	}
}

// TestBackfillPathJSONAsyncMethodExists verifies the async backfill API surface
|
||||
// exists — BackfillPathJSONAsync must be callable independently from OpenStore.
|
||||
func TestBackfillPathJSONAsyncMethodExists(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
dbPath := filepath.Join(dir, "method_test.db")
|
||||
store, err := OpenStoreWithInterval(dbPath, 300)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer store.Close()
|
||||
|
||||
// BackfillPathJSONAsync must exist as a method on *Store
|
||||
// This is a compile-time check — if the method doesn't exist, the test won't compile.
|
||||
store.BackfillPathJSONAsync()
|
||||
}
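
// For orientation, a minimal sketch of the pattern these tests pin down. The
// method body below is an assumption inferred from the assertions above, not
// the actual implementation: the backfill runs in a goroutine and records the
// 'backfill_path_json_from_raw_hex_v1' migration marker only once it finishes,
// so OpenStore can return immediately.
//
//	func (s *Store) BackfillPathJSONAsync() {
//		go func() {
//			// decode raw_hex → path_json for rows WHERE path_json IS NULL,
//			// then mark the migration complete so it never reruns
//			s.db.Exec(`INSERT INTO _migrations (name) VALUES (?)`,
//				"backfill_path_json_from_raw_hex_v1")
//		}()
//	}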

@@ -131,7 +131,6 @@ type Payload struct {
	SenderTimestamp uint32    `json:"sender_timestamp,omitempty"`
	EphemeralPubKey string    `json:"ephemeralPubKey,omitempty"`
	PathData        string    `json:"pathData,omitempty"`
	SNRValues       []float64 `json:"snrValues,omitempty"`
	Tag             uint32    `json:"tag,omitempty"`
	AuthCode        uint32    `json:"authCode,omitempty"`
	TraceFlags      *int      `json:"traceFlags,omitempty"`
@@ -600,9 +599,6 @@ func DecodePacket(hexString string, channelKeys map[string]string, validateSigna
	// We expose hopsCompleted (count of SNR bytes) so consumers can distinguish
	// how far the trace got vs the full intended route.
	var anomaly string
	if header.PayloadType == PayloadTRACE && payload.Error != "" {
		anomaly = fmt.Sprintf("TRACE payload decode failed: %s", payload.Error)
	}
	if header.PayloadType == PayloadTRACE && payload.PathData != "" {
		// Flag anomalous routing — firmware only sends TRACE as DIRECT
		if header.RouteType != RouteDirect && header.RouteType != RouteTransportDirect {
@@ -610,21 +606,6 @@ func DecodePacket(hexString string, channelKeys map[string]string, validateSigna
	}
	// The header path hops count represents SNR entries = completed hops
	hopsCompleted := path.HashCount
	// Extract per-hop SNR from header path bytes (int8, quarter-dB encoding).
	// Mirrors cmd/server/decoder.go — must be done at ingest time so SNR
	// values are persisted in decoded_json (server endpoint serves DB as-is).
	if hopsCompleted > 0 && len(path.Hops) >= hopsCompleted {
		snrVals := make([]float64, 0, hopsCompleted)
		for i := 0; i < hopsCompleted; i++ {
			b, err := hex.DecodeString(path.Hops[i])
			if err == nil && len(b) == 1 {
				snrVals = append(snrVals, float64(int8(b[0]))/4.0)
			}
		}
		if len(snrVals) > 0 {
			payload.SNRValues = snrVals
		}
	}
	pathBytes, err := hex.DecodeString(payload.PathData)
	if err == nil && payload.TraceFlags != nil {
		// path_sz from flags byte is a power-of-two exponent per firmware:

@@ -1926,53 +1926,3 @@ func TestDecodePathFromRawHex_Transport(t *testing.T) {
		}
	}
}

func TestDecodeTracePayloadFailSetsAnomaly(t *testing.T) {
	// Issue #889: TRACE packet with payload too short to decode (< 9 bytes)
	// should still return a DecodedPacket (observation stored) but with Anomaly
	// set to warn operators that the decode was degraded.
	// Packet: header 0x26 (TRACE+DIRECT), pathByte 0x00, payload 4 bytes (too short).
	pkt, err := DecodePacket("2600aabbccdd", nil, false)
	if err != nil {
		t.Fatalf("DecodePacket error: %v", err)
	}
	if pkt.Payload.Type != "TRACE" {
		t.Fatalf("payload type=%s, want TRACE", pkt.Payload.Type)
	}
	if pkt.Payload.Error == "" {
		t.Fatal("expected payload.Error to indicate decode failure")
	}
	// The key assertion: Anomaly must be set when TRACE decode fails
	if pkt.Anomaly == "" {
		t.Error("expected Anomaly to be set when TRACE payload decode fails but observation is stored")
	}
}

// TestDecodeTraceExtractsSNRValues verifies that for TRACE packets, the header
// path bytes are interpreted as int8 SNR values (quarter-dB) and exposed via
// payload.SNRValues. Mirrors logic in cmd/server/decoder.go (issue: SNR values
// extracted by server but never written into decoded_json by ingestor).
//
// Packet 26022FF8116A23A80000000001C0DE1000DEDE:
//	header 0x26 → TRACE (pt=9), DIRECT (rt=2)
//	pathByte 0x02 → hash_size=1, hash_count=2
//	header path: 2F F8 → SNR = [int8(0x2F)/4, int8(0xF8)/4] = [11.75, -2.0]
//	payload (15B): tag=116A23A8 auth=00000000 flags=0x01 pathData=C0DE1000DEDE
func TestDecodeTraceExtractsSNRValues(t *testing.T) {
	pkt, err := DecodePacket("26022FF8116A23A80000000001C0DE1000DEDE", nil, false)
	if err != nil {
		t.Fatalf("DecodePacket error: %v", err)
	}
	if pkt.Payload.Type != "TRACE" {
		t.Fatalf("payload type=%s, want TRACE", pkt.Payload.Type)
	}
	if len(pkt.Payload.SNRValues) != 2 {
		t.Fatalf("len(SNRValues)=%d, want 2 (got %v)", len(pkt.Payload.SNRValues), pkt.Payload.SNRValues)
	}
	if pkt.Payload.SNRValues[0] != 11.75 {
		t.Errorf("SNRValues[0]=%v, want 11.75", pkt.Payload.SNRValues[0])
	}
	if pkt.Payload.SNRValues[1] != -2.0 {
		t.Errorf("SNRValues[1]=%v, want -2.0", pkt.Payload.SNRValues[1])
	}
}
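
// For reference, the quarter-dB decoding asserted above reduces to a single
// expression — a minimal sketch (the helper name decodeSNRQuarterDB is
// illustrative, not part of this package): each header path byte is a signed
// int8 holding SNR in quarter-dB steps.
//
//	func decodeSNRQuarterDB(b byte) float64 {
//		return float64(int8(b)) / 4.0 // 0x2F → 11.75 dB, 0xF8 → -2.0 dB
//	}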

@@ -17,10 +17,6 @@ require github.com/meshcore-analyzer/packetpath v0.0.0

replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath

require github.com/meshcore-analyzer/dbconfig v0.0.0

replace github.com/meshcore-analyzer/dbconfig => ../../internal/dbconfig

require (
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/google/uuid v1.6.0 // indirect

@@ -57,12 +57,6 @@ func main() {
	defer store.Close()
	log.Printf("SQLite opened: %s", cfg.DBPath)

	// Async backfill: path_json from raw_hex (#888) — must not block MQTT startup
	store.BackfillPathJSONAsync()

	// Check auto_vacuum mode and optionally migrate (#919)
	store.CheckAutoVacuum(cfg)

	// Node retention: move stale nodes to inactive_nodes on startup
	nodeDays := cfg.NodeDaysOrDefault()
	store.MoveStaleNodes(nodeDays)
@@ -75,15 +69,12 @@ func main() {
	metricsDays := cfg.MetricsRetentionDays()
	store.PruneOldMetrics(metricsDays)
	store.PruneDroppedPackets(metricsDays)
	vacuumPages := cfg.IncrementalVacuumPages()
	store.RunIncrementalVacuum(vacuumPages)

	// Hourly ticker for node retention
	retentionTicker := time.NewTicker(1 * time.Hour)
	go func() {
		for range retentionTicker.C {
			store.MoveStaleNodes(nodeDays)
			store.RunIncrementalVacuum(vacuumPages)
		}
	}()

@@ -92,10 +83,8 @@ func main() {
	go func() {
		time.Sleep(90 * time.Second) // stagger after metrics prune
		store.RemoveStaleObservers(observerDays)
		store.RunIncrementalVacuum(vacuumPages)
		for range observerRetentionTicker.C {
			store.RemoveStaleObservers(observerDays)
			store.RunIncrementalVacuum(vacuumPages)
		}
	}()

@@ -105,7 +94,6 @@ func main() {
		for range metricsRetentionTicker.C {
			store.PruneOldMetrics(metricsDays)
			store.PruneDroppedPackets(metricsDays)
			store.RunIncrementalVacuum(vacuumPages)
		}
	}()

@@ -126,16 +114,29 @@ func main() {

	// Connect to each MQTT source
	var clients []mqtt.Client
	connectedCount := 0
	for _, source := range sources {
		tag := source.Name
		if tag == "" {
			tag = source.Broker
		}

		opts := buildMQTTOpts(source)
		connectTimeout := source.ConnectTimeoutOrDefault()
		log.Printf("MQTT [%s] connect timeout: %ds", tag, connectTimeout)
		opts := mqtt.NewClientOptions().
			AddBroker(source.Broker).
			SetAutoReconnect(true).
			SetConnectRetry(true).
			SetOrderMatters(true)

		if source.Username != "" {
			opts.SetUsername(source.Username)
		}
		if source.Password != "" {
			opts.SetPassword(source.Password)
		}
		if source.RejectUnauthorized != nil && !*source.RejectUnauthorized {
			opts.SetTLSConfig(&tls.Config{InsecureSkipVerify: true})
		} else if strings.HasPrefix(source.Broker, "ssl://") {
			opts.SetTLSConfig(&tls.Config{})
		}

		opts.SetOnConnectHandler(func(c mqtt.Client) {
			log.Printf("MQTT [%s] connected to %s", tag, source.Broker)
@@ -155,11 +156,7 @@ func main() {
		})

		opts.SetConnectionLostHandler(func(c mqtt.Client, err error) {
			log.Printf("MQTT [%s] disconnected from %s: %v", tag, source.Broker, err)
		})

		opts.SetReconnectingHandler(func(c mqtt.Client, options *mqtt.ClientOptions) {
			log.Printf("MQTT [%s] reconnecting to %s", tag, source.Broker)
			log.Printf("MQTT [%s] disconnected: %v", tag, err)
		})

		// Capture source for closure
@@ -170,43 +167,19 @@ func main() {

		client := mqtt.NewClient(opts)
		token := client.Connect()
		// With ConnectRetry=true, token.Wait() blocks forever for unreachable brokers.
		// WaitTimeout lets startup proceed; the client keeps retrying in the background
		// and OnConnect fires (subscribing) when it eventually connects (#910).
		if !token.WaitTimeout(time.Duration(connectTimeout) * time.Second) {
			log.Printf("MQTT [%s] initial connection timed out — retrying in background", tag)
			clients = append(clients, client)
			continue
		}
		token.Wait()
		if token.Error() != nil {
			log.Printf("MQTT [%s] connection failed (non-fatal): %v", tag, token.Error())
			// BL1 fix: Disconnect to stop Paho's internal retry goroutines.
			// With ConnectRetry=true, Connect() spawns background goroutines
			// that leak if the client is simply discarded.
			client.Disconnect(0)
			continue
		}
		connectedCount++
		clients = append(clients, client)
	}

	// BL2 fix: require at least one immediately-connected source. Timed-out
	// clients are retrying in background (tracked in clients) but don't count
	// as "connected" — a single unreachable broker must not silently run with
	// zero active connections.
	if connectedCount == 0 {
		// Clean up any timed-out clients still retrying
		for _, c := range clients {
			c.Disconnect(0)
		}
		log.Fatal("no MQTT sources connected — all timed out or failed. Check broker is running (default: mqtt://localhost:1883). Set MQTT_BROKER env var or configure mqttSources in config.json")
	if len(clients) == 0 {
		log.Fatal("no MQTT connections established — check broker is running (default: mqtt://localhost:1883). Set MQTT_BROKER env var or configure mqttSources in config.json")
	}

	if connectedCount < len(clients) {
		log.Printf("Running — %d MQTT source(s) connected, %d retrying in background", connectedCount, len(clients)-connectedCount)
	} else {
		log.Printf("Running — %d MQTT source(s) connected", connectedCount)
	}
	log.Printf("Running — %d MQTT source(s) connected", len(clients))

	// Wait for shutdown signal
	sig := make(chan os.Signal, 1)
@@ -224,32 +197,6 @@ func main() {
	log.Println("Done.")
}

// buildMQTTOpts creates MQTT client options for a source with bounded reconnect
// backoff, connect timeout, and TLS/auth configuration.
func buildMQTTOpts(source MQTTSource) *mqtt.ClientOptions {
	opts := mqtt.NewClientOptions().
		AddBroker(source.Broker).
		SetAutoReconnect(true).
		SetConnectRetry(true).
		SetOrderMatters(true).
		SetMaxReconnectInterval(30 * time.Second).
		SetConnectTimeout(10 * time.Second).
		SetWriteTimeout(10 * time.Second)

	if source.Username != "" {
		opts.SetUsername(source.Username)
	}
	if source.Password != "" {
		opts.SetPassword(source.Password)
	}
	if source.RejectUnauthorized != nil && !*source.RejectUnauthorized {
		opts.SetTLSConfig(&tls.Config{InsecureSkipVerify: true})
	} else if strings.HasPrefix(source.Broker, "ssl://") {
		opts.SetTLSConfig(&tls.Config{})
	}
	return opts
}

func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message, channelKeys map[string]string, cfg *Config) {
	defer func() {
		if r := recover(); r != nil {
@@ -270,21 +217,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
		return
	}

	// Observer blacklist: drop ALL messages from blacklisted observers before any
	// DB writes (status, metrics, packets). Trumps IATA filter.
	if len(parts) > 2 && cfg.IsObserverBlacklisted(parts[2]) {
		log.Printf("MQTT [%s] observer %.8s blacklisted, dropping", tag, parts[2])
		return
	}

	// Global observer IATA whitelist: if configured, drop messages from observers
	// in non-whitelisted IATA regions. Applies to ALL message types (status + packets).
	if len(parts) > 1 && !cfg.IsObserverIATAAllowed(parts[1]) {
		return
	}

	// Status topic: meshcore/<region>/<observer_id>/status
	// Per-source IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
	// IATA filter does NOT apply here — observer metadata (noise_floor, battery, etc.)
	// is region-independent and should be accepted from all observers regardless of
	// which IATA regions are configured for packet ingestion.
	if len(parts) >= 4 && parts[3] == "status" {
@@ -348,16 +282,8 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
	if len(parts) > 1 {
		region = parts[1]
	}
	// Fallback to source-level region config when topic has no region (#788)
	if region == "" && source.Region != "" {
		region = source.Region
	}

	mqttMsg := &MQTTPacketMessage{Raw: rawHex}
	// Parse optional region from JSON payload (#788)
	if v, ok := msg["region"].(string); ok && v != "" {
		mqttMsg.Region = v
	}
	if v, ok := msg["SNR"]; ok {
		if f, ok := toFloat64(v); ok {
			mqttMsg.SNR = &f
@@ -457,12 +383,7 @@ func handleMessage(store *Store, tag string, source MQTTSource, m mqtt.Message,
	// Upsert observer
	if observerID != "" {
		origin, _ := msg["origin"].(string)
		// Use effective region: payload > topic > source config (#788)
		effectiveRegion := region
		if mqttMsg.Region != "" {
			effectiveRegion = mqttMsg.Region
		}
		if err := store.UpsertObserver(observerID, origin, effectiveRegion, nil); err != nil {
		if err := store.UpsertObserver(observerID, origin, region, nil); err != nil {
			log.Printf("MQTT [%s] observer upsert error: %v", tag, err)
		}
	}

@@ -5,11 +5,8 @@ import (
	"math"
	"os"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func TestToFloat64(t *testing.T) {
@@ -783,155 +780,3 @@ func TestIATAFilterDoesNotDropStatusMessages(t *testing.T) {
		t.Error("packet from out-of-region BFL should still be filtered by IATA")
	}
}

// TestMQTTConnectRetryTimeoutDoesNotBlock verifies that WaitTimeout returns within
// the deadline for an unreachable broker when ConnectRetry=true (#910). Previously,
// token.Wait() would block forever in this configuration.
func TestMQTTConnectRetryTimeoutDoesNotBlock(t *testing.T) {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://127.0.0.1:1"). // port 1 — nothing listening, fast refusal
		SetConnectRetry(true).
		SetAutoReconnect(true)

	client := mqtt.NewClient(opts)
	token := client.Connect()
	defer client.Disconnect(100)

	start := time.Now()
	connected := token.WaitTimeout(3 * time.Second)
	elapsed := time.Since(start)

	if connected {
		t.Skip("port 1 unexpectedly accepted a connection — skipping")
	}
	if elapsed > 4*time.Second {
		t.Errorf("WaitTimeout blocked for %v — token.Wait() would block forever with ConnectRetry=true", elapsed)
	}
}

// TestBL1_GoroutineLeakOnHardFailure reproduces BLOCKER 1: without Disconnect()
// on the error path, Paho's internal retry goroutines leak when a client is
// discarded after Connect() with ConnectRetry=true.
//
// We prove the leak by creating N clients WITHOUT Disconnect — goroutines grow
// proportionally. The fix (client.Disconnect(0) before continue) prevents this.
func TestBL1_GoroutineLeakOnHardFailure(t *testing.T) {
	runtime.GC()
	time.Sleep(100 * time.Millisecond)
	baseline := runtime.NumGoroutine()

	// Create multiple clients connected to unreachable broker, WITHOUT disconnecting.
	// Each one spawns Paho retry goroutines that accumulate.
	const numClients = 10
	clients := make([]mqtt.Client, numClients)
	for i := 0; i < numClients; i++ {
		opts := mqtt.NewClientOptions().
			AddBroker("tcp://127.0.0.1:1").
			SetConnectRetry(true).
			SetAutoReconnect(true).
			SetConnectTimeout(500 * time.Millisecond)
		c := mqtt.NewClient(opts)
		tok := c.Connect()
		tok.WaitTimeout(1 * time.Second)
		clients[i] = c
	}

	time.Sleep(200 * time.Millisecond)
	leaked := runtime.NumGoroutine()
	goroutineGrowth := leaked - baseline

	// Clean up to not actually leak in test
	for _, c := range clients {
		c.Disconnect(0)
	}

	t.Logf("baseline=%d, after %d undisconnected clients=%d, growth=%d",
		baseline, numClients, leaked, goroutineGrowth)

	// With ConnectRetry=true, each Connect() spawns retry goroutines.
	// Without Disconnect, these accumulate. Verify growth is meaningful.
	if goroutineGrowth < 3 {
		t.Skip("Connect didn't spawn enough extra goroutines to measure leak")
	}

	// The fix: calling client.Disconnect(0) on the error path prevents accumulation.
	// Anti-tautology: removing the Disconnect(0) call from main.go's error path
	// would cause goroutine accumulation proportional to failed broker count.
	t.Logf("CONFIRMED: %d leaked goroutines from %d clients without Disconnect — fix adds Disconnect(0) on error path", goroutineGrowth, numClients)
}

// TestBL2_ZeroConnectedFatals verifies BLOCKER 2: when all brokers are unreachable,
// connectedCount==0 must be detected. We test the logic directly — if only timed-out
// clients exist (appended to clients slice) but connectedCount is 0, the guard triggers.
func TestBL2_ZeroConnectedFatals(t *testing.T) {
	// Simulate the connection loop result: 1 timed-out client, 0 connected
	var clients []mqtt.Client
	connectedCount := 0

	// Create a client that times out (unreachable broker)
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://127.0.0.1:1").
		SetConnectRetry(true).
		SetAutoReconnect(true)

	client := mqtt.NewClient(opts)
	token := client.Connect()
	if !token.WaitTimeout(2 * time.Second) {
		// Timed out — PR #926 appends to clients
		clients = append(clients, client)
	}
	defer func() {
		for _, c := range clients {
			c.Disconnect(0)
		}
	}()

	// OLD bug: len(clients) == 0 would be false (1 timed-out client in list)
	// → ingestor would silently run with zero connections
	if len(clients) == 0 {
		t.Fatal("expected timed-out client to be in clients slice")
	}

	// NEW fix: connectedCount == 0 catches this
	if connectedCount != 0 {
		t.Errorf("connectedCount should be 0, got %d", connectedCount)
	}

	// The real code does: if connectedCount == 0 { log.Fatal(...) }
	// This test proves len(clients) > 0 but connectedCount == 0 — the old guard
	// would have missed it.
	if len(clients) > 0 && connectedCount == 0 {
		t.Log("BL2 confirmed: old guard len(clients)==0 would NOT fatal; new guard connectedCount==0 correctly catches zero-connected state")
	}
}

func TestHandleMessageObserverIATAWhitelist(t *testing.T) {
	store := newTestStore(t)
	source := MQTTSource{Name: "test"}
	cfg := &Config{
		ObserverIATAWhitelist: []string{"ARN"},
	}

	// Message from non-whitelisted region GOT — should be dropped
	handleMessage(store, "test", source, &mockMessage{
		topic:   "meshcore/GOT/obs1/status",
		payload: []byte(`{"origin":"node1","noise_floor":-110}`),
	}, nil, cfg)

	var count int
	store.db.QueryRow("SELECT COUNT(*) FROM observers WHERE id='obs1'").Scan(&count)
	if count != 0 {
		t.Error("observer from non-whitelisted IATA GOT should be dropped")
	}

	// Message from whitelisted region ARN — should be accepted
	handleMessage(store, "test", source, &mockMessage{
		topic:   "meshcore/ARN/obs2/status",
		payload: []byte(`{"origin":"node2","noise_floor":-105}`),
	}, nil, cfg)

	store.db.QueryRow("SELECT COUNT(*) FROM observers WHERE id='obs2'").Scan(&count)
	if count != 1 {
		t.Errorf("observer from whitelisted IATA ARN should be accepted, got count=%d", count)
	}
}

@@ -1,76 +0,0 @@
package main

import (
	"testing"
	"time"
)

func TestBuildMQTTOpts_ReconnectSettings(t *testing.T) {
	source := MQTTSource{
		Broker: "tcp://localhost:1883",
		Name:   "test",
	}
	opts := buildMQTTOpts(source)

	if opts.MaxReconnectInterval != 30*time.Second {
		t.Errorf("MaxReconnectInterval = %v, want 30s", opts.MaxReconnectInterval)
	}
	if opts.ConnectTimeout != 10*time.Second {
		t.Errorf("ConnectTimeout = %v, want 10s", opts.ConnectTimeout)
	}
	if opts.WriteTimeout != 10*time.Second {
		t.Errorf("WriteTimeout = %v, want 10s", opts.WriteTimeout)
	}
	if !opts.AutoReconnect {
		t.Error("AutoReconnect should be true")
	}
	if !opts.ConnectRetry {
		t.Error("ConnectRetry should be true")
	}
}

func TestBuildMQTTOpts_Credentials(t *testing.T) {
	source := MQTTSource{
		Broker:   "tcp://broker:1883",
		Username: "user1",
		Password: "pass1",
	}
	opts := buildMQTTOpts(source)

	if opts.Username != "user1" {
		t.Errorf("Username = %q, want %q", opts.Username, "user1")
	}
	if opts.Password != "pass1" {
		t.Errorf("Password = %q, want %q", opts.Password, "pass1")
	}
}

func TestBuildMQTTOpts_TLS_InsecureSkipVerify(t *testing.T) {
	f := false
	source := MQTTSource{
		Broker:             "ssl://broker:8883",
		RejectUnauthorized: &f,
	}
	opts := buildMQTTOpts(source)

	if opts.TLSConfig == nil {
		t.Fatal("TLSConfig should be set")
	}
	if !opts.TLSConfig.InsecureSkipVerify {
		t.Error("InsecureSkipVerify should be true when RejectUnauthorized=false")
	}
}

func TestBuildMQTTOpts_TLS_SSL_Prefix(t *testing.T) {
	source := MQTTSource{
		Broker: "ssl://broker:8883",
	}
	opts := buildMQTTOpts(source)

	if opts.TLSConfig == nil {
		t.Fatal("TLSConfig should be set for ssl:// brokers")
	}
	if opts.TLSConfig.InsecureSkipVerify {
		t.Error("InsecureSkipVerify should be false by default")
	}
}
@@ -1,43 +0,0 @@
package main

import (
	"testing"
)

func TestIngestorIsObserverBlacklisted(t *testing.T) {
	cfg := &Config{
		ObserverBlacklist: []string{"OBS1", "obs2"},
	}

	tests := []struct {
		id   string
		want bool
	}{
		{"OBS1", true},
		{"obs1", true},
		{"OBS2", true},
		{"obs3", false},
		{"", false},
	}

	for _, tt := range tests {
		got := cfg.IsObserverBlacklisted(tt.id)
		if got != tt.want {
			t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
		}
	}
}

func TestIngestorIsObserverBlacklistedEmpty(t *testing.T) {
	cfg := &Config{}
	if cfg.IsObserverBlacklisted("anything") {
		t.Error("empty blacklist should not match")
	}
}

func TestIngestorIsObserverBlacklistedNil(t *testing.T) {
	var cfg *Config
	if cfg.IsObserverBlacklisted("anything") {
		t.Error("nil config should not match")
	}
}
@@ -1,89 +0,0 @@
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// handleBackup streams a consistent SQLite snapshot of the analyzer DB.
//
// Requires API-key authentication (mounted via requireAPIKey in routes.go).
//
// Strategy: SQLite's `VACUUM INTO 'path'` produces an atomic, defragmented
// copy of the current database into a new file. It runs at READ ISOLATION
// against the source DB (works on our read-only connection) and never
// blocks concurrent writers — the ingestor keeps writing to the WAL while
// the snapshot is taken from a consistent read transaction.
//
// Response:
//
//	200 OK
//	Content-Type: application/octet-stream
//	Content-Disposition: attachment; filename="corescope-backup-<unix>.db"
//	<body: complete SQLite database file>
//
// The temp file is removed after the response is fully written, regardless
// of whether the client successfully consumed the stream.
func (s *Server) handleBackup(w http.ResponseWriter, r *http.Request) {
	if s.db == nil || s.db.conn == nil {
		writeError(w, http.StatusServiceUnavailable, "database unavailable")
		return
	}

	ts := time.Now().UTC().Unix()
	clientIP := r.Header.Get("X-Forwarded-For")
	if clientIP == "" {
		clientIP = r.RemoteAddr
	}
	log.Printf("[backup] generating backup for client %s", clientIP)

	// Stage the snapshot in the OS temp dir so we never touch the live DB
	// directory (avoids confusing operators / accidental WAL clobber).
	tmpDir, err := os.MkdirTemp("", "corescope-backup-")
	if err != nil {
		writeError(w, http.StatusInternalServerError, "tempdir failed: "+err.Error())
		return
	}
	defer func() {
		if rmErr := os.RemoveAll(tmpDir); rmErr != nil {
			log.Printf("[backup] cleanup error: %v", rmErr)
		}
	}()

	snapshotPath := filepath.Join(tmpDir, fmt.Sprintf("corescope-backup-%d.db", ts))

	// SQLite parses the path literal — escape any single quotes defensively.
	// (mkdtemp output won't contain quotes, but be paranoid for future-proofing.)
	escaped := strings.ReplaceAll(snapshotPath, "'", "''")
	if _, err := s.db.conn.ExecContext(r.Context(), fmt.Sprintf("VACUUM INTO '%s'", escaped)); err != nil {
		writeError(w, http.StatusInternalServerError, "snapshot failed: "+err.Error())
		return
	}

	f, err := os.Open(snapshotPath)
	if err != nil {
		writeError(w, http.StatusInternalServerError, "open snapshot failed: "+err.Error())
		return
	}
	defer f.Close()

	stat, err := f.Stat()
	if err == nil {
		w.Header().Set("Content-Length", fmt.Sprintf("%d", stat.Size()))
	}
	w.Header().Set("Content-Type", "application/octet-stream")
	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"corescope-backup-%d.db\"", ts))
	w.Header().Set("X-Content-Type-Options", "nosniff")
	w.WriteHeader(http.StatusOK)

	if _, err := io.Copy(w, f); err != nil {
		// Headers already flushed; just log. Client will see truncated stream.
		log.Printf("[backup] stream error: %v", err)
	}
}
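
// Example client usage — a sketch, not project documentation: the /api/backup
// path and the X-API-Key header are confirmed by the tests that follow; the
// key variable and host are illustrative.
//
//	curl -H "X-API-Key: $CORESCOPE_API_KEY" \
//	     -o corescope-backup.db https://<server>/api/backup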
@@ -1,55 +0,0 @@
package main

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// sqliteMagic is the 16-byte file header identifying a valid SQLite 3 database.
// See https://www.sqlite.org/fileformat.html#magic_header_string
const sqliteMagic = "SQLite format 3\x00"

func TestBackupRequiresAPIKey(t *testing.T) {
	_, router := setupTestServerWithAPIKey(t, "test-secret-key-strong-enough")

	req := httptest.NewRequest("GET", "/api/backup", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)
	if w.Code != http.StatusUnauthorized {
		t.Fatalf("expected 401 without API key, got %d (body: %s)", w.Code, w.Body.String())
	}
}

func TestBackupReturnsValidSQLiteSnapshot(t *testing.T) {
	const apiKey = "test-secret-key-strong-enough"
	_, router := setupTestServerWithAPIKey(t, apiKey)

	req := httptest.NewRequest("GET", "/api/backup", nil)
	req.Header.Set("X-API-Key", apiKey)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d (body: %s)", w.Code, w.Body.String())
	}

	ct := w.Header().Get("Content-Type")
	if ct != "application/octet-stream" {
		t.Errorf("expected Content-Type application/octet-stream, got %q", ct)
	}

	cd := w.Header().Get("Content-Disposition")
	if !strings.HasPrefix(cd, "attachment;") || !strings.Contains(cd, "filename=\"corescope-backup-") || !strings.HasSuffix(cd, ".db\"") {
		t.Errorf("expected Content-Disposition attachment with corescope-backup-<ts>.db filename, got %q", cd)
	}

	body := w.Body.Bytes()
	if len(body) < len(sqliteMagic) {
		t.Fatalf("backup body too short (%d bytes) — expected SQLite file", len(body))
	}
	if got := string(body[:len(sqliteMagic)]); got != sqliteMagic {
		t.Fatalf("expected SQLite magic header %q, got %q", sqliteMagic, got)
	}
}
@@ -127,92 +127,6 @@ func TestBoundedLoad_AscendingOrder(t *testing.T) {
	}
}

// loadStoreWithRetention creates a PacketStore with retentionHours set.
func loadStoreWithRetention(t *testing.T, dbPath string, retentionHours float64) *PacketStore {
	t.Helper()
	db, err := OpenDB(dbPath)
	if err != nil {
		t.Fatal(err)
	}
	cfg := &PacketStoreConfig{RetentionHours: retentionHours}
	store := NewPacketStore(db, cfg)
	if err := store.Load(); err != nil {
		t.Fatal(err)
	}
	return store
}

// createTestDBWithAgedPackets inserts numRecent packets with timestamps within
// the last hour and numOld packets with timestamps 48 hours ago.
func createTestDBWithAgedPackets(t *testing.T, numRecent, numOld int) string {
	t.Helper()
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")

	conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	execOrFail := func(s string) {
		if _, err := conn.Exec(s); err != nil {
			t.Fatalf("setup: %v\nSQL: %s", err, s)
		}
	}
	execOrFail(`CREATE TABLE transmissions (id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT, route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT)`)
	execOrFail(`CREATE TABLE observations (id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT, direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT)`)
	execOrFail(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
	execOrFail(`CREATE TABLE nodes (pubkey TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL, last_seen TEXT, first_seen TEXT, frequency REAL)`)
	execOrFail(`CREATE TABLE schema_version (version INTEGER)`)
	execOrFail(`INSERT INTO schema_version (version) VALUES (1)`)
	execOrFail(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)

	now := time.Now().UTC()
	id := 1
	// Insert old packets (48 hours ago)
	for i := 0; i < numOld; i++ {
		ts := now.Add(-48 * time.Hour).Add(time.Duration(i) * time.Second).Format(time.RFC3339)
		conn.Exec("INSERT INTO transmissions VALUES (?,?,?,?,0,4,1,?)", id, "aa", fmt.Sprintf("old%d", i), ts, `{}`)
		conn.Exec("INSERT INTO observations VALUES (?,?,?,?,?,?,?,?,?,?,?)", id, id, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `[]`, ts, "")
		id++
	}
	// Insert recent packets (within last hour)
	for i := 0; i < numRecent; i++ {
		ts := now.Add(-30 * time.Minute).Add(time.Duration(i) * time.Second).Format(time.RFC3339)
		conn.Exec("INSERT INTO transmissions VALUES (?,?,?,?,0,4,1,?)", id, "bb", fmt.Sprintf("new%d", i), ts, `{}`)
		conn.Exec("INSERT INTO observations VALUES (?,?,?,?,?,?,?,?,?,?,?)", id, id, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `[]`, ts, "")
		id++
	}
	return dbPath
}

func TestRetentionLoad_OnlyLoadsRecentPackets(t *testing.T) {
	dbPath := createTestDBWithAgedPackets(t, 50, 100)
	defer os.RemoveAll(filepath.Dir(dbPath))

	// retention = 2 hours — should load only the 50 recent packets, not the 100 old ones
	store := loadStoreWithRetention(t, dbPath, 2)
	defer store.db.conn.Close()

	if len(store.packets) != 50 {
		t.Errorf("expected 50 recent packets, got %d (old packets should be excluded by retentionHours)", len(store.packets))
	}
}

func TestRetentionLoad_ZeroRetentionLoadsAll(t *testing.T) {
	dbPath := createTestDBWithAgedPackets(t, 50, 100)
	defer os.RemoveAll(filepath.Dir(dbPath))

	// retention = 0 (unlimited) — should load all 150 packets
	store := loadStoreWithRetention(t, dbPath, 0)
	defer store.db.conn.Close()

	if len(store.packets) != 150 {
		t.Errorf("expected all 150 packets with retentionHours=0, got %d", len(store.packets))
	}
}
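
// A plausible shape for the retention-bounded load these tests exercise — an
// assumption for illustration, not the store's actual query: with
// retentionHours > 0, Load() computes a cutoff and skips older rows
// (retentionHours == 0 disables the filter).
//
//	cutoff := time.Now().UTC().
//		Add(-time.Duration(retentionHours * float64(time.Hour))).
//		Format(time.RFC3339)
//	rows, err := db.conn.Query(
//		`SELECT id, raw_hex, hash, first_seen FROM transmissions
//		 WHERE first_seen >= ? ORDER BY first_seen ASC`, cutoff)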

func TestEstimateStoreTxBytesTypical(t *testing.T) {
	est := estimateStoreTxBytesTypical(10)
	if est < 1000 {

@@ -1,168 +0,0 @@
package main

import (
	"encoding/json"
	"testing"
	"time"
)

var _ = time.Second // suppress unused import

// Helper to create a minimal PacketStore with GRP_TXT packets for channel analytics testing.
func newChannelTestStore(packets []*StoreTx) *PacketStore {
	ps := &PacketStore{
		packets:         packets,
		byHash:          make(map[string]*StoreTx),
		byTxID:          make(map[int]*StoreTx),
		byObsID:         make(map[int]*StoreObs),
		byObserver:      make(map[string][]*StoreObs),
		byNode:          make(map[string][]*StoreTx),
		byPathHop:       make(map[string][]*StoreTx),
		nodeHashes:      make(map[string]map[string]bool),
		byPayloadType:   make(map[int][]*StoreTx),
		rfCache:         make(map[string]*cachedResult),
		topoCache:       make(map[string]*cachedResult),
		hashCache:       make(map[string]*cachedResult),
		collisionCache:  make(map[string]*cachedResult),
		chanCache:       make(map[string]*cachedResult),
		distCache:       make(map[string]*cachedResult),
		subpathCache:    make(map[string]*cachedResult),
		spIndex:         make(map[string]int),
		spTxIndex:       make(map[string][]*StoreTx),
		advertPubkeys:   make(map[string]int),
		lastSeenTouched: make(map[string]time.Time),
		clockSkew:       NewClockSkewEngine(),
	}
	ps.byPayloadType[5] = packets
	return ps
}

func makeGrpTx(channelHash int, channel, text, sender string) *StoreTx {
	decoded := map[string]interface{}{
		"type":        "CHAN",
		"channelHash": float64(channelHash),
		"channel":     channel,
		"text":        text,
		"sender":      sender,
	}
	b, _ := json.Marshal(decoded)
	pt := 5
	return &StoreTx{
		ID:          1,
		DecodedJSON: string(b),
		FirstSeen:   "2026-05-01T12:00:00Z",
		PayloadType: &pt,
	}
}

// TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted verifies that packets
// with the same hash byte but different decryption status merge into ONE bucket.
func TestComputeAnalyticsChannels_MergesEncryptedAndDecrypted(t *testing.T) {
	// Hash 129 is the real hash for #wardriving: SHA256(SHA256("#wardriving")[:16])[0] = 129
	// Some packets are decrypted (have channel name), some are not (encrypted)
	packets := []*StoreTx{
		makeGrpTx(129, "#wardriving", "hello", "alice"),
		makeGrpTx(129, "#wardriving", "world", "bob"),
		makeGrpTx(129, "", "", ""), // encrypted — no channel name
		makeGrpTx(129, "", "", ""), // encrypted
	}

	store := newChannelTestStore(packets)
	result := store.computeAnalyticsChannels("", TimeWindow{})

	channels := result["channels"].([]map[string]interface{})
	if len(channels) != 1 {
		t.Fatalf("expected 1 channel bucket, got %d: %+v", len(channels), channels)
	}
	ch := channels[0]
	if ch["name"] != "#wardriving" {
		t.Errorf("expected name '#wardriving', got %q", ch["name"])
	}
	if ch["messages"] != 4 {
		t.Errorf("expected 4 messages, got %v", ch["messages"])
	}
	if ch["encrypted"] != false {
		t.Errorf("expected encrypted=false (some packets decrypted), got %v", ch["encrypted"])
	}
}
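
// The hash arithmetic behind the "129" constant, written out — a sketch that
// implements exactly the derivation stated in the comment above
// (SHA256(SHA256(name)[:16])[0]); the helper name channelHashByte is
// illustrative, not part of this package:
//
//	func channelHashByte(name string) byte {
//		inner := sha256.Sum256([]byte(name)) // needs crypto/sha256
//		outer := sha256.Sum256(inner[:16])
//		return outer[0] // "#wardriving" → 129
//	}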

// TestComputeAnalyticsChannels_RejectsRainbowTableMismatch verifies that a packet
// with channelHash=72 but channel="#wardriving" (mismatch) does NOT create a
// "#wardriving" bucket — it falls into "ch72" instead.
func TestComputeAnalyticsChannels_RejectsRainbowTableMismatch(t *testing.T) {
	// Hash 72 is NOT the correct hash for #wardriving (which is 129).
	// This simulates a rainbow-table collision/mismatch.
	packets := []*StoreTx{
		makeGrpTx(72, "#wardriving", "ghost", "eve"),   // mismatch: hash 72 != wardriving's real hash
		makeGrpTx(129, "#wardriving", "real", "alice"), // correct match
	}

	store := newChannelTestStore(packets)
	result := store.computeAnalyticsChannels("", TimeWindow{})

	channels := result["channels"].([]map[string]interface{})
	if len(channels) != 2 {
		t.Fatalf("expected 2 channel buckets, got %d: %+v", len(channels), channels)
	}

	// Find the buckets
	var ch72, ch129 map[string]interface{}
	for _, ch := range channels {
		if ch["hash"] == "72" {
			ch72 = ch
		} else if ch["hash"] == "129" {
			ch129 = ch
		}
	}

	if ch72 == nil {
		t.Fatal("expected a bucket for hash 72")
	}
	if ch129 == nil {
		t.Fatal("expected a bucket for hash 129")
	}

	// ch72 should NOT be named "#wardriving" — it should be the placeholder
	if ch72["name"] == "#wardriving" {
		t.Errorf("hash 72 bucket should NOT be named '#wardriving' (rainbow-table mismatch rejected)")
	}
	if ch72["name"] != "ch72" {
		t.Errorf("expected hash 72 bucket named 'ch72', got %q", ch72["name"])
	}

	// ch129 should be named "#wardriving"
	if ch129["name"] != "#wardriving" {
		t.Errorf("expected hash 129 bucket named '#wardriving', got %q", ch129["name"])
	}
}

// TestChannelNameMatchesHash verifies the hash validation function.
func TestChannelNameMatchesHash(t *testing.T) {
	// #wardriving hashes to 129
	if !channelNameMatchesHash("#wardriving", "129") {
		t.Error("expected #wardriving to match hash 129")
	}
	if channelNameMatchesHash("#wardriving", "72") {
		t.Error("expected #wardriving to NOT match hash 72")
	}
	// Without leading # should also work
	if !channelNameMatchesHash("wardriving", "129") {
		t.Error("expected wardriving (without #) to match hash 129")
	}
}

// TestIsPlaceholderName verifies placeholder detection.
func TestIsPlaceholderName(t *testing.T) {
	if !isPlaceholderName("ch129") {
		t.Error("ch129 should be placeholder")
	}
	if !isPlaceholderName("ch0") {
		t.Error("ch0 should be placeholder")
	}
	if isPlaceholderName("#wardriving") {
		t.Error("#wardriving should NOT be placeholder")
	}
	if isPlaceholderName("Public") {
		t.Error("Public should NOT be placeholder")
	}
}
@@ -120,8 +120,6 @@ type NodeClockSkew struct {
	GoodFraction         float64             `json:"goodFraction"`         // fraction of recent samples with |skew| <= 1h
	RecentBadSampleCount int                 `json:"recentBadSampleCount"` // count of recent samples with |skew| > 1h
	RecentSampleCount    int                 `json:"recentSampleCount"`    // total recent samples in window
	RecentHashEvidence   []HashEvidence      `json:"recentHashEvidence,omitempty"`
	CalibrationSummary   *CalibrationSummary `json:"calibrationSummary,omitempty"`
	NodeName             string              `json:"nodeName,omitempty"` // populated in fleet responses
	NodeRole             string              `json:"nodeRole,omitempty"` // populated in fleet responses
}
@@ -132,31 +130,6 @@ type SkewSample struct {
	SkewSec float64 `json:"skew"` // corrected skew in seconds
}

// HashEvidenceObserver is one observer's contribution to a per-hash evidence entry.
type HashEvidenceObserver struct {
	ObserverID        string  `json:"observerID"`
	ObserverName      string  `json:"observerName"`
	RawSkewSec        float64 `json:"rawSkewSec"`
	CorrectedSkewSec  float64 `json:"correctedSkewSec"`
	ObserverOffsetSec float64 `json:"observerOffsetSec"`
	Calibrated        bool    `json:"calibrated"`
}

// HashEvidence is per-hash clock skew evidence showing individual observer contributions.
type HashEvidence struct {
	Hash                   string                 `json:"hash"`
	Observers              []HashEvidenceObserver `json:"observers"`
	MedianCorrectedSkewSec float64                `json:"medianCorrectedSkewSec"`
	Timestamp              int64                  `json:"timestamp"`
}

// CalibrationSummary counts how many samples were corrected via observer calibration.
type CalibrationSummary struct {
	TotalSamples        int `json:"totalSamples"`
	CalibratedSamples   int `json:"calibratedSamples"`
	UncalibratedSamples int `json:"uncalibratedSamples"`
}

// txSkewResult maps tx hash → per-transmission skew stats. This is an
// intermediate result keyed by hash (not pubkey); the store maps hash → pubkey
// when building the final per-node view.
@@ -170,27 +143,15 @@ type ClockSkewEngine struct {
	observerOffsets map[string]float64 // observerID → calibrated offset (seconds)
	observerSamples map[string]int     // observerID → number of multi-observer packets used
	nodeSkew        txSkewResult
	hashEvidence    map[string][]hashEvidenceEntry // hash → per-observer raw/corrected data
	lastComputed    time.Time
	computeInterval time.Duration
}

// hashEvidenceEntry stores raw evidence per observer per hash, cached during Recompute.
type hashEvidenceEntry struct {
	observerID string
	rawSkew    float64
	corrected  float64
	offset     float64
	calibrated bool
	observedTS int64
}

func NewClockSkewEngine() *ClockSkewEngine {
	return &ClockSkewEngine{
		observerOffsets: make(map[string]float64),
		observerSamples: make(map[string]int),
		nodeSkew:        make(txSkewResult),
		hashEvidence:    make(map[string][]hashEvidenceEntry),
		computeInterval: 30 * time.Second,
	}
}
@@ -215,16 +176,14 @@ func (e *ClockSkewEngine) Recompute(store *PacketStore) {
	var newOffsets map[string]float64
	var newSamples map[string]int
	var newNodeSkew txSkewResult
	var newHashEvidence map[string][]hashEvidenceEntry

	if len(samples) > 0 {
		newOffsets, newSamples = calibrateObservers(samples)
		newNodeSkew, newHashEvidence = computeNodeSkew(samples, newOffsets)
		newNodeSkew = computeNodeSkew(samples, newOffsets)
	} else {
		newOffsets = make(map[string]float64)
		newSamples = make(map[string]int)
		newNodeSkew = make(txSkewResult)
		newHashEvidence = make(map[string][]hashEvidenceEntry)
	}

	// Swap results under brief write lock.
@@ -237,7 +196,6 @@ func (e *ClockSkewEngine) Recompute(store *PacketStore) {
	e.observerOffsets = newOffsets
	e.observerSamples = newSamples
	e.nodeSkew = newNodeSkew
	e.hashEvidence = newHashEvidence
	e.lastComputed = time.Now()
	e.mu.Unlock()
}
@@ -374,7 +332,7 @@ func calibrateObservers(samples []skewSample) (map[string]float64, map[string]in
// ── Phase 3: Per-Node Skew ─────────────────────────────────────────────────────

// computeNodeSkew calculates corrected skew statistics for each node.
func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) (txSkewResult, map[string][]hashEvidenceEntry) {
func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) txSkewResult {
	// Compute corrected skew per sample, grouped by hash (each hash = one
	// node's advert transmission). The caller maps hash → pubkey via byNode.
	type correctedSample struct {
@@ -385,7 +343,6 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) (txSke

	byHash := make(map[string][]correctedSample)
	hashAdvertTS := make(map[string]int64)
	evidence := make(map[string][]hashEvidenceEntry) // hash → per-observer evidence

	for _, s := range samples {
		obsOffset, hasCal := obsOffsets[s.observerID]
@@ -402,14 +359,6 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) (txSke
			calibrated: hasCal,
		})
		hashAdvertTS[s.hash] = s.advertTS
		evidence[s.hash] = append(evidence[s.hash], hashEvidenceEntry{
			observerID: s.observerID,
			rawSkew:    round(rawSkew, 1),
			corrected:  round(corrected, 1),
			offset:     round(obsOffset, 1),
			calibrated: hasCal,
			observedTS: s.observedTS,
		})
	}

	// Each hash represents one advert from one node. Compute median corrected
@@ -448,7 +397,7 @@ func computeNodeSkew(samples []skewSample, obsOffsets map[string]float64) (txSke
			LastObservedTS: latestObsTS,
		}
	}
	return result, evidence
	return result
}

// ── Integration with PacketStore ───────────────────────────────────────────────
@@ -609,70 +558,6 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
		samples[i] = SkewSample{Timestamp: p.ts, SkewSec: round(p.skew, 1)}
	}

	// Build per-hash evidence (most recent 10 hashes with ≥1 observer).
	// Observer name lookup from store observations.
	obsNameMap := make(map[string]string)
	type hashMeta struct {
		hash string
		ts   int64
	}
	var evidenceHashes []hashMeta
	for _, tx := range txs {
		if tx.PayloadType == nil || *tx.PayloadType != PayloadADVERT {
			continue
		}
		ev, ok := s.clockSkew.hashEvidence[tx.Hash]
		if !ok || len(ev) == 0 {
			continue
		}
		// Collect observer names from tx observations.
		for _, obs := range tx.Observations {
			if obs.ObserverID != "" && obs.ObserverName != "" {
				obsNameMap[obs.ObserverID] = obs.ObserverName
			}
		}
		evidenceHashes = append(evidenceHashes, hashMeta{hash: tx.Hash, ts: ev[0].observedTS})
	}
	// Sort by timestamp descending, take most recent 10.
	sort.Slice(evidenceHashes, func(i, j int) bool { return evidenceHashes[i].ts > evidenceHashes[j].ts })
	if len(evidenceHashes) > 10 {
		evidenceHashes = evidenceHashes[:10]
	}
	var recentEvidence []HashEvidence
	var calSummary CalibrationSummary
	for _, eh := range evidenceHashes {
		entries := s.clockSkew.hashEvidence[eh.hash]
		var observers []HashEvidenceObserver
		var corrSkews []float64
		for _, e := range entries {
			name := obsNameMap[e.observerID]
			if name == "" {
				name = e.observerID
			}
			observers = append(observers, HashEvidenceObserver{
				ObserverID:        e.observerID,
				ObserverName:      name,
				RawSkewSec:        e.rawSkew,
				CorrectedSkewSec:  e.corrected,
				ObserverOffsetSec: e.offset,
				Calibrated:        e.calibrated,
			})
			corrSkews = append(corrSkews, e.corrected)
			calSummary.TotalSamples++
			if e.calibrated {
				calSummary.CalibratedSamples++
			} else {
				calSummary.UncalibratedSamples++
			}
		}
		recentEvidence = append(recentEvidence, HashEvidence{
			Hash:                   eh.hash,
			Observers:              observers,
			MedianCorrectedSkewSec: round(median(corrSkews), 1),
			Timestamp:              eh.ts,
		})
	}

	return &NodeClockSkew{
		Pubkey:      pubkey,
		MeanSkewSec: round(meanSkew, 1),
@@ -689,8 +574,6 @@ func (s *PacketStore) getNodeClockSkewLocked(pubkey string) *NodeClockSkew {
		GoodFraction:         round(goodFraction, 2),
		RecentBadSampleCount: recentBadCount,
		RecentSampleCount:    recentSampleCount,
		RecentHashEvidence:   recentEvidence,
		CalibrationSummary:   &calSummary,
	}
}

@@ -718,10 +601,8 @@ func (s *PacketStore) GetFleetClockSkew() []*NodeClockSkew {
			cs.NodeName = ni.Name
			cs.NodeRole = ni.Role
		}
		// Omit samples and evidence in fleet response (too much data).
		// Omit samples in fleet response (too much data).
		cs.Samples = nil
		cs.RecentHashEvidence = nil
		cs.CalibrationSummary = nil
		results = append(results, cs)
	}
	return results

@@ -191,7 +191,7 @@ func TestComputeNodeSkew_BasicCorrection(t *testing.T) {
|
||||
// So the median approach finds obs2 is +5 ahead (relative to median)
|
||||
|
||||
// Now compute node skew with those offsets:
|
||||
nodeSkew, _ := computeNodeSkew(samples, offsets)
|
||||
nodeSkew := computeNodeSkew(samples, offsets)
|
||||
cs, ok := nodeSkew["h1"]
|
||||
if !ok {
|
||||
t.Fatal("expected skew data for hash h1")
|
||||
@@ -220,7 +220,7 @@ func TestComputeNodeSkew_ThreeObservers(t *testing.T) {
|
||||
t.Errorf("obs3 offset = %v, want 30", offsets["obs3"])
|
||||
}
|
||||
|
||||
nodeSkew, _ := computeNodeSkew(samples, offsets)
|
||||
nodeSkew := computeNodeSkew(samples, offsets)
|
||||
cs := nodeSkew["h1"]
|
||||
if cs == nil {
|
||||
t.Fatal("expected skew data for h1")
|
||||
@@ -954,104 +954,3 @@ func TestAllGood_OK_845(t *testing.T) {
|
||||
t.Errorf("recentBadSampleCount = %v, want 0", r.RecentBadSampleCount)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeClockSkew_EvidencePayload(t *testing.T) {
|
||||
// 3-observer scenario: obs1 ahead by +2s, obs2 on time, obs3 behind by -1s.
|
||||
// Node clock is 60s ahead. Raw skew = advertTS - obsTS.
|
||||
// Hash has 3 observations, each observer sees same advert.
|
||||
ps := NewPacketStore(nil, nil)
|
||||
|
||||
pt := 4 // ADVERT
|
||||
    // Advert timestamp: 1700000060 (node 60s ahead of true time 1700000000)
    // obs1 sees at 1700000002 (2s ahead of true time) → raw = 60 - 2 = 58
    // obs2 sees at 1700000000 (on time) → raw = 60 - 0 = 60
    // obs3 sees at 1699999999 (-1s, behind) → raw = 60 + 1 = 61
    // Median obsTS = 1700000000, so:
    // obs1 offset = 1700000002 - 1700000000 = +2
    // obs2 offset = 0
    // obs3 offset = 1699999999 - 1700000000 = -1
    // Corrected: raw + offset → obs1: 58+2=60, obs2: 60+0=60, obs3: 61+(-1)=60

    tx1 := &StoreTx{
        Hash:        "evidence_hash_1",
        PayloadType: &pt,
        DecodedJSON: `{"payload":{"timestamp":1700000060}}`,
        Observations: []*StoreObs{
            {ObserverID: "obs1", ObserverName: "Observer Alpha", Timestamp: "2023-11-14T22:13:22Z"},
            {ObserverID: "obs2", ObserverName: "Observer Beta", Timestamp: "2023-11-14T22:13:20Z"},
            {ObserverID: "obs3", ObserverName: "Observer Gamma", Timestamp: "2023-11-14T22:13:19Z"},
        },
    }
    // Second hash to ensure we get multiple evidence entries.
    tx2 := &StoreTx{
        Hash:        "evidence_hash_2",
        PayloadType: &pt,
        DecodedJSON: `{"payload":{"timestamp":1700003660}}`,
        Observations: []*StoreObs{
            {ObserverID: "obs1", ObserverName: "Observer Alpha", Timestamp: "2023-11-14T23:13:22Z"},
            {ObserverID: "obs2", ObserverName: "Observer Beta", Timestamp: "2023-11-14T23:13:20Z"},
            {ObserverID: "obs3", ObserverName: "Observer Gamma", Timestamp: "2023-11-14T23:13:19Z"},
        },
    }

    ps.mu.Lock()
    ps.byNode["NODETEST"] = []*StoreTx{tx1, tx2}
    ps.byPayloadType[4] = []*StoreTx{tx1, tx2}
    ps.clockSkew.computeInterval = 0
    ps.mu.Unlock()

    r := ps.GetNodeClockSkew("NODETEST")
    if r == nil {
        t.Fatal("expected clock skew result")
    }

    // Check recentHashEvidence exists.
    if len(r.RecentHashEvidence) == 0 {
        t.Fatal("expected recentHashEvidence to be populated")
    }
    if len(r.RecentHashEvidence) != 2 {
        t.Errorf("recentHashEvidence length = %d, want 2", len(r.RecentHashEvidence))
    }

    // Check first evidence entry has 3 observers.
    ev := r.RecentHashEvidence[0]
    if len(ev.Observers) != 3 {
        t.Fatalf("evidence observers = %d, want 3", len(ev.Observers))
    }

    // Verify corrected = raw + offset for each observer.
    for _, o := range ev.Observers {
        expected := o.RawSkewSec + o.ObserverOffsetSec
        if math.Abs(o.CorrectedSkewSec-expected) > 0.2 {
            t.Errorf("observer %s: corrected=%.1f, expected raw(%.1f)+offset(%.1f)=%.1f",
                o.ObserverID, o.CorrectedSkewSec, o.RawSkewSec, o.ObserverOffsetSec, expected)
        }
    }

    // All corrected values should be ~60s (node is 60s ahead).
    if math.Abs(ev.MedianCorrectedSkewSec-60) > 1 {
        t.Errorf("median corrected = %.1f, want ~60", ev.MedianCorrectedSkewSec)
    }

    // Check calibration summary.
    if r.CalibrationSummary == nil {
        t.Fatal("expected calibrationSummary")
    }
    if r.CalibrationSummary.TotalSamples != 6 { // 3 observers × 2 hashes
        t.Errorf("calibration total = %d, want 6", r.CalibrationSummary.TotalSamples)
    }
    if r.CalibrationSummary.CalibratedSamples != 6 {
        t.Errorf("calibrated = %d, want 6 (all multi-observer)", r.CalibrationSummary.CalibratedSamples)
    }

    // Check observer names are populated.
    nameFound := false
    for _, o := range ev.Observers {
        if o.ObserverName == "Observer Alpha" || o.ObserverName == "Observer Beta" {
            nameFound = true
        }
    }
    if !nameFound {
        t.Error("expected observer names to be populated from tx observations")
    }
}
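
// A minimal sketch of the calibration checked above (variable names here are
// illustrative, not the store's API). The median observation time is treated
// as true time; each observer's offset from that median is added back onto its
// raw skew, so corrected = advertTS - medianObsTS for every observer, and all
// three observers converge on the node's real +60s error.
//
//  rawSkew := advertTS - obsTS   // per observer
//  offset := obsTS - medianObsTS // that observer's clock error vs. consensus
//  corrected := rawSkew + offset // = advertTS - medianObsTS for all observers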

@@ -0,0 +1,403 @@
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "testing"
    "time"
)

// TestConcurrentIngestAndEviction exercises the race between IngestNewFromDB
// adding packets (via direct store manipulation simulating the locked section)
// and RunEviction removing packets. Without proper locking this would trigger
// the race detector and produce inconsistent index state.
func TestConcurrentIngestAndEviction(t *testing.T) {
    // Seed store with 200 old packets that are eligible for eviction
    startTime := time.Now().UTC().Add(-48 * time.Hour)
    store := makeTestStore(200, startTime, 1)
    store.retentionHours = 24 // everything older than 24h is evictable
    store.loaded = true

    // Track bytes for all seeded packets
    for _, tx := range store.packets {
        store.trackedBytes += estimateStoreTxBytes(tx)
        for _, obs := range tx.Observations {
            store.trackedBytes += estimateStoreObsBytes(obs)
        }
    }

    const numIngestGoroutines = 5
    const packetsPerGoroutine = 50
    const numEvictionGoroutines = 3

    var wg sync.WaitGroup
    var ingestedCount int64

    // Concurrent ingest: simulate what IngestNewFromDB does under the lock
    for g := 0; g < numIngestGoroutines; g++ {
        wg.Add(1)
        go func(goroutineID int) {
            defer wg.Done()
            for i := 0; i < packetsPerGoroutine; i++ {
                txID := 1000 + goroutineID*1000 + i
                hash := fmt.Sprintf("new_hash_%d_%04d", goroutineID, i)
                pt := 5 // GRP_TXT
                ts := time.Now().UTC().Format(time.RFC3339)

                tx := &StoreTx{
                    ID:          txID,
                    Hash:        hash,
                    FirstSeen:   ts,
                    LatestSeen:  ts,
                    PayloadType: &pt,
                    DecodedJSON: fmt.Sprintf(`{"pubKey":"newpk_%d_%04d"}`, goroutineID, i),
                    obsKeys:     make(map[string]bool),
                    observerSet: make(map[string]bool),
                }

                obs := &StoreObs{
                    ID:             txID*10 + 1,
                    TransmissionID: txID,
                    ObserverID:     fmt.Sprintf("obs_g%d", goroutineID),
                    ObserverName:   fmt.Sprintf("Observer_g%d", goroutineID),
                    Timestamp:      ts,
                }
                tx.Observations = append(tx.Observations, obs)
                tx.ObservationCount = 1

                // Acquire write lock (same as IngestNewFromDB)
                store.mu.Lock()
                store.packets = append(store.packets, tx)
                store.byHash[hash] = tx
                store.byTxID[txID] = tx
                store.byObsID[obs.ID] = obs
                store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
                store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
                pk := fmt.Sprintf("newpk_%d_%04d", goroutineID, i)
                if store.nodeHashes[pk] == nil {
                    store.nodeHashes[pk] = make(map[string]bool)
                }
                store.nodeHashes[pk][hash] = true
                store.byNode[pk] = append(store.byNode[pk], tx)
                store.trackedBytes += estimateStoreTxBytes(tx)
                store.trackedBytes += estimateStoreObsBytes(obs)
                store.totalObs++
                store.mu.Unlock()

                atomic.AddInt64(&ingestedCount, 1)
            }
        }(g)
    }

    // Concurrent eviction goroutines
    var evictedTotal int64
    for g := 0; g < numEvictionGoroutines; g++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 10; i++ {
                store.mu.Lock()
                n := store.EvictStale()
                store.mu.Unlock()
                atomic.AddInt64(&evictedTotal, int64(n))
                time.Sleep(time.Millisecond)
            }
        }()
    }

    // Concurrent readers (QueryPackets uses RLock)
    for g := 0; g < 3; g++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 20; i++ {
                store.mu.RLock()
                _ = len(store.packets)
                _ = len(store.byHash)
                store.mu.RUnlock()
                time.Sleep(500 * time.Microsecond)
            }
        }()
    }

    wg.Wait()

    // --- Post-state assertions ---
    store.mu.RLock()
    defer store.mu.RUnlock()

    totalIngested := int(atomic.LoadInt64(&ingestedCount))
    totalEvicted := int(atomic.LoadInt64(&evictedTotal))

    if totalIngested != numIngestGoroutines*packetsPerGoroutine {
        t.Fatalf("expected %d ingested, got %d", numIngestGoroutines*packetsPerGoroutine, totalIngested)
    }

    // Invariant: packets remaining = initial(200) + ingested - evicted
    expectedRemaining := 200 + totalIngested - totalEvicted
    if len(store.packets) != expectedRemaining {
        t.Fatalf("packets count mismatch: got %d, expected %d (200 + %d ingested - %d evicted)",
            len(store.packets), expectedRemaining, totalIngested, totalEvicted)
    }

    // Invariant: byHash must be consistent with packets slice
    if len(store.byHash) != len(store.packets) {
        t.Fatalf("byHash size %d != packets len %d", len(store.byHash), len(store.packets))
    }

    // Invariant: every packet in the slice must be in byHash
    for _, tx := range store.packets {
        if store.byHash[tx.Hash] != tx {
            t.Fatalf("packet %s in slice but not in byHash (or points to different tx)", tx.Hash)
        }
    }

    // Invariant: byTxID must map to packets in the slice
    byTxIDCount := 0
    for _, tx := range store.packets {
        if store.byTxID[tx.ID] == tx {
            byTxIDCount++
        }
    }
    if byTxIDCount != len(store.packets) {
        t.Fatalf("byTxID consistency: %d/%d packets found", byTxIDCount, len(store.packets))
    }

    // Invariant: trackedBytes must be non-negative
    if store.trackedBytes < 0 {
        t.Fatalf("trackedBytes went negative: %d", store.trackedBytes)
    }

    // Verify eviction actually happened (old packets were eligible)
    if totalEvicted == 0 {
        t.Fatal("expected some evictions to occur but got 0")
    }

    t.Logf("OK: ingested=%d, evicted=%d, remaining=%d, trackedBytes=%d",
        totalIngested, totalEvicted, len(store.packets), store.trackedBytes)
}

// TestConcurrentIngestNewObservationsAndEviction exercises the race between
// adding new observations to existing transmissions and eviction removing those
// same transmissions. This targets the IngestNewObservations path.
func TestConcurrentIngestNewObservationsAndEviction(t *testing.T) {
    // Create store with 100 packets, half old (evictable), half recent
    now := time.Now().UTC()
    store := makeTestStore(0, now, 1) // empty, we'll add manually
    store.retentionHours = 1

    // Add 50 old packets (2h ago) and 50 recent packets
    for i := 0; i < 100; i++ {
        var ts time.Time
        if i < 50 {
            ts = now.Add(-2 * time.Hour).Add(time.Duration(i) * time.Second)
        } else {
            ts = now.Add(-time.Duration(100-i) * time.Second)
        }
        hash := fmt.Sprintf("obs_hash_%04d", i)
        txID := i + 1
        pt := 4
        tx := &StoreTx{
            ID:          txID,
            Hash:        hash,
            FirstSeen:   ts.UTC().Format(time.RFC3339),
            LatestSeen:  ts.UTC().Format(time.RFC3339),
            PayloadType: &pt,
            DecodedJSON: fmt.Sprintf(`{"pubKey":"pk%04d"}`, i),
            obsKeys:     make(map[string]bool),
            observerSet: make(map[string]bool),
        }
        store.packets = append(store.packets, tx)
        store.byHash[hash] = tx
        store.byTxID[txID] = tx
        store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
        store.trackedBytes += estimateStoreTxBytes(tx)
    }
    store.loaded = true

    const numObsGoroutines = 4
    const obsPerGoroutine = 100

    var wg sync.WaitGroup
    var addedObs int64

    // Goroutines adding observations to RECENT packets (index 50-99)
    for g := 0; g < numObsGoroutines; g++ {
        wg.Add(1)
        go func(gID int) {
            defer wg.Done()
            for i := 0; i < obsPerGoroutine; i++ {
                targetIdx := 50 + (i % 50) // only target recent packets
                hash := fmt.Sprintf("obs_hash_%04d", targetIdx)

                store.mu.Lock()
                tx := store.byHash[hash]
                if tx != nil {
                    obsID := 50000 + gID*10000 + i
                    obs := &StoreObs{
                        ID:             obsID,
                        TransmissionID: tx.ID,
                        ObserverID:     fmt.Sprintf("obs_new_%d", gID),
                        ObserverName:   fmt.Sprintf("NewObs_%d", gID),
                        Timestamp:      time.Now().UTC().Format(time.RFC3339),
                    }
                    // Intentionally skip the obsKeys dedup check here:
                    // duplicate observations are allowed to maximize
                    // write contention on the shared maps.
                    tx.Observations = append(tx.Observations, obs)
                    tx.ObservationCount++
                    store.byObsID[obsID] = obs
                    store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
                    store.trackedBytes += estimateStoreObsBytes(obs)
                    store.totalObs++
                    atomic.AddInt64(&addedObs, 1)
                }
                store.mu.Unlock()
            }
        }(g)
    }

    // Concurrent eviction
    var evictedTotal int64
    for g := 0; g < 2; g++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 15; i++ {
                store.mu.Lock()
                n := store.EvictStale()
                store.mu.Unlock()
                atomic.AddInt64(&evictedTotal, int64(n))
                time.Sleep(500 * time.Microsecond)
            }
        }()
    }

    wg.Wait()

    // --- Assertions ---
    store.mu.RLock()
    defer store.mu.RUnlock()

    totalEvicted := int(atomic.LoadInt64(&evictedTotal))
    totalAdded := int(atomic.LoadInt64(&addedObs))

    // All 50 old packets should have been evicted
    if totalEvicted < 50 {
        t.Fatalf("expected at least 50 evictions (old packets), got %d", totalEvicted)
    }

    // Recent packets (50) should survive
    if len(store.packets) < 50 {
        t.Fatalf("expected at least 50 remaining packets (recent ones), got %d", len(store.packets))
    }

    // byHash consistency
    for _, tx := range store.packets {
        if store.byHash[tx.Hash] != tx {
            t.Fatalf("byHash inconsistency for %s", tx.Hash)
        }
    }

    // No evicted packet should remain in byHash
    for i := 0; i < 50; i++ {
        hash := fmt.Sprintf("obs_hash_%04d", i)
        if store.byHash[hash] != nil {
            t.Fatalf("evicted packet %s still in byHash", hash)
        }
    }

    // byObsID should not reference observations from evicted packets
    for obsID, obs := range store.byObsID {
        if store.byTxID[obs.TransmissionID] == nil {
            t.Fatalf("byObsID[%d] references evicted transmission %d", obsID, obs.TransmissionID)
        }
    }

    // trackedBytes non-negative
    if store.trackedBytes < 0 {
        t.Fatalf("trackedBytes negative: %d", store.trackedBytes)
    }

    t.Logf("OK: evicted=%d, added_obs=%d, remaining=%d, trackedBytes=%d",
        totalEvicted, totalAdded, len(store.packets), store.trackedBytes)
}

// TestConcurrentRunEvictionWithReads exercises RunEviction's two-phase locking
// against concurrent read operations (simulating QueryPackets / GetStoreStats).
// Without proper RWMutex usage, this would race on slice/map reads.
func TestConcurrentRunEvictionWithReads(t *testing.T) {
    startTime := time.Now().UTC().Add(-3 * time.Hour)
    store := makeTestStore(500, startTime, 1)
    store.retentionHours = 1
    store.loaded = true

    for _, tx := range store.packets {
        store.trackedBytes += estimateStoreTxBytes(tx)
        for _, obs := range tx.Observations {
            store.trackedBytes += estimateStoreObsBytes(obs)
        }
    }

    var wg sync.WaitGroup

    // Multiple RunEviction calls (uses its own locking)
    var evicted int64
    for g := 0; g < 3; g++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            n := store.RunEviction()
            atomic.AddInt64(&evicted, int64(n))
        }()
    }

    // Concurrent readers using the public read-lock pattern
    var readCount int64
    for g := 0; g < 5; g++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 50; i++ {
                store.mu.RLock()
                count := len(store.packets)
                _ = count
                // Iterate a portion of byHash (simulating query)
                for hash, tx := range store.byHash {
                    _ = hash
                    _ = tx.ObservationCount
                    break // just access one
                }
                store.mu.RUnlock()
                atomic.AddInt64(&readCount, 1)
            }
        }()
    }

    wg.Wait()

    store.mu.RLock()
    defer store.mu.RUnlock()

    totalEvicted := int(atomic.LoadInt64(&evicted))

    // Must have evicted packets older than 1h (most of the 500 are 1-3h old)
    if totalEvicted == 0 {
        t.Fatal("expected evictions but got 0")
    }

    // Consistency: byHash == packets len
    if len(store.byHash) != len(store.packets) {
        t.Fatalf("byHash %d != packets %d after concurrent RunEviction+reads",
            len(store.byHash), len(store.packets))
    }

    // All reads completed without panic
    if atomic.LoadInt64(&readCount) != 250 {
        t.Fatalf("not all reads completed: %d/250", atomic.LoadInt64(&readCount))
    }

    t.Logf("OK: evicted=%d, remaining=%d, reads=%d",
        totalEvicted, len(store.packets), atomic.LoadInt64(&readCount))
}
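
// Note: these three tests are only meaningful with the race detector enabled;
// a plain `go test` can pass even with broken locking. Suggested invocation
// (the package path is an assumption):
//
//  go test -race -run 'TestConcurrent' ./cmd/server/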

@@ -8,7 +8,6 @@ import (
    "strings"
    "sync"

    "github.com/meshcore-analyzer/dbconfig"
    "github.com/meshcore-analyzer/geofilter"
)

@@ -63,30 +62,14 @@ type Config struct {

    Retention *RetentionConfig `json:"retention,omitempty"`

    DB *DBConfig `json:"db,omitempty"`

    PacketStore *PacketStoreConfig `json:"packetStore,omitempty"`

    GeoFilter *GeoFilterConfig `json:"geo_filter,omitempty"`

    Timestamps *TimestampConfig `json:"timestamps,omitempty"`

    // CORSAllowedOrigins is the list of origins permitted to make cross-origin
    // requests. When empty (default), no Access-Control-* headers are sent,
    // so browsers enforce same-origin policy. Set to ["*"] to allow all origins.
    CORSAllowedOrigins []string `json:"corsAllowedOrigins,omitempty"`

    DebugAffinity bool `json:"debugAffinity,omitempty"`

    // ObserverBlacklist is a list of observer public keys to exclude from API
    // responses (defense in depth — ingestor drops at ingest, server filters
    // any that slipped through from a prior unblocked window).
    ObserverBlacklist []string `json:"observerBlacklist,omitempty"`

    // obsBlacklistSetCached is the lazily-built set version of ObserverBlacklist.
    obsBlacklistSetCached map[string]bool
    obsBlacklistOnce      sync.Once

    ResolvedPath  *ResolvedPathConfig  `json:"resolvedPath,omitempty"`
    NeighborGraph *NeighborGraphConfig `json:"neighborGraph,omitempty"`
}
@@ -146,17 +129,6 @@ type RetentionConfig struct {
    MetricsDays int `json:"metricsDays"`
}

// DBConfig is the shared SQLite vacuum/maintenance config (#919, #921).
type DBConfig = dbconfig.DBConfig

// IncrementalVacuumPages returns the configured pages per vacuum or 1024 default.
func (c *Config) IncrementalVacuumPages() int {
    if c.DB != nil && c.DB.IncrementalVacuumPages > 0 {
        return c.DB.IncrementalVacuumPages
    }
    return 1024
}

// MetricsRetentionDays returns configured metrics retention or 30 days default.
func (c *Config) MetricsRetentionDays() int {
    if c.Retention != nil && c.Retention.MetricsDays > 0 {
@@ -416,29 +388,3 @@ func (c *Config) IsBlacklisted(pubkey string) bool {
    }
    return c.blacklistSet()[strings.ToLower(strings.TrimSpace(pubkey))]
}

// obsBlacklistSet lazily builds and caches the observerBlacklist as a set for O(1) lookups.
func (c *Config) obsBlacklistSet() map[string]bool {
    c.obsBlacklistOnce.Do(func() {
        if len(c.ObserverBlacklist) == 0 {
            return
        }
        m := make(map[string]bool, len(c.ObserverBlacklist))
        for _, pk := range c.ObserverBlacklist {
            trimmed := strings.ToLower(strings.TrimSpace(pk))
            if trimmed != "" {
                m[trimmed] = true
            }
        }
        c.obsBlacklistSetCached = m
    })
    return c.obsBlacklistSetCached
}

// IsObserverBlacklisted returns true if the given observer ID is in the observerBlacklist.
func (c *Config) IsObserverBlacklisted(id string) bool {
    if c == nil || len(c.ObserverBlacklist) == 0 {
        return false
    }
    return c.obsBlacklistSet()[strings.ToLower(strings.TrimSpace(id))]
}
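
// Illustrative config fragment exercising the fields above (the origin and
// key values are made up; only the field names come from the struct tags):
//
//  {
//    "corsAllowedOrigins": ["https://dashboard.example"],
//    "observerBlacklist": ["ab12cd34ef56"]
//  }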

@@ -1,66 +0,0 @@
package main

import "net/http"

// corsMiddleware returns a middleware that sets CORS headers based on the
// configured allowed origins. When CORSAllowedOrigins is empty (default),
// no Access-Control-* headers are added, preserving browser same-origin policy.
func (s *Server) corsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        origins := s.cfg.CORSAllowedOrigins
        if len(origins) == 0 {
            next.ServeHTTP(w, r)
            return
        }

        reqOrigin := r.Header.Get("Origin")
        if reqOrigin == "" {
            next.ServeHTTP(w, r)
            return
        }

        // Check if origin is allowed
        allowed := false
        wildcard := false
        for _, o := range origins {
            if o == "*" {
                allowed = true
                wildcard = true
                break
            }
            if o == reqOrigin {
                allowed = true
                break
            }
        }

        if !allowed {
            // Origin not in allowlist — don't add CORS headers
            if r.Method == http.MethodOptions {
                // Still reject preflight with 403
                w.WriteHeader(http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
            return
        }

        // Set CORS headers
        if wildcard {
            w.Header().Set("Access-Control-Allow-Origin", "*")
        } else {
            w.Header().Set("Access-Control-Allow-Origin", reqOrigin)
            w.Header().Set("Vary", "Origin")
        }
        w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type, X-API-Key")

        // Handle preflight
        if r.Method == http.MethodOptions {
            w.WriteHeader(http.StatusNoContent)
            return
        }

        next.ServeHTTP(w, r)
    })
}
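
// Wiring sketch for the middleware above (the mux contents and apiHandler are
// hypothetical; only corsMiddleware itself comes from this file):
//
//  mux := http.NewServeMux()
//  mux.Handle("/api/", apiHandler) // apiHandler: an assumed http.Handler
//  log.Fatal(http.ListenAndServe(":13581", srv.corsMiddleware(mux)))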

@@ -1,149 +0,0 @@
package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

// newTestServerWithCORS creates a minimal Server with the given CORS config.
func newTestServerWithCORS(origins []string) *Server {
    cfg := &Config{CORSAllowedOrigins: origins}
    srv := &Server{cfg: cfg}
    return srv
}

// dummyHandler is a simple handler that writes 200 OK.
var dummyHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK)
    w.Write([]byte("ok"))
})

func TestCORS_DefaultNoHeaders(t *testing.T) {
    srv := newTestServerWithCORS(nil)
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("GET", "/api/health", nil)
    req.Header.Set("Origin", "https://evil.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != 200 {
        t.Fatalf("expected 200, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
        t.Fatalf("expected no ACAO header, got %q", v)
    }
}

func TestCORS_AllowlistMatch(t *testing.T) {
    srv := newTestServerWithCORS([]string{"https://good.example"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("GET", "/api/health", nil)
    req.Header.Set("Origin", "https://good.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != 200 {
        t.Fatalf("expected 200, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "https://good.example" {
        t.Fatalf("expected origin echo, got %q", v)
    }
    if v := rr.Header().Get("Access-Control-Allow-Methods"); v != "GET, POST, OPTIONS" {
        t.Fatalf("expected methods header, got %q", v)
    }
    if v := rr.Header().Get("Access-Control-Allow-Headers"); v != "Content-Type, X-API-Key" {
        t.Fatalf("expected headers header, got %q", v)
    }
    if v := rr.Header().Get("Vary"); v != "Origin" {
        t.Fatalf("expected Vary: Origin, got %q", v)
    }
}

func TestCORS_AllowlistNoMatch(t *testing.T) {
    srv := newTestServerWithCORS([]string{"https://good.example"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("GET", "/api/health", nil)
    req.Header.Set("Origin", "https://evil.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != 200 {
        t.Fatalf("expected 200, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
        t.Fatalf("expected no ACAO header for non-matching origin, got %q", v)
    }
}

func TestCORS_PreflightAllowed(t *testing.T) {
    srv := newTestServerWithCORS([]string{"https://good.example"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("OPTIONS", "/api/health", nil)
    req.Header.Set("Origin", "https://good.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != http.StatusNoContent {
        t.Fatalf("expected 204, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "https://good.example" {
        t.Fatalf("expected origin echo, got %q", v)
    }
}

func TestCORS_PreflightRejected(t *testing.T) {
    srv := newTestServerWithCORS([]string{"https://good.example"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("OPTIONS", "/api/health", nil)
    req.Header.Set("Origin", "https://evil.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != http.StatusForbidden {
        t.Fatalf("expected 403, got %d", rr.Code)
    }
}

func TestCORS_Wildcard(t *testing.T) {
    srv := newTestServerWithCORS([]string{"*"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("GET", "/api/health", nil)
    req.Header.Set("Origin", "https://anything.example")
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != 200 {
        t.Fatalf("expected 200, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "*" {
        t.Fatalf("expected *, got %q", v)
    }
    // Wildcard should NOT set Vary: Origin
    if v := rr.Header().Get("Vary"); v == "Origin" {
        t.Fatalf("wildcard should not set Vary: Origin")
    }
}

func TestCORS_NoOriginHeader(t *testing.T) {
    srv := newTestServerWithCORS([]string{"https://good.example"})
    handler := srv.corsMiddleware(dummyHandler)

    req := httptest.NewRequest("GET", "/api/health", nil)
    // No Origin header
    rr := httptest.NewRecorder()
    handler.ServeHTTP(rr, req)

    if rr.Code != 200 {
        t.Fatalf("expected 200, got %d", rr.Code)
    }
    if v := rr.Header().Get("Access-Control-Allow-Origin"); v != "" {
        t.Fatalf("expected no ACAO without Origin header, got %q", v)
    }
}

@@ -35,8 +35,7 @@ func setupTestDBv2(t *testing.T) *DB {
    CREATE TABLE observers (
        id TEXT PRIMARY KEY, name TEXT, iata TEXT, last_seen TEXT, first_seen TEXT,
        packet_count INTEGER DEFAULT 0, model TEXT, firmware TEXT,
        client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL,
        inactive INTEGER DEFAULT 0
        client_version TEXT, radio TEXT, battery_mv INTEGER, uptime_secs INTEGER, noise_floor REAL
    );
    CREATE TABLE transmissions (
        id INTEGER PRIMARY KEY AUTOINCREMENT, raw_hex TEXT NOT NULL,
@@ -2498,9 +2497,9 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
    db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
        VALUES (5, 1, 10.0, -90, '[]', ?)`, recentEpoch)

    // Also a decrypted CHAN with numeric channelHash — use hash 198 which is the real hash for #general
    // Also a decrypted CHAN with numeric channelHash
    db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
        VALUES ('DD03', 'chan_num_hash_3', ?, 1, 5, '{"type":"CHAN","channel":"general","channelHash":198,"channelHashHex":"C6","text":"hello","sender":"Alice"}')`, recent)
        VALUES ('DD03', 'chan_num_hash_3', ?, 1, 5, '{"type":"CHAN","channel":"general","channelHash":97,"channelHashHex":"61","text":"hello","sender":"Alice"}')`, recent)
    db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
        VALUES (6, 1, 12.0, -88, '[]', ?)`, recentEpoch)

@@ -2509,8 +2508,8 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
    result := store.GetAnalyticsChannels("")

    channels := result["channels"].([]map[string]interface{})
    if len(channels) < 3 {
        t.Errorf("expected at least 3 channels (hash 97 + hash 42 + hash 198), got %d", len(channels))
    if len(channels) < 2 {
        t.Errorf("expected at least 2 channels (hash 97 + hash 42), got %d", len(channels))
    }

    // Verify the numeric-hash channels we inserted have proper hashes (not "?")
@@ -2531,13 +2530,13 @@ func TestStoreGetAnalyticsChannelsNumericHash(t *testing.T) {
        t.Error("expected to find channel with hash '42' (numeric channelHash parsing)")
    }

    // Verify the decrypted CHAN channel has the correct name (now at hash 198)
    // Verify the decrypted CHAN channel has the correct name
    foundGeneral := false
    for _, ch := range channels {
        if ch["name"] == "general" {
            foundGeneral = true
            if ch["hash"] != "198" {
                t.Errorf("expected hash '198' for general channel, got %v", ch["hash"])
            if ch["hash"] != "97" {
                t.Errorf("expected hash '97' for general channel, got %v", ch["hash"])
            }
        }
    }

@@ -170,7 +170,6 @@ type Observer struct {
    BatteryMv    *int     `json:"battery_mv"`
    UptimeSecs   *int64   `json:"uptime_secs"`
    NoiseFloor   *float64 `json:"noise_floor"`
    LastPacketAt *string  `json:"last_packet_at"`
}

// Transmission represents a row from the transmissions table.
@@ -232,7 +231,7 @@ func (db *DB) GetStats() (*Stats, error) {
    sevenDaysAgo := time.Now().Add(-7 * 24 * time.Hour).Format(time.RFC3339)
    db.conn.QueryRow("SELECT COUNT(*) FROM nodes WHERE last_seen > ?", sevenDaysAgo).Scan(&s.TotalNodes)
    db.conn.QueryRow("SELECT COUNT(*) FROM nodes").Scan(&s.TotalNodesAllTime)
    db.conn.QueryRow("SELECT COUNT(*) FROM observers WHERE inactive IS NULL OR inactive = 0").Scan(&s.TotalObservers)
    db.conn.QueryRow("SELECT COUNT(*) FROM observers").Scan(&s.TotalObservers)

    oneHourAgo := time.Now().Add(-1 * time.Hour).Unix()
    db.conn.QueryRow("SELECT COUNT(*) FROM observations WHERE timestamp > ?", oneHourAgo).Scan(&s.PacketsLastHour)
@@ -831,55 +830,6 @@ func (db *DB) SearchNodes(query string, limit int) ([]map[string]interface{}, er
    return nodes, nil
}

// GetNodeByPrefix resolves a hex prefix (>=8 chars) to a unique node.
// Returns (node, ambiguous, error). When multiple nodes share the prefix,
// returns (nil, true, nil). Used by the short-URL feature (issue #772).
//
// Trade-off vs an opaque ID lookup table: prefixes are stable across
// restarts, self-describing (no allocator needed), and resolve to the
// authoritative pubkey on the server. Cost: ambiguity grows with the
// node directory; we mitigate with a hard 8-hex-char (32-bit) minimum
// and surface 409 Conflict when collisions occur.
func (db *DB) GetNodeByPrefix(prefix string) (map[string]interface{}, bool, error) {
    if len(prefix) < 8 {
        return nil, false, nil
    }
    // Validate hex (avoid SQL LIKE wildcards leaking through).
    for _, c := range prefix {
        isHex := (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')
        if !isHex {
            return nil, false, nil
        }
    }
    rows, err := db.conn.Query(
        `SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c
         FROM nodes WHERE public_key LIKE ? LIMIT 2`,
        prefix+"%",
    )
    if err != nil {
        return nil, false, err
    }
    defer rows.Close()
    var first map[string]interface{}
    count := 0
    for rows.Next() {
        n := scanNodeRow(rows)
        if n == nil {
            continue
        }
        count++
        if count == 1 {
            first = n
        } else {
            return nil, true, nil
        }
    }
    if count == 0 {
        return nil, false, nil
    }
    return first, false, nil
}
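
// Consumption sketch for GetNodeByPrefix (the handler shape and response
// helpers are hypothetical; the (node, ambiguous, err) contract and the
// 409 mapping come from the doc comment above):
//
//  node, ambiguous, err := db.GetNodeByPrefix(prefix)
//  switch {
//  case err != nil:
//      http.Error(w, "lookup failed", http.StatusInternalServerError)
//  case ambiguous:
//      http.Error(w, "prefix matches multiple nodes", http.StatusConflict)
//  case node == nil:
//      http.Error(w, "not found", http.StatusNotFound)
//  default:
//      json.NewEncoder(w).Encode(node)
//  }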

// GetNodeByPubkey returns a single node.
func (db *DB) GetNodeByPubkey(pubkey string) (map[string]interface{}, error) {
    rows, err := db.conn.Query("SELECT public_key, name, role, lat, lon, last_seen, first_seen, advert_count, battery_mv, temperature_c FROM nodes WHERE public_key = ?", pubkey)
@@ -1020,9 +970,9 @@ func (db *DB) getObservationsForTransmissions(txIDs []int) map[int][]map[string]
    return result
}

// GetObservers returns active observers (not soft-deleted) sorted by last_seen DESC.
// GetObservers returns all observers sorted by last_seen DESC.
func (db *DB) GetObservers() ([]Observer, error) {
    rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE inactive IS NULL OR inactive = 0 ORDER BY last_seen DESC")
    rows, err := db.conn.Query("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers ORDER BY last_seen DESC")
    if err != nil {
        return nil, err
    }
@@ -1033,7 +983,7 @@ func (db *DB) GetObservers() ([]Observer, error) {
    var o Observer
    var batteryMv, uptimeSecs sql.NullInt64
    var noiseFloor sql.NullFloat64
    if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt); err != nil {
    if err := rows.Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor); err != nil {
        continue
    }
    if batteryMv.Valid {
@@ -1056,8 +1006,8 @@ func (db *DB) GetObserverByID(id string) (*Observer, error) {
    var o Observer
    var batteryMv, uptimeSecs sql.NullInt64
    var noiseFloor sql.NullFloat64
    err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor, last_packet_at FROM observers WHERE id = ?", id).
        Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor, &o.LastPacketAt)
    err := db.conn.QueryRow("SELECT id, name, iata, last_seen, first_seen, packet_count, model, firmware, client_version, radio, battery_mv, uptime_secs, noise_floor FROM observers WHERE id = ?", id).
        Scan(&o.ID, &o.Name, &o.IATA, &o.LastSeen, &o.FirstSeen, &o.PacketCount, &o.Model, &o.Firmware, &o.ClientVersion, &o.Radio, &batteryMv, &uptimeSecs, &noiseFloor)
    if err != nil {
        return nil, err
    }
@@ -1105,17 +1055,6 @@ func (db *DB) GetObserverIdsForRegion(regionParam string) ([]string, error) {
    return ids, nil
}

// normalizeRegionCodes parses a region query parameter into a list of upper-case
// IATA codes. Returns nil to signal "no filter" (match all regions).
//
// Sentinel handling (issue #770): the frontend region filter dropdown labels its
// catch-all option "All". When that option is selected the UI may send
// ?region=All; older code interpreted that literally and tried to match an
// IATA code "ALL", which never exists, returning an empty result set. Treat
// "All" / "ALL" / "all" (case-insensitive, optionally surrounded by whitespace
// or mixed with empty CSV slots) as equivalent to an empty value.
//
// Real IATA codes (e.g. "SJC", "PDX") still pass through unchanged.
func normalizeRegionCodes(regionParam string) []string {
    if regionParam == "" {
        return nil
@@ -1124,13 +1063,9 @@ func normalizeRegionCodes(regionParam string) []string {
    codes := make([]string, 0, len(tokens))
    for _, token := range tokens {
        code := strings.TrimSpace(strings.ToUpper(token))
        if code == "" || code == "ALL" {
            continue
        if code != "" {
            codes = append(codes, code)
        }
        codes = append(codes, code)
    }
    if len(codes) == 0 {
        return nil
    }
    return codes
}
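
// Behavior sketch for the "All" sentinel described in the #770 comment above
// (inputs/outputs follow that comment's side of the hunk; shown here only
// for reference):
//
//  normalizeRegionCodes("")         // nil — no filter
//  normalizeRegionCodes(" all , ")  // nil — sentinel treated as empty
//  normalizeRegionCodes("sjc,PDX")  // []string{"SJC", "PDX"}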

@@ -1937,10 +1872,11 @@ func nullInt(ni sql.NullInt64) interface{} {
// Returns the number of transmissions deleted.
// Opens a separate read-write connection since the main connection is read-only.
func (db *DB) PruneOldPackets(days int) (int64, error) {
    rw, err := cachedRW(db.path)
    rw, err := openRW(db.path)
    if err != nil {
        return 0, err
    }
    defer rw.Close()

    cutoff := time.Now().UTC().AddDate(0, 0, -days).Format(time.RFC3339)
    tx, err := rw.Begin()
@@ -2283,10 +2219,11 @@ func (db *DB) GetMetricsSummary(since string) ([]MetricsSummaryRow, error) {

// PruneOldMetrics deletes observer_metrics rows older than retentionDays.
func (db *DB) PruneOldMetrics(retentionDays int) (int64, error) {
    rw, err := cachedRW(db.path)
    rw, err := openRW(db.path)
    if err != nil {
        return 0, err
    }
    defer rw.Close()

    cutoff := time.Now().UTC().AddDate(0, 0, -retentionDays).Format(time.RFC3339)
    res, err := rw.Exec(`DELETE FROM observer_metrics WHERE timestamp < ?`, cutoff)
@@ -2309,10 +2246,11 @@ func (db *DB) RemoveStaleObservers(observerDays int) (int64, error) {
    if observerDays <= -1 {
        return 0, nil // keep forever
    }
    rw, err := cachedRW(db.path)
    rw, err := openRW(db.path)
    if err != nil {
        return 0, err
    }
    defer rw.Close()

    cutoff := time.Now().UTC().AddDate(0, 0, -observerDays).Format(time.RFC3339)
    res, err := rw.Exec(`UPDATE observers SET inactive = 1 WHERE last_seen < ? AND (inactive IS NULL OR inactive = 0)`, cutoff)

@@ -48,9 +48,7 @@ func setupTestDB(t *testing.T) *DB {
        radio TEXT,
        battery_mv INTEGER,
        uptime_secs INTEGER,
        noise_floor REAL,
        inactive INTEGER DEFAULT 0,
        last_packet_at TEXT DEFAULT NULL
        noise_floor REAL
    );

    CREATE TABLE transmissions (
@@ -357,35 +355,6 @@ func TestGetObservers(t *testing.T) {
    if observers[0].ID != "obs1" {
        t.Errorf("expected obs1 first (most recent), got %s", observers[0].ID)
    }
    // last_packet_at should be nil since seedTestData doesn't set it
    if observers[0].LastPacketAt != nil {
        t.Errorf("expected nil LastPacketAt for obs1 from seed, got %v", *observers[0].LastPacketAt)
    }
}

// Regression: GetObservers must exclude soft-deleted (inactive=1) rows.
// Stale observers were appearing in /api/observers despite the auto-prune
// marking them inactive, because the SELECT query had no WHERE filter.
func TestGetObservers_ExcludesInactive(t *testing.T) {
    db := setupTestDB(t)
    defer db.Close()
    seedTestData(t, db)
    // Mark obs2 inactive — soft delete simulating a stale-observer prune.
    if _, err := db.conn.Exec(`UPDATE observers SET inactive = 1 WHERE id = ?`, "obs2"); err != nil {
        t.Fatalf("update inactive: %v", err)
    }
    observers, err := db.GetObservers()
    if err != nil {
        t.Fatal(err)
    }
    if len(observers) != 1 {
        t.Errorf("expected 1 observer (obs1) after marking obs2 inactive, got %d", len(observers))
    }
    for _, o := range observers {
        if o.ID == "obs2" {
            t.Errorf("inactive observer obs2 should be excluded")
        }
    }
}

func TestGetObserverByID(t *testing.T) {
@@ -400,48 +369,6 @@ func TestGetObserverByID(t *testing.T) {
    if obs.ID != "obs1" {
        t.Errorf("expected obs1, got %s", obs.ID)
    }
    // Verify last_packet_at is nil by default
    if obs.LastPacketAt != nil {
        t.Errorf("expected nil LastPacketAt, got %v", *obs.LastPacketAt)
    }
}

func TestGetObserverLastPacketAt(t *testing.T) {
    db := setupTestDB(t)
    defer db.Close()
    seedTestData(t, db)

    // Set last_packet_at for obs1
    ts := "2026-04-24T12:00:00Z"
    db.conn.Exec(`UPDATE observers SET last_packet_at = ? WHERE id = ?`, ts, "obs1")

    // Verify via GetObservers
    observers, err := db.GetObservers()
    if err != nil {
        t.Fatal(err)
    }
    var obs1 *Observer
    for i := range observers {
        if observers[i].ID == "obs1" {
            obs1 = &observers[i]
            break
        }
    }
    if obs1 == nil {
        t.Fatal("obs1 not found")
    }
    if obs1.LastPacketAt == nil || *obs1.LastPacketAt != ts {
        t.Errorf("expected LastPacketAt=%s via GetObservers, got %v", ts, obs1.LastPacketAt)
    }

    // Verify via GetObserverByID
    obs, err := db.GetObserverByID("obs1")
    if err != nil {
        t.Fatal(err)
    }
    if obs.LastPacketAt == nil || *obs.LastPacketAt != ts {
        t.Errorf("expected LastPacketAt=%s via GetObserverByID, got %v", ts, obs.LastPacketAt)
    }
}

func TestGetObserverByIDNotFound(t *testing.T) {
@@ -1182,8 +1109,7 @@ func setupTestDBV2(t *testing.T) *DB {
        iata TEXT,
        last_seen TEXT,
        first_seen TEXT,
        packet_count INTEGER DEFAULT 0,
        last_packet_at TEXT DEFAULT NULL
        packet_count INTEGER DEFAULT 0
    );

    CREATE TABLE transmissions (

@@ -1,262 +0,0 @@
package main

import (
    "database/sql"
    "os"
    "path/filepath"
    "strings"
    "testing"
    "time"

    _ "modernc.org/sqlite"
)

// createFreshDBWithAutoVacuum creates a SQLite DB using the ingestor's
// applySchema logic (simulated here) with auto_vacuum=INCREMENTAL set before tables.
func createFreshDBWithAutoVacuum(t *testing.T, path string) *sql.DB {
    t.Helper()
    // auto_vacuum must be set via DSN before journal_mode creates the DB file
    db, err := sql.Open("sqlite", path+"?_pragma=auto_vacuum(INCREMENTAL)&_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
    if err != nil {
        t.Fatal(err)
    }
    db.SetMaxOpenConns(1)

    // Create minimal schema
    _, err = db.Exec(`
    CREATE TABLE transmissions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        raw_hex TEXT NOT NULL,
        hash TEXT NOT NULL UNIQUE,
        first_seen TEXT NOT NULL,
        route_type INTEGER,
        payload_type INTEGER,
        payload_version INTEGER,
        decoded_json TEXT,
        created_at TEXT DEFAULT (datetime('now')),
        channel_hash TEXT
    );
    CREATE TABLE observations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        transmission_id INTEGER NOT NULL REFERENCES transmissions(id),
        observer_idx INTEGER,
        direction TEXT,
        snr REAL,
        rssi REAL,
        score INTEGER,
        path_json TEXT,
        timestamp INTEGER NOT NULL
    );
    `)
    if err != nil {
        t.Fatal(err)
    }
    return db
}

func TestNewDBHasIncrementalAutoVacuum(t *testing.T) {
    dir := t.TempDir()
    path := filepath.Join(dir, "test.db")

    db := createFreshDBWithAutoVacuum(t, path)
    defer db.Close()

    var autoVacuum int
    if err := db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
        t.Fatal(err)
    }
    if autoVacuum != 2 {
        t.Fatalf("expected auto_vacuum=2 (INCREMENTAL), got %d", autoVacuum)
    }
}

func TestExistingDBHasAutoVacuumNone(t *testing.T) {
    dir := t.TempDir()
    path := filepath.Join(dir, "test.db")

    // Create DB WITHOUT setting auto_vacuum (simulates old DB)
    db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)")
    if err != nil {
        t.Fatal(err)
    }
    db.SetMaxOpenConns(1)
    _, err = db.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
    if err != nil {
        t.Fatal(err)
    }

    var autoVacuum int
    if err := db.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
        t.Fatal(err)
    }
    db.Close()

    if autoVacuum != 0 {
        t.Fatalf("expected auto_vacuum=0 (NONE) for old DB, got %d", autoVacuum)
    }
}

func TestVacuumOnStartupMigratesDB(t *testing.T) {
    dir := t.TempDir()
    path := filepath.Join(dir, "test.db")

    // Create DB without auto_vacuum (old DB)
    db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)")
    if err != nil {
        t.Fatal(err)
    }
    db.SetMaxOpenConns(1)
    _, err = db.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
    if err != nil {
        t.Fatal(err)
    }

    var before int
    db.QueryRow("PRAGMA auto_vacuum").Scan(&before)
    if before != 0 {
        t.Fatalf("precondition: expected auto_vacuum=0, got %d", before)
    }
    db.Close()

    // Simulate vacuumOnStartup migration using openRW
    rw, err := openRW(path)
    if err != nil {
        t.Fatal(err)
    }
    if _, err := rw.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
        t.Fatal(err)
    }
    if _, err := rw.Exec("VACUUM"); err != nil {
        t.Fatal(err)
    }
    rw.Close()

    // Verify migration
    db2, err := sql.Open("sqlite", path+"?mode=ro")
    if err != nil {
        t.Fatal(err)
    }
    defer db2.Close()

    var after int
    if err := db2.QueryRow("PRAGMA auto_vacuum").Scan(&after); err != nil {
        t.Fatal(err)
    }
    if after != 2 {
        t.Fatalf("expected auto_vacuum=2 after VACUUM migration, got %d", after)
    }
}

func TestIncrementalVacuumReducesFreelist(t *testing.T) {
    dir := t.TempDir()
    path := filepath.Join(dir, "test.db")

    db := createFreshDBWithAutoVacuum(t, path)

    // Insert a bunch of data
    now := time.Now().UTC().Format(time.RFC3339)
    for i := 0; i < 500; i++ {
        _, err := db.Exec(
            "INSERT INTO transmissions (raw_hex, hash, first_seen) VALUES (?, ?, ?)",
            strings.Repeat("AA", 200), // ~400 bytes each
            "hash_"+string(rune('A'+i%26))+string(rune('0'+i/26)),
            now,
        )
        if err != nil {
            t.Fatal(err)
        }
    }

    // Get file size before delete
    db.Close()
    infoBefore, _ := os.Stat(path)
    sizeBefore := infoBefore.Size()

    // Reopen and delete all
    db, err := sql.Open("sqlite", path+"?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)")
    if err != nil {
        t.Fatal(err)
    }
    db.SetMaxOpenConns(1)
    defer db.Close()

    _, err = db.Exec("DELETE FROM transmissions")
    if err != nil {
        t.Fatal(err)
    }

    // Check freelist before vacuum
    var freelistBefore int64
    db.QueryRow("PRAGMA freelist_count").Scan(&freelistBefore)
    if freelistBefore == 0 {
        t.Fatal("expected non-zero freelist after DELETE")
    }

    // Run incremental vacuum
    _, err = db.Exec("PRAGMA incremental_vacuum(10000)")
    if err != nil {
        t.Fatal(err)
    }

    // Check freelist after vacuum
    var freelistAfter int64
    db.QueryRow("PRAGMA freelist_count").Scan(&freelistAfter)
    if freelistAfter >= freelistBefore {
        t.Fatalf("expected freelist to shrink: before=%d after=%d", freelistBefore, freelistAfter)
    }

    // Checkpoint WAL and check file size shrunk
    db.Exec("PRAGMA wal_checkpoint(TRUNCATE)")
    db.Close()
    infoAfter, _ := os.Stat(path)
    sizeAfter := infoAfter.Size()
    if sizeAfter >= sizeBefore {
        t.Logf("warning: file did not shrink (before=%d after=%d) — may depend on page reuse", sizeBefore, sizeAfter)
    }
}

func TestCheckAutoVacuumLogs(t *testing.T) {
    // This test verifies checkAutoVacuum doesn't panic on various configs
    dir := t.TempDir()
    path := filepath.Join(dir, "test.db")

    // Create a fresh DB with auto_vacuum=INCREMENTAL
    dbConn := createFreshDBWithAutoVacuum(t, path)
    db := &DB{conn: dbConn, path: path}
    cfg := &Config{}

    // Should not panic
    checkAutoVacuum(db, cfg, path)
    dbConn.Close()

    // Create a DB without auto_vacuum
    path2 := filepath.Join(dir, "test2.db")
    dbConn2, _ := sql.Open("sqlite", path2+"?_pragma=journal_mode(WAL)")
    dbConn2.SetMaxOpenConns(1)
    dbConn2.Exec("CREATE TABLE dummy (id INTEGER PRIMARY KEY)")
    db2 := &DB{conn: dbConn2, path: path2}

    // Should log warning but not panic
    checkAutoVacuum(db2, cfg, path2)
    dbConn2.Close()
}

func TestConfigIncrementalVacuumPages(t *testing.T) {
    // Default
    cfg := &Config{}
    if cfg.IncrementalVacuumPages() != 1024 {
        t.Fatalf("expected default 1024, got %d", cfg.IncrementalVacuumPages())
    }

    // Custom
    cfg.DB = &DBConfig{IncrementalVacuumPages: 512}
    if cfg.IncrementalVacuumPages() != 512 {
        t.Fatalf("expected 512, got %d", cfg.IncrementalVacuumPages())
    }

    // Zero should return default
    cfg.DB.IncrementalVacuumPages = 0
    if cfg.IncrementalVacuumPages() != 1024 {
        t.Fatalf("expected default 1024 for zero, got %d", cfg.IncrementalVacuumPages())
    }
}
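
// Maintenance-pass sketch tying IncrementalVacuumPages to the pragma these
// tests exercise (the loop cadence and the rw handle acquisition are
// assumptions; the pragma itself is standard SQLite):
//
//  pages := cfg.IncrementalVacuumPages()
//  if _, err := rw.Exec(fmt.Sprintf("PRAGMA incremental_vacuum(%d)", pages)); err != nil {
//      log.Printf("incremental vacuum failed: %v", err)
//  }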

@@ -106,7 +106,6 @@ type Payload struct {
    Tag        uint32    `json:"tag,omitempty"`
    AuthCode   uint32    `json:"authCode,omitempty"`
    TraceFlags *int      `json:"traceFlags,omitempty"`
    SNRValues  []float64 `json:"snrValues,omitempty"`
    RawHex     string    `json:"raw,omitempty"`
    Error      string    `json:"error,omitempty"`
}
@@ -408,19 +407,6 @@ func DecodePacket(hexString string, validateSignatures bool) (*DecodedPacket, er
    }
    // The header path hops count represents SNR entries = completed hops
    hopsCompleted := path.HashCount
    // Extract per-hop SNR from header path bytes (int8, quarter-dB encoding)
    if hopsCompleted > 0 && len(path.Hops) >= hopsCompleted {
        snrVals := make([]float64, 0, hopsCompleted)
        for i := 0; i < hopsCompleted; i++ {
            b, err := hex.DecodeString(path.Hops[i])
            if err == nil && len(b) == 1 {
                snrVals = append(snrVals, float64(int8(b[0]))/4.0)
            }
        }
        if len(snrVals) > 0 {
            payload.SNRValues = snrVals
        }
    }
    pathBytes, err := hex.DecodeString(payload.PathData)
    if err == nil && payload.TraceFlags != nil {
        // path_sz from flags byte is a power-of-two exponent per firmware:

@@ -440,51 +440,3 @@ func TestDecodeAdvertSignatureValidation(t *testing.T) {
        t.Error("expected SignatureValid to be nil when validation disabled")
    }
}

func TestDecodePacket_TraceSNRValues(t *testing.T) {
    // TRACE packet with 3 SNR bytes in header path:
    // SNR byte 0: 0x14 = int8(20) → 20/4.0 = 5.0 dB
    // SNR byte 1: 0xF4 = int8(-12) → -12/4.0 = -3.0 dB
    // SNR byte 2: 0x08 = int8(8) → 8/4.0 = 2.0 dB
    // header: DIRECT+TRACE = (0<<6)|(9<<2)|2 = 0x26
    // path_length: hash_size=0b00 (1-byte), hash_count=3 → 0x03
    hex := "2603" + "14F408" + // header + path_byte + 3 SNR bytes
        "01000000" + // tag
        "02000000" + // authCode
        "00" + // flags=0 → path_sz=1
        "AABBCCDD" // 4 route hops (1-byte each)

    pkt, err := DecodePacket(hex, false)
    if err != nil {
        t.Fatalf("DecodePacket error: %v", err)
    }
    if pkt.Payload.SNRValues == nil {
        t.Fatal("expected SNRValues to be populated")
    }
    if len(pkt.Payload.SNRValues) != 3 {
        t.Fatalf("expected 3 SNR values, got %d", len(pkt.Payload.SNRValues))
    }
    expected := []float64{5.0, -3.0, 2.0}
    for i, want := range expected {
        if pkt.Payload.SNRValues[i] != want {
            t.Errorf("SNRValues[%d] = %v, want %v", i, pkt.Payload.SNRValues[i], want)
        }
    }
}
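
// Quarter-dB SNR decoding in isolation (mirrors the decoder hunk above;
// the sample byte comes from this test):
//
//  b := byte(0xF4)               // raw per-hop SNR byte
//  snr := float64(int8(b)) / 4.0 // int8(0xF4) = -12 → -3.0 dB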
|
||||
|
||||
func TestDecodePacket_TraceNoSNRValues(t *testing.T) {
|
||||
// TRACE with 0 SNR bytes → SNRValues should be nil/empty
|
||||
hex := "2600" + // header + path_byte (0 hops)
|
||||
"01000000" + // tag
|
||||
"02000000" + // authCode
|
||||
"00" + // flags
|
||||
"AABB" // 2 route hops
|
||||
|
||||
pkt, err := DecodePacket(hex, false)
|
||||
if err != nil {
|
||||
t.Fatalf("DecodePacket error: %v", err)
|
||||
}
|
||||
if len(pkt.Payload.SNRValues) != 0 {
|
||||
t.Errorf("expected empty SNRValues, got %v", pkt.Payload.SNRValues)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -18,10 +18,6 @@ require github.com/meshcore-analyzer/packetpath v0.0.0
|
||||
|
||||
replace github.com/meshcore-analyzer/packetpath => ../../internal/packetpath
|
||||
|
||||
require github.com/meshcore-analyzer/dbconfig v0.0.0
|
||||
|
||||
replace github.com/meshcore-analyzer/dbconfig => ../../internal/dbconfig
|
||||
|
||||
require (
|
||||
github.com/dustin/go-humanize v1.0.1 // indirect
|
||||
github.com/google/uuid v1.6.0 // indirect
|
||||
|
||||
@@ -1,43 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"sync/atomic"
|
||||
)
|
||||
|
||||
// readiness tracks whether background init goroutines have completed.
|
||||
// Set to 1 once store.Load, pickBestObservation, and neighbor graph build are done.
|
||||
var readiness atomic.Int32
|
||||
|
||||
// handleHealthz returns 200 when the server is ready to serve queries,
|
||||
// or 503 while background initialization is still running.
|
||||
func (s *Server) handleHealthz(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
|
||||
if readiness.Load() == 0 {
|
||||
w.WriteHeader(http.StatusServiceUnavailable)
|
||||
json.NewEncoder(w).Encode(map[string]interface{}{
|
||||
"ready": false,
|
||||
"reason": "loading",
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
var loadedTx, loadedObs int
|
||||
if s.store != nil {
|
||||
s.store.mu.RLock()
|
||||
loadedTx = len(s.store.packets)
|
||||
for _, p := range s.store.packets {
|
||||
loadedObs += len(p.Observations)
|
||||
}
|
||||
s.store.mu.RUnlock()
|
||||
}
|
||||
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(map[string]interface{}{
|
||||
"ready": true,
|
||||
"loadedTx": loadedTx,
|
||||
"loadedObs": loadedObs,
|
||||
})
|
||||
}
|
||||
@@ -1,80 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestHealthzNotReady(t *testing.T) {
|
||||
// Ensure readiness is 0 (not ready)
|
||||
readiness.Store(0)
|
||||
defer readiness.Store(0)
|
||||
|
||||
srv := &Server{store: &PacketStore{}}
|
||||
req := httptest.NewRequest("GET", "/api/healthz", nil)
|
||||
w := httptest.NewRecorder()
|
||||
|
||||
srv.handleHealthz(w, req)
|
||||
|
||||
if w.Code != http.StatusServiceUnavailable {
|
||||
t.Fatalf("expected 503, got %d", w.Code)
|
||||
}
|
||||
|
||||
var resp map[string]interface{}
|
||||
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
|
||||
t.Fatalf("invalid JSON: %v", err)
|
||||
}
|
||||
if resp["ready"] != false {
|
||||
t.Fatalf("expected ready=false, got %v", resp["ready"])
|
||||
}
|
||||
if resp["reason"] != "loading" {
|
||||
t.Fatalf("expected reason=loading, got %v", resp["reason"])
|
||||
}
|
||||
}
|
||||
|
||||
func TestHealthzReady(t *testing.T) {
|
||||
readiness.Store(1)
|
||||
defer readiness.Store(0)
|
||||
|
||||
srv := &Server{store: &PacketStore{}}
|
||||
req := httptest.NewRequest("GET", "/api/healthz", nil)
|
||||
w := httptest.NewRecorder()
|
||||
|
||||
srv.handleHealthz(w, req)
|
||||
|
||||
if w.Code != http.StatusOK {
|
||||
t.Fatalf("expected 200, got %d", w.Code)
|
||||
}
|
||||
|
||||
var resp map[string]interface{}
|
||||
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
|
||||
t.Fatalf("invalid JSON: %v", err)
|
||||
}
|
||||
if resp["ready"] != true {
|
||||
t.Fatalf("expected ready=true, got %v", resp["ready"])
|
||||
}
|
||||
if _, ok := resp["loadedTx"]; !ok {
|
||||
t.Fatal("missing loadedTx field")
|
||||
}
|
||||
if _, ok := resp["loadedObs"]; !ok {
|
||||
t.Fatal("missing loadedObs field")
|
||||
}
|
||||
}
|
||||
|
||||
func TestHealthzAntiTautology(t *testing.T) {
|
||||
// When readiness is 0, must NOT return 200
|
||||
readiness.Store(0)
|
||||
defer readiness.Store(0)
|
||||
|
||||
srv := &Server{store: &PacketStore{}}
|
||||
req := httptest.NewRequest("GET", "/api/healthz", nil)
|
||||
w := httptest.NewRecorder()
|
||||
|
||||
srv.handleHealthz(w, req)
|
||||
|
||||
if w.Code == http.StatusOK {
|
||||
t.Fatal("anti-tautology: handler returned 200 when readiness=0; gating is broken")
|
||||
}
|
||||
}
|
||||
@@ -1,147 +0,0 @@
package main

import (
	"testing"
	"time"
)

// TestIssue804_AnalyticsAttributesByRepeaterRegion verifies that analytics
// (specifically GetAnalyticsHashSizes) attribute multi-byte nodes to the
// REPEATER's home region, not the observer that happened to hear the relay.
//
// Scenario from #804:
//   - PDX-Repeater is a multi-byte (hashSize=2) repeater whose ZERO-HOP direct
//     adverts are only heard by obs-PDX (a PDX observer). That zero-hop direct
//     advert is the most reliable home-region signal — it cannot have been
//     relayed.
//   - A flood advert from PDX-Repeater (hashSize=2) propagates and is heard by
//     obs-SJC (a SJC observer) via a multi-hop relay path.
//   - When the user asks for region=SJC analytics, the PDX-Repeater MUST NOT
//     pollute SJC's multiByteNodes — it lives in PDX.
//   - The result should also expose attributionMethod="repeater" so the API
//     consumer knows which method was used.
//
// Pre-fix behavior: PDX-Repeater appears in SJC's multiByteNodes because the
// filter is observer-based. This test fails on the pre-fix code at the
// "want PDX-Repeater EXCLUDED" assertion.
func TestIssue804_AnalyticsAttributesByRepeaterRegion(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()

	now := time.Now().UTC()
	recent := now.Add(-1 * time.Hour).Format(time.RFC3339)
	recentEpoch := now.Add(-1 * time.Hour).Unix()

	// Observers: one in PDX, one in SJC
	db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
		VALUES ('obs-pdx', 'Obs PDX', 'PDX', ?, '2026-01-01T00:00:00Z', 100)`, recent)
	db.conn.Exec(`INSERT INTO observers (id, name, iata, last_seen, first_seen, packet_count)
		VALUES ('obs-sjc', 'Obs SJC', 'SJC', ?, '2026-01-01T00:00:00Z', 100)`, recent)

	// PDX-Repeater node (lives in Portland)
	pdxPK := "pdx0000000000001"
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role)
		VALUES (?, 'PDX-Repeater', 'repeater')`, pdxPK)

	// SJC-Repeater node (lives in San Jose) — sanity baseline
	sjcPK := "sjc0000000000001"
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role)
		VALUES (?, 'SJC-Repeater', 'repeater')`, sjcPK)

	pdxDecoded := `{"pubKey":"` + pdxPK + `","name":"PDX-Repeater","type":"ADVERT","flags":{"isRepeater":true}}`
	sjcDecoded := `{"pubKey":"` + sjcPK + `","name":"SJC-Repeater","type":"ADVERT","flags":{"isRepeater":true}}`

	// 1) PDX-Repeater zero-hop DIRECT advert heard only by obs-PDX.
	//    Establishes PDX as the repeater's home region.
	//    raw_hex header 0x12 = route_type 2 (direct), payload_type 4
	//    pathByte 0x40 (hashSize bits=01 → 2, hop_count=0)
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('1240aabbccdd', 'pdx_zh_direct', ?, 2, 4, ?)`, recent, pdxDecoded)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
		VALUES (1, 1, 12.0, -85, '[]', ?)`, recentEpoch)

	// 2) PDX-Repeater FLOOD advert with hashSize=2 (reliable).
	//    Heard ONLY by obs-SJC via a relay path (this is the polluting case).
	//    raw_hex header 0x11 = route_type 1 (flood), payload_type 4
	//    pathByte 0x41 (hashSize bits=01 → 2, hop_count=1)
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('1141aabbccdd', 'pdx_flood', ?, 1, 4, ?)`, recent, pdxDecoded)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
		VALUES (2, 2, 8.0, -95, '["aa11"]', ?)`, recentEpoch)

	// 3) SJC-Repeater zero-hop DIRECT advert heard only by obs-SJC.
	//    Establishes SJC as the repeater's home region.
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('1240ccddeeff', 'sjc_zh_direct', ?, 2, 4, ?)`, recent, sjcDecoded)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
		VALUES (3, 2, 14.0, -82, '[]', ?)`, recentEpoch)

	// 4) SJC-Repeater FLOOD advert with hashSize=2, heard by obs-SJC.
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('1141ccddeeff', 'sjc_flood', ?, 1, 4, ?)`, recent, sjcDecoded)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, snr, rssi, path_json, timestamp)
		VALUES (4, 2, 11.0, -88, '["cc22"]', ?)`, recentEpoch)

	store := NewPacketStore(db, nil)
	store.Load()

	t.Run("region=SJC excludes PDX-Repeater (heard but not home)", func(t *testing.T) {
		result := store.GetAnalyticsHashSizes("SJC")

		mb, ok := result["multiByteNodes"].([]map[string]interface{})
		if !ok {
			t.Fatal("expected multiByteNodes slice")
		}

		var foundPDX, foundSJC bool
		for _, n := range mb {
			pk, _ := n["pubkey"].(string)
			if pk == pdxPK {
				foundPDX = true
			}
			if pk == sjcPK {
				foundSJC = true
			}
		}

		if foundPDX {
			t.Errorf("PDX-Repeater leaked into SJC analytics — region attribution still observer-based (#804 not fixed)")
		}
		if !foundSJC {
			t.Errorf("SJC-Repeater missing from SJC analytics — fix over-filtered")
		}
	})

	t.Run("API exposes attributionMethod", func(t *testing.T) {
		result := store.GetAnalyticsHashSizes("SJC")
		method, ok := result["attributionMethod"].(string)
		if !ok {
			t.Fatal("expected attributionMethod string field on result")
		}
		if method != "repeater" {
			t.Errorf("attributionMethod = %q, want %q", method, "repeater")
		}
	})

	t.Run("region=PDX excludes SJC-Repeater", func(t *testing.T) {
		result := store.GetAnalyticsHashSizes("PDX")
		mb, _ := result["multiByteNodes"].([]map[string]interface{})

		var foundPDX, foundSJC bool
		for _, n := range mb {
			pk, _ := n["pubkey"].(string)
			if pk == pdxPK {
				foundPDX = true
			}
			if pk == sjcPK {
				foundSJC = true
			}
		}
		if !foundPDX {
			t.Errorf("PDX-Repeater missing from PDX analytics")
		}
		if foundSJC {
			t.Errorf("SJC-Repeater leaked into PDX analytics")
		}
	})
}
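The raw_hex comments in the deleted test encode the packet header layout. A decode sketch consistent with those comments (bit positions are inferred from the fixture values 0x12/0x11 and 0x40/0x41, not taken from firmware source; the hashSize mapping of bits+1 fits the "01 → 2" example but other encodings are possible):

// decodeHeader is a hypothetical helper matching the fixture comments above:
// header 0x12 → route_type 2 (direct), payload_type 4; header 0x11 →
// route_type 1 (flood), payload_type 4; path byte 0x40 → hashSize 2, hop 0.
func decodeHeader(header, pathByte byte) (routeType, payloadType, hashSize, hopCount int) {
	routeType = int(header & 0x03)          // low 2 bits: 0x12&3=2, 0x11&3=1
	payloadType = int((header >> 2) & 0x0F) // next 4 bits: 0x12>>2=4
	hashSize = int(pathByte>>6) + 1         // top 2 bits: 01 → 2 bytes
	hopCount = int(pathByte & 0x3F)         // low 6 bits: 0x41&0x3F=1
	return
}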
@@ -1,63 +0,0 @@
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/gorilla/mux"
)

// TestIssue871_NoNullHashOrTimestamp verifies that /api/packets never returns
// packets with null/empty hash or null timestamp (issue #871).
func TestIssue871_NoNullHashOrTimestamp(t *testing.T) {
	db := setupTestDB(t)
	seedTestData(t, db)

	// Insert bad legacy data: packet with empty hash
	now := time.Now().UTC().Add(-30 * time.Minute).Format(time.RFC3339)
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('DEAD', '', ?, 1, 4, '{}')`, now)
	// Insert bad legacy data: packet with NULL first_seen (timestamp)
	db.conn.Exec(`INSERT INTO transmissions (raw_hex, hash, first_seen, route_type, payload_type, decoded_json)
		VALUES ('BEEF', 'aa11bb22cc33dd44', NULL, 1, 4, '{}')`)

	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load failed: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest(http.MethodGet, "/api/packets?limit=200", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", w.Code)
	}

	var resp struct {
		Packets []map[string]interface{} `json:"packets"`
	}
	if err := json.NewDecoder(w.Body).Decode(&resp); err != nil {
		t.Fatalf("decode error: %v", err)
	}

	for i, p := range resp.Packets {
		hash := p["hash"]
		ts := p["timestamp"]
		if hash == nil || hash == "" {
			t.Errorf("packet[%d] has null/empty hash: %v", i, p)
		}
		if ts == nil || ts == "" {
			t.Errorf("packet[%d] has null/empty timestamp: %v", i, p)
		}
	}
}
+2 -49
@@ -148,9 +148,6 @@ func main() {
			stats.TotalTransmissions, stats.TotalObservations, stats.TotalNodes, stats.TotalObservers)
	}

-	// Check auto_vacuum mode and optionally migrate (#919)
-	checkAutoVacuum(database, cfg, resolvedDB)
-
	// In-memory packet store
	store := NewPacketStore(database, cfg.PacketStore, cfg.CacheTTL)
	if err := store.Load(); err != nil {
@@ -174,27 +171,6 @@ func main() {
		database.hasResolvedPath = true // detectSchema ran before column was added; fix the flag
	}

-	// Ensure observers.inactive column exists (PR #954 filters on it; ingestor migration
-	// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
-	if err := ensureObserverInactiveColumn(dbPath); err != nil {
-		log.Printf("[store] warning: could not add observers.inactive column: %v", err)
-	}
-
-	// Ensure observers.last_packet_at column exists (PR #905 reads it; ingestor migration
-	// adds it but server may run against DBs ingestor never touched, e.g. e2e fixture).
-	if err := ensureLastPacketAtColumn(dbPath); err != nil {
-		log.Printf("[store] warning: could not add observers.last_packet_at column: %v", err)
-	}
-
-	// Soft-delete observers that are in the blacklist (mark inactive=1) so
-	// historical data from a prior unblocked window is hidden too.
-	if len(cfg.ObserverBlacklist) > 0 {
-		softDeleteBlacklistedObservers(dbPath, cfg.ObserverBlacklist)
-	}
-
-	// WaitGroup for background init steps that gate /api/healthz readiness.
-	var initWg sync.WaitGroup
-
	// Load or build neighbor graph
	if neighborEdgesTableExists(database.conn) {
		store.graph = loadNeighborEdgesFromDB(database.conn)
@@ -202,17 +178,16 @@ func main() {
	} else {
		log.Printf("[neighbor] no persisted edges found, will build in background...")
		store.graph = NewNeighborGraph() // empty graph — gets populated by background goroutine
-		initWg.Add(1)
		go func() {
-			defer initWg.Done()
			defer func() {
				if r := recover(); r != nil {
					log.Printf("[neighbor] graph build panic recovered: %v", r)
				}
			}()
-			rw, rwErr := cachedRW(dbPath)
+			rw, rwErr := openRW(dbPath)
			if rwErr == nil {
				edgeCount := buildAndPersistEdges(store, rw)
+				rw.Close()
				log.Printf("[neighbor] persisted %d edges", edgeCount)
			}
			built := BuildFromStore(store)
@@ -227,9 +202,7 @@ func main() {
	// API serves best-effort data until this completes (~10s for 100K txs).
	// Processes in chunks of 5000, releasing the lock between chunks so API
	// handlers remain responsive.
-	initWg.Add(1)
	go func() {
-		defer initWg.Done()
		defer func() {
			if r := recover(); r != nil {
				log.Printf("[store] pickBestObservation panic recovered: %v", r)
@@ -257,13 +230,6 @@ func main() {
		log.Printf("[store] initial pickBestObservation complete (%d transmissions)", totalPackets)
	}()

-	// Mark server ready once all background init completes.
-	go func() {
-		initWg.Wait()
-		readiness.Store(1)
-		log.Printf("[server] readiness: ready=true (background init complete)")
-	}()
-
	// WebSocket hub
	hub := NewHub()

@@ -300,7 +266,6 @@ func main() {
	defer stopEviction()

	// Auto-prune old packets if retention.packetDays is configured
-	vacuumPages := cfg.IncrementalVacuumPages()
	var stopPrune func()
	if cfg.Retention != nil && cfg.Retention.PacketDays > 0 {
		days := cfg.Retention.PacketDays
@@ -321,9 +286,6 @@ func main() {
				log.Printf("[prune] error: %v", err)
			} else {
				log.Printf("[prune] deleted %d transmissions older than %d days", n, days)
-				if n > 0 {
-					runIncrementalVacuum(resolvedDB, vacuumPages)
-				}
			}
			for {
				select {
@@ -332,9 +294,6 @@ func main() {
					log.Printf("[prune] error: %v", err)
				} else {
					log.Printf("[prune] deleted %d transmissions older than %d days", n, days)
-					if n > 0 {
-						runIncrementalVacuum(resolvedDB, vacuumPages)
-					}
				}
			case <-pruneDone:
				return
@@ -362,12 +321,10 @@ func main() {
		}()
		time.Sleep(2 * time.Minute) // stagger after packet prune
		database.PruneOldMetrics(metricsDays)
-		runIncrementalVacuum(resolvedDB, vacuumPages)
		for {
			select {
			case <-metricsPruneTicker.C:
				database.PruneOldMetrics(metricsDays)
-				runIncrementalVacuum(resolvedDB, vacuumPages)
			case <-metricsPruneDone:
				return
			}
@@ -397,12 +354,10 @@ func main() {
		}()
		time.Sleep(3 * time.Minute) // stagger after metrics prune
		database.RemoveStaleObservers(observerDays)
-		runIncrementalVacuum(resolvedDB, vacuumPages)
		for {
			select {
			case <-observerPruneTicker.C:
				database.RemoveStaleObservers(observerDays)
-				runIncrementalVacuum(resolvedDB, vacuumPages)
			case <-observerPruneDone:
				return
			}
@@ -433,7 +388,6 @@ func main() {
		g := store.graph
		store.mu.RUnlock()
		PruneNeighborEdges(dbPath, g, maxAgeDays)
-		runIncrementalVacuum(resolvedDB, vacuumPages)
		for {
			select {
			case <-edgePruneTicker.C:
@@ -441,7 +395,6 @@ func main() {
				g := store.graph
				store.mu.RUnlock()
				PruneNeighborEdges(dbPath, g, maxAgeDays)
-				runIncrementalVacuum(resolvedDB, vacuumPages)
			case <-edgePruneDone:
				return
			}
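The prune goroutines above all follow one shape: sleep to stagger against the other loops, run once, then tick until a done channel fires. A generic sketch of that loop (hypothetical helper, for illustration only):

// startPruneLoop is a hypothetical distillation of the pattern above:
// staggered first run, then periodic runs until stop() is called.
func startPruneLoop(stagger, every time.Duration, prune func()) (stop func()) {
	done := make(chan struct{})
	ticker := time.NewTicker(every)
	go func() {
		defer ticker.Stop()
		time.Sleep(stagger) // offset from the other prune loops
		prune()
		for {
			select {
			case <-ticker.C:
				prune()
			case <-done:
				return
			}
		}
	}()
	return func() { close(done) }
}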
@@ -1,57 +0,0 @@
package main

import "testing"

func TestEnrichNodeWithMultiByte(t *testing.T) {
	t.Run("nil entry leaves no fields", func(t *testing.T) {
		node := map[string]interface{}{"public_key": "abc123"}
		EnrichNodeWithMultiByte(node, nil)
		if _, ok := node["multi_byte_status"]; ok {
			t.Error("expected no multi_byte_status with nil entry")
		}
	})

	t.Run("confirmed entry sets fields", func(t *testing.T) {
		node := map[string]interface{}{"public_key": "abc123"}
		entry := &MultiByteCapEntry{
			Status:      "confirmed",
			Evidence:    "advert",
			MaxHashSize: 2,
		}
		EnrichNodeWithMultiByte(node, entry)
		if node["multi_byte_status"] != "confirmed" {
			t.Errorf("expected confirmed, got %v", node["multi_byte_status"])
		}
		if node["multi_byte_evidence"] != "advert" {
			t.Errorf("expected advert, got %v", node["multi_byte_evidence"])
		}
		if node["multi_byte_max_hash_size"] != 2 {
			t.Errorf("expected 2, got %v", node["multi_byte_max_hash_size"])
		}
	})

	t.Run("suspected entry sets fields", func(t *testing.T) {
		node := map[string]interface{}{"public_key": "abc123"}
		entry := &MultiByteCapEntry{
			Status:      "suspected",
			Evidence:    "path",
			MaxHashSize: 2,
		}
		EnrichNodeWithMultiByte(node, entry)
		if node["multi_byte_status"] != "suspected" {
			t.Errorf("expected suspected, got %v", node["multi_byte_status"])
		}
	})

	t.Run("unknown entry sets status unknown", func(t *testing.T) {
		node := map[string]interface{}{"public_key": "abc123"}
		entry := &MultiByteCapEntry{
			Status:      "unknown",
			MaxHashSize: 1,
		}
		EnrichNodeWithMultiByte(node, entry)
		if node["multi_byte_status"] != "unknown" {
			t.Errorf("expected unknown, got %v", node["multi_byte_status"])
		}
	})
}
+14 -115
@@ -20,10 +20,11 @@ var persistSem = make(chan struct{}, 1)
// ensureNeighborEdgesTable creates the neighbor_edges table if it doesn't exist.
// Uses a separate read-write connection since the main DB is read-only.
func ensureNeighborEdgesTable(dbPath string) error {
-	rw, err := cachedRW(dbPath)
+	rw, err := openRW(dbPath)
	if err != nil {
		return fmt.Errorf("open rw for neighbor_edges: %w", err)
	}
+	defer rw.Close()

	_, err = rw.Exec(`CREATE TABLE IF NOT EXISTS neighbor_edges (
		node_a TEXT NOT NULL,
@@ -128,11 +129,12 @@ func asyncPersistResolvedPathsAndEdges(dbPath string, obsUpdates []persistObsUpd
	go func() {
		defer func() { <-persistSem }()

-		rw, err := cachedRW(dbPath)
+		rw, err := openRW(dbPath)
		if err != nil {
			log.Printf("[store] %s rw open error: %v", logPrefix, err)
			return
		}
+		defer rw.Close()

		if len(obsUpdates) > 0 {
			sqlTx, err := rw.Begin()
@@ -247,10 +249,11 @@ func buildAndPersistEdges(store *PacketStore, rw *sql.DB) int {

// ensureResolvedPathColumn adds the resolved_path column to observations if missing.
func ensureResolvedPathColumn(dbPath string) error {
-	rw, err := cachedRW(dbPath)
+	rw, err := openRW(dbPath)
	if err != nil {
		return err
	}
+	defer rw.Close()

	// Check if column already exists
	rows, err := rw.Query("PRAGMA table_info(observations)")
@@ -278,115 +281,6 @@ func ensureResolvedPathColumn(dbPath string) error {
	return nil
}

-// ensureObserverInactiveColumn adds the inactive column to observers if missing.
-// The column was originally added by ingestor migration (cmd/ingestor/db.go:344) to
-// support soft-delete via RemoveStaleObservers + filtered reads (PR #954). When the
-// server starts against a DB that was never touched by the ingestor (e.g. the e2e
-// fixture), the column is missing and read queries that filter on it (GetObservers,
-// GetStats) silently fail with "no such column: inactive" — leaving /api/observers
-// returning empty.
-func ensureObserverInactiveColumn(dbPath string) error {
-	rw, err := cachedRW(dbPath)
-	if err != nil {
-		return err
-	}
-
-	rows, err := rw.Query("PRAGMA table_info(observers)")
-	if err != nil {
-		return err
-	}
-	defer rows.Close()
-
-	for rows.Next() {
-		var cid int
-		var colName string
-		var colType sql.NullString
-		var notNull, pk int
-		var dflt sql.NullString
-		if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "inactive" {
-			return nil // already exists
-		}
-	}
-
-	_, err = rw.Exec("ALTER TABLE observers ADD COLUMN inactive INTEGER DEFAULT 0")
-	if err != nil {
-		return fmt.Errorf("add inactive column: %w", err)
-	}
-	log.Println("[store] Added inactive column to observers")
-	return nil
-}
-
-// ensureLastPacketAtColumn adds the last_packet_at column to observers if missing.
-// The column was originally added by ingestor migration (observers_last_packet_at_v1)
-// to track the most recent packet observation time separately from status updates.
-// When the server starts against a DB that was never touched by the ingestor (e.g.
-// the e2e fixture), the column is missing and read queries that reference it
-// (GetObservers, GetObserverByID) fail with "no such column: last_packet_at".
-func ensureLastPacketAtColumn(dbPath string) error {
-	rw, err := cachedRW(dbPath)
-	if err != nil {
-		return err
-	}
-
-	rows, err := rw.Query("PRAGMA table_info(observers)")
-	if err != nil {
-		return err
-	}
-	defer rows.Close()
-
-	for rows.Next() {
-		var cid int
-		var colName string
-		var colType sql.NullString
-		var notNull, pk int
-		var dflt sql.NullString
-		if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
-			return nil // already exists
-		}
-	}
-
-	_, err = rw.Exec("ALTER TABLE observers ADD COLUMN last_packet_at TEXT")
-	if err != nil {
-		return fmt.Errorf("add last_packet_at column: %w", err)
-	}
-	log.Println("[store] Added last_packet_at column to observers")
-	return nil
-}
-
-// softDeleteBlacklistedObservers marks observers matching the blacklist as
-// inactive=1 so they are hidden from API responses. Runs once at startup.
-func softDeleteBlacklistedObservers(dbPath string, blacklist []string) {
-	rw, err := cachedRW(dbPath)
-	if err != nil {
-		log.Printf("[observer-blacklist] warning: could not open DB for soft-delete: %v", err)
-		return
-	}
-
-	placeholders := make([]string, 0, len(blacklist))
-	args := make([]interface{}, 0, len(blacklist))
-	for _, pk := range blacklist {
-		trimmed := strings.TrimSpace(pk)
-		if trimmed == "" {
-			continue
-		}
-		placeholders = append(placeholders, "LOWER(?)")
-		args = append(args, trimmed)
-	}
-	if len(placeholders) == 0 {
-		return
-	}
-
-	query := "UPDATE observers SET inactive = 1 WHERE LOWER(id) IN (" + strings.Join(placeholders, ",") + ") AND (inactive IS NULL OR inactive = 0)"
-	result, err := rw.Exec(query, args...)
-	if err != nil {
-		log.Printf("[observer-blacklist] warning: soft-delete failed: %v", err)
-		return
-	}
-	if n, _ := result.RowsAffected(); n > 0 {
-		log.Printf("[observer-blacklist] soft-deleted %d blacklisted observer(s)", n)
-	}
-}
-
// resolvePathForObs resolves hop prefixes to full pubkeys for an observation.
// Returns nil if path is empty.
func resolvePathForObs(pathJSON, observerID string, tx *StoreTx, pm *prefixMap, graph *NeighborGraph) []*string {
@@ -522,12 +416,16 @@ func backfillResolvedPathsAsync(store *PacketStore, dbPath string, chunkSize int
	var rw *sql.DB
	if dbPath != "" {
		var err error
-		rw, err = cachedRW(dbPath)
+		rw, err = openRW(dbPath)
		if err != nil {
			log.Printf("[store] async backfill: open rw error: %v", err)
		}
	}
-	// rw is cached process-wide; do not close
+	defer func() {
+		if rw != nil {
+			rw.Close()
+		}
+	}()

	totalProcessed := 0
	for totalProcessed < totalPending {
@@ -752,10 +650,11 @@ func PruneNeighborEdges(dbPath string, graph *NeighborGraph, maxAgeDays int) (in

	// 1. Prune from SQLite using a read-write connection
	var dbPruned int64
-	rw, err := cachedRW(dbPath)
+	rw, err := openRW(dbPath)
	if err != nil {
		return 0, fmt.Errorf("prune neighbor_edges: open rw: %w", err)
	}
+	defer rw.Close()
	res, err := rw.Exec("DELETE FROM neighbor_edges WHERE last_seen < ?", cutoff.Format(time.RFC3339))
	if err != nil {
		return 0, fmt.Errorf("prune neighbor_edges: %w", err)
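Every cachedRW call above becomes openRW plus an explicit Close, trading one shared long-lived handle for a short-lived per-task one. A plausible openRW sketch, assuming the 5-second busy timeout that TestOpenRW_BusyTimeout below asserts (driver name "sqlite" matches the test files; the real function may set the pragma via the DSN instead):

// openRW opens a fresh read-write handle that callers close themselves.
// The busy_timeout keeps short-lived writers from failing on lock contention.
func openRW(dbPath string) (*sql.DB, error) {
	db, err := sql.Open("sqlite", dbPath)
	if err != nil {
		return nil, err
	}
	if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
		db.Close()
		return nil, err
	}
	return db, nil
}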
@@ -538,62 +538,3 @@ func TestOpenRW_BusyTimeout(t *testing.T) {
		t.Errorf("expected busy_timeout=5000, got %d", timeout)
	}
}
-
-func TestEnsureLastPacketAtColumn(t *testing.T) {
-	// Create a temp DB with observers table missing last_packet_at
-	dir := t.TempDir()
-	dbPath := dir + "/test.db"
-	db, err := sql.Open("sqlite", dbPath)
-	if err != nil {
-		t.Fatal(err)
-	}
-	_, err = db.Exec(`CREATE TABLE observers (
-		id TEXT PRIMARY KEY,
-		name TEXT,
-		last_seen TEXT,
-		lat REAL,
-		lon REAL,
-		inactive INTEGER DEFAULT 0
-	)`)
-	if err != nil {
-		t.Fatal(err)
-	}
-	db.Close()
-
-	// First call: should add the column
-	if err := ensureLastPacketAtColumn(dbPath); err != nil {
-		t.Fatalf("first call failed: %v", err)
-	}
-
-	// Verify column exists
-	db2, err := sql.Open("sqlite", dbPath)
-	if err != nil {
-		t.Fatal(err)
-	}
-	defer db2.Close()
-
-	var found bool
-	rows, err := db2.Query("PRAGMA table_info(observers)")
-	if err != nil {
-		t.Fatal(err)
-	}
-	defer rows.Close()
-	for rows.Next() {
-		var cid int
-		var colName string
-		var colType sql.NullString
-		var notNull, pk int
-		var dflt sql.NullString
-		if rows.Scan(&cid, &colName, &colType, &notNull, &dflt, &pk) == nil && colName == "last_packet_at" {
-			found = true
-		}
-	}
-	if !found {
-		t.Fatal("last_packet_at column not found after migration")
-	}
-
-	// Idempotency: second call should succeed without error
-	if err := ensureLastPacketAtColumn(dbPath); err != nil {
-		t.Fatalf("idempotent call failed: %v", err)
-	}
-}
@@ -1,159 +0,0 @@
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestConfigIsObserverBlacklisted(t *testing.T) {
	cfg := &Config{
		ObserverBlacklist: []string{"OBS1", "obs2", " Obs3 "},
	}

	tests := []struct {
		id   string
		want bool
	}{
		{"OBS1", true},
		{"obs1", true}, // case-insensitive
		{"OBS2", true},
		{"Obs3", true}, // whitespace trimmed
		{"obs4", false},
		{"", false},
	}

	for _, tt := range tests {
		got := cfg.IsObserverBlacklisted(tt.id)
		if got != tt.want {
			t.Errorf("IsObserverBlacklisted(%q) = %v, want %v", tt.id, got, tt.want)
		}
	}
}

func TestConfigIsObserverBlacklistedEmpty(t *testing.T) {
	cfg := &Config{}
	if cfg.IsObserverBlacklisted("anything") {
		t.Error("empty blacklist should not match anything")
	}
}

func TestConfigIsObserverBlacklistedNil(t *testing.T) {
	var cfg *Config
	if cfg.IsObserverBlacklisted("anything") {
		t.Error("nil config should not match anything")
	}
}

func TestObserverBlacklistFiltersHandleObservers(t *testing.T) {
	db := setupTestDB(t)
	db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('goodobs', 'GoodObs', 'SFO', datetime('now'))")
	db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")

	cfg := &Config{
		ObserverBlacklist: []string{"badobs"},
	}
	srv := NewServer(db, cfg, NewHub())
	srv.RegisterRoutes(setupTestRouter(srv))

	req := httptest.NewRequest("GET", "/api/observers", nil)
	w := httptest.NewRecorder()
	srv.router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
	}

	var resp ObserverListResponse
	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
		t.Fatalf("failed to parse response: %v", err)
	}

	for _, obs := range resp.Observers {
		if obs.ID == "badobs" {
			t.Error("blacklisted observer should not appear in observers list")
		}
	}

	foundGood := false
	for _, obs := range resp.Observers {
		if obs.ID == "goodobs" {
			foundGood = true
		}
	}
	if !foundGood {
		t.Error("non-blacklisted observer should appear in observers list")
	}
}

func TestObserverBlacklistFiltersObserverDetail(t *testing.T) {
	db := setupTestDB(t)
	db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('badobs', 'BadObs', 'LAX', datetime('now'))")

	cfg := &Config{
		ObserverBlacklist: []string{"badobs"},
	}
	srv := NewServer(db, cfg, NewHub())
	srv.RegisterRoutes(setupTestRouter(srv))

	req := httptest.NewRequest("GET", "/api/observers/badobs", nil)
	w := httptest.NewRecorder()
	srv.router.ServeHTTP(w, req)

	if w.Code != http.StatusNotFound {
		t.Errorf("expected 404 for blacklisted observer detail, got %d", w.Code)
	}
}

func TestNoObserverBlacklistPassesAll(t *testing.T) {
	db := setupTestDB(t)
	db.conn.Exec("INSERT OR IGNORE INTO observers (id, name, iata, last_seen) VALUES ('someobs', 'SomeObs', 'SFO', datetime('now'))")

	cfg := &Config{}
	srv := NewServer(db, cfg, NewHub())
	srv.RegisterRoutes(setupTestRouter(srv))

	req := httptest.NewRequest("GET", "/api/observers", nil)
	w := httptest.NewRecorder()
	srv.router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", w.Code)
	}

	var resp ObserverListResponse
	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
		t.Fatalf("failed to parse response: %v", err)
	}

	foundSome := false
	for _, obs := range resp.Observers {
		if obs.ID == "someobs" {
			foundSome = true
		}
	}
	if !foundSome {
		t.Error("without blacklist, observer should appear")
	}
}

func TestObserverBlacklistConcurrent(t *testing.T) {
	cfg := &Config{
		ObserverBlacklist: []string{"AA", "BB", "CC"},
	}

	done := make(chan struct{})
	for i := 0; i < 50; i++ {
		go func() {
			defer func() { done <- struct{}{} }()
			for j := 0; j < 100; j++ {
				cfg.IsObserverBlacklisted("AA")
				cfg.IsObserverBlacklisted("DD")
			}
		}()
	}
	for i := 0; i < 50; i++ {
		<-done
	}
}
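Taken together, the deleted tests specify the matcher's contract: nil-safe, whitespace-trimmed, case-insensitive, and safe for concurrent reads. A sketch that satisfies all of them (the real implementation in the server's config code may differ):

// IsObserverBlacklisted sketch: read-only over the configured list, so it is
// trivially safe under the concurrent access the last test exercises.
func (c *Config) IsObserverBlacklisted(id string) bool {
	if c == nil || id == "" {
		return false
	}
	for _, pk := range c.ObserverBlacklist {
		if strings.EqualFold(strings.TrimSpace(pk), id) {
			return true
		}
	}
	return false
}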
@@ -45,7 +45,6 @@ func routeDescriptions() map[string]routeMeta {
	"POST /api/perf/reset": {Summary: "Reset performance stats", Tag: "admin", Auth: true},
	"POST /api/admin/prune": {Summary: "Prune old data", Description: "Deletes packets and nodes older than the configured retention period.", Tag: "admin", Auth: true},
	"GET /api/debug/affinity": {Summary: "Debug neighbor affinity scores", Tag: "admin", Auth: true},
	"GET /api/backup": {Summary: "Download SQLite backup", Description: "Streams a consistent SQLite snapshot of the analyzer DB (VACUUM INTO). Response is application/octet-stream with attachment filename corescope-backup-<unix>.db.", Tag: "admin", Auth: true},

	// Packets
	"GET /api/packets": {Summary: "List packets", Description: "Returns decoded packets with filtering, sorting, and pagination.", Tag: "packets",
@@ -1,427 +0,0 @@
package main

import (
	"encoding/hex"
	"encoding/json"
	"math"
	"net/http"
	"sort"
	"strings"
	"time"
)

// ─── Path Inspector ────────────────────────────────────────────────────────────
// POST /api/paths/inspect — beam-search scorer for prefix path candidates.
// Spec: issue #944 §2.1–2.5.

// pathInspectRequest is the JSON body for the inspect endpoint.
type pathInspectRequest struct {
	Prefixes []string            `json:"prefixes"`
	Context  *pathInspectContext `json:"context,omitempty"`
	Limit    int                 `json:"limit,omitempty"`
}

type pathInspectContext struct {
	ObserverID string `json:"observerId,omitempty"`
	Since      string `json:"since,omitempty"`
	Until      string `json:"until,omitempty"`
}

// pathCandidate is one scored candidate path in the response.
type pathCandidate struct {
	Path        []string     `json:"path"`
	Names       []string     `json:"names"`
	Score       float64      `json:"score"`
	Speculative bool         `json:"speculative"`
	Evidence    pathEvidence `json:"evidence"`
}

type pathEvidence struct {
	PerHop []hopEvidence `json:"perHop"`
}

type hopEvidence struct {
	Prefix               string           `json:"prefix"`
	CandidatesConsidered int              `json:"candidatesConsidered"`
	Chosen               string           `json:"chosen"`
	EdgeWeight           float64          `json:"edgeWeight"`
	Alternatives         []hopAlternative `json:"alternatives,omitempty"`
}

// hopAlternative shows a candidate that was considered but not chosen for this hop.
type hopAlternative struct {
	PublicKey string  `json:"publicKey"`
	Name      string  `json:"name"`
	Score     float64 `json:"score"`
}

type pathInspectResponse struct {
	Candidates []pathCandidate        `json:"candidates"`
	Input      map[string]interface{} `json:"input"`
	Stats      map[string]interface{} `json:"stats"`
}

// beamEntry represents a partial path being extended during beam search.
type beamEntry struct {
	pubkeys  []string
	names    []string
	evidence []hopEvidence
	score    float64 // product of per-hop scores (pre-geometric-mean)
}

const (
	beamWidth            = 20
	maxInputHops         = 64
	maxPrefixBytes       = 3
	maxRequestItems      = 64
	geoMaxKm             = 50.0
	hopScoreFloor        = 0.05
	speculativeThreshold = 0.7
	inspectCacheTTL      = 30 * time.Second
	inspectBodyLimit     = 4096
)

// Weights per spec §2.3.
const (
	wEdge        = 0.35
	wGeo         = 0.20
	wRecency     = 0.15
	wSelectivity = 0.30
)

func (s *Server) handlePathInspect(w http.ResponseWriter, r *http.Request) {
	// Body limit per spec §2.1.
	r.Body = http.MaxBytesReader(w, r.Body, inspectBodyLimit)

	var req pathInspectRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, `{"error":"invalid JSON"}`, http.StatusBadRequest)
		return
	}

	// Validate prefixes.
	if len(req.Prefixes) == 0 {
		http.Error(w, `{"error":"prefixes required"}`, http.StatusBadRequest)
		return
	}
	if len(req.Prefixes) > maxRequestItems {
		http.Error(w, `{"error":"too many prefixes (max 64)"}`, http.StatusBadRequest)
		return
	}

	// Normalize + validate each prefix.
	prefixByteLen := -1
	for i, p := range req.Prefixes {
		p = strings.ToLower(strings.TrimSpace(p))
		req.Prefixes[i] = p
		if len(p) == 0 || len(p)%2 != 0 {
			http.Error(w, `{"error":"prefixes must be even-length hex"}`, http.StatusBadRequest)
			return
		}
		if _, err := hex.DecodeString(p); err != nil {
			http.Error(w, `{"error":"prefixes must be valid hex"}`, http.StatusBadRequest)
			return
		}
		byteLen := len(p) / 2
		if byteLen > maxPrefixBytes {
			http.Error(w, `{"error":"prefix exceeds 3 bytes"}`, http.StatusBadRequest)
			return
		}
		if prefixByteLen == -1 {
			prefixByteLen = byteLen
		} else if byteLen != prefixByteLen {
			http.Error(w, `{"error":"mixed prefix lengths not allowed"}`, http.StatusBadRequest)
			return
		}
	}

	limit := req.Limit
	if limit <= 0 {
		limit = 10
	}
	if limit > 50 {
		limit = 50
	}

	// Check cache.
	cacheKey := s.store.inspectCacheKey(req)
	s.store.inspectMu.RLock()
	if cached, ok := s.store.inspectCache[cacheKey]; ok && time.Now().Before(cached.expiresAt) {
		s.store.inspectMu.RUnlock()
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(cached.data)
		return
	}
	s.store.inspectMu.RUnlock()

	// Snapshot data under read lock.
	nodes, pm := s.store.getCachedNodesAndPM()

	// Build pubkey→nodeInfo map for O(1) geo lookup in scorer.
	nodeByPK := make(map[string]*nodeInfo, len(nodes))
	for i := range nodes {
		nodeByPK[strings.ToLower(nodes[i].PublicKey)] = &nodes[i]
	}

	// Get neighbor graph; handle cold start.
	graph := s.store.graph
	if graph == nil || graph.IsStale() {
		rebuilt := make(chan struct{})
		go func() {
			s.store.ensureNeighborGraph()
			close(rebuilt)
		}()
		select {
		case <-rebuilt:
			graph = s.store.graph
		case <-time.After(2 * time.Second):
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusServiceUnavailable)
			json.NewEncoder(w).Encode(map[string]interface{}{"retry": true})
			return
		}
		if graph == nil {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusServiceUnavailable)
			json.NewEncoder(w).Encode(map[string]interface{}{"retry": true})
			return
		}
	}

	now := time.Now()
	start := now

	// Beam search.
	beam := s.store.beamSearch(req.Prefixes, pm, graph, nodeByPK, now)

	// Sort by score descending, take top limit.
	sortBeam(beam)
	if len(beam) > limit {
		beam = beam[:limit]
	}

	// Build response with per-hop alternatives (spec §2.7, M2 fix).
	candidates := make([]pathCandidate, 0, len(beam))
	for _, entry := range beam {
		nHops := len(entry.pubkeys)
		var score float64
		if nHops > 0 {
			score = math.Pow(entry.score, 1.0/float64(nHops))
		}

		// Populate per-hop alternatives: other candidates at each hop that weren't chosen.
		evidence := make([]hopEvidence, len(entry.evidence))
		copy(evidence, entry.evidence)
		for hi, ev := range evidence {
			if hi >= len(req.Prefixes) {
				break
			}
			prefix := req.Prefixes[hi]
			allCands := pm.m[prefix]
			var alts []hopAlternative
			for _, c := range allCands {
				if !canAppearInPath(c.Role) || c.PublicKey == ev.Chosen {
					continue
				}
				// Score this alternative in context of the partial path up to this hop.
				var partialEntry beamEntry
				if hi > 0 {
					partialEntry = beamEntry{pubkeys: entry.pubkeys[:hi], names: entry.names[:hi], score: 1.0}
				}
				altScore := s.store.scoreHop(partialEntry, c, ev.CandidatesConsidered, graph, nodeByPK, now, hi)
				alts = append(alts, hopAlternative{PublicKey: c.PublicKey, Name: c.Name, Score: math.Round(altScore*1000) / 1000})
			}
			// Sort alts by score desc, cap at 5.
			sort.Slice(alts, func(i, j int) bool { return alts[i].Score > alts[j].Score })
			if len(alts) > 5 {
				alts = alts[:5]
			}
			evidence[hi] = hopEvidence{
				Prefix:               ev.Prefix,
				CandidatesConsidered: ev.CandidatesConsidered,
				Chosen:               ev.Chosen,
				EdgeWeight:           ev.EdgeWeight,
				Alternatives:         alts,
			}
		}

		candidates = append(candidates, pathCandidate{
			Path:        entry.pubkeys,
			Names:       entry.names,
			Score:       math.Round(score*1000) / 1000,
			Speculative: score < speculativeThreshold,
			Evidence:    pathEvidence{PerHop: evidence},
		})
	}

	elapsed := time.Since(start).Milliseconds()
	resp := pathInspectResponse{
		Candidates: candidates,
		Input: map[string]interface{}{
			"prefixes": req.Prefixes,
			"hops":     len(req.Prefixes),
		},
		Stats: map[string]interface{}{
			"beamWidth":     beamWidth,
			"expansionsRun": len(req.Prefixes) * beamWidth,
			"elapsedMs":     elapsed,
		},
	}

	// Cache result (and evict stale entries).
	s.store.inspectMu.Lock()
	if s.store.inspectCache == nil {
		s.store.inspectCache = make(map[string]*inspectCachedResult)
	}
	now2 := time.Now()
	for k, v := range s.store.inspectCache {
		if now2.After(v.expiresAt) {
			delete(s.store.inspectCache, k)
		}
	}
	s.store.inspectCache[cacheKey] = &inspectCachedResult{
		data:      resp,
		expiresAt: now2.Add(inspectCacheTTL),
	}
	s.store.inspectMu.Unlock()

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

type inspectCachedResult struct {
	data      pathInspectResponse
	expiresAt time.Time
}

func (s *PacketStore) inspectCacheKey(req pathInspectRequest) string {
	key := strings.Join(req.Prefixes, ",")
	if req.Context != nil {
		key += "|" + req.Context.ObserverID + "|" + req.Context.Since + "|" + req.Context.Until
	}
	return key
}

func (s *PacketStore) beamSearch(prefixes []string, pm *prefixMap, graph *NeighborGraph, nodeByPK map[string]*nodeInfo, now time.Time) []beamEntry {
	// Start with empty beam.
	beam := []beamEntry{{pubkeys: nil, names: nil, evidence: nil, score: 1.0}}

	for hopIdx, prefix := range prefixes {
		candidates := pm.m[prefix]
		// Filter by role at lookup time (spec §2.2 step 2).
		var filtered []nodeInfo
		for _, c := range candidates {
			if canAppearInPath(c.Role) {
				filtered = append(filtered, c)
			}
		}

		candidateCount := len(filtered)
		if candidateCount == 0 {
			// No candidates for this hop — beam dies.
			return nil
		}

		var nextBeam []beamEntry
		for _, entry := range beam {
			for _, cand := range filtered {
				hopScore := s.scoreHop(entry, cand, candidateCount, graph, nodeByPK, now, hopIdx)
				if hopScore < hopScoreFloor {
					hopScore = hopScoreFloor
				}

				newEntry := beamEntry{
					pubkeys: append(append([]string{}, entry.pubkeys...), cand.PublicKey),
					names:   append(append([]string{}, entry.names...), cand.Name),
					evidence: append(append([]hopEvidence{}, entry.evidence...), hopEvidence{
						Prefix:               prefix,
						CandidatesConsidered: candidateCount,
						Chosen:               cand.PublicKey,
						EdgeWeight:           hopScore,
					}),
					score: entry.score * hopScore,
				}
				nextBeam = append(nextBeam, newEntry)
			}
		}

		// Prune to beam width.
		sortBeam(nextBeam)
		if len(nextBeam) > beamWidth {
			nextBeam = nextBeam[:beamWidth]
		}
		beam = nextBeam
	}

	return beam
}

func (s *PacketStore) scoreHop(entry beamEntry, cand nodeInfo, candidateCount int, graph *NeighborGraph, nodeByPK map[string]*nodeInfo, now time.Time, hopIdx int) float64 {
	var edgeScore float64
	var geoScore float64 = 1.0
	var recencyScore float64 = 1.0

	if hopIdx == 0 || len(entry.pubkeys) == 0 {
		// First hop: no prior node to compare against.
		edgeScore = 1.0
	} else {
		lastPK := entry.pubkeys[len(entry.pubkeys)-1]

		// Single scan over neighbors for both edge weight and recency.
		edges := graph.Neighbors(lastPK)
		var foundEdge *NeighborEdge
		for _, e := range edges {
			peer := e.NodeA
			if strings.EqualFold(peer, lastPK) {
				peer = e.NodeB
			}
			if strings.EqualFold(peer, cand.PublicKey) {
				foundEdge = e
				break
			}
		}

		if foundEdge != nil {
			edgeScore = foundEdge.Score(now)
			hoursSince := now.Sub(foundEdge.LastSeen).Hours()
			if hoursSince <= 24 {
				recencyScore = 1.0
			} else {
				recencyScore = math.Max(0.1, 24.0/hoursSince)
			}
		} else {
			edgeScore = 0
			recencyScore = 0
		}

		// Geographic plausibility.
		prevNode := nodeByPK[strings.ToLower(lastPK)]
		if prevNode != nil && prevNode.HasGPS && cand.HasGPS {
			dist := haversineKm(prevNode.Lat, prevNode.Lon, cand.Lat, cand.Lon)
			if dist > geoMaxKm {
				geoScore = math.Max(0.1, geoMaxKm/dist)
			}
		}
	}

	// Prefix selectivity.
	selectivityScore := 1.0 / float64(candidateCount)

	return wEdge*edgeScore + wGeo*geoScore + wRecency*recencyScore + wSelectivity*selectivityScore
}

func sortBeam(beam []beamEntry) {
	sort.Slice(beam, func(i, j int) bool {
		return beam[i].score > beam[j].score
	})
}

// ensureNeighborGraph triggers a graph rebuild if nil or stale.
func (s *PacketStore) ensureNeighborGraph() {
	if s.graph != nil && !s.graph.IsStale() {
		return
	}
	g := BuildFromStore(s)
	s.graph = g
}
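To make the scorer concrete: the four weights sum to 1.0, so a first hop (edge, geo, and recency all fixed at 1.0) with three prefix candidates scores 0.35 + 0.20 + 0.15 + 0.30/3 = 0.80. A second hop with no known edge (edge and recency both 0, geo defaulting to 1.0) and two candidates scores 0.20 + 0.30 * 0.5 = 0.35. The candidate's reported score is the geometric mean over hops, sqrt(0.80 * 0.35) ≈ 0.53, which falls below speculativeThreshold (0.7), so the path is flagged speculative. That arithmetic is exactly what the unit tests in the next deleted file assert.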
@@ -1,308 +0,0 @@
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"math"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
	"time"
)

// ─── Unit tests for path inspector (issue #944) ────────────────────────────────

func TestScoreHop_EdgeWeight(t *testing.T) {
	store := &PacketStore{}
	graph := NewNeighborGraph()
	now := time.Now()

	// Add an edge between A and B.
	graph.mu.Lock()
	edge := &NeighborEdge{
		NodeA: "aaaa", NodeB: "bbbb",
		Count: 50, LastSeen: now.Add(-1 * time.Hour),
		Observers: map[string]bool{"obs1": true},
	}
	key := edgeKey{"aaaa", "bbbb"}
	graph.edges[key] = edge
	graph.byNode["aaaa"] = append(graph.byNode["aaaa"], edge)
	graph.byNode["bbbb"] = append(graph.byNode["bbbb"], edge)
	graph.mu.Unlock()

	entry := beamEntry{pubkeys: []string{"aaaa"}, names: []string{"NodeA"}}
	cand := nodeInfo{PublicKey: "bbbb", Name: "NodeB", Role: "repeater"}

	score := store.scoreHop(entry, cand, 2, graph, nil, now, 1)

	// With edge present, edgeScore > 0. With 2 candidates, selectivity = 0.5.
	// Anti-tautology: if we zero out edge weight constant, score would change.
	if score <= 0.05 {
		t.Errorf("expected score > floor, got %f", score)
	}

	// No edge: score should be lower.
	candNoEdge := nodeInfo{PublicKey: "cccc", Name: "NodeC", Role: "repeater"}
	scoreNoEdge := store.scoreHop(entry, candNoEdge, 2, graph, nil, now, 1)
	if scoreNoEdge >= score {
		t.Errorf("expected no-edge score (%f) < edge score (%f)", scoreNoEdge, score)
	}
}

func TestScoreHop_FirstHop(t *testing.T) {
	store := &PacketStore{}
	graph := NewNeighborGraph()
	now := time.Now()

	entry := beamEntry{pubkeys: nil, names: nil}
	cand := nodeInfo{PublicKey: "aaaa", Name: "NodeA", Role: "repeater"}

	score := store.scoreHop(entry, cand, 3, graph, nil, now, 0)
	// First hop: edgeScore=1.0, geoScore=1.0, recencyScore=1.0, selectivity=1/3
	// = 0.35*1 + 0.20*1 + 0.15*1 + 0.30*(1/3) = 0.35+0.20+0.15+0.10 = 0.80
	expected := 0.35 + 0.20 + 0.15 + 0.30/3.0
	if score < expected-0.01 || score > expected+0.01 {
		t.Errorf("expected ~%f, got %f", expected, score)
	}
}

func TestScoreHop_GeoPlausibility(t *testing.T) {
	store := &PacketStore{}
	store.nodeCache = []nodeInfo{
		{PublicKey: "aaaa", Name: "A", Role: "repeater", Lat: 37.0, Lon: -122.0, HasGPS: true},
		{PublicKey: "bbbb", Name: "B", Role: "repeater", Lat: 37.01, Lon: -122.01, HasGPS: true}, // ~1.4km
		{PublicKey: "cccc", Name: "C", Role: "repeater", Lat: 40.0, Lon: -120.0, HasGPS: true},   // ~400km
	}
	store.nodePM = buildPrefixMap(store.nodeCache)
	store.nodeCacheTime = time.Now()

	graph := NewNeighborGraph()
	now := time.Now()

	nodeByPK := map[string]*nodeInfo{
		"aaaa": &store.nodeCache[0],
		"bbbb": &store.nodeCache[1],
		"cccc": &store.nodeCache[2],
	}

	entry := beamEntry{pubkeys: []string{"aaaa"}, names: []string{"A"}}

	// Close node should score higher than far node (geo component).
	scoreClose := store.scoreHop(entry, store.nodeCache[1], 2, graph, nodeByPK, now, 1)
	scoreFar := store.scoreHop(entry, store.nodeCache[2], 2, graph, nodeByPK, now, 1)
	if scoreFar >= scoreClose {
		t.Errorf("expected far node score (%f) < close node score (%f)", scoreFar, scoreClose)
	}
}

func TestBeamSearch_WidthCap(t *testing.T) {
	store := &PacketStore{}
	graph := NewNeighborGraph()
	graph.builtAt = time.Now()
	now := time.Now()

	// Create 25 nodes that all match prefix "aa".
	var nodes []nodeInfo
	for i := 0; i < 25; i++ {
		// Each node has pubkey starting with "aa" followed by unique hex.
		pk := "aa" + strings.Repeat("0", 4) + fmt.Sprintf("%02x", i)
		nodes = append(nodes, nodeInfo{PublicKey: pk, Name: pk, Role: "repeater"})
	}
	pm := buildPrefixMap(nodes)

	// Two hops of "aa" — should produce 25*25=625 combos, pruned to 20.
	beam := store.beamSearch([]string{"aa", "aa"}, pm, graph, nil, now)
	if len(beam) > beamWidth {
		t.Errorf("beam exceeded width: got %d, want <= %d", len(beam), beamWidth)
	}
	// Anti-tautology: without beam pruning, we'd have up to 25*min(25,beamWidth)=500 entries.
	// The test verifies pruning is effective.
}

func TestBeamSearch_Speculative(t *testing.T) {
	store := &PacketStore{}
	graph := NewNeighborGraph()
	graph.builtAt = time.Now()
	now := time.Now()

	// Create nodes with no edges and multiple candidates — should result in low scores (speculative).
	nodes := []nodeInfo{
		{PublicKey: "aabb", Name: "N1", Role: "repeater"},
		{PublicKey: "aabb22", Name: "N1b", Role: "repeater"},
		{PublicKey: "ccdd", Name: "N2", Role: "repeater"},
		{PublicKey: "ccdd22", Name: "N2b", Role: "repeater"},
		{PublicKey: "ccdd33", Name: "N2c", Role: "repeater"},
	}
	pm := buildPrefixMap(nodes)

	beam := store.beamSearch([]string{"aa", "cc"}, pm, graph, nil, now)
	if len(beam) == 0 {
		t.Fatal("expected at least one result")
	}

	// Score should be < 0.7 since there's no edge and multiple candidates (speculative).
	nHops := len(beam[0].pubkeys)
	score := 1.0
	if nHops > 0 {
		product := beam[0].score
		score = pow(product, 1.0/float64(nHops))
	}
	if score >= speculativeThreshold {
		t.Errorf("expected speculative score (< %f), got %f", speculativeThreshold, score)
	}
}

func TestHandlePathInspect_EmptyPrefixes(t *testing.T) {
	srv := newTestServerForInspect(t)
	body := `{"prefixes":[]}`
	rr := doInspectRequest(srv, body)
	if rr.Code != http.StatusBadRequest {
		t.Errorf("expected 400, got %d", rr.Code)
	}
}

func TestHandlePathInspect_OddLengthPrefix(t *testing.T) {
	srv := newTestServerForInspect(t)
	body := `{"prefixes":["abc"]}`
	rr := doInspectRequest(srv, body)
	if rr.Code != http.StatusBadRequest {
		t.Errorf("expected 400 for odd-length prefix, got %d", rr.Code)
	}
}

func TestHandlePathInspect_MixedLengths(t *testing.T) {
	srv := newTestServerForInspect(t)
	body := `{"prefixes":["aa","bbcc"]}`
	rr := doInspectRequest(srv, body)
	if rr.Code != http.StatusBadRequest {
		t.Errorf("expected 400 for mixed lengths, got %d", rr.Code)
	}
}

func TestHandlePathInspect_TooLongPrefix(t *testing.T) {
	srv := newTestServerForInspect(t)
	body := `{"prefixes":["aabbccdd"]}`
	rr := doInspectRequest(srv, body)
	if rr.Code != http.StatusBadRequest {
		t.Errorf("expected 400 for >3-byte prefix, got %d", rr.Code)
	}
}

func TestHandlePathInspect_TooManyPrefixes(t *testing.T) {
	srv := newTestServerForInspect(t)
	prefixes := make([]string, 65)
	for i := range prefixes {
		prefixes[i] = "aa"
	}
	b, _ := json.Marshal(map[string]interface{}{"prefixes": prefixes})
	rr := doInspectRequest(srv, string(b))
	if rr.Code != http.StatusBadRequest {
		t.Errorf("expected 400 for >64 prefixes, got %d", rr.Code)
	}
}

func TestHandlePathInspect_ValidRequest(t *testing.T) {
	srv := newTestServerForInspect(t)

	// Seed nodes in the store — multiple candidates per prefix to lower selectivity.
	srv.store.nodeCache = []nodeInfo{
		{PublicKey: "aabb1234", Name: "NodeA", Role: "repeater", Lat: 37.0, Lon: -122.0, HasGPS: true},
		{PublicKey: "aabb5678", Name: "NodeA2", Role: "repeater"},
		{PublicKey: "ccdd5678", Name: "NodeB", Role: "repeater", Lat: 37.01, Lon: -122.01, HasGPS: true},
		{PublicKey: "ccdd9999", Name: "NodeB2", Role: "repeater"},
		{PublicKey: "ccdd1111", Name: "NodeB3", Role: "repeater"},
	}
	srv.store.nodePM = buildPrefixMap(srv.store.nodeCache)
	srv.store.nodeCacheTime = time.Now()
	srv.store.graph = NewNeighborGraph()
	srv.store.graph.builtAt = time.Now()

	body := `{"prefixes":["aa","cc"]}`
	rr := doInspectRequest(srv, body)
	if rr.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d: %s", rr.Code, rr.Body.String())
	}

	var resp pathInspectResponse
	if err := json.Unmarshal(rr.Body.Bytes(), &resp); err != nil {
		t.Fatalf("invalid JSON response: %v", err)
	}
	if len(resp.Candidates) == 0 {
		t.Error("expected at least one candidate")
	}
	if resp.Candidates[0].Speculative != true {
		// No edge between nodes, so score should be < 0.7.
		t.Error("expected speculative=true for no-edge path")
	}
}

// ─── Helpers ──────────────────────────────────────────────────────────────────

func newTestServerForInspect(t *testing.T) *Server {
	t.Helper()
	store := &PacketStore{
		inspectCache: make(map[string]*inspectCachedResult),
	}
	store.graph = NewNeighborGraph()
	store.graph.builtAt = time.Now()
	return &Server{store: store}
}

func doInspectRequest(srv *Server, body string) *httptest.ResponseRecorder {
	req := httptest.NewRequest("POST", "/api/paths/inspect", bytes.NewBufferString(body))
	req.Header.Set("Content-Type", "application/json")
	rr := httptest.NewRecorder()
	srv.handlePathInspect(rr, req)
	return rr
}

func pow(base, exp float64) float64 {
	return math.Pow(base, exp)
}

// BenchmarkBeamSearch — performance proof for spec §2.5 (<100ms p99 for ≤64 hops).
// Anti-tautology: removing beam pruning makes this ~625x slower; timing assertion catches it.
func BenchmarkBeamSearch(b *testing.B) {
	// Setup: 100 nodes, 10-hop prefix input, realistic neighbor graph.
	store := &PacketStore{}
	pm := &prefixMap{m: make(map[string][]nodeInfo)}
	graph := NewNeighborGraph()
	nodes := make([]nodeInfo, 100)

	now := time.Now()
	for i := 0; i < 100; i++ {
		pk := fmt.Sprintf("%064x", i)
		prefix := fmt.Sprintf("%02x", i%256)
		node := nodeInfo{PublicKey: pk, Name: fmt.Sprintf("Node%d", i), Role: "repeater", Lat: 37.0 + float64(i)*0.01, Lon: -122.0 + float64(i)*0.01}
		nodes[i] = node
		pm.m[prefix] = append(pm.m[prefix], node)
		// Add neighbor edges to create a connected graph.
		if i > 0 {
			prevPK := fmt.Sprintf("%064x", i-1)
			key := makeEdgeKey(prevPK, pk)
			edge := &NeighborEdge{NodeA: prevPK, NodeB: pk, LastSeen: now, Count: 10}
			graph.edges[key] = edge
			graph.byNode[prevPK] = append(graph.byNode[prevPK], edge)
			graph.byNode[pk] = append(graph.byNode[pk], edge)
		}
	}

	// 10-hop input using prefixes that map to multiple candidates.
	prefixes := make([]string, 10)
	for i := 0; i < 10; i++ {
		prefixes[i] = fmt.Sprintf("%02x", (i*3)%256)
	}

	nodeByPK := make(map[string]*nodeInfo)
	for idx := range nodes {
		nodeByPK[nodes[idx].PublicKey] = &nodes[idx]
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		store.beamSearch(prefixes, pm, graph, nodeByPK, now)
	}
}
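The geo tests above lean on haversineKm. The standard great-circle formula, with the signature the calls imply (the project's own implementation may differ in rounding or radius constant):

// haversineKm returns the great-circle distance in kilometers between two
// lat/lon points, using the mean Earth radius of 6371 km.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371.0
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}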
@@ -1,78 +0,0 @@
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/gorilla/mux"
)

// TestHandleNodePaths_PrefixCollisionExclusion verifies that paths through a node
// sharing a 2-char prefix with another node are not returned as false positives
// when they have no resolved_path data (issue #929).
//
// Setup:
//   - nodeA (target): pubkey starts with "7a", no GPS
//   - nodeB (other): pubkey starts with "7a", has GPS → "7a" resolves to nodeB
//   - tx1: path ["7a"], resolved_path NULL → false positive candidate, must be excluded
//   - tx2: path ["7a"], resolved_path contains nodeA pubkey → SQL-confirmed, must be included
func TestHandleNodePaths_PrefixCollisionExclusion(t *testing.T) {
	db := setupTestDB(t)
	recent := time.Now().Add(-1 * time.Hour).Format(time.RFC3339)
	recentEpoch := time.Now().Add(-1 * time.Hour).Unix()

	nodeAPK := "7acb1111aaaabbbb"
	nodeBPK := "7aff2222ccccdddd" // same "7a" prefix, has GPS so resolveHop("7a") picks B

	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES (?, 'NodeA', 'repeater', 0, 0, ?, '2026-01-01', 1)`, nodeAPK, recent)
	db.conn.Exec(`INSERT INTO nodes (public_key, name, role, lat, lon, last_seen, first_seen, advert_count)
		VALUES (?, 'NodeB', 'repeater', 37.5, -122.0, ?, '2026-01-01', 1)`, nodeBPK, recent)

	// tx1: no resolved_path — should be excluded by hop-level check
	db.conn.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (10, 'AA', 'hash_fp', ?)`, recent)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, path_json, timestamp, resolved_path)
		VALUES (10, NULL, '["7a"]', ?, NULL)`, recentEpoch)

	// tx2: resolved_path confirms nodeA — must be included
	db.conn.Exec(`INSERT INTO transmissions (id, raw_hex, hash, first_seen) VALUES (11, 'BB', 'hash_tp', ?)`, recent)
	db.conn.Exec(`INSERT INTO observations (transmission_id, observer_idx, path_json, timestamp, resolved_path)
		VALUES (11, NULL, '["7a"]', ?, ?)`, recentEpoch, `["`+nodeAPK+`"]`)

	cfg := &Config{Port: 3000}
	hub := NewHub()
	srv := NewServer(db, cfg, hub)
	store := NewPacketStore(db, nil)
	if err := store.Load(); err != nil {
		t.Fatalf("store.Load: %v", err)
	}
	srv.store = store
	router := mux.NewRouter()
	srv.RegisterRoutes(router)

	req := httptest.NewRequest("GET", "/api/nodes/"+nodeAPK+"/paths", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
	}

	var resp NodePathsResponse
	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
		t.Fatalf("unmarshal: %v", err)
	}

	// Only the SQL-confirmed path (tx2) should be present; tx1 (false positive) must be excluded.
	// tx1 and tx2 share the same raw path ["7a"] so they collapse into 1 unique path group.
	// If tx1 were included, TotalTransmissions would be 2.
	if resp.TotalPaths != 1 {
		t.Errorf("expected 1 path group, got %d", resp.TotalPaths)
	}
	if resp.TotalTransmissions != 1 {
		t.Errorf("expected 1 transmission (false positive tx1 excluded), got %d", resp.TotalTransmissions)
	}
}
@@ -1,41 +0,0 @@
package main

import (
	"testing"
)

// Issue #770: the region filter dropdown's "All" option was being sent to the
// backend as ?region=All. The backend then tried to match observers with IATA
// code "ALL", which never exists, producing an empty channel/packet list.
//
// "All" / "ALL" / "all" / "" must all be treated as "no region filter".
func TestNormalizeRegionCodes_AllIsNoFilter(t *testing.T) {
	cases := []struct {
		name string
		in   string
	}{
		{"empty", ""},
		{"literal All (frontend dropdown label)", "All"},
		{"upper ALL", "ALL"},
		{"lower all", "all"},
		{"All with whitespace", " All "},
		{"All in csv with empty siblings", "All,"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := normalizeRegionCodes(tc.in)
			if got != nil {
				t.Errorf("normalizeRegionCodes(%q) = %v, want nil (no filter)", tc.in, got)
			}
		})
	}
}

// Real region codes must still pass through unchanged (case-folded to upper).
// This locks in that the "All" handling does not regress legitimate filters.
func TestNormalizeRegionCodes_RealCodesPreserved(t *testing.T) {
	got := normalizeRegionCodes("sjc,PDX")
	if len(got) != 2 || got[0] != "SJC" || got[1] != "PDX" {
		t.Errorf("normalizeRegionCodes(\"sjc,PDX\") = %v, want [SJC PDX]", got)
	}
}
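The tests above pin down the contract completely. A minimal implementation satisfying them might look like this (a sketch, not the shipped function — in particular, what happens to a mixed list like "All,SJC" is an assumption here):

	func normalizeRegionCodes(param string) []string {
		var codes []string
		for _, part := range strings.Split(param, ",") {
			c := strings.ToUpper(strings.TrimSpace(part))
			if c == "" || c == "ALL" {
				continue // "All" in any case, and empty items, mean "no filter"
			}
			codes = append(codes, c)
		}
		return codes // stays nil when nothing survives → treated as no region filter
	}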
@@ -1,133 +0,0 @@
package main

import (
	"math"
	"net/http"
	"sort"
	"strings"
)

// RoleStats summarises one role's population and clock-skew posture.
type RoleStats struct {
	Role             string  `json:"role"`
	NodeCount        int     `json:"nodeCount"`
	WithSkew         int     `json:"withSkew"`
	MeanAbsSkewSec   float64 `json:"meanAbsSkewSec"`
	MedianAbsSkewSec float64 `json:"medianAbsSkewSec"`
	OkCount          int     `json:"okCount"`
	WarningCount     int     `json:"warningCount"`
	CriticalCount    int     `json:"criticalCount"`
	AbsurdCount      int     `json:"absurdCount"`
	NoClockCount     int     `json:"noClockCount"`
}

// RoleAnalyticsResponse is the payload returned by /api/analytics/roles.
type RoleAnalyticsResponse struct {
	TotalNodes int         `json:"totalNodes"`
	Roles      []RoleStats `json:"roles"`
}

// normalizeRole canonicalises a role string so empty/unknown roles bucket
// together and case differences don't fragment the distribution.
func normalizeRole(r string) string {
	r = strings.ToLower(strings.TrimSpace(r))
	if r == "" {
		return "unknown"
	}
	return r
}

// computeRoleAnalytics groups nodes by role and aggregates clock-skew per
// role. Pure function: takes the node roster and the per-pubkey skew map and
// returns the response — no store / lock dependencies, easy to unit test.
//
// `nodesByPubkey` lists every known node (pubkey → role). `skewByPubkey`
// is the subset of pubkeys that have clock-skew data with their severity and
// most-recent corrected skew (in seconds, signed — we take |x| for averages).
func computeRoleAnalytics(nodesByPubkey map[string]string, skewByPubkey map[string]*NodeClockSkew) RoleAnalyticsResponse {
	type bucket struct {
		stats    RoleStats
		absSkews []float64
	}
	buckets := make(map[string]*bucket)
	for pk, rawRole := range nodesByPubkey {
		role := normalizeRole(rawRole)
		b, ok := buckets[role]
		if !ok {
			b = &bucket{stats: RoleStats{Role: role}}
			buckets[role] = b
		}
		b.stats.NodeCount++
		cs, has := skewByPubkey[pk]
		if !has || cs == nil {
			continue
		}
		b.stats.WithSkew++
		abs := math.Abs(cs.RecentMedianSkewSec)
		if abs == 0 {
			abs = math.Abs(cs.LastSkewSec)
		}
		b.absSkews = append(b.absSkews, abs)
		switch cs.Severity {
		case SkewOK:
			b.stats.OkCount++
		case SkewWarning:
			b.stats.WarningCount++
		case SkewCritical:
			b.stats.CriticalCount++
		case SkewAbsurd:
			b.stats.AbsurdCount++
		case SkewNoClock:
			b.stats.NoClockCount++
		}
	}
	resp := RoleAnalyticsResponse{Roles: make([]RoleStats, 0, len(buckets))}
	for _, b := range buckets {
		if n := len(b.absSkews); n > 0 {
			sum := 0.0
			for _, v := range b.absSkews {
				sum += v
			}
			b.stats.MeanAbsSkewSec = round(sum/float64(n), 2)
			sorted := make([]float64, n)
			copy(sorted, b.absSkews)
			sort.Float64s(sorted)
			if n%2 == 1 {
				b.stats.MedianAbsSkewSec = round(sorted[n/2], 2)
			} else {
				b.stats.MedianAbsSkewSec = round((sorted[n/2-1]+sorted[n/2])/2, 2)
			}
		}
		resp.TotalNodes += b.stats.NodeCount
		resp.Roles = append(resp.Roles, b.stats)
	}
	// Sort: largest population first, then role name for stable output.
	sort.Slice(resp.Roles, func(i, j int) bool {
		if resp.Roles[i].NodeCount != resp.Roles[j].NodeCount {
			return resp.Roles[i].NodeCount > resp.Roles[j].NodeCount
		}
		return resp.Roles[i].Role < resp.Roles[j].Role
	})
	return resp
}

// handleAnalyticsRoles serves /api/analytics/roles.
func (s *Server) handleAnalyticsRoles(w http.ResponseWriter, r *http.Request) {
	if s.store == nil {
		writeJSON(w, RoleAnalyticsResponse{Roles: []RoleStats{}})
		return
	}
	nodes, _ := s.store.getCachedNodesAndPM()
	roles := make(map[string]string, len(nodes))
	for _, n := range nodes {
		roles[n.PublicKey] = n.Role
	}
	skewMap := make(map[string]*NodeClockSkew)
	for _, cs := range s.store.GetFleetClockSkew() {
		if cs == nil {
			continue
		}
		skewMap[cs.Pubkey] = cs
	}
	writeJSON(w, computeRoleAnalytics(roles, skewMap))
}
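For orientation, the payload this handler writes follows the struct tags above; an illustrative response (numbers borrowed from the skew-aggregation test below, not from a live mesh):

	{
	  "totalNodes": 3,
	  "roles": [
	    {"role": "repeater", "nodeCount": 3, "withSkew": 3,
	     "meanAbsSkewSec": 2536.67, "medianAbsSkewSec": 400,
	     "okCount": 1, "warningCount": 1, "criticalCount": 1,
	     "absurdCount": 0, "noClockCount": 0}
	  ]
	}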
@@ -1,77 +0,0 @@
package main

import (
	"testing"
)

// TestComputeRoleAnalytics_Distribution verifies that computeRoleAnalytics
// groups nodes by role, normalises empty/case-different roles, and sorts the
// output largest-population first. Asserts on the public RoleAnalyticsResponse
// shape so the bar is "behaviour", not "compiles".
func TestComputeRoleAnalytics_Distribution(t *testing.T) {
	nodes := map[string]string{
		"pk_a": "Repeater",
		"pk_b": "repeater",
		"pk_c": "companion",
		"pk_d": "",
		"pk_e": "ROOM_SERVER",
	}
	got := computeRoleAnalytics(nodes, nil)

	if got.TotalNodes != 5 {
		t.Fatalf("TotalNodes = %d, want 5", got.TotalNodes)
	}
	if len(got.Roles) != 4 {
		t.Fatalf("len(Roles) = %d, want 4 (repeater, companion, room_server, unknown), got %+v", len(got.Roles), got.Roles)
	}
	if got.Roles[0].Role != "repeater" || got.Roles[0].NodeCount != 2 {
		t.Errorf("Roles[0] = %+v, want {repeater,2}", got.Roles[0])
	}
	// Empty roles should bucket as "unknown".
	foundUnknown := false
	for _, r := range got.Roles {
		if r.Role == "unknown" {
			foundUnknown = true
			if r.NodeCount != 1 {
				t.Errorf("unknown bucket NodeCount = %d, want 1", r.NodeCount)
			}
		}
	}
	if !foundUnknown {
		t.Errorf("no 'unknown' bucket for empty roles in %+v", got.Roles)
	}
}

// TestComputeRoleAnalytics_SkewAggregation verifies per-role clock-skew
// aggregation: counts by severity, mean and median absolute skew.
func TestComputeRoleAnalytics_SkewAggregation(t *testing.T) {
	nodes := map[string]string{
		"pk_1": "repeater",
		"pk_2": "repeater",
		"pk_3": "repeater",
	}
	skews := map[string]*NodeClockSkew{
		"pk_1": {Pubkey: "pk_1", RecentMedianSkewSec: 10, Severity: SkewOK},
		"pk_2": {Pubkey: "pk_2", RecentMedianSkewSec: -400, Severity: SkewWarning},
		"pk_3": {Pubkey: "pk_3", RecentMedianSkewSec: 7200, Severity: SkewCritical},
	}
	got := computeRoleAnalytics(nodes, skews)
	if len(got.Roles) != 1 {
		t.Fatalf("len(Roles) = %d, want 1; got %+v", len(got.Roles), got.Roles)
	}
	r := got.Roles[0]
	if r.WithSkew != 3 {
		t.Errorf("WithSkew = %d, want 3", r.WithSkew)
	}
	if r.OkCount != 1 || r.WarningCount != 1 || r.CriticalCount != 1 {
		t.Errorf("severity counts = ok %d, warn %d, crit %d; want 1/1/1", r.OkCount, r.WarningCount, r.CriticalCount)
	}
	// mean(|10|, |−400|, |7200|) = 7610/3 ≈ 2536.67
	if r.MeanAbsSkewSec < 2536 || r.MeanAbsSkewSec > 2537 {
		t.Errorf("MeanAbsSkewSec = %v, want ~2536.67", r.MeanAbsSkewSec)
	}
	// median(10, 400, 7200) = 400
	if r.MedianAbsSkewSec != 400 {
		t.Errorf("MedianAbsSkewSec = %v, want 400", r.MedianAbsSkewSec)
	}
}
+10 -90
@@ -104,9 +104,6 @@ func (s *Server) getMemStats() runtime.MemStats {
// RegisterRoutes sets up all HTTP routes on the given router.
func (s *Server) RegisterRoutes(r *mux.Router) {
	s.router = r
	// CORS middleware (must run before route handlers)
	r.Use(s.corsMiddleware)

	// Performance instrumentation middleware
	r.Use(s.perfMiddleware)

@@ -121,9 +118,6 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
	r.HandleFunc("/api/config/map", s.handleConfigMap).Methods("GET")
	r.HandleFunc("/api/config/geo-filter", s.handleConfigGeoFilter).Methods("GET")

	// Readiness endpoint (gated on background init completion)
	r.HandleFunc("/api/healthz", s.handleHealthz).Methods("GET")

	// System endpoints
	r.HandleFunc("/api/health", s.handleHealth).Methods("GET")
	r.HandleFunc("/api/stats", s.handleStats).Methods("GET")
@@ -132,7 +126,6 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
	r.Handle("/api/admin/prune", s.requireAPIKey(http.HandlerFunc(s.handleAdminPrune))).Methods("POST")
	r.Handle("/api/debug/affinity", s.requireAPIKey(http.HandlerFunc(s.handleDebugAffinity))).Methods("GET")
	r.Handle("/api/dropped-packets", s.requireAPIKey(http.HandlerFunc(s.handleDroppedPackets))).Methods("GET")
	r.Handle("/api/backup", s.requireAPIKey(http.HandlerFunc(s.handleBackup))).Methods("GET")

	// Packet endpoints
	r.HandleFunc("/api/packets/observations", s.handleBatchObservations).Methods("POST")
@@ -159,7 +152,6 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
	r.HandleFunc("/api/nodes", s.handleNodes).Methods("GET")

	// Analytics endpoints
	r.HandleFunc("/api/analytics/roles", s.handleAnalyticsRoles).Methods("GET")
	r.HandleFunc("/api/analytics/rf", s.handleAnalyticsRF).Methods("GET")
	r.HandleFunc("/api/analytics/topology", s.handleAnalyticsTopology).Methods("GET")
	r.HandleFunc("/api/analytics/channels", s.handleAnalyticsChannels).Methods("GET")
@@ -181,7 +173,6 @@ func (s *Server) RegisterRoutes(r *mux.Router) {
	r.HandleFunc("/api/observers/{id}", s.handleObserverDetail).Methods("GET")
	r.HandleFunc("/api/observers", s.handleObservers).Methods("GET")
	r.HandleFunc("/api/traces/{hash}", s.handleTraces).Methods("GET")
	r.HandleFunc("/api/paths/inspect", s.handlePathInspect).Methods("POST")
	r.HandleFunc("/api/iata-coords", s.handleIATACoords).Methods("GET")
	r.HandleFunc("/api/audio-lab/buckets", s.handleAudioLabBuckets).Methods("GET")

@@ -1096,11 +1087,9 @@ func (s *Server) handleNodes(w http.ResponseWriter, r *http.Request) {
	}
	if s.store != nil {
		hashInfo := s.store.GetNodeHashSizeInfo()
		mbCap := s.store.GetMultiByteCapMap()
		for _, node := range nodes {
			if pk, ok := node["public_key"].(string); ok {
				EnrichNodeWithHashSize(node, hashInfo[pk])
				EnrichNodeWithMultiByte(node, mbCap[pk])
			}
		}
	}
@@ -1159,44 +1148,14 @@ func (s *Server) handleNodeDetail(w http.ResponseWriter, r *http.Request) {
		return
	}
	node, err := s.db.GetNodeByPubkey(pubkey)
	if err != nil {
		writeError(w, 500, err.Error())
		return
	}
	// Issue #772: short-URL fallback. If exact pubkey lookup misses and the
	// path looks like a hex prefix (>=8 chars, <64), try prefix resolution.
	if node == nil && len(pubkey) >= 8 && len(pubkey) < 64 {
		resolved, ambiguous, perr := s.db.GetNodeByPrefix(pubkey)
		if perr != nil {
			writeError(w, 500, perr.Error())
			return
		}
		if ambiguous {
			writeError(w, http.StatusConflict, "Ambiguous prefix: multiple nodes match. Use a longer prefix.")
			return
		}
		if resolved != nil {
			if pk, _ := resolved["public_key"].(string); pk != "" && s.cfg.IsBlacklisted(pk) {
				writeError(w, 404, "Not found")
				return
			}
			node = resolved
		}
	}
	if node == nil {
	if err != nil || node == nil {
		writeError(w, 404, "Not found")
		return
	}
	// From here on use the canonical pubkey for downstream lookups.
	if pk, _ := node["public_key"].(string); pk != "" {
		pubkey = pk
	}

	if s.store != nil {
		hashInfo := s.store.GetNodeHashSizeInfo()
		EnrichNodeWithHashSize(node, hashInfo[pubkey])
		mbCap := s.store.GetMultiByteCapMap()
		EnrichNodeWithMultiByte(node, mbCap[pubkey])
	}

	name := ""
@@ -1298,31 +1257,25 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
	_, pm := s.store.getCachedNodesAndPM()

	// Collect candidate transmissions from the index, deduplicating by tx ID.
	// confirmedByFullKey tracks TXs found via the full-pubkey index key — these are
	// already resolved_path-confirmed and bypass the hop-level check below.
	confirmedByFullKey := make(map[int]bool)
	seen := make(map[int]bool)
	var candidates []*StoreTx
	addCandidates := func(key string, confirmed bool) {
	addCandidates := func(key string) {
		for _, tx := range s.store.byPathHop[key] {
			if !seen[tx.ID] {
				seen[tx.ID] = true
				if confirmed {
					confirmedByFullKey[tx.ID] = true
				}
				candidates = append(candidates, tx)
			}
		}
	}
	addCandidates(lowerPK, true)  // full pubkey match (from resolved_path) → confirmed
	addCandidates(prefix1, false) // 2-char raw hop match
	addCandidates(prefix2, false) // 4-char raw hop match
	addCandidates(lowerPK) // full pubkey match (from resolved_path)
	addCandidates(prefix1) // 2-char raw hop match
	addCandidates(prefix2) // 4-char raw hop match
	// Also check any raw hops that start with prefix2 (longer prefixes).
	// Raw hops are typically 2 chars, so iterate only keys with HasPrefix
	// on the small set of index keys rather than all packets.
	for key := range s.store.byPathHop {
		if len(key) > 4 && len(key) < len(lowerPK) && strings.HasPrefix(key, prefix2) {
			addCandidates(key, false)
			addCandidates(key)
		}
	}

@@ -1359,7 +1312,6 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
	s.store.mu.RUnlock()

	// Now run SQL checks outside the lock for candidates that need confirmation.
	confirmedBySQL := make(map[int]bool)
	filtered := candidates[:0]
	for _, cc := range checks {
		if cc.inIndex {
@@ -1367,7 +1319,6 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
		} else if cc.hasReverse {
			if s.store.confirmResolvedPathContains(cc.tx.ID, lowerPK) {
				filtered = append(filtered, cc.tx)
				confirmedBySQL[cc.tx.ID] = true
			}
		}
		// else: not in index → exclude
@@ -1395,14 +1346,10 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
		return r
	}
	for _, tx := range candidates {
		totalTransmissions++
		hops := txGetParsedPath(tx)
		resolvedHops := make([]PathHopResp, len(hops))
		sigParts := make([]string, len(hops))
		// For candidates not confirmed via full-pubkey index or SQL, verify that at
		// least one hop actually resolves to the target. This catches prefix collisions
		// (e.g. two nodes sharing a "7a" 1-byte prefix) that slipped through the
		// conservative resolved_path fallback.
		containsTarget := confirmedByFullKey[tx.ID] || confirmedBySQL[tx.ID]
		for i, hop := range hops {
			resolved := resolveHop(hop)
			entry := PathHopResp{Prefix: hop, Name: hop}
@@ -1414,22 +1361,11 @@ func (s *Server) handleNodePaths(w http.ResponseWriter, r *http.Request) {
				entry.Lon = resolved.Lon
			}
			sigParts[i] = resolved.PublicKey
			if strings.ToLower(resolved.PublicKey) == lowerPK {
				containsTarget = true
			}
		} else {
			sigParts[i] = hop
			// Unresolvable hop: keep conservative if prefix could be the target.
			if strings.HasPrefix(lowerPK, strings.ToLower(hop)) {
				containsTarget = true
			}
		}
		resolvedHops[i] = entry
		}
		if !containsTarget {
			continue
		}
		totalTransmissions++

		sig := strings.Join(sigParts, "→")
		agg := pathGroups[sig]
@@ -1555,9 +1491,8 @@ func (s *Server) handleFleetClockSkew(w http.ResponseWriter, r *http.Request) {

func (s *Server) handleAnalyticsRF(w http.ResponseWriter, r *http.Request) {
	region := r.URL.Query().Get("region")
	window := ParseTimeWindow(r)
	if s.store != nil {
		writeJSON(w, s.store.GetAnalyticsRFWithWindow(region, window))
		writeJSON(w, s.store.GetAnalyticsRF(region))
		return
	}
	writeJSON(w, RFAnalyticsResponse{
@@ -1576,9 +1511,8 @@ func (s *Server) handleAnalyticsRF(w http.ResponseWriter, r *http.Request) {

func (s *Server) handleAnalyticsTopology(w http.ResponseWriter, r *http.Request) {
	region := r.URL.Query().Get("region")
	window := ParseTimeWindow(r)
	if s.store != nil {
		data := s.store.GetAnalyticsTopologyWithWindow(region, window)
		data := s.store.GetAnalyticsTopology(region)
		if s.cfg != nil && len(s.cfg.NodeBlacklist) > 0 {
			data = s.filterBlacklistedFromTopology(data)
		}
@@ -1600,8 +1534,7 @@ func (s *Server) handleAnalyticsTopology(w http.ResponseWriter, r *http.Request)
func (s *Server) handleAnalyticsChannels(w http.ResponseWriter, r *http.Request) {
	if s.store != nil {
		region := r.URL.Query().Get("region")
		window := ParseTimeWindow(r)
		writeJSON(w, s.store.GetAnalyticsChannelsWithWindow(region, window))
		writeJSON(w, s.store.GetAnalyticsChannels(region))
		return
	}
	channels, _ := s.db.GetChannels()
@@ -1995,10 +1928,6 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {

	result := make([]ObserverResp, 0, len(observers))
	for _, o := range observers {
		// Defense in depth: skip observers that are in the blacklist
		if s.cfg != nil && s.cfg.IsObserverBlacklisted(o.ID) {
			continue
		}
		plh := 0
		if c, ok := pktCounts[o.ID]; ok {
			plh = c
@@ -2018,7 +1947,6 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {
			ClientVersion: o.ClientVersion, Radio: o.Radio,
			BatteryMv: o.BatteryMv, UptimeSecs: o.UptimeSecs,
			NoiseFloor: o.NoiseFloor,
			LastPacketAt: o.LastPacketAt,
			PacketsLastHour: plh,
			Lat: lat, Lon: lon, NodeRole: nodeRole,
		})
@@ -2031,13 +1959,6 @@ func (s *Server) handleObservers(w http.ResponseWriter, r *http.Request) {

func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
	id := mux.Vars(r)["id"]

	// Defense in depth: reject blacklisted observer
	if s.cfg != nil && s.cfg.IsObserverBlacklisted(id) {
		writeError(w, 404, "Observer not found")
		return
	}

	obs, err := s.db.GetObserverByID(id)
	if err != nil || obs == nil {
		writeError(w, 404, "Observer not found")
@@ -2060,7 +1981,6 @@ func (s *Server) handleObserverDetail(w http.ResponseWriter, r *http.Request) {
		ClientVersion: obs.ClientVersion, Radio: obs.Radio,
		BatteryMv: obs.BatteryMv, UptimeSecs: obs.UptimeSecs,
		NoiseFloor: obs.NoiseFloor,
		LastPacketAt: obs.LastPacketAt,
		PacketsLastHour: plh,
	})
}

@@ -1,59 +0,0 @@
package main

import (
	"database/sql"
	"fmt"
	"sync"
)

// rwCache holds a process-wide cached RW connection per database path.
// Instead of opening and closing a new RW connection on every call to openRW,
// we cache a single *sql.DB (which internally manages one connection due to
// SetMaxOpenConns(1)). This eliminates repeated open/close overhead for
// vacuum, prune, persist operations that run frequently (#921).
var rwCache = struct {
	mu    sync.Mutex
	conns map[string]*sql.DB
}{conns: make(map[string]*sql.DB)}

// cachedRW returns a cached read-write connection for the given dbPath.
// The connection is created on first call and reused thereafter.
// Callers MUST NOT call Close() on the returned *sql.DB.
func cachedRW(dbPath string) (*sql.DB, error) {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()

	if db, ok := rwCache.conns[dbPath]; ok {
		return db, nil
	}

	dsn := fmt.Sprintf("file:%s?_journal_mode=WAL", dbPath)
	db, err := sql.Open("sqlite", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1)
	if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
		db.Close()
		return nil, fmt.Errorf("set busy_timeout: %w", err)
	}
	rwCache.conns[dbPath] = db
	return db, nil
}

// closeRWCache closes all cached RW connections (for tests/shutdown).
func closeRWCache() {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()
	for k, db := range rwCache.conns {
		db.Close()
		delete(rwCache.conns, k)
	}
}

// rwCacheLen returns the number of cached connections (for testing).
func rwCacheLen() int {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()
	return len(rwCache.conns)
}
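Call-site shape, for context (vacuumDB is a hypothetical wrapper — the real callers are the vacuum/prune/persist paths the comment above mentions):

	func vacuumDB(dbPath string) error {
		db, err := cachedRW(dbPath) // shared handle; never Close() it
		if err != nil {
			return err
		}
		_, err = db.Exec("VACUUM")
		return err
	}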
@@ -1,55 +0,0 @@
package main

import (
	"os"
	"path/filepath"
	"testing"
)

func TestCachedRW_ReturnsSameHandle(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")

	// Create the DB file
	f, _ := os.Create(dbPath)
	f.Close()

	defer closeRWCache()

	db1, err := cachedRW(dbPath)
	if err != nil {
		t.Fatalf("first cachedRW: %v", err)
	}
	db2, err := cachedRW(dbPath)
	if err != nil {
		t.Fatalf("second cachedRW: %v", err)
	}
	if db1 != db2 {
		t.Fatalf("cachedRW returned different handles: %p vs %p", db1, db2)
	}
}

func TestCachedRW_100Calls_SingleConnection(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")
	f, _ := os.Create(dbPath)
	f.Close()

	defer closeRWCache()

	var first interface{}
	for i := 0; i < 100; i++ {
		db, err := cachedRW(dbPath)
		if err != nil {
			t.Fatalf("call %d: %v", i, err)
		}
		if i == 0 {
			first = db
		} else if db != first {
			t.Fatalf("call %d returned different handle", i)
		}
	}
	if rwCacheLen() != 1 {
		t.Fatalf("expected 1 cached connection, got %d", rwCacheLen())
	}
}
@@ -1,109 +0,0 @@
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

// Issue #772 — shortened URL for easier sending over the mesh.
//
// Public keys are 64 hex chars. Operators want to share node URLs over a
// mesh radio link where every byte counts. We allow truncating the pubkey
// in the URL down to a minimum 8-hex-char prefix; the server resolves the
// prefix back to the full pubkey when (and only when) it is unambiguous.

func TestResolveNodePrefix_Unique(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()
	seedTestData(t, db)

	// "aabbccdd" uniquely identifies the seeded TestRepeater (pubkey aabbccdd11223344).
	node, ambiguous, err := db.GetNodeByPrefix("aabbccdd")
	if err != nil {
		t.Fatalf("unexpected err: %v", err)
	}
	if ambiguous {
		t.Fatalf("expected unambiguous match, got ambiguous=true")
	}
	if node == nil {
		t.Fatalf("expected node, got nil")
	}
	if got, _ := node["public_key"].(string); got != "aabbccdd11223344" {
		t.Errorf("expected public_key aabbccdd11223344, got %q", got)
	}
}

func TestResolveNodePrefix_Ambiguous(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()
	seedTestData(t, db)

	// Insert a second node sharing the 8-char prefix "aabbccdd".
	if _, err := db.conn.Exec(`INSERT INTO nodes (public_key, name, role, advert_count)
		VALUES ('aabbccdd99887766', 'OtherNode', 'companion', 1)`); err != nil {
		t.Fatal(err)
	}

	node, ambiguous, err := db.GetNodeByPrefix("aabbccdd")
	if err != nil {
		t.Fatalf("unexpected err: %v", err)
	}
	if !ambiguous {
		t.Fatalf("expected ambiguous=true for shared prefix, got false (node=%v)", node)
	}
	if node != nil {
		t.Errorf("expected nil node when ambiguous, got %v", node["public_key"])
	}
}

func TestResolveNodePrefix_TooShort(t *testing.T) {
	db := setupTestDB(t)
	defer db.Close()
	seedTestData(t, db)

	// <8 hex chars must NOT resolve, even if it would be unique.
	node, _, err := db.GetNodeByPrefix("aabbccd")
	if err == nil && node != nil {
		t.Errorf("expected nil/error for 7-char prefix, got node %v", node["public_key"])
	}
}

// Route-level: GET /api/nodes/<8-char-prefix> resolves to the full node.
func TestNodeDetailRoute_PrefixResolves(t *testing.T) {
	_, router := setupTestServer(t)

	req := httptest.NewRequest("GET", "/api/nodes/aabbccdd", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200 for unique 8-char prefix, got %d body=%s", w.Code, w.Body.String())
	}
	var body NodeDetailResponse
	if err := json.Unmarshal(w.Body.Bytes(), &body); err != nil {
		t.Fatalf("unmarshal: %v", err)
	}
	pk, _ := body.Node["public_key"].(string)
	if pk != "aabbccdd11223344" {
		t.Errorf("expected resolved pubkey aabbccdd11223344, got %q", pk)
	}
}

// Route-level: GET /api/nodes/<ambiguous-prefix> returns 409 with a hint.
func TestNodeDetailRoute_PrefixAmbiguous(t *testing.T) {
	srv, router := setupTestServer(t)
	if _, err := srv.db.conn.Exec(`INSERT INTO nodes (public_key, name, role, advert_count)
		VALUES ('aabbccdd99887766', 'OtherNode', 'companion', 1)`); err != nil {
		t.Fatal(err)
	}

	req := httptest.NewRequest("GET", "/api/nodes/aabbccdd", nil)
	w := httptest.NewRecorder()
	router.ServeHTTP(w, req)

	if w.Code != http.StatusConflict {
		t.Fatalf("expected 409 for ambiguous prefix, got %d body=%s", w.Code, w.Body.String())
	}
}
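A GetNodeByPrefix consistent with these tests only needs to fetch two rows to decide uniqueness. One plausible shape (a sketch under assumptions — the shipped query and helper names may differ):

	func (db *DB) getNodeByPrefixSketch(prefix string) (map[string]interface{}, bool, error) {
		if len(prefix) < 8 {
			return nil, false, nil // too short: never resolves
		}
		rows, err := db.conn.Query(
			"SELECT public_key FROM nodes WHERE public_key LIKE ? || '%' LIMIT 2", prefix)
		if err != nil {
			return nil, false, err
		}
		defer rows.Close()
		var keys []string
		for rows.Next() {
			var pk string
			if err := rows.Scan(&pk); err != nil {
				return nil, false, err
			}
			keys = append(keys, pk)
		}
		switch len(keys) {
		case 0:
			return nil, false, nil
		case 1:
			node, err := db.GetNodeByPubkey(keys[0])
			return node, false, err
		default:
			return nil, true, nil // ≥2 matches → ambiguous → handler replies 409
		}
	}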
+75 -525
@@ -1,7 +1,6 @@
package main

import (
	"crypto/sha256"
	"database/sql"
	"encoding/json"
	"fmt"
@@ -189,10 +188,6 @@ type PacketStore struct {
	hashSizeInfoCache map[string]*hashSizeNodeInfo
	hashSizeInfoAt    time.Time

	// Cached multi-byte capability map (pubkey → entry), recomputed every 15s.
	multiByteCapCache map[string]*MultiByteCapEntry
	multiByteCapAt    time.Time

	// Precomputed distinct advert pubkey count (refcounted for eviction correctness).
	// Updated incrementally during Load/Ingest/Evict — avoids JSON parsing in GetPerfStoreStats.
	advertPubkeys map[string]int // pubkey → number of advert packets referencing it
@@ -214,10 +209,6 @@ type PacketStore struct {
	// Persisted neighbor graph for hop resolution at ingest time.
	graph *NeighborGraph

	// Path inspector score cache (issue #944).
	inspectMu    sync.RWMutex
	inspectCache map[string]*inspectCachedResult

	// Clock skew detection engine.
	clockSkew *ClockSkewEngine

@@ -473,19 +464,10 @@ func (s *PacketStore) Load() error {
		obsRawHexCol = ", o.raw_hex"
	}

	// Build WHERE conditions: retention cutoff (mirrors Evict logic) + optional memory-cap limit.
	var loadConditions []string
	if s.retentionHours > 0 {
		cutoff := time.Now().UTC().Add(-time.Duration(s.retentionHours*3600) * time.Second).Format(time.RFC3339)
		loadConditions = append(loadConditions, fmt.Sprintf("t.first_seen >= '%s'", cutoff))
	}
	limitClause := ""
	if maxPackets > 0 {
		loadConditions = append(loadConditions, fmt.Sprintf(
			"t.id IN (SELECT id FROM transmissions ORDER BY first_seen DESC LIMIT %d)", maxPackets))
	}
	filterClause := ""
	if len(loadConditions) > 0 {
		filterClause = "\n\t\t\tWHERE " + strings.Join(loadConditions, "\n\t\t\t AND ")
		limitClause = fmt.Sprintf(
			"\n\t\t\tWHERE t.id IN (SELECT id FROM transmissions ORDER BY first_seen DESC LIMIT %d)", maxPackets)
	}

	if s.db.isV3 {
@@ -495,7 +477,7 @@ func (s *PacketStore) Load() error {
			o.snr, o.rssi, o.score, o.path_json, strftime('%Y-%m-%dT%H:%M:%fZ', o.timestamp, 'unixepoch')` + obsRawHexCol + rpCol + `
			FROM transmissions t
			LEFT JOIN observations o ON o.transmission_id = t.id
			LEFT JOIN observers obs ON obs.rowid = o.observer_idx` + filterClause + `
			LEFT JOIN observers obs ON obs.rowid = o.observer_idx` + limitClause + `
			ORDER BY t.first_seen ASC, o.timestamp DESC`
	} else {
		loadSQL = `SELECT t.id, t.raw_hex, t.hash, t.first_seen, t.route_type,
@@ -503,7 +485,7 @@ func (s *PacketStore) Load() error {
			o.id, o.observer_id, o.observer_name, o.direction,
			o.snr, o.rssi, o.score, o.path_json, o.timestamp` + obsRawHexCol + rpCol + `
			FROM transmissions t
			LEFT JOIN observations o ON o.transmission_id = t.id` + filterClause + `
			LEFT JOIN observations o ON o.transmission_id = t.id` + limitClause + `
			ORDER BY t.first_seen ASC, o.timestamp DESC`
	}

@@ -2276,10 +2258,6 @@ func (s *PacketStore) filterPackets(q PacketQuery) []*StoreTx {
	}
	// Single-pass filter: apply all predicates in one scan.
	results := filterTxSlice(source, func(tx *StoreTx) bool {
		// Data integrity: exclude legacy rows missing hash or timestamp (#871)
		if tx.Hash == "" || tx.FirstSeen == "" {
			return false
		}
		if hasType && (tx.PayloadType == nil || *tx.PayloadType != filterType) {
			return false
		}
@@ -2441,145 +2419,6 @@ func (s *PacketStore) fetchAndCacheRegionObs(region string) map[string]bool {
	return m
}

// iataMatchesRegion returns true if iata matches any of the comma-separated
// region codes in regionParam. Comparison is case-insensitive and trim-tolerant.
// Empty iata never matches; empty regionParam never matches.
//
// #804: shared helper used by analytics to attribute transmissions to a node's
// HOME region (derived from observers that hear its zero-hop direct adverts)
// rather than to the observer that happened to relay a packet.
func iataMatchesRegion(iata, regionParam string) bool {
	if iata == "" || regionParam == "" {
		return false
	}
	codes := normalizeRegionCodes(regionParam)
	if len(codes) == 0 {
		return false
	}
	got := strings.TrimSpace(strings.ToUpper(iata))
	if got == "" {
		return false
	}
	for _, c := range codes {
		if c == got {
			return true
		}
	}
	return false
}

// computeNodeHomeRegions returns a pubkey → IATA map deriving each node's
// HOME region from zero-hop DIRECT adverts. A zero-hop direct advert is the
// most authoritative location signal because the path byte is set locally on
// the originating radio and the packet has not been relayed: the observer
// that hears it is necessarily within direct RF range of the originator.
//
// When a node has zero-hop direct adverts heard by observers from multiple
// regions, the most-frequently-observed region wins (geographic plurality).
//
// Caller must hold s.mu (read or write). Returns empty map (not nil) if no
// observers are loaded or no zero-hop direct adverts have been seen.
//
// #804: feeds analytics region-attribution so a multi-byte repeater whose
// flood adverts get relayed across regions is still attributed to its home.
func (s *PacketStore) computeNodeHomeRegions() map[string]string {
	// Build observer → IATA map. observers table is small (≪ packets), so a
	// single DB read here is acceptable; resolveRegionObservers does similar.
	obsIATA := make(map[string]string, 64)
	if s.db != nil {
		if observers, err := s.db.GetObservers(); err == nil {
			for _, o := range observers {
				if o.IATA != nil && *o.IATA != "" {
					obsIATA[o.ID] = strings.TrimSpace(strings.ToUpper(*o.IATA))
				}
			}
		}
	}
	if len(obsIATA) == 0 {
		return map[string]string{}
	}

	// Tally zero-hop direct ADVERT region observations per pubkey.
	type tally struct {
		counts map[string]int
	}
	per := make(map[string]*tally, 256)

	for _, tx := range s.packets {
		if tx.RawHex == "" || len(tx.RawHex) < 4 {
			continue
		}
		if tx.PayloadType == nil || *tx.PayloadType != PayloadADVERT {
			continue
		}
		if tx.DecodedJSON == "" {
			continue
		}
		header, err := strconv.ParseUint(tx.RawHex[:2], 16, 8)
		if err != nil {
			continue
		}
		routeType := header & 0x03
		if routeType != uint64(RouteDirect) && routeType != uint64(RouteTransportDirect) {
			continue
		}
		// Path byte index — for direct/transport-direct it's at offset 1
		// (matches the analytics decoder's pathByteIdx logic).
		if len(tx.RawHex) < 4 {
			continue
		}
		pathByte, err := strconv.ParseUint(tx.RawHex[2:4], 16, 8)
		if err != nil {
			continue
		}
		hopCount := pathByte & 0x3F
		if hopCount != 0 {
			continue
		}

		var d map[string]interface{}
		if json.Unmarshal([]byte(tx.DecodedJSON), &d) != nil {
			continue
		}
		pk, _ := d["pubKey"].(string)
		if pk == "" {
			pk, _ = d["public_key"].(string)
		}
		if pk == "" {
			continue
		}

		for _, obs := range tx.Observations {
			iata := obsIATA[obs.ObserverID]
			if iata == "" {
				continue
			}
			t := per[pk]
			if t == nil {
				t = &tally{counts: map[string]int{}}
				per[pk] = t
			}
			t.counts[iata]++
		}
	}

	out := make(map[string]string, len(per))
	for pk, t := range per {
		var bestIATA string
		bestCount := 0
		for iata, n := range t.counts {
			if n > bestCount || (n == bestCount && iata < bestIATA) {
				bestCount = n
				bestIATA = iata
			}
		}
		if bestIATA != "" {
			out[pk] = bestIATA
		}
	}
	return out
}
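// Byte layout assumed by the parsing above: the first raw byte is the header
// (low two bits = route type) and the second is the path byte (low six bits =
// hop count), so the zero-hop direct test reduces to:
//
//	header, _ := strconv.ParseUint(rawHex[0:2], 16, 8)
//	pathByte, _ := strconv.ParseUint(rawHex[2:4], 16, 8)
//	zeroHopDirect := (header&0x03 == uint64(RouteDirect) ||
//		header&0x03 == uint64(RouteTransportDirect)) && pathByte&0x3F == 0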
// enrichObs returns a map with observation fields + transmission fields.
func (s *PacketStore) enrichObs(obs *StoreObs) map[string]interface{} {
	tx := s.byTxID[obs.TransmissionID]
@@ -3930,18 +3769,8 @@ func (s *PacketStore) GetChannelMessages(channelHash string, limit, offset int,

// GetAnalyticsChannels returns full channel analytics computed from in-memory packets.
func (s *PacketStore) GetAnalyticsChannels(region string) map[string]interface{} {
	return s.GetAnalyticsChannelsWithWindow(region, TimeWindow{})
}

// GetAnalyticsChannelsWithWindow returns channel analytics for the given region,
// optionally bounded to a time window (issue #842). Zero TimeWindow = all data.
func (s *PacketStore) GetAnalyticsChannelsWithWindow(region string, window TimeWindow) map[string]interface{} {
	cacheKey := region
	if !window.IsZero() {
		cacheKey = region + "|" + window.CacheKey()
	}
	s.cacheMu.Lock()
	if cached, ok := s.chanCache[cacheKey]; ok && time.Now().Before(cached.expiresAt) {
	if cached, ok := s.chanCache[region]; ok && time.Now().Before(cached.expiresAt) {
		s.cacheHits++
		s.cacheMu.Unlock()
		return cached.data
@@ -3949,43 +3778,16 @@ func (s *PacketStore) GetAnalyticsChannelsWithWindow(region string, window TimeW
	s.cacheMisses++
	s.cacheMu.Unlock()

	result := s.computeAnalyticsChannels(region, window)
	result := s.computeAnalyticsChannels(region)

	s.cacheMu.Lock()
	s.chanCache[cacheKey] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.chanCache[region] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.cacheMu.Unlock()

	return result
}

// channelNameMatchesHash validates that a decrypted channel name hashes to the
// observed single-byte channel hash. This rejects rainbow-table mismatches where
// an observer's lookup table incorrectly maps a hash byte to the wrong name.
// Firmware invariant: channelHash = SHA256(SHA256("#name")[:16])[0]
func channelNameMatchesHash(name string, hashStr string) bool {
	expected, err := strconv.Atoi(hashStr)
	if err != nil {
		return false
	}
	chanName := name
	if !strings.HasPrefix(chanName, "#") {
		chanName = "#" + chanName
	}
	h1 := sha256.Sum256([]byte(chanName))
	h2 := sha256.Sum256(h1[:16])
	return int(h2[0]) == expected
}
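// Worked example of the firmware invariant above ("#test" is an arbitrary
// sample name, not a known channel):
//
//	h1 := sha256.Sum256([]byte("#test")) // digest of the '#'-prefixed name
//	h2 := sha256.Sum256(h1[:16])         // re-hash only the first 16 bytes
//	hashByte := h2[0]                    // the single-byte on-air channel hash
//
// channelNameMatchesHash("test", strconv.Itoa(int(hashByte))) then reports true.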
// isPlaceholderName returns true if the name is a "chN" placeholder (not a real decrypted name).
func isPlaceholderName(name string) bool {
	if !strings.HasPrefix(name, "ch") {
		return false
	}
	_, err := strconv.Atoi(name[2:])
	return err == nil
}

func (s *PacketStore) computeAnalyticsChannels(region string, window TimeWindow) map[string]interface{} {
func (s *PacketStore) computeAnalyticsChannels(region string) map[string]interface{} {
	s.mu.RLock()
	defer s.mu.RUnlock()

@@ -4034,9 +3836,6 @@ func (s *PacketStore) computeAnalyticsChannels(region string, window TimeWindow)

	grpTxts := s.byPayloadType[5]
	for _, tx := range grpTxts {
		if !window.Includes(tx.FirstSeen) {
			continue
		}
		if regionObs != nil {
			match := false
			for _, obs := range tx.Observations {
@@ -4067,27 +3866,16 @@ func (s *PacketStore) computeAnalyticsChannels(region string, window TimeWindow)
			name = "ch" + hash
		}
		encrypted := decoded.Text == "" && decoded.Sender == ""

		// Bug #978 fix: validate channel name against hash to reject rainbow-table mismatches.
		// If the claimed channel name doesn't hash to the observed channelHash byte, discard it.
		if name != "" && name != "ch"+hash && !channelNameMatchesHash(name, hash) {
			name = "ch" + hash
			encrypted = true
		}

		// Bug #978 fix: always group by hash byte alone — same physical channel,
		// regardless of which observer decrypted it.
		// Use hash as key for grouping (matches Node.js String(hash))
		chKey := hash
		if decoded.Type == "CHAN" && decoded.Channel != "" {
			chKey = hash + "_" + decoded.Channel
		}

		ch := channelMap[chKey]
		if ch == nil {
			ch = &chanInfo{Hash: hash, Name: name, Senders: map[string]bool{}, LastActivity: tx.FirstSeen, Encrypted: encrypted}
			channelMap[chKey] = ch
		} else {
			// Upgrade bucket name: if current is placeholder and we have a validated decrypted name
			if isPlaceholderName(ch.Name) && !isPlaceholderName(name) {
				ch.Name = name
			}
		}
		ch.Messages++
		ch.LastActivity = tx.FirstSeen
@@ -4177,18 +3965,8 @@ func (s *PacketStore) computeAnalyticsChannels(region string, window TimeWindow)

// GetAnalyticsRF returns full RF analytics computed from in-memory observations.
func (s *PacketStore) GetAnalyticsRF(region string) map[string]interface{} {
	return s.GetAnalyticsRFWithWindow(region, TimeWindow{})
}

// GetAnalyticsRFWithWindow returns RF analytics bounded by an optional
// time window (issue #842). Zero TimeWindow = all data (backwards compatible).
func (s *PacketStore) GetAnalyticsRFWithWindow(region string, window TimeWindow) map[string]interface{} {
	cacheKey := region
	if !window.IsZero() {
		cacheKey = region + "|" + window.CacheKey()
	}
	s.cacheMu.Lock()
	if cached, ok := s.rfCache[cacheKey]; ok && time.Now().Before(cached.expiresAt) {
	if cached, ok := s.rfCache[region]; ok && time.Now().Before(cached.expiresAt) {
		s.cacheHits++
		s.cacheMu.Unlock()
		return cached.data
@@ -4196,16 +3974,16 @@ func (s *PacketStore) GetAnalyticsRFWithWindow(region string, window TimeWindow)
	s.cacheMisses++
	s.cacheMu.Unlock()

	result := s.computeAnalyticsRF(region, window)
	result := s.computeAnalyticsRF(region)

	s.cacheMu.Lock()
	s.rfCache[cacheKey] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.rfCache[region] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.cacheMu.Unlock()

	return result
}

func (s *PacketStore) computeAnalyticsRF(region string, window TimeWindow) map[string]interface{} {
func (s *PacketStore) computeAnalyticsRF(region string) map[string]interface{} {
	s.mu.RLock()
	defer s.mu.RUnlock()

@@ -4244,9 +4022,6 @@ func (s *PacketStore) computeAnalyticsRF(region string, window TimeWindow) map[s
	for obsID := range regionObs {
		obsList := s.byObserver[obsID]
		for _, obs := range obsList {
			if !window.Includes(obs.Timestamp) {
				continue
			}
			totalObs++
			tx := s.byTxID[obs.TransmissionID]
			hash := ""
@@ -4332,12 +4107,6 @@ func (s *PacketStore) computeAnalyticsRF(region string, window TimeWindow) map[s
	} else {
		// No region: iterate all transmissions and their observations
		for _, tx := range s.packets {
			// Window filter: skip transmissions outside the requested window.
			// We use tx.FirstSeen as the bounding timestamp; per-obs window
			// filter below handles cases where individual obs timestamps differ.
			if !window.Includes(tx.FirstSeen) {
				continue
			}
			hash := tx.Hash
			if hash != "" {
				regionalHashes[hash] = true
@@ -4748,19 +4517,12 @@ type nodeInfo struct {
	Lat      float64
	Lon      float64
	HasGPS   bool
	LastSeen time.Time
}

func (s *PacketStore) getAllNodes() []nodeInfo {
	// Try with last_seen first; fall back to without if column doesn't exist.
	rows, err := s.db.conn.Query("SELECT public_key, name, role, lat, lon, last_seen FROM nodes")
	hasLastSeen := true
	rows, err := s.db.conn.Query("SELECT public_key, name, role, lat, lon FROM nodes")
	if err != nil {
		rows, err = s.db.conn.Query("SELECT public_key, name, role, lat, lon FROM nodes")
		hasLastSeen = false
		if err != nil {
			return nil
		}
		return nil
	}
	defer rows.Close()
	var nodes []nodeInfo
@@ -4768,25 +4530,13 @@ func (s *PacketStore) getAllNodes() []nodeInfo {
		var pk string
		var name, role sql.NullString
		var lat, lon sql.NullFloat64
		var lastSeen sql.NullString
		if hasLastSeen {
			rows.Scan(&pk, &name, &role, &lat, &lon, &lastSeen)
		} else {
			rows.Scan(&pk, &name, &role, &lat, &lon)
		}
		rows.Scan(&pk, &name, &role, &lat, &lon)
		n := nodeInfo{PublicKey: pk, Name: nullStrVal(name), Role: nullStrVal(role)}
		if lat.Valid && lon.Valid {
			n.Lat = lat.Float64
			n.Lon = lon.Float64
			n.HasGPS = !(n.Lat == 0 && n.Lon == 0)
		}
		if hasLastSeen && lastSeen.Valid && lastSeen.String != "" {
			if t, err := time.Parse(time.RFC3339, lastSeen.String); err == nil {
				n.LastSeen = t
			} else if t, err := time.Parse("2006-01-02 15:04:05", lastSeen.String); err == nil {
				n.LastSeen = t
			}
		}
		nodes = append(nodes, n)
	}
	return nodes
@@ -5032,17 +4782,8 @@ func parsePathJSON(pathJSON string) []string {
}

func (s *PacketStore) GetAnalyticsTopology(region string) map[string]interface{} {
	return s.GetAnalyticsTopologyWithWindow(region, TimeWindow{})
}

// GetAnalyticsTopologyWithWindow — see issue #842.
func (s *PacketStore) GetAnalyticsTopologyWithWindow(region string, window TimeWindow) map[string]interface{} {
	cacheKey := region
	if !window.IsZero() {
		cacheKey = region + "|" + window.CacheKey()
	}
	s.cacheMu.Lock()
	if cached, ok := s.topoCache[cacheKey]; ok && time.Now().Before(cached.expiresAt) {
	if cached, ok := s.topoCache[region]; ok && time.Now().Before(cached.expiresAt) {
		s.cacheHits++
		s.cacheMu.Unlock()
		return cached.data
@@ -5050,16 +4791,16 @@ func (s *PacketStore) GetAnalyticsTopologyWithWindow(region string, window TimeW
	s.cacheMisses++
	s.cacheMu.Unlock()

	result := s.computeAnalyticsTopology(region, window)
	result := s.computeAnalyticsTopology(region)

	s.cacheMu.Lock()
	s.topoCache[cacheKey] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.topoCache[region] = &cachedResult{data: result, expiresAt: time.Now().Add(s.rfCacheTTL)}
	s.cacheMu.Unlock()

	return result
}

func (s *PacketStore) computeAnalyticsTopology(region string, window TimeWindow) map[string]interface{} {
func (s *PacketStore) computeAnalyticsTopology(region string) map[string]interface{} {
	s.mu.RLock()
	defer s.mu.RUnlock()

@@ -5090,9 +4831,6 @@ func (s *PacketStore) computeAnalyticsTopology(region string, window TimeWindow)
	perObserver := map[string]map[string]*struct{ minDist, maxDist, count int }{}

	for _, tx := range s.packets {
		if !window.Includes(tx.FirstSeen) {
			continue
		}
		hops := txGetParsedPath(tx)
		if len(hops) == 0 {
			continue
@@ -5184,103 +4922,6 @@ func (s *PacketStore) computeAnalyticsTopology(region string, window TimeWindow)
		}
	}

	// pmLookup resolves a hop hex string to its prefix-map candidates,
	// applying the same truncation used during map construction.
	pmLookup := func(hop string) []nodeInfo {
		key := strings.ToLower(hop)
		if len(key) > maxPrefixLen {
			key = key[:maxPrefixLen]
		}
		return pm.m[key]
	}

	// --- Dedup pass: merge hop prefixes that resolve unambiguously to the same node ---
	// Only merge when pm.m[hop] has exactly 1 candidate (unique_prefix).
	// Ambiguous short prefixes (efiten's concern: 1-byte collisions) stay separate.
	{
		type dedupInfo struct {
			totalCount int
			longestHop string
		}
		byPubkey := map[string]*dedupInfo{} // pubkey → merged info
		ambiguous := map[string]int{}       // hop → count (kept as-is)
		for h, c := range hopFreq {
			candidates := pmLookup(h)
			if len(candidates) == 1 {
				pk := strings.ToLower(candidates[0].PublicKey)
				if info, ok := byPubkey[pk]; ok {
					info.totalCount += c
					if len(h) > len(info.longestHop) {
						info.longestHop = h
					}
				} else {
					byPubkey[pk] = &dedupInfo{totalCount: c, longestHop: h}
				}
			} else {
				ambiguous[h] = c
			}
		}
		// Rebuild hopFreq
		hopFreq = make(map[string]int, len(byPubkey)+len(ambiguous))
		for _, info := range byPubkey {
			hopFreq[info.longestHop] = info.totalCount
		}
		for h, c := range ambiguous {
			hopFreq[h] = c
		}
	}

	// --- Dedup pass for pairs: merge by resolved pubkey pair ---
	{
		type pairDedupInfo struct {
			totalCount int
			longestA   string
			longestB   string
		}
		byPubkeyPair := map[string]*pairDedupInfo{} // "pkA|pkB" (sorted) → merged info
		ambiguousPairs := map[string]int{}
		for p, c := range pairFreq {
			parts := strings.SplitN(p, "|", 2)
			candA := pmLookup(parts[0])
			candB := pmLookup(parts[1])
			if len(candA) == 1 && len(candB) == 1 {
				pkA := strings.ToLower(candA[0].PublicKey)
				pkB := strings.ToLower(candB[0].PublicKey)
				// Canonicalize by sorted pubkey
				if pkA > pkB {
					pkA, pkB = pkB, pkA
					parts[0], parts[1] = parts[1], parts[0]
				}
				key := pkA + "|" + pkB
				if info, ok := byPubkeyPair[key]; ok {
					info.totalCount += c
					if len(parts[0]) > len(info.longestA) {
						info.longestA = parts[0]
					}
					if len(parts[1]) > len(info.longestB) {
						info.longestB = parts[1]
					}
				} else {
					byPubkeyPair[key] = &pairDedupInfo{totalCount: c, longestA: parts[0], longestB: parts[1]}
				}
			} else {
				ambiguousPairs[p] = c
			}
		}
		// Rebuild pairFreq
		pairFreq = make(map[string]int, len(byPubkeyPair)+len(ambiguousPairs))
		for _, info := range byPubkeyPair {
			a, b := info.longestA, info.longestB
			if a > b {
				a, b = b, a
			}
			pairFreq[a+"|"+b] = info.totalCount
		}
		for p, c := range ambiguousPairs {
			pairFreq[p] = c
		}
	}

	// Top repeaters
	type freqEntry struct {
		hop string
@@ -5805,16 +5446,6 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf
|
||||
regionObs = s.resolveRegionObservers(region)
|
||||
}
|
||||
|
||||
// #804: derive each node's HOME region from zero-hop direct adverts (the
|
||||
// most authoritative location signal — those packets cannot have been
|
||||
// relayed). When non-empty, multi-byte node attribution prefers this
|
||||
// over observer-region. Falls back to observer-region when unknown.
|
||||
nodeHomeRegion := s.computeNodeHomeRegions()
|
||||
attributionMethod := "observer"
|
||||
if region != "" && len(nodeHomeRegion) > 0 {
|
||||
attributionMethod = "repeater"
|
||||
}
|
||||
|
||||
allNodes, pm := s.getCachedNodesAndPM()
|
||||
|
||||
// Build pubkey→role map for filtering by node type.
|
||||
@@ -5833,6 +5464,18 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf
|
||||
if tx.RawHex == "" {
|
||||
continue
|
||||
}
|
||||
if regionObs != nil {
|
||||
match := false
|
||||
for _, obs := range tx.Observations {
|
||||
if regionObs[obs.ObserverID] {
|
||||
match = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !match {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// Parse header and path byte
|
||||
if len(tx.RawHex) < 4 {
|
||||
@@ -5862,84 +5505,52 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf
continue
}

// #804: pre-extract originator pubkey for ADVERT packets so we can
// (a) relax observer-region filter when the originator's HOME region
// matches the requested region (a flood relay heard outside the
// home region must still attribute to the home), and
// (b) reuse the parsed values below without re-parsing.
var advertPK, advertName string
var advertParsed bool
// Track originator from advert packets (including zero-hop adverts,
// keyed by pubKey so same-name nodes don't merge).
if tx.PayloadType != nil && *tx.PayloadType == PayloadADVERT && tx.DecodedJSON != "" {
var d map[string]interface{}
if json.Unmarshal([]byte(tx.DecodedJSON), &d) == nil {
pk := ""
if v, ok := d["pubKey"].(string); ok {
advertPK = v
pk = v
} else if v, ok := d["public_key"].(string); ok {
advertPK = v
pk = v
}
if n, ok := d["name"].(string); ok {
advertName = n
}
advertParsed = advertPK != ""
}
}

if regionObs != nil {
match := false
for _, obs := range tx.Observations {
if regionObs[obs.ObserverID] {
match = true
break
if pk != "" {
name := ""
if n, ok := d["name"].(string); ok {
name = n
}
if name == "" {
if len(pk) >= 8 {
name = pk[:8]
} else {
name = pk
}
}
// Skip zero-hop direct adverts for hash_size — the
// path byte is locally generated and unreliable.
// Still count the packet and update lastSeen.
isZeroHop := (routeType == uint64(RouteDirect) || routeType == uint64(RouteTransportDirect)) && (actualPathByte&0x3F) == 0
if byNode[pk] == nil {
role := nodeRoleByPK[pk] // empty if unknown
initHS := hashSize
if isZeroHop {
initHS = 0
}
byNode[pk] = map[string]interface{}{
"hashSize": initHS, "packets": 0,
"lastSeen": tx.FirstSeen, "name": name,
"role": role,
}
}
byNode[pk]["packets"] = byNode[pk]["packets"].(int) + 1
if !isZeroHop {
byNode[pk]["hashSize"] = hashSize
}
byNode[pk]["lastSeen"] = tx.FirstSeen
}
}
// #804: allow ADVERTs from a node whose HOME region matches the
// requested region even if no observer in that region heard this
// particular packet (e.g. flood relay heard only by an out-of-
// region observer). Conservative: only ADVERTs (the source is
// known by pubkey) and only when home is established.
if !match && advertParsed {
if home, ok := nodeHomeRegion[advertPK]; ok && iataMatchesRegion(home, region) {
match = true
}
}
if !match {
continue
}
}

// Track originator from advert packets (including zero-hop adverts,
// keyed by pubKey so same-name nodes don't merge).
if advertParsed {
pk := advertPK
name := advertName
if name == "" {
if len(pk) >= 8 {
name = pk[:8]
} else {
name = pk
}
}
// Skip zero-hop direct adverts for hash_size — the
// path byte is locally generated and unreliable.
// Still count the packet and update lastSeen.
isZeroHop := (routeType == uint64(RouteDirect) || routeType == uint64(RouteTransportDirect)) && (actualPathByte&0x3F) == 0
if byNode[pk] == nil {
role := nodeRoleByPK[pk] // empty if unknown
initHS := hashSize
if isZeroHop {
initHS = 0
}
byNode[pk] = map[string]interface{}{
"hashSize": initHS, "packets": 0,
"lastSeen": tx.FirstSeen, "name": name,
"role": role,
}
}
byNode[pk]["packets"] = byNode[pk]["packets"].(int) + 1
if !isZeroHop {
byNode[pk]["hashSize"] = hashSize
}
byNode[pk]["lastSeen"] = tx.FirstSeen
}

// Distribution/hourly/uniqueHops only for packets with relay hops
@@ -6020,15 +5631,6 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf
// Multi-byte nodes
multiByteNodes := make([]map[string]interface{}, 0)
for pk, data := range byNode {
// #804: when a region filter is active, prefer the repeater's HOME
// region over the observer that happened to relay it. Falls back to
// the (already-applied) observer-region filter when the node's home
// region is unknown.
if region != "" {
if home, ok := nodeHomeRegion[pk]; ok && !iataMatchesRegion(home, region) {
continue
}
}
if data["hashSize"].(int) > 1 {
multiByteNodes = append(multiByteNodes, map[string]interface{}{
"name": data["name"], "hashSize": data["hashSize"],
@@ -6043,17 +5645,11 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf

// Distribution by repeaters: count unique REPEATER nodes per hash size
distributionByRepeaters := map[string]int{"1": 0, "2": 0, "3": 0}
for pk, data := range byNode {
for _, data := range byNode {
role, _ := data["role"].(string)
if !strings.Contains(strings.ToLower(role), "repeater") {
continue
}
// #804: same repeater-region preference as multiByteNodes.
if region != "" {
if home, ok := nodeHomeRegion[pk]; ok && !iataMatchesRegion(home, region) {
continue
}
}
hs := data["hashSize"].(int)
key := strconv.Itoa(hs)
distributionByRepeaters[key]++
@@ -6066,7 +5662,6 @@ func (s *PacketStore) computeAnalyticsHashSizes(region string) map[string]interf
"hourly": hourly,
"topHops": topHops,
"multiByteNodes": multiByteNodes,
"attributionMethod": attributionMethod,
}
}

@@ -6543,51 +6138,6 @@ func EnrichNodeWithHashSize(node map[string]interface{}, info *hashSizeNodeInfo)
}
}

// EnrichNodeWithMultiByte adds multi-byte capability fields to a node map.
func EnrichNodeWithMultiByte(node map[string]interface{}, entry *MultiByteCapEntry) {
if entry == nil {
return
}
node["multi_byte_status"] = entry.Status
node["multi_byte_evidence"] = entry.Evidence
node["multi_byte_max_hash_size"] = entry.MaxHashSize
}

// GetMultiByteCapMap returns a cached pubkey → MultiByteCapEntry map.
// Reuses the same 15s TTL cache pattern as hash size info.
func (s *PacketStore) GetMultiByteCapMap() map[string]*MultiByteCapEntry {
s.hashSizeInfoMu.Lock()
if s.multiByteCapCache != nil && time.Since(s.multiByteCapAt) < 15*time.Second {
cached := s.multiByteCapCache
s.hashSizeInfoMu.Unlock()
return cached
}
s.hashSizeInfoMu.Unlock()

// Get adopter hash sizes from analytics for cross-referencing
analyticsData := s.GetAnalyticsHashSizes("")
adopterSizes := make(map[string]int)
if nodes, ok := analyticsData["nodes"].(map[string]map[string]interface{}); ok {
for pk, data := range nodes {
if hs, ok := data["hashSize"].(int); ok {
adopterSizes[pk] = hs
}
}
}

caps := s.computeMultiByteCapability(adopterSizes)
result := make(map[string]*MultiByteCapEntry, len(caps))
for i := range caps {
result[caps[i].PublicKey] = &caps[i]
}

s.hashSizeInfoMu.Lock()
s.multiByteCapCache = result
s.multiByteCapAt = time.Now()
s.hashSizeInfoMu.Unlock()
return result
}

// --- Multi-Byte Capability Inference ---

// MultiByteCapEntry represents a node's inferred multi-byte capability.

@@ -1,133 +0,0 @@
package main

import (
	"net/http"
	"time"
)

// TimeWindow is a half-open time range used to bound analytics queries.
// Empty Since/Until means unbounded on that end (backwards compatible).
type TimeWindow struct {
	Since string // RFC3339, empty = unbounded
	Until string // RFC3339, empty = unbounded
	// Label is a stable identifier for the user-requested window
	// (e.g. "24h"). For relative windows it is the original alias; for
	// absolute ranges it is empty (Since/Until are already stable).
	// Used only for cache keying so that "?window=24h" produces a single
	// cache entry instead of one per second.
	Label string
}

// IsZero reports whether the window imposes no bounds at all.
func (w TimeWindow) IsZero() bool {
	return w.Since == "" && w.Until == ""
}

// CacheKey returns a deterministic key suitable for analytics caches.
// For relative windows the key is the alias label so that the cache
// remains stable across the wall-clock advancing.
func (w TimeWindow) CacheKey() string {
	if w.IsZero() {
		return ""
	}
	if w.Label != "" {
		return "rel:" + w.Label
	}
	return w.Since + "|" + w.Until
}

// Includes reports whether ts (an RFC3339-style string) falls within the
// window. Empty ts is treated as included (for callers that don't have a
// timestamp on every observation).
//
// Comparison is done by parsing both sides into time.Time. Lex compare is
// unsafe here because stored timestamps carry millisecond precision
// ("...HH:MM:SS.000Z") while bounds emitted by ParseTimeWindow do not
// ("...HH:MM:SSZ"), and '.' (0x2e) sorts before 'Z' (0x5a). If a timestamp
// fails to parse we fall back to lex compare to preserve old behavior.
func (w TimeWindow) Includes(ts string) bool {
	if ts == "" {
		return true
	}
	tt, terr := parseAnyRFC3339(ts)
	if w.Since != "" {
		if s, err := parseAnyRFC3339(w.Since); err == nil && terr == nil {
			if tt.Before(s) {
				return false
			}
		} else if ts < w.Since {
			return false
		}
	}
	if w.Until != "" {
		if u, err := parseAnyRFC3339(w.Until); err == nil && terr == nil {
			if tt.After(u) {
				return false
			}
		} else if ts > w.Until {
			return false
		}
	}
	return true
}

// parseAnyRFC3339 accepts both fractional-second ("...000Z") and second-
// precision ("...Z") RFC3339 timestamps. time.RFC3339Nano handles both.
func parseAnyRFC3339(s string) (time.Time, error) {
	return time.Parse(time.RFC3339Nano, s)
}

// ParseTimeWindow extracts a TimeWindow from query params.
//
// Supported parameters:
//
//	?window=1h | 24h | 7d | 30d — relative window ending "now"
//	?from=<RFC3339>&to=<RFC3339> — absolute custom range (either bound optional)
//
// When neither is set, returns the zero TimeWindow (unbounded; original behavior).
// Invalid values are silently ignored to preserve backwards compatibility.
func ParseTimeWindow(r *http.Request) TimeWindow {
	q := r.URL.Query()

	// Absolute range takes precedence if either bound is set.
	from := q.Get("from")
	to := q.Get("to")
	if from != "" || to != "" {
		w := TimeWindow{}
		if from != "" {
			if t, err := time.Parse(time.RFC3339, from); err == nil {
				w.Since = t.UTC().Format(time.RFC3339)
			}
		}
		if to != "" {
			if t, err := time.Parse(time.RFC3339, to); err == nil {
				w.Until = t.UTC().Format(time.RFC3339)
			}
		}
		return w
	}

	// Relative window.
	if win := q.Get("window"); win != "" {
		var d time.Duration
		switch win {
		case "1h":
			d = 1 * time.Hour
		case "24h", "1d":
			d = 24 * time.Hour
		case "3d":
			d = 3 * 24 * time.Hour
		case "7d", "1w":
			d = 7 * 24 * time.Hour
		case "30d":
			d = 30 * 24 * time.Hour
		default:
			// Unknown values are silently ignored — backwards compatible.
			return TimeWindow{}
		}
		since := time.Now().UTC().Add(-d).Format(time.RFC3339)
		return TimeWindow{Since: since, Label: win}
	}

	return TimeWindow{}
}
@@ -1,144 +0,0 @@
package main

import (
	"net/http/httptest"
	"strings"
	"testing"
	"time"
)

// Issue #842 — selectable analytics timeframes.
// Backend must accept ?window=1h|24h|7d|30d and ?from=/?to= and yield a
// TimeWindow that correctly bounds analytics queries.

func TestParseTimeWindow_Window24h(t *testing.T) {
	r := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
	w := ParseTimeWindow(r)
	if w.Since == "" {
		t.Fatalf("window=24h: expected non-empty Since, got %q", w.Since)
	}
	since, err := time.Parse(time.RFC3339, w.Since)
	if err != nil {
		t.Fatalf("window=24h: Since %q is not RFC3339: %v", w.Since, err)
	}
	delta := time.Since(since)
	if delta < 23*time.Hour || delta > 25*time.Hour {
		t.Fatalf("window=24h: Since should be ~24h ago, got delta=%v", delta)
	}
}

func TestParseTimeWindow_WindowAliases(t *testing.T) {
	cases := map[string]time.Duration{
		"1h":  1 * time.Hour,
		"24h": 24 * time.Hour,
		"7d":  7 * 24 * time.Hour,
		"30d": 30 * 24 * time.Hour,
	}
	for q, want := range cases {
		r := httptest.NewRequest("GET", "/api/analytics/rf?window="+q, nil)
		got := ParseTimeWindow(r)
		if got.Since == "" {
			t.Errorf("window=%s: empty Since", q)
			continue
		}
		since, err := time.Parse(time.RFC3339, got.Since)
		if err != nil {
			t.Errorf("window=%s: bad RFC3339 %q", q, got.Since)
			continue
		}
		delta := time.Since(since)
		// allow 5 minutes of slack
		if delta < want-5*time.Minute || delta > want+5*time.Minute {
			t.Errorf("window=%s: expected ~%v, got %v", q, want, delta)
		}
	}
}

func TestParseTimeWindow_FromTo(t *testing.T) {
	from := "2026-04-01T00:00:00Z"
	to := "2026-04-08T00:00:00Z"
	r := httptest.NewRequest("GET", "/api/analytics/rf?from="+from+"&to="+to, nil)
	w := ParseTimeWindow(r)
	if w.Since != from {
		t.Errorf("expected Since=%q, got %q", from, w.Since)
	}
	if w.Until != to {
		t.Errorf("expected Until=%q, got %q", to, w.Until)
	}
}

func TestParseTimeWindow_NoParams_BackwardsCompatible(t *testing.T) {
	r := httptest.NewRequest("GET", "/api/analytics/rf", nil)
	w := ParseTimeWindow(r)
	if !w.IsZero() {
		t.Errorf("no params should yield zero window, got %+v", w)
	}
}

func TestTimeWindow_Includes(t *testing.T) {
	w := TimeWindow{Since: "2026-04-01T00:00:00Z", Until: "2026-04-08T00:00:00Z"}
	if !w.Includes("2026-04-05T12:00:00Z") {
		t.Error("mid-range ts should be included")
	}
	if w.Includes("2026-03-31T23:59:59Z") {
		t.Error("ts before Since should be excluded")
	}
	if w.Includes("2026-04-08T00:00:01Z") {
		t.Error("ts after Until should be excluded")
	}
	// Empty ts always included (some observations lack timestamps)
	if !w.Includes("") {
		t.Error("empty ts should be included")
	}
}

func TestTimeWindow_CacheKey_DistinctPerWindow(t *testing.T) {
	a := TimeWindow{Since: "2026-04-01T00:00:00Z"}
	b := TimeWindow{Since: "2026-04-02T00:00:00Z"}
	z := TimeWindow{}
	if a.CacheKey() == b.CacheKey() {
		t.Error("different windows must produce different cache keys")
	}
	if z.CacheKey() != "" {
		t.Errorf("zero window cache key must be empty, got %q", z.CacheKey())
	}
	if !strings.Contains(a.CacheKey(), "2026-04-01") {
		t.Errorf("cache key should encode Since, got %q", a.CacheKey())
	}
}

// Self-review fixes (#1018 polish).

// B1: a relative window must produce a STABLE cache key across calls,
// otherwise the analytics cache thrashes (one entry per second).
func TestTimeWindow_RelativeWindow_StableCacheKey(t *testing.T) {
	r1 := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
	w1 := ParseTimeWindow(r1)
	time.Sleep(1100 * time.Millisecond)
	r2 := httptest.NewRequest("GET", "/api/analytics/rf?window=24h", nil)
	w2 := ParseTimeWindow(r2)
	if w1.CacheKey() != w2.CacheKey() {
		t.Fatalf("relative window cache key must be stable across calls, got %q vs %q", w1.CacheKey(), w2.CacheKey())
	}
}

// B2: stored timestamps use millisecond precision (".000Z") while RFC3339
// bounds have none. Includes() must use time-based compare, not lex compare,
// so tx past Until are correctly excluded regardless of fractional digits.
func TestTimeWindow_Includes_FractionalSecondsBoundary(t *testing.T) {
	w := TimeWindow{Until: "2026-04-08T00:00:00Z"}
	// A tx 1ms past Until should NOT be included.
	if w.Includes("2026-04-08T00:00:00.001Z") {
		t.Error("ts 1ms past Until must be excluded; lex compare against fractional ts is wrong")
	}
	// A tx well inside the window must be included.
	if !w.Includes("2026-04-07T23:59:59.999Z") {
		t.Error("ts just before Until must be included")
	}

	w2 := TimeWindow{Since: "2026-04-01T00:00:00Z"}
	// A tx at exactly Since should be included.
	if !w2.Includes("2026-04-01T00:00:00.000Z") {
		t.Error("ts exactly at Since must be included; lex compare excludes it because '.' < 'Z'")
	}
}
@@ -1,338 +0,0 @@
package main

import (
	"database/sql"
	"fmt"
	"path/filepath"
	"testing"
	"time"

	_ "modernc.org/sqlite"
)

// TestTopologyDedup_RepeatersMergeByPubkey verifies that topRepeaters
// merges entries whose hop prefixes resolve unambiguously to the same node.
func TestTopologyDedup_RepeatersMergeByPubkey(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")
	conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	exec := func(s string) {
		if _, err := conn.Exec(s); err != nil {
			t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
		}
	}
	exec(`CREATE TABLE transmissions (
		id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
		route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
	)`)
	exec(`CREATE TABLE observations (
		id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
		direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
	)`)
	exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
	exec(`CREATE TABLE nodes (
		public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
		last_seen TEXT, frequency REAL
	)`)
	exec(`CREATE TABLE schema_version (version INTEGER)`)
	exec(`INSERT INTO schema_version (version) VALUES (1)`)
	exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)

	// Insert two repeater nodes with distinct pubkeys.
	// AQUA: pubkey starts with 0735bc...
	// BETA: pubkey starts with 99aabb...
	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('0735bc6dda4d1122aabbccdd', 'AQUA', 'Repeater')`)
	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('99aabb001122334455667788', 'BETA', 'Repeater')`)

	base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)

	// Create packets:
	// - 10 packets with path ["07", "99aa"] (short prefix for AQUA, medium for BETA)
	// - 5 packets with path ["0735bc", "99"] (medium prefix for AQUA, short for BETA)
	// - 3 packets with path ["0735bc6d", "99aabb"] (long prefix for both)
	txID := 1
	obsID := 1
	insertTx := func(path string, count int) {
		for i := 0; i < count; i++ {
			ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
			hash := fmt.Sprintf("h%04d", txID)
			conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
				txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
			conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
				obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, path, ts)
			txID++
			obsID++
		}
	}

	insertTx(`["07","99aa"]`, 10)
	insertTx(`["0735bc","99"]`, 5)
	insertTx(`["0735bc6d","99aabb"]`, 3)

	// Total: AQUA appears as "07" (10×), "0735bc" (5×), "0735bc6d" (3×) = 18 total
	// Total: BETA appears as "99aa" (10×), "99" (5×), "99aabb" (3×) = 18 total
	// After dedup, each should appear ONCE with count=18.

	db, err := OpenDB(dbPath)
	if err != nil {
		t.Fatal(err)
	}
	defer db.conn.Close()

	store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
	if err := store.Load(); err != nil {
		t.Fatal(err)
	}

	result := store.computeAnalyticsTopology("", TimeWindow{})
	topRepeaters := result["topRepeaters"].([]map[string]interface{})

	// Build a map of pubkey → total count from topRepeaters
	pubkeyCounts := map[string]int{}
	for _, entry := range topRepeaters {
		pk, _ := entry["pubkey"].(string)
		if pk == "" {
			continue
		}
		pubkeyCounts[pk] += entry["count"].(int)
	}

	// Each pubkey should appear exactly once in topRepeaters
	aquaEntries := 0
	betaEntries := 0
	for _, entry := range topRepeaters {
		pk, _ := entry["pubkey"].(string)
		if pk == "0735bc6dda4d1122aabbccdd" {
			aquaEntries++
		}
		if pk == "99aabb001122334455667788" {
			betaEntries++
		}
	}

	if aquaEntries != 1 {
		t.Errorf("AQUA should appear exactly once in topRepeaters after dedup, got %d entries", aquaEntries)
		for _, e := range topRepeaters {
			t.Logf(" entry: hop=%v name=%v pubkey=%v count=%v", e["hop"], e["name"], e["pubkey"], e["count"])
		}
	}
	if betaEntries != 1 {
		t.Errorf("BETA should appear exactly once in topRepeaters after dedup, got %d entries", betaEntries)
	}

	// Check that the merged count is correct (18 each)
	if c := pubkeyCounts["0735bc6dda4d1122aabbccdd"]; c != 18 {
		t.Errorf("AQUA total count should be 18, got %d", c)
	}
	if c := pubkeyCounts["99aabb001122334455667788"]; c != 18 {
		t.Errorf("BETA total count should be 18, got %d", c)
	}
}

// TestTopologyDedup_AmbiguousPrefixNotMerged verifies that ambiguous short
// prefixes (matching multiple nodes) are NOT merged — they stay separate.
func TestTopologyDedup_AmbiguousPrefixNotMerged(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")
	conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	exec := func(s string) {
		if _, err := conn.Exec(s); err != nil {
			t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
		}
	}
	exec(`CREATE TABLE transmissions (
		id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
		route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
	)`)
	exec(`CREATE TABLE observations (
		id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
		direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
	)`)
	exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
	exec(`CREATE TABLE nodes (
		public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
		last_seen TEXT, frequency REAL
	)`)
	exec(`CREATE TABLE schema_version (version INTEGER)`)
	exec(`INSERT INTO schema_version (version) VALUES (1)`)
	exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)

	// Two nodes whose pubkeys share the prefix "ab" — collision!
	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('ab11223344556677aabbccdd', 'NODE_A', 'Repeater')`)
	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('ab99887766554433aabbccdd', 'NODE_B', 'Repeater')`)

	base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
	txID := 1
	obsID := 1

	// 10 packets with hop "ab" — ambiguous (matches both NODE_A and NODE_B)
	for i := 0; i < 10; i++ {
		ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
		hash := fmt.Sprintf("h%04d", txID)
		conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
			txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
		conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
			obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `["ab"]`, ts)
		txID++
		obsID++
	}
	// 5 packets with hop "ab1122" — unambiguous (only NODE_A)
	for i := 0; i < 5; i++ {
		ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
		hash := fmt.Sprintf("h%04d", txID)
		conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
			txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
		conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
			obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, `["ab1122"]`, ts)
		txID++
		obsID++
	}

	db, err := OpenDB(dbPath)
	if err != nil {
		t.Fatal(err)
	}
	defer db.conn.Close()

	store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
	if err := store.Load(); err != nil {
		t.Fatal(err)
	}

	result := store.computeAnalyticsTopology("", TimeWindow{})
	topRepeaters := result["topRepeaters"].([]map[string]interface{})

	// "ab" is ambiguous — should NOT be merged with "ab1122"
	// We expect two separate entries: one for "ab" (count=10) and one for "ab1122" (count=5)
	foundAb := false
	foundAb1122 := false
	for _, entry := range topRepeaters {
		hop := entry["hop"].(string)
		count := entry["count"].(int)
		if hop == "ab" {
			foundAb = true
			if count != 10 {
				t.Errorf("ambiguous hop 'ab' should have count=10, got %d", count)
			}
		}
		if hop == "ab1122" {
			foundAb1122 = true
			if count != 5 {
				t.Errorf("unambiguous hop 'ab1122' should have count=5, got %d", count)
			}
		}
	}
	if !foundAb {
		t.Error("ambiguous hop 'ab' should remain as separate entry")
	}
	if !foundAb1122 {
		t.Error("unambiguous hop 'ab1122' should remain as separate entry (not merged with ambiguous 'ab')")
	}
}

// TestTopologyDedup_PairsMergeByPubkey verifies that topPairs merges
// pair entries whose hops resolve unambiguously to the same node pair.
func TestTopologyDedup_PairsMergeByPubkey(t *testing.T) {
	dir := t.TempDir()
	dbPath := filepath.Join(dir, "test.db")
	conn, err := sql.Open("sqlite", dbPath+"?_journal_mode=WAL")
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	exec := func(s string) {
		if _, err := conn.Exec(s); err != nil {
			t.Fatalf("SQL exec failed: %v\nSQL: %s", err, s)
		}
	}
	exec(`CREATE TABLE transmissions (
		id INTEGER PRIMARY KEY, raw_hex TEXT, hash TEXT, first_seen TEXT,
		route_type INTEGER, payload_type INTEGER, payload_version INTEGER, decoded_json TEXT
	)`)
	exec(`CREATE TABLE observations (
		id INTEGER PRIMARY KEY, transmission_id INTEGER, observer_id TEXT, observer_name TEXT,
		direction TEXT, snr REAL, rssi REAL, score INTEGER, path_json TEXT, timestamp TEXT, raw_hex TEXT
	)`)
	exec(`CREATE TABLE observers (rowid INTEGER PRIMARY KEY, id TEXT, name TEXT)`)
	exec(`CREATE TABLE nodes (
		public_key TEXT PRIMARY KEY, name TEXT, role TEXT, lat REAL, lon REAL,
		last_seen TEXT, frequency REAL
	)`)
	exec(`CREATE TABLE schema_version (version INTEGER)`)
	exec(`INSERT INTO schema_version (version) VALUES (1)`)
	exec(`CREATE INDEX idx_tx_first_seen ON transmissions(first_seen)`)

	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('0735bc6dda4d1122aabbccdd', 'AQUA', 'Repeater')`)
	exec(`INSERT INTO nodes (public_key, name, role) VALUES ('99aabb001122334455667788', 'BETA', 'Repeater')`)

	base := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
	txID := 1
	obsID := 1
	insertTx := func(path string, count int) {
		for i := 0; i < count; i++ {
			ts := base.Add(time.Duration(txID) * time.Minute).Format(time.RFC3339)
			hash := fmt.Sprintf("h%04d", txID)
			conn.Exec("INSERT INTO transmissions (id, raw_hex, hash, first_seen, route_type, payload_type, payload_version, decoded_json) VALUES (?, ?, ?, ?, 0, 4, 1, ?)",
				txID, "aabb", hash, ts, fmt.Sprintf(`{"pubKey":"pk%04d"}`, txID))
			conn.Exec("INSERT INTO observations (id, transmission_id, observer_id, observer_name, direction, snr, rssi, score, path_json, timestamp) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
				obsID, txID, "obs1", "Obs1", "RX", -10.0, -80.0, 5, path, ts)
			txID++
			obsID++
		}
	}

	// Path ["07","99aa"] → pair "07|99aa", 10 times
// Path ["0735bc","99"] → pair "0735bc|99" but sorted = "0735bc|99", 5 times
|
||||
// Wait: pair sorting is by string comparison: "07" < "99aa", "0735bc" < "99"
|
||||
	// After dedup both should merge to AQUA|BETA pair with count=15
	insertTx(`["07","99aa"]`, 10)
	insertTx(`["0735bc","99"]`, 5)

	db, err := OpenDB(dbPath)
	if err != nil {
		t.Fatal(err)
	}
	defer db.conn.Close()

	store := NewPacketStore(db, &PacketStoreConfig{MaxMemoryMB: 100})
	if err := store.Load(); err != nil {
		t.Fatal(err)
	}

	result := store.computeAnalyticsTopology("", TimeWindow{})
	topPairs := result["topPairs"].([]map[string]interface{})

	// Should have exactly 1 pair entry for AQUA-BETA with count=15
	aquaBetaPairs := 0
	totalCount := 0
	for _, entry := range topPairs {
		pkA, _ := entry["pubkeyA"].(string)
		pkB, _ := entry["pubkeyB"].(string)
		if (pkA == "0735bc6dda4d1122aabbccdd" && pkB == "99aabb001122334455667788") ||
			(pkA == "99aabb001122334455667788" && pkB == "0735bc6dda4d1122aabbccdd") {
			aquaBetaPairs++
			totalCount += entry["count"].(int)
		}
	}

	if aquaBetaPairs != 1 {
		t.Errorf("AQUA-BETA pair should appear exactly once after dedup, got %d entries", aquaBetaPairs)
		for _, e := range topPairs {
			t.Logf(" pair: hopA=%v hopB=%v count=%v pkA=%v pkB=%v", e["hopA"], e["hopB"], e["count"], e["pubkeyA"], e["pubkeyB"])
		}
	}
	if totalCount != 15 {
		t.Errorf("AQUA-BETA pair total count should be 15, got %d", totalCount)
	}
}
@@ -859,7 +859,6 @@ type ObserverResp struct {
BatteryMv interface{} `json:"battery_mv"`
UptimeSecs interface{} `json:"uptime_secs"`
NoiseFloor interface{} `json:"noise_floor"`
LastPacketAt interface{} `json:"last_packet_at"`
PacketsLastHour int `json:"packetsLastHour"`
Lat interface{} `json:"lat"`
Lon interface{} `json:"lon"`

@@ -1,82 +0,0 @@
package main

import (
	"fmt"
	"log"
	"time"
)

// checkAutoVacuum inspects the current auto_vacuum mode and logs a warning
// if it's not INCREMENTAL. Optionally performs a one-time full VACUUM if
// the operator has set db.vacuumOnStartup: true in config (#919).
func checkAutoVacuum(db *DB, cfg *Config, dbPath string) {
	var autoVacuum int
	if err := db.conn.QueryRow("PRAGMA auto_vacuum").Scan(&autoVacuum); err != nil {
		log.Printf("[db] warning: could not read auto_vacuum: %v", err)
		return
	}

	if autoVacuum == 2 {
		log.Printf("[db] auto_vacuum=INCREMENTAL")
		return
	}

	modes := map[int]string{0: "NONE", 1: "FULL", 2: "INCREMENTAL"}
	mode := modes[autoVacuum]
	if mode == "" {
		mode = fmt.Sprintf("UNKNOWN(%d)", autoVacuum)
	}

	log.Printf("[db] auto_vacuum=%s — DB needs one-time VACUUM to enable incremental auto-vacuum. "+
		"Set db.vacuumOnStartup: true in config to migrate (will block startup for several minutes on large DBs). "+
		"See https://github.com/Kpa-clawbot/CoreScope/issues/919", mode)

	if cfg.DB != nil && cfg.DB.VacuumOnStartup {
		// WARNING: Full VACUUM creates a temporary copy of the entire DB file.
		// Requires ~2× the DB file size in free disk space or it will fail.
		log.Printf("[db] vacuumOnStartup=true — starting one-time full VACUUM (ensure 2x DB size free disk space)...")
		start := time.Now()

		rw, err := cachedRW(dbPath)
		if err != nil {
			log.Printf("[db] VACUUM failed: could not open RW connection: %v", err)
			return
		}

		if _, err := rw.Exec("PRAGMA auto_vacuum = INCREMENTAL"); err != nil {
			log.Printf("[db] VACUUM failed: could not set auto_vacuum: %v", err)
			return
		}
		if _, err := rw.Exec("VACUUM"); err != nil {
			log.Printf("[db] VACUUM failed: %v", err)
			return
		}

		elapsed := time.Since(start)
		log.Printf("[db] VACUUM complete in %v — auto_vacuum is now INCREMENTAL", elapsed.Round(time.Millisecond))

		// Re-check
		var newMode int
		if err := db.conn.QueryRow("PRAGMA auto_vacuum").Scan(&newMode); err == nil {
			if newMode == 2 {
				log.Printf("[db] auto_vacuum=INCREMENTAL (confirmed after VACUUM)")
			} else {
				log.Printf("[db] warning: auto_vacuum=%d after VACUUM — expected 2", newMode)
			}
		}
	}
}

// runIncrementalVacuum runs PRAGMA incremental_vacuum(N) on a read-write
// connection. Safe to call on auto_vacuum=NONE databases (noop).
func runIncrementalVacuum(dbPath string, pages int) {
	rw, err := cachedRW(dbPath)
	if err != nil {
		log.Printf("[vacuum] could not open RW connection: %v", err)
		return
	}

	if _, err := rw.Exec(fmt.Sprintf("PRAGMA incremental_vacuum(%d)", pages)); err != nil {
		log.Printf("[vacuum] incremental_vacuum error: %v", err)
	}
}
+5
-15
@@ -3,19 +3,12 @@
"apiKey": "your-secret-api-key-here",
"nodeBlacklist": [],
"_comment_nodeBlacklist": "Public keys of nodes to hide from all API responses. Use for trolls, offensive names, or nodes reporting false data that operators refuse to fix.",
"observerIATAWhitelist": [],
"_comment_observerIATAWhitelist": "Global IATA region whitelist. When non-empty, only observers whose IATA code (from MQTT topic) matches are processed. Case-insensitive. Empty = allow all. Unlike per-source iataFilter, this applies across all MQTT sources.",
"retention": {
"nodeDays": 7,
"observerDays": 14,
"packetDays": 30,
"_comment": "nodeDays: nodes not seen in N days moved to inactive_nodes (default 7). observerDays: observers not sending data in N days are removed (-1 = keep forever, default 14). packetDays: transmissions older than N days are deleted (0 = disabled)."
},
"db": {
"vacuumOnStartup": false,
"incrementalVacuumPages": 1024,
"_comment": "vacuumOnStartup: run one-time full VACUUM to enable incremental auto-vacuum on existing DBs (blocks startup for minutes on large DBs; requires 2x DB file size in free disk space). incrementalVacuumPages: free pages returned to OS after each retention reaper cycle (default 1024). See #919."
},
"https": {
"cert": "/path/to/cert.pem",
"key": "/path/to/key.pem",
@@ -131,9 +124,7 @@
"SFO",
"OAK",
"MRY"
],
"region": "SJC",
"connectTimeoutSec": 45
]
}
],
"channelKeys": {
@@ -173,7 +164,7 @@
[37.20, -122.52]
],
"bufferKm": 20,
"_comment": "Optional. Restricts ingestion and API responses to nodes within the polygon + bufferKm. Polygon is an array of [lat, lon] pairs (minimum 3). Use the GeoFilter Builder (`/geofilter-builder.html`) to draw a polygon, save drafts to localStorage with Save Draft, and export a config snippet with Download — paste the snippet here as the `geo_filter` block. Remove this section to disable filtering. Nodes with no GPS fix are always allowed through."
"_comment": "Optional. Restricts ingestion and API responses to nodes within the polygon + bufferKm. Polygon is an array of [lat, lon] pairs (minimum 3). Use tools/geofilter-builder.html to draw a polygon visually. Remove this section to disable filtering. Nodes with no GPS fix are always allowed through."
},
"regions": {
"SJC": "San Jose, US",
@@ -217,8 +208,7 @@
"packetStore": {
"maxMemoryMB": 1024,
"estimatedPacketBytes": 450,
"retentionHours": 168,
"_comment": "In-memory packet store. maxMemoryMB caps RAM usage. retentionHours: only packets younger than this are loaded on startup and kept in memory (0 = unlimited, not recommended for large DBs — causes OOM on cold start). 168 = 7 days. Must be ≤ retention.packetDays * 24."
"_comment": "In-memory packet store. maxMemoryMB caps RAM usage. All packets loaded on startup, served from RAM."
},
"resolvedPath": {
"backfillHours": 24,
@@ -228,10 +218,10 @@
"maxAgeDays": 5,
"_comment": "Neighbor edges older than this many days are pruned on startup and daily. Default: 5."
},
"_comment_mqttSources": "Each source connects to an MQTT broker. topics: what to subscribe to. iataFilter: only ingest packets from these regions (optional). region: default IATA region for this source — used when packet/topic doesn't specify one (optional, priority: payload > topic > this field).",
"_comment_mqttSources": "Each source connects to an MQTT broker. topics: what to subscribe to. iataFilter: only ingest packets from these regions (optional).",
"_comment_channelKeys": "Hex keys for decrypting channel messages. Key name = channel display name. public channel key is well-known.",
"_comment_hashChannels": "Channel names whose keys are derived via SHA256. Key = SHA256(name)[:16]. Listed here so the ingestor can auto-derive keys.",
"_comment_defaultRegion": "IATA code shown by default in region filters.",
"_comment_mapDefaults": "Initial map center [lat, lon] and zoom level.",
"_comment_regions": "IATA code → display name mapping for the region filter UI. Each key is a 3-letter IATA code that an observer is tagged with (resolved priority: MQTT payload `region` field > topic-derived region > mqttSources.region). Observers without an IATA tag will not appear under any region filter — only under 'All Regions'. The region filter dropdown shows one entry per code listed here PLUS any extra IATA codes the server discovers from observers at runtime (so you can omit codes here and they will still be selectable, just labelled with the bare IATA code instead of a friendly name). Selecting 'All Regions' (or no region) returns results from every observer including those with no IATA tag; selecting one or more codes restricts results to packets observed by observers tagged with those codes. The reserved value 'All' (case-insensitive) is treated as 'no filter' on the server, so the URL ?region=All behaves identically to omitting the param. Issue #770."
"_comment_regions": "IATA code to display name mapping. Packets are tagged with region codes by MQTT topic structure."
}

@@ -1,204 +0,0 @@
# Scope Stats Page — Design Spec

**Issue**: Kpa-clawbot/CoreScope#899
**Date**: 2026-04-23
**Branch target**: `master`

---

## Overview

Add a dedicated **Scopes** page showing scope/region statistics for MeshCore transport-route packets. Scope filtering in MeshCore uses `TRANSPORT_FLOOD` (route_type 0) and `TRANSPORT_DIRECT` (route_type 3) packets that carry two 16-bit transport codes. Code1 ≠ `0000` means the packet is region-scoped.

Feature 3 from the issue (default scope per client via advert) is **not implemented** — the advert format has no scope field in the current firmware.

---

## How Scopes Work (Firmware)

Transport code derivation (authoritative source: `meshcore-dev/MeshCore`):

```
key = SHA256("#regionname")[:16] // TransportKeyStore::getAutoKeyFor
Code1 = HMAC-SHA256(key, type || payload) // TransportKey::calcTransportCode, 2-byte output
```

Code1 is a **per-message** HMAC — the same region produces a different Code1 for every message. Identifying a region from Code1 requires knowing the region name in advance and recomputing the HMAC.
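
As a minimal Go sketch of the derivation above (helper names and the big-endian truncation of the MAC are assumptions here; check the firmware's `calcTransportCode` for the authoritative byte order):

```go
import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
)

// regionKey derives the 16-byte HMAC key for a region name, e.g. "#belgium".
func regionKey(name string) []byte {
	sum := sha256.Sum256([]byte(name))
	return sum[:16]
}

// calcCode1 computes the 2-byte transport code over type || payload.
func calcCode1(key []byte, payloadType byte, payload []byte) uint16 {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte{payloadType})
	mac.Write(payload)
	return binary.BigEndian.Uint16(mac.Sum(nil)[:2]) // byte order assumed
}
```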

`Code1 = 0000` is the "no scope" sentinel (also `FFFF` is reserved). Packets with route_type 1 or 2 (plain FLOOD/DIRECT) carry no transport codes.

---

## Config

Add `hashRegions` to the ingestor `Config` struct in `cmd/ingestor/config.go`, mirroring `hashChannels`:

```json
"hashRegions": ["#belgium", "#eu", "#brussels"]
```

Normalization (same rules as `hashChannels`):
- Trim whitespace
- Prepend `#` if missing
- Skip empty entries

---

## Ingestor Changes

### Key derivation (`loadRegionKeys`)

```go
func loadRegionKeys(cfg *Config) map[string][]byte {
	// key = first 16 bytes of SHA256("#regionname")
}
```

Returns `map[string][]byte` (region name → 16-byte HMAC key). Called once at startup, stored on the `Store`.
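
A sketch of the full function under the normalization rules above (the `HashRegions` field name is assumed from the JSON key; needs `crypto/sha256` and `strings`):

```go
func loadRegionKeys(cfg *Config) map[string][]byte {
	keys := make(map[string][]byte, len(cfg.HashRegions))
	for _, name := range cfg.HashRegions {
		name = strings.TrimSpace(name)
		if name == "" {
			continue // skip empty entries
		}
		if !strings.HasPrefix(name, "#") {
			name = "#" + name
		}
		sum := sha256.Sum256([]byte(name))
		keys[name] = sum[:16] // first 16 bytes of SHA256("#regionname")
	}
	return keys
}
```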

### Decoder: expose raw payload bytes

Add `PayloadRaw []byte` to `DecodedPacket` in `cmd/ingestor/decoder.go`. Populated from the raw `buf` slice at the payload offset — zero-copy slice, no allocation. This is the **encrypted** payload bytes, matching what the firmware feeds into `calcTransportCode`.
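
Illustratively (the field and offset names here are hypothetical):

```go
pkt.PayloadRaw = buf[payloadOffset:] // zero-copy: aliases buf, valid only as long as buf is retained
```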

### At-ingest region matching

In `BuildPacketData`:
- Skip if `route_type` not in `{0, 3}` → `scope_name` stays `nil`
- If `Code1 == "0000"` → `scope_name = nil` (unscoped transport, no scope involvement)
- If `Code1 != "0000"` → try each region key:
  ```
  HMAC-SHA256(key, payloadType_byte || PayloadRaw) → first 2 bytes as uint16
  ```
First match → `scope_name = "#regionname"`. No match → `scope_name = ""` (unknown scope).

Add `ScopeName *string` to `PacketData`.
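
A sketch of the matching step, reusing `calcCode1` from the firmware section above (names are assumptions, not the shipped CoreScope code):

```go
// matchScope returns the scope_name value for a scoped transport packet:
// a named region on the first HMAC match, "" when no configured key matches.
func matchScope(regionKeys map[string][]byte, payloadType byte, payloadRaw []byte, code1 uint16) string {
	for name, key := range regionKeys {
		if calcCode1(key, payloadType, payloadRaw) == code1 {
			return name // e.g. "#belgium"
		}
	}
	return "" // unknown scope
}
```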

### MQTT-sourced packets (DM / CHAN paths in main.go)

These are injected directly without going through `BuildPacketData`. They use `route_type = 1` (FLOOD), so they are never transport-route packets. No scope matching needed for these paths.

---

## Database

### Migration

```sql
ALTER TABLE transmissions ADD COLUMN scope_name TEXT DEFAULT NULL;
CREATE INDEX idx_tx_scope_name ON transmissions(scope_name) WHERE scope_name IS NOT NULL;
```

### Column semantics

| Value | Meaning |
|-------|---------|
| `NULL` | Either: non-transport-route packet (route_type 1/2), or transport-route with Code1=0000 |
| `""` (empty string) | Transport-route, Code1 ≠ 0000, but no configured region matched |
| `"#belgium"` | Matched named region |

The API stats queries resolve the NULL ambiguity by always filtering `route_type IN (0, 3)` first:
- `unscoped` count = `route_type IN (0,3) AND scope_name IS NULL`
- `scoped` count = `route_type IN (0,3) AND scope_name IS NOT NULL`

### Backfill

On migration, re-decode `raw_hex` for all rows where `route_type IN (0, 3)` and `scope_name IS NULL`. Run the same HMAC matching logic. Rows with `Code1 = 0000` remain `NULL`.

The backfill runs in the existing migration framework in `cmd/ingestor/db.go`. If no regions are configured, backfill is skipped.

---

## API

### `GET /api/scope-stats`

**Query param**: `window` — one of `1h`, `24h` (default), `7d`

**Time-series bucket sizes**:
| Window | Bucket |
|--------|--------|
| `1h` | 5 min |
| `24h` | 1 hour |
| `7d` | 6 hours |

**Response**:
```json
{
  "window": "24h",
  "summary": {
    "transportTotal": 1240,
    "scoped": 890,
    "unscoped": 350,
    "unknownScope": 42
  },
  "byRegion": [
    { "name": "#belgium", "count": 612 },
    { "name": "#eu", "count": 236 }
  ],
  "timeSeries": [
    { "t": "2026-04-23T10:00:00Z", "scoped": 45, "unscoped": 18 },
    { "t": "2026-04-23T11:00:00Z", "scoped": 51, "unscoped": 22 }
  ]
}
```

- `transportTotal` = `scoped + unscoped` (transport-route packets only)
- `scoped` = Code1 ≠ 0000 (named + unknown)
- `unscoped` = transport-route with Code1 = 0000
- `unknownScope` = scoped but no region name matched (subset of `scoped`)
- `byRegion` sorted by count descending, excludes unknown
- `timeSeries` covers the full window at the bucket granularity

Route: `GET /api/scope-stats` registered in `cmd/server/routes.go`.
No auth required (same as other read endpoints).
TTL cache: 30 seconds (heavier query than `/api/stats`).

---

## Frontend

### Navigation

Add nav link between Channels and Nodes in `public/index.html`:
```html
<a href="#/scopes" class="nav-link" data-route="scopes">Scopes</a>
```

### `public/scopes.js`

Three sections on the page:

**1. Summary cards** (reuse existing card CSS pattern from home/analytics pages)
- Transport total, Scoped, Unscoped, Unknown scope
- Each card shows count + percentage of transport total

**2. Per-region table**
Columns: Region, Messages, % of Scoped
Sorted by count descending. Last row: "Unknown scope" (italic) if unknownScope > 0.
Shows "No regions configured" message if `byRegion` is empty and `unknownScope = 0`.

**3. Time-series chart**
- Window selector: `1h / 24h / 7d` (default 24h)
- Two lines: **Scoped** (blue) and **Unscoped** (grey)
- Uses the same lightweight canvas chart pattern as other pages (no external chart lib)

### Cache buster

`scopes.js` added to the `__BUST__` entries in `index.html` in the same commit.

---

## Testing

- Unit tests for `loadRegionKeys`: normalization, key bytes match firmware SHA256 derivation
- Unit tests for HMAC matching: known Code1 value computed from firmware logic, verified against Go implementation
- Integration test: ingest a synthetic transport-route packet with a known region, assert `scope_name` column is set correctly
- API test: `GET /api/scope-stats` returns correct summary counts against fixture DB

---

## Out of Scope

- Feature 3 (default scope per client via advert) — firmware has no advert scope field
- Drill-down from region row to filtered packet list (deferred)
- Private regions (`$`-prefixed) — use secret keys not publicly derivable
@@ -98,22 +98,6 @@ How long (in hours) before a node is marked degraded or silent:
| `retention.nodeDays` | `7` | Nodes not seen in N days move to inactive |
| `retention.packetDays` | `30` | Packets older than N days are deleted daily |

> **Note:** Lowering retention does **not** immediately shrink the database file.
> SQLite marks deleted pages as free but does not return them to the filesystem
> unless [incremental auto-vacuum](database.md) is enabled. New databases created
> after v0.x.x have auto-vacuum enabled automatically. Existing databases require
> a one-time migration — see the [Database](database.md) guide.

## Database

| Field | Default | Description |
|-------|---------|-------------|
| `db.vacuumOnStartup` | `false` | Run a one-time full `VACUUM` on startup to enable incremental auto-vacuum (blocks for minutes on large DBs) |
| `db.incrementalVacuumPages` | `1024` | Free pages returned to the OS after each retention reaper cycle |

See [Database](database.md) for details on SQLite auto-vacuum, WAL, and manual maintenance.
See [#919](https://github.com/Kpa-clawbot/CoreScope/issues/919) for background.

## Channel decryption

| Field | Description |
@@ -166,9 +150,6 @@ Lower values = fresher data but more server load.
|-------|---------|-------------|
| `packetStore.maxMemoryMB` | `1024` | Maximum RAM for in-memory packet store |
| `packetStore.estimatedPacketBytes` | `450` | Estimated bytes per packet (for memory budgeting) |
| `packetStore.retentionHours` | `0` | Only load packets younger than N hours on startup and keep them in memory. **Set this on any instance with a large DB.** `0` = unlimited (loads full DB history — causes OOM on cold start when the DB has hundreds of thousands of paths). Recommended: same as `retention.packetDays × 24` (e.g. `168` for 7 days). |

> **Warning:** Leaving `retentionHours` at `0` on a large database will cause the server to OOM-kill itself on every cold start. The full packet history is loaded into the subpath index at startup; a DB with ~280K paths produces ~13M index entries before the process is killed.
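
As a rough worked example: with the defaults above, `maxMemoryMB = 1024` and `estimatedPacketBytes = 450` budget about 1024 × 2²⁰ / 450 ≈ 2.4 million packets in RAM, so choose a `retentionHours` under which the expected packet volume stays below that figure.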

## Timestamps

@@ -1,82 +0,0 @@
# Database

CoreScope uses SQLite in WAL (Write-Ahead Log) mode for both the server
(read-only) and ingestor (read-write).

## WAL mode

WAL mode allows concurrent reads while writes happen. It is set automatically
at connection time via `PRAGMA journal_mode=WAL`. No operator action needed.

The WAL file (`meshcore.db-wal`) grows during writes and is checkpointed
(merged back into the main DB) periodically and at clean shutdown.

## Auto-vacuum

By default, SQLite does not shrink the database file after `DELETE` operations.
Deleted pages are marked free and reused by future writes, but the file size
on disk stays the same. This is surprising when lowering retention settings.

### New databases

Databases created after this feature was added automatically have
`PRAGMA auto_vacuum = INCREMENTAL`. After each retention reaper cycle,
CoreScope runs `PRAGMA incremental_vacuum(N)` to return free pages to the OS.

### Existing databases

The `auto_vacuum` mode is stored in the database header and can only be changed
by rewriting the entire file with `VACUUM`. CoreScope will **not** do this
automatically — on large databases (5+ GB seen in the wild) it takes minutes
and holds an exclusive lock.

**To migrate an existing database:**

1. At startup, CoreScope logs a warning:
   ```
   [db] auto_vacuum=NONE — DB needs one-time VACUUM to enable incremental auto-vacuum.
   ```
2. **Ensure at least 2× the database file size in free disk space.** Full VACUUM
   creates a temporary copy of the entire file — on a near-full disk it will fail.
3. Set `db.vacuumOnStartup: true` in your `config.json`:
   ```json
   {
     "db": {
       "vacuumOnStartup": true
     }
   }
   ```
4. Restart CoreScope. The one-time `VACUUM` will run and block startup.
5. After migration, remove or set `vacuumOnStartup: false` — it's not needed again.

### Configuration

| Field | Default | Description |
|-------|---------|-------------|
| `db.vacuumOnStartup` | `false` | One-time full VACUUM to enable incremental auto-vacuum |
| `db.incrementalVacuumPages` | `1024` | Pages returned to OS per reaper cycle |

## Manual VACUUM

You can also run a manual vacuum from the SQLite CLI:

```bash
sqlite3 data/meshcore.db "PRAGMA auto_vacuum = INCREMENTAL; VACUUM;"
```

This is equivalent to `vacuumOnStartup: true` but can be done offline.

> ⚠️ Full VACUUM requires **2× the database file size** in free disk space (it
> creates a temporary copy). Check with `ls -lh data/meshcore.db` before running.

## Checking current mode

```bash
sqlite3 data/meshcore.db "PRAGMA auto_vacuum;"
```

- `0` = NONE (default for old databases)
- `1` = FULL (automatic, but slower writes)
- `2` = INCREMENTAL (recommended — CoreScope triggers vacuum after deletes)

See [#919](https://github.com/Kpa-clawbot/CoreScope/issues/919) for background on this feature.
@@ -1,17 +0,0 @@
// Package dbconfig provides the shared DBConfig struct used by both the server
// and ingestor binaries for SQLite vacuum and maintenance settings (#919, #921).
package dbconfig

// DBConfig controls SQLite vacuum and maintenance behavior (#919).
type DBConfig struct {
	VacuumOnStartup        bool `json:"vacuumOnStartup"`        // one-time full VACUUM on startup if auto_vacuum is not INCREMENTAL
	IncrementalVacuumPages int  `json:"incrementalVacuumPages"` // pages returned to OS per reaper cycle (default 1024)
}

// GetIncrementalVacuumPages returns the configured pages or 1024 default.
func (c *DBConfig) GetIncrementalVacuumPages() int {
	if c != nil && c.IncrementalVacuumPages > 0 {
		return c.IncrementalVacuumPages
	}
	return 1024
}
@@ -1,21 +0,0 @@
package dbconfig

import "testing"

func TestGetIncrementalVacuumPages_Default(t *testing.T) {
	var c *DBConfig
	if got := c.GetIncrementalVacuumPages(); got != 1024 {
		t.Fatalf("nil DBConfig: got %d, want 1024", got)
	}
	c = &DBConfig{}
	if got := c.GetIncrementalVacuumPages(); got != 1024 {
		t.Fatalf("zero DBConfig: got %d, want 1024", got)
	}
}

func TestGetIncrementalVacuumPages_Configured(t *testing.T) {
	c := &DBConfig{IncrementalVacuumPages: 512}
	if got := c.GetIncrementalVacuumPages(); got != 512 {
		t.Fatalf("got %d, want 512", got)
	}
}
@@ -1,3 +0,0 @@
module github.com/meshcore-analyzer/dbconfig

go 1.22
+12
-38
@@ -75,16 +75,6 @@
<h2>📊 Mesh Analytics</h2>
<p class="text-muted">Deep dive into your mesh network data</p>
<div id="analyticsRegionFilter" class="region-filter-container"></div>
<div class="time-window-filter" style="margin:8px 0">
<label for="analyticsTimeWindow" style="font-size:0.9em;color:var(--text-muted);margin-right:6px">Time window:</label>
<select id="analyticsTimeWindow" data-testid="analytics-time-window" aria-label="Time window">
<option value="">All data</option>
<option value="1h">Last 1 hour</option>
<option value="24h">Last 24 hours</option>
<option value="7d">Last 7 days</option>
<option value="30d">Last 30 days</option>
</select>
</div>
<div class="analytics-tabs" id="analyticsTabs" role="tablist" aria-label="Analytics tabs">
<button class="tab-btn active" data-tab="overview">Overview</button>
<button class="tab-btn" data-tab="rf">RF / Signal</button>
@@ -133,12 +123,6 @@
RegionFilter.init(document.getElementById('analyticsRegionFilter'));
RegionFilter.onChange(function () { loadAnalytics(); });

// Time-window picker (#842) — refresh analytics on change.
const tw = document.getElementById('analyticsTimeWindow');
if (tw) {
tw.addEventListener('change', function () { loadAnalytics(); });
}

// Delegated click/keyboard handler for clickable table rows
const analyticsContent = document.getElementById('analyticsContent');
if (analyticsContent) {
@@ -166,24 +150,14 @@
async function loadAnalytics() {
try {
_analyticsData = {};
const rqs = RegionFilter.regionQueryString(); // "&region=..." or ""
// Time window picker (#842) — append &window=… when set.
// NOTE: only the three window-aware endpoints (rf/topology/channels)
// receive ?window=…; hash-sizes and hash-collisions are about node
// identity / hash-byte distribution and intentionally span all data.
const twEl = document.getElementById('analyticsTimeWindow');
const twVal = twEl ? twEl.value : '';
const tws = twVal ? '&window=' + encodeURIComponent(twVal) : '';
const baseQS = rqs.slice(1); // drop leading '&', "" or "region=…"
const sepBase = baseQS ? '?' + baseQS : '';
const windowedQS = (rqs + tws).slice(1);
const sepWin = windowedQS ? '?' + windowedQS : '';
const rqs = RegionFilter.regionQueryString();
const sep = rqs ? '?' + rqs.slice(1) : '';
const [hashData, rfData, topoData, chanData, collisionData] = await Promise.all([
api('/analytics/hash-sizes' + sepBase, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/rf' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/topology' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/channels' + sepWin, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-collisions' + sepBase, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-sizes' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/rf' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/topology' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/channels' + sep, { ttl: CLIENT_TTL.analyticsRF }),
api('/analytics/hash-collisions' + sep, { ttl: CLIENT_TTL.analyticsRF }),
]);
_analyticsData = { hashData, rfData, topoData, chanData, collisionData };
renderTab(_currentTab);
@@ -1758,8 +1732,8 @@

<div class="subpath-section">
<h5>⏱️ Timeline</h5>
<div>First seen: ${data.firstSeen ? (typeof formatAbsoluteTimestamp === 'function' ? formatAbsoluteTimestamp(data.firstSeen) : new Date(data.firstSeen).toLocaleString()) : '—'}</div>
<div>Last seen: ${data.lastSeen ? (typeof formatAbsoluteTimestamp === 'function' ? formatAbsoluteTimestamp(data.lastSeen) : new Date(data.lastSeen).toLocaleString()) : '—'}</div>
<div>First seen: ${data.firstSeen ? new Date(data.firstSeen).toLocaleString() : '—'}</div>
<div>Last seen: ${data.lastSeen ? new Date(data.lastSeen).toLocaleString() : '—'}</div>
</div>

${data.observers.length ? `
@@ -2686,7 +2660,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const name = esc(n.name || n.public_key.slice(0, 12));
const role = n.role ? `<span class="text-muted" style="font-size:0.82em">${esc(n.role)}</span>` : '';
const hs = n.hash_size ? ` <span class="text-muted" style="font-size:0.78em;opacity:0.7">${n.hash_size}B hash</span>` : '';
const when = n.last_seen ? ` <span class="text-muted" style="font-size:0.8em">${(typeof formatAbsoluteTimestamp === 'function') ? formatAbsoluteTimestamp(n.last_seen) : new Date(n.last_seen).toLocaleDateString()}</span>` : '';
const when = n.last_seen ? ` <span class="text-muted" style="font-size:0.8em">${new Date(n.last_seen).toLocaleDateString()}</span>` : '';
return `<div style="padding:3px 0"><a href="#/nodes/${encodeURIComponent(n.public_key)}" class="analytics-link">${name}</a> ${role}${hs}${when}</div>`;
}

@@ -3184,7 +3158,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const t = new Date(d.t);
const x = sx(t.getTime());
const y = sy(d.v);
const ts = (typeof formatAbsoluteTimestamp === 'function') ? formatAbsoluteTimestamp(d.t) : t.toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC');
const ts = t.toISOString().replace('T', ' ').replace(/\.\d+Z/, ' UTC');
const tip = `${label}: ${formatV(d.v)}${unit}\n${ts}`;
svg += `<circle cx="${x.toFixed(1)}" cy="${y.toFixed(1)}" r="8" fill="transparent" stroke="none" pointer-events="all"><title>${tip}</title></circle>`;
});
@@ -3198,7 +3172,7 @@ function destroy() { _analyticsData = {}; _channelData = null; if (_ngState && _
const idx = Math.floor(i * (data.length - 1) / Math.max(xTicks - 1, 1));
const t = new Date(data[idx].t);
const x = sx(t.getTime());
const label = (typeof formatChartAxisLabel === 'function') ? formatChartAxisLabel(t, true) : t.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
const label = t.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
svg += `<text x="${x.toFixed(1)}" y="${h - 5}" text-anchor="middle" font-size="9" fill="var(--text-muted)">${label}</text>`;
}
return svg;

+5
-78
@@ -4,7 +4,7 @@
// --- Route/Payload name maps ---
const ROUTE_TYPES = { 0: 'TRANSPORT_FLOOD', 1: 'FLOOD', 2: 'DIRECT', 3: 'TRANSPORT_DIRECT' };
const PAYLOAD_TYPES = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 6: 'Group Data', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 10: 'Multipart', 11: 'Control', 15: 'Raw Custom' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 6: 'grp-data', 7: 'anon-req', 8: 'path', 9: 'trace', 10: 'multipart', 11: 'control', 15: 'raw-custom' };
const PAYLOAD_COLORS = { 0: 'req', 1: 'response', 2: 'txt-msg', 3: 'ack', 4: 'advert', 5: 'grp-txt', 7: 'anon-req', 8: 'path', 9: 'trace' };

function routeTypeName(n) { return ROUTE_TYPES[n] || 'UNKNOWN'; }
function payloadTypeName(n) { return PAYLOAD_TYPES[n] || 'UNKNOWN'; }
@@ -309,39 +309,6 @@ function formatTimestampWithTooltip(isoString, mode) {
return { text, tooltip, isFuture };
}

// Format a Date for chart axis labels, respecting customizer timestamp settings.
// shortForm: true = time only (for intra-day), false = date+time (multi-day).
function formatChartAxisLabel(d, shortForm) {
if (!(d instanceof Date) || !isFinite(d.getTime())) return '—';
var timezone = (typeof getTimestampTimezone === 'function') ? getTimestampTimezone() : 'local';
var preset = (typeof getTimestampFormatPreset === 'function') ? getTimestampFormatPreset() : 'iso';
var useUtc = timezone === 'utc';

if (preset === 'locale') {
if (shortForm) {
var opts = { hour: '2-digit', minute: '2-digit' };
if (useUtc) opts.timeZone = 'UTC';
return d.toLocaleTimeString([], opts);
}
var opts2 = { month: 'short', day: 'numeric', hour: '2-digit', minute: '2-digit' };
if (useUtc) opts2.timeZone = 'UTC';
return d.toLocaleString([], opts2);
}

// ISO-style (iso or iso-seconds)
var hour = useUtc ? d.getUTCHours() : d.getHours();
var minute = useUtc ? d.getUTCMinutes() : d.getMinutes();
var timeStr = pad2(hour) + ':' + pad2(minute);
if (preset === 'iso-seconds') {
var sec = useUtc ? d.getUTCSeconds() : d.getSeconds();
timeStr += ':' + pad2(sec);
}
if (shortForm) return timeStr;
var month = useUtc ? d.getUTCMonth() + 1 : d.getMonth() + 1;
var day = useUtc ? d.getUTCDate() : d.getDate();
return pad2(month) + '-' + pad2(day) + ' ' + timeStr;
}

function truncate(str, len) {
if (!str) return '';
return str.length > len ? str.slice(0, len) + '…' : str;
@@ -538,21 +505,6 @@ const pages = {};

function registerPage(name, mod) { pages[name] = mod; }

// Tools landing page — shows sub-menu with Trace and Path Inspector (spec §2.8, M1 fix).
registerPage('tools-landing', {
init: function (container) {
container.innerHTML =
'<div class="tools-landing">' +
'<h2>Tools</h2>' +
'<div class="tools-menu">' +
'<a href="#/tools/path-inspector" class="tools-card"><h3>🔍 Path Inspector</h3><p>Resolve prefix paths to candidate full-pubkey routes with confidence scoring.</p></a>' +
'<a href="#/tools/trace/" class="tools-card"><h3>📡 Trace Viewer</h3><p>View detailed packet traces by hash.</p></a>' +
'</div>' +
'</div>';
},
destroy: function () {}
});

let currentPage = null;

function closeNav() {
@@ -573,12 +525,6 @@ function closeMoreMenu() {
function navigate() {
closeNav();

// Backward-compat redirect: #/traces/<hash> → #/tools/trace/<hash> (issue #944).
if (location.hash.startsWith('#/traces/')) {
location.hash = location.hash.replace('#/traces/', '#/tools/trace/');
return;
}

const hash = location.hash.replace('#/', '') || 'packets';
const route = hash.split('?')[0];

@@ -606,27 +552,9 @@ function navigate() {
basePage = 'observer-detail';
}

// Tools sub-routing (issue #944): tools/trace/<hash>, tools/path-inspector
if (basePage === 'tools') {
if (routeParam && routeParam.startsWith('trace/')) {
basePage = 'traces';
routeParam = routeParam.substring(6); // strip "trace/"
} else if (routeParam === 'path-inspector' || (routeParam && routeParam.startsWith('path-inspector'))) {
basePage = 'path-inspector';
routeParam = null;
} else if (!routeParam) {
// Default tools landing shows menu with both entries.
basePage = 'tools-landing';
}
}
// Also support old #/traces (no sub-path) → traces page.
if (basePage === 'traces' && !routeParam) {
basePage = 'traces';
}

// Update nav active state
document.querySelectorAll('.nav-link[data-route]').forEach(el => {
el.classList.toggle('active', el.dataset.route === basePage || (el.dataset.route === 'tools' && (basePage === 'traces' || basePage === 'path-inspector' || basePage === 'tools-landing')));
el.classList.toggle('active', el.dataset.route === basePage);
});
// Update "More" button to show active state if a low-priority page is selected
var moreBtn = document.getElementById('navMoreBtn');
@@ -998,11 +926,10 @@ window.addEventListener('DOMContentLoaded', () => {
}).catch(() => {
window.SITE_CONFIG = { timestamps: { defaultMode: 'ago', timezone: 'local', formatPreset: 'iso', customFormat: '', allowCustomFormat: false } };
if (window._customizerV2) window._customizerV2.init(window.SITE_CONFIG);
}).finally(() => {
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});

// Navigate immediately — don't gate data-fetching pages on cosmetic theme fetch
if (!location.hash || location.hash === '#/') location.hash = '#/home';
else navigate();
});

/**

@@ -120,8 +120,8 @@
var ph = rect.height;
var vw = window.innerWidth;
var vh = window.innerHeight;
var finalX = x + pw > vw ? Math.max(0, vw - pw - 14) : x;
var finalY = y + ph > vh ? Math.max(0, vh - ph - 14) : y;
var finalX = x + pw > vw ? Math.max(0, vw - pw - 8) : x;
var finalY = y + ph > vh ? Math.max(0, vh - ph - 8) : y;
el.style.left = finalX + 'px';
el.style.top = finalY + 'px';
}
@@ -228,6 +228,12 @@
if (ch) showPopover(ch, e.clientX, e.clientY);
});

feed.addEventListener('contextmenu', function(e) {
var item = e.target.closest('.live-feed-item');
if (!item || !item._ccChannel) return;
e.preventDefault();
showPopover(item._ccChannel, e.clientX, e.clientY);
});
}

/**

+41
-185
@@ -15,7 +15,6 @@ window.ChannelDecrypt = (function () {
'use strict';

var STORAGE_KEY = 'corescope_channel_keys';
var LABELS_KEY = 'corescope_channel_labels';
var CACHE_KEY = 'corescope_channel_cache';

// ---- Hex utilities ----
@@ -38,25 +37,6 @@ window.ChannelDecrypt = (function () {

// ---- Key derivation ----

// Detect whether SubtleCrypto is available. SubtleCrypto is only exposed
// in **secure contexts** (HTTPS or localhost) — when CoreScope is served
// over plain HTTP, `crypto.subtle` is undefined and any digest/HMAC call
// throws. We fall back to the vendored pure-JS implementation in
// public/vendor/sha256-hmac.js. PR #1021 did the same for AES-ECB.
function hasSubtle() {
return typeof crypto !== 'undefined' && crypto && crypto.subtle && typeof crypto.subtle.digest === 'function';
}

function pureCryptoOrThrow() {
var host = (typeof window !== 'undefined') ? window
: (typeof self !== 'undefined') ? self : null;
if (!host || !host.PureCrypto || !host.PureCrypto.sha256 || !host.PureCrypto.hmacSha256) {
throw new Error('PureCrypto vendor module not loaded (public/vendor/sha256-hmac.js). ' +
'crypto.subtle is unavailable (HTTP context) and no fallback present.');
}
return host.PureCrypto;
}

/**
* Derive AES-128 key from channel name: SHA-256("#channelname")[:16].
* @param {string} channelName - e.g. "#LongFast"
@@ -64,12 +44,8 @@ window.ChannelDecrypt = (function () {
*/
async function deriveKey(channelName) {
var enc = new TextEncoder();
var data = enc.encode(channelName);
if (hasSubtle()) {
var hash = await crypto.subtle.digest('SHA-256', data);
return new Uint8Array(hash).slice(0, 16);
}
return pureCryptoOrThrow().sha256(data).slice(0, 16);
var hash = await crypto.subtle.digest('SHA-256', enc.encode(channelName));
return new Uint8Array(hash).slice(0, 16);
}

/**
@@ -78,41 +54,46 @@ window.ChannelDecrypt = (function () {
* @returns {Promise<number>} single byte (0-255)
*/
async function computeChannelHash(key) {
if (hasSubtle()) {
var hash = await crypto.subtle.digest('SHA-256', key);
return new Uint8Array(hash)[0];
}
return pureCryptoOrThrow().sha256(key)[0];
var hash = await crypto.subtle.digest('SHA-256', key);
return new Uint8Array(hash)[0];
}

// ---- AES-128-ECB via vendored pure-JS implementation ----
//
// Web Crypto exposes AES-CBC/CTR/GCM but NOT raw AES-ECB. The previous
// implementation simulated ECB with AES-CBC + zero IV + a dummy PKCS7
// padding block; that hack throws OperationError on real ciphertext
// because Web Crypto validates PKCS7 padding on the decrypted output
// and the dummy padding bytes rarely form a valid PKCS7 sequence
// after decryption. We use a pure-JS AES-128 ECB core
// (public/vendor/aes-ecb.js, MIT, derived from aes-js by Richard
// Moore) so decryption is deterministic across browsers and works in
// HTTP contexts.
// ---- AES-128-ECB via Web Crypto (CBC with zero IV, block-by-block) ----

/**
* Decrypt AES-128-ECB.
* Decrypt AES-128-ECB by decrypting each 16-byte block independently
* using AES-CBC with a zero IV (equivalent to ECB for single blocks).
* @param {Uint8Array} key - 16-byte AES key
* @param {Uint8Array} ciphertext - must be a non-zero multiple of 16 bytes
* @returns {Promise<Uint8Array|null>} plaintext, or null on invalid input
* @param {Uint8Array} ciphertext - must be multiple of 16 bytes
* @returns {Promise<Uint8Array>} plaintext
*/
async function decryptECB(key, ciphertext) {
if (!ciphertext || ciphertext.length === 0 || ciphertext.length % 16 !== 0) {
if (ciphertext.length === 0 || ciphertext.length % 16 !== 0) {
return null;
}
var host = (typeof window !== 'undefined') ? window
: (typeof self !== 'undefined') ? self : null;
if (!host || !host.AES_ECB || !host.AES_ECB.decrypt) {
throw new Error('AES_ECB vendor module not loaded (public/vendor/aes-ecb.js)');
var cryptoKey = await crypto.subtle.importKey(
'raw', key, { name: 'AES-CBC' }, false, ['decrypt']
);
var zeroIV = new Uint8Array(16);
var plaintext = new Uint8Array(ciphertext.length);

for (var i = 0; i < ciphertext.length; i += 16) {
var block = ciphertext.slice(i, i + 16);
// Append a dummy block (16 bytes of 0x10 = PKCS7 padding for empty next block)
// so Web Crypto doesn't complain about padding
var padded = new Uint8Array(32);
padded.set(block, 0);
// Second block is PKCS7 padding: 16 bytes of 0x10
for (var j = 16; j < 32; j++) padded[j] = 16;

var decrypted = await crypto.subtle.decrypt(
{ name: 'AES-CBC', iv: zeroIV }, cryptoKey, padded
);
var decBytes = new Uint8Array(decrypted);
plaintext.set(decBytes.slice(0, 16), i);
}
return host.AES_ECB.decrypt(key, ciphertext);

return plaintext;
}

// ---- MAC verification ----
@@ -130,17 +111,13 @@ window.ChannelDecrypt = (function () {
secret.set(key, 0);
// remaining 16 bytes are already 0

var cryptoKey = await crypto.subtle.importKey(
'raw', secret, { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
);
var sig = await crypto.subtle.sign('HMAC', cryptoKey, ciphertext);
var sigBytes = new Uint8Array(sig);

var macBytes = hexToBytes(macHex);
var sigBytes;
if (hasSubtle() && typeof crypto.subtle.importKey === 'function' && typeof crypto.subtle.sign === 'function') {
var cryptoKey = await crypto.subtle.importKey(
'raw', secret, { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
);
var sig = await crypto.subtle.sign('HMAC', cryptoKey, ciphertext);
sigBytes = new Uint8Array(sig);
} else {
sigBytes = pureCryptoOrThrow().hmacSha256(secret, ciphertext);
}
return sigBytes[0] === macBytes[0] && sigBytes[1] === macBytes[1];
}

@@ -210,96 +187,12 @@ window.ChannelDecrypt = (function () {
// Alias used by channels.js
var decryptPacket = decrypt;

// ---- Live PSK decrypt (WS path) ----
//
// Build a Map<channelHashByte, { channelName, keyBytes, keyHex }> from all
// stored PSK keys so the WebSocket handler can do an O(1) lookup on each
// incoming GRP_TXT packet. Hash byte derivation is async, so we cache the
// map between calls and only rebuild when the stored-keys set changes.
var _keyMapCache = null;
var _keyMapSig = '';

function _keysSignature(keys) {
var names = Object.keys(keys).sort();
var sig = '';
for (var i = 0; i < names.length; i++) {
sig += names[i] + '=' + keys[names[i]] + ';';
}
return sig;
}

async function buildKeyMap() {
var keys = getKeys();
var sig = _keysSignature(keys);
if (_keyMapCache && _keyMapSig === sig) return _keyMapCache;
var map = new Map();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var channelName = names[i];
var keyHex = keys[channelName];
if (!keyHex || typeof keyHex !== 'string') continue;
var keyBytes;
try { keyBytes = hexToBytes(keyHex); } catch (e) { continue; }
if (keyBytes.length !== 16) continue;
var hashByte;
try { hashByte = await computeChannelHash(keyBytes); } catch (e) { continue; }
// First-write-wins on collision (rare): different channel names can
// hash to the same byte. The downstream MAC check still gates rendering.
if (!map.has(hashByte)) {
map.set(hashByte, { channelName: channelName, keyBytes: keyBytes, keyHex: keyHex });
}
}
_keyMapCache = map;
_keyMapSig = sig;
return map;
}

/**
* Attempt to decrypt a live GRP_TXT payload using a prebuilt key map.
* Returns { sender, text, channelName, channelHashByte } on success,
* or null when no key matches, MAC verification fails, or the payload
* is not an encrypted GRP_TXT.
*/
async function tryDecryptLive(payload, keyMap) {
if (!payload || payload.type !== 'GRP_TXT') return null;
if (!payload.encryptedData || !payload.mac) return null;
if (!keyMap || typeof keyMap.get !== 'function') return null;
var hashByte = payload.channelHash;
// channelHash arrives as either a number or a hex string in some paths;
// normalize to number so Map.get hits.
if (typeof hashByte === 'string') {
var n = parseInt(hashByte, 16);
if (!isFinite(n)) return null;
hashByte = n;
}
if (typeof hashByte !== 'number') return null;
var entry = keyMap.get(hashByte);
if (!entry) return null;
var result;
try {
result = await decrypt(entry.keyBytes, payload.mac, payload.encryptedData);
} catch (e) { return null; }
if (!result) return null;
return {
sender: result.sender || 'Unknown',
text: result.message || '',
channelName: entry.channelName,
channelHashByte: hashByte,
timestamp: result.timestamp || null
};
}


// ---- Key storage (localStorage) ----

function saveKey(channelName, keyHex, label) {
function saveKey(channelName, keyHex) {
var keys = getKeys();
keys[channelName] = keyHex;
try { localStorage.setItem(STORAGE_KEY, JSON.stringify(keys)); } catch (e) { /* quota */ }
_keyMapCache = null; // invalidate live-decrypt index
if (typeof label === 'string' && label.trim()) {
saveLabel(channelName, label.trim());
}
}

// Alias used by channels.js
@@ -319,39 +212,8 @@
var keys = getKeys();
delete keys[channelName];
try { localStorage.setItem(STORAGE_KEY, JSON.stringify(keys)); } catch (e) { /* quota */ }
_keyMapCache = null; // invalidate live-decrypt index
// Also clear cached messages and any label for this channel (#1020)
// Also clear cached messages for this channel
clearChannelCache(channelName);
var labels = getLabels();
if (labels[channelName]) {
delete labels[channelName];
try { localStorage.setItem(LABELS_KEY, JSON.stringify(labels)); } catch (e) { /* quota */ }
}
}

// ---- User-supplied display labels (#1020) ----
// Stored separately from keys so we can display friendly names instead of
// psk:<hex8> for user-added PSK channels.
function getLabels() {
try {
var raw = localStorage.getItem(LABELS_KEY);
return raw ? JSON.parse(raw) : {};
} catch (e) { return {}; }
}

function getLabel(channelName) {
var labels = getLabels();
return labels[channelName] || '';
}

function saveLabel(channelName, label) {
var labels = getLabels();
if (typeof label === 'string' && label.trim()) {
labels[channelName] = label.trim();
} else {
delete labels[channelName];
}
try { localStorage.setItem(LABELS_KEY, JSON.stringify(labels)); } catch (e) { /* quota */ }
}

/** Remove cached messages for a specific channel (by name or hash). */
@@ -424,16 +286,10 @@ window.ChannelDecrypt = (function () {
getKeys: getKeys,
getStoredKeys: getStoredKeys,
removeKey: removeKey,
// #1020: optional user-friendly display labels for stored keys
saveLabel: saveLabel,
getLabel: getLabel,
getLabels: getLabels,
clearChannelCache: clearChannelCache,
cacheMessages: cacheMessages,
getCachedMessages: getCachedMessages,
setCache: setCache,
getCache: getCache,
buildKeyMap: buildKeyMap,
tryDecryptLive: tryDecryptLive
getCache: getCache
};
})();

+37
-168
@@ -339,10 +339,8 @@
}
}

// Add a user channel by name (#channelname) or hex key.
// `label` (#1020) is an optional friendly name shown in the sidebar instead
// of "psk:<hex8>" — stored alongside the key in localStorage.
async function addUserChannel(val, label) {
// Add a user channel by name (#channelname) or hex key
async function addUserChannel(val) {
var displayName = val.startsWith('#') ? val : (isHexKey(val) ? val.substring(0, 8) + '…' : '#' + val);
showAddStatus('Decrypting ' + displayName + ' messages…', 'loading');
var channelName, keyHex;
@@ -361,8 +359,7 @@
keyHex = ChannelDecrypt.bytesToHex(keyBytes2);
}

// #1020: persist optional user-supplied label alongside the key
ChannelDecrypt.storeKey(channelName, keyHex, label);
ChannelDecrypt.storeKey(channelName, keyHex);

// Compute channel hash byte to find matching encrypted channels
var keyBytes3 = ChannelDecrypt.hexToBytes(keyHex);
@@ -381,21 +378,15 @@
if (existingEncrypted) {
targetHash = existingEncrypted.hash;
}
var selectResult = await selectChannel(targetHash, { userKey: keyHex, channelHashByte: hashByte, channelName: channelName });
await selectChannel(targetHash, { userKey: keyHex, channelHashByte: hashByte, channelName: channelName });

// #1020: derive count from selectChannel's reported result, not from a
// DOM scrape that can race with rendering.
var msgCount = (selectResult && typeof selectResult.messageCount === 'number')
? selectResult.messageCount
: (Array.isArray(messages) ? messages.length : 0);
var displayLabel = (typeof label === 'string' && label.trim()) ? label.trim() :
(channelName.startsWith('psk:') ? 'Custom channel (' + channelName.substring(4) + ')' : channelName);
if (selectResult && selectResult.wrongKey) {
showAddStatus('Key does not match any packets for ' + displayLabel, 'error');
} else if (msgCount > 0) {
showAddStatus('Added ' + displayLabel + ' — ' + msgCount + ' messages decrypted', 'success');
// Show success feedback (#759)
var msgCount = document.querySelectorAll('#chMessages .ch-msg').length;
var userDisplay = channelName.startsWith('psk:') ? 'Custom channel (' + channelName.substring(4) + ')' : channelName;
if (msgCount > 0) {
showAddStatus('Added ' + userDisplay + ' — ' + msgCount + ' messages decrypted', 'success');
} else {
showAddStatus('Added ' + displayLabel + ' — no messages found yet', 'warn');
showAddStatus('No messages found for ' + userDisplay, 'warn');
}
} catch (err) {
showAddStatus('Failed to decrypt', 'error');
@@ -408,17 +399,14 @@
// remove a key they added but that the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var labels = (typeof ChannelDecrypt.getLabels === 'function') ? ChannelDecrypt.getLabels() : {};
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
var label = labels[name] || '';
var matched = false;
for (var j = 0; j < channels.length; j++) {
var ch = channels[j];
if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
ch.userAdded = true;
if (label) ch.userLabel = label;
matched = true;
break;
}
@@ -427,7 +415,6 @@
channels.push({
hash: 'user:' + name,
name: name,
userLabel: label,
messageCount: 0,
lastActivityMs: 0,
lastSender: '',
@@ -643,11 +630,6 @@
aria-label="Channel name or hex key" spellcheck="false">
<button type="submit" class="ch-add-btn" title="Add channel">+</button>
</div>
<div class="ch-add-row">
<input type="text" id="chKeyLabelInput" class="ch-key-label-input"
placeholder="optional name (e.g. My Crew)"
aria-label="Optional display name for this channel" spellcheck="false">
</div>
<div class="ch-add-hint">e.g. #LongFast or 32-char hex key — decrypted in your browser.</div>
<div id="chAddStatus" class="ch-add-status" style="display:none"></div>
</form>
@@ -696,13 +678,10 @@
var submitHandler = async function (e) {
e.preventDefault();
var input = document.getElementById('chKeyInput');
var labelInput = document.getElementById('chKeyLabelInput');
var val = (input.value || '').trim();
var label = labelInput ? (labelInput.value || '').trim() : '';
if (!val) return;
input.value = '';
if (labelInput) labelInput.value = '';
await addUserChannel(val, label);
await addUserChannel(val);
};
chKeyForm.addEventListener('submit', submitHandler);
var chKeyInput = document.getElementById('chKeyInput');
@@ -814,14 +793,6 @@
renderChannelList();
return;
}
// Color clear button — remove color without opening picker (#681)
const clearBtn = e.target.closest('.ch-color-clear');
if (clearBtn && window.ChannelColors) {
e.stopPropagation();
var clearCh = clearBtn.getAttribute('data-channel');
if (clearCh) { window.ChannelColors.remove(clearCh); renderChannelList(); }
return;
}
// Color dot click — open picker, don't select channel
const dot = e.target.closest('.ch-color-dot');
if (dot && window.ChannelColorPicker) {
@@ -929,11 +900,6 @@
if (!payload) continue;

var channelName = payload.channel || 'unknown';
// For live-decrypted user-added (PSK) channels, decryptLivePSKBatch
// also stamps payload.channelKey ("user:<name>") so we route the
// message to the correct sidebar row and to the open chat view.
// Falls back to channelName for server-known CHAN packets.
var channelKey = payload.channelKey || channelName;
var rawText = payload.text || '';
var sender = payload.sender || null;
var displayText = rawText;
@@ -960,10 +926,10 @@
var observer = m.data?.packet?.observer_name || m.data?.observer || null;

// Update channel list entry — only once per unique packet hash
var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelKey);
if (pktHash) seenHashes.add(pktHash + ':' + channelKey);
var isFirstObservation = pktHash && !seenHashes.has(pktHash + ':' + channelName);
if (pktHash) seenHashes.add(pktHash + ':' + channelName);

var ch = channels.find(function (c) { return c.hash === channelKey; });
var ch = channels.find(function (c) { return c.hash === channelName; });
if (ch) {
if (isFirstObservation) ch.messageCount = (ch.messageCount || 0) + 1;
ch.lastActivityMs = Date.now();
@@ -973,7 +939,7 @@
} else if (isFirstObservation) {
// New channel we haven't seen
channels.push({
hash: channelKey,
hash: channelName,
name: channelName,
messageCount: 1,
lastActivityMs: Date.now(),
@@ -984,7 +950,7 @@
}

// If this message is for the selected channel, append to messages
if (selectedHash && channelKey === selectedHash) {
if (selectedHash && channelName === selectedHash) {
// Deduplicate by packet hash — same message seen by multiple observers
var existing = pktHash ? messages.find(function (msg) { return msg.packetHash === pktHash; }) : null;
if (existing) {
@@ -1037,83 +1003,8 @@
processWSBatch(msgs, selectedRegions);
}

// Pre-pass: rewrite encrypted GRP_TXT live packets into decrypted form
// when a stored PSK key matches their channel hash byte (#1029 — live
// PSK decrypt). Without this, users viewing a PSK-decrypted channel
// had to refresh the page to see new messages.
async function decryptLivePSKBatch(msgs) {
if (typeof ChannelDecrypt === 'undefined' ||
typeof ChannelDecrypt.tryDecryptLive !== 'function') {
return;
}
// Quick scan: do any messages look like encrypted GRP_TXT?
var anyEncrypted = false;
for (var i = 0; i < msgs.length; i++) {
var p = msgs[i] && msgs[i].data && msgs[i].data.decoded && msgs[i].data.decoded.payload;
if (p && p.type === 'GRP_TXT' && p.encryptedData && p.mac) { anyEncrypted = true; break; }
}
if (!anyEncrypted) return;
var keyMap;
try { keyMap = await ChannelDecrypt.buildKeyMap(); } catch (e) { return; }
if (!keyMap || keyMap.size === 0) return;
for (var j = 0; j < msgs.length; j++) {
var m = msgs[j];
var payload = m && m.data && m.data.decoded && m.data.decoded.payload;
if (!payload || payload.type !== 'GRP_TXT' || !payload.encryptedData || !payload.mac) continue;
var dec;
try { dec = await ChannelDecrypt.tryDecryptLive(payload, keyMap); } catch (e) { dec = null; }
if (!dec) continue;
// Rewrite payload into a CHAN-like shape so processWSBatch picks it
// up as a real message instead of an encrypted blob. Keep the original
// hash byte for any downstream consumer that wants it.
payload.channel = dec.channelName;
// For user-added PSK channels the sidebar entry & selectedHash use a
// "user:<name>" key (see addUserChannel). Stamp the canonical key on
// the payload so processWSBatch routes the live message to the
// correct sidebar row and to the open chat view instead of dropping
// it / creating a duplicate plain entry. Falls back to the raw name
// for non-user channels (server-known CHAN paths still work).
var userKey = 'user:' + dec.channelName;
var hasUserCh = false;
for (var ck = 0; ck < channels.length; ck++) {
if (channels[ck].hash === userKey) { hasUserCh = true; break; }
}
payload.channelKey = hasUserCh ? userKey : dec.channelName;
payload.sender = dec.sender;
payload.text = dec.sender ? (dec.sender + ': ' + dec.text) : dec.text;
payload.decryptedLocally = true;
if (m.data.decoded.header) {
// Leave payloadTypeName as GRP_TXT — processWSBatch already
// accepts both 'message' and GRP_TXT-typed packet messages.
}
}
}

wsHandler = debouncedOnWS(function (msgs) {
var selectedRegions = getSelectedRegionsSnapshot();
var prior = selectedHash;
decryptLivePSKBatch(msgs).then(function () {
// Bump unread for live-decrypted channels the user is NOT viewing.
// Done here (not inside processWSBatch) so the count reflects ONLY
// newly-decrypted live packets, not historical-fetch path.
var bumped = false;
for (var i = 0; i < msgs.length; i++) {
var p = msgs[i] && msgs[i].data && msgs[i].data.decoded && msgs[i].data.decoded.payload;
if (!p || !p.decryptedLocally) continue;
// Use the canonical sidebar key stamped by decryptLivePSKBatch so
// the comparison against `prior` (= selectedHash) actually matches
// for user-added (user:*-prefixed) channels.
var chKey = p.channelKey || p.channel;
if (!chKey || chKey === prior) continue;
var ch = channels.find(function (c) { return c.hash === chKey || c.name === chKey || c.hash === ('user:' + chKey); });
if (ch) {
ch.unread = (ch.unread || 0) + 1;
bumped = true;
}
}
processWSBatch(msgs, selectedRegions);
if (bumped) renderChannelList();
});
handleWSBatch(msgs);
});
window._channelsHandleWSBatchForTest = handleWSBatch;
window._channelsProcessWSBatchForTest = processWSBatch;
@@ -1183,51 +1074,31 @@

el.innerHTML = sorted.map(ch => {
const isEncrypted = ch.encrypted === true;
const isUserAdded = ch.userAdded === true;
// #1020: prefer user-supplied label over psk:<hex>
const baseName = isEncrypted ? (ch.name || 'Unknown') : (ch.name || `Channel ${formatHashHex(ch.hash)}`);
const name = (isUserAdded && ch.userLabel) ? ch.userLabel : baseName;
const name = isEncrypted ? (ch.name || 'Unknown') : (ch.name || `Channel ${formatHashHex(ch.hash)}`);
const color = isEncrypted ? 'var(--text-muted, #6b7280)' : getChannelColor(ch.hash);
const time = ch.lastActivityMs ? formatSecondsAgo(Math.floor((Date.now() - ch.lastActivityMs) / 1000)) : '';
const preview = isUserAdded
? (ch.lastSender && ch.lastMessage
? `${ch.lastSender}: ${truncate(ch.lastMessage, 28)}`
: `${ch.messageCount || 0} messages (your key)`)
: isEncrypted
? `${ch.messageCount} encrypted messages (no key configured)`
: ch.lastSender && ch.lastMessage
? `${ch.lastSender}: ${truncate(ch.lastMessage, 28)}`
: `${ch.messageCount} messages`;
const preview = isEncrypted
? `${ch.messageCount} encrypted messages (no key configured)`
: ch.lastSender && ch.lastMessage
? `${ch.lastSender}: ${truncate(ch.lastMessage, 28)}`
: `${ch.messageCount} messages`;
const sel = selectedHash === ch.hash ? ' selected' : '';
// #1020: distinct class so styling/tests can tell user-added apart
// from server-known encrypted channels.
const encClass = isUserAdded
? ' ch-user-added'
: (isEncrypted ? ' ch-encrypted' : '');
// #1020: 🔓 marks "I have the key" vs 🔒 "encrypted, no key"
const badgeIcon = isUserAdded ? '🔓' : (isEncrypted ? '🔒' : null);
const abbr = badgeIcon || (name.startsWith('#') ? name.slice(0, 3) : name.slice(0, 2).toUpperCase());
const encClass = isEncrypted ? ' ch-encrypted' : '';
const abbr = isEncrypted ? '🔒' : (name.startsWith('#') ? name.slice(0, 3) : name.slice(0, 2).toUpperCase());
// Channel color dot for color picker (#674)
const chColor = window.ChannelColors ? window.ChannelColors.get(ch.hash) : null;
const dotStyle = chColor ? ` style="background:${chColor}"` : '';
// Left border for assigned color
const borderStyle = chColor ? ` style="border-left:3px solid ${chColor}"` : '';
// M4 / #1020: Remove button for user-added channels
const removeBtn = isUserAdded ? ' <button class="ch-remove-btn" data-remove-channel="' + escapeHtml(ch.hash) + '" title="Remove channel and clear saved key" aria-label="Remove ' + escapeHtml(name) + '">✕</button>' : '';
// #1020: explicit badge marker for "your key" so it's distinguishable
// from server-known encrypted rows at a glance and for screen readers.
const userBadge = isUserAdded ? ' <span class="ch-user-badge" title="You added this key" aria-label="Your key">🔑</span>' : '';
// #1029 Unread badge — bumped by live PSK decrypt for channels not currently selected.
const unreadBadge = (ch.unread && ch.unread > 0)
? ' <span class="ch-unread-badge" data-unread-channel="' + escapeHtml(ch.hash) + '" title="' + ch.unread + ' new" aria-label="' + ch.unread + ' unread">' + (ch.unread > 99 ? '99+' : ch.unread) + '</span>'
: '';
// M4: Remove button for user-added channels
const removeBtn = ch.userAdded ? ' <button class="ch-remove-btn" data-remove-channel="' + escapeHtml(ch.hash) + '" title="Remove channel" aria-label="Remove ' + escapeHtml(name) + '">✕</button>' : '';

return `<button class="ch-item${sel}${encClass}" data-hash="${ch.hash}"${borderStyle} type="button" role="option" aria-selected="${selectedHash === ch.hash ? 'true' : 'false'}" aria-label="${escapeHtml(name)}"${isEncrypted ? ' data-encrypted="true"' : ''}${isUserAdded ? ' data-user-added="true"' : ''}>
<div class="ch-badge" style="background:${color}" aria-hidden="true">${badgeIcon ? badgeIcon : escapeHtml(abbr)}</div>
return `<button class="ch-item${sel}${encClass}" data-hash="${ch.hash}"${borderStyle} type="button" role="option" aria-selected="${selectedHash === ch.hash ? 'true' : 'false'}" aria-label="${escapeHtml(name)}"${isEncrypted ? ' data-encrypted="true"' : ''}>
<div class="ch-badge" style="background:${color}" aria-hidden="true">${isEncrypted ? '🔒' : escapeHtml(abbr)}</div>
<div class="ch-item-body">
<div class="ch-item-top">
<span class="ch-item-name">${escapeHtml(name)}</span>${userBadge}${unreadBadge}
<span class="ch-color-dot" data-channel="${escapeHtml(ch.hash)}"${dotStyle} title="Change channel color" aria-label="Change color for ${escapeHtml(name)}"></span>${chColor ? '<span class="ch-color-clear" data-channel="' + escapeHtml(ch.hash) + '" title="Clear color" aria-label="Clear color for ' + escapeHtml(name) + '">✕</span>' : ''}
<span class="ch-item-name">${escapeHtml(name)}</span>
<span class="ch-color-dot" data-channel="${escapeHtml(ch.hash)}"${dotStyle} title="Change channel color" aria-label="Change color for ${escapeHtml(name)}"></span>
<span class="ch-item-time" data-channel-hash="${ch.hash}">${time}</span>${removeBtn}
</div>
<div class="ch-item-preview">${escapeHtml(preview)}</div>
@@ -1240,9 +1111,6 @@
const rp = RegionFilter.getRegionParam() || '';
const request = beginMessageRequest(hash, rp);
selectedHash = hash;
// Clear unread badge on the channel we're about to view (#1029).
var __selCh = channels.find(function (c) { return c.hash === hash; });
if (__selCh && __selCh.unread) { __selCh.unread = 0; }
history.replaceState(null, '', `#/channels/${encodeURIComponent(hash)}`);
renderChannelList();
const ch = channels.find(c => c.hash === hash);
@@ -1269,14 +1137,14 @@
}
}
});
if (isStaleMessageRequest(request)) return { stale: true };
if (isStaleMessageRequest(request)) return true;
if (result.wrongKey) {
msgEl.innerHTML = '<div class="ch-empty ch-wrong-key">🔒 Key does not match — no messages could be decrypted</div>';
return { wrongKey: true, messageCount: 0 };
return true;
}
if (result.error) {
msgEl.innerHTML = '<div class="ch-empty">' + escapeHtml(result.error) + '</div>';
return { error: result.error, messageCount: 0 };
return true;
}
messages = result.messages || [];
if (messages.length === 0) {
@@ -1286,12 +1154,13 @@
renderMessages();
scrollToBottom();
}
return { messageCount: messages.length };
return true;
}

// Client-side decryption path (#725 M2)
if (decryptOpts && decryptOpts.userKey) {
return await decryptAndRender(decryptOpts.userKey, decryptOpts.channelHashByte, decryptOpts.channelName);
await decryptAndRender(decryptOpts.userKey, decryptOpts.channelHashByte, decryptOpts.channelName);
return;
}

// Check if this is a user-added channel that needs decryption

+5
-47
@@ -23,28 +23,8 @@ function comparePacketSets(hashesA, hashesB) {
return { onlyA: onlyA, onlyB: onlyB, both: both };
}

/**
* Filter packets by route type.
* mode: 'all' | 'flood' | 'direct'
* Flood = route_type 0 (TransportFlood) or 1 (Flood)
* Direct = route_type 2 (Direct) or 3 (TransportDirect)
*/
function filterPacketsByRoute(packets, mode) {
if (!packets || mode === 'all') return packets || [];
if (mode === 'flood') {
return packets.filter(function (p) { return p.route_type === 0 || p.route_type === 1; });
}
if (mode === 'direct') {
return packets.filter(function (p) { return p.route_type === 2 || p.route_type === 3; });
}
return packets;
}

// Expose for testing
if (typeof window !== 'undefined') {
window.comparePacketSets = comparePacketSets;
window.filterPacketsByRoute = filterPacketsByRoute;
}
if (typeof window !== 'undefined') window.comparePacketSets = comparePacketSets;

(function () {
var PAYLOAD_LABELS = { 0: 'Request', 1: 'Response', 2: 'Direct Msg', 3: 'ACK', 4: 'Advert', 5: 'Channel Msg', 7: 'Anon Req', 8: 'Path', 9: 'Trace', 11: 'Control' };
@@ -56,7 +36,6 @@ if (typeof window !== 'undefined') {
var packetsA = [];
var packetsB = [];
var currentView = 'summary';
var routeFilter = 'all';

function init(app, routeParam) {
// Parse preselected observers from URL: #/compare?a=ID1&b=ID2
@@ -68,7 +47,6 @@ if (typeof window !== 'undefined') {
packetsA = [];
packetsB = [];
currentView = 'summary';
routeFilter = 'all';

app.innerHTML = '<div class="compare-page" style="padding:16px">' +
'<div class="page-header" style="display:flex;align-items:center;gap:12px;margin-bottom:16px">' +
@@ -98,7 +76,6 @@ if (typeof window !== 'undefined') {
comparisonResult = null;
packetsA = [];
packetsB = [];
routeFilter = 'all';
}

async function loadObservers() {
@@ -138,14 +115,6 @@ if (typeof window !== 'undefined') {
'<select id="compareObsB" class="compare-select">' + optionsHtml + '</select>' +
'</div>' +
'<button id="compareBtn" class="compare-btn" disabled>Compare</button>' +
'<div class="compare-select-group">' +
'<label for="compareRouteFilter">Packet Type</label>' +
'<select id="compareRouteFilter" class="compare-select">' +
'<option value="all">All packets</option>' +
'<option value="flood">Flood only</option>' +
'<option value="direct">Direct only</option>' +
'</select>' +
'</div>' +
'</div>';

var ddA = document.getElementById('compareObsA');
@@ -155,13 +124,6 @@ if (typeof window !== 'undefined') {
if (selA) ddA.value = selA;
if (selB) ddB.value = selB;

var ddRoute = document.getElementById('compareRouteFilter');
ddRoute.value = routeFilter;
ddRoute.addEventListener('change', function () {
routeFilter = ddRoute.value;
if (comparisonResult) runComparison();
});

function updateBtn() {
selA = ddA.value || null;
selB = ddB.value || null;
@@ -200,20 +162,16 @@ if (typeof window !== 'undefined') {
packetsA = results[0].packets || [];
packetsB = results[1].packets || [];

// Apply flood/direct filter (#928)
var filteredA = filterPacketsByRoute(packetsA, routeFilter);
var filteredB = filterPacketsByRoute(packetsB, routeFilter);

var hashesA = new Set(filteredA.map(function (p) { return p.hash; }));
var hashesB = new Set(filteredB.map(function (p) { return p.hash; }));
var hashesA = new Set(packetsA.map(function (p) { return p.hash; }));
var hashesB = new Set(packetsB.map(function (p) { return p.hash; }));

comparisonResult = comparePacketSets(hashesA, hashesB);

// Build hash→packet lookups for detail rendering
comparisonResult.packetMapA = new Map();
comparisonResult.packetMapB = new Map();
filteredA.forEach(function (p) { comparisonResult.packetMapA.set(p.hash, p); });
filteredB.forEach(function (p) { comparisonResult.packetMapB.set(p.hash, p); });
packetsA.forEach(function (p) { comparisonResult.packetMapA.set(p.hash, p); });
packetsB.forEach(function (p) { comparisonResult.packetMapB.set(p.hash, p); });

currentView = 'summary';
renderComparison();

+7
-44
@@ -33,7 +33,7 @@
'meshcore-live-heatmap-opacity'
];

var VALID_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps', 'heatmapOpacity', 'liveHeatmapOpacity', 'distanceUnit', 'favorites', 'myNodes'];
var VALID_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps', 'heatmapOpacity', 'liveHeatmapOpacity', 'distanceUnit'];
var OBJECT_SECTIONS = ['branding', 'theme', 'themeDark', 'nodeColors', 'typeColors', 'home', 'timestamps'];
var SCALAR_SECTIONS = ['heatmapOpacity', 'liveHeatmapOpacity'];
var DISTANCE_UNIT_VALUES = ['km', 'mi', 'auto'];
@@ -313,17 +313,9 @@
function readOverrides() {
try {
var raw = localStorage.getItem(STORAGE_KEY);
var parsed = (raw != null) ? JSON.parse(raw) : {};
if (parsed == null || typeof parsed !== 'object' || Array.isArray(parsed)) parsed = {};
// Include favorites and claimed nodes from their own localStorage keys
try {
var favs = JSON.parse(localStorage.getItem('meshcore-favorites') || '[]');
if (Array.isArray(favs) && favs.length) parsed.favorites = favs;
} catch (e) { /* ignore */ }
try {
var myNodes = JSON.parse(localStorage.getItem('meshcore-my-nodes') || '[]');
if (Array.isArray(myNodes) && myNodes.length) parsed.myNodes = myNodes;
} catch (e) { /* ignore */ }
if (raw == null) return {};
var parsed = JSON.parse(raw);
if (parsed == null || typeof parsed !== 'object' || Array.isArray(parsed)) return {};
return parsed;
} catch (e) {
return {};
@@ -394,28 +386,14 @@

function writeOverrides(delta) {
if (delta == null || typeof delta !== 'object') return;
// Extract favorites/myNodes and store in their own localStorage keys
if (Array.isArray(delta.favorites)) {
try { localStorage.setItem('meshcore-favorites', JSON.stringify(delta.favorites)); } catch (e) { /* ignore */ }
}
if (Array.isArray(delta.myNodes)) {
try { localStorage.setItem('meshcore-my-nodes', JSON.stringify(delta.myNodes)); } catch (e) { /* ignore */ }
}
// Build theme-only delta (without favorites/myNodes)
var themeDelta = {};
for (var k in delta) {
if (delta.hasOwnProperty(k) && k !== 'favorites' && k !== 'myNodes') {
themeDelta[k] = delta[k];
}
}
// If empty, remove key entirely
var keys = Object.keys(themeDelta);
var keys = Object.keys(delta);
if (keys.length === 0) {
try { localStorage.removeItem(STORAGE_KEY); } catch (e) { /* ignore */ }
_updateSaveStatus('saved');
return;
}
var validated = _validateDelta(themeDelta);
var validated = _validateDelta(delta);
try {
localStorage.setItem(STORAGE_KEY, JSON.stringify(validated));
_updateSaveStatus('saved');
@@ -651,11 +629,7 @@
}
writeOverrides(delta);
_runPipeline();
// Skip re-render while the user is typing inside the panel — setting
// innerHTML would destroy the focused input and collapse the mobile keyboard.
if (!(_panelEl && _panelEl.contains(document.activeElement))) {
_refreshPanel();
}
_refreshPanel();
}, 300);
}

@@ -780,17 +754,6 @@
if (key === 'distanceUnit' && DISTANCE_UNIT_VALUES.indexOf(obj[key]) === -1) {
errors.push('Invalid distanceUnit: "' + obj[key] + '" — must be km, mi, or auto');
}
// Validate favorites and myNodes arrays
if (key === 'favorites') {
if (!Array.isArray(obj[key])) {
errors.push('"favorites" must be an array of public key strings');
}
}
if (key === 'myNodes') {
if (!Array.isArray(obj[key])) {
errors.push('"myNodes" must be an array of node objects');
}
}
}
return { valid: errors.length === 0, errors: errors };
}

@@ -26,12 +26,6 @@
#btnCopy { padding: 6px 14px; background: #1a4a7a; color: #7ec8e3; border-radius: 6px; border: none; cursor: pointer; font-size: 0.85rem; white-space: nowrap; align-self: flex-end; }
#btnCopy:hover { background: #2a6aaa; }
#btnCopy.copied { background: #1a6a3a; color: #7effa0; }
#btnSaveDraft { background: #1a5a3a; color: #7effa0; }
#btnSaveDraft:hover { background: #2a7a4a; }
#btnLoadDraft { background: #3a3a1a; color: #ffe07e; }
#btnLoadDraft:hover { background: #5a5a2a; }
#btnDownload { background: #1a4a7a; color: #7ec8e3; }
#btnDownload:hover { background: #2a6aaa; }
#counter { font-size: 0.8rem; color: #888; padding-top: 6px; white-space: nowrap; }
.bufferRow { display: flex; align-items: center; gap: 8px; }
.bufferRow label { font-size: 0.85rem; color: #aaa; }
@@ -51,8 +45,6 @@
<div class="controls">
<button id="btnUndo">↩ Undo</button>
<button id="btnClear">✕ Clear</button>
<button id="btnSaveDraft">💾 Save Draft</button>
<button id="btnLoadDraft">📂 Load Draft</button>
</div>
<div class="bufferRow">
<label for="bufferKm">Buffer km:</label>
@@ -71,18 +63,16 @@
<div style="display:flex;flex-direction:column;gap:8px;align-items:flex-end">
<span id="counter">0 points</span>
<button id="btnCopy">Copy</button>
<button id="btnDownload">⬇ Download</button>
</div>
</div>

<!-- Instructions: paste the output into config.json as a top-level "geo_filter" key, then restart the server -->
<div id="help-bar">
<strong>Save Draft</strong> preserves your polygon across sessions. <strong>Download</strong> exports a JSON snippet → paste as a top-level key in <code>config.json</code> → restart the server.
Copy the JSON above → paste as a top-level key in <code>config.json</code> → restart the server.
Nodes with no GPS fix always pass through. Remove the <code>geo_filter</code> block to disable filtering.
· <a href="/geofilter-docs.html">Documentation</a>
· <a href="https://github.com/Kpa-clawbot/CoreScope/blob/master/docs/user-guide/geofilter.md" target="_blank">Documentation ↗</a>
</div>

<script src="geofilter-draft.js"></script>
<script>
const map = L.map('map').setView([50.5, 4.4], 8);

@@ -97,8 +87,7 @@ let polygon = null;
let closingLine = null;

function latLonPair(latlng) {
const w = latlng.wrap();
return [parseFloat(w.lat.toFixed(6)), parseFloat(w.lng.toFixed(6))];
return [parseFloat(latlng.lat.toFixed(6)), parseFloat(latlng.lng.toFixed(6))];
}

function render() {
@@ -176,40 +165,6 @@ document.getElementById('btnCopy').addEventListener('click', function() {
setTimeout(() => { btn.textContent = 'Copy'; btn.classList.remove('copied'); }, 2000);
});
});

document.getElementById('btnSaveDraft').addEventListener('click', function() {
if (points.length < 3) return;
const bufferKm = parseFloat(document.getElementById('bufferKm').value) || 0;
GeofilterDraft.saveDraft(points, bufferKm);
const btn = document.getElementById('btnSaveDraft');
btn.textContent = '✓ Saved';
setTimeout(() => { btn.textContent = '💾 Save Draft'; }, 2000);
});

document.getElementById('btnLoadDraft').addEventListener('click', function() {
const draft = GeofilterDraft.loadDraft();
if (!draft || !draft.polygon || draft.polygon.length < 3) return;
// Clear current
markers.forEach(m => map.removeLayer(m));
markers = [];
points = draft.polygon.slice();
if (draft.bufferKm != null) document.getElementById('bufferKm').value = draft.bufferKm;
// Recreate markers
points.forEach(function(pt, i) {
const marker = L.circleMarker([pt[0], pt[1]], {
radius: 6, color: '#4a9eff', weight: 2, fillColor: '#4a9eff', fillOpacity: 0.9
}).addTo(map).bindTooltip(String(i + 1), { permanent: true, direction: 'top', offset: [0, -8], className: 'pt-label' });
markers.push(marker);
});
render();
map.fitBounds(L.polygon(points).getBounds().pad(0.2));
});

document.getElementById('btnDownload').addEventListener('click', function() {
if (points.length < 3) return;
const bufferKm = parseFloat(document.getElementById('bufferKm').value) || 0;
GeofilterDraft.downloadConfig(points, bufferKm);
});
</script>
</body>
</html>

@@ -1,142 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>GeoFilter Docs — CoreScope</title>
|
||||
<style>
|
||||
* { box-sizing: border-box; margin: 0; padding: 0; }
|
||||
body { font-family: system-ui, sans-serif; background: #1a1a2e; color: #e0e0e0; min-height: 100vh; display: flex; flex-direction: column; }
|
||||
header { padding: 12px 16px; background: #0f0f23; border-bottom: 1px solid #333; display: flex; align-items: center; gap: 16px; }
|
||||
header h1 { font-size: 1rem; font-weight: 600; color: #4a9eff; }
|
||||
#back-link { font-size: 0.8rem; color: #4a9eff; text-decoration: none; white-space: nowrap; }
|
||||
#back-link:hover { text-decoration: underline; }
|
||||
main { flex: 1; max-width: 800px; margin: 0 auto; padding: 32px 24px; width: 100%; }
|
||||
h2 { font-size: 1.1rem; font-weight: 600; color: #4a9eff; margin: 32px 0 12px; border-bottom: 1px solid #222; padding-bottom: 6px; }
|
||||
h2:first-of-type { margin-top: 0; }
|
||||
h3 { font-size: 0.95rem; font-weight: 600; color: #c0c0c0; margin: 20px 0 8px; }
|
||||
p { font-size: 0.9rem; line-height: 1.6; color: #ccc; margin-bottom: 10px; }
|
||||
ul { padding-left: 20px; margin-bottom: 10px; }
|
||||
li { font-size: 0.9rem; line-height: 1.7; color: #ccc; }
|
||||
code { font-family: monospace; font-size: 0.85rem; color: #7ec8e3; background: #111; border: 1px solid #333; border-radius: 3px; padding: 1px 5px; }
|
||||
pre { background: #111; border: 1px solid #333; border-radius: 6px; padding: 14px 16px; overflow-x: auto; margin: 10px 0 16px; }
|
||||
pre code { background: none; border: none; padding: 0; font-size: 0.82rem; color: #7ec8e3; }
|
||||
.note { background: #1a2a1a; border: 1px solid #2a4a2a; border-radius: 6px; padding: 10px 14px; margin: 12px 0; }
|
||||
.note p { color: #aaddaa; margin: 0; }
|
||||
.warn { background: #2a1a0a; border: 1px solid #5a3a0a; border-radius: 6px; padding: 10px 14px; margin: 12px 0; }
|
||||
.warn p { color: #ddbb88; margin: 0; }
|
||||
table { width: 100%; border-collapse: collapse; margin: 10px 0 16px; font-size: 0.88rem; }
|
||||
th { background: #0f0f23; color: #888; font-weight: 500; text-align: left; padding: 8px 12px; border: 1px solid #333; }
|
||||
td { padding: 8px 12px; border: 1px solid #222; color: #ccc; vertical-align: top; }
|
||||
td code { font-size: 0.82rem; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
|
||||
<header>
|
||||
<a href="/geofilter-builder.html" id="back-link">← GeoFilter Builder</a>
|
||||
<h1>GeoFilter Docs</h1>
|
||||
</header>
|
||||
|
||||
<main>
|
||||
|
||||
<h2>How it works</h2>
|
||||
<p>Geographic filtering restricts which nodes are ingested and returned in API responses. It operates at two levels:</p>
|
||||
<ul>
|
||||
<li><strong>Ingest time</strong> — ADVERT packets carrying GPS coordinates are rejected by the ingestor if the node falls outside the configured area. The node never reaches the database.</li>
|
||||
<li><strong>API responses</strong> — Nodes already in the database are filtered from the <code>/api/nodes</code> response if they fall outside the area. This covers nodes ingested before the filter was configured.</li>
|
||||
</ul>
|
||||
<div class="note"><p>Nodes with no GPS fix (<code>lat=0, lon=0</code> or missing coordinates) always pass the filter regardless of configuration.</p></div>
|
||||
|
||||
<h2>Configuration</h2>
|
||||
<p>Add a <code>geo_filter</code> block to <code>config.json</code>:</p>
|
||||
<pre><code>"geo_filter": {
|
||||
"polygon": [
|
||||
[51.55, 3.80],
|
||||
[51.55, 5.90],
|
||||
[50.65, 5.90],
|
||||
[50.65, 3.80]
|
||||
],
|
||||
"bufferKm": 20
|
||||
}</code></pre>
|
||||
<table>
|
||||
<thead><tr><th>Field</th><th>Type</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
<tr><td><code>polygon</code></td><td><code>[[lat, lon], ...]</code></td><td>Array of at least 3 coordinate pairs defining the boundary</td></tr>
|
||||
<tr><td><code>bufferKm</code></td><td>number</td><td>Extra distance (km) around the polygon edge that is also accepted. <code>0</code> = exact boundary</td></tr>
|
||||
</tbody>
|
||||
</table>
|
||||
<p>Both the server and the ingestor read <code>geo_filter</code> from <code>config.json</code>. Restart both after changing this section.</p>
|
||||
<p>To disable filtering entirely, remove the <code>geo_filter</code> block.</p>
|
||||
|
||||
<h2>Builder workflow: Save Draft, Load Draft, Download</h2>
|
||||
<p>The <a href="/geofilter-builder.html">GeoFilter Builder</a> lets you draw a polygon on a map and produce the <code>geo_filter</code> snippet without hand-editing JSON. Three buttons drive the workflow:</p>
|
||||
<ul>
|
||||
<li><strong>💾 Save Draft</strong> — writes the current polygon and <code>bufferKm</code> to your browser's <code>localStorage</code> under the key <code>geofilter-draft</code>. Drafts persist across page reloads and browser restarts so you can iterate on a shape over multiple sessions.</li>
|
||||
<li><strong>📂 Load Draft</strong> — restores the most recently saved draft into the builder. The current polygon is replaced. If no draft exists the button is a no-op.</li>
|
||||
<li><strong>⬇ Download</strong> — exports the current polygon and <code>bufferKm</code> as <code>geofilter-config-snippet.json</code> — a single JSON object containing a top-level <code>geo_filter</code> block. Open the file, copy the <code>geo_filter</code> entry, and paste it into your <code>config.json</code>.</li>
|
||||
</ul>
|
||||
<div class="note"><p>Drafts are stored locally in your browser only — they are not uploaded anywhere. Clearing site data or switching browsers will lose the draft. Use <strong>Download</strong> to keep a portable copy.</p></div>
|
||||
<p>After pasting the snippet into <code>config.json</code>, restart the server and ingestor for the new filter to take effect.</p>
|
||||
|
||||
<h2>Coordinate ordering</h2>
|
||||
<div class="warn"><p><strong>Important:</strong> Coordinates are <code>[lat, lon]</code> — latitude first, longitude second. This is the opposite of GeoJSON, which uses <code>[lon, lat]</code>. Swapping them will place your polygon in the wrong location.</p></div>

<h2>Multi-polygon</h2>
<p>Only a single polygon is supported. If your deployment area consists of multiple disconnected regions, draw a single convex hull that covers all of them, or use the largest region with a generous <code>bufferKm</code> value.</p>

<h2>Examples</h2>
<h3>Belgium (bounding rectangle)</h3>
<pre><code>"geo_filter": {
"polygon": [
[51.55, 3.80],
[51.55, 5.90],
[50.65, 5.90],
[50.65, 3.80]
],
"bufferKm": 20
}</code></pre>
<h3>Irregular shape</h3>
<pre><code>"geo_filter": {
"polygon": [
[51.10, 3.70],
[51.55, 4.20],
[51.30, 5.10],
[50.80, 5.50],
[50.50, 4.80],
[50.70, 3.90]
],
"bufferKm": 10
}</code></pre>

<h2>Legacy bounding box</h2>
<p>An older bounding box format is also supported as a fallback when no <code>polygon</code> is present:</p>
<pre><code>"geo_filter": {
"latMin": 50.65,
"latMax": 51.55,
"lonMin": 3.80,
"lonMax": 5.90
}</code></pre>
<p>Prefer the polygon format — it supports irregular shapes and the <code>bufferKm</code> margin.</p>

<h2>Cleaning up historical nodes</h2>
<p>The ingestor prevents new out-of-bounds nodes from being ingested, but does not retroactively remove nodes stored before the filter was configured. Use the prune script for that:</p>
<pre><code># Dry run — shows what would be deleted without making any changes
python3 scripts/prune-nodes-outside-geo-filter.py --dry-run

# Default paths: /app/data/meshcore.db and /app/config.json
python3 scripts/prune-nodes-outside-geo-filter.py

# Custom paths
python3 scripts/prune-nodes-outside-geo-filter.py /path/to/meshcore.db \
--config /path/to/config.json

# In Docker — run inside the container
docker exec -it meshcore-analyzer \
python3 /app/scripts/prune-nodes-outside-geo-filter.py --dry-run</code></pre>
<p>The script reads <code>geo_filter.polygon</code> and <code>geo_filter.bufferKm</code> from config, lists nodes that fall outside, then asks for <code>yes</code> confirmation before deleting. Nodes without coordinates are always kept.</p>
<p>This is a one-time migration tool — run it once after first configuring <code>geo_filter</code> to clean up pre-filter data.</p>

</main>
</body>
</html>
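For orientation, the membership test the deleted docs describe amounts to a point-in-polygon check plus a buffer-distance fallback. The sketch below is illustrative only — the helper names and the vertex-only buffer shortcut are assumptions for exposition, not the repository's actual geofilter code:

```js
// Illustrative sketch only — names and the vertex-only buffer shortcut are
// assumptions, not the repository's geofilter implementation.
function insidePolygon(lat, lon, poly) {
  // Even-odd ray cast over [lat, lon] vertex pairs.
  var inside = false;
  for (var i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    var latI = poly[i][0], lonI = poly[i][1];
    var latJ = poly[j][0], lonJ = poly[j][1];
    if ((lonI > lon) !== (lonJ > lon) &&
        lat < (latJ - latI) * (lon - lonI) / (lonJ - lonI) + latI) {
      inside = !inside;
    }
  }
  return inside;
}

function kmBetween(lat1, lon1, lat2, lon2) {
  // Equirectangular approximation — adequate at regional mesh scales.
  var d2r = Math.PI / 180;
  var x = (lon2 - lon1) * d2r * Math.cos(((lat1 + lat2) / 2) * d2r);
  var y = (lat2 - lat1) * d2r;
  return Math.sqrt(x * x + y * y) * 6371;
}

function passesGeoFilter(node, geoFilter) {
  if (!node.lat && !node.lon) return true; // no GPS fix always passes
  if (insidePolygon(node.lat, node.lon, geoFilter.polygon)) return true;
  // Buffer fallback measured to vertices only; a real implementation
  // would measure perpendicular distance to each polygon edge.
  return geoFilter.polygon.some(function (v) {
    return kmBetween(node.lat, node.lon, v[0], v[1]) <= (geoFilter.bufferKm || 0);
  });
}
```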
@@ -1,46 +0,0 @@
// Geofilter draft save/load/download helpers.
// Exposes GeofilterDraft global with: saveDraft, loadDraft, clearDraft, buildConfigSnippet, downloadConfig
(function () {
'use strict';
var STORAGE_KEY = 'geofilter-draft';

function saveDraft(polygon, bufferKm) {
localStorage.setItem(STORAGE_KEY, JSON.stringify({ polygon: polygon, bufferKm: bufferKm }));
}

function loadDraft() {
var raw = localStorage.getItem(STORAGE_KEY);
if (!raw) return null;
try { return JSON.parse(raw); } catch (e) { return null; }
}

function clearDraft() {
localStorage.removeItem(STORAGE_KEY);
}

function buildConfigSnippet(polygon, bufferKm) {
return JSON.stringify({ geo_filter: { bufferKm: bufferKm, polygon: polygon } }, null, 2);
}

function downloadConfig(polygon, bufferKm) {
var snippet = buildConfigSnippet(polygon, bufferKm);
var blob = new Blob([snippet], { type: 'application/json' });
var url = URL.createObjectURL(blob);
var a = document.createElement('a');
a.href = url;
a.download = 'geofilter-config-snippet.json';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
}

// Export
(typeof window !== 'undefined' ? window : this).GeofilterDraft = {
saveDraft: saveDraft,
loadDraft: loadDraft,
clearDraft: clearDraft,
buildConfigSnippet: buildConfigSnippet,
downloadConfig: downloadConfig
};
})();
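For reference, a round-trip through the helpers above looked like this (a sketch; the literal polygon is made up):

```js
// Hypothetical usage of the removed GeofilterDraft helpers.
var points = [[51.55, 3.80], [51.55, 5.90], [50.65, 5.90]];
GeofilterDraft.saveDraft(points, 20);        // persists under 'geofilter-draft'
var draft = GeofilterDraft.loadDraft();      // -> { polygon: [...], bufferKm: 20 }
console.log(GeofilterDraft.buildConfigSnippet(draft.polygon, draft.bufferKm));
// -> a JSON object with a top-level "geo_filter" key, ready for config.json
GeofilterDraft.clearDraft();
```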
@@ -1,70 +0,0 @@
/* hash-color.js — Deterministic HSL color from packet hash
* IIFE attaching window.HashColor = { hashToHsl, hashToOutline }
* Pure function: no DOM access, no state, works in Node vm.createContext sandbox.
*/
(function() {
'use strict';

/**
* Derive a deterministic HSL color string from a hex hash.
* Uses bytes 0-1 for hue, byte 2 for saturation, byte 3 for lightness.
* Produces bright vivid fills; contrast is provided by a dark outline (hashToOutline).
* @param {string|null|undefined} hashHex - Hex string (e.g. "a1b2c3d4...")
* @param {string} theme - "light" or "dark"
* @returns {string} CSS hsl() string
*/
function hashToHsl(hashHex, theme) {
if (!hashHex || hashHex.length < 8) {
return 'hsl(0, 0%, 50%)';
}

var b0 = parseInt(hashHex.slice(0, 2), 16) || 0;
var b1 = parseInt(hashHex.slice(2, 4), 16) || 0;
var b2 = parseInt(hashHex.slice(4, 6), 16) || 0;
var b3 = parseInt(hashHex.slice(6, 8), 16) || 0;

// Hue: 0-360 from bytes 0-1 (16-bit)
var hue = Math.round(((b0 << 8) | b1) / 65535 * 360);
// Saturation: 55-95% from byte 2
var S = 55 + Math.round(b2 / 255 * 40);
// Lightness: vivid range per theme from byte 3
// Light: 50-65%, Dark: 55-72%
var L;
if (theme === 'dark') {
L = 55 + Math.round(b3 / 255 * 17);
} else {
L = 50 + Math.round(b3 / 255 * 15);
}

return 'hsl(' + hue + ', ' + S + '%, ' + L + '%)';
}

/**
* Derive a dark outline color (same hue) for contrast against backgrounds.
* @param {string|null|undefined} hashHex - Hex string
* @param {string} theme - "light" or "dark"
* @returns {string} CSS hsl() string
*/
function hashToOutline(hashHex, theme) {
if (!hashHex || hashHex.length < 8) {
return 'hsl(0, 0%, 30%)';
}

var b0 = parseInt(hashHex.slice(0, 2), 16) || 0;
var b1 = parseInt(hashHex.slice(2, 4), 16) || 0;
var hue = Math.round(((b0 << 8) | b1) / 65535 * 360);

// Dark outline: same hue, low lightness for contrast
if (theme === 'dark') {
return 'hsl(' + hue + ', 30%, 15%)';
}
return 'hsl(' + hue + ', 70%, 25%)';
}

// Export
if (typeof window !== 'undefined') {
window.HashColor = { hashToHsl: hashToHsl, hashToOutline: hashToOutline };
} else if (typeof module !== 'undefined') {
module.exports = { hashToHsl: hashToHsl, hashToOutline: hashToOutline };
}
})();
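Worked example for the functions above — the same hash always maps to the same color, which is what lets a packet's dot, contrail, and feed stripe match across views (values computed by hand from the code, not captured output):

```js
var hash = 'a1b2c3d4';                    // b0=0xa1 b1=0xb2 b2=0xc3 b3=0xd4
// hue = round(0xa1b2 / 65535 * 360) = 227
// S   = 55 + round(0xc3 / 255 * 40) = 86
// L   = 55 + round(0xd4 / 255 * 17) = 69   (dark theme)
HashColor.hashToHsl(hash, 'dark');        // "hsl(227, 86%, 69%)"
HashColor.hashToOutline(hash, 'dark');    // "hsl(227, 30%, 15%)"
```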
@@ -50,8 +50,7 @@
<a href="#/live" class="nav-link" data-route="live" data-priority="high">🔴 Live</a>
<a href="#/channels" class="nav-link" data-route="channels">Channels</a>
<a href="#/nodes" class="nav-link" data-route="nodes" data-priority="high">Nodes</a>
<a href="#/roles" class="nav-link" data-route="roles">Roles</a>
<a href="#/tools" class="nav-link" data-route="tools">Tools</a>
<a href="#/traces" class="nav-link" data-route="traces">Traces</a>
<a href="#/observers" class="nav-link" data-route="observers">Observers</a>
<a href="#/analytics" class="nav-link" data-route="analytics">Analytics</a>
<a href="#/perf" class="nav-link" data-route="perf">⚡ Perf</a>
@@ -95,10 +94,7 @@
<script src="home.js?v=__BUST__"></script>
<script src="table-sort.js?v=__BUST__"></script>
<script src="packet-filter.js?v=__BUST__"></script>
<script src="hash-color.js?v=__BUST__"></script>
<script src="packet-helpers.js?v=__BUST__"></script>
<script src="vendor/aes-ecb.js?v=__BUST__"></script>
<script src="vendor/sha256-hmac.js?v=__BUST__"></script>
<script src="channel-decrypt.js?v=__BUST__"></script>
<script src="channel-colors.js?v=__BUST__"></script>
<script src="channel-color-picker.js?v=__BUST__"></script>
@@ -109,7 +105,6 @@
<script src="table-sort.js?v=__BUST__"></script>
<script src="nodes.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="traces.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="path-inspector.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="analytics.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="audio-v1-constellation.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
@@ -119,7 +114,6 @@
<script src="live.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observers.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="observer-detail.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="roles-page.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="compare.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="node-analytics.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>
<script src="perf.js?v=__BUST__" onerror="console.error('Failed to load:', this.src)"></script>

@@ -22,12 +22,6 @@
let showOnlyFavorites = localStorage.getItem('live-favorites-only') === 'true';
let matrixMode = localStorage.getItem('live-matrix-mode') === 'true';
let matrixRain = localStorage.getItem('live-matrix-rain') === 'true';
let colorByHash = localStorage.getItem('meshcore-color-packets-by-hash') !== 'false';
/** Current theme string for hash-color functions. */
function _liveTheme() { return document.documentElement.dataset.theme || (window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light'); }
let nodeFilterKeys = (localStorage.getItem('live-node-filter') || '').split(',').map(s => s.trim()).filter(Boolean);
let nodeFilterTotal = 0;
let nodeFilterShown = 0;
let rainCanvas = null, rainCtx = null, rainDrops = [], rainRAF = null;
const propagationBuffer = new Map(); // hash -> {timer, packets[]}
let _onResize = null;
@@ -831,8 +825,6 @@
<span id="ghostDesc" class="sr-only">Show interpolated ghost markers for unknown hops</span>
<label><input type="checkbox" id="liveRealisticToggle" aria-describedby="realisticDesc"> Realistic</label>
<span id="realisticDesc" class="sr-only">Buffer packets by hash and animate all paths simultaneously</span>
<label><input type="checkbox" id="liveColorHashToggle" aria-describedby="colorHashDesc"> Color by hash</label>
<span id="colorHashDesc" class="sr-only">Color flying-packet dots and contrails by packet hash for propagation tracing</span>
<label><input type="checkbox" id="liveMatrixToggle" aria-describedby="matrixDesc"> Matrix</label>
<span id="matrixDesc" class="sr-only">Animate packet hex bytes flowing along paths like the Matrix</span>
<label><input type="checkbox" id="liveMatrixRainToggle" aria-describedby="rainDesc"> Rain</label>
@@ -841,12 +833,6 @@
<span id="audioDesc" class="sr-only">Sonify packets — turn raw bytes into generative music</span>
<label><input type="checkbox" id="liveFavoritesToggle" aria-describedby="favDesc"> ⭐ Favorites</label>
<span id="favDesc" class="sr-only">Show only favorited and claimed nodes</span>
<div class="live-node-filter-wrap">
<input type="text" id="liveNodeFilterInput" list="liveNodeFilterList" placeholder="Filter by node…" autocomplete="off" class="live-node-filter-input">
<datalist id="liveNodeFilterList"></datalist>
<button id="liveNodeFilterClear" class="vcr-btn" title="Clear node filter" style="display:none">×</button>
</div>
<div id="liveNodeFilterCount" class="live-filter-count hidden"></div>
<label id="liveGeoFilterLabel" style="display:none"><input type="checkbox" id="liveGeoFilterToggle"> Mesh live area</label>
</div>
<div class="audio-controls hidden" id="audioControls">
@@ -997,14 +983,6 @@
localStorage.setItem('live-realistic-propagation', realisticPropagation);
});

const colorHashToggle = document.getElementById('liveColorHashToggle');
colorHashToggle.checked = colorByHash;
colorHashToggle.addEventListener('change', (e) => {
colorByHash = e.target.checked;
localStorage.setItem('meshcore-color-packets-by-hash', colorByHash);
window.dispatchEvent(new Event('storage'));
});

const favoritesToggle = document.getElementById('liveFavoritesToggle');
favoritesToggle.checked = showOnlyFavorites;
favoritesToggle.addEventListener('change', (e) => {
@@ -1013,35 +991,6 @@
applyFavoritesFilter();
});

// Node filter input
const nodeFilterInput = document.getElementById('liveNodeFilterInput');
const nodeFilterClear = document.getElementById('liveNodeFilterClear');
if (nodeFilterInput) {
// Restore from URL param or localStorage
const urlNode = getHashParams && getHashParams().get('node');
if (urlNode) setNodeFilter(urlNode.split(',').map(s => s.trim()).filter(Boolean));
else if (nodeFilterKeys.length) updateNodeFilterUI();

nodeFilterInput.addEventListener('change', (e) => {
const val = e.target.value.trim();
setNodeFilter(val ? val.split(',').map(s => s.trim()).filter(Boolean) : []);
const params = getHashParams ? getHashParams() : new URLSearchParams();
if (nodeFilterKeys.length) params.set('node', nodeFilterKeys.join(','));
else params.delete('node');
const base = location.hash.split('?')[0];
const qs = params.toString();
location.hash = base + (qs ? '?' + qs : '');
});
}
if (nodeFilterClear) {
nodeFilterClear.addEventListener('click', () => {
if (nodeFilterInput) nodeFilterInput.value = '';
setNodeFilter([]);
const base = location.hash.split('?')[0];
location.hash = base;
});
}

// Geo filter overlay
(async function () {
try {
@@ -1707,47 +1656,6 @@
return getFavoritePubkeys().some(f => f === pubkey);
}

function packetInvolvesFilterNode(pkt, filterKeys) {
if (!filterKeys.length) return true;
const hops = (pkt.decoded?.path?.hops) || [];
for (const hop of hops) {
const h = (hop.id || hop.public_key || hop).toString().toLowerCase();
if (filterKeys.some(f => f.toLowerCase().startsWith(h) || h.startsWith(f.toLowerCase()))) return true;
}
return false;
}

function setNodeFilter(keys) {
nodeFilterKeys = keys;
nodeFilterTotal = 0;
nodeFilterShown = 0;
localStorage.setItem('live-node-filter', keys.join(','));
updateNodeFilterUI();
}

function updateNodeFilterUI() {
const countEl = document.getElementById('liveNodeFilterCount');
const clearBtn = document.getElementById('liveNodeFilterClear');
const input = document.getElementById('liveNodeFilterInput');
if (nodeFilterKeys.length > 0) {
if (clearBtn) clearBtn.style.display = '';
if (countEl) { countEl.textContent = `Showing ${nodeFilterShown} of ${nodeFilterTotal}`; countEl.classList.remove('hidden'); }
if (input && input.value !== nodeFilterKeys.join(', ')) input.value = nodeFilterKeys.join(', ');
} else {
if (clearBtn) clearBtn.style.display = 'none';
if (countEl) countEl.classList.add('hidden');
}
updateNodeFilterDatalist();
}

function updateNodeFilterDatalist() {
const dl = document.getElementById('liveNodeFilterList');
if (!dl) return;
dl.innerHTML = Object.values(nodeData).map(n =>
`<option value="${n.public_key}">${n.name || n.public_key.slice(0, 8)}</option>`
).join('');
}

function rebuildFeedList() {
const feed = document.getElementById('liveFeed');
if (!feed) return;
@@ -1954,9 +1862,6 @@
window._liveGetFavoritePubkeys = getFavoritePubkeys;
window._livePacketInvolvesFavorite = packetInvolvesFavorite;
window._liveIsNodeFavorited = isNodeFavorited;
window._livePacketInvolvesFilterNode = packetInvolvesFilterNode;
window._liveGetNodeFilterKeys = function() { return nodeFilterKeys; };
window._liveSetNodeFilter = setNodeFilter;
window._liveFormatLiveTimestampHtml = formatLiveTimestampHtml;
window._liveResolveHopPositions = resolveHopPositions;
window._liveVcrSpeedCycle = vcrSpeedCycle;
@@ -2047,14 +1952,6 @@
// --- Favorites filter ---
if (showOnlyFavorites && !packets.some(function(p) { return packetInvolvesFavorite(p); })) return;

// --- Node filter ---
if (nodeFilterKeys.length) {
nodeFilterTotal++;
if (!packets.some(function(p) { return packetInvolvesFilterNode(p, nodeFilterKeys); })) return;
nodeFilterShown++;
updateNodeFilterUI();
}

// --- Ensure ADVERT nodes appear on map ---
for (var pi = 0; pi < packets.length; pi++) {
var pkt = packets[pi];
@@ -2171,7 +2068,7 @@
var completedPositions = allPaths[ai].hopPositions.slice(0, hopsCompleted + 1);
var remainingPositions = allPaths[ai].hopPositions.slice(hopsCompleted);
if (completedPositions.length >= 2) {
animatePath(completedPositions, typeName, color, allPaths[ai].raw, onHop, first.hash);
animatePath(completedPositions, typeName, color, allPaths[ai].raw, onHop);
} else if (completedPositions.length === 1) {
pulseNode(completedPositions[0].key, completedPositions[0].pos, typeName);
}
@@ -2179,7 +2076,7 @@
drawDashedPath(remainingPositions, color);
}
} else {
animatePath(allPaths[ai].hopPositions, typeName, color, allPaths[ai].raw, onHop, first.hash);
animatePath(allPaths[ai].hopPositions, typeName, color, allPaths[ai].raw, onHop);
}
}
}
@@ -2288,7 +2185,7 @@
return raw.filter(h => h.pos != null);
}

function animatePath(hopPositions, typeName, color, rawHex, onHop, hash) {
function animatePath(hopPositions, typeName, color, rawHex, onHop) {
if (!animLayer || !pathsLayer) return;
if (activeAnims >= MAX_CONCURRENT_ANIMS) return;
activeAnims++;
@@ -2340,7 +2237,7 @@
const nextGhost = hopPositions[hopIndex + 1].ghost;
const lineColor = (isGhost || nextGhost) ? '#94a3b8' : color;
const lineOpacity = (isGhost || nextGhost) ? 0.3 : undefined;
drawAnimatedLine(hp.pos, nextPos, lineColor, () => { hopIndex++; nextHop(); }, lineOpacity, rawHex, hash);
drawAnimatedLine(hp.pos, nextPos, lineColor, () => { hopIndex++; nextHop(); }, lineOpacity, rawHex);
} else {
if (!isGhost) pulseNode(hp.key, hp.pos, typeName);
hopIndex++; nextHop();
@@ -2695,7 +2592,7 @@
requestAnimationFrame(tick);
}

function drawAnimatedLine(from, to, color, onComplete, overrideOpacity, rawHex, hash) {
function drawAnimatedLine(from, to, color, onComplete, overrideOpacity, rawHex) {
if (!animLayer || !pathsLayer) { if (onComplete) onComplete(); return; }
if (matrixMode) return drawMatrixLine(from, to, color, onComplete, rawHex);
const steps = 20;
@@ -2706,30 +2603,17 @@
const mainOpacity = overrideOpacity ?? 0.8;
const isDashed = overrideOpacity != null;

// Hash-derived color for fill + contrail + outline (when toggle ON and not ghost/dashed line)
var hashFill = '#fff';
var hashOutline = color;
var contrailColor = color;
if (colorByHash && hash && !isDashed && window.HashColor) {
var hsl = HashColor.hashToHsl(hash, _liveTheme());
hashFill = hsl;
hashOutline = HashColor.hashToOutline(hash, _liveTheme());
contrailColor = hsl;
}

const contrail = L.polyline([from], {
color: contrailColor, weight: 6, opacity: mainOpacity * 0.2, lineCap: 'round'
color: color, weight: 6, opacity: mainOpacity * 0.2, lineCap: 'round'
}).addTo(pathsLayer);

const line = L.polyline([from], {
color: (colorByHash && hash && !isDashed && window.HashColor) ? hashFill : color,
weight: isDashed ? 1.5 : 2, opacity: mainOpacity, lineCap: 'round',
dashArray: isDashed ? '4 6' : null,
className: 'live-packet-trace'
color: color, weight: isDashed ? 1.5 : 2, opacity: mainOpacity, lineCap: 'round',
dashArray: isDashed ? '4 6' : null
}).addTo(pathsLayer);

const dot = L.circleMarker(from, {
radius: 3.5, fillColor: hashFill, fillOpacity: 1, color: hashOutline, weight: 1.5
radius: 3.5, fillColor: '#fff', fillOpacity: 1, color: color, weight: 1.5
}).addTo(animLayer);

let lastStep = performance.now();
@@ -2844,7 +2728,7 @@
var style = c
? 'background:' + bg + ';border:1px solid ' + border
: 'background:transparent;border:1px dashed ' + border;
return '<span class="feed-color-dot" data-channel="' + escapeHtml(channel) + '" style="display:inline-block;width:18px;height:18px;border-radius:50%;' + style + ';cursor:pointer;vertical-align:middle;margin-left:4px;flex-shrink:0" title="Set color for ' + escapeHtml(channel) + '"></span>';
return '<span class="feed-color-dot" data-channel="' + escapeHtml(channel) + '" style="display:inline-block;width:12px;height:12px;border-radius:50%;' + style + ';cursor:pointer;vertical-align:middle;margin-left:4px;flex-shrink:0" title="Set color for ' + escapeHtml(channel) + '"></span>';
}

function addFeedItemDOM(icon, typeName, payload, hops, color, pkt, feed) {
@@ -2861,10 +2745,6 @@
item.setAttribute('tabindex', '0');
item.setAttribute('role', 'button');
item.style.cursor = 'pointer';
// Hash-color stripe for feed items (mirrors packets table border-left)
if (colorByHash && pkt.hash && window.HashColor) {
item.style.borderLeft = '4px solid ' + HashColor.hashToHsl(pkt.hash, _liveTheme());
}
// Channel color highlighting for GRP_TXT packets (#271)
var _cs = _getChannelStyle(pkt);
if (_cs) item.style.cssText += _cs;
@@ -2948,10 +2828,6 @@
item.setAttribute('role', 'button');
if (hash) item.setAttribute('data-hash', hash);
item.style.cursor = 'pointer';
// Hash-color stripe for feed items (mirrors packets table border-left)
if (colorByHash && hash && window.HashColor) {
item.style.borderLeft = '4px solid ' + HashColor.hashToHsl(hash, _liveTheme());
}
// Channel color highlighting for GRP_TXT packets (#271)
var _chanStyle = _getChannelStyle(pkt);
if (_chanStyle) item.style.cssText += _chanStyle;

@@ -9,7 +9,7 @@
let nodes = [];
let targetNodeKey = null;
let observers = [];
let filters = { repeater: true, companion: true, room: true, sensor: true, observer: true, lastHeard: '30d', neighbors: false, clusters: false, hashLabels: localStorage.getItem('meshcore-map-hash-labels') !== 'false', statusFilter: localStorage.getItem('meshcore-map-status-filter') || 'all', byteSize: localStorage.getItem('meshcore-map-byte-filter') || 'all', multiByteOverlay: localStorage.getItem('meshcore-map-multibyte-overlay') === 'true' };
let filters = { repeater: true, companion: true, room: true, sensor: true, observer: true, lastHeard: '30d', neighbors: false, clusters: false, hashLabels: localStorage.getItem('meshcore-map-hash-labels') !== 'false', statusFilter: localStorage.getItem('meshcore-map-status-filter') || 'all', byteSize: localStorage.getItem('meshcore-map-byte-filter') || 'all' };
let selectedReferenceNode = null; // pubkey of the reference node for neighbor filtering
let neighborPubkeys = null; // Set of pubkeys that are direct neighbors of selected node
let wsHandler = null;
@@ -25,24 +25,20 @@

// Roles loaded from shared roles.js (ROLE_STYLE, ROLE_LABELS, ROLE_COLORS globals)

// Multi-byte support overlay colors
var MB_COLORS = { confirmed: '#27ae60', suspected: '#f39c12', unknown: '#e74c3c' };

function makeMarkerIcon(role, isStale, isAlsoObserver, colorOverride) {
function makeMarkerIcon(role, isStale, isAlsoObserver) {
const s = ROLE_STYLE[role] || ROLE_STYLE.companion;
const fillColor = colorOverride || s.color;
const size = s.radius * 2 + 4;
const c = size / 2;
let path;
switch (s.shape) {
case 'diamond':
path = `<polygon points="${c},2 ${size-2},${c} ${c},${size-2} 2,${c}" fill="${fillColor}" stroke="#fff" stroke-width="2"/>`;
path = `<polygon points="${c},2 ${size-2},${c} ${c},${size-2} 2,${c}" fill="${s.color}" stroke="#fff" stroke-width="2"/>`;
break;
case 'square':
path = `<rect x="3" y="3" width="${size-6}" height="${size-6}" fill="${fillColor}" stroke="#fff" stroke-width="2"/>`;
path = `<rect x="3" y="3" width="${size-6}" height="${size-6}" fill="${s.color}" stroke="#fff" stroke-width="2"/>`;
break;
case 'triangle':
path = `<polygon points="${c},2 ${size-2},${size-2} 2,${size-2}" fill="${fillColor}" stroke="#fff" stroke-width="2"/>`;
path = `<polygon points="${c},2 ${size-2},${size-2} 2,${size-2}" fill="${s.color}" stroke="#fff" stroke-width="2"/>`;
break;
case 'star': {
// 5-pointed star
@@ -54,11 +50,11 @@
pts += `${cx + outer * Math.cos(aOuter)},${cy + outer * Math.sin(aOuter)} `;
pts += `${cx + inner * Math.cos(aInner)},${cy + inner * Math.sin(aInner)} `;
}
path = `<polygon points="${pts.trim()}" fill="${fillColor}" stroke="#fff" stroke-width="1.5"/>`;
path = `<polygon points="${pts.trim()}" fill="${s.color}" stroke="#fff" stroke-width="1.5"/>`;
break;
}
default: // circle
path = `<circle cx="${c}" cy="${c}" r="${c-2}" fill="${fillColor}" stroke="#fff" stroke-width="2"/>`;
path = `<circle cx="${c}" cy="${c}" r="${c-2}" fill="${s.color}" stroke="#fff" stroke-width="2"/>`;
}
// If this node is also an observer, add a small star overlay
let obsOverlay = '';
@@ -85,12 +81,12 @@
});
}

function makeRepeaterLabelIcon(node, isStale, isAlsoObserver, colorOverride) {
function makeRepeaterLabelIcon(node, isStale, isAlsoObserver) {
var s = ROLE_STYLE['repeater'] || ROLE_STYLE.companion;
var hs = node.hash_size || 1;
// Show the short mesh hash ID (first N bytes of pubkey, uppercased)
var shortHash = node.public_key ? node.public_key.slice(0, hs * 2).toUpperCase() : '??';
var bgColor = colorOverride || s.color;
var bgColor = s.color;
// If this repeater is also an observer, show a star indicator inside the label
var obsIndicator = isAlsoObserver ? ' <span style="color:' + (ROLE_COLORS.observer || '#f1c40f') + ';font-size:13px;line-height:1;" title="Also an observer">★</span>' : '';
var html = '<div style="background:' + bgColor + ';color:#fff;font-weight:bold;font-size:11px;padding:2px 5px;border-radius:3px;border:2px solid #fff;box-shadow:0 1px 3px rgba(0,0,0,0.4);text-align:center;line-height:1.2;white-space:nowrap;">' +
@@ -106,21 +102,8 @@

async function init(container) {
container.innerHTML = `
<div id="map-wrap" style="position:relative;width:100%;height:100%;display:flex;">
<div id="leaflet-map" style="flex:1 1 0%;height:100%;"></div>
<div class="map-side-pane" id="mapSidePane">
<div class="pane-toggle" id="mapPaneToggle" title="Path Inspector">◀</div>
<div class="pane-content">
<h3 style="margin:0 0 8px 0;font-size:14px;">Path Inspector</h3>
<p style="font-size:11px;color:var(--text-muted);margin:0 0 8px 0;">Hex prefixes (1-3 bytes), comma or space separated.</p>
<div style="display:flex;gap:4px;margin-bottom:8px;">
<input type="text" id="mapPiInput" class="input" placeholder="2C,A1,F4" style="flex:1;">
<button id="mapPiSubmit" class="btn btn-primary btn-sm">Go</button>
</div>
<div id="mapPiError" class="path-inspector-error"></div>
<div id="mapPiResults"></div>
</div>
</div>
<div id="map-wrap" style="position:relative;width:100%;height:100%;">
<div id="leaflet-map" style="width:100%;height:100%;"></div>
<button class="map-controls-toggle" id="mapControlsToggle" aria-label="Toggle map controls" aria-expanded="true">⚙️</button>
<div class="map-controls" id="mapControls" role="region" aria-label="Map controls">
<h3>🗺️ Map Controls</h3>
@@ -142,7 +125,6 @@
<label for="mcClusters"><input type="checkbox" id="mcClusters"> Show clusters</label>
<label for="mcHeatmap"><input type="checkbox" id="mcHeatmap"> Heat map</label>
<label for="mcHashLabels"><input type="checkbox" id="mcHashLabels"> Hash prefix labels</label>
<label for="mcMultiByte"><input type="checkbox" id="mcMultiByte"> Multi-byte support</label>
<label id="mcGeoFilterLabel" for="mcGeoFilter" style="display:none"><input type="checkbox" id="mcGeoFilter"> Mesh live area</label>
</fieldset>
<fieldset class="mc-section">
@@ -300,11 +282,6 @@
hashLabelEl.checked = filters.hashLabels;
hashLabelEl.addEventListener('change', e => { filters.hashLabels = e.target.checked; localStorage.setItem('meshcore-map-hash-labels', filters.hashLabels); renderMarkers(); });
}
const multiByteEl = document.getElementById('mcMultiByte');
if (multiByteEl) {
multiByteEl.checked = filters.multiByteOverlay;
multiByteEl.addEventListener('change', e => { filters.multiByteOverlay = e.target.checked; localStorage.setItem('meshcore-map-multibyte-overlay', e.target.checked); renderMarkers(); });
}
document.getElementById('mcLastHeard').addEventListener('change', e => { filters.lastHeard = e.target.value; loadNodes(); });

// Status filter buttons
@@ -398,14 +375,6 @@
}

function drawPacketRoute(hopKeys, origin) {
// Defensive: origin must be an object with pubkey/lat/lon/name. A bare
// string slips through both branches at lines below and silently no-ops
// the originator marker (caused PR #950's bug). Coerce string → object
// and warn so callers get a clear signal.
if (typeof origin === 'string') {
console.warn('drawPacketRoute: origin should be an object {pubkey,lat,lon,name}, got string. Coercing.');
origin = { pubkey: origin };
}
// Hide default markers so only the route is visible
if (markerLayer) map.removeLayer(markerLayer);
if (clusterGroup) map.removeLayer(clusterGroup);
@@ -584,32 +553,10 @@
}
}

// Check for pending path inspector route (cross-page navigation from Path Inspector).
if (window._pendingPathInspectorRoute) {
var pending = window._pendingPathInspectorRoute;
delete window._pendingPathInspectorRoute;
if (pending.path && pending.path.length > 0) {
if (window.routeLayer) window.routeLayer.clearLayers();
// Pass full path as hopKeys; null origin (origin is already the first
// hop). slice(1) + path[0] string was wrong — drawPacketRoute expects
// origin to be an OBJECT with pubkey/lat/lon, and stripping the head
// hid the originating node from the route polyline.
drawPacketRoute(pending.path, null);
}
}

// Wire up map side pane (Path Inspector embedded - spec §2.7).
initMapSidePane();

// Don't fitBounds on initial load — respect the Bay Area default or saved view
// Only fitBounds on subsequent data refreshes if user hasn't manually panned
} catch (e) {
console.error('Map load error:', e);
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
// Otherwise an api() failure leaves the test waiting forever.
var mapContainer = document.getElementById('leaflet-map');
if (mapContainer) mapContainer.setAttribute('data-loaded', 'true');
}
}

@@ -864,12 +811,7 @@
const pk = (node.public_key || '').toLowerCase();
const isAlsoObserver = _observerByPubkey.has(pk);
const useLabel = node.role === 'repeater' && filters.hashLabels;
// Multi-byte overlay: color repeaters by multi_byte_status
var mbColor = null;
if (filters.multiByteOverlay && node.role === 'repeater') {
mbColor = MB_COLORS[node.multi_byte_status] || MB_COLORS.unknown;
}
const icon = useLabel ? makeRepeaterLabelIcon(node, isStale, isAlsoObserver, mbColor) : makeMarkerIcon(node.role || 'companion', isStale, isAlsoObserver, mbColor);
const icon = useLabel ? makeRepeaterLabelIcon(node, isStale, isAlsoObserver) : makeMarkerIcon(node.role || 'companion', isStale, isAlsoObserver);
const latLng = L.latLng(node.lat, node.lon);
allMarkers.push({ latLng, node, icon, isLabel: useLabel, popupFn: function() { return buildPopup(node); }, alt: (node.name || 'Unknown') + ' (' + (node.role || 'node') + (isAlsoObserver ? ' + observer' : '') + ')' });
}
@@ -1005,14 +947,6 @@
const hashPrefix = node.public_key ? node.public_key.slice(0, hs * 2).toUpperCase() : '—';
const hashPrefixRow = `<dt style="color:var(--text-muted);float:left;clear:left;width:80px;padding:2px 0;">Hash Prefix</dt>
<dd style="font-family:var(--mono);font-size:11px;font-weight:700;margin-left:88px;padding:2px 0;">${safeEsc(hashPrefix)} <span style="font-weight:400;color:var(--text-muted);">(${hs}B)</span></dd>`;
// Multi-byte support indicator for repeaters
var mbRow = '';
if (node.role === 'repeater' && node.multi_byte_status) {
var mbLabel = { confirmed: '✅ Confirmed', suspected: '⚠️ Suspected', unknown: '❌ Unknown' }[node.multi_byte_status] || node.multi_byte_status;
var mbEvidence = node.multi_byte_evidence ? ' (' + node.multi_byte_evidence + ')' : '';
mbRow = '<dt style="color:var(--text-muted);float:left;clear:left;width:80px;padding:2px 0;">Multi-byte</dt>' +
'<dd style="margin-left:88px;padding:2px 0;font-size:12px;">' + mbLabel + mbEvidence + '</dd>';
}

return `
<div class="map-popup" style="font-family:var(--font);min-width:180px;">
@@ -1020,7 +954,6 @@
${roleBadge}${obsBadge}
<dl style="margin-top:8px;font-size:12px;">
${hashPrefixRow}
${mbRow}
<dt style="color:var(--text-muted);float:left;clear:left;width:80px;padding:2px 0;">Key</dt>
<dd style="font-family:var(--mono);font-size:11px;margin-left:88px;padding:2px 0;">${safeEsc(key)}</dd>
<dt style="color:var(--text-muted);float:left;clear:left;width:80px;padding:2px 0;">Location</dt>
@@ -1048,122 +981,6 @@
map.fitBounds(bounds, { padding: [50, 50], maxZoom: 14 });
}

// === Map Side Pane — Path Inspector (spec §2.7) ===
function initMapSidePane() {
var pane = document.getElementById('mapSidePane');
var toggle = document.getElementById('mapPaneToggle');
var input = document.getElementById('mapPiInput');
var btn = document.getElementById('mapPiSubmit');
if (!pane || !toggle) return;

toggle.addEventListener('click', function () {
pane.classList.toggle('expanded');
toggle.textContent = pane.classList.contains('expanded') ? '▶' : '◀';
// Invalidate map size after transition.
setTimeout(function () { if (map) map.invalidateSize(); }, 220);
});

if (btn && input) {
btn.addEventListener('click', function () { mapPiSubmit(input.value); });
input.addEventListener('keydown', function (e) {
if (e.key === 'Enter') mapPiSubmit(input.value);
});
}

// Auto-open if URL has prefixes param while on map.
var params = new URLSearchParams(location.hash.split('?')[1] || '');
var prefixParam = params.get('prefixes');
if (prefixParam && input) {
pane.classList.add('expanded');
toggle.textContent = '▶';
input.value = prefixParam;
setTimeout(function () { if (map) map.invalidateSize(); }, 220);
mapPiSubmit(prefixParam);
}
}

function mapPiSubmit(raw) {
var errDiv = document.getElementById('mapPiError');
var resultsDiv = document.getElementById('mapPiResults');
if (!errDiv || !resultsDiv) return;
errDiv.textContent = '';
resultsDiv.innerHTML = '';

// Reuse PathInspector validation if available.
var prefixes = raw.trim().split(/[\s,]+/).filter(function (s) { return s.length > 0; }).map(function (s) { return s.toLowerCase(); });
var err = (window.PathInspector && window.PathInspector.validatePrefixes) ? window.PathInspector.validatePrefixes(prefixes) : null;
if (!err && prefixes.length === 0) err = 'Enter at least one prefix.';
if (err) { errDiv.textContent = err; return; }

resultsDiv.innerHTML = '<p style="font-size:12px;">Loading...</p>';
fetch('/api/paths/inspect', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ prefixes: prefixes })
})
.then(function (r) {
if (r.status === 503) return r.json().then(function () { throw new Error('Service warming up, retry shortly.'); });
if (!r.ok) return r.json().then(function (d) { throw new Error(d.error || 'Request failed'); });
return r.json();
})
.then(function (data) { renderMapPiResults(data, resultsDiv); })
.catch(function (e) { resultsDiv.innerHTML = ''; errDiv.textContent = e.message; });
}

function renderMapPiResults(data, div) {
if (!data.candidates || data.candidates.length === 0) {
div.innerHTML = '<p style="font-size:12px;color:var(--text-muted);">No candidates found.</p>';
return;
}
var html = '<table class="path-inspector-table" style="font-size:11px;width:100%;"><thead><tr><th>#</th><th>Score</th><th>Path</th><th></th></tr></thead><tbody>';
for (var i = 0; i < data.candidates.length; i++) {
var c = data.candidates[i];
var rowClass = c.speculative ? 'speculative-row' : '';
html += '<tr class="' + rowClass + '">';
html += '<td>' + (i + 1) + '</td>';
html += '<td class="' + (c.speculative ? 'speculative-warning' : '') + '">' + c.score.toFixed(2) + (c.speculative ? ' ⚠' : '') + '</td>';
html += '<td title="' + safeEsc(c.names.join(' → ')) + '">' + safeEsc(c.names.slice(0, 3).join('→')) + (c.names.length > 3 ? '…' : '') + '</td>';
html += '<td><button class="btn btn-sm" data-idx="' + i + '" title="Show on Map">📍</button></td>';
html += '</tr>';
// Per-hop evidence (collapsed).
html += '<tr class="evidence-row collapsed" data-evidence="' + i + '"><td colspan="4"><div class="evidence-detail" style="font-size:10px;">';
if (c.evidence && c.evidence.perHop) {
for (var j = 0; j < c.evidence.perHop.length; j++) {
var h = c.evidence.perHop[j];
html += '<div>Hop ' + (j+1) + ': ' + h.prefix + ' (×' + h.candidatesConsidered + ') w=' + h.edgeWeight.toFixed(2);
if (h.alternatives && h.alternatives.length > 0) {
html += ' <span style="color:var(--text-muted);">[+' + h.alternatives.length + ' alt]</span>';
}
html += '</div>';
}
}
html += '</div></td></tr>';
}
html += '</tbody></table>';
div.innerHTML = html;

// Wire buttons.
div.querySelectorAll('button[data-idx]').forEach(function (btn) {
btn.addEventListener('click', function () {
var idx = parseInt(btn.dataset.idx);
var cand = data.candidates[idx];
if (routeLayer) routeLayer.clearLayers();
drawPacketRoute(cand.path, null);
});
});
// Expand evidence on row click.
div.querySelectorAll('.path-inspector-table tbody tr:not(.evidence-row)').forEach(function (row) {
row.style.cursor = 'pointer';
row.addEventListener('click', function (e) {
if (e.target.tagName === 'BUTTON') return;
var b = row.querySelector('button[data-idx]');
if (!b) return;
var ev = div.querySelector('tr[data-evidence="' + b.dataset.idx + '"]');
if (ev) ev.classList.toggle('collapsed');
});
});
}

function destroy() {
if (wsHandler) offWS(wsHandler);
wsHandler = null;

@@ -170,7 +170,7 @@
data: {
labels: tl.map(b => {
const d = new Date(b.bucket);
return (typeof formatChartAxisLabel === 'function') ? formatChartAxisLabel(d, currentDays <= 3) : (currentDays <= 3 ? d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' }) : d.toLocaleDateString([], { month: 'short', day: 'numeric' }));
return currentDays <= 3 ? d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' }) : d.toLocaleDateString([], { month: 'short', day: 'numeric' });
}),
datasets: [{ label: 'Packets', data: tl.map(b => b.count), backgroundColor: 'rgba(74,158,255,0.5)', borderColor: '#4a9eff', borderWidth: 1 }]
},
@@ -197,7 +197,7 @@
const longestObs = Object.values(byObs).sort((a, b) => b.points.length - a.points.length)[0];
const labels = longestObs ? longestObs.points.map(p => {
const d = p.x;
return (typeof formatChartAxisLabel === 'function') ? formatChartAxisLabel(d, false) : d.toLocaleDateString([], { month: 'short', day: 'numeric' }) + ' ' + d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
return d.toLocaleDateString([], { month: 'short', day: 'numeric' }) + ' ' + d.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
}) : [];
const c = new Chart(ctx, {
type: 'line',

@@ -492,7 +492,6 @@
<div class="node-detail-key mono" style="font-size:11px;word-break:break-all;margin-bottom:6px">${n.public_key}</div>
<div>
<button class="btn-primary" id="copyUrlBtn" style="font-size:12px;padding:4px 10px">📋 Copy URL</button>
<button class="btn-primary" id="copyShortUrlBtn" title="Short URL using an 8-char pubkey prefix — easier to send over the mesh (issue #772)" style="font-size:12px;padding:4px 10px;margin-left:6px">📡 Copy short URL</button>
<a href="#/nodes/${encodeURIComponent(n.public_key)}/analytics" class="btn-primary" style="display:inline-block;margin-left:6px;text-decoration:none;font-size:12px;padding:4px 10px">📊 Analytics</a>
</div>
</div>
@@ -613,17 +612,6 @@
});
});

// Copy short URL — issue #772. Uses an 8-char pubkey prefix; the
// backend resolves it to the canonical pubkey when unambiguous.
const shortUrl = location.origin + '#/nodes/' + n.public_key.slice(0, 8);
document.getElementById('copyShortUrlBtn')?.addEventListener('click', () => {
const btn = document.getElementById('copyShortUrlBtn');
window.copyToClipboard(shortUrl, () => {
btn.textContent = '✅ Copied!';
setTimeout(() => btn.textContent = '📡 Copy short URL', 2000);
});
});

// Deep-link scroll: ?section=node-packets
|
||||
const hashParams = location.hash.split('?')[1] || '';
const urlParams = new URLSearchParams(hashParams);
@@ -827,41 +815,6 @@
 * Shared between the full-screen detail page and the side panel (#813, #690).
 * No-op if the container is missing, the API errors, or the response lacks severity.
 */
/** Build collapsible evidence panel for node clock skew card */
function buildEvidencePanel(cs) {
var evidence = cs.recentHashEvidence;
if (!evidence || evidence.length === 0) return '';

var calSum = cs.calibrationSummary || {};
var calLine = calSum.totalSamples
? '<div style="font-size:11px;color:var(--text-muted);margin-bottom:6px">Last ' + calSum.totalSamples + ' samples: ' + (calSum.calibratedSamples || 0) + ' corrected via observer calibration, ' + (calSum.uncalibratedSamples || 0) + ' uncorrected (single-observer).</div>'
: '';

// Severity reason.
var skewVal = window.currentSkewValue(cs);
var sampleCount = (cs.samples || []).length;
var sevLabel = SKEW_SEVERITY_LABELS[cs.severity] || cs.severity;
var reasonLine = '<div style="font-size:12px;margin-bottom:8px"><strong>Recent ' + sampleCount + ' adverts median ' + formatSkew(skewVal) + ' → ' + sevLabel + '</strong></div>';

var hashBlocks = evidence.map(function(ev) {
var shortHash = (ev.hash || '').substring(0, 8) + '…';
var obsCount = ev.observers ? ev.observers.length : 0;
var header = '<div style="font-weight:600;font-size:12px;margin-top:6px">Hash ' + shortHash + ' · ' + obsCount + ' observer' + (obsCount !== 1 ? 's' : '') + ' · median corrected: ' + formatSkew(ev.medianCorrectedSkewSec) + '</div>';
var lines = (ev.observers || []).map(function(o) {
var name = o.observerName || o.observerID;
return '<div style="font-size:11px;padding-left:16px;font-family:var(--mono)">' +
name + ' raw=' + formatSkew(o.rawSkewSec) + ' corrected=' + formatSkew(o.correctedSkewSec) + ' (observer offset ' + formatSkew(o.observerOffsetSec) + ')' +
'</div>';
}).join('');
return header + lines;
}).join('');

return '<details style="margin-top:10px"><summary style="cursor:pointer;font-size:12px;color:var(--text-muted)">Evidence (' + evidence.length + ' hashes)</summary>' +
'<div style="margin-top:6px;padding:8px;background:var(--bg-secondary);border-radius:6px">' +
reasonLine + calLine + hashBlocks +
'</div></details>';
}

async function loadClockSkewInto(container, pubkey) {
if (!container) return;
try {
@@ -888,8 +841,7 @@
'</div>' +
driftHtml +
(sparkHtml ? '<div class="skew-sparkline-wrap" style="margin-top:8px">' + sparkHtml + '<div style="font-size:10px;color:var(--text-muted)">Skew over time (' + (cs.samples || []).length + ' samples)</div></div>' : '') +
bimodalWarning +
buildEvidencePanel(cs);
bimodalWarning;
} catch (e) {
// Non-fatal — section stays hidden
}
@@ -1003,10 +955,6 @@
console.error('Failed to load nodes:', e);
const tbody = document.getElementById('nodesBody');
if (tbody) tbody.innerHTML = '<tr><td colspan="6" class="text-center" style="padding:24px;color:var(--error,#ef4444)"><div role="alert" aria-live="polite">Failed to load nodes. Please try again.</div></td></tr>';
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
var nodesContainer = document.getElementById('nodesLeft') || document.getElementById('nodesBody');
if (nodesContainer) nodesContainer.setAttribute('data-loaded', 'true');
}
}

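// A minimal sketch (not from this commit) of how a per-hash median corrected
// skew like `medianCorrectedSkewSec` above could be derived from the observer
// evidence objects; the helper name and null return are assumptions.
function medianCorrectedSkewSec(ev) {
  var vals = (ev.observers || [])
    .map(function (o) { return o.correctedSkewSec; })
    .filter(function (v) { return typeof v === 'number' && isFinite(v); })
    .sort(function (a, b) { return a - b; });
  if (vals.length === 0) return null;
  var mid = Math.floor(vals.length / 2);
  // Odd count: middle sample; even count: mean of the two middle samples.
  return vals.length % 2 ? vals[mid] : (vals[mid - 1] + vals[mid]) / 2;
}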
@@ -70,24 +70,18 @@
try {
destroyCharts();
chartDefaults();
const [obs, analytics, obsSkewArr] = await Promise.all([
const [obs, analytics] = await Promise.all([
api('/observers/' + encodeURIComponent(currentId)),
api('/observers/' + encodeURIComponent(currentId) + '/analytics?days=' + currentDays),
api('/observers/clock-skew', { ttl: 30000 }).catch(function() { return []; }),
]);
// Find this observer's calibration data.
var obsSkew = null;
(Array.isArray(obsSkewArr) ? obsSkewArr : []).forEach(function(s) {
if (s && s.observerID === currentId) obsSkew = s;
});
renderDetail(obs, analytics, obsSkew);
renderDetail(obs, analytics);
} catch (e) {
document.getElementById('obsDetailContent').innerHTML =
'<div class="text-muted" style="padding:40px">Error: ' + e.message + '</div>';
}
}

function renderDetail(obs, analytics, obsSkew) {
function renderDetail(obs, analytics) {
const el = document.getElementById('obsDetailContent');
if (!el) return;

@@ -156,30 +150,10 @@
<div class="stat-label">First Seen</div>
<div class="stat-value" style="font-size:0.85em">${obs.first_seen ? new Date(obs.first_seen).toLocaleDateString() : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Status Update</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_seen ? timeAgo(obs.last_seen) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_seen).toLocaleString() + '</span>' : '—'}</div>
</div>
<div class="stat-card">
<div class="stat-label">Last Packet Observation</div>
<div class="stat-value" style="font-size:0.85em">${obs.last_packet_at ? timeAgo(obs.last_packet_at) + '<br><span style="font-size:0.8em;color:var(--text-muted)">' + new Date(obs.last_packet_at).toLocaleString() + '</span>' : '<span style="color:var(--text-muted)">never</span>'}</div>
</div>
</div>
<div class="mono" style="font-size:0.75em;color:var(--text-muted);margin-bottom:20px;word-break:break-all">
ID: ${obs.id}
</div>
${obsSkew && obsSkew.samples > 0 ? `
<div class="node-full-card skew-detail-section" style="margin-bottom:20px;padding:12px">
<h4 style="margin:0 0 6px">⏰ Clock Offset</h4>
<div style="display:flex;align-items:center;gap:12px;flex-wrap:wrap">
<span style="font-size:18px;font-weight:700;font-family:var(--mono)">${formatSkew(obsSkew.offsetSec)}</span>
${renderSkewBadge(observerSkewSeverity(obsSkew.offsetSec), obsSkew.offsetSec)}
<span class="text-muted" style="font-size:12px">${obsSkew.samples} sample${obsSkew.samples !== 1 ? 's' : ''}</span>
</div>
<div style="font-size:12px;color:var(--text-muted);margin-top:8px;max-width:600px">
<strong>How this is computed:</strong> when this observer and another observer see the same packet, we compare their receive timestamps. The median deviation across all multi-observer packets is this observer's offset.
</div>
</div>` : ''}
<div class="obs-charts" style="display:grid;grid-template-columns:repeat(auto-fit,minmax(400px,1fr));gap:16px">
<div class="chart-card" style="padding:12px">
<h3 style="margin:0 0 8px;font-size:0.95em">Packets Over Time</h3>

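// A sketch of the offset computation described in "How this is computed"
// above: for every packet this observer shares with at least one other
// observer, compare receive timestamps against the others' mean, then take
// the median delta. The observation shape is assumed for illustration; the
// actual numbers presumably come from the /observers/clock-skew endpoint.
function observerOffsetSec(observerId, packets) {
  var deltas = [];
  packets.forEach(function (pkt) {
    var obs = pkt.observations || []; // assumed: [{ observerId, receivedAt (ms) }]
    var mine = null;
    obs.forEach(function (o) { if (o.observerId === observerId) mine = o; });
    if (!mine || obs.length < 2) return; // need at least one other observer
    var sum = 0, n = 0;
    obs.forEach(function (o) { if (o.observerId !== observerId) { sum += o.receivedAt; n++; } });
    deltas.push((mine.receivedAt - sum / n) / 1000); // positive = ahead of consensus
  });
  if (deltas.length === 0) return null;
  deltas.sort(function (a, b) { return a - b; });
  var m = Math.floor(deltas.length / 2);
  return deltas.length % 2 ? deltas[m] : (deltas[m - 1] + deltas[m]) / 2;
}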
+3 -31
@@ -3,7 +3,6 @@

(function () {
let observers = [];
let obsSkewMap = {}; // observerID → {offsetSec, samples}
let wsHandler = null;
let refreshTimer = null;
let regionChangeHandler = null;
@@ -52,20 +51,12 @@
if (regionChangeHandler) RegionFilter.offChange(regionChangeHandler);
regionChangeHandler = null;
observers = [];
obsSkewMap = {};
}

async function loadObservers() {
try {
const [data, skewData] = await Promise.all([
api('/observers', { ttl: CLIENT_TTL.observers }),
api('/observers/clock-skew', { ttl: 30000 }).catch(function() { return []; })
]);
const data = await api('/observers', { ttl: CLIENT_TTL.observers });
observers = data.observers || [];
obsSkewMap = {};
(Array.isArray(skewData) ? skewData : []).forEach(function(s) {
if (s && s.observerID) obsSkewMap[s.observerID] = s;
});
render();
} catch (e) {
document.getElementById('obsContent').innerHTML =
@@ -84,17 +75,6 @@
return { cls: 'health-red', label: 'Offline' };
}

function packetBadge(o) {
if (!o.last_packet_at) return '<span title="No packets ever observed">📡⚠ never</span>';
const pktAgo = Date.now() - new Date(o.last_packet_at).getTime();
const statusAgo = o.last_seen ? Date.now() - new Date(o.last_seen).getTime() : Infinity;
const gap = pktAgo - statusAgo;
if (gap > 600000) {
return `<span title="Last packet ${timeAgo(o.last_packet_at)} — status is newer by ${Math.round(gap/60000)}min. Observer may be alive but not forwarding packets.">📡⚠ ${timeAgo(o.last_packet_at)}</span>`;
}
return timeAgo(o.last_packet_at);
}

function uptimeStr(firstSeen) {
if (!firstSeen) return '—';
const ms = Date.now() - new Date(firstSeen).getTime();
@@ -143,8 +123,8 @@
<div class="obs-table-scroll"><table class="data-table obs-table" id="obsTable">
<caption class="sr-only">Observer status and statistics</caption>
<thead><tr>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Status</th><th scope="col">Last Packet</th>
<th scope="col">Packets</th><th scope="col">Packets/Hour</th><th scope="col">Clock Offset</th><th scope="col">Uptime</th>
<th scope="col">Status</th><th scope="col">Name</th><th scope="col">Region</th><th scope="col">Last Seen</th>
<th scope="col">Packets</th><th scope="col">Packets/Hour</th><th scope="col">Uptime</th>
</tr></thead>
<tbody>${filtered.map(o => {
const h = healthStatus(o.last_seen);
@@ -154,16 +134,8 @@
<td class="mono">${o.name || o.id}</td>
<td>${o.iata ? `<span class="badge-region">${o.iata}</span>` : '—'}</td>
<td>${timeAgo(o.last_seen)}</td>
<td>${o.last_packet_at ? timeAgo(o.last_packet_at) : '<span class="text-muted">—</span>'}</td>
<td>${packetBadge(o)}</td>
<td>${(o.packet_count || 0).toLocaleString()}</td>
<td>${sparkBar(o.packetsLastHour || 0, maxPktsHr)}</td>
<td>${(function() {
var sk = obsSkewMap[o.id];
if (!sk || sk.samples == null || sk.samples === 0) return '<span class="text-muted">—</span>';
var sev = observerSkewSeverity(sk.offsetSec);
return renderSkewBadge(sev, sk.offsetSec) + ' <span class="text-muted" title="Computed from ' + sk.samples + ' multi-observer packets. Positive = observer ahead of consensus.">(' + sk.samples + ')</span>';
})()}</td>
<td>${uptimeStr(o.first_seen)}</td>
</tr>`;
}).join('')}</tbody>

@@ -10,11 +10,6 @@
// Aliases: display names → firmware names (for user convenience)
var TYPE_ALIASES = { 'request': 'REQ', 'response': 'RESPONSE', 'direct msg': 'TXT_MSG', 'dm': 'TXT_MSG', 'ack': 'ACK', 'advert': 'ADVERT', 'channel msg': 'GRP_TXT', 'channel': 'GRP_TXT', 'group data': 'GRP_DATA', 'anon req': 'ANON_REQ', 'path': 'PATH', 'trace': 'TRACE', 'multipart': 'MULTIPART', 'control': 'CONTROL', 'raw': 'RAW_CUSTOM', 'custom': 'RAW_CUSTOM' };
var ROUTE_TYPES = { 0: 'TRANSPORT_FLOOD', 1: 'FLOOD', 2: 'DIRECT', 3: 'TRANSPORT_DIRECT' };
// Aliases: shorthand → canonical route name (issue #339)
var ROUTE_ALIASES = { 't_flood': 'TRANSPORT_FLOOD', 't_direct': 'TRANSPORT_DIRECT' };
// Transport route_type values: TRANSPORT_FLOOD (0) and TRANSPORT_DIRECT (3).
// Mirrors isTransportRoute() in cmd/server/decoder.go.
function isTransportRouteType(rt) { return rt === 0 || rt === 3; }

// Use window globals if available (they may have more types)
function getRT() { return window.ROUTE_TYPES || ROUTE_TYPES; }
@@ -185,7 +180,6 @@
function resolveField(packet, field) {
if (field === 'type') return FW_PAYLOAD_TYPES[packet.payload_type] || '';
if (field === 'route') return getRT()[packet.route_type] || '';
if (field === 'transport') return isTransportRouteType(packet.route_type);
if (field === 'hash') return packet.hash || '';
if (field === 'raw') return packet.raw_hex || '';
if (field === 'size') return packet.raw_hex ? packet.raw_hex.length / 2 : 0;
@@ -261,10 +255,6 @@
var alias = TYPE_ALIASES[String(target).toLowerCase()];
if (alias) resolvedTarget = alias;
}
if (ast.field === 'route' && typeof target === 'string') {
var rAlias = ROUTE_ALIASES[String(target).toLowerCase()];
if (rAlias) resolvedTarget = rAlias;
}
if (typeof fieldVal === 'number' && typeof resolvedTarget === 'number') {
eq = fieldVal === resolvedTarget;
} else if (typeof fieldVal === 'boolean' || typeof resolvedTarget === 'boolean') {

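// Quick illustration of the alias handling above (packet values invented):
// 'dm' resolves to TXT_MSG via TYPE_ALIASES and 't_flood' to TRANSPORT_FLOOD
// via ROUTE_ALIASES, so shorthand filters match the canonical firmware names.
var pkt = { payload_type: 2, route_type: 0, raw_hex: 'aabb' };
resolveField(pkt, 'transport'); // true  (route_type 0 is TRANSPORT_FLOOD)
resolveField(pkt, 'size');      // 2     (two raw bytes)
TYPE_ALIASES['dm'];             // 'TXT_MSG'
ROUTE_ALIASES['t_flood'];       // 'TRANSPORT_FLOOD'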
+25 -176
@@ -13,9 +13,6 @@
return o.iata ? `${o.name} (${o.iata})` : o.name;
}
let selectedId = null;
function _isColorByHash() { return localStorage.getItem('meshcore-color-packets-by-hash') !== 'false'; }
function _currentTheme() { return document.documentElement.dataset.theme || (window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light'); }
function _hashStripeStyle(hash) { return _isColorByHash() && hash && window.HashColor ? 'border-left:4px solid ' + HashColor.hashToHsl(hash, _currentTheme()) + ';' : ''; }
let groupByHash = true;
let filters = {};
{ const o = localStorage.getItem('meshcore-observer-filter'); if (o) filters.observer = o;
@@ -26,7 +23,7 @@
let observers = [];
let observerMap = new Map(); // id → observer for O(1) lookups (#383)
let regionMap = {};
const TYPE_NAMES = { 0:'Request', 1:'Response', 2:'Direct Msg', 3:'ACK', 4:'Advert', 5:'Channel Msg', 6:'Group Data', 7:'Anon Req', 8:'Path', 9:'Trace', 10:'Multipart', 11:'Control', 15:'Raw Custom' };
const TYPE_NAMES = { 0:'Request', 1:'Response', 2:'Direct Msg', 3:'ACK', 4:'Advert', 5:'Channel Msg', 7:'Anon Req', 8:'Path', 9:'Trace', 11:'Control' };
function typeName(t) { return TYPE_NAMES[t] ?? `Type ${t}`; }
const isMobile = window.innerWidth <= 1024;
const PACKET_LIMIT = isMobile ? 1000 : 50000;
@@ -59,12 +56,6 @@

function updatePacketsUrl() {
history.replaceState(null, '', '#/packets' + buildPacketsQuery(savedTimeWindowMin, RegionFilter.getRegionParam()));
// Update clear-filters button visibility
var cb = document.getElementById('clearFiltersBtn');
if (cb) {
var active = !!(filters.hash || filters.node || filters.observer || filters.channel || filters.type || filters._filterExpr || filters.myNodes) || !!RegionFilter.getRegionParam() || savedTimeWindowMin !== DEFAULT_TIME_WINDOW;
cb.style.display = active ? '' : 'none';
}
}

let filtersBuilt = false;
@@ -477,9 +468,6 @@

// Check if new packets pass current filters
const filtered = newPkts.filter(p => {
// When user pinned a hash, accept ONLY that exact packet — bypass all
// other filters (window/region/type/observer/node).
if (filters.hash) return p.hash === filters.hash;
// Respect time window filter — drop packets outside the selected window
const windowMin = savedTimeWindowMin;
if (windowMin > 0) {
@@ -489,6 +477,7 @@
}
if (filters.type) { const types = filters.type.split(',').map(Number); if (!types.includes(p.payload_type)) return false; }
if (filters.observer) { const obsSet = new Set(filters.observer.split(',')); if (!obsSet.has(p.observer_id) && !(p._children && p._children.some(c => obsSet.has(String(c.observer_id))))) return false; }
if (filters.hash && p.hash !== filters.hash) return false;
if (RegionFilter.getRegionParam()) {
const selectedRegions = RegionFilter.getRegionParam().split(',');
const obs = observerMap.get(p.observer_id);
@@ -621,52 +610,27 @@
} catch {}
}

// Build URLSearchParams for /api/packets given UI state. Pure function for
// testability — returns the params object the next call to /api/packets
// would use. The hash filter is an exact identifier: when present it
// suppresses ALL other filters (region, time window, observer, node,
// channel). The user is asking for THAT packet regardless of saved
// selections.
function buildPacketsParams({ filters, regionParam, windowMin, groupByHash, limit }) {
const params = new URLSearchParams();
if (filters.hash) {
params.set('hash', filters.hash);
params.set('limit', String(limit));
async function loadPackets() {
try {
const params = new URLSearchParams();
const selectedWindow = Number(document.getElementById('fTimeWindow')?.value);
const windowMin = Number.isFinite(selectedWindow) ? selectedWindow : savedTimeWindowMin;
if (windowMin > 0 && !filters.hash) {
const since = new Date(Date.now() - windowMin * 60000).toISOString();
params.set('since', since);
}
params.set('limit', String(PACKET_LIMIT));
const regionParam = RegionFilter.getRegionParam();
if (regionParam) params.set('region', regionParam);
if (filters.hash) params.set('hash', filters.hash);
if (filters.node) params.set('node', filters.node);
if (filters.observer) params.set('observer', filters.observer);
if (filters.channel) params.set('channel', filters.channel);
if (groupByHash) {
params.set('groupByHash', 'true');
} else {
params.set('expand', 'observations');
}
return params;
}
if (windowMin > 0) {
const since = new Date(Date.now() - windowMin * 60000).toISOString();
params.set('since', since);
}
params.set('limit', String(limit));
if (regionParam) params.set('region', regionParam);
if (filters.node) params.set('node', filters.node);
if (filters.observer) params.set('observer', filters.observer);
if (filters.channel) params.set('channel', filters.channel);
if (groupByHash) {
params.set('groupByHash', 'true');
} else {
params.set('expand', 'observations');
}
return params;
}
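// Usage sketch for buildPacketsParams (hypothetical values): a pinned hash
// suppresses every other filter, per the contract in the comment above.
var withHash = buildPacketsParams({
  filters: { hash: 'deadbeef', observer: 'obs-1' },
  regionParam: 'SEA', windowMin: 60, groupByHash: true, limit: 1000,
});
withHash.toString(); // "hash=deadbeef&limit=1000" (region/window/observer dropped)
var plain = buildPacketsParams({
  filters: {}, regionParam: 'SEA', windowMin: 60, groupByHash: true, limit: 1000,
});
plain.toString(); // "since=<ISO timestamp>&limit=1000&region=SEA&groupByHash=true"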

async function loadPackets() {
try {
const selectedWindow = Number(document.getElementById('fTimeWindow')?.value);
const windowMin = Number.isFinite(selectedWindow) ? selectedWindow : savedTimeWindowMin;
const params = buildPacketsParams({
filters,
regionParam: RegionFilter.getRegionParam(),
windowMin,
groupByHash,
limit: PACKET_LIMIT,
});

const data = await api('/packets?' + params.toString());
packets = data.packets || [];
@@ -754,10 +718,6 @@
console.error('Failed to load packets:', e);
const tbody = document.getElementById('pktBody');
if (tbody) tbody.innerHTML = '<tr><td colspan="' + _getColCount() + '" class="text-center" style="padding:24px;color:var(--error,#ef4444)"><div role="alert" aria-live="polite">Failed to load packets. Please try again.</div></td></tr>';
} finally {
// Always signal data-loaded — even on error — so E2E tests can proceed.
var pktContainer = document.getElementById('pktLeft') || document.getElementById('pktBody');
if (pktContainer) pktContainer.setAttribute('data-loaded', 'true');
}
}

@@ -791,7 +751,6 @@
</div>
<div class="filter-bar" id="pktFilters">
<button class="btn filter-toggle-btn" id="filterToggleBtn">Filters ▾</button>
<button class="btn btn-clear-filters" id="clearFiltersBtn" title="Clear all filters" style="display:none;font-size:12px;padding:2px 8px;color:var(--text-muted);border:1px solid var(--border);border-radius:4px;background:transparent;cursor:pointer">✕ Clear</button>
<div class="filter-group">
<input type="text" placeholder="Packet hash…" id="fHash" aria-label="Filter by packet hash" title="Filter packets by hex hash prefix">
<div class="node-filter-wrap" style="position:relative">
@@ -1072,63 +1031,6 @@
this.textContent = bar.classList.contains('filters-expanded') ? 'Filters ▴' : 'Filters ▾';
});

// --- Clear filters button ---
const clearBtn = document.getElementById('clearFiltersBtn');
if (clearBtn) clearBtn.addEventListener('click', function() {
// Reset filters object
filters.hash = undefined;
filters.node = undefined;
filters.nodeName = undefined;
filters.observer = undefined;
filters.channel = undefined;
filters.type = undefined;
filters._filterExpr = undefined;
filters._packetFilter = null;
filters.myNodes = false;
_observerFilterSet = null;

// Clear localStorage filter entries
localStorage.removeItem('meshcore-observer-filter');
localStorage.removeItem('meshcore-type-filter');

// Reset DOM inputs
document.getElementById('fHash').value = '';
document.getElementById('fNode').value = '';
var pfInput = document.getElementById('packetFilterInput');
if (pfInput) { pfInput.value = ''; pfInput.classList.remove('filter-active', 'filter-error'); }
var pfError = document.getElementById('packetFilterError');
if (pfError) pfError.style.display = 'none';
var pfCount = document.getElementById('packetFilterCount');
if (pfCount) pfCount.style.display = 'none';
document.getElementById('fChannel').value = '';
document.getElementById('fMyNodes').classList.remove('active');

// Reset observer multi-select
var obMenu = document.getElementById('observerMenu');
if (obMenu) obMenu.querySelectorAll('input[type=checkbox]').forEach(function(cb) { cb.checked = false; });
document.getElementById('observerTrigger').textContent = 'All Observers ▾';

// Reset type multi-select
var typeMenu = document.getElementById('typeMenu');
if (typeMenu) typeMenu.querySelectorAll('input[type=checkbox]').forEach(function(cb) { cb.checked = false; });
document.getElementById('typeTrigger').textContent = 'All Types ▾';

// Reset time window to default
savedTimeWindowMin = DEFAULT_TIME_WINDOW;
var fTW = document.getElementById('fTimeWindow');
if (fTW) fTW.value = String(DEFAULT_TIME_WINDOW);
localStorage.removeItem('meshcore-time-window');

// Reset region filter
RegionFilter.setSelected([]);

// Update URL and reload
updatePacketsUrl();
loadPackets();
});
// Show clear button if page loaded with active filters (e.g. from URL params)
updatePacketsUrl();

// Filter event listeners
document.getElementById('fHash').value = filters.hash || '';
document.getElementById('fHash').addEventListener('input', debounce((e) => { filters.hash = e.target.value || undefined; updatePacketsUrl(); loadPackets(); }, 300));
@@ -1430,9 +1332,7 @@
// Channel color highlighting (#271)
const _grpDecoded = getParsedDecoded(p) || {};
const _grpChanStyle = window.ChannelColors ? window.ChannelColors.getRowStyle(_grpDecoded.type || groupTypeName, _grpDecoded.channel) : '';
const _grpHashStripe = _hashStripeStyle(p.hash);
const _grpStyle = _grpHashStripe + _grpChanStyle;
let html = `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" data-entry-idx="${entryIdx}" tabindex="0" role="row"${_grpStyle ? ' style="' + _grpStyle + '"' : ''}>
let html = `<tr class="${isSingle ? '' : 'group-header'} ${isExpanded ? 'expanded' : ''}" data-hash="${p.hash}" data-action="${isSingle ? 'select-hash' : 'toggle-select'}" data-value="${p.hash}" data-entry-idx="${entryIdx}" tabindex="0" role="row"${_grpChanStyle ? ' style="' + _grpChanStyle + '"' : ''}>
<td style="width:28px;text-align:center;cursor:pointer">${isSingle ? '' : (isExpanded ? '▼' : '▶')}</td>
<td class="col-region">${groupRegion ? `<span class="badge-region">${groupRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.latest)}</td>
@@ -1458,8 +1358,7 @@
const childRegion = c.observer_id ? (observerMap.get(c.observer_id)?.iata || '') : '';
const childPath = getParsedPath(c);
const childPathStr = renderPath(childPath, c.observer_id);
const _childHashStripe = _hashStripeStyle(c.hash || p.hash);
html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" data-entry-idx="${entryIdx}" tabindex="0" role="row"${_childHashStripe ? ' style="' + _childHashStripe + '"' : ''}>
html += `<tr class="group-child" data-id="${c.id}" data-hash="${c.hash || ''}" data-action="select-observation" data-value="${c.id}" data-parent-hash="${p.hash}" data-entry-idx="${entryIdx}" tabindex="0" role="row">
<td></td><td class="col-region">${childRegion ? `<span class="badge-region">${childRegion}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(c.timestamp)}</td>
<td class="mono col-hash">${truncate(c.hash || '', 8)}</td>
@@ -1489,9 +1388,7 @@
const hashBytes = ((parseInt(p.raw_hex?.slice(2, 4), 16) || 0) >> 6) + 1;
const pathStr = renderPath(pathHops, p.observer_id);
const detail = getDetailPreview(decoded);
const _flatHashStripe = _hashStripeStyle(p.hash);
const _flatStyle = _flatHashStripe + _chanStyle;
return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" data-entry-idx="${entryIdx}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}"${_flatStyle ? ' style="' + _flatStyle + '"' : ''}>
return `<tr data-id="${p.id}" data-hash="${p.hash || ''}" data-action="select-hash" data-value="${p.hash || p.id}" data-entry-idx="${entryIdx}" tabindex="0" role="row" class="${selectedId === p.id ? 'selected' : ''}"${_chanStyle ? ' style="' + _chanStyle + '"' : ''}>
<td></td><td class="col-region">${region ? `<span class="badge-region">${region}</span>` : '—'}</td>
<td class="col-time">${renderTimestampCell(p.timestamp)}</td>
<td class="mono col-hash">${truncate(p.hash || String(p.id), 8)}</td>
@@ -1743,10 +1640,6 @@
const tbody = document.getElementById('pktBody');
if (!tbody) return;

// Preserve scroll position across re-render (#431)
const scrollContainer = document.getElementById('pktLeft');
const savedScrollTop = scrollContainer ? scrollContainer.scrollTop : 0;

// Update dynamic parts of the header
const countEl = document.querySelector('#pktLeft .count');
const groupBtn = document.getElementById('fGroup');
@@ -1754,14 +1647,7 @@

// Filter to claimed/favorited nodes — pure client-side filter (no server round-trip)
let displayPackets = packets;

// When loading a specific packet by hash, bypass ALL client-side filters
// (myNodes, type, observer, packet-filter-expression). The user is asking
// for THAT exact packet — saved type/observer/expression filters must not
// hide it. Hash filter is the exact identifier; nothing else applies.
const hashOnly = !!filters.hash;

if (!hashOnly && filters.myNodes) {
if (filters.myNodes) {
const myNodes = JSON.parse(localStorage.getItem('meshcore-my-nodes') || '[]');
const myKeys = myNodes.map(n => n.pubkey).filter(Boolean);
const favs = getFavorites();
@@ -1777,11 +1663,11 @@
}

// Client-side type/observer filtering
if (!hashOnly && filters.type) {
if (filters.type) {
const types = filters.type.split(',').map(Number);
displayPackets = displayPackets.filter(p => types.includes(p.payload_type));
}
if (!hashOnly && filters.observer) {
if (filters.observer) {
const obsIds = new Set(filters.observer.split(','));
displayPackets = displayPackets.filter(p => {
if (obsIds.has(p.observer_id)) return true;
@@ -1792,7 +1678,7 @@

// Packet Filter Language
const pfCount = document.getElementById('packetFilterCount');
if (!hashOnly && filters._packetFilter) {
if (filters._packetFilter) {
const beforeCount = displayPackets.length;
displayPackets = displayPackets.filter(filters._packetFilter);
if (pfCount) {
@@ -1816,8 +1702,6 @@
detachVScrollListener();
const colCount = _getColCount();
tbody.innerHTML = '<tr><td colspan="' + colCount + '" class="text-center text-muted" style="padding:24px">' + (filters.myNodes ? 'No packets from your claimed/favorited nodes' : 'No packets found') + '</td></tr>';
// Restore scroll position after DOM rebuild (#431)
if (scrollContainer) scrollContainer.scrollTop = savedScrollTop;
return;
}

@@ -1835,9 +1719,6 @@

attachVScrollListener();
renderVisibleRows();

// Restore scroll position after re-render (#431)
if (scrollContainer) scrollContainer.scrollTop = savedScrollTop;
}

function getDetailPreview(decoded) {
@@ -2336,16 +2217,6 @@
off += hashSize * pathHops.length;
}

// TRACE SNR values (from header path bytes, decoded by backend)
if (decoded.type === 'TRACE' && decoded.snrValues && decoded.snrValues.length > 0) {
rows += sectionRow('SNR Path (' + decoded.snrValues.length + ' hops completed)', 'section-path');
for (let i = 0; i < decoded.snrValues.length; i++) {
const snr = decoded.snrValues[i];
const snrStr = (snr >= 0 ? '+' : '') + snr.toFixed(2) + ' dB';
rows += fieldRow('', 'SNR (hop ' + i + ')', snrStr, '');
}
}

// Payload
rows += sectionRow('Payload — ' + payloadTypeName(pkt.payload_type), 'section-payload');

@@ -2382,13 +2253,6 @@
if (decoded.sender_timestamp) rows += fieldRow(off + 2, 'Sender Time', decoded.sender_timestamp, '');
} else if (decoded.type === 'ACK') {
rows += fieldRow(off, 'Checksum (4B)', decoded.ackChecksum || '', '');
} else if (decoded.type === 'TRACE') {
rows += fieldRow(off, 'Trace Tag (4B)', decoded.tag ? '0x' + decoded.tag.toString(16).toUpperCase().padStart(8, '0') : '—', '');
rows += fieldRow(off + 4, 'Auth Code (4B)', decoded.authCode ? '0x' + decoded.authCode.toString(16).toUpperCase().padStart(8, '0') : '—', '');
rows += fieldRow(off + 8, 'Flags', decoded.traceFlags != null ? '0x' + decoded.traceFlags.toString(16).padStart(2, '0') : '—', decoded.traceFlags != null ? 'hash_size=' + (1 << (decoded.traceFlags & 0x03)) + ' byte(s)' : '');
if (decoded.pathData) {
rows += fieldRow(off + 9, 'Route Hops', decoded.pathData.toUpperCase(), pathHops.length + ' hop(s)');
}
} else if (decoded.destHash !== undefined) {
rows += fieldRow(off, 'Dest Hash (1B)', decoded.destHash || '', '');
rows += fieldRow(off + 1, 'Src Hash (1B)', decoded.srcHash || '', '');
@@ -2658,22 +2522,12 @@
} catch {}
}

let _lastColorByHash = _isColorByHash();
function _onStorageChange() {
var current = _isColorByHash();
if (_lastColorByHash !== current) {
_lastColorByHash = current;
renderVisibleRows();
}
}

let _themeRefreshHandler = null;

registerPage('packets', {
init: function(app, routeParam) {
_themeRefreshHandler = () => { if (typeof renderTableRows === 'function') renderTableRows(); };
window.addEventListener('theme-refresh', _themeRefreshHandler);
window.addEventListener('storage', _onStorageChange);
var result = init(app, routeParam);
// Install channel color picker on packets table (M2, #271)
if (window.ChannelColorPicker) window.ChannelColorPicker.installPacketsTable();
@@ -2681,7 +2535,6 @@
},
destroy: function() {
if (_themeRefreshHandler) { window.removeEventListener('theme-refresh', _themeRefreshHandler); _themeRefreshHandler = null; }
window.removeEventListener('storage', _onStorageChange);
return destroy();
}
});
@@ -2710,10 +2563,6 @@
buildGroupRowHtml,
buildFlatRowHtml,
_calcVisibleRange,
buildPacketsParams,
renderTableRows,
_setPackets: function(p) { packets = p; },
_setFilter: function(k, v) { filters[k] = v; },
};
}

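// The HashColor module referenced by _hashStripeStyle above is not part of
// this diff; a plausible minimal shape for a deterministic hash-to-color
// mapping (treat every detail here as an assumption) would be:
function hashToHsl(hash, theme) {
  var hue = parseInt(hash.slice(0, 4), 16) % 360;  // stable hue per packet hash
  var light = theme === 'dark' ? 60 : 40;          // keep contrast in both themes
  return 'hsl(' + hue + ', 70%, ' + light + '%)';
}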
@@ -1,205 +0,0 @@
// Path Inspector — prefix candidate scoring with map overlay (issue #944).
// IIFE; exports window.PathInspector for testability.
(function () {
'use strict';

var container = null;
var currentResults = null;

function init(app) {
container = app;
var params = new URLSearchParams(location.hash.split('?')[1] || '');
var prefixParam = params.get('prefixes') || '';

container.innerHTML =
'<div class="path-inspector-page">' +
'<h2>Path Inspector</h2>' +
'<p class="help-text">Enter comma or space-separated hex prefixes (1-3 bytes each, e.g. <code>2C,A1,F4</code> or <code>2C A1 F4</code>).</p>' +
'<div class="path-inspector-input-row">' +
'<input type="text" id="path-inspector-input" class="input" placeholder="2C,A1,F4 or 2C A1 F4" value="' + escapeAttr(prefixParam) + '">' +
'<button id="path-inspector-submit" class="btn btn-primary">Inspect</button>' +
'</div>' +
'<div id="path-inspector-error" class="path-inspector-error"></div>' +
'<div id="path-inspector-results"></div>' +
'</div>';

var input = document.getElementById('path-inspector-input');
var btn = document.getElementById('path-inspector-submit');
btn.addEventListener('click', function () { submit(input.value); });
input.addEventListener('keydown', function (e) {
if (e.key === 'Enter') submit(input.value);
});

// Auto-run if prefixes in URL.
if (prefixParam) submit(prefixParam);
}

function destroy() {
container = null;
currentResults = null;
}

function parsePrefixes(raw) {
// Accept comma or space separated.
var parts = raw.trim().split(/[\s,]+/).filter(function (s) { return s.length > 0; });
return parts.map(function (p) { return p.toLowerCase(); });
}

function validatePrefixes(prefixes) {
if (prefixes.length === 0) return 'Enter at least one prefix.';
if (prefixes.length > 64) return 'Too many prefixes (max 64).';
var hexRe = /^[0-9a-f]+$/;
var byteLen = -1;
for (var i = 0; i < prefixes.length; i++) {
var p = prefixes[i];
if (!hexRe.test(p)) return 'Invalid hex: ' + p;
if (p.length % 2 !== 0) return 'Odd-length prefix: ' + p;
var bl = p.length / 2;
if (bl > 3) return 'Prefix too long (max 3 bytes): ' + p;
if (byteLen === -1) byteLen = bl;
else if (bl !== byteLen) return 'Mixed prefix lengths not allowed.';
}
return null;
}

function submit(raw) {
var errDiv = document.getElementById('path-inspector-error');
var resultsDiv = document.getElementById('path-inspector-results');
errDiv.textContent = '';
resultsDiv.innerHTML = '';

var prefixes = parsePrefixes(raw);
var err = validatePrefixes(prefixes);
if (err) {
errDiv.textContent = err;
return;
}

// Update URL.
var base = '#/tools/path-inspector';
if (location.hash.indexOf(base) === 0) {
history.replaceState(null, '', base + '?prefixes=' + prefixes.join(','));
}

resultsDiv.innerHTML = '<p>Loading...</p>';
fetch('/api/paths/inspect', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ prefixes: prefixes })
})
.then(function (r) {
if (r.status === 503) return r.json().then(function (d) { throw new Error('Service warming up, retry in a few seconds.'); });
if (!r.ok) return r.json().then(function (d) { throw new Error(d.error || 'Request failed'); });
return r.json();
})
.then(function (data) {
currentResults = data;
renderResults(data, resultsDiv);
})
.catch(function (e) {
resultsDiv.innerHTML = '';
errDiv.textContent = e.message;
});
}

function renderResults(data, div) {
if (!data.candidates || data.candidates.length === 0) {
div.innerHTML = '<p class="no-results">No candidates found. The prefixes may not match any known path-eligible nodes.</p>';
return;
}

var html = '<table class="path-inspector-table"><thead><tr>' +
'<th>#</th><th>Score</th><th>Path</th><th>Action</th>' +
'</tr></thead><tbody>';

for (var i = 0; i < data.candidates.length; i++) {
var c = data.candidates[i];
var rowClass = c.speculative ? 'speculative-row' : '';
html += '<tr class="' + rowClass + '">';
html += '<td>' + (i + 1) + '</td>';
html += '<td class="' + (c.speculative ? 'speculative-warning' : '') + '">' +
c.score.toFixed(3) +
(c.speculative ? ' <span class="speculative-badge" title="Low evidence; may be wrong">⚠</span>' : '') +
'</td>';
html += '<td>' + escapeHtml(c.names.join(' → ')) + '</td>';
html += '<td><button class="btn btn-sm" data-idx="' + i + '">Show on Map</button></td>';
html += '</tr>';

// Per-hop evidence (collapsed).
html += '<tr class="evidence-row collapsed" data-evidence="' + i + '"><td colspan="4"><div class="evidence-detail">';
for (var j = 0; j < c.evidence.perHop.length; j++) {
var h = c.evidence.perHop[j];
html += '<div class="hop-evidence">Hop ' + (j + 1) + ': prefix=' + h.prefix +
', candidates=' + h.candidatesConsidered +
', edge=' + h.edgeWeight.toFixed(3);
if (h.alternatives && h.alternatives.length > 0) {
html += '<div class="hop-alternatives" style="margin-left:12px;font-size:12px;color:var(--text-muted);">';
for (var k = 0; k < h.alternatives.length; k++) {
var alt = h.alternatives[k];
html += '<div>↳ ' + escapeHtml(alt.name || alt.publicKey.substring(0, 8)) + ' (score=' + alt.score.toFixed(3) + ')</div>';
}
html += '</div>';
}
html += '</div>';
}
html += '</div></td></tr>';
}

html += '</tbody></table>';
html += '<div class="path-inspector-stats">Beam width: ' + data.stats.beamWidth +
' | Expansions: ' + data.stats.expansionsRun +
' | Elapsed: ' + data.stats.elapsedMs + 'ms</div>';

div.innerHTML = html;

// Wire up Show on Map buttons.
div.querySelectorAll('button[data-idx]').forEach(function (btn) {
btn.addEventListener('click', function () {
var idx = parseInt(btn.dataset.idx);
showOnMap(data.candidates[idx]);
});
});

// Wire up row expand for evidence.
div.querySelectorAll('.path-inspector-table tbody tr:not(.evidence-row)').forEach(function (row) {
row.style.cursor = 'pointer';
row.addEventListener('click', function (e) {
if (e.target.tagName === 'BUTTON') return;
var idx = row.querySelector('button[data-idx]');
if (!idx) return;
var evidenceRow = div.querySelector('tr[data-evidence="' + idx.dataset.idx + '"]');
if (evidenceRow) evidenceRow.classList.toggle('collapsed');
});
});
}

function showOnMap(candidate) {
// Store pending route for map init to pick up.
window._pendingPathInspectorRoute = candidate;
// Switch to map page if not there; map init will draw the route.
if (location.hash.indexOf('#/map') !== 0) {
location.hash = '#/map';
} else {
// Already on map — draw directly.
delete window._pendingPathInspectorRoute;
if (window.routeLayer) window.routeLayer.clearLayers();
// Pass FULL path as hopKeys (not slice(1)) — drawPacketRoute resolves
// each entry against nodes[] for plotting. The 2nd arg is the origin
// OBJECT (with pubkey/lat/lon/name); pass null since the origin is
// already the first hop in the path itself, and drawPacketRoute draws
// a marker for every resolved hop.
if (window.drawPacketRoute) window.drawPacketRoute(candidate.path, null);
}
}

function escapeAttr(s) {
return s.replace(/&/g, '&amp;').replace(/"/g, '&quot;').replace(/</g, '&lt;');
}

function escapeHtml(s) {
return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

window.PathInspector = { init: init, destroy: destroy, parsePrefixes: parsePrefixes, validatePrefixes: validatePrefixes };
if (typeof registerPage === 'function') registerPage('path-inspector', { init: init, destroy: destroy });
})();
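// Validation rules above, illustrated (inputs invented for the example):
validatePrefixes(parsePrefixes('2C,A1,F4')); // null: three 1-byte prefixes
validatePrefixes(parsePrefixes('2CA1 F4'));  // 'Mixed prefix lengths not allowed.'
validatePrefixes(parsePrefixes('2C,G1'));    // 'Invalid hex: g1'
validatePrefixes(parsePrefixes('2C,A'));     // 'Odd-length prefix: a'
validatePrefixes(parsePrefixes('AABBCCDD')); // 'Prefix too long (max 3 bytes): aabbccdd'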
@@ -1,119 +0,0 @@
/* === CoreScope — roles-page.js === */
'use strict';

(function () {
let refreshTimer = null;

function init(app) {
app.innerHTML =
'<div class="roles-page" data-page="roles">' +
'  <div class="page-header">' +
'    <h2>Roles</h2>' +
'    <button class="btn-icon" data-action="roles-refresh" title="Refresh" aria-label="Refresh roles">🔄</button>' +
'  </div>' +
'  <p class="text-muted" style="margin:0 0 12px 0">Distribution of node roles across the mesh, with per-role clock-skew posture.</p>' +
'  <div id="rolesContent"><div class="text-center text-muted" style="padding:40px">Loading…</div></div>' +
'</div>';
app.addEventListener('click', function (e) {
var btn = e.target.closest('[data-action="roles-refresh"]');
if (btn) load();
});
load();
refreshTimer = setInterval(load, 60000);
}

function destroy() {
if (refreshTimer) clearInterval(refreshTimer);
refreshTimer = null;
}

async function load() {
var container = document.getElementById('rolesContent');
if (!container) return;
try {
var resp = await fetch('/api/analytics/roles');
if (!resp.ok) throw new Error('HTTP ' + resp.status);
var data = await resp.json();
render(container, data);
} catch (err) {
container.innerHTML = '<div class="text-center" style="padding:40px;color:var(--color-error,#c00)">Failed to load roles: ' + escapeHtml(String(err.message || err)) + '</div>';
}
}

function escapeHtml(s) {
return String(s).replace(/[&<>"']/g, function (c) {
return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c];
});
}

function fmtSec(v) {
if (!v && v !== 0) return '—';
var abs = Math.abs(v);
if (abs < 1) return v.toFixed(2) + 's';
if (abs < 60) return v.toFixed(1) + 's';
if (abs < 3600) return (v / 60).toFixed(1) + 'm';
if (abs < 86400) return (v / 3600).toFixed(1) + 'h';
return (v / 86400).toFixed(1) + 'd';
}

function roleEmoji(role) {
if (window.ROLE_EMOJI && window.ROLE_EMOJI[role]) return window.ROLE_EMOJI[role];
return '•';
}

function render(container, data) {
var roles = (data && data.roles) || [];
var total = (data && data.totalNodes) || 0;
if (roles.length === 0) {
container.innerHTML = '<div class="text-center text-muted" style="padding:40px">No roles to show.</div>';
return;
}
var maxCount = roles.reduce(function (m, r) { return Math.max(m, r.nodeCount || 0); }, 0) || 1;

var rows = roles.map(function (r) {
var pct = total > 0 ? ((r.nodeCount / total) * 100).toFixed(1) : '0.0';
var barW = Math.round((r.nodeCount / maxCount) * 100);
var sevCells =
'<span title="OK (skew < 5min)" style="color:var(--color-success,#0a0)">' + (r.okCount || 0) + '</span> / ' +
'<span title="Warning (5min – 1h)" style="color:var(--color-warning,#e80)">' + (r.warningCount || 0) + '</span> / ' +
'<span title="Critical (1h – 30d)" style="color:var(--color-error,#c00)">' + (r.criticalCount || 0) + '</span> / ' +
'<span title="Absurd (> 30d)" style="color:#a0a">' + (r.absurdCount || 0) + '</span> / ' +
'<span title="No clock (> 365d)" style="color:#888">' + (r.noClockCount || 0) + '</span>';
return '' +
'<tr data-role="' + escapeHtml(r.role) + '">' +
'<td>' + roleEmoji(r.role) + ' <strong>' + escapeHtml(r.role) + '</strong></td>' +
'<td style="text-align:right">' + r.nodeCount + '</td>' +
'<td style="text-align:right">' + pct + '%</td>' +
'<td style="min-width:140px">' +
'<div style="background:var(--color-surface-2,#eee);height:10px;border-radius:5px;overflow:hidden">' +
'<div style="background:var(--color-accent,#06c);width:' + barW + '%;height:100%"></div>' +
'</div>' +
'</td>' +
'<td style="text-align:right">' + (r.withSkew || 0) + '</td>' +
'<td style="text-align:right">' + fmtSec(r.medianAbsSkewSec || 0) + '</td>' +
'<td style="text-align:right">' + fmtSec(r.meanAbsSkewSec || 0) + '</td>' +
'<td style="white-space:nowrap">' + sevCells + '</td>' +
'</tr>';
}).join('');

container.innerHTML =
'<div class="roles-summary" style="margin-bottom:12px;color:var(--color-text-muted,#666)">' +
'<strong>' + total + '</strong> nodes across <strong>' + roles.length + '</strong> roles' +
'</div>' +
'<table id="rolesTable" class="data-table" style="width:100%">' +
'<thead><tr>' +
'<th>Role</th>' +
'<th style="text-align:right">Count</th>' +
'<th style="text-align:right">Share</th>' +
'<th>Distribution</th>' +
'<th style="text-align:right" title="Nodes with clock-skew samples">w/ Skew</th>' +
'<th style="text-align:right" title="Median absolute skew">Median |skew|</th>' +
'<th style="text-align:right" title="Mean absolute skew">Mean |skew|</th>' +
'<th title="OK / Warning / Critical / Absurd / No-clock">Severity</th>' +
'</tr></thead>' +
'<tbody>' + rows + '</tbody>' +
'</table>';
}

registerPage('roles', { init: init, destroy: destroy });
})();
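// fmtSec picks a unit by magnitude; sample outputs (values invented):
fmtSec(0.42);   // '0.42s'
fmtSec(300);    // '5.0m'
fmtSec(-7200);  // '-2.0h'
fmtSec(259200); // '3.0d'
fmtSec(null);   // '—'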
+4 -12
@@ -15,18 +15,16 @@
};

window.TYPE_COLORS = {
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', GRP_DATA: '#8b5cf6', TXT_MSG: '#f59e0b', ACK: '#6b7280',
ADVERT: '#22c55e', GRP_TXT: '#3b82f6', TXT_MSG: '#f59e0b', ACK: '#6b7280',
REQUEST: '#a855f7', RESPONSE: '#06b6d4', TRACE: '#ec4899', PATH: '#14b8a6',
ANON_REQ: '#f43f5e', MULTIPART: '#0d9488', CONTROL: '#b45309', RAW_CUSTOM: '#c026d3',
UNKNOWN: '#6b7280'
ANON_REQ: '#f43f5e', UNKNOWN: '#6b7280'
};

// Badge CSS class name mapping
const TYPE_BADGE_MAP = {
ADVERT: 'advert', GRP_TXT: 'grp-txt', GRP_DATA: 'grp-data', TXT_MSG: 'txt-msg', ACK: 'ack',
ADVERT: 'advert', GRP_TXT: 'grp-txt', TXT_MSG: 'txt-msg', ACK: 'ack',
REQUEST: 'req', RESPONSE: 'response', TRACE: 'trace', PATH: 'path',
ANON_REQ: 'anon-req', MULTIPART: 'multipart', CONTROL: 'control', RAW_CUSTOM: 'raw-custom',
UNKNOWN: 'unknown'
ANON_REQ: 'anon-req', UNKNOWN: 'unknown'
};

// Generate badge CSS from TYPE_COLORS — single source of truth
@@ -457,12 +455,6 @@
return '<span class="' + cls + '" title="Clock skew: ' + window.formatSkew(skewSec) + ' (' + (SKEW_SEVERITY_LABELS[severity] || severity) + ')">' + label + '</span>';
};

/** Compute severity for an observer's clock offset (seconds). */
window.observerSkewSeverity = function(offsetSec) {
var abs = Math.abs(offsetSec);
return abs >= 3600 ? 'critical' : abs >= 300 ? 'warning' : 'ok';
};

/** Render a skew sparkline SVG (inline, word-sized) */
window.renderSkewSparkline = function(samples, w, h) {
w = w || 120; h = h || 24;

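// A sketch of the generator referenced by the "single source of truth"
// comment above (the actual rule shape lives outside this hunk, so the CSS
// emitted here is an assumption):
var badgeCss = Object.keys(window.TYPE_COLORS).map(function (t) {
  var cls = TYPE_BADGE_MAP[t] || 'unknown';
  return '.badge-' + cls + ' { background: ' + window.TYPE_COLORS[t] + '; color: #fff; }';
}).join('\n');
var styleTag = document.createElement('style');
styleTag.textContent = badgeCss;
document.head.appendChild(styleTag);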
+1 -60
@@ -16,7 +16,6 @@
--status-amber: #f59e0b;
--status-amber-light: #fef3c7;
--status-amber-text: #92400e;
--path-inspector-speculative: #d97706;
--role-observer: #8b5cf6;
--accent-hover: #6db3ff;
--text: #1a1a2e;
@@ -53,7 +52,6 @@
--status-amber: #f59e0b;
--status-amber-light: #422006;
--status-amber-text: #fcd34d;
--path-inspector-speculative: #f59e0b;
--surface-0: #0f0f23;
--surface-1: #1a1a2e;
--surface-2: #232340;
@@ -524,19 +522,6 @@ button.ch-item.selected { background: var(--selected-bg); }
.ch-item-top { display: flex; justify-content: space-between; align-items: baseline; margin-bottom: 2px; }
.ch-item-name { font-weight: 600; font-size: 14px; }
.ch-item-time { font-size: 11px; color: var(--text-muted); white-space: nowrap; }
.ch-unread-badge {
display: inline-block;
min-width: 18px;
padding: 1px 6px;
margin-left: 4px;
background: var(--accent, #3b82f6);
color: #fff;
font-size: 10px;
font-weight: 600;
border-radius: 9px;
text-align: center;
line-height: 1.4;
}
.ch-remove-btn { background: none; border: none; color: var(--text-muted); cursor: pointer; font-size: 13px; padding: 0 2px; margin-left: 4px; opacity: 0; transition: opacity 0.15s; line-height: 1; }
button.ch-item:hover .ch-remove-btn { opacity: 0.6; }
.ch-remove-btn:hover { opacity: 1 !important; color: var(--danger, #dc2626); }
@@ -1571,7 +1556,7 @@ tr[data-hops]:hover { background: rgba(59,130,246,0.1); }

/* #20 — Observers table horizontal scroll on mobile */
.obs-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
.obs-table-scroll .obs-table { min-width: 720px; }
.obs-table-scroll .obs-table { min-width: 640px; }

/* #206 — Analytics/Compare tables scroll wrappers on mobile */
.analytics-table-scroll { overflow-x: auto; -webkit-overflow-scrolling: touch; }
@@ -2188,16 +2173,6 @@ tr[data-hops]:hover { background: rgba(59,130,246,0.1); }
margin-left: 6px;
flex-shrink: 0;
}
.ch-color-clear {
display: inline-block;
font-size: 10px;
line-height: 1;
color: var(--text-muted, #888);
cursor: pointer;
margin-left: 3px;
vertical-align: middle;
}
.ch-color-clear:hover { color: var(--text-primary, #e0e0e0); }
.ch-color-dot:not([style*="background"]) {
background: transparent;
border-style: dashed;
@@ -2335,37 +2310,3 @@ th.sort-active { color: var(--accent, #60a5fa); }

.clock-filter-btn { font-size: 12px; padding: 3px 8px; border: 1px solid var(--border); border-radius: 4px; background: var(--card-bg, #fff); color: var(--text); cursor: pointer; margin-right: 4px; }
.clock-filter-btn.active { background: var(--accent); color: #fff; border-color: var(--accent); }

/* === Path Inspector (issue #944) === */
.path-inspector-page { padding: 16px; max-width: 900px; margin: 0 auto; }
.path-inspector-input-row { display: flex; gap: 8px; margin-bottom: 12px; }
.path-inspector-input-row .input { flex: 1; }
.path-inspector-error { color: var(--status-red, #ef4444); font-size: 13px; margin-bottom: 8px; }
.path-inspector-table { width: 100%; border-collapse: collapse; font-size: 13px; }
.path-inspector-table th,
.path-inspector-table td { padding: 6px 10px; border-bottom: 1px solid var(--border); text-align: left; }
.path-inspector-table th { background: var(--card-bg); font-weight: 600; }
.speculative-warning { color: var(--path-inspector-speculative, #d97706); font-weight: 600; }
.speculative-badge { cursor: help; }
.speculative-row { background: color-mix(in srgb, var(--path-inspector-speculative, #d97706) 8%, transparent); }
.evidence-row { font-size: 12px; color: var(--text-muted); }
.evidence-row.collapsed { display: none; }
.evidence-detail { padding: 4px 10px; }
.hop-evidence { margin: 2px 0; }
.path-inspector-stats { margin-top: 12px; font-size: 12px; color: var(--text-muted); }
.no-results { color: var(--text-muted); font-style: italic; }

/* Map side pane for path inspector */
.map-side-pane { flex: 0 0 32px; overflow: hidden; transition: flex-basis 0.2s; border-left: 1px solid var(--border); background: var(--card-bg); }
.map-side-pane.expanded { flex: 0 0 320px; overflow-y: auto; padding: 12px; }
.map-side-pane .pane-toggle { cursor: pointer; padding: 8px; font-size: 14px; text-align: center; }
.map-side-pane .pane-content { display: none; }
.map-side-pane.expanded .pane-content { display: block; }

/* Tools landing page */
.tools-landing { padding: 24px; max-width: 600px; }
.tools-menu { display: flex; flex-direction: column; gap: 12px; margin-top: 16px; }
.tools-card { display: block; padding: 16px; border-radius: 8px; border: 1px solid var(--border); background: var(--card-bg); color: var(--text); text-decoration: none; transition: border-color 0.2s; }
.tools-card:hover { border-color: var(--primary); }
.tools-card h3 { margin: 0 0 4px 0; font-size: 16px; }
.tools-card p { margin: 0; font-size: 13px; color: var(--text-muted); }

Vendored -155
@@ -1,155 +0,0 @@
|
||||
/* SPDX-License-Identifier: MIT
|
||||
*
|
||||
* Minimal pure-JS AES-128 ECB implementation (decrypt only).
|
||||
*
|
||||
* Adapted from aes-js by Richard Moore (MIT License,
|
||||
* https://github.com/ricmoo/aes-js, copyright 2015-2018), trimmed to
|
||||
* the minimum needed for AES-128-ECB decryption: S-box + inverse S-box,
|
||||
* Rcon, key expansion (FIPS-197 §5.2), inverse cipher (FIPS-197 §5.3).
|
||||
* Only the inverse-direction T-tables (T5..T8) and key-expansion U-tables
|
||||
* (U1..U4) are vendored; the forward-direction tables (T1..T4) and
|
||||
* encrypt path are intentionally omitted — we never encrypt on the
|
||||
* client.
|
||||
*
|
||||
* Why pure-JS instead of Web Crypto? Web Crypto exposes AES-CBC/CTR/GCM
|
||||
* but NOT raw AES-ECB. Simulating ECB via "AES-CBC with zero IV +
|
||||
* dummy PKCS7 padding block" is unreliable: Web Crypto validates PKCS7
|
||||
* padding on the decrypted output and throws OperationError whenever the
|
||||
* padding bytes don't form a valid PKCS7 sequence (the common case for
|
||||
* real ciphertext). MeshCore channel encryption uses single-block
|
||||
* AES-128-ECB per packet, so we need true ECB, not a CBC hack.
|
||||
*
|
||||
* API: window.AES_ECB.decrypt(key, ciphertext) -> Uint8Array
|
||||
* - key: Uint8Array (16 bytes; AES-128 only)
|
||||
* - ciphertext: Uint8Array (length must be a non-zero multiple of 16)
|
||||
*/
|
||||
/* eslint-disable no-var */
|
||||
(function (root) {
|
||||
'use strict';
|
||||
|
||||
// --- S-boxes ---
|
||||
var Si = [
|
||||
0x52,0x09,0x6a,0xd5,0x30,0x36,0xa5,0x38,0xbf,0x40,0xa3,0x9e,0x81,0xf3,0xd7,0xfb,
|
||||
0x7c,0xe3,0x39,0x82,0x9b,0x2f,0xff,0x87,0x34,0x8e,0x43,0x44,0xc4,0xde,0xe9,0xcb,
|
||||
0x54,0x7b,0x94,0x32,0xa6,0xc2,0x23,0x3d,0xee,0x4c,0x95,0x0b,0x42,0xfa,0xc3,0x4e,
|
||||
0x08,0x2e,0xa1,0x66,0x28,0xd9,0x24,0xb2,0x76,0x5b,0xa2,0x49,0x6d,0x8b,0xd1,0x25,
|
||||
0x72,0xf8,0xf6,0x64,0x86,0x68,0x98,0x16,0xd4,0xa4,0x5c,0xcc,0x5d,0x65,0xb6,0x92,
|
||||
0x6c,0x70,0x48,0x50,0xfd,0xed,0xb9,0xda,0x5e,0x15,0x46,0x57,0xa7,0x8d,0x9d,0x84,
|
||||
0x90,0xd8,0xab,0x00,0x8c,0xbc,0xd3,0x0a,0xf7,0xe4,0x58,0x05,0xb8,0xb3,0x45,0x06,
|
||||
0xd0,0x2c,0x1e,0x8f,0xca,0x3f,0x0f,0x02,0xc1,0xaf,0xbd,0x03,0x01,0x13,0x8a,0x6b,
|
||||
0x3a,0x91,0x11,0x41,0x4f,0x67,0xdc,0xea,0x97,0xf2,0xcf,0xce,0xf0,0xb4,0xe6,0x73,
|
||||
0x96,0xac,0x74,0x22,0xe7,0xad,0x35,0x85,0xe2,0xf9,0x37,0xe8,0x1c,0x75,0xdf,0x6e,
|
||||
0x47,0xf1,0x1a,0x71,0x1d,0x29,0xc5,0x89,0x6f,0xb7,0x62,0x0e,0xaa,0x18,0xbe,0x1b,
|
||||
0xfc,0x56,0x3e,0x4b,0xc6,0xd2,0x79,0x20,0x9a,0xdb,0xc0,0xfe,0x78,0xcd,0x5a,0xf4,
|
||||
0x1f,0xdd,0xa8,0x33,0x88,0x07,0xc7,0x31,0xb1,0x12,0x10,0x59,0x27,0x80,0xec,0x5f,
|
||||
0x60,0x51,0x7f,0xa9,0x19,0xb5,0x4a,0x0d,0x2d,0xe5,0x7a,0x9f,0x93,0xc9,0x9c,0xef,
|
||||
0xa0,0xe0,0x3b,0x4d,0xae,0x2a,0xf5,0xb0,0xc8,0xeb,0xbb,0x3c,0x83,0x53,0x99,0x61,
|
||||
0x17,0x2b,0x04,0x7e,0xba,0x77,0xd6,0x26,0xe1,0x69,0x14,0x63,0x55,0x21,0x0c,0x7d
|
||||
];
|
||||
|
||||
// --- GF(2^8) multiplications used by InvMixColumns ---
|
||||
// xtime: multiply by {02} in GF(2^8)
|
||||
function xt(b) { return ((b << 1) ^ ((b & 0x80) ? 0x1b : 0)) & 0xff; }
|
||||
function mul(a, b) {
|
||||
// Generic GF(2^8) multiply for small constants 9, 0xb, 0xd, 0xe.
|
||||
var p = 0;
|
||||
for (var i = 0; i < 8; i++) {
|
||||
if (b & 1) p ^= a;
|
||||
var hi = a & 0x80;
|
||||
a = (a << 1) & 0xff;
|
||||
if (hi) a ^= 0x1b;
|
||||
b >>= 1;
|
||||
}
|
||||
return p & 0xff;
|
||||
}
|
||||
|
||||
// --- Key expansion: AES-128 produces 11 round keys (44 words × 4 bytes) ---
|
||||
function expandKey(key) {
|
||||
if (key.length !== 16) throw new Error('AES-ECB: key must be 16 bytes (AES-128)');
|
||||
var Rcon = [0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36];
|
||||
// S-box derived as the inverse of Si: build it once.
|
||||
var S = new Uint8Array(256);
|
||||
for (var x = 0; x < 256; x++) S[Si[x]] = x;
|
||||
var w = new Uint8Array(176); // 11 round keys × 16 bytes
|
||||
for (var i = 0; i < 16; i++) w[i] = key[i];
|
||||
for (var idx = 16, rcon = 1; idx < 176; idx += 4) {
|
||||
var t0 = w[idx - 4], t1 = w[idx - 3], t2 = w[idx - 2], t3 = w[idx - 1];
|
||||
if (idx % 16 === 0) {
|
||||
// RotWord + SubWord + Rcon
|
||||
var s0 = S[t1], s1 = S[t2], s2 = S[t3], s3 = S[t0];
|
||||
t0 = s0 ^ Rcon[rcon]; t1 = s1; t2 = s2; t3 = s3;
|
||||
rcon++;
|
||||
}
|
||||
w[idx ] = w[idx - 16] ^ t0;
|
||||
w[idx + 1] = w[idx - 15] ^ t1;
|
||||
w[idx + 2] = w[idx - 14] ^ t2;
|
||||
w[idx + 3] = w[idx - 13] ^ t3;
|
||||
}
|
||||
return w;
|
||||
}
|
||||
|
||||
// --- AES-128 single-block decrypt (FIPS-197 §5.3 InvCipher) ---
function decryptBlock(state, w, out, outOff) {
  // state is a 16-byte block. Work on a local 16-byte buffer.
  var s = new Uint8Array(16);
  // AddRoundKey with last round key (round 10)
  for (var i = 0; i < 16; i++) s[i] = state[i] ^ w[160 + i];

  for (var round = 9; round >= 1; round--) {
    // InvShiftRows
    var t = new Uint8Array(16);
    // Row 0: no shift
    t[0] = s[0]; t[4] = s[4]; t[8] = s[8]; t[12] = s[12];
    // Row 1: shift right by 1 -> source col offset -1 mod 4
    t[1] = s[13]; t[5] = s[1]; t[9] = s[5]; t[13] = s[9];
    // Row 2: shift right by 2
    t[2] = s[10]; t[6] = s[14]; t[10] = s[2]; t[14] = s[6];
    // Row 3: shift right by 3
    t[3] = s[7]; t[7] = s[11]; t[11] = s[15]; t[15] = s[3];
    // InvSubBytes
    for (var k = 0; k < 16; k++) t[k] = Si[t[k]];
    // AddRoundKey
    for (var k2 = 0; k2 < 16; k2++) t[k2] ^= w[round * 16 + k2];
    // InvMixColumns: each column [c0,c1,c2,c3] -> M^-1 * column
    // M^-1 = [[0e,0b,0d,09],[09,0e,0b,0d],[0d,09,0e,0b],[0b,0d,09,0e]]
    for (var c = 0; c < 4; c++) {
      var b0 = t[4 * c], b1 = t[4 * c + 1], b2 = t[4 * c + 2], b3 = t[4 * c + 3];
      s[4 * c    ] = mul(b0, 0x0e) ^ mul(b1, 0x0b) ^ mul(b2, 0x0d) ^ mul(b3, 0x09);
      s[4 * c + 1] = mul(b0, 0x09) ^ mul(b1, 0x0e) ^ mul(b2, 0x0b) ^ mul(b3, 0x0d);
      s[4 * c + 2] = mul(b0, 0x0d) ^ mul(b1, 0x09) ^ mul(b2, 0x0e) ^ mul(b3, 0x0b);
      s[4 * c + 3] = mul(b0, 0x0b) ^ mul(b1, 0x0d) ^ mul(b2, 0x09) ^ mul(b3, 0x0e);
    }
  }

  // Final round (no InvMixColumns): InvShiftRows + InvSubBytes + AddRoundKey(w0)
  var f = new Uint8Array(16);
  f[0] = s[0]; f[4] = s[4]; f[8] = s[8]; f[12] = s[12];
  f[1] = s[13]; f[5] = s[1]; f[9] = s[5]; f[13] = s[9];
  f[2] = s[10]; f[6] = s[14]; f[10] = s[2]; f[14] = s[6];
  f[3] = s[7]; f[7] = s[11]; f[11] = s[15]; f[15] = s[3];
  for (var j = 0; j < 16; j++) out[outOff + j] = Si[f[j]] ^ w[j];
}

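For reference, the InvMixColumns step inside the round loop above is the FIPS-197 §5.3.3 matrix product over GF(2^8), restating the M^-1 comment in the code (each product via mul(), each sum an XOR):

$$
\begin{pmatrix} s'_{0,c} \\ s'_{1,c} \\ s'_{2,c} \\ s'_{3,c} \end{pmatrix}
=
\begin{pmatrix}
\mathtt{0e} & \mathtt{0b} & \mathtt{0d} & \mathtt{09} \\
\mathtt{09} & \mathtt{0e} & \mathtt{0b} & \mathtt{0d} \\
\mathtt{0d} & \mathtt{09} & \mathtt{0e} & \mathtt{0b} \\
\mathtt{0b} & \mathtt{0d} & \mathtt{09} & \mathtt{0e}
\end{pmatrix}
\begin{pmatrix} s_{0,c} \\ s_{1,c} \\ s_{2,c} \\ s_{3,c} \end{pmatrix}
$$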
function decrypt(key, ciphertext) {
  if (!(ciphertext instanceof Uint8Array)) {
    throw new Error('AES-ECB: ciphertext must be a Uint8Array');
  }
  if (ciphertext.length === 0 || ciphertext.length % 16 !== 0) {
    throw new Error('AES-ECB: ciphertext length must be a non-zero multiple of 16');
  }
  var w = expandKey(key instanceof Uint8Array ? key : new Uint8Array(key));
  var out = new Uint8Array(ciphertext.length);
  var block = new Uint8Array(16);
  for (var i = 0; i < ciphertext.length; i += 16) {
    for (var b = 0; b < 16; b++) block[b] = ciphertext[i + b];
    decryptBlock(block, w, out, i);
  }
  return out;
}

// Suppress lint by referencing xt (we kept it for clarity in case future
// code wants it; the compiled `mul` function is fully self-contained).
void xt;

root.AES_ECB = { decrypt: decrypt };
})(typeof window !== 'undefined' ? window : (typeof self !== 'undefined' ? self : this));
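// Editorial usage sketch (not part of the vendored file): decrypting the
// FIPS-197 Appendix C.1 known-answer vector with this module; hexToBytes /
// bytesToHex stand in for any hex helpers (ChannelDecrypt exposes both):
//
//   var key = hexToBytes('000102030405060708090a0b0c0d0e0f');
//   var ct  = hexToBytes('69c4e0d86a7b0430d8cdb78070b4c55a');
//   var pt  = AES_ECB.decrypt(key, ct);
//   // bytesToHex(pt) === '00112233445566778899aabbccddeeff'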
Vendored
-152
@@ -1,152 +0,0 @@
/* SPDX-License-Identifier: MIT
 *
 * Minimal pure-JS SHA-256 + HMAC-SHA256.
 *
 * Why: Web Crypto's SubtleCrypto (`window.crypto.subtle`) is only exposed
 * in **secure contexts** (HTTPS or localhost). When CoreScope is served
 * over plain HTTP — common for self-hosted instances and LAN-side
 * deployments — `crypto.subtle` is undefined and any
 * `crypto.subtle.digest(...)` / `crypto.subtle.importKey(...)` call
 * throws `Cannot read properties of undefined`. PR #1021 fixed the
 * AES-ECB path for the same reason; this module does the same for the
 * SHA-256 / HMAC paths used by `computeChannelHash` and `verifyMAC`.
 *
 * Implementation: textbook FIPS-180-4 SHA-256 + RFC 2104 HMAC. Operates
 * on Uint8Array inputs; returns Uint8Array outputs. ~120 LOC, no deps.
 *
 * API:
 *   window.PureCrypto.sha256(bytes: Uint8Array) -> Uint8Array(32)
 *   window.PureCrypto.hmacSha256(key: Uint8Array, msg: Uint8Array) -> Uint8Array(32)
 */
/* eslint-disable no-var */
(function (root) {
'use strict';

// SHA-256 round constants (FIPS-180-4 §4.2.2).
var K = new Uint32Array([
  0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
  0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
  0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
  0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
  0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
  0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
  0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
  0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
]);

function ror(x, n) { return (x >>> n) | (x << (32 - n)); }

// Process a single 64-byte block, mutating `H` (8 × uint32 state).
function processBlock(H, M) {
  var W = new Uint32Array(64);
  for (var i = 0; i < 16; i++) {
    W[i] = (M[i * 4] << 24) | (M[i * 4 + 1] << 16) | (M[i * 4 + 2] << 8) | M[i * 4 + 3];
  }
  for (var t = 16; t < 64; t++) {
    var s0 = ror(W[t - 15], 7) ^ ror(W[t - 15], 18) ^ (W[t - 15] >>> 3);
    var s1 = ror(W[t - 2], 17) ^ ror(W[t - 2], 19) ^ (W[t - 2] >>> 10);
    W[t] = (W[t - 16] + s0 + W[t - 7] + s1) >>> 0;
  }

  var a = H[0], b = H[1], c = H[2], d = H[3];
  var e = H[4], f = H[5], g = H[6], h = H[7];

  for (var j = 0; j < 64; j++) {
    var S1 = ror(e, 6) ^ ror(e, 11) ^ ror(e, 25);
    var ch = (e & f) ^ ((~e) & g);
    var temp1 = (h + S1 + ch + K[j] + W[j]) >>> 0;
    var S0 = ror(a, 2) ^ ror(a, 13) ^ ror(a, 22);
    var maj = (a & b) ^ (a & c) ^ (b & c);
    var temp2 = (S0 + maj) >>> 0;

    h = g; g = f; f = e;
    e = (d + temp1) >>> 0;
    d = c; c = b; b = a;
    a = (temp1 + temp2) >>> 0;
  }

  H[0] = (H[0] + a) >>> 0;
  H[1] = (H[1] + b) >>> 0;
  H[2] = (H[2] + c) >>> 0;
  H[3] = (H[3] + d) >>> 0;
  H[4] = (H[4] + e) >>> 0;
  H[5] = (H[5] + f) >>> 0;
  H[6] = (H[6] + g) >>> 0;
  H[7] = (H[7] + h) >>> 0;
}

function sha256(bytes) {
  if (!(bytes instanceof Uint8Array)) {
    throw new Error('sha256: input must be a Uint8Array');
  }
  var bitLen = bytes.length * 8;
  // Padding: 0x80 then zeros until length ≡ 56 (mod 64), then 8-byte big-endian bit-length.
  var padLen = ((bytes.length + 9 + 63) & ~63) - bytes.length;
  var padded = new Uint8Array(bytes.length + padLen);
  padded.set(bytes, 0);
  padded[bytes.length] = 0x80;
  // 64-bit big-endian bit length. JS bitwise ops are 32-bit, so split.
  var hi = Math.floor(bitLen / 0x100000000);
  var lo = bitLen >>> 0;
  var off = padded.length - 8;
  padded[off] = (hi >>> 24) & 0xff;
  padded[off + 1] = (hi >>> 16) & 0xff;
  padded[off + 2] = (hi >>> 8) & 0xff;
  padded[off + 3] = hi & 0xff;
  padded[off + 4] = (lo >>> 24) & 0xff;
  padded[off + 5] = (lo >>> 16) & 0xff;
  padded[off + 6] = (lo >>> 8) & 0xff;
  padded[off + 7] = lo & 0xff;

  var H = new Uint32Array([
    0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
    0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19
  ]);

  for (var i = 0; i < padded.length; i += 64) {
    processBlock(H, padded.subarray(i, i + 64));
  }

  var out = new Uint8Array(32);
  for (var k = 0; k < 8; k++) {
    out[k * 4] = (H[k] >>> 24) & 0xff;
    out[k * 4 + 1] = (H[k] >>> 16) & 0xff;
    out[k * 4 + 2] = (H[k] >>> 8) & 0xff;
    out[k * 4 + 3] = H[k] & 0xff;
  }
  return out;
}

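// Editorial sanity check (not part of the vendored file): SHA-256 of the
// empty input is the well-known constant e3b0c442…b855, so
// sha256(new Uint8Array(0)) must start with bytes 0xe3, 0xb0, 0xc4, 0x42.
// Padding arithmetic for the empty message:
//   padLen = ((0 + 9 + 63) & ~63) - 0 = 64
// i.e. one full 64-byte block: 0x80, then 55 zero bytes, then the 8-byte
// big-endian bit length (all zero here).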
// RFC 2104 HMAC.
function hmacSha256(key, msg) {
  if (!(key instanceof Uint8Array) || !(msg instanceof Uint8Array)) {
    throw new Error('hmacSha256: key and msg must be Uint8Array');
  }
  var blockSize = 64;
  var k = key;
  if (k.length > blockSize) k = sha256(k);
  if (k.length < blockSize) {
    var padded = new Uint8Array(blockSize);
    padded.set(k, 0);
    k = padded;
  }
  var oKeyPad = new Uint8Array(blockSize);
  var iKeyPad = new Uint8Array(blockSize);
  for (var i = 0; i < blockSize; i++) {
    oKeyPad[i] = k[i] ^ 0x5c;
    iKeyPad[i] = k[i] ^ 0x36;
  }
  var inner = new Uint8Array(blockSize + msg.length);
  inner.set(iKeyPad, 0);
  inner.set(msg, blockSize);
  var innerHash = sha256(inner);
  var outer = new Uint8Array(blockSize + innerHash.length);
  outer.set(oKeyPad, 0);
  outer.set(innerHash, blockSize);
  return sha256(outer);
}

root.PureCrypto = { sha256: sha256, hmacSha256: hmacSha256 };
})(typeof window !== 'undefined' ? window
  : typeof self !== 'undefined' ? self
  : this);
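// Editorial usage sketch (not part of the vendored file): RFC 4231 test
// case 1, the same KAT quoted in test-channel-decrypt-insecure-context.js;
// hex() stands in for any bytes→hex helper:
//
//   var key = new Uint8Array(20).fill(0x0b);
//   var msg = new TextEncoder().encode('Hi There');
//   var mac = window.PureCrypto.hmacSha256(key, msg);
//   // hex(mac) === 'b0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7'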
+111
@@ -59,7 +59,118 @@ test('null lastSeenMs → stale', () => assert.strictEqual(getNodeStatus('repeat
test('undefined lastSeenMs → stale', () => assert.strictEqual(getNodeStatus('repeater', undefined), 'stale'));
test('0 lastSeenMs → stale', () => assert.strictEqual(getNodeStatus('repeater', 0), 'stale'));

// === getStatusInfo tests (inline since nodes.js has too many DOM deps) ===
console.log('\n=== getStatusInfo (logic validation) ===');

// Simulate getStatusInfo logic
function mockGetStatusInfo(n) {
  const ROLE_COLORS = ctx.window.ROLE_COLORS;
  const role = (n.role || '').toLowerCase();
  const roleColor = ROLE_COLORS[n.role] || '#6b7280';
  const lastHeardTime = n._lastHeard || n.last_heard || n.last_seen;
  const lastHeardMs = lastHeardTime ? new Date(lastHeardTime).getTime() : 0;
  const status = getNodeStatus(role, lastHeardMs);
  const statusLabel = status === 'active' ? '🟢 Active' : '⚪ Stale';
  const isInfra = role === 'repeater' || role === 'room';

  let explanation = '';
  if (status === 'active') {
    explanation = 'Last heard recently';
  } else {
    const reason = isInfra
      ? 'repeaters typically advertise every 12-24h'
      : 'companions only advertise when user initiates, this may be normal';
    explanation = 'Not heard — ' + reason;
  }
  return { status, statusLabel, roleColor, explanation, role };
}

test('active repeater → 🟢 Active, red color', () => {
  const info = mockGetStatusInfo({ role: 'repeater', last_seen: new Date(now - 1*h).toISOString() });
  assert.strictEqual(info.status, 'active');
  assert.strictEqual(info.statusLabel, '🟢 Active');
  assert.strictEqual(info.roleColor, '#dc2626');
});

test('stale companion → ⚪ Stale, explanation mentions "this may be normal"', () => {
  const info = mockGetStatusInfo({ role: 'companion', last_seen: new Date(now - 25*h).toISOString() });
  assert.strictEqual(info.status, 'stale');
  assert.strictEqual(info.statusLabel, '⚪ Stale');
  assert(info.explanation.includes('this may be normal'), 'should mention "this may be normal"');
});

test('missing last_seen → stale', () => {
  const info = mockGetStatusInfo({ role: 'repeater' });
  assert.strictEqual(info.status, 'stale');
});

test('missing role → defaults to empty string, uses node threshold', () => {
  const info = mockGetStatusInfo({ last_seen: new Date(now - 25*h).toISOString() });
  assert.strictEqual(info.status, 'stale');
  assert.strictEqual(info.roleColor, '#6b7280');
});

test('prefers last_heard over last_seen', () => {
  // last_seen is stale, but last_heard is recent
  const info = mockGetStatusInfo({
    role: 'companion',
    last_seen: new Date(now - 48*h).toISOString(),
    last_heard: new Date(now - 1*h).toISOString()
  });
  assert.strictEqual(info.status, 'active');
});

// === getStatusTooltip tests ===
console.log('\n=== getStatusTooltip ===');

// Load from nodes.js by extracting the function.
// Since nodes.js is complex, we re-implement the tooltip function for testing.
function getStatusTooltip(role, status) {
  const isInfra = role === 'repeater' || role === 'room';
  const threshold = isInfra ? '72h' : '24h';
  if (status === 'active') {
    return 'Active — heard within the last ' + threshold + '.' + (isInfra ? ' Repeaters typically advertise every 12-24h.' : '');
  }
  if (role === 'companion') {
    return 'Stale — not heard for over ' + threshold + '. Companions only advertise when the user initiates — this may be normal.';
  }
  if (role === 'sensor') {
    return 'Stale — not heard for over ' + threshold + '. This sensor may be offline.';
  }
  return 'Stale — not heard for over ' + threshold + '. This ' + role + ' may be offline or out of range.';
}

test('active repeater mentions "72h" and "advertise every 12-24h"', () => {
  const tip = getStatusTooltip('repeater', 'active');
  assert(tip.includes('72h'), 'should mention 72h');
  assert(tip.includes('advertise every 12-24h'), 'should mention advertise frequency');
});

test('active companion mentions "24h"', () => {
  const tip = getStatusTooltip('companion', 'active');
  assert(tip.includes('24h'), 'should mention 24h');
});

test('stale companion mentions "24h" and "user initiates"', () => {
  const tip = getStatusTooltip('companion', 'stale');
  assert(tip.includes('24h'), 'should mention 24h');
  assert(tip.includes('user initiates'), 'should mention user initiates');
});

test('stale repeater mentions "offline or out of range"', () => {
  const tip = getStatusTooltip('repeater', 'stale');
  assert(tip.includes('offline or out of range'), 'should mention offline or out of range');
});

test('stale sensor mentions "sensor may be offline"', () => {
  const tip = getStatusTooltip('sensor', 'stale');
  assert(tip.includes('sensor may be offline'));
});

test('stale room uses 72h threshold', () => {
  const tip = getStatusTooltip('room', 'stale');
  assert(tip.includes('72h'));
});

// === Bug check: renderRows uses last_seen instead of last_heard || last_seen ===
console.log('\n=== BUG CHECK ===');

@@ -13,8 +13,6 @@ node test-packet-filter.js
node test-aging.js
node test-frontend-helpers.js
node test-perf-go-runtime.js
node test-channel-psk-ux.js
node test-channel-decrypt-insecure-context.js

echo ""
echo "═══════════════════════════════════════"

@@ -0,0 +1,123 @@
/**
 * test-anim-perf.js — Performance benchmark for animation timer management
 *
 * Demonstrates that the rAF + concurrency-cap approach keeps the active
 * animation count bounded, whereas the old setInterval approach accumulated
 * without limit.
 *
 * Run: node test-anim-perf.js
 */

'use strict';

let passed = 0, failed = 0;
function assert(cond, msg) {
  if (cond) { console.log(`  ✅ ${msg}`); passed++; }
  else { console.log(`  ❌ ${msg}`); failed++; }
}

// ---------------------------------------------------------------------------
// Simulate OLD behaviour: setInterval-based, no concurrency cap
// ---------------------------------------------------------------------------
function simulateOldModel(packetsPerSec, hopsPerPacket, durationSec) {
  // Each hop spawns 3 intervals (pulse 26ms, line 33ms, fade 52ms).
  // Pulse lasts ~2s, line ~0.66s, fade ~0.8s+0.4s ≈ 1.2s.
  // At any moment, timers from the last ~2s of packets are still alive.
  const intervalLifetimes = [2.0, 0.66, 1.2]; // seconds each interval lives
  let maxConcurrent = 0;
  // Walk through time in 0.1s steps
  const dt = 0.1;
  const spawns = []; // {time, lifetime}
  for (let t = 0; t < durationSec; t += dt) {
    // Spawn timers for packets arriving in this window
    const pktsInWindow = packetsPerSec * dt;
    for (let p = 0; p < pktsInWindow; p++) {
      for (let h = 0; h < hopsPerPacket; h++) {
        for (const lt of intervalLifetimes) {
          spawns.push({ time: t, lifetime: lt });
        }
      }
    }
    // Count alive timers
    const alive = spawns.filter(s => t < s.time + s.lifetime).length;
    if (alive > maxConcurrent) maxConcurrent = alive;
  }
  return maxConcurrent;
}

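// Editorial back-of-envelope check of the old model: the loop above spawns
// at least one packet per 0.1s step, because `p < pktsInWindow` runs once
// even when pktsInWindow = 0.5, so the effective rate is
// max(packetsPerSec, 10) pkt/s. Steady-state alive timers ≈
// rate × hops × Σ lifetimes = 10 × 3 × (2.0 + 0.66 + 1.2) ≈ 116,
// which is why the `> 100` assertions below hold.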
// ---------------------------------------------------------------------------
// Simulate NEW behaviour: rAF + MAX_CONCURRENT_ANIMS cap
// ---------------------------------------------------------------------------
function simulateNewModel(packetsPerSec, hopsPerPacket, durationSec) {
  const MAX_CONCURRENT_ANIMS = 20;
  let activeAnims = 0;
  let maxConcurrent = 0;
  const anims = []; // {endTime}
  const dt = 0.1;
  for (let t = 0; t < durationSec; t += dt) {
    // Expire finished animations
    while (anims.length && anims[0].endTime <= t) {
      anims.shift();
      activeAnims--;
    }
    // Try to start new animations
    const pktsInWindow = packetsPerSec * dt;
    for (let p = 0; p < pktsInWindow; p++) {
      if (activeAnims >= MAX_CONCURRENT_ANIMS) break; // cap reached — drop
      activeAnims++;
      // rAF animation lifetime: longest is pulse ~2s
      anims.push({ endTime: t + 2.0 });
    }
    // Sort by endTime so expiry works
    anims.sort((a, b) => a.endTime - b.endTime);
    if (activeAnims > maxConcurrent) maxConcurrent = activeAnims;
  }
  return maxConcurrent;
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

console.log('\n=== Animation timer accumulation: old vs new ===');

// Scenario: 5 pkts/sec, 3 hops each, 30 seconds
const oldPeak30s = simulateOldModel(5, 3, 30);
const newPeak30s = simulateNewModel(5, 3, 30);
console.log(`  Old model (30s @ 5pkt/s×3hops): peak ${oldPeak30s} concurrent timers`);
console.log(`  New model (30s @ 5pkt/s×3hops): peak ${newPeak30s} concurrent animations`);
assert(oldPeak30s > 100, `old model accumulates >100 timers (got ${oldPeak30s})`);
assert(newPeak30s <= 20, `new model stays ≤20 (got ${newPeak30s})`);

// Scenario: 5 minutes sustained
const oldPeak5m = simulateOldModel(5, 3, 300);
const newPeak5m = simulateNewModel(5, 3, 300);
console.log(`  Old model (5min @ 5pkt/s×3hops): peak ${oldPeak5m} concurrent timers`);
console.log(`  New model (5min @ 5pkt/s×3hops): peak ${newPeak5m} concurrent animations`);
assert(oldPeak5m > 100, `old model at 5min still unbounded (got ${oldPeak5m})`);
assert(newPeak5m <= 20, `new model at 5min still ≤20 (got ${newPeak5m})`);

// Scenario: burst — 20 pkts/sec for 10s
const oldBurst = simulateOldModel(20, 3, 10);
const newBurst = simulateNewModel(20, 3, 10);
console.log(`  Old model (burst 20pkt/s×3hops, 10s): peak ${oldBurst} concurrent timers`);
console.log(`  New model (burst 20pkt/s×3hops, 10s): peak ${newBurst} concurrent animations`);
assert(oldBurst > 200, `old model under burst >200 timers (got ${oldBurst})`);
assert(newBurst <= 20, `new model under burst stays ≤20 (got ${newBurst})`);

console.log('\n=== drawAnimatedLine frame-drop catch-up ===');

// Read the source and verify catch-up logic exists
const fs = require('fs');
const src = fs.readFileSync(__dirname + '/public/live.js', 'utf8');

// Extract the animateLine function body
const lineMatch = src.match(/function animateLine\(now\)\s*\{[\s\S]*?requestAnimationFrame\(animateLine\)/);
assert(lineMatch && /Math\.min\(Math\.floor\(elapsed\s*\/\s*33\)/.test(lineMatch[0]),
  'drawAnimatedLine catches up on frame drops (multi-tick per frame)');

const fadeMatch = src.match(/function animateFade\(now\)\s*\{[\s\S]*?requestAnimationFrame\(animateFade\)/);
assert(fadeMatch && /Math\.min\(Math\.floor\(fadeElapsed\s*\/\s*52\)/.test(fadeMatch[0]),
  'animateFade catches up on frame drops (multi-tick per frame)');

console.log(`\n${passed} passed, ${failed} failed\n`);
process.exit(failed ? 1 : 0);
@@ -0,0 +1,64 @@
/**
 * Tests for #759 — Add channel UX: button, hint, status feedback.
 * Validates the HTML structure rendered by channels.js init.
 */
'use strict';

const fs = require('fs');

let passed = 0;
let failed = 0;

function assert(cond, msg) {
  if (cond) { passed++; console.log('  ✓ ' + msg); }
  else { failed++; console.error('  ✗ ' + msg); }
}

function assertIncludes(html, substr, msg) {
  assert(html.includes(substr), msg);
}

// Read the channels.js source to extract the HTML template
const src = fs.readFileSync(__dirname + '/public/channels.js', 'utf8');

// Extract the sidebar HTML from the template literal
const htmlMatch = src.match(/app\.innerHTML\s*=\s*`([\s\S]*?)`;/);
const html = htmlMatch ? htmlMatch[1] : '';

console.log('Test: Add channel UX (#759)');

// 1. Button renders in the form
assertIncludes(html, 'class="ch-add-btn"', 'Add button has ch-add-btn class');
assertIncludes(html, 'type="submit"', 'Button is type=submit');
assertIncludes(html, '>+</button>', 'Button shows + text');

// 2. Form has proper structure
assertIncludes(html, 'class="ch-add-form"', 'Form has ch-add-form class');
assertIncludes(html, 'class="ch-add-row"', 'Row wrapper present');
assert(!html.includes('class="ch-add-label"'), 'Label removed (redundant with hint)');

// 3. Hint text present
assertIncludes(html, 'class="ch-add-hint"', 'Hint div present');
assertIncludes(html, 'e.g. #LongFast or 32-char hex key', 'Hint text correct');

// 4. Status div present
assertIncludes(html, 'id="chAddStatus"', 'Status div has correct id');
assertIncludes(html, 'class="ch-add-status"', 'Status div has correct class');
assertIncludes(html, 'style="display:none"', 'Status div hidden by default');

// 5. showAddStatus function exists in source
assert(src.includes('function showAddStatus('), 'showAddStatus function defined');
assert(src.includes("'success'"), 'Success status type referenced');
assert(src.includes("'error'"), 'Error status type referenced');

// 6. CSS classes exist
const css = fs.readFileSync(__dirname + '/public/style.css', 'utf8');
assert(css.includes('.ch-add-form'), 'CSS: .ch-add-form defined');
assert(css.includes('.ch-add-btn'), 'CSS: .ch-add-btn defined');
assert(css.includes('.ch-add-hint'), 'CSS: .ch-add-hint defined');
assert(css.includes('.ch-add-status'), 'CSS: .ch-add-status defined');
assert(css.includes('.ch-add-row'), 'CSS: .ch-add-row defined');
// .ch-add-label CSS kept for backward compat but label removed from HTML

console.log('\n' + passed + ' passed, ' + failed + ' failed');
process.exit(failed > 0 ? 1 : 0);
@@ -1,112 +0,0 @@
/**
 * Tests for AES-128-ECB decryption in public/channel-decrypt.js.
 *
 * Background: the original implementation simulated ECB via Web Crypto
 * AES-CBC with a zero IV and a dummy PKCS7 padding block. Web Crypto
 * validates PKCS7 padding on the decrypted output and throws an
 * `OperationError` whenever the last 16 bytes of the (CBC-decrypted)
 * output don't form a valid PKCS7 padding sequence — which is the
 * common case here, since the input is real ciphertext, not a padded
 * second block. This test pins decryptECB() to the FIPS-197 NIST
 * AES-128-ECB known-answer vector (Appendix B / C.1) so that the
 * implementation cannot regress to any Web Crypto + ECB hack.
 *
 * Vector (FIPS-197 Appendix C.1, single-block AES-128 ECB):
 *   key        = 000102030405060708090a0b0c0d0e0f
 *   plaintext  = 00112233445566778899aabbccddeeff
 *   ciphertext = 69c4e0d86a7b0430d8cdb78070b4c55a
 */
'use strict';

const vm = require('vm');
const fs = require('fs');
const path = require('path');
const { subtle } = require('crypto').webcrypto;

let passed = 0;
let failed = 0;

function assert(cond, msg) {
  if (cond) { passed++; console.log('  ✓ ' + msg); }
  else { failed++; console.error('  ✗ ' + msg); }
}

function loadChannelDecrypt() {
  const storage = {};
  const localStorage = {
    getItem: (k) => storage[k] !== undefined ? storage[k] : null,
    setItem: (k, v) => { storage[k] = String(v); },
    removeItem: (k) => { delete storage[k]; },
  };
  const sandbox = {
    window: {}, crypto: { subtle }, TextEncoder, TextDecoder, Uint8Array,
    localStorage, console, Date, JSON, parseInt, Math, String, Number,
    Object, Array, RegExp, Error, Promise, setTimeout,
  };
  sandbox.window = sandbox; sandbox.self = sandbox;
  vm.createContext(sandbox);

  // Load vendored AES (if present) before channel-decrypt.js.
  const vendorPath = path.join(__dirname, 'public/vendor/aes-ecb.js');
  if (fs.existsSync(vendorPath)) {
    vm.runInContext(fs.readFileSync(vendorPath, 'utf8'), sandbox);
  }
  vm.runInContext(
    fs.readFileSync(path.join(__dirname, 'public/channel-decrypt.js'), 'utf8'),
    sandbox
  );
  return sandbox.window.ChannelDecrypt;
}

async function runTests() {
  console.log('\n=== AES-128-ECB known-answer vector (FIPS-197 C.1) ===');

  const CD = loadChannelDecrypt();

  const key = CD.hexToBytes('000102030405060708090a0b0c0d0e0f');
  const ct = CD.hexToBytes('69c4e0d86a7b0430d8cdb78070b4c55a');
  const expectedPlaintextHex = '00112233445566778899aabbccddeeff';

  let result, threw = null;
  try {
    result = await CD.decryptECB(key, ct);
  } catch (e) {
    threw = e;
  }

  assert(threw === null, 'decryptECB does not throw on valid ciphertext (got: ' + (threw && threw.message) + ')');
  assert(result instanceof Uint8Array, 'decryptECB returns a Uint8Array');
  assert(
    result && CD.bytesToHex(result) === expectedPlaintextHex,
    'decryptECB matches FIPS-197 vector (got ' + (result ? CD.bytesToHex(result) : 'null') + ')'
  );

  // Multi-block: two copies of the same block must produce two copies
  // of the same plaintext (true ECB property — no chaining).
  console.log('\n=== AES-128-ECB multi-block (no chaining) ===');
  const ct2 = new Uint8Array(32);
  ct2.set(ct, 0); ct2.set(ct, 16);
  let result2, threw2 = null;
  try { result2 = await CD.decryptECB(key, ct2); }
  catch (e) { threw2 = e; }
  assert(threw2 === null, 'decryptECB does not throw on 2-block ciphertext');
  assert(
    result2 &&
    CD.bytesToHex(result2.slice(0, 16)) === expectedPlaintextHex &&
    CD.bytesToHex(result2.slice(16, 32)) === expectedPlaintextHex,
    'decryptECB on duplicated block yields duplicated plaintext (ECB, no chaining)'
  );

  // Empty / misaligned input must return null (existing contract).
  console.log('\n=== Edge cases ===');
  const empty = await CD.decryptECB(key, new Uint8Array(0));
  assert(empty === null, 'empty ciphertext returns null');
  const misaligned = await CD.decryptECB(key, new Uint8Array(15));
  assert(misaligned === null, 'misaligned ciphertext returns null');

  console.log('\n=== Results ===');
  console.log('Passed: ' + passed + ', Failed: ' + failed);
  process.exit(failed > 0 ? 1 : 0);
}

runTests().catch(e => { console.error(e); process.exit(1); });
@@ -1,181 +0,0 @@
/**
 * Tests that channel decryption works in an "insecure context" — i.e. when
 * `window.crypto.subtle` is undefined.
 *
 * Why: when CoreScope is served over plain HTTP (or accessed via a non-https
 * origin like `http://<lan-ip>:8080`), browsers refuse to expose
 * `crypto.subtle` (it requires a secure context). The original
 * `channel-decrypt.js` used `crypto.subtle.digest('SHA-256', …)` for
 * `computeChannelHash` and `crypto.subtle.importKey(…)` +
 * `crypto.subtle.sign('HMAC', …)` for `verifyMAC`. PR #1021 fixed only the
 * AES-ECB path with a pure-JS vendor module, but left the SHA-256 and HMAC
 * paths pinned to `crypto.subtle`. Result on HTTP origins:
 *
 *   addUserChannel("372a9c93260507adcbf36a84bec0f33d")
 *     -> computeChannelHash(key) throws "Cannot read properties of undefined
 *        (reading 'digest')"
 *     -> caught silently by addUserChannel's try/catch
 *     -> user sees "Failed to decrypt"
 *
 * This test sandboxes channel-decrypt.js with `crypto.subtle === undefined`
 * and asserts both `computeChannelHash` and `verifyMAC` still work, using
 * a pure-JS SHA-256 / HMAC-SHA256 fallback.
 *
 * Reference vectors:
 *   key bytes   = 0x37,0x2a,0x9c,0x93,0x26,0x05,0x07,0xad,0xcb,0xf3,0x6a,0x84,0xbe,0xc0,0xf3,0x3d
 *   SHA256(key) = b7ce04f7d9019788b69e709ffb796a36d00225818b444ad4f8979bc1d1445f47
 *   -> first byte (channel hash) = 0xb7 = 183
 *
 * HMAC-SHA256 KAT (RFC 4231 Test Case 1):
 *   key  = 0x0b * 20
 *   data = "Hi There"
 *   mac  = b0344c61d8db38535ca8afceaf0bf12b881dc200c9833da726e9376c2e32cff7
 */
'use strict';

const vm = require('vm');
const fs = require('fs');
const path = require('path');

let passed = 0;
let failed = 0;

function assert(cond, msg) {
  if (cond) { passed++; console.log('  ✓ ' + msg); }
  else { failed++; console.error('  ✗ ' + msg); }
}

function loadChannelDecryptInsecureContext() {
  const storage = {};
  const localStorage = {
    getItem: (k) => storage[k] !== undefined ? storage[k] : null,
    setItem: (k, v) => { storage[k] = String(v); },
    removeItem: (k) => { delete storage[k]; },
  };
  // CRITICAL: crypto present, but no .subtle. Mirrors browser HTTP context.
  const insecureCrypto = {};
  const sandbox = {
    window: {}, crypto: insecureCrypto, TextEncoder, TextDecoder, Uint8Array,
    localStorage, console, Date, JSON, parseInt, Math, String, Number,
    Object, Array, RegExp, Error, Promise, setTimeout,
  };
  sandbox.window = sandbox; sandbox.self = sandbox;
  vm.createContext(sandbox);

  // Vendored AES (must load before channel-decrypt.js — same as index.html).
  const vendorAesPath = path.join(__dirname, 'public/vendor/aes-ecb.js');
  if (fs.existsSync(vendorAesPath)) {
    vm.runInContext(fs.readFileSync(vendorAesPath, 'utf8'), sandbox);
  }
  // Optional vendored SHA-256 / HMAC (the fix). Load if present so the test
  // works whether the fix vendors it as a separate file OR inlines it into
  // channel-decrypt.js.
  const vendorShaPath = path.join(__dirname, 'public/vendor/sha256-hmac.js');
  if (fs.existsSync(vendorShaPath)) {
    vm.runInContext(fs.readFileSync(vendorShaPath, 'utf8'), sandbox);
  }

  vm.runInContext(
    fs.readFileSync(path.join(__dirname, 'public/channel-decrypt.js'), 'utf8'),
    sandbox
  );
  return sandbox.window.ChannelDecrypt;
}

async function runTests() {
  console.log('\n=== channel-decrypt.js works without crypto.subtle (HTTP-context) ===');
  const CD = loadChannelDecryptInsecureContext();

  // 1) computeChannelHash() — pure SHA-256 of 16-byte key, take byte 0.
  const KEY_HEX = '372a9c93260507adcbf36a84bec0f33d';
  const keyBytes = CD.hexToBytes(KEY_HEX);

  let hashByte, threwHash = null;
  try {
    hashByte = await CD.computeChannelHash(keyBytes);
  } catch (e) {
    threwHash = e;
  }
  assert(threwHash === null,
    'computeChannelHash does not throw without crypto.subtle (got: ' +
    (threwHash && threwHash.message) + ')');
  assert(hashByte === 0xb7,
    'computeChannelHash returns 0xb7 for known PSK key (got: ' + hashByte + ')');

  // 2) verifyMAC() — HMAC-SHA256 checked against an independent oracle.
  //    verifyMAC's HMAC key is `aesKey ++ 16 zero bytes` (32 bytes), so the
  //    20-byte RFC 4231 TC1 key listed in the header can't be fed through it
  //    directly (verifyMAC zero-fills bytes 16..31). Instead we construct the
  //    secret manually and call verifyMAC on a synthetic ciphertext whose
  //    HMAC-SHA256 first 2 bytes we precompute with Node's crypto module
  //    (an independent oracle).
  const nodeCrypto = require('crypto');
  const aesKey = new Uint8Array(16); for (let i = 0; i < 16; i++) aesKey[i] = 0xab;
  const ct = new Uint8Array(16); for (let i = 0; i < 16; i++) ct[i] = i;
  const secret = Buffer.alloc(32); Buffer.from(aesKey).copy(secret, 0);
  const fullMac = nodeCrypto.createHmac('sha256', secret).update(Buffer.from(ct)).digest();
  const expectedMacHex = fullMac.slice(0, 2).toString('hex');

  let macOk, threwMac = null;
  try {
    macOk = await CD.verifyMAC(aesKey, ct, expectedMacHex);
  } catch (e) {
    threwMac = e;
  }
  assert(threwMac === null,
    'verifyMAC does not throw without crypto.subtle (got: ' +
    (threwMac && threwMac.message) + ')');
  assert(macOk === true,
    'verifyMAC returns true for valid 2-byte MAC (got: ' + macOk + ')');

  // 3) verifyMAC must still REJECT a wrong MAC.
  let macBad, threwMacBad = null;
  try {
    macBad = await CD.verifyMAC(aesKey, ct, '0000');
  } catch (e) {
    threwMacBad = e;
  }
  assert(threwMacBad === null,
    'verifyMAC does not throw on wrong MAC (got: ' + (threwMacBad && threwMacBad.message) + ')');
  assert(macBad === false,
    'verifyMAC returns false for wrong 2-byte MAC (got: ' + macBad + ')');

  // 4) End-to-end: decrypt() must work with subtle absent — exercises
  //    SHA-256 (key derivation already done) + HMAC + AES-ECB together.
  //    Build a synthetic encrypted packet from a known plaintext.
  const aesKey2 = nodeCrypto.randomBytes(16);
  const plaintext = Buffer.alloc(16);
  // timestamp(4 LE) + flags(1) + "alice: hi\0" then padded
  plaintext.writeUInt32LE(0x12345678, 0);
  plaintext[4] = 0x00;
  Buffer.from('alice: hi\0', 'utf8').copy(plaintext, 5);

  const cipher = nodeCrypto.createCipheriv('aes-128-ecb', aesKey2, null);
  cipher.setAutoPadding(false);
  const ct2 = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const secret2 = Buffer.alloc(32); aesKey2.copy(secret2, 0);
  const macHex2 = nodeCrypto.createHmac('sha256', secret2).update(ct2).digest().slice(0, 2).toString('hex');

  let decResult = null, threwDec = null;
  try {
    decResult = await CD.decrypt(new Uint8Array(aesKey2), macHex2, ct2.toString('hex'));
  } catch (e) {
    threwDec = e;
  }
  assert(threwDec === null,
    'decrypt() does not throw without crypto.subtle (got: ' +
    (threwDec && threwDec.message) + ')');
  assert(decResult && decResult.sender === 'alice' && decResult.message === 'hi',
    'decrypt() recovers sender + message in HTTP context (got: ' +
    JSON.stringify(decResult) + ')');

  console.log('\n=== Results ===');
  console.log('Passed: ' + passed + ', Failed: ' + failed);
  process.exit(failed > 0 ? 1 : 0);
}

runTests().catch(e => { console.error(e); process.exit(1); });
@@ -1,286 +0,0 @@
/**
 * Regression test: live PSK decrypt for user-added channels (#1029 follow-up).
 *
 * PR #1030 added decryptLivePSKBatch() which rewrites encrypted GRP_TXT
 * WS packets in place when a stored PSK key matches. It sets
 *     payload.channel = dec.channelName   (e.g. "medusa")
 * but user-added channels are stored in channels[] with hash:
 *     "user:medusa"
 * (and selectedHash is also "user:medusa" when viewing).
 *
 * Symptoms in production:
 *  - selectedHash === "user:medusa" but processWSBatch compares
 *    `channelName === selectedHash` ("medusa" !== "user:medusa") so a live
 *    packet for the open channel is NEVER appended to the message list.
 *  - channels.find(c => c.hash === channelName) misses the user channel and
 *    a duplicate plain entry "medusa" is pushed into the sidebar; the real
 *    user-added channel's lastMessage / messageCount / lastActivityMs never
 *    update.
 *  - The unread bumper guards with `chName === prior` (raw name vs prefixed
 *    selectedHash), so an unread badge is added even when the user IS
 *    actively viewing that channel.
 *
 * Fix: have the live decrypt rewrite annotate the payload with the
 * canonical channel hash that channels[] / selectedHash use. A simple,
 * non-breaking shape: keep payload.channel = name (so the rest of
 * processWSBatch keeps working for non-user channels), AND also set
 * payload.channelKey = "user:" + name when a user-added channel exists for
 * that name. processWSBatch then uses channelKey when present for the
 * lookup + selectedHash comparison.
 *
 * This test loads the real channels.js in a vm sandbox, primes a
 * user-added channel, drives an encrypted GRP_TXT through the WS handler
 * and asserts:
 *   1. the open channel's message list grows by 1 (text is decrypted locally
 *      and visible in the messages array)
 *   2. the user-added channel's messageCount / lastMessage update
 *   3. NO duplicate plain "medusa" entry is added to channels[]
 *   4. unread is NOT bumped on the channel currently being viewed
 */
'use strict';

const vm = require('vm');
const fs = require('fs');
const path = require('path');
const { createCipheriv, createHmac, createHash, webcrypto } = require('crypto');

let passed = 0;
let failed = 0;
function assert(cond, msg) {
  if (cond) { passed++; console.log('  ✓ ' + msg); }
  else { failed++; console.error('  ✗ ' + msg); }
}

function buildEncryptedGrpTxt(channelName, sender, message) {
  const key = createHash('sha256').update(channelName).digest().slice(0, 16);
  const channelHash = createHash('sha256').update(key).digest()[0];
  const text = `${sender}: ${message}`;
  const inner = 5 + Buffer.byteLength(text, 'utf8') + 1;
  const padded = Math.ceil(inner / 16) * 16;
  const pt = Buffer.alloc(padded);
  pt.writeUInt32LE(Math.floor(Date.now() / 1000), 0);
  pt[4] = 0;
  pt.write(text, 5, 'utf8');
  const cipher = createCipheriv('aes-128-ecb', key, null);
  cipher.setAutoPadding(false);
  const ct = Buffer.concat([cipher.update(pt), cipher.final()]);
  const secret = Buffer.concat([key, Buffer.alloc(16)]);
  const mac = createHmac('sha256', secret).update(ct).digest().slice(0, 2);
  return {
    payload: {
      type: 'GRP_TXT',
      channelHash,
      channelHashHex: channelHash.toString(16).padStart(2, '0'),
      mac: mac.toString('hex'),
      encryptedData: ct.toString('hex'),
      decryptionStatus: 'no_key',
    },
    keyHex: key.toString('hex'),
  };
}

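// Plaintext layout produced by buildEncryptedGrpTxt() above, before
// AES-128-ECB (editorial note, derived from the code):
//
//   offset 0..3   uint32 LE timestamp (seconds)
//   offset 4      flags byte (0x00)
//   offset 5..    "<sender>: <message>" UTF-8
//   then          an implicit NUL terminator plus zero padding, since the
//                 buffer is allocated at the next 16-byte boundary and
//                 Buffer.alloc() zero-fills.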
function makeBrowserLikeSandbox() {
  const storage = {};
  const elements = {};
  function makeFakeEl(id) {
    return {
      id: id || '', innerHTML: '', textContent: '', value: '', scrollTop: 0,
      scrollHeight: 0,
      style: {}, dataset: {},
      classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } },
      addEventListener() {}, removeEventListener() {},
      querySelector() { return makeFakeEl(); },
      querySelectorAll() { return []; },
      getAttribute() { return null; }, setAttribute() {},
      getBoundingClientRect() { return { width: 240, height: 0, top: 0, left: 0, right: 0, bottom: 0 }; },
      appendChild() {}, removeChild() {},
      focus() {}, blur() {},
      checked: false,
    };
  }
  function el(id) {
    if (!elements[id]) elements[id] = makeFakeEl(id);
    return elements[id];
  }
  const ctx = {
    window: {},
    document: {
      readyState: 'complete',
      documentElement: { getAttribute: () => null, setAttribute() {}, classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } } },
      createElement: () => ({ id: '', textContent: '', innerHTML: '', style: {}, classList: { add() {}, remove() {}, toggle() {}, contains() { return false; } }, addEventListener() {}, appendChild() {}, querySelector() { return null; }, querySelectorAll() { return []; } }),
      head: { appendChild() {} },
      body: { appendChild() {} },
      getElementById: el,
      addEventListener() {}, removeEventListener() {},
      querySelector: () => null,
      querySelectorAll: () => [],
    },
    console,
    Date, Math, Array, Object, String, Number, JSON, RegExp, Error, TypeError, Set, Map, Promise,
    parseInt, parseFloat, isNaN, isFinite,
    encodeURIComponent, decodeURIComponent,
    setTimeout: (fn) => { Promise.resolve().then(fn); return 0; },
    clearTimeout: () => {},
    setInterval: () => 0,
    clearInterval: () => {},
    fetch: () => Promise.resolve({ ok: true, json: () => Promise.resolve({}) }),
    performance: { now: () => Date.now() },
    localStorage: {
      getItem: (k) => Object.prototype.hasOwnProperty.call(storage, k) ? storage[k] : null,
      setItem: (k, v) => { storage[k] = String(v); },
      removeItem: (k) => { delete storage[k]; },
    },
    location: { hash: '' },
    history: { replaceState() {}, pushState() {} },
    crypto: webcrypto,
    TextEncoder, TextDecoder,
    Uint8Array, Uint16Array, Uint32Array, Int8Array, Int16Array, Int32Array, ArrayBuffer,
    URLSearchParams,
    CustomEvent: class CustomEvent {},
    MutationObserver: class MutationObserver { observe() {} disconnect() {} },
    requestAnimationFrame: (cb) => setTimeout(cb, 0),
    matchMedia: () => ({ matches: false, addEventListener() {}, removeEventListener() {} }),
    addEventListener() {}, dispatchEvent() {},
    getHashParams: () => new URLSearchParams(),
  };
  ctx.self = ctx;
  ctx.globalThis = ctx;
  vm.createContext(ctx);
  return ctx;
}

function loadInCtx(ctx, file) {
  const src = fs.readFileSync(path.join(__dirname, file), 'utf8');
  vm.runInContext(src, ctx, { filename: file });
  for (const k of Object.keys(ctx.window)) ctx[k] = ctx.window[k];
}

async function run() {
  console.log('\n=== Live PSK decrypt: user-added channel (user:* prefix) routing ===');

  const ctx = makeBrowserLikeSandbox();
  ctx.window.matchMedia = () => ({ matches: false, addEventListener() {}, removeEventListener() {} });
  ctx.window.addEventListener = () => {};
  ctx.btoa = (s) => Buffer.from(String(s), 'binary').toString('base64');
  ctx.atob = (s) => Buffer.from(String(s), 'base64').toString('binary');

  // App.js stubs: provide debouncedOnWS / onWS / offWS / api / debounce /
  // invalidateApiCache / registerPage so channels.js loads cleanly.
  let wsListeners = [];
  ctx.onWS = (fn) => { wsListeners.push(fn); };
  ctx.offWS = (fn) => { wsListeners = wsListeners.filter(f => f !== fn); };
  ctx.debouncedOnWS = function (fn) {
    function handler(msg) { fn([msg]); }
    wsListeners.push(handler);
    return handler;
  };
  ctx.debounce = (fn) => fn;
  ctx.api = () => Promise.resolve({ channels: [], observers: [] });
  ctx.invalidateApiCache = () => {};
  ctx.CLIENT_TTL = { channels: 60000, observers: 600000 };
  ctx.escapeHtml = (s) => String(s == null ? '' : s);
  ctx.truncate = (s, n) => { s = String(s || ''); return s.length > n ? s.slice(0, n) : s; };
  ctx.formatHashHex = (h) => String(h);
  ctx.formatSecondsAgo = () => '';
  ctx.payloadTypeName = () => 'GRP_TXT';
  ctx.RegionFilter = {
    init() {},
    onChange(fn) { return () => {}; },
    offChange() {},
    getRegionParam() { return ''; },
    getSelected() { return null; },
  };
  ctx.ChannelColors = { get() { return null; }, remove() {} };
  ctx.ChannelColorPicker = { open() {} };
  ctx.normalizeObserverNameKey = (s) => String(s || '').toLowerCase();
  let pageMod = null;
  ctx.registerPage = (name, mod) => { if (name === 'channels') pageMod = mod; };

  // Load AES + ChannelDecrypt + channels.js
  loadInCtx(ctx, 'public/vendor/aes-ecb.js');
  loadInCtx(ctx, 'public/channel-decrypt.js');
  loadInCtx(ctx, 'public/channels.js');

  const CD = ctx.window.ChannelDecrypt;
  assert(typeof CD.tryDecryptLive === 'function', 'ChannelDecrypt.tryDecryptLive available');

  const channelName = 'medusa';
  const fixture = buildEncryptedGrpTxt(channelName, 'Alice', 'hello darkness');
  CD.storeKey(channelName, fixture.keyHex);

  // Initialize the channels page so wsHandler is wired up
  const appEl = ctx.document.getElementById('page');
  appEl.innerHTML = '';
  await pageMod.init(appEl, null);
  // pump microtasks
  await new Promise((r) => setTimeout(r, 0));

  ctx.window._channelsSetStateForTest({
    channels: [{
      hash: 'user:' + channelName,
      name: channelName,
      messageCount: 0,
      lastActivityMs: 0,
      lastSender: '',
      lastMessage: 'Encrypted — click to decrypt',
      encrypted: true,
      userAdded: true,
    }],
    messages: [],
    selectedHash: 'user:' + channelName,
  });

  // Drive the WS path — same shape the Go server broadcasts
  const wsMsg = {
    type: 'packet',
    data: {
      id: 12345,
      hash: 'deadbeef',
      observer_name: 'TestObserver',
      packet: { observer_name: 'TestObserver' },
      decoded: {
        header: { payloadTypeName: 'GRP_TXT' },
        payload: fixture.payload,
      },
    },
  };
  for (const fn of wsListeners) fn(wsMsg);
  // Allow async decryptLivePSKBatch + setTimeout chain to settle
  for (let i = 0; i < 20; i++) await new Promise((r) => setTimeout(r, 0));

  const state = ctx.window._channelsGetStateForTest();

  // (1) Message list for the open channel grew
  assert(state.messages.length === 1,
    'open user-added channel receives the live-decrypted message (got ' + state.messages.length + ')');
  if (state.messages[0]) {
    assert(state.messages[0].text === 'hello darkness',
      'decrypted text is rendered (got ' + JSON.stringify(state.messages[0].text) + ')');
    assert(state.messages[0].sender === 'Alice',
      'decrypted sender is rendered (got ' + JSON.stringify(state.messages[0].sender) + ')');
  }

  // (2) The user-added channel's metadata updated
  const userCh = state.channels.find((c) => c.hash === 'user:' + channelName);
  assert(userCh && userCh.messageCount === 1,
    'user-added channel messageCount incremented (got ' + (userCh && userCh.messageCount) + ')');
  assert(userCh && userCh.lastMessage && userCh.lastMessage.indexOf('hello') !== -1,
    'user-added channel lastMessage updated (got ' + (userCh && userCh.lastMessage) + ')');

  // (3) No duplicate plain "medusa" entry was created in the sidebar
  const dupes = state.channels.filter((c) => c.hash === channelName);
  assert(dupes.length === 0,
    'no duplicate non-prefixed channel entry created (got ' + dupes.length + ')');
  assert(state.channels.length === 1,
    'sidebar still has exactly the one user-added channel (got ' + state.channels.length + ')');

  // (4) Unread NOT bumped on the channel actively being viewed
  assert(!userCh || !userCh.unread,
    'unread NOT bumped on the actively-viewed channel (got ' + (userCh && userCh.unread) + ')');

  console.log('\n=== Results ===');
  console.log('Passed: ' + passed + ', Failed: ' + failed);
  process.exit(failed > 0 ? 1 : 0);
}

run().catch((e) => { console.error(e); process.exit(1); });
@@ -1,159 +0,0 @@
/**
 * Tests for live PSK decrypt on WebSocket-delivered GRP_TXT packets.
 *
 * Bug: when a user has a stored PSK key for a channel and a new encrypted
 * GRP_TXT packet arrives via the WebSocket feed, the existing UI path
 * leaves it as an encrypted blob and only renders sender="Unknown" with
 * empty text. The user has to refresh the page to get the message decrypted
 * via the REST fetch path.
 *
 * Fix:
 *  - ChannelDecrypt.buildKeyMap() -> Map<hashByte, { channelName, keyBytes, keyHex }>
 *  - ChannelDecrypt.tryDecryptLive(payload, keyMap)
 *    For GRP_TXT payloads with encryptedData/mac/channelHash matching
 *    a stored key, returns { sender, text, channelName, channelHashByte }.
 *    Returns null when no key matches or when MAC verification fails.
 *  - channels.js processWSBatch() uses these to upgrade encrypted live
 *    packets in-place before rendering, and bumps an unread badge for
 *    channels the user is not currently viewing.
 */
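// Editorial consumer sketch of the API described above (shapes follow the
// header comment, not the actual channels.js source; the payload field
// assignments are assumptions for illustration):
//
//   const keyMap = await ChannelDecrypt.buildKeyMap();
//   const dec = await ChannelDecrypt.tryDecryptLive(payload, keyMap);
//   if (dec) {
//     payload.channel = dec.channelName; // upgrade the packet in place
//     payload.sender  = dec.sender;
//     payload.text    = dec.text;
//   } // else: no matching key / bad MAC — leave the packet encrypted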
'use strict';

const vm = require('vm');
const fs = require('fs');
const path = require('path');
const { subtle } = require('crypto').webcrypto;
const { createCipheriv, createHmac, createHash } = require('crypto');

let passed = 0;
let failed = 0;
function assert(cond, msg) {
  if (cond) { passed++; console.log('  ✓ ' + msg); }
  else { failed++; console.error('  ✗ ' + msg); }
}

function createSandbox() {
  const storage = {};
  const localStorage = {
    getItem: (k) => storage[k] !== undefined ? storage[k] : null,
    setItem: (k, v) => { storage[k] = String(v); },
    removeItem: (k) => { delete storage[k]; },
    _data: storage,
  };
  const ctx = {
    window: {},
    crypto: { subtle },
    TextEncoder, TextDecoder, Uint8Array, Map, Set,
    localStorage,
    console, Date, JSON, parseInt, Math, String, Number, Object, Array, RegExp, Error, Promise, setTimeout,
    btoa: (s) => Buffer.from(s, 'binary').toString('base64'),
    atob: (s) => Buffer.from(s, 'base64').toString('binary'),
  };
  ctx.window = ctx;
  ctx.self = ctx;
  return ctx;
}

function buildEncryptedGrpTxt(channelName, sender, message) {
  const key = createHash('sha256').update(channelName).digest().slice(0, 16);
  const channelHash = createHash('sha256').update(key).digest()[0];
  const text = `${sender}: ${message}`;
  const inner = 5 + Buffer.byteLength(text, 'utf8') + 1; // ts(4)+flags(1)+text+null
  const padded = Math.ceil(inner / 16) * 16;
  const pt = Buffer.alloc(padded);
  pt.writeUInt32LE(Math.floor(Date.now() / 1000), 0);
  pt[4] = 0;
  pt.write(text, 5, 'utf8');
  // remaining bytes already 0 (includes null terminator + ECB padding)
  const cipher = createCipheriv('aes-128-ecb', key, null);
  cipher.setAutoPadding(false);
  const ct = Buffer.concat([cipher.update(pt), cipher.final()]);
  const secret = Buffer.concat([key, Buffer.alloc(16)]);
  const mac = createHmac('sha256', secret).update(ct).digest().slice(0, 2);
  return {
    payload: {
      type: 'GRP_TXT',
      channelHash,
      channelHashHex: channelHash.toString(16).padStart(2, '0'),
      mac: mac.toString('hex'),
      encryptedData: ct.toString('hex'),
      decryptionStatus: 'no_key',
    },
    keyHex: key.toString('hex'),
    channelHash,
  };
}

async function run() {
  console.log('\n=== Live PSK decrypt: ChannelDecrypt helpers ===');

  const cdSrc = fs.readFileSync(path.join(__dirname, 'public/channel-decrypt.js'), 'utf8');
  const aesSrc = fs.readFileSync(path.join(__dirname, 'public/vendor/aes-ecb.js'), 'utf8');
  const sandbox = createSandbox();
  const ctx = vm.createContext(sandbox);
  vm.runInContext(aesSrc, ctx);
  vm.runInContext(cdSrc, ctx);
  const CD = sandbox.window.ChannelDecrypt;

  assert(typeof CD.buildKeyMap === 'function',
    'ChannelDecrypt.buildKeyMap exists');
  assert(typeof CD.tryDecryptLive === 'function',
    'ChannelDecrypt.tryDecryptLive exists');

  // Store a key for #LiveTest
  const channelName = '#LiveTest';
  const keyBytes = await CD.deriveKey(channelName);
  const keyHex = CD.bytesToHex(keyBytes);
  CD.storeKey(channelName, keyHex);

  const map = await CD.buildKeyMap();
  const expectedHashByte = await CD.computeChannelHash(keyBytes);
  assert(map && typeof map.get === 'function',
    'buildKeyMap returns a Map');
  assert(map.get(expectedHashByte) && map.get(expectedHashByte).channelName === channelName,
    'buildKeyMap entry indexed by channel hash byte → channelName');

  // Fabricate a live encrypted GRP_TXT packet on this channel
  const fixture = buildEncryptedGrpTxt(channelName, 'Alice', 'hello world');

  const decrypted = await CD.tryDecryptLive(fixture.payload, map);
  assert(decrypted && decrypted.sender === 'Alice',
    'tryDecryptLive recovers sender from matching stored key');
  assert(decrypted && decrypted.text === 'hello world',
    'tryDecryptLive recovers message text');
  assert(decrypted && decrypted.channelName === channelName,
    'tryDecryptLive returns the matching channelName');
  assert(decrypted && decrypted.channelHashByte === expectedHashByte,
    'tryDecryptLive returns channelHashByte for unread bookkeeping');

  // No match → null (different channel hash)
  const otherFixture = buildEncryptedGrpTxt('#NotStored', 'Bob', 'silent');
  const noMatch = await CD.tryDecryptLive(otherFixture.payload, map);
  assert(noMatch === null,
    'tryDecryptLive returns null when no stored key matches the channel hash');

  // Non-GRP_TXT payload → null (defensive)
  const skip = await CD.tryDecryptLive({ type: 'CHAN', channel: channelName, text: 'already decrypted' }, map);
  assert(skip === null,
    'tryDecryptLive returns null for non-GRP_TXT payloads (already-decrypted CHAN)');

  // Empty/missing fields → null (no crash)
  const empty = await CD.tryDecryptLive({ type: 'GRP_TXT' }, map);
  assert(empty === null,
    'tryDecryptLive returns null when encryptedData/mac missing');

  console.log('\n=== Live PSK decrypt: channels.js integration contract ===');
  const chSrc = fs.readFileSync(path.join(__dirname, 'public/channels.js'), 'utf8');
  assert(/tryDecryptLive\s*\(/.test(chSrc),
    'channels.js calls ChannelDecrypt.tryDecryptLive() in the WS path');
  assert(/buildKeyMap\s*\(/.test(chSrc),
    'channels.js calls ChannelDecrypt.buildKeyMap() to refresh the lookup index');
  assert(/unread/i.test(chSrc),
    'channels.js tracks an unread counter for live-decrypted channels');

  console.log('\n=== Results ===');
  console.log('Passed: ' + passed + ', Failed: ' + failed);
  process.exit(failed > 0 ? 1 : 0);
}

run().catch((e) => { console.error(e); process.exit(1); });