Compare commits

...

9 Commits

Author SHA1 Message Date
you 0be8b897bc test: add concurrent ingest + eviction race coverage 2026-04-30 19:44:38 +00:00
Kpa-clawbot 5678874128 fix: exclude non-repeater nodes from path-hop resolution (#935) (#936)
Fixes #935

## Problem

`buildPrefixMap()` indexed ALL nodes regardless of role, causing
companions/sensors to appear as repeater hops when their pubkey prefix
collided with a path-hop hash byte.

## Fix

### Server (`cmd/server/store.go`)
- Added `canAppearInPath(role string) bool` — allowlist of roles that
can forward packets (repeater, room_server, room)
- `buildPrefixMap` now skips nodes that fail this check

### Client (`public/hop-resolver.js`)
- Added matching `canAppearInPath(role)` helper
- `init()` now only populates `prefixIdx` for path-eligible nodes
- `pubkeyIdx` remains complete — `resolveFromServer()` still resolves
any node type by full pubkey (for server-confirmed `resolved_path`
arrays)

## Tests

- `cmd/server/prefix_map_role_test.go`: 7 new tests covering role
filtering in prefix map and resolveWithContext
- `test-hop-resolver-affinity.js`: 4 new tests verifying client-side
role filter + pubkeyIdx completeness
- All existing tests updated to include `Role: "repeater"` where needed
- `go test ./cmd/server/...` — PASS
- `node test-hop-resolver-affinity.js` — 16/17 pass (1 pre-existing
centroid failure unrelated to this change)

## Commits

1. `fix: filter prefix map to only repeater/room roles (#935)` — server
implementation
2. `test: prefix map role filter coverage (#935)` — server tests
3. `ui: filter HopResolver prefix index to repeater/room roles (#935)` —
client implementation
4. `test: hop-resolver role filter coverage (#935)` — client tests

---------

Co-authored-by: you <you@example.com>
2026-04-30 09:25:51 -07:00
Kpa-clawbot e857e0b1ce ci: update go-server-coverage.json [skip ci] 2026-04-25 00:34:06 +00:00
Kpa-clawbot 9da7c71cc5 ci: update go-ingestor-coverage.json [skip ci] 2026-04-25 00:34:05 +00:00
Kpa-clawbot 03484ea38d ci: update frontend-tests.json [skip ci] 2026-04-25 00:34:04 +00:00
Kpa-clawbot 27af4098e6 ci: update frontend-coverage.json [skip ci] 2026-04-25 00:34:03 +00:00
Kpa-clawbot 474023b9b7 ci: update e2e-tests.json [skip ci] 2026-04-25 00:34:02 +00:00
Kpa-clawbot f4484adb52 ci: move to GitHub-hosted runners, disable staging deploy (#908)
## Why

The Azure staging VM (`meshcore-vm`) is offline. Self-hosted runners are
unavailable, blocking all CI.

## What changed (per job)

| Job | Change | Revert |
|-----|--------|--------|
| `e2e-test` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`; removed self-hosted-specific "Free disk space" step | Change `runs-on` back to `[self-hosted, Linux]`, restore disk cleanup step |
| `build-and-publish` | `runs-on: [self-hosted, meshcore-runner-2]` → `ubuntu-latest`; removed "Free disk space" prune step (noop on fresh GH-hosted runners) | Change `runs-on` back, restore prune step |
| `deploy` | `if: false # disabled` (was `github.event_name == 'push'`); `runs-on` kept as-is | Change `if:` back to `github.event_name == 'push'` |
| `publish` | `runs-on: [self-hosted, Linux]` → `ubuntu-latest`; `needs: [deploy]` → `needs: [build-and-publish]` | Change both back |

## Notes

- `go-test` and `release-artifacts` were already on `ubuntu-latest` —
untouched.
- The `deploy` job is disabled via `if: false` for trivial one-line
revert when the VM returns.
- No new `setup-*` actions were needed — `setup-node`, `setup-go`,
`docker/setup-buildx-action`, and `docker/login-action` were already
present.

Co-authored-by: you <you@example.com>
2026-04-24 17:25:53 -07:00
Kpa-clawbot a47fe26085 fix(channels): allow removing user-added keys for server-known channels (#898)
## Problem
Adding a channel key in the Channels UI for a channel the server already
knows about (e.g. `#public` from rainbow / config) leaves the
localStorage entry **unremovable**:

- `mergeUserChannels` sees the name already exists in the channel list
and skips the user entry.
- The existing channel row is never marked `userAdded:true`.
- The ✕ button (`[data-remove-channel]`) is only rendered for
`userAdded` rows.
- Result: stuck localStorage key, no UI to delete it.

There was also a latent bug in the remove handler — for non-`user:`
rows, it used the raw hash (e.g. `enc_11`) as the
`ChannelDecrypt.removeKey()` argument, but the storage key is the
channel **name**.

## Fix
1. **`mergeUserChannels`**: when a stored key matches an existing
channel by name/hash, mark the existing channel `userAdded=true` so the
✕ renders on it. (No magical/auto deletion of stored keys — the user
explicitly chooses to remove.)
2. **Remove handler**:
   - Look up the channel object to get the correct display name for the
     localStorage key.
   - Keep server-known channels in the list when their ✕ is clicked (only
     the user's localStorage entry + cache are cleared, `userAdded` is
     unset). The channel still exists upstream.
   - Pure `user:`-prefixed channels are removed from the list as before.

## Repro
1. Open Channels.
2. Add a key for `#public` (or any rainbow-known channel).
3. Reload. Before this PR: row has no ✕, key is stuck. After this PR: ✕
appears, click clears the local key and cache.

## Files
- `public/channels.js` only.

## Notes
- No backend changes.
- No new APIs.
- Behaviour for purely user-added channels (e.g. `user:#somechannel` not
known to the server) is unchanged.

---------

Co-authored-by: you <you@example.com>
2026-04-22 21:41:43 -07:00
14 changed files with 851 additions and 168 deletions
+1 -1
@@ -1 +1 @@
-{"schemaVersion":1,"label":"e2e tests","message":"45 passed","color":"brightgreen"}
+{"schemaVersion":1,"label":"e2e tests","message":"82 passed","color":"brightgreen"}
+1 -1
@@ -1 +1 @@
-{"schemaVersion":1,"label":"frontend coverage","message":"39.68%","color":"red"}
+{"schemaVersion":1,"label":"frontend coverage","message":"37.26%","color":"red"}
+5 -18
@@ -135,7 +135,7 @@ jobs:
   e2e-test:
     name: "🎭 Playwright E2E Tests"
     needs: [go-test]
-    runs-on: [self-hosted, Linux]
+    runs-on: ubuntu-latest
     defaults:
       run:
         shell: bash
@@ -145,13 +145,6 @@ jobs:
         with:
           fetch-depth: 0
-      - name: Free disk space
-        run: |
-          # Prune old runner diagnostic logs (can accumulate 50MB+)
-          find ~/actions-runner/_diag/ -name '*.log' -mtime +3 -delete 2>/dev/null || true
-          # Show available disk space
-          df -h / | tail -1
       - name: Set up Node.js 22
         uses: actions/setup-node@v5
         with:
@@ -252,17 +245,11 @@ jobs:
   build-and-publish:
     name: "🏗️ Build & Publish Docker Image"
     needs: [e2e-test]
-    runs-on: [self-hosted, meshcore-runner-2]
+    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
-      - name: Free disk space
-        run: |
-          docker system prune -af 2>/dev/null || true
-          docker builder prune -af 2>/dev/null || true
-          df -h /
      - name: Compute build metadata
        id: meta
        run: |
@@ -372,7 +359,7 @@ jobs:
   # ───────────────────────────────────────────────────────────────
   deploy:
     name: "🚀 Deploy Staging"
-    if: github.event_name == 'push'
+    if: false # disabled: staging VM offline, manual deploy required
     needs: [build-and-publish]
     runs-on: [self-hosted, meshcore-runner-2]
     steps:
@@ -461,8 +448,8 @@ jobs:
   publish:
     name: "📝 Publish Badges & Summary"
     if: github.event_name == 'push'
-    needs: [deploy]
-    runs-on: [self-hosted, Linux]
+    needs: [build-and-publish]
+    runs-on: ubuntu-latest
     steps:
       - name: Checkout code
         uses: actions/checkout@v5
@@ -0,0 +1,403 @@
package main
import (
"fmt"
"sync"
"sync/atomic"
"testing"
"time"
)
// TestConcurrentIngestAndEviction exercises the race between IngestNewFromDB
// adding packets (via direct store manipulation simulating the locked section)
// and RunEviction removing packets. Without proper locking this would trigger
// the race detector and produce inconsistent index state.
func TestConcurrentIngestAndEviction(t *testing.T) {
// Seed store with 200 old packets that are eligible for eviction
startTime := time.Now().UTC().Add(-48 * time.Hour)
store := makeTestStore(200, startTime, 1)
store.retentionHours = 24 // everything older than 24h is evictable
store.loaded = true
// Track bytes for all seeded packets
for _, tx := range store.packets {
store.trackedBytes += estimateStoreTxBytes(tx)
for _, obs := range tx.Observations {
store.trackedBytes += estimateStoreObsBytes(obs)
}
}
const numIngestGoroutines = 5
const packetsPerGoroutine = 50
const numEvictionGoroutines = 3
var wg sync.WaitGroup
var ingestedCount int64
// Concurrent ingest: simulate what IngestNewFromDB does under the lock
for g := 0; g < numIngestGoroutines; g++ {
wg.Add(1)
go func(goroutineID int) {
defer wg.Done()
for i := 0; i < packetsPerGoroutine; i++ {
txID := 1000 + goroutineID*1000 + i
hash := fmt.Sprintf("new_hash_%d_%04d", goroutineID, i)
pt := 5 // GRP_TXT
ts := time.Now().UTC().Format(time.RFC3339)
tx := &StoreTx{
ID: txID,
Hash: hash,
FirstSeen: ts,
LatestSeen: ts,
PayloadType: &pt,
DecodedJSON: fmt.Sprintf(`{"pubKey":"newpk_%d_%04d"}`, goroutineID, i),
obsKeys: make(map[string]bool),
observerSet: make(map[string]bool),
}
obs := &StoreObs{
ID: txID*10 + 1,
TransmissionID: txID,
ObserverID: fmt.Sprintf("obs_g%d", goroutineID),
ObserverName: fmt.Sprintf("Observer_g%d", goroutineID),
Timestamp: ts,
}
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount = 1
// Acquire write lock (same as IngestNewFromDB)
store.mu.Lock()
store.packets = append(store.packets, tx)
store.byHash[hash] = tx
store.byTxID[txID] = tx
store.byObsID[obs.ID] = obs
store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
pk := fmt.Sprintf("newpk_%d_%04d", goroutineID, i)
if store.nodeHashes[pk] == nil {
store.nodeHashes[pk] = make(map[string]bool)
}
store.nodeHashes[pk][hash] = true
store.byNode[pk] = append(store.byNode[pk], tx)
store.trackedBytes += estimateStoreTxBytes(tx)
store.trackedBytes += estimateStoreObsBytes(obs)
store.totalObs++
store.mu.Unlock()
atomic.AddInt64(&ingestedCount, 1)
}
}(g)
}
// Concurrent eviction goroutines
var evictedTotal int64
for g := 0; g < numEvictionGoroutines; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 10; i++ {
store.mu.Lock()
n := store.EvictStale()
store.mu.Unlock()
atomic.AddInt64(&evictedTotal, int64(n))
time.Sleep(time.Millisecond)
}
}()
}
// Concurrent readers (QueryPackets uses RLock)
for g := 0; g < 3; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 20; i++ {
store.mu.RLock()
_ = len(store.packets)
_ = len(store.byHash)
store.mu.RUnlock()
time.Sleep(500 * time.Microsecond)
}
}()
}
wg.Wait()
// --- Post-state assertions ---
store.mu.RLock()
defer store.mu.RUnlock()
totalIngested := int(atomic.LoadInt64(&ingestedCount))
totalEvicted := int(atomic.LoadInt64(&evictedTotal))
if totalIngested != numIngestGoroutines*packetsPerGoroutine {
t.Fatalf("expected %d ingested, got %d", numIngestGoroutines*packetsPerGoroutine, totalIngested)
}
// Invariant: packets remaining = initial(200) + ingested - evicted
expectedRemaining := 200 + totalIngested - totalEvicted
if len(store.packets) != expectedRemaining {
t.Fatalf("packets count mismatch: got %d, expected %d (200 + %d ingested - %d evicted)",
len(store.packets), expectedRemaining, totalIngested, totalEvicted)
}
// Invariant: byHash must be consistent with packets slice
if len(store.byHash) != len(store.packets) {
t.Fatalf("byHash size %d != packets len %d", len(store.byHash), len(store.packets))
}
// Invariant: every packet in the slice must be in byHash
for _, tx := range store.packets {
if store.byHash[tx.Hash] != tx {
t.Fatalf("packet %s in slice but not in byHash (or points to different tx)", tx.Hash)
}
}
// Invariant: byTxID must map to packets in the slice
byTxIDCount := 0
for _, tx := range store.packets {
if store.byTxID[tx.ID] == tx {
byTxIDCount++
}
}
if byTxIDCount != len(store.packets) {
t.Fatalf("byTxID consistency: %d/%d packets found", byTxIDCount, len(store.packets))
}
// Invariant: trackedBytes must be non-negative
if store.trackedBytes < 0 {
t.Fatalf("trackedBytes went negative: %d", store.trackedBytes)
}
// Verify eviction actually happened (old packets were eligible)
if totalEvicted == 0 {
t.Fatal("expected some evictions to occur but got 0")
}
t.Logf("OK: ingested=%d, evicted=%d, remaining=%d, trackedBytes=%d",
totalIngested, totalEvicted, len(store.packets), store.trackedBytes)
}
// TestConcurrentIngestNewObservationsAndEviction exercises the race between
// adding new observations to existing transmissions and eviction removing those
// same transmissions. This targets the IngestNewObservations path.
func TestConcurrentIngestNewObservationsAndEviction(t *testing.T) {
// Create store with 100 packets, half old (evictable), half recent
now := time.Now().UTC()
store := makeTestStore(0, now, 1) // empty, we'll add manually
store.retentionHours = 1
// Add 50 old packets (2h ago) and 50 recent packets
for i := 0; i < 100; i++ {
var ts time.Time
if i < 50 {
ts = now.Add(-2 * time.Hour).Add(time.Duration(i) * time.Second)
} else {
ts = now.Add(-time.Duration(100-i) * time.Second)
}
hash := fmt.Sprintf("obs_hash_%04d", i)
txID := i + 1
pt := 4
tx := &StoreTx{
ID: txID,
Hash: hash,
FirstSeen: ts.UTC().Format(time.RFC3339),
LatestSeen: ts.UTC().Format(time.RFC3339),
PayloadType: &pt,
DecodedJSON: fmt.Sprintf(`{"pubKey":"pk%04d"}`, i),
obsKeys: make(map[string]bool),
observerSet: make(map[string]bool),
}
store.packets = append(store.packets, tx)
store.byHash[hash] = tx
store.byTxID[txID] = tx
store.byPayloadType[pt] = append(store.byPayloadType[pt], tx)
store.trackedBytes += estimateStoreTxBytes(tx)
}
store.loaded = true
const numObsGoroutines = 4
const obsPerGoroutine = 100
var wg sync.WaitGroup
var addedObs int64
// Goroutines adding observations to RECENT packets (index 50-99)
for g := 0; g < numObsGoroutines; g++ {
wg.Add(1)
go func(gID int) {
defer wg.Done()
for i := 0; i < obsPerGoroutine; i++ {
targetIdx := 50 + (i % 50) // only target recent packets
hash := fmt.Sprintf("obs_hash_%04d", targetIdx)
store.mu.Lock()
tx := store.byHash[hash]
if tx != nil {
obsID := 50000 + gID*10000 + i
obs := &StoreObs{
ID: obsID,
TransmissionID: tx.ID,
ObserverID: fmt.Sprintf("obs_new_%d", gID),
ObserverName: fmt.Sprintf("NewObs_%d", gID),
Timestamp: time.Now().UTC().Format(time.RFC3339),
}
dk := obs.ObserverID + "|"
if !tx.obsKeys[dk] || true { // allow duplicates for stress
tx.Observations = append(tx.Observations, obs)
tx.ObservationCount++
store.byObsID[obsID] = obs
store.byObserver[obs.ObserverID] = append(store.byObserver[obs.ObserverID], obs)
store.trackedBytes += estimateStoreObsBytes(obs)
store.totalObs++
atomic.AddInt64(&addedObs, 1)
}
}
store.mu.Unlock()
}
}(g)
}
// Concurrent eviction
var evictedTotal int64
for g := 0; g < 2; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 15; i++ {
store.mu.Lock()
n := store.EvictStale()
store.mu.Unlock()
atomic.AddInt64(&evictedTotal, int64(n))
time.Sleep(500 * time.Microsecond)
}
}()
}
wg.Wait()
// --- Assertions ---
store.mu.RLock()
defer store.mu.RUnlock()
totalEvicted := int(atomic.LoadInt64(&evictedTotal))
totalAdded := int(atomic.LoadInt64(&addedObs))
// All 50 old packets should have been evicted
if totalEvicted < 50 {
t.Fatalf("expected at least 50 evictions (old packets), got %d", totalEvicted)
}
// Recent packets (50) should survive
if len(store.packets) < 50 {
t.Fatalf("expected at least 50 remaining packets (recent ones), got %d", len(store.packets))
}
// byHash consistency
for _, tx := range store.packets {
if store.byHash[tx.Hash] != tx {
t.Fatalf("byHash inconsistency for %s", tx.Hash)
}
}
// No evicted packet should remain in byHash
for i := 0; i < 50; i++ {
hash := fmt.Sprintf("obs_hash_%04d", i)
if store.byHash[hash] != nil {
t.Fatalf("evicted packet %s still in byHash", hash)
}
}
// byObsID should not reference observations from evicted packets
for obsID, obs := range store.byObsID {
if store.byTxID[obs.TransmissionID] == nil {
t.Fatalf("byObsID[%d] references evicted transmission %d", obsID, obs.TransmissionID)
}
}
// trackedBytes non-negative
if store.trackedBytes < 0 {
t.Fatalf("trackedBytes negative: %d", store.trackedBytes)
}
t.Logf("OK: evicted=%d, added_obs=%d, remaining=%d, trackedBytes=%d",
totalEvicted, totalAdded, len(store.packets), store.trackedBytes)
}
// TestConcurrentRunEvictionWithReads exercises RunEviction's two-phase locking
// against concurrent read operations (simulating QueryPackets / GetStoreStats).
// Without proper RWMutex usage, this would race on slice/map reads.
func TestConcurrentRunEvictionWithReads(t *testing.T) {
startTime := time.Now().UTC().Add(-3 * time.Hour)
store := makeTestStore(500, startTime, 1)
store.retentionHours = 1
store.loaded = true
for _, tx := range store.packets {
store.trackedBytes += estimateStoreTxBytes(tx)
for _, obs := range tx.Observations {
store.trackedBytes += estimateStoreObsBytes(obs)
}
}
var wg sync.WaitGroup
// Multiple RunEviction calls (uses its own locking)
var evicted int64
for g := 0; g < 3; g++ {
wg.Add(1)
go func() {
defer wg.Done()
n := store.RunEviction()
atomic.AddInt64(&evicted, int64(n))
}()
}
// Concurrent readers using the public read-lock pattern
var readCount int64
for g := 0; g < 5; g++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 50; i++ {
store.mu.RLock()
count := len(store.packets)
_ = count
// Iterate a portion of byHash (simulating query)
for hash, tx := range store.byHash {
_ = hash
_ = tx.ObservationCount
break // just access one
}
store.mu.RUnlock()
atomic.AddInt64(&readCount, 1)
}
}()
}
wg.Wait()
store.mu.RLock()
defer store.mu.RUnlock()
totalEvicted := int(atomic.LoadInt64(&evicted))
// Must have evicted packets older than 1h (most of the 500 are 1-3h old)
if totalEvicted == 0 {
t.Fatal("expected evictions but got 0")
}
// Consistency: byHash == packets len
if len(store.byHash) != len(store.packets) {
t.Fatalf("byHash %d != packets %d after concurrent RunEviction+reads",
len(store.byHash), len(store.packets))
}
// All reads completed without panic
if atomic.LoadInt64(&readCount) != 250 {
t.Fatalf("not all reads completed: %d/250", atomic.LoadInt64(&readCount))
}
t.Logf("OK: evicted=%d, remaining=%d, reads=%d",
totalEvicted, len(store.packets), atomic.LoadInt64(&readCount))
}
+7 -7
@@ -763,9 +763,9 @@ func TestGetChannelsFromStore(t *testing.T) {
 func TestPrefixMapResolve(t *testing.T) {
 	nodes := []nodeInfo{
-		{PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
-		{PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
-		{PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
+		{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "NodeA", HasGPS: true, Lat: 37.5, Lon: -122.0},
+		{Role: "repeater", PublicKey: "aabbccdd55667788", Name: "NodeB", HasGPS: false},
+		{Role: "repeater", PublicKey: "eeff0011aabbccdd", Name: "NodeC", HasGPS: true, Lat: 38.0, Lon: -121.0},
 	}
 	pm := buildPrefixMap(nodes)
@@ -805,8 +805,8 @@ func TestPrefixMapResolve(t *testing.T) {
 	t.Run("multiple candidates no GPS", func(t *testing.T) {
 		noGPSNodes := []nodeInfo{
-			{PublicKey: "aa11bb22", Name: "X", HasGPS: false},
-			{PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
+			{Role: "repeater", PublicKey: "aa11bb22", Name: "X", HasGPS: false},
+			{Role: "repeater", PublicKey: "aa11cc33", Name: "Y", HasGPS: false},
 		}
 		pm2 := buildPrefixMap(noGPSNodes)
 		n := pm2.resolve("aa11")
@@ -820,8 +820,8 @@ func TestPrefixMapResolve(t *testing.T) {
 func TestPrefixMapCap(t *testing.T) {
 	// 16-char pubkey — longer than maxPrefixLen
 	nodes := []nodeInfo{
-		{PublicKey: "aabbccdd11223344", Name: "LongKey"},
-		{PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
+		{Role: "repeater", PublicKey: "aabbccdd11223344", Name: "LongKey"},
+		{Role: "repeater", PublicKey: "eeff0011", Name: "ShortKey"}, // exactly 8 chars
 	}
 	pm := buildPrefixMap(nodes)
+18 -18
@@ -12,9 +12,9 @@ import (
 func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
 	// Node A at lat=45, lon=-122. Candidate B1 at lat=45.1, lon=-122.1 (close).
 	// Candidate B2 at lat=10, lon=10 (far away). Prefix "b0" matches both.
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
-	nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
-	nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
+	nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "CloseNode", HasGPS: true, Lat: 45.1, Lon: -122.1}
+	nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "FarNode", HasGPS: true, Lat: 10.0, Lon: 10.0}
 	pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -62,8 +62,8 @@ func TestResolveAmbiguousEdges_GeoProximity(t *testing.T) {
 // Test 2: Ambiguous edge merged with existing resolved edge (count accumulation).
 func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
-	nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
+	nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
 	pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
@@ -133,9 +133,9 @@ func TestResolveAmbiguousEdges_MergeWithExisting(t *testing.T) {
 // Test 3: Ambiguous edge left as-is when resolution fails.
 func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
 	// Two candidates, neither has GPS, no affinity data — resolution falls through.
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
-	nodeB1 := nodeInfo{PublicKey: "b0b1eeee", Name: "B1"}
-	nodeB2 := nodeInfo{PublicKey: "b0c2ffff", Name: "B2"}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
+	nodeB1 := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "B1"}
+	nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffff", Name: "B2"}
 	pm := buildPrefixMap([]nodeInfo{nodeA, nodeB1, nodeB2})
@@ -175,7 +175,7 @@ func TestResolveAmbiguousEdges_FailsNoChange(t *testing.T) {
 // Test 3 (corrected): Resolution fails when prefix has no candidates in prefix map.
 func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA"}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"}
 	// pm has no entries matching prefix "zz"
 	pm := buildPrefixMap([]nodeInfo{nodeA})
@@ -215,8 +215,8 @@ func TestResolveAmbiguousEdges_NoMatch(t *testing.T) {
 // Test 6: Phase 1 edge collection unchanged (no regression).
 func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
 	// Build a simple graph and verify non-ambiguous edges are not touched.
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
-	nodeB := nodeInfo{PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
+	nodeB := nodeInfo{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
 	ts := time.Now().UTC().Format(time.RFC3339)
 	payloadType := 4
@@ -232,7 +232,7 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
 		Observations: obs,
 	}
-	store := ngTestStore([]nodeInfo{nodeA, nodeB, {PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
+	store := ngTestStore([]nodeInfo{nodeA, nodeB, {Role: "repeater", PublicKey: "cccc3333", Name: "Observer"}}, []*StoreTx{tx})
 	graph := BuildFromStore(store)
 	edges := graph.Neighbors("aaaa1111")
@@ -255,8 +255,8 @@ func TestPhase1EdgeCollection_Unchanged(t *testing.T) {
 // Test 7: Merge preserves higher LastSeen timestamp.
 func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
-	nodeA := nodeInfo{PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
-	nodeB := nodeInfo{PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
+	nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
 	pm := buildPrefixMap([]nodeInfo{nodeA, nodeB})
 	graph := NewNeighborGraph()
@@ -307,10 +307,10 @@ func TestResolveAmbiguousEdges_PreservesHigherLastSeen(t *testing.T) {
 // Test 5: Integration — node with both 1-byte and 2-byte prefix observations shows single entry.
 func TestIntegration_DualPrefixSingleNeighbor(t *testing.T) {
-	nodeA := nodeInfo{PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
-	nodeB := nodeInfo{PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
-	nodeB2 := nodeInfo{PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
-	observer := nodeInfo{PublicKey: "cccc3333cccc3333", Name: "Observer"}
+	nodeA := nodeInfo{Role: "repeater", PublicKey: "aaaa1111aaaa1111", Name: "NodeA", HasGPS: true, Lat: 45.0, Lon: -122.0}
+	nodeB := nodeInfo{Role: "repeater", PublicKey: "b0b1eeeeb0b1eeee", Name: "NodeB", HasGPS: true, Lat: 45.1, Lon: -122.1}
+	nodeB2 := nodeInfo{Role: "repeater", PublicKey: "b0c2ffffb0c2ffff", Name: "NodeB2", HasGPS: true, Lat: 10.0, Lon: 10.0}
+	observer := nodeInfo{Role: "repeater", PublicKey: "cccc3333cccc3333", Name: "Observer"}
 	ts := time.Now().UTC().Format(time.RFC3339)
 	pt := 4
+63 -63
@@ -86,9 +86,9 @@ func TestBuildNeighborGraph_EmptyStore(t *testing.T) {
 func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
 	// ADVERT from X, path=["R1_prefix"] → edges: X↔R1 and Observer↔R1
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["r1aa"]`, nowStr, ngFloatPtr(-10)),
@@ -132,10 +132,10 @@ func TestBuildNeighborGraph_AdvertSingleHopPath(t *testing.T) {
 func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
 	// ADVERT from X, path=["R1","R2"] → X↔R1 and Observer↔R2
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "r2ddeeff", Name: "R2"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -170,8 +170,8 @@ func TestBuildNeighborGraph_AdvertMultiHopPath(t *testing.T) {
 func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
 	// ADVERT from X, path=[] → X↔Observer direct edge
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -195,8 +195,8 @@ func TestBuildNeighborGraph_AdvertZeroHop(t *testing.T) {
 func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
 	// Non-ADVERT, path=[] → no edges
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `[]`, nowStr, nil),
@@ -212,10 +212,10 @@ func TestBuildNeighborGraph_NonAdvertEmptyPath(t *testing.T) {
 func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
 	// Non-ADVERT with path=["R1","R2"] → only Observer↔R2, NO originator edge
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "r2ddeeff", Name: "R2"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -236,9 +236,9 @@ func TestBuildNeighborGraph_NonAdvertOnlyObserverEdge(t *testing.T) {
 func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
 	// Non-ADVERT with path=["R1"] → Observer↔R1 only
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["r1aa"]`, nowStr, nil),
@@ -259,10 +259,10 @@ func TestBuildNeighborGraph_NonAdvertSingleHop(t *testing.T) {
 func TestBuildNeighborGraph_HashCollision(t *testing.T) {
 	// Two nodes share prefix "a3" → ambiguous edge
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "a3bb1111", Name: "CandidateA"},
-		{PublicKey: "a3bb2222", Name: "CandidateB"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "a3bb1111", Name: "CandidateA"},
+		{Role: "repeater", PublicKey: "a3bb2222", Name: "CandidateB"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["a3bb"]`, nowStr, nil),
@@ -308,13 +308,13 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
 	// CandidateB has no known neighbors (Jaccard = 0).
 	// An ambiguous edge X↔prefix "a3" with candidates [A, B] should auto-resolve to A.
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "n1111111", Name: "N1"},
-		{PublicKey: "n2222222", Name: "N2"},
-		{PublicKey: "n3333333", Name: "N3"},
-		{PublicKey: "a3001111", Name: "CandidateA"},
-		{PublicKey: "a3002222", Name: "CandidateB"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
+		{Role: "repeater", PublicKey: "n2222222", Name: "N2"},
+		{Role: "repeater", PublicKey: "n3333333", Name: "N3"},
+		{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
+		{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	// Create resolved edges: X↔N1, X↔N2, X↔N3, A↔N1, A↔N2, A↔N3
@@ -373,11 +373,11 @@ func TestBuildNeighborGraph_ConfidenceAutoResolve(t *testing.T) {
 func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
 	// Two candidates with identical neighbor sets → should NOT auto-resolve.
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "n1111111", Name: "N1"},
-		{PublicKey: "a3001111", Name: "CandidateA"},
-		{PublicKey: "a3002222", Name: "CandidateB"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "n1111111", Name: "N1"},
+		{Role: "repeater", PublicKey: "a3001111", Name: "CandidateA"},
+		{Role: "repeater", PublicKey: "a3002222", Name: "CandidateB"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	var txs []*StoreTx
@@ -425,8 +425,8 @@ func TestBuildNeighborGraph_EqualScoresAmbiguous(t *testing.T) {
 func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
 	// Observer's own prefix in path → should NOT create self-edge.
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["obs0"]`, nowStr, nil),
@@ -445,8 +445,8 @@ func TestBuildNeighborGraph_ObserverSelfEdgeGuard(t *testing.T) {
 func TestBuildNeighborGraph_OrphanPrefix(t *testing.T) {
 	// Path contains prefix matching zero nodes → edge recorded as unresolved.
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
 		ngMakeObs("obs00001", `["ff99"]`, nowStr, nil),
@@ -506,9 +506,9 @@ func TestAffinityScore_StaleAndLow(t *testing.T) {
 func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "obs00001", Name: "Observer"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
 	}
 	var txs []*StoreTx
@@ -535,10 +535,10 @@ func TestBuildNeighborGraph_CountAccumulation(t *testing.T) {
 func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
 	nodes := []nodeInfo{
-		{PublicKey: "aaaa1111", Name: "NodeX"},
-		{PublicKey: "r1aabbcc", Name: "R1"},
-		{PublicKey: "obs00001", Name: "Obs1"},
-		{PublicKey: "obs00002", Name: "Obs2"},
+		{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
+		{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
+		{Role: "repeater", PublicKey: "obs00001", Name: "Obs1"},
+		{Role: "repeater", PublicKey: "obs00002", Name: "Obs2"},
 	}
 	tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -565,9 +565,9 @@ func TestBuildNeighborGraph_MultipleObservers(t *testing.T) {
func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngFromNodeJSON("aaaa1111"), []*StoreObs{
@@ -592,10 +592,10 @@ func TestBuildNeighborGraph_TimeDecayOldObservations(t *testing.T) {
func TestBuildNeighborGraph_ADVERTOnlyConstraint(t *testing.T) {
// Non-ADVERT: should NOT create originator↔path[0] edge, only observer↔path[last].
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeX"},
{PublicKey: "r1aabbcc", Name: "R1"},
{PublicKey: "r2ddeeff", Name: "R2"},
{PublicKey: "obs00001", Name: "Observer"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeX"},
{Role: "repeater", PublicKey: "r1aabbcc", Name: "R1"},
{Role: "repeater", PublicKey: "r2ddeeff", Name: "R2"},
{Role: "repeater", PublicKey: "obs00001", Name: "Observer"},
}
tx := ngMakeTx(1, 2, ngFromNodeJSON("aaaa1111"), []*StoreObs{
ngMakeObs("obs00001", `["r1aa","r2dd"]`, nowStr, nil),
@@ -631,9 +631,9 @@ func ngPubKeyJSON(pubkey string) string {
func TestBuildNeighborGraph_AdvertPubKeyField(t *testing.T) {
// Real ADVERTs use "pubKey", not "from_node". Verify the builder handles it.
nodes := []nodeInfo{
{PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
{Role: "repeater", PublicKey: "99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234", Name: "Originator"},
{Role: "repeater", PublicKey: "r1aabbccdd001122334455667788990011223344556677889900112233445566", Name: "R1"},
{Role: "repeater", PublicKey: "obs0000100112233445566778899001122334455667788990011223344556677", Name: "Observer"},
}
tx := ngMakeTx(1, 4, ngPubKeyJSON("99bf37abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234"), []*StoreObs{
ngMakeObs("obs0000100112233445566778899001122334455667788990011223344556677", `["r1"]`, nowStr, ngFloatPtr(-8.5)),
@@ -666,10 +666,10 @@ func TestBuildNeighborGraph_OneByteHashPrefixes(t *testing.T) {
// Real-world scenario: 1-byte hash prefixes with multiple candidates.
// Should create edges (possibly ambiguous) rather than empty graph.
nodes := []nodeInfo{
{PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
{Role: "repeater", PublicKey: "c0dedad400000000000000000000000000000000000000000000000000000001", Name: "NodeC0-1"},
{Role: "repeater", PublicKey: "c0dedad900000000000000000000000000000000000000000000000000000002", Name: "NodeC0-2"},
{Role: "repeater", PublicKey: "a3bbccdd00000000000000000000000000000000000000000000000000000003", Name: "Originator"},
{Role: "repeater", PublicKey: "obs1234500000000000000000000000000000000000000000000000000000004", Name: "Observer"},
}
// ADVERT from Originator with 1-byte path hop "c0"
tx := ngMakeTx(1, 4, ngPubKeyJSON("a3bbccdd00000000000000000000000000000000000000000000000000000003"), []*StoreObs{
@@ -809,10 +809,10 @@ func TestExtractFromNode_UsesCachedParse(t *testing.T) {
func BenchmarkBuildFromStore(b *testing.B) {
// Simulate a dataset with many packets and repeated pubkeys
nodes := []nodeInfo{
{PublicKey: "aaaa1111", Name: "NodeA"},
{PublicKey: "bbbb2222", Name: "NodeB"},
{PublicKey: "cccc3333", Name: "NodeC"},
{PublicKey: "dddd4444", Name: "NodeD"},
{Role: "repeater", PublicKey: "aaaa1111", Name: "NodeA"},
{Role: "repeater", PublicKey: "bbbb2222", Name: "NodeB"},
{Role: "repeater", PublicKey: "cccc3333", Name: "NodeC"},
{Role: "repeater", PublicKey: "dddd4444", Name: "NodeD"},
}
const numPackets = 1000
packets := make([]*StoreTx, 0, numPackets)
+5 -5
@@ -58,8 +58,8 @@ func createTestDBWithSchema(t *testing.T) (*DB, string) {
func TestResolvePathForObs(t *testing.T) {
// Build a prefix map with known nodes
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "bbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-BB"},
}
pm := buildPrefixMap(nodes)
graph := NewNeighborGraph()
@@ -97,7 +97,7 @@ func TestResolvePathForObs_EmptyPath(t *testing.T) {
func TestResolvePathForObs_Unresolvable(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
}
pm := buildPrefixMap(nodes)
@@ -437,8 +437,8 @@ func TestExtractEdgesFromObs_NonAdvertNoPath(t *testing.T) {
func TestExtractEdgesFromObs_WithPath(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
{Role: "repeater", PublicKey: "aabbccddee1234567890aabbccddee1234567890aabbccddee1234567890aabb", Name: "Node-AA"},
{Role: "repeater", PublicKey: "ffgghhii1234567890aabbccddee1234567890aabbccddee1234567890aabb11", Name: "Node-FF"},
}
pm := buildPrefixMap(nodes)
+212
@@ -0,0 +1,212 @@
package main
import (
"encoding/json"
"testing"
)
func TestCanAppearInPath(t *testing.T) {
cases := []struct {
role string
want bool
}{
{"repeater", true},
{"Repeater", true},
{"REPEATER", true},
{"room_server", true},
{"Room_Server", true},
{"room", true},
{"companion", false},
{"sensor", false},
{"", false},
{"unknown", false},
}
for _, tc := range cases {
if got := canAppearInPath(tc.role); got != tc.want {
t.Errorf("canAppearInPath(%q) = %v, want %v", tc.role, got, tc.want)
}
}
}
func TestBuildPrefixMap_ExcludesCompanions(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestBuildPrefixMap_ExcludesSensors(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
if len(pm.m) != 0 {
t.Fatalf("expected empty prefix map, got %d entries", len(pm.m))
}
}
func TestResolveWithContext_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil, got %+v", r)
}
}
func TestResolveWithContext_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r != nil {
t.Fatalf("expected nil for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PrefersRepeaterOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "7a5678901234", Role: "repeater", Name: "MyRepeater"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRepeater" {
t.Fatalf("expected MyRepeater, got %s", r.Name)
}
}
func TestResolveWithContext_PrefersRoomServerOverCompanionAtSamePrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "ab1234abcdef", Role: "companion", Name: "MyCompanion"},
{PublicKey: "ab5678901234", Role: "room_server", Name: "MyRoom"},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("ab", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "MyRoom" {
t.Fatalf("expected MyRoom, got %s", r.Name)
}
}
func TestResolve_NilWhenOnlyCompanionMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "MyCompanion"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for companion-only prefix, got %+v", r)
}
}
func TestResolve_NilWhenOnlySensorMatchesPrefix(t *testing.T) {
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "sensor", Name: "MySensor"},
}
pm := buildPrefixMap(nodes)
r := pm.resolve("7a")
if r != nil {
t.Fatalf("expected nil from resolve() for sensor-only prefix, got %+v", r)
}
}
func TestResolveWithContext_PicksRepeaterEvenWhenCompanionHasGPS(t *testing.T) {
// Adversarial: companion has GPS, repeater doesn't. Role filter should
// exclude companion entirely, so repeater wins despite lacking GPS.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "GPSCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "NoGPSRepeater", Lat: 0, Lon: 0, HasGPS: false},
}
pm := buildPrefixMap(nodes)
r, _, _ := pm.resolveWithContext("7a", nil, nil)
if r == nil {
t.Fatal("expected non-nil result")
}
if r.Name != "NoGPSRepeater" {
t.Fatalf("expected NoGPSRepeater (role filter excludes companion), got %s", r.Name)
}
}
func TestComputeDistancesForTx_CompanionNeverInResolvedChain(t *testing.T) {
// Integration test: a path with a prefix matching both a companion and a
// repeater. The resolveHop function (using buildPrefixMap) should only
// return the repeater.
nodes := []nodeInfo{
{PublicKey: "7a1234abcdef", Role: "companion", Name: "BadCompanion", Lat: 37.0, Lon: -122.0, HasGPS: true},
{PublicKey: "7a5678901234", Role: "repeater", Name: "GoodRepeater", Lat: 38.0, Lon: -123.0, HasGPS: true},
{PublicKey: "bb1111111111", Role: "repeater", Name: "OtherRepeater", Lat: 39.0, Lon: -124.0, HasGPS: true},
}
pm := buildPrefixMap(nodes)
nodeByPk := make(map[string]*nodeInfo)
for i := range nodes {
nodeByPk[nodes[i].PublicKey] = &nodes[i]
}
repeaterSet := map[string]bool{
"7a5678901234": true,
"bb1111111111": true,
}
// Build a synthetic StoreTx with a path ["7a", "bb"] and a sender with GPS
senderPK := "cc0000000000"
sender := nodeInfo{PublicKey: senderPK, Role: "repeater", Name: "Sender", Lat: 36.0, Lon: -121.0, HasGPS: true}
nodeByPk[senderPK] = &sender
pathJSON, _ := json.Marshal([]string{"7a", "bb"})
decoded, _ := json.Marshal(map[string]interface{}{"pubKey": senderPK})
tx := &StoreTx{
PathJSON: string(pathJSON),
DecodedJSON: string(decoded),
FirstSeen: "2026-04-30T12:00",
}
resolveHop := func(hop string) *nodeInfo {
return pm.resolve(hop)
}
hops, pathRec := computeDistancesForTx(tx, nodeByPk, repeaterSet, resolveHop)
// Verify BadCompanion's pubkey never appears in hops
badPK := "7a1234abcdef"
for i, h := range hops {
if h.FromPk == badPK || h.ToPk == badPK {
t.Fatalf("hop[%d] contains BadCompanion pubkey: from=%s to=%s", i, h.FromPk, h.ToPk)
}
}
// Verify BadCompanion's pubkey never appears in pathRec
if pathRec == nil {
t.Fatal("expected non-nil path record (3 GPS nodes in chain)")
}
for i, hop := range pathRec.Hops {
if hop.FromPk == badPK || hop.ToPk == badPK {
t.Fatalf("pathRec.Hops[%d] contains BadCompanion pubkey: from=%s to=%s", i, hop.FromPk, hop.ToPk)
}
}
// Verify GoodRepeater IS in the chain (proves the prefix was resolved to the right node)
goodPK := "7a5678901234"
foundGood := false
for _, hop := range pathRec.Hops {
if hop.FromPk == goodPK || hop.ToPk == goodPK {
foundGood = true
break
}
}
if !foundGood {
t.Fatal("expected GoodRepeater (7a5678901234) in pathRec.Hops but not found")
}
}
+29 -29
@@ -11,7 +11,7 @@ import (
func TestResolveWithContext_UniquePrefix(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1b2c3d4", nil, nil)
if ni == nil || ni.Name != "Node-A" {
@@ -24,7 +24,7 @@ func TestResolveWithContext_UniquePrefix(t *testing.T) {
func TestResolveWithContext_NoMatch(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1b2c3d4", Name: "Node-A"},
{Role: "repeater", PublicKey: "a1b2c3d4", Name: "Node-A"},
})
ni, confidence, _ := pm.resolveWithContext("ff", nil, nil)
if ni != nil {
@@ -37,8 +37,8 @@ func TestResolveWithContext_NoMatch(t *testing.T) {
func TestResolveWithContext_AffinityWins(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1"},
{PublicKey: "a1bbbbbb", Name: "Node-A2"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2"},
})
graph := NewNeighborGraph()
@@ -60,9 +60,9 @@ func TestResolveWithContext_AffinityWins(t *testing.T) {
func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "Node-A1", HasGPS: true, Lat: 10, Lon: 20},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Node-A2", HasGPS: true, Lat: 11, Lon: 21},
{Role: "repeater", PublicKey: "c0c0c0c0", Name: "Ctx", HasGPS: true, Lat: 10.1, Lon: 20.1},
})
graph := NewNeighborGraph()
@@ -85,8 +85,8 @@ func TestResolveWithContext_AffinityTooClose_FallsToGeo(t *testing.T) {
func TestResolveWithContext_GPSPreference(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -100,8 +100,8 @@ func TestResolveWithContext_GPSPreference(t *testing.T) {
func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "First"},
{PublicKey: "a1bbbbbb", Name: "Second"},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "First"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "Second"},
})
ni, confidence, _ := pm.resolveWithContext("a1", nil, nil)
@@ -115,8 +115,8 @@ func TestResolveWithContext_FirstMatchFallback(t *testing.T) {
func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni, confidence, _ := pm.resolveWithContext("a1", []string{"someone"}, nil)
@@ -131,8 +131,8 @@ func TestResolveWithContext_NilGraphFallsToGPS(t *testing.T) {
func TestResolveWithContext_BackwardCompatResolve(t *testing.T) {
// Verify original resolve() still works unchanged
pm := buildPrefixMap([]nodeInfo{
{PublicKey: "a1aaaaaa", Name: "NoGPS"},
{PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
{Role: "repeater", PublicKey: "a1aaaaaa", Name: "NoGPS"},
{Role: "repeater", PublicKey: "a1bbbbbb", Name: "HasGPS", HasGPS: true, Lat: 1, Lon: 2},
})
ni := pm.resolve("a1")
if ni == nil || ni.Name != "HasGPS" {
@@ -164,8 +164,8 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
_ = srv
// Insert a unique node
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"ff11223344", "UniqueNode", 37.0, -122.0)
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"ff11223344", "UniqueNode", 37.0, -122.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ff11223344", nil)
@@ -189,10 +189,10 @@ func TestResolveHopsAPI_UniquePrefix(t *testing.T) {
func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
srv, router := setupTestServer(t)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"ee1aaaaaaa", "Node-E1", 37.0, -122.0)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"ee1bbbbbbb", "Node-E2", 38.0, -121.0)
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"ee1aaaaaaa", "Node-E1", 37.0, -122.0, "repeater")
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"ee1bbbbbbb", "Node-E2", 38.0, -121.0, "repeater")
srv.store.InvalidateNodeCache()
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=ee1", nil)
@@ -224,12 +224,12 @@ func TestResolveHopsAPI_AmbiguousNoContext(t *testing.T) {
func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
srv, router := setupTestServer(t)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"dd1aaaaaaa", "Node-D1", 37.0, -122.0)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"dd1bbbbbbb", "Node-D2", 38.0, -121.0)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"c0c0c0c0c0", "Context", 37.1, -122.1)
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"dd1aaaaaaa", "Node-D1", 37.0, -122.0, "repeater")
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"dd1bbbbbbb", "Node-D2", 38.0, -121.0, "repeater")
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"c0c0c0c0c0", "Context", 37.1, -122.1, "repeater")
// Invalidate node cache so the PM includes newly inserted nodes.
srv.store.cacheMu.Lock()
@@ -279,8 +279,8 @@ func TestResolveHopsAPI_WithAffinityContext(t *testing.T) {
func TestResolveHopsAPI_ResponseShape(t *testing.T) {
srv, router := setupTestServer(t)
-	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon) VALUES (?, ?, ?, ?)",
-		"bb1aaaaaaa", "Node-B1", 37.0, -122.0)
+	srv.db.conn.Exec("INSERT OR IGNORE INTO nodes (public_key, name, lat, lon, role) VALUES (?, ?, ?, ?, ?)",
+		"bb1aaaaaaa", "Node-B1", 37.0, -122.0, "repeater")
req := httptest.NewRequest("GET", "/api/resolve-hops?hops=bb1a", nil)
rr := httptest.NewRecorder()
+11
@@ -4551,9 +4551,20 @@ type prefixMap struct {
// entries to ~7×N (+ 1 full-key entry per node for exact-match lookups).
const maxPrefixLen = 8
// canAppearInPath returns true if the node's role allows it to appear as a
// path hop. Only repeaters, room servers, and rooms can forward packets;
// companions and sensors originate but never relay.
func canAppearInPath(role string) bool {
r := strings.ToLower(role)
return strings.Contains(r, "repeater") || strings.Contains(r, "room_server") || r == "room"
}
func buildPrefixMap(nodes []nodeInfo) *prefixMap {
pm := &prefixMap{m: make(map[string][]nodeInfo, len(nodes)*(maxPrefixLen+1))}
for _, n := range nodes {
if !canAppearInPath(n.Role) {
continue
}
pk := strings.ToLower(n.PublicKey)
maxLen := maxPrefixLen
if maxLen > len(pk) {
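The Go filter above lowercases the role, uses substring matches for "repeater" and "room_server", and an exact match for "room"; the same rule is duplicated on the client in `public/hop-resolver.js`. An illustrative standalone sketch of that matching rule (JavaScript, not the shipped code):

```javascript
// Sketch of the shared path-eligibility rule: substring match for
// "repeater" / "room_server" (so "Repeater", "Room_Server" etc. pass),
// exact match for "room"; everything else is excluded.
function canAppearInPath(role) {
  if (!role) return false;
  var r = String(role).toLowerCase();
  return r.indexOf('repeater') >= 0 || r.indexOf('room_server') >= 0 || r === 'room';
}

// Same cases as the TestCanAppearInPath table.
var cases = {
  repeater: true, Repeater: true, REPEATER: true,
  room_server: true, Room_Server: true, room: true,
  companion: false, sensor: false, unknown: false,
};
Object.keys(cases).forEach(function (role) {
  console.log(role, canAppearInPath(role) === cases[role] ? 'ok' : 'MISMATCH');
});
```

Note the substring check means a role like "roommate" is still excluded (it only matches the exact string "room"), while vendor variants such as "REPEATER" pass.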
+44 -17
@@ -393,17 +393,25 @@
}
}
-// Merge user-stored keys into the channel list
+// Merge user-stored keys into the channel list.
+// If a stored key matches a server-known channel, mark that channel as
+// userAdded so the ✕ button appears — otherwise the user has no way to
+// remove a key they added but that the server already knows about.
function mergeUserChannels() {
var keys = ChannelDecrypt.getStoredKeys();
var names = Object.keys(keys);
for (var i = 0; i < names.length; i++) {
var name = names[i];
-// Check if channel already exists by name
-var exists = channels.some(function (ch) {
-return ch.name === name || ch.hash === name || ch.hash === ('user:' + name);
-});
-if (!exists) {
+var matched = false;
+for (var j = 0; j < channels.length; j++) {
+var ch = channels[j];
+if (ch.name === name || ch.hash === name || ch.hash === ('user:' + name)) {
+ch.userAdded = true;
+matched = true;
+break;
+}
+}
+if (!matched) {
channels.push({
hash: 'user:' + name,
name: name,
@@ -749,19 +757,38 @@
e.stopPropagation();
var channelHash = removeBtn.getAttribute('data-remove-channel');
if (!channelHash) return;
-var chName = channelHash.startsWith('user:') ? channelHash.substring(5) : channelHash;
+// The localStorage key is the channel name. For user:-prefixed entries
+// strip the prefix; for server-known channels look up the channel
+// object so we use its display name (the hash itself isn't the key).
+var ch = channels.find(function (c) { return c.hash === channelHash; });
+var chName = channelHash.startsWith('user:')
+? channelHash.substring(5)
+: (ch && ch.name) || channelHash;
if (!confirm('Remove channel "' + chName + '"? This will clear saved keys and cached messages.')) return;
ChannelDecrypt.removeKey(chName);
-// Remove from channels array
-channels = channels.filter(function (c) { return c.hash !== channelHash; });
-if (selectedHash === channelHash) {
-selectedHash = null;
-messages = [];
-history.replaceState(null, '', '#/channels');
-var msgEl2 = document.getElementById('chMessages');
-if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
-var header2 = document.getElementById('chHeader');
-if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
+if (channelHash.startsWith('user:')) {
+// Pure user-added channel — drop from the list entirely.
+channels = channels.filter(function (c) { return c.hash !== channelHash; });
+if (selectedHash === channelHash) {
+selectedHash = null;
+messages = [];
+history.replaceState(null, '', '#/channels');
+var msgEl2 = document.getElementById('chMessages');
+if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Choose a channel from the sidebar to view messages</div>';
+var header2 = document.getElementById('chHeader');
+if (header2) header2.querySelector('.ch-header-text').textContent = 'Select a channel';
+}
+} else if (ch) {
+// Server-known channel: keep the row, just unmark as user-added so
+// the ✕ disappears until they re-add a key.
+ch.userAdded = false;
+// If this was the selected channel, clear decrypted messages since
+// the key is gone — they can't be re-decrypted without re-adding it.
+if (selectedHash === channelHash) {
+messages = [];
+var msgEl2 = document.getElementById('chMessages');
+if (msgEl2) msgEl2.innerHTML = '<div class="ch-empty">Key removed — add a key to decrypt messages</div>';
+}
+}
renderChannelList();
return;
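The two removal branches can be summarized as a small pure function over the channel list; a sketch with hypothetical plain objects (the real handler also touches the DOM and selection state):

```javascript
// Sketch: effect of removing a channel's key on the channel list.
// 'user:'-prefixed rows (pure user additions) are dropped entirely;
// server-known rows survive but lose their userAdded flag, which is
// what hides the ✕ button until a key is re-added.
function removeChannelKey(channels, channelHash) {
  if (channelHash.startsWith('user:')) {
    return channels.filter(function (c) { return c.hash !== channelHash; });
  }
  return channels.map(function (c) {
    return c.hash === channelHash ? Object.assign({}, c, { userAdded: false }) : c;
  });
}

var list = [
  { hash: 'user:secret', name: 'secret', userAdded: true },
  { hash: 'ab12', name: 'Public', userAdded: true },
];
console.log(removeChannelKey(list, 'user:secret').length); // user-added row dropped
console.log(removeChannelKey(list, 'ab12')[1].userAdded);  // server row kept, unflagged
```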
+12
@@ -7,6 +7,14 @@ window.HopResolver = (function() {
const MAX_HOP_DIST = 1.8; // ~200km in degrees
const REGION_RADIUS_KM = 300;
// Only repeaters and room servers can appear as path hops per protocol.
// Companions/sensors originate but never relay packets.
function canAppearInPath(role) {
if (!role) return false;
var r = String(role).toLowerCase();
return r.indexOf('repeater') >= 0 || r.indexOf('room_server') >= 0 || r === 'room';
}
let prefixIdx = {}; // lowercase hex prefix → [node, ...]
let pubkeyIdx = {}; // full lowercase pubkey → node (O(1) lookup)
let nodesList = [];
@@ -40,7 +48,11 @@ window.HopResolver = (function() {
for (const n of nodesList) {
if (!n.public_key) continue;
const pk = n.public_key.toLowerCase();
// pubkeyIdx includes ALL nodes — used by resolveFromServer for
// server-confirmed full-pubkey lookups (any node type).
pubkeyIdx[pk] = n;
// prefixIdx only includes nodes that can appear as path hops.
if (!canAppearInPath(n.role)) continue;
for (let len = 1; len <= 3; len++) {
const p = pk.slice(0, len * 2);
if (!prefixIdx[p]) prefixIdx[p] = [];
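The `init()` loop above maintains two indexes with different membership rules: `pubkeyIdx` stays complete while `prefixIdx` is role-filtered. A condensed, self-contained sketch of that design (illustrative, not the module itself):

```javascript
// Sketch of the dual-index design: pubkeyIdx indexes ALL nodes (full-pubkey
// lookups from server-confirmed resolved_path arrays may hit any node type),
// while prefixIdx only indexes path-eligible nodes.
function buildIndexes(nodes) {
  var pubkeyIdx = {}, prefixIdx = {};
  nodes.forEach(function (n) {
    if (!n.public_key) return;
    var pk = n.public_key.toLowerCase();
    pubkeyIdx[pk] = n; // complete index
    var r = String(n.role || '').toLowerCase();
    var eligible = r.indexOf('repeater') >= 0 || r.indexOf('room_server') >= 0 || r === 'room';
    if (!eligible) return; // companions/sensors never appear as hops
    for (var len = 1; len <= 3; len++) {
      var p = pk.slice(0, len * 2); // 1-, 2-, 3-byte hex prefixes
      (prefixIdx[p] = prefixIdx[p] || []).push(n);
    }
  });
  return { pubkeyIdx: pubkeyIdx, prefixIdx: prefixIdx };
}

var idx = buildIndexes([
  { public_key: 'ab9999', role: 'companion', name: 'Companion1' },
  { public_key: 'ab1234', role: 'repeater', name: 'Repeater1' },
]);
console.log(Object.keys(idx.pubkeyIdx).length); // both nodes indexed by pubkey
console.log(idx.prefixIdx['ab'].length);        // only the repeater under 'ab'
```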
+40 -9
@@ -22,9 +22,9 @@ function assert(condition, msg) {
// ── Test nodes ──
// Two nodes share the same 1-byte prefix "ab"
-const nodeA = { public_key: 'ab1111', name: 'NodeA', lat: 37.0, lon: -122.0 };
-const nodeB = { public_key: 'ab2222', name: 'NodeB', lat: 38.0, lon: -123.0 };
-const nodeC = { public_key: 'cd3333', name: 'NodeC', lat: 37.5, lon: -122.5 };
+const nodeA = { public_key: 'ab1111', name: 'NodeA', role: 'repeater', lat: 37.0, lon: -122.0 };
+const nodeB = { public_key: 'ab2222', name: 'NodeB', role: 'repeater', lat: 38.0, lon: -123.0 };
+const nodeC = { public_key: 'cd3333', name: 'NodeC', role: 'repeater', lat: 37.5, lon: -122.5 };
console.log('\n=== HopResolver Affinity Tests ===\n');
@@ -88,7 +88,7 @@ assert(result5['ab'].name === 'NodeB', 'Should pick NodeB (highest affinity 0.9)
// Test 6: Unambiguous hops are not affected by affinity
console.log('\nTest 6: Unambiguous hops unaffected by affinity');
-const nodeD = { public_key: 'ee4444', name: 'NodeD', lat: 36.0, lon: -121.0 };
+const nodeD = { public_key: 'ee4444', name: 'NodeD', role: 'repeater', lat: 36.0, lon: -121.0 };
HopResolver.init([nodeA, nodeB, nodeC, nodeD]);
HopResolver.setAffinity({ edges: [] });
const result6 = HopResolver.resolve(['ee44'], null, null, null, null, null);
@@ -97,9 +97,9 @@ assert(!result6['ee44'].ambiguous, 'Should not be marked ambiguous');
// Test 7: lat=0 / lon=0 candidates are NOT excluded (equator/prime-meridian bug fix)
console.log('\nTest 7: lat=0 / lon=0 candidates are included in geo scoring');
-const nodeEquator = { public_key: 'ab5555', name: 'EquatorNode', lat: 0, lon: 10 };
-const nodeFar = { public_key: 'ab6666', name: 'FarNode', lat: 60, lon: 60 };
-const anchorNearEq = { public_key: 'cd7777', name: 'AnchorEq', lat: 1, lon: 11 };
+const nodeEquator = { public_key: 'ab5555', name: 'EquatorNode', role: 'repeater', lat: 0, lon: 10 };
+const nodeFar = { public_key: 'ab6666', name: 'FarNode', role: 'repeater', lat: 60, lon: 60 };
+const anchorNearEq = { public_key: 'cd7777', name: 'AnchorEq', role: 'repeater', lat: 1, lon: 11 };
HopResolver.init([nodeEquator, nodeFar, anchorNearEq]);
HopResolver.setAffinity({});
// Anchor near equator — EquatorNode (0,10) should be geo-closest
@@ -109,13 +109,44 @@ assert(result7['ab'].name === 'EquatorNode',
// Test 8: lon=0 candidate is also included
console.log('\nTest 8: lon=0 candidate is included in geo scoring');
-const nodePrime = { public_key: 'ab8888', name: 'PrimeMeridian', lat: 10, lon: 0 };
-const anchorNearPM = { public_key: 'cd9999', name: 'AnchorPM', lat: 11, lon: 1 };
+const nodePrime = { public_key: 'ab8888', name: 'PrimeMeridian', role: 'repeater', lat: 10, lon: 0 };
+const anchorNearPM = { public_key: 'cd9999', name: 'AnchorPM', role: 'repeater', lat: 11, lon: 1 };
HopResolver.init([nodePrime, nodeFar, anchorNearPM]);
HopResolver.setAffinity({});
const result8 = HopResolver.resolve(['cd99', 'ab'], 11.0, 1.0, null, null, null);
assert(result8['ab'].name === 'PrimeMeridian',
'lon=0 candidate should be included and win by geo — got: ' + result8['ab'].name);
// ── Role filter tests (#935) ──
console.log('\nTest: Role filter — companions excluded from prefixIdx');
const companion = { public_key: 'ab9999', name: 'Companion1', role: 'companion', lat: 37.0, lon: -122.0 };
const sensor = { public_key: 'ab7777', name: 'Sensor1', role: 'sensor', lat: 37.0, lon: -122.0 };
const repeater = { public_key: 'ab1234', name: 'Repeater1', role: 'repeater', lat: 37.0, lon: -122.0 };
const roomSrv = { public_key: 'ff1234', name: 'RoomSrv1', role: 'room_server', lat: 37.0, lon: -122.0 };
HopResolver.init([companion, sensor, repeater, roomSrv]);
HopResolver.setAffinity({});
// Prefix 'ab' should only resolve to repeater (companion/sensor excluded)
const r1 = HopResolver.resolve(['ab12'], 0, 0, null, null, null);
assert(r1['ab12'] && r1['ab12'].name === 'Repeater1',
'prefix ab12 resolves to Repeater1 not companion — got: ' + (r1['ab12'] && r1['ab12'].name));
// Prefix 'ff' should resolve to room_server
const r2 = HopResolver.resolve(['ff12'], 0, 0, null, null, null);
assert(r2['ff12'] && r2['ff12'].name === 'RoomSrv1',
'prefix ff12 resolves to RoomSrv1 — got: ' + (r2['ff12'] && r2['ff12'].name));
// Prefix that only matches companion should return nothing
const r3 = HopResolver.resolve(['ab99'], 0, 0, null, null, null);
assert(!r3['ab99'] || !r3['ab99'].name,
'prefix ab99 (companion only) resolves to nothing — got: ' + (r3['ab99'] && r3['ab99'].name));
// pubkeyIdx should still have companion (full pubkey lookup)
console.log('\nTest: pubkeyIdx still includes all roles');
const fromServer = HopResolver.resolveFromServer(['ab99'], [companion.public_key]);
assert(fromServer['ab99'] && fromServer['ab99'].name === 'Companion1',
'resolveFromServer finds companion by full pubkey — got: ' + (fromServer['ab99'] && fromServer['ab99'].name));
console.log('\n' + (passed + failed) + ' tests, ' + passed + ' passed, ' + failed + ' failed\n');
process.exit(failed > 0 ? 1 : 0);