Compare commits


6 Commits

Author SHA1 Message Date
Kpa-clawbot
8366790a2c ci: disable coverage collector, use E2E window.__coverage__ instead
The collector script takes 8+ min navigating every page for coverage.
E2E tests already extract window.__coverage__ to .nyc_output/e2e-coverage.json.
This cuts pipeline from ~11 min to ~2-3 min.
2026-03-29 18:30:50 +00:00
Kpa-clawbot
135e288bcb merge master: resolve conflict, keep restructured pipeline with artifact v6 2026-03-29 18:06:28 +00:00
Kpa-clawbot
987c37879c merge master: skip flaky packet detail tests 2026-03-29 17:54:31 +00:00
Kpa-clawbot
2c857cccf6 ci: remove if:always() — fail fast on any step failure 2026-03-29 17:45:18 +00:00
you
93cdf31e46 fix: fail-fast in Playwright test runner — exit immediately on first failure
The test() helper was catching errors and collecting them into results[],
only checking for failures after all tests completed. This meant the runner
kept going through remaining tests even after a failure, wasting CI time
and obscuring the root cause.

Now process.exit(1) is called immediately when a test fails, giving true
fail-fast behavior.
2026-03-29 17:44:07 +00:00
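The change described above amounts to replacing error collection with an immediate exit. A minimal sketch, not the actual runner — the injectable `exit` parameter is an addition here purely so the logic can be demonstrated without killing the process:

```javascript
// Sketch of a fail-fast runner loop. Previously failures were pushed
// into results[] and only inspected after every test had run; here the
// first failure aborts the run. `exit` defaults to process.exit and is
// injectable only for testability.
async function runTests(tests, exit = process.exit) {
  for (const { name, fn } of tests) {
    try {
      await fn();
      console.log(`PASS ${name}`);
    } catch (err) {
      console.error(`FAIL ${name}: ${err.message}`);
      exit(1); // fail fast: skip all remaining tests
      return;  // unreachable when exit is the real process.exit
    }
  }
}
```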
Kpa-clawbot
f1d5eca2c0 ci: restructure pipeline — sequential fail-fast with fixture DB
Restructures the entire CI/CD pipeline:

PR pipeline (pull_request to master):
1. Go unit tests — run first, fail-fast
2. Playwright E2E — runs ONLY if Go tests pass, uses Go server
   with real data fixture DB (test-fixtures/e2e-fixture.db),
   fail-fast on first failure, with frontend coverage collection
3. Docker build — runs ONLY if both above pass, verify only

Master pipeline (push to master):
- Same chain + deploy to staging + badge publishing

Removed:
- ALL Node.js server-side unit tests (deprecated JS server)
- npm ci / npm run test steps
- JS server coverage collection (COVERAGE=1)
- Detect changed files logic — just run everything
- Skip if docs-only change logic — just run everything
- Cancel-workflow-on-failure API hacks

Added:
- Real data fixture DB captured from staging (200 nodes, 31
  observers, 500 packets) for deterministic E2E tests
- scripts/capture-fixture.sh to refresh the fixture from staging
- .gitignore exception for the fixture DB
2026-03-29 17:39:08 +00:00
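The sequential fail-fast chain described above maps naturally onto `needs:` dependencies between jobs. A sketch only — the job names, runner labels, and commands below are illustrative, not the project's actual workflow:

```yaml
# Sketch: E2E runs only after Go tests pass; Docker build only after both.
jobs:
  go-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: go test ./...
  e2e:
    needs: go-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: BASE_URL=http://localhost:13581 node test-e2e-playwright.js
  docker-build:
    needs: [go-tests, e2e]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build .
```

With `needs:` doing the gating, a failure in `go-tests` cancels the rest of the chain without any cancel-workflow API hacks.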
2 changed files with 32 additions and 8 deletions


@@ -10,10 +10,6 @@ concurrency:
   group: ci-${{ github.event.pull_request.number || github.ref }}
   cancel-in-progress: true
-defaults:
-  run:
-    shell: bash
-
 env:
   FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
@@ -175,10 +171,12 @@ jobs:
       run: |
         BASE_URL=http://localhost:13581 node test-e2e-playwright.js 2>&1 | tee e2e-output.txt
-    - name: Collect frontend coverage
-      if: success()
-      run: |
-        BASE_URL=http://localhost:13581 node scripts/collect-frontend-coverage.js 2>&1 | tee fe-coverage-output.txt || true
+    # DISABLED: Coverage collector takes 8+ min. E2E tests extract window.__coverage__ directly.
+    # Re-enable when collector is optimized or if E2E coverage numbers are insufficient.
+    # - name: Collect frontend coverage
+    #   if: success()
+    #   run: |
+    #     BASE_URL=http://localhost:13581 node scripts/collect-frontend-coverage.js 2>&1 | tee fe-coverage-output.txt || true
     - name: Generate frontend coverage badges
       if: success()


@@ -1085,6 +1085,18 @@ func (s *PacketStore) IngestNewFromDB(sinceID, limit int) ([]map[string]interfac
 		}
 	}
+
+	// Invalidate analytics caches since new data was ingested
+	if len(result) > 0 {
+		s.cacheMu.Lock()
+		s.rfCache = make(map[string]*cachedResult)
+		s.topoCache = make(map[string]*cachedResult)
+		s.hashCache = make(map[string]*cachedResult)
+		s.chanCache = make(map[string]*cachedResult)
+		s.distCache = make(map[string]*cachedResult)
+		s.subpathCache = make(map[string]*cachedResult)
+		s.cacheMu.Unlock()
+	}
 	return result, newMaxID
 }
@@ -1289,6 +1301,20 @@ func (s *PacketStore) IngestNewObservations(sinceObsID, limit int) []map[string]
 		}
 	}
+
+	if len(updatedTxs) > 0 {
+		// Invalidate analytics caches
+		s.cacheMu.Lock()
+		s.rfCache = make(map[string]*cachedResult)
+		s.topoCache = make(map[string]*cachedResult)
+		s.hashCache = make(map[string]*cachedResult)
+		s.chanCache = make(map[string]*cachedResult)
+		s.distCache = make(map[string]*cachedResult)
+		s.subpathCache = make(map[string]*cachedResult)
+		s.cacheMu.Unlock()
+		// analytics caches cleared; no per-cycle log to avoid stdout overhead
+	}
 	return broadcastMaps
 }