## Summary
Plain `docker build .` (no buildx) fails immediately:
```
Step 1/45 : FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
failed to parse platform : "" is an invalid component of "": platform specifier
component must match "^[A-Za-z0-9_-]+$"
```
`$BUILDPLATFORM` is only auto-populated by buildx; under plain
BuildKit/`docker build` it's empty.
## Fix
Add `ARG BUILDPLATFORM=linux/amd64` before the `FROM` so the variable
always resolves.
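Sketched in Dockerfile terms (surrounding lines assumed, not copied from the actual Dockerfile):

```dockerfile
# Default keeps plain `docker build` working; buildx overrides it per platform.
ARG BUILDPLATFORM=linux/amd64
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
```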
## Multi-arch preserved
`docker buildx build --platform=linux/arm64,linux/amd64 .` still
overrides `BUILDPLATFORM` at invocation time — the ARG default only
applies when the caller doesn't set one. The existing CI multi-arch
workflow is unaffected.
Fixes #884
Co-authored-by: meshcore-bot <bot@meshcore.local>
Closes #921
## Summary
Follow-up to #920 (incremental auto-vacuum). Addresses both items from
the adversarial review:
### 1. RW connection caching
Previously, every call to `openRW(dbPath)` opened a new SQLite RW
connection and closed it after use. This happened in:
- `runIncrementalVacuum` (~4x/hour)
- `PruneOldPackets`, `PruneOldMetrics`, `RemoveStaleObservers`
- `buildAndPersistEdges`, `PruneNeighborEdges`
- All neighbor persist operations
Now a single `*sql.DB` handle (with `MaxOpenConns(1)`) is cached
process-wide via `cachedRW(dbPath)`. The underlying connection pool
manages serialization. The original `openRW()` function is retained for
one-shot test usage.
### 2. DBConfig dedup
`DBConfig` was defined identically in both `cmd/server/config.go` and
`cmd/ingestor/config.go`. Extracted to `internal/dbconfig/` as a shared
package; both binaries now use a type alias (`type DBConfig =
dbconfig.DBConfig`).
## Tests added
| Test | File |
|------|------|
| `TestCachedRW_ReturnsSameHandle` | `cmd/server/rw_cache_test.go` |
| `TestCachedRW_100Calls_SingleConnection` | `cmd/server/rw_cache_test.go` |
| `TestGetIncrementalVacuumPages_Default` | `internal/dbconfig/dbconfig_test.go` |
| `TestGetIncrementalVacuumPages_Configured` | `internal/dbconfig/dbconfig_test.go` |
## Verification
```
ok github.com/corescope/server 20.069s
ok github.com/corescope/ingestor 47.117s
ok github.com/meshcore-analyzer/dbconfig 0.003s
```
Both binaries build cleanly. 100 sequential `cachedRW()` calls return
the same handle with exactly 1 entry in the cache map.
---------
Co-authored-by: you <you@example.com>
## Problem
Per-observation `path_json` disagrees with `raw_hex` path section for
TRACE packets.
**Reproducer:** packet `af081a2c41281b1e`, observer `lutin🏡`
- `path_json`: `["67","33","D6","33","67"]` (5 hops — from TRACE
payload)
- `raw_hex` path section: `30 2D 0D 23` (4 bytes — SNR values in header)
## Root Cause
`DecodePacket` correctly parses TRACE packets by replacing `path.Hops`
with hop IDs from the payload's `pathData` field (the actual route).
However, the header path bytes for TRACE packets contain **SNR values**
(one per completed hop), not hop IDs.
`BuildPacketData` used `decoded.Path.Hops` to build `path_json`, which
for TRACE packets contained the payload-derived hops — not the header
path bytes that `raw_hex` stores. This caused `path_json` and `raw_hex`
to describe completely different paths.
## Fix
- Added `DecodePathFromRawHex(rawHex)` — extracts header path hops
directly from raw hex bytes, independent of any TRACE payload
overwriting.
- `BuildPacketData` now calls `DecodePathFromRawHex(msg.Raw)` instead of
using `decoded.Path.Hops`, guaranteeing `path_json` always matches the
`raw_hex` path section.
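The idea can be sketched like this (the header layout below — a path-length byte at an assumed offset, followed by that many hop bytes — is a simplification for illustration; the real `DecodePathFromRawHex` derives offsets from the actual MeshCore header):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodePathFromRawHex hex-decodes the frame and reads the header path
// bytes directly, ignoring any TRACE payload. pathLenOffset is an assumed
// parameter for this sketch only.
func decodePathFromRawHex(rawHex string, pathLenOffset int) ([]string, error) {
	raw, err := hex.DecodeString(rawHex)
	if err != nil {
		return nil, err
	}
	if pathLenOffset >= len(raw) {
		return nil, fmt.Errorf("packet too short for path length byte")
	}
	n := int(raw[pathLenOffset])
	if pathLenOffset+1+n > len(raw) {
		return nil, fmt.Errorf("path length %d exceeds packet", n)
	}
	hops := make([]string, 0, n)
	for _, b := range raw[pathLenOffset+1 : pathLenOffset+1+n] {
		hops = append(hops, fmt.Sprintf("%02X", b))
	}
	return hops, nil
}

func main() {
	// Fabricated frame: one header byte, length 4, then the four path
	// bytes from the reproducer (30 2D 0D 23).
	hops, _ := decodePathFromRawHex("0004302D0D23", 1)
	fmt.Println(hops) // → [30 2D 0D 23]
}
```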
## Tests (8 new)
**`DecodePathFromRawHex` unit tests:**
- hash_size 1, 2, 3, 4
- zero-hop direct packets
- transport route (4-byte transport codes before path)
**`BuildPacketData` integration tests:**
- TRACE packet: asserts path_json matches raw_hex header path (not
payload hops)
- Non-TRACE packet: asserts path_json matches raw_hex header path
All existing tests continue to pass (`go test ./...` for both ingestor
and server).
Fixes #886
---------
Co-authored-by: you <you@example.com>
## Problem
`docker pull` on ARM devices fails because the published image is
amd64-only.
## Fix
Enable multi-arch Docker builds via `docker buildx`. **Builder stage
uses native Go cross-compilation; only the runtime-stage `RUN` steps use
QEMU emulation.**
### Changes
| File | Change |
|------|--------|
| `Dockerfile` | Pin builder stage to `--platform=$BUILDPLATFORM` (always native), accept `ARG TARGETOS`/`ARG TARGETARCH` from buildx, set `GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0` on every `go build` |
| `.github/workflows/deploy.yml` | Add `docker/setup-buildx-action@v3` + `docker/setup-qemu-action@v3` (the latter needed only for runtime-stage RUNs), set `platforms: linux/amd64,linux/arm64` |
### Build architecture
- **Builder stage** (`FROM --platform=$BUILDPLATFORM
golang:1.22-alpine`) — runs natively on amd64. Go toolchain
cross-compiles the binaries to `$TARGETARCH` via `GOOS/GOARCH`. No
emulation, ~10× faster than emulated builds. Works because
`modernc.org/sqlite` is pure Go (no CGO).
- **Runtime stage** (`FROM alpine:3.20`) — buildx pulls the per-arch
base. RUN steps (`apk add`, `mkdir/chown`, `chmod`) execute inside the
target-arch image, so QEMU is required to interpret arm64 binaries on
the amd64 host. Only a handful of short shell commands run under
emulation, so the QEMU cost is small.
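The two stages above can be sketched as (exact build lines and binary paths assumed, not copied from the actual Dockerfile):

```dockerfile
# Builder always runs on the build host's native arch; Go cross-compiles.
ARG BUILDPLATFORM=linux/amd64
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Runtime stage is pulled per target arch; its RUN steps need QEMU when emulated.
FROM alpine:3.20
COPY --from=builder /out/server /usr/local/bin/server
```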
### Verify
After merge, on an ARM device:
```bash
docker pull ghcr.io/kpa-clawbot/corescope:edge
docker inspect ghcr.io/kpa-clawbot/corescope:edge --format '{{.Architecture}}'
# → arm64
```
> First arm64 image appears on the next push to master after this
merges.
Closes #768
---------
Co-authored-by: you <you@example.com>
Co-authored-by: Kpa-clawbot <agent@corescope.local>
PR #686 added `internal/sigvalidate/` with `replace` directives in both
`go.mod` files but didn't update the Dockerfile to `COPY` it into the
Docker build context, so `go mod download` fails with "no such file".
## Summary
Implements the `DISABLE_CADDY` environment variable in the Docker
entrypoint, fixing #629.
## Problem
The `DISABLE_CADDY` env var was documented but had no effect — the
entrypoint only handled `DISABLE_MOSQUITTO`.
## Changes
### New supervisord configs
- **`supervisord-go-no-caddy.conf`** — mosquitto + ingestor + server (no
Caddy)
- **`supervisord-go-no-mosquitto-no-caddy.conf`** — ingestor + server
only
### Updated entrypoint (`docker/entrypoint-go.sh`)
Handles all 4 combinations:
| DISABLE_MOSQUITTO | DISABLE_CADDY | Config used |
|---|---|---|
| false | false | `supervisord.conf` (default) |
| true | false | `supervisord-no-mosquitto.conf` |
| false | true | `supervisord-no-caddy.conf` |
| true | true | `supervisord-no-mosquitto-no-caddy.conf` |
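The 4-way selection can be mirrored in a small sketch (the real logic lives in the shell entrypoint `docker/entrypoint-go.sh`; config names taken from the table above):

```go
package main

import "fmt"

// pickConfig mirrors the entrypoint's truth table for choosing a
// supervisord config from the two DISABLE_* flags.
func pickConfig(disableMosquitto, disableCaddy bool) string {
	switch {
	case disableMosquitto && disableCaddy:
		return "supervisord-no-mosquitto-no-caddy.conf"
	case disableMosquitto:
		return "supervisord-no-mosquitto.conf"
	case disableCaddy:
		return "supervisord-no-caddy.conf"
	default:
		return "supervisord.conf"
	}
}

func main() {
	fmt.Println(pickConfig(false, true)) // → supervisord-no-caddy.conf
}
```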
### Dockerfiles
Added COPY lines for the new configs in both `Dockerfile` and
`Dockerfile.go`.
## Testing
```bash
# Verify correct config selection
docker run -e DISABLE_CADDY=true corescope
# Should log: [config] Caddy reverse proxy disabled (DISABLE_CADDY=true)
docker run -e DISABLE_CADDY=true -e DISABLE_MOSQUITTO=true corescope
# Should log both disabled messages
```
Fixes #629
Co-authored-by: you <you@example.com>
## Summary
Several features and fixes from a live deployment of the Go v3.0.0
backend.
### geo_filter — full enforcement
- **Go backend config** (`cmd/server/config.go`,
`cmd/ingestor/config.go`): added `GeoFilterConfig` struct so
`geo_filter.polygon` and `bufferKm` from `config.json` are parsed by
both the server and ingestor
- **Ingestor** (`cmd/ingestor/geo_filter.go`, `cmd/ingestor/main.go`):
ADVERT packets from nodes outside the configured polygon + buffer are
dropped *before* any DB write — no transmission, node, or observation
data is stored
- **Server API** (`cmd/server/geo_filter.go`, `cmd/server/routes.go`):
`GET /api/config/geo-filter` endpoint returns the polygon + bufferKm to
the frontend; `/api/nodes` responses filter out any out-of-area nodes
already in the DB
- **Frontend** (`public/map.js`, `public/live.js`): blue polygon overlay
(solid inner + dashed buffer zone) on Map and Live pages, toggled via
"Mesh live area" checkbox, state shared via localStorage
### Automatic DB pruning
- Add `retention.packetDays` to `config.json` to delete transmissions +
observations older than N days on a daily schedule (1 min after startup,
then every 24h). Nodes and observers are never pruned.
- `POST /api/admin/prune?days=N` for manual runs (requires `X-API-Key`
header if `apiKey` is set)
```json
"retention": {
"nodeDays": 7,
"packetDays": 30
}
```
### tools/geofilter-builder.html
Standalone HTML tool (no server needed) — open in browser, click to
place polygon points on a Leaflet map, set `bufferKm`, copy the
generated `geo_filter` JSON block into `config.json`.
### scripts/prune-nodes-outside-geo-filter.py
Utility script to clean existing out-of-area nodes from the database
(dry-run + confirm). Useful after first enabling geo_filter on a
populated DB.
### HB column in packets table
Shows the hop hash size in bytes (1–4) decoded from the path byte of
each packet's raw hex. Displayed as **HB** between Size and Type
columns, hidden on small screens.
## Test plan
- [x] ADVERT from node outside polygon is not stored (no new row in
nodes or transmissions)
- [x] `GET /api/config/geo-filter` returns polygon + bufferKm when
configured, `{polygon: null, bufferKm: 0}` when not
- [x] `/api/nodes` excludes nodes outside polygon even if present in DB
- [x] Map and Live pages show blue polygon overlay when configured;
checkbox toggles it
- [x] `retention.packetDays: 30` deletes old transmissions/observations
on startup and daily
- [x] `POST /api/admin/prune?days=30` returns `{deleted: N, days: 30}`
- [x] `tools/geofilter-builder.html` opens standalone, draws polygon,
copies valid JSON
- [x] HB column shows 1–4 for all packets in grouped and flat view
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary
- add `DISABLE_MOSQUITTO` support in container startup by switching
supervisord config when disabled
- add a no-mosquitto supervisord config
(`docker/supervisord-go-no-mosquitto.conf`)
- fix Compose port mapping regression so host ports map to fixed
internal listener ports (`80`, `443`, `1883`)
- add compose variants without MQTT port publishing
(`docker-compose.no-mosquitto.yml`,
`docker-compose.staging.no-mosquitto.yml`)
- update `manage.sh` setup flow to ask `Use built-in MQTT broker?
[Y/n]`, skip MQTT port prompt when disabled, persist
`DISABLE_MOSQUITTO`, and use no-mosquitto compose files when
starting/stopping/restarting
- align `.env.example` staging keys with compose
(`STAGING_GO_HTTP_PORT`, `STAGING_GO_MQTT_PORT`)
- fix staging Caddyfile generation to use `STAGING_GO_HTTP_PORT`
- fix `.env.example` staging default comments to match actual values
(82/1885)
## Validation performed
- ✅ `bash -n manage.sh` passes.
- ✅ With `DISABLE_MOSQUITTO=true`, no-mosquitto compose overrides are
selected, Mosquitto is not started, and MQTT port is not published.
- ✅ With `DISABLE_MOSQUITTO=false`, standard compose files are used,
Mosquitto starts, and MQTT port mapping is present.
- ℹ️ Runtime Docker validation requires a running Docker host.
Fixes #267
---------
Co-authored-by: Kpa-clawbot <259247574+Kpa-clawbot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The glob trick `COPY .git-commi[t]` only works with BuildKit, and
`manage.sh` uses legacy `docker build`. Just create a default via `RUN`;
the commit hash comes through `--build-arg` ldflags anyway.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Go server is production-ready. Users upgrading via git pull + manage.sh
get Go automatically. No flags, no engine selection, no decision needed.
- Dockerfile (was Dockerfile.go) — Go multi-stage build
- Dockerfile.node — archived Node.js build for rollback
- docker-compose staging-go now builds from Dockerfile
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Node.js: reads version from `package.json`, commit from the `.git-commit`
file or `git rev-parse --short HEAD` at runtime, with an `unknown`
fallback.
Go: uses `-ldflags` build-time variables (`Version`, `Commit`), falling
back to the `.git-commit` file and the `git` command at runtime.
Dockerfile: copies .git-commit if present (CI bakes it before build).
Dockerfile.go: passes APP_VERSION and GIT_COMMIT as build args to ldflags.
deploy.yml: writes GITHUB_SHA to .git-commit before docker build steps.
docker-compose.yml: passes build args to Go staging build.
Tests updated to verify version and commit fields in both endpoints.
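The Go-side fallback chain can be sketched as a pure function (names and signature hypothetical; the real code reads `.git-commit` and shells out to `git` rather than taking the values as arguments):

```go
package main

import (
	"fmt"
	"strings"
)

// Commit is normally injected at build time, e.g.:
//   go build -ldflags "-X main.Commit=$GIT_COMMIT"
var Commit string

// resolveCommit applies the fallback chain: ldflags value, then the
// .git-commit file contents, then `git rev-parse --short HEAD` output,
// then "unknown".
func resolveCommit(ldflags, fileContents, gitOutput string) string {
	for _, c := range []string{ldflags, fileContents, gitOutput} {
		if c = strings.TrimSpace(c); c != "" {
			return c
		}
	}
	return "unknown"
}

func main() {
	// Commit is empty here (no ldflags), so the file contents win.
	fmt.Println(resolveCommit(Commit, "abc1234\n", "")) // → abc1234
}
```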
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>