meshcore-analyzer/cmd/server/rw_cache.go
Kpa-clawbot dd2f044f2b fix: cache RW SQLite connection + dedup DBConfig (closes #921) (#982)
Closes #921

## Summary

Follow-up to #920 (incremental auto-vacuum). Addresses both items from
the adversarial review:

### 1. RW connection caching

Previously, every call to `openRW(dbPath)` opened a new SQLite RW
connection and closed it after use. This happened in:
- `runIncrementalVacuum` (~4x/hour)
- `PruneOldPackets`, `PruneOldMetrics`, `RemoveStaleObservers`
- `buildAndPersistEdges`, `PruneNeighborEdges`
- All neighbor persist operations

Now a single `*sql.DB` handle (with `MaxOpenConns(1)`) is cached
process-wide via `cachedRW(dbPath)`. The underlying connection pool
manages serialization. The original `openRW()` function is retained for
one-shot test usage.

### 2. DBConfig dedup

`DBConfig` was defined identically in both `cmd/server/config.go` and
`cmd/ingestor/config.go`. Extracted to `internal/dbconfig/` as a shared
package; both binaries now use a type alias (`type DBConfig =
dbconfig.DBConfig`).

## Tests added

| Test | File |
|------|------|
| `TestCachedRW_ReturnsSameHandle` | `cmd/server/rw_cache_test.go` |
| `TestCachedRW_100Calls_SingleConnection` | `cmd/server/rw_cache_test.go` |
| `TestGetIncrementalVacuumPages_Default` | `internal/dbconfig/dbconfig_test.go` |
| `TestGetIncrementalVacuumPages_Configured` | `internal/dbconfig/dbconfig_test.go` |

## Verification

```
ok  github.com/corescope/server    20.069s
ok  github.com/corescope/ingestor  47.117s
ok  github.com/meshcore-analyzer/dbconfig  0.003s
```

Both binaries build cleanly. 100 sequential `cachedRW()` calls return
the same handle with exactly 1 entry in the cache map.

---------

Co-authored-by: you <you@example.com>
2026-05-02 20:15:30 -07:00


```go
package main

import (
	"database/sql"
	"fmt"
	"sync"
)

// rwCache holds a process-wide cached RW connection per database path.
// Instead of opening and closing a new RW connection on every call to openRW,
// we cache a single *sql.DB (which internally manages one connection due to
// SetMaxOpenConns(1)). This eliminates repeated open/close overhead for the
// vacuum, prune, and persist operations that run frequently (#921).
var rwCache = struct {
	mu    sync.Mutex
	conns map[string]*sql.DB
}{conns: make(map[string]*sql.DB)}

// cachedRW returns a cached read-write connection for the given dbPath.
// The connection is created on first call and reused thereafter.
// Callers MUST NOT call Close() on the returned *sql.DB.
// The "sqlite" driver is registered elsewhere in package main (via a blank
// import of the driver package).
func cachedRW(dbPath string) (*sql.DB, error) {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()
	if db, ok := rwCache.conns[dbPath]; ok {
		return db, nil
	}
	dsn := fmt.Sprintf("file:%s?_journal_mode=WAL", dbPath)
	db, err := sql.Open("sqlite", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1)
	if _, err := db.Exec("PRAGMA busy_timeout = 5000"); err != nil {
		db.Close()
		return nil, fmt.Errorf("set busy_timeout: %w", err)
	}
	rwCache.conns[dbPath] = db
	return db, nil
}

// closeRWCache closes all cached RW connections (for tests/shutdown).
func closeRWCache() {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()
	for k, db := range rwCache.conns {
		db.Close()
		delete(rwCache.conns, k)
	}
}

// rwCacheLen returns the number of cached connections (for testing).
func rwCacheLen() int {
	rwCache.mu.Lock()
	defer rwCache.mu.Unlock()
	return len(rwCache.conns)
}
```