simplexmq/tests/CoreTests/XFTPStoreTests.hs

8833e5c1b5 xftp-server: support postgresql backend (#1755)
* xftp: add PostgreSQL backend design spec

* update doc

* adjust styling

* add implementation plan

* refactor: move usedStorage from FileStore to XFTPEnv

* refactor: add getUsedStorage, getFileCount, expiredFiles store functions

* refactor: change file store operations from STM to IO

* refactor: extract FileStoreClass typeclass, move STM impl to Store.STM

* refactor: make XFTPEnv and server polymorphic over FileStoreClass

* feat: add PostgreSQL store skeleton with schema migration

* feat: implement PostgresFileStore operations

* feat: add PostgreSQL INI config, store dispatch, startup validation

* feat: add database import/export CLI commands

* test: add PostgreSQL backend tests

* fix: map ForeignKeyViolation to AUTH in addRecipient

When a file is concurrently deleted while addRecipient runs, the FK
constraint on recipients.sender_id raises ForeignKeyViolation. Previously
this propagated as INTERNAL; now it returns AUTH (file not found).

* fix: only decrement usedStorage for uploaded files on expiration

expireServerFiles unconditionally subtracted file_size from usedStorage
for every expired file, including files that were never uploaded (no
file_path). Since reserve only increments usedStorage during upload,
expiring never-uploaded files caused usedStorage to drift negative.
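
The invariant behind this fix can be illustrated with a small pure sketch (the function name and list representation are hypothetical; the real accounting lives in XFTPEnv):

```haskell
-- Only files that were actually uploaded (Just path) ever incremented
-- usedStorage, so only those may decrement it when they expire.
storageAfterExpiry :: Integer -> [(Maybe FilePath, Integer)] -> Integer
storageAfterExpiry used expired = used - sum [sz | (Just _, sz) <- expired]
```

Subtracting unconditionally would also count the `Nothing` entries, which is exactly the drift described above.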

* fix: handle setFilePath error in receiveServerFile

setFilePath result was discarded with void. If it failed (file deleted
concurrently, or double-upload where file_path IS NULL guard rejected
the second write), the server still reported FROk, incremented stats,
and left usedStorage permanently inflated. Now the error is checked:
on failure, reserved storage is released and AUTH is returned.
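
The shape of this fix (inspect the `Either` result instead of `void`-discarding it, running a compensating action on failure) can be sketched generically; `checkedUpdate` and its error handler are illustrative, not the actual server code:

```haskell
-- Run a store action; on failure run a compensating action
-- (e.g. release reserved storage) and propagate the error.
checkedUpdate :: Monad m => m (Either e ()) -> (e -> m ()) -> m (Either e ())
checkedUpdate action onErr = do
  r <- action
  case r of
    Left e -> onErr e >> pure (Left e)  -- compensate, then report failure
    Right () -> pure (Right ())
```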

* fix: escape double quotes in COPY CSV status field

The status field (e.g. "blocked,reason=spam,notice={...}") is quoted in
CSV for COPY protocol, but embedded double quotes from BlockingInfo
notice (JSON) were not escaped. This could break CSV parsing during
import. Now double quotes are escaped as "" per CSV spec.
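
The escaping rule is the standard RFC 4180 one; a minimal sketch (not the server's actual COPY encoder):

```haskell
-- Inside a quoted CSV field, an embedded double quote is escaped
-- by doubling it: " becomes ""
escapeCsvField :: String -> String
escapeCsvField = concatMap esc
  where
    esc '"' = "\"\""
    esc c = [c]
```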

* fix: reject upload to blocked file in Postgres setFilePath

In Postgres mode, getFile returns a snapshot TVar for fileStatus. If a
file is blocked between getFile and setFilePath, the stale status check
passes but the upload should be rejected. Added status = 'active' to
the UPDATE WHERE clause so blocked files cannot receive uploads.

* fix: add CHECK constraint on file_size > 0

Prevents negative or zero file_size values at the database level.
Without this, corrupted data from import or direct DB access could
cause incorrect storage accounting (getUsedStorage sums file_size,
and expiredFiles casts to Word32 which wraps negative values).
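
The wrap-around hazard is plain `fromIntegral` behavior, shown here in isolation:

```haskell
import Data.Int (Int64)
import Data.Word (Word32)

-- fromIntegral to Word32 reduces modulo 2^32, so a negative file_size
-- silently becomes a huge positive number in storage accounting.
wrapToWord32 :: Int64 -> Word32
wrapToWord32 = fromIntegral
```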

* fix: check for existing data before database import

importFileStore now checks if the target database already contains
files and aborts with an error. Previously, importing into a non-empty
database would fail mid-COPY on duplicate primary keys, leaving the
database in a partially imported state.

* fix: clean up disk file when setFilePath fails in receiveServerFile

When setFilePath fails (file deleted or blocked concurrently, or
duplicate upload), the uploaded file was left orphaned on disk with
no DB record pointing to it. Now the file is removed on failure,
matching the cleanup in the receiveChunk error path.

* fix: check storeAction result in deleteOrBlockServerFile_

The store action result (deleteFile/blockFile) was discarded with void.
If the DB row was already deleted by a concurrent operation, the
function still decremented usedStorage, causing drift. Now the error
propagates via ExceptT, skipping the usedStorage adjustment.

* fix: check deleteFile result in expireServerFiles

deleteFile result was discarded with void. If a concurrent delete
already removed the file, deleteFile returned AUTH but usedStorage
was still decremented — causing double-decrement drift. Now the
usedStorage adjustment and filesExpired stat only run on success.

* refactor: merge STM store into Store.hs, parameterize server tests

- Move STMFileStore and its FileStoreClass instance from Store/STM.hs
  back into Store.hs — the separate file was unnecessary indirection
  for the always-present default implementation.

- Parameterize xftpFileTests over store backend using HSpec SpecWith
  pattern (following SMP's serverTests approach). The same 11 tests
  now run against both memory and PostgreSQL backends via a bracket
  parameter, eliminating all *Pg test duplicates.

- Extract shared run* functions (runTestFileChunkDeliveryAddRecipients,
  runTestWrongChunkSize, runTestFileChunkExpiration, runTestFileStorageQuota)
  from inlined test bodies.

* refactor: clean up per good-code review

- Remove internal helpers from Postgres.hs export list (withDB, withDB',
  handleDuplicate, assertUpdated, withLog are not imported externally)
- Replace local isNothing_ with Data.Maybe.isNothing in Env.hs
- Consolidate duplicate/unused imports in XFTPStoreTests.hs
- Add file_path IS NULL and status guards to STM setFilePath, matching
  the Postgres implementation semantics

* test: parameterize XFTP server, agent and CLI tests over store backend

- xftpTest/xftpTest2/xftpTest4/xftpTestN now take XFTPTestBracket as
  first argument, enabling the same test to run against both memory
  and PostgreSQL backends.

- xftpFileTests (server tests), xftpAgentFileTests (agent tests), and
  xftpCLIFileTests (CLI tests) are SpecWith-parameterized suites that
  receive the bracket from HSpec's before combinator.

- Test.hs runs each parameterized suite twice: once with
  xftpMemoryBracket, once with xftpPostgresBracket (CPP-guarded).

- STM-specific tests (store log restore/replay) stay in memory-only
  xftpAgentTests. SNI/CORS tests stay in memory-only xftpServerTests.

* refactor: remove dead test wrappers after parameterization

Remove old non-parameterized test wrapper functions that were
superseded by the store-backend-parameterized test suites.
All test bodies (run* and _ functions) are preserved and called
from the parameterized specs. Clean up unused imports.

* feat: add manual tests and guide

* refactor: merge file_size CHECK into initial migration

* refactor: extract rowToFileRec shared by getFile sender/recipient paths

* refactor: parameterize XFTPServerConfig over store type

Embed XFTPStoreConfig s as serverStoreCfg field, matching SMP's
ServerConfig. runXFTPServer and newXFTPServerEnv now take a single
XFTPServerConfig s. Restore verifyCmd local helper structure.

* refactor: minimize diff in tests

Restore xftpServerTests and xftpAgentTests bodies to match master
byte-for-byte (only type signatures change for XFTPTestBracket
parameterization); inline the runTestXXX helpers that were split
on this branch.

* refactor: restore getFile position to match master

* refactor: rename withSTMFile back to withFile

* refactor: close store log inside closeFileStore for STM backend

Move STM store log close responsibility into closeFileStore to
match PostgresFileStore, removing the asymmetry where only PG's
close was self-contained.

STMFileStore holds the log in a TVar populated by newXFTPServerEnv
after readWriteFileStore; stopServer no longer needs the explicit
withFileLog closeStoreLog call. Writes still go through XFTPEnv.storeLog
via withFileLog (unchanged).

* refactor: rename XFTPTestBracket to XFTPTestServer

* fix: move file_size check from PG schema to store log import

* refactor: use SQL-standard type names in XFTP schema

* perf: batch expired file deletions with deleteFiles

* refactor: stream export instead of loading recipients into memory

* refactor: parameterize XFTP store with FSType singleton dispatch
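
The singleton-dispatch pattern named here can be sketched as follows; the constructor names and `storeName` function are hypothetical stand-ins for the real definitions:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Promoted kind of store backends
data FSType = FSMemory | FSPostgres

-- Singleton: one constructor per promoted type, so matching on the
-- singleton value refines the type-level backend parameter t.
data SFSType (t :: FSType) where
  SFSMemory :: SFSType 'FSMemory
  SFSPostgres :: SFSType 'FSPostgres

storeName :: SFSType t -> String
storeName SFSMemory = "memory"
storeName SFSPostgres = "postgresql"
```

This lets one polymorphic server type dispatch to the backend-specific implementation at runtime while keeping the backend statically known in each branch.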

* refactor: minimize diff per review feedback

* refactor: use types over strings, deduplicate parser

* refactor: always parse database store type, fail at startup

* fix compilation without postgresql

2026-04-16 09:06:04 +01:00


{-# LANGUAGE DataKinds #-}
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
module CoreTests.XFTPStoreTests (xftpStoreTests, xftpMigrationTests) where

import Control.Monad
import Data.Word (Word32)
import Simplex.FileTransfer.Protocol (FileInfo (..), SFileParty (..))
import Simplex.FileTransfer.Server.Store
import Simplex.FileTransfer.Server.Store.Postgres (PostgresFileStore, importFileStore, exportFileStore)
import Simplex.FileTransfer.Server.StoreLog (closeStoreLog, readWriteFileStore, writeFileStore)
import Simplex.FileTransfer.Transport (XFTPErrorType (..))
import qualified Simplex.Messaging.Crypto as C
import Simplex.Messaging.Protocol (BlockingInfo (..), BlockingReason (..), EntityId (..))
import Simplex.Messaging.Server.QueueStore (ServerEntityStatus (..))
import Simplex.Messaging.Server.StoreLog (openWriteStoreLog)
import Simplex.Messaging.SystemTime (RoundedSystemTime (..))
import System.Directory (doesFileExist, removeFile)
import Test.Hspec hiding (fit, it)
import UnliftIO.STM
import Util
import XFTPClient (testXFTPPostgresCfg)

xftpStoreTests :: Spec
xftpStoreTests = describe "PostgresFileStore operations" $ do
  it "should add and get file by sender" testAddGetFileSender
  it "should add and get file by recipient" testAddGetFileRecipient
  it "should reject duplicate file" testDuplicateFile
  it "should return AUTH for nonexistent file" testGetNonexistent
  it "should set file path with IS NULL guard" testSetFilePath
  it "should reject duplicate recipient" testDuplicateRecipient
  it "should delete file and cascade recipients" testDeleteFileCascade
  it "should block file and update status" testBlockFile
  it "should ack file reception" testAckFile
  it "should return expired files with limit" testExpiredFiles
  it "should compute used storage and file count" testStorageAndCount

xftpMigrationTests :: Spec
xftpMigrationTests = describe "XFTP migration round-trip" $ do
  it "should export to StoreLog and import back to Postgres preserving data" testMigrationRoundTrip

-- Test helpers

withPgStore :: (PostgresFileStore -> IO ()) -> IO ()
withPgStore test = do
  st <- newFileStore testXFTPPostgresCfg :: IO PostgresFileStore
  test st
  closeFileStore st

testSenderId :: EntityId
testSenderId = EntityId "sender001_______"

testRecipientId :: EntityId
testRecipientId = EntityId "recipient001____"

testFileInfo :: C.APublicAuthKey -> FileInfo
testFileInfo sndKey =
  FileInfo
    { sndKey,
      size = 128000 :: Word32,
      digest = "test_digest_bytes_here___"
    }

testCreatedAt :: RoundedFileTime
testCreatedAt = RoundedSystemTime 1000000

-- Tests

testAddGetFileSender :: Expectation
testAddGetFileSender = withPgStore $ \st -> do
  g <- C.newRandom
  (sk, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sk
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  result <- getFile st SFSender testSenderId
  case result of
    Right (FileRec {senderId, fileInfo = fi, createdAt}, key) -> do
      senderId `shouldBe` testSenderId
      sndKey fi `shouldBe` sk
      size fi `shouldBe` 128000
      createdAt `shouldBe` testCreatedAt
      key `shouldBe` sk
    Left e -> expectationFailure $ "getFile failed: " <> show e

testAddGetFileRecipient :: Expectation
testAddGetFileRecipient = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
  result <- getFile st SFRecipient testRecipientId
  case result of
    Right (FileRec {senderId}, key) -> do
      senderId `shouldBe` testSenderId
      key `shouldBe` rcpKey
    Left e -> expectationFailure $ "getFile failed: " <> show e

testDuplicateFile :: Expectation
testDuplicateFile = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Left DUPLICATE_

testGetNonexistent :: Expectation
testGetNonexistent = withPgStore $ \st -> do
  getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . fmap (const ())
  getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ())

testSetFilePath :: Expectation
testSetFilePath = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  setFilePath st testSenderId "/tmp/test_file" `shouldReturn` Right ()
  -- Second setFilePath should fail (file_path IS NULL guard)
  setFilePath st testSenderId "/tmp/other_file" `shouldReturn` Left AUTH
  -- Verify path was set
  result <- getFile st SFSender testSenderId
  case result of
    Right (FileRec {filePath}, _) -> readTVarIO filePath `shouldReturn` Just "/tmp/test_file"
    Left e -> expectationFailure $ "getFile failed: " <> show e

testDuplicateRecipient :: Expectation
testDuplicateRecipient = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Left DUPLICATE_

testDeleteFileCascade :: Expectation
testDeleteFileCascade = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
  deleteFile st testSenderId `shouldReturn` Right ()
  -- File and recipient should both be gone
  getFile st SFSender testSenderId >>= (`shouldBe` Left AUTH) . fmap (const ())
  getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ())

testBlockFile :: Expectation
testBlockFile = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  let blockInfo = BlockingInfo {reason = BRContent, notice = Nothing}
  blockFile st testSenderId blockInfo False `shouldReturn` Right ()
  result <- getFile st SFSender testSenderId
  case result of
    Right (FileRec {fileStatus}, _) -> readTVarIO fileStatus `shouldReturn` EntityBlocked blockInfo
    Left e -> expectationFailure $ "getFile failed: " <> show e

testAckFile :: Expectation
testAckFile = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (rcpKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
  addFile st testSenderId fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addRecipient st testSenderId (FileRecipient testRecipientId rcpKey) `shouldReturn` Right ()
  ackFile st testRecipientId `shouldReturn` Right ()
  -- Recipient gone, but file still exists
  getFile st SFRecipient testRecipientId >>= (`shouldBe` Left AUTH) . fmap (const ())
  result <- getFile st SFSender testSenderId
  case result of
    Right _ -> pure ()
    Left e -> expectationFailure $ "getFile failed: " <> show e

testExpiredFiles :: Expectation
testExpiredFiles = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo = testFileInfo sndKey
      oldTime = RoundedSystemTime 100000
      newTime = RoundedSystemTime 999999999
  -- Add old and new files
  addFile st (EntityId "old_file________") fileInfo oldTime EntityActive `shouldReturn` Right ()
  void $ setFilePath st (EntityId "old_file________") "/tmp/old"
  addFile st (EntityId "new_file________") fileInfo newTime EntityActive `shouldReturn` Right ()
  -- Query expired with cutoff that only catches old file
  expired <- expiredFiles st 500000 100
  length expired `shouldBe` 1
  case expired of
    [(sId, path, sz)] -> do
      sId `shouldBe` EntityId "old_file________"
      path `shouldBe` Just "/tmp/old"
      sz `shouldBe` 128000
    _ -> expectationFailure "expected 1 expired file"

testStorageAndCount :: Expectation
testStorageAndCount = withPgStore $ \st -> do
  g <- C.newRandom
  (sndKey, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  getUsedStorage st `shouldReturn` 0
  getFileCount st `shouldReturn` 0
  let fileInfo = testFileInfo sndKey
  addFile st (EntityId "file_a__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  addFile st (EntityId "file_b__________") fileInfo testCreatedAt EntityActive `shouldReturn` Right ()
  getFileCount st `shouldReturn` 2
  used <- getUsedStorage st
  used `shouldBe` 256000 -- 128000 * 2

-- Migration round-trip test

testMigrationRoundTrip :: Expectation
testMigrationRoundTrip = do
  let storeLogPath = "tests/tmp/xftp-migration-test.log"
      storeLogPath2 = "tests/tmp/xftp-migration-test2.log"
  -- 1. Create STM store with test data
  stmStore <- newFileStore () :: IO STMFileStore
  g <- C.newRandom
  (sndKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (rcpKey1, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  (sndKey2, _) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
  let fileInfo1 = testFileInfo sndKey1
      fileInfo2 = FileInfo {sndKey = sndKey2, size = 64000, digest = "other_digest____________"}
      sId1 = EntityId "migration_file_1"
      sId2 = EntityId "migration_file_2"
      rId1 = EntityId "migration_rcp_1_"
  addFile stmStore sId1 fileInfo1 testCreatedAt EntityActive `shouldReturn` Right ()
  void $ setFilePath stmStore sId1 "/tmp/file1"
  addRecipient stmStore sId1 (FileRecipient rId1 rcpKey1) `shouldReturn` Right ()
  let testBlockInfo = BlockingInfo {reason = BRSpam, notice = Nothing}
  addFile stmStore sId2 fileInfo2 testCreatedAt (EntityBlocked testBlockInfo) `shouldReturn` Right ()
  -- 2. Write to StoreLog
  sl <- openWriteStoreLog False storeLogPath
  writeFileStore sl stmStore
  closeStoreLog sl
  -- 3. Import StoreLog to Postgres
  importFileStore storeLogPath testXFTPPostgresCfg
  -- StoreLog should be renamed to .bak
  doesFileExist storeLogPath `shouldReturn` False
  doesFileExist (storeLogPath <> ".bak") `shouldReturn` True
  -- 4. Export from Postgres back to StoreLog
  exportFileStore storeLogPath2 testXFTPPostgresCfg
  -- 5. Read exported StoreLog into a new STM store and verify
  stmStore2 <- newFileStore () :: IO STMFileStore
  sl2 <- readWriteFileStore storeLogPath2 stmStore2
  closeStoreLog sl2
  -- Verify file 1
  result1 <- getFile stmStore2 SFSender sId1
  case result1 of
    Right (FileRec {fileInfo = fi, filePath, fileStatus}, _) -> do
      size fi `shouldBe` 128000
      readTVarIO filePath `shouldReturn` Just "/tmp/file1"
      readTVarIO fileStatus `shouldReturn` EntityActive
    Left e -> expectationFailure $ "getFile sId1 failed: " <> show e
  -- Verify recipient
  result1r <- getFile stmStore2 SFRecipient rId1
  case result1r of
    Right (_, key) -> key `shouldBe` rcpKey1
    Left e -> expectationFailure $ "getFile rId1 failed: " <> show e
  -- Verify file 2 (blocked)
  result2 <- getFile stmStore2 SFSender sId2
  case result2 of
    Right (FileRec {fileInfo = fi, fileStatus}, _) -> do
      size fi `shouldBe` 64000
      readTVarIO fileStatus `shouldReturn` EntityBlocked (BlockingInfo {reason = BRSpam, notice = Nothing})
    Left e -> expectationFailure $ "getFile sId2 failed: " <> show e
  -- Cleanup
  removeFile (storeLogPath <> ".bak")
  removeFile storeLogPath2