mirror of
https://github.com/simplex-chat/simplexmq.git
synced 2026-04-01 00:56:11 +00:00
xftp: implementation of XFTP client as web page (rfc, low level functions)

* protocol, file descriptions, more cryptography, handshake encoding, etc.
* xftp server changes to support web clients: SNI-based certificate choice, CORS headers, OPTIONS request
* web handshake
* test for xftp web handshake
* xftp-web client functions, fix transmission encoding
* support description "redirect" in agent.ts and cross-platform compatibility tests (Haskell <> TypeScript)
* rfc: web transport
* client transport abstraction
* browser environment
* persistent client sessions
* move rfcs
* web page plan
* improve plan
* webpage implementation (not tested)
* fix test
* fix test 2
* fix test 3
* fixes and page test plan
* allow sending xftp client hello after handshake, for web clients that don't know if an established connection exists
* page tests pass
* concurrent and padded hellos in the server
* update TS client to pad hellos
* fix tests
* preview:local
* local preview over https
* fixed https in the test page
* web test cert fixtures
* debug logging in web page and server
* remove debug logging in server/browser, run preview xftp server via cabal run to ensure the latest code is used
* debug logging for page sessions
* add plan
* improve error handling, handle browser reconnections/re-handshake
* fix
* debugging
* opfs fallback
* delete test screenshot
* xftp CLI to support link
* fix encoding for XFTPServerHandshake
* support redirect file descriptions in xftp CLI receive
* refactor CLI redirect
* xftp-web: fixes and multi-server upload (#1714)
  * fix: await sodium.ready in crypto/keys.ts (+ digest.ts StateAddress cast)
  * multi-server parallel upload, remove pickRandomServer
  * fix worker message race: wait for ready signal before posting messages
  * suppress vite build warnings: emptyOutDir, externals, chunkSizeWarningLimit
  * fix Haskell web tests: use agent+server API, wrap server in array, suppress debug logs
  * remove dead APIs: un-export connectXFTP, delete closeXFTP
* fix TypeScript errors in check:web (#1716)
  - client.ts: cast globalThis.process to any for browser tsconfig, suppress node:http2 import, use any for Buffer/chunks, cast fetch body
  - crypto.worker.ts: cast sha512_init() return to StateAddress
* fix: serialize worker message processing to prevent OPFS handle race. Async onmessage allows interleaved execution at await points: when downloadFileRaw fetches chunks from multiple servers in parallel, concurrent handleDecryptAndStore calls both see downloadWriteHandle as null and race on createSyncAccessHandle for the same file, causing intermittent NoModificationAllowedError. Chain message handlers on a promise queue so each runs to completion before the next starts.
* xftp-web: prepare for npm publishing (#1715)
  * prepare package.json for npm publishing: remove private flag, add description/license/repository/publishConfig, rename postinstall to pretest, add prepublishOnly, set files and main
  * stable output filenames in production build
  * fix repository url format, expand files array
* embeddable component: scoped CSS, dark mode, i18n, events, share
  - worker output to assets/ for single-directory deployment
  - scoped all CSS under #app, removed global resets
  - dark mode via .dark ancestor class
  - progress ring reads colors from CSS custom properties
  - i18n via window.__XFTP_I18N__ with t() helper
  - configurable mount element via data-xftp-app attribute
  - optional hashchange listener (data-no-hashchange)
  - completion events: xftp:upload-complete, xftp:download-complete
  - enhanced file-too-large error mentioning SimpleX app
  - native share button via navigator.share
* deferred init and runtime server configuration
  - data-defer-init attribute skips auto-initialization
  - window.__XFTP_SERVERS__ overrides baked-in server list
* use relative base path for relocatable build output
* xftp-web: retry resets to default state, use innerHTML for errors
* xftp-web: only enter download mode for valid XFTP URIs in hash
* xftp-web: render UI before WASM is ready. Move the sodium.ready await after UI initialization so the upload/download interface appears instantly; WASM is only needed when the user triggers an actual upload or download. Dispatch the xftp:ready event once WASM loads.
* xftp-web: CLS placeholder HTML and embedder CSS selectors. Add placeholder HTML to index.html so the page renders a styled card before JS executes, preventing layout shift. Use a <template> element with an inline script to swap to the download placeholder when the URL hash indicates a file download. Auto-compute CSP SHA-256 hashes for inline scripts in the vite build plugin. Change all CSS selectors from #app to :is(#app, [data-xftp-app]) so styles apply when the widget is embedded with the data-xftp-app attribute.
* xftp-web: progress ring overhaul. Rewrite the progress ring with smooth lerp animation, a green checkmark on completion, theme reactivity via MutationObserver, and per-phase color variables (encrypt/upload/download/decrypt). Show honest per-phase progress: each phase animates 0-100% independently with a ring color change between phases. Add a decrypt progress callback from the web worker so the decryption phase tracks real chunk processing instead of showing an indeterminate spinner. Snap immediately on phase reset (0) and completion (1) to avoid lingering partial progress. Clean up animation and observers via destroy() in finally blocks.
* xftp-web: single progress ring for upload, simplify ring color
* xftp-web: single progress ring for download
* feat(xftp-web): granular progress for encrypt/decrypt phases. Add byte-level progress callbacks to encryptFile, decryptChunks, and sha512Streaming by processing data in 256KB segments. The worker reports fine-grained progress across all phases (encrypt+hash+write for upload, read+hash+decrypt for download). The progress ring gains a fillTo method for smooth ease-out animation during minimum display delays. Encrypt/decrypt phases fill their weighted regions (0-15% and 85-99%) with real callbacks, with fillTo covering the remaining time when work finishes under the 1s minimum for files >= 100KB.
* rename package

Co-authored-by: Evgeny Poberezkin <evgeny@poberezkin.com>
Co-authored-by: Evgeny @ SimpleX Chat <259188159+evgeny-simplex@users.noreply.github.com>
Co-authored-by: shum <github.shum@liber.li>
Co-authored-by: sh <37271604+shumvgolove@users.noreply.github.com>
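The "xftp CLI to support link" change above makes the CLI accept a web link and treat everything after `#` as the encoded file description. A minimal base-only sketch of that fragment extraction; `takeFragment` is a hypothetical helper name, not part of the library:

```haskell
-- Extract the URI fragment (the part after '#') from a web link,
-- mirroring how the CLI derives the encoded file description.
takeFragment :: String -> Maybe String
takeFragment url = case drop 1 $ dropWhile (/= '#') url of
  "" -> Nothing -- no fragment in the link
  frag -> Just frag

main :: IO ()
main = do
  print $ takeFragment "https://example.com/#abc123" -- Just "abc123"
  print $ takeFragment "https://example.com/" -- Nothing
```

An empty string after the `#` (or no `#` at all) yields `Nothing`, matching the CLI's "Invalid URL: no fragment" check.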
607 lines
28 KiB
Haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DuplicateRecordFields #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
{-# OPTIONS_GHC -fno-warn-ambiguous-fields #-}

module Simplex.FileTransfer.Client.Main
  ( SendOptions (..),
    CLIError (..),
    xftpClientCLI,
    cliSendFile,
    cliSendFileOpts,
    encodeWebURI,
    decodeWebURI,
    fileWebLink,
    singleChunkSize,
    prepareChunkSizes,
    prepareChunkSpecs,
    getChunkDigest,
  )
where

import qualified Codec.Compression.Zlib.Raw as Z
import Control.Logger.Simple
import Control.Monad
import Control.Monad.Except
import Control.Monad.Trans.Except
import Crypto.Random (ChaChaDRG)
import qualified Data.Attoparsec.ByteString.Char8 as A
import Data.Bifunctor (first)
import qualified Data.ByteString.Base64.URL as U
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy.Char8 as LB
import Data.Char (toLower)
import Data.Either (partitionEithers)
import Data.Int (Int64)
import Data.List (foldl', isPrefixOf, sortOn)
import Data.List.NonEmpty (NonEmpty (..), nonEmpty)
import qualified Data.List.NonEmpty as L
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
import Data.Maybe (fromMaybe)
import Data.Text (Text)
import qualified Data.Text as T
import Data.Word (Word32)
import GHC.Records (HasField (getField))
import Options.Applicative
import Simplex.FileTransfer.Chunks
import Simplex.FileTransfer.Client
import Simplex.FileTransfer.Client.Agent
import Simplex.FileTransfer.Client.Presets
import Simplex.FileTransfer.Crypto
import Simplex.FileTransfer.Description
import Simplex.FileTransfer.Protocol
import Simplex.FileTransfer.Transport (XFTPRcvChunkSpec (..))
import Simplex.FileTransfer.Types
import Simplex.FileTransfer.Util (uniqueCombine)
import qualified Simplex.Messaging.Crypto as C
import Simplex.Messaging.Crypto.File (CryptoFile (..), FTCryptoError (..))
import qualified Simplex.Messaging.Crypto.File as CF
import qualified Simplex.Messaging.Crypto.Lazy as LC
import Simplex.Messaging.Encoding
import Simplex.Messaging.Encoding.String (StrEncoding (..))
import Simplex.Messaging.Parsers (parseAll)
import Simplex.Messaging.Protocol (ProtoServerWithAuth (..), ProtocolServer (..), SenderId, SndPrivateAuthKey, XFTPServer, XFTPServerWithAuth)
import Simplex.Messaging.Server.CLI (getCliCommand')
import Simplex.Messaging.Util (groupAllOn, ifM, tshow, whenM)
import System.Exit (exitFailure)
import System.FilePath (splitFileName, (</>))
import System.IO.Temp (getCanonicalTemporaryDirectory)
import System.Random (StdGen, newStdGen, randomR)
import UnliftIO
import UnliftIO.Directory
xftpClientVersion :: String
xftpClientVersion = "1.0.1"

newtype CLIError = CLIError String
  deriving (Eq, Show, Exception)

cliCryptoError :: FTCryptoError -> CLIError
cliCryptoError = \case
  FTCECryptoError e -> CLIError $ "Error decrypting file: " <> show e
  FTCEInvalidHeader e -> CLIError $ "Invalid file header: " <> e
  FTCEInvalidAuthTag -> CLIError "Error decrypting file: incorrect auth tag"
  FTCEInvalidFileSize -> CLIError "Error decrypting file: incorrect file size"
  FTCEFileIOError e -> CLIError $ "File IO error: " <> show e

data CliCommand
  = SendFile SendOptions
  | ReceiveFile ReceiveOptions
  | DeleteFile DeleteOptions
  | RandomFile RandomFileOptions
  | FileDescrInfo InfoOptions

data SendOptions = SendOptions
  { filePath :: FilePath,
    outputDir :: Maybe FilePath,
    numRecipients :: Int,
    xftpServers :: [XFTPServerWithAuth],
    retryCount :: Int,
    tempPath :: Maybe FilePath,
    verbose :: Bool
  }
  deriving (Show)

data ReceiveOptions = ReceiveOptions
  { fileDescription :: FilePath,
    filePath :: Maybe FilePath,
    retryCount :: Int,
    tempPath :: Maybe FilePath,
    verbose :: Bool,
    yes :: Bool
  }
  deriving (Show)

data DeleteOptions = DeleteOptions
  { fileDescription :: FilePath,
    retryCount :: Int,
    verbose :: Bool,
    yes :: Bool
  }
  deriving (Show)

newtype InfoOptions = InfoOptions
  { fileDescription :: FilePath
  }
  deriving (Show)

data RandomFileOptions = RandomFileOptions
  { filePath :: FilePath,
    fileSize :: FileSize Int64
  }
  deriving (Show)

defaultRetryCount :: Int
defaultRetryCount = 3

cliCommandP :: Parser CliCommand
cliCommandP =
  hsubparser
    ( command "send" (info (SendFile <$> sendP) (progDesc "Send file"))
        <> command "recv" (info (ReceiveFile <$> receiveP) (progDesc "Receive file"))
        <> command "del" (info (DeleteFile <$> deleteP) (progDesc "Delete file from server(s)"))
        <> command "info" (info (FileDescrInfo <$> infoP) (progDesc "Show file description"))
    )
    <|> hsubparser (command "rand" (info (RandomFile <$> randomP) (progDesc "Generate a random file of a given size")) <> internal)
  where
    sendP :: Parser SendOptions
    sendP =
      SendOptions
        <$> argument str (metavar "FILE" <> help "File to send")
        <*> optional (argument str $ metavar "DIR" <> help "Directory to save file descriptions (default: current directory)")
        <*> option auto (short 'n' <> metavar "COUNT" <> help "Number of recipients" <> value 1 <> showDefault)
        <*> xftpServers
        <*> retryCountP
        <*> temp
        <*> verboseP
    receiveP :: Parser ReceiveOptions
    receiveP =
      ReceiveOptions
        <$> fileDescrArg
        <*> optional (argument str $ metavar "DIR" <> help "Directory to save file (default: system Downloads directory)")
        <*> retryCountP
        <*> temp
        <*> verboseP
        <*> yesP
    deleteP :: Parser DeleteOptions
    deleteP =
      DeleteOptions
        <$> fileDescrArg
        <*> retryCountP
        <*> verboseP
        <*> yesP
    infoP :: Parser InfoOptions
    infoP = InfoOptions <$> fileDescrArg
    randomP :: Parser RandomFileOptions
    randomP =
      RandomFileOptions
        <$> argument str (metavar "FILE" <> help "Path to save file")
        <*> argument str (metavar "SIZE" <> help "File size (bytes/kb/mb/gb)")
    fileDescrArg = argument str (metavar "FILE" <> help "File description file")
    retryCountP = option auto (long "retry" <> short 'r' <> metavar "RETRY" <> help "Number of network retries" <> value defaultRetryCount <> showDefault)
    temp = optional (strOption $ long "tmp" <> metavar "TMP" <> help "Directory for temporary encrypted file (default: system temp directory)")
    verboseP = switch (long "verbose" <> short 'v' <> help "Verbose output")
    yesP = switch (long "yes" <> short 'y' <> help "Yes to questions")
    xftpServers =
      option
        parseXFTPServers
        ( long "servers"
            <> short 's'
            <> metavar "SERVER"
            <> help "Semicolon-separated list of XFTP server(s) to use (each server can have more than one hostname)"
            <> value []
        )
    parseXFTPServers = eitherReader $ parseAll xftpServersP . B.pack
    xftpServersP = strP `A.sepBy1` A.char ';'
data SentFileChunk = SentFileChunk
  { chunkNo :: Int,
    sndId :: SenderId,
    sndPrivateKey :: SndPrivateAuthKey,
    chunkSize :: FileSize Word32,
    digest :: FileDigest,
    replicas :: [SentFileChunkReplica]
  }
  deriving (Show)

data SentFileChunkReplica = SentFileChunkReplica
  { server :: XFTPServer,
    recipients :: [(ChunkReplicaId, C.APrivateAuthKey)]
  }
  deriving (Show)

logCfg :: LogConfig
logCfg = LogConfig {lc_file = Nothing, lc_stderr = True}

xftpClientCLI :: IO ()
xftpClientCLI =
  getCliCommand' cliCommandP clientVersion >>= \case
    SendFile opts -> runLogE opts $ cliSendFile opts
    ReceiveFile opts -> runLogE opts $ cliReceiveFile opts
    DeleteFile opts -> runLogE opts $ cliDeleteFile opts
    FileDescrInfo opts -> runE $ cliFileDescrInfo opts
    RandomFile opts -> cliRandomFile opts
  where
    clientVersion = "SimpleX XFTP client v" <> xftpClientVersion

runLogE :: HasField "verbose" a Bool => a -> ExceptT CLIError IO () -> IO ()
runLogE opts a
  | getField @"verbose" opts = setLogLevel LogDebug >> withGlobalLogging logCfg (runE a)
  | otherwise = runE a

runE :: ExceptT CLIError IO () -> IO ()
runE a =
  runExceptT a >>= \case
    Left (CLIError e) -> putStrLn e >> exitFailure
    _ -> pure ()

cliSendFile :: SendOptions -> ExceptT CLIError IO ()
cliSendFile opts = cliSendFileOpts opts True $ printProgress "Uploaded"

cliSendFileOpts :: SendOptions -> Bool -> (Int64 -> Int64 -> IO ()) -> ExceptT CLIError IO ()
cliSendFileOpts SendOptions {filePath, outputDir, numRecipients, xftpServers, retryCount, tempPath, verbose} printInfo notifyProgress = do
  let (_, fileNameStr) = splitFileName filePath
      fileName = T.pack fileNameStr
  liftIO $ when printInfo $ printNoNewLine "Encrypting file..."
  g <- liftIO C.newRandom
  (encPath, fdRcv, fdSnd, chunkSpecs, encSize) <- encryptFileForUpload g fileName
  liftIO $ when printInfo $ printNoNewLine "Uploading file..."
  uploadedChunks <- newTVarIO []
  sentChunks <- uploadFile g chunkSpecs uploadedChunks encSize
  whenM (doesFileExist encPath) $ removeFile encPath
  -- TODO if only small chunks, use different default size
  liftIO $ do
    let fdRcvs = createRcvFileDescriptions fdRcv sentChunks
        fdSnd' = createSndFileDescription fdSnd sentChunks
    (fdRcvPaths, fdSndPath) <- writeFileDescriptions fileNameStr fdRcvs fdSnd'
    when printInfo $ do
      printNoNewLine "File uploaded!"
      putStrLn $ "\nSender file description: " <> fdSndPath
      putStrLn "Pass file descriptions to the recipient(s):"
      forM_ fdRcvPaths putStrLn
    when printInfo $ case fdRcvs of
      rcvFd : _ -> forM_ (fileWebLink rcvFd) $ \(host, fragment) ->
        putStrLn $ "\nWeb link:\nhttps://" <> B.unpack host <> "/#" <> B.unpack fragment
      _ -> pure ()
  where
    encryptFileForUpload :: TVar ChaChaDRG -> Text -> ExceptT CLIError IO (FilePath, FileDescription 'FRecipient, FileDescription 'FSender, [XFTPChunkSpec], Int64)
    encryptFileForUpload g fileName = do
      fileSize <- fromInteger <$> getFileSize filePath
      when (fileSize > maxFileSize) $ throwE $ CLIError $ "Files bigger than " <> maxFileSizeStr <> " are not supported"
      encPath <- getEncPath tempPath "xftp"
      key <- atomically $ C.randomSbKey g
      nonce <- atomically $ C.randomCbNonce g
      let fileHdr = smpEncode FileHeader {fileName, fileExtra = Nothing}
          fileSize' = fromIntegral (B.length fileHdr) + fileSize
          chunkSizes = prepareChunkSizes $ fileSize' + fileSizeLen + authTagSize
          defChunkSize = head chunkSizes
          chunkSizes' = map fromIntegral chunkSizes
          encSize = sum chunkSizes'
          srcFile = CF.plain filePath
      withExceptT (CLIError . show) $ encryptFile srcFile fileHdr key nonce fileSize' encSize encPath
      digest <- liftIO $ LC.sha512Hash <$> LB.readFile encPath
      let chunkSpecs = prepareChunkSpecs encPath chunkSizes
          fdRcv = FileDescription {party = SFRecipient, size = FileSize encSize, digest = FileDigest digest, key, nonce, chunkSize = FileSize defChunkSize, chunks = [], redirect = Nothing}
          fdSnd = FileDescription {party = SFSender, size = FileSize encSize, digest = FileDigest digest, key, nonce, chunkSize = FileSize defChunkSize, chunks = [], redirect = Nothing}
      logDebug $ "encrypted file to " <> tshow encPath
      pure (encPath, fdRcv, fdSnd, chunkSpecs, encSize)
    uploadFile :: TVar ChaChaDRG -> [XFTPChunkSpec] -> TVar [Int64] -> Int64 -> ExceptT CLIError IO [SentFileChunk]
    uploadFile g chunks uploadedChunks encSize = do
      a <- liftIO $ newXFTPAgent defaultXFTPClientAgentConfig
      gen <- newTVarIO =<< liftIO newStdGen
      let xftpSrvs = fromMaybe defaultXFTPServers (nonEmpty xftpServers)
      srvs <- liftIO $ replicateM (length chunks) $ getXFTPServer gen xftpSrvs
      let thd3 (_, _, x) = x
          chunks' = groupAllOn thd3 $ zip3 [1 ..] chunks srvs
      -- TODO shuffle/unshuffle chunks
      -- The reason we don't do pooled uploads within one server here is that the http2 library doesn't handle client concurrency:
      -- upload doesn't allow other requests within the same client until complete (but download does allow).
      logDebug $ "uploading " <> tshow (length chunks) <> " chunks..."
      (errs, rs) <- partitionEithers . concat <$> liftIO (pooledForConcurrentlyN 16 chunks' . mapM $ runExceptT . uploadFileChunk a)
      mapM_ throwE errs
      pure $ map snd (sortOn fst rs)
      where
        uploadFileChunk :: XFTPClientAgent -> (Int, XFTPChunkSpec, XFTPServerWithAuth) -> ExceptT CLIError IO (Int, SentFileChunk)
        uploadFileChunk a (chunkNo, chunkSpec@XFTPChunkSpec {chunkSize}, ProtoServerWithAuth xftpServer auth) = do
          logDebug $ "uploading chunk " <> tshow chunkNo <> " to " <> showServer xftpServer <> "..."
          (sndKey, spKey) <- atomically $ C.generateAuthKeyPair C.SEd25519 g
          rKeys <- atomically $ L.fromList <$> replicateM numRecipients (C.generateAuthKeyPair C.SEd25519 g)
          digest <- liftIO $ getChunkDigest chunkSpec
          let ch = FileInfo {sndKey, size = chunkSize, digest}
          c <- withRetry retryCount $ getXFTPServerClient a xftpServer
          (sndId, rIds) <- withRetry retryCount $ createXFTPChunk c spKey ch (L.map fst rKeys) auth
          withReconnect a xftpServer retryCount $ \c' -> uploadXFTPChunk c' spKey sndId chunkSpec
          logDebug $ "uploaded chunk " <> tshow chunkNo
          uploaded <- atomically . stateTVar uploadedChunks $ \cs ->
            let cs' = fromIntegral chunkSize : cs in (sum cs', cs')
          liftIO $ do
            notifyProgress uploaded encSize
            when verbose $ putStrLn ""
          let recipients = L.toList $ L.map ChunkReplicaId rIds `L.zip` L.map snd rKeys
              replicas = [SentFileChunkReplica {server = xftpServer, recipients}]
          pure (chunkNo, SentFileChunk {chunkNo, sndId, sndPrivateKey = spKey, chunkSize = FileSize chunkSize, digest = FileDigest digest, replicas})
    getXFTPServer :: TVar StdGen -> NonEmpty XFTPServerWithAuth -> IO XFTPServerWithAuth
    getXFTPServer gen = \case
      srv :| [] -> pure srv
      servers -> do
        atomically $ (servers L.!!) <$> stateTVar gen (randomR (0, L.length servers - 1))
    -- M chunks, R replicas, N recipients
    -- rcvReplicas: M[SentFileChunk] -> M * R * N [SentRecipientReplica]
    -- rcvChunks: M * R * N [SentRecipientReplica] -> N[ M[FileChunk] ]
    createRcvFileDescriptions :: FileDescription 'FRecipient -> [SentFileChunk] -> [FileDescription 'FRecipient]
    createRcvFileDescriptions fd sentChunks = map (\chunks -> (fd :: (FileDescription 'FRecipient)) {chunks}) rcvChunks
      where
        rcvReplicas :: [SentRecipientReplica]
        rcvReplicas =
          concatMap
            ( \SentFileChunk {chunkNo, digest, chunkSize, replicas} ->
                concatMap
                  ( \SentFileChunkReplica {server, recipients} ->
                      zipWith (\rcvNo (replicaId, replicaKey) -> SentRecipientReplica {chunkNo, server, rcvNo, replicaId, replicaKey, digest, chunkSize}) [1 ..] recipients
                  )
                  replicas
            )
            sentChunks
        rcvChunks :: [[FileChunk]]
        rcvChunks = map (sortChunks . M.elems) $ M.elems $ foldl' addRcvChunk M.empty rcvReplicas
        sortChunks :: [FileChunk] -> [FileChunk]
        sortChunks = map reverseReplicas . sortOn (\FileChunk {chunkNo} -> chunkNo)
        reverseReplicas ch@FileChunk {replicas} = (ch :: FileChunk) {replicas = reverse replicas}
        addRcvChunk :: Map Int (Map Int FileChunk) -> SentRecipientReplica -> Map Int (Map Int FileChunk)
        addRcvChunk m SentRecipientReplica {chunkNo, server, rcvNo, replicaId, replicaKey, digest, chunkSize} =
          M.alter (Just . addOrChangeRecipient) rcvNo m
          where
            addOrChangeRecipient :: Maybe (Map Int FileChunk) -> Map Int FileChunk
            addOrChangeRecipient = \case
              Just m' -> M.alter (Just . addOrChangeChunk) chunkNo m'
              _ -> M.singleton chunkNo $ FileChunk {chunkNo, digest, chunkSize, replicas = [replica]}
            addOrChangeChunk :: Maybe FileChunk -> FileChunk
            addOrChangeChunk = \case
              Just ch@FileChunk {replicas} -> ch {replicas = replica : replicas}
              _ -> FileChunk {chunkNo, digest, chunkSize, replicas = [replica]}
            replica = FileChunkReplica {server, replicaId, replicaKey}
    createSndFileDescription :: FileDescription 'FSender -> [SentFileChunk] -> FileDescription 'FSender
    createSndFileDescription fd sentChunks = fd {chunks = sndChunks}
      where
        sndChunks :: [FileChunk]
        sndChunks =
          map
            ( \SentFileChunk {chunkNo, sndId, sndPrivateKey, chunkSize, digest, replicas} ->
                FileChunk {chunkNo, digest, chunkSize, replicas = sndReplicas replicas (ChunkReplicaId sndId) sndPrivateKey}
            )
            sentChunks
        -- SentFileChunk having sndId and sndPrivateKey represents the current implementation's limitation
        -- that sender uploads each chunk only to one server, so we can use the first replica's server for FileChunkReplica
        sndReplicas :: [SentFileChunkReplica] -> ChunkReplicaId -> C.APrivateAuthKey -> [FileChunkReplica]
        sndReplicas [] _ _ = []
        sndReplicas (SentFileChunkReplica {server} : _) replicaId replicaKey = [FileChunkReplica {server, replicaId, replicaKey}]
    writeFileDescriptions :: String -> [FileDescription 'FRecipient] -> FileDescription 'FSender -> IO ([FilePath], FilePath)
    writeFileDescriptions fileName fdRcvs fdSnd = do
      outDir <- uniqueCombine (fromMaybe "." outputDir) (fileName <> ".xftp")
      createDirectoryIfMissing True outDir
      fdRcvPaths <- forM (zip [1 ..] fdRcvs) $ \(i :: Int, fd) -> do
        let fdPath = outDir </> ("rcv" <> show i <> ".xftp")
        B.writeFile fdPath $ strEncode fd
        pure fdPath
      let fdSndPath = outDir </> "snd.xftp.private"
      B.writeFile fdSndPath $ strEncode fdSnd
      pure (fdRcvPaths, fdSndPath)
cliReceiveFile :: ReceiveOptions -> ExceptT CLIError IO ()
cliReceiveFile ReceiveOptions {fileDescription, filePath, retryCount, tempPath, verbose, yes} =
  getInputFileDescription >>= receive 1
  where
    getInputFileDescription
      | "http://" `isPrefixOf` fileDescription || "https://" `isPrefixOf` fileDescription = do
          let fragment = B.pack $ drop 1 $ dropWhile (/= '#') fileDescription
          when (B.null fragment) $ throwE $ CLIError "Invalid URL: no fragment"
          either (throwE . CLIError . ("Invalid web link: " <>)) pure $ decodeWebURI fragment
      | otherwise = getFileDescription' fileDescription
    receive :: Int -> ValidFileDescription 'FRecipient -> ExceptT CLIError IO ()
    receive depth (ValidFileDescription FileDescription {size, digest, key, nonce, chunks, redirect}) = do
      encPath <- getEncPath tempPath "xftp"
      createDirectory encPath
      a <- liftIO $ newXFTPAgent defaultXFTPClientAgentConfig
      liftIO $ printNoNewLine "Downloading file..."
      downloadedChunks <- newTVarIO []
      let srv FileChunk {replicas} = case replicas of
            [] -> error "empty FileChunk.replicas"
            FileChunkReplica {server} : _ -> server
          srvChunks = groupAllOn srv chunks
      g <- liftIO C.newRandom
      (errs, rs) <- partitionEithers . concat <$> liftIO (pooledForConcurrentlyN 16 srvChunks $ mapM $ runExceptT . downloadFileChunk g a encPath size downloadedChunks)
      mapM_ throwE errs
      let chunkPaths = map snd $ sortOn fst rs
      encDigest <- liftIO $ LC.sha512Hash <$> readChunks chunkPaths
      when (encDigest /= unFileDigest digest) $ throwE $ CLIError "File digest mismatch"
      encSize <- liftIO $ foldM (\s path -> (s +) . fromIntegral <$> getFileSize path) 0 chunkPaths
      when (FileSize encSize /= size) $ throwE $ CLIError "File size mismatch"
      case redirect of
        Just _
          | depth > 0 -> do
              CryptoFile tmpFile _ <- withExceptT cliCryptoError $ decryptChunks encSize chunkPaths key nonce $ \_ ->
                fmap CF.plain $ uniqueCombine encPath "redirect.yaml"
              forM_ chunks $ acknowledgeFileChunk a
              yaml <- liftIO $ B.readFile tmpFile
              whenM (doesPathExist encPath) $ removeDirectoryRecursive encPath
              innerVfd <- either (throwE . CLIError . ("Redirect: invalid file description: " <>)) pure $ strDecode yaml
              receive 0 innerVfd
          | otherwise -> throwE $ CLIError "Redirect chain too long"
        Nothing -> do
          liftIO $ printNoNewLine "Decrypting file..."
          CryptoFile path _ <- withExceptT cliCryptoError $ decryptChunks encSize chunkPaths key nonce $ fmap CF.plain . getFilePath
          forM_ chunks $ acknowledgeFileChunk a
          whenM (doesPathExist encPath) $ removeDirectoryRecursive encPath
          liftIO $ do
            printNoNewLine $ "File downloaded: " <> path
            unless ("http://" `isPrefixOf` fileDescription || "https://" `isPrefixOf` fileDescription) $
              removeFD yes fileDescription
    downloadFileChunk :: TVar ChaChaDRG -> XFTPClientAgent -> FilePath -> FileSize Int64 -> TVar [Int64] -> FileChunk -> ExceptT CLIError IO (Int, FilePath)
    downloadFileChunk g a encPath (FileSize encSize) downloadedChunks FileChunk {chunkNo, chunkSize, digest, replicas = replica : _} = do
      let FileChunkReplica {server, replicaId, replicaKey} = replica
      logDebug $ "downloading chunk " <> tshow chunkNo <> " from " <> showServer server <> "..."
      chunkPath <- uniqueCombine encPath $ show chunkNo
      let chunkSpec = XFTPRcvChunkSpec chunkPath (unFileSize chunkSize) (unFileDigest digest)
      withReconnect a server retryCount $ \c -> downloadXFTPChunk g c replicaKey (unChunkReplicaId replicaId) chunkSpec
      logDebug $ "downloaded chunk " <> tshow chunkNo <> " to " <> T.pack chunkPath
      downloaded <- atomically . stateTVar downloadedChunks $ \cs ->
        let cs' = fromIntegral (unFileSize chunkSize) : cs in (sum cs', cs')
      liftIO $ do
        printProgress "Downloaded" downloaded encSize
        when verbose $ putStrLn ""
      pure (chunkNo, chunkPath)
    downloadFileChunk _ _ _ _ _ _ = throwE $ CLIError "chunk has no replicas"
    getFilePath :: Text -> ExceptT String IO FilePath
    getFilePath name = case filePath of
      Just path ->
        ifM (doesDirectoryExist path) (uniqueCombine path name') $
          ifM (doesFileExist path) (throwE "File already exists") (pure path)
      _ -> (`uniqueCombine` name') . (</> "Downloads") =<< getHomeDirectory
      where
        name' = T.unpack name
    acknowledgeFileChunk :: XFTPClientAgent -> FileChunk -> ExceptT CLIError IO ()
    acknowledgeFileChunk a FileChunk {replicas = replica : _} = do
      let FileChunkReplica {server, replicaId, replicaKey} = replica
      c <- withRetry retryCount $ getXFTPServerClient a server
      withRetry retryCount $ ackXFTPChunk c replicaKey (unChunkReplicaId replicaId)
    acknowledgeFileChunk _ _ = throwE $ CLIError "chunk has no replicas"

printProgress :: String -> Int64 -> Int64 -> IO ()
printProgress s part total = printNoNewLine $ s <> " " <> show ((part * 100) `div` total) <> "%"

printNoNewLine :: String -> IO ()
printNoNewLine s = do
  putStr $ s <> replicate (max 0 $ 25 - length s) ' ' <> "\r"
  hFlush stdout
cliDeleteFile :: DeleteOptions -> ExceptT CLIError IO ()
cliDeleteFile DeleteOptions {fileDescription, retryCount, yes} = do
  getFileDescription' fileDescription >>= deleteFile
  where
    deleteFile :: ValidFileDescription 'FSender -> ExceptT CLIError IO ()
    deleteFile (ValidFileDescription FileDescription {chunks}) = do
      a <- liftIO $ newXFTPAgent defaultXFTPClientAgentConfig
      forM_ chunks $ deleteFileChunk a
      liftIO $ do
        printNoNewLine "File deleted!"
        removeFD yes fileDescription
    deleteFileChunk :: XFTPClientAgent -> FileChunk -> ExceptT CLIError IO ()
    deleteFileChunk a FileChunk {chunkNo, replicas = replica : _} = do
      let FileChunkReplica {server, replicaId, replicaKey} = replica
      withReconnect a server retryCount $ \c -> deleteXFTPChunk c replicaKey (unChunkReplicaId replicaId)
      logDebug $ "deleted chunk " <> tshow chunkNo <> " from " <> showServer server
    deleteFileChunk _ _ = throwE $ CLIError "chunk has no replicas"

cliFileDescrInfo :: InfoOptions -> ExceptT CLIError IO ()
cliFileDescrInfo InfoOptions {fileDescription} = do
  getFileDescription fileDescription >>= \case
    AVFD (ValidFileDescription FileDescription {party, size, chunkSize, chunks}) -> do
      let replicas = groupReplicasByServer chunkSize chunks
      liftIO $ do
        printParty
        putStrLn $ "File download size: " <> strEnc size
        putStrLn "File server(s):"
        forM_ replicas $ \srvReplicas@(FileServerReplica {server} :| _) -> do
          let chSizes = fmap (\FileServerReplica {chunkSize = chSize_} -> unFileSize $ fromMaybe chunkSize chSize_) srvReplicas
          putStrLn $ strEnc server <> ": " <> strEnc (FileSize $ sum chSizes)
      where
        printParty :: IO ()
        printParty = case party of
          SFRecipient -> putStrLn "Recipient file description"
          SFSender -> putStrLn "Sender file description"

strEnc :: StrEncoding a => a -> String
strEnc = B.unpack . strEncode

getFileDescription :: FilePath -> ExceptT CLIError IO AValidFileDescription
getFileDescription path =
  ExceptT $ first (CLIError . ("Failed to parse file description: " <>)) . strDecode <$> B.readFile path

getFileDescription' :: FilePartyI p => FilePath -> ExceptT CLIError IO (ValidFileDescription p)
getFileDescription' path =
  getFileDescription path >>= \case
    AVFD fd -> either (throwE . CLIError) pure $ checkParty fd

getEncPath :: MonadIO m => Maybe FilePath -> String -> m FilePath
getEncPath path name = (`uniqueCombine` (name <> ".encrypted")) =<< maybe (liftIO getCanonicalTemporaryDirectory) pure path

withReconnect :: Show e => XFTPClientAgent -> XFTPServer -> Int -> (XFTPClient -> ExceptT e IO a) -> ExceptT CLIError IO a
withReconnect a srv n run = withRetry n $ do
  c <- withRetry n $ getXFTPServerClient a srv
  withExceptT (CLIError . show) (run c) `catchError` \e -> do
    liftIO $ closeXFTPServerClient a srv
    throwE e

withRetry :: Show e => Int -> ExceptT e IO a -> ExceptT CLIError IO a
withRetry retryCount = withRetry' retryCount . withExceptT (CLIError . show)
  where
    withRetry' :: Int -> ExceptT CLIError IO a -> ExceptT CLIError IO a
    withRetry' 0 _ = throwE $ CLIError "internal: no retry attempts"
    withRetry' 1 a = a
    withRetry' n a =
      a `catchError` \e -> do
        logWarn ("retrying: " <> tshow e)
        withRetry' (n - 1) a
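withRetry retries an ExceptT action until it succeeds or the attempt budget runs out, logging between attempts. The same recursion, reduced to a base-only sketch over IO (Either ...); `retryN` and `flaky` are hypothetical names for illustration:

```haskell
import Data.IORef (atomicModifyIORef', newIORef)

-- Retry an action up to n attempts, returning the first success
-- or the last failure; mirrors the recursion in withRetry'.
retryN :: Int -> IO (Either String a) -> IO (Either String a)
retryN n action = do
  r <- action
  case r of
    Left _ | n > 1 -> retryN (n - 1) action
    _ -> pure r

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  let flaky = do
        -- fails on the first two attempts, succeeds on the third
        k <- atomicModifyIORef' ref (\x -> (x + 1, x + 1))
        pure $ if k < 3 then Left "transient" else Right k
  retryN 5 flaky >>= print -- Right 3
```

Like withRetry', the last attempt's failure is returned (here as Left) rather than retried again.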
removeFD :: Bool -> FilePath -> IO ()
removeFD yes fd
  | yes = do
      removeFile fd
      putStrLn $ "\nFile description " <> fd <> " is deleted."
  | otherwise = do
      y <- liftIO . getConfirmation $ "\nFile description " <> fd <> " can't be used again. Delete it"
      when y $ removeFile fd

getConfirmation :: String -> IO Bool
getConfirmation prompt = do
  putStr $ prompt <> " (Y/n): "
  hFlush stdout
  s <- getLine
  case map toLower s of
    "y" -> pure True
    "" -> pure True
    "n" -> pure False
    _ -> getConfirmation prompt

cliRandomFile :: RandomFileOptions -> IO ()
cliRandomFile RandomFileOptions {filePath, fileSize = FileSize size} = do
  withFile filePath WriteMode (`saveRandomFile` size)
  putStrLn $ "File created: " <> filePath
  where
    saveRandomFile h sz = do
      g <- C.newRandom
      bytes <- atomically $ C.randomBytes (fromIntegral $ min mb' sz) g
      B.hPut h bytes
      when (sz > mb') $ saveRandomFile h (sz - mb')
    mb' = mb 1

-- | Encode file description as web-compatible URI fragment.
-- Result is base64url(deflateRaw(YAML)), no leading '#'.
encodeWebURI :: FileDescription 'FRecipient -> B.ByteString
encodeWebURI fd = U.encode $ LB.toStrict $ Z.compress $ LB.fromStrict $ strEncode fd

-- | Decode web URI fragment to validated file description.
-- Input is base64url-encoded DEFLATE-compressed YAML, no leading '#'.
decodeWebURI :: B.ByteString -> Either String (ValidFileDescription 'FRecipient)
decodeWebURI fragment = do
  compressed <- U.decode fragment
  let yaml = LB.toStrict $ Z.decompress $ LB.fromStrict compressed
  strDecode yaml >>= validateFileDescription

-- | Extract web link host and URI fragment from a file description.
-- Returns (hostname, uriFragment) for https://hostname/#uriFragment.
fileWebLink :: FileDescription 'FRecipient -> Maybe (B.ByteString, B.ByteString)
fileWebLink fd@FileDescription {chunks} = case chunks of
  (FileChunk {replicas = FileChunkReplica {server = ProtocolServer {host}} : _} : _) ->
    Just (strEncode (L.head host), encodeWebURI fd)
  _ -> Nothing
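fileWebLink returns the first replica's first hostname plus the encoded fragment; the CLI then assembles the printed link by plain concatenation, as in the upload output in cliSendFileOpts. A minimal sketch with an illustrative hostname; `mkWebLink` is a hypothetical helper, not part of the module:

```haskell
-- Build the https://host/#fragment link form printed by the CLI.
mkWebLink :: String -> String -> String
mkWebLink host fragment = "https://" <> host <> "/#" <> fragment

main :: IO ()
main = putStrLn $ mkWebLink "example.com" "abc123"
-- prints: https://example.com/#abc123
```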