Merge branch 'master' into chat-relays

Author: spaced4ndy
Date: 2025-10-24 15:32:17 +04:00
8 changed files with 258 additions and 14 deletions
+208
@@ -0,0 +1,208 @@
# SimpleX Vouchers for Unlinkable Payments
See [this doc](./2024-04-26-commercial-model.md) about the commercial model that proposed an approach to making the network sustainable and commercially attractive to server operators.
This document proposes the cryptographic design for a system of vouchers that can enable these payments.
Big thank you to [Alain Brenzikofer](https://x.com/brenzi5), co-founder of [Integritee Network](https://x.com/integri_t_e_e), who contributed the draft of this design, which we then evolved collaboratively.
## High-level diagram
![Payments diagram](./diagrams/2025-10-23-vouchers-diagram.svg)
### Coordination Layer (CL)
An abstract component that allows all involved parties to come to consensus about voucher issuance and redemption:
* can be a centralized trusted third party (TTP).
* can be a decentralized ledger with smart contracts, e.g. some L2 Ethereum blockchain with ZK-proofs support.
### Issuing Operator (IO)
* must be whitelisted by CL.
* CL defines voucher issuing limit.
### Accepting Operator (AO)
* delivers a service and accepts vouchers.
### User
* uses a service provided by an AO.
* seeks anonymity.
### Voucher
* a token allowing a limited number of transfers (0-2), redeemable for AO credits.
* comes in a few fixed denominations of e.g. 1, 10 and 100 operator credits, with a credit initially set to USD 1 and adjusted for service costs, which are likely to be reduced with scale and inflation.
* expected to be redeemed at low frequency: only every few days per user.
### AO Credits
* per-operator tokens for pay-as-you-go micropayments.
* expected to be used to pay fractions of cents for every request to the service.
* balances maintained by the operator.
Blind signatures are to be used with operator-issued credits:
- Client generates random token(s): `t[i]`
- Client sends a set of blinded tokens `blind(t[i])` when presenting a voucher.
- Operator's server signs them with operator's key and returns to the client.
- Client de-blinds them so they can be used.
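As an illustration, the steps above map onto textbook RSA blind signatures. This is a toy sketch: the tiny modulus and the hash-to-modulus mapping are illustrative assumptions, and a real deployment would use a full-size standardized scheme such as RSA-BSSA (RFC 9474).

```python
import hashlib
import secrets
from math import gcd

# Toy RSA parameters (tiny primes, illustration only)
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # operator's private exponent

def h(token: bytes) -> int:
    # Hash the token into the RSA group (toy full-domain hash)
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# 1. Client generates a random token and blinds its hash
t = secrets.token_bytes(32)
r = secrets.randbelow(n - 2) + 2
while gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
blinded = (h(t) * pow(r, e, n)) % n

# 2. Operator signs the blinded value without learning h(t)
sig_blinded = pow(blinded, d, n)

# 3. Client de-blinds; sig is a valid signature on h(t)
sig = (sig_blinded * pow(r, -1, n)) % n

# 4. Any of the operator's servers can verify the de-blinded token
assert pow(sig, e, n) == h(t)
```

The operator never sees `h(t)`, so a later presentation of `(t, sig)` cannot be linked to the signing request.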
The signed tokens should include an approximate timestamp, e.g. rounded to a day (or more). This would allow expiration of credits at the cost of an acceptable reduction of the anonymity set.
These tokens would be fungible and would also have multiple denominations - the client would send new random blinded numbers to receive change on the resource provisioning requests. We can use token denominations representing powers of 2.
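With power-of-2 denominations, decomposing an amount into tokens is just reading its bits (the function name below is illustrative):

```python
def split_denominations(amount: int) -> list:
    # One token per set bit of the amount, largest first (e.g. 13 -> [8, 4, 1])
    return [1 << i for i in range(amount.bit_length()) if amount >> i & 1][::-1]
```

For example, a client spending 13 credits out of 100 would request change of 87 credits as blinded tokens of 64, 16, 4, 2 and 1.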
When a credit is presented, it would be validated to prove that it is:
1) properly signed.
2) not expired.
3) not used.
Checks 1 and 2 can be done locally on the server. Check 3 requires verification across all of the operator's servers. The resource can be provisioned instantly, without waiting for the confirmation; a failed double-spend verification can result in resource cancellation. The "change" can be provided only after verification, as otherwise it may increase the number of issued credits (the provisioned resource can include "pending change" associated with it).
Another approach would be allocating the registry for spent coins deterministically to different servers, and making these allocations known to the client, so while coins would be accepted by any operator's server, the change would be given faster if it's presented to the server with the coin registry.
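Such a deterministic allocation can be as simple as hashing the coin to a server index (a sketch; the server names are placeholders):

```python
import hashlib

def registry_server(coin: bytes, servers: list) -> str:
    # Deterministically map a coin to the server keeping its spent-coin registry;
    # any server accepts the coin, but this one can confirm and give change fastest
    idx = int.from_bytes(hashlib.sha256(coin).digest(), "big") % len(servers)
    return servers[idx]

servers = ["smp1.example.com", "smp2.example.com", "smp3.example.com"]
```

Since the mapping depends only on the coin and the server list, the client can compute it without any extra round trip.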
## Abstract Protocol
We start with the simplest approach, then iterate to improve the anonymity properties.
### v0.1: Chaumian eCash-style atomic, indivisible vouchers of single denomination
(not yet using ZK and not yet with expiry; see the extensions below)
```
# user buys voucher at t1
s = random(256 bits)
C = hash(s)
B = blind(C)
CoordinationLayer.checkIssuingLimit(issuer=I1)
App(issuer=I1).buyVoucher(ref=B)
# issuer I1
ensure_payment()
σB = B.sign(K_I1)
# user publishes voucher at t2
σC = σB.unblind()
CoordinationLayer.publish(σC)
# CoordinationLayer (global, trusted entity)
issuer = verify_signature(σC)
ensure_issuing_limit(issuer)
ensure_is_unknown(C)
store_unspent_voucher(C, issuer=I1)
# user redeems voucher at t3
proof=encrypt(payload=[C, s], pubkey=CoordinationLayerKey)
ServiceProvider.redeem_voucher(proof)
# ServiceProvider SP1
CoordinationLayer.redeem(proof, SP1)
# CoordinationLayer
[C, s] = decrypt(proof)
ensure(C=hash(s))
atomic_invalidate_unspent_voucher(C)
clearing(1 voucher, I1 pays to SP1)
confirm_redemption(SP1)
```
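The v0.1 flow can be exercised end-to-end with a minimal mock of the CoordinationLayer. Blind signatures, payment and clearing are omitted; only the hash-commitment and atomic invalidation logic are shown:

```python
import hashlib
import secrets

class CoordinationLayer:
    # Minimal mock of the CL voucher registry (no signatures, no clearing)
    def __init__(self):
        self.unspent = {}  # C -> issuer

    def publish(self, C: bytes, issuer: str):
        assert C not in self.unspent, "voucher already known"
        self.unspent[C] = issuer

    def redeem(self, C: bytes, s: bytes, sp: str) -> str:
        assert hashlib.sha256(s).digest() == C, "bad preimage"
        issuer = self.unspent.pop(C)  # atomic invalidation; KeyError = double spend
        return f"{issuer} pays 1 voucher to {sp}"

# user buys voucher at t1 and publishes it at t2
s = secrets.token_bytes(32)
C = hashlib.sha256(s).digest()
cl = CoordinationLayer()
cl.publish(C, issuer="I1")

# user redeems the voucher at t3 via ServiceProvider SP1
receipt = cl.redeem(C, s, sp="SP1")
```

A second `redeem` call with the same `C` fails, which is exactly the double-spend check performed by `atomic_invalidate_unspent_voucher` above.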
### Unlinkability analysis
* the issuer can't link the purchase to a later redemption, not even if colluding with the ServiceProvider (assuming a large number of users behaving indistinguishably).
* the CoordinationLayer can trivially link the timing and IP of publishing (t2) and redeeming C (t3), and could collude with the issuer to link redemption to purchase by correlating timing and IP:
* the user can mask timing with random delays between t1 and t2 to make collusion harder.
* the user can hide their IP from the CL by using the issuer as a proxy through a TLS tunnel. That, in turn, will leak t2 to the issuer unless the user performs indistinguishable dummy requests to mask t2.
### Adding Voucher Expiry
Design choices for maximal anonymity set / unlinkability:
* expiry is the same for all vouchers.
* expiry starts with the publishing step, not with the purchase.
Extension of v0.1:
* CoordinationLayer stores publishing date along with C.
* CoordinationLayer enforces expiry upon redemption.
* CoordinationLayer ensures issuers rotate keys every M days (to invalidate vouchers which have been issued but not published within 2×M days).
*An alternative to allow expiry to start with the purchase: a blind signature with public metadata. This is not trivial, as the issuer must verify the public metadata and bind the signature to it to ensure the correctness of expiry.*
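The rotation arithmetic can be sketched as follows, assuming the CL accepts vouchers signed with the current or the previous key epoch (the value of M is an arbitrary assumption):

```python
M = 30  # key rotation period in days (assumed value)

def key_epoch(day: int) -> int:
    # each issuer key is used for signing during one epoch of M days
    return day // M

def publishable(issue_day: int, publish_day: int) -> bool:
    # CL accepts vouchers signed with the current or previous key epoch,
    # so a voucher is publishable for at most 2*M days after issuance
    return publish_day >= issue_day and key_epoch(publish_day) - key_epoch(issue_day) <= 1
```

A voucher issued on the first day of an epoch stays publishable for the full 2×M days; one issued on the last day is only guaranteed M days, which is why the document states "within 2×M days" as the upper bound.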
### v0.2: Chaumian eCash-style atomic, indivisible vouchers of single denomination plus ZK
Avoid linkability of redemption by using a ZK set-membership proof against a Merkle mountain range (MMR).
Change later steps as follows:
```
... same as v0.1
# CoordinationLayer (global) at ~t2
... same as in v0.1, adding:
store_unspent_voucher(C, t=now, issuer=I1)
update_unspent_vouchers_mmr()
publish_mmr_root()
return [mmr_path] # to user
# user redeems the voucher at t3
mmr_root = root of mmr_path # as received from CL upon publishing
N=hash2(s || "redeem")
proof=ZK(
secret_inputs: s, mmr_path
public_inputs: mmr_root, nullifier: N
assertions: hash(s) is leaf of mmr_path with mmr_root && N=hash2(s || "redeem")
)
ServiceProvider.redeem_voucher(proof)
# ServiceProvider SP1
CoordinationLayer.redeem(proof, SP1)
# CoordinationLayer
ensure_unknown(proof.N)
ensure(age(proof.mmr_root) < EXPIRY)
verify(proof)
store_nullifier(proof.N)
clearing(1 voucher, I1 pays to SP1)
confirm_redemption(SP1)
```
(!) If the MMR is public (e.g. if the CL operates on a public ledger), the user can extend voucher expiry arbitrarily by updating their mmr_path to a newer merkle_root. Therefore, expiry can't rely just on the age of mmr_root. As a mitigation, we need to extend the protocol and rotate MMRs.
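The core redemption check can be sketched without the ZK layer. In the real protocol the two assertions below are proven inside the ZK proof, so `s` and the path stay hidden and only the nullifier and root are disclosed; here a plain Merkle tree stands in for the MMR:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaf: bytes, path: list) -> bytes:
    # Recompute the root from a leaf and its authentication path;
    # each path element is (sibling_hash, side the sibling is on)
    node = leaf
    for sibling, side in path:
        node = H(sibling, node) if side == "left" else H(node, sibling)
    return node

# A tiny 4-leaf tree of voucher commitments C = hash(s)
preimages = [bytes([i]) * 32 for i in range(4)]
leaves = [H(s) for s in preimages]
l01, l23 = H(leaves[0], leaves[1]), H(leaves[2], leaves[3])
root = H(l01, l23)

# User redeems voucher 2: membership of C = hash(s), nullifier N = hash2(s || "redeem")
s = preimages[2]
path = [(leaves[3], "right"), (l01, "left")]
nullifier = H(s, b"redeem")

# CL-side checks (in ZK in the real protocol)
spent_nullifiers = set()
assert merkle_root(H(s), path) == root
assert nullifier not in spent_nullifiers
spent_nullifiers.add(nullifier)
```

Because the nullifier is derived from `s` with a domain separator, redeeming the same voucher twice reproduces the same nullifier and is rejected, while the commitment `C` itself is never revealed.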
### Adding MMR Rotation
1. Start a new MMR every T days
2. To mitigate the small anonymity set at the start of each new MMR, let them overlap and let the user choose which one they use.
![MMR rotation](./diagrams/2025-10-23-vouchers-mmrs.svg)
Upon publishing:
* CL returns mmr1_path and mmr2_path to the user.
Upon redemption:
* user selects one of the two MMRs to generate the proof. Here, the user can trade off later expiry (mmr2_path, expiry2) against larger anonymity set (mmr1_path, expiry1).
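The overlap can be modeled as publication windows of length 2T started every T days, so each publication lands in two MMRs and the user picks one at redemption (the value of T and the function names are assumptions):

```python
T = 7  # days per MMR rotation step (assumed value)

def open_mmrs(t: int) -> list:
    # MMR k accepts publications during [k*T, (k+2)*T); windows overlap,
    # so every publication (after the very first window) lands in two MMRs
    return [k for k in (t // T - 1, t // T) if k >= 0]

def mmr_expiry(k: int) -> int:
    # simplifying assumption: an MMR stops being accepted for redemption
    # when its publication window closes
    return (k + 2) * T
```

Choosing the older MMR gives the larger anonymity set, choosing the newer one gives the later expiry, matching the trade-off described above.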
### Unlinkability Analysis
* Generating a proof using mmr_root(t2) leaks t2. The CL could therefore still learn the exact time when the redeemed voucher was published.
* this can be mitigated by updating the MMR peak-bagging before generating the proof. The user downloads the entire MMR and updates the mmr_path to a later root at e.g. t2' or t2'' (a partial download backward to t2, plus a masking random bit further back, may be sufficient). If the download size gets too big, reduce the MMR duration T.
* thanks to the ZK proof, now even the CoordinationLayer can't directly link the publishing of C with the redemption, because the redemption only discloses that "one among all non-expired vouchers is being redeemed" (double-spending is prevented through tracking nullifiers).
* the Coordination Layer still observes timing and IP address.
* users can wait until anonymity set is big enough for their requirements, but that only masks timing, not networking IP address.
* If we use the ServiceProvider as a proxy to forward the redemption proof, timing and IP leak to the SP instead of the CL, which is better because the SP learns the IP and timing (user behavior) anyway. Trusting the SP with the proof is fine because it doesn't disclose sensitive information, and we trust them to provide their service after redemption anyway.
### ZK Reasonings
* to avoid a trusted setup we could use a STARK rather than a SNARK, but STARKs have heavier proving complexity (expect >30s on mobile; this should be evaluated with a PoC).
* we can accept a trusted setup with multiple independent parties contributing to it, with the benefit of much lighter proving.
* STARK-friendly hash function: e.g. Poseidon2.
* proving time (client-side) is probably still quite heavy for mobile, even if the proposed proof is pretty lean. But redeeming vouchers is only expected to happen infrequently.
* verification time (CoordinationLayer side) expected to be light
* the nullifier set is bounded thanks to the voucher expiry window M, so it won't grow indefinitely. Downside: a smaller anonymity set.
Overall, a SNARK seems preferable.
### Possible Enhancements
* Avoid the centralized CoordinationLayer SPOF: replace it with a smart contract on a distributed consortium ledger with non-colluding contracted validators:
* or even public permissionless blockchain.
* storing mmr_root and nullifiers onchain helps public auditability.
* publishing σC still leaks publicly observable timing because the CL has to update and publish the MMR.
* possible remedy: use TEE as a random-delay mixer proxy for the user to publish σC.
* optionally delegate heavy ZK proving to TEE for thin clients (s will be exposed to TEE trust assumptions). But then, we need to incentivize TEE-provers as they are service providers in their own right.
File diff suppressed because one or more lines are too long (image, 372 KiB)

File diff suppressed because one or more lines are too long (image, 100 KiB)

+2 -2
@@ -76,8 +76,8 @@ toDBOpts ChatDbOpts {dbFilePrefix, dbKey, trackQueries, vacuumOnMigration} dbSuf
{ dbFilePath = dbFilePrefix <> dbSuffix,
dbKey,
keepKey,
-track = trackQueries,
-vacuum = vacuumOnMigration
+vacuum = vacuumOnMigration,
+track = trackQueries
}
chatSuffix :: String
+16 -7
@@ -1422,21 +1422,30 @@ viewUserPrivacy User {userId} User {userId = userId', localDisplayName = n', sho
]
viewConnDiffSync :: DatabaseDiff AgentUserId -> DatabaseDiff AgentConnId -> [StyledString]
viewConnDiffSync userDiff connDiff =
viewConnDiffSummary userDiff connDiff
<> ["removed extra users in agent" | not (null $ extraIds userDiff)]
<> ["removed extra connections in agent" | not (null $ extraIds connDiff)]
viewConnDiffSync userDiff connDiff
| noDiff userDiff && noDiff connDiff = []
| otherwise =
viewConnDiffSummary' userDiff connDiff
<> ["removed extra users in agent" | not (null $ extraIds userDiff)]
<> ["removed extra connections in agent" | not (null $ extraIds connDiff)]
where
noDiff DatabaseDiff {missingIds, extraIds} = null missingIds && null extraIds
viewConnDiffSummary :: DatabaseDiff AgentUserId -> DatabaseDiff AgentConnId -> [StyledString]
viewConnDiffSummary userDiff connDiff
| noDiff userDiff && noDiff connDiff =
["no difference between agent and chat connections"]
| otherwise =
["connections difference summary:"]
<> showDatabaseDiff "users" userDiff
<> showDatabaseDiff "connections" connDiff
viewConnDiffSummary' userDiff connDiff
where
noDiff DatabaseDiff {missingIds, extraIds} = null missingIds && null extraIds
viewConnDiffSummary' :: DatabaseDiff AgentUserId -> DatabaseDiff AgentConnId -> [StyledString]
viewConnDiffSummary' userDiff connDiff =
["connections difference summary:"]
<> showDatabaseDiff "users" userDiff
<> showDatabaseDiff "connections" connDiff
where
showDatabaseDiff name DatabaseDiff {missingIds, extraIds} =
["number of missing " <> name <> " in agent: " <> sShow (length missingIds) | not (null missingIds)]
<> ["number of extra " <> name <> " in agent: " <> sShow (length extraIds) | not (null extraIds)]
+6
@@ -28,6 +28,12 @@ chatStartedSwift = "{\"result\":{\"_owsf\":true,\"chatStarted\":{}}}"
chatStartedTagged :: LB.ByteString
chatStartedTagged = "{\"result\":{\"type\":\"chatStarted\"}}"
+connectionsDiffSwift :: LB.ByteString
+connectionsDiffSwift = "{\"result\":{\"_owsf\":true,\"connectionsDiff\":{\"userIds\":{\"missingIds\":[],\"extraIds\":[]},\"connIds\":{\"missingIds\":[],\"extraIds\":[]}}}}"
+connectionsDiffTagged :: LB.ByteString
+connectionsDiffTagged = "{\"result\":{\"type\":\"connectionsDiff\",\"userIds\":{\"missingIds\":[],\"extraIds\":[]},\"connIds\":{\"missingIds\":[],\"extraIds\":[]}}}"
userJSON :: LB.ByteString
userJSON = "{\"userId\":1,\"agentUserId\":\"1\",\"userContactId\":1,\"localDisplayName\":\"alice\",\"profile\":{\"profileId\":1,\"displayName\":\"alice\",\"fullName\":\"\",\"shortDescr\":\"Alice\",\"localAlias\":\"\"},\"fullPreferences\":{\"timedMessages\":{\"allow\":\"yes\"},\"fullDelete\":{\"allow\":\"no\"},\"reactions\":{\"allow\":\"yes\"},\"voice\":{\"allow\":\"yes\"},\"files\":{\"allow\":\"always\"},\"calls\":{\"allow\":\"yes\"},\"sessions\":{\"allow\":\"no\"},\"commands\":[]},\"activeUser\":true,\"activeOrder\":1,\"showNtfs\":true,\"sendRcptsContacts\":true,\"sendRcptsSmallGroups\":true,\"autoAcceptMemberContacts\":false,\"userChatRelay\":false}"
+1
@@ -25,6 +25,7 @@ owsf2TaggedJSONTest = do
activeUserExistsSwift `to` activeUserExistsTagged
activeUserSwift `to` activeUserTagged
chatStartedSwift `to` chatStartedTagged
+connectionsDiffSwift `to` connectionsDiffTagged
parsedMarkdownSwift `to` parsedMarkdownTagged
where
to :: LB.ByteString -> LB.ByteString -> IO ()
+17 -5
@@ -1,4 +1,6 @@
{-# LANGUAGE CPP #-}
+{-# LANGUAGE DuplicateRecordFields #-}
+{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-}
@@ -7,6 +9,7 @@
module MobileTests (mobileTests) where
import ChatClient
import ChatTests.DBUtils
import ChatTests.Utils
import Control.Concurrent.STM
@@ -28,7 +31,8 @@ import Foreign.StablePtr
import Foreign.Storable (peek)
import GHC.IO.Encoding (setLocaleEncoding, setFileSystemEncoding, setForeignEncoding)
import JSONFixtures
-import Simplex.Chat.Controller (ChatController (..))
+import Simplex.Chat
+import Simplex.Chat.Controller (ChatController (..), ChatDatabase (..))
import Simplex.Chat.Mobile hiding (error)
import Simplex.Chat.Mobile.File
import Simplex.Chat.Mobile.Shared
@@ -37,7 +41,6 @@ import Simplex.Chat.Options.DB
import Simplex.Chat.Store
import Simplex.Chat.Store.Profiles
import Simplex.Chat.Types (AgentUserId (..), Profile (..))
-import Simplex.Messaging.Agent.Store.Interface
import Simplex.Messaging.Agent.Store.Shared (MigrationConfig (..), MigrationConfirmation (..))
import qualified Simplex.Messaging.Agent.Store.SQLite.DB as DB
import qualified Simplex.Messaging.Crypto as C
@@ -111,6 +114,14 @@ chatStarted =
chatStartedTagged
#endif
+connectionsDiff :: LB.ByteString
+connectionsDiff =
+#if defined(darwin_HOST_OS) && defined(swiftJSON)
+connectionsDiffSwift
+#else
+connectionsDiffTagged
+#endif
parsedMarkdown :: LB.ByteString
parsedMarkdown =
#if defined(darwin_HOST_OS) && defined(swiftJSON)
@@ -134,15 +145,16 @@ testChatApi :: TestParams -> IO ()
testChatApi ps = do
let tmp = tmpPath ps
dbPrefix = tmp </> "1"
-f = dbPrefix <> chatSuffix
-Right st <- createChatStore (DBOpts f "myKey" False True DB.TQOff) (MigrationConfig MCYesUp Nothing)
-Right _ <- withTransaction st $ \db -> runExceptT $ createUserRecord db (AgentUserId 1) aliceProfile {preferences = Nothing} False True
+Right ChatDatabase {chatStore, agentStore} <- createChatDatabase (ChatDbOpts dbPrefix "myKey" DB.TQOff True) (MigrationConfig MCYesUp Nothing)
+insertUser agentStore
+Right _ <- withTransaction chatStore $ \db -> runExceptT $ createUserRecord db (AgentUserId 1) aliceProfile {preferences = Nothing} False True
Right cc <- chatMigrateInit dbPrefix "myKey" "yesUp"
Left (DBMErrorNotADatabase _) <- chatMigrateInit dbPrefix "" "yesUp"
Left (DBMErrorNotADatabase _) <- chatMigrateInit dbPrefix "anotherKey" "yesUp"
chatSendCmd cc "/u" `shouldReturn` activeUser
chatSendCmd cc "/create user alice Alice" `shouldReturn` activeUserExists
chatSendCmd cc "/_start" `shouldReturn` chatStarted
+chatRecvMsg cc `shouldReturn` connectionsDiff
chatRecvMsgWait cc 10000 `shouldReturn` ""
chatParseMarkdown "hello" `shouldBe` "{}"
chatParseMarkdown "*hello*" `shouldBe` parsedMarkdown