Merge branch 'master' into new-website

Evgeny Poberezkin
2025-10-23 22:04:00 +01:00
14 changed files with 335 additions and 6 deletions
@@ -0,0 +1,208 @@
# SimpleX Vouchers for Unlinkable Payments
See [this doc](./2024-04-26-commercial-model.md) about the commercial model that proposed the approach to making the network sustainable and commercially attractive to server operators.
This document proposes a cryptographic design for the system of vouchers that can enable these payments.
A big thank you to [Alain Brenzikofer](https://x.com/brenzi5), co-founder of [Integritee Network](https://x.com/integri_t_e_e), who contributed the draft of this design, which we then evolved collaboratively.
## High-level diagram
![Payments diagram](./diagrams/2025-10-23-vouchers-diagram.svg)
### Coordination Layer (CL)
An abstract component that allows all involved parties to reach consensus about voucher issuance and redemption:
* can be centralized trusted third party (TTP).
* can be a decentralized ledger with smart contracts, e.g. some L2 Ethereum blockchain with ZK-proofs support.
### Issuing Operator (IO)
* must be whitelisted by CL.
* CL defines voucher issuing limit.
### Accepting Operator (AO)
* delivers a service and accepts vouchers.
### User
* uses a service provided by an AO.
* seeks anonymity.
### Voucher
* token allowing a limited number of transfers (0-2), redeemable for AO credits.
* comes in a few fixed denominations, e.g. 1, 10 and 100 operator credits, with a credit initially set to USD 1 and later adjusted for service costs (likely to fall with scale) and for inflation.
* expected to be redeemed at low frequency: only once every few days per user.
### AO Credits
* per-operator tokens for pay-as-you-go micropayments.
* expected to be used to pay fractions of cents for every request to the service.
* balances maintained by the operator.
Blind signatures are to be used with operator-issued credits:
- Client generates random token(s): `t[i]`
- Client sends a set of blinded tokens `blind(t[i])` when presenting a voucher.
- Operator's server signs them with operator's key and returns to the client.
- Client de-blinds them so they can be used.
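The four steps above can be sketched with textbook RSA blind signatures. This is a toy illustration only: the parameters are far too small for real use, and all names are ours, not from the SimpleX codebase.

```python
import hashlib
from math import gcd

# Toy RSA key for the operator (illustrative; real keys are >= 2048 bits)
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)  # operator's private exponent

def blind(token: bytes, r: int) -> int:
    """Client: hash the random token t[i] and blind it with factor r."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    return (m * pow(r, e, n)) % n

def sign_blinded(b: int) -> int:
    """Operator: signs the blinded value without seeing the token."""
    return pow(b, d, n)

def unblind(sig_b: int, r: int) -> int:
    """Client: removes the blinding factor, leaving a valid signature."""
    return (sig_b * pow(r, -1, n)) % n

def verify(token: bytes, sig: int) -> bool:
    """Any server: checks the de-blinded signature against the token."""
    m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
    return pow(sig, e, n) == m

t = b"random-token-1"
r = 12345                  # blinding factor, must be coprime with n
assert gcd(r, n) == 1
sig = unblind(sign_blinded(blind(t, r)), r)
assert verify(t, sig)
```

The operator never sees `t` or `hash(t)`, only `blind(t)`, so it cannot link the later spending of `t` to the signing request.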
The signed tokens should include an approximate timestamp, e.g. rounded to a day (or more) - this would allow expiration of credits at the cost of an acceptable reduction of the anonymity set.
These tokens would be fungible and would also have multiple denominations - the client would send new random blinded numbers to receive change on resource provisioning requests. We can use token denominations representing powers of 2.
When a credit is presented, it is validated to prove that it is:
1) properly signed.
2) not expired.
3) not used.
Checks 1 and 2 can be done locally on the server. Check 3 requires verification across all of the operator's servers. The resource can be provisioned instantly, without waiting for confirmation; a failed double-spend verification can result in resource cancellation. The "change" can be provided only after verification, as otherwise it could increase the number of issued credits (the provisioned resource can include "pending change" associated with it).
Another approach would be to allocate the registry of spent coins deterministically to different servers, and to make these allocations known to the client: while coins would be accepted by any of the operator's servers, change would be given faster if a coin is presented to the server holding its registry.
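The deterministic registry allocation could be as simple as hashing the coin to pick a server. A sketch, with a hypothetical server list:

```python
import hashlib

# Hypothetical list of the operator's servers, known to the client
servers = ["srv1.example.com", "srv2.example.com", "srv3.example.com"]

def registry_server(coin: bytes) -> str:
    """Deterministically map a coin to the server holding its spent-coin
    registry. Any server still accepts the coin; presenting it to this
    server lets the double-spend check (and change) happen locally."""
    idx = int.from_bytes(hashlib.sha256(coin).digest(), "big") % len(servers)
    return servers[idx]
```

Because the mapping is a pure function of the coin, client and servers agree on it without coordination; changing the server list only requires a consistent-hashing-style migration of the registries.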
## Abstract Protocol
Start with the simplest approach, then iterate to improve the anonymity properties.
### v0.1: Chaumian eCash-style atomic, indivisible vouchers of single denomination
Not yet using ZK, and not yet with expiry (see the extension below).
```
# user buys voucher at t1
s = random(256 bits)
C = hash(s)
B = blind(C)
CoordinationLayer.checkIssuingLimit(issuer=I1)
App(issuer=I1).buyVoucher(ref=B)
# issuer I1
ensure_payment()
σB = B.sign(K_I1)
# user publishes voucher at t2
σC = σB.unblind()
CoordinationLayer.publish(σC)
# CoordinationLayer (global, trusted entity)
issuer = verify_signature(σC)
ensure_issuing_limit(issuer)
ensure_is_unknown(C)
store_unspent_voucher(C, issuer=I1)
# user redeems voucher at t3
proof=encrypt(payload=[C, s], pubkey=CoordinationLayerKey)
ServiceProvider.redeem_voucher(proof)
# ServiceProvider SP1
CoordinationLayer.redeem(proof, SP1)
# CoordinationLayer
[C, s] = decrypt(proof)
ensure(C=hash(s))
atomic_invalidate_unspent_voucher(C)
clearing(1 voucher, I1 pays to SP1)
confirm_redemption(SP1)
```
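The commitment scheme underlying v0.1 (C = hash(s) is published; s is revealed on redemption) reduces to a few lines. This sketch stands in for the CL-side bookkeeping; all names are illustrative:

```python
import hashlib
import secrets

unspent = {}  # C -> issuer id, held by the CoordinationLayer

def buy_and_publish(issuer: str) -> bytes:
    """User picks a 256-bit secret s; only the commitment C = hash(s)
    reaches the CoordinationLayer (store_unspent_voucher)."""
    s = secrets.token_bytes(32)
    C = hashlib.sha256(s).digest()
    unspent[C] = issuer
    return s

def redeem(s: bytes) -> str:
    """CL recomputes C = hash(s), atomically invalidates the voucher,
    and returns the issuer that owes the ServiceProvider (clearing)."""
    C = hashlib.sha256(s).digest()
    return unspent.pop(C)  # KeyError if unknown or already spent
```

A second redemption of the same s fails because the voucher was already popped, which is exactly the double-spend protection of `atomic_invalidate_unspent_voucher`.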
### Unlinkability analysis
* the issuer can't link the purchase to a later redemption, not even if colluding with the ServiceProvider (assuming a large number of users behaving indistinguishably).
* the CoordinationLayer can trivially link the timing and IP of publishing (t2) and redeeming (t3) C, and could collude with the issuer to link redemption to purchase by correlating timing and IP:
  * the user can mask timing with random delays between t1 and t2 to make collusion harder.
  * the user can hide their IP from the CL by using the issuer as a proxy through a TLS tunnel. That, in turn, will leak t2 to the issuer unless the user performs indistinguishable dummy requests to mask t2.
### Adding Voucher Expiry
Design choices for maximal anonymity set / unlinkability:
* expiry is the same for all vouchers.
* expiry starts with the publishing step, not with the purchase.
Extension of v0.1:
* CoordinationLayer stores publishing date along with C.
* CoordinationLayer enforces expiry upon redemption.
* CoordinationLayer ensures issuers rotate keys every M days (to invalidate vouchers which have been issued but not published within 2xM days).
*An alternative, to allow expiry to start at purchase: blind signatures with public metadata. This is not trivial, as the issuer must verify the public metadata and bind the signature to it to ensure correctness of the expiry*.
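A minimal sketch of the expiry rules above: uniform expiry for all vouchers, counted from the publishing date and enforced by the CL on redemption. Constants and names are illustrative:

```python
from datetime import date, timedelta

M = 30                      # issuer key rotation period in days (illustrative)
EXPIRY = timedelta(days=M)  # uniform expiry, the same for all vouchers

published = {}  # C -> publishing date, stored by the CoordinationLayer

def redeemable(C: bytes, today: date) -> bool:
    """Expiry starts at publishing, not purchase; unknown C is rejected."""
    pub = published.get(C)
    return pub is not None and today - pub <= EXPIRY
```

Vouchers issued but never published are handled separately by the issuer key rotation, bounding their lifetime to 2xM days without the CL ever learning of them.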
### v0.2: Chaumian eCash-style atomic, indivisible vouchers of single denomination plus ZK
Avoid linkability of redemption by using a ZK set-membership proof into a Merkle mountain range (MMR).
Change later steps as follows:
```
... same as v0.1
# CoordinationLayer (global) at ~t2
... same as in v0.1, adding:
store_unspent_voucher(C, t=now, issuer=I1)
update_unspent_vouchers_mmr()
publish_mmr_root()
return [mmr_path] # to user
# user redeems the voucher at t3
mmr_root = root of mmr_path # as received from CL upon publishing
N=hash2(s || "redeem")
proof=ZK(
secret_inputs: s, mmr_path
public_inputs: mmr_root, nullifier: N
assertions: hash(s) is leaf of mmr_path with mmr_root && N=hash2(s || "redeem")
)
ServiceProvider.redeem_voucher(proof)
# ServiceProvider SP1
CoordinationLayer.redeem(proof, SP1)
# CoordinationLayer
ensure_unknown(proof.N)
ensure(age(proof.mmr_root) < EXPIRY)
verify(proof)
store_nullifier(proof.N)
clearing(1 voucher, I1 pays to SP1)
confirm_redemption(SP1)
```
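What the ZK proof asserts can be shown in the clear with a plain Merkle path (a stand-in for the MMR): membership of hash(s) under a published root, plus a nullifier derived from s. In v0.2 both s and the path stay hidden inside the proof; the CL only sees the root and the nullifier. Names here are illustrative:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def verify_path(leaf: bytes, path, root: bytes) -> bool:
    """Walk a Merkle authentication path; each step is (sibling, 'L' or 'R')."""
    node = leaf
    for sibling, side in path:
        node = h(sibling, node) if side == "L" else h(node, sibling)
    return node == root

def nullifier(s: bytes) -> bytes:
    """N = hash2(s || 'redeem'): unique per voucher, unlinkable to C = hash(s)."""
    return h(s, b"redeem")

seen = set()  # nullifier set kept by the CoordinationLayer

def redeem(s: bytes, path, root: bytes) -> bool:
    """Accept if C = hash(s) is in the tree and N was not seen before."""
    N = nullifier(s)
    if N in seen or not verify_path(h(s), path, root):
        return False
    seen.add(N)
    return True
```

Double-spending the same voucher reuses the same nullifier and is rejected, while the CL never learns which leaf C was redeemed once this check moves inside a ZK proof.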
(!) If the MMR is public (e.g. if the CL operates on a public ledger), the user can extend voucher expiry arbitrarily by updating their mmr_path to a newer merkle_root. Therefore, expiry can't rely just on the age of mmr_root. To mitigate this, we need to extend the protocol and rotate MMRs.
### Adding MMR Rotation
1. Start a new MMR every T days
2. To mitigate the small anonymity set at the start of each new MMR, let them overlap and let the user choose which one they use.
![MMR rotation](./diagrams/2025-10-23-vouchers-mmrs.svg)
Upon publishing:
* CL returns mmr1_path and mmr2_path to the user.
Upon redemption:
* user selects one of the two MMRs to generate the proof. Here, the user can trade off later expiry (mmr2_path, expiry2) against larger anonymity set (mmr1_path, expiry1).
### Unlinkability Analysis
* Generating a proof using mmr_root(t2) leaks t2. The CL could therefore still learn the exact time when the redeemed voucher was published.
  * this can be mitigated by updating the MMR peak-bagging before generating the proof. The user downloads the entire MMR and updates the mmr_path to a later root at e.g. t2' or t2'' (a partial download backwards to t2, plus a masking random bit further back, may be sufficient). If the download size gets too big, reduce the MMR duration T.
* thanks to the ZK proof, now even the CoordinationLayer can't directly link the publishing of C with the redemption, because the redemption only discloses that "one among all non-expired vouchers is being redeemed" (double-spending is prevented through tracking nullifiers).
* the Coordination Layer still observes timing and IP address.
* users can wait until anonymity set is big enough for their requirements, but that only masks timing, not networking IP address.
* If we use the ServiceProvider as a proxy to forward the redemption proof, timing and IP leak to the SP instead of the CL, which is better because the SP learns the IP and timing (user behavior) anyway. Trusting the SP with the proof is fine, because it doesn't disclose sensitive information, and we trust them to provide their service after redemption anyway.
### ZK Considerations
* to avoid a trusted setup we could use a STARK, not a SNARK, but STARKs have heavier proving complexity (expect >30s on mobile; should be evaluated with a PoC).
* we can accept a trusted setup with multiple independent parties contributing to it, with the benefit of much lighter proving.
* STARK-friendly hash function: e.g. Poseidon2.
* proving time (client-side) is probably still quite heavy for mobile, even if the proposed proof is pretty lean. But redeeming vouchers is only expected to happen infrequently.
* verification time (CoordinationLayer side) is expected to be light.
* the nullifier set is bounded thanks to the voucher expiry window M, so it won't grow indefinitely. Downside: a smaller anonymity set.
Overall, a SNARK seems preferable.
### Possible Enhancements
* Avoid the centralized CoordinationLayer SPOF: replace it with a smart contract on a distributed consortium ledger with non-colluding contracted validators:
* or even public permissionless blockchain.
* storing mmr_root and nullifiers onchain helps public auditability.
* publishing σC still leaks publicly observable timing because the CL has to update and publish the MMR.
* possible remedy: use TEE as a random-delay mixer proxy for the user to publish σC.
* optionally delegate heavy ZK proving to a TEE for thin clients (s will be exposed to TEE trust assumptions). But then we need to incentivize TEE provers, as they are service providers in their own right.
*(Two binary diagram files added, diffs suppressed: 2025-10-23-vouchers-diagram.svg, 372 KiB, and 2025-10-23-vouchers-mmrs.svg, 100 KiB.)*

@@ -120,6 +120,7 @@ library
Simplex.Chat.Store.Postgres.Migrations.M20250919_group_summary
Simplex.Chat.Store.Postgres.Migrations.M20250922_remove_unused_connections
Simplex.Chat.Store.Postgres.Migrations.M20251007_connections_sync
Simplex.Chat.Store.Postgres.Migrations.M20251017_chat_tags_cascade
else
exposed-modules:
Simplex.Chat.Archive
@@ -264,6 +265,7 @@ library
Simplex.Chat.Store.SQLite.Migrations.M20250919_group_summary
Simplex.Chat.Store.SQLite.Migrations.M20250922_remove_unused_connections
Simplex.Chat.Store.SQLite.Migrations.M20251007_connections_sync
Simplex.Chat.Store.SQLite.Migrations.M20251017_chat_tags_cascade
other-modules:
Paths_simplex_chat
hs-source-dirs:
@@ -20,6 +20,7 @@ import Simplex.Chat.Store.Postgres.Migrations.M20250813_delivery_tasks
import Simplex.Chat.Store.Postgres.Migrations.M20250919_group_summary
import Simplex.Chat.Store.Postgres.Migrations.M20250922_remove_unused_connections
import Simplex.Chat.Store.Postgres.Migrations.M20251007_connections_sync
import Simplex.Chat.Store.Postgres.Migrations.M20251017_chat_tags_cascade
import Simplex.Messaging.Agent.Store.Shared (Migration (..))
schemaMigrations :: [(String, Text, Maybe Text)]
@@ -39,7 +40,8 @@ schemaMigrations =
("20250813_delivery_tasks", m20250813_delivery_tasks, Just down_m20250813_delivery_tasks),
("20250919_group_summary", m20250919_group_summary, Just down_m20250919_group_summary),
("20250922_remove_unused_connections", m20250922_remove_unused_connections, Just down_m20250922_remove_unused_connections),
-("20251007_connections_sync", m20251007_connections_sync, Just down_m20251007_connections_sync)
+("20251007_connections_sync", m20251007_connections_sync, Just down_m20251007_connections_sync),
+("20251017_chat_tags_cascade", m20251017_chat_tags_cascade, Just down_m20251017_chat_tags_cascade)
]
-- | The list of migrations in ascending order by date
@@ -16,7 +16,7 @@ CREATE TABLE connections_sync(
last_sync_ts TIMESTAMPTZ
);
-INSERT INTO connections_sync (connections_sync_id, should_sync, last_sync_ts) VALUES (1,0,NULL);
+INSERT INTO connections_sync (connections_sync_id, should_sync, last_sync_ts) VALUES (1, 1, NULL);
|]
down_m20251007_connections_sync :: Text
@@ -0,0 +1,32 @@
{-# LANGUAGE QuasiQuotes #-}
module Simplex.Chat.Store.Postgres.Migrations.M20251017_chat_tags_cascade where
import Data.Text (Text)
import qualified Data.Text as T
import Text.RawString.QQ (r)
m20251017_chat_tags_cascade :: Text
m20251017_chat_tags_cascade =
T.pack
[r|
ALTER TABLE chat_tags DROP CONSTRAINT chat_tags_user_id_fkey;
ALTER TABLE chat_tags
ADD CONSTRAINT chat_tags_user_id_fkey
FOREIGN KEY (user_id)
REFERENCES users(user_id)
ON DELETE CASCADE;
|]
down_m20251017_chat_tags_cascade :: Text
down_m20251017_chat_tags_cascade =
T.pack
[r|
ALTER TABLE chat_tags DROP CONSTRAINT chat_tags_user_id_fkey;
ALTER TABLE chat_tags
ADD CONSTRAINT chat_tags_user_id_fkey
FOREIGN KEY (user_id)
REFERENCES users(user_id);
|]
@@ -2479,7 +2479,7 @@ ALTER TABLE ONLY test_chat_schema.chat_tags_chats
ALTER TABLE ONLY test_chat_schema.chat_tags
-ADD CONSTRAINT chat_tags_user_id_fkey FOREIGN KEY (user_id) REFERENCES test_chat_schema.users(user_id);
+ADD CONSTRAINT chat_tags_user_id_fkey FOREIGN KEY (user_id) REFERENCES test_chat_schema.users(user_id) ON DELETE CASCADE;
@@ -143,6 +143,7 @@ import Simplex.Chat.Store.SQLite.Migrations.M20250813_delivery_tasks
import Simplex.Chat.Store.SQLite.Migrations.M20250919_group_summary
import Simplex.Chat.Store.SQLite.Migrations.M20250922_remove_unused_connections
import Simplex.Chat.Store.SQLite.Migrations.M20251007_connections_sync
import Simplex.Chat.Store.SQLite.Migrations.M20251017_chat_tags_cascade
import Simplex.Messaging.Agent.Store.Shared (Migration (..))
schemaMigrations :: [(String, Query, Maybe Query)]
@@ -285,7 +286,8 @@ schemaMigrations =
("20250813_delivery_tasks", m20250813_delivery_tasks, Just down_m20250813_delivery_tasks),
("20250919_group_summary", m20250919_group_summary, Just down_m20250919_group_summary),
("20250922_remove_unused_connections", m20250922_remove_unused_connections, Just down_m20250922_remove_unused_connections),
-("20251007_connections_sync", m20251007_connections_sync, Just down_m20251007_connections_sync)
+("20251007_connections_sync", m20251007_connections_sync, Just down_m20251007_connections_sync),
+("20251017_chat_tags_cascade", m20251017_chat_tags_cascade, Just down_m20251017_chat_tags_cascade)
]
-- | The list of migrations in ascending order by date
@@ -15,7 +15,7 @@ CREATE TABLE connections_sync(
last_sync_ts TEXT
);
-INSERT INTO connections_sync (connections_sync_id, should_sync, last_sync_ts) VALUES (1,0,NULL);
+INSERT INTO connections_sync (connections_sync_id, should_sync, last_sync_ts) VALUES (1, 1, NULL);
|]
down_m20251007_connections_sync :: Query
@@ -0,0 +1,30 @@
{-# LANGUAGE QuasiQuotes #-}
module Simplex.Chat.Store.SQLite.Migrations.M20251017_chat_tags_cascade where
import Database.SQLite.Simple (Query)
import Database.SQLite.Simple.QQ (sql)
m20251017_chat_tags_cascade :: Query
m20251017_chat_tags_cascade =
[sql|
PRAGMA writable_schema=1;
UPDATE sqlite_master
SET sql = replace(sql, 'user_id INTEGER REFERENCES users', 'user_id INTEGER REFERENCES users ON DELETE CASCADE')
WHERE name = 'chat_tags' AND type = 'table';
PRAGMA writable_schema=0;
|]
down_m20251017_chat_tags_cascade :: Query
down_m20251017_chat_tags_cascade =
[sql|
PRAGMA writable_schema=1;
UPDATE sqlite_master
SET sql = replace(sql, 'user_id INTEGER REFERENCES users ON DELETE CASCADE', 'user_id INTEGER REFERENCES users')
WHERE name = 'chat_tags' AND type = 'table';
PRAGMA writable_schema=0;
|]
@@ -3329,6 +3329,16 @@ Query:
Plan:
SEARCH chat_item_versions USING INDEX idx_chat_item_versions_chat_item_id (chat_item_id=?)
Query:
SELECT chat_tag_id, chat_tag_emoji, chat_tag_text
FROM chat_tags
WHERE user_id = ?
ORDER BY tag_order
Plan:
SEARCH chat_tags USING INDEX idx_chat_tags_user_id (user_id=?)
USE TEMP B-TREE FOR ORDER BY
Query:
SELECT command_id, connection_id, command_function, command_status
FROM commands
@@ -4333,6 +4343,20 @@ Query:
Plan:
Query:
INSERT INTO chat_tags (user_id, chat_tag_emoji, chat_tag_text, tag_order)
VALUES (?,?,?, COALESCE((SELECT MAX(tag_order) + 1 FROM chat_tags WHERE user_id = ?), 1))
Plan:
SCALAR SUBQUERY 1
SEARCH chat_tags USING INDEX idx_chat_tags_user_id (user_id=?)
Query:
INSERT INTO chat_tags_chats (contact_id, chat_tag_id)
VALUES (?,?)
Plan:
Query:
INSERT INTO commands (connection_id, command_function, command_status, user_id, created_at, updated_at)
VALUES (?,?,?,?,?,?)
@@ -663,7 +663,7 @@ CREATE TABLE operator_usage_conditions(
);
CREATE TABLE chat_tags(
chat_tag_id INTEGER PRIMARY KEY AUTOINCREMENT,
-user_id INTEGER REFERENCES users,
+user_id INTEGER REFERENCES users ON DELETE CASCADE,
chat_tag_text TEXT NOT NULL,
chat_tag_emoji TEXT,
tag_order INTEGER NOT NULL
@@ -134,6 +134,7 @@ chatDirectTests = do
it "both users have contact link" testMultipleUserAddresses
it "create user with same servers" testCreateUserSameServers
it "delete user" testDeleteUser
it "delete user with chat tags" testDeleteUserChatTags
it "users have different chat item TTL configuration, chat items expire" testUsersDifferentCIExpirationTTL
it "chat items expire after restart for all users according to per user configuration" testUsersRestartCIExpiration
it "chat items only expire for users who configured expiration" testEnableCIExpirationOnlyForOneUser
@@ -2110,6 +2111,26 @@ testDeleteUser =
alice ##> "/users"
alice <## "no users"
testDeleteUserChatTags :: HasCallStack => TestParams -> IO ()
testDeleteUserChatTags =
testChat2 aliceProfile bobProfile $
\alice bob -> do
connectUsers alice bob
alice ##> "/_create tag {\"text\":\"my tag\"}"
alice <## "[{\"chatTagId\":1,\"chatTagText\":\"my tag\"}]"
alice ##> "/_tags @2 1"
alice <## "chat tags updated"
alice ##> "/create user alisa"
showActiveUser alice "alisa"
alice ##> "/_delete user 1 del_smp=off"
alice <## "ok"
alice ##> "/users"
alice <## "alisa (active)"
testUsersDifferentCIExpirationTTL :: HasCallStack => TestParams -> IO ()
testUsersDifferentCIExpirationTTL ps = do
withNewTestChat ps "bob" bobProfile $ \bob -> do