Helger Lipmaa
@helger.bsky.social
Cryptography professor at the University of Tartu, Estonia. Zero-Knowledge. SNARKs.
55 followers · 75 following · 3 posts
Reposted by Helger Lipmaa
tancre.de

The program of Crypto 2024 is online, with an invited talk by Karthik Bhargavan on formal verification in crypto! #crypto2024 crypto.iacr.org/2024/program...

Crypto 2024 program

44th Annual International Cryptology Conference

Reposted by Helger Lipmaa
eprint.bsky.social

On Knowledge-Soundness of Plonk in ROM from Falsifiable Assumptions (Helger Lipmaa) ia.cr/2024/994

Abstract. Lipmaa, Parisella, and Siim [Eurocrypt, 2024] proved the extractability of the KZG polynomial commitment scheme under the falsifiable assumption ARSDH. They also showed that variants of real-world zk-SNARKs like Plonk can be made knowledge-sound in the random oracle model (ROM) under the ARSDH assumption. However, their approach did not consider various batching optimizations, resulting in their variant of Plonk having an approximately 3.5 times longer argument. Our contributions are: (1) We prove that several batch-opening protocols for KZG, used in modern zk-SNARKs, have computational special-soundness under the ARSDH assumption. (2) We prove that interactive Plonk has computational special-soundness under the ARSDH assumption and a new falsifiable assumption TriRSDH. We also prove that a minor modification of the interactive Plonk has computational special-soundness under only the ARSDH assumption. The Fiat-Shamir transform can be applied to obtain non-interactive versions, which are secure in the ROM under the same assumptions.
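
For readers who want the object being batched in front of them, here is the standard single-point KZG opening check (a refresher, not taken from the paper): with an SRS $([\tau^i]_1)_{i \le d}, [1]_2, [\tau]_2$, a polynomial $f$ is committed as $C = [f(\tau)]_1$; to open at $z$ to $v = f(z)$, the prover sends $\pi = [q(\tau)]_1$ for $q(X) = \frac{f(X)-v}{X-z}$, and the verifier accepts iff
\[
e\big(C - [v]_1,\, [1]_2\big) = e\big(\pi,\, [\tau]_2 - [z]_2\big).
\]
The batch-opening protocols analyzed in the paper combine several such openings, for example via a random linear combination with a verifier challenge $\gamma$ (checking $\sum_i \gamma^i C_i$ against $\sum_i \gamma^i v_i$ with a single quotient proof), and the paper shows these batched protocols remain computationally special-sound under ARSDH.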
Reposted by Helger Lipmaa
eprint.bsky.social

Polymath: Groth16 Is Not The Limit (Helger Lipmaa) ia.cr/2024/916

Abstract. Shortening the argument (three group elements or 1536 / 3072 bits over the BLS12-381/BLS24-509 curves) of the Groth16 zk-SNARK for R1CS is a long-standing open problem. We propose a zk-SNARK Polymath for the Square Arithmetic Programming constraint system using the KZG polynomial commitment scheme. Polymath has a shorter argument (1408 / 1792 bits over the same curves) than Groth16. At 192-bit security, Polymath’s argument is nearly half the size, making it highly competitive for high-security future applications. Notably, we handle public inputs in a simple way. We optimized Polymath’s prover through an exhaustive parameter search. Polymath’s prover does not output 𝔾₂ elements, aiding in batch verification, SNARK aggregation, and recursion. Polymath’s properties make it highly suitable to be the final SNARK in SNARK compositions.
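
A quick sanity check of the argument-size figures quoted above. The bit counts are taken from the abstract; pairing BLS12-381 with roughly 128-bit and BLS24-509 with roughly 192-bit security is my reading, and the ratio arithmetic is mine:

# Argument sizes in bits, as stated in the abstract.
sizes = {
    "BLS12-381 (~128-bit security)": {"Groth16": 1536, "Polymath": 1408},
    "BLS24-509 (~192-bit security)": {"Groth16": 3072, "Polymath": 1792},
}
for curve, s in sizes.items():
    ratio = s["Polymath"] / s["Groth16"]
    print(f"{curve}: Polymath is {ratio:.0%} of Groth16's argument size")
# -> about 92% at the lower security level, about 58% ("nearly half") at 192 bits.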
Reposted by Helger Lipmaa

The word "incremental" should not be the kiss of death that it is. All research is incremental: increment by increment, you make progress! Katalin Karikó on how the discovery that brought us the mRNA Covid vaccine was received (from her memoir "Breaking Through," HIGHLY RECOMMEND):

Reposted by Helger Lipmaa
cryptomaeher.bsky.social

New paper! We construct an extractable witness encryption scheme for KZG commitments. This leads to a surprisingly efficient Laconic OT. ia.cr/2024/264
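
For context (definition-level only, not the paper's construction): witness encryption for the KZG opening relation
\[
R_{\mathrm{KZG}} = \big\{\, \big((C, z, v),\ \pi\big)\ :\ e\big(C - [v]_1, [1]_2\big) = e\big(\pi, [\tau]_2 - [z]_2\big) \,\big\}
\]
lets anyone encrypt a message to a statement $(C, z, v)$ so that the ciphertext can be decrypted by anyone holding a valid opening proof $\pi$ for it, i.e. $\mathsf{Enc}(\mathsf{srs}, (C,z,v), m) \to \mathsf{ct}$ and $\mathsf{Dec}(\mathsf{ct}, \pi) \to m$ whenever $((C,z,v),\pi) \in R_{\mathrm{KZG}}$ (the $\mathsf{Enc}/\mathsf{Dec}$ names are generic notation, not the paper's). Extractability strengthens this: from any successful decryptor one can extract such a witness $\pi$.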

Reposted by Helger Lipmaa
eprint.bsky.social

NIZKs with Maliciously Chosen CRS: Subversion Advice-ZK and Accountable Soundness (Prabhanjan Ananth, Gilad Asharov, Vipul Goyal, Hadar Kaner, Pratik Soni, Brent Waters) ia.cr/2024/207

Abstract. Trusted setup is commonly used for non-interactive proof and argument systems. However, there is no guarantee that the setup parameters in these systems are generated in a trustworthy manner. Building upon previous works, we conduct a systematic study of non-interactive zero-knowledge arguments in the common reference string model where the authority running the trusted setup might be corrupted. We explore both zero-knowledge and soundness properties in this setting. 

-   We consider a new notion of NIZK called subversion advice-ZK NIZK that strengthens the notion of zero-knowledge with malicious authority security considered by Ananth, Asharov, Dahari and Goyal (EUROCRYPT’21), and present a construction of a subversion advice-ZK NIZK from the sub-exponential hardness of learning with errors.

-   We introduce a new notion that strengthens the traditional definition of soundness, called accountable soundness, and present generic compilers that lift any NIZK for interesting languages in NP to additionally achieve accountable soundness.

-   Finally, we combine our results for both subversion advice-ZK and accountable soundness to achieve a subversion advice-ZK NIZK that also satisfies accountable soundness. This results in the first NIZK construction that satisfies meaningful notions of both soundness and zero-knowledge even for maliciously chosen CRS.
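
For orientation, a quick recap of the textbook CRS-model definitions these notions strengthen (standard material, phrased by me, not the paper's new definitions): a NIZK has syntax
\[
\mathsf{Setup}(1^\lambda) \to \mathsf{crs}, \qquad \mathsf{Prove}(\mathsf{crs}, x, w) \to \pi, \qquad \mathsf{Verify}(\mathsf{crs}, x, \pi) \in \{0,1\},
\]
standard soundness only quantifies over an honestly generated CRS,
\[
\Pr\big[\mathsf{crs} \leftarrow \mathsf{Setup}(1^\lambda);\ (x,\pi) \leftarrow P^*(\mathsf{crs})\ :\ \mathsf{Verify}(\mathsf{crs},x,\pi) = 1 \ \wedge\ x \notin L\big] \le \mathrm{negl}(\lambda),
\]
and standard zero-knowledge lets the simulator generate the CRS together with a trapdoor. The notions above ask what can still be guaranteed when $\mathsf{Setup}$ itself is run by a corrupted authority, on the zero-knowledge side (subversion advice-ZK) and on the soundness side (accountable soundness).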
helger.bsky.social

And I forgot to say: it was accepted to Eurocrypt 2024. With ex-students Roberto Parisella and Janno Siim.

Reposted by Helger Lipmaa
eprint.bsky.social

Zero-Knowledge Proofs of Training for Deep Neural Networks (Kasra Abbaszadeh, Christodoulos Pappas, Dimitrios Papadopoulos, Jonathan Katz) ia.cr/2024/162

Abstract. A zero-knowledge proof of training (zkPoT) enables a party to prove that they have correctly trained a committed model based on a committed dataset without revealing any additional information about the model or the dataset. An ideal zkPoT should offer provable security and privacy guarantees, succinct proof size and verifier runtime, and practical prover efficiency. In this work, we present Kaizen, a zkPoT targeted for deep neural networks (DNNs) that achieves the above ideals all at once. In particular, our construction enables a prover to iteratively train their model by the (mini-batch) gradient-descent algorithm where the number of iterations need not be fixed in advance; at the end of each iteration, the prover generates a commitment to the trained model attached with a succinct zkPoT, attesting to the correctness of the entire training process. The proof size and verifier time are independent of the iteration number.

Kaizen relies on two essential building blocks to achieve both prover efficiency and verification succinctness. First, we construct an optimized GKR-style (sumcheck-based) proof system for the gradient-descent algorithm with concretely efficient prover cost; this scheme allows the prover to generate a proof for each iteration of the training process. Then, we recursively compose these proofs across multiple iterations to attain succinctness. As of independent interests, we propose a framework for recursive composition of GKR-style proofs and techniques, such as aggregatable polynomial commitment schemes, to minimize the recursion overhead.

Benchmarks indicate that Kaizen can handle a large model of VGG-11 with 10 million parameters and batch size 16. The prover runtime is 22 minutes (per iteration), which is 43× faster than generic recursive proofs, while we further achieve at least 224× less prover memory overhead. Independent of the number of iterations and, hence, the size of the dataset, the proof size is 1.36 megabytes, and the verifier runtime is only 103 milliseconds.
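
To make the loop structure described above concrete, here is a minimal structural sketch in Python: one mini-batch gradient step per iteration, a commitment to the updated model, a per-iteration proof, and a running "folded" proof whose size does not grow with the number of iterations. Everything here is illustrative: the function names are hypothetical, the commitment is a bare hash stand-in, and the per-step proof and recursive composition are placeholders, not the paper's GKR/sumcheck machinery.

import hashlib, json, random

def commit(obj):
    # Toy stand-in for a commitment; Kaizen uses polynomial commitments, not a bare hash.
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()

def sgd_step(weights, batch, lr=0.1):
    # One mini-batch gradient-descent step for a linear model with squared loss.
    grad = [0.0] * len(weights)
    for x, y in batch:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j, xi in enumerate(x):
            grad[j] += 2 * err * xi / len(batch)
    return [w - lr * g for w, g in zip(weights, grad)]

def prove_step(prev_com, new_com, batch_com):
    # Placeholder for the per-iteration GKR-style proof that the update was computed correctly.
    return {"prev": prev_com, "new": new_com, "batch": batch_com}

def fold(running_proof, step_proof):
    # Placeholder for recursive composition: the running proof stays
    # constant-size instead of growing with the number of iterations.
    return hashlib.sha256((running_proof + json.dumps(step_proof)).encode()).hexdigest()

data = [([1.0, 2.0], 1.0), ([0.5, -1.0], 0.0), ([2.0, 1.0], 1.5), ([1.5, 0.5], 1.0)]
weights, running_proof = [0.0, 0.0], ""
for _ in range(3):                      # the number of iterations need not be fixed in advance
    batch = random.sample(data, 2)
    new_weights = sgd_step(weights, batch)
    step = prove_step(commit(weights), commit(new_weights), commit(batch))
    running_proof = fold(running_proof, step)
    weights = new_weights
print("model commitment:", commit(weights)[:16], " running proof:", running_proof[:16])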
Reposted by Helger Lipmaa
eprint.bsky.social

Adaptively-Sound Succinct Arguments for NP from Indistinguishability Obfuscation (Brent Waters, David J. Wu) ia.cr/2024/165

Abstract. A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement x is true with a proof of size o(|x|+|w|), where w is the associated NP witness. A SNARG satisfies adaptive soundness if the malicious prover can choose the statement to prove after seeing the scheme parameters. In this work, we provide the first adaptively-sound SNARG for NP in the plain model assuming sub-exponentially-hard indistinguishability obfuscation, sub-exponentially-hard one-way functions, and either the (polynomial) hardness of the discrete log assumption or the (polynomial) hardness of factoring. This gives the first adaptively-sound SNARG for NP from falsifiable assumptions. All previous SNARGs for NP in the plain model either relied on non-falsifiable cryptographic assumptions or satisfied a weak notion of non-adaptive soundness (where the adversary has to choose the statement it proves before seeing the scheme parameters).
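
Spelling out the two soundness notions contrasted in the abstract (standard definitions, phrased by me): succinctness means $|\pi| = o(|x| + |w|)$. Adaptive soundness requires, for every efficient prover $\mathcal{A}$,
\[
\Pr\big[\mathsf{crs} \leftarrow \mathsf{Setup}(1^\lambda);\ (x, \pi) \leftarrow \mathcal{A}(\mathsf{crs})\ :\ \mathsf{Verify}(\mathsf{crs}, x, \pi) = 1 \ \wedge\ x \notin L\big] \le \mathrm{negl}(\lambda),
\]
whereas non-adaptive soundness only bounds this probability for a false statement $x \notin L$ that is fixed before the parameters are sampled and then handed to $\mathcal{A}(\mathsf{crs}, x)$.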