Publications

2019
  • Giving State to the Stateless: Augmenting Trustworthy Computation with Ledgers
    Gabriel Kaptchuk, Ian Miers, Matthew Green
    NDSS 2019
    [preprint] [proceedings] [show abstract] [bibTeX]

    In this work we investigate the problem of achieving secure computation by combining stateless trusted devices with public ledgers. We consider a hybrid paradigm in which a client-side device (such as a co-processor or trusted enclave) performs secure computation, while interacting with a public ledger via a possibly malicious host computer. We explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) even when the device has no persistent storage; this allows us to build sophisticated applications using inexpensive trusted hardware or even pure cryptographic obfuscation techniques. We further show how to use this paradigm to achieve censorship-resistant communication with a network, even when network communications are mediated by a potentially malicious host. Finally we describe a number of practical applications that can be achieved today. These include the synchronization of private smart contracts; rate-limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.
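    The rollback-resistance idea at the core of this paradigm can be sketched in a few lines. The following toy Python sketch is our illustration only (the paper's protocols use authenticated ledger proofs, not a trusted in-memory list): a stateless device refuses to act on any state whose hash the ledger does not confirm as current.

      # Toy sketch: ledger-backed state for a stateless device (illustrative only).
      import hashlib, json

      ledger = []  # stand-in for a public append-only ledger

      def state_hash(state):
          return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

      def device_step(state, inp, transition):
          """Stateless device: keeps nothing between calls."""
          if not ledger or ledger[-1] != state_hash(state):
              raise ValueError("host supplied a stale (rolled-back) state")
          new_state = transition(state, inp)
          ledger.append(state_hash(new_state))  # in reality: host posts, device checks a proof
          return new_state

      # Usage: a monotone counter the malicious host cannot rewind.
      ledger.append(state_hash({"ctr": 0}))
      s = device_step({"ctr": 0}, None, lambda st, _: {"ctr": st["ctr"] + 1})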


2018
  • Practical State Recovery Attacks against Legacy RNG Implementations
    Shaanan N. Cohney, Matthew Green, Nadia Heninger
    CCS 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    The ANSI X9.17/X9.31 pseudorandom number generator design was first standardized in 1985, with variants incorporated into numerous cryptographic standards over the next three decades. The design uses timestamps together with a statically keyed block cipher to produce pseudo-random output. It has been known since 1998 that the key must remain secret in order for the output to be secure. However, neither the FIPS 140-2 standardization process nor NIST's later descriptions of the algorithm specified any process for key generation.
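    For reference, the X9.31 output function is short enough to state directly. A minimal sketch (Python, assuming the pycryptodome package for AES; variable names are ours):

      # One round of the ANSI X9.31-style generator: E is a block cipher under
      # the static key K, T a timestamp block, V the secret seed/state.
      from Crypto.Cipher import AES  # pycryptodome

      def x931_round(K, V, T):
          E = AES.new(K, AES.MODE_ECB).encrypt
          xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
          I = E(T)          # intermediate block derived from the timestamp
          R = E(xor(I, V))  # pseudorandom output block
          V = E(xor(R, I))  # next state
          return R, V

      # If K leaks (e.g., it is hard-coded) and T can be guessed, an attacker
      # who sees R computes V = E_K^{-1}(R) XOR I and predicts all later output.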

    We performed a systematic study of publicly available FIPS 140-2 certifications for hundreds of products that implemented the ANSI X9.31 random number generator, and found twelve whose certification documents disclosed the use of static, hard-coded keys in source code, leaving the implementation vulnerable to an attacker who can learn this key from the source code or binary. In order to demonstrate the practicality of such an attack, we develop a full passive decryption attack against FortiGate VPN gateway products using FortiOS v4 that recovers the private key in seconds. We measure the prevalence of this vulnerability on the visible Internet using active scans, and demonstrate state recovery and full private key recovery in the wild. Our work highlights the extent to which the validation and certification process has failed to provide even modest security guarantees.


  • Non-Interactive Secure Computation from One-Way Functions
    Saikrishna Badrinarayanan, Abhishek Jain, Rafail Ostrovsky, Ivan Visconti
    ASIACRYPT 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    The notion of non-interactive secure computation (NISC) first introduced in the work of Ishai et al. [EUROCRYPT 2011] studies the following problem: Suppose a receiver $R$ wishes to publish an encryption of her secret input $y$ so that any sender $S$ with input $x$ can then send a message $m$ that reveals $f(x,y)$ to $R$ (for some function $f$). Here, $m$ can be viewed as an encryption of $f(x,y)$ that can be decrypted by $R$. NISC requires security against both malicious senders and receivers, and also requires the receiver's message to be reusable across multiple computations (w.r.t. a fixed input of the receiver).
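    The message flow, though not the security, is easy to picture. An interface-level Python sketch (placeholders only; real NISC requires OT or, as in this paper, a stateless hardware token):

      # NISC message flow: the receiver publishes one reusable message, and each
      # sender answers with a single message that reveals only f(x, y).
      def receiver_publish(y):
          return {"enc_y": y}                # placeholder for an encryption of y

      def sender_respond(f, x, published):
          return f(x, published["enc_y"])    # placeholder for an encryption of f(x, y)

      published = receiver_publish(y=7)                        # posted once...
      out1 = sender_respond(lambda x, y: x * y, 6, published)  # ...and reused
      out2 = sender_respond(lambda x, y: x + y, 5, published)  # across senders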

    All previous solutions to this problem necessarily rely upon OT (or specific number-theoretic assumptions), even in the common reference string or random oracle models, and even when settling for weaker notions of security such as super-polynomial-time simulation.

    In this work, we construct a NISC protocol based on the minimal assumption of one-way functions, in the stateless hardware token model. Our construction achieves UC security and requires a single token sent by the receiver to the sender.


  • Deterministic Document Exchange Protocols, and Almost Optimal Binary Codes for Edit Errors
    Kuan Cheng, Zhengzhong Jin, Xin Li, Ke Wu
    FOCS 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    We study two basic problems regarding edit error, i.e. document exchange and error correcting codes for edit errors (insdel codes). For message length $n$ and edit error upper bound $k$, it is known that in both problems the optimal sketch size or the optimal number of redundant bits is $\Theta(k \log \frac{n}{k})$. However, known constructions are far from achieving these bounds.

    We significantly improve previous results on both problems. For document exchange, we give an efficient deterministic protocol with sketch size $O(k\log^2 \frac{n}{k})$. This significantly improves the previous best known deterministic protocol, which has sketch size $O(k^2 + k \log^2 n)$ (Belazzougui, 2015). For binary insdel codes, we obtain the following results:

    1. An explicit binary insdel code which encodes an $n$-bit message $x$ against $k$ errors with redundancy $O(k \log^2 \frac{n}{k})$. In particular this implies an explicit family of binary insdel codes that can correct $\varepsilon$ fraction of insertions and deletions with rate $1-O(\varepsilon \log^2 (\frac{1}{\varepsilon}))=1-\widetilde{O}(\varepsilon)$.

    2. An explicit binary insdel code which encodes an $n$-bit message $x$ against $k$ errors with redundancy $O(k \log n)$. This is the first explicit construction of binary insdel codes that has optimal redundancy for a wide range of error parameters $k$, and this brings our understanding of binary insdel codes much closer to that of standard binary error correcting codes.

    In obtaining our results we introduce the notion of \emph{$\varepsilon$-self matching hash functions} and \emph{$\varepsilon$-synchronization hash functions}. We believe our techniques can have further applications in the literature.
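    To make the rate claim in item 1 explicit: for $k = \varepsilon n$ edit errors, the redundancy bound gives $O(k \log^2 \frac{n}{k}) = O(\varepsilon n \log^2 \frac{1}{\varepsilon})$ redundant bits, so the rate is $\frac{n}{n + O(\varepsilon n \log^2 \frac{1}{\varepsilon})} \geq 1 - O(\varepsilon \log^2 \frac{1}{\varepsilon}) = 1 - \widetilde{O}(\varepsilon)$.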


  • Round-Optimal Secure Multiparty Computation with Honest Majority
    Prabhanjan Ananth, Arka Rai Choudhuri, Aarushi Goel, Abhishek Jain
    CRYPTO 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct several round-optimal $n$-party protocols, tolerating any $t< \frac{n}{2}$ corruptions.

    - Security with abort: We give the first construction of two round MPC for general functions that achieves security with abort against malicious adversaries in the plain model. The security of our protocol only relies on one-way functions.

    - Guaranteed output delivery: We also construct protocols that achieve security with guaranteed output delivery: (i) Against fail-stop adversaries, we construct two round MPC either in the (bare) public-key infrastructure model with no additional assumptions, or in the plain model assuming two-round semi-honest oblivious transfer. In three rounds, however, we can achieve security assuming only one-way functions. (ii) Against malicious adversaries, we construct three round MPC in the plain model, assuming public-key encryption and Zaps. Previously, such protocols were only known based on specific learning assumptions and required the use of common reference strings.

    All of our results are obtained via general compilers that may be of independent interest.


  • Promise Zero Knowledge and its Applications to Round-Optimal MPC
    Saikrishna Badrinarayanan, Vipul Goyal, Abhishek Jain, Yael Tauman Kalai, Dakshita Khurana, Amit Sahai
    CRYPTO 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or $N^{th}$-Residuosity).

    We demonstrate the following applications of our new technique:

    - We construct the first round-optimal (i.e., four round) MPC protocol for general functions based on polynomially hard DDH (or QR or $N^{th}$-Residuosity).

    - We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for ``list coin-tossing'' -- a slight relaxation of coin-tossing that suffices for most conceivable applications -- based on polynomially hard DDH (or QR or $N^{th}$-Residuosity). This result generalizes to randomized input-less functionalities.

    Previously, four round MPC protocols required sub-exponential-time hardness assumptions and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries.

    In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving ``non-malleability'' across different primitives.


  • The Bottleneck Complexity of Secure Multiparty Computation
    Elette Boyle, Abhishek Jain, Manoj Prabhakaran, Ching-Hua Yu
    ICALP 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    In this work, we initiate the study of bottleneck complexity as a new communication efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution.
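    The measure is concrete enough to compute directly from a transcript. A short Python sketch (the message-list representation is ours, for illustration):

      # Bottleneck complexity of a protocol execution: the maximum number of
      # bits any single party sends or receives in total.
      from collections import defaultdict

      def bottleneck(messages):  # messages: (sender, receiver, bits)
          load = defaultdict(int)
          for s, r, bits in messages:
              load[s] += bits
              load[r] += bits
          return max(load.values())

      star = [(i, 0, 1) for i in range(1, 8)]    # all parties send 1 bit to party 0
      chain = [(i, i + 1, 1) for i in range(7)]  # party i talks only to party i+1
      assert bottleneck(star) == 7               # O(n): concentrated at the center
      assert bottleneck(chain) == 2              # O(1): sublinear bottleneck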

    We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving $O(n)$ bottleneck complexity (where $n$ is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required; (2) on the other hand, several useful classes of functions do have $o(n)$ bottleneck complexity, when no security is required.

    Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with fixed communication-pattern for computing any functionality into a secure MPC protocol while preserving the bottleneck complexity of the underlying protocol (up to security parameter overhead). Given our compiler, an efficient protocol for any function $f$ with sublinear bottleneck complexity can be transformed into an MPC protocol for $f$ with the same bottleneck complexity.

    Along the way, we build cryptographic primitives --- incremental fully-homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution --- that may be of independent interest.


  • On the Existence of Three Round Zero-Knowledge Proofs
    Nils Fleischhacker, Vipul Goyal, Abhishek Jain
    EUROCRYPT 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    We study the round complexity of zero-knowledge (ZK) proof systems. While five round ZK proofs for NP are known from standard assumptions [Goldreich-Kahan, J. Cryptology'96], Katz [TCC'08] proved that four rounds are insufficient for this task w.r.t. black-box simulation.

    In this work, we study the feasibility of ZK proofs using non-black-box simulation. Our main result is that three round private-coin ZK proofs for NP do not exist (even w.r.t. non-black-box simulation), under certain assumptions on program obfuscation. Our approach builds upon the recent work of Kalai et al. [Crypto'17] who ruled out constant round public-coin ZK proofs under the same assumptions as ours.


  • Synchronized Aggregate Signatures from the RSA Assumption
    Susan Hohenberger, Brent Waters
    EUROCRYPT 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    In this work we construct efficient aggregate signatures from the RSA assumption in the synchronized setting. In this setting, the signing algorithm takes as input a (time) period $t$ as well as the secret key and message. A signer should sign at most once for each $t$. A set of signatures can be aggregated so long as they were all created for the same period $t$. Synchronized aggregate signatures are useful in systems where there is a natural reporting period such as log and sensor data, or for signatures embedded in a blockchain protocol where the creation of an additional block is a natural synchronization event.

    We design a synchronized aggregate signature scheme that works for a bounded number of periods $T$ that is given as a parameter to a global system setup. The big technical question is whether we can create solutions that will perform well with the large $T$ values that we might use in practice. For instance, if one wanted signing keys to last up to ten years and be able to issue signatures every second, then we would need to support a period bound of upwards of $2^{28}$.

    We build our solution in stages where we start with an initial solution that establishes feasibility, but has an impractically large signing time where the number of exponentiations and prime searches grows linearly with $T$. We prove this scheme secure in the standard model under the RSA assumption with respect to honestly-generated keys. We then provide a tradeoff method where one can trade off the time to create signatures against the space required to store private keys. One point in the tradeoff is where each scales with $\sqrt{T}$.

    Finally, we reach our main innovation which is a scheme where both the signing time and storage scale with $\lg{T}$, which allows us to keep both computation and storage costs modest even for large values of $T$. Conveniently, our final scheme uses the same verification algorithm, and has the same distribution of public keys and signatures as the first scheme. Thus we are able to recycle the existing security proof for the new scheme.
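    As a sanity check on these parameters: signing once per second for ten years needs $T \approx 10 \cdot 365 \cdot 86400 \approx 3.15 \times 10^8 \approx 2^{28.2}$ periods, matching the $2^{28}$ bound above. Per-signature work scaling with $T$, $\sqrt{T}$, and $\lg T$ then corresponds to roughly $3 \times 10^8$, $1.8 \times 10^4$, and $28$ operations respectively, which is why only the final scheme keeps costs modest at this scale.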

    We also show how to extend our results to the identity-based setting in the random oracle model, which can further reduce the overall cryptographic overhead. We conclude with a detailed evaluation of the signing time and storage requirements for various practical settings of the system parameters.


  • How to Squeeze a Crowd: Reducing Bandwidth in Mixing Cryptocurrencies
    Alishah Chator, Matthew Green
    IEEE S&B 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    Several popular cryptocurrencies incorporate privacy features that “mix” real transactions with cover traffic in order to obfuscate the public transaction graph. The underlying protocols, which include CryptoNote and Monero’s RingCT, work by first identifying a real transaction output (TXO), sampling a number of cover outputs, and transmitting the entire resulting set to verifiers, along with a zero knowledge (or WI) proof that hides the identity of the real transaction. Unfortunately, many of these schemes suffer from a practical limitation: the description of the combined input set grows linearly with the size of the anonymity set.

    In this work we propose a simple technique for efficiently sampling cover traffic from a finite (and public) set of known values, while deriving a compact description of the resulting transaction set. This technique, which is based on programmable hash functions, allows us to dramatically reduce transaction bandwidth when large cover sets are used. We refer to our construction as a recoverable sampling scheme, and note that it may be of independent interest for other privacy applications. We present formal security definitions; prove our constructions secure; and show how these constructions can be integrated with various currencies and different cover sampling distributions.
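    A toy version of the bandwidth idea is easy to sketch (Python; this shows only seed-derived cover sampling, not the paper's programmable-hash construction, which additionally lets the real output be embedded indistinguishably among the covers):

      import hashlib

      def derive_cover_set(seed, num_txos, set_size):
          """Expand a short public seed into a set of cover TXO indices."""
          idx, ctr = [], 0
          while len(idx) < set_size:
              h = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
              i = int.from_bytes(h, "big") % num_txos
              if i not in idx:
                  idx.append(i)
              ctr += 1
          return idx

      # The transaction ships a 16-byte seed instead of set_size explicit
      # indices; verifiers re-derive the identical anonymity set.
      covers = derive_cover_set(b"\x01" * 16, num_txos=1_000_000, set_size=100)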


  • Don’t Talk to Strangers: On the Challenges of Intelligent Vehicle Authentication
    Alishah Chator, Matthew Green
    VEHITS 2018
    [preprint] [proceedings] [show abstract] [bibTeX]

    Vehicle-to-vehicle (V2V) communications offer an unprecedented opportunity to increase driver safety. At the same time, the use of computer networking technologies raises new concerns around information security and privacy. Specifically, V2V communications systems provide the opportunity for malicious individuals to transmit false data, with unknown effects on future vehicle systems. A number of proposals have been advanced in order to add authenticity guarantees to V2V systems using cryptographic techniques. Unfortunately, many of these proposals have a number of side effects related to efficiency and driver privacy. In this work we discuss these tradeoffs and explain why it is challenging to achieve all desired properties in a single system. We then suggest alternative approaches that may be more realistic than current proposals.


2017
  • Non-Interactive Multiparty Computation without Correlated Randomness
    Shai Halevi, Yuval Ishai, Abhishek Jain, Ilan Komargodski, Amit Sahai, Eylon Yogev
    ASIACRYPT 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We study the problem of non-interactive multiparty computation (NI-MPC) where a group of completely asynchronous parties can evaluate a function over their joint inputs by sending a single message to an evaluator who computes the output. Previously, the only general solutions to this problem that resisted collusions between the evaluator and a set of parties were based on multi-input functional encryption and required the use of complex correlated randomness setup.

    In this work, we present a new solution for NI-MPC against arbitrary collusions using a public-key infrastructure (PKI) setup supplemented with a common random string. A PKI is, in fact, the minimal setup that one can hope for in this model in order to achieve a meaningful ``best possible'' notion of security, namely, that an adversary that corrupts the evaluator and an arbitrary set of parties only learns the residual function obtained by restricting the function to the inputs of the uncorrupted parties. Our solution is based on indistinguishability obfuscation and DDH both with sub-exponential security. We extend this main result to the case of general interaction patterns, providing the above best possible security that is achievable for the given interaction.

    Our main result gives rise to a novel notion of (public-key) multiparty obfuscation, where $n$ parties can independently obfuscate program modules $M_i$ such that the obfuscated modules, when put together, exhibit the functionality of the program obtained by ``combining'' the underlying modules $M_i$. This notion may be of independent interest.


  • A Generic Approach to Constructing and Proving Verifiable Random Functions
    Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters
    TCC 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    Verifiable Random Functions (VRFs) as introduced by Micali, Rabin and Vadhan are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate under a specific number-theoretic setting --- mostly in bilinear groups --- and then grappling with the challenges of proving security in the VRF environment. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions.

    In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function for a functionality that is ``admissible hash friendly'', (2) a non-interactive statistically binding commitment scheme (without trusted setup), and (3) non-interactive witness indistinguishable (NIWI) proofs. The first primitive can be replaced with a more basic puncturable PRF if one is willing to settle for selective security or assume sub-exponential hardness of assumptions.

    In the second half of our work we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible hash friendly constructions. Our first construction is from the $n$-powerDDH assumption. The next is from the $\phi$-hiding assumption.


  • On Secure Two-Party Computation in Three Rounds
    Prabhanjan Ananth, Abhishek Jain
    TCC 2017
    [preprint] [proceedings] [show abstract] [bibTeX]


  • Round Optimal Concurrent MPC via Strong Simulation
    Saikrishna Badrinarayanan, Vipul Goyal, Abhishek Jain, Dakshita Khurana, Amit Sahai
    TCC 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    In this paper, we study the round complexity of concurrently secure multi-party computation (MPC) with super-polynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks that show that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model.

    We obtain the following results:

    – Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming sub-exponentially secure DDH and LWE.

    – Two-round concurrent MPC with SPS security against Byzantine adversaries for input-less randomized functionalities, assuming sub-exponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications.

    Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multi-party setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017).

    To obtain our results, we compile protocols that already achieve security against “semi-malicious” adversaries, to protocols secure against fully malicious adversaries, additionally assuming sub-exponential DDH. Our protocols develop new techniques to use two-round zero-knowledge with super-polynomial strong simulation, defined by Pass (Eurocrypt 2003) and very recently realized by Khurana and Sahai (FOCS 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.


  • Verified Correctness and Security of mbedTLS HMAC-DRBG
    Katherine Q. Ye, Matthew Green, Naphat Sanguansin, Lennart Beringer, Adam Petcher, Andrew W. Appel
    CCS 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We have formalized the functional specification of HMAC-DRBG (NIST 800-90A), and we have proved its cryptographic security, namely that its output is pseudorandom, using a hybrid game-based proof. We have also proved that the mbedTLS implementation (C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid game proof holds on any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
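    The functional specification in question is compact. A minimal Python rendering of the HMAC-DRBG core (following NIST SP 800-90A; simplifications such as omitted reseed counters and additional input are ours):

      import hmac, hashlib

      class HmacDrbg:
          """HMAC-DRBG with SHA-256: the generate/update loop of SP 800-90A."""
          def __init__(self, seed: bytes):
              self.K, self.V = b"\x00" * 32, b"\x01" * 32
              self._update(seed)

          def _mac(self, key, data):
              return hmac.new(key, data, hashlib.sha256).digest()

          def _update(self, data: bytes = b""):
              self.K = self._mac(self.K, self.V + b"\x00" + data)
              self.V = self._mac(self.K, self.V)
              if data:
                  self.K = self._mac(self.K, self.V + b"\x01" + data)
                  self.V = self._mac(self.K, self.V)

          def generate(self, n: int) -> bytes:
              out = b""
              while len(out) < n:
                  self.V = self._mac(self.K, self.V)  # iterate HMAC to stream output
                  out += self.V
              self._update()                          # re-key after each request
              return out[:n]

      drbg = HmacDrbg(b"entropy input || nonce || personalization")
      block = drbg.generate(48)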


  • Bolt: Anonymous Payment Channels for Decentralized Currencies
    Matthew Green, Ian Miers
    CCS 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for "micropayment channels" --- two party associations that use the ledger only for dispute resolution --- we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.


  • Fairness in an Unfair World: Fair Multiparty Computation from Public Bulletin Boards
    Arka Rai Choudhuri, Matthew Green, Abhishek Jain, Gabriel Kaptchuk, Ian Miers
    CCS 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    Secure multiparty computation allows mutually distrusting parties to compute a function on their private inputs such that nothing but the function output is revealed. Achieving fairness --- that all parties learn the output or no one does --- is a long-studied problem with known impossibility results in the standard model if a majority of parties are dishonest.

    We present a new model for achieving fairness in MPC against dishonest majority by using public bulletin boards implemented via existing infrastructure such as blockchains or Google's certificate transparency logs. We present both theoretical and practical constructions using either witness encryption or trusted hardware (such as Intel SGX).

    Unlike previous works that either penalize an aborting party or achieve weaker notions such as $\Delta$-fairness, we achieve complete fairness using existing infrastructure.


  • A New Approach to Round-Optimal Secure Multiparty Computation
    Prabhanjan Ananth, Arka Rai Choudhuri, Abhishek Jain
    CRYPTO 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We present a new approach towards constructing round-optimal secure multiparty computation (MPC) protocols against malicious adversaries without trusted setup assumptions. Our approach builds on ideas previously developed in the context of covert multiparty computation [Chandran et al., FOCS'07] even though we do not seek covert security. Using our new approach, we obtain the following results:

    1. A five round MPC protocol based on the Decisional Diffie-Hellman (DDH) assumption.

    2. A four round MPC protocol based on one-way permutations and sub-exponentially secure DDH. This result is *optimal* in the number of rounds.

    Previously, no four-round MPC protocol for general functions was known and five-round protocols were only known based on indistinguishability obfuscation (and some additional assumptions) [Garg et al., EUROCRYPT'16].


  • Distinguisher Dependent Simulation in Two Rounds and its Applications
    Abhishek Jain, Yael Tauman Kalai, Dakshita Khurana, Ron Rothblum
    CRYPTO 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques:

    - Two-round witness indistinguishable (WI) arguments for NP from different assumptions than previously known.

    - Two-round arguments and three-round arguments of knowledge for NP that achieve strong WI, witness hiding (WH) and distributional weak zero knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.

    - Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries.

    - Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions.

    Our three-round protocols can be based on DDH or QR or $N^{th}$-Residuosity and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH.

    Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk'96] by using the distinguisher's output in a meaningful way. We believe that this technique is likely to find more applications in the future.


  • Indistinguishability Obfuscation for Turing Machines: Constant Overhead and Amortization
    Prabhanjan Ananth, Abhishek Jain, Amit Sahai
    CRYPTO 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We study the asymptotic efficiency of indistinguishability obfuscation (iO) on two fronts:

    - Obfuscation size: Present constructions of indistinguishability obfuscation (iO) create obfuscated programs where the size of the obfuscated program is at least a multiplicative factor of the security parameter larger than the size of the original program.

    In this work, we construct the first iO scheme for (bounded-input) Turing machines that achieves only a constant multiplicative overhead in size. The constant in our scheme is, in fact, 2.

    - Amortization: Suppose we want to obfuscate an arbitrary polynomial number of (bounded-input) Turing machines $M_1,...,M_n$. We ask whether it is possible to obfuscate $M_1,...,M_n$ using a single application of an iO scheme for a circuit family where the size of any circuit is independent of $n$ as well as of the size of any Turing machine $M_i$.

    In this work, we resolve this question in the affirmative, obtaining a new bootstrapping theorem for obfuscating arbitrarily many Turing machines.

    Our results rely on the existence of sub-exponentially secure iO for circuits and re-randomizable encryption schemes.

    In order to obtain these results, we develop a new template for obfuscating Turing machines that is of independent interest and has recently found application in subsequent work on patchable obfuscation [Ananth et al., EUROCRYPT'17].


  • Signature Schemes with Randomized Verification
    Cody Freitag, Rishab Goyal, Susan Hohenberger, Venkata Koppula, Eysa Lee, Tatsuaki Okamoto, Jordan Tran, Brent Waters
    ACNS 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    A signature scheme consists of setup, signing, and verification algorithms. In most existing works, the verification algorithm is assumed to be deterministic. However, there could be signature schemes where the verification algorithm is randomized. In this work, we study signature schemes with randomized verification. Our results can be summarized as follows.

    First, we present a security definition for signature schemes with randomized verification. The standard EUFCMA notion of security for signature schemes with deterministic verification is very restrictive when we consider randomized verification. Therefore, we propose a new security definition called $\chi$-EUFCMA which captures a broad class of signature schemes with randomized verification.

    Next, we analyse the security of Naor’s transformation from Identity Based Encryption to signature schemes. Such a transformation results in a scheme with randomized verification. We show that this transformation can be proven $\chi$-EUFCMA secure by choosing $\chi$ appropriately.
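    For readers unfamiliar with it, Naor's transform is one line per algorithm: a signature on $m$ is an IBE secret key for identity $m$, and verification encrypts a random plaintext to identity $m$ and checks that the signature decrypts it, which is exactly where the randomness enters. A generic Python sketch (the `ibe` object and its method names are placeholders, not a concrete scheme):

      import os

      def sign(ibe, msk, m):
          return ibe.extract(msk, identity=m)       # sk_m doubles as the signature

      def verify(ibe, mpk, m, sig):
          r = os.urandom(16)                        # fresh random plaintext
          c = ibe.encrypt(mpk, identity=m, msg=r)   # encrypt to identity m
          return ibe.decrypt(sig, c) == r           # a valid key for m recovers r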

    Finally, we show how a scheme with randomized verification can be generically transformed to one with deterministic verification.


  • Decentralized Anonymous Micropayments
    Alessandro Chiesa, Matthew Green, Jingcheng Liu, Peihan Miao, Ian Miers, Pratyush Mishra
    EUROCRYPT 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to “micro” sums of money.

    Wheeler (1996) and Rivest (1997) proposed probabilistic payments as a technique to achieve micropayments: a merchant receives a macro-value payment with a given probability so that, in expectation, he receives a micro-value payment. Despite much research and trial deployment, micropayment schemes have not seen adoption, partly because a trusted party is required to process payments and resolve disputes.
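    A quick simulation illustrates the expectation argument (Python; the numbers are illustrative):

      import random

      MACRO, P = 1.00, 0.01   # a $1 macro-payment released with probability 1/100
      payments = [MACRO if random.random() < P else 0.0 for _ in range(1_000_000)]
      print(sum(payments) / len(payments))  # ~0.01: a one-cent payment in expectation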

    The widespread adoption of decentralized currencies such as Bitcoin (2009) suggests that decentralized micropayment schemes are easier to deploy. Pass and Shelat (2015) proposed several micropayment schemes for Bitcoin, but their schemes provide no more privacy guarantees than Bitcoin itself, whose transactions are recorded in plaintext in a public ledger.

    We formulate and construct *decentralized anonymous micropayment* (DAM) schemes, which enable parties with access to a ledger to conduct offline probabilistic payments with one another, directly and privately. Our techniques extend those of Zerocash (2014) with a new probabilistic payment scheme; we further provide an efficient instantiation based on a new fractional message transfer protocol.

    Double spending in our setting cannot be prevented. Our second contribution is an economic analysis that bounds the additional utility gain of any cheating strategy, and applies to virtually any probabilistic payment scheme with offline validation. In our construction, this bound allows us to deter double spending by way of advance deposits that are revoked when cheating is detected.


  • Cryptography with Updates
    Prabhanjan Ananth, Aloni Cohen, Abhishek Jain
    EUROCRYPT 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    Starting with the work of Bellare, Goldreich and Goldwasser [CRYPTO’94], a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature.

    In this work, we continue this line of research, and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation.

    To accomplish this goal, we introduce a new notion of updatable randomized encodings that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts.

    We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from one-way functions to compact functional encryption.


  • Patchable Indistinguishability Obfuscation: iO for Evolving Software
    Prabhanjan Ananth, Abhishek Jain, Amit Sahai
    EUROCRYPT 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    In this work, we introduce patchable indistinguishability obfuscation: our notion adapts the notion of indistinguishability obfuscation (${i\mathcal {O}}$) to a very general setting where obfuscated software evolves over time. We model this broadly by considering software patches $P$ as arbitrary Turing Machines that take as input the description of a Turing Machine $M$, and output a new Turing Machine description $M' = P(M)$. Thus, a short patch $P$ can cause changes everywhere in the description of $M$ and can even cause the description length of the machine to increase by an arbitrary polynomial amount. We further consider multi-program patchable indistinguishability obfuscation where a patch is applied not just to a single machine $M$, but to an unbounded set of machines $M_1, \ldots, M_n$ to yield $P(M_1), \ldots, P(M_n)$.

    We consider both single-program and multi-program patchable indistinguishability obfuscation in a setting where there are an unbounded number of patches that can be adaptively chosen by an adversary. We show that sub-exponentially secure ${i\mathcal {O}}$ for circuits and sub-exponentially secure re-randomizable encryption schemes (which can be instantiated under standard assumptions such as DDH or LWE) imply single-program patchable indistinguishability obfuscation; and we show that sub-exponentially secure ${i\mathcal {O}}$ for circuits and sub-exponentially secure DDH imply multi-program patchable indistinguishability obfuscation.

    At the heart of our results is a new notion of splittable ${i\mathcal {O}}$ that allows us to transform any ${i\mathcal {O}}$ scheme into a patchable one. Finally, we exhibit some simple applications of patchable indistinguishability obfuscation, to demonstrate how these concepts can be applied.


  • Outsourcing Medical Dataset Analysis: A Possible Solution
    Gabriel Kaptchuk, Matthew Green, Aviel D. Rubin
    Financial Cryptography and Data Security 2017
    [preprint] [proceedings] [show abstract] [bibTeX]

    We explore the possible ways modern cryptographic methods can be applied to the field of medical data analysis. Current systems require large computational facilities owned by the data owners or excessive trust given to the researchers. We implement one possible solution in which researchers operate directly on homomorphically encrypted data and the data owner decrypts the results. We test our implementation on large datasets and show that it is sufficiently practical that it could be a helpful tool for modern researchers. We also perform a heuristic analysis of the security of our system.
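    The division of labor in such a system can be illustrated with an additively homomorphic toy (Python, assuming the python-paillier package; the paper's actual scheme and statistics differ):

      from phe import paillier  # python-paillier

      pub, priv = paillier.generate_paillier_keypair()

      # Data owner encrypts patient values; the researcher never sees plaintexts.
      cohort = [pub.encrypt(x) for x in [140, 152, 138, 170]]

      # Researcher computes on ciphertexts only (here, a sum).
      enc_sum = cohort[0]
      for c in cohort[1:]:
          enc_sum = enc_sum + c

      # Data owner decrypts just the aggregate result.
      mean = priv.decrypt(enc_sum) / len(cohort)  # 150.0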


  • Significantly Improved Multi-bit Differentials for Reduced Round Salsa and ChaCha
    Arka Rai Choudhuri, Subhamoy Maitra
    FSE 2017 / ToSC 2016
    [preprint] [proceedings] [show abstract] [bibTeX]

    ChaCha and Salsa are two software-oriented stream ciphers that have attracted serious attention in academic as well as commercial domains. The most important cryptanalysis of reduced versions of these ciphers was presented by Aumasson et al. in FSE 2008. One part of their attack was to apply input difference(s) to investigate biases after a few rounds. So far there have been certain kinds of limited exhaustive searches to obtain such biases. For the first time, in this paper, we show how to theoretically choose the combinations of the output bits to obtain significantly improved biases. The main idea here is to consider the multi-bit differentials as extensions of suitable single-bit differentials with linear approximations, which is essentially a differential-linear attack. As we consider combinations of many output bits (for example 19 for Salsa and 21 for ChaCha), exhaustive search is not possible here. By this method we obtain very high biases for linear combinations of bits in Salsa after 6 rounds and in ChaCha after 5 rounds. These are clearly two rounds of improvement for both ciphers over the existing works. Using these biases we obtain several significantly improved cryptanalytic results for reduced round Salsa and ChaCha that could not be obtained earlier. In fact, with our results it is now possible to cryptanalyse 6-round Salsa and 5-round ChaCha in practical time.
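    The single-bit differentials that seed such multi-bit combinations can be estimated empirically. A small Python harness implementing the Salsa20 round function and measuring one output-difference bias after 4 rounds (the input difference at word 7, bit 31 follows Aumasson et al.; the sampled output position and the fully random state are our simplifications):

      import random

      M = 0xffffffff
      rotl = lambda x, n: ((x << n) | (x >> (32 - n))) & M

      def quarterround(y0, y1, y2, y3):
          y1 ^= rotl((y0 + y3) & M, 7)
          y2 ^= rotl((y1 + y0) & M, 9)
          y3 ^= rotl((y2 + y1) & M, 13)
          y0 ^= rotl((y3 + y2) & M, 18)
          return y0, y1, y2, y3

      # Word groupings for column rounds and row rounds, per the Salsa20 spec.
      COLS = [(0, 4, 8, 12), (5, 9, 13, 1), (10, 14, 2, 6), (15, 3, 7, 11)]
      ROWS = [(0, 1, 2, 3), (5, 6, 7, 4), (10, 11, 8, 9), (15, 12, 13, 14)]

      def salsa_rounds(x, n):
          x = list(x)
          for r in range(n):
              for a, b, c, d in (COLS if r % 2 == 0 else ROWS):
                  x[a], x[b], x[c], x[d] = quarterround(x[a], x[b], x[c], x[d])
          return x

      def bias(samples=20000):
          hits = 0
          for _ in range(samples):
              x = [random.getrandbits(32) for _ in range(16)]
              y = x[:]
              y[7] ^= 1 << 31                        # input difference: word 7, bit 31
              dz = salsa_rounds(x, 4)[1] ^ salsa_rounds(y, 4)[1]
              hits += (dz >> 14) & 1                 # output bit: word 1, bit 14
          return hits / samples - 0.5                # deviation from unbiased

      print(bias())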