Publications

Non-Interactive Multiparty Computation without Correlated Randomness
ASIACRYPT 2017
We study the problem of non-interactive multiparty computation (NIMPC), where a group of completely asynchronous parties can evaluate a function over their joint inputs by sending a single message to an evaluator who computes the output. Previously, the only general solutions to this problem that resisted collusions between the evaluator and a set of parties were based on multi-input functional encryption and required the use of a complex correlated randomness setup.
In this work, we present a new solution for NIMPC against arbitrary collusions using a public-key infrastructure (PKI) setup supplemented with a common random string. A PKI is, in fact, the minimal setup that one can hope for in this model in order to achieve a meaningful "best possible" notion of security, namely, that an adversary that corrupts the evaluator and an arbitrary set of parties learns only the residual function obtained by restricting the function to the inputs of the uncorrupted parties. Our solution is based on indistinguishability obfuscation and DDH, both with subexponential security. We extend this main result to the case of general interaction patterns, providing the above best-possible security that is achievable for the given interaction.
Our main result gives rise to a novel notion of (public-key) multiparty obfuscation, where $n$ parties can independently obfuscate program modules $M_i$ such that the obfuscated modules, when put together, exhibit the functionality of the program obtained by "combining" the underlying modules $M_i$. This notion may be of independent interest.

A Generic Approach to Constructing and Proving Verifiable Random Functions
TCC 2017
Verifiable Random Functions (VRFs), as introduced by Micali, Rabin, and Vadhan, are a special form of pseudorandom functions (PRFs) wherein a secret-key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate in a specific number-theoretic setting (mostly in bilinear groups) and then grappling with the challenges of proving security in the VRF environment. These constructions achieved different results and trade-offs in practical efficiency, tightness of reductions, and cryptographic assumptions.
In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives; we can then explore solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from three ingredients: (1) a 1-bounded constrained pseudorandom function for a functionality that is "admissible hash friendly", (2) a non-interactive statistically binding commitment scheme (without trusted setup), and (3) non-interactive witness-indistinguishable proofs (NIWIs). The first primitive can be replaced with a more basic puncturable PRF if one is willing to settle for selective security or to assume subexponential hardness of assumptions.
In the second half of our work we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible-hash-friendly functionalities: the first from the $n$-powerDDH assumption, the second from the $\phi$-hiding assumption.

On Secure Two-Party Computation in Three Rounds
TCC 2017

Round Optimal Concurrent MPC via Strong Simulation
TCC 2017
In this paper, we study the round complexity of concurrently secure multiparty computation (MPC) with superpolynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks showing that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model.
We obtain the following results:
– Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming subexponentially secure DDH and LWE.
– Two-round concurrent MPC with SPS security against Byzantine adversaries for inputless randomized functionalities, assuming subexponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications.
Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multiparty setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017).
To obtain our results, we compile protocols that already achieve security against “semi-malicious” adversaries into protocols secure against fully malicious adversaries, additionally assuming subexponential DDH. Our protocols develop new techniques to use two-round zero-knowledge with superpolynomial strong simulation, defined by Pass (Eurocrypt 2003) and very recently realized by Khurana and Sahai (FOCS 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.

Verified Correctness and Security of mbedTLS HMAC-DRBG
CCS 2017
We have formalized the functional specification of HMAC-DRBG (NIST SP 800-90A), and we have proved its cryptographic security (that its output is pseudorandom) using a hybrid game-based proof. We have also proved that the mbedTLS implementation (a C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine-language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid-game proof holds for any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
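The core of the HMAC-DRBG functional specification is short enough to sketch directly. Below is a minimal Python rendering of the instantiate/update/generate logic of NIST SP 800-90A (SHA-256 variant), intended only as an illustration of the kind of specification the proofs above target; it omits reseed counters, additional input, and security-strength checks.

```python
import hmac
import hashlib

class HmacDrbg:
    """Toy HMAC-DRBG over SHA-256, following NIST SP 800-90A.
    Illustration only: no reseed counter, additional input, or limits."""

    def __init__(self, entropy: bytes, nonce: bytes = b"", personalization: bytes = b""):
        self.K = b"\x00" * 32           # initial Key per the spec
        self.V = b"\x01" * 32           # initial V per the spec
        self._update(entropy + nonce + personalization)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, data: bytes = b"") -> None:
        # HMAC_DRBG_Update: refresh (K, V); the second pass runs only
        # when provided data is non-empty.
        self.K = self._hmac(self.K, self.V + b"\x00" + data)
        self.V = self._hmac(self.K, self.V)
        if data:
            self.K = self._hmac(self.K, self.V + b"\x01" + data)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int) -> bytes:
        # HMAC_DRBG_Generate: iterate V = HMAC(K, V) and concatenate.
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update()                  # advance state after each request
        return out[:n]
```

Two instances seeded with the same entropy input produce the same output stream, which is exactly the determinism that lets the C implementation be checked against the functional specification.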

Bolt: Anonymous Payment Channels for Decentralized Currencies
CCS 2017
Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for "micropayment channels" (two-party associations that use the ledger only for dispute resolution), we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous, and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.

Fairness in an Unfair World: Fair Multiparty Computation from Public Bulletin Boards
CCS 2017
Secure multiparty computation allows mutually distrusting parties to compute a function on their private inputs such that nothing but the function output is revealed. Achieving fairness (that all parties learn the output or no one does) is a long-studied problem with known impossibility results in the standard model if a majority of parties are dishonest.
We present a new model for achieving fairness in MPC against a dishonest majority by using public bulletin boards implemented via existing infrastructure such as blockchains or Google's Certificate Transparency logs. We present both theoretical and practical constructions using either witness encryption or trusted hardware (such as Intel SGX).
Unlike previous works that either penalize an aborting party or achieve weaker notions such as $\Delta$-fairness, we achieve complete fairness using existing infrastructure.

A New Approach to Round-Optimal Secure Multiparty Computation
CRYPTO 2017
We present a new approach to constructing round-optimal secure multiparty computation (MPC) protocols against malicious adversaries without trusted setup assumptions. Our approach builds on ideas previously developed in the context of covert multiparty computation [Chandran et al., FOCS'07], even though we do not seek covert security. Using our new approach, we obtain the following results:
1. A five-round MPC protocol based on the Decisional Diffie-Hellman (DDH) assumption.
2. A four-round MPC protocol based on one-way permutations and subexponentially secure DDH. This result is *optimal* in the number of rounds.
Previously, no four-round MPC protocol for general functions was known, and five-round protocols were only known based on indistinguishability obfuscation (and some additional assumptions) [Garg et al., EUROCRYPT'16].

Distinguisher-Dependent Simulation in Two Rounds and its Applications
CRYPTO 2017
We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques:
– Two-round witness-indistinguishable (WI) arguments for NP from different assumptions than previously known.
– Two-round arguments and three-round arguments of knowledge for NP that achieve strong WI, witness hiding (WH), and distributional weak zero-knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.
– Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries.
– Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions.
Our three-round protocols can be based on DDH, QR, or N-th residuosity, and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH.
Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk'96] by using the distinguisher's output in a meaningful way. We believe that this technique is likely to find more applications in the future.

Indistinguishability Obfuscation for Turing Machines: Constant Overhead and Amortization
CRYPTO 2017
We study the asymptotic efficiency of indistinguishability obfuscation (iO) on two fronts:
– Obfuscation size: Present constructions of iO produce obfuscated programs whose size is larger than that of the original program by at least a multiplicative factor of the security parameter.
In this work, we construct the first iO scheme for (bounded-input) Turing machines that achieves only a constant multiplicative overhead in size. The constant in our scheme is, in fact, 2.
– Amortization: Suppose we want to obfuscate an arbitrary polynomial number of (bounded-input) Turing machines $M_1,...,M_n$. We ask whether it is possible to obfuscate $M_1,...,M_n$ using a single application of an iO scheme for a circuit family where the size of any circuit is independent of $n$ as well as of the size of any Turing machine $M_i$.
In this work, we resolve this question in the affirmative, obtaining a new bootstrapping theorem for obfuscating arbitrarily many Turing machines.
Our results rely on the existence of subexponentially secure iO for circuits and re-randomizable encryption schemes.
In order to obtain these results, we develop a new template for obfuscating Turing machines that is of independent interest and has recently found application in subsequent work on patchable obfuscation [Ananth et al., EUROCRYPT'17].

Signature Schemes with Randomized Verification
ACNS 2017
A signature scheme consists of setup, signing, and verification algorithms. In most existing works, the verification algorithm is assumed to be deterministic. However, there could be signature schemes where the verification algorithm is randomized. In this work, we study signature schemes with randomized verification. Our results can be summarized as follows.
First, we present a security definition for signature schemes with randomized verification. The standard EUF-CMA notion of security for signature schemes with deterministic verification is very restrictive when we consider randomized verification. Therefore, we propose a new security definition called $\chi$-EUF-CMA, which captures a broad class of signature schemes with randomized verification.
Next, we analyse the security of Naor's transformation from Identity-Based Encryption to signature schemes. Such a transformation results in a scheme with randomized verification. We show that this transformation can be proven $\chi$-EUF-CMA secure by choosing $\chi$ appropriately.
Finally, we show how a scheme with randomized verification can be generically transformed to one with deterministic verification.
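To see why Naor's transformation yields randomized verification: a signature on a message $m$ is the IBE secret key for identity $m$, and the verifier encrypts a fresh random plaintext to identity $m$ and checks that the signature decrypts it correctly. The Python sketch below illustrates only this data flow; `MockIbe`, `sign`, and `verify` are hypothetical names, and the mock "IBE" is a keyed-hash stand-in (a real IBE lets anyone encrypt to an identity from public parameters alone, which this mock does not), so it is insecure and for illustration only.

```python
import hashlib
import hmac
import os

class MockIbe:
    """Insecure stand-in for an IBE scheme, used only to show the
    structure of Naor's IBE-to-signature transformation."""

    def __init__(self, msk: bytes):
        self.msk = msk

    def extract(self, identity: bytes) -> bytes:
        # Derive the per-identity secret key from the master secret.
        return hmac.new(self.msk, identity, hashlib.sha256).digest()

    def enc(self, identity: bytes, msg: bytes, nonce: bytes) -> bytes:
        # One-time pad under a key stream bound to (identity, nonce).
        pad = hmac.new(self.extract(identity), nonce, hashlib.sha256).digest()
        return nonce + bytes(x ^ y for x, y in zip(msg, pad))

    @staticmethod
    def dec(id_key: bytes, ct: bytes) -> bytes:
        nonce, body = ct[:32], ct[32:]
        pad = hmac.new(id_key, nonce, hashlib.sha256).digest()
        return bytes(x ^ y for x, y in zip(body, pad))

def sign(ibe: MockIbe, m: bytes) -> bytes:
    # Naor: the signature on m is the IBE key for identity m.
    return ibe.extract(m)

def verify(ibe: MockIbe, m: bytes, sig: bytes) -> bool:
    # Verification is randomized: encrypt a fresh random plaintext to
    # identity m and check that the signature decrypts it.
    r = os.urandom(32)
    ct = ibe.enc(m, r, os.urandom(32))
    return MockIbe.dec(sig, ct) == r
```

Because `verify` draws fresh randomness on every call, two verifications of the same (message, signature) pair run on different ciphertexts, which is exactly the behaviour the $\chi$-EUF-CMA definition is designed to capture.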

Decentralized Anonymous Micropayments
EUROCRYPT 2017
Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to “micro” sums of money.
Wheeler (1996) and Rivest (1997) proposed probabilistic payments as a technique to achieve micropayments: a merchant receives a macro-value payment with a given probability so that, in expectation, he receives a micro-value payment. Despite much research and trial deployment, micropayment schemes have not seen adoption, partly because a trusted party is required to process payments and resolve disputes.
The widespread adoption of decentralized currencies such as Bitcoin (2009) suggests that decentralized micropayment schemes are easier to deploy. Pass and Shelat (2015) proposed several micropayment schemes for Bitcoin, but their schemes provide no more privacy guarantees than Bitcoin itself, whose transactions are recorded in plaintext in a public ledger.
We formulate and construct *decentralized anonymous micropayment* (DAM) schemes, which enable parties with access to a ledger to conduct offline probabilistic payments with one another, directly and privately. Our techniques extend those of Zerocash (2014) with a new probabilistic payment scheme; we further provide an efficient instantiation based on a new fractional message transfer protocol.
Double spending in our setting cannot be prevented. Our second contribution is an economic analysis that bounds the additional utility gain of any cheating strategy, and applies to virtually any probabilistic payment scheme with offline validation. In our construction, this bound allows us to deter double spending by way of advance deposits that are revoked when cheating is detected.
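The Wheeler-Rivest idea above reduces to simple expected-value arithmetic, sketched below in Python. This is only a numeric illustration of probabilistic payments, not the paper's DAM construction; the function name and parameters are hypothetical.

```python
import random

def probabilistic_payment(rng: random.Random, macro_value: float, p: float) -> float:
    """Pay `macro_value` with probability p, else pay nothing.
    In expectation the merchant receives p * macro_value."""
    return macro_value if rng.random() < p else 0.0

# Monte-Carlo check: a $1.00 payment made with probability 1/100
# behaves like a 1-cent micropayment in expectation.
rng = random.Random(2017)
trials = 200_000
received = sum(probabilistic_payment(rng, 1.00, 0.01) for _ in range(trials))
average = received / trials   # close to 0.01
```

The variance of each trial is what makes double-spend deterrence an economic question: a cheater's gain is also probabilistic, so a large enough revocable deposit can outweigh the expected gain of any cheating strategy.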

Cryptography with Updates
EUROCRYPT 2017
Starting with the work of Bellare, Goldreich, and Goldwasser [CRYPTO’94], a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature.
In this work, we continue this line of research and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation.
To accomplish this goal, we introduce a new notion of updatable randomized encodings that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts.
We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from oneway functions to compact functional encryption.

Patchable Indistinguishability Obfuscation: iO for Evolving Software
EUROCRYPT 2017
In this work, we introduce patchable indistinguishability obfuscation: our notion adapts the notion of indistinguishability obfuscation (${i\mathcal {O}}$) to a very general setting where obfuscated software evolves over time. We model this broadly by considering software patches $P$ as arbitrary Turing Machines that take as input the description of a Turing Machine $M$ and output a new Turing Machine description $M' = P(M)$. Thus, a short patch $P$ can cause changes everywhere in the description of $M$ and can even cause the description length of the machine to increase by an arbitrary polynomial amount. We further consider multi-program patchable indistinguishability obfuscation, where a patch is applied not just to a single machine $M$, but to an unbounded set of machines $M_1,…,M_n$ to yield $P(M_1),…,P(M_n)$.
We consider both single-program and multi-program patchable indistinguishability obfuscation in a setting where there are an unbounded number of patches that can be adaptively chosen by an adversary. We show that subexponentially secure ${i\mathcal {O}}$ for circuits and subexponentially secure re-randomizable encryption schemes (which can be instantiated under standard assumptions such as DDH or LWE) imply single-program patchable indistinguishability obfuscation; and we show that subexponentially secure ${i\mathcal {O}}$ for circuits and subexponentially secure DDH imply multi-program patchable indistinguishability obfuscation.
At the heart of our results is a new notion of splittable ${i\mathcal {O}}$ that allows us to transform any ${i\mathcal {O}}$ scheme into a patchable one. Finally, we exhibit some simple applications of patchable indistinguishability obfuscation to demonstrate how these concepts can be applied.

Outsourcing Medical Dataset Analysis: A Possible Solution
Financial Cryptography and Data Security 2017
We explore possible ways modern cryptographic methods can be applied to the field of medical data analysis. Current systems require large computational facilities owned by the data owners or excessive trust given to the researchers. We implement one possible solution in which researchers operate directly on homomorphically encrypted data and the data owner decrypts the results. We test our implementation on large datasets and show that it is sufficiently practical that it could be a helpful tool for modern researchers. We also perform a heuristic analysis of the security of our system.
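The workflow described above (researchers compute on ciphertexts, the data owner decrypts aggregate results) can be illustrated with any additively homomorphic scheme. The toy Paillier implementation below is a hypothetical stand-in, not the system from the paper, and uses tiny demo primes that offer no security whatsoever.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic, so a researcher can
# add encrypted patient values and only the data owner can decrypt the sum.
# Tiny demo primes -- NOT secure, illustration only.
P, Q = 1009, 1013
N = P * Q
N2 = N * N
PHI = (P - 1) * (Q - 1)
G = N + 1                         # standard choice of generator

def enc(m: int, rng: random.Random) -> int:
    """Encrypt 0 <= m < N under the public key (N, G)."""
    while True:
        r = rng.randrange(1, N)
        if gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def dec(c: int) -> int:
    """Decrypt with the secret key PHI."""
    u = pow(c, PHI, N2)
    l = (u - 1) // N              # the L function: L(x) = (x - 1) / N
    return (l * pow(PHI, -1, N)) % N

# Homomorphic operations on ciphertexts:
#   enc(a) * enc(b) mod N^2  decrypts to  a + b (mod N)
#   enc(a) ** k     mod N^2  decrypts to  k * a (mod N)
```

A researcher holding only ciphertexts can thus compute encrypted sums (and, from them, means over known cohort sizes), with the data owner decrypting only the final aggregate rather than individual records.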

Significantly Improved Multi-bit Differentials for Reduced Round Salsa and ChaCha
FSE 2017 / ToSC 2016
ChaCha and Salsa are two software-oriented stream ciphers that have attracted serious attention in academic as well as commercial domains. The most important cryptanalysis of reduced versions of these ciphers was presented by Aumasson et al. at FSE 2008. One part of their attack was to apply input difference(s) to investigate biases after a few rounds. So far there have been only certain kinds of limited exhaustive searches to obtain such biases. For the first time, in this paper, we show how to theoretically choose the combinations of the output bits to obtain significantly improved biases. The main idea here is to consider the multi-bit differentials as extensions of suitable single-bit differentials with linear approximations, which is essentially a differential-linear attack. As we consider combinations of many output bits (for example, 19 for Salsa and 21 for ChaCha), exhaustive search is not possible here. By this method we obtain very high biases for linear combinations of bits in Salsa after 6 rounds and in ChaCha after 5 rounds. These are clearly two rounds of improvement for both ciphers over the existing works. Using these biases we obtain several significantly improved cryptanalytic results for reduced-round Salsa and ChaCha that could not be obtained earlier. In fact, with our results it is now possible to cryptanalyse 6-round Salsa and 5-round ChaCha in practical time.
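The single-bit differential biases that anchor such attacks can be estimated empirically. The sketch below implements the ChaCha quarter-round (as specified in RFC 8439) and a Monte-Carlo bias estimator over a single quarter-round; the function names, bit positions, and single-quarter-round scope are illustrative choices, not the paper's experiment, which measures biases across several full rounds of the ciphers.

```python
import random

MASK = 0xFFFFFFFF  # 32-bit word mask

def rotl(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a: int, b: int, c: int, d: int):
    """The ChaCha quarter-round on four 32-bit words (RFC 8439 notation)."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

def single_bit_bias(in_word, in_bit, out_word, out_bit, trials=20000, seed=1):
    """Estimate Pr[chosen output bit flips] - 1/2 when a single input bit
    is flipped, over one quarter-round on random states (toy experiment)."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        state = [rng.getrandbits(32) for _ in range(4)]
        state2 = list(state)
        state2[in_word] ^= 1 << in_bit   # inject the single-bit difference
        o1 = quarter_round(*state)
        o2 = quarter_round(*state2)
        flips += ((o1[out_word] ^ o2[out_word]) >> out_bit) & 1
    return flips / trials - 0.5
```

Extending the counter to XORs of several output bits turns this estimator into the multi-bit (differential-linear) measurement the paper optimizes, though at full round counts the required sample sizes grow far beyond a toy script.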