Hardware Security without Secure Hardware: How to Decrypt with a Password and a Server 


Provable Security

“Provable” security refers to the paradigm which consists in reducing, as in computability theory, the ability to jeopardize the security of a cryptosystem to solving an a priori hard computational problem. The underlying rationale is that if a certain computational problem is hard and one can “efficiently” reduce the security of the scheme to solving that problem, then the system must be secure, as otherwise an attacker could be converted into a solver. Giving a precise meaning to this intuition requires formally defining the security of a cryptosystem, which is often done via experiments as defined below, parametrizing the hardness of the problem, and characterizing what it means for an algorithm to be efficient.
Security Parameter. Given enough time, any computational problem which has a computable solution can be solved by a computer. When relying on computational assumptions to build cryptosystems, the goal is simply to make sure that an attacker cannot break the system in reasonable time. Nowadays, if 2^128 elementary operations are necessary for currently available computers to break a cryptosystem with a probability that is not too small (e.g., 2^-7), then the system is considered secure.
To formalize this idea, a security parameter λ ∈ N is introduced. All algorithms considered in this thesis take a unary representation 1^λ of this parameter (although it may not always be made explicit). The unary representation is simply there to make sure that all efficient algorithms run in time polynomial in λ.
Asymptotic Statements. Now that security parameters have been introduced, an algorithm is considered efficient if its runtime is polynomial in the security parameter. A function (of the security parameter) is said to be negligible if it is asymptotically dominated by the inverse of every positive polynomial function (of the security parameter). The probability of an event is said to be overwhelming if the probability of the complement event is negligible.
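For concreteness, these asymptotic notions can be written out as follows (a standard formalization, with f denoting the function of the security parameter under consideration):

```latex
% f : N -> R_{>=0} is negligible if it is eventually smaller than the
% inverse of every polynomial:
\[
  \forall c \in \mathbb{N},\ \exists \lambda_0 \in \mathbb{N},\ \forall \lambda \geq \lambda_0:\quad
  f(\lambda) < \lambda^{-c}.
\]
% An event E_\lambda occurs with overwhelming probability if
% \Pr[\lnot E_\lambda] is negligible in \lambda.
```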
Adversaries, Experiments and Oracles. The security of cryptosystems is often formally defined via so-called “security experiments” or “games”. A security experiment is essentially an interaction between two algorithms: a challenger and an adversary. The experiment specifies a series of actions between the two algorithms that depends on the considered cryptosystem, and how much leeway the adversary has during the interaction.
During security experiments, adversaries are often also given access to oracles in addition to their inputs and random bits. An oracle is an idealized black box which returns a value on an input specified by the adversary; it models the fact that an algorithm might in practice have access to the answers to these queries without any restriction on the way they are computed or obtained. Given an oracle O, the notation O(A, B, ·) means that (A, B) is its inner (secret) state, while · denotes the (adversarial) query.
Generally, at the end of the experiment, the challenger returns a decision bit indicating whether the adversary “won the game”, i.e., broke the system. The goal of the experiment is to capture a clear range of attacks, and usually, the wider this range is, the more realistic the attack model is and the more difficult it is to build efficient schemes.
For a given experiment, the success probability or advantage of an adversary refers to the probability that it wins the game against the challenger. A cryptosystem is generally deemed secure in the sense specified by the experiment if no PPT algorithm has a non-negligible advantage in winning the game. Note that such statements are asymptotic, and although they give an intuition as to why a system should be secure in practice, they fail to quantify the actual advantage that an attacker can have in practice. That is why, for certain problems, it might be more meaningful to say instead that a scheme is (T, q, ε)-secure if no adversary that runs in time at most T and makes at most q oracle queries has a success probability greater than ε.
Alternatively, the security of a scheme can instead be defined by requiring an attacker to distinguish two experiments. The adversary interacts with the challenger of either of them and returns a decision bit at the end of the interaction. The advantage of the adversary is then defined as the difference (in absolute value) between the probabilities that it returns a given decision bit in each of the two experiments.
Hybrid Arguments. Hybrid arguments are a useful tool to prove that cryptographic schemes are secure in the sense defined above. They generally consist in defining a sequence of experiments, starting from the original experiment which defines the security of the scheme, such that consecutive experiments are indistinguishable. The indistinguishability between two consecutive experiments can be either information-theoretic or computational, i.e., in the second case, distinguishing two consecutive experiments can be reduced to solving a computational problem. The last experiment is usually one in which the advantage of the adversary is nil.
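The quantitative backbone of such arguments is the triangle inequality over the game sequence (written here in generic notation, with Game_0 the original security experiment and Game_n the final one):

```latex
\[
  \bigl|\Pr[\mathrm{Game}_0 = 1] - \Pr[\mathrm{Game}_n = 1]\bigr|
  \;\leq\; \sum_{i=0}^{n-1}\bigl|\Pr[\mathrm{Game}_i = 1] - \Pr[\mathrm{Game}_{i+1} = 1]\bigr|,
\]
% so if every consecutive gap is negligible (statistically or under a
% computational assumption) and the advantage in Game_n is nil, the
% advantage in the original experiment is negligible as well.
```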
This technique is especially helpful when several schemes are combined to build a larger one, as it makes it possible to gradually prove the security of the overall scheme by focusing on one building block at a time.


Random-Oracle Model. The random-oracle model was introduced in cryptography by Bellare and Rogaway [BR93]. It is an idealized model for hash functions. In short, a random oracle returns consistent answers on previously queried inputs, and a uniformly random value in its range on each new input. The model provides considerable leverage in security reductions as it allows the reduction algorithm to observe and program the random-oracle queries of the adversary. However, a random oracle cannot be realized in practice with a concrete hash function. In fact, there exist contrived schemes [CGH98, CGH04, MRH04] which can be proven secure in the random-oracle model but are insecure when instantiated with any concrete hash function; on the other hand, no such result for a “natural” protocol has been proved so far. On this ground, proofs in the random-oracle model should rather be considered as heuristic indications of security.
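In proofs, a random oracle is typically realized by lazy sampling: a fresh uniformly random answer is drawn the first time an input is queried and cached for consistency. The following minimal Python sketch illustrates this mechanic, including the observing and programming abilities available to a reduction; the class and method names are illustrative only.

```python
import secrets

class LazyRandomOracle:
    """Toy lazily-sampled random oracle with range {0,1}^out_len."""

    def __init__(self, out_len: int = 256):
        self.out_len = out_len
        self.table = {}   # query -> answer; a reduction may observe this table

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            # New input: answer with a fresh uniformly random string.
            self.table[x] = secrets.token_bytes(self.out_len // 8)
        return self.table[x]    # repeated queries stay consistent

    def program(self, x: bytes, y: bytes) -> None:
        # A reduction may fix the answer on a not-yet-queried input.
        assert x not in self.table, "input already queried"
        self.table[x] = y


H = LazyRandomOracle()
assert H.query(b"message") == H.query(b"message")   # consistent answers
```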

Computational Assumptions.

This section covers classical computational assumptions that are used in this thesis. Contrary to complexity theory, most assumptions in cryptography are about the average-case hardness of problems rather than their worst-case hardness. The assumptions presented hereafter are based on the discrete-logarithm problem or are closely related to the factorization problem. These types of assumptions underpin most of the now classical cryptosystems, in particular the celebrated RSA [RSA78] and Elgamal [ElG85] cryptosystems.
Despite a result by Shor [Sho94] which shows that these problems can be solved in polynomial time on quantum computers, no classical polynomial-time algorithm to solve them is known today. They thus remain at the heart of many cryptographic schemes built nowadays, especially because of the algebraic properties they provide, which allow for practical instantiations.

Group-Family Generators.

A generator of a group family is an algorithm G which takes as input a security parameter 1^λ and returns the description of a group G (groups are denoted multiplicatively) and potentially auxiliary information such as a group element. It is assumed that, given the description of G, the group law and the inversion of group elements can be efficiently computed, and that group elements can be sampled uniformly at random. When exact-security statements are made, recall that the bit complexity of an elementary operation in a group G is denoted T_G.
The group description may or may not allow one to directly infer the group order. Actually, it is precisely this distinction which separates the assumptions based on the discrete-logarithm problem from those related to the factorization problem.

Assumptions on Public-Order-Group Generators

This section presents classical assumptions on generators of groups with public prime orders (note that these groups are necessarily cyclic). The assumptions are stated in asymptotic terms, even though they could of course also be stated in exact-security terms.
Discrete-Logarithm Assumption.
The discrete-logarithm assumption is one of the most central assumptions in cryptography. It has many stronger variants on which numerous practical schemes rely.
Definition 2.4.3 (Discrete-Logarithm Assumption). Let G be a group-family generator. The discrete logarithm assumption on G states that for any PPT adversary A,
Pr[ A(G, p, g, h) = x : (G, p, g) ← G(1^λ); x ←$ Z_p^*; h ← g^x ]
is negligible.
Cryptanalysis. Shanks [Sha71] designed a generic (i.e., relying only on exponentiations and group operations) deterministic algorithm to compute discrete logarithms in time O(√p). Pollard [Pol78] later improved this algorithm to use only constant memory while still running in time O(√p), though his algorithm is probabilistic. This latter algorithm is now known as Pollard’s “rho” algorithm, and several improvements [Tes01, Mon87, BLS11] have been proposed since its introduction. Although more efficient algorithms may exist for specific groups, Shoup [Sho97] showed that any generic algorithm that solves the discrete-logarithm problem must run in time Ω(√p). Pollard’s rho algorithm is thus optimal in this sense. As a consequence, research on the discrete-logarithm problem has focused on designing more efficient algorithms for the specific groups commonly used in practical cryptography, and on elaborating groups for which no better algorithm is known.
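To make the generic O(√p) bound concrete, here is a minimal baby-step giant-step implementation in the spirit of Shanks's algorithm, for a cyclic group of known prime order written as integers modulo a prime; the function names, the toy group and the reliance on Python 3.8+ (for modular inverses via pow) are illustrative choices.

```python
from math import isqrt

def baby_step_giant_step(g: int, h: int, order: int, modulus: int) -> int:
    """Return x in [0, order) with g**x == h (mod modulus), assuming it exists.

    Uses O(sqrt(order)) group operations and memory.
    """
    m = isqrt(order) + 1
    # Baby steps: tabulate g^j for j = 0, ..., m-1.
    table, gj = {}, 1
    for j in range(m):
        table.setdefault(gj, j)
        gj = (gj * g) % modulus
    # Giant steps: search for h * (g^{-m})^i in the table.
    g_inv_m = pow(g, -m, modulus)          # requires Python >= 3.8
    gamma = h % modulus
    for i in range(m):
        if gamma in table:
            return (i * m + table[gamma]) % order
        gamma = (gamma * g_inv_m) % modulus
    raise ValueError("no discrete logarithm found")

# Toy check in the order-11 subgroup of Z_23^* generated by 2: log_2(13) = 7.
assert baby_step_giant_step(2, 13, 11, 23) == 7
```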
Instantiation with Finite-Field Multiplicative Subgroups. A common construction of groups with public prime order is as follows. Let p be a large strong prime, i.e., a prime such that p′ := (p − 1)/2 is also prime. The subgroup of squares in Z_p^* is a (cyclic) group of prime order p′ and is the one used in practice. To compute a generator of this subgroup, it suffices to generate h ←$ Z_p^* and return g ← h², which generates the subgroup whenever h ≠ ±1. The best known algorithms for the discrete-logarithm problem in such groups are based on index calculus (with seminal ideas harking back to Kraitchik’s work [Kra22]) and run in time O(exp(a · log^b(p) · (log log p)^{1−b})) for constants a > 0 and b ≤ 1/3. The most recent breakthroughs for this class of algorithms are due to Joux [Jou14], and Kleinjung and Wesolowski [KW19]. The existence of sub-exponential algorithms entails that the parameters must be large; some recommendations advocate primes of 2048 bits for 128 bits of security.
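A rough sketch of this construction (with toy bit lengths and sympy's primality test standing in for a proper prime generator; all names and parameters below are illustrative) could read:

```python
import secrets
from sympy import isprime   # any primality test would do here

def gen_safe_prime_group(bits: int = 32):
    """Return (p, p_prime, g): a strong prime p = 2*p' + 1 and a generator g
    of the order-p' subgroup of squares of Z_p^*.

    Toy sizes only: as noted above, ~2048-bit p is advocated for 128-bit
    security against index-calculus attacks.
    """
    while True:
        p_prime = secrets.randbits(bits) | (1 << (bits - 1)) | 1   # odd, top bit set
        p = 2 * p_prime + 1
        if isprime(p_prime) and isprime(p):
            break
    # Square a random element h != +-1 of Z_p^* to land in the subgroup of squares.
    h = secrets.randbelow(p - 3) + 2        # h in [2, p-2]
    g = pow(h, 2, p)
    return p, p_prime, g

p, q, g = gen_safe_prime_group()
assert g != 1 and pow(g, q, p) == 1         # g generates the order-q subgroup
```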
Instantiation with Elliptic Curves. The discrete-logarithm assumption is also believed to hold over elliptic curves, which are algebraic groups defined by equations of the form y² − x³ − ax − b = 0. Despite strenuous cryptanalytic research on elliptic curves, Pollard’s rho algorithm is still the best known algorithm to solve the discrete-logarithm problem on particular, well-chosen curves for which the group operation can still be efficiently computed. For such curves, 256-bit primes are enough to achieve 128 bits of security.
Relation Assumptions.
The computational assumptions given below are related to the discrete-logarithm assumption: the underlying problems can be solved in polynomial time given a discrete-logarithm solver, but there is no evidence that they are equivalent to the discrete-logarithm problem. Yet, to break these assumptions, no better attack than computing discrete logarithms is known.
Decisional Diffie–Hellman Assumption. The following assumption was introduced by Diffie and Hellman in their work [DH76] which initiated public-key cryptography. A plethora of cryptosystems are based on it, e.g., the Elgamal encryption scheme.
Definition 2.4.4 (Decisional Diffie-Hellman Assumption). Let G be a group family generator. The Decisional Diffie-Hellman assumption (DDH) over G states that the advantage (function of λ)
| Pr[ b = A(G, p, g, g^x, g^y, g^{α_b}) : (G, p, g) ← G(1^λ); (x, y) ←$ (Z_p^*)²; b ←$ {0, 1}; α_0 ← xy mod p; α_1 ←$ Z_p^* ] − 1/2 |
of any PPT adversary A is negligible.
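Operationally, a single run of the DDH experiment can be sketched as follows; the toy subgroup, the function names and the blind-guessing adversary (whose advantage is exactly zero) are illustrative only.

```python
import secrets

def ddh_experiment(adversary, order: int, modulus: int, g: int) -> bool:
    """One run of the DDH game in a subgroup of Z_modulus^* of prime order `order`.

    Returns True iff the adversary guesses the challenge bit; its advantage
    is |Pr[True] - 1/2| over the randomness of the experiment.
    """
    x = secrets.randbelow(order - 1) + 1        # x <-$ Z_p^*
    y = secrets.randbelow(order - 1) + 1        # y <-$ Z_p^*
    b = secrets.randbelow(2)                    # b <-$ {0, 1}
    alpha = (x * y) % order if b == 0 else secrets.randbelow(order - 1) + 1
    guess = adversary(modulus, order, g,
                      pow(g, x, modulus), pow(g, y, modulus), pow(g, alpha, modulus))
    return guess == b

def blind_guess(*_args):
    return secrets.randbelow(2)   # ignores its input: advantage 0

# Toy group: the order-11 subgroup of Z_23^* generated by 2 (far too small to be secure).
wins = sum(ddh_experiment(blind_guess, 11, 23, 2) for _ in range(1000))
print(f"empirical success rate of a blind guesser: {wins / 1000:.3f}")   # ~0.5
```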
Decisional Diffie-Hellman-Inversion Assumption. The following assumption is somewhat related to the DDH assumption, but for the problem of inversion.
Definition 2.4.5 (Decisional Diffie-Hellman-Inversion Assumption [BB04]). Let G be a group-family generator. The q-Decisional Diffie-Hellman-Inversion assumption (q-DDHI) over G states that, given (G, p, g) ← G(1^λ), a uniformly random x ←$ Z_p^* and the tuple (g, g^x, ..., g^(x^q)), no PPT adversary can distinguish g^(1/x) from a uniformly random element of G with non-negligible advantage.

Pairing-Group Structures

An (asymmetric) bilinear structure or pairing-group structure is a tuple (p, G_1, G_2, G_T, e) such that p is a prime, G_1 = ⟨g_1⟩, G_2 = ⟨g_2⟩ and G_T are p-order groups, and such that e: G_1 × G_2 → G_T is a pairing, i.e., an efficiently computable non-degenerate (e ≠ 1_{G_T}) bilinear map. Bilinearity here means that e(g_1^a, g_2^b) = e(g_1, g_2)^{ab} for all a, b ∈ Z_p. In what follows, g_T denotes e(g_1, g_2), which is a generator of G_T. Type-3 bilinear structures are bilinear structures for which there is no efficiently computable homomorphism from G_2 to G_1. They are advocated for their efficiency and security guarantees.
A bilinear-structure generator is an algorithm G which, on input a security parameter 1^λ, returns the description of a bilinear structure. For any integers n, m ≥ 1, given vectors x ∈ G_1^n and y ∈ G_2^m, e(x, y) denotes the matrix [e(x_i, y_j)]_{i,j} ∈ G_T^{n×m}.
Instantiation. Pairing-group structures are instantiated via elliptic curves. Consider an ordinary elliptic curve E over a prime finite field F_p, and let r be the largest prime such that r | #E(F_p). The embedding degree k of E is the minimal degree k for which all the r-th roots of unity are contained in F_{p^k}. Usually, G_1 and G_2 are r-order subgroups of E(F_{p^k}) and G_T is the subgroup of F_{p^k}^* of the r-th roots of unity. The most widely used pairings on ordinary elliptic curves are efficiently computable using variations of Miller’s algorithm [Mil04].
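Since the embedding degree is simply the multiplicative order of p modulo r (the smallest k with r | p^k − 1, so that F_{p^k} contains the r-th roots of unity), it can be computed directly from the definition; the small helper below is illustrative only.

```python
def embedding_degree(p: int, r: int, k_max: int = 100_000) -> int:
    """Smallest k >= 1 with r | p^k - 1, i.e., the multiplicative order of p mod r."""
    pk_mod_r = 1
    for k in range(1, k_max + 1):
        pk_mod_r = (pk_mod_r * p) % r
        if pk_mod_r == 1:
            return k
    raise ValueError("embedding degree exceeds k_max")

# Toy check: for p = 23 and r = 11, r already divides p - 1 = 22, so k = 1.
assert embedding_degree(23, 11) == 1
```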
Cryptanalysis. For the curves considered in cryptography, the best algorithm to compute discrete logarithms in E(F_{p^k}) is still Pollard’s rho algorithm, so it can be done in time O(√r). Computing discrete logarithms in F_{p^k}^* depends on k and p. Kim and Barbulescu [KB16] improved the Tower Number-Field Sieve (TNFS) to solve the discrete-logarithm problem when the embedding degree k is composite. Recently, Fotiadis and Martindale [FM19] then proposed TNFS-secure curves with composite embedding degree.


Assumptions.
The assumptions on generators of pairing-group structures are to a certain extent similar to those on group generators, after eliminating the trivial attacks due to the existence of a pairing. Boyen [Boy08] gave an extensive survey and analysis (with a generic approach) of assumptions on generators of pairing-group structures. The assumptions used in this thesis are recalled in the specific chapters in which they are used.

Assumptions on Hidden-Order-Group Generators

This section presents assumptions on generators of hidden-order groups. These assumptions are given in exact-security terms, as all statements in the only chapter (i.e., Chapter 9) in which they appear are in exact-security terms.
A hidden-order-group generator G is an algorithm which takes as input a security parameter 1^λ and returns the description of a finite Abelian group (G, •) and an integer P ≥ 2. Integer P is assumed to be smaller than the order of G, but still a super-polynomial function of the security parameter. The role of P is mainly to adjust the soundness of the protocols herein, as their challenge spaces will typically be the integer interval ⟦0; P^{Ω(1)} − 1⟧.
It is also assumed that, given the description of G, the group law and the inversion of group elements can be efficiently computed, that group elements can be sampled uniformly at random, and that an upper bound 2^{b_G} on ord(G) can be efficiently computed, with b_G := b_G(λ) polynomial in λ (it is further assumed that b_G = Ω(λ)). Recall that the bit complexity of an elementary operation in a group G is denoted T_G.
The following assumptions are classical for hidden-order-group generators and were introduced by Damgård and Fujisaki [DF02]. They are best illustrated for P such that natural integers less than P can be factored in time polynomial in λ (e.g., P = λ^{log^{Ω(1)}(λ)} given current knowledge in computational number theory), and for G the group Z_N^* for an RSA modulus N with prime factors p and q such that p ≡ q ≡ 3 mod 4, gcd(p − 1, q − 1) = 2, and the number of divisors of p − 1 and q − 1 with prime factors less than P is of magnitude O(λ). However, these assumptions are believed to also hold over generators of ideal-class groups.
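A toy version of such a hidden-order-group generator, instantiated with Z_N^* for an RSA modulus N = pq with p ≡ q ≡ 3 mod 4, might look as follows; the bit lengths are far too small for real use, the extra divisibility conditions on p − 1 and q − 1 are not enforced, and all names (and the use of sympy) are illustrative.

```python
import secrets
from math import gcd
from sympy import isprime   # any primality test would do here

def random_prime_3_mod_4(bits: int) -> int:
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 3   # top bit set, cand = 3 mod 4
        if isprime(cand):
            return cand

def hidden_order_group_gen(bits: int = 2048, P: int = 2**128):
    """Toy hidden-order-group generator: the group is Z_N^* for N = p*q.

    The order phi(N) = (p-1)(q-1) remains hidden from anyone unable to factor N.
    """
    p = random_prime_3_mod_4(bits // 2)
    q = random_prime_3_mod_4(bits // 2)
    while q == p:
        q = random_prime_3_mod_4(bits // 2)
    return p * q, P

def sample_element(N: int) -> int:
    """Sample a (close to uniform) element of Z_N^*."""
    while True:
        h = secrets.randbelow(N - 2) + 2
        if gcd(h, N) == 1:
            return h

N, P = hidden_order_group_gen(bits=256)   # toy size for a quick run
h = sample_element(N)
```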
Definition 2.4.8 (Strong-Root Assumption). A group generator G satisfies the (T, ε)-strong-root assumption if for all λ ∈ N, for every adversary A that runs in time at most T (λ),
Pr[ g^n = h ∧ n > 1 : (G, P) ← G(1^λ); h ←$ G; (g, n) ← A(G, P, h) ] ≤ ε(λ).
This assumption is simply a generalization of the strong RSA assumption [BP97, FO97] to hidden-order groups.
Definition 2.4.9 (Small-Order Assumption). A group generator G satisfies the (T, ε)-small-order assumption if for all λ ∈ N, for every adversary A that runs in time at most T (λ),
Pr[ g^n = 1_G ∧ g² ≠ 1_G ∧ 0 < n < P : (G, P) ← G(1^λ); (g, n) ← A(G, P) ] ≤ ε(λ).
The small-order assumption simply states that it should be hard to find low-order elements in the group (other than 1_G), except for square roots of unity, which may be easy to compute (e.g., −1 in RSA groups). In the group Z_N^* for N = pq with p and q prime such that gcd(p − 1, q − 1) = 2, Damgård and Fujisaki [DF02] showed that factoring N can be reduced to this problem in polynomial time if integers less than P can be factored in time polynomial in λ.
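As an illustration of why low-order elements are dangerous, any square root of unity in Z_N^* other than ±1 (i.e., an order-2 element forbidden by the assumption) yields a factor of N by a single gcd computation; the short sketch below is illustrative only.

```python
from math import gcd

def factor_from_sqrt_of_unity(N: int, x: int) -> int:
    """Given x with x^2 = 1 (mod N) and x != +-1 (mod N), return a proper factor of N.

    N divides x^2 - 1 = (x - 1)(x + 1) but neither factor alone, so
    gcd(x - 1, N) is a nontrivial divisor of N.
    """
    assert pow(x, 2, N) == 1 and x % N not in (1, N - 1)
    d = gcd(x - 1, N)
    assert 1 < d < N
    return d

# Example: N = 21 and x = 8 satisfy 8^2 = 64 = 1 (mod 21) with 8 != +-1 (mod 21).
assert factor_from_sqrt_of_unity(21, 8) in (3, 7)
```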
Definition 2.4.10 (Orders with Low Dyadic Valuation). A group generator G satisfies the low-dyadic-valuation assumption on orders if for all λ ∈ N, for every (G, P) ← G(1^λ), for every g ∈ G, ord(g) is divisible by 2 at most once.
Notice that in the group Z_N^* for N = pq with p and q prime such that p ≡ q ≡ 3 mod 4, the order of any element is divisible by 2 at most once, since 2 divides p − 1 and q − 1 exactly once.
Definition 2.4.11 (Many Rough-Order Elements or µ-Assumption). An integer is said to be P-rough if all its prime factors are greater than or equal to P. A group generator G satisfies the µ-assumption that there are many rough-order elements in the groups generated by G (or simply the µ-assumption) if for all λ ∈ N,
Pr[ ord(h) is P-rough : (G, P) ← G(1^λ); h ←$ G ] ≥ µ(λ).

Cryptographic Primitives

This section introduces primitives that are fundamental in cryptography.

Public-Key Encryption

Public-key encryption is a fundamental cryptographic primitive which allows two parties who have never met to confidentially exchange information. More precisely, public-key encryption enables a party, A, to privately send a message to another party, B (the user hereafter), given the public key of the latter, assuming it to be authentic. Using the corresponding secret key, B can decipher A’s message.


Formally, a public-key encryption scheme consists of a setup algorithm Setup(1^λ) → pp, a key-generation algorithm KG(pp) → (pk, sk) which returns a public encryption key and a secret decryption key, a probabilistic encryption algorithm Enc(pk, m; r) → C (the randomness r may at times be omitted from the syntax), and a deterministic decryption algorithm Dec(sk, C) → m/⊥.
Goldwasser and Micali [GM84] proposed the first security definitions for public-key encryption, and standard definitions nowadays include INDistinguishability under Chosen-Plaintext and Chosen-Ciphertext Attacks (IND-CPA and IND-CCA). A prominent example of an IND-CPA scheme is the aforementioned Elgamal scheme [ElG85] under the DDH assumption, and, under the same assumption, an example of an IND-CCA scheme is the Cramer–Shoup encryption scheme [CS98].
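To tie this syntax to the Elgamal example, a minimal sketch of plain Elgamal over a prime-order group, following the Setup/KG/Enc/Dec interface, is given below; the toy parameters (the order-11 subgroup of Z_23^*) and the convention of encoding messages as group elements are illustrative choices, and modular inverses via pow require Python 3.8+.

```python
import secrets

def Setup(_lambda: int):
    # Toy public parameters; real instantiations use e.g. 256-bit curve groups.
    return {"q": 11, "mod": 23, "g": 2}    # order-11 subgroup of Z_23^* generated by 2

def KG(pp):
    x = secrets.randbelow(pp["q"] - 1) + 1             # secret key x <-$ Z_q^*
    return pow(pp["g"], x, pp["mod"]), x               # (pk, sk) = (g^x, x)

def Enc(pp, pk, m, r=None):
    """Encrypt a group element m; r is the encryption randomness."""
    if r is None:
        r = secrets.randbelow(pp["q"] - 1) + 1
    c1 = pow(pp["g"], r, pp["mod"])
    c2 = (m * pow(pk, r, pp["mod"])) % pp["mod"]
    return (c1, c2)

def Dec(pp, sk, C):
    c1, c2 = C
    return (c2 * pow(c1, -sk, pp["mod"])) % pp["mod"]  # m = c2 * c1^{-sk}

pp = Setup(128)
pk, sk = KG(pp)
m = pow(pp["g"], 5, pp["mod"])                         # message encoded as a group element
assert Dec(pp, sk, Enc(pp, pk, m)) == m
```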

Table of contents:

1 General Introduction 
2 Preliminaries 
2.1 Mathematical Notation
2.2 Algorithms
2.3 Provable Security
2.4 Computational Assumptions
2.4.1 Group-Family Generators
2.4.2 Assumptions on Public-Order-Group Generators
Discrete-Logarithm Assumption
Relation Assumptions
2.4.6 Pairing-Group Structures
Assumptions
2.4.7 Assumptions on Hidden-Order-Group Generators
2.5 Cryptographic Primitives
2.5.1 Public-Key Encryption
2.5.2 Digital Signatures
2.5.3 Non-Interactive Commitments
Extractable Commitments
2.6 Zero-Knowledge Arguments
2.6.1 Interactive Systems
Schnorr Proofs
Interactive Arguments in the Random-Oracle Model
Fiat–Shamir Heuristic
2.6.2 Non-Interactive Systems
I Group Signatures and Protocols for Message Confidentiality
3 Introduction 
3.1 Group Signatures
3.2 Vehicle-to-Vehicle Communication
3.3 Hardware Security without Secure Hardware
3.4 Results
3.4.1 Group Signatures
3.4.2 Vehicle-to-Vehicle Communication
3.4.3 Encryption with Password-Protected Assisted Decryption
4 Short Threshold Dynamic Group Signatures 
4.1 Preliminaries
4.1.1 Hardness Assumptions
4.1.5 Pointcheval–Sanders Signature Scheme
4.1.6 Multi-Signatures with Key Aggregation
Syntax
Security Model
4.1.7 Generalized Forking Lemma
4.2 Threshold Dynamic Group Signatures
4.2.1 Syntax
Correctness
4.2.2 Security Model
Global Variables
Oracles
Anonymity
Traceability
On Non-Frameability
4.3 Main Construction
4.3.1 Variant of the PS Signature Scheme
4.3.2 Construction with Separate Issuers and Openers
Scheme Description
Discussion
Efficiency
Comparison with other Schemes
4.4 Threshold Group Signatures without Ledger
4.5 Pointcheval–Sanders Multi-Signatures
4.6 Distributed Group Signatures from Multi-Signatures
4.6.1 Construction Security
5 Zone Encryption with Anonymous Authentication 
5.1 Preliminaries
5.1.1 Deterministic Authenticated Encryption
Security Properties
SIV Construction
5.2 Group Signatures with Attributes
5.2.1 Definition
Syntax
Correctness & Security Properties
5.2.2 Construction of Group Signatures with Attributes
Efficiency
Threshold Group Signatures with Attributes
5.3 Zone Encryption
5.3.1 Syntax of Zone Encryption Schemes
5.3.2 Security of Zone Encryption Schemes
Common Oracles
Payload Hiding
Anonymity
Traceability
Ciphertext Integrity
Zone Encryption with Multiple Authorities
5.3.7 Construction of a Zone-Encryption Scheme
Formal Description
Correctness & Security
5.3.13 Efficiency & Comparison
Efficiency
C-ITS Deployment and Comparison
5.3.14 Threat Model and Design Choices
5.3.15 Deployment Challenges
6 Hardware Security without Secure Hardware: How to Decrypt with a Password and a Server 
6.1 Preliminaries
6.1.1 Hardness Assumptions
6.1.3 Signatures
6.1.4 Groth’s Strong One-Time Signatures
6.1.5 Jutla and Roy’s Signature Scheme
6.1.6 Public-Key Encryption
6.1.7 Smooth Projective Hash Functions
6.1.8 Key-Derivation Functions
6.2 Malleable Non-Interactive Proofs
6.2.1 Transformations
6.2.2 Simulation Soundness under Controlled Malleability
Formal Definition
6.2.3 Generic Construction
6.2.4 Strong Derivation Privacy
6.2.5 Groth–Sahai Proofs
Instantiation under the SXDH Assumption
6.3 Model for Password-Assisted Decryption
6.3.1 Syntax
6.3.2 Security Definitions
P-IND-RCCA Security
Blindness
Verifiability
6.4 Construction
6.4.1 Verification of Blinded Ciphertexts
Formal Description
6.4.2 Main Construction
Construction Overview
Formal Description
Correctness & Security
Efficiency
On Adaptive Corruptions
On Composability
Mitigating Server Breaches
7 Conclusion and Future Work 
7.1 Conclusion
7.2 Future Work
II Zero-Knowledge Arguments and Randomness Certification
8 Introduction 
8.1 Diophantine Satisfiability
8.1.1 Prior Work
8.2 Public-Key Generation with Verifiable Randomness
8.2.1 Related Work
8.3 Results
8.3.1 Diophantine Satisfiability
8.3.2 Public-Key Generation with Verifiable Randomness
9 Succinct Diophantine-Satisfiability Arguments
9.1 Preliminaries
9.1.1 Non-interactive Commitments in the Random-Oracle Model
9.2 Integer Commitments
9.2.1 Damgård–Fujisaki Commitments
9.2.2 A new Integer-Commitment Scheme
Correctness & Security
Argument System FS.H
Arguing Knowledge of Openings
Multi-Integer Commitments
9.3 Succinct Inner-Product Arguments on Integers
9.3.1 Formal Description
Relations
Main Insights
Protocol Algorithms
Prover-Communication Complexity
Verification via a Single Multi-Exponentiation
Ultimate Commitment
Expression for g and h
Verification Efficiency
9.3.3 Completeness and Security
Challenge-Tree Generators
9.4 Succinct Arguments for Multi-Integer Commitments
9.4.1 Succinct Arguments of Openings
9.4.2 Aggregating Arguments of Openings to Integer Commitments
Protocol
Completeness and Security
9.4.3 Shorter Parameters for Integer Commitments
9.4.4 Succinct Base-Switching Arguments
9.5 Succinct Argument for Diophantine Equations
9.5.1 Arguments via Polynomial-Degree Reductions
Reducing Arbitrary Polynomials to Polynomials of Degree at most
Diophantine Equations as Circuits
9.5.2 Protocol
Main Insights
Protocol Algorithms
Prover-Communication Complexity
Verification Efficiency
9.5.3 Completeness and Security
9.6 Applications
9.6.1 Arguing Knowledge of RSA signatures
9.6.2 Argument of Knowledge of (EC)DSA Signatures
DSA Signatures
ECDSA Signatures
9.6.3 Argument of Knowledge of List Permutation
9.6.4 3-SAT Satisfiability Argument
9.6.5 Integer-Linear-Programming Satisfiability Argument
10 Public-Key Generation with Verifiable Randomness 
10.1 Preliminaries
10.1.1 Randomness Sources and Min-Entropy
10.1.2 Randomness Extractors
10.1.3 Universal Computational Extractors
Pseudo-Random Functions
Dodis–Yampolskiy Pseudo-Random Function
10.1.4 Chernoff’s Bound
10.2 Model for Key Generation with Verifiable Randomness
10.2.1 Syntax
10.2.3 Security Oracles
10.3 Generic Constructions
10.3.1 Key-Generation Protocol with Verifiable Randomness for Probabilistic Circuits
Probabilistic Circuits
Generic Protocol
Discrete-Logarithm Keys
10.3.4 RSA-Key Generation Protocol with Verifiable Randomness
Protocol
10.4 Instantiation of the RSA-Key Generation Protocol
10.4.1 Zero-Knowledge Argument with the Dodis–Yampolskiy PRF
Proof Strategy
10.4.2 Logarithmic-Size Argument of Double Discrete Logarithm
10.4.3 Logarithmic-Size Argument of Discrete-Logarithm Equality in two Groups
10.4.4 An Intermediate Protocol in G2
10.4.5 Protocol for R0
Security
Efficiency
Total Proof Size
Running Time
Overall Communication Size
