At a recent workshop held in Bertinoro, Italy, Jacob Alperin-Sheriff from NIST suggested 10 questions regarding the security of lattice-based schemes.
We wish to propose an alternative list, reflecting our personal view on the most pressing topics.
It should be noted that some of these questions are likely to be dead ends. We encourage our peers to document their failed attempts, and we hope that the community will be receptive to such negative results, as they contribute to the security assessment of future international cryptographic standards.
1. How to refine cost estimates?
Asymptotic estimates of the number of bit operations in the RAM model are an oversimplification of concrete cost, in particular for large computational tasks. In its call for proposals, NIST even suggested going as far as counting (quantum) logic gates, and the choice of this metric has a non-negligible effect on how algorithms compare. For example, a CPU cycle of enumeration (typically, a floating-point multiplication) is not equivalent to a CPU cycle of sieving (a pop-count [FBB+14, D18]).
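To make the pop-count remark concrete, here is a minimal sketch (a toy illustration of ours, not the actual implementations of [FBB+14, D18]) of the XOR-pop-count filter at the heart of modern sieving inner loops: vectors are compressed into sign sketches against random directions, and the Hamming distance between sketches serves as a cheap proxy for the angle, so that most pairs can be discarded before any floating-point inner product is computed. The dimension n = 96 and sketch length k = 256 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 96, 256                       # lattice dimension, sketch length (arbitrary)
H = rng.standard_normal((k, n))      # random directions defining the sketch

def sketch(v):
    # Compress v into k sign bits, packed into k/8 bytes.
    return np.packbits(H @ v > 0)

def popcount_distance(s1, s2):
    # Hamming distance via XOR + pop-count: the cheap inner loop.
    return int(np.unpackbits(s1 ^ s2).sum())

u = rng.standard_normal(n)
v_close = u + 0.3 * rng.standard_normal(n)  # small angle with u
v_far = rng.standard_normal(n)              # essentially orthogonal to u

# Pairs at a small angle yield a Hamming distance well below k/2, so most
# uncorrelated pairs can be filtered out by pop-counts alone.
print(popcount_distance(sketch(u), sketch(v_close)))  # well below k/2 = 128
print(popcount_distance(sketch(u), sketch(v_far)))    # close to k/2 = 128
```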
In addition, the RAM model is not always accurate when considering memory-intensive or communication-intensive algorithms such as sieving. While the simplest sieve [NV08] admits a circuit with Area = Time = 2^{0.2075n + o(n)} [K16], adapting advanced algorithms (such as [BDGL16]) to the AT model remains fully open.
Problem 1.1
Estimate lower order cost terms of the cryptanalytic algorithms.
Problem 1.2
Devise accurate cost models for classical and quantum algorithms, taking resources other than time into account (memory access, communication, etc.), and estimate the performance of the algorithms under these cost models.
Problem 1.3
Explore hybridizations of enumeration and sieving in these refined cost models.
2. How do discrete and usual pruning compare?
Recently, records in solving lattice challenges were achieved using a revived technique called discrete pruning, originally proposed in [S03] (see the highest solved dimensions). Despite attempts to clarify the situation [AN17, FK15], the performance of this technique relative to regular pruning [SE94] remains unclear:
there is no asymptotic prediction so far,
the fit from current experimental data is not very conclusive [TKH18, Figure 1].
Problem 2.1
Produce a usable open-source implementation of the discrete pruning method, and compare it to the state-of-the-art implementation of regular pruned enumeration available in FPLLL.
Problem 2.2
Provide a reliable way to predict its performance for large-dimensional instances.
3. What is the cost of enumeration when used inside BKZ?
The SVP instances encountered inside BKZ may not be as generic as typically assumed in the current security analyses. This may impact the enumeration-based SVP solvers.
Assume a basis is BKZ-reduced with block-size b, and that Schnorr's Geometric Series Assumption [S03] is satisfied with the (expected) rate ≈ b^{-1/b}. Then enumeration on a block in the middle would cost b^{b/8 + o(b)} rather than the worst-case bound of b^{b/(2e) + o(b)} [HS07]. However, the last blocks of the basis would still be more costly, as they do not satisfy the GSA. Moreover, modifying a block in the middle would affect the GSA for the following blocks.
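The b^{b/8 + o(b)} claim can be probed numerically. Below is a minimal sketch of ours, under the Gaussian heuristic, with the enumeration radius set to the Gaussian heuristic of the block and all constants and o(·) terms ignored; the block size b = 200 is an arbitrary choice.

```python
import math

def log2_gamma(x):
    return math.lgamma(x) / math.log(2)

def log2_ball(k, log2_r):
    # log2 of the volume of the k-dimensional ball of radius 2^log2_r.
    return k * log2_r + (k / 2) * math.log2(math.pi) - log2_gamma(k / 2 + 1)

def log2_enum_nodes(ell):
    # Gaussian-heuristic estimate (in bits) of the enumeration tree size over
    # a block with log2 Gram-Schmidt norms ell and radius GH(block).
    b = len(ell)
    log2_gh = (sum(ell) - log2_ball(b, 0)) / b
    levels = [log2_ball(k, log2_gh) - sum(ell[b - k:]) for k in range(1, b + 1)]
    return max(levels)  # the tree is dominated by its largest level

b = 200
slope = -math.log2(b) / b                              # GSA rate b^{-1/b}, in log2 units
print(log2_enum_nodes([i * slope for i in range(b)]))  # GSA middle block: ~ 240 bits
print(b / 8 * math.log2(b))                            # leading term (b/8) log2 b: ~ 191
print(b / (2 * math.e) * math.log2(b))                 # worst case (b/(2e)) log2 b: ~ 281
```

At this size, the estimate lands between the leading-order prediction and the worst-case exponent, a reminder of how significant the lower-order terms of Problem 1.1 can be.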
Problem 3.1
Account for those potentially easier SVP instances in BKZ cost prediction.
Problem 3.2
Explore the feasibility of a BKZ variant reaching the same Hermite factor as BKZ with block-size b, but for which the cost of SVP solving never exceeds b^{b/8 + o(b)}.
4. Are stretched problem parameters helping cryptanalysis?
Some variants of standard problems enjoy reductions from the more standard versions (e.g., LWR in some regimes, LWE with small secrets of sufficient min-entropy, etc.). Even though these reductions may indicate that, asymptotically, the non-standard variants are secure, the proofs are often too loose, or cover too restricted a parameter range, to be of concrete interest.
As a first example, it has long been known that choosing small and/or sparse secrets opens the door to hybrid (lattice + combinatorial) attacks (see [H07], improving over an older attack due to Odlyzko), which are substantially more painful to cost [W16]. As another example, the SIS problem for a large l_infty norm (w.r.t. the modulus q) has already shown counter-intuitive properties (see the Dilithium supporting material). Finally, attacks specialized to LWR should be investigated, given the widespread use of that problem (in particular for parameters not covered by reductions from LWE). We note, in particular, that the “deterministic error” does not always have the same distribution when the rounding goes from q to p with p not dividing q: an attacker may plausibly benefit a little from a posteriori re-centerings and scalings.
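To illustrate the re-centering remark, here is a minimal sketch with toy parameters q = 12, p = 5 of our own choosing (p does not divide q): for each output symbol y, it lists the rounding errors of all inputs x mapping to y. The conditional error distribution depends on y and its mean is generally nonzero, which is what an a posteriori re-centering would exploit.

```python
from fractions import Fraction
from collections import defaultdict

q, p = 12, 5                                  # toy parameters, p does not divide q

buckets = defaultdict(list)
for x in range(q):
    y = round(Fraction(p * x, q)) % p         # LWR rounding from Z_q to Z_p
    # Error of the natural (fractional) lift (q/p)*y back to Z_q, centered mod q.
    e = (Fraction(q * y, p) - x + Fraction(q, 2)) % q - Fraction(q, 2)
    buckets[y].append(e)

for y in sorted(buckets):
    mean = sum(buckets[y], Fraction(0)) / len(buckets[y])
    print(f"y = {y}: errors {[str(e) for e in buckets[y]]}, mean = {mean}")
```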
Problem 4.1
Assess the hardness of SIS for a large l_infty norm.
Problem 4.2
Assess the hardness of the LWR variants used in submissions.
Problem 4.3
Provide accurate predictions for the Kirchner-Fouque attack on NTRU [KF17]. Although the asymptotic analysis suggests that it is not applicable to typical NTRU-based schemes, it would be reassuring to have a refined analysis based on BKZ simulation and accounting for the exact GSO profile of the secret key.
5. Are Module lattices as hard to reduce as non-structured lattices?
Reductions exist from Ideal-SVP to Ring-LWE/Module-LWE. At the same time, for some parameters, Ideal-SVP is weaker than SVP for arbitrary lattices [CDW17]. There are two ways to interpret these facts:
there might be a weakness in Ring-LWE/Module-LWE,
the reductions are not meaningful (at least in some parameter ranges).
The cryptanalytic state of the art tends to suggest option 2, but more evidence would be reassuring. Breaking Ring-LWE can be expressed as a (cyclotomic-ring-)module lattice problem, where the module rank is greater than or equal to 2.
Both enumeration and sieving algorithms admit speed-ups exploiting the symmetries of the input module lattice, yet it is unclear whether those speed-ups remain relevant inside lattice reduction, as LLL/BKZ will not maintain those symmetries in the considered blocks. For sieving, see, e.g., [BNvdP14]. For enumeration, one may lower the required success probability of pruned enumeration, since there are more shortest vectors to be found.
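For concreteness, here is a minimal sketch (ours) of the simplest such symmetry in ℤ[x] / (x^n + 1): multiplication by x is a negacyclic rotation, hence an isometry, so every short vector comes in an orbit of 2n vectors of the same norm (the map acts coordinate-wise on rank-k module vectors). The dimension n = 8 and the coefficient range are arbitrary choices.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
v = rng.integers(-2, 3, size=n)           # a toy short element of Z[x]/(x^n + 1)

def mul_by_x(w):
    # Multiplication by x in Z[x]/(x^n + 1): a negacyclic rotation.
    return np.concatenate(([-w[-1]], w[:-1]))

orbit = [v]
for _ in range(2 * n - 1):
    orbit.append(mul_by_x(orbit[-1]))

# The orbit has 2n elements (x^n = -1, so step n yields -v), all of the same
# Euclidean norm: one short vector gives 2n of them for free.
print({float(np.linalg.norm(w)) for w in orbit})   # a single value
```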
Problem 5.1
Devise a module-BKZ algorithm, which would preserve (some of) the structure in its SVP calls, and assess whether such an approach could be competitive with BKZ. Such an algorithm may have the form of a reduction from Module-SVP in rank k to Module-SVP in rank 2, and/or exploit subfields.
Problem 5.2
Assess whether algorithms for Ideal-SVP can be extended to Module-SVP with module rank 2.
6. Should we be afraid of algebra?
Many submissions rely on the cyclotomic rings ℤ[x] / (x^n + 1) for n a power of 2, and consider a fully splitting prime q (q = 1 mod 2n), or q a power of 2. These rings, and their reductions modulo q, enjoy heavy algebraic structure. The cyclotomic fields have interesting subfield decompositions, and taking q = 1 mod 2n makes ℤ[x] / (x^n + 1) ring-isomorphic to (ℤ_q)^n via the Chinese Remainder Theorem. But so far, these properties have not been exploited to break Ring-LWE over power-of-2 cyclotomics.
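As a small sanity check of this CRT statement, the following sketch (toy parameters n = 8 and q = 17 of our choosing, so that q = 1 mod 2n) computes the n roots of x^n + 1 in ℤ_q and verifies that evaluation at these roots turns negacyclic multiplication into coordinate-wise multiplication:

```python
n, q = 8, 17                 # toy parameters: q = 17 = 1 mod 16 splits x^8 + 1

# The roots of x^n + 1 in Z_q are the primitive 2n-th roots of unity.
roots = [z for z in range(q) if pow(z, n, q) == q - 1]
assert len(roots) == n

def to_crt(f):
    # Evaluation at the roots: the CRT map Z_q[x]/(x^n + 1) -> (Z_q)^n.
    return [sum(c * pow(z, i, q) for i, c in enumerate(f)) % q for z in roots]

f = [1, 2] + [0] * 6         # 1 + 2x
g = [3, 0, 1] + [0] * 5      # 3 + x^2
h = [0] * n                  # negacyclic product f*g mod (x^n + 1, q)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        h[(i + j) % n] = (h[(i + j) % n] + (a * b if i + j < n else -a * b)) % q

# Ring multiplication becomes coordinate-wise multiplication in (Z_q)^n.
assert to_crt(h) == [a * b % q for a, b in zip(to_crt(f), to_crt(g))]
print(roots)
```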
Problem 6.1
Assess whether this rich algebraic structure can help cryptanalysis in any way.
7. Can weak-key attacks be mounted?
In many lattice-based proposals, the magnitudes of error vectors and secret key vectors are not fixed. Yet the security analyses often consider them essentially fixed. This leaves open the possibility of a weak-key attack: an attacker could just hope that the key is sufficiently short (at least in some directions), and run a cheaper attack targeting such short secrets. The question is whether the cost of this attack decreases faster than the probability of the key being short.
One may note that this reasoning directly applies to the projected key in the uSVP attack as analyzed in [AGVW17], and is essentially the idea behind pruned enumeration for BDD [LN13].
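As a minimal numerical sketch of this trade-off (our own toy choice: n = 512 secret coefficients taking values in {-2, …, 2} with probabilities (1,4,6,4,1)/16, a centered-binomial-style distribution similar in spirit to, but not matching, any specific submission), the following computes exactly how unlikely it is for the secret to be noticeably shorter than average; a weak-key attack pays off only if its cost decreases faster than these probabilities:

```python
import numpy as np

n = 512
# Each squared coefficient takes the values 0, 1, 4
# with probabilities 6/16, 8/16, 2/16.
p_sq = np.zeros(5)
p_sq[[0, 1, 4]] = [6 / 16, 8 / 16, 2 / 16]

# Exact distribution of |s|^2, the sum of n i.i.d. squared coefficients,
# computed by repeated convolution.
dist = np.array([1.0])
for _ in range(n):
    dist = np.convolve(dist, p_sq)

mean = (np.arange(len(dist)) * dist).sum()     # = n for this distribution
cdf = np.cumsum(dist)
for shrink in (0.95, 0.90, 0.85):
    print(f"P(|s|^2 <= {shrink} E[|s|^2]) = 2^{np.log2(cdf[int(shrink * mean)]):.1f}")
```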
Problem 7.1
Assess the feasibility of weak-key attacks.
Problem 7.2
What about fixing the norm of the error terms?
8. Can one exploit decryption errors in schemes using Error Correcting Codes?
Several candidate Ring-LWE-based schemes improve error tolerance using error-correction techniques. The analyses of the decryption failure probability typically assume that the coefficients of the final error E = e' s + e s' + e'' are statistically independent. Unfortunately, this is not formally true: for example, the probability that two fixed coefficients both exceed q/4 is strictly larger than the square of the probability that a single fixed coefficient exceeds q/4.
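This can be observed with a quick Monte Carlo experiment; the sketch below uses toy choices of ours (a tiny ring of dimension n = 8, ternary e, e', e'', s, s', no reduction modulo q) so that the effect is visible with few samples. The joint tail probability should come out noticeably above the product of the marginals.

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials, t = 8, 2_000_000, 7       # tiny toy ring and tail threshold

def small(shape):
    # Ternary stand-in for the small error/secret distributions.
    return rng.integers(-1, 2, size=shape, dtype=np.int8)

e1, s1, e2, s2 = (small((trials, n)) for _ in range(4))
e3 = small((trials, 2))

def coeff(a, b, k):
    # Coefficient k of the negacyclic product a*b in Z[x]/(x^n + 1):
    #   c_k = sum_i a_i * b_{(k-i) mod n}, with a sign flip for i > k.
    idx = (k - np.arange(n)) % n
    sign = np.where(np.arange(n) <= k, 1, -1)
    return (a * b[:, idx] * sign).sum(axis=1)

# E = e' s + e s' + e'', coefficients 0 and 1 only.
E0 = coeff(e1, s1, 0) + coeff(e2, s2, 0) + e3[:, 0]
E1 = coeff(e1, s1, 1) + coeff(e2, s2, 1) + e3[:, 1]

joint = ((np.abs(E0) > t) & (np.abs(E1) > t)).mean()
product = (np.abs(E0) > t).mean() * (np.abs(E1) > t).mean()
print(f"joint tail: {joint:.2e}  product of marginals: {product:.2e}")
```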
Problem 8.1
Quantify these correlations and understand their precise effect on the failure probability. What about fixing the Euclidean norm of the error terms?
Problem 8.2
Devise strategies that avoid this pitfall. A solution in this direction was proposed in the Mersenne scheme: the authors show that applying a pseudo-random permutation of the bits during encoding and decoding resolves this statistical-independence issue.
9. Are the reductions to CCA-security tight?
Maintaining k bits of security through a Fujisaki-Okamoto-like transform to CCA security seems to require the failure probability of the base scheme to be ≈ 2^{-k} (or even 2^{-2k} in the quantum setting) for honestly generated ciphertexts, even if the number of decryption requests is very limited. See, e.g., [HHK17]. Indeed, there is a pathological example where the attacker can simply brute-force the random oracle, issue a handful of queries, and get the secret key as a response. However, the actual scenario in LWE-type schemes is more subtle, since part of the randomness triggering failures is not controlled by the adversary. Rough arguments have been detailed in some proposals (e.g., Kyber), suggesting that requiring ≈ 2^{-2k} is therefore overkill.
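The quantitative gap can be made explicit with back-of-the-envelope arithmetic in the pathological model above, where the adversary controls all failure-triggering randomness through the random oracle; the snippet below (our own illustration, with k = 128 as an example target) simply instantiates the rules of thumb: success probability ≈ q_H · δ classically, and ≈ q_H² · δ under Grover search.

```python
k = 128                    # example target security level
# Log2 success probability of the pathological attack after q_H = 2^k
# random-oracle queries, for failure probability delta = 2^-f
# (a value of 0 or more means the attack succeeds within budget).
for f in (128, 192, 256):
    print(f"delta = 2^-{f}: classical 2^{k - f}, quantum (Grover) 2^{2 * k - f}")
```

In this model, δ ≈ 2^{-k} stops the classical brute-force but leaves a Grover search at cost ≈ 2^{k/2}, so δ ≈ 2^{-2k} is needed quantumly; whether this square-root gain survives when part of the randomness is out of the adversary's control is precisely what is unclear.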
Problem 9.1
Assess the tightness of the upgrades to CCA-security for the lattice-based submissions. This could be handled by improving the upgrades themselves, by improving their analyses, or by describing matching attacks.
Problem 9.2
Circumvent the issue altogether, with better upgrades to CCA-security.
10. Is QROM non-tightness an artifact or is it meaningful?
In the quantum-accessible ROM setting (QROM), the known upgrades to CCA-security are non-tight (in the case of the NIST candidates). See [HHK17, SXY18].
Problem 10.1
Better understand this non-tightness. Can the tightness of these reductions be improved? Are there (possibly pathologically designed) examples for which attacks match the bounds given by the current QROM proofs? Are these losses mere artifacts, meaningful only for pathological cases, or could they be meaningful in the context of lattice-based schemes?
Bibliography
[AN17] Y. Aono and P. Q. Nguyen, Eurocrypt'17.
[AGVW17] M. R. Albrecht, F. Göpfert, F. Virdia and T. Wunderer. Asiacrypt'17.
[BDGL16] A. Becker, L. Ducas, N. Gama and T. Laarhoven, SODA'16.
[BNvdP14] J. W. Bos, M. Naehrig and J. van de Pol, ePrint 2014/880.
[CDW17] R. Cramer, L. Ducas and B. Wesolowski, Eurocrypt'17.
[D18] L. Ducas, Eurocrypt'18.
[FBB+14] R. Fitzpatrick, C. H. Bischof, J. Buchmann, Ö. Dagdelen, F. Göpfert, A. Mariano and B.-Y. Yang. Latincrypt'14.
[FK15] M. Fukase and K. Kashiwabara, JIP'15.
[H07] N. Howgrave-Graham, Crypto'07.
[HHK17] D. Hofheinz, K. Hövelmanns and E. Kiltz, TCC'17.
[HS07] G. Hanrot and D. Stehlé, Crypto'07.
[K16] P. Kirchner, Cryptanalytic Algorithm Mailing List 26/05/2016.
[KF17] P. Kirchner and P.-A. Fouque, Eurocrypt'17.
[LN13] M. Liu and P. Q. Nguyen, CT-RSA'13.
[NV08] P. Q. Nguyen and T. Vidick, J. Math. Cryptol. '08.
[SXY18] T. Saito, K. Xagawa and T. Yamakawa, Eurocrypt'18.
[S03] C.-P. Schnorr, STACS'03.
[SE94] C.-P. Schnorr and M. Euchner, Math.Prog.'94.
[TKH18] T. Teruya, K. Kashiwabara and G. Hanaoka, PKC'18.
[W16] T. Wunderer, ePrint 2016/733.