Questions regarding the decentralized timeboost spec

Our team at the Shutter Network has been looking into the recently released spec on decentralized timeboost and found it a really interesting read! We have the following follow-up questions:

1. Cryptography Questions

  • What cryptographic schemes are planned to be used for threshold decryption and signatures?
  • Besides CCA-security for the threshold decryption scheme, what security and communication complexity requirements should these cryptographic primitives satisfy?
  • Has Identity-Based Encryption (IBE) been considered? It would make it possible to derive an identity secret key per decryption phase and thereby reduce communication complexity.
  • Does the committee always consist of the same members? Is a fresh Distributed Key Generation (DKG) process run each time keys are updated?

2. Questions Regarding the Specification

  • How do rounds and epochs align? Are they tied to timestamps, and if so, who generates these timestamps?
  • Since decryption phases can run concurrently across rounds, is it possible for the inclusion phase of round n + 1 to complete while decryption for round n is still ongoing? If this is the case, could the priority address hold an incomplete state during the inclusion phase of round n + 1?
  • What is the protocol’s approach if decryption stalls?
  • From the spec it seems there are multiple decryption committees, but only a single signing committee. Is that correct? If yes, what is the reasoning behind this?
  • Are you expecting to use Rust as the main tech stack?

3. Timeboost for L1
Considering an eventual integration on L1: do you already have an idea of how this would align with PBS? Would it be a separate transaction supply chain, or somehow integrated into the current PBS supply chain? Would it function through a based sequencing approach? Could you provide some insights on how these elements might fit together?

Looking forward to your thoughts and to interesting discussions.


Thank you for the questions. Here’s a partial response on part 2. Responses on the other parts coming soon.

  • How do rounds and epochs align? Are they tied to timestamps, and if so, who generates these timestamps?

Ans. Epochs would change at fixed time intervals, say every minute, based on the wall clock time maintained by the validators. Parties start a round at a well-defined time after some progress has happened in the previous round; this is not necessarily based on wall clock time. A round is eventually tied to an epoch based on when it was started by each of the parties. For instance, in the example specification implementation, each party provides a timestamp for when the round started whenever it proposes a block, and the round timestamp is the median of the timestamps in the candidate lists output by the protocol. This median value determines the epoch the round aligns to.
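As a toy illustration of the median rule described above (the 60-second epoch length is an assumption for the example, not a value from the spec):

```python
import statistics

# Assumed epoch length for illustration; the spec's actual value may differ.
EPOCH_SECONDS = 60

def round_epoch(candidate_timestamps):
    """Tie a round to an epoch from the per-party start timestamps.

    Each party reports the wall-clock second at which it started the
    round; the round timestamp is the median of those candidates, and
    the epoch is the interval that median falls into.
    """
    round_ts = statistics.median(candidate_timestamps)
    return int(round_ts // EPOCH_SECONDS)

# Five parties with slightly skewed clocks; one outlier cannot move the
# median far, so the round still lands in epoch 2 ([120 s, 180 s)).
assert round_epoch([123, 124, 125, 126, 300]) == 2
```

The median makes the round-to-epoch assignment robust against a minority of parties reporting wildly wrong clocks.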

  • Since decryption phases can run concurrently across rounds, is it possible for the inclusion phase of round n + 1 to complete while decryption for round n is still ongoing? If this is the case, could the priority address hold an incomplete state during the inclusion phase of round n + 1?

Ans. Yes, it is possible for the inclusion phase of round r+1 to complete while decryption for round r is still ongoing. Thus, the priority address will have some incomplete state during the inclusion phase of round r+1.

  • What is the protocol’s approach if decryption stalls?

Ans. We do not entirely follow the question. Decryption happens when we commit blocks, and is thus related to the liveness of the protocol. It is unclear why decryption would stall.

  • Are you expecting to use Rust as the main tech stack?

Ans. Yes.


We haven’t deeply explored how decentralized Timeboost could fit into the general L1 block production pipeline. However, Timeboost is completely compatible with based rollups and can be used for app-specific sequencing on the L1, if desired, and that seems to be the easiest way to introduce Timeboost into the L1. Theoretically, Ethereum could also adopt Timeboost as the de facto ordering policy, but that is unlikely in practice. Thank you for the questions! Answers to part 1 coming soon!


Here is a relayed response for the cryptography questions!

What cryptographic schemes are planned to be used for threshold decryption and signatures?

We are looking at several schemes.

Besides CCA-security for the threshold decryption scheme, what security and communication complexity requirements should these cryptographic primitives satisfy?

An interesting additional requirement is that shares from a failed prior encryption shouldn’t be usable to decrypt a future round’s message.

Has Identity-Based Encryption (IBE) been considered? This would allow to derive an identity secret key per decryption phase and thereby reduce communication complexity.

Yes, but we only have a few parties that are fixed for a reasonable amount of time.

Are always the same members in the committee? Is a fresh Distributed Key Generation (DKG) process run each time keys are updated?

No, the members aren’t fresh every time, only once in a while (say some longer epoch). For forward security we will likely run a fresh DKG.

Thanks a lot for the answers to our questions, they really help clarify our understanding of the specification. We are quite excited about the spec, as it is a significant step towards a more decentralized system. We actually believe that our implementation of a threshold identity-based encryption scheme (https://github.com/shutter-network/shutter) could be an interesting option for the threshold encryption component of the decentralized timeboost sequencer. If that looks interesting to you, we’d be happy to discuss more and support you.


Thanks! I’ll make sure our cryptography team sees this!


Thanks for sharing the link.

It looks like the Shutter protocol solves the problem of threshold decryption using an IBE approach with periodic DKG enabled by the Feldman VSS protocol. A few thoughts:

  1. IBE schemes generally rely on expensive pairings/exponentiations, which may decrease the latency/throughput of the overall system. An alternative (simpler) approach is a naive threshold KEM version of ElGamal encryption accompanied by a standard Chaum-Pedersen (DLEQ) proof to check validity of the ciphertext ([SG02]).
  2. It seems that a DKG based on naive Feldman VSS is susceptible to bias attacks, where a party maliciously creates its share based on the value of the shares of other parties. We do envision using an efficient DKG (and not proactive share refresh) but haven’t settled on the scheme yet.
  3. It is unclear if a network (ledger) based on Tendermint can provide the necessary performance. Ideally we would explore BFT protocols with better latency/throughput.
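To make point 1 concrete, here is a sketch of a Chaum-Pedersen (DLEQ) proof attached to a decryption share; the tiny Schnorr group and all parameter names are toy assumptions for illustration, and a real system would use an elliptic-curve group:

```python
import hashlib
import secrets

# Toy Schnorr group for illustration only: g generates the subgroup of
# order q = 11 in Z_23^*. A real deployment would use an elliptic curve.
p, q, g = 23, 11, 4

def fs_hash(*vals):
    """Fiat-Shamir challenge over the proof transcript."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def dleq_prove(x, c):
    """Prove the decryption share d = c^x uses the same exponent x as the
    public key share h = g^x, without revealing x (Chaum-Pedersen, made
    non-interactive via Fiat-Shamir)."""
    h, d = pow(g, x, p), pow(c, x, p)
    r = secrets.randbelow(q)
    a1, a2 = pow(g, r, p), pow(c, r, p)     # commitments
    e = fs_hash(g, h, c, d, a1, a2)
    z = (r + e * x) % q
    return d, (e, z)

def dleq_verify(h, c, d, proof):
    e, z = proof
    a1 = pow(g, z, p) * pow(h, -e, p) % p   # should equal g^r
    a2 = pow(c, z, p) * pow(d, -e, p) % p   # should equal c^r
    return e == fs_hash(g, h, c, d, a1, a2)

# A committee member with key share x publishes (d, proof) for the
# ciphertext's ephemeral KEM element c; anyone can check validity:
x = 7
c = pow(g, 5, p)                            # ephemeral element g^k
d, proof = dleq_prove(x, c)
assert dleq_verify(pow(g, x, p), c, d, proof)
```

The proof lets the combiner reject invalid decryption shares publicly, which is what makes the naive threshold ElGamal KEM robust against malicious committee members.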

Happy to continue the discussion.


Thanks for the interesting comments, I’d like to reply to them in the following.

Yes, that’s a good point and indeed also Shutter relies on pairing operations. However, the advantage of an IBE scheme such as Shutter in comparison to e.g. ElGamal is that the decryption committee can derive an identity key per epoch and just release shares of this key for decryption. That is, each committee member only has to send a single group element for the decryption of a block (i.e. requires constant communication), whereas for ElGamal-like schemes a committee member has to send a decryption share per transaction (i.e. requires linear communication).
In addition, it is worth noting that Shutter requires only a single pairing operation per decryption.
In past integrations, the latency added by Shutter is in the range of a few hundred milliseconds and there is still potential for optimization.
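A minimal sketch of the constant-communication property, using plain Shamir sharing over a prime field with illustrative parameters (in the actual IBE scheme the released shares are group elements combined in the exponent, not field scalars):

```python
import secrets

# Toy prime field for the secret sharing (illustrative choice).
P = 2**61 - 1

def make_shares(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(a * pow(i, j, P) for j, a in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at 0 recombines the epoch key."""
    out = 0
    for idx, (xi, yi) in enumerate(shares):
        num = den = 1
        for jdx, (xj, _) in enumerate(shares):
            if idx != jdx:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        out = (out + yi * num * pow(den, P - 2, P)) % P
    return out

# Each committee member sends ONE message per epoch (its share of the
# epoch identity key), independent of how many ciphertexts the round
# contains -- this is the constant-communication property.
epoch_key = 123456789
shares = make_shares(epoch_key, t=3, n=5)
assert recover(shares[:3]) == epoch_key
assert recover(shares[2:]) == epoch_key
```

With a per-transaction scheme, by contrast, each member would have to send one such share for every encrypted transaction, giving linear communication.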

Yes, it’s true that a malicious party can bias the public key in a DKG based on Feldman VSS. However, a series of previous works (e.g., [GJKR07][CGRS23][KG09]) showed that some threshold schemes remain secure despite this bias. In fact, the recent trend in threshold cryptography seems to be moving towards Feldman VSS based DKGs due to the efficiency gains they offer compared to, e.g., Pedersen VSS based DKGs.
Shutter uses Feldman VSS for the same reason. The security argument for the Shutter protocol takes the use of Feldman VSS into consideration (happy to elaborate on this).
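For context, the Feldman VSS check under discussion can be sketched as follows (toy group and parameters are assumptions for illustration). The public commitments to the polynomial coefficients are what let parties verify their shares, and the commitment to the constant term is also what leaks information and enables the bias attack mentioned above:

```python
import secrets

# Toy group: g generates the order-q subgroup of Z_p^* (assumed
# parameters; real systems use an elliptic-curve group).
p, q, g = 23, 11, 4

def feldman_deal(secret, t, n):
    """Dealer: Shamir shares of `secret` mod q, plus Feldman commitments
    C_j = g^{a_j} to each polynomial coefficient a_j."""
    coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(t - 1)]
    commits = [pow(g, a, p) for a in coeffs]
    shares = [(i, sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    return shares, commits

def feldman_verify(i, s_i, commits):
    """Party i checks its share against the public commitments:
    g^{s_i} == prod_j C_j^{i^j}."""
    lhs = pow(g, s_i, p)
    rhs = 1
    for j, C in enumerate(commits):
        rhs = rhs * pow(C, pow(i, j), p) % p
    return lhs == rhs

shares, commits = feldman_deal(secret=9, t=3, n=5)
assert all(feldman_verify(i, s, commits) for i, s in shares)
```

Note that `commits[0] = g^secret` is public, which is exactly the leakage a rushing adversary can exploit to bias the aggregated public key; the cited works argue that for certain threshold schemes this bias does not harm security.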

In previous integrations, Shutter used Tendermint only during the initial DKG process, afterwards communication happens over a p2p network. In particular, Shutter is not inherently tied to Tendermint and can be used with another BFT consensus mechanism with better latency.


Thanks for the feedback. Replied below:

That is, each committee member only has to send a single group element for the decryption of a block (i.e. requires constant communication), whereas for ElGamal-like schemes a committee member has to send a decryption share per transaction (i.e. requires linear communication). In addition, it is worth noting that Shutter requires only a single pairing operation per decryption.

Indeed, IBE has the nice property that one can do threshold decryption on a ciphertext without deriving the decryption key (Sec. 6, [BF01]). Still, it seems necessary to communicate a decryption share per encrypted transaction, unless all transactions are encrypted towards the same label, which does not sound ideal. In the KEM version of ElGamal one also does not derive the decryption key but instead computes the ephemeral key in the exponent.

… However, a series of previous works (e.g., [GJKR07][CGRS23][KG09]) showed that some threshold schemes remain secure despite this bias.

Right, the scheme looks similar to Ped-DKG in [GJKR07]. It seems that, despite the adversarial bias, a threshold system instantiated with Ped-DKG can be proved secure based on the hardness of DL (as opposed to a proof by simulation, as for centralized threshold schemes). Appreciate the helpful references.

In previous integrations, Shutter used Tendermint only during the initial DKG process, afterwards communication happens over a p2p network. In particular, Shutter is not inherently tied to Tendermint and can be used with another BFT consensus mechanism with better latency.

In terms of DKG, another interesting scheme in a p2p setting is the aggregated DKG of [GJMMST21], which uses the SCRAPE check for public verification.

In general it seems that some ledger implementations (like Tendermint) favor schemes like TIBE that reduce communication to a minimum. However, communication does not seem to be an immediate bottleneck if the underlying BFT is DAG-based (or similar) and already incurs a large communication overhead.

Will discuss further with the team, but we will likely explore the simpler ElGamal KEM approach initially, and if there are unforeseen challenges we can revisit the IBE-based schemes.


Sure that sounds good. In any case, we’re always happy to discuss more regardless of which concrete scheme you eventually decide to use. We’re excited to see more and more threshold schemes being deployed and are always happy to contribute. We’d also be happy to stay in touch e.g. via regular calls to check in on any potential collaboration. See my response to your comments below.

It’s true that in order to achieve constant communication complexity with IBE schemes, transactions need to be encrypted to the same identity. This has the downside that we lose pending transaction privacy, i.e., encrypted transactions which don’t make it into a bundle are revealed/decrypted. Of course, IBE can also be used in such a way that every transaction is encrypted to a different identity. In that case we have pending transaction privacy but only linear communication complexity. So essentially IBE offers a trade-off between communication complexity and pending transaction privacy: one can choose whether communication complexity or pending transaction privacy is more important.
To the best of my knowledge, ElGamal does not offer such a trade-off (please correct me if I’m wrong). With ElGamal you always get linear communication with pending transaction privacy.

As a side note: A very recent line of research looks into this problem of achieving constant communication and pending transaction privacy at the same time [CGPW24][BFOQ24][AFP24][CGPP24]. These papers propose a primitive called batched threshold encryption and most of the constructions are based on IBE as well.

Yes, the aggregated DKG is interesting due to its efficiency, but unfortunately it produces group elements as secret key shares and is therefore not compatible with many encryption schemes.
