This is a user-friendly overview of the cryptographic primitives of Ethereum. I will not describe the algorithms or the math in depth, but you will find plenty of useful references for the details. This post was written as an outcome of the Ethereum Devconnect Scholars programme.

Ethereum Crypto Garden (cover image generated by AI)

At the time of writing, Ethereum processes over a million transactions daily. The combined market capitalization sits at hundreds of billions of dollars, with tens of billions more locked in smart contracts (TVL). That is a lot of trust and value placed on a single protocol. But what exactly secures this value? In this post, we will look at the cryptographic primitives that make Ethereum secure. We will cover the use of the ECDSA, BLS, Keccak-256, and KZG algorithms. Moreover, we will take a look at how Ethereum is positioned against quantum computers.

ECDSA

ECDSA stands for the Elliptic Curve Digital Signature Algorithm. It is a variant of the Digital Signature Algorithm (DSA) which uses Elliptic Curve Cryptography (ECC) rather than the modular exponentiation found in earlier schemes like RSA. The advantage over RSA is that ECDSA can achieve the same level of security with a much smaller key size.

ECDSA is used to authorize state transitions initiated by network users via externally owned accounts (EOAs). In other words, it allows you to prove you are the owner of a wallet without revealing your private key. Every private key has a corresponding public key. The private key is used to generate a signature, and the public key is sufficient to verify that the signature was generated by the key's owner.

Every time a user sends a transaction from an EOA, they attach an ECDSA signature to the transaction data to prove that they own the private key for that wallet. The core concept is that only the owner of the private key can generate a valid signature, but anyone can verify it. For each transaction, before submitting it to the execution layer, the node checks the validity of the signature. The user authorizes a transaction in the following way (a minimal sketch follows the list):

  1. Data preparation: the account nonce is attached to the transaction data (recipient, amount, …) to prevent double-spending
  2. Hashing: the transaction data is hashed using keccak256 to create a message digest
  3. Signature Generation: apply the ECDSA algorithm on the message digest and private key to generate the resulting signature
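
To make the flow concrete, here is a minimal sketch of these steps in Python, assuming the third-party `pycryptodome` (for Keccak-256) and `ecdsa` (for secp256k1) packages; the transaction payload below is a made-up placeholder, and real clients RLP-encode the transaction before hashing.

```python
from Crypto.Hash import keccak
from ecdsa import SigningKey, SECP256k1

def keccak256(data: bytes) -> bytes:
    """Hash arbitrary-length input into a 32-byte message digest."""
    return keccak.new(digest_bits=256, data=data).digest()

# 1. Data preparation: nonce + transaction fields (placeholder serialization).
tx_data = b"nonce=7|to=0xabc...|value=1 ETH"

# 2. Hashing: create the message digest with Keccak-256.
digest = keccak256(tx_data)

# 3. Signature generation: sign the digest with a secp256k1 private key.
private_key = SigningKey.generate(curve=SECP256k1)  # stand-in for a wallet key
signature = private_key.sign_digest(digest)

# Anyone holding the corresponding public key can verify the signature.
public_key = private_key.get_verifying_key()
assert public_key.verify_digest(signature, digest)
```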

Ethereum relies on public key recovery. Unlike standard verification, where the output is simply true/false, an Ethereum transaction carries the signature and the network recovers the signer's public key (and from it the sender address) from the signature and the message hash. If the recovered address matches the from field in the transaction, the transaction is considered valid.
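
As a rough sketch of the recovery idea (not the exact client code), the same `ecdsa` package can recover candidate public keys from a signature and digest; Ethereum signatures additionally carry a recovery id `v` that selects the correct candidate, whose Keccak-256 hash (last 20 bytes) gives the sender address.

```python
from ecdsa import SigningKey, VerifyingKey, SECP256k1

# Signer side: sign a 32-byte digest (hashing shown in the previous sketch).
sk = SigningKey.generate(curve=SECP256k1)
digest = b"\x11" * 32                      # placeholder for keccak256(tx_data)
signature = sk.sign_digest(digest)

# Verifier side: recover candidate public keys from (signature, digest) alone.
candidates = VerifyingKey.from_public_key_recovery_with_digest(
    signature, digest, curve=SECP256k1
)

# In Ethereum, the recovery id `v` picks the right candidate; hashing it with
# Keccak-256 and keeping the last 20 bytes must reproduce the `from` address.
original = sk.get_verifying_key().to_string()
assert any(vk.to_string() == original for vk in candidates)
```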

The security of ECDSA relies on the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem currently secures the vast majority of online communication. The algorithm relies on an algebraic structure whose elements are points on an elliptic curve. To understand this better, take a look at my math toolkit post. It is widely believed that calculating the discrete logarithm in this structure is infeasible, because we do not know any algorithm that would be significantly faster than brute force. The specific curve used is secp256k1, which is the same curve used in Bitcoin. The ECDLP is also referred to as a one-way function because in this structure it is easy to perform exponentiation (in multiplicative notation) but hard to compute the logarithm. For a deeper look into how this structure works, check the blog post “Elliptic Curve Cryptography: a gentle introduction” by Andrea Corbellini.

BLS Signature Scheme

BLS is a different type of signature scheme with a property that allows signature aggregation. While ECDSA is used on the execution layer, BLS is used in proof of stake, where validators attest to blocks. Ethereum has hundreds of thousands of validators who vote on proposed blocks every epoch, and a block needs a supermajority of votes to be finalized. Checking every validator's signature individually for each block would be computationally expensive, to say the least; aggregation makes these attestations practical on the consensus layer.

The BLS algorithm is fairly similar to ECDSA, but the underlying curve, BLS12-381, has special pairing properties. This is well described in the blog post BLS12-381 For The Rest Of Us. The pairing is a bilinear map: it takes points from source groups and maps them to a target group, and this bilinearity is exactly what the verification procedure exploits.

In this context, aggregation means that all signatures over the same message (or vote) can be checked with a single verification, and the size of the aggregate signature does not grow with the number of signers. This works because the whole algebraic structure is linear: signatures can be added together, and the public keys can be summed into a single aggregate key. To the scheme, this aggregate key is indistinguishable from any other key.
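
A minimal sketch of aggregation, assuming the `py_ecc` package (the reference implementation behind the Ethereum consensus specs); the integer secret keys below are toy values rather than real validator keys.

```python
from py_ecc.bls import G2ProofOfPossession as bls

# Toy secret keys for three "validators" (real keys follow the EIP-2333 key-generation standard).
secret_keys = [11, 22, 33]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# All validators sign the same attestation message.
message = b"attestation: block root 0xabc..., epoch 42"
signatures = [bls.Sign(sk, message) for sk in secret_keys]

# The signatures collapse into one constant-size aggregate signature.
aggregate_signature = bls.Aggregate(signatures)

# One verification covers all signers: the public keys are summed into a single
# aggregate key, which the scheme treats like any other key.
assert bls.FastAggregateVerify(public_keys, message, aggregate_signature)
```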

Besides attestations, BLS also powers RANDAO, the source of randomness in Ethereum. For example, block proposers are selected randomly, and that randomness is determined by this mechanism. BLS is crucial here because BLS signatures are unique and deterministic. Unlike ECDSA, where you can create many valid signatures for one message using different random nonces, a BLS signature is fixed for a given key and message. This prevents the block proposer from manipulating the randomness by trying different signatures until they get a favorable result. If you are interested in how this works, there is a really nice explanation in the ethbook.

The hardness of BLS signatures comes from the Computational Diffie-Hellman assumption and Pairing Inversion. The first assumes that even if you know the public points corresponding to two secret numbers, it is hard to compute the point corresponding to their product. The second states that the pairing function itself acts as a one-way function, meaning you cannot reverse the mapping to recover the original inputs from the target group.

Keccak-256 Hash Function

Now we get to Ethereum's hash function of choice. We have already seen it in the ECDSA section, where the signature is created over a message digest generated by the hash function. This is only one of the many places where the hash function is used. The hash function takes an input of arbitrary length and deterministically produces a 256-bit output.

The core concept of a blockchain is that each block's hash depends on the hash of the previous block. This chain of dependencies makes it easy to check whether finalized data has been modified: if any data in a previous block were modified, its hash would change, and all subsequent blocks would become invalid. This mechanism gives a blockchain its immutability. Likewise, we hash transactions to make sure that the transaction details were not modified.

Another important use is address derivation. User addresses (EOAs) are derived by taking the Keccak-256 hash of the public key and keeping only the last 20 bytes. Contract addresses are similarly derived using Keccak-256 over a combination of the creator's address and nonce. This ensures that addresses are deterministic, unique, and infeasible to predict or reverse-engineer in a meaningful way.
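
A sketch of both derivations, assuming the `pycryptodome` and `rlp` packages and placeholder inputs: an EOA address is the last 20 bytes of the Keccak-256 hash of the 64-byte public key, and a contract address created with plain CREATE is the last 20 bytes of keccak256(rlp([sender, nonce])).

```python
from Crypto.Hash import keccak
import rlp

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_bits=256, data=data).digest()

# EOA address: keccak256 of the uncompressed 64-byte public key, last 20 bytes.
public_key = bytes.fromhex("04" * 64)      # placeholder (x || y) coordinates
eoa_address = "0x" + keccak256(public_key)[-20:].hex()

# Contract address (CREATE): keccak256 of rlp([creator address, nonce]), last 20 bytes.
creator = bytes.fromhex("00" * 20)         # placeholder creator address
nonce = 3
contract_address = "0x" + keccak256(rlp.encode([creator, nonce]))[-20:].hex()

print(eoa_address, contract_address)
```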

Another important part is the connection to Merkle trees. The data is split into pieces, hashed, and arranged into a tree, and a root hash commits to the entire set of data in the leaves of the tree. This is used to commit to Ethereum's entire state. These roots act as compact commitments to millions of accounts and transactions and allow nodes to prove the inclusion or absence of data using Merkle proofs. Moreover, Keccak-256 is used in the EVM to compute smart contract storage locations, but you get the point… the hash function is a crucial building block.
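
A toy Merkle root computation using Keccak-256 (Ethereum's state actually lives in a more involved Merkle-Patricia trie, so this only illustrates the commit-and-detect-tampering idea):

```python
from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_bits=256, data=data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash the leaves, then pairwise-hash each level until one root remains."""
    level = [keccak256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [keccak256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx-1", b"tx-2", b"tx-3", b"tx-4"])

# Any modification to a committed leaf changes the root hash.
assert merkle_root([b"tx-1", b"tx-2", b"tx-3", b"tampered"]) != root
```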

From a security perspective, it is designed to satisfy the standard assumptions for cryptographic hash functions:

  • Preimage resistance: given a hash output $y$, it is computationally infeasible to find an input $x$ such that $h(x) = y$
  • Second preimage resistance: given an input $x$, it is infeasible to find a different input $x’$ such that $h(x) = h(x’)$
  • Collision resistance: it is infeasible to find any two distinct inputs $x, x’$ such that $h(x) = h(x’)$

Probabilistically, by the birthday paradox, we would need to try approximately $2^{128}$ hashes in order to find a collision, which is far beyond feasible. Now where does this function come from, how does it work, and why do we trust it? Keccak-256 is closely related to the standardized SHA-3: Keccak was the winning proposal of NIST's hash function competition, in which leading cryptographers submitted their designs. There is a general consensus that it is safe to use. For more information on the proposal, you can check the Keccak team website directly.
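
As a rough back-of-the-envelope calculation (the hash rate below is an arbitrary, generous assumption), even at $10^{15}$ hashes per second a collision search would take on the order of $10^{16}$ years:

```python
# Birthday bound: roughly 2**128 evaluations to find a collision in a 256-bit hash.
attempts = 2 ** 128
hash_rate = 10 ** 15                # assumed hashes per second (very generous)
seconds_per_year = 365 * 24 * 3600

print(f"{attempts / hash_rate / seconds_per_year:.1e} years")  # ~1.1e16 years
```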

KZG Commitment Scheme

The KZG commitment scheme is one of the main components in Ethereum's scaling roadmap. Specifically, it underpins danksharding, which provides a way to verify that large chunks of data have been published without requiring every node to store that data permanently.

To understand why KZG is necessary, we briefly explain how Layer 2 (L2) scaling works. L2s execute transactions off-chain to save costs. However, these transactions are secure only if an independent observer can verify that they are not fraudulent. If a Layer 2 sequencer claims, “I processed $n$ transactions and the new state is $X$,” but refuses to show the actual transaction data, no one can verify whether they are telling the truth. Data availability guarantees that the transaction data is public so that honest actors can audit the state and dispute it if necessary. To get deeper into danksharding, check the a16z post on danksharding.

Traditionally, data integrity is enforced using Merkle trees. While secure, Merkle proofs grow logarithmically $\mathcal{O}(\log(n))$ with the size of the data. KZG allows committing to an entire blob using a single, constant-size commitment, and later verifying parts of that blob with constant-size proofs, independent of the blob size.

To oversimplify, KZG represents the data blob as a mathematical polynomial $f(x)$. The commitment is essentially the evaluation of this polynomial at a secret value, computed “in the exponent” using elliptic curve points from a trusted setup. Later, when a verifier wants to check that a certain piece of data belongs to the committed blob, the prover generates a small opening proof showing that the polynomial really evaluates to a certain value at a given point. The verifier can check this with a constant number of pairing operations. If even a single coefficient of the polynomial were modified, it would be computationally infeasible to create a valid opening proof against the original commitment. For a clear description of KZG, see my detailed write-up from a different blog.
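
To see the polynomial mechanics without any elliptic curves or pairings, here is a toy, field-only sketch: the opening proof is the quotient polynomial $q(x) = (f(x) - y)/(x - z)$, which divides cleanly exactly when $f(z) = y$. Real KZG performs these computations in the exponent on BLS12-381 points from a trusted setup and checks the same relation with a single pairing; the prime modulus below is an arbitrary stand-in for the real scalar field.

```python
# Toy illustration of the KZG opening identity over a prime field (no curves, no pairings).
P = 2**255 - 19             # arbitrary large prime standing in for the real field modulus

def poly_eval(coeffs, x):
    """Evaluate f(x) = c0 + c1*x + c2*x^2 + ... modulo P (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def opening_proof(coeffs, z, y):
    """Quotient q(x) = (f(x) - y) / (x - z) via synthetic division; fails if f(z) != y."""
    c = [(coeffs[0] - y) % P] + list(coeffs[1:])   # subtract the claimed value
    n = len(c) - 1
    q = [0] * n
    q[n - 1] = c[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = (c[k] + z * q[k]) % P
    assert (c[0] + z * q[0]) % P == 0, "f(z) != y, no valid proof exists"
    return q

blob = [5, 7, 11, 13]       # the "blob" encoded as f(x) = 5 + 7x + 11x^2 + 13x^3
z = 123456789               # evaluation point chosen by the verifier
y = poly_eval(blob, z)      # claimed value f(z)
q = opening_proof(blob, z, y)

# Sanity check of the identity f(t) - y == q(t) * (t - z) at some other point t.
t = 987654321
assert (poly_eval(blob, t) - y) % P == (poly_eval(q, t) * (t - z)) % P

# Tampering with even one coefficient makes a valid proof impossible to construct.
try:
    opening_proof([5, 7, 11, 14], z, y)
except AssertionError:
    print("tampered blob rejected")
```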

When an L2 is acting honestly, the flow is as follows:

  1. Batching: L2 transactions are batched into a “blob”
  2. Commitment: the polynomial commitment to the blob is calculated
  3. Submission: the transaction (containing the blob and the commitment) is sent to Ethereum
  4. Verification: Ethereum nodes verify that the blob data matches the commitment. The network guarantees the data is available for a temporary period
  5. Finalization: The state proposed by the L2 is tentatively accepted. Because the data is guaranteed to be available, other actors can download it, re-execute the transactions, and raise a dispute.

The security of the KZG polynomial commitment scheme rests on the ECDLP mentioned before and also on a variant of the Strong Diffie-Hellman assumption, which is a more specific (stronger) assumption than the Computational Diffie-Hellman assumption mentioned earlier.

Summary

Before we sum up the primitives used in Ethereum, we make a quick detour to the traditional internet. Transport Layer Security (TLS) is the cryptographic protocol that essentially secures communication over the Internet. The protocol is used heavily in email and instant messaging, but most importantly in HTTPS, which is the traditional HTTP protocol run on top of TLS encryption.

Every time you see https:// in your browser, that communication is secured with TLS. In the most recent version, TLS 1.3, there has been a shift towards using Elliptic Curve Cryptography almost exclusively. From the specification's comparison of TLS 1.2 and TLS 1.3:

“Elliptic curve algorithms are now in the base spec, and new signature algorithms, such as EdDSA, are included…”

There are multiple supported curves; if you want, you can check the specification yourself in section 4.2.7. Without getting tangled up in the details of which specific curve is used where, we can oversimplify and claim that the ECDLP is the hardness problem of choice for internet communication. Now, as an experiment, go to your internet banking and check the connection you have to the banking server. Most probably it will be HTTPS relying on elliptic curve cryptography. From this, you can see that if someone breaks these hardness assumptions, it is not only a problem for Ethereum but also for your bank.

Now to sum it up, Ethereum relies on the ECDLP along with the Computational and Strong Diffie-Hellman assumptions. The Strong Diffie-Hellman assumption is a more specific, mathematically stronger assumption than the computational variant.

Now there is also the hash function, right? There is a slight difference between Keccak-256 and the standardized SHA-3 (they differ only in the padding rule), but this does not affect security. These functions use an entirely different construction than SHA-2, and while SHA-2 remains the most widely used, the security margin of SHA-3 is assumed to be higher.

| Primitive | Primary Use Case | Underlying Structure | Hardness Assumption | Key Properties |
| --- | --- | --- | --- | --- |
| ECDSA | Transaction authorization (EOAs), wallet ownership proof | secp256k1 elliptic curve | ECDLP | Public key recovery, deterministic per nonce |
| BLS Signatures | Validator attestations, RANDAO | BLS12-381 curve with pairing properties | ECDLP + Computational Diffie-Hellman, Pairing Inversion | Signature aggregation, deterministic and unique |
| KZG Commitments | Data availability, blob verification (EIP-4844) | Polynomial commitments over elliptic curves | ECDLP + Strong Diffie-Hellman | Constant-size commitments and proofs |
| Keccak-256 | Block hashing, address derivation, Merkle trees, state roots | Sponge construction (SHA-3 family) | Preimage, second preimage, collision resistance | Deterministic 256-bit output, one-way function |

To sum it up, if these problems are broken, the internet has a problem, and Ethereum goes down with it. For now, we can overgeneralize and say that breaking Ethereum's hardness assumptions means breaking the internet :)

Post-Quantum Future

Okay great, but one question that always comes up in this conversation is: what about quantum computers? The truth is that Ethereum is not post-quantum secure. The cryptographic primitives we discussed, ECDSA, BLS signatures, and KZG commitments, all rely on hardness assumptions that a sufficiently powerful quantum computer could break. In the roadmap highlighted by Vitalik, the transition to post-quantum schemes is planned for the last phase, the Splurge.

There is a development team working on Lean Consensus that aims to ship post-quantum security among other improvements. Justin Drake and his team have been developing this lean Ethereum consensus protocol that should address quantum vulnerabilities while also improving efficiency and simplicity. You can follow their progress and technical specifications at leanroadmap.org.

However, there has already been some discussion on Ethereum research forums about what to do in case of an overnight quantum breakthrough. The scenario being planned for is one where quantum computers suddenly become capable of breaking ECDLP, which would allow attackers to derive private keys from public keys and steal funds from any address that has ever made a transaction. In this emergency case, there would be a handbrake mechanism in the form of a hard fork. The proposed recovery mechanism is as follows:

  1. Freeze and rollback: The chain undergoes a hard fork that rolls back to the state before the chain was affected
  2. Disable EOAs: The fork disables all traditional transaction types. You can no longer send ETH by simply signing with your private key, because an attacker could forge such a signature.
  3. Prove ownership: Users cannot safely reveal their private key. However, they can prove they know the seed phrase that generated their address. A new transaction type is added where users submit a STARK proof. This proof mathematically demonstrates: “I know the seed phrase that hashes to this address” without revealing the seed phrase itself.
  4. Migration: Successful proof submission moves the user’s funds into a new wallet equipped with a quantum-safe signature scheme

In this article we explored the cryptographic primitives securing Ethereum: ECDSA for transactions, BLS signatures for consensus, KZG commitments for data availability, and Keccak-256 for hashing. These all rely on hardness assumptions similar to the ones protecting the internet you use every day. Breaking Ethereum means breaking the internet. Looking ahead, teams of really smart people are working on new cryptography to future-proof the protocol against tomorrow's computers. It seems that the security of the protocol has a bright future.

Thanks for reading. In case I am spreading any misinformation, feel free to contact me. Huge thanks to the Ethereum Foundation for making it possible for me to attend Devconnect 2025 <3

Devconnect Scholars 2025