newsence
來源篩選

Scaling the DA layer with Blob Streaming

ethresear.ch

Blob Streaming By @QED , @fradamt , @Julian with special thanks to @soispoke , @aelowsson and @casparschwa for their valuable input. We propose blob streaming : enshrining continuous, sampled pre-propagation of blob data as a first-class in-protocol mechanism, alongside the existing critical-path blob lane. Pre-propagation already happens through the blobpool, but with weak guarantees and without data-availability sampling, it cannot be relied on to safely scale throughput. Blob streaming introduces a ticket mechanism to rate-limit pre-propagation, allowing sampling to be reliably extended to the entire slot without extending the critical path — therefore alleviating the free option problem. Beyond scaling, the mechanism also enables full censorship resistance for blob txs. Introduction Definition: JIT and AOT blobs We say that a blob is JIT (just-in-time) if it is propagated in the critical path of the slot that requires its availability. Otherwise, we say that a blob is AOT (ahead-of-time). Ethereum’s DA machinery was originally designed around JIT blobs, with propagation in the critical path on the Consensus Layer (CL). This machinery has been upgraded via PeerDAS , to enable data-availability sampling. However, the Execution Layer (EL) blobpool has in practice enabled AOT blobs, by providing an avenue for pre-propagation. Though JIT blobs in principle enable functionality that AOT blobs cannot provide — co-creation of blocks and blobs is necessary for synchronous composability with the L1 — all current blob usage is in practice AOT. The blobpool has therefore allowed work to move out of the critical path, spreading bandwidth use over time. Yet this pre-propagation happens with weak guarantees and without data-availability sampling, so the scaling benefits it can provide are limited. Figure 1. Slot timeline visualizing the JIT/AOT definition. AOT blobs (orange) are propagated before the critical path; JIT blobs (yellow) are propagated within the critical path of the slot requiring their availability. Regardless of when and how propagation has happened, attesters verify availability of all blobs. In this post, we propose blob streaming : enshrining AOT blobs as a first-class, ticket-based lane — where users purchase the right to propagate a blob ahead of time — alongside a spot-priced JIT lane — which can be thought of as today’s private blobs. The streaming lane is additive: AOT blobs pre-propagate before the critical path while JIT blobs propagate within it. Crucially, ticket-based rate-limiting bounds the propagation load, making pre-propagation inherently reliable — consistent node views, clear DoS resistance. Moreover, because propagation rights are decoupled from availability determination (unlike in the blobpool), data-availability sampling can be safely layered on top, extending the sampling window to the whole slot and enabling blob throughput to scale. As such, the proposed mechanism can be seen as an alternative to blobpool sampling mechanisms (e.g vertical sharding , horizontal sharding ) capable of achieving the same throughput scaling while also providing: to users: strong censorship resistance guarantees, possibility to acquire blob space in advance, loosening of mempool restrictions for blob txs. to the protocol: clear load bounds and a smaller critical-path propagation window, which mitigates the free option problem . Figure 2. Two-lane view of a slot. AOT blobs pre-propagate before the critical path, spread over time; JIT blobs propagate within the critical path alongside the payload. Attesters verify availability of all blobs. A ticket-based design could power both JIT and AOT blobs. However, we choose to retain the existing JIT path, where capacity is spot-priced rather than allocated with tickets, to accommodate use cases that do not involve ahead of time capacity planning, such as (pure) based rollups and eventually blob usage by the L1 itself ( blocks-in-blobs , native rollups ). Before digging into the specifics of the mechanism, let’s expand on why it is worth it to introduce an AOT lane, by looking at the supply and demand side of AOT and JIT blobs. The protocol’s perspective (supply side) The AOT blob throughput that the system can provide is meaningfully higher than pure JIT throughput, because JIT blobs must propagate within a narrow time window, constrained by: The next proposer’s and builder’s need to determine availability before acting The free option problem , which becomes worse the longer we make this window This creates bandwidth surges during the propagation window while leaving bandwidth effectively unutilized during the remainder of the slot. With pre-propagation from AOT blobs, propagation spreads over a larger time window (see Figure 2), smoothing bandwidth consumption and avoiding bottlenecks. Effective pre-propagation can achieve steady bandwidth usage at capacity, translating to increased throughput. Figure 3. Illustration of bandwidth variation throughout slot in JIT vs AOT blobs. JIT blobs require bandwidth over smaller time periods leading to a more spiky consumption while AOT blobs’ propagation is spread out leading to smoother bandwidth consumption. The user’s perspective (demand side) The main use case for JIT blobs for rollups is synchronous composability with L1, which requires based sequencing as well as real-time proving. In contrast, externally sequenced rollups, as well as based rollups with preconfirmations (see for example Taiko ), can make use of AOT blobs. There might also be a hybrid design of based rollups that combine both preconfirmations and synchronous composability, essentially by using preconfs during most of the L1 slot but switching to based mode during the period of L1 block production, such that synchronous interactions are possible in this period . Such rollups might utilize a mix of JIT and AOT blobs, with JIT blobs being necessary in the period of L1 block production, to enable synchronous interactions with the L1. Crucially, JIT blobs become significant for the L1 as well, once we look ahead to the future we’re rapidly moving toward, where zkEVM + DAS are used to scale the L1 itself by placing its payload into blobs , essentially turning the EL into a validity rollup. Even with these use cases in mind, it is hard to argue exactly how the JIT vs AOT blob demand will shape up in the future. We know synchronous composability is beneficial, but how beneficial exactly? We know it is costly, but how costly exactly? We know that L1 will need JIT blobs, but what portion of the total blob throughput will it consume? We argue that we do not need to be able to precisely predict the future demand of these two blob types in order to determine whether it is worth it to introduce more machinery for AOT blobs in the protocol. As long as there is meaningful demand for AOT blobs (which seems very likely, as not all rollups will want to be fully based, and even a hyperscaled L1 is unlikely to consume all blob throughput), all blob users benefit from moving AOT blob throughput into its own lane. As argued above, AOT blobs can make use of bandwidth outside the critical path that is currently left unused. Letting them use those resources frees up the critical path for JIT blobs. This means that introducing AOT blobs benefits not only users who consume them, but also users of JIT blobs who gain access to more of the constrained critical-path resources. Design We now describe the design, starting from the existing blobpool and building toward the full two-lane architecture. Recap: blobpool tickets Today, pre-propagation happens through the EL blobpool, but without any in-protocol mechanism to allocate propagation bandwidth. As blob throughput grows, the blobpool necessarily fragments — it has no global way to curb inflow, and thus can only do so locally, at the node level. Moreover, the blobpool does not benefit from data-availability sampling, and introducing it is challenging: the blobpool’s security model relies on only propagating valid, includable transactions, something which is hard to ensure when sampling, as a sampling node cannot fully verify blob availability by itself. Figure 4. Pre-propagation in the blobpool today. Note that the blob transaction has to be propagated in tandem with the full blob data since availability of the latter is a validity criterion for the former. Much has been written about blob ticketing mechanisms (see here , here , here ), that propose to auction the right to propagate a blob. Recently, a blobpool ticket mechanism was proposed as a step in this direction, augmenting the blobpool to ensure DoS resistant pre-propagation of blobs in the blobpool, and retaining such a guarantee even if implemented alongside a blobpool sampling mechanism (see vertically sharded mempool and EIP-8070: Sparse Blobpool ). In order to submit and propagate a blob in the blobpool, a submitter would be required to hold a valid ticket, acquired by interacting with a designated ticket contract (for example implementing a first-price auction). This would ensure a limit on the number of blobs propagated through the blobpool, and fairly allocate the limited space. Because blobpool admission is now controlled by pre-paid tickets rather than by validity checks, sampling could also be safely introduced — nodes no longer need to verify full blob availability to decide whether to propagate a transaction. Figure 5. Augmenting the blobpool with blobpool tickets. Blob tickets Blob tickets take the ticketing concept further, moving pre-propagation to the CL and more deeply integrating it into the data availability pipeline. This differs from blobpool tickets in two key ways: For AOT blobs, a ticket is all users need to get a blob included on chain. In particular, AOT blob txs do not pay a blob basefee when included ; they only pay regular gas fees. Hence the name blob tickets instead of blobpool tickets — what you’re buying with a ticket is actually blob space, not just blobpool space. From the protocol’s perspective, this is because propagation is where we actually consume scarce bandwidth resources. (JIT blobs are discussed later and remain spot-priced.) Propagation moves to the CL , reusing its already developed infrastructure for DA sampling, which would otherwise need to be unnecessarily duplicated on the EL. Moreover, DA sampling is fundamentally part of the consensus mechanism, because DA is a precondition to importing a block into the fork-choice. Today, we instead stitch together EL pre-propagation and CL availability enforcement with the getBlobs engine API call . The workflow with tickets is: Buying a ticket : By sending a transaction to the ticket contract, the user can acquire the right to propagate a blob at some point in the future. Propagation : Blob : Using the ticket (at the specified time), the user pushes blob data through the CL sampling infrastructure. Blob tx : Also using the ticket, the blob tx ( without blob data ) goes to the regular mempool . Inclusion and availability enforcement : When a blob tx is included, attesters enforce availability of the associated blobs (identified by blob_tx.versioned_hashes ), exactly as today. Figure 6. User workflow with blob tickets. The user acquires a ticket, propagates the blob on the CL, and submits a blob tx to the EL mempool. Once included in a block, a blob tx functions exactly as today, conditioning block validity on availability of the related blobs and thus giving the user strict availability guarantees. Each ticket grants the right to propagate: One blob on the CL (propagation right) Multiple blob txs on the EL. For example up to 16, matching the max number of blob txs ( maxTxsPerAccount ) currently allowed by the Geth blobpool, as well as the default maximum number of guaranteed mempool slots for regular txs in the Geth mempool ( AccountSlots ). Except, the limit here is per ticket, not per account . These two rights are independent — the CL and EL each track ticket usage separately. This means a ticket holder can propagate their blob data on the CL and their blob tx on the EL in parallel, without coordination between the layers. Allowing multiple blob txs to propagate in the EL mempool with a single ticket lets users do resubmissions without purchasing another ticket, for example in case of base fee changes that invalidate the transaction. User-note : Tying blob tx propagation to tickets, rather than to their validity, lets the mempool do without the strict rules that it currently applies to them. In particular, the same address can be allowed to queue up many blob txs in parallel, because one tx invalidating the others isn’t a concern — again, the right to propagate is about the ticket, not about validity! This is a concrete pain point for blob submitters, which if unaddressed is only going to get worse as throughput of individual L2s increases, since the cap on the number of blobs per tx will lead to an increase in the rate of txs from each L2. The hybrid design: AOT + JIT Given what we have discussed so far, AOT and JIT can in principle be structurally the same: we could design the system so that all blob capacity, including critical-path capacity, is sold via tickets. In such a system, the distinction between JIT and AOT would be purely about propagation timing. That ticket-only design works well for the blob demand of actors who can plan throughput and buy tickets ahead of time, like operators of externally sequenced rollups. However, it breaks down once demand includes open user flow with no canonical ticket manager: users cannot be expected to source tickets ahead of time, and making builders intermediate tickets by default creates inventory, capital, and centralization pressure. In demand spikes, ticket inventory can become the bottleneck even when the network still has critical-path propagation capacity. This applies not only to L1 itself once it eventually uses blobs ( blocks-in-blobs ), but also to based rollups. For this reason, the final architecture in this post is explicitly hybrid, introducing ticket-based AOT blobs alongside a spot-priced JIT lane . End users can then show up with transactions and directly pay for critical-path blob resources (JIT blobs, via blob basefee), while planned flow (AOT blobs) can use tickets (and benefits from doing so). In the payload, this separation is made explicit by introducing two lists of versioned hashes: jit_versioned_hashes : for blobs that the builder commits to just-in-time. They are propagated (sampled) alongside the payload and pay for their resources immediately (through a JIT blob basefee set equal to \text{bf}^{AOT} - see the section of the ticket contract for more details), exactly like today’s blob lane. aot_versioned_hashes : blobs that were pre-propagated with tickets , now being asserted as available. The payment for blob resources has already happened when purchasing the ticket, no immediate payment is required. Both lists condition the payload’s validity on availability : all blobs corresponding to jit_versioned_hashes and aot_versioned_hashes must be available in order for the payload to become canonical. The difference is only how propagation resources are paid for and consumed. In summary, the network’s propagation resources have been explicitly split into two buckets: critical path and outside the critical path. Different markets govern the allocations of the two resources: an ahead-of-time onchain ticket auction for pre-propagation capacity, and a just-in-time spot market for critical-path propagation, where the builder has ultimate inclusion power. The JIT mechanism is the same product as today’s blob lane: critical-path propagation, builder-driven inclusion, and spot payment via blob basefee at inclusion. Note however that since there is no blobpool pre-propagation for JIT blobs, users will have to communicate their blobs directly to builders. As such, JIT blobs correspond to today’s private blobs . The AOT mechanism is additive, providing an additional pathway with different properties: ahead-of-time ticket purchase, pre-propagation and therefore higher capacity, and, as we will see, censorship resistance (CR) guarantees. This path becomes the default for any blob that can be pre-propagated. JIT vs AOT capacity A fundamental question that arises in the hybrid design is about the resource split: how much of the network’s capacity to propagate blob data do we want to devote to JIT blobs and how much to AOT blobs? We propose a design where capacity constraints are governed by three parameters, which have to be set much like the blob gas target and limit are set today: B_1 ( JIT max ): the maximum number of JIT blobs per slot. This is an upper bound determined by how long we are willing to make the critical path — and correspondingly, how large a free option window we tolerate. B_2 ( blob (JIT + AOT) max ): the maximum total number of blobs per slot, an aggregate limit for JIT + AOT. This is determined by the network’s total propagation throughput over the course of a slot. R \leq B_1 ( reserved JIT capacity ): a portion of JIT capacity that is protected from AOT usage. These parameters induce the following rules for a given slot n : AOT ticket sales : Up to B_2 - R tickets can be sold for a future slot. Since R is reserved for JIT, at most B_2 - R blobs can be scheduled ahead of time. JIT capacity : If a \leq B_2 - R AOT blobs have been scheduled for slot n , then up to \min(B_1,\, B_2 - a) JIT blobs can be included. This guarantees at least R JIT capacity, and allows JIT to expand up to B_1 when AOT demand is low. B_2 is purely a technical constraint, reflecting the network’s total propagation budget over a slot. B_1 requires more discretion: it is bounded by the slot structure but may be set lower to limit the free option window. However, B_1 is not opinionated about JIT vs AOT — all capacity beyond R is shared, so a higher B_1 simply lets JIT expand further into the shared pool when AOT demand is low. The most interesting design choice is R . The unreserved capacity B_2 - R is a shared pool usable by both AOT (via tickets) and JIT (if AOT demand leaves room). Setting R too low risks underserving JIT needs, which is particularly problematic once the L1 itself relies on JIT blobs. Setting R too high pushes capacity that could have been sold as tickets into the JIT-only path; the reserved capacity is never fully lost — any AOT activity can in principle use a JIT blob — but users are forced to go through builders directly and lose the protocol’s censorship resistance guarantees. Baseline pricing As a baseline, the existing blob base fee update mechanism can be applied just as today. Given the outlined limits, \min(B_1,\, B_2 - a) + B_2 - R blobs can be sold in the slot, \min(B_1,\, B_2 - a) as JIT blobs for the current slot and B_2 - R as tickets for a future slot. At most B_1 + B_2 - R blobs can thus be sold, in the case that a low number of AOT blobs were scheduled for the slot. The blobSchedule.target could be situated at B_2\times2/3 and blob_gas_used computed from the total number of JIT blobs and AOT tickets sold in the slot, multiplied by GAS_PER_BLOB . In practice, we would also add to the blobSchedule the new variables B_1 , B_2 and R , which impose the more granular capacity constraints that the overall mechanism demands. Ticket contract and pricing mechanisms for higher throughput As mentioned earlier, in order to buy a ticket a transaction is sent to a dedicated ticket contract. The contract has two main functions, it outputs new tickets and updates old tickets if they got used or expired. The ticket contract outputs AOT tickets according to the aforementioned capacity constraints. Tickets are priced according to a the AOT base-fee, \text{bf}^{AOT} , in turn set by a target AOT blob capacity through a EIP-1559 type controller mechanism. As we noted earlier the base fee for JIT is set equal to \text{bf}^{AOT} . Concretely this means that \text{bf}^{AOT} increases or decreases from slot i to slot i+1 if more or , respectively less, tickets are sold than \text{AOT}_{\text{target}} . Here \text{AOT}_{\text{target}} is a function of the AOT blob capacity B_2 - R e.g \text{AOT}_{\text{target}}=B_2 - R . The update rule for the base fee could be exactly the same as used for EIP-1559: $$ \text{bf}^{AOT} {i+1} = \text{bf}^{AOT} {i} \times \Big(1 + \frac{1}{8} \times \frac{\text{AOT} {i} - \text{AOT} {\text{target}}}{\text{AOT}_{\text{target}}}\Big) $$ where \text{AOT}_{i} is the number of tickets sold in slot i . Each ticket transaction needs to specify four variables base_fee , auction_bid , number of tickets — from which the contract can deduct auction_bid_per_ticket = auction_bid / number of tickets , base_fee_per_ticket = base_fee / number of tickets — and sender_adress which specifies the address getting the tickets. For the ticket transaction to be eligible to aquire tickets the condition base_fee_per_ticket \geq\text{bf}^{AOT} needs to be satisfied. Transactions are then ordered by decreasing auction_bid_per_ticket and tickets are allocated up to a limit of at most 2\times(B_2 - R) . In the case of overdemand (i.e if the total number of tickets users are trying to buy in a given slot exceeds the limit of 2\times(B_2 - R) both the base_fee and auction_bid of those transactions that acquire tickets is burned while the corresponding values of transactions that fail to acquire a ticket are returned to the senders address. Recall that the AOT blob throughput capacity is B_2 - R . Thus, all tickets (up to B_2 - R since we set the limit to twice the capacity) corresponding to transactions in the lower end of the ordered transaction list become valid for the next slot. For example let B_2 - R=5 and the following ordering of bidders in slot N . Alice: 3 tickets, Bob: 4 tickets, Charlie: 3 tickets. Then Alice will receive 3 tickets for the slot N+1 , Bob will receive 2 tickets for slot N+1 and 2 for slot N+2 while Charlie will receive 3 tickets for slot N+2 . In the case where demand does not exceed 2\times(B_2 - R) , a value corresponding to \text{bf}^{AOT}\times number of tickets gets burned from each transaction. Finally, note that there is no reason to only sell tickets for the next slot. Rather we could consider selling tickets in slot N which are valid for slot N+k . Being able to acquire tickets well in advance can have important benefits to L2s which can use this to more accurately price their own transactions given apriori knowledge of the price of a given blob. Determining the details of this — how large should k be? Should users be able to specify which slots they want tickets from a range of options ? — and so on is left as an open question at this point. Censorship resistance A key promise of the AOT lane is censorship resistance for blob txs. Tickets let us identify a restricted set of blobs whose availability can be established by a committee, so that they can be given inclusion guarantees — each ticket can be seen as also granting the role of inclusion list proposer, for a single blob . However, the base system with blob tickets alone does not fully deliver on this. In what follows, we identify the gap, show how recording availability onchain via a DA contract addresses it, and build end-to-end inclusion guarantees on top. Limitations of blob tickets Blob tickets are a very meaningful improvement, but leave a gap in the censorship resistance story. FOCIL is a censorship resistance mechanism that allows a committee to guarantee inclusion of certain transactions (for example taken from the mempool). However, blob tx validity depends on availability. Since availability hasn’t been recorded anywhere, inclusion of blob txs cannot be enforced. The root cause is that availability can only be determined at the moment of blob tx inclusion : even after a blob propagates and everyone has sampled it, there is no record of which blobs are available, and the only way to assert availability is through inclusion of a blob tx on chain. Recording availability: DA contract A clean way to address this is to record availability independently of blob tx inclusion. We introduce two changes: Payloads can contain versioned hashes independent of blob txs. A builder can include a list of versioned hashes for blobs whose availability it wants to assert, even without corresponding blob txs in the same block. Availability is recorded in a DA contract . At the start of each block, a system call records the versioned hashes from the payload into a DA contract. This creates a record of which blobs are available, queryable by nodes (as part of mempool and FOCIL participation) as well as within the EVM. Enforcing availability works exactly like today: attesters only vote for blocks whose blobs are available. If a builder includes a versioned hash for unavailable data, the block won’t gain attestations. The only change is that versioned hashes can now come from the payload directly, not just from blob txs. In addition, we adjust the onchain behavior of blob txs to work with the contract. Blob tx validity remains conditional on availability of the corresponding blobs, but to ensure this we now check the DA contract for availability of blob_tx.versioned_hashes , as part of validating blob_tx . In particular, we do not require blob_tx.versioned_hashes to be included in payload.versioned_hashes if they had already been recorded as available in a previous block. Availability only has to be established once. Note: since the DA contract is queryable within the EVM, regular transactions can also check blob availability, for example a contract could condition its logic on whether a specific blob is available. Blob txs can remain the primary interface, but the DA contract opens the door to more flexible interactions with availability. Figure 7. Availability recording and enforcement, and blob tx validity. The builder includes a list of versioned hashes referring to available blobs, whose availability is enforced through attestations. The versioned hashes are recorded in a dedicated DA contract, and blob tx validation checks availability in the contract. Moreover, we adjust how we handle blob txs in the mempool, so that the mempool can benefit from the availability information in the contract. A blob tx can now propagate in the mempool if either: 1. Availability is recorded : The referenced blobs’ availability is recorded in the DA contract, OR 2. Sender holds an unused ticket : The sender has a valid ticket that hasn’t been used yet on the EL (as seen locally by the node). In other words, a ticket is only necessary if the blob tx is propagated prior to availability being recorded, e.g. in parallel with the blob itself. Once availability is recorded, a blob tx can propagate according to normal mempool rules, exactly like a regular tx. This also resolves a practical limitation of the base ticket system: without the DA contract, a ticket only grants a few blob tx submissions, and the mempool must restrict blob tx propagation to ticket holders. With availability recorded, these constraints disappear — resubmission, for example, does not need to be handled in any special way. Figure 8. The full picture of blob and blob tx propagation with the DA contract. A ticket grants two independent propagation rights: one blob on the CL and a few blob txs on the EL. After availability is recorded in the DA contract, blob txs can propagate freely without tickets, enabling unlimited resubmission. Full inclusion story for blob txs With availability recorded independently of blob txs, we can now provide censorship resistance to blob txs, by first ensuring inclusion (availability determination) of blobs. We do so with a mechanism based on blob tickets, and adapting FOCIL-like fork-choice enforcement to blobs: Each PTC (Payload Timeliness Committee) member observes which blobs have been propagated by a deadline prior to block production. They sample these blobs and form a local view of availability. Members send lists of versioned hashes they observed as available. A majority vote determines which versioned hashes the proposer must include in the payload (and thus record in the DA contract). The proposer may include additional blobs but cannot exclude those the PTC requires. Attesters enforce this: they only vote for blocks that include PTC-required versioned hashes, unless the attester locally doesn’t see those blobs as available (safety always takes precedence). Note that the proposer is now constrained from both directions when it comes to blob inclusions: they must include what the PTC requires (liveness) and cannot include what isn’t available (safety). Crucially, once availability of a blob has been recorded, a blob tx referencing it becomes equivalent to a regular transaction, because its additional validity condition is guaranteed to be satisfied: As already mentioned, it can propagate without tickets, according to normal mempool rules It can be included through normal FOCIL Moreover, ticket txs are themselves regular txs, benefitting from the same mempool and FOCIL infrastructure. Therefore, we get an end-to-end censorship resistance story for blob txs. Figure 9. End-to-end inclusion guarantees for blob txs. The PTC enforces inclusion of propagated blobs (as versioned hashes recorded in the DA contract), while FOCIL provides inclusion guarantees for the ticket-buying and blob transactions. Note that this end-to-end censorship resistance story applies to blob txs that use AOT blobs. Even an AOT blob propagated after the PTC deadline for a given slot can still receive inclusion guarantees from the next slot onward, as long as it remains available — this mirrors regular FOCIL behavior, where transactions not in the mempool by the IL (inclusion list) deadline cannot be force-included in the current block but can be in the next. JIT blobs, on the other hand, are by definition only propagated when included in a block — there is no pre-inclusion propagation path, and therefore no way to determine their availability and guarantee their inclusion ahead of time. By the time a JIT blob is allowed to propagate, it is already included. We can think of JIT capacity as assigning a portion of the tickets (the “critical path” portion) to the proposer, who resells the right to the builder: inclusion is then entirely at the builder’s discretion (but incentivized by priority fees). For use cases that truly need JIT blobs — block and blob co-creation for synchronous composability — there is no alternative to having the builder do this, so the lack of inclusion guarantees is inherent. For use cases that don’t strictly need co-creation, a blob that fails to be included as JIT can always be re-submitted as AOT, gaining the full censorship resistance guarantees described above. We conclude with a full picture of the design, contrasting it with today’s system. Figure 10. Today (top) vs Blob Streaming (bottom). Today, blob data, availability determination, and execution are coupled in a single block. With blob streaming, AOT blobs pre-propagate via tickets and are validated against the DA contract; JIT blobs propagate in the critical path as today. The DA contract records availability via a system call, after which blob txs are validated against it and execute normally. Appendix DA system contract Design considerations Write pattern: At the start of each block, a system call records the versioned hashes of blobs whose availability is being asserted in that block. Read pattern: Contracts query by versioned hash to check if a blob is available. This should be cheap and simple. Storage management: Entries must be periodically deleted, to bound storage growth. At 128 blobs per slot, unbounded storage would grow ~10 GB per year. Current-block access: Transactions in the same block as availability recording must be able to check availability without external proofs, since they cannot produce proofs for data just recorded. Contract design The contract maintains a recent window (~128 blocks) via a ring buffer, enabling O(1) proof-free queries. This covers current-block access and typical rollup use cases. Beyond that, users can prove inclusion against versioned_hashes_root stored in each block’s header. This keeps contract storage minimal while still enabling availability queries for a long period, arguably more than enough to make sure that a user can land a tx onchain after a blob’s availability has been determined. Note that checking the DA contract as part of validating a blob tx is very cheap when the versioned_hashes are included in the current payload, since it’s a warm read — the versioned hashes were written to the DA contract at block start. The current usage pattern, with availability determination happening at the same time as execution, is then essentially unaffected. # Constants BLOCK_WINDOW = 128 MAX_BLOBS_PER_BLOCK = 128 RECENT_RING_SIZE = BLOCK_WINDOW * MAX_BLOBS_PER_BLOCK # Storage recent_vhs_buffer: list[Optional[bytes]] = [None] * RECENT_RING_SIZE recent_availability: dict[bytes, int] = {} # vh => 1 if in recent window recent_write_cursor: int = 0 def record_availability(versioned_hashes: list[bytes]): """Called via system call at block start.""" global recent_write_cursor for vh in versioned_hashes: # Clear old entry at current position old_vh = recent_vhs_buffer[recent_write_cursor] if old_vh is not None: del recent_availability[old_vh] # Record new entry recent_vhs_buffer[recent_write_cursor] = vh recent_availability[vh] = 1 recent_write_cursor = (recent_write_cursor + 1) % RECENT_RING_SIZE def is_available(versioned_hash: bytes) -> bool: """Check availability in recent window (no proof needed).""" return recent_availability.get(versioned_hash) == 1 1 post - 1 participant Read full topic

newsence

透過 Blob Streaming 擴展數據可用性層

ethresear.ch
3 天前

AI 生成摘要

我們提出 Blob Streaming:將持續且經過抽樣的 Blob 數據預傳播確立為協議內的一等機制,與現有的關鍵路徑 Blob 通道並行,藉此在不延長關鍵路徑的情況下安全地擴展吞吐量並實現完全的抗審查性。

Blob Streaming(Blob 串流)

作者:, , ;特別感謝 , 提供寶貴意見。

我們提議 Blob Streaming:將連續、抽樣的 Blob 數據預傳播(pre-propagation)作為協議內的一等公民機制,與現有的關鍵路徑(critical-path)Blob 通道並行。預傳播目前已透過 Blob 池(blobpool)實現,但由於缺乏強保障且沒有數據可用性抽樣(DAS),無法依賴它來安全地擴展吞吐量。Blob Streaming 引入了票據(ticket)機制來限制預傳播的速率,允許抽樣可靠地擴展到整個時隙(slot)而無需延長關鍵路徑,從而緩解了「免費期權問題」(free option problem)。除了擴展性,該機制還能為 Blob 交易提供完全的抗審查性。

簡介

定義:JIT 與 AOT Blob

如果一個 Blob 是在需要其可用性的時隙關鍵路徑中傳播的,我們稱之為 JIT(Just-In-Time,即時)。否則,我們稱之為 AOT(Ahead-Of-Time,提前)。
以太坊的 DA(數據可用性)機制最初是圍繞 JIT Blob 設計的,傳播發生在共識層(CL)的關鍵路徑中。該機制已透過 升級,以支持數據可用性抽樣。然而,執行層(EL)的 Blob 池在實踐中透過提供預傳播途徑,啟用了 AOT Blob。

雖然 JIT Blob 在原則上能提供 AOT Blob 無法提供的功能——區塊與 Blob 的共同創建對於與 L1 的是必要的——但目前所有的 Blob 使用在實踐中都是 AOT。因此,Blob 池允許工作移出關鍵路徑,將頻寬使用分散到不同時間。然而,這種預傳播的保障較弱且缺乏數據可用性抽樣,因此其提供的擴展效益有限。

圖 1. 視覺化 JIT/AOT 定義的時隙時間線。AOT Blob(橘色)在關鍵路徑之前傳播;JIT Blob(黃色)在需要其可用性的時隙關鍵路徑內傳播。無論傳播何時及如何發生,證明者(attesters)都會驗證所有 Blob 的可用性。

在本文中,我們提議 Blob Streaming:將 AOT Blob 確立為一種基於票據的一等公民通道——用戶購買提前傳播 Blob 的權利——並與現有的現貨定價 JIT 通道(可視為今天的私有 Blob)並行。串流通道是增量式的:AOT Blob 在關鍵路徑之前預傳播,而 JIT Blob 在關鍵路徑內傳播。至關重要的是,基於票據的速率限制約束了傳播負載,使預傳播具有內在的可靠性——一致的節點視角、明確的抗 DoS 能力。此外,由於傳播權與可用性判定解耦(與 Blob 池不同),數據可用性抽樣可以安全地分層其上,將抽樣窗口擴展到整個時隙並實現 Blob 吞吐量的擴展。因此,提議的機制可被視為 Blob 池抽樣機制(如 )的替代方案,能夠實現相同的吞吐量擴展,同時還提供:

  • 對用戶:強大的抗審查保障、提前獲取 Blob 空間的可能性、放寬 Blob 交易的內存池(mempool)限制。

  • 對協議:明確的負載邊界和更小的關鍵路徑傳播窗口,從而減輕

圖 2. 時隙的雙通道視圖。AOT Blob 在關鍵路徑之前預傳播,隨時間分散;JIT Blob 與有效載荷(payload)一起在關鍵路徑內傳播。證明者驗證所有 Blob 的可用性。

基於票據的設計可以同時驅動 JIT 和 AOT Blob。然而,我們選擇保留現有的 JIT 路徑(其容量採現貨定價而非票據分配),以適應不涉及提前容量規劃的使用場景,例如(純粹的),以及最終 L1 自身的 Blob 使用()。

在深入探討機制細節之前,讓我們從 AOT 和 JIT Blob 的供需兩端來詳述為什麼值得引入 AOT 通道。

協議視角(供應端)

系統能提供的 AOT Blob 吞吐量顯著高於純 JIT 吞吐量,因為 JIT Blob 必須在狹窄的時間窗口內傳播,受限於:

  • 下一個提議者和構建者在行動前需要確定可用性。

  • ,該窗口越長,問題越嚴重。

這導致傳播窗口期間出現頻寬激增,而時隙的其餘時間頻寬實際上未被利用。透過 AOT Blob 的預傳播,傳播分散在更大的時間窗口內(見圖 2),平滑了頻寬消耗並避免了瓶頸。有效的預傳播可以實現穩定的頻寬使用,轉化為更高的吞吐量。

圖 3. JIT 與 AOT Blob 在整個時隙中頻寬變化的說明。JIT Blob 需要在較短時間內佔用頻寬,導致消耗更具峰值性;而 AOT Blob 的傳播是分散的,導致頻寬消耗更平滑。

用戶視角(需求端)

Rollup 使用 JIT Blob 的主要場景是與 L1 的同步組合性,這需要以及實時證明。相比之下,外部排序的 Rollup 以及(例如 )可以使用 AOT Blob。

也可能存在一種結合了預確認和同步組合性的基於型 Rollup ,本質上是在 L1 時隙的大部分時間使用預確認,但在 L1 區塊生產期間切換到基於模式,以便在該期間進行同步交互。這類 Rollup 可能混合使用 JIT 和 AOT Blob,其中 JIT Blob 在 L1 區塊生產期間是必要的,以實現與 L1 的同步交互。

至關重要的是,一旦我們展望未來——zkEVM + DAS 被用於透過將其 來擴展 L1 自身,本質上將 EL 轉變為一個有效性 Rollup(validity rollup)——JIT Blob 對 L1 也變得意義重大。

即便考慮到這些場景,也很難準確預測未來 JIT 與 AOT Blob 的需求分佈。我們知道同步組合性是有益的,但具體多有益?我們知道它成本高昂,但具體多高?我們知道 L1 將需要 JIT Blob,但它會消耗總吞吐量的多少比例?

我們認為,不需要精確預測這兩類 Blob 的未來需求,也能判斷協議中是否值得為 AOT Blob 引入更多機制。只要對 AOT Blob 有實質性需求(這看起來非常可能,因為並非所有 Rollup 都想完全基於 L1,且即便超大規模擴展的 L1 也不太可能消耗所有 Blob 吞吐量),所有 Blob 用戶都能從將 AOT Blob 吞吐量移入獨立通道中獲益。 如前所述,AOT Blob 可以利用目前閒置的關鍵路徑外頻寬。讓它們使用這些資源可以為 JIT Blob 騰出關鍵路徑。這意味著引入 AOT Blob 不僅惠及使用它們的用戶,也惠及 JIT Blob 的用戶,因為後者可以獲得更多受限的關鍵路徑資源。

設計

我們現在描述該設計,從現有的 Blob 池出發,逐步構建完整的雙通道架構。

回顧:Blob 池票據(blobpool tickets)

今天,預傳播透過 EL Blob 池發生,但沒有任何協議內機制來分配傳播頻寬。隨著 Blob 吞吐量增長,Blob 池必然會碎片化——它沒有全局方法來遏制流入,因此只能在節點層級局部進行。此外,Blob 池無法從數據可用性抽樣中獲益,且引入抽樣具有挑戰性:Blob 池的安全模型依賴於僅傳播有效的、可包含的交易,這在抽樣時很難確保,因為抽樣節點本身無法完全驗證 Blob 的可用性。

圖 4. 今天的 Blob 池預傳播。注意,Blob 交易必須與完整的 Blob 數據一起傳播,因為後者的可用性是前者的有效性標準。

關於 Blob 票據機制已有許多討論(見),提議拍賣傳播 Blob 的權利。最近,一種 被提議作為邁向此方向的一步,增強 Blob 池以確保 Blob 的抗 DoS 預傳播,即使與 Blob 池抽樣機制(見)一起實施,也能保留這種保證。

為了在 Blob 池中提交和傳播 Blob,提交者需要持有有效的票據,該票據透過與指定的票據合約(例如實施第一價格拍賣)交互獲得。這將確保限制透過 Blob 池傳播的 Blob 數量,並公平分配有限的空間。由於 Blob 池的准入現在由預付票據而非有效性檢查控制,抽樣也可以安全地引入——節點不再需要驗證完整的 Blob 可用性來決定是否傳播交易。

圖 5. 使用 Blob 池票據增強 Blob 池。

Blob 票據(Blob tickets)

Blob 票據進一步擴展了票據概念,將預傳播移至共識層(CL),並更深層地整合到數據可用性流水線中。這與 Blob 池票據有兩個關鍵不同點:

  • 對於 AOT Blob,票據是用戶在鏈上包含 Blob 所需的全部。 特別是,AOT Blob 交易在包含時不支付 Blob 基礎費用(basefee);它們僅支付常規 Gas 費。因此稱為 Blob 票據而非 Blob 池 票據——你購買的是實際的 Blob 空間,而不僅僅是 Blob 池空間。從協議的角度來看,這是因為傳播才是我們實際消耗稀缺頻寬資源的地方。(JIT Blob 稍後討論,仍維持現貨定價。)

  • 傳播移至共識層(CL),重用其已開發的 DA 抽樣基礎設施,否則這些設施需要在執行層(EL)上不必要地重複構建。此外,DA 抽樣從根本上是共識機制的一部分,因為 DA 是將區塊導入分叉選擇(fork-choice)的前提條件。今天,我們則是透過 將 EL 預傳播和 CL 可用性強制執行縫合在一起。

使用票據的工作流程為:

  • 購買票據:透過向票據合約發送交易,用戶可以獲得在未來某個時間點傳播 Blob 的權利。

  • 傳播

    • Blob:使用票據(在指定時間),用戶透過 CL 抽樣基礎設施推送 Blob 數據。
    • Blob 交易:同樣使用票據,Blob 交易(不含 Blob 數據)進入常規內存池
  • 包含與可用性強制執行:當 Blob 交易被包含時,證明者強制執行相關 Blob(由 blob_tx.versioned_hashes 標識)的可用性,與今天完全一致。

圖 6. 使用 Blob 票據的用戶工作流程。用戶獲取票據,在 CL 上傳播 Blob,並向 EL 內存池提交 Blob 交易。一旦包含在區塊中,Blob 交易的功能與今天完全相同,將區塊有效性與相關 Blob 的可用性掛鉤,從而為用戶提供嚴格的可用性保障。

每張票據授予以下權利:

  • 在 CL 上傳播一個 Blob(傳播權)。

  • 在 EL 上傳播多個 Blob 交易。例如最多 16 個,匹配目前 Geth Blob 池允許的每個帳戶最大 Blob 交易數(maxTxsPerAccount),以及 Geth 內存池中常規交易的默認最大保證槽位數(AccountSlots)。不同之處在於,這裡的限制是按票據計算,而非按帳戶

這兩項權利是獨立的——CL 和 EL 分別跟蹤票據使用情況。這意味著票據持有人可以並行地在 CL 上傳播其 Blob 數據並在 EL 上傳播其 Blob 交易,而無需兩層之間的協調。允許單張票據在 EL 內存池中傳播多個 Blob 交易,方便用戶在基礎費用變動導致交易失效等情況下進行重新提交,而無需購買新票據。

用戶筆記:將 Blob 交易傳播與票據掛鉤,而非與其有效性掛鉤,可以讓內存池擺脫目前對其應用的嚴格規則。特別是,可以允許同一個地址並行排隊多個 Blob 交易,因為一個交易使其他交易失效不再是問題——同樣,傳播權是關於票據的,而非關於有效性的!這是 Blob 提交者的一個具體痛點,如果不解決,隨著單個 L2 吞吐量的增加,這個問題只會惡化,因為每個交易的 Blob 數量上限將導致每個 L2 的交易頻率增加。

混合設計:AOT + JIT

基於目前的討論,AOT 和 JIT 在原則上可以結構相同:我們可以設計系統使所有 Blob 容量(包括關鍵路徑容量)都透過票據出售。在這樣的系統中,JIT 和 AOT 的區別純粹在於傳播時機。

這種「僅限票據」的設計對於能夠規劃吞吐量並提前購買票據的參與者(如外部排序的 Rollup 運營商)運作良好。然而,一旦需求包含沒有規範票據管理者的開放用戶流,這種設計就會崩潰:不能要求用戶提前獲取票據,而讓構建者默認中介票據會產生庫存、資金和中心化壓力。在需求激增時,票據庫存可能成為瓶頸,即使網絡仍有關鍵路徑傳播容量。這不僅適用於未來使用 Blob 的 L1 自身(),也適用於基於型 Rollup。

因此,本文最終的架構是明確的混合型,引入基於票據的 AOT Blob 以及現貨定價的 JIT 通道。終端用戶可以直接帶著交易為關鍵路徑 Blob 資源付費(透過 Blob 基礎費用使用 JIT Blob),而有計劃的流量(AOT Blob)可以使用票據(並從中獲益)。在有效載荷中,這種分離透過引入兩個版本化哈希(versioned hashes)列表來明確:

  • jit_versioned_hashes:構建者承諾即時處理的 Blob。它們與有效載荷一起傳播(抽樣),並立即支付資源費用(透過設置為等於 $\text{bf}^{AOT}$ 的 JIT Blob 基礎費用——詳見票據合約章節),與今天的 Blob 通道完全一樣。

  • aot_versioned_hashes:使用票據預傳播的 Blob,現在被斷言為可用。Blob 資源的支付在購買票據時已經發生,不需要立即支付。

兩個列表都將有效載荷的有效性與可用性掛鉤:所有對應於 jit_versioned_hashesaot_versioned_hashes 的 Blob 必須可用,有效載荷才能成為規範鏈的一部分。區別僅在於傳播資源如何支付和消耗。

總之,網絡的傳播資源被明確分為兩個桶:關鍵路徑和關鍵路徑外。不同的市場管理這兩類資源的分配:針對預傳播容量的提前鏈上票據拍賣,以及針對關鍵路徑傳播的即時現貨市場(構建者擁有最終包含權)。

JIT 機制與今天的 Blob 通道是相同的產品:關鍵路徑傳播、構建者驅動的包含,以及包含時透過 Blob 基礎費用進行現貨支付。但請注意,由於 JIT Blob 沒有 Blob 池預傳播,用戶必須直接將其 Blob 傳達給構建者。因此,JIT Blob 對應於今天的私有 Blob。AOT 機制是增量式的,提供了具有不同屬性的額外路徑:提前購買票據、預傳播(因此容量更高),以及我們將看到的抗審查(CR)保障。對於任何可以預傳播的 Blob,這條路徑將成為默認選擇。

JIT 與 AOT 容量

混合設計中出現的一個根本問題是資源分配:我們希望將多少網絡 Blob 數據傳播容量分配給 JIT Blob,多少分配給 AOT Blob?我們提議一種設計,其容量約束由三個參數管理,這些參數的設置方式與今天的 Blob Gas 目標值和限制值類似:

  • $B_1$ (JIT 最大值):每個時隙 JIT Blob 的最大數量。這是一個上限,取決於我們願意讓關鍵路徑延長多久——以及相應地,我們能容忍多大的免費期權窗口。

  • $B_2$ (Blob (JIT + AOT) 最大值):每個時隙 Blob 的總數最大值,是 JIT + AOT 的總和限制。這取決於網絡在一個時隙內的總傳播吞吐量。

  • $R \leq B_1$ (預留 JIT 容量):受保護、不被 AOT 使用的一部分 JIT 容量。

這些參數在給定時隙 $n$ 誘導出以下規則:

  • AOT 票據銷售:未來某個時隙最多可售出 $B_2 - R$ 張票據。由於 $R$ 是為 JIT 預留的,最多可以提前調度 $B_2 - R$ 個 Blob。

  • JIT 容量:如果時隙 $n$ 已調度 $a \leq B_2 - R$ 個 AOT Blob,則最多可以包含 $\min(B_1,, B_2 - a)$ 個 JIT Blob。這保證了至少有 $R$ 的 JIT 容量,並允許在 AOT 需求低時 JIT 擴展到 $B_1$。

$B_2$ 純粹是技術約束,反映了網絡在一個時隙內的總傳播預算。$B_1$ 需要更多權衡:它受時隙結構限制,但可能設置得更低以限制免費期權窗口。然而,$B_1$ 對 JIT 與 AOT 並無偏好——超過 $R$ 的所有容量都是共享的,因此較高的 $B_1$ 僅僅是讓 JIT 在 AOT 需求低時能進一步擴展到共享池中。

最有趣的設計選擇是 $R$。未預留的容量 $B_2 - R$ 是一個共享池,可供 AOT(透過票據)和 JIT(如果 AOT 需求留有空間)使用。將 $R$ 設置得太低可能無法滿足 JIT 需求,這在 L1 自身依賴 JIT Blob 時尤為成問題。將 $R$ 設置得太高則會將本可以作為票據出售的容量推入僅限 JIT 的路徑;預留容量永遠不會完全損失——任何 AOT 活動原則上都可以使用 JIT Blob——但用戶將被迫直接通過構建者,並失去協議的抗審查保障。

基準定價

作為基準,現有的 Blob 基礎費用更新機制可以像今天一樣應用。給定上述限制,時隙中最多可以售出 $\min(B_1,, B_2 - a) + B_2 - R$ 個 Blob,其中 $\min(B_1,, B_2 - a)$ 作為當前時隙的 JIT Blob,而 $B_2 - R$ 作為未來時隙的票據。在 AOT 調度數量較少的情況下,最多可售出 $B_1 + B_2 - R$ 個 Blob。blobSchedule.target 可以設定在 $B_2 \times 2/3$,而 blob_gas_used 則根據時隙中售出的 JIT Blob 和 AOT 票據總數乘以 GAS_PER_BLOB 來計算。在實踐中,我們還會在 blobSchedule 中加入新變量 $B_1, B_2$ 和 $R$,以施加整體機制所需的更細粒度的容量約束。

票據合約與高吞吐量定價機制

如前所述,為了購買票據,需要向專用的票據合約發送交易。該合約有兩個主要功能:輸出新票據,以及在舊票據被使用或過期時更新狀態。票據合約根據上述容量約束輸出 AOT 票據。票據根據 AOT 基礎費用 $\text{bf}^{AOT}$ 定價,而該費用又由目標 AOT Blob 容量透過類似 EIP-1559 的控制器機制設定。如前所述,JIT 的基礎費用設置為等於 $\text{bf}^{AOT}$。

具體而言,這意味著如果售出的票據多於(或少於)$\text{AOT}{\text{target}}$,則 $\text{bf}^{AOT}$ 從時隙 $i$ 到時隙 $i+1$ 會增加(或減少)。這裡 $\text{AOT}{\text{target}}$ 是 AOT Blob 容量 $B_2 - R$ 的函數,例如 $\text{AOT}_{\text{target}} = B_2 - R$。基礎費用的更新規則可以與 EIP-1559 完全相同:

$$ \text{bf}^{AOT}{i+1} = \text{bf}^{AOT}{i} \times \Big(1 + \frac{1}{8} \times \frac{\text{AOT}{i} - \text{AOT}{\text{target}}}{\text{AOT}_{\text{target}}}\Big) $$

其中 $\text{AOT}_{i}$ 是時隙 $i$ 中售出的票據數量。

每筆票據交易需要指定四個變量:base_feeauction_bid、票據數量——合約從中可以扣除 auction_bid_per_ticket = auction_bid / 票據數量base_fee_per_ticket = base_fee / 票據數量——以及指定獲得票據地址的 sender_address。票據交易要具備獲取票據的資格,必須滿足 base_fee_per_ticket \geq \text{bf}^{AOT}。交易隨後按 auction_bid_per_ticket 降序排列,票據分配上限為 $2 \times (B_2 - R)$。

在需求過剩的情況下(即用戶在給定時隙嘗試購買的票據總數超過 $2 \times (B_2 - R)$ 限制),成功獲取票據的交易之 base_feeauction_bid 都會被銷毀,而未能獲取票據的交易之相應價值將退還給發送者地址。回想一下,AOT Blob 吞吐容量是 $B_2 - R$。因此,排序列表中較後端交易對應的所有票據(最多到 $B_2 - R$,因為我們將限制設為容量的兩倍)將對下一個時隙有效。例如,假設 $B_2 - R = 5$,時隙 $N$ 的投標者排序如下:Alice 3 張,Bob 4 張,Charlie 3 張。那麼 Alice 將獲得 3 張時隙 $N+1$ 的票據,Bob 將獲得 2 張時隙 $N+1$ 的票據和 2 張時隙 $N+2$ 的票據,而 Charlie 將獲得 3 張時隙 $N+2$ 的票據。在需求未超過 $2 \times (B_2 - R)$ 的情況下,每筆交易中相當於 $\text{bf}^{AOT} \times \text{票據數量}$ 的價值將被銷毀。

最後,請注意沒有理由僅出售下一個時隙的票據。相反,我們可以考慮在時隙 $N$ 出售對時隙 $N+k$ 有效的票據。能夠提前很久獲取票據對 L2 有重要好處,它們可以利用先驗的 Blob 價格知識更準確地為自己的交易定價。確定其細節——$k$ 應該多大?用戶是否應該能從一系列選項中指定想要的時隙?——等等,目前仍是一個開放性問題。

抗審查性

AOT 通道的一個關鍵承諾是 Blob 交易的抗審查性。票據讓我們能識別一組受限的 Blob,其可用性可以由委員會確立,從而賦予它們包含保障——每張票據可以被視為同時授予了角色。然而,僅靠 Blob 票據的基礎系統並不能完全實現這一點。接下來,我們將找出差距,展示透過 DA 合約在鏈上記錄可用性如何解決此問題,並在此基礎上構建端到端的包含保障。

Blob 票據的局限性

Blob 票據是一個非常有意義的改進,但在抗審查故事中留下了缺口。 是一種抗審查機制,允許委員會保證某些交易(例如從內存池中提取的交易)的包含。然而,Blob 交易的有效性取決於可用性。由於可用性尚未在任何地方記錄,Blob 交易的包含無法被強制執行。根本原因是可用性只能在 Blob 交易包含的那一刻確定:即使在 Blob 傳播且每個人都抽樣之後,也沒有記錄顯示哪些 Blob 是可用的,斷言可用性的唯一方法是透過在鏈上包含 Blob 交易。

記錄可用性:DA 合約

解決此問題的一個簡潔方法是獨立於 Blob 交易包含來記錄可用性。我們引入兩項變動:

  • 有效載荷可以包含獨立於 Blob 交易的版本化哈希。 構建者可以在有效載荷中包含其想要斷言可用性的版本化哈希列表,即使同一區塊中沒有相應的 Blob 交易。

  • 可用性記錄在 DA 合約中。在每個區塊開始時,一個系統調用(system call)將有效載荷中的版本化哈希記錄到 DA 合約中。這創建了一個哪些 Blob 可用的記錄,可供節點(作為內存池和 FOCIL 參與的一部分)以及 EVM 內部查詢。

強制執行可用性的方式與今天完全一樣:證明者僅投票給 Blob 可用的區塊。如果構建者包含了不可用數據的版本化哈希,該區塊將無法獲得證明。唯一的變化是版本化哈希現在可以直接來自有效載荷,而不僅僅是來自 Blob 交易。

此外,我們調整了 Blob 交易在鏈上的行為以配合該合約。Blob 交易的有效性仍然取決於相應 Blob 的可用性,但為了確保這一點,我們現在在驗證 blob_tx 時,檢查 DA 合約中 blob_tx.versioned_hashes 的可用性。特別是,如果版本化哈希已經在之前的區塊中被記錄為可用,我們不要求它們必須包含在當前 payload.versioned_hashes 中。可用性只需建立一次。

注:由於 DA 合約可在 EVM 內查詢,常規交易也可以檢查 Blob 可用性,例如合約可以根據特定 Blob 是否可用來決定其邏輯。Blob 交易仍可作為主要接口,但 DA 合約為與可用性的更靈活交互打開了大門。

圖 7. 可用性記錄、強制執行與 Blob 交易有效性。構建者包含一個指向可用 Blob 的版本化哈希列表,其可用性透過證明強制執行。版本化哈希記錄在專用的 DA 合約中,Blob 交易驗證時會檢查合約中的可用性。

此外,我們調整了內存池處理 Blob 交易的方式,以便內存池能從合約中的可用性信息獲益。現在,如果滿足以下任一條件,Blob 交易即可在內存池中傳播:

  1. 可用性已記錄:引用的 Blob 可用性已記錄在 DA 合約中,或者
  2. 發送者持有未使用的票據:發送者擁有一張在 EL 上尚未使用的有效票據(由節點局部觀察)。

換句話說,只有當 Blob 交易在可用性記錄之前(例如與 Blob 本身並行)傳播時,才需要票據。一旦可用性被記錄,Blob 交易就可以根據正常的內存池規則傳播,與常規交易完全一樣。這也解決了基礎票據系統的一個實際限制:如果沒有 DA 合約,一張票據僅授予幾次 Blob 交易提交權,且內存池必須將 Blob 交易傳播限制在票據持有人範圍內。有了可用性記錄,這些約束就消失了——例如,重新提交不需要以任何特殊方式處理。

圖 8. 帶有 DA 合約的 Blob 和 Blob 交易傳播全景圖。一張票據授予兩項獨立的傳播權:CL 上的一個 Blob 和 EL 上的幾個 Blob 交易。在可用性記錄到 DA 合約後,Blob 交易可以在沒有票據的情況下自由傳播,實現無限次重新提交。

Blob 交易的完整包含故事

由於可用性獨立於 Blob 交易記錄,我們現在可以為 Blob 交易提供抗審查性,方法是首先確保 Blob 的包含(可用性判定)。我們透過基於 Blob 票據的機制,並將類似 FOCIL 的分叉選擇強制執行適配到 Blob 來實現:

  • 每個 PTC(有效載荷及時性委員會,Payload Timeliness Committee)成員觀察在區塊生產截止時間前已傳播哪些 Blob。他們對這些 Blob 進行抽樣,並形成局部的可用性視角。

  • 成員發送他們觀察到可用的版本化哈希列表。

  • 多數票決定提議者必須在有效載荷中包含哪些版本化哈希(並因此記錄在 DA 合約中)。

  • 提議者可以包含額外的 Blob,但不能排除 PTC 要求的 Blob。

  • 證明者強制執行此規則:他們僅投票給包含 PTC 要求之版本化哈希的區塊,除非證明者局部未看到那些 Blob 可用(安全始終優先)。

請注意,提議者現在在 Blob 包含方面受到雙向約束:他們必須包含 PTC 要求的內容(活性),且不能包含不可用的內容(安全性)。

至關重要的是,一旦 Blob 的可用性被記錄,引用它的 Blob 交易就等同於常規交易,因為其額外的有效性條件已保證滿足:

  • 如前所述,它可以根據正常內存池規則在沒有票據的情況下傳播。

  • 它可以透過正常的 FOCIL 被包含。

此外,票據交易本身也是常規交易,受益於相同的內存池和 FOCIL 基礎設施。因此,我們獲得了 Blob 交易的端到端抗審查故事。

圖 9. Blob 交易的端到端包含保障。PTC 強制包含已傳播的 Blob(作為記錄在 DA 合約中的版本化哈希),而 FOCIL 為購票交易和 Blob 交易提供包含保障。

請注意,這個端到端的抗審查故事適用於使用 AOT Blob 的 Blob 交易。即使是在給定時隙的 PTC 截止時間之後傳播的 AOT Blob,只要它保持可用,仍可從下一個時隙開始獲得包含保障——這鏡像了常規 FOCIL 的行為,即未能在包含列表(IL)截止時間前進入內存池的交易無法在當前區塊強制包含,但可以在下一個區塊包含。

另一方面,JIT Blob 根據定義僅在包含在區塊中時才傳播——沒有包含前的傳播路徑,因此無法提前確定其可用性並保證其包含。當 JIT Blob 被允許傳播時,它已經被包含了。我們可以將 JIT 容量視為將票據的一部分(「關鍵路徑」部分)分配給提議者,提議者再將權利轉售給構建者:包含與否完全由構建者決定(但受優先費激勵)。對於真正需要 JIT Blob 的場景——為了同步組合性而共同創建區塊和 Blob——除了讓構建者執行此操作外別無選擇,因此缺乏包含保障是內在的。對於不嚴格需要共同創建的場景,未能作為 JIT 包含的 Blob 總是可以作為 AOT 重新提交,從而獲得上述完整的抗審查保障。

我們以設計的全景圖作結,並將其與今天的系統進行對比。

圖 10. 今天(上)與 Blob Streaming(下)的對比。今天,Blob 數據、可用性判定和執行耦合在單個區塊中。在 Blob Streaming 中,AOT Blob 透過票據預傳播並針對 DA 合約進行驗證;JIT Blob 像今天一樣在關鍵路徑中傳播。DA 合約透過系統調用記錄可用性,之後 Blob 交易針對合約進行驗證並正常執行。

附錄

DA 系統合約

設計考量

  • 寫入模式:在每個區塊開始時,透過系統調用記錄在該區塊中斷言可用性的 Blob 版本化哈希。

  • 讀取模式:合約透過版本化哈希查詢以檢查 Blob 是否可用。這應該廉價且簡單。

  • 存儲管理:必須定期刪除條目以限制存儲增長。以每時隙 128 個 Blob 計算,無限制的存儲每年將增長約 10 GB。

  • 當前區塊訪問:與可用性記錄發生在同一區塊的交易必須能夠在沒有外部證明的情況下檢查可用性,因為它們無法為剛記錄的數據生成證明。

合約設計

合約透過環形緩衝區維護一個近期窗口(約 128 個區塊),實現 O(1) 的無證明查詢。這涵蓋了當前區塊訪問和典型的 Rollup 使用場景。

除此之外,用戶可以針對存儲在每個區塊頭中的 versioned_hashes_root 證明包含。這使合約存儲保持極小,同時仍能在很長一段時間內支持可用性查詢,這足以確保用戶在 Blob 可用性確定後能在鏈上完成交易。

請注意,作為驗證 Blob 交易的一部分來檢查 DA 合約是非常廉價的,特別是當版本化哈希包含在當前有效載荷中時,因為這是一個熱讀取(warm read)——版本化哈希在區塊開始時已寫入 DA 合約。目前的用法(可用性判定與執行同時發生)基本上不受影響。

python
# 常數BLOCK_WINDOW = 128MAX_BLOBS_PER_BLOCK = 128RECENT_RING_SIZE = BLOCK_WINDOW * MAX_BLOBS_PER_BLOCK# 存儲recent_vhs_buffer: list[Optional[bytes]] = [None] * RECENT_RING_SIZErecent_availability: dict[bytes, int] = {}  # vh => 1 如果在近期窗口內recent_write_cursor: int = 0def record_availability(versioned_hashes: list[bytes]):    """在區塊開始時透過系統調用執行。"""    global recent_write_cursor    for vh in versioned_hashes:        # 清除當前位置的舊條目        old_vh = recent_vhs_buffer[recent_write_cursor]        if old_vh is not None:            del recent_availability[old_vh]        # 記錄新條目        recent_vhs_buffer[recent_write_cursor] = vh        recent_availability[vh] = 1        recent_write_cursor = (recent_write_cursor + 1) % RECENT_RING_SIZEdef is_available(versioned_hash: bytes) -> bool:    """檢查近期窗口內的可用性(無需證明)。"""    return recent_availability.get(versioned_hash) == 1