Alternatives To Block Size As Aggregate Resource Limits

Transcript by: Bryan Bishop

I am talking about block size, but not any particular proposal. Why do we have the block size limit, and is it a good thing in the first place? The block size limit exists as a denial-of-service limiter: it bounds the resources a node can be forced to expend when validating a block. We want this because in the early days of bitcoin there were ways to slow down a validator, for example by using non-standard transactions or by filling a block with spam. As we have seen in the talks today and yesterday, if you can increase the validation cost borne by nodes, there is great centralizing pressure from being able to deny service to the network in this way.

There's also an engineering concern here as well; originally there was no block size limit, although you couldn't have a block larger than 32 MB because of the maximum network message size. There has to be some engineering consideration, informed by testing, of what the maximum block size should be. There are also aggregate limits for blocks, like the number of sigops that can be included, and these are derived directly from the maximum block size.

Some problems do emerge in practice. It is observed that typical block size correlates very well with the validation time of a block, but that relationship breaks down when people deliberately mess with it: you can construct blocks that take a long time to validate while staying under the block size limit. I'll give you some examples that are in the blockchain.

The core issue is that if we're creating a system that is trustless and works in a variety of circumstances without gated access, we need to design for the worst-case scenario. If miners have an incentive to create slow-to-validate blocks, they should not be able to generate blocks significantly worse than what we already see on the network. The block size limit can be whatever you want, but it should be set such that a typical maximum-size block in fact has the maximum resource utilization that one can expect.

So how bad is it? There was a block consisting of one transaction besides the coinbase, broadcast about one month ago. It was f2pool clearing up a bunch of spam on the blockchain. The problem is that this block takes 30 seconds to validate. It's a 990 kilobyte transaction containing about 500,000 signature operations; each one serializes the entire 1 MB transaction out, moves buffers around, serializes it again and hashes it, which adds up to about 1.25 GB of data. The bitcoin serializer is not fast; that's where about 60% of the validation time went. So that's 30 seconds to validate, and that's on a beefy computer; I don't know what it is on a raspberrypi. And it's not linear, it's quadratic scaling: if you are doing 8 MB blocks, then it's 2 hours and 8 minutes. gavinandresen has proposed some ways this can be fixed. This transaction would be considered non-standard on the network now, but miners can still generate such transactions.
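To make the quadratic blow-up concrete, here is a rough back-of-the-envelope model (a Python sketch; the function and the numbers are illustrative assumptions, not the actual f2pool figures): under legacy signature hashing, each executed checksig hashes a serialization of roughly the whole transaction, so the total data hashed grows with transaction size times the number of checksigs.

```python
# Back-of-the-envelope model of legacy (pre-segwit) signature hashing;
# all numbers are illustrative assumptions, not measurements.
def bytes_hashed(tx_size_bytes: int, checksigs_executed: int) -> int:
    """Each executed CHECKSIG hashes a serialization of (most of) the
    transaction, so total hashing work grows with size * checksigs."""
    return tx_size_bytes * checksigs_executed

# Doubling both the transaction size and the number of executed
# checksigs quadruples the data that must be serialized and hashed.
small = bytes_hashed(500_000, 2_500)      # 1.25e9 bytes, ~1.25 GB
large = bytes_hashed(1_000_000, 5_000)    # 5.0e9 bytes, ~5 GB
print(large / small)                      # 4.0, i.e. quadratic growth
```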

Some of these problems scale quadratically with transaction size, not linearly. One proposal was to limit transaction size to 100 kilobytes. But what if we wanted to add confidential transactions to bitcoin mainnet? These require 100 kilobytes per confidential output, which massively increases the size of a transaction; an average transaction with one or two outputs might already run up against such a transaction size limit. There's no exact solution at this point in time.

In 2013, Sergio Lerner discovered a vulnerability having to do with the max block sigops limit. This limit was put in place to protect the network: it says a block can't have more than a certain number of checksigs in its scriptpubkeys, and that number is 20,000 for a 1 MB block; take the block size and divide by 50. It seems very sensible until you realize that it's not the outputs that are executed during block validation, it's the inputs. With some lead time, you can create transactions... Sergio ran some numbers; he saw that you can have 200 checksigs in an output, and when you later reference that output you already have a lot of checksigs to execute. You can create a block that takes an extremely long time to validate. There's a tradeoff between how many checksigs you have and how large your transaction is: the more checksigs, the smaller the transaction. He constructed one that in theory takes 3 minutes to verify, and that number hasn't changed much to this day; all of the scalability improvements we've had didn't really apply to this. Thankfully such transactions are non-standard, so you can broadcast them but the network ignores them. Still, this is a vulnerability that we still have.
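A minimal sketch of that mismatch, using simplified stand-in helpers rather than Bitcoin Core's actual accounting code: the legacy limit tallies checksigs in the scriptpubkeys a block creates, while the CPU cost is paid when inputs execute the scriptpubkeys of outputs created earlier.

```python
# Hypothetical illustration of the sigops-counting mismatch; these are
# simplified stand-ins, not Bitcoin Core's data structures or rules.
OP_CHECKSIG = 0xac
MAX_BLOCK_SIGOPS = 20_000   # legacy limit, roughly block size / 50

def count_checksigs(script: bytes) -> int:
    # Naive count; real counting must skip push data, handle CHECKMULTISIG, etc.
    return script.count(OP_CHECKSIG)

def counted_sigops(created_output_scripts: list[bytes]) -> int:
    """What the legacy rule tallies: checksigs in the scriptpubkeys a
    block creates."""
    return sum(count_checksigs(s) for s in created_output_scripts)

def executed_sigops(spent_output_scripts: list[bytes]) -> int:
    """What actually costs CPU time: checksigs in the previously created
    scriptpubkeys that this block's inputs execute."""
    return sum(count_checksigs(s) for s in spent_output_scripts)

# A block spending many checksig-heavy outputs created earlier can keep
# counted_sigops(...) under MAX_BLOCK_SIGOPS while executed_sigops(...)
# is orders of magnitude larger.
```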

Finally, we have UTXO set size growth being greatly affected by user action. This is a graph of the last 3 months from statoshi. It illustrates a couple of different stress tests that have been run in the last few months. The red line indicates one of the first stress tests, which grew the UTXO set size up to about 850 MB: a massive increase at relatively little cost to those running it. For the most part, they are clogging up the database that bitcoin uses to validate blocks; they are making every block afterwards slower.

How bad is it? It's pretty bad. If you are trying to engineer a block to validate as slowly as possible, you can get somewhere from 10 to 100x slower than a typical block. For an average block, you can make the argument that bitcoin nodes can support it just fine; it takes a couple of seconds to validate. Multiply that by 50 to 100 and that's what you should be engineering for if you're using block size as your limiter. These attacks are relatively cheap, as you could see during those stress tests. In each of these cases you can see quadratic scaling due to the size of the transactions. These are not theoretical; they were observed live on the network.

If we are thinking about proposals for scaling bitcoin, why then are we using block size as the resource limiter? Perhaps we should think about using something other than block size: a more direct measurement of the resources a block consumes. Well, block size is still on the list because of worst-case latency for transmission. We have talked about IBLT, and the bitcoin relay network has the capacity to fit a block into a single TCP packet to minimize latency effects, more or less regardless of block size; with IBLT or related schemes you can create a block of any size, provided you are doing it in such a way that you use transactions the network already knows about. However, if you are thinking adversarially, you want to maximize the set of transactions per block that were never broadcast beforehand; it is trivially simple to create such blocks, so maximum block size does in fact impact worst-case validation and transmission. We do have other factors, though.

UTXO set size has costs of its own: Patrick showed that the time to do initial block download scales quadratically. If we don't want the UTXO set to grow by more than some amount per year, how large a block do we want to support, given assumptions about how many transactions create outputs versus spend outputs per block? You also want to consider script opcodes executed, space required, bytes copied, stack duplication and other stack operations. You could prevent OP_DUP from creating too large an item on the stack by just rejecting the transaction, but the capability does exist, so if you want a transaction that uses a couple of megabytes of memory, it's there. There's also libscript, which handles script execution; it does not at this time track information about which opcodes are being used or how much memory is required. This could be done, and with some refactoring we could get insight into the resource utilization of libscript: this script is going to take 1.5 MB, so here's a chunk of 1.5 MB you can use to validate it. There are some refactoring changes to make; you could also feed the output count or outpoint count into this metric. Elliptic curve checks are usually not hardware-accelerated, and we're measuring those in outputs, not inputs; we should be measuring inputs. With the f2pool block, we had the issue of blocks being slowed down by the amount of data being hashed; so if we just counted and accumulated the number of bytes hashed per block, that would help prevent things like the f2pool block.
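One way to picture the kind of accounting described here (a hypothetical sketch; SighashBudget and its budget value are assumptions, not libscript's actual interface): accumulate the bytes fed into signature hashes while validating a block and fail the block once an explicit budget is exceeded.

```python
import hashlib

class SighashBudget:
    """Hypothetical accumulator: track bytes fed into signature hashes
    while validating a block and fail if an explicit budget is exceeded.
    The budget value here is an arbitrary placeholder."""

    def __init__(self, max_bytes: int = 100_000_000):
        self.max_bytes = max_bytes
        self.bytes_hashed = 0

    def hash_sighash_preimage(self, preimage: bytes) -> bytes:
        self.bytes_hashed += len(preimage)
        if self.bytes_hashed > self.max_bytes:
            raise ValueError("block exceeds its signature-hashing budget")
        # Double-SHA256, as used for bitcoin signature hashes.
        return hashlib.sha256(hashlib.sha256(preimage).digest()).digest()
```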

There are a lot of different parameters there, and many ways to combine them into a single metric. The reality is that, for a lot of engineering reasons, a linear combination makes sense. A linear combination of factors is pretty easy to do; the existing rule is already a linear function of one parameter (block size), so it's pretty much a drop-in to have a more complicated but still linear function. It also has the advantage that your coin selection algorithm stays a simple knapsack problem instead of a multidimensional knapsack problem, which is significantly harder to solve.
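As a concrete illustration of what such a linear combination might look like (the weights and limit below are placeholders, not proposed consensus values), a block's usage of several resources could be collapsed into one scalar cost that is checked against a single limit, just as size is checked today:

```python
# Placeholder weights and limit; choosing real values is the open
# calibration problem discussed in the next paragraph.
WEIGHTS = {
    "size_bytes": 1.0,       # serialized block size
    "sigops_executed": 50.0, # signature checks actually performed
    "bytes_hashed": 0.02,    # data fed into signature hashes
    "utxo_delta": 500.0,     # net change in UTXO set entries
}
MAX_BLOCK_COST = 1_000_000   # same order as today's block size limit

def block_cost(usage: dict) -> float:
    """Single linear metric that could replace the plain size check."""
    return sum(WEIGHTS[k] * usage.get(k, 0) for k in WEIGHTS)

def block_acceptable(usage: dict) -> bool:
    return block_cost(usage) <= MAX_BLOCK_COST

# Example: a smallish block that does a lot of hashing can cost more
# than a larger but well-behaved block under such a metric.
print(block_acceptable({"size_bytes": 400_000, "bytes_hashed": 40_000_000}))
```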

How many single opcode executions are equal to one round of sha256? You can just measure that and decide fairly well, because you are comparing CPU runtime against CPU runtime. How many bytes of RAM equal how many seconds of CPU utilization? That's kind of variable, depending on the rate of hardware evolution. And some of them are kind of intangible: it's arbitrary how you compare growth of the UTXO set size versus how costly it is to validate transactions. There's room for future work on exactly how these can be combined.
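For the CPU-versus-CPU comparisons, the measurement really can be that direct. Here is a toy micro-benchmark sketch (timing a cheap stand-in operation against sha256; the specific operations and iteration counts are arbitrary assumptions) showing how such a ratio could be estimated on a given machine:

```python
import hashlib, time

# Hypothetical micro-benchmark for calibrating one weight: roughly how
# many trivial "opcode-like" operations equal one sha256 round?
# Results vary by hardware; this only illustrates the measuring approach.
def time_it(fn, iterations: int) -> float:
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

N = 200_000
data = b"\x00" * 64                          # one 64-byte hash input block
sha_time = time_it(lambda: hashlib.sha256(data).digest(), N)
op_time = time_it(lambda: [1, 2].pop(), N)   # stand-in for a cheap stack opcode

print(f"one sha256 round ~= {sha_time / op_time:.1f} cheap opcodes")
```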

There will be an extension of this work at the Hong Kong workshop and I hope to see you all there.
