EOSCommunity.org Forums

EOSIO Performance and scalability queries

Hi team,
I am trying to understand EOSIO and have some questions.

1: Performance

How fast are transactions accepted by the EOSIO network?

  • How much bandwidth does it use?
  • How much EOSIO data needs to be stored, and in what way must the data be stored?
  • Important metrics:
    • number of blocks added to the chain,
    • block and transaction sizes,
    • transaction rates

2: Scalability

  • How does the blockchain respond to an increasing number of nodes?
  • How does the blockchain respond to an increased number of transactions on one specific node?
  • How much data is stored in the blocks?
  • Any limitations, and how secure is it?

I’ll try to briefly answer some of these questions, but you’re asking for like a whole paper’s worth of information :sweat_smile:

Not sure exactly how to answer this, since the number of transactions the network can process depends on the complexity of the transactions themselves. A block (which happens every 0.5 seconds) could contain 10 huge transactions, or thousands of very small ones. The constraint on how many transactions fit in a block is execution time (CPU usage). I think the maximums are that each transaction can only use 30ms of execution time, and the block itself cannot exceed 200ms.
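If you want to see that CPU accounting for yourself, here’s a minimal Python sketch that pulls the current head block from a public API node and sums the CPU billed per transaction (the endpoint URL is just an example; any public EOSIO API node should work):

```python
# Minimal sketch: inspect per-transaction CPU billing in the most recent block.
import requests

API = "https://eos.greymass.com"  # example public endpoint; substitute your own

info = requests.post(f"{API}/v1/chain/get_info", timeout=10).json()
block = requests.post(
    f"{API}/v1/chain/get_block",
    json={"block_num_or_id": info["head_block_num"]},
    timeout=10,
).json()

cpu_us = [tx["cpu_usage_us"] for tx in block["transactions"]]
print(f"block {block['block_num']}: {len(cpu_us)} transactions, "
      f"{sum(cpu_us)} microseconds of CPU billed")
```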

It depends on the purpose of the server. If it’s a low-traffic node (maybe a private peer), it’s not much. If it’s a public peer with dozens and dozens of p2p connections on a busy network, then maybe 5-15 Mbps?

2 primary storage mechanisms:

  • block log: contains all blocks + transactions ever committed to the chain (stored on HDD)
  • state: contains the current state of the blockchain (typically stored in RAM or on fast disk)

This storage happens automatically if you’re running the software. State is a fixed size, and the block log is an ever-growing file (which can be trimmed to only recent blocks if desired).
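If you’re curious how much each one takes up on a node you run, a quick sketch like this works, assuming the default nodeos data-dir layout (blocks/ for the block log, state/ for chain state); the data directory path below is hypothetical:

```python
# Minimal sketch: report on-disk size of the block log vs. chain state for a
# local nodeos instance, assuming the default data-dir layout.
from pathlib import Path

DATA_DIR = Path.home() / "eosio-data"  # hypothetical; use your actual --data-dir

def dir_size_gb(path: Path) -> float:
    """Total size of all files under `path`, in gigabytes."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1e9

for sub in ("blocks", "state"):
    p = DATA_DIR / sub
    print(f"{sub}: {dir_size_gb(p):.2f} GB" if p.exists() else f"{sub}: not found")
```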

A block is produced every 0.5 seconds, and has been for the past 4+ years. 277m blocks on EOS at the time of this writing.
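Those two figures line up, as a quick back-of-envelope check shows:

```python
# Back-of-envelope check: at 2 blocks per second, how long does it take to
# produce ~277 million blocks?
blocks_per_year = 2 * 60 * 60 * 24 * 365   # ~63.07 million blocks per year
print(277_000_000 / blocks_per_year)       # ~4.4 years
```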

On most live networks (EOS, WAX, Telos, UX, etc.), we see anywhere from ~10 transactions per second up to thousands per second in bursts. Not sure what the averages are.
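You can estimate the current rate yourself by counting transactions over a window of recent blocks; a minimal sketch, again assuming an example public API endpoint:

```python
# Minimal sketch: rough transactions-per-second estimate over the last N blocks.
import requests

API = "https://eos.greymass.com"  # example public endpoint; substitute your own
N = 20                            # 20 blocks = ~10 seconds of chain time

head = requests.post(f"{API}/v1/chain/get_info", timeout=10).json()["head_block_num"]

total_tx = 0
for num in range(head - N + 1, head + 1):
    block = requests.post(
        f"{API}/v1/chain/get_block",
        json={"block_num_or_id": num},
        timeout=10,
    ).json()
    total_tx += len(block["transactions"])

print(f"{total_tx} transactions over {N} blocks (~{N * 0.5:.0f}s) "
      f"-> ~{total_tx / (N * 0.5):.1f} tx/s")
```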

Adding more nodes doesn’t impact performance. The consensus set is a fixed size (21 block producers on EOS), and all other nodes are ancillary.

So long as the node has a fast enough CPU (most nodes are 3-4+ GHz), it’ll increase load on that single core - but the node will continue processing as fast as it can.

The complete block log is a few terabytes in size now, and it can be trimmed. Most nodes run with a trimmed block log, which is maybe a gigabyte in size.

Each block has a size limit too - but I don’t know exactly what it is and haven’t seen many people concerned with it.
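If you do want the exact numbers, the configured block and transaction limits live in the eosio system contract’s global table and can be read from any API node; a minimal sketch (field names assume the standard eosio.system contract, and the endpoint is again just an example):

```python
# Minimal sketch: read the chain's configured block/transaction limits from the
# eosio system contract's "global" table (standard eosio.system field names).
import requests

API = "https://eos.greymass.com"  # example public endpoint; substitute your own

rows = requests.post(
    f"{API}/v1/chain/get_table_rows",
    json={"code": "eosio", "scope": "eosio", "table": "global",
          "json": True, "limit": 1},
    timeout=10,
).json()["rows"][0]

for key in ("max_block_net_usage", "max_block_cpu_usage",
            "max_transaction_net_usage", "max_transaction_cpu_usage"):
    print(key, rows.get(key))  # net limits are in bytes, cpu limits in microseconds
```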