Botanix Federation Book

What is Botanix Labs?

Botanix Labs is a Bitcoin-focused company dedicated to creating decentralized sidechains. Our primary objective is to develop the Spiderchain, a sidechain scheme that utilizes a Proof-of-Stake consensus algorithm. This innovation allows anyone to validate the Bitcoin sidechain.

Botanix Federation

The Botanix Federation represents the initial version of our sidechain framework, enabling a fixed group of signatories to manage funds on behalf of users. Botanix Labs, as a company, will establish its own federation in collaboration with 14 other members. However, anyone can create their own federation by following the provided documentation.

Who is this for?

This documentation is intended for developers interested in running their own RPC node. RPC nodes are non-block-producing entities within a federation: they have access to the canonical blockchain, but they do not produce blocks.

In addition, this documentation provides background on the federation itself and the details you need to run your own federation.

Alpha software warning

The Botanix Federation is alpha software that has not been audited. Please only deposit funds that you are willing to lose.

Note

This book is adapted from The Reth Book and modified to fit the requirements of the Botanix Federation.

Run an RPC Node

In this chapter you will find the necessary information on how to set up your own RPC node for the Botanix Testnet.

  1. Hardware requirements
  2. RPC via Docker Compose
  3. Important ports
  4. Parameters

Hardware Requirements

Installation

The Botanix Federation operates on POSIX-based operating systems, including Linux and macOS. For the time being, node operators should deploy on non-ARM Linux machines; ARM-based architectures are not yet supported, but support is planned.

Hardware requirements

The hardware requirements for running Reth depend on the node configuration and can change over time as the network grows or new features are implemented.

The most important requirement is by far the disk, whereas CPU and RAM requirements are relatively flexible.

Full Node

  • Disk: At least 1.2TB (TLC NVMe recommended)
  • Memory: 8GB+
  • CPU: Higher clock speeds over core count
  • Bandwidth: Stable 24Mbps+

Disk

There are multiple types of disks to sync Reth, with varying size requirements, depending on the syncing mode. As of April 2024, at Ethereum mainnet block 19.6M:

  • Archive Node: At least 2.14TB is required
  • Full Node: At least 1.13TB is required

NVMe drives are recommended for the best performance, with SSDs being a cheaper alternative. HDDs are the cheapest option, but they will take the longest to sync, and are not recommended.

As of February 2024, syncing an Ethereum mainnet node to block 19.3M on NVMe drives takes about 50 hours, while on a GCP "Persistent SSD" it takes around 5 days.

Note - QLC and TLC

It is highly recommended to choose a TLC drive when using NVMe, and not a QLC drive. A list of recommended drives can be found here. It is crucial to understand the difference between QLC and TLC NVMe drives when considering the disk requirement.

QLC (Quad-Level Cell) NVMe drives utilize four bits of data per cell, allowing for higher storage density and lower manufacturing costs. However, this increased density comes at the expense of performance. QLC drives have slower read and write speeds compared to TLC drives. They also have a lower endurance, meaning they may have a shorter lifespan and be less suitable for heavy workloads or constant data rewriting.

TLC (Triple-Level Cell) NVMe drives, on the other hand, use three bits of data per cell. While they have a slightly lower storage density compared to QLC drives, TLC drives offer faster performance. They typically have higher read and write speeds, making them more suitable for demanding tasks such as data-intensive applications, gaming, and multimedia editing. TLC drives also tend to have a higher endurance, making them more durable and longer-lasting.

CPU

Most of the time spent during syncing is used to execute transactions, a single-threaded operation due to potential state dependencies of one transaction on previous ones. As a result, the number of cores matters less, but in general higher clock speeds are better. More cores are better for parallelizable stages (like sender recovery or bodies downloading), but these stages are not the primary bottleneck for syncing.

Memory

It is recommended to use at least 8GB of RAM.

Most of Reth's components tend to consume a low amount of memory, unless you are under heavy RPC load, so this should matter less than the other requirements.

Higher memory is generally better as it allows for better caching, resulting in less stress on the disk.

Bandwidth

A stable and dependable internet connection is crucial for both syncing a node from genesis and for keeping up with the chain's tip.

Note that due to Reth's staged sync, you only need an internet connection for the Headers and Bodies stages. This means that the first 1-3 hours (depending on your internet connection) are spent online, downloading all necessary data, and the rest of the sync proceeds offline without requiring an internet connection.

Once you're synced to the tip you will need a reliable connection, especially if you're operating a validator. A 24Mbps connection is recommended, but you can probably get away with less. Make sure your ISP does not cap your bandwidth.

Docker

Our installation docs support running RPC nodes via Docker Compose. In the future we will provide federation member support, also via Docker Compose.

Note

Reth requires Docker Engine version 20.10.10 or higher due to missing support for the clone3 syscall in previous versions.

Prerequisites

To use the instructions below, you'll need to run a Mutinynet signet node. Ensure that this node is fully synced to the tip before proceeding with the Docker Compose instructions provided afterward. You can find instructions for running your own node at the following links:

Note

Mutinynet is a fork of Bitcoin Core configured for 30-second blocks, which allows our team to test more rapidly. A whole suite of tools is available for Mutinynet, including a coin faucet and a block explorer.

Docker images

Botanix Docker images are published to Google Artifact Registry.

You can obtain the latest image with:

docker pull us-central1-docker.pkg.dev/botanix-391913/botanix-testnet-node-v1/botanix-poa-node

Or a specific version (e.g. v0.0.1) with:

docker pull us-central1-docker.pkg.dev/botanix-391913/botanix-testnet-node-v1/botanix-poa-node:v0.0.1

Using Docker Compose

This setup provides an environment for running a Botanix RPC node together with a CometBFT consensus node. The services are configured to work together, with the relevant ports exposed for interaction; a brief usage example follows the file.

version: '3.7'
services:
  poa-node-rpc:
    env_file:
      - .bitcoin.env
    container_name: poa-node-rpc
    image: us-central1-docker.pkg.dev/botanix-391913/botanix-testnet-node-v1/botanix-poa-node
    command:
      - poa
      - --federation-config-path=/reth/botanix_testnet/chain.toml
      - --datadir=/reth/botanix_testnet
      - --http
      - --http.addr=0.0.0.0
      - --http.port=8545
      - --http.api=debug,eth,net,trace,txpool,web3,rpc
      - --http.corsdomain=*
      - --ws
      - --ws.addr=0.0.0.0
      - --ws.port=8546
      - -vvv
      - --bitcoind.url=${BITCOIND_HOST}
      - --bitcoind.username=${BITCOIND_USER}
      - --bitcoind.password=${BITCOIND_PASS}
      - --p2p-secret-key=/reth/botanix_testnet/discovery-secret
      - --port=30303
      - --btc-network=signet
      - --metrics=0.0.0.0:9001
      - --ipcdisable
      - --abci-port=26658
      - --abci-host=0.0.0.0
      - --cometbft-rpc-port=8888
      - --cometbft-rpc-host=consensus-node
    ports:
      - 8545:8545
      - 8546:8546
      - 9001:9001
      - 30303:30303
      - 26658:26658
      - 8888:8888
    volumes:
      - ./poa-rpc:/reth/botanix_testnet:rw
    restart: on-failure

  consensus-node:
    container_name: consensus-node
    image: us-central1-docker.pkg.dev/botanix-391913/botanix-testnet-cometbft/botanix-testnet-cometft:v4
    ports:
        - 26656:26656
        - 26657:26657
        - 26660:26660
    volumes:
        - ./consensus-node:/cometbft:rw
    restart: on-failure
    environment:
        - ALLOW_DUPLICATE_IP=TRUE
        - LOG_LEVEL=DEBUG
        - NODE_NAME=poa-node-rpc
        - MONIKER=botanix-consensus-node
        - PERSISTENT_PEERS=2561602572b54dbdcf44b02157ab62717c09d895@34.35.52.165:26656, [email protected]:26656, [email protected]:26656
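Once this file is saved as docker-compose.yml (the conventional filename) alongside a .bitcoin.env file that defines the variables it references, the stack can be brought up and inspected with standard Docker Compose commands. The variable names below mirror the compose file; the values are placeholders for your own bitcoind endpoint and credentials.

# .bitcoin.env (placeholder values)
#   BITCOIND_HOST=<bitcoind-host>:38332
#   BITCOIND_USER=<rpc-username>
#   BITCOIND_PASS=<rpc-password>

# start the services and follow the RPC node logs
docker compose up -d
docker compose logs -f poa-node-rpc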

Docker Compose File Documentation

This Docker Compose file defines a multi-service setup comprising a Botanix RPC node and a CometBFT consensus node, alongside the Bitcoin Core node that the RPC node relies on. Below is a detailed explanation of each service.

1. bitcoin-core

This service runs a Bitcoin Core node using the latest version of the ruimarinho/bitcoin-core Docker image. It operates on the Signet network for testnet.

  • Environment Variables: The service loads environment variables from the .bitcoin.env file, where BITCOIND_USER and BITCOIND_PASS are defined.
  • Command: The command options specify the following (a combined invocation is sketched after this list):
    • -printtoconsole: Logs output to the console.
    • -signet=1: Enables Signet mode.
    • -txindex=1: Maintains a full transaction index.
    • -server=1: Runs the node as a server.
    • -rpcport=38332: Sets the RPC port.
    • -rpcuser and -rpcpassword: Set the RPC authentication using environment variables.
    • -rpcbind=0.0.0.0 and -rpcallowip=0.0.0.0/0: Allow RPC connections from any IP address.
    • -blockfilterindex=1: Enables the compact block filter index (BIP 158).
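Taken together, the options above correspond to a bitcoind invocation roughly like the sketch below. The credentials come from the .bitcoin.env file; the exact entrypoint of the Docker image may differ.

bitcoind -printtoconsole -signet=1 -txindex=1 -server=1 \
  -rpcport=38332 \
  -rpcuser="$BITCOIND_USER" -rpcpassword="$BITCOIND_PASS" \
  -rpcbind=0.0.0.0 -rpcallowip=0.0.0.0/0 \
  -blockfilterindex=1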

2. poa-node-rpc

This service runs a Botanix PoA node, which connects to the Bitcoin Core node and provides RPC (Remote Procedure Call) access.

  • Environment Variables: It uses the same .bitcoin.env file as the Bitcoin Core service.
  • Container Name: The container is named poa-node-rpc.
  • Image: It uses a custom Botanix image (botanix-testnet-node-v1).
  • Command: The command options are listed and explained in the reth poa CLI reference later in this book.
  • Dependencies: This service depends on a running, fully synced Bitcoin Core node; the RPC node will exit if bitcoind is not fully synced.
  • Restart Policy: The service is configured to restart on failure.

Note: To re-sync your node, remove both the database and the static files directory.

For more information please visit rpc-compose-file

Connecting to the federated testnet

Botanix will be hosting a testnet federation. To connect your RPC setup to the federation, use the following chain.toml. Warning: this config may change in the future as we add and remove federation members. A note on where to place the file follows the example.

botanix-fee-recipient="0xb8c03cb8C9bAC79c53926E3C66344C13452105f5"

minting-contract-bytecode = "60806040526004361061003f5760003560e01c80635fe03f45146100445780636f194dc914610066578063a5d0bb93146100b3578063a8de6d8c146100d6575b600080fd5b34801561005057600080fd5b5061006461005f366004610562565b6100fd565b005b34801561007257600080fd5b506100996100813660046105eb565b60006020819052908152604090205463ffffffff1681565b60405163ffffffff90911681526020015b60405180910390f35b6100c66100c136600461060d565b610422565b60405190151581526020016100aa565b3480156100e257600080fd5b506100ef6402540be40081565b6040519081526020016100aa565b60005a6001600160a01b03881660009081526020819052604090205490915063ffffffff9081169086161161018b5760405162461bcd60e51b815260206004820152602960248201527f7573657220626974636f696e426c6f636b486569676874206e6565647320746f60448201526820696e63726561736560b81b60648201526084015b60405180910390fd5b6001600160a01b0387166000908152602081905260408120805463ffffffff191663ffffffff88161790553a60016101c460048761068f565b61048560036107d36108fc805a6101db908b6106b1565b6101e591906106c8565b6101ef91906106c8565b6101f991906106c8565b61020391906106c8565b61020d91906106c8565b61021791906106c8565b61022191906106b1565b61022b91906106e0565b90508681111561027d5760405162461bcd60e51b815260206004820152601c60248201527f547820636f7374206578636565647320706567696e20616d6f756e74000000006044820152606401610182565b61028781886106b1565b96506000886001600160a01b03168860405160006040518083038185875af1925050503d80600081146102d6576040519150601f19603f3d011682016040523d82523d6000602084013e6102db565b606091505b505090508061032c5760405162461bcd60e51b815260206004820152601a60248201527f4d696e7420746f2064657374696e6174696f6e206661696c65640000000000006044820152606401610182565b6000846001600160a01b03168360405160006040518083038185875af1925050503d8060008114610379576040519150601f19603f3d011682016040523d82523d6000602084013e61037e565b606091505b50509050806103cf5760405162461bcd60e51b815260206004820152601e60248201527f526566756e6420746f20726566756e6441646472657373206661696c656400006044820152606401610182565b896001600160a01b03167f922344dc04648c0ce028ecdf9b2c9eed9a6794dbb47b777b54b0cfe069f128aa8a8a8a8a60405161040e9493929190610728565b60405180910390a250505050505050505050565b60006104356402540be40061014a6106e0565b34116104a95760405162461bcd60e51b815260206004820152603860248201527f56616c7565206d7573742062652067726561746572207468616e20647573742060448201527f616d6f756e74206f662033333020736174732f764279746500000000000000006064820152608401610182565b336001600160a01b03167f17f87987da8ca71c697791dcfd190d07630cf17bf09c65c5a59b8277d9fe171534878787876040516104ea959493929190610758565b60405180910390a2506001949350505050565b80356001600160a01b038116811461051457600080fd5b919050565b60008083601f84011261052b57600080fd5b50813567ffffffffffffffff81111561054357600080fd5b60208301915083602082850101111561055b57600080fd5b9250929050565b60008060008060008060a0878903121561057b57600080fd5b610584876104fd565b955060208701359450604087013563ffffffff811681146105a457600080fd5b9350606087013567ffffffffffffffff8111156105c057600080fd5b6105cc89828a01610519565b90945092506105df9050608088016104fd565b90509295509295509295565b6000602082840312156105fd57600080fd5b610606826104fd565b9392505050565b6000806000806040858703121561062357600080fd5b843567ffffffffffffffff8082111561063b57600080fd5b61064788838901610519565b9096509450602087013591508082111561066057600080fd5b5061066d87828801610519565b95989497509550505050565b634e487b7160e01b600052601160045260246000fd5b6000826106ac57634e487b7160e01b600052601260045260246000fd5b500490565b6000828210156106c3576106c3610679565b500390565b600082198211156106db576106db610679565b500190565b6000
8160001904831182151516156106fa576106fa610679565b500290565b81835281816020850137506000828201602090810191909152601f909101601f19169091010190565b84815263ffffffff8416602082015260606040820152600061074e6060830184866106ff565b9695505050505050565b8581526060602082015260006107726060830186886106ff565b82810360408401526107858185876106ff565b9897505050505050505056fea264697066735822122058bba5f85cc573a5323f630452faca186769309f0808e1ca3fdf25351f8d078264736f6c634300080d0033"

# >>>>>>>>>>> federation members public keys
[[federation-member-public-key]]
key="039bef292b80427d355cecb89eda8a50a7d2196a93d73dade5a0c4a07cd334815d"
socket-addr="34.79.189.111:30303"

[[federation-member-public-key]]
key="02bdc272b244f717604fffe659d2d98205d1e6764fdf453d1631f42c2db4d8d710"
socket-addr="34.35.52.165:30303"

[[federation-member-public-key]]
key="0234324e2ef7a3c4a27884d939d2ef2138e309aa7538915ae77137d0f792881be8"
socket-addr="35.201.136.224:30303"
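With the Docker Compose setup above, save this file into the host directory that is mounted into the RPC node container, so that it is visible at the path passed via --federation-config-path. A sketch, assuming the ./poa-rpc volume mapping from the compose file:

mkdir -p ./poa-rpc
cp chain.toml ./poa-rpc/chain.toml
# inside the container this appears as /reth/botanix_testnet/chain.toml,
# matching --federation-config-path=/reth/botanix_testnet/chain.toml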

Ports

This section provides essential information about the ports used by the system, their primary purposes, and recommendations for exposure settings.

Peering Ports

  • Port: 30303
  • Protocol: TCP and UDP
  • Purpose: Peering with other nodes for synchronization of blockchain data. Nodes communicate through this port to maintain network consensus and share updated information.
  • Exposure Recommendation: This port should be exposed to enable seamless interaction and synchronization with other nodes in the network.

HTTP RPC Port

  • Port: 8545
  • Protocol: TCP
  • Purpose: Port 8545 provides an HTTP-based Remote Procedure Call (RPC) interface. It enables external applications to interact with the blockchain by sending requests over HTTP (a sample request follows this list).
  • Exposure Recommendation: Similar to the metrics port, exposing this port to the public is not recommended by default.
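A quick way to verify that the HTTP endpoint is reachable (assuming the default 8545 mapping from the compose file above) is a standard JSON-RPC request with curl:

curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'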

WS RPC Port

  • Port: 8546
  • Protocol: TCP
  • Purpose: Port 8546 offers a WebSocket-based Remote Procedure Call (RPC) interface. It allows real-time communication between external applications and the blockchain (a sample request follows this list).
  • Exposure Recommendation: As with the HTTP RPC port, the WS RPC port should not be exposed to the public by default.
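The WebSocket endpoint can be exercised with any WS-capable client; the sketch below uses websocat, which is an arbitrary tool choice rather than a requirement of the setup:

echo '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
  | websocat ws://127.0.0.1:8546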

ABCI Port

  • Port: 26658
  • Protocol: TCP
  • Purpose: Enables communication between the consensus engine and the application layer
  • Exposure Recommendation: As with the HTTP RPC port, the ABCI port should not be exposed to the public by default.

CometBFT RPC server Port

  • Port: 26657
  • Protocol: TCP
  • Purpose: Enables RPC requests for CometBFT node
  • Exposure Recommendation: As with the HTTP RPC port, the CometBFT RPC server Port should not be exposed to the public by default.

CometBFT incoming connections Port

  • Port: 26656
  • Protocol: TCP
  • Purpose: Listens for incoming connections of peers
  • Exposure Recommendation: As with the HTTP RPC port, the CometBFT incoming connections port should not be exposed to the public by default.

Metrics Port

  • Port: 9001
  • Protocol: TCP
  • Purpose: This port is designated for serving metrics related to the system's performance and operation. It allows internal monitoring and data collection for analysis (a sample scrape follows this list).
  • Exposure Recommendation: By default, this port should not be exposed to the public. It is intended for internal monitoring and analysis purposes.
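To confirm metrics are being exported (assuming the 9001 mapping from the compose file above), fetch the endpoint and look for Prometheus-formatted output; the exact path handling is an implementation detail, but the root path is typically sufficient:

curl -s http://127.0.0.1:9001 | head -n 20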

reth poa

Initialize the Botanix PoA node. The full CLI help output is reproduced below, and an example invocation follows it.

$ reth poa --help
Start the POA node

Usage: reth poa [OPTIONS]

Options:
      --datadir <DATA_DIR>
          The path to the data dir for all reth files and subdirectories.

          Defaults to the OS-specific data directory:

          - Linux: `$XDG_DATA_HOME/reth/` or `$HOME/.local/share/reth/`
          - macOS: `$HOME/Library/Application Support/reth/`

          [default: default]

      --network-config-path <FILE>
          The path to the configuration file to use for network properties.

      --chain <CHAIN_OR_PATH>
          The chain this node is running.
          Possible values are either a built-in chain or the path to a chain specification file.

          Built-in chains:
              mainnet, sepolia, goerli, holesky, dev, botanix_testnet

          [default: mainnet]

      --federation-mode
          Run in federation mode. Only the nodes in the federation will be able to produce blocks

      --instance <INSTANCE>
          Add a new instance of a node.

          Configures the ports of the node to avoid conflicts with the defaults. This is useful for running multiple nodes on the same machine.

          Max number of instances is 200. It is chosen in a way so that it is not possible to have port numbers that conflict with each other.

          Changes to the following port numbers:
          - DISCOVERY_PORT: default + `instance` - 1
          - AUTH_PORT: default + `instance` * 100 - 100
          - HTTP_RPC_PORT: default - `instance` + 1
          - WS_RPC_PORT: default + `instance` * 2 - 2

          [default: 1]

      --with-unused-ports
          Sets all ports to unused, allowing the OS to choose random unused ports when sockets are bound.

          Mutually exclusive with `--instance`.

  -h, --help
          Print help (see a summary with '-h')

Metrics:
      --metrics <SOCKET>
          Enable Prometheus metrics.

          The metrics will be served at the given interface and port.

Abci client/app:
      --abci-host
          [default: 0.0.0.0]
      --abci-port
          [default: 26658]

Networking:
  -d, --disable-discovery
          Disable the discovery service

      --disable-dns-discovery
          Disable the DNS discovery

      --disable-discv4-discovery
          Disable Discv4 discovery

      --enable-discv5-discovery
          Enable Discv5 discovery

      --discovery.addr <DISCOVERY_ADDR>
          The UDP address to use for devp2p peer discovery version 4

          [default: 0.0.0.0]

      --discovery.port <DISCOVERY_PORT>
          The UDP port to use for devp2p peer discovery version 4

          [default: 30303]

      --discovery.v5.addr <DISCOVERY_V5_ADDR>
          The UDP address to use for devp2p peer discovery version 5

          [default: 0.0.0.0]

      --discovery.v5.port <DISCOVERY_V5_PORT>
          The UDP port to use for devp2p peer discovery version 5

          [default: 9000]

      --trusted-peers <TRUSTED_PEERS>
          Comma separated enode URLs of trusted peers for P2P connections.

          --trusted-peers enode://[email protected]:30303

      --trusted-only
          Connect only to trusted peers

      --bootnodes <BOOTNODES>
          Comma separated enode URLs for P2P discovery bootstrap.

          Will fall back to a network-specific default if not specified.

      --peers-file <FILE>
          The path to the known peers file. Connected peers are dumped to this file on nodes
          shutdown, and read on startup. Cannot be used with `--no-persist-peers`.

      --identity <IDENTITY>
          Custom node identity

          [default: reth/v0.2.0-beta.6-778feb0a2/x86_64-apple-darwin]

      --p2p-secret-key <PATH>
          Secret key to use for this node.

          This will also deterministically set the peer ID. If not specified, it will be set in the data dir for the chain being used.

      --no-persist-peers
          Do not persist peers.

      --nat <NAT>
          NAT resolution method (any|none|upnp|publicip|extip:\<IP\>)

          [default: any]

      --addr <ADDR>
          Network listening address

          [default: 0.0.0.0]

      --port <PORT>
          Network listening port

          [default: 30303]

      --max-outbound-peers <MAX_OUTBOUND_PEERS>
          Maximum number of outbound requests. default: 100

      --max-inbound-peers <MAX_INBOUND_PEERS>
          Maximum number of inbound requests. default: 30

      --pooled-tx-response-soft-limit <BYTES>
          Soft limit for the byte size of a `PooledTransactions` response on assembling a `GetPooledTransactions` request. Spec'd at 2 MiB.

          <https://github.com/ethereum/devp2p/blob/master/caps/eth.md#protocol-messages>.

          [default: 2097152]

      --pooled-tx-pack-soft-limit <BYTES>
          Default soft limit for the byte size of a `PooledTransactions` response on assembling a `GetPooledTransactions` request. This defaults to less than the [`SOFT_LIMIT_BYTE_SIZE_POOLED_TRANSACTIONS_RESPONSE`], at 2 MiB, used when assembling a `PooledTransactions` response. Default is 128 KiB

          [default: 131072]

RPC:
      --http
          Enable the HTTP-RPC server

      --http.addr <HTTP_ADDR>
          Http server address to listen on

          [default: 127.0.0.1]

      --http.port <HTTP_PORT>
          Http server port to listen on

          [default: 8545]

      --http.api <HTTP_API>
          Rpc Modules to be configured for the HTTP server

          [possible values: admin, debug, eth, net, trace, txpool, web3, rpc, reth, ots, eth-call-bundle]

      --http.corsdomain <HTTP_CORSDOMAIN>
          Http Corsdomain to allow request from

      --ws
          Enable the WS-RPC server

      --ws.addr <WS_ADDR>
          Ws server address to listen on

          [default: 127.0.0.1]

      --ws.port <WS_PORT>
          Ws server port to listen on

          [default: 8546]

      --ws.origins <ws.origins>
          Origins from which to accept WebSocket requests

      --ws.api <WS_API>
          Rpc Modules to be configured for the WS server

          [possible values: admin, debug, eth, net, trace, txpool, web3, rpc, reth, ots, eth-call-bundle]

      --ipcdisable
          Disable the IPC-RPC server

      --ipcpath <IPCPATH>
          Filename for IPC socket/pipe within the datadir

          [default: /tmp/reth.ipc]

      --authrpc.addr <AUTH_ADDR>
          Auth server address to listen on

          [default: 127.0.0.1]

      --authrpc.port <AUTH_PORT>
          Auth server port to listen on

          [default: 8551]

      --authrpc.jwtsecret <PATH>
          Path to a JWT secret to use for the authenticated engine-API RPC server.

          This will enforce JWT authentication for all requests coming from the consensus layer.

          If no path is provided, a secret will be generated and stored in the datadir under `<DIR>/<CHAIN_ID>/jwt.hex`. For mainnet this would be `~/.reth/mainnet/jwt.hex` by default.

      --auth-ipc
          Enable auth engine API over IPC

      --auth-ipc.path <AUTH_IPC_PATH>
          Filename for auth IPC socket/pipe within the datadir

          [default: /tmp/reth_engine_api.ipc]

      --rpc.jwtsecret <HEX>
          Hex encoded JWT secret to authenticate the regular RPC server(s), see `--http.api` and `--ws.api`.

          This is __not__ used for the authenticated engine-API RPC server, see `--authrpc.jwtsecret`.

      --rpc.max-request-size <RPC_MAX_REQUEST_SIZE>
          Set the maximum RPC request payload size for both HTTP and WS in megabytes

          [default: 15]

      --rpc.max-response-size <RPC_MAX_RESPONSE_SIZE>
          Set the maximum RPC response payload size for both HTTP and WS in megabytes

          [default: 160]
          [aliases: rpc.returndata.limit]

      --rpc.max-subscriptions-per-connection <RPC_MAX_SUBSCRIPTIONS_PER_CONNECTION>
          Set the maximum concurrent subscriptions per connection

          [default: 1024]

      --rpc.max-connections <COUNT>
          Maximum number of RPC server connections

          [default: 500]

      --rpc.max-tracing-requests <COUNT>
          Maximum number of concurrent tracing requests

          [default: 10]

      --rpc.max-blocks-per-filter <COUNT>
          Maximum number of blocks that could be scanned per filter request. (0 = entire chain)

          [default: 100000]

      --rpc.max-logs-per-response <COUNT>
          Maximum number of logs that can be returned in a single response. (0 = no limit)

          [default: 20000]

      --rpc.gascap <GAS_CAP>
          Maximum gas limit for `eth_call` and call tracing RPC methods

          [default: 50000000]

RPC State Cache:
      --rpc-cache.max-blocks <MAX_BLOCKS>
          Max number of blocks in cache

          [default: 5000]

      --rpc-cache.max-receipts <MAX_RECEIPTS>
          Max number receipts in cache

          [default: 2000]

      --rpc-cache.max-envs <MAX_ENVS>
          Max number of bytes for cached env data

          [default: 1000]

      --rpc-cache.max-concurrent-db-requests <MAX_CONCURRENT_DB_REQUESTS>
          Max number of concurrent database requests

          [default: 512]

Gas Price Oracle:
      --gpo.blocks <BLOCKS>
          Number of recent blocks to check for gas price

          [default: 20]

      --gpo.ignoreprice <IGNORE_PRICE>
          Gas Price below which gpo will ignore transactions

          [default: 2]

      --gpo.maxprice <MAX_PRICE>
          Maximum transaction priority fee(or gasprice before London Fork) to be recommended by gpo

          [default: 500000000000]

      --gpo.percentile <PERCENTILE>
          The percentile of gas prices to use for the estimate

          [default: 60]

Btc_server:
      --btc-server <BTC_SERVER>
          Btc signing service

          The metrics will be served at the given interface and port.

Bitcoind:
      --bitcoind.url <BITCOIND_URL>
          bitcoind RPC url

          The url of the bitcoind server.

          [default: localhost:18443]

      --bitcoind.username <BITCOIND_USERNAME>
          Btcd username

          The username of the bitcoind server.

          [default: foo]

      --bitcoind.password <BITCOIND_PASSWORD>
          Btcd password

          The password of the bitcoind server.

          [default: bar]

      --frost.min_signers <MIN_SIGNERS>
          The minimum number required for frost signing

      --frost.max_signers <MAX_SIGNERS>
          The maximum number required for frost signing

Btc_network:
      --btc-network <BITCOIN_NETWORK>
          The bitcoin network to operate on

          [default: regtest]

TxPool:
      --txpool.pending-max-count <PENDING_MAX_COUNT>
          Max number of transaction in the pending sub-pool

          [default: 10000]

      --txpool.pending-max-size <PENDING_MAX_SIZE>
          Max size of the pending sub-pool in megabytes

          [default: 20]

      --txpool.basefee-max-count <BASEFEE_MAX_COUNT>
          Max number of transaction in the basefee sub-pool

          [default: 10000]

      --txpool.basefee-max-size <BASEFEE_MAX_SIZE>
          Max size of the basefee sub-pool in megabytes

          [default: 20]

      --txpool.queued-max-count <QUEUED_MAX_COUNT>
          Max number of transaction in the queued sub-pool

          [default: 10000]

      --txpool.queued-max-size <QUEUED_MAX_SIZE>
          Max size of the queued sub-pool in megabytes

          [default: 20]

      --txpool.max-account-slots <MAX_ACCOUNT_SLOTS>
          Max number of executable transaction slots guaranteed per account

          [default: 16]

      --txpool.pricebump <PRICE_BUMP>
          Price bump (in %) for the transaction pool underpriced check

          [default: 10]

      --blobpool.pricebump <BLOB_TRANSACTION_PRICE_BUMP>
          Price bump percentage to replace an already existing blob transaction

          [default: 100]

      --txpool.max-tx-input-bytes <MAX_TX_INPUT_BYTES>
          Max size in bytes of a single transaction allowed to enter the pool

          [default: 131072]

      --txpool.max-cached-entries <MAX_CACHED_ENTRIES>
          The maximum number of blobs to keep in the in memory blob cache

          [default: 100]

      --txpool.nolocals
          Flag to disable local transaction exemptions

      --txpool.locals <LOCALS>
          Flag to allow certain addresses as local

      --txpool.no-local-transactions-propagation
          Flag to toggle local transaction propagation

Debug:
      --debug.continuous
          Prompt the downloader to download blocks one at a time.

          NOTE: This is for testing purposes only.

      --debug.terminate
          Flag indicating whether the node should be terminated after the pipeline sync

      --debug.tip <TIP>
          Set the chain tip manually for testing purposes.

          NOTE: This is a temporary flag

      --debug.max-block <MAX_BLOCK>
          Runs the sync only up to the specified block

      --debug.print-inspector
          Print opcode level traces directly to console during execution

      --debug.hook-block <HOOK_BLOCK>
          Hook on a specific block during execution

      --debug.hook-transaction <HOOK_TRANSACTION>
          Hook on a specific transaction during execution

      --debug.hook-all
          Hook on every transaction in a block

      --debug.skip-fcu <SKIP_FCU>
          If provided, the engine will skip `n` consecutive FCUs

      --debug.engine-api-store <PATH>
          The path to store engine API messages at. If specified, all of the intercepted engine API messages will be written to specified location

Database:
      --db.log-level <LOG_LEVEL>
          Database logging level. Levels higher than "notice" require a debug build

          Possible values:
          - fatal:   Enables logging for critical conditions, i.e. assertion failures
          - error:   Enables logging for error conditions
          - warn:    Enables logging for warning conditions
          - notice:  Enables logging for normal but significant condition
          - verbose: Enables logging for verbose informational
          - debug:   Enables logging for debug-level messages
          - trace:   Enables logging for trace debug-level messages
          - extra:   Enables logging for extra debug-level messages

      --db.exclusive <EXCLUSIVE>
          Open environment in exclusive/monopolistic mode. Makes it possible to open a database on an NFS volume

          [possible values: true, false]

      --bitcoind-config-path <FILE>
          The path to the configuration file to use for network properties.

Logging:
      --log.stdout.format <FORMAT>
          The format to use for logs written to stdout

          [default: terminal]

          Possible values:
          - json:     Represents JSON formatting for logs. This format outputs log records as JSON objects, making it suitable for structured logging
          - log-fmt:  Represents logfmt (key=value) formatting for logs. This format is concise and human-readable, typically used in command-line applications
          - terminal: Represents terminal-friendly formatting for logs

      --log.stdout.filter <FILTER>
          The filter to use for logs written to stdout

          [default: ]

      --log.file.format <FORMAT>
          The format to use for logs written to the log file

          [default: terminal]

          Possible values:
          - json:     Represents JSON formatting for logs. This format outputs log records as JSON objects, making it suitable for structured logging
          - log-fmt:  Represents logfmt (key=value) formatting for logs. This format is concise and human-readable, typically used in command-line applications
          - terminal: Represents terminal-friendly formatting for logs

      --log.file.filter <FILTER>
          The filter to use for logs written to the log file

          [default: debug]

      --log.file.directory <PATH>
          The path to put log files in

          [default: /Users/armins/Library/Caches/reth/logs]

      --log.file.max-size <SIZE>
          The maximum size (in MB) of one log file

          [default: 200]

      --log.file.max-files <COUNT>
          The maximum amount of log files that will be stored. If set to 0, background file logging is disabled

          [default: 5]

      --log.journald
          Write logs to journald

      --log.journald.filter <FILTER>
          The filter to use for logs written to journald

          [default: error]

      --color <COLOR>
          Sets whether or not the formatter emits ANSI terminal escape codes for colors and other text formatting

          [default: always]

          Possible values:
          - always: Colors on
          - auto:   Colors on
          - never:  Colors off

Display:
  -v, --verbosity...
          Set the minimum log level.

          -v      Errors
          -vv     Warnings
          -vvv    Info
          -vvvv   Debug
          -vvvvv  Traces (warning: very verbose!)

  -q, --quiet
          Silence all log output
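For reference, a bare-metal invocation mirroring the Docker Compose command shown earlier might look like the sketch below. Paths, credentials, and host names are placeholders, and flags that do not appear in the help above (such as --federation-config-path and the --cometbft-rpc-* options) are taken from the compose file rather than from this reference.

reth poa \
  --federation-config-path=$HOME/botanix_testnet/chain.toml \
  --datadir=$HOME/botanix_testnet \
  --http --http.addr=0.0.0.0 --http.port=8545 \
  --http.api=debug,eth,net,trace,txpool,web3,rpc \
  --ws --ws.addr=0.0.0.0 --ws.port=8546 \
  --bitcoind.url="$BITCOIND_HOST" \
  --bitcoind.username="$BITCOIND_USER" \
  --bitcoind.password="$BITCOIND_PASS" \
  --p2p-secret-key=$HOME/botanix_testnet/discovery-secret \
  --port=30303 \
  --btc-network=signet \
  --metrics=0.0.0.0:9001 \
  --abci-host=0.0.0.0 --abci-port=26658 \
  --cometbft-rpc-host=127.0.0.1 --cometbft-rpc-port=26657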



Federation information

In the following section, you'll find additional information on setting up the federation. This part of the documentation isn’t required if you're only interested in Running an RPC Node. However, it may be valuable for those who want to set up their own federation or gain a deeper understanding of the project.

Setting up federation.toml

The federation.toml file defines your federation. It includes the federation members' public keys and socket addresses.

An example of a two-person federation would be:

botanix-fee-recipient = "0xb8c03cb8C9bAC79c53926E3C66344C13452105f5"

minting-contract-bytecode = "60806040526004361061003f5760003560e01c80635fe03f45146100445780636f194dc914610066578063a5d0bb93146100b3578063a8de6d8c146100d6575b600080fd5b34801561005057600080fd5b5061006461005f366004610489565b6100fd565b005b34801561007257600080fd5b50610099610081366004610512565b60006020819052908152604090205463ffffffff1681565b60405163ffffffff90911681526020015b60405180910390f35b6100c66100c1366004610534565b610349565b60405190151581526020016100aa565b3480156100e257600080fd5b506100ef6402540be40081565b6040519081526020016100aa565b60005a6001600160a01b03881660009081526020819052604090205490915063ffffffff9081169086161161018b5760405162461bcd60e51b815260206004820152602960248201527f7573657220626974636f696e426c6f636b486569676874206e6565647320746f60448201526820696e63726561736560b81b60648201526084015b60405180910390fd5b6001600160a01b0387166000908152602081905260408120805463ffffffff191663ffffffff88161790553a60016101c46004876105b6565b61048560036107d3615208805a6101db908b6105d8565b6101e591906105f1565b6101ef91906105f1565b6101f991906105f1565b61020391906105f1565b61020d91906105f1565b61021791906105f1565b61022191906105d8565b61022b9190610604565b90508681111561027d5760405162461bcd60e51b815260206004820152601c60248201527f547820636f7374206578636565647320706567696e20616d6f756e74000000006044820152606401610182565b61028781886105d8565b6040519097506001600160a01b0389169088156108fc029089906000818181858888f193505050501580156102c0573d6000803e3d6000fd5b506040516001600160a01b0384169082156108fc029083906000818181858888f193505050501580156102f7573d6000803e3d6000fd5b50876001600160a01b03167f922344dc04648c0ce028ecdf9b2c9eed9a6794dbb47b777b54b0cfe069f128aa888888886040516103379493929190610644565b60405180910390a25050505050505050565b600061035c6402540be40061014a610604565b34116103d05760405162461bcd60e51b815260206004820152603860248201527f56616c7565206d7573742062652067726561746572207468616e20647573742060448201527f616d6f756e74206f662033333020736174732f764279746500000000000000006064820152608401610182565b336001600160a01b03167f17f87987da8ca71c697791dcfd190d07630cf17bf09c65c5a59b8277d9fe17153487878787604051610411959493929190610674565b60405180910390a2506001949350505050565b80356001600160a01b038116811461043b57600080fd5b919050565b60008083601f84011261045257600080fd5b50813567ffffffffffffffff81111561046a57600080fd5b60208301915083602082850101111561048257600080fd5b9250929050565b60008060008060008060a087890312156104a257600080fd5b6104ab87610424565b955060208701359450604087013563ffffffff811681146104cb57600080fd5b9350606087013567ffffffffffffffff8111156104e757600080fd5b6104f389828a01610440565b9094509250610506905060808801610424565b90509295509295509295565b60006020828403121561052457600080fd5b61052d82610424565b9392505050565b6000806000806040858703121561054a57600080fd5b843567ffffffffffffffff8082111561056257600080fd5b61056e88838901610440565b9096509450602087013591508082111561058757600080fd5b5061059487828801610440565b95989497509550505050565b634e487b7160e01b600052601160045260246000fd5b6000826105d357634e487b7160e01b600052601260045260246000fd5b500490565b818103818111156105eb576105eb6105a0565b92915050565b808201808211156105eb576105eb6105a0565b80820281158282048414176105eb576105eb6105a0565b81835281816020850137506000828201602090810191909152601f909101601f19169091010190565b84815263ffffffff8416602082015260606040820152600061066a60608301848661061b565b9695505050505050565b85815260606020820152600061068e60608301868861061b565b82810360408401526106a181858761061b565b9897505050505050505056fea2646970667358221220cf16442b31d8d5a64fc0a5e558f76e2e76039b54484fece01be27ffcf75ede6f64736f6c63430008150033
"

[[federation-member-public-key]]
key = "039bef292b80427d355cecb89eda8a50a7d2196a93d73dade5a0c4a07cd334815d"
socket-addr = "127.0.0.1:30303"

[[federation-member-public-key]]
key = "02bdc272b244f717604fffe659d2d98205d1e6764fdf453d1631f42c2db4d8d710"
socket-addr = "127.0.0.1:30304"

Additional fee-recipient

The Botanix Federation requires you to set up an additional fee recipient. Any 20-byte Ethereum address will work for this field. The additional fee recipient is traditionally the party responsible for setting up the federation, coordinating setup and maintenance, and responding to emergencies. For these additional responsibilities, they receive 20% of all block fees.

Minting Contract bytecode

The minting contract is essential to the operation of the sidechain, as it manages deposits and withdrawals. The fields specified in the federation.toml file only serve to verify the integrity of the actual minting contract deployed in the federation's genesis block.

Coming soon...

Setting up bitcoind

To run either a Botanix RPC node or a federation node, you need to set up a Bitcoin block source. Our instructions refer to bitcoind, but you are free to use any Bitcoin implementation.

Getting bitcoind

Please refer to Setting up bitcoin core

Base configs

The Botanix node will always use RPC credentials for authentication. Please start with these base configs.

rpcuser=<username>
rpcpassword=<password>
rpcallowip=127.0.0.1
server=1

Note that the bitcoind RPC interface does not encrypt traffic. It is recommended to run bitcoind on the same machine or in the same VPC as your Botanix node.
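Once bitcoind is running with these options in its configuration file, bitcoin-cli picks the credentials up from the same file, so a quick sanity check against the RPC interface is simply:

bitcoin-cli -signet getblockchaininfo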

Testnet

The Botanix testnet uses Bitcoin signet as its L1 chain. To run bitcoind in signet mode, start it with the signet option; a combined example config follows.

signet=1
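Putting the base options and the signet flag together, a minimal bitcoin.conf might look like the sketch below. The path assumes the default bitcoind data directory, the credentials are placeholders, and Mutinynet additionally requires its own signet parameters on top of this.

cat >> ~/.bitcoin/bitcoin.conf <<'EOF'
signet=1
server=1
rpcuser=<username>
rpcpassword=<password>
rpcallowip=127.0.0.1
# txindex=1 and blockfilterindex=1 are also enabled in the Docker-based setup above
EOF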

Bitcoin Signing Server

The Bitcoin signing server is responsible for managing the federation's Bitcoin multisig keys. This service needs to be live before the PoA (Proof of Authority) node can begin to produce blocks, and it only needs to be run for block-producing federation nodes.

Additionally, this service does not need to be publicly accessible. It is recommended that only the machine hosting your Botanix node be able to access the Bitcoin signing server.

Additional notes

What is the identifier?

Your identifier is your index into the federation list; more about this list can be found in chain-config.md. For example, if my public key is the first entry in the list, my identifier is 0; if it is the fourth, my identifier is 3.

What is the database?

This service needs to store several pieces of information that are critical for signing Bitcoin withdrawal requests, for example the UTXO set and information about its private key share in the FROST multisig. This database contains sensitive data and is not recoverable once deleted.

CLI reference

$ cargo run -- --help
Usage: btc-server [OPTIONS]

Options:
      --db <DB>
          The path to the database
      --config-path <CONFIG_PATH>
          The path to the configuration file
      --btc-network <BTC_NETWORK>
          The bitcoin network to operate on
      --identifier <IDENTIFIER>
          Frost participant identifier
      --address <ADDRESS>

      --max-signers <MAX_SIGNERS>
          max signers
      --min-signers <MIN_SIGNERS>
          min signers
      --toml <TOML>
          toml configuration path
      --jwt-secret <JWT_SECRET>
          jwt secret path
      --bitcoind-url <BITCOIND_URL>
          bitcoind url
      --bitcoind-user <BITCOIND_USER>
          bitcoind user
      --bitcoind-pass <BITCOIND_PASS>
          bitcoind pass
      --fee-rate-diff-percentage <FEE_RATE_DIFF_PERCENTAGE>
          acceptable fee rate difference percentage as an integer (ex. 2 = 2%, 20 = 20%)
      --fall-back-fee-rate-sat-per-vbyte <FALL_BACK_FEE_RATE_SAT_PER_VBYTE>
          Fall back fee rate expressed in sat per vbyte
      --pegin-confirmation-depth <PEGIN_CONFIRMATION_DEPTH>
          The number of confirmations required for pegins
  -h, --help
          Print help
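As an illustration, a hypothetical invocation for the first federation member (identifier 0) of a 2-of-3 FROST setup might look like the sketch below; every value shown is an example, not a recommendation.

cargo run -- \
  --db ./btc-server-db \
  --btc-network signet \
  --identifier 0 \
  --min-signers 2 \
  --max-signers 3 \
  --toml ./chain.toml \
  --bitcoind-url 127.0.0.1:38332 \
  --bitcoind-user "$BITCOIND_USER" \
  --bitcoind-pass "$BITCOIND_PASS" \
  --fee-rate-diff-percentage 20 \
  --pegin-confirmation-depth 6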


Setting up CometBFT

Install CometBFT from Source

Full installation guidelines for CometBFT can be found on Github.

Note

You should now have the cometbft binary in build/.

Initialize the node

To initialize nodes, run the following commands:

# Node 1
cometbft init  -k "secp256k1" --home ./node1

# Node 2
cometbft init  -k "secp256k1" --home ./node2

Note

By default, the output of the init command is written to ~/.cometbft.

Update config.toml

Update the ports so they don't conflict with those of other local nodes, e.g.:

tcp://127.0.0.1:26657 > tcp://127.0.0.1:36657

Set persistent_peers like so:

persistent_peers = "[email protected]:36656"

To get the peer id to use above, use these commands:

cometbft show-node-id --home ./node1
cometbft show-node-id --home ./node2
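For example, the node ID printed for node 1 can be combined with its listen address to build the persistent_peers entry for node 2 (the port below assumes the default 26656; adjust it if you shifted the ports as described above):

NODE1_ID=$(cometbft show-node-id --home ./node1)
echo "persistent_peers = \"${NODE1_ID}@127.0.0.1:26656\""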

Update the genesis.json for all nodes

Make sure that:

  1. the chain_id value is the same
  2. max_gas is set to “-1” under “consensus_params”
  3. the validator pubkeys types are the same: Should be “secp256k1”
  4. allow_duplicate_ip value is “true” if getting duplicate connections error
  5. addr_book_strict value is “false” if running locally
  6. features is:
    "feature": {
                "vote_extensions_enable_height": "0",
                "pbts_enable_height": "1"
            }
    
  7. validators entry has all nodes like so:
    "validators": [
            {
                "address": "044583E6D3EFDC706BA0AE434ADA527E4E82D079",
                "pub_key": {
                    "type": "tendermint/PubKeyEd25519",
                    "value": "Ww1JVYV8bZhv49VOvJ25iLsH5Uu/Sn6J3hBOZekifVo="
                },
                "power": "10",
                "name": ""
            },
            {
                "address": "634AB30AC3F8888AA4071C893036B22BC7A3D9A7",
                "pub_key": {
                    "type": "tendermint/PubKeyEd25519",
                    "value": "hEmXgh89RUOWmvL/XU7aQWKeBsKnZWHFyTJWsbeTvlU="
                },
                "power": "10",
                "name": ""
            }
        ]
    

Commands to start and reset the nodes

Start

cometbft start --home ./path_to_node

Reset

cometbft unsafe_reset_all --home ./path_to_node

Note

unsafe_reset_all only resets the data directory, not the genesis.json

Example genesis.json

{
  "genesis_time": "2024-08-28T15:28:48.066686Z",
  "chain_id": "3636",
  "initial_height": "0",
  "consensus_params": {
    "block": {
      "max_bytes": "4194304",
      "max_gas": "-1"
    },
    "evidence": {
      "max_age_num_blocks": "100000",
      "max_age_duration": "172800000000000",
      "max_bytes": "1048576"
    },
    "validator": {
      "pub_key_types": ["secp256k1"]
    },
    "version": {
      "app": "0"
    },
    "synchrony": {
      "precision": "500000000",
      "message_delay": "2000000000"
    },
    "feature": {
      "vote_extensions_enable_height": "1",
      "pbts_enable_height": "0"
    }
  },
  "validators": [
    {
      "address": "009D915D631DB0A3FEFB32685779D023153698DB",
      "pub_key": {
        "type": "tendermint/PubKeySecp256k1",
        "value": "AqSCYUYNrmyoGL6j6o3cKRt63fzpdv+LB3guvIdCNziU"
      },
      "power": "10",
      "name": ""
    },
    {
      "address": "09784D2BAA503337EAB2B1B023296324BA4827A6",
      "pub_key": {
        "type": "tendermint/PubKeySecp256k1",
        "value": "A9XFfycX13DcZWZHWTSoTA8Yu6Cappw0U7Ji5hbD8l5V"
      },
      "power": "10",
      "name": ""
    }
  ],
  "app_hash": ""
}

Configuring Reth

Reth places a configuration file named reth.toml in the data directory specified when starting the node. It is written in the TOML format.

The default data directory is platform dependent:

  • Linux: $XDG_DATA_HOME/reth/ or $HOME/.local/share/reth/
  • Windows: {FOLDERID_RoamingAppData}/reth/
  • macOS: $HOME/Library/Application Support/reth/

The configuration file contains the following sections:

The [stages] section

The stages section is used to configure how individual stages in reth behave, which has a direct impact on resource utilization and sync speed.

The defaults shipped with Reth try to be relatively reasonable, but may not be optimal for your specific set of hardware.

headers

The headers section controls both the behavior of the header stage, which downloads historical headers, and the primary downloader that fetches headers over P2P.

[stages.headers]
# The minimum and maximum number of concurrent requests to have in flight at a time.
#
# The downloader uses these as best effort targets, which means that the number
# of requests may be outside of these thresholds within a reasonable degree.
#
# Increase these for faster sync speeds at the cost of additional bandwidth and memory
downloader_max_concurrent_requests = 100
downloader_min_concurrent_requests = 5
# The maximum number of responses to buffer in the downloader at any one time.
#
# If the buffer is full, no more requests will be sent until room opens up.
#
# Increase the value for a larger buffer at the cost of additional memory consumption
downloader_max_buffered_responses = 100
# The maximum number of headers to request from a peer at a time.
downloader_request_limit = 1000
# The amount of headers to persist to disk at a time.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 10000

bodies

The bodies section controls both the behavior of the bodies stage, which downloads historical block bodies, and the primary downloader that fetches block bodies over P2P.

[stages.bodies]
# The maximum number of bodies to request from a peer at a time.
downloader_request_limit = 200
# The maximum amount of bodies to download before writing them to disk.
#
# A lower value means more frequent disk I/O (writes), but also
# lowers memory usage.
downloader_stream_batch_size = 1000
# The size of the internal block buffer in bytes.
#
# A bigger buffer means that bandwidth can be saturated for longer periods,
# but also increases memory consumption.
#
# If the buffer is full, no more requests will be made to peers until
# space is made for new blocks in the buffer.
#
# Defaults to around 2GB.
downloader_max_buffered_blocks_size_bytes = 2147483648
# The minimum and maximum number of concurrent requests to have in flight at a time.
#
# The downloader uses these as best effort targets, which means that the number
# of requests may be outside of these thresholds within a reasonable degree.
#
# Increase these for faster sync speeds at the cost of additional bandwidth and memory
downloader_min_concurrent_requests = 5
downloader_max_concurrent_requests = 100

sender_recovery

The sender recovery stage recovers the address of transaction senders using transaction signatures.

[stages.sender_recovery]
# The amount of transactions to recover senders for before
# writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 100000

execution

The execution stage executes historical transactions. This stage is generally very I/O and memory intensive, since executing transactions involves reading block headers, transactions, accounts and account storage.

Each executed transaction also generates a number of changesets, and mutates the current state of accounts and storage.

For this reason, there are several ways to control how much work to perform before the results are written to disk.

[stages.execution]
# The maximum number of blocks to process before the execution stage commits.
max_blocks = 500000
# The maximum number of state changes to keep in memory before the execution stage commits.
max_changes = 5000000
# The maximum cumulative amount of gas to process before the execution stage commits.
max_cumulative_gas = 1500000000000 # 30_000_000 * 50_000_000
# The maximum time spent on blocks processing before the execution stage commits.
max_duration = '10m'

For all thresholds specified, the first to be hit will determine when the results are written to disk.

Lower values correspond to more frequent disk writes, but also lower memory consumption. A lower value also negatively impacts sync speed, since reth keeps a cache around for the entire duration of blocks executed in the same range.

account_hashing

The account hashing stage builds a secondary table of accounts, where the key is the hash of the address instead of the raw address.

This is used to later compute the state root.

[stages.account_hashing]
# The threshold in number of blocks before the stage starts from scratch
# and re-hashes all accounts as opposed to just the accounts that changed.
clean_threshold = 500000
# The amount of accounts to process before writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 100000

storage_hashing

The storage hashing stage builds a secondary table of account storages, where the key is the hash of the address and the slot, instead of the raw address and slot.

This is used to later compute the state root.

[stages.storage_hashing]
# The threshold in number of blocks before the stage starts from scratch
# and re-hashes all storages as opposed to just the storages that changed.
clean_threshold = 500000
# The amount of storage slots to process before writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 100000

merkle

The merkle stage uses the indexes built in the hashing stages (storage and account hashing) to compute the state root of the latest block.

[stages.merkle]
# The threshold in number of blocks before the stage starts from scratch
# and re-computes the state root, discarding the trie that has already been built,
# as opposed to incrementally updating the trie.
clean_threshold = 5000

transaction_lookup

The transaction lookup stage builds an index of transaction hashes to their sequential transaction ID.

[stages.transaction_lookup]
# The maximum number of transactions to process before writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
chunk_size = 5000000

index_account_history

The account history indexing stage builds an index of what blocks a particular account changed.

[stages.index_account_history]
# The maximum amount of blocks to process before writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 100000

index_storage_history

The storage history indexing stage builds an index of what blocks a particular storage slot changed.

[stages.index_storage_history]
# The maximum amount of blocks to process before writing the results to disk.
#
# Lower thresholds correspond to more frequent disk I/O (writes),
# but lowers memory usage
commit_threshold = 100000

etl

An ETL (extract, transform, load) data collector. Used mainly to insert data into MDBX in a sorted manner.

[stages.etl]
# The maximum size in bytes of data held in memory before being flushed to disk as a file.
#
# Lower threshold corresponds to more frequent flushes,
# but lowers temporary storage usage
file_size = 524_288_000 # 500 * 1024 * 1024

The [peers] section

The peers section is used to configure how the networking component of reth establishes and maintains connections to peers.

In the top level of the section you can configure trusted nodes, and how often reth will try to connect to new peers.

[peers]
# How often reth will attempt to make outgoing connections,
# if there is room for more peers
refill_slots_interval = '1s'
# A list of ENRs for trusted peers, which are peers reth will always try to connect to.
trusted_nodes = []
# Whether reth will only attempt to connect to the peers specified above,
# or if it will connect to other peers in the network
connect_trusted_nodes_only = false
# The duration for which a badly behaving peer is banned
ban_duration = '12h'

connection_info

This section configures how many peers reth will connect to.

[peers.connection_info]
# The maximum number of outbound peers (peers we connect to)
max_outbound = 100
# The maximum number of inbound peers (peers that connect to us)
max_inbound = 30

reputation_weights

This section configures the penalty for various offences peers can commit.

All peers start out with a reputation of 0, which increases over time as the peer stays connected to us.

If the peer misbehaves, various penalties are applied to its reputation, and if it falls below a certain threshold (currently 50 * -1024), reth will disconnect and ban the peer temporarily (except for protocol violations, which result in a permanent ban).

[peers.reputation_weights]
bad_message = -16384
bad_block = -16384
bad_transactions = -16384
already_seen_transactions = 0
timeout = -4096
bad_protocol = -2147483648
failed_to_connect = -25600
dropped = -4096

backoff_durations

If reth fails to establish a connection to a peer, it will not re-attempt for some amount of time, depending on the reason the connection failed.

[peers.backoff_durations]
low = '30s'
medium = '3m'
high = '15m'
max = '1h'

The [sessions] section

The sessions section configures the internal behavior of a single peer-to-peer connection.

You can configure the session buffer sizes, which limits the amount of pending events (incoming messages) and commands (outgoing messages) each session can hold before it will start to ignore messages.

Note

These buffers are allocated per peer, which means that increasing the buffer sizes can have large impact on memory consumption.

[sessions]
session_command_buffer = 32
session_event_buffer = 260

You can also configure request timeouts:

[sessions.initial_internal_request_timeout]
secs = 20
nanos = 0

# The amount of time before the peer will be penalized for
# being in violation of the protocol. This exacts a permaban on the peer.
[sessions.protocol_breach_request_timeout]
secs = 120
nanos = 0

The [prune] section

The prune section configures how reth prunes historical data.

You can configure the pruning of different segments of the data independently of others. For any unspecified segments, the default setting is no pruning.

Default config

No pruning, run as archive node.

Example of a custom pruning configuration

This configuration will:

  • Run pruning every 5 blocks
  • Continuously prune all transaction senders, account history and storage history before the block head-100_000, i.e. keep the data for the last 100_000 blocks
  • Prune all receipts before the block 1920000, i.e. keep receipts from the block 1920000
[prune]
# Minimum pruning interval measured in blocks
block_interval = 5

[prune.parts]
# Sender Recovery pruning configuration
sender_recovery = { distance = 100_000 } # Prune all transaction senders before the block `head-100000`, i.e. keep transaction senders for the last 100001 blocks

# Transaction Lookup pruning configuration
transaction_lookup = "full" # Prune all TxNumber => TxHash mappings

# Receipts pruning configuration. This setting overrides `receipts_log_filter`.
receipts = { before = 1920000 } # Prune all receipts from transactions before the block 1920000, i.e. keep receipts from the block 1920000

# Account History pruning configuration
account_history = { distance = 100_000 } # Prune all historical account states before the block `head-100000`

# Storage History pruning configuration
storage_history = { distance = 100_000 } # Prune all historical storage states before the block `head-100000`

We can also prune receipts in a more granular way, using log filtering:

# Receipts pruning configuration by retaining only those receipts that contain logs emitted
# by the specified addresses, discarding all others. This setting is overridden by `receipts`.
[prune.parts.receipts_log_filter]
# Prune all receipts, leaving only those which:
# - Contain logs from address `0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48`, starting from the block 17000000
# - Contain logs from address `0xdac17f958d2ee523a2206206994597c13d831ec7` in the last 1001 blocks
"0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" = { before = 17000000 }
"0xdac17f958d2ee523a2206206994597c13d831ec7" = { distance = 1000 }

Transaction types

Over time, the Ethereum network has undergone various upgrades and improvements to enhance transaction efficiency, security, and user experience. Four significant transaction types that have evolved are:

  • Legacy Transactions,
  • EIP-2930 Transactions,
  • EIP-1559 Transactions,
  • EIP-4844 Transactions

Each of these transaction types brings unique features and improvements to the Ethereum network.

Legacy Transactions

Legacy Transactions (type 0x0), the traditional Ethereum transactions in use since the network's inception, include the following parameters:

  • nonce,
  • gasPrice,
  • gasLimit,
  • to,
  • value,
  • data,
  • v,
  • r,
  • s.

These transactions do not utilize access lists, which specify the addresses and storage keys to be accessed, nor do they incorporate EIP-1559 fee market changes.

EIP-2930 Transactions

Introduced in EIP-2930, transactions with type 0x1 incorporate an accessList parameter alongside legacy parameters. This accessList specifies an array of addresses and storage keys that the transaction plans to access, enabling gas savings on cross-contract calls by pre-declaring the accessed contract and storage slots. They do not include EIP-1559 fee market changes.

EIP-1559 Transactions

EIP-1559 transactions (type 0x2) were introduced in Ethereum's London fork to address network congestion and transaction fee overpricing caused by the historical fee market. Unlike traditional transactions, EIP-1559 transactions don't specify a gas price (gasPrice). Instead, they use an in-protocol, dynamically changing base fee per gas, adjusted at each block to manage network congestion.

Alongside the accessList parameter and legacy parameters (except gasPrice), EIP-1559 transactions include:

  • maxPriorityFeePerGas, specifying the maximum fee above the base fee the sender is willing to pay,
  • maxFeePerGas, setting the maximum total fee the sender is willing to pay.

The base fee is burned, while the priority fee is paid to the miner who includes the transaction, incentivizing miners to include transactions with higher priority fees per gas.

EIP-4844 Transaction

EIP-4844 transactions (type 0x3) were introduced in Ethereum's Dencun fork. They provide temporary but significant scaling relief for rollups by allowing Ethereum to initially scale to 0.375 MB of blob data per slot, with a separate fee market that keeps fees very low while usage of this system is limited.

Alongside the legacy and EIP-1559 parameters, EIP-4844 transactions include:

  • max_fee_per_blob_gas, the maximum total fee per gas the sender is willing to pay for blob gas, in wei,
  • blob_versioned_hashes, a list of versioned blob hashes associated with the transaction's EIP-4844 data blobs.

The actual blob fee is deducted from the sender balance before transaction execution and burned, and is not refunded in case of transaction failure.
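
To see these fields in practice, you can fetch any transaction over JSON-RPC and inspect its type field. The sketch below assumes the HTTP server is enabled on the default port, and the transaction hash is a placeholder to substitute with a real one from your node:

# Fetch a transaction and inspect its type and fee fields (the hash is a placeholder)
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_getTransactionByHash","params":["0x<transaction-hash>"]}'
# A type 0x2 (EIP-1559) transaction reports maxFeePerGas and maxPriorityFeePerGas,
# while a type 0x0 (legacy) transaction reports only gasPrice.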

Logs and observability

Reth exposes a number of metrics, which are listed here. We can serve them from an HTTP endpoint by adding the --metrics flag:

reth node --metrics 127.0.0.1:9001

Now, as the node is running, you can curl the endpoint you provided to the --metrics flag to get a text dump of the metrics at that time:

curl 127.0.0.1:9001

The response from this is quite descriptive, but it can be a bit verbose. Plus, it's just a snapshot of the metrics at the time you curled the endpoint.

You can run the following command in a separate terminal to periodically poll the endpoint, and just print the values (without the header text) to the terminal:

while true; do
    date
    curl -s localhost:9001 | grep -Ev '^(#|$)' | sort
    echo
    sleep 10
done

We're finally getting somewhere! As a final step, though, wouldn't it be great to see how these metrics progress over time (and generally, in a GUI)?

Prometheus & Grafana

We're going to use Prometheus to collect metrics from the endpoint we set up, and Grafana to query those metrics from Prometheus and define a dashboard with them.

Let's begin by installing both Prometheus and Grafana, which one can do with e.g. Homebrew:

brew update
brew install prometheus
brew install grafana

Then, kick off the Prometheus and Grafana services:

brew services start prometheus
brew services start grafana

This will start a Prometheus service which, by default, only scrapes metrics about its own instance. You'll need to change its config so that it also scrapes your Reth node's metrics endpoint at localhost:9001, which you set using the --metrics flag.

You can find an example config for the Prometheus service in the repo here: etc/prometheus/prometheus.yml

Depending on your installation you may find the config for your Prometheus service at:

  • OSX: /opt/homebrew/etc/prometheus.yml
  • Linuxbrew: /home/linuxbrew/.linuxbrew/etc/prometheus.yml
  • Others: /usr/local/etc/prometheus/prometheus.yml
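
Once you've located the config, a minimal scrape configuration could look like the following sketch. It assumes a Homebrew install on macOS (adjust the path for your setup), the job name reth is arbitrary, and the reth repository ships a reference config at etc/prometheus/prometheus.yml. Note that this overwrites the existing file, so back it up first if you want to keep the defaults:

cat > /opt/homebrew/etc/prometheus.yml <<'EOF'
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: reth
    metrics_path: "/"
    static_configs:
      - targets: ["localhost:9001"]
EOF
brew services restart prometheus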

Next, open up "localhost:3000" in your browser, which is the default URL for Grafana. Here, "admin" is the default for both the username and password.

Once you've logged in, click on the gear icon in the lower left, and select "Data Sources". Click on "Add data source", and select "Prometheus" as the type. In the HTTP URL field, enter http://localhost:9090. Finally, click "Save & Test".

As this might be a point of confusion, localhost:9001, which we supplied to --metrics, is the endpoint that Reth exposes, from which Prometheus collects metrics. Prometheus then exposes localhost:9090 (by default) for other services (such as Grafana) to consume Prometheus metrics.

To configure the dashboard in Grafana, click on the squares icon in the upper left, and click on "New", then "Import". From there, click on "Upload JSON file", and select the example file in reth/etc/grafana/dashboards/overview.json. Finally, select the Prometheus data source you just created, and click "Import".

And voilà, you should see your dashboard! If you're not yet connected to any peers, the dashboard will look like it's in an empty state, but once you are, you should see it start populating with data.

Conclusion

In this runbook, we took you through starting the node, exposing different log levels, exporting metrics, and finally viewing those metrics in a Grafana dashboard.

This will all be very useful to you, whether you're simply running a home node and want to keep an eye on its performance, or if you're a contributor and want to see the effect that your (or others') changes have on Reth's operations.

Troubleshooting

This page explains how to deal with the most common issues.

Database

Slow database inserts and updates

If you're:

  1. Running behind the tip
  2. Have a slow canonical commit time according to the Canonical Commit Latency time chart on the Grafana dashboard (more than 2-3 seconds)
  3. Seeing warnings in your logs such as
    2023-11-08T15:17:24.789731Z WARN providers::db: Transaction insertion took too long block_number=18528075 tx_num=2150227643 hash=0xb7de1d6620efbdd3aa8547c47a0ff09a7fd3e48ba3fd2c53ce94c6683ed66e7c elapsed=6.793759034s
    

then most likely you're experiencing issues with the database freelist. To confirm it, check whether the values on the Freelist chart on the Grafana dashboard are greater than 10M.

Currently, there are two main ways to fix this issue.

Compact the database

It will take around 5-6 hours and requires additional disk space located on the same or a different drive.

  1. Clone Reth
    git clone https://github.com/paradigmxyz/reth
    cd reth
    
  2. Build database debug tools
    make db-tools
    
  3. Run compaction (this step will take 5-6 hours, depending on the I/O speed)
    ./db-tools/mdbx_copy -c $(reth db path) reth_compact.dat
    
  4. Stop Reth
  5. Backup original database
    mv $(reth db path)/mdbx.dat reth_old.dat
    
  6. Move compacted database in place of the original database
    mv reth_compact.dat $(reth db path)/mdbx.dat
    
  7. Start Reth
  8. Confirm that the values on the Freelist chart are near zero and the values on the Canonical Commit Latency time chart are less than 1 second.
  9. Delete original database
    rm reth_old.dat
    

Re-sync from scratch

It will take the same time as the initial sync.

  1. Stop Reth
  2. Drop the database by removing the db and static file directories found in your data_dir (see the sketch after this list)
  3. Start Reth
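
As a sketch, assuming the default data directory layout where a static files directory sits next to the db directory, the removal could look like this. Double-check the paths with reth db path before deleting anything:

# Stop Reth first. `reth db path` prints the location of the db directory.
DB_PATH=$(reth db path)
rm -rf "$DB_PATH"
rm -rf "$(dirname "$DB_PATH")/static_files"   # directory name assumed from the default layout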

Database write error

If you encounter irrecoverable database-related errors, in most cases they are related to the RAM/NVMe/SSD you use. For example:

Error: A stage encountered an irrecoverable error.

Caused by:
0: An internal database error occurred: Database write error code: -30796
1: Database write error code: -30796

or

Error: A stage encountered an irrecoverable error.

Caused by:
0: An internal database error occurred: Database read error code: -30797
1: Database read error code: -30797

  1. Check your memory health: use memtest86+ or memtester. If your memory is faulty, it's better to resync the node on different hardware.
  2. Check database integrity:
    git clone https://github.com/paradigmxyz/reth
    cd reth
    make db-tools
    ./db-tools/mdbx_chk $(reth db path)/mdbx.dat | tee mdbx_chk.log
    
    If mdbx_chk has detected any errors, please open an issue and post the output from the mdbx_chk.log file.

Concurrent database access error (using containers/Docker)

If you encounter an error while accessing the database from multiple processes and you are using multiple containers or a mix of host and container(s), it is possible the error is related to PID namespaces. You might see one of the following error messages.

mdbx:0: panic: Assertion `osal_rdt_unlock() failed: err 1' failed.

or

pthread_mutex_lock.c:438: __pthread_mutex_lock_full: Assertion `e != ESRCH || !robust' failed

If you are using Docker, a possible solution is to run all database-accessing containers with --pid=host flag.

For more information, check out the Containers section in the libmdbx README.
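
As a sketch, such a container could be started like this; the volume path, image name and command are placeholders, and the relevant part is only the --pid=host flag:

docker run --pid=host --rm -v /path/to/datadir:/data <your-image> <your-command>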

Hardware Performance Testing

If you're experiencing degraded performance, it may be related to hardware issues. Below are some tools and tests you can run to evaluate your hardware performance.

If your hardware performance is significantly lower than the reference numbers below, it may explain degraded node performance. Consider upgrading your hardware or investigating potential issues with your current setup.

Disk Speed Testing with IOzone

  1. Test disk speed:

    iozone -e -t1 -i0 -i2 -r1k -s1g /tmp
    

    Reference numbers (on Latitude c3.large.x86):

    Children see throughput for 1 initial writers = 907733.81 kB/sec
    Parent sees throughput for 1 initial writers = 907239.68 kB/sec
    Children see throughput for 1 rewriters = 1765222.62 kB/sec
    Parent sees throughput for 1 rewriters = 1763433.35 kB/sec
    Children see throughput for 1 random readers = 1557497.38 kB/sec
    Parent sees throughput for 1 random readers = 1554846.58 kB/sec
    Children see throughput for 1 random writers = 984428.69 kB/sec
    Parent sees throughput for 1 random writers = 983476.67 kB/sec
    
  2. Test disk speed with memory-mapped files:

    iozone -B -G -e -t1 -i0 -i2 -r1k -s1g /tmp
    

    Reference numbers (on Latitude c3.large.x86):

    Children see throughput for 1 initial writers = 56471.06 kB/sec
    Parent sees throughput for 1 initial writers = 56365.14 kB/sec
    Children see throughput for 1 rewriters = 241650.69 kB/sec
    Parent sees throughput for 1 rewriters = 239067.96 kB/sec
    Children see throughput for 1 random readers = 6833161.00 kB/sec
    Parent sees throughput for 1 random readers = 5597659.65 kB/sec
    Children see throughput for 1 random writers = 220248.53 kB/sec
    Parent sees throughput for 1 random writers = 219112.26 kB/sec
    

RAM Speed and Health Testing

  1. Check RAM speed with lshw:

    sudo lshw -short -C memory
    

    Look for the frequency in the output. Reference output:

    H/W path              Device          Class          Description
    ================================================================
    /0/24/0                               memory         64GiB DIMM DDR4 Synchronous Registered (Buffered) 3200 MHz (0.3 ns)
    /0/24/1                               memory         64GiB DIMM DDR4 Synchronous Registered (Buffered) 3200 MHz (0.3 ns)
    ...
    
  2. Test RAM health with memtester:

    sudo memtester 10G
    

    This will take a while. You can test with a smaller amount first:

    sudo memtester 1G 1
    

    All checks should report "ok".

JSON-RPC

You can interact with Reth over JSON-RPC. Reth supports all standard Ethereum JSON-RPC API methods.

JSON-RPC is provided on multiple transports. Reth supports HTTP, WebSocket and IPC (both UNIX sockets and Windows named pipes). Transports must be enabled through command-line flags.

The JSON-RPC APIs are grouped into namespaces, depending on their purpose. All method names are composed of their namespace and their name, separated by an underscore.

Each namespace must be explicitly enabled.

Namespaces

The methods are grouped into namespaces, which are listed below:

  • eth: The eth API allows you to interact with Ethereum. Sensitive: Maybe
  • web3: The web3 API provides utility functions for the web3 client. Sensitive: No
  • net: The net API provides access to network information of the node. Sensitive: No
  • txpool: The txpool API allows you to inspect the transaction pool. Sensitive: No
  • debug: The debug API provides several methods to inspect the Ethereum state, including Geth-style traces. Sensitive: No
  • trace: The trace API provides several methods to inspect the Ethereum state, including Parity-style traces. Sensitive: No
  • admin: The admin API allows you to configure your node. Sensitive: Yes
  • rpc: The rpc API provides information about the RPC server and its modules. Sensitive: No

Note that some APIs are sensitive, since they can be used to configure your node (admin), or access accounts stored on the node (eth).

Generally, it is advisable not to expose any JSON-RPC namespace publicly, unless you know what you are doing.

Transports

Reth supports HTTP, WebSockets and IPC.

HTTP

Using the HTTP transport, clients send a request to the server and immediately get a response back. The connection is closed after the response for a given request is sent.

Because the connection is closed after each response, subscriptions are not supported over HTTP.

To start an HTTP server, pass --http to reth node:

reth node --http

The default port is 8545, and the default listen address is localhost.

You can configure the listen address and port using --http.addr and --http.port respectively:

reth node --http --http.addr 127.0.0.1 --http.port 12345

To enable JSON-RPC namespaces on the HTTP server, pass each namespace separated by a comma to --http.api:

reth node --http --http.api eth,net,trace

You can pass the all option, which is a convenient wrapper for all the JSON-RPC namespaces (admin,debug,eth,net,trace,txpool,web3,rpc) on the HTTP server:

reth node --http --http.api all
reth node --http --http.api All

You can also restrict who can access the HTTP server by specifying a domain for Cross-Origin requests. This is important, since any application local to your node will be able to access the RPC server:

reth node --http --http.corsdomain https://mycoolapp.rs

Alternatively, if you want to allow any domain, you can pass *:

reth node --http --http.corsdomain "*"

WebSockets

WebSockets is a bidirectional transport protocol. Most modern browsers support WebSockets.

A WebSocket connection is maintained until it is explicitly terminated by either the client or the node.

Because WebSockets are bidirectional, nodes can push events to clients, which enables clients to subscribe to specific events, such as new transactions in the transaction pool, and new logs for smart contracts.

The configuration of the WebSocket server follows the same pattern as the HTTP server:

  • Enable it using --ws
  • Configure the server address by passing --ws.addr and --ws.port (default 8546)
  • Configure cross-origin requests using --ws.origins
  • Enable APIs using --ws.api
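
As an illustration of subscriptions, here is a sketch using wscat, a generic WebSocket client installable via npm (any WebSocket-capable client works), assuming the node was started with the eth API enabled on the default WebSocket port:

# Start the node with the WebSocket server and the eth API enabled
reth node --ws --ws.api eth

# In another terminal, open a connection and paste the subscription request at the prompt
wscat -c ws://127.0.0.1:8546
# > {"jsonrpc":"2.0","id":1,"method":"eth_subscribe","params":["newHeads"]}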

IPC

IPC is a simpler transport protocol for use in local environments where the node and the client exist on the same machine.

The IPC transport is enabled by default and has access to all namespaces, unless explicitly disabled with --ipcdisable.

Reth creates a UNIX socket on Linux and macOS at /tmp/reth.ipc. On Windows, IPC is provided using named pipes at \\.\pipe\reth.ipc.

You can configure the IPC path using --ipcpath.
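
For a quick smoke test of the IPC endpoint, one option (a sketch, not the only way) is a netcat build with UNIX-socket support, i.e. the -U flag of the OpenBSD variant; any IPC-capable JSON-RPC client works equally well:

echo '{"jsonrpc":"2.0","id":1,"method":"web3_clientVersion","params":[]}' | nc -U /tmp/reth.ipc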

Interacting with the RPC

You can interact with these APIs just as you would with any other Ethereum client.

You can use curl, a programming language with a low-level library, or a tool like Foundry to interact with the chain at the exposed HTTP or WS port.

As a reminder, you need to run the command below to enable all of these APIs using an HTTP transport:

reth node --http --http.api "admin,debug,eth,net,trace,txpool,web3,rpc"

This allows you to then call:

cast block-number
cast rpc admin_nodeInfo
cast rpc debug_traceTransaction
cast rpc trace_replayBlockTransactions
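
If you prefer raw HTTP over a tool like Foundry, the same namespaces can be queried with curl. For example, fetching the latest block number (assuming the HTTP server from the command above, on the default port 8545):

curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}'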

eth Namespace

Documentation for the API methods in the eth namespace can be found on ethereum.org.

web3 Namespace

The web3 API provides utility functions for the web3 client.

web3_clientVersion

Get the web3 client version.

Method invocation (RPC): {"method": "web3_clientVersion"}

Example

// > {"jsonrpc":"2.0","id":1,"method":"web3_clientVersion","params":[]}
{"jsonrpc":"2.0","id":1,"result":"reth/v0.0.1/x86_64-unknown-linux-gnu"}

web3_sha3

Get the Keccak-256 hash of the given data.

Method invocation (RPC): {"method": "web3_sha3", "params": [bytes]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"web3_sha3","params":["rust is awesome"]}
{"jsonrpc":"2.0","id":1,"result":"0xe421b3428564a5c509ac118bad93a3b84485ec3f927e214b0c4c23076d4bc4e0"}

net Namespace

The net API provides information about the networking component of the node.

net_listening

Returns a bool indicating whether or not the node is listening for network connections.

Method invocation (RPC): {"method": "net_listening", "params": []}

Example

// > {"jsonrpc":"2.0","id":1,"method":"net_listening","params":[]}
{"jsonrpc":"2.0","id":1,"result":true}

net_peerCount

Returns the number of peers connected to the node.

Method invocation (RPC): {"method": "net_peerCount", "params": []}

Example

// > {"jsonrpc":"2.0","id":1,"method":"net_peerCount","params":[]}
{"jsonrpc":"2.0","id":1,"result":10}

net_version

Returns the network ID (e.g. 1 for mainnet)

Method invocation (RPC): {"method": "net_version", "params": []}

Example

// > {"jsonrpc":"2.0","id":1,"method":"net_version","params":[]}
{"jsonrpc":"2.0","id":1,"result":1}

txpool Namespace

The txpool API allows you to inspect the transaction pool.

txpool_content

Returns the details of all transactions currently pending for inclusion in the next block(s), as well as the ones that are being scheduled for future execution only.

See here for more details

Method invocation (RPC): {"method": "txpool_content", "params": []}

txpool_contentFrom

Retrieves the transactions contained within the txpool, returning pending as well as queued transactions of this address, grouped by nonce.

See here for more details

Method invocation (RPC): {"method": "txpool_contentFrom", "params": [address]}

txpool_inspect

Returns a summary of all the transactions currently pending for inclusion in the next block(s), as well as the ones that are being scheduled for future execution only.

See here for more details

Method invocation (RPC): {"method": "txpool_inspect", "params": []}

txpool_status

Returns the number of transactions currently pending for inclusion in the next block(s), as well as the ones that are being scheduled for future execution only.

See here for more details

Method invocation (RPC): {"method": "txpool_status", "params": []}

debug Namespace

The debug API provides several methods to inspect the Ethereum state, including Geth-style traces.

debug_getRawHeader

Returns an RLP-encoded header.

Method invocation (RPC): {"method": "debug_getRawHeader", "params": [block]}

debug_getRawBlock

Retrieves and returns the RLP encoded block by number, hash or tag.

Method invocation (RPC): {"method": "debug_getRawBlock", "params": [block]}

debug_getRawTransaction

Returns an EIP-2718 binary-encoded transaction.

Method invocation (RPC): {"method": "debug_getRawTransaction", "params": [tx_hash]}

debug_getRawReceipts

Returns an array of EIP-2718 binary-encoded receipts.

Method invocation (RPC): {"method": "debug_getRawReceipts", "params": [block]}

debug_getBadBlocks

Returns an array of recent bad blocks that the client has seen on the network.

Method invocation (RPC): {"method": "debug_getBadBlocks", "params": []}

debug_traceChain

Returns the structured logs created during the execution of the EVM between two blocks (excluding the start block) as a JSON object.

Method invocation (RPC): {"method": "debug_traceChain", "params": [start_block, end_block]}

debug_traceBlock

The debug_traceBlock method will return a full stack trace of all invoked opcodes of all transactions that were included in this block.

This expects an RLP-encoded block.

Note

The parent of this block must be present, or it will fail.

Method invocation (RPC): {"method": "debug_traceBlock", "params": [rlp, opts]}

debug_traceBlockByHash

Similar to debug_traceBlock, debug_traceBlockByHash accepts a block hash and will replay the block that is already present in the database.

Method invocation (RPC): {"method": "debug_traceBlockByHash", "params": [block_hash, opts]}

debug_traceBlockByNumber

Similar to debug_traceBlockByHash, debug_traceBlockByNumber accepts a block number and will replay the block that is already present in the database.

Method invocation (RPC): {"method": "debug_traceBlockByNumber", "params": [block_number, opts]}

debug_traceTransaction

The debug_traceTransaction debugging method will attempt to run the transaction in the exact same manner as it was executed on the network. It will replay any transaction that may have been executed prior to this one before it will finally attempt to execute the transaction that corresponds to the given hash.

Method invocation (RPC): {"method": "debug_traceTransaction", "params": [tx_hash, opts]}

debug_traceCall

The debug_traceCall method lets you run an eth_call within the context of the given block execution, using the final state of the parent block as the base.

The first argument (just as in eth_call) is a transaction request.

The block can optionally be specified either by hash or by number as the second argument.

Method invocation (RPC): {"method": "debug_traceCall", "params": [call, block_number, opts]}

trace Namespace

The trace API provides several methods to inspect the Ethereum state, including Parity-style traces.

A similar module exists (with other debug functions) with Geth-style traces (debug).

The trace API gives deeper insight into transaction processing.

There are two types of methods in this API:

  • Ad-hoc tracing APIs for performing diagnostics on calls or transactions (historical or hypothetical).
  • Transaction-trace filtering APIs for getting full externality traces on any transaction executed by reth.

Ad-hoc tracing APIs

Ad-hoc tracing APIs allow you to perform diagnostics on calls or transactions (historical or hypothetical), including:

  • Transaction traces (trace)
  • VM traces (vmTrace)
  • State difference traces (stateDiff)

The ad-hoc tracing APIs are:

  • trace_call
  • trace_callMany
  • trace_rawTransaction
  • trace_replayBlockTransactions
  • trace_replayTransaction

Transaction-trace filtering APIs

Transaction trace filtering APIs are similar to log filtering APIs in the eth namespace, except these allow you to search and filter based only upon address information.

Information returned includes the execution of all contract creations, destructions, and calls, together with their input data, output data, gas usage, transfer amounts and success statuses.

The transaction trace filtering APIs are:

  • trace_block
  • trace_filter
  • trace_get
  • trace_transaction

trace_call

Executes the given call and returns a number of possible traces for it.

The first parameter is a transaction object where the from field is optional and the nonce field is omitted.

The second parameter is an array of one or more trace types (vmTrace, trace, stateDiff).

The third and optional parameter is a block number, block hash, or a block tag (latest, finalized, safe, earliest, pending).

Method invocation (RPC): {"method": "trace_call", "params": [tx, type[], block]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_call","params":[{},["trace"]]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": {
        "output": "0x",
        "stateDiff": null,
        "trace": [{
            "action": { ... },
            "result": {
                "gasUsed": "0x0",
                "output": "0x"
            },
            "subtraces": 0,
            "traceAddress": [],
            "type": "call"
        }],
        "vmTrace": null
    }
}

trace_callMany

Performs multiple call traces on top of the same block, that is, transaction n will be executed on top of a pending block with all n - 1 transactions applied (and traced) first.

The first parameter is a list of call traces, where each call trace is of the form [tx, type[]] (see trace_call).

The second and optional parameter is a block number, block hash, or a block tag (latest, finalized, safe, earliest, pending).

Method invocation (RPC): {"method": "trace_callMany", "params": [trace[], block]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_callMany","params":[[[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["trace"]],[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["trace"]]],"latest"]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": [
        {
            "output": "0x",
            "stateDiff": null,
            "trace": [{
                "action": {
                    "callType": "call",
                    "from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
                    "gas": "0x1dcd12f8",
                    "input": "0x",
                    "to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
                    "value": "0x186a0"
                },
                "result": {
                    "gasUsed": "0x0",
                    "output": "0x"
                },
                "subtraces": 0,
                "traceAddress": [],
                "type": "call"
            }],
            "vmTrace": null
        },
        {
            "output": "0x",
            "stateDiff": null,
            "trace": [{
                "action": {
                    "callType": "call",
                    "from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
                    "gas": "0x1dcd12f8",
                    "input": "0x",
                    "to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
                    "value": "0x186a0"
                },
                "result": {
                    "gasUsed": "0x0",
                    "output": "0x"
                },
                "subtraces": 0,
                "traceAddress": [],
                "type": "call"
            }],
            "vmTrace": null
        }
    ]
}

trace_rawTransaction

Traces a call to eth_sendRawTransaction without making the call, returning the traces.

Method invocation (RPC): {"method": "trace_rawTransaction", "params": [raw_tx, type[]]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_rawTransaction","params":["0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675",["trace"]]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": {
        "output": "0x",
            "stateDiff": null,
            "trace": [{
            "action": { ... },
            "result": {
                "gasUsed": "0x0",
                "output": "0x"
            },
            "subtraces": 0,
            "traceAddress": [],
            "type": "call"
        }],
            "vmTrace": null
    }
}

trace_replayBlockTransactions

Replays all transactions in a block returning the requested traces for each transaction.

Method invocation (RPC): {"method": "trace_replayBlockTransactions", "params": [block, type[]]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_replayBlockTransactions","params":["0x2ed119",["trace"]]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": [
        {
            "output": "0x",
            "stateDiff": null,
            "trace": [{
                "action": { ... },
                "result": {
                    "gasUsed": "0x0",
                    "output": "0x"
                },
                "subtraces": 0,
                "traceAddress": [],
                "type": "call"
            }],
            "transactionHash": "0x...",
            "vmTrace": null
        },
        { ... }
    ]
}

trace_replayTransaction

Replays a transaction, returning the traces.

Method invocation (RPC): {"method": "trace_replayTransaction", "params": [tx_hash, type[]]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_replayTransaction","params":["0x02d4a872e096445e80d05276ee756cefef7f3b376bcec14246469c0cd97dad8f",["trace"]]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": {
        "output": "0x",
        "stateDiff": null,
        "trace": [{
            "action": { ... },
            "result": {
                "gasUsed": "0x0",
                "output": "0x"
            },
            "subtraces": 0,
            "traceAddress": [],
            "type": "call"
        }],
        "vmTrace": null
    }
}

trace_block

Returns traces created at given block.

Method invocation (RPC): {"method": "trace_block", "params": [block]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_block","params":["0x2ed119"]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": [
        {
            "action": {
                "callType": "call",
                "from": "0xaa7b131dc60b80d3cf5e59b5a21a666aa039c951",
                "gas": "0x0",
                "input": "0x",
                "to": "0xd40aba8166a212d6892125f079c33e6f5ca19814",
                "value": "0x4768d7effc3fbe"
            },
            "blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
            "blockNumber": 3068185,
            "result": {
                "gasUsed": "0x0",
                "output": "0x"
            },
            "subtraces": 0,
            "traceAddress": [],
            "transactionHash": "0x07da28d752aba3b9dd7060005e554719c6205c8a3aea358599fc9b245c52f1f6",
            "transactionPosition": 0,
            "type": "call"
        },
        ...
    ]
}

trace_filter

Returns traces matching given filter.

Filters are objects with the following properties:

  • fromBlock: Returns traces from the given block (a number, hash, or a tag like latest).
  • toBlock: Returns traces to the given block.
  • fromAddress: Sent from these addresses
  • toAddress: Sent to these addresses
  • after: The offset trace number
  • count: The number of traces to display in a batch

All properties are optional.

Method invocation (RPC): {"method": "trace_filter", "params": [filter]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_filter","params":[{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbB73BCB5d553B5A556358d27625323Fd781D37"],"after":1000,"count":100}]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": [
        {
            "action": {
                "callType": "call",
                "from": "0x32be343b94f860124dc4fee278fdcbd38c102d88",
                "gas": "0x4c40d",
                "input": "0x",
                "to": "0x8bbb73bcb5d553b5a556358d27625323fd781d37",
                "value": "0x3f0650ec47fd240000"
            },
            "blockHash": "0x86df301bcdd8248d982dbf039f09faf792684e1aeee99d5b58b77d620008b80f",
            "blockNumber": 3068183,
            "result": {
                "gasUsed": "0x0",
                "output": "0x"
            },
            "subtraces": 0,
            "traceAddress": [],
            "transactionHash": "0x3321a7708b1083130bd78da0d62ead9f6683033231617c9d268e2c7e3fa6c104",
            "transactionPosition": 3,
            "type": "call"
        },
        ...
    ]
}

trace_get

Returns trace at given position.

Method invocation (RPC): {"method": "trace_get", "params": [tx_hash, indices[]]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_get","params":["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",["0x0"]]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": {
        "action": {
            "callType": "call",
            "from": "0x1c39ba39e4735cb65978d4db400ddd70a72dc750",
            "gas": "0x13e99",
            "input": "0x16c72721",
            "to": "0x2bd2326c993dfaef84f696526064ff22eba5b362",
            "value": "0x0"
        },
        "blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
            "blockNumber": 3068185,
            "result": {
            "gasUsed": "0x183",
            "output": "0x0000000000000000000000000000000000000000000000000000000000000001"
        },
        "subtraces": 0,
            "traceAddress": [
            0
        ],
        "transactionHash": "0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",
        "transactionPosition": 2,
        "type": "call"
    }
}

trace_transaction

Returns all traces of given transaction

Method invocation (RPC): {"method": "trace_transaction", "params": [tx_hash]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"trace_transaction","params":["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"]}
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": [
        {
            "action": {
                "callType": "call",
                "from": "0x1c39ba39e4735cb65978d4db400ddd70a72dc750",
                "gas": "0x13e99",
                "input": "0x16c72721",
                "to": "0x2bd2326c993dfaef84f696526064ff22eba5b362",
                "value": "0x0"
            },
            "blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
            "blockNumber": 3068185,
            "result": {
                "gasUsed": "0x183",
                "output": "0x0000000000000000000000000000000000000000000000000000000000000001"
            },
            "subtraces": 0,
            "traceAddress": [
                0
            ],
            "transactionHash": "0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",
            "transactionPosition": 2,
            "type": "call"
        },
        ...
    ]
}

admin Namespace

The admin API allows you to configure your node, including adding and removing peers.

Note

As this namespace can configure your node at runtime, it is generally not advised to expose it publicly.

admin_addPeer

Add the given peer to the current peer set of the node.

The method accepts a single argument, the enode URL of the remote peer to connect to, and returns a bool indicating whether the peer was accepted or not.

Method invocation (RPC): {"method": "admin_addPeer", "params": [url]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_addPeer","params":["enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303"]}
{"jsonrpc":"2.0","id":1,"result":true}

admin_removePeer

Disconnects from a peer if the connection exists. Returns a bool indicating whether the peer was successfully removed or not.

Method invocation (RPC): {"method": "admin_removePeer", "params": [url]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_removePeer","params":["enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303"]}
{"jsonrpc":"2.0","id":1,"result":true}

admin_addTrustedPeer

Adds the given peer to a list of trusted peers, which allows the peer to always connect, even if there would be no room for it otherwise.

It returns a bool indicating whether the peer was added to the list or not.

Method invocation (RPC): {"method": "admin_addTrustedPeer", "params": [url]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_addTrustedPeer","params":["enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303"]}
{"jsonrpc":"2.0","id":1,"result":true}

admin_removeTrustedPeer

Removes a remote node from the trusted peer set, but it does not disconnect it automatically.

Returns true if the peer was successfully removed.

Method invocation (RPC): {"method": "admin_removeTrustedPeer", "params": [url]}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_removeTrustedPeer","params":["enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303"]}
{"jsonrpc":"2.0","id":1,"result":true}

admin_nodeInfo

Returns all information known about the running node.

These include general information about the node itself, as well as what protocols it participates in, its IP and ports.

Method invocation (RPC): {"method": "admin_nodeInfo"}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_nodeInfo","params":[]}
{
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "enode": "enode://44826a5d6a55f88a18298bca4773fca5749cdc3a5c9f308aa7d810e9b31123f3e7c5fba0b1d70aac5308426f47df2a128a6747040a3815cc7dd7167d03be320d@[::]:30303",
            "id": "44826a5d6a55f88a18298bca4773fca5749cdc3a5c9f308aa7d810e9b31123f3e7c5fba0b1d70aac5308426f47df2a128a6747040a3815cc7dd7167d03be320d",
            "ip": "::",
            "listenAddr": "[::]:30303",
            "name": "reth/v0.0.1/x86_64-unknown-linux-gnu",
            "ports": {
                "discovery": 30303,
                "listener": 30303
        },
        "protocols": {
            "eth": {
                "difficulty": 17334254859343145000,
                "genesis": "0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3",
                "head": "0xb83f73fbe6220c111136aefd27b160bf4a34085c65ba89f24246b3162257c36a",
                "network": 1
            }
        }
    }
}

admin_peerEvents, admin_peerEvents_unsubscribe

Subscribe to events received by peers over the network.

Like other subscription methods, this returns the ID of the subscription, which is then included in all subsequent events.

To unsubscribe from peer events, call admin_peerEvents_unsubscribe.

Method invocation (RPC): {"method": "admin_peerEvents"}

Example

// > {"jsonrpc":"2.0","id":1,"method":"admin_peerEvents","params":[]}
// responds with subscription ID
{"jsonrpc": "2.0", "id": 1, "result": "0xcd0c3e8af590364c09d0fa6a1210faf5"}

rpc Namespace

The rpc API provides methods to get information about the RPC server itself, such as the enabled namespaces.

rpc_modules

Lists the enabled RPC namespaces and the versions of each.

Method invocation (RPC): {"method": "rpc_modules", "params": []}

Example

// > {"jsonrpc":"2.0","id":1,"method":"rpc_modules","params":[]}
{"jsonrpc":"2.0","id":1,"result":{"txpool":"1.0","eth":"1.0","rpc":"1.0"}}

Handling Responses During Syncing

When interacting with the RPC server while it is still syncing, some RPC requests may return an empty or null response, while others return the expected results. This behavior can be observed due to the asynchronous nature of the syncing process and the availability of required data. Notably, endpoints that rely on specific stages of the syncing process, such as the execution stage, might not be available until those stages are complete.

It's important to understand that during pipeline sync, some endpoints may not be accessible until the necessary data is fully synchronized. For instance, the eth_getBlockReceipts endpoint is only expected to return valid data after the execution stage, where receipts are generated, has completed. As a result, certain RPC requests may return empty or null responses until the respective stages are finished.

This behavior is intrinsic to how the syncing mechanism works and is not indicative of an issue or bug. If you encounter such responses while the node is still syncing, it's recommended to wait until the sync process is complete to ensure accurate and expected RPC responses.
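
One simple way to check whether the node is still syncing before relying on other endpoints is eth_syncing (assuming the HTTP server is enabled on the default port):

curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}'
# Returns false once the node is fully synced, otherwise an object describing sync progress.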

reth

Reth

$ reth --help
Usage: reth [OPTIONS] <COMMAND>

Commands:
  node          Start the node
  init          Initialize the database from a genesis file
  init-state    Initialize the database from a state dump file
  import        This syncs RLP encoded blocks from a file
  dump-genesis  Dumps genesis block JSON configuration to stdout
  db            Database debugging utilities
  stage         Manipulate individual stages
  p2p           P2P Debugging utilities
  config        Write config to stdout
  debug         Various debug routines
  recover       Scripts for node recovery
  prune         Prune according to the configuration without any limits
  help          Print this message or the help of the given subcommand(s)

Options:
      --chain <CHAIN_OR_PATH>
          The chain this node is running.
          Possible values are either a built-in chain or the path to a chain specification file.

          Built-in chains:
              mainnet, sepolia, holesky, dev

          [default: mainnet]

      --instance <INSTANCE>
          Add a new instance of a node.

          Configures the ports of the node to avoid conflicts with the defaults. This is useful for running multiple nodes on the same machine.

          Max number of instances is 200. It is chosen in a way so that it's not possible to have port numbers that conflict with each other.

          Changes to the following port numbers:
          - `DISCOVERY_PORT`: default + `instance` - 1
          - `AUTH_PORT`: default + `instance` * 100 - 100
          - `HTTP_RPC_PORT`: default - `instance` + 1
          - `WS_RPC_PORT`: default + `instance` * 2 - 2

          [default: 1]

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version

Logging:
      --log.stdout.format <FORMAT>
          The format to use for logs written to stdout

          [default: terminal]

          Possible values:
          - json:     Represents JSON formatting for logs. This format outputs log records as JSON objects, making it suitable for structured logging
          - log-fmt:  Represents logfmt (key=value) formatting for logs. This format is concise and human-readable, typically used in command-line applications
          - terminal: Represents terminal-friendly formatting for logs

      --log.stdout.filter <FILTER>
          The filter to use for logs written to stdout

          [default: ]

      --log.file.format <FORMAT>
          The format to use for logs written to the log file

          [default: terminal]

          Possible values:
          - json:     Represents JSON formatting for logs. This format outputs log records as JSON objects, making it suitable for structured logging
          - log-fmt:  Represents logfmt (key=value) formatting for logs. This format is concise and human-readable, typically used in command-line applications
          - terminal: Represents terminal-friendly formatting for logs

      --log.file.filter <FILTER>
          The filter to use for logs written to the log file

          [default: debug]

      --log.file.directory <PATH>
          The path to put log files in

          [default: <CACHE_DIR>/logs]

      --log.file.max-size <SIZE>
          The maximum size (in MB) of one log file

          [default: 200]

      --log.file.max-files <COUNT>
          The maximum amount of log files that will be stored. If set to 0, background file logging is disabled

          [default: 5]

      --log.journald
          Write logs to journald

      --log.journald.filter <FILTER>
          The filter to use for logs written to journald

          [default: error]

      --color <COLOR>
          Sets whether or not the formatter emits ANSI terminal escape codes for colors and other text formatting

          [default: always]

          Possible values:
          - always: Colors on
          - auto:   Colors on
          - never:  Colors off

Display:
  -v, --verbosity...
          Set the minimum log level.

          -v      Errors
          -vv     Warnings
          -vvv    Info
          -vvvv   Debug
          -vvvvv  Traces (warning: very verbose!)

  -q, --quiet
          Silence all log output
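
To make the --instance port arithmetic above concrete, here is a worked sketch for a second node on the same machine, using the defaults mentioned elsewhere in this book (HTTP 8545, WS 8546, discovery 30303) and the usual auth port default of 8551, which is not shown in this help excerpt:

reth node --instance 2
# DISCOVERY_PORT: 30303 + 2 - 1        = 30304
# AUTH_PORT:      8551 + 2 * 100 - 100 = 8651
# HTTP_RPC_PORT:  8545 - 2 + 1         = 8544
# WS_RPC_PORT:    8546 + 2 * 2 - 2     = 8548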