diff --git a/docs/alpha/smart_rollups.rst b/docs/alpha/smart_rollups.rst
index b8b1039c3e17d0d420f0870253b550d6b1a442e3..4fd2bd8bfe60c8e437a22c1cc7712e229d638beb 100644
--- a/docs/alpha/smart_rollups.rst
+++ b/docs/alpha/smart_rollups.rst
@@ -525,7 +525,7 @@ processes the inbox of each level.
 
 Notice that distinct Layer 1 addresses could be used for the Layer 1
 operations issued by the rollup node simply by editing the
-configuration file to set different addresses for ``publish``,
+:ref:`configuration file <rollup_node_config_file_alpha>` to set different addresses for ``publish``,
 ``add_messages``, ``cement``, and ``refute``.
 
 In addition, a rollup node can run under different modes:
 
@@ -577,10 +577,12 @@ operations which are injected by the rollup node in each mode.
 .. [*] An accuser node will publish commitments only when it detects
        conflicts; for such cases it must make a deposit of 10,000 tez.
 
+.. _rollup_node_config_file_alpha:
+
 Configuration file
 """"""""""""""""""
 
-The rollup node can also be configured with the following command that
+The rollup node can also be configured via a configuration file stored in its own data directory, with the following command that
 uses the same arguments as the ``run`` command:
 
 .. code:: sh
@@ -590,7 +592,9 @@ uses the same arguments as the ``run`` command:
      with operators "${OPERATOR_ADDR}" \
      --data-dir "${ROLLUP_NODE_DIR}"
 
-This creates a configuration file:
+where ``${OCLIENT_DIR}`` must be the client's data directory, containing all the keys used by the rollup node, i.e. ``${OPERATOR_ADDR}``.
+
+This creates a smart rollup node configuration file:
 
 ::
diff --git a/docs/introduction/howtorun.rst b/docs/introduction/howtorun.rst
index 80157df7019f0cfc65048bf310332e4cff079a18..1a9635deb3ac0b8aea9b1a18acef45823ae43628 100644
--- a/docs/introduction/howtorun.rst
+++ b/docs/introduction/howtorun.rst
@@ -196,6 +196,8 @@ However, it is safe (and actually necessary) to temporarily run two bakers just
 It is possible to bake and attest using a dedicated :ref:`consensus_key`
 instead of the delegate's key.
 
+The baker uses the same configuration file format as the client (see :ref:`client_conf_file`).
+
 Accuser
 ~~~~~~~
 
@@ -214,6 +216,7 @@ cause the offender to be :ref:`slashed`, that is, to lose part of its
 
    octez-accuser-alpha run
 
+The accuser uses the same configuration file format as the client (see :ref:`client_conf_file`).
 
 Docker
 ~~~~~~
diff --git a/docs/introduction/howtouse.rst b/docs/introduction/howtouse.rst
index 7a5f887023482ceb260b0b31e1dc3e10003295a4..af5db2c85e781fdb05ac96b214e0c75e7b689ea7 100644
--- a/docs/introduction/howtouse.rst
+++ b/docs/introduction/howtouse.rst
@@ -271,6 +271,8 @@ protocol run by the node. For instance, ``get timestamp`` isn't available when
 the node runs the genesis protocol, which may happen for a few minutes when
 launching a node for the first time.
 
+The behavior of the client can be customized using various mechanisms, including command-line options, a configuration file, and environment variables. For details, refer to :doc:`../user/setup-client`.
+
 A Simple Wallet
 ~~~~~~~~~~~~~~~
 
@@ -622,25 +624,6 @@ cycle as many delegates receive back part of their unfrozen accounts.
 
 You can find more info in the :doc:`RPCs' page <../active/rpc>`.
 
-Environment variables for the client -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The behavior of the client can be configured using the following environment variables: - -- ``TEZOS_CLIENT_UNSAFE_DISABLE_DISCLAIMER``: Setting this variable to "YES" (or: "yes", "Y", "y") disables the warning displayed by the client at startup when it is not launched on Mainnet. -- ``TEZOS_CLIENT_DIR``: This variable may be used to supply the client data directory (by default, ``~/.tezos-client``). - Its value is overridden by option ``-d``. -- ``TEZOS_SIGNER_*``: These variables are used for connecting the client to a remote :ref:`signer ` (see there for details). -- ``TEZOS_CLIENT_RPC_TIMEOUT_SECONDS``: This variable controls how long (in seconds, as an integer) - the client will wait for a response from the node, for each of the two RPC calls made during startup. - If this variable is not set, or otherwise cannot be parsed as a positive integer, a default value of ``10`` seconds is used for each call. - The two RPC calls this variable affects are queries that the client makes to the node in order to determine: - (1) the protocol version of the node it connects to, and (2) the commands supported in that version. -- ``TEZOS_CLIENT_REMOTE_OPERATIONS_POOL_HTTP_HEADERS``: This variable specifies - custom HTTP headers to use with the ``--operations-pool`` option. Only the Host - header is supported as of now (see description in `rfc2616, section 14.23 - `_ - Other binaries -------------- diff --git a/docs/shell/data_availability_committees.rst b/docs/shell/data_availability_committees.rst index 7c36bd6b6a27d2a7c21cb382bbef7677cab2f7ba..3b42bf442b63ced893021f9a5139ebe8266933ca 100644 --- a/docs/shell/data_availability_committees.rst +++ b/docs/shell/data_availability_committees.rst @@ -3,7 +3,7 @@ Data Availability Committees Overview ^^^^^^^^ A Data Availability Committee (DAC) is a solution to scale the data bandwidth available for off-chain applications running in :doc:`Tezos smart rollups <../active/smart_rollups>`. -It relies on a distributed network of data storage providers, subject to a slight trust assumption. +It relies on a distributed network of data storage providers, subject to a slight trust assumption. By utilizing a DAC, smart rollups bypass the data limit imposed by the Tezos block and can increase the amount of transaction data available for processing beyond that limit. In addition to data scalability, DACs also serve as a data source for smart rollups that satisfies the following properties: @@ -12,39 +12,39 @@ In addition to data scalability, DACs also serve as a data source for smart roll - **Availability**: Any data addressed to smart rollups are available upon request. -A DAC consists of a group of Committee Members (known also as "DAC Committee") who commit to storing copies of input data and providing the data upon request. -Each DAC Member provides their signature (of the root hash of the Merkle Tree representation of the data) as an attestation of this commitment. -The DAC Members' signatures are aggregated into a `DAC Certificate`_ which clients can use to verify the number of signers and the integrity of data, and to request underlying pages of the root hash. +A DAC consists of a group of Committee Members (known also as "DAC Committee") who commit to storing copies of input data and providing the data upon request. +Each DAC Member provides their signature (of the root hash of the Merkle Tree representation of the data) as an attestation of this commitment. 
+The DAC Members' signatures are aggregated into a `DAC Certificate`_ which clients can use to verify the number of signers and the integrity of data, and to request underlying pages of the root hash. However, the trust assumption here is that DAC Members who commit to making data available will fulfill their commitment. DAC and Smart Rollups --------------------- -Smart rollups expose a native mechanism called the :ref:`reveal data channel ` for kernels to import data. -The reveal data channel allows data to be requested from a fixed location in the rollup node's local storage, and satisfies the integrity property above. +Smart rollups expose a native mechanism called the :ref:`reveal data channel ` for kernels to import data. +The reveal data channel allows data to be requested from a fixed location in the rollup node's local storage, and satisfies the integrity property above. Scalability is achieved by the kernel's ability to request an unlimited amount of data. -However, the reveal data channel lacks the assurance that the rollup node will have the data available in its local storage. +However, the reveal data channel lacks the assurance that the rollup node will have the data available in its local storage. By integrating the DAC infrastructure with the rollup node, the necessary data is guaranteed to be available, complementing the limitations of the reveal data channel. It is important to note that the DAC infrastructure is external to the Tezos protocol, and that the Tezos Layer 1 is unaware of its existence. -Smart rollup nodes must be configured to utilize the DAC infrastructure to take full advantage of its capabilities. +Smart rollup nodes must be configured to utilize the DAC infrastructure to take full advantage of its capabilities. For more information, please refer to the `User Guide`_ and `Operator Guide`_. Tools ----- The DAC infrastructure is implemented by two executables: ``octez-dac-node`` and ``octez-dac-client``. - * ``octez-dac-node`` is used for setting up a new DAC Committee or track an existing one. + * ``octez-dac-node`` is used for setting up a new DAC Committee or track an existing one. * ``octez-dac-client`` is used for sending payloads to the DAC infrastructure for storage and for retrieving certificates signed by the DAC Members. There is support for DAC in the Rust `Smart Rollup Kernel SDK `_ for revealing the underlying data of a DAC Certificate and verifying DAC Member signatures. DAC Certificate ^^^^^^^^^^^^^^^ -The DAC Certificate is a key artifact in the DAC workflow. +The DAC Certificate is a key artifact in the DAC workflow. It represents the commitment of DAC Members to provide the underlying data upon request and is used to verify data integrity and signature validity. It is composed of 4 attributes: - + * **Version** - The version of the DAC Certificate schema. * **Root hash** - The Merkle tree root hash of the payload. * **Aggregate signature** - The aggregate of the DAC Member signatures as proof of their commitment to provide the data. @@ -60,9 +60,9 @@ The diagram below illustrates the workflow for sending data to a kernel via the :alt: DAC workflow #. The data provider sends a payload to the DAC infrastructure (1a) and waits for a certificate with a sufficient number of signatures (1b). The threshold number of signatures must match what the kernel expects. -#. 
The data provider then posts the certificate (approximately 140 bytes) to the rollup inbox as a Layer 1 external message (2a) which will eventually be downloaded by the rollup node (2b). +#. The data provider then posts the certificate (approximately 140 bytes) to the rollup inbox as a Layer 1 external message (2a) which will eventually be downloaded by the rollup node (2b). #. The kernel imports the certificate from the rollup inbox and verifies that it contains a sufficient number of valid DAC Member signatures. It is the responsibility of the kernel to define the minimum number of signatures required for a certificate to be considered valid. -#. If the certificate is deemed valid, the kernel will request to import the pages of the original payload via the rollup node. +#. If the certificate is deemed valid, the kernel will request to import the pages of the original payload via the rollup node. The rollup node downloads those pages from the DAC infrastructure (4a) before importing them into the kernel (4b). DAC Infrastructure @@ -72,33 +72,33 @@ DAC Infrastructure :height: 550 :alt: DAC Infrastructure -The DAC infrastructure consists of inter-connected DAC nodes operating in one of three modes: Coordinator, Committee Member, or Observer. +The DAC infrastructure consists of inter-connected DAC nodes operating in one of three modes: Coordinator, Committee Member, or Observer. To set up a DAC Committee, the network needs exactly one Coordinator node and at least one Committee Member node. For increased decentralization and redundancy, it is desirable to have multiple Committee Member nodes. Ultimately, it is up to the DAC operators to determine the suitable size of their DAC Committee. -The **Coordinator** acts as a gateway between the clients of the DAC and the other DAC nodes. -It is responsible for receiving payloads, splitting them into pages of 4KBs each (the maximum size of a preimage that can be imported into a rollup), and forwarding the resulting pages to other nodes. +The **Coordinator** acts as a gateway between the clients of the DAC and the other DAC nodes. +It is responsible for receiving payloads, splitting them into pages of 4KBs each (the maximum size of a preimage that can be imported into a rollup), and forwarding the resulting pages to other nodes. The pages are processed to construct a Merkle Tree, ultimately yielding a root hash (the Blake2b hash for the root page). The Coordinator is also responsible for providing clients with DAC Certificates for available root hashes. -A **Committee Member** receives pages from the Coordinator and stores them on disk. -Once all the pages for the original payload are received, the Committee Member sends a BLS12-381 signature to the Coordinator to attest its commitment to storing the data and making it available upon request. +A **Committee Member** receives pages from the Coordinator and stores them on disk. +Once all the pages for the original payload are received, the Committee Member sends a BLS12-381 signature to the Coordinator to attest its commitment to storing the data and making it available upon request. The Coordinator collects these signatures and includes them in the data availability Certificate for the respective payload. -An **Observer** receives published pages from the Coordinator and stores them in the reveal data directory of the smart rollup node. +An **Observer** receives published pages from the Coordinator and stores them in the reveal data directory of the smart rollup node. 
It also exposes an API endpoint that the rollup node can call to fetch missing pages.
 It must be run on the same host machine as the rollup node to integrate with the DAC infrastructure.
 
 User Guide
 ^^^^^^^^^^
 
-In this section, we will look at how to use a DAC in a smart rollup setup. 
+In this section, we will look at how to use a DAC in a smart rollup setup.
 If you are interested in operating the DAC infrastructure, the `Operator Guide`_ offers instructions on how to setup a DAC Committee and integrate DAC with a smart rollup node.
 
 Generating a DAC Certificate
 ----------------------------
 
-A DAC Certificate can be generated by sending a hex-encoded payload to the Coordinator node. 
+A DAC Certificate can be generated by sending a hex-encoded payload to the Coordinator node.
 This can be done with the following command:
 
 .. code:: bash
 
@@ -107,12 +107,12 @@ This can be done with the following command:
      with content $PAYLOAD \
      --wait-for-threshold $THRESHOLD
 
-where 
+where
 
  * ``$COORDINATOR_RPC_ADDR`` - RPC address of the coordinator node in the format ``{host}:{port}``. eg. ``104.16.227.108:443``
  * ``$PAYLOAD`` - Hex-encoded payload that DAC Members will store.
  * ``$THRESHOLD`` - Minimum number of DAC Members that must commit to provide the data before the command returns.
- 
+
 Upon executing the command, the client will wait until the threshold number of signatures on the certificate is reached before returning the certificate as a hex-encoded string.
 This certificate must be posted to the global rollup inbox (see :ref:`sending_external_inbox_message`) which will eventually be processed by the kernel.
 
@@ -126,12 +126,15 @@ If you are a user of DAC, the `User Guide`_ offers instructions on how to use th
 
 Deploying a DAC Committee
 -------------------------
 
-A DAC Committee consists of one Coordinator node and many Committee Members nodes. 
+A DAC Committee consists of one Coordinator node and many Committee Member nodes.
 Each Committee Member node will subscribe to the Coordinator for new payloads so the Coordinator must be deployed first.
 
 Running a Coordinator
 """""""""""""""""""""
-A Coordinator node can be configured with the following command:
+
+For aspects related to the interaction with the Octez client, the DAC node uses the :ref:`Octez client's configuration file <client_conf_file>`.
+
+A Coordinator node can be further configured with the following command:
 
 .. code:: bash
 
@@ -139,12 +142,12 @@
      with data availability committee members $BLS_PUBLIC_KEYS \
      --data-dir $DATA_DIR --reveal-data-dir $REVEAL_DATA_DIR
 
- 
+
 where
 
  * ``$BLS_PUBLIC_KEYS`` - Space separated list of BLS12-381 public keys of the committee members. Note that the order of keys will ultimately affect the Certificate's hash and should be respected among all parties in the DAC network. eg. ``BLpk1yH... BLpk1wV...``
- * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``. 
+ * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``.
 * ``$REVEAL_DATA_DIR`` - Directory where pages are stored. It is advised to provide different values in case multiple DAC nodes run on the same host.
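+
+For illustration, a concrete configuration call might look as follows. This is only a sketch: the truncated keys reuse the placeholders from the example above, and the directories are hypothetical values to be replaced with your own.
+
+.. code:: bash
+
+   # Hypothetical example: substitute real BLS public keys and directories.
+   octez-dac-node configure as coordinator \
+     with data availability committee members BLpk1yH... BLpk1wV... \
+     --data-dir ~/dac-coordinator \
+     --reveal-data-dir ~/dac-coordinator/reveals
+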
Once configured, the Coordinator can be run with: @@ -152,7 +155,7 @@ Once configured, the Coordinator can be run with: .. code:: bash octez-dac-node --endpoint $NODE_ENDPOINT \ - run --data-dir $DATA_DIR + run --data-dir $DATA_DIR where @@ -163,10 +166,10 @@ where Running a Committee Member """""""""""""""""""""""""" Before you can run a Committee Member node, you need a BLS secret key which will be used to sign root hashes. -Ensure that the secret key has been imported into the local Octez wallet with the following command +Ensure that the secret key has been imported into the local Octez wallet with the following command .. code:: bash - + octez-client bls import secret key @@ -184,7 +187,7 @@ where: * ``$COORDINATOR_RPC_ADDR`` - RPC address of the coordinator node, in the format ``{host}:{port}``. eg. ``127.0.0.1:10832`` * ``$TZ4_ADDRESS`` - ``tz4`` address of the account of the committee member. eg. ``tz4KWwWMTZJLX5CKxAifUAy1WS3HdEKsk8Ys`` - * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``. + * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``. * ``$REVEAL_DATA_DIR`` - Directory where pages are stored. It is advised to provide different values in case multiple DAC nodes run on the same host. Once configured, the Committee Member can be run with: @@ -192,7 +195,7 @@ Once configured, the Committee Member can be run with: .. code:: bash octez-dac-node --endpoint $NODE_ENDPOINT \ - run --data-dir $DATA_DIR + run --data-dir $DATA_DIR where @@ -202,8 +205,8 @@ where Integrate DAC with a Smart Rollup node --------------------------------------- -Before a rollup node can receive messages, a DAC Obsever node must run on the same host machine and have its reveal data directory set to the same one as the rollup node. -The rollup node must further be configured to fetch missing pages from the Observer node. +Before a rollup node can receive messages, a DAC Obsever node must run on the same host machine and have its reveal data directory set to the same one as the rollup node. +The rollup node must further be configured to fetch missing pages from the Observer node. Running an Observer """"""""""""""""""" @@ -211,7 +214,7 @@ Running an Observer An Observer node can be configured with the following command: .. code:: bash - + octez-dac-node configure as observer \ with coordinator $COORDINATOR_RPC_ADDR \ and committee member rpc addresses $COMMITTEE_MEMBER_RPC_ADDRESSES \ @@ -221,10 +224,10 @@ An Observer node can be configured with the following command: --rpc-port $RPC_PORT where - + * ``$COORDINATOR_RPC_ADDR`` - RPC address of the coordinator node in the format ``{host}:{port}``. eg. ``127.0.0.1:10832`` * ``$COMMITTEE_MEMBER_RPC_ADDRESSES`` - Space separated list of the RPC addresses of the committee member nodes in the format ``{host1}:{port1} {host2}:{port2} ...``. eg. ``104.16.227.108:443 172.64.155.164:443`` - * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``. + * ``$DATA_DIR`` - Optional directory containing the persisted store of the DAC node instance. 
It is advised to give different values in case multiple DAC nodes run on the same host. Defaults to ``~/.octez-dac-node``.
 * ``$REVEAL_DATA_DIR`` - Directory where pages are stored. It is advised to provide different values in case multiple DAC nodes run on the same host.
 * ``$RPC_ADDR`` - Host that the DAC node listens on. Defaults to ``127.0.0.1``.
 * ``$RPC_PORT`` - Port the DAC node listens on. Defaults to ``10832``.
 
@@ -234,7 +237,7 @@ Once configured, the Observer can be run with:
 
 .. code:: bash
 
    octez-dac-node --endpoint $NODE_ENDPOINT \
-    run --data-dir $DATA_DIR 
+    run --data-dir $DATA_DIR
 
 where
 
@@ -246,11 +249,11 @@ Fetching missing pages from the Observer
 
 The rollup node can be configured to fetch missing pages from an Observer node by configuring ``--dac-observer`` flag on startup as shown in the following command:
 
 .. code:: bash
- 
+
    octez-smart-rollup-node-alpha run \
     <..other configurations> \
     --dac-observer $OBSERVER_RPC_ADDR
 
-where 
+where
 
 * ``$OBSERVER_RPC_ADDR`` - RPC address of the Observer node in the format ``{host}:{port}``. eg. ``127.0.0.1:10832``
 
diff --git a/docs/user/client-configuration.rst b/docs/user/client-configuration.rst
new file mode 100644
index 0000000000000000000000000000000000000000..1e760b8e122fd88bdb079234a95ce5ec8732e117
--- /dev/null
+++ b/docs/user/client-configuration.rst
@@ -0,0 +1,87 @@
+Client Configuration
+====================
+
+The Octez client can be configured in flexible ways to control various
+aspects of its behavior, such as using different running modes (:doc:`./sandbox`, :doc:`./mockup`, ...), connecting to a public Tezos node, selecting the directory for storing data, and so on.
+
+All these aspects
+can be customized by supplying **options** on the command line when running the client. Refer to :ref:`the client manual <client_manual>` for details.
+
+A subset of these aspects can be customized by specifying parameters in a **configuration file** for the client.
+These include, for example:
+
+- the address and port of a Tezos node to connect to, as an RPC endpoint (by default, the local node)
+- the directory where the client stores data
+- the number of confirmation blocks needed before an operation is considered included
+- the files defining bootstrap accounts and protocol constants, when running in :doc:`mockup mode <./mockup>`.
+
+When the same parameter is set both in the configuration file and using a command line option, the value on the command line is taken into account (and the configuration file is not updated).
+
+Finally, a few aspects of the client behavior can be customized by a set of **environment variables**.
+
+.. _client_conf_file:
+
+Client configuration file
+-------------------------
+
+.. note::
+
+   The format of the client configuration file (and the associated commands to manipulate it) is understood not only by the Octez client, but also by several other Octez executables, such as ``octez-admin-client``, the baker, and the accuser. For details, refer to the manual of each tool.
+
+Parameters in the configuration file can be specified in two different ways:
+
+- by creating and updating the configuration file using the ``config`` command of ``octez-client``.
+
+- by directly editing the configuration file.
+
+The config command
+~~~~~~~~~~~~~~~~~~
+
+::
+
+   ./octez-client config init
+
+This will initialize a configuration file for the client in
+``$HOME/.tezos-client/config``, using default values. For instance, it
+specifies that the client will use the local node as an RPC endpoint.
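+
+For illustration only, the configuration file is a JSON document; with default values it may look roughly as follows (a sketch: the exact fields, default values, and paths can differ between Octez versions and between setups):
+
+::
+
+    { "base_dir": "/home/user/.tezos-client",
+      "endpoint": "http://localhost:8732",
+      "web_port": 8080,
+      "confirmations": 0 }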
+
+The easiest way to amend this default configuration is to use commands such as:
+
+::
+
+    # Update the config file:
+    octez-client config update
+    # Check your new values:
+    octez-client config show
+    # If you want to restart from an empty cfg file:
+    octez-client config reset
+
+Editing the configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You may also edit the configuration file directly (``$HOME/.tezos-client/config`` by default).
+
+To run the client in multiple configurations on the same machine, you can duplicate and edit
+``$HOME/.tezos-client/config`` while making sure they don't share
+the same ``base-dir``. Then run your client with ``./octez-client --config-file=<path/to/config>``.
+
+.. _client_variables:
+
+Environment variables for the client
+------------------------------------
+
+The behavior of the client can be configured using the following environment variables:
+
+- ``TEZOS_CLIENT_UNSAFE_DISABLE_DISCLAIMER``: Setting this variable to "YES" (or: "yes", "Y", "y") disables the warning displayed by the client at startup when it is not launched on Mainnet.
+- ``TEZOS_CLIENT_DIR``: This variable may be used to supply the client data directory (by default, ``~/.tezos-client``).
+  Its value is overridden by option ``-d``.
+- ``TEZOS_SIGNER_*``: These variables are used for connecting the client to a remote :ref:`signer <signer>` (see there for details).
+- ``TEZOS_CLIENT_RPC_TIMEOUT_SECONDS``: This variable controls how long (in seconds, as an integer)
+  the client will wait for a response from the node, for each of the two RPC calls made during startup.
+  If this variable is not set, or otherwise cannot be parsed as a positive integer, a default value of ``10`` seconds is used for each call.
+  The two RPC calls this variable affects are queries that the client makes to the node in order to determine:
+  (1) the protocol version of the node it connects to, and (2) the commands supported in that version.
+- ``TEZOS_CLIENT_REMOTE_OPERATIONS_POOL_HTTP_HEADERS``: This variable specifies
+  custom HTTP headers to use with the ``--operations-pool`` option. Only the Host
+  header is supported as of now (see description in `rfc2616, section 14.23
+  <https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.23>`_).
diff --git a/docs/user/node-configuration.rst b/docs/user/node-configuration.rst
index ed520a9a486671712cb610653b453a721f080d0e..acc0dbc098abd2ad7b52de96963558b2f11cba90 100644
--- a/docs/user/node-configuration.rst
+++ b/docs/user/node-configuration.rst
@@ -19,8 +19,8 @@ obtained using the following command::
 
 .. _node-conf-file:
 
-Configuration file
-------------------
+Node configuration file
+-----------------------
 
 Parameters in the configuration file can be specified in two different ways:
 
diff --git a/docs/user/setup-client.rst b/docs/user/setup-client.rst
index 5237530a74ec6702ba906ab7964035e48d4821a0..6ea407b08e8387d8f5f0a0c8a6119f6764a4ac78 100644
--- a/docs/user/setup-client.rst
+++ b/docs/user/setup-client.rst
@@ -3,11 +3,17 @@ Setting up the client
 
 A client in the Tezos network provides different configuration possibilities:
 
-- Key management: configured how keys are managed, with different security tradeoffs.
-- Running modes: different running modes, beside the default mode, are intended to facilitate testing applications on smaller or fake networks, for example by executing some RPCs locally, without sending requests to a Tezos node.
+- tune various parameters of the client using flexible combinations of a configuration file, command-line options, and environment variables
+- configure how keys are managed, with different security tradeoffs
+- select different running modes, besides the default mode, which are intended to facilitate testing applications on smaller or fake networks, for example by executing some RPCs locally, without sending requests to a Tezos node.
 
 These configuration possibilities are described in the following pages.
 
+.. toctree::
+   :maxdepth: 2
+
+   client-configuration
+
 .. toctree::
    :maxdepth: 2
 
diff --git a/docs/user/various.rst b/docs/user/various.rst
index bce925ea97dfc7af7a07db0f37d8b33743e0efc5..8b00b41767e5907f1c526234ddc29ba37efdf6bc 100644
--- a/docs/user/various.rst
+++ b/docs/user/various.rst
@@ -26,6 +26,8 @@ A useful command to debug a node that is not syncing is:
 
    octez-admin-client p2p stat
 
+The admin client uses the same configuration file format as the client (see :ref:`client_conf_file`).
+
 .. _tezos_binaries_signals_and_exit_codes:
 
 Octez binaries: signals and exit codes