diff --git a/docs/alpha/consensus.rst b/docs/alpha/consensus.rst index ea3f163139a69380e8d23c08fad2b9d5e28fdd72..17f9dc3ff35fd9a85be69c3af9eb33c5694a2f59 100644 --- a/docs/alpha/consensus.rst +++ b/docs/alpha/consensus.rst @@ -296,7 +296,7 @@ producer and are distributed immediately. To encourage fairness and participation, the *block* proposer receives a bonus for the extra endorsements it includes in the block. The bonus is proportional to the number of -validator slots above the threshold of :math:`\lceil CONSENSUS\_COMMITTEE\_SIZE \times \frac{2}{3} \rceil` that +validator slots above the threshold of ``CONSENSUS_COMMITTEE_SIZE * 2 / 3`` that the included endorsements represent. The bonus is also distributed immediately. diff --git a/docs/api/openapi.rst b/docs/api/openapi.rst index 65561e48c331ac6203476b6276806f0c199b5de4..fdb81095b298b0dba2f9fff7bec7d6dcaf0ea0ad 100644 --- a/docs/api/openapi.rst +++ b/docs/api/openapi.rst @@ -18,10 +18,18 @@ The full RPC served by a node is a union of: + ``$PROTOCOL-openapi.json`` (served under the prefix: ``/chains//blocks/``) + ``$PROTOCOL-mempool-openapi.json`` (served under the prefix: ``/chains//mempool``) -For instance, for an RPC listed as ``GET /filter`` in ``$PROTOCOL-openapi.json``, its real endpoint is ``GET /chains//mempool/filter``. +For instance, for an RPC listed as ``GET /filter`` in ``$PROTOCOL-mempool-openapi.json``, its real endpoint is ``GET /chains//mempool/filter``. These OpenAPI specifications, detailed below, can be generated by running the Octez node as shown in section :ref:`openapi_generate`. +.. warning:: + The links below to the different OpenAPI specifications are opened using the Swagger UI integrated in GitLab. + This UI can be used for browsing the OpenAPIs (no need to install Swagger UI for that). + However, the interactive use suggested in this UI does not currently work because: + + - the UI does not allow one to specify a server (which should correspond to a running Tezos node), and + - browsers may block some of the generated requests or responses for security reasons. + Shell RPCs ---------- diff --git a/docs/developer/long-tezts.rst b/docs/developer/long-tezts.rst index 72692068b52410619044de34fcea57991da75338..958d0c9fba799b7eca6d58e7b4b40f5ef37ecdca 100644 --- a/docs/developer/long-tezts.rst +++ b/docs/developer/long-tezts.rst @@ -48,6 +48,8 @@ however ask that a new dedicated machine is created to run your test. Please ask on the ``#tests`` Slack channel of ``tezos-dev`` before merging. +.. _performance_regression_test_fw: + Performance Regression Test framework: Time Series, Alerts and Graphs --------------------------------------------------------------------- @@ -216,9 +218,9 @@ Configuring and Running Tezt Long Tests ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ For more information about how to use the configuration file, please refer -to the `Long test module API `__. +to the `Long test module API `__. -A predefined configuration has already been shiped in ``tezt/lib_performance_regression/local-sandbox/tezt_config.json``. +A predefined configuration has already been shipped in :src:`tezt/lib_performance_regression/local-sandbox/tezt_config.json`. It allows to use the InfluxDB and Grafana instances set up by the Docker compose file presented in the previous section. @@ -230,4 +232,3 @@ of your local machine. 
To run Tezt long tests, run the following command:: TEZT_CONFIG=tezt/lib_performance_regression/local-sandbox/tezt_config.json dune exec tezt/long_tests/main.exe - diff --git a/docs/developer/protocol_release_checklist.rst b/docs/developer/protocol_release_checklist.rst index e4218da406cbaa78f9cb8d727de1b0f4258d7850..00c7409a676da5217e7963889369056effd92141 100644 --- a/docs/developer/protocol_release_checklist.rst +++ b/docs/developer/protocol_release_checklist.rst @@ -183,11 +183,11 @@ migration that should be reverted. Soon after the injection (during the following days), the documentation has to be shifted to reflect the new active protocol and to drop the documentation of -the previous protocol, see meta-issue :gl:`nomadic-labs/tezos#462`. Also, part +the previous protocol, see meta-issue :gl:`#2170`. Also, part of the code related to the old protocol can now be dropped, see :doc:`../developer/howto-freeze-protocols`. One month after the activation of protocol N, we deactivate the N-1 test network. (For example, the Babylon net was deactivated one month after -Carthage went live on the main network.) This deactivation has been already +Carthage went live on the main network.) This deactivation has already been announced one month before activation (see above). diff --git a/docs/developer/snoop_arch.rst b/docs/developer/snoop_arch.rst index 5eed0a568e2bdc8d647aa079b5707473fb50399b..27b3bbb3323173f7a745de42a98f6a3b8e7a273e 100644 --- a/docs/developer/snoop_arch.rst +++ b/docs/developer/snoop_arch.rst @@ -45,7 +45,7 @@ Using ``tezos-benchmark`` requires to provide, for each benchmark, the following - a type of execution ``workload``; - a statistical model, corresponding to a function which to each ``workload`` associates an expression (possibly with free variables) denoting the predicted execution time for that workload. In simple cases, the model consists in *a single* expression computing a predicted execution time for any given workload. -- A family of pieces of code (ie closures) to be benchmarked, each associated to its ``workload``. Thus, each closure contains the application of a piece of a code to arguments instantiating a specific workload. We assume that the execution time of each closure has a well-defined distribution. In most cases, these closures correspond to executing *a single* piece of code of interest with different inputs. +- A family of pieces of code (i.e. closures) to be benchmarked, each associated to its ``workload``. Thus, each closure contains the application of a piece of code to arguments instantiating a specific workload. We assume that the execution time of each closure has a well-defined distribution. In most cases, these closures correspond to executing *a single* piece of code of interest with different inputs (a schematic sketch of these ingredients follows). 
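+
+The following sketch is hypothetical (our illustration, not the actual
+``tezos-benchmark`` interface); it shows what the three ingredients
+above could look like for a benchmark of list sorting, with all names
+(``workload``, ``model``, ``closures``) our own:
+
+.. code-block:: ocaml
+
+   (* The workload records the input size that drives execution time. *)
+   type workload = {length : int}
+
+   (* A model mapping a workload to a predicted execution time, with
+      free variables [th0] and [th1] to be inferred from measurements. *)
+   let model ~th0 ~th1 {length} =
+     let n = float_of_int length in
+     th0 +. (th1 *. n *. log n)
+
+   (* The closures to benchmark, each paired with its workload. *)
+   let closures =
+     List.map
+       (fun length ->
+         let input = List.init length (fun _ -> Random.int 1000) in
+         ({length}, fun () -> ignore (List.sort compare input)))
+       [100; 1_000; 10_000]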
From this input, ``tezos-benchmark`` can perform for you the following tasks: @@ -66,6 +66,7 @@ The main items required by this type are: The library is meant to be used as follows: - define a ``Benchmark.S``, which requires + - constructing benchmarks - defining models, either pre-built (via the ``Model`` module) or from scratch (using the ``Costlang`` DSL); @@ -73,6 +74,7 @@ The library is meant to be used as follows: - given the data generated, infer parameters of the models using ``Inference.make_problem`` and ``Inference.solve_problem``; - exploit the results: + - input back the result of inference in the model to make it predictive - plot the data (``tezos-benchmark`` can generate CSV) - generate code from the model (``Codegen`` module) @@ -86,7 +88,7 @@ Defining benchmarks: the ``Generator`` module The ``Generator.benchmark`` type defines the interface that each benchmark must implement. At the time of writing, this type specifies three ways -to provide a benchmark (but more could be easily added): +to provide a benchmark (but more could easily be added): .. code-block:: ocaml @@ -119,7 +121,7 @@ The ``With_context`` constructor allows to define benchmarks we require to set up and cleanup a *context*, shared by all executions of the closure. An example (which prompted the addition of this feature) is the case of storage benchmarks, where we need to create a directory -and set up some files before executing a closure containing eg +and set up some files before executing a closure containing e.g. a read or write access, after which the directory must be removed. With_probe benchmarks @@ -150,9 +152,9 @@ The intended semantics of each method is as follows: Note that ``With_probe`` benchmarks do not come with a fixed workload, but rather with an aspect-indexed family of workloads. This reflects -the fact that this kind of benchmark can measure in the same run -several different pieces of code, each potentially associated to its -own cost model. +the fact that this kind of benchmark can measure +several different pieces of code in the same run, +each potentially associated to its own cost model. The function ``Measure.make_timing_probe`` provides a basic probe implementation. The unit test in ``src/lib_benchmark/test/test_probe.ml`` @@ -183,7 +185,7 @@ problem. Note that since :math:`S` is typically not finite, :math:`S \rightarrow \mathbb{R}_{\ge 0}` is an infinite-dimensional vector space. We will restrict our search -to a :math:`n`-dimensional subset of functions :math:`f_\theta, \theta \in \mathbb{R}^n` +to an :math:`n`-dimensional subset of functions :math:`f_\theta`, with :math:`\theta \in \mathbb{R}^n`, of the form .. math:: @@ -192,11 +194,11 @@ of the form where the :math:`(g_i)_{i=1}^n` is a **fixed** family of functions :math:`g_i : S \rightarrow \mathbb{R}_{\ge 0}`. -An n-dimensional linear cost model is entirely determined by the :math:`g_i`. +An :math:`n`-dimensional linear cost model is entirely determined by the :math:`g_i`. Enumerating the currying isomorphisms, a linear model can be considered as: -1. a **linear** function :math:`\mathbb{R}^n \multimap (S \rightarrow \mathbb{R}_{ge 0})` +1. a **linear** function :math:`\mathbb{R}^n \multimap (S \rightarrow \mathbb{R}_{\ge 0})` from "meta" parameters to cost functions; 2. a function :math:`S \rightarrow (\mathbb{R}^n \rightarrow \mathbb{R}_{\ge 0})` from sizes to linear forms over "meta" parameters; @@ -251,7 +253,7 @@ type: end In a nutshell, the type of terms is ``type 'a term = \pi (X : S). 
'a X.repr``, -ie terms must be thought of as parametric in their implementation, +i.e. terms must be thought of as parametric in their implementation, provided by a module of type ``S``. It must be noted that this language does not enforce that built @@ -271,7 +273,7 @@ terms and printing terms: - ``Costlang.Hash_cons`` allows to manipulate hash-consed terms, - ``Costlang.Beta_normalize`` allows to beta-normalize... -Other implementations are provided elsewhere, eg for code or +Other implementations are provided elsewhere, e.g. for code or report generation. Definition of cost models: the ``Model`` module @@ -290,7 +292,7 @@ The ``Measure`` module The ``Measure`` module is dedicated to measuring the execution time of closures held in ``Generator.benchmark`` values and -turn these into timed workloads (ie pairs of workload and execution time). +turning these into timed workloads (i.e. pairs of workload and execution time). It also contains routines to remove outliers and to save and load workload data together with extra metadata. @@ -305,7 +307,7 @@ function. val perform_benchmark : Measure.options -> ('c, 't) Tezos_benchmark.Benchmark.poly -> 't workload_data -Before delving in its implementation, let's examine its type. +Before delving into its implementation, let's examine its type. A value of type ``('c, 't) Tezos_benchmark.Benchmark.poly`` is a first class module where ``'c`` is a type variable corresponding to the configuration of the benchmark and ``'t`` is a variable corresponding to the type @@ -315,7 +317,7 @@ in these types. Under the hood, this functions calls to the ``create_benchmarks`` function provided by the first class module to create a list of ``Generator.benchmark`` values. This might involve loading from -benchmark-specific parameters from a json file if the benchmark +a benchmark-specific JSON parameter file, if the benchmark so requires. After setting up some benchmark parameters (random seed, GC parameters, CPU affinity), the function iterates over the list of ``Generator.benchmark`` and calls @@ -324,7 +326,7 @@ in the ``Generator.benchmark`` value. This yields an empirical distribution of timings which must be determinized: the user can pick either a percentile or the mean of this distribution. The function then records the execution time together with the workload -(contained in the ```Generator.benchmark```) in its list of results. +(contained in the ``Generator.benchmark``) in its list of results. Loading and saving benchmark results ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -338,7 +340,7 @@ name, benchmark date). Removing outliers from benchmark data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -It can happen that some timing measurement is polluted by eg another +It can happen that some timing measurement is polluted by e.g. another process running in the same machine, or an unlucky scheduling. In this case, it is legitimate to remove the tainted data point from the data set in order to make fitting cost models easier. The function @@ -375,7 +377,7 @@ decomposed in the two following steps: - transform the cost model and the empirical data into a matrix equation :math:`A x = T` where the input dimensions of :math:`A` -(ie the columns) are indexed by free variables (corresponding to +(i.e. 
the columns) are indexed by free variables (corresponding to cost coefficients to be inferred), the output dimensions of :math:`A` are indexed by workloads and where :math:`T` is the column vector containing execution times for each workload; @@ -393,7 +395,7 @@ Case study: constructing the matrices We'd like to model the execution time of an hypothetical piece of code sorting an array using merge sort. We *know* that the time complexity of merge sort -is :math:`O(n log n)` where `n` is the size of the array: we're +is :math:`O(n \log{n})` where `n` is the size of the array: we're interested in predicting the actual execution time as a function of :math:`n` for practical values of `n`. @@ -401,7 +403,7 @@ We pick the following cost model: .. math:: - cost(n) = \theta_0 + \theta_1 \times n log(n) + \text{cost}(n) = \theta_0 + \theta_1 \times n \log{n} Our goal is to determine the parameters :math:`\theta_0` and :math:`\theta_1`. Using the :ref:`Costlang DSL`, @@ -424,10 +426,10 @@ where :math:`n_i` and :math:`t_i` correspond respectively to the size of the array and the measured sorting time for the :math:`i` th benchmark. -By evaluating the model ``cost`` on each :math:`n_i`, we get a -family of linear combinations :math:`\theta_0 + \theta_1 \times n_i log(n_i)`. +By evaluating the model :math:`\text{cost}` on each :math:`n_i`, we get a +family of linear combinations :math:`\theta_0 + \theta_1 \times n_i \log{n_i}`. Each such linear combination is isomorphic to the vector -:math:`(1, n_i log(n_i))`. These vectors correspond to the row vectors of +:math:`(1, n_i \log{n_i})`. These vectors correspond to the row vectors of the matrix :math:`A` and the durations :math:`t_i` form the components of the column vector :math:`T`. @@ -568,7 +570,7 @@ one sometimes needs to perform a benchmark for a given piece of code, estimate the cost of this piece of code using the inference module and then inject the result into another inference problem. For short chains of dependencies this is doable by hand, however when dealing with -eg more than one hundred Michelson instructions it nice to have an +e.g. more than one hundred Michelson instructions it is nice to have an automated tool figuring out the dependencies and scheduling the inference automatically. diff --git a/docs/developer/snoop_example.rst b/docs/developer/snoop_example.rst index 5311f9c1caa65b810876b1ebe1aac37ada6eb905..07603517a1aad3b7f973ef49e4d877a1b1e849b2 100644 --- a/docs/developer/snoop_example.rst +++ b/docs/developer/snoop_example.rst @@ -11,16 +11,16 @@ is used among other things to hash blocks, operations and contexts: At the time of writing, this function is a thin wrapper which concatenates the list of bytes and passes it to the ``blake2b`` -implementation provided by `HACL *`. +implementation provided by `HACL* `_. -Step 1: defining the benchmark +Step 1: Defining the benchmark ------------------------------ Benchmarks correspond to OCaml modules implementing the ``Benchmark.S`` signature. These must then be registered via the ``Registration.register`` function. Of course, for this registration to happen, the file containing the benchmark and the call to ``Registration.register`` should be linked with ``tezos-snoop``. -See the :doc:`architecture of tezos-snoop ` for complementary details. +See :doc:`snoop_arch` for complementary details. We'll define the benchmark module chunk by chunk and describe each part. Benchmarks are referenced by ``name``. The ``info`` field is a brief @@ -34,7 +34,7 @@ that allows listing benchmarks by kind. 
let info = "Illustrating tezos-benchmark by benchmarking blake2b" let tags = ["example"] -Typically, a benchmark will depend on a set of parameters corresponding eg to +Typically, a benchmark will depend on a set of parameters corresponding e.g. to the parameters of the samplers used to generate input data to the function being benchmarked. This corresponds to the type ``config``. A ``default_config`` is provided, which can be overridden by specifying a well-formatted JSON file. @@ -54,7 +54,7 @@ This is made possible by defining a ``config_encoding`` using the Benchmarking involves measuring the execution time of some piece of code and using the recorded execution time to fit a model. -As explained in the :doc:`architecture of tezos-snoop `, +As explained in :doc:`snoop_arch`, a model is in fact a function of two parameters: a ``workload`` and the vector of free parameters to be fitted. The ``workload`` corresponds to the information on the input of the function being benchmarked required @@ -80,9 +80,9 @@ not interested in plotting, this function can be made to always return Sparse_vec.String.of_list [("nbytes", float_of_int nbytes)] We expect the execution time of ``Blake2b.hash_bytes`` to be proportional -to the number of bytes being hashed, with possibly a small constant time overhead. +to the number of bytes being hashed, with possibly a small constant-time overhead. Hence, we pick an ``affine`` model. The ``affine`` model is generic, of the form -:math:`affine(n) = \theta_0 + \theta_1 \times n` with :math:`\theta_i` the free +:math:`\text{affine}(n) = \theta_0 + \theta_1 \times n` with :math:`\theta_i` the free parameters. One must explain how to convert the ``workload`` to the argument ``n``. This is the purpose of the ``conv`` parameter. @@ -151,15 +151,15 @@ For illustrative purposes, we also make the ``blake2b`` available for code gener "blake2b_codegen" (Model.For_codegen (List.assoc "blake2b" Blake2b_bench.models)) -Step 2: checking the timer +Step 2: Checking the timer -------------------------- Before we perform the benchmarks, we need to ensure that the system timer is sufficiently precise. This data is also useful to subtract the latency -of time timer for very small duration benchmarks (which is not required here). +of the timer for benchmarks of very small duration (which is not required here). We invoke the tool on the built-in benchmark ``TIMER_LATENCY`` and specify -that we want only one closure to benchmark (since all closures are identical -for this benchmark) but execute this closure ``100000`` times. +(through ``--bench-num``) that we want only one closure to benchmark (since all closures are identical +for this benchmark) but to execute this closure ``100000`` times (through ``--nsamples``). .. code-block:: shell @@ -186,9 +186,9 @@ The tool returns the following on standard output: benchmarking 1/1 stats over all benchmarks: { max_time = 25.000000 ; min_time = 25.000000 ; mean_time = 25.000000 ; sigma = 0.000000 } -This commands measures `100000` times the latency of the timer, that is +This command measures ``100000`` times the latency of the timer, that is the minimum time between two timing measurements. This yields an empirical distribution -on timings. The tool takes the 50th percentile (ie the median) of the empirical distribution +on timings. The tool takes the 50th percentile (i.e. the median) of the empirical distribution and returns the result: 25ns latency. This is reasonable. 
Since there's only one benchmark (with many samples), the standard deviation is by definition zero. One could also run many benchmarks with fewer samples per benchmark: @@ -224,23 +224,23 @@ A reliable timer should have a latency of the order of 20 to 30 nanoseconds, wit It can happen on some hardware or software configurations that the timer latency is of the order of *microseconds* or worse: this makes benchmarking short-lived computations impossible. -Step 3: benchmarking +Step 3: Benchmarking -------------------- If the results obtained in the previous section are reasonable, we can proceed to the generation of raw timing data. We want to invoke the ``Blake2b_example`` benchmark and save the resulting data to ``./blake2b.workload``. -We want `500` distinct random inputs, and for each stack we will perform -the timing measurement `3000` times. The ``--determinizer`` option specifies -how the empirical timing distribution corresponding to the per-stack `3000` samples -will be converted to a fixed value: here we pick the 50th percentile, ie the median -(which happens to also be the default, so this last option could have been omitted). +We want ``500`` distinct random inputs, and for each input we will perform +the timing measurement ``3000`` times. The ``--determinizer`` option specifies +how the empirical timing distribution corresponding to the per-input ``3000`` samples +will be converted to a fixed value: here we pick the 50th percentile, i.e. the median +(which happens to also be the default, so this option could have been omitted). We also use an explicit random seed in case we want to reproduce the exact same benchmarks. If not specified, the PRNG will self-initialize using an unknown seed. .. code-block:: shell - tezos-snoop benchmark Blake2b_example and save to blake2b.workload --bench-num 500 --nsamples 3000 --seed 12897 + tezos-snoop benchmark Blake2b_example and save to blake2b.workload --bench-num 500 --nsamples 3000 --determinizer percentile@50 --seed 12897 Here's the output: @@ -265,7 +265,7 @@ Here's the output: Since the size of inputs varies a lot, the statistics over all benchmarks are less useful. -Step 3.5: (optional) removing outliers +Step 3.5: (optional) Removing outliers -------------------------------------- It is possible to remove outliers from the raw benchmark data. The command is the following: @@ -288,11 +288,11 @@ The best defense against outliers is to have clean data in the first place: use .. _Fitting the model: -Step 4: fitting the model +Step 4: Fitting the model ------------------------- We can now proceed to inferring the free parameters from the model using the data. -At the time of writing, the tool offloads the regression problem to the scikit-learn +At the time of writing, the tool offloads the regression problem to the `scikit-learn `_ (aka sklearn) Python library: install it before proceeding. Let's execute the following command: .. code-block:: shell @@ -347,7 +347,7 @@ if the model is good, one should observe that the empirical data lies along a linear subspace. Here, the model is trivial so the central plot is less interesting. 
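+
+To make the outcome of the fit concrete, here is a minimal sketch of
+the resulting cost function. This is our own illustration: the
+coefficient values below are placeholders, not the values inferred
+above.
+
+.. code-block:: ocaml
+
+   (* Placeholder coefficients: in practice, substitute the values
+      produced by the regression step above. *)
+   let theta0 = 330.0 (* constant overhead, in ns *)
+
+   let theta1 = 1.6 (* cost per byte, in ns *)
+
+   (* The fitted affine model, now a concrete, predictive function. *)
+   let blake2b_cost nbytes = theta0 +. (theta1 *. float_of_int nbytes)
+
+Feeding the inferred coefficients back into the model in this way is
+what makes it predictive; the code generation step below automates it.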
-Step 5: generating code +Step 5: Generating code ----------------------- As a final step, we demonstrate how to generate code corresponding to the diff --git a/docs/developer/time_measurement_ppx.rst b/docs/developer/time_measurement_ppx.rst index 4177bcd2c51aca414035011d2b3855a41a1573de..fb5ec97eb407e0e2935e8067072360310f390624 100644 --- a/docs/developer/time_measurement_ppx.rst +++ b/docs/developer/time_measurement_ppx.rst @@ -187,6 +187,8 @@ The PPX provides the handling of three attributes: inside a ``Lwt.t`` monad. So, this attribute must be placed on an expression evaluating in a ``Lwt.t`` value in order to compile. +Some of these attributes are used, for instance, in the implementation of the :ref:`performance regression test framework <performance_regression_test_fw>`. + Instrumenting the tezos-node executable --------------------------------------- diff --git a/docs/ithaca/consensus.rst b/docs/ithaca/consensus.rst index 94603241ed60534684cdfe6cdb9bfc3c1808c50d..4c3f2fb50b77c127b95f100ff925e2397b74ee9b 100644 --- a/docs/ithaca/consensus.rst +++ b/docs/ithaca/consensus.rst @@ -299,7 +299,7 @@ producer and are distributed immediately. To encourage fairness and participation, the *block* proposer receives a bonus for the extra endorsements it includes in the block. The bonus is proportional to the number of -validator slots above the threshold of :math:`\lceil CONSENSUS\_COMMITTEE\_SIZE \times \frac{2}{3} \rceil` that +validator slots above the threshold of ``CONSENSUS_COMMITTEE_SIZE * 2 / 3`` that the included endorsements represent. The bonus is also distributed immediately. diff --git a/docs/protocols/012_ithaca.rst b/docs/protocols/012_ithaca.rst index 393a8e149c747b1fb8a4a5c8713eed83c09aa4cf..0d5833d138a3812670461989062ee9fc64dbb19c 100644 --- a/docs/protocols/012_ithaca.rst +++ b/docs/protocols/012_ithaca.rst @@ -170,3 +170,4 @@ Minor Changes Context entries located in ``/chains/main/blocks//context/raw/bytes/cycle//roll_snapshot`` are no longer accessible after Tenderbake. + As observed in issue :gl:`#2764`, the RPC is buggy for cycle ``474``: the correct result for that cycle is index 16 (not 4). diff --git a/docs/releases/version-12.rst b/docs/releases/version-12.rst index 7af7c9a8c6de7abf2e25bdfef0cbb6e4e3238f05..6d83f1db29a8e09c8b8b008c1b3c5d2c96a4a18e 100644 --- a/docs/releases/version-12.rst +++ b/docs/releases/version-12.rst @@ -54,7 +54,7 @@ functionality is now integrated into the *baker* daemons: Changelog --------- -- `Version 12.3 <../CHANGES.html#version-12-2>`_ +- `Version 12.3 <../CHANGES.html#version-12-3>`_ - `Version 12.2 <../CHANGES.html#version-12-2>`_ - `Version 12.1 <../CHANGES.html#version-12-1>`_ - `Version 12.0 <../CHANGES.html#version-12-0>`_ diff --git a/docs/user/logging.rst b/docs/user/logging.rst index 385cde6a7a20ff6ff7a4fdb535d45a024da5e5bf..a83918d0fb9dcee0c61116b0b9b5f0f4a5e62e07 100644 --- a/docs/user/logging.rst +++ b/docs/user/logging.rst @@ -221,7 +221,7 @@ called; this should include *all* the regular ``tezos-*`` binaries. - rules are ordered, i.e., the first pattern that matches, from left to right, fires the corresponding rule. -- ``TEZOS_EVENT_HOSTNAME`` is used by the file-descriptor-sink to tweak the JSON +- ``TEZOS_EVENT_HOSTNAME`` is used by the file-descriptor-sink to tweak the JSON output (see above). 
As the Irmin context backend uses an internal and specific logging diff --git a/src/proto_alpha/lib_protocol/merkle_list.ml b/src/proto_alpha/lib_protocol/merkle_list.ml index 88da122fa314610722923e3d33c3ba841008017b..12f1bd6ca9db0c38d52f98f4118aab20d367f01b 100644 --- a/src/proto_alpha/lib_protocol/merkle_list.ml +++ b/src/proto_alpha/lib_protocol/merkle_list.ml @@ -26,7 +26,7 @@ type error += Merkle_list_invalid_position let max_depth ~count_limit = - (* We assume that the Merkle_tree implemenation computes a tree in a + (* We assume that the Merkle_tree implementation computes a tree in a logarithmic size of the number of leaves. *) let log2 n = Z.numbits (Z.of_int n) in log2 count_limit diff --git a/src/proto_alpha/lib_protocol/tx_rollup_gas.mli b/src/proto_alpha/lib_protocol/tx_rollup_gas.mli index 0abb73e189cee2a703da3be9a85eb176120ac0be..4340dd40557df623f047e23e0296aaea27e4146f 100644 --- a/src/proto_alpha/lib_protocol/tx_rollup_gas.mli +++ b/src/proto_alpha/lib_protocol/tx_rollup_gas.mli @@ -38,7 +38,7 @@ val hash : (** [hash_cost size] returns the cost of gas for hashing a buffer of [size] bytes. - Returns [Tx_rollup_input_message_size] iff [size < 0]. *) + Fails with [Tx_rollup_negative_input_size] iff [size < 0]. *) val hash_cost : int -> Gas_limit_repr.cost tzresult val consume_check_path_inbox_cost : Raw_context.t -> Raw_context.t tzresult