diff --git a/CHANGES.rst b/CHANGES.rst index b32988c57a8b76f2ba3b823d4eba19c60c11deef..fca4f21f9c3c8251abe150173bc42944a7426e4f 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -53,6 +53,10 @@ Node It can be used by calling the RPC with the parameter ``?version=1`` (default version is still ``0``). +- Added an RPC ``/config/logging`` to reconfigure the logging framework + without having to restart the node. See also the new documentation pages + related to logging. + - Better handling of mempool cache in the `distributed_db` which should make the `distributed_db` RAM consumption strongly correlated to the one of the mempool. @@ -78,6 +82,14 @@ Docker Images Miscellaneous ------------- +- Made the ``file-descriptor-{path,stdout,stderr}://`` event-logging + sink more configurable (e.g. filtering per level and per section). The + environment variable ``TEZOS_NODE_HOSTNAME`` used for the output of events + was renamed to the more appropriate ``TEZOS_EVENT_HOSTNAME``. + +- Added specific documentation pages about logging for users and + developers. 
+ Version 11.0~rc1 ================ diff --git a/docs/Makefile b/docs/Makefile index 337e8e173e0cb98c9ed1506653f480fd4a82ece8..bed7c4fe05fbd9164321186ff30292977067844e 100644 --- a/docs/Makefile +++ b/docs/Makefile @@ -53,7 +53,6 @@ manuals: main @../tezos-signer man -verbosity 3 -format html | sed "s#${HOME}#\$$HOME#g" > api/tezos-signer.html @../tezos-codec man -verbosity 3 -format html | sed "s#${HOME}#\$$HOME#g" > api/tezos-codec.html @../tezos-snoop man -verbosity 3 -format html | sed "s#${HOME}#\$$HOME#g" > api/tezos-snoop.html - ../tezos-node run --help | sed "s#\(.\)$$(echo "\010")\1#\1#g; s#_$$(echo "\010")##g" > api/tezos-node-run.txt .PHONY: odoc odoc: main diff --git a/docs/developer/event_logging_framework.rst b/docs/developer/event_logging_framework.rst new file mode 100644 index 0000000000000000000000000000000000000000..571f2f05218d4f1593931cc5cbd8f968a1cd4e50 --- /dev/null +++ b/docs/developer/event_logging_framework.rst @@ -0,0 +1,70 @@ +Using The Event Logging Framework +--------------------------------- + +Developers of most modules should consider generating useful information +for the :doc:`Logging subsystem <../user/logging>`. + +In the Octez code base, logging is instrumented using an asynchronous event +system, where log events are emitted in various parts of the code and consumed +by so-called sinks. This is done by using the ``tezos-event-logging`` library. + +We use two levels of abstraction to define these events. Heavy events are +defined using the most generic API and have the full expressive power and +structure while being 100% lazy. Simple events are record-like structures which +are constructed on top of the generic event API, and are meant to be less +verbose and more developer-friendly. + + +Adding Events +~~~~~~~~~~~~~ + +Simple Events +^^^^^^^^^^^^^ + +Simple events are record-like structures which are constructed directly +(i.e. there is no high-level OCaml type, and parsing events is not accessible +programmatically).
The API is a bit less lazy than with the generic API (the +fields of the events are still evaluated every time, which is usually not an +issue since they are evaluated anyway by the application code itself). + +See for instance +:src:`bin_node/node_config_file.ml` (search for the string "declare_"): + +- one declares typed-record events with + ``Internal_event.Simple.declare_``, +- and they are called with + ``val Simple.emit: 'a Simple.t -> 'a -> (unit, trace) result Lwt.t``. + +Please read the guidelines that are documented in +`the Internal_event.Simple module. <../api/odoc/_html/tezos-event-logging/Tezos_event_logging/Internal_event/Simple/index.html>`__ + + +“Heavy” Events +^^^^^^^^^^^^^^ +The most generic API for defining events ``Internal_event.Make`` gives full +expressive power and structure while being 100% lazy. + +See for instance: +:src:`proto_alpha/lib_delegate/client_baking_blocks.ml` (module +``Block_seen_event``): + +- one needs to call the functor + ``module Event = Internal_event.Make(Definition)`` where + ``Definition: Internal_event.EVENT_DEFINITION``; +- this provides + ``val Event.emit: ?section:Section.t -> (unit -> Block_seen_event.t) -> (unit, trace) result Lwt.t``. + +Creating Sinks +~~~~~~~~~~~~~~ + +A sink is a module of type ``Internal_event.SINK`` which is registered +within the system (just like events or errors kinds) using: +``let () = Internal_event.All_sinks.register (module My_sink)``. Its +chosen URI-scheme should be unique. + +See for instance: + +- The (simplest) module ``Lwt_log_sink``: + :src:`lib_event_logging/internal_event.ml`, +- or, the much more complex, + :src:`lib_stdlib_unix/file_descriptor_sink.ml`. diff --git a/docs/index.rst b/docs/index.rst index 5d2c961a2b182c15abc8b4b483a2976db93c0ba0..91a05e7d0afd4f5c585d7a343629e9e31c8ab28f 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -126,6 +126,7 @@ in the :ref:`introduction `. user/multisig user/fa12 user/various + user/logging .. 
toctree:: :maxdepth: 2 @@ -199,6 +200,7 @@ in the :ref:`introduction `. developer/entering_alpha developer/protocol_environment developer/protocol_environment_upgrade + developer/event_logging_framework developer/testing_index developer/profiling developer/snoop diff --git a/docs/introduction/howtouse.rst b/docs/introduction/howtouse.rst index 100389f409425958412dc2d7275abc792629f85e..bb541922201e2d8b460c8bd9f4b171966fd954cf 100644 --- a/docs/introduction/howtouse.rst +++ b/docs/introduction/howtouse.rst @@ -193,7 +193,7 @@ Many options of the node can be configured when running the node: - The directory where the node stores local data (using option ``--data-dir``) - Network parameters (e.g. the number of connections to peers, using option ``--connections``) - Validator and mempool parameters -- :ref:`Logging options <configure_logging>`. +- :ref:`Logging options <configure_node_logging>`. The list of configurable options can be obtained using the following command:: diff --git a/docs/user/logging.rst b/docs/user/logging.rst new file mode 100644 index 0000000000000000000000000000000000000000..f9fe8188e376f8f61965bff57f0ca637dad2ca98 --- /dev/null +++ b/docs/user/logging.rst @@ -0,0 +1,300 @@ +Logging +======= + +Logging features in Tezos make it possible to monitor its execution and to be +informed in real time about *events* of interest, such as errors, completion of +certain steps, etc. To this end, various software components emit *events* +throughout the codebase (see :doc:`../developer/event_logging_framework`); the +logging framework dispatches them to an arbitrary number of (active) *sinks* +which can filter, print, store, or otherwise handle events. + +Events have: + +- a *name* +- a *section*: a hierarchical event classification mechanism, i.e.
a path-like + list of strings; sometimes the name is appended to make a fully-qualified + name, +- a *level* quantifying the relative importance of the event (Debug, Info, + Notice, Warning, Error, and Fatal — inspired by the + `Syslog standard severity levels `_), +- and, as *contents*, arbitrary structured data. They can be serialized to any + format supported by the data-encoding library (e.g. JSON) and pretty-printed + for human readers. + +Events are by default “lazy”: if no active sink accepts their section and level, +they are not evaluated at all (i.e. in OCaml, the ``unit -> event`` argument of +``Event.emit`` is never called). This means that only *outputting* more events +implies more CPU and memory usage. + +*“Legacy events”* are the events that use the old and deprecated logging API; +their contents all have the same structure: they are just human-readable +text. With that API, events are evaluated even when they are not consumed by a +sink, i.e. they are not lazy. The codebase is in the process of getting rid of +them. + +Sink Configuration +------------------- + +The logging framework refers to sinks with URIs (i.e. structured strings of +the form ``scheme://path?query``): the scheme of the URI is used to find the +appropriate sink implementation; the rest of the URI is used to configure the +particular instance. There can be as many instances of a given implementation as +needed, except for the (legacy) *Lwt-Log Sink* which should be activated at most +once. + +(Legacy) Lwt-Log Sink +~~~~~~~~~~~~~~~~~~~~~ + +This sink is the one currently activated by default: + +- Scheme: ``lwt-log`` +- URI-Configuration: **none**; (for now) it can only be configured through the + legacy ``TEZOS_LOG`` environment variable or the dedicated section of the + node’s configuration file (see below). + +This sink can only output pretty-printed versions of the events.
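As an illustration of the URI-based sink dispatch described above, here is a minimal sketch in Python (not the actual OCaml implementation; the registry below is hypothetical and simply lists the schemes documented on this page):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical registry of URI schemes, mirroring the sink
# implementations documented on this page.
KNOWN_SCHEMES = {
    "lwt-log",
    "file-descriptor-path",
    "file-descriptor-stdout",
    "file-descriptor-stderr",
    "unix-files",
}

def parse_sink_uri(uri):
    """Split a sink URI: the scheme selects the implementation,
    the path and query configure the particular instance."""
    parsed = urlparse(uri)
    if parsed.scheme not in KNOWN_SCHEMES:
        raise ValueError(f"no sink registered for scheme {parsed.scheme!r}")
    return parsed.scheme, parsed.path, parse_qs(parsed.query)

scheme, path, options = parse_sink_uri(
    "file-descriptor-path:///the/path/to/write.log"
    "?format=one-per-line&level-at-least=notice"
)
# scheme  == "file-descriptor-path"
# path    == "/the/path/to/write.log"
# options == {"format": ["one-per-line"], "level-at-least": ["notice"]}
```

In the real code base, this dispatch is performed by the sink registry (each sink implementation registers its own scheme).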
+ +File-Descriptor Sinks +~~~~~~~~~~~~~~~~~~~~~ + +This is a configurable and duplicatable replacement for the lwt-log sink; +it allows outputting events to regular Unix file-descriptors. It is +actually a *family* of three URI-schemes: + +- ``file-descriptor-path://`` to output to a file, +- ``file-descriptor-stdout://`` to output to ``stdout`` / ``1``, or +- ``file-descriptor-stderr://`` to output to ``stderr`` / ``2``. + +*Note:* the ``-stdout`` and ``-stderr`` schemes are there for convenience +and for making the API consistent; most Unix-ish systems nowadays have +``/dev/std{out,err}`` and ``/dev/fd/[0-9]+`` special files which can be +used with ``file-descriptor-path`` (with some precautions). + +The path of the URI is used by ``file-descriptor-path`` to choose the +path to write to, and is ignored for the other two. + +The query of the URI is used to further configure the sink instance. + +Common options: + +- ``level-at-least=<level>`` the minimal log-level that the sink + will output. +- ``section-prefix`` can be given many times and defines a list of + pairs ``<section-prefix>:<level>`` which can be used to + set up more precise filters. ``level-at-least=info`` can be understood + as ``section-prefix=:info``; the empty section prefix matches all + sections. +- ``format=<format>`` the output format used; acceptable values are: + + - ``one-per-line`` (the default): output JSON objects, one per line, + - ``netstring``: use the Netstring format + (cf. `Wikipedia `__) to + separate JSON records, + - ``pp`` to output the events pretty-printed, one per line, using a + format compatible with + `RFC-5424 `__ (or + Syslog). + +Options available only for the ``file-descriptor-path://`` case: + +- ``with-pid=<bool>`` when ``true`` adds the current process-id to the + file path provided (for instance, useful for the node when not + running in ``--singleprocess`` mode). +- ``fresh=<bool>`` when ``true`` truncates the file if it + already exists instead of appending to it.
+- ``chmod=<int>`` sets the access-rights of the file at creation time + (default is ``0o600``, provided + `Umask `__ allows it). + +Examples: + +- ``file-descriptor-path:///the/path/to/write.log?format=one-per-line&level-at-least=notice&with-pid=true&chmod=0o640`` + → Executables will write all log events of level at least ``Notice`` + to a file ``/the/path/to/write-XXXX.log`` where ``XXXX`` is the PID; + the file will also be readable by the user’s group (``0o640``). +- ``file-descriptor-stderr://?format=netstring`` → Executables will + write to ``stderr`` JSON blobs *“packetized” as* Netstrings. +- ``file-descriptor-path:///dev/fd/4?section-prefix=rpc:debug`` → + Executables will write to the file-descriptor ``4``, likely opened by + a parent monitoring process. The reader will only receive the logs + from the section ``rpc`` (but all of them, including ``Debug``). + +The format of the events is (usually minified): + +.. code:: json + + {"fd-sink-item.v0": + {"hostname": <string>, + "time_stamp": <float>, + "section": [ <string> ], + "event": <event-json> + } } + + +Additionally, the ``"hostname"`` field can be customized with the environment +variable ``TEZOS_EVENT_HOSTNAME``; its default value is the hostname of the +device the node is running on. + + + +File-Tree Sink +~~~~~~~~~~~~~~ + +This is a sink that dumps events as JSON files (same format as above) +in a directory structure guided by the section of the events. It can be +useful for testing the logging framework itself, or for off-line +post-mortem analysis, for instance. + +The URI scheme is ``unix-files``; the path is the top-level directory in +which the JSON files will be written. + +The query of the URI allows one to filter the events early on. + +- ``level-at-least=<level>`` the minimal log-level that the sink + will output. +- ``name-matches=<regexps>`` comma-separated list of POSIX regular + expressions on the name of the events. +- ``name=<names>`` comma-separated list of event names matched + *exactly*.
+- ``section=<sections>`` comma-separated list of event sections matched + *exactly*. +- ``no-section=<bool>`` when ``true`` only catches the events that have an + empty section. + +Example: ``unix-files:///the/path/to/write?level-at-least=info`` (the +path should be nonexistent or already a directory). + +The directory structure is as follows: +``<section-dirname>/<event-name>/...`` +where ``<section-dirname>`` is either ``no-section`` or +``section-<section-name>``. + +Global Defaults +--------------- + +By default, only the ``lwt-log://`` sink is activated and configured to +output events of level at least ``Notice``. + +JSON Configuration Format +------------------------- + +A configuration JSON blob is an object with one field, ``"active_sinks"``, +which contains a list of URIs: + +.. code:: json + + { + "active_sinks": [ <uri>, ... ] + } + +The URIs are discriminated among the sink implementations above using +their schemes and activated. + +It is used in various places: node configuration file, +logging-configuration RPC, etc. + +Environment Variables +--------------------- + +The logging framework can be configured with environment variables +before starting the node. Those variables work on all the code using the +``tezos-stdlib-unix`` library as long as ``Internal_event_unix.init`` is +called; this should include *all* the regular ``tezos-*`` binaries. + +- ``TEZOS_EVENTS_CONFIG`` must be a whitespace-separated list of URIs: + + - URIs that have a scheme are activated. + - URIs without a scheme, i.e. simple paths, are understood as paths + to configuration JSON files (format above) to load (which + themselves activate sinks). + +- ``TEZOS_LOG`` and ``LWT_LOG`` (with lower priority) contain “rules” + to configure the ``lwt-log://`` sink. The rules are expressed with a + DSL documented at + `Lwt_log_core `__: + + - rules are separated by semi-colons ``;``, + - each rule has the form ``pattern -> level``, + - a pattern is a minimalist glob-expression on the ``section.name`` of + the event, e.g.
``rpc*`` for all events whose section.name starts + with ``rpc``, + - rules are ordered, i.e., the first pattern that matches, from left to + right, fires the corresponding rule. + +- ``TEZOS_EVENT_HOSTNAME`` is used by the file-descriptor sink to tweak the JSON + output (see above). + + +.. _configure_node_logging: + +Node-Specific Configuration +--------------------------- + +Configuration File +~~~~~~~~~~~~~~~~~~ + +See ``tezos-node config --help`` for the full schema of the node’s JSON +configuration file. + +In particular the fields: + +- ``"internal-events"`` contains a configuration of the sinks (format + above). +- ``"log"`` is an object which defines the configuration of the + ``lwt-log://`` sink; one can redirect the output to a file, set the + rules, and change the formatting template. + +Command Line Options +~~~~~~~~~~~~~~~~~~~~ + +See ``tezos-node run --help``; the ``lwt-log://`` sink configuration can +also be changed with two options: + +- ``-v`` / ``-vv``: set the global log level to ``Info`` or ``Debug`` + respectively. +- ``--log-output``: set the output file. + +RPC ``/config/logging`` +~~~~~~~~~~~~~~~~~~~~~~~ + +The node exposes an administrative ``PUT`` endpoint: +``/config/logging``. + +The input schema is the JSON configuration of the sinks. The call +deactivates all current sinks and activates the ones provided, **except** +the ``lwt-log://`` sink, which is left untouched. + +Example: assuming the ``lwt-log://`` sink is active (so as not to miss other +events), this call adds a sink that starts pretty-printing all +``rpc`` events to the file ``/tmp/rpclogs``: + +:: + + tezos-client rpc post /config/logging with \ + '{ "active_sinks": [ "file-descriptor-path:///tmp/rpclogs?section-prefix=rpc:debug&format=pp&fresh=true" ] }' + +Client and Baking Daemons +------------------------- + +For now, ``tezos-client``, ``tezos-{baker,endorser,accuser}-*``, etc. +can only be configured using the environment variables.
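Those environment variables rely on the rule DSL presented above under "Environment Variables". The following Python sketch is an illustrative re-implementation of that DSL, not the code used by the ``lwt-log://`` sink (whose exact glob semantics are defined by ``Lwt_log_core``):

```python
import fnmatch

def parse_rules(spec):
    """Parse a TEZOS_LOG-style string, e.g. "rpc* -> debug; * -> notice",
    into an ordered list of (pattern, level) pairs."""
    rules = []
    for chunk in spec.split(";"):
        chunk = chunk.strip()
        if chunk:
            pattern, _, level = chunk.partition("->")
            rules.append((pattern.strip(), level.strip()))
    return rules

def level_for(rules, section_name):
    """Return the level of the first matching rule, scanning left to right."""
    for pattern, level in rules:
        if fnmatch.fnmatchcase(section_name, pattern):
            return level
    return None

rules = parse_rules("rpc* -> debug; * -> notice")
# level_for(rules, "rpc.http")    -> "debug"
# level_for(rules, "node.worker") -> "notice"
```

Note how rule ordering matters: with the rules reversed, ``* -> notice`` would match everything first and the ``rpc*`` rule would never fire.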
+ +There is one common option, ``--log-requests``, which can be used to trace +all the interactions with the node (but it does *not* use the logging +framework). + +Processing Structured Events +---------------------------- + +This is work in progress; see: + +- ``tezos-admin-client show event-logging`` outputs the configuration + currently understood by ``tezos-admin-client`` (hence through the + ``TEZOS_EVENTS_CONFIG`` variable) and lists all the events it knows + about. +- ``tezos-admin-client output schema of <event-name> to <file>`` + gets the JSON-Schema for an event. + +Example: +``tezos-admin-client output schema of block-seen-alpha to block-seen-alpha.json`` + diff --git a/docs/user/node-configuration.rst b/docs/user/node-configuration.rst index 06085ad6bab2112b8dca15fd4ca593c1caf85023..c077113bbdb43f153f25e3d30af9c5af67f497f6 100644 --- a/docs/user/node-configuration.rst +++ b/docs/user/node-configuration.rst @@ -2,7 +2,10 @@ Node Configuration ================== The Tezos node can be configured in flexible ways to control various -aspects of its behavior, such as :ref:`RPC `, :ref:`P2P `, or :ref:`shell ` parameters, the directory for storing data, :ref:`logging levels `, and so on. These aspects can be customized in two different ways: +aspects of its behavior, such as :ref:`RPC +`, :ref:`P2P +`, or :ref:`shell ` parameters, the directory +for storing data, :doc:`logging <./logging>`, and so on. These aspects +can be customized in two different ways: - by supplying **options** on the command line when running the node - by specifying **parameters** in a configuration file for the node @@ -331,47 +334,6 @@ hardware wallet. --peer -.. _configure_logging: - -Logging ------- - -It is possible to set independent log levels for different components in the Tezos node, as well as to specify an output file for logging.
-This can be done in two different ways: - -- by defining log parameters in the configuration file (see :ref:`node-conf-file`) -- by setting the environment variable ``TEZOS_LOG`` before starting the node. - -The list of the Tezos components that can be logged and the syntax for the logging options can be found in the DEBUG section displayed by ``tezos-node run --help``: - -.. literalinclude:: ../api/tezos-node-run.txt - :start-after: DEBUG - :end-before: OPTIONS - -Additionally, specific logs can be activated for the context part, see the :ref:`storage layer ` documentation. - -Event-based logging -~~~~~~~~~~~~~~~~~~~ - -A more advanced interface for logging, based on events generated by the different components, is available using the environment variable ``TEZOS_EVENTS_CONFIG``. Its value must be a list of URIs separated by spaces. -Each URI defines a *sink* for log events consisting in a Unix output stream and possibly some logging options, and has one of the following forms: - -- ``file-descriptor-path://`` -- ``file-descriptor-stdout://`` -- ``file-descriptor-stderr://`` - -The URIs may further append options in the form of a query ``?=&=...``. - -For example, the following definition sends all the events to file ``/the/path/to/write.log``, formatted as one event per line:: - - export TEZOS_EVENTS_CONFIG=file-descriptor-path:///the/path/to/write.log?format=one-per-line - -You may find all the available options in the `file descriptor sink API `__. - -.. note:: - Note that, as the Tezos codebase is in the process of moving from the legacy logging framework to the events-based logging framework, some interferences are observed between the two. - In particular, the configuration option ``level-at-least`` documented in the above API does not currently work as expected, so you should avoid using it. - .. 
_configure_shell: Shell parameters diff --git a/src/bin_validation/validator.ml b/src/bin_validation/validator.ml index 644d3062057ba2d527d016f2393a372e0d0e7859..a346ca69aa21f6c6fbad77a3eb8cc14742c55763 100644 --- a/src/bin_validation/validator.ml +++ b/src/bin_validation/validator.ml @@ -295,6 +295,13 @@ let run input output = fork_test_chain >>= fun () -> loop cache | External_validation.Terminate -> Lwt_io.flush_all () >>= fun () -> Events.(emit termination_request ()) + | External_validation.Reconfigure_event_logging config -> + Internal_event_unix.Configuration.reapply config >>= fun res -> + External_validation.send + output + (Error_monad.result_encoding Data_encoding.empty) + res + >>= fun () -> loop cache in loop None >>= fun () -> return_unit diff --git a/src/lib_event_logging/internal_event.ml b/src/lib_event_logging/internal_event.ml index 6885f35f69db21ef8a59f6cd2ee58cbb8267e6f0..af0bb57dbe96a33c072a93a4d3ec7e7f30e65004 100644 --- a/src/lib_event_logging/internal_event.ml +++ b/src/lib_event_logging/internal_event.ml @@ -102,6 +102,8 @@ module Section : sig val to_lwt_log : t -> Lwt_log_core.section + val is_prefix : prefix:t -> t -> bool + val encoding : t Data_encoding.t val to_string_list : t -> string list @@ -142,6 +144,20 @@ end = struct let to_lwt_log s = s.lwt_log_section + let is_prefix ~prefix main = + try + let _ = + List.fold_left + (fun prev elt -> + match prev with + | t :: q when String.equal t elt -> q + | _ -> raise Not_found) + main.path + prefix.path + in + true + with Not_found -> false + let encoding = let open Data_encoding in conv (fun {path; _} -> path) (fun l -> make l) (list string) @@ -300,16 +316,29 @@ module All_sinks = struct active := act :: !active ; return_unit - let close () = + let close ?(except = fun _ -> false) () = let close_one (type a) sink definition = let module S = (val definition : SINK with type t = a) in S.close sink in - let sinks_to_close = !active in - active := [] ; - List.iter_es - (fun (Active {sink; 
definition; _}) -> close_one sink definition) - sinks_to_close + (* We want to filter the list in one Lwt-go (atomically), and only then + call close on the ones that are being deleted. *) + let (next_active, to_close_list) = + List.partition + (fun act -> + match act with Active {configuration; _} -> except configuration) + !active + in + active := next_active ; + (* We don't want one failure to prevent the attempt at closing as many + sinks as possible, so we record all errors and combine them: *) + List.fold_left_s + (fun current -> function + | Active {sink; definition; _} -> + close_one sink definition >>= fun close_result -> + Lwt.return (Error_monad.join_e [current; close_result])) + (Ok ()) + to_close_list let handle def section v = let handle (type a) sink definition = @@ -1227,31 +1256,20 @@ module Legacy_logging = struct module Event = Make (Definition) let emit_async level fmt ?tags = - (* Prevent massive calls to kasprintf *) - let log_section = Section.to_lwt_log section in - if should_log ~level ~sink_level:(Lwt_log_core.Section.level log_section) - then - Format.kasprintf - (fun message -> - Lwt.ignore_result - (Event.emit ~section (fun () -> - Definition.make ?tags level message))) - fmt - else Format.ifprintf Format.std_formatter fmt + Format.kasprintf + (fun message -> + Lwt.ignore_result + (Event.emit ~section (fun () -> Definition.make ?tags level message))) + fmt let emit_lwt level fmt ?tags = - (* Prevent massive calls to kasprintf *) - let log_section = Section.to_lwt_log section in - if should_log ~level ~sink_level:(Lwt_log_core.Section.level log_section) - then - Format.kasprintf - (fun message -> - Event.emit ~section (fun () -> Definition.make ?tags level message) - >>= function - | Ok () -> Lwt.return_unit - | Error el -> Format.kasprintf Lwt.fail_with "%a" pp_print_trace el) - fmt - else Format.ikfprintf (fun _ -> Lwt.return_unit) Format.std_formatter fmt + Format.kasprintf + (fun message -> + Event.emit ~section (fun () -> 
Definition.make ?tags level message) + >>= function + | Ok () -> Lwt.return_unit + | Error el -> Format.kasprintf Lwt.fail_with "%a" pp_print_trace el) + fmt end module Make (P : sig diff --git a/src/lib_event_logging/internal_event.mli b/src/lib_event_logging/internal_event.mli index ae02660d59310b54bc004c810a4c3ab20931a51e..3045ff63148f90da06eb6f850d82b8a54b577e8e 100644 --- a/src/lib_event_logging/internal_event.mli +++ b/src/lib_event_logging/internal_event.mli @@ -39,15 +39,9 @@ open Error_monad (** {3 Events Definitions and Registration } *) -type level = - | Debug - | Info - | Notice - | Warning - | Error - | Fatal - (** The relative importance of a particular event (compatible with +(** The relative importance of a particular event (compatible with traditional logging systems, cf. {!Lwt_log_core.level}). *) +type level = Debug | Info | Notice | Warning | Error | Fatal (** Module to manipulate values of type {!level}. *) module Level : sig @@ -82,6 +76,9 @@ module Section : sig (** Make the equivalent {!Lwt_log} section. *) val to_lwt_log : t -> Lwt_log_core.section + (** [is_prefix ~prefix p] checks that [p] starts with [~prefix]. *) + val is_prefix : prefix:t -> t -> bool + val encoding : t Data_encoding.t val to_string_list : t -> string list @@ -450,8 +447,9 @@ module All_sinks : sig scheme and calls {!configure}. *) val activate : Uri.t -> unit tzresult Lwt.t - (** Call [close] on all the active sinks. *) - val close : unit -> unit tzresult Lwt.t + (** Call [close] on all the sinks, except the ones matching the + predicate [?except] (default: all of them). *) + val close : ?except:(Uri.t -> bool) -> unit -> unit tzresult Lwt.t (** Display the state of registered/active sinks. 
*) val pp_state : Format.formatter -> unit -> unit diff --git a/src/lib_shell/block_validator_process.ml b/src/lib_shell/block_validator_process.ml index 43f578f4bbec8231a63b5fc588bd9b8c03278a06..2a431220814f2deb470bc0154e88fb6903073c4e 100644 --- a/src/lib_shell/block_validator_process.ml +++ b/src/lib_shell/block_validator_process.ml @@ -61,6 +61,9 @@ module type S = sig (** [init_test_chain] must only be called on a forking block. *) val init_test_chain : t -> Store.Block.t -> Block_header.t tzresult Lwt.t + + val reconfigure_event_logging : + t -> Internal_event_unix.Configuration.t -> unit tzresult Lwt.t end (* We hide the validator (of type [S.t]) and the according module in a GADT. @@ -207,6 +210,8 @@ module Internal_validator_process = struct let forked_header = Store.Block.header forking_block in Store.Block.context validator.chain_store forking_block >>=? fun context -> Block_validation.init_test_chain context forked_header + + let reconfigure_event_logging _ _ = return_unit end (** Block validation using an external process *) @@ -599,6 +604,12 @@ module External_validator_process = struct in send_request validator request Block_header.encoding + let reconfigure_event_logging validator config = + send_request + validator + (External_validation.Reconfigure_event_logging config) + Data_encoding.unit + let close vp = Events.(emit close ()) >>= fun () -> match vp.validator_process with @@ -676,6 +687,10 @@ let init validator_environment validator_kind = let close (E {validator_process = (module VP); validator}) = VP.close validator +let reconfigure_event_logging (E {validator_process = (module VP); validator}) + config = + VP.reconfigure_event_logging validator config + let apply_block (E {validator_process = (module VP); validator}) chain_store ~predecessor header operations = Store.Block.get_block_metadata chain_store predecessor >>=? 
fun metadata -> diff --git a/src/lib_shell/block_validator_process.mli b/src/lib_shell/block_validator_process.mli index e783535062b521e25c87d795fab6a620d69b678d..0b23a6b0c49648fab844d0a123892870cb81ea6e 100644 --- a/src/lib_shell/block_validator_process.mli +++ b/src/lib_shell/block_validator_process.mli @@ -65,6 +65,9 @@ val init : validator_environment -> validator_kind -> t tzresult Lwt.t registered Lwt_exit.clean_up_callback). *) val close : t -> unit Lwt.t +val reconfigure_event_logging : + t -> Internal_event_unix.Configuration.t -> unit tzresult Lwt.t + (** [apply_block bvp predecessor header os] checks the liveness of the operations and then call [Block_validation.apply] *) val apply_block : diff --git a/src/lib_shell/chain_validator.ml b/src/lib_shell/chain_validator.ml index 75bc9fc7a9ef781a8ad7297d0f880cb140d72c51..860a748e28b3181cf8f72071b68fe4f45e1d2134 100644 --- a/src/lib_shell/chain_validator.ml +++ b/src/lib_shell/chain_validator.ml @@ -772,6 +772,11 @@ let bootstrapped w = let is_bootstrapped w = Types.is_bootstrapped (Worker.state w) +let reconfigure_event_logging w config = + Block_validator_process.reconfigure_event_logging + (Worker.state w).parameters.block_validator_process + config + let force_bootstrapped w b = let state = Worker.state w in Synchronisation_heuristic.Bootstrapping.force_bootstrapped diff --git a/src/lib_shell/chain_validator.mli b/src/lib_shell/chain_validator.mli index a7838999f1c522c49bf920d15bdb6c53b3d3986c..558562c502072fdb6eae82d542ca29195c961d76 100644 --- a/src/lib_shell/chain_validator.mli +++ b/src/lib_shell/chain_validator.mli @@ -79,6 +79,9 @@ val chain_db : t -> Distributed_db.chain_db val child : t -> t option +val reconfigure_event_logging : + t -> Internal_event_unix.Configuration.t -> unit tzresult Lwt.t + val validate_block : t -> ?force:bool -> diff --git a/src/lib_shell/config_directory.ml b/src/lib_shell/config_directory.ml index 
557b07b78238239b0e03db6e5049624f03e169bb..d8fe12a19d9b9c4adfde60d7573d7e35397cd951 100644 --- a/src/lib_shell/config_directory.ml +++ b/src/lib_shell/config_directory.ml @@ -24,7 +24,7 @@ (*****************************************************************************) let build_rpc_directory ~user_activated_upgrades - ~user_activated_protocol_overrides store = + ~user_activated_protocol_overrides ~mainchain_validator store = let register endpoint f directory = RPC_directory.register directory endpoint f in @@ -37,3 +37,9 @@ let build_rpc_directory ~user_activated_upgrades |> register Config_services.history_mode (fun () () () -> let chain_store = Store.main_chain_store store in return (Store.Chain.history_mode chain_store)) + |> register Config_services.Logging.configure (fun () () configuration -> + Internal_event_unix.Configuration.reapply configuration >>=? fun () -> + Chain_validator.reconfigure_event_logging + mainchain_validator + configuration + >>=? fun () -> return_unit) diff --git a/src/lib_shell/config_directory.mli b/src/lib_shell/config_directory.mli index 62c9fba057cb6e36620466881e7cc916b779f596..a47688fefbaf5273289d99e6fc62ff8ef44878a0 100644 --- a/src/lib_shell/config_directory.mli +++ b/src/lib_shell/config_directory.mli @@ -26,5 +26,6 @@ val build_rpc_directory : user_activated_upgrades:User_activated.upgrades -> user_activated_protocol_overrides:User_activated.protocol_overrides -> + mainchain_validator:Chain_validator.t -> Store.t -> unit RPC_directory.t diff --git a/src/lib_shell/node.ml b/src/lib_shell/node.ml index 6d84fbdb19f69c39c089b8519d89919a8e893067..15ebebe8af125744571a23517d58f5126fcf1791 100644 --- a/src/lib_shell/node.ml +++ b/src/lib_shell/node.ml @@ -351,6 +351,7 @@ let build_rpc_directory node = (Config_directory.build_rpc_directory ~user_activated_upgrades:node.user_activated_upgrades ~user_activated_protocol_overrides:node.user_activated_protocol_overrides + ~mainchain_validator:node.mainchain_validator node.store) ; merge 
(Version_directory.rpc_directory node.p2p) ; register0 RPC_service.error_service (fun () () -> diff --git a/src/lib_shell_services/config_services.ml b/src/lib_shell_services/config_services.ml index 27fa3780184e46f743590f24f5d4734325b90925..23f80dc655de8576c615a4ae62720c4f258e9986 100644 --- a/src/lib_shell_services/config_services.ml +++ b/src/lib_shell_services/config_services.ml @@ -51,5 +51,15 @@ let history_mode = ~output:history_mode_encoding RPC_path.(path / "history_mode") +module Logging = struct + let configure = + RPC_service.put_service + ~description:"Replace the logging configuration of the node." + ~query:RPC_query.empty + ~input:Tezos_stdlib_unix.Internal_event_unix.Configuration.encoding + ~output:Data_encoding.empty + RPC_path.(root / "config" / "logging") +end + let user_activated_upgrades cctxt = RPC_context.make_call Network.user_activated_upgrades cctxt () () () diff --git a/src/lib_shell_services/config_services.mli b/src/lib_shell_services/config_services.mli index 83c2fd06e189da689be489caf79710e974f9d27c..454d9a7efbc7636a9f72d94c7c66d5527f781f34 100644 --- a/src/lib_shell_services/config_services.mli +++ b/src/lib_shell_services/config_services.mli @@ -40,5 +40,16 @@ end val history_mode : ([`GET], unit, unit, unit, unit, History_mode.t, error trace) RPC_service.raw +module Logging : sig + val configure : + ( [`PUT], + unit, + unit, + unit, + Tezos_stdlib_unix.Internal_event_unix.Configuration.t, + unit ) + RPC_service.service +end + val user_activated_upgrades : #RPC_context.simple -> User_activated.upgrades tzresult Lwt.t diff --git a/src/lib_shell_services/dune b/src/lib_shell_services/dune index 248730cdb91ae99a6b8a33e436571ff8d9af7b77..395357239d449d3bb7dd113616d3cda099ce722e 100644 --- a/src/lib_shell_services/dune +++ b/src/lib_shell_services/dune @@ -5,6 +5,7 @@ (libraries tezos-base tezos-workers + tezos-stdlib-unix ;; for Tezos_stdlib_unix.Internal_event_unix tezos-p2p-services tezos-version) (flags diff --git 
a/src/lib_stdlib_unix/file_descriptor_sink.ml b/src/lib_stdlib_unix/file_descriptor_sink.ml index e5d5c8b99d2ac7f486696b1b8aa5cfe09d719f56..9ac18fd5ee2a9ff25059851e51b9e634c7c4ec7c 100644 --- a/src/lib_stdlib_unix/file_descriptor_sink.ml +++ b/src/lib_stdlib_unix/file_descriptor_sink.ml @@ -27,16 +27,19 @@ open Error_monad type t = { output : Lwt_unix.file_descr; - format : [`One_per_line | `Netstring]; + format : [`One_per_line | `Netstring | `Pp]; (* Hopefully temporary hack to handle event which are emitted with the non-cooperative log functions in `Legacy_logging`: *) - lwt_bad_citizen_hack : Data_encoding.json list ref; - level_at_least : Internal_event.Level.t; + lwt_bad_citizen_hack : string list ref; + filter : + [ `Level_at_least of Internal_event.Level.t + | `Per_section_prefix of + (Internal_event.Section.t * Internal_event.Level.t) list ]; } let hostname = Option.value - (Sys.getenv_opt "TEZOS_NODE_HOSTNAME") + (Sys.getenv_opt "TEZOS_EVENT_HOSTNAME") ~default:(Unix.gethostname ()) type 'event wrapped = { @@ -74,17 +77,66 @@ end) : Internal_event.SINK with type t = t = struct | `Stderr -> "file-descriptor-stderr" let configure uri = - let level_at_least = - let ( >?? ) = Option.bind in - Uri.get_query_param uri "level-at-least" - >?? 
Internal_event.Level.of_string - |> Option.value ~default:Internal_event.Level.default - in let fail_parsing fmt = Format.kasprintf (failwith "Parsing URI: %s: %s" (Uri.to_string uri)) fmt in + let section_prefixes = + let all = + List.filter_map + (function ("section-prefix", l) -> Some l | _ -> None) + (Uri.query uri) + in + match all with [] -> None | more -> Some (List.concat more) + in + (match (Uri.get_query_param uri "level-at-least", section_prefixes) with + | (None, None) -> return (`Level_at_least Internal_event.Level.default) + | (Some l, None) -> ( + match Internal_event.Level.of_string l with + | Some l -> return (`Level_at_least l) + | None -> fail_parsing "Wrong level: %S" l) + | (base_level, Some l) -> ( + try + let sections = + let parse_section s = + match String.split_on_char ':' s with + | [one] -> + ( Internal_event.Section.make_sanitized + (String.split_on_char '.' one), + Internal_event.Level.default ) + | [one; two] -> + let lvl = + match Internal_event.Level.of_string two with + | Some s -> s + | None -> + Fmt.failwith "Wrong level name: %S in argument %S" two s + in + let section = + match one with + | "" -> Internal_event.Section.empty + | _ -> + Internal_event.Section.make_sanitized + (String.split_on_char '.' one) + in + (section, lvl) + | _ -> Fmt.failwith "Wrong section-level entry: %S" s + in + let pairs = List.map parse_section l in + match base_level with + | None -> pairs + | Some lvl -> ( + match Internal_event.Level.of_string lvl with + | Some l -> (Internal_event.Section.empty, l) :: pairs + | None -> + Fmt.failwith + "Wrong level name %S in level-at-least argument" + lvl) + in + return (`Per_section_prefix sections) + with Failure s -> fail_parsing "%s" s)) + >>=? fun filter -> (match Uri.get_query_param uri "format" with | Some "netstring" -> return `Netstring + | Some "pp" -> return `Pp | None | Some "one-per-line" -> return `One_per_line | Some other -> fail_parsing "Unknown format: %S" other) >>=? 
fun format -> @@ -130,37 +182,68 @@ end) : Internal_event.SINK with type t = t = struct | `Stdout -> return Lwt_unix.stdout | `Stderr -> return Lwt_unix.stderr) >>=? fun output -> - let t = {output; lwt_bad_citizen_hack = ref []; level_at_least; format} in + let t = {output; lwt_bad_citizen_hack = ref []; filter; format} in return t let write_mutex = Lwt_mutex.create () - let output_one output format event_json = - let to_write = - match format with - | `One_per_line -> Ezjsonm.value_to_string ~minify:true event_json ^ "\n" - | `Netstring -> - let bytes = Ezjsonm.value_to_string ~minify:true event_json in - Fmt.str "%d:%s," (String.length bytes) bytes - in + let output_one output to_write = protect (fun () -> Lwt_mutex.with_lock write_mutex (fun () -> Lwt_utils_unix.write_string output to_write) >>= fun () -> return_unit) - let handle (type a) {output; lwt_bad_citizen_hack; level_at_least; format; _} - m ?(section = Internal_event.Section.empty) (v : unit -> a) = + let handle (type a) {output; lwt_bad_citizen_hack; filter; format; _} m + ?(section = Internal_event.Section.empty) (v : unit -> a) = let module M = (val m : Internal_event.EVENT_DEFINITION with type t = a) in let now = Unix.gettimeofday () in let forced_event = v () in let level = M.level forced_event in - if Internal_event.Level.compare level level_at_least >= 0 then ( + let filter_run = + match filter with + | `Level_at_least level_at_least -> + Internal_event.Level.compare level level_at_least >= 0 + | `Per_section_prefix kvl -> + List.exists + (fun (prefix, lvl) -> + Internal_event.( + Section.is_prefix ~prefix section + && Level.compare level lvl >= 0)) + kvl + in + if filter_run then ( let wrapped_event = wrap now section forced_event in - let event_json = - Data_encoding.Json.construct (wrapped_encoding M.encoding) wrapped_event + let to_write = + let json () = + Data_encoding.Json.construct + (wrapped_encoding M.encoding) + wrapped_event + in + match format with + | `Pp -> + let s = + String.map 
+ (function '\n' -> ' ' | c -> c) + (Format.asprintf "%a" (M.pp ~short:false) forced_event) + in + (* See https://tools.ietf.org/html/rfc5424#section-6 *) + Format.asprintf + "%a [%s%s] %s\n" + (Ptime.pp_rfc3339 ~frac_s:3 ()) + (match Ptime.of_float_s wrapped_event.time_stamp with + | Some s -> s + | None -> Ptime.min) + (Internal_event.Section.to_string_list wrapped_event.section + |> String.concat ".") + M.name + s + | `One_per_line -> Ezjsonm.value_to_string ~minify:true (json ()) ^ "\n" + | `Netstring -> + let bytes = Ezjsonm.value_to_string ~minify:true (json ()) in + Fmt.str "%d:%s," (String.length bytes) bytes in - lwt_bad_citizen_hack := event_json :: !lwt_bad_citizen_hack ; - output_one output format event_json >>= function + lwt_bad_citizen_hack := to_write :: !lwt_bad_citizen_hack ; + output_one output to_write >>= function | Error [Exn (Unix.Unix_error (Unix.EBADF, _, _))] -> (* The file descriptor was closed before the event arrived, ignore it. *) @@ -168,16 +251,18 @@ end) : Internal_event.SINK with type t = t = struct | Error _ as err -> Lwt.return err | Ok () -> lwt_bad_citizen_hack := - List.filter (( = ) event_json) !lwt_bad_citizen_hack ; + List.filter (( = ) to_write) !lwt_bad_citizen_hack ; return_unit) else return_unit - let close {lwt_bad_citizen_hack; output; format; _} = + let close {lwt_bad_citizen_hack; output; _} = List.iter_es - (fun event_json -> output_one output format event_json) + (fun event_string -> output_one output event_string) !lwt_bad_citizen_hack >>=? 
fun () -> - Lwt_unix.close output >>= fun () -> return_unit + match K.kind with + | `Path -> Lwt_unix.close output >>= fun () -> return_unit + | `Stdout | `Stderr -> return_unit end module Sink_implementation_path = Make_sink (struct diff --git a/src/lib_stdlib_unix/file_descriptor_sink.mli b/src/lib_stdlib_unix/file_descriptor_sink.mli index 381dcf455764779690f263afa940347fef0cafe8..c7aa9c7bd12ce1ab7af71e68c999bec36ba6bac7 100644 --- a/src/lib_stdlib_unix/file_descriptor_sink.mli +++ b/src/lib_stdlib_unix/file_descriptor_sink.mli @@ -24,7 +24,7 @@ (*****************************************************************************) (** An implementation of {!Tezos_base.Internal_event.SINK} which - writes the events as JSON into a single file-descriptor. + writes the events as JSON or pretty printed into a single file-descriptor. It is registered with the URI scheme ["file-descriptor-path"] to output to a file or @@ -35,10 +35,16 @@ - ["level-at-least"] the minimal log-level that the sink will output (see {!Tezos_event_logging.Internal_event.level}). - - ["format"] the output format used to separate JSON records: - acceptable values are ["one-per-line"] (the default), and + - ["section-prefix"] can be given many times and defines a list of pairs + ["<section-prefix>:<level-threshold>"] which can be used to setup more + precise filters. ["level-at-least=info"] is understood as + ["section-prefix=:info"] (the empty section prefix matches all + sections). + - ["format"] the output format used; + acceptable values are ["one-per-line"] (the default), ["netstring"] (see {{:https://en.wikipedia.org/wiki/Netstring}The - Netstring format}). + Netstring format}) (both to separate JSON records), {i or} ["pp"] to + output the events pretty-printed as text using the [syslog] format. Options available only for ["file-descriptor-path://"]: @@ -62,7 +68,7 @@ by a parent monitoring process (non-standard feature available on mainstream UNIX hosts, e.g. Linux and MacOSX).
- - ["export TEZOS_NODE_HOSTNAME=hostname"] + - ["export TEZOS_EVENT_HOSTNAME=hostname"] The [hostname] will be used in the JSON representation of the event. By default, it is the hostname given by [Unix.gethostname ()]. diff --git a/src/lib_stdlib_unix/file_event_sink.mli b/src/lib_stdlib_unix/file_event_sink.mli index cdf7a3965355326fb3ed24afced90902d816694c..659548f4b34e5deb6b607d3a187bff802501677b 100644 --- a/src/lib_stdlib_unix/file_event_sink.mli +++ b/src/lib_stdlib_unix/file_event_sink.mli @@ -33,7 +33,7 @@ (the path should be inexistent or already a directory). The directory structure is as follows: - ["////"] + ["////"] where [""] is either ["no-section"] or ["section-"]. *) diff --git a/src/lib_stdlib_unix/internal_event_unix.ml b/src/lib_stdlib_unix/internal_event_unix.ml index 591a19058be1c41a44d4b89da71ffa1a235a1d99..3e4f37db36a43b7bd038e08a53d4e8977f6e8163 100644 --- a/src/lib_stdlib_unix/internal_event_unix.ml +++ b/src/lib_stdlib_unix/internal_event_unix.ml @@ -26,28 +26,56 @@ open Error_monad module Configuration = struct - type t = {activate : Uri.t list} + type t = {active_sinks : Uri.t list} let default = - {activate = [Uri.make ~scheme:Internal_event.Lwt_log_sink.uri_scheme ()]} + { + active_sinks = [Uri.make ~scheme:Internal_event.Lwt_log_sink.uri_scheme ()]; + } let encoding = let open Data_encoding in - conv - (fun {activate} -> List.map Uri.to_string activate) - (fun activate -> {activate = List.map Uri.of_string activate}) - (obj1 - (dft - "activate" - ~description:"List of URIs to activate/configure sinks." - (list string) - [])) + let object_1 key_name = + obj1 + (dft + key_name + ~description:"List of URIs to activate/configure sinks." + (list string) + []) + in + union + [ + case + ~title:"Active-Sinks" + ~description:"List of sinks to make sure are activated." 
+ (Tag 0) + (object_1 "active_sinks") + (fun {active_sinks} -> Some (List.map Uri.to_string active_sinks)) + (fun active_sinks -> + {active_sinks = List.map Uri.of_string active_sinks}); + case + ~title:"Active-Sinks-Deprecated" + ~description: + "List of sinks to make sure are activated, deprecated \ + backwards-compatibility encoding." + (Tag 1) + (object_1 "activate") + (fun {active_sinks = _} -> None) + (* (fun {active_sinks} -> Some (List.map Uri.to_string active_sinks)) *) + (fun active_sinks -> + {active_sinks = List.map Uri.of_string active_sinks}); + ] let of_file path = Lwt_utils_unix.Json.read_file path >>=? fun json -> protect (fun () -> return (Data_encoding.Json.destruct encoding json)) - let apply {activate} = List.iter_es Internal_event.All_sinks.activate activate + let apply {active_sinks} = + List.iter_es Internal_event.All_sinks.activate active_sinks + + let reapply config = + let except u = Uri.scheme u = Some "lwt-log" in + Internal_event.All_sinks.close ~except () >>=? fun () -> apply config end let env_var_name = "TEZOS_EVENTS_CONFIG" diff --git a/src/lib_stdlib_unix/internal_event_unix.mli b/src/lib_stdlib_unix/internal_event_unix.mli index 767ff509f3ba993c67436f4ae205617a1c1933d8..a248e286b1c3aa1dc1f5fa251e77c48d02457276 100644 --- a/src/lib_stdlib_unix/internal_event_unix.mli +++ b/src/lib_stdlib_unix/internal_event_unix.mli @@ -46,6 +46,9 @@ module Configuration : sig (** Run {!Tezos_base.Internal_event.All_sinks.activate} for every URI in the configuration. *) val apply : t -> unit tzresult Lwt.t + + (** Close all the sinks except ["lwt-log://"] and call {!apply}. 
*) + val reapply : t -> unit tzresult Lwt.t end (** Initialize the internal-event sinks by looking at the diff --git a/src/lib_store/legacy_store/legacy_store_builder.ml b/src/lib_store/legacy_store/legacy_store_builder.ml index 7e722632be6bf8bed6d04757a5bfc15f35101669..56b452fb4767399ec4b2407cd1fca2c6a1d4439b 100644 --- a/src/lib_store/legacy_store/legacy_store_builder.ml +++ b/src/lib_store/legacy_store/legacy_store_builder.ml @@ -224,6 +224,8 @@ let run () = (* commit_genesis is done by [Legacy_state.init] *) | External_validation.Terminate -> ok "exiting" >>=? fun () -> exit 0 + | External_validation.Reconfigure_event_logging _ -> + (* noop *) ok "noop" | External_validation.Validate { chain_id; diff --git a/src/lib_validation/external_validation.ml b/src/lib_validation/external_validation.ml index 20c1271d3ba073e6fccfb49bc8f4007698660a7a..c7e64786a6021711f5546adc19fb57cb2ffb7543 100644 --- a/src/lib_validation/external_validation.ml +++ b/src/lib_validation/external_validation.ml @@ -51,6 +51,7 @@ type request = forked_header : Block_header.t; } | Terminate + | Reconfigure_event_logging of Internal_event_unix.Configuration.t let request_pp ppf = function | Init -> Format.fprintf ppf "process handshake" @@ -75,6 +76,8 @@ let request_pp ppf = function Block_hash.pp_short (Block_header.hash forked_header) | Terminate -> Format.fprintf ppf "terminate validation process" + | Reconfigure_event_logging _ -> + Format.fprintf ppf "reconfigure event logging" let magic = Bytes.of_string "TEZOS_FORK_VALIDATOR_MAGIC_0" @@ -185,7 +188,9 @@ let request_encoding = (obj1 (req "chain_id" Chain_id.encoding)) (function | Commit_genesis {chain_id} -> Some chain_id - | Init | Validate _ | Fork_test_chain _ | Terminate -> None) + | Init | Validate _ | Fork_test_chain _ | Terminate + | Reconfigure_event_logging _ -> + None) (fun chain_id -> Commit_genesis {chain_id}); case (Tag 3) @@ -205,6 +210,13 @@ let request_encoding = unit (function Terminate -> Some () | _ -> None) (fun () -> 
Terminate); + (* Tag 5 was ["restore_integrity"]. *) + case + (Tag 6) + ~title:"reconfigure_event_logging" + Internal_event_unix.Configuration.encoding + (function Reconfigure_event_logging c -> Some c | _ -> None) + (fun c -> Reconfigure_event_logging c); ] let send pin encoding data = diff --git a/src/lib_validation/external_validation.mli b/src/lib_validation/external_validation.mli index 4d667ce2999036f9130b6717171de12b8a6684b5..82c11227924bb23fd18a11f89d33a5272711ae69 100644 --- a/src/lib_validation/external_validation.mli +++ b/src/lib_validation/external_validation.mli @@ -51,6 +51,7 @@ type request = forked_header : Block_header.t; } | Terminate + | Reconfigure_event_logging of Internal_event_unix.Configuration.t val request_pp : Format.formatter -> request -> unit diff --git a/tezt/tests/main.ml b/tezt/tests/main.ml index 2341be644a10e006c01f153839905fc316b2dd7b..d7c38c7efe20f68cf03f88fc5fb875b90662862f 100644 --- a/tezt/tests/main.ml +++ b/tezt/tests/main.ml @@ -63,6 +63,7 @@ let () = Protocol_limits.register ~protocols:[Alpha] ; Protocol_migration.register ~migrate_from ~migrate_to ; User_activated_upgrade.register ~migrate_from ~migrate_to ; + Rpc_config_logging.register ~protocols:[Alpha] ; Protocol_table_update.register ~migrate_from ~migrate_to ; (* TODO: https://gitlab.com/tezos/tezos/-/issues/1823 the "Baking" test does not have a documentation. 
diff --git a/tezt/tests/rpc_config_logging.ml b/tezt/tests/rpc_config_logging.ml new file mode 100644 index 0000000000000000000000000000000000000000..9d382f6005e251e7af092226758434999de880b0 --- /dev/null +++ b/tezt/tests/rpc_config_logging.ml @@ -0,0 +1,169 @@ +(*****************************************************************************) +(* *) +(* Open Source License *) +(* Copyright (c) 2021 TQ Tezos *) +(* *) +(* Permission is hereby granted, free of charge, to any person obtaining a *) +(* copy of this software and associated documentation files (the "Software"),*) +(* to deal in the Software without restriction, including without limitation *) +(* the rights to use, copy, modify, merge, publish, distribute, sublicense, *) +(* and/or sell copies of the Software, and to permit persons to whom the *) +(* Software is furnished to do so, subject to the following conditions: *) +(* *) +(* The above copyright notice and this permission notice shall be included *) +(* in all copies or substantial portions of the Software. *) +(* *) +(* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR*) +(* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, *) +(* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL *) +(* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER*) +(* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING *) +(* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER *) +(* DEALINGS IN THE SOFTWARE. 
*) +(* *) +(*****************************************************************************) + +open Base + +let change_logging_configuration = + Protocol.register_test + ~__FILE__ + ~title:"logging configuration RPCs" + ~tags:["rpc"; "node"; "logging"] + @@ fun protocol -> + let* main_node = Node.init ~name:"main_node" [] in + let* client = Client.init ~endpoint:(Node main_node) () in + let* () = + Client.activate_protocol ~protocol ~timestamp_delay:1_000_000. client + in + let* () = repeat 2 (fun () -> Client.bake_for client) in + let should_fail msg f = + let* actually_failed = + Lwt.catch + (fun () -> + let* _ = f () in + return false) + (fun _ -> return true) + in + if actually_failed then return () else Test.fail "Expecting failure: %s" msg + in + let call_config data = + let* _ = Client.rpc ~data Client.PUT ["config"; "logging"] client in + Lwt.return_unit + in + let* () = + should_fail "wrong-json" (fun () -> + call_config Ezjsonm.(dict [("wrong", `Null)])) + in + (* Empty just works: *) + let* () = call_config Ezjsonm.(dict []) in + let call_config_activate ?(use_deprecated_key = false) l = + let key_name = if use_deprecated_key then "activate" else "active_sinks" in + call_config Ezjsonm.(dict [(key_name, strings l)]) + in + let tmp0 = Temp.file "tezt-rpc-logging0.log" in + let tmp1 = Temp.file "tezt-rpc-logging1.log" in + let* () = + call_config_activate + ~use_deprecated_key:true + [sf "file-descriptor-path://%s?level-at-least=debug" tmp0] + in + (* Let's make some noise: *) + let* () = Client.bake_for client in + let* tmp0_content = read_file tmp0 in + if String.length tmp0_content < 100 then + Test.fail "File %s should have more data" tmp0 ; + (* We reopen the same one plus another one with the same configuration: *) + let* () = + call_config_activate + [ + sf "file-descriptor-path://%s?level-at-least=debug" tmp1; + sf "file-descriptor-path://%s?level-at-least=debug" tmp0; + ] + in + (* More noise *) + let* () = Client.bake_for client in + let* 
tmp1_content_1 = read_file tmp1 in + let* tmp0_content_2 = read_file tmp0 in + let lines s = String.split_on_char '\n' s in + if lines tmp0_content_2 > lines tmp1_content_1 then + Test.fail "%s is not smaller than %s" tmp1 tmp0 ; + let* () = + (* Now just the second one *) + call_config_activate + [sf "file-descriptor-path://%s?level-at-least=debug" tmp1] + in + let* tmp0_content_3 = read_file tmp0 in + let* () = + (* More noise *) + Client.bake_for client + in + let* tmp0_content_4 = read_file tmp0 in + if tmp0_content_3 <> tmp0_content_4 then + Test.fail "Sink %s has not stopped growing." tmp0 ; + (* We now configure the sink to output only the events in the `rpc` + and `validator` top-level sections, and then we check that only those events + can be found in the resulting file: *) + let* () = + call_config_activate + [ + sf + "file-descriptor-path://%s?section-prefix=rpc:debug&section-prefix=validator:debug&fresh=true" + tmp1; + ] + in + (* More noise *) + let* () = Client.bake_for client in + let* tmp1_content = read_file tmp1 in + (* Let's check they are all from the RPC or validator sections: *) + let () = + let check_line ith line = + let origin = sf "%s:%d" tmp1 ith in + JSON.( + let json = parse ~origin line in + match + List.map + as_string + (json |-> "fd-sink-item.v0" |-> "section" |> as_list) + with + | "rpc" :: _ | "validator" :: _ -> () + | _ -> Test.fail "Event not in 'rpc' section: %S" line) + in + List.iteri + check_line + (String.split_on_char '\n' tmp1_content |> List.filter (( <> ) "")) + in + let* () = + (* We check here that the argument `with-pid` forces the creation of + 2 files, one for the main process, and one for the external + validator: *) + let tmpdir = Temp.dir "tezt-rpc-logging-pid" in + let interesting_prefix = "log-file-prefix" in + let* () = + call_config_activate + [ + sf + "file-descriptor-path://%s/%s.log?section-prefix=:debug&with-pid=true&fresh=true" + tmpdir + interesting_prefix; + ] + in + (* More noise *) + let* () = 
Client.bake_for client in + let* files = Lwt_unix.files_of_directory tmpdir |> Lwt_stream.to_list in + let should_be_two = + List.fold_left + (fun count f -> + match String.sub f 0 (String.length interesting_prefix) with + | some when some = interesting_prefix -> count + 1 + | _ | (exception _) -> count) + 0 + files + in + if should_be_two = 2 then () + else Test.fail "with-pid created %d files, not 2" should_be_two ; + Lwt.return_unit + in + Lwt.return_unit + +let register ~protocols = change_logging_configuration ~protocols
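The core behaviour introduced by this diff is the per-section filter in `File_descriptor_sink`: each `section-prefix` query parameter contributes a `(section-prefix, level-threshold)` pair, and an event is emitted only if some configured prefix matches its section at a sufficient level. As a rough standalone sketch (plain OCaml, no Tezos libraries; the level ranks, the `"info"` default, and the exact `Section.is_prefix` semantics are simplifying assumptions, not the real API):

```ocaml
(* Standalone sketch of the new [section-prefix] filtering, without the
   Tezos libraries.  Levels are modelled as ranks; the default level
   ("info") and the [is_prefix] semantics are assumptions for illustration. *)

let level_rank = function
  | "debug" -> 0 | "info" -> 1 | "notice" -> 2
  | "warning" -> 3 | "error" -> 4 | "fatal" -> 5
  | s -> invalid_arg ("unknown level: " ^ s)

(* Parse one ["<section-prefix>:<level>"] entry: a missing level defaults
   to "info", and an empty section prefix matches every section. *)
let parse_entry s =
  match String.split_on_char ':' s with
  | [section] -> (String.split_on_char '.' section, "info")
  | [""; level] -> ([], level)
  | [section; level] -> (String.split_on_char '.' section, level)
  | _ -> invalid_arg ("wrong section-level entry: " ^ s)

(* Component-wise prefix check on section paths. *)
let rec is_prefix prefix section =
  match (prefix, section) with
  | ([], _) -> true
  | (p :: ps, s :: ss) when p = s -> is_prefix ps ss
  | _ -> false

(* An event passes if some configured prefix matches its section and its
   level reaches that prefix's threshold (cf. [filter_run] in [handle]). *)
let passes filters ~section ~level =
  List.exists
    (fun (prefix, threshold) ->
      is_prefix prefix section && level_rank level >= level_rank threshold)
    filters
```

For example, with `section-prefix=rpc:debug&section-prefix=:error`, a `debug` event in section `rpc.worker` would pass, while an event in `p2p` would pass only at `error` and above; this mirrors how the tezt test above checks that only `rpc` and `validator` events reach the sink file.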