# kube

Rust client for Kubernetes with reinterpretations of the Reflector and Informer abstractions from the Go client.

This client thus aims to cater to the more common controller/operator case, but lets you plug in dependencies like k8s-openapi for accurate struct representations.
## Usage
See the examples directory for how to watch over resources in a simplistic way.
See controller-rs for a full example with actix.
## Reflector
The biggest abstraction exposed in this client is `Reflector<T, U>`. This is effectively a cache of a resource that's meant to "reflect the state in etcd". It handles the API mechanics for watching kube resources, tracking resourceVersions, and maintaining an internal cache map.
To use it, you just feed in `T` as a `Spec` struct and `U` as a `Status` struct, which can be as complete or incomplete as you like. Here, using the complete structs via k8s-openapi:
```rust
use k8s_openapi::api::core::v1::{PodSpec, PodStatus};

// assumes a configured API client in scope
let resource = ResourceType::Pods(Some("kube-system".into()));
let rf: Reflector<PodSpec, PodStatus> = Reflector::new(client, resource.into())?;
```
then you can `poll()` the reflector, and `read()` to get the current cached state:
```rust
rf.poll()?; // blocks and updates state

// read state and use it (the cache is keyed by resource name):
rf.read()?.into_iter().for_each(|(name, p)| {
    println!("Found pod {} with containers: {:?}", name, p.spec.containers);
});
```
The reflector itself is responsible for acquiring the write lock and updating the state, as long as you call `poll()` periodically.
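For instance, a minimal keep-alive loop (a sketch; a real controller would likely run this in its own thread so `read()` stays responsive elsewhere):

```rust
// keep the cache current: each poll() call blocks on the watch
// and folds incoming events into the internal state
loop {
    rf.poll()?;
}
```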
## Informers
The simplest abstraction exposed from this client. This is a struct with the internal behaviour for watching kube resources, but it keeps no internal state except the `resourceVersion`.
You tell it what the type parameters correspond to; `T` should be a `Spec` struct, and `U` should be a `Status` struct. Again, these can be as complete or incomplete as you like. Here, using the complete structs via k8s-openapi:
```rust
use k8s_openapi::api::core::v1::{PodSpec, PodStatus};

let resource = ResourceType::Pods(Some("kube-system".into()));
let inf: Informer<PodSpec, PodStatus> = Informer::new(client, resource.into())?;
```
The main difference from `Reflector<T, U>` is that the only exposed function is `.poll()`, and it returns `WatchEvent`s that you are meant to handle yourself:
```rust
let events = inf.poll()?;
reconcile(events)?; // pass them on somewhere
```
How you handle them is up to you: you could build your own `Reflector`, or you can do more controllery logic. Here's how such a function could look:
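A minimal sketch (the `Vec<WatchEvent<..>>` argument type, the `metadata.name` field, and the `failure::Error` error type are assumptions here; the event variants mirror the Kubernetes watch API):

```rust
fn reconcile(events: Vec<WatchEvent<PodSpec, PodStatus>>) -> Result<(), failure::Error> {
    for ev in events {
        // WatchEvent is an enum over what the watch reported for the resource
        match ev {
            WatchEvent::Added(o) => println!("Added Pod: {}", o.metadata.name),
            WatchEvent::Modified(o) => println!("Modified Pod: {}", o.metadata.name),
            WatchEvent::Deleted(o) => println!("Deleted Pod: {}", o.metadata.name),
            WatchEvent::Error(e) => eprintln!("Error event: {:?}", e),
        }
    }
    Ok(())
}
```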
## Examples
Examples that show a few common flows. These all have logging of this library set up to `trace`:
```sh
# watch pod events in kube-system
cargo run --example pod_informer
```
or for the reflectors:
```sh
cargo run --example pod_reflector
cargo run --example node_reflector
cargo run --example deployment_reflector
```
for one based on a CRD, you need to create the CRD first:
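A sketch of that flow (the manifest path and example name are assumptions; the CRD here would define the `foos` resource used below):

```sh
# register the CRD, then run the reflector example against it
# (file and example names are illustrative)
kubectl apply -f examples/crd.yaml
cargo run --example crd_reflector
```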
then you can `kubectl apply -f crd-baz.yaml -n kube-system`, or `kubectl delete -f crd-baz.yaml -n kube-system`, or `kubectl edit foos baz -n kube-system` to verify that the events are being picked up.
## License
Apache 2.0 licensed. See LICENSE for details.