<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Enix.io – DevOps Kubernetes &amp; Cloud Native</title>
    <link>https://enix.io/en/</link>
    <description>Enix: Cloud, DevOps and Kubernetes experts. Everywhere: on our private cloud, on-premises, or on our cloud service provider partners' platforms.</description>
    <generator>Hugo -- gohugo.io</generator>
    <lastBuildDate>Tue, 31 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://enix.io/en/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>Helm 4: the key improvements!</title>
      <link>https://enix.io/en/blog/helm-4/</link>
      <pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/helm-4/</guid>
      <description>Helm 4 was released as planned in late 2025 during KubeCon + CloudNativeCon North America. A year ago, we previewed its main new features in this article — here’s what the final release confirms and clarifies.
We took a deep dive into the main new features of Helm 4 — from the shift to Server-Side Apply to the advanced management of resource status. The goal: to better understand what these changes mean for your Kubernetes deployments, and how to prepare for the migration to Helm 4.</description>
    </item>
    
    
    
    <item>
      <title>We tested Proxmox Datacenter Manager!</title>
      <link>https://enix.io/en/blog/proxmox-datacenter-manager/</link>
      <pubDate>Thu, 19 Mar 2026 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/proxmox-datacenter-manager/</guid>
      <description>Proxmox has been developing two flagship open-source virtualization solutions for over 15 years: Proxmox Virtual Environment (Proxmox VE) and Proxmox Backup Server (PBS).
Proxmox VE enables the creation of virtualization clusters on physical servers to host virtual resources, while PBS backs up cluster resources to guard against a virtual environment failure.
On December 4, 2025, Proxmox officially released its Proxmox Datacenter Manager, also known by its short names “Proxmox DM” and “PDM”.</description>
    </item>
    
    
    
    <item>
      <title>Our open-source CLI to efficiently manage your Proxmox VE clusters!</title>
      <link>https://enix.io/en/blog/cli-tool-proxmox-management/</link>
      <pubDate>Mon, 15 Sep 2025 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/cli-tool-proxmox-management/</guid>
      <description>At Enix, we manage many Proxmox platforms for our clients. Our teams therefore regularly face a variety of needs in Proxmox VE cluster management: auditing and managing VMs (virtual servers), updates, migrations, inventory, etc.
Although the Proxmox VE GUI (web interface) is already very complete, some repeated operations can become tedious, especially in a multi-cluster context. Other operations are not implemented in the Proxmox GUI at all, likely because they would be too complex to develop.</description>
    </item>
    
    
    
    <item>
      <title>Breathing new life into old tech: PXE for Talos, the Cloud Native way.</title>
      <link>https://enix.io/en/blog/pxe-talos/</link>
      <pubDate>Fri, 22 Aug 2025 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/pxe-talos/</guid>
      <description>Simplify the installation of Talos OS for Kubernetes on bare-metal servers with a containerized, GitOps-ready PXE server!
We recently helped a client migrate their Kubernetes clusters to Talos, an immutable OS purpose-built for Kubernetes (we particularly appreciate it for highly secure managed clusters, whether on-premises or in a private cloud).
With an infrastructure made up of around fifty large bare-metal servers, we faced a key challenge: how to efficiently install Talos OS on so many servers, both for the initial installation and for the frequent updates (which come with the immutable OS approach).</description>
    </item>
    
    
    
    <item>
      <title>GitOps: The Principles and Why You Should Adopt It</title>
      <link>https://enix.io/en/blog/gitops/</link>
      <pubDate>Tue, 25 Mar 2025 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/gitops/</guid>
      <description>This article is intended for those who are just discovering or haven&amp;rsquo;t yet formed a strong opinion about GitOps.
It can be particularly difficult to get a clear picture when navigating between the DevOps movement, Infrastructure-as-Code (IaC) tools, and the GitOps method.
The goal of this article is to shed light on all of these concepts. By the end of this read, you should have formed your own opinion — and maybe even a preference — about the GitOps approach that suits you best.</description>
    </item>
    
    
    
    <item>
      <title>Survey - Impacts of Broadcom&#39;s Acquisition of VMware on French CIOs</title>
      <link>https://enix.io/en/blog/enquete-vmware/</link>
      <pubDate>Tue, 17 Dec 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/enquete-vmware/</guid>
      <description>Between July and October 2024, we conducted a comprehensive survey of over 100 French IT decision-makers to understand the impact of Broadcom&amp;rsquo;s acquisition of VMware on their organizations.
In this freely downloadable white paper, we present the key findings, enriched with our field insights and analyses.
  Download our survey (French)</description>
    </item>
    
    
    
    <item>
      <title>We tested the Veeam Backup Solution with Proxmox VE!</title>
      <link>https://enix.io/en/blog/veeam-proxmox/</link>
      <pubDate>Tue, 03 Dec 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/veeam-proxmox/</guid>
      <description>Since VMware was acquired by Broadcom, IT departments have been facing significant price increases, prompting them to evolve their IT infrastructure (information systems and business-specific IT platforms) toward more sustainable solutions.
Among hypervisors, the open-source solution Proxmox VE has become one of the most popular alternatives to VMware. In response to its growing adoption, developers of virtualization-related software solutions are beginning to integrate Proxmox VE support into their offerings. This is notably the case with Veeam, a major player in backup solutions historically used with VMware.</description>
    </item>
    
    
    
    <item>
      <title>A Guide to Migrating from VMware to Proxmox for CIOs and CTOs</title>
      <link>https://enix.io/en/blog/migration-vmware-proxmox/</link>
      <pubDate>Tue, 01 Oct 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/migration-vmware-proxmox/</guid>
      <description>Originally published on Le Monde Informatique.
In light of VMware’s pricing policy changes, many companies have already started migrating to alternative solutions like Proxmox VE (Proxmox Virtual Environment). We’ve supported French companies across various industries, such as Oodrive, XBTO, Weka, and IMIO.be, in making this transition. A migration of this nature presents concrete challenges, both technical and organizational, which we aim to share in this experience report.</description>
    </item>
    
    
    
    <item>
      <title>Alternatives to VMware: The Benefits of Open Source</title>
      <link>https://enix.io/en/blog/vmware-alternatives/</link>
      <pubDate>Mon, 30 Sep 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/vmware-alternatives/</guid>
      <description>Originally published on Le Monde Informatique.
As the effects of VMware&amp;rsquo;s acquisition by Broadcom materialize, VMware’s dominance could be challenged by the rise of alternative solutions.
Some companies are choosing to replace VMware with other proprietary solutions, such as Nutanix, Citrix, or Microsoft, with equivalent functionalities. Others, frustrated by vendor lock-in and often high prices, are looking to avoid the same model of technological and financial dependency, turning instead to open-source technologies like Proxmox VE or XCP-NG.</description>
    </item>
    
    
    
    <item>
      <title>VMware Buyout: What Strategy Should You Adopt?</title>
      <link>https://enix.io/en/blog/migration-vmware-strategy/</link>
      <pubDate>Sun, 29 Sep 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/migration-vmware-strategy/</guid>
      <description>Originally published on Le Monde Informatique.
In November 2023, Broadcom, the American semiconductor giant, officially acquired VMware, the virtualization specialist, for a sum of $69 billion. This massive transaction has already had significant repercussions for thousands of VMware clients worldwide. However, some CIOs and infrastructure leaders remain optimistic, viewing this as an opportunity to rethink their IT strategy, aiming for more sovereignty and long-term cost savings.</description>
    </item>
    
    
    
    <item>
      <title>Kubernetes Ingress Controller: migration handbook</title>
      <link>https://enix.io/en/blog/k8s-ingress-migration/</link>
      <pubDate>Tue, 09 Jan 2024 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/k8s-ingress-migration/</guid>
      <description>In this article, we’ll present various techniques that can be used to migrate from one existing Kubernetes Ingress Controller to another one.
First things first: what is an ingress controller? Ingress is a popular Kubernetes API resource used to route incoming HTTP (and HTTPS) requests to their corresponding pods. Developers and administrators can create ingress resources, which define routing rules. For instance, “requests to www.example.com/api are load balanced across pods associated with service XYZ”.</description>
    </item>
    
    
    
    <item>
      <title>Overcoming the Deployment Challenges of H100 GPUs in Azure Kubernetes</title>
      <link>https://enix.io/en/blog/azure-kubernetes-gpu-h100/</link>
      <pubDate>Thu, 21 Dec 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/azure-kubernetes-gpu-h100/</guid>
      <description>In the ever-evolving world of cloud computing, NVIDIA H100 GPUs have made a name for themselves with their stellar performance, driving significant strides in AI and intensive computing. They were recently made available by most cloud providers, including Azure, and we got the chance to integrate them into an Azure Kubernetes Service (AKS) cluster for a heavyweight in Generative AI (LLM).
Our original deployment plan, which worked with the A100 generation, never led to a functional cluster.</description>
    </item>
    
    
    
    <item>
      <title>Deploying Kubernetes on OVHcloud Dedicated Infrastructure</title>
      <link>https://enix.io/en/blog/kubernetes-ovhcloud/</link>
      <pubDate>Mon, 23 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubernetes-ovhcloud/</guid>
      <description>OVHcloud&amp;rsquo;s Managed Kubernetes Service (public cloud) addresses many of the challenges highlighted in this article. However, there are relevant cases for deploying Kubernetes on dedicated infrastructures (bare metal, VPC, private cloud).
In this article, we delve into this scenario, sharing our tips for deploying a Kubernetes cluster on OVHcloud dedicated infrastructure. We&amp;rsquo;ll set it up either manually or with our automation tools.
When would this approach be useful? For example, when addressing these specific needs expressed by some of our shared clients:</description>
    </item>
    
    
    
    <item>
      <title>Advanced Kubectl Commands for Managing Your Kubernetes Cluster</title>
      <link>https://enix.io/en/blog/kubectl-commands-2/</link>
      <pubDate>Wed, 11 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubectl-commands-2/</guid>
      <description>Navigating the world of Kubernetes can be daunting, but with the right set of commands, you can master it!
In our previous article on Kubectl Commands: Efficient Kubernetes Cluster Administration, I introduced you to the tool and some of the main kubectl commands.
Today, we delve deeper into the DevOps jungle to discover these fascinating creatures known as advanced kubectl commands. Let&amp;rsquo;s go! 🦍
Advanced Kubectl Commands for Everyday Operations Here, we&amp;rsquo;ll explore slightly more complex kubectl commands that can be very useful in daily operations.</description>
    </item>
    
    
    
    <item>
      <title>Kubectl Commands: Efficient Kubernetes Cluster Administration</title>
      <link>https://enix.io/en/blog/kubectl-commands/</link>
      <pubDate>Wed, 11 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubectl-commands/</guid>
      <description>In this article, we&amp;rsquo;ll explore how to manage Kubernetes clusters with the official command-line tool, kubectl.
We&amp;rsquo;ll delve into the primary commands and also touch upon some advanced kubectl commands that might be unfamiliar!
To start, kubectl is a blend of the words Kubernetes and control. It&amp;rsquo;s a tool that allows communication with the Kubernetes API to create, modify, read, or even delete resources from our Kubernetes cluster.
For those needing a refresher, I suggest revisiting our introductory article on Kubernetes.</description>
    </item>
    
    
    
    <item>
      <title>How to Create a Prometheus Exporter?</title>
      <link>https://enix.io/en/blog/create-prometheus-exporter/</link>
      <pubDate>Tue, 10 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/create-prometheus-exporter/</guid>
      <description>If you&amp;rsquo;re reading this article, chances are you&amp;rsquo;re well-versed in Kubernetes and Prometheus&amp;hellip; Fortunately, that&amp;rsquo;s the very topic we&amp;rsquo;re diving into!
From defining to scraping and all the way to deployment, I&amp;rsquo;ll cover nearly every aspect of the exporter we&amp;rsquo;re about to craft together. Don&amp;rsquo;t worry, it&amp;rsquo;ll go smoothly. 😉
Before diving in, it&amp;rsquo;s worth noting that I&amp;rsquo;m working on a sandbox cluster with Prometheus already set up using prometheus-operator.</description>
    </item>
    
    
    
    <item>
      <title>K9s: a Kubernetes Cluster Management Tool</title>
      <link>https://enix.io/en/blog/k9s/</link>
      <pubDate>Mon, 09 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/k9s/</guid>
      <description>(Article updated on 02/10/2025).
If you&amp;rsquo;re regularly managing Kubernetes clusters, you&amp;rsquo;ve probably noticed the repetitive nature of typing out kubectl commands.
Listing pods or deployments, switching namespaces, viewing container logs, editing or deleting resources&amp;hellip; these actions are relatively simple but can quickly become tedious (see my articles on Kubectl Commands and Advanced Kubectl Commands).
K9s is a robust tool designed to simplify these routine tasks on your clusters. In this article, we&amp;rsquo;ll walk you through K9s using a Prometheus stack example.</description>
    </item>
    
    
    
    <item>
      <title>Kubebuilder: Easily create a Kubernetes operator</title>
      <link>https://enix.io/en/blog/kubebuilder/</link>
      <pubDate>Mon, 09 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubebuilder/</guid>
      <description>In this article, I will show you how to use Kubebuilder to easily create a Kubernetes operator.
After a quick introduction to Kubebuilder, we will create a K8s operator step by step (using the example of the kube-image-keeper operator, an open-source tool for caching images within a Kubernetes cluster). Finally, we will discuss the benefits and limitations of Kubebuilder.
What is Kubebuilder? Kubebuilder, a framework for K8s operator creation Kubebuilder is a framework designed to streamline the creation of Kubernetes operators.</description>
    </item>
    
    
    
    <item>
      <title>Thanos: Aggregating Multiple Prometheus Instances</title>
      <link>https://enix.io/en/blog/thanos-prometheus/</link>
      <pubDate>Fri, 06 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/thanos-prometheus/</guid>
      <description>This is the concluding article in our three-part series on Thanos monitoring.
 Part 1: &amp;ldquo;Thanos: Long-Term Storage of Prometheus Metrics&amp;rdquo; Part 2: &amp;ldquo;Deploying Thanos and Prometheus on a K8s Cluster&amp;rdquo; Part 3: &amp;ldquo;Thanos: Aggregating Multiple Prometheus Instances&amp;rdquo;   In the previous article, we ended by configuring the Thanos Query as a datasource in Grafana to query multiple Prometheus instances from a single datasource.
Now, let&amp;rsquo;s explore how Thanos aggregates multiple Prometheus instances.</description>
    </item>
    
    
    
    <item>
      <title>Deploying Thanos and Prometheus on a K8s Cluster</title>
      <link>https://enix.io/en/blog/thanos-k8s/</link>
      <pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/thanos-k8s/</guid>
      <description>This article is the second in a series of 3 on Thanos monitoring.
 Part 1: &amp;ldquo;Thanos: Long-Term Storage of Prometheus Metrics&amp;rdquo; Part 2: &amp;ldquo;Deploying Thanos and Prometheus on a K8s Cluster&amp;rdquo; Part 3: &amp;ldquo;Thanos: Aggregating Multiple Prometheus Instances&amp;rdquo;   Today, it&amp;rsquo;s time for action: we fire up our Kubernetes cluster and focus on how to deploy Thanos.
Note: Our focus here is on deploying a single Prometheus and Thanos instance on Kubernetes, but Thanos can also be used and deployed outside of Kubernetes.</description>
    </item>
    
    
    
    <item>
      <title>Thanos: Long-Term Storage of Prometheus Metrics</title>
      <link>https://enix.io/en/blog/prometheus-thanos/</link>
      <pubDate>Wed, 04 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/prometheus-thanos/</guid>
      <description>This article kicks off our three-part series on Thanos monitoring.
 Part 1: &amp;ldquo;Thanos: Long-Term Storage of Prometheus Metrics&amp;rdquo; Part 2: &amp;ldquo;Deploying Thanos and Prometheus on a K8s Cluster&amp;rdquo; Part 3: &amp;ldquo;Thanos: Aggregating Multiple Prometheus Instances&amp;rdquo;   This initial article introduces Thanos, used for long-term storage of Prometheus metrics.
By default, the retention duration for Prometheus metrics is 15 days. To keep them for a longer period (months or even years), the first instinct would be to increase Prometheus&amp;rsquo;s retention duration.</description>
    </item>
    
    
    
    <item>
      <title>What is Kubernetes?</title>
      <link>https://enix.io/en/blog/kubernetes-k8s/</link>
      <pubDate>Tue, 03 Oct 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubernetes-k8s/</guid>
      <description>Commonly abbreviated as kube or K8s, Kubernetes has risen in recent years as the reference container orchestration solution for application deployment.
With Kubernetes, you can deploy containerized applications across any type of IT infrastructure while centrally managing the various resources they require: computing, storage, databases, networking, etc. These resources are grouped into a Kubernetes cluster composed of a set of servers.
While this article doesn&amp;rsquo;t delve into the basics of containers (a prerequisite for using K8s), it presents Kubernetes in broad strokes, its native mechanisms, its utility for platform operation, and the different types of K8s deployments.</description>
    </item>
    
    
    
    <item>
      <title>Increase Availability &amp; Kubernetes Image Caching with kube-image-keeper</title>
      <link>https://enix.io/en/blog/cache-image-docker-kubernetes/</link>
      <pubDate>Wed, 25 Jan 2023 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/cache-image-docker-kubernetes/</guid>
      <description>Project background
At Enix, we manage hundreds of Kubernetes clusters for our customers and our own internal use. In the cloud, on-premises, big and small, from development to large-scale production deployments. One recurring challenge we face across all these environments is ensuring reliable Kubernetes image caching.
Every Kubernetes administrator has encountered, or will encounter, an issue related to container image retrieval. This may happen when you roll out an update to patch a security issue, fix a bug or rollback to a stable release after a faulty update.</description>
    </item>
    
    
    
    <item>
      <title>Storage in Kubernetes: how we developed a CSI plugin</title>
      <link>https://enix.io/en/blog/kubernetes-storage-csi-plugin/</link>
      <pubDate>Mon, 18 Oct 2021 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubernetes-storage-csi-plugin/</guid>
      <description>Containers are a great way to build, ship and run applications anywhere, on-premises or in the cloud. When they&amp;rsquo;re used correctly, they improve the portability and operability of our workloads.
For production deployments involving multiple machines (for scalability or reliability reasons), it is fairly common to use an orchestrator like Kubernetes. The orchestrator can also manage the underlying infrastructure components, including network and storage.
Stateful components (like databases or message queues) are comparatively harder to deploy than their stateless counterparts, precisely because of these storage components.</description>
    </item>
    
    
    
    <item>
      <title>Avoiding certificate expiration in a Kubernetes infrastructure</title>
      <link>https://enix.io/en/blog/avoiding-certificate-expiration-kubernetes-infrastructure/</link>
      <pubDate>Thu, 18 Mar 2021 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/avoiding-certificate-expiration-kubernetes-infrastructure/</guid>
      <description>We have all seen these ugly security alerts caused by expired TLS certificates.
In this blog post, we will see a “belt and suspenders” approach to avoid this situation, by renewing them automatically and detecting certificates that might expire (before they do!). In pure Cloud Native fashion, our solution will even run entirely on Kubernetes!
X.509 certificates, what are they? X.509 certificates are used to ensure the security of various actions in our Cloud Native world, and are widely used to guarantee the security of exchanges via HTTPS, so the APIs we all use also depend on them.</description>
    </item>
    
    
    
    <item>
      <title>Rancher 2: Three Installation Methods</title>
      <link>https://enix.io/en/blog/rancher-2-three-installation-methods/</link>
      <pubDate>Mon, 15 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/rancher-2-three-installation-methods/</guid>
      <description>Whether you&amp;rsquo;re on the lookout for a practical solution to hold the reins of Kubernetes, or you&amp;rsquo;re an active Rancher 1.6 user, this blog post on Rancher 2 may be of interest to you.
What is Rancher 2? If you only take a quick look at Rancher, you might think it&amp;rsquo;s &amp;ldquo;just&amp;rdquo; a simple graphical interface and wonder how it differs from the official Kubernetes dashboard. However, there is a world of difference between the two, since Rancher also manages:</description>
    </item>
    
    
    
    <item>
      <title>&#34;Honey, I Shrunk Docker!&#34; - part 1/3</title>
      <link>https://enix.io/en/blog/docker-image-size-optimizations-1/</link>
      <pubDate>Tue, 07 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/docker-image-size-optimizations-1/</guid>
      <description>This is the first part of our series of blog posts dealing with Docker image size optimization:
  “Honey, I Shrunk Docker!”, Part 1/3 : here we are!
  “Honey, I Shrunk Docker!”, Part 2/3 
  “Honey, I Shrunk Docker!”, Part 3/3 
  Introduction When getting started with containers, it’s pretty easy to be shocked by the size of the images that we build. We’re going to review a number of techniques to reduce image size, without sacrificing developers’ and ops’ convenience.</description>
    </item>
    
    
    
    <item>
      <title>&#34;Honey, I Shrunk Docker!&#34; - part 2/3</title>
      <link>https://enix.io/en/blog/docker-image-size-optimizations-2/</link>
      <pubDate>Tue, 07 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/docker-image-size-optimizations-2/</guid>
      <description>This is the second part of our series of blog posts dealing with Docker image size optimization:
  “Honey, I Shrunk Docker!”, Part 1/3 
  “Honey, I Shrunk Docker!”, Part 2/3 : here we are!
  “Honey, I Shrunk Docker!”, Part 3/3 
  Introduction In the first part, we introduced multi-stage builds, static and dynamic linking, and briefly mentioned Alpine. In this second part, we are going to dive into some details specific to Go.</description>
    </item>
    
    
    
    <item>
      <title>&#34;Honey, I Shrunk Docker!&#34; - part 3/3</title>
      <link>https://enix.io/en/blog/docker-image-size-optimizations-3/</link>
      <pubDate>Tue, 07 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/docker-image-size-optimizations-3/</guid>
      <description>This is the last part of our series of blog posts dealing with Docker image size optimization:
  “Honey, I Shrunk Docker!”, Part 1/3 
  “Honey, I Shrunk Docker!”, Part 2/3 
  “Honey, I Shrunk Docker!”, Part 3/3 : here we are!
  Introduction In the first two parts of this series, we covered the most common methods to optimize Docker image size. We saw how multi-stage builds, combined with Alpine-based images, and sometimes static builds, would generally give us the most dramatic savings.</description>
    </item>
    
    
    
    <item>
      <title>Continuous Integration: Getting rid of Manual Release Management</title>
      <link>https://enix.io/en/blog/continuous-integration-getting-rid-manual-release-management/</link>
      <pubDate>Mon, 16 Dec 2019 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/continuous-integration-getting-rid-manual-release-management/</guid>
      <description>A proper release involves a lot of work and many steps. To ensure that the work is done well and quickly, it is a good idea to automate as many of them as possible.
Let’s talk about two steps in particular: the (automatic) numbering of the next version, and the generation of its changelog.
After having worked on a rather wide range of applications and libraries, our feedback is clear: Automating the release process pays off!</description>
    </item>
    
    
    
    <item>
      <title>DIY at Enix, designing and producing a rotating beacon</title>
      <link>https://enix.io/en/blog/diy-enix-designing-producing-rotating-beacon/</link>
      <pubDate>Tue, 28 May 2019 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/diy-enix-designing-producing-rotating-beacon/</guid>
      <description>In our office, we have been using a rotating beacon to be notified whenever a new customer support ticket comes in, allowing a rapid response to our customers’ requests and sometimes urgent demands. We would now like to use an identical device for notification of several types of monitoring alerts. Our original rotating beacon, of a fairly classical type, is made of a motor and a light bulb, both controlled through a WiFi relay switch.</description>
    </item>
    
    
    
    <item>
      <title>Kubernetes: kubectl wait</title>
      <link>https://enix.io/en/blog/kubernetes-tips-tricks-kubectl-wait/</link>
      <pubDate>Tue, 02 Apr 2019 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/kubernetes-tips-tricks-kubectl-wait/</guid>
      <description>The Kubernetes CLI offers a powerful command to monitor and react to changes in your cluster: the kubectl wait command.
This command enables you to block execution (i.e. wait) until a specific condition is met, such as:
 a specified resource is deleted; a specified resource transitions to a specific state  Waiting for resource deletion: kubectl wait --for=delete In this case, you will use the --for=delete option as follows, for example on a pod:</description>
    </item>
    
    
    
    <item>
      <title>Deploying Kubernetes 1.13 on Openstack with Terraform</title>
      <link>https://enix.io/en/blog/deploying-kubernetes-1-13-openstack-terraform/</link>
      <pubDate>Wed, 26 Dec 2018 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/deploying-kubernetes-1-13-openstack-terraform/</guid>
      <description>We use OpenStack a lot at Enix, especially to automate the setup of Kubernetes clusters used during our training sessions. Whether through the Horizon web interface or via the CLI, the pleasure of deploying virtual machines in bulk never fades!
With a few years of using AWS behind me, switching to a private cloud was really easy. But in both cases, setting up and tearing down multiple virtual machines remains very time-consuming.</description>
    </item>
    
    
    
    <item>
      <title>Prometheus Service Discovery with Netbox</title>
      <link>https://enix.io/en/blog/service-discovery-netbox-prometheus/</link>
      <pubDate>Tue, 13 Nov 2018 00:00:00 +0000</pubDate>
      
      <guid>https://enix.io/en/blog/service-discovery-netbox-prometheus/</guid>
      <description>Today, we will see how to &amp;ldquo;connect&amp;rdquo; Prometheus Service Discovery to Netbox, used as the source of truth for your infrastructure!
Prometheus Service Discovery Prometheus, a well-established monitoring system, has the particularity of pulling metrics from the devices (or targets) it monitors. While this &amp;ldquo;pull&amp;rdquo; mode of operation (as opposed to &amp;ldquo;push&amp;rdquo;) has many advantages in terms of scalability and practicality, it nevertheless implies that all the services to be monitored must be declared to the monitoring server.</description>
    </item>
    
    
    
  </channel>
</rss>