
Using Embedded Cluster (Beta)

This topic describes how to use the Replicated Embedded Cluster to configure, install, and manage your application in an embedded Kubernetes cluster.

note

Embedded Cluster is in beta. If you are instead looking for information about creating Kubernetes Installers with Replicated kURL, see the Replicated kURL section.

Overview

Replicated Embedded Cluster allows you to distribute a Kubernetes cluster and your application together as a single appliance, making it easy for enterprise users to install, update, and manage the application and the cluster in tandem. Embedded Cluster is based on the open source Kubernetes distribution k0s. For more information, see the k0s documentation.

For software vendors, Embedded Cluster provides a simplified config for defining characteristics of the cluster that will be created in the customer environment. Additionally, each version of Embedded Cluster includes a specific version of Replicated KOTS, ensuring compatibility between KOTS and the cluster. For enterprise users, cluster updates are done automatically at the same time as application updates, allowing users to more easily keep the cluster up-to-date without needing to use kubectl.

Embedded Cluster is a successor to Replicated kURL. Compared to kURL, Embedded Cluster offers several improvements such as:

  • Significantly faster installation, updates, and node joins
  • A redesigned Admin Console UI for managing the cluster
  • Improved support for multi-node clusters
  • One-click updates of both the application and the cluster at the same time

Requirements

Embedded Cluster has the following requirements:

  • Linux operating system
  • x86-64 architecture
  • systemd
  • At least 2GB of memory and 2 CPU cores
  • Disk space requirements:
    • The filesystem at /var/lib/embedded-cluster has 40Gi or more of total space
    • The filesystem at /var/lib/k0s has 40Gi or more of total space and must be no more than 80% full
    • The filesystem at /opt/openebs has 5Gi or more of total space
    • The filesystem at /tmp has 5Gi or more of total space
  • Embedded Cluster is based on k0s, so all k0s system requirements apply. See System requirements in the k0s documentation.
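
You can check most of these requirements on the target host before installing. The following is a minimal sketch using standard Linux commands; the directories listed above might not exist until installation, so check the filesystems that will contain them:

nproc                      # expect 2 or more CPU cores
free -h                    # expect 2GB or more of memory
ps -p 1 -o comm=           # expect "systemd"
df -h /var/lib /opt /tmp   # filesystems that will contain the directories listed above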

Limitations

Embedded Cluster has the following limitations:

  • No migration from kURL: There is not a supported path to migrate an existing kURL instance to Embedded Cluster.

  • Multi-node support is in alpha: Support for multi-node embedded clusters is in alpha. Only single-node embedded clusters are in beta. For more information, see Enable High Availability for Multi-Node Clusters (Alpha) below.

  • Air gap support is in alpha: Support for air gap installations with Embedded Cluster is an alpha feature. For more information, see Installing in Air Gap Environments with Embedded Cluster (Alpha).

  • Disaster recovery is in alpha: Disaster Recovery for Embedded Cluster installations is in alpha. For more information, see Disaster Recovery for Embedded Cluster (Alpha).

  • Rollbacks not supported: The allowRollback field in the KOTS Application custom resource is not supported for Embedded Cluster installations. In Embedded Cluster installations, the application and the cluster are installed and updated together as a single appliance. Because of this, users cannot roll back (downgrade) the application to an earlier version.

  • Cannot change spec.api and spec.storage after installation: The spec.api and spec.storage keys in the k0s config cannot be changed after installation. Whereas most keys in the k0s config apply to the whole cluster, these two keys are set for each node. Embedded Cluster cannot update these keys on each individual node during updates, so they cannot be changed after installation.

  • Changing node hostnames is not supported: After a host is added to a Kubernetes cluster, Kubernetes assumes that the hostname and IP address of the host will not change. If you need to change the hostname or IP address of a node, you must first remove the node from the cluster. For more information about the requirements for naming nodes, see Node name uniqueness in the Kubernetes documentation.

  • Automated installations not supported: Users cannot do automated (headless) Embedded Cluster installations because it is not possible to configure the application by passing the ConfigValues file with the installation command. Embedded Cluster installations require that the application is configured from the Admin Console config screen. For more information about automating existing cluster or kURL installations with the KOTS CLI, see Installing with the CLI.

  • Embedded Cluster release artifact not available through the Download Portal: The binary required to install with Embedded Cluster cannot be shared with users through the Download Portal. Users can follow the Embedded Cluster installation instructions to download and extract the binary in order to install with Embedded Cluster. For more information, see Download the Embedded Cluster Binary.

  • minKotsVersion and targetKotsVersion not supported: The minKotsVersion and targetKotsVersion fields in the KOTS Application custom resource are not supported for Embedded Cluster installations. This is because each version of Embedded Cluster includes a particular version of KOTS. Setting targetKotsVersion or minKotsVersion to a version of KOTS that does not coincide with the version that is included in the specified version of Embedded Cluster will cause Embedded Cluster installations to fail with an error message like: Error: This version of App Name requires a different version of KOTS from what you currently have installed. To avoid installation failures, do not use targetKotsVersion or minKotsVersion in releases that support installation with Embedded Cluster.

  • Support bundles over 100MB: Support bundles are stored in rqlite. Bundles larger than 100MB can cause rqlite to crash, resulting in errors in the installation.

  • Kubernetes version template functions not supported: The KOTS KubernetesVersion, KubernetesMajorVersion, and KubernetesMinorVersion template functions do not provide accurate Kubernetes version information for Embedded Cluster installations. This is because these template functions are rendered before the Kubernetes cluster has been updated to the intended version. However, KubernetesVersion is not necessary for Embedded Cluster because vendors specify the Embedded Cluster version, which includes a known Kubernetes version.

  • Custom domains not supported: Embedded Cluster does not support the use of custom domains, even if custom domains are configured. We intend to add support for custom domains. For more information about custom domains, see About Custom Domains.

  • KOTS Auto-GitOps workflow not supported: Embedded Cluster does not support the KOTS Auto-GitOps workflow. If an end-user is interested in GitOps, consider the Helm install method instead. For more information, see Installing with Helm.

  • Upgrading by more than one Kubernetes minor version not supported: Embedded Cluster does not support upgrading Kubernetes by more than one minor version at a time. The Kubernetes minor version is present in the build metadata for an Embedded Cluster version (e.g., X.Y.Z+k8s-1.28). For example, if an end-user is running X.Y.Z+k8s-1.28, they must upgrade to X.Y.Z+k8s-1.29 before they can upgrade to X.Y.Z+k8s-1.30. The Embedded Cluster version X.Y.Z can be the same or different during these upgrades. The Admin Console will not prevent end-users from attempting to upgrade by more than one Kubernetes minor version at a time, so you must manage this with your customers.

  • Downgrading Kubernetes not supported: Embedded Cluster does not support downgrading Kubernetes. The Admin Console will not prevent end-users from attempting to downgrade Kubernetes if a more recent version of your application specifies a previous Embedded Cluster version. You must ensure that you do not promote new versions with previous Embedded Cluster versions.

Known Issue

When a customer instance fetches a new version of your application, the version is rendered by the currently deployed version of KOTS. If the new application version uses functionality that the currently deployed version of KOTS does not support, the new version cannot be rendered, and the instance cannot be upgraded to it. In the interim, we are limiting new functionality introduced in KOTS that could trigger this issue, and we are working on a solution.

Quick Start

You can use the following steps to get started quickly with Embedded Cluster. More detailed documentation is available below.

  1. Create a new customer or edit an existing customer and select the Embedded Cluster Enabled (Beta) license option. Save the customer.

  2. Create a new release that includes your application. In that release, create an embedded cluster config that includes, at minimum, the Embedded Cluster version you want to use. See the Embedded Cluster GitHub repo to find the latest version.

    Example Embedded Cluster config:

    apiVersion: embeddedcluster.replicated.com/v1beta1
    kind: Config
    spec:
      version: 1.6.0+k8s-1.29
  3. Save the release and promote it to the channel your customer is assigned to.

  4. Return to the customer page where you enabled Embedded Cluster. At the top right, click Install instructions and choose Embedded cluster. A dialog appears with instructions on how to download the Embedded Cluster binary and install your application.

    Customer install instructions drop down button


  5. On your VM, run the commands in the Embedded Cluster install instructions dialog.

    Embedded cluster install instruction dialog


  6. Enter an Admin Console password when prompted.

    The Admin Console URL is printed when the installation finishes. Access the Admin Console to begin installing your application. You’ll have the opportunity to add nodes if you want a multi-node cluster. Then you can provide application config, run preflights, and deploy your application.

Configure Embedded Cluster

To install your application with Embedded Cluster, an Embedded Cluster config must be created in a release. The Embedded Cluster config lets you define several characteristics about the cluster that will be created.

For more information, see Embedded Cluster Config.
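
For example, a config can pin the Embedded Cluster version and define node roles that users can assign when joining nodes. The following is an illustrative sketch only; the role names and labels are placeholders, and the full set of supported fields is described in Embedded Cluster Config:

apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 1.6.0+k8s-1.29
  roles:
    controller:
      name: management
      labels:
        management: "true"
    custom:
      - name: app
        labels:
          app: "true"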

Download the Embedded Cluster Binary

To install your application with Embedded Cluster, you need to download an Embedded Cluster binary and a license. The binary is specific to your application release: it installs that particular version of your application using the Embedded Cluster config defined in that release.

Your customers can download the binary using the Replicated app service, or you can serve the binary yourself using the Replicated Vendor API.

For information about downloading the Embedded Cluster binary for air gap installations, see Installing in Air Gap Environments with Embedded Cluster (Alpha).

Using the Replicated App Service

For online installations, your customers can run a command to download the Embedded Cluster binary and their license directly to the machine where they will install your application. The command to download the binary and license is available on the customer page in the Vendor Portal. You can copy this command and share it with your customers.

To download the binary using the Replicated app service:

  1. Create a new customer or edit an existing customer and select the Embedded Cluster Enabled (Beta) license option. Save the customer.

  2. At the top right of the customer page, click Install instructions and choose Embedded cluster.

    Customer install instructions drop down button


    An Embedded cluster installation instructions dialog is displayed with instructions on how to download the Embedded Cluster binary and install your application.

  3. (Optional) In the Select a version dropdown, select a version label to download the release artifacts for a specific version of your application.

    For example:

    Embedded cluster install instruction dialog


  4. On a VM, run the first two commands in the Embedded cluster installation instructions dialog to download the binary and license and extract them.
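
    The commands in the dialog typically download a tarball containing the binary and license from replicated.app and then extract it. The exact commands, including the URL and license ID, come from the dialog; the following is an illustrative sketch with placeholder values:

    curl -f "https://replicated.app/embedded/APP_SLUG/CHANNEL" \
      -H "Authorization: LICENSE_ID" \
      -o APP_SLUG-CHANNEL.tgz
    tar -xvzf APP_SLUG-CHANNEL.tgz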

note

You can configure a custom domain for the replicated.app endpoint so that your customers can download the binary from your own domain instead of from replicated.app. For more information, see About Custom Domains.

Using the Vendor API

Some vendors already have a portal where their customers log in to access documentation or download artifacts. In cases like this, you can serve the Embedded Cluster binary yourself using the Replicated Vendor API.

To serve the binary with the Vendor API:

  1. If you have not done so already, create an API token for the Vendor API. See Using the Vendor API v3.

  2. Call the Get an Embedded Cluster release endpoint to download the artifacts needed to install your application in an embedded cluster. Your customers must take this binary and their license and copy them to the machine where they will install your application.

    Note the following:

    • (Recommended) Provide the customerId query parameter so that the customer’s license is included in the downloaded tarball. This mirrors what is returned when a customer downloads the binary directly using the Replicated app service and is the most useful option. Excluding the customerId is useful if you plan to distribute the license separately.

    • If you do not provide any query parameters, this endpoint downloads the Embedded Cluster binary for the latest release on the specified channel. You can provide the channelSequence query parameter to download the binary for a particular release.
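
    For example, a request to this endpoint might look like the following. This is a sketch only: the endpoint path shown here is illustrative, so confirm the exact route and query parameters in the Vendor API v3 reference, and substitute your own app ID, channel ID, customer ID, and API token:

    curl --fail \
      -H "Authorization: $REPLICATED_API_TOKEN" \
      "https://api.replicated.com/vendor/v3/app/$APP_ID/channel/$CHANNEL_ID/embedded-cluster?customerId=$CUSTOMER_ID" \
      -o embedded-cluster.tgz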

Install

Your customers can install your application once they have obtained the Embedded Cluster binary and their license and copied them to the machine where they will install your application. For more information about how to get the binary and license, see Download the Embedded Cluster Binary above.

For information about installing in air gap environments, see Installing in Air Gap Environments with Embedded Cluster (Alpha).

To install your application in an embedded cluster:

  1. SSH onto the machine where you will install your application.

  2. Ensure that the Embedded Cluster binary and the license are available in the working directory.

  3. Install the application:

    sudo ./APP_SLUG install --license LICENSE_FILE

    Where:

    • APP_SLUG is the unique slug for the application.
    • LICENSE_FILE is the customer's license.
  4. Enter an Admin Console password when prompted.

    The Admin Console URL is printed when the installation finishes. Access the Admin Console to begin installing your application. You’ll have the opportunity to add nodes if you want a multi-node cluster. Then you can provide application config, run preflights, and deploy your application.

Install Behind a Proxy Server

The following flags can be used with the Embedded Cluster install command to install behind a proxy server:

  • --http-proxy: Proxy server to use for HTTP
  • --https-proxy: Proxy server to use for HTTPS
  • --no-proxy: Comma-separated list of hosts for which not to use a proxy

Example:

sudo ./APP_SLUG install --license LICENSE_FILE \
--http-proxy=HOST:PORT \
--https-proxy=HOST:PORT \
--no-proxy=LIST_OF_HOSTS

Where:

  • LICENSE_FILE is the customer's license
  • HOST:PORT is the host and port of the proxy server
  • LIST_OF_HOSTS is the list of hosts to not proxy

For multi-node clusters, the proxy settings are automatically configured when joining additional nodes. When deploying the first node of a multi-node cluster, you should pass the node IP addresses (typically in CIDR notation) to --no-proxy.
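
For example, when installing the first node of a multi-node cluster on the 10.128.0.0/24 subnet, the command might look like the following (the addresses are illustrative):

sudo ./APP_SLUG install --license LICENSE_FILE \
--http-proxy=10.128.0.5:3128 \
--https-proxy=10.128.0.5:3128 \
--no-proxy=10.128.0.0/24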

The following are never proxied:

  • Internal cluster communication (localhost, 127.0.0.1, .default, .local, .svc, kubernetes)
  • Communication to the KOTS database (kotsadm-rqlite)
  • The CIDR used for assigning IPs to Kubernetes Pods and Services (10.0.0.0/8)

Requirement:

Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. For example, Embedded Cluster 1.6.0+k8s-1.29 supports installing behind a proxy, and 1.6.0+k8s-1.28 does not. For the latest version information, see the embedded-cluster repository in GitHub.

Limitations:

  • Proxy settings are not automatically propagated to your Helm extensions and must be manually configured.
  • Proxy settings cannot be changed after installation or during upgrade.

Set IP Address Ranges for Pods and Services

The following flags can be used with the Embedded Cluster install command to allocate IP address ranges for Pods and Services:

  • --pod-cidr: The range of IP addresses that can be assigned to Pods, in CIDR notation. By default, the Pod CIDR is 10.244.0.0/16.
  • --service-cidr: The range of IP addresses that can be assigned to Services, in CIDR notation. By default, the Service CIDR is 10.96.0.0/12.

Example:

sudo ./my-app install --license license.yaml --pod-cidr 172.16.136.0/16

Limitation:

The --pod-cidr and --service-cidr flags are not supported on Red Hat Enterprise Linux (RHEL) 9 operating systems.

Host Preflight Checks

During installation, Embedded Cluster automatically runs a default set of host preflight checks. The default host preflight checks are designed to verify that the installation environment meets the requirements for Embedded Cluster, such as:

  • The system has sufficient disk space
  • The system has at least 2GB of memory and 2 CPU cores
  • The system clock is synchronized

For Embedded Cluster requirements, see Requirements. For the full default host preflight spec for Embedded Cluster, see host-preflight.yaml in the embedded-cluster repository in GitHub.

If any of the host preflight checks fail, installation is blocked and a message describing the failure is displayed. For more information about host preflight checks for installations on VMs or bare metal servers, see (KOTS Only) Host Preflights for VM or Bare Metal Installations.

Limitations

Embedded Cluster host preflight checks have the following limitations:

  • The default host preflight checks for Embedded Cluster cannot be modified, and vendors cannot provide their own custom host preflight spec for Embedded Cluster.
  • Host preflight checks do not check that any application-specific requirements are met. For more information about defining preflight checks for your application, see Defining Preflight Checks.
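
For example, a minimal application preflight spec might look like the following. This is a sketch only, using the standard Troubleshoot Preflight spec; the resource thresholds are placeholders for your application's actual requirements:

apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: my-app-preflights
spec:
  analyzers:
    - nodeResources:
        checkName: Cluster CPU capacity
        outcomes:
          - fail:
              when: "sum(cpuCapacity) < 4"
              message: This application requires at least 4 CPU cores across the cluster.
          - pass:
              message: The cluster has sufficient CPU capacity.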

Skip Host Preflight Checks

You can skip host preflight checks by passing the --skip-host-preflights flag with the Embedded Cluster install command. For example:

sudo ./my-app install --license license.yaml --skip-host-preflights

When you skip host preflight checks, the Admin Console still runs any application-specific preflight checks that are defined in the release before the application is deployed.

note

Skipping host preflight checks is not recommended for production installations.

Host Storage

Embedded Cluster stores data in the following primary locations, all of which must be writable:

note

Replicated does not support using symlinked directories for these locations.

/var/lib/embedded-cluster

  • Used to store your installer
  • Includes:
    • Logs
    • Temporary files
    • Supporting binaries (for example, kubectl and Replicated’s preflight and support-bundle plugins)
  • Should be large enough to fit at least 2 installer binaries

/var/lib/k0s

  • Where the Kubernetes cluster runs
  • Includes:
    • All Kubernetes components and binaries
    • Pod log files
    • Images currently in use by containerd
  • Storage requirements:
    • The total space should be sufficient to hold the following while remaining below 85% utilization:
      • All Pod logs
      • Two copies of infrastructure images included in k0s
      • All Pod images for your application
    • Kubernetes will initiate garbage collection of images when utilization reaches 85%
  • Calculation example: If the sum of required storage for logs, images, and components is X GB, the total allocated space should be at least X / 0.85 GB to ensure it remains below 85% utilization.

/var/openebs

  • Where PVC data is stored
  • Needs enough storage to support all volumes used in your application
  • For air gap installs:
    • The image registry uses a volume
    • Requires space for two copies of your images (for the registry)
    • Additional space is needed for other application volume requirements
    • If high availability is enabled, all controller nodes will have the same requirements because data will be replicated to all controller nodes.

/tmp

  • Temporary space during installation

Update

When you update an application installed in an Embedded Cluster, you update both the application and the cluster together. There is no need or mechanism to update the Embedded Cluster on its own.

You can check for available application updates in the Admin Console or use automatic updates to retrieve newly available versions. When you deploy a new version, any changes to your application are deployed first. The Admin Console waits until your application is ready, as indicated by status informers, before making any changes to the cluster.

Any changes made to the Embedded Cluster config, including changes to the Embedded Cluster version, Helm extensions, and unsupported overrides, trigger a cluster update.
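
For example, promoting a release whose Embedded Cluster config bumps the version field, as in the following illustrative snippet, causes the cluster to be updated when that application version is deployed:

apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 1.7.0+k8s-1.29   # changed from 1.6.0+k8s-1.29 in the previous release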

During cluster updates, the Admin Console can experience downtime and become unavailable if the Kubernetes version is updated. If you’re already viewing the Admin Console when the downtime occurs, you will see a message informing you that a cluster update is in progress and that the connection will resume once the Admin Console is available again:

Embedded cluster update in progress modal


important

Do not downgrade the Embedded Cluster version. Downgrading is not supported, is not prevented by the tooling, and can lead to unexpected behavior.

Access the Cluster

With Embedded Cluster, end users should rarely need to use the CLI. Typical workflows, like updating the application and the cluster, are driven through the Admin Console.

Nonetheless, there are times when vendors or their customers need to use the CLI for development or troubleshooting.

To access the cluster and use other included binaries:

  1. SSH onto a controller node.

  2. Use the Embedded Cluster shell command to start a shell with access to the cluster:

    sudo ./APP_SLUG shell

    The output looks similar to the following:

       __4___
    _ \ \ \ \ Welcome to APP_SLUG debug shell.
    <'\ /_/_/_/ This terminal is now configured to access your cluster.
    ((____!___/) Type 'exit' (or CTRL+d) to exit.
    \0\0\0\0\/ Happy hacking.
    ~~~~~~~~~~~
    root@alex-ec-2:/home/alex# export KUBECONFIG="/var/lib/k0s/pki/admin.conf"
    root@alex-ec-2:/home/alex# export PATH="$PATH:/var/lib/embedded-cluster/bin"
    root@alex-ec-2:/home/alex# source <(kubectl completion bash)
    root@alex-ec-2:/home/alex# source /etc/bash_completion

    The appropriate kubeconfig is exported, and the location of useful binaries like kubectl and Replicated’s preflight and support-bundle plugins is added to PATH.

    note

    The shell command cannot be run on non-controller nodes.

  3. Use the available binaries as needed.

    Example:

    kubectl version
    Client Version: v1.29.1
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.29.1+k0s
  4. Type exit or Ctrl + D to exit the shell.

    note

    If you encounter a typical workflow where your customers have to use the Embedded Cluster shell, reach out to Alex Parker at [email protected]. These workflows might be candidates for additional Admin Console functionality.

Troubleshoot

From the shell, you can generate a support bundle for the Embedded Cluster to help with troubleshooting.

The support bundle collects host-level information to troubleshoot failures related to host configuration like DNS, networking, or storage problems. It also collects cluster-level information about the components provided by Replicated, such as the Admin Console and Embedded Cluster operator that manage install and upgrade operations.

If the cluster has not installed successfully, cluster-level information is not available and is excluded from the bundle.

To generate a support bundle:

  1. Access the Embedded Cluster shell:

    sudo ./APP_SLUG shell
  2. Generate the support bundle:

    kubectl support-bundle /var/lib/embedded-cluster/support/host-support-bundle.yaml --load-cluster-specs

You can then use the support bundle for troubleshooting or provide it to Replicated for assistance.

Manage Nodes

This section describes how to manage nodes in clusters created with Embedded Cluster, including how to add and reset nodes.

Add Nodes (Alpha)

You can add nodes and create a multi-node cluster. When adding nodes, you select one or more roles for that node, depending on which roles are defined in the Embedded Cluster config. The Admin Console provides the join command you use to join nodes to the cluster.

For more information about defining node roles, see Roles in Embedded Cluster Config.

To add nodes to a cluster:

  1. In the Admin Console, click Cluster Management at the top.

    When initially installing the application, you are brought to this page automatically after logging into the Admin Console.

  2. Click Add node.

  3. In the Add a Node dialog, select one or more roles for this node. If no custom roles are defined, the role selection will not appear and only the join command will show.

    Add node page in the Admin Console


  4. Copy the provided join command.

    Example:

    sudo ./APP_SLUG join 10.128.0.43:30000 bM8DO3MNvkouz9TFK3TcFanI
  5. SSH onto the machine you want to join to the cluster. Ensure that the Embedded Cluster binary is available on that node. For more information about downloading the Embedded Cluster binary, see Download the Embedded Cluster Binary above.

    important

    You must join nodes with the same installer that you used for the first node. If you use a different installer from a different release of your application, the cluster will not be stable.

  6. Run the copied join command using the Embedded Cluster binary.

  7. Check the Cluster Management page in the Admin Console to see the node appear and wait for the status to change to Ready.

  8. Repeat the process for as many nodes as you would like to add.

Enable High Availability for Multi-Node Clusters (Alpha)

Multi-node clusters are not highly available by default. The first node of the cluster is special and holds important data for Kubernetes and KOTS, such that the loss of this node would be catastrophic for the cluster. After three controller nodes are present in the cluster, high availability (HA) can be enabled.

important

High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. To get access to this feature, reach out to Alex Parker at [email protected].

Requirement

High availability is supported with Embedded Cluster version 1.4.1 or later.

Create a Multi-Node HA Cluster

To create a multi-node HA cluster:

  1. Set up a cluster with at least two controller nodes. You can do an online (internet-connected) or air gap installation. For online clusters, see Add Nodes (Alpha) above. For air gap clusters, see Installing in Air Gap Environments with Embedded Cluster (Alpha).

  2. SSH onto a third node that you want to join to the cluster as a controller.

  3. Run the join command and pass the --enable-ha flag. For example:

    sudo ./APP_SLUG join --enable-ha 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf
  4. After the third node joins the cluster, type y in response to the prompt asking if you want to enable high availability.

    High availability command line prompt

  5. Wait for the migration to complete.

Limitations and Known Issues

  • Support bundles are now stored in rqlite instead of MinIO because MinIO is no longer deployed. We’ve successfully stored 100 MB support bundles (which is much larger than most support bundles), but bundles over 100 MB can cause rqlite to crash and restart. We are considering making support bundles ephemeral instead of storing them to address this issue, because support bundles are rarely needed long after they’re generated.

  • The --enable-ha flag serves as a feature flag during the Alpha phase. In the future, the prompt about migrating to high availability will display automatically if the cluster is not yet HA and you are adding the third or more controller node.

Reset a Node

Resetting a node removes the cluster and your application from that node. This is useful for iteration and development, or when mistakes are made, because you can reset a machine and reuse it instead of having to procure another machine.

If you want to completely remove a cluster, you need to reset each node individually.

When resetting a node, OpenEBS PVCs on the node are deleted. Only PVCs created as part of a StatefulSet will be recreated automatically on another node. To recreate other PVCs, the application will need to be redeployed.

To reset the node of a cluster:

  1. SSH onto the machine. Ensure that the Embedded Cluster binary is still available on that machine. For more information on downloading the Embedded Cluster binary, see Download the Embedded Cluster Binary above.

  2. Run the reset command to reset the node. The --reboot flag automatically reboots the machine to ensure that transient configuration is also reset.

    sudo ./APP_SLUG reset --reboot
    note

    Pass the --no-prompt flag to disable interactive prompts. Pass the --force flag to ignore any errors encountered during the reset.

Additional Use Cases

This section outlines some additional use cases for Embedded Cluster. These are not officially supported features from Replicated, but are ways of using Embedded Cluster that we or our customers have experimented with that might be useful to you.

NVIDIA GPU Operator

The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs. For more information about this operator, see the NVIDIA GPU Operator documentation. You can include the operator in your release as an additional Helm chart, or by using Embedded Cluster Helm extensions. For information about Helm extensions, see Helm extensions in Embedded Cluster Config.

Using this operator with Embedded Cluster requires configuring the containerd options in the operator as follows:

toolkit:
  env:
    - name: CONTAINERD_CONFIG
      value: /etc/k0s/containerd.d/nvidia.toml
    - name: CONTAINERD_SOCKET
      value: /run/k0s/containerd.sock
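
If you ship the operator through Embedded Cluster Helm extensions rather than as a chart in your release, the containerd options can be set in the chart values. The following is a sketch only, assuming the NVIDIA Helm repository at https://helm.ngc.nvidia.com/nvidia and the gpu-operator chart; CHART_VERSION is a placeholder, and you should confirm current chart versions and values in the NVIDIA GPU Operator documentation:

apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 1.6.0+k8s-1.29
  extensions:
    helm:
      repositories:
        - name: nvidia
          url: https://helm.ngc.nvidia.com/nvidia
      charts:
        - name: gpu-operator
          chartname: nvidia/gpu-operator
          namespace: gpu-operator
          version: "CHART_VERSION"
          values: |
            toolkit:
              env:
                - name: CONTAINERD_CONFIG
                  value: /etc/k0s/containerd.d/nvidia.toml
                - name: CONTAINERD_SOCKET
                  value: /run/k0s/containerd.sock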