Configure Embedded Cluster (Beta)
This topic describes how to configure and use Replicated Embedded Cluster with your application. For more information about Embedded Cluster, see Embedded Cluster Overview. For information about updating an existing release from Embedded Cluster v2 to v3, see Migrate from Embedded Cluster v2.
Create a release with Embedded Cluster v3
To create an application release that supports installation with Embedded Cluster v3:
- If you use the Replicated proxy registry, update all references to private or third-party images to use the Replicated proxy registry domain. See the Embedded Cluster v3 steps in Configure your application to use the proxy registry.

- In your application Helm chart's Chart.yaml file, add the SDK as a dependency. If your application uses multiple charts, declare the SDK as a dependency of the chart that customers install first. Do not declare the SDK in more than one chart.

      # Chart.yaml
      dependencies:
        - name: replicated
          repository: oci://registry.replicated.com/library
          version: 1.19.3

  For the latest version information for the Replicated SDK, see the replicated-sdk repository in GitHub.

- Package each chart into a .tgz chart archive. See Package a Helm chart for a release.

- For each chart archive, add a unique HelmChart v2 custom resource (version kots.io/v1beta2).

      # HelmChart custom resource
      apiVersion: kots.io/v1beta2
      kind: HelmChart
      metadata:
        name: samplechart
      spec:
        # chart identifies a matching chart from a .tgz
        chart:
          name: samplechart
          chartVersion: 3.1.7

- If you support air gap installations, update all image references so that they resolve correctly in both online and air gap installations. See Add support for air gap installations on this page.

- Add an Embedded Cluster Config manifest to the release. At minimum, the Config must specify the Embedded Cluster version to use.

      apiVersion: embeddedcluster.replicated.com/v1beta1
      kind: Config
      spec:
        version: 3.0.0-beta.1+k8s-1.34

- If you use custom domains for the Replicated proxy registry or Replicated app service, add them to the Embedded Cluster Config domains key. See Configure Embedded Cluster to use custom domains in Use custom domains.

- If you need Embedded Cluster to deploy certain components to the cluster before it deploys your application, add the Helm charts for those components to the Embedded Cluster Config extensions key. See Add Helm chart extensions on this page.

- Save the release and promote it to the channel that you use for testing internally.

- Install with Embedded Cluster in a development environment to test. See Online installation with Embedded Cluster or Air gap installation with Embedded Cluster.
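The Config-related steps above can be combined. The following is a minimal sketch of an Embedded Cluster Config that pins the version and adds custom domains; the domain values are placeholders, and the exact subkeys under domains should be verified against the Embedded Cluster Config reference:

```yaml
# Embedded Cluster Config: minimal sketch combining the version and
# custom domains steps. Domain values are placeholders; verify the
# proxyRegistryDomain and replicatedAppDomain key names against the
# Embedded Cluster Config reference.
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 3.0.0-beta.1+k8s-1.34
  domains:
    proxyRegistryDomain: proxy.example.com
    replicatedAppDomain: updates.example.com
```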
Add support for air gap installations
This section describes how to support air gap installations with Embedded Cluster v3. It includes information about how to configure your release and lists the limitations and known issues of air gap installations with Embedded Cluster v3.
Configure the release
To support air gap installations with Embedded Cluster v3:
- Configure each HelmChart custom resource's builder key. This ensures that all the required and optional images for your application are available in environments without internet access. See builder in HelmChart v2.

  My chart's default values already expose all images. Do I still need to configure the builder key?

  If the default values in your Helm chart already expose all the images for air gap installations, then you do not need to configure the builder key.

  When building an air gap bundle, the Vendor Portal runs helm template on each Helm chart to detect which images to include. The bundle includes all images that helm template yields.

  For many applications, running helm template with the default values would not yield all the images required to install. In these cases, vendors can pass the additional values in the builder key to ensure that the air gap bundle includes all the necessary images.

- Configure each HelmChart custom resource to ensure that all image references resolve correctly in both online and air gap installations. You do this in the HelmChart custom resource's values key using the ReplicatedImageName and ReplicatedImageRegistry template functions. See the following examples for more information:

  Example (Single value for full image name)

  For charts that expect the full image reference in a single field, use the ReplicatedImageName template function in the HelmChart custom resource. ReplicatedImageName returns the full image name, including both the repository and registry.

  For example:

      # values.yaml
      initImage: proxy.replicated.com/proxy/my-app/docker.io/library/busybox:1.36

      # HelmChart custom resource
      apiVersion: kots.io/v1beta2
      kind: HelmChart
      spec:
        values:
          initImage: '{{repl ReplicatedImageName (HelmValue ".initImage") true }}'

  This example sets noProxy to true because the image reference value in values.yaml already contains the proxy path prefix (proxy.replicated.com/proxy/my-app/...).

  Example (Separate values for image registry and repository)

  If a chart uses separate registry and repository fields for image references, use the ReplicatedImageRegistry template function to rewrite the registry field. You do not need to template the repository field.

      # values.yaml
      postgresql:
        image:
          # proxy.replicated.com or your custom domain
          registry: proxy.replicated.com/proxy/app-slug/docker.io
          repository: bitnami/postgresql

      # HelmChart custom resource
      apiVersion: kots.io/v1beta2
      kind: HelmChart
      spec:
        values:
          image:
            registry: '{{repl ReplicatedImageRegistry (HelmValue ".image.registry") }}'

  Example (References to public images)

  For public images that don't go through the Replicated proxy registry, set the upstream reference directly in the chart's values.yaml. Use noProxy so that ReplicatedImageName leaves the reference unchanged in online installations. When you include noProxy, ReplicatedImageName still rewrites the image to the local registry in air gap installations.

      # values.yaml
      publicImage: docker.io/library/busybox:1.36

      # HelmChart custom resource
      apiVersion: kots.io/v1beta2
      kind: HelmChart
      spec:
        values:
          publicImage: '{{repl ReplicatedImageName (HelmValue ".publicImage") true }}'

- In the HelmChart resource that corresponds to the chart where you included the Replicated SDK as a dependency, rewrite the Replicated SDK image registry using the ReplicatedImageRegistry template function:

      # HelmChart custom resource
      apiVersion: kots.io/v1beta2
      kind: HelmChart
      spec:
        values:
          replicated:
            image:
              registry: '{{repl ReplicatedImageRegistry (HelmValue ".replicated.image.registry") }}'

- If you added any Helm chart extensions in the Embedded Cluster Config, rewrite image references in each extension using either the ReplicatedImageName template function (if the chart uses a single field for the full image reference) or the ReplicatedImageRegistry template function (if the chart uses separate fields for registry and repository).

  Example (Extension for a Helm chart that you own)

      # Embedded Cluster Config
      apiVersion: embeddedcluster.replicated.com/v1beta1
      kind: Config
      spec:
        extensions:
          helmCharts:
            - chart:
                name: ingress
                chartVersion: "1.2.3"
              releaseName: ingress
              namespace: ingress
              values: |
                controller:
                  image:
                    registry: 'repl{{ ReplicatedImageRegistry (HelmValue ".controller.image.registry") }}'

  Example (Extension for a third-party Helm chart)

      # Embedded Cluster Config
      apiVersion: embeddedcluster.replicated.com/v1beta1
      kind: Config
      spec:
        extensions:
          helmCharts:
            - chart:
                name: ingress-nginx
                chartVersion: "4.11.3"
              releaseName: ingress-nginx
              namespace: ingress-nginx
              values: |
                controller:
                  image:
                    registry: 'repl{{ ReplicatedImageRegistry "registry.k8s.io" }}'

  The template functions add the proxy prefix in online installations and rewrite to the local registry in air gap installations.

- In the Vendor Portal, go to the channel where you promoted the release to build the air gap bundle. Do one of the following:

  - If you enabled the Automatically create airgap builds for newly promoted releases in this channel setting for the channel, watch for the build status to complete.
  - If automatic air gap builds are not enabled, go to the Release history page for the channel and build the air gap bundle manually.

- Create or edit a customer with the Air Gap Installation Option (Replicated Installers only) entitlement enabled so that you can test air gap installations. See Create and Manage Customers.

- (Optional) Create a VM with Compatibility Matrix and set its network policy to airgap to block outbound network access:

      replicated vm create --distribution ubuntu
      replicated network update NETWORK_ID --policy airgap

  Where NETWORK_ID is the ID of the network from the output of the vm create command.

- Install in your development environment to test. See Air gap installation with Embedded Cluster.
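The resolution behavior that the steps above configure can be illustrated with a small sketch. This is not Replicated's implementation — the template functions are rendered by Replicated tooling — but it mimics the rewrites described here; the app slug and local registry address are made-up examples:

```python
# Hypothetical sketch of the rewrites described above -- not Replicated's
# implementation. The app slug and local registry address are examples.
PROXY_DOMAIN = "proxy.replicated.com"
APP_SLUG = "my-app"
LOCAL_REGISTRY = "10.128.0.11:5000"  # example local registry in an air gap install

def rewrite_image(image: str, airgap: bool, no_proxy: bool = False) -> str:
    """Return the reference an installation would actually pull from."""
    if airgap:
        # Air gap installations always pull from the local registry:
        # the upstream registry portion of the reference is replaced.
        _, _, repo_and_tag = image.partition("/")
        return f"{LOCAL_REGISTRY}/{repo_and_tag}"
    if no_proxy:
        # noProxy leaves the online reference unchanged (it is already
        # proxied, or it is an intentionally public image).
        return image
    return f"{PROXY_DOMAIN}/proxy/{APP_SLUG}/{image}"

print(rewrite_image("docker.io/library/busybox:1.36", airgap=True))
# → 10.128.0.11:5000/library/busybox:1.36
```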
Limitations and known issues
Embedded Cluster installations in air gap environments have the following limitations and known issues:
- If you pass ?airgap=true to the replicated.app endpoint but an air gap bundle is not built for the latest release, the API does not return a 404. Instead, it returns the tarball without the air gap bundle (that is, with the installer and the license in it, as for online installations).

- Images used by Helm extensions must not refer to a multi-architecture image by digest. Air gap bundles include only x64 images, and the digest for the x64 image differs from the digest for the multi-architecture image, preventing Kubernetes from locating the image in the bundle. The ingress-nginx/ingress-nginx chart is an example of a chart that does this. For an example of how to set digests to empty strings and pull by tag only, see extensions in Embedded Cluster Config.

- Embedded Cluster loads images for Helm extensions directly into containerd so that they are available without internet access. However, if an image used by a Helm extension has Always set as the image pull policy, Kubernetes tries to pull the image from the internet. If necessary, use the Helm values to configure IfNotPresent as the image pull policy to ensure that the extension works in air gap environments.

- On the channel release history page, the Download air gap bundle, Copy download URL, and View bundle contents links pertain to the application air gap bundle only, not the Embedded Cluster bundle.
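For the digest and pull policy limitations above, a workaround in an extension's Helm values looks roughly like the following for ingress-nginx. The controller.image value paths follow the upstream ingress-nginx chart and should be verified against the chart version you ship:

```yaml
# Embedded Cluster Config fragment: work around the air gap limitations
# above for an ingress-nginx extension. Value paths follow the upstream
# ingress-nginx chart; verify them for your chart version.
extensions:
  helmCharts:
    - chart:
        name: ingress-nginx
        chartVersion: "4.11.3"
      releaseName: ingress-nginx
      namespace: ingress-nginx
      values: |
        controller:
          image:
            digest: ""        # pull by tag only; the multi-arch digest
            digestChroot: ""  # is not present in the air gap bundle
            pullPolicy: IfNotPresent
```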
Add Helm chart extensions
If your application requires certain components deployed before the application and as part of the cluster itself, add them as extensions in the Embedded Cluster Config. For example, you can add a Helm extension to deploy an ingress controller. You can add extensions for Helm charts that you own or for third-party charts.
To add Helm extensions:
- In the Embedded Cluster Config, add the Helm chart to the extensions key.

- If you support air gap installations, configure each of your extensions so that they resolve correctly for both online and air gap installations. See Add support for air gap installations on this page.

- Save the release and promote it to the channel that you use for testing internally.

- Install with Embedded Cluster in a development environment to test. See Online installation with Embedded Cluster or Air gap installation with Embedded Cluster.
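As a minimal sketch, an extension entry in the Embedded Cluster Config follows the same shape as the examples earlier in this topic; the chart name, version, release name, and namespace here are placeholders:

```yaml
# Embedded Cluster Config: deploy an ingress controller before the
# application. The chart details are placeholders.
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 3.0.0-beta.1+k8s-1.34
  extensions:
    helmCharts:
      - chart:
          name: ingress
          chartVersion: "1.2.3"
        releaseName: ingress
        namespace: ingress
```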
Serve installation assets using the Vendor API
To install with Embedded Cluster, your end customers need to download the Embedded Cluster installer binary and their license. Air gap installations also require an air gap bundle. End customers can download all these installation assets using a curl command by following the installation steps available in the Replicated Enterprise Portal.
However, some vendors already have a portal where their customers can log in to access documentation or download artifacts. In cases like this, you can serve the Embedded Cluster installation assets yourself using the Replicated Vendor API. This removes the need for customers to download assets from the Replicated app service using a curl command during installation.
To serve Embedded Cluster installation assets with the Vendor API:
- If you have not done so already, create an API token for the Vendor API. See Use the Vendor API v3.

- Call the Get an Embedded Cluster release endpoint to download the assets needed to install your application with Embedded Cluster. Your customers must copy the binary and their license to the machine where they will install your application.

  Note the following:

  - (Recommended) Provide the customerId query parameter so that the downloaded tarball includes the customer's license. This mirrors what the Replicated app service returns when a customer downloads the binary directly and is the most useful option. Excluding the customerId is useful if you plan to distribute the license separately.

  - If you do not provide any query parameters, this endpoint downloads the Embedded Cluster binary for the latest release on the specified channel. You can provide the channelSequence query parameter to download the binary for a particular release.
Distribute the NVIDIA GPU Operator with Embedded Cluster
Distributing the NVIDIA GPU Operator with Embedded Cluster is not an officially supported feature from Replicated. However, it is a common use case.
The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs. For more information about this operator, see the NVIDIA GPU Operator documentation.
Include the NVIDIA GPU Operator and configure containerd options
You can include the NVIDIA GPU Operator in your release as an additional Helm chart, or using Embedded Cluster Helm extensions. For information about adding Helm extensions, see extensions in Embedded Cluster Config.
Using the NVIDIA GPU Operator with Embedded Cluster requires configuring the containerd options in the operator as follows:
    # Embedded Cluster Config
    extensions:
      helm:
        repositories:
          - name: nvidia
            url: https://nvidia.github.io/gpu-operator
        charts:
          - name: gpu-operator
            chartname: nvidia/gpu-operator
            namespace: gpu-operator
            version: "v24.9.1"
            values: |
              # configure the containerd options
              toolkit:
                env:
                  - name: CONTAINERD_CONFIG
                    value: /etc/k0s/containerd.d/nvidia.toml
                  - name: CONTAINERD_SOCKET
                    value: /run/k0s/containerd.sock
containerd known issue
When you configure the containerd options as shown earlier on this page, the NVIDIA GPU Operator automatically creates the required configurations in the /etc/k0s/containerd.d/nvidia.toml file. It is not necessary to create this file manually, or modify any other configuration on the hosts.
If you include the NVIDIA GPU Operator as a Helm extension, remove any existing containerd services from the host before installing with Embedded Cluster. This includes services deployed by Docker. If any containerd services are present on the host, the NVIDIA GPU Operator will generate an invalid containerd config, causing the installation to fail. For more information, see Installation failure when NVIDIA GPU Operator is included as Helm extension in Troubleshooting Embedded Cluster.
This is the result of a known issue with v24.9.x of the NVIDIA GPU Operator. For more information about the known issue, see container-toolkit does not modify the containerd config correctly when there are multiple instances of the containerd binary in the nvidia-container-toolkit repository in GitHub.