[View a larger version of this image](/images/admin-console-add-node.png)
1. Either on the Admin Console **Nodes** screen that is displayed during installation or in the **Add a Node** dialog, select one or more roles for the new node that you will join. Copy the join command.
Note the following:
* If the Embedded Cluster Config [roles](/reference/embedded-config#roles) key is not configured, all new nodes joined to the cluster are assigned the `controller` role by default. The `controller` role designates nodes that run the Kubernetes control plane. Controller nodes can also run other workloads, such as application or Replicated KOTS workloads.
* Roles are not updated or changed after a node is added. If you need to change a node's role, reset the node and add it again with the new role.
* For multi-node clusters with high availability (HA), at least three `controller` nodes are required. You can assign both the `controller` role and one or more `custom` roles to the same node. For more information about creating HA clusters with Embedded Cluster, see [Enable High Availability for Multi-Node Clusters (Alpha)](#ha) below.
* To add non-controller or _worker_ nodes that do not run the Kubernetes control plane, select one or more `custom` roles for the node and deselect the `controller` role.
1. Do one of the following to make the Embedded Cluster installation assets available on the machine that you will join to the cluster:
* **For online (internet-connected) installations**: SSH onto the machine that you will join. Then, use the same commands that you ran during installation to download and untar the Embedded Cluster installation assets on the machine. See [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
* **For air gap installations with limited or no outbound internet access**: On a machine that has internet access, download the Embedded Cluster installation assets (including the air gap bundle) using the same command that you ran during installation. See [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap). Then, move the downloaded assets to the air-gapped machine that you will join, and untar.
:::important
The Embedded Cluster installation assets on each node must all be the same version. If you use a different version than what is installed elsewhere in the cluster, the cluster will not be stable. To download a specific version of the Embedded Cluster assets, select a version in the **Embedded cluster install instructions** dialog.
:::
1. On the machine that you will join to the cluster, run the join command that you copied from the Admin Console.
**Example:**
```bash
sudo ./APP_SLUG join 10.128.0.32:30000 TxXboDstBAamXaPdleSK7Lid
```
**Air Gap Example:**
```bash
sudo ./APP_SLUG join --airgap-bundle APP_SLUG.airgap 10.128.0.32:30000 TxXboDstBAamXaPdleSK7Lid
```
1. In the Admin Console, either on the installation **Nodes** screen or on the **Cluster Management** page, verify that the node appears. Wait for the node's status to change to Ready.
1. Repeat these steps for each node you want to add.
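To confirm from the command line that all joined nodes are present and Ready, you can open a shell with kubectl access on a controller node. This is a minimal sketch; `APP_SLUG` is a placeholder for the application slug, as in the commands above.
```bash
# Open a shell with kubectl access to the cluster (run on a controller node)
sudo ./APP_SLUG shell

# List all nodes and confirm that each reports a Ready status
kubectl get nodes
```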
## Enable High Availability for Multi-Node Clusters (Alpha) {#ha}
Multi-node clusters are not highly available by default. The first node of the cluster is special and holds important data for Kubernetes and KOTS, such that the loss of this node would be catastrophic for the cluster. Enabling high availability (HA) requires that at least three controller nodes are present in the cluster. Users can enable HA when joining the third node.
:::important
High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. For more information about this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com).
:::
### HA Architecture
The following diagram shows the architecture of an HA multi-node Embedded Cluster installation:

[View a larger version of this image](/images/embedded-architecture-multi-node-ha.png)
As shown in the diagram above, in HA installations with Embedded Cluster:
* A single replica of the Embedded Cluster Operator is deployed and runs on a controller node.
* A single replica of the KOTS Admin Console is deployed and runs on a controller node.
* Three replicas of rqlite are deployed in the kotsadm namespace. Rqlite is used by KOTS to store information such as support bundles, version history, application metadata, and other small amounts of data needed to manage the application.
* For installations that include disaster recovery, the Velero pod is deployed on one node. The Velero Node Agent runs on each node in the cluster. The Node Agent is a Kubernetes DaemonSet that performs backup and restore tasks such as creating snapshots and transferring data during restores.
* For air gap installations, two replicas of the air gap image registry are deployed.
Any Helm [`extensions`](/reference/embedded-config#extensions) that you include in the Embedded Cluster Config are installed in the cluster. Whether a given extension is deployed with high availability depends on the chart and how it is configured.
For more information about the Embedded Cluster built-in extensions, see [Built-In Extensions](/vendor/embedded-overview#built-in-extensions) in _Embedded Cluster Overview_.
### Requirements
Enabling high availability has the following requirements:
* High availability is supported with Embedded Cluster 1.4.1 or later.
* High availability is supported only for clusters where at least three nodes with the `controller` role are present.
### Limitations
Enabling high availability has the following limitations:
* High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. For more information about this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com).
* The `--enable-ha` flag serves as a feature flag during the Alpha phase. In the future, the prompt about migrating to high availability will display automatically if the cluster is not yet HA and you are adding a third or subsequent controller node.
* HA multi-node clusters use rqlite to store support bundles up to 100 MB in size. Bundles over 100 MB can cause rqlite to crash and restart.
### Best Practices for High Availability
Consider the following best practices and recommendations for creating HA clusters:
* At least three _controller_ nodes that run the Kubernetes control plane are required for HA. This is because clusters use a quorum system, in which more than half the nodes must be up and reachable. In clusters with three controller nodes, the Kubernetes control plane can continue to operate if one node fails because a quorum can still be reached by the remaining two nodes. By default, with Embedded Cluster, all new nodes added to a cluster are controller nodes. For information about customizing the `controller` node role, see [roles](/reference/embedded-config#roles) in _Embedded Cluster Config_.
* Always use an odd number of controller nodes in HA clusters. Using an odd number of controller nodes ensures that the cluster can make decisions efficiently with quorum calculations. Clusters with an odd number of controller nodes also avoid split-brain scenarios where the cluster runs as two, independent groups of nodes, resulting in inconsistencies and conflicts.
* You can have any number of _worker_ nodes in HA clusters. Worker nodes do not run the Kubernetes control plane, but can run workloads such as application or Replicated KOTS workloads.
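As a quick illustration of the quorum math: quorum is `floor(n/2) + 1`, so a three-controller cluster has a quorum of two and can tolerate one controller failure, while a five-controller cluster has a quorum of three and can tolerate two.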
### Create a Multi-Node HA Cluster
To create a multi-node HA cluster:
1. Set up a cluster with at least two controller nodes. You can do an online (internet-connected) or air gap installation. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap).
1. SSH onto a third node that you want to join to the cluster as a controller.
1. Run the join command provided in the Admin Console **Cluster Management** tab and pass the `--enable-ha` flag. For example:
```bash
sudo ./APP_SLUG join --enable-ha 10.128.0.80:30000 tI13KUWITdIerfdMcWTA4Hpf
```
1. After the third node joins the cluster, type `y` in response to the prompt asking if you want to enable high availability.

[View a larger version of this image](/images/embedded-cluster-ha-prompt.png)
1. Wait for the migration to complete.
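To verify that the migration completed, you can check the components described in HA Architecture above. This is a minimal sketch; run the commands from a shell opened with `sudo ./APP_SLUG shell` on a controller node.
```bash
# Confirm that at least three controller nodes are present and Ready
kubectl get nodes

# Confirm that three rqlite replicas are running in the kotsadm namespace
kubectl get pods -n kotsadm | grep rqlite
```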
---
# Update Custom TLS Certificates in Embedded Cluster Installations
This topic describes how to update custom TLS certificates in Replicated Embedded Cluster installations.
## Update Custom TLS Certificates
Users can provide custom TLS certificates with Embedded Cluster installations and can update TLS certificates through the Admin Console.
:::important
Adding the `acceptAnonymousUploads` annotation temporarily creates a vulnerability for an attacker to maliciously upload TLS certificates. After TLS certificates have been uploaded, the vulnerability is closed again.
Replicated recommends that you complete this upload process quickly to minimize the vulnerability risk.
:::
To upload a new custom TLS certificate in Embedded Cluster installations:
1. SSH onto a controller node where Embedded Cluster is installed. Then, run the following command to start a shell so that you can access the cluster with kubectl:
```bash
sudo ./APP_SLUG shell
```
Where `APP_SLUG` is the unique slug of the installed application.
1. In the shell, run the following command to restore the ability to upload new TLS certificates by adding the `acceptAnonymousUploads` annotation:
```bash
kubectl -n kotsadm annotate secret kotsadm-tls acceptAnonymousUploads=1 --overwrite
```
1. Run the following command to get the name of the kurl-proxy server:
```bash
kubectl get pods -A | grep kurl-proxy | awk '{print $2}'
```
:::note
This server is named `kurl-proxy`, but is used in both Embedded Cluster and kURL installations.
:::
1. Run the following command to delete the kurl-proxy pod. The pod automatically restarts after the command runs.
```bash
kubectl delete pods PROXY_SERVER
```
Replace `PROXY_SERVER` with the name of the kurl-proxy server that you got in the previous step.
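If you prefer a single command, steps 3 and 4 can be combined. This sketch assumes the `kubectl get pods -A` output format of namespace in the first column and pod name in the second:
```bash
# Find the kurl-proxy pod in any namespace and delete it; it restarts automatically
kubectl get pods -A | grep kurl-proxy | awk '{print "-n " $1 " " $2}' | xargs kubectl delete pod
```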
1. After the pod has restarted, go to `http://`

| Field Name | Description |
|---|---|
| Owner & Repository | Enter the owner and repository name where the commit will be made. |
| Branch | Enter the branch name or leave the field blank to use the default branch. |
| Path | Enter the folder name in the repository where the application deployment file will be committed. If you leave this field blank, Replicated KOTS creates a folder for you. However, the best practice is to manually create a folder in the repository labeled with the application name and dedicated to the deployment file only. |
[View a larger version of this image](/images/registry-settings.png)
The following table describes the fields:
| Field | Description |
|---|---|
| Hostname | Specify a registry domain that uses the Docker V2 protocol. |
| Username | Specify the username for the domain. |
| Password | Specify the password for the domain. |
| Registry Namespace | Specify the registry namespace. The registry namespace is the path between the registry and the image name. For example, `my.registry.com/namespace/image:tag`. For air gap environments, this setting overwrites the registry namespace where images were pushed when KOTS was installed. |
| Disable Pushing Images to Registry | (Optional) Select this option to disable KOTS from pushing images. Make sure that an external process is configured to push images to your registry instead. Your images are still read from your registry when the application is deployed. |
| Field | Description |
|---|---|
| Hostname | Specify a registry domain that uses the Docker V2 protocol. |
| Username | Specify the username for the domain. |
| Password | Specify the password for the domain. |
| Registry Namespace | Specify the registry namespace. For air gap environments, this setting overwrites the registry namespace that you pushed images to when you installed KOTS. |
[View a larger version of this image](/images/embedded-cluster-install-dialog-airgap.png)
1. (Optional) For **Select a version**, select a specific application version to install. By default, the latest version is selected.
1. SSH onto the machine where you will install.
1. On a machine with internet access, run the curl command to download the air gap installation assets as a `.tgz`.
1. Move the downloaded `.tgz` to the air-gapped machine where you will install.
1. On your air-gapped machine, untar the `.tgz` following the instructions provided in the **Embedded Cluster installation instructions** dialog. This will produce three files:
* The installer
* The license
* The air gap bundle (`APP_SLUG.airgap`)
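For example, the extraction step might look like the following. The archive name is hypothetical; use the exact commands shown in the dialog.
```bash
# Extract the Embedded Cluster air gap installation assets (hypothetical archive name)
tar -xvzf APP_SLUG.tgz

# Expect the installer, the license, and the air gap bundle
ls
# APP_SLUG  license.yaml  APP_SLUG.airgap
```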
1. Install the application with the installation command copied from the **Embedded Cluster installation instructions** dialog:
```bash
sudo ./APP_SLUG install --license license.yaml --airgap-bundle APP_SLUG.airgap
```
Where `APP_SLUG` is the unique application slug.
:::note
Embedded Cluster supports installation options such as installing behind a proxy and changing the data directory used by Embedded Cluster. For the list of flags supported with the Embedded Cluster `install` command, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
:::
1. When prompted, enter a password for accessing the KOTS Admin Console.
The installation command takes a few minutes to complete. During installation, Embedded Cluster completes tasks to prepare the cluster and install KOTS in the cluster. Embedded Cluster also automatically runs a default set of [_host preflight checks_](/vendor/embedded-using#about-host-preflight-checks) which verify that the environment meets the requirements for the installer.
**Example output:**
```bash
? Enter an Admin Console password: ********
? Confirm password: ********
✔ Host files materialized!
✔ Running host preflights
✔ Node installation finished!
✔ Storage is ready!
✔ Embedded Cluster Operator is ready!
✔ Admin Console is ready!
✔ Additional components are ready!
Visit the Admin Console to configure and install gitea-kite: http://104.155.145.60:30000
```
At this point, the cluster is provisioned and the Admin Console is deployed, but the application is not yet installed.
1. Go to the URL provided in the output to access the Admin Console.
1. On the Admin Console landing page, click **Start**.
1. On the **Secure the Admin Console** screen, review the instructions and click **Continue**. In your browser, follow the instructions that were provided on the **Secure the Admin Console** screen to bypass the warning.
1. On the **Certificate type** screen, either select **Self-signed** to continue using the self-signed Admin Console certificate or click **Upload your own** to upload your own private key and certificate.
By default, a self-signed TLS certificate is used to secure communication between your browser and the Admin Console. You will see a warning in your browser every time you access the Admin Console unless you upload your own certificate.
1. On the login page, enter the Admin Console password that you created during installation and click **Log in**.
1. On the **Nodes** page, you can view details about the machine where you installed, including its node role, status, CPU, and memory.
Optionally, add nodes to the cluster before deploying the application. For more information about joining nodes, see [Manage Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes). Click **Continue**.
1. On the **Configure [App Name]** screen, complete the fields for the application configuration options. Click **Continue**.
1. On the **Validate the environment & deploy [App Name]** screen, address any warnings or failures identified by the preflight checks and then click **Deploy**.
Preflight checks are conformance tests that run against the target namespace and cluster to ensure that the environment meets the minimum requirements to support the application.
The Admin Console dashboard opens.
On the Admin Console dashboard, the application status changes from Missing to Unavailable while the application is being installed. When the installation is complete, the status changes to Ready. For example:

[View a larger version of this image](/images/gitea-ec-ready.png)
---
# Automate Installation with Embedded Cluster
This topic describes how to install an application with Replicated Embedded Cluster from the command line, without needing to access the Replicated KOTS Admin Console.
## Overview
A common use case for installing with Embedded Cluster from the command line is to automate installation, such as performing headless installations as part of CI/CD pipelines.
With headless installation, you provide all the necessary installation assets, such as the license file and the application config values, with the installation command rather than through the Admin Console UI. Any preflight checks defined for the application run automatically during headless installations from the command line rather than being displayed in the Admin Console.
## Prerequisite
Create a ConfigValues YAML file to define the configuration values for the application release. The ConfigValues file allows you to pass the configuration values for an application from the command line with the install command, rather than through the Admin Console UI. For air-gapped environments, ensure that the ConfigValues file can be accessed from the installation environment.
The KOTS ConfigValues file includes the fields that are defined in the KOTS Config custom resource for an application release, along with the user-supplied and default values for each field, as shown in the example below:
```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
spec:
  values:
    text_config_field_name:
      default: Example default value
      value: Example user-provided value
    boolean_config_field_name:
      value: "1"
    password_config_field_name:
      valuePlaintext: examplePassword
```
To get the ConfigValues file from an installed application instance:
1. Install the target release in a development environment. You can either install the release with Replicated Embedded Cluster or install in an existing cluster with KOTS. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Online Installation in Existing Clusters](/enterprise/installing-existing-cluster).
1. Depending on the installer that you used, do one of the following to get the ConfigValues for the installed instance:
* **For Embedded Cluster installations**: In the Admin Console, go to the **View files** tab. In the filetree, go to **upstream > userdata** and open **config.yaml**, as shown in the image below:

[View a larger version of this image](/images/admin-console-view-files-configvalues.png)
* **For KOTS installations in an existing cluster**: Run the `kubectl kots get config` command to view the generated ConfigValues file:
```bash
kubectl kots get config --namespace APP_NAMESPACE --decrypt
```
Where:
* `APP_NAMESPACE` is the cluster namespace where KOTS is running.
* The `--decrypt` flag decrypts all configuration fields with `type: password`. In the downloaded ConfigValues file, the decrypted value is stored in a `valuePlaintext` field.
The output of the `kots get config` command shows the contents of the ConfigValues file. For more information about the `kots get config` command, including additional flags, see [kots get config](/reference/kots-cli-get-config).
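For example, to save the generated ConfigValues to a file that you can pass to the install command later (the namespace and filename here are placeholders):
```bash
kubectl kots get config --namespace my-app-namespace --decrypt > configvalues.yaml
```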
## Online (Internet-Connected) Installation
To install with Embedded Cluster in an online environment:
1. Follow the steps provided in the Vendor Portal to download and untar the Embedded Cluster installation assets. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
1. Run the following command to install:
```bash
sudo ./APP_SLUG install --license PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--admin-console-password ADMIN_CONSOLE_PASSWORD
```
Replace:
* `APP_SLUG` with the unique slug for the application.
* `PATH_TO_LICENSE` with the path to the customer license.
* `ADMIN_CONSOLE_PASSWORD` with a password for accessing the Admin Console.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
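For example, a fully headless online installation for a hypothetical application with the slug `my-app` might look like the following (the file paths and password are placeholders):
```bash
sudo ./my-app install --license ./license.yaml \
  --config-values ./configvalues.yaml \
  --admin-console-password 'MySecurePassword123'
```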
## Air Gap Installation
To install with Embedded Cluster in an air-gapped environment:
1. Follow the steps provided in the Vendor Portal to download and untar the Embedded Cluster air gap installation assets. For more information, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap).
1. Ensure that the Embedded Cluster installation assets are available on the air-gapped machine, then run the following command to install:
```bash
sudo ./APP_SLUG install --license PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--admin-console-password ADMIN_CONSOLE_PASSWORD \
--airgap-bundle PATH_TO_AIRGAP_BUNDLE
```
Replace:
* `APP_SLUG` with the unique slug for the application.
* `PATH_TO_LICENSE` with the path to the customer license.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `ADMIN_CONSOLE_PASSWORD` with a password for accessing the Admin Console.
* `PATH_TO_AIRGAP_BUNDLE` with the path to the Embedded Cluster `.airgap` bundle for the release.
---
# Embedded Cluster Installation Requirements
This topic lists the installation requirements for Replicated Embedded Cluster. Ensure that the installation environment meets these requirements before attempting to install.
## System Requirements
* Linux operating system
* x86-64 architecture
* systemd
* At least 2GB of memory and 2 CPU cores
* The disk on the host must have a maximum P99 write latency of 10 ms. This supports etcd performance and stability. For more information about the disk write latency requirements for etcd, see [Disks](https://etcd.io/docs/latest/op-guide/hardware/#disks) in _Hardware recommendations_ and [What does the etcd warning “failed to send out heartbeat on time” mean?](https://etcd.io/docs/latest/faq/) in the etcd documentation.
* The data directory used by Embedded Cluster must have 40Gi or more of total space and be less than 80% full. By default, the data directory is `/var/lib/embedded-cluster`. The directory can be changed by passing the `--data-dir` flag with the Embedded Cluster `install` command, as shown in the example after this list. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
Note that in addition to the primary data directory, Embedded Cluster creates directories and files in the following locations:
- `/etc/cni`
- `/etc/k0s`
- `/opt/cni`
- `/opt/containerd`
- `/run/calico`
- `/run/containerd`
- `/run/k0s`
- `/sys/fs/cgroup/kubepods`
- `/sys/fs/cgroup/system.slice/containerd.service`
- `/sys/fs/cgroup/system.slice/k0scontroller.service`
- `/usr/libexec/k0s`
- `/var/lib/calico`
- `/var/lib/cni`
- `/var/lib/containers`
- `/var/lib/kubelet`
- `/var/log/calico`
- `/var/log/containers`
- `/var/log/embedded-cluster`
- `/var/log/pods`
- `/usr/local/bin/k0s`
* (Online installations only) Access to replicated.app and proxy.replicated.com, or to the custom domain configured for each of these services
* Embedded Cluster is based on k0s, so all k0s system requirements and external runtime dependencies apply. See [System requirements](https://docs.k0sproject.io/stable/system-requirements/) and [External runtime dependencies](https://docs.k0sproject.io/stable/external-runtime-deps/) in the k0s documentation.
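For example, to install with a non-default data directory as described in the data directory requirement above, you might pass the `--data-dir` flag. This is a minimal sketch; `APP_SLUG` is a placeholder and the directory shown is hypothetical.
```bash
sudo ./APP_SLUG install --license license.yaml --data-dir /data/embedded-cluster
```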
## Port Requirements
This section lists the ports used by Embedded Cluster. These ports must be open and available for both single- and multi-node installations.
#### Ports Used by Local Processes
The following ports must be open and available for use by local processes running on the same node. It is not necessary to create firewall openings for these ports.
* 2379/TCP
* 7443/TCP
* 9099/TCP
* 10248/TCP
* 10257/TCP
* 10259/TCP
#### Ports Required for Bidirectional Communication Between Nodes
The following ports are used for bidirectional communication between nodes.
For multi-node installations, create firewall openings between nodes for these ports.
For single-node installations, ensure that there are no other processes using these ports. Although there is no communication between nodes in single-node installations, these ports are still required.
* 2380/TCP
* 4789/UDP
* 6443/TCP
* 9091/TCP
* 9443/TCP
* 10249/TCP
* 10250/TCP
* 10256/TCP
#### Admin Console Port
The KOTS Admin Console requires that port 30000/TCP is open and available. Create a firewall opening for port 30000/TCP so that the Admin Console can be accessed by the end user.
Additionally, port 30000 must be accessible by nodes joining the cluster.
If port 30000 is occupied, you can select a different port for the Admin Console during installation. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
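For example, on hosts that use firewalld, you might open the Admin Console port as follows. The same approach applies to the node-to-node ports listed above for multi-node installations.
```bash
sudo firewall-cmd --permanent --add-port=30000/tcp
sudo firewall-cmd --reload
```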
#### LAM Port
The Local Artifact Mirror (LAM) requires that port 50000/TCP is open and available.
If port 50000 is occupied, you can select a different port for the LAM during installation. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
## Firewall Openings for Online Installations with Embedded Cluster {#firewall}
The domains for the services listed in the table below need to be accessible from servers performing online installations. No outbound internet access is required for air gap installations.
For services hosted at domains owned by Replicated, the table below includes a link to the list of IP addresses for the domain at [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json) in GitHub. Note that the IP addresses listed in the `replicatedhq/ips` repository also include IP addresses for some domains that are _not_ required for installation.
For any third-party services hosted at domains not owned by Replicated, consult the third-party's documentation for the IP address range for each domain, as needed.
| Domain | Description |
|---|---|
| `proxy.replicated.com` | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
| `replicated.app` | Upstream application YAML and metadata are pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
| `registry.replicated.com` * | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
| Port | Protocol |
|---|---|
| 6443 | TCP |
| 10250 | TCP |
| 9443 | TCP |
| 2380 | TCP |
| 4789 | UDP |
[View a larger version of this image](/images/embedded-cluster-install-dialog.png)
1. (Optional) In the **Embedded Cluster install instructions** dialog, under **Select a version**, select a specific application version to install. By default, the latest version is selected.
1. SSH onto the machine where you will install.
1. Run the first command in the **Embedded Cluster install instructions** dialog to download the installation assets as a `.tgz`.
1. Run the second command to extract the `.tgz`. This will produce the following files:
* The installer
* The license
1. Run the third command to install the release:
```bash
sudo ./APP_SLUG install --license LICENSE_FILE
```
Where:
* `APP_SLUG` is the unique slug for the application.
* `LICENSE_FILE` is the customer license.
[View a larger version of this image](/images/release-history-link.png)

[View a larger version of this image](/images/release-history-build-airgap-bundle.png)
1. After the build completes, download the bundle. Ensure that you can access the downloaded bundle from the environment where you will install the application.
1. (Optional) View the contents of the downloaded bundle:
```bash
tar -zxvf AIRGAP_BUNDLE
```
Where `AIRGAP_BUNDLE` is the filename for the `.airgap` bundle that you downloaded.
1. Download the `kotsadm.tar.gz` air gap bundle from the [Releases](https://github.com/replicatedhq/kots/releases) page in the kots repository in GitHub. Ensure that you can access the downloaded bundle from the environment where you will install the application.
:::note
The version of the `kotsadm.tar.gz` air gap bundle used must be compatible with the version of the `.airgap` bundle for the given application release.
:::
1. Install the KOTS CLI. See [Manually Download and Install](/reference/kots-cli-getting-started#manually-download-and-install) in _Installing the KOTS CLI_.
:::note
The versions of the KOTS CLI and the `kotsadm.tar.gz` bundle must match. You can check the version of the KOTS CLI with `kubectl kots version`.
:::
1. Extract the KOTS Admin Console container images from the `kotsadm.tar.gz` bundle and push the images to your private registry:
```
kubectl kots admin-console push-images ./kotsadm.tar.gz REGISTRY_HOST \
--registry-username RW_USERNAME \
--registry-password RW_PASSWORD
```
Replace:
* `REGISTRY_HOST` with the hostname for the private registry. For example, `private.registry.host` or `my-registry.example.com/my-namespace`.
* `RW_USERNAME` and `RW_PASSWORD` with the username and password for an account that has read and write access to the private registry.
:::note
KOTS does not store or reuse these read-write credentials.
:::
1. Install the KOTS Admin Console using the images that you pushed in the previous step:
```shell
kubectl kots install APP_NAME \
--kotsadm-registry REGISTRY_HOST \
--registry-username RO_USERNAME \
--registry-password RO_PASSWORD
```
Replace:
* `APP_NAME` with a name for the application. This is the unique name that KOTS will use to refer to the application that you install.
* `REGISTRY_HOST` with the same hostname for the private registry where you pushed the Admin Console images.
* `RO_USERNAME` and `RO_PASSWORD` with the username and password for an account that has read-only access to the private registry.
:::note
KOTS stores these read-only credentials in a Kubernetes secret in the same namespace where the Admin Console is installed.
KOTS uses these credentials to pull the images. To allow KOTS to pull images, the credentials are automatically created as an imagePullSecret on all of the Admin Console Pods.
:::
1. When prompted by the `kots install` command:
1. Provide the namespace where you want to install both KOTS and the application.
1. Create a new password for logging in to the Admin Console.
**Example**:
```shell
$ kubectl kots install application-name
Enter the namespace to deploy to: application-name
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
Enter a new password to be used for the Admin Console: ••••••••
• Waiting for Admin Console to be ready ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
```
After the `kots install` command completes, it creates a port forward to the Admin Console. The Admin Console is exposed internally in the cluster and can only be accessed using a port forward.
1. Access the Admin Console on port 8800. If the port forward is active, go to [http://localhost:8800](http://localhost:8800) to access the Admin Console.
If you need to reopen the port forward to the Admin Console, run the following command:
```shell
kubectl kots admin-console -n NAMESPACE
```
Replace `NAMESPACE` with the namespace where KOTS is installed.
1. Log in with the password that you created during installation.
1. Upload your license file.
1. Upload the `.airgap` application air gap bundle.
1. On the config screen, complete the fields for the application configuration options and then click **Continue**.
1. On the **Preflight checks** page, the application-specific preflight checks run automatically. Preflight checks are conformance tests that run against the target namespace and cluster to ensure that the environment meets the minimum requirements to support the application. Click **Deploy**.
:::note
Replicated recommends that you address any warnings or failures, rather than dismissing them. Preflight checks help ensure that your environment meets the requirements for application deployment.
:::
1. (Minimal RBAC Only) If you are installing with minimal role-based access control (RBAC), KOTS recognizes if the preflight checks failed due to insufficient privileges. When this occurs, the Admin Console displays a kubectl CLI command that you can run to execute the preflight checks manually. The Admin Console then automatically displays the results of the preflight checks. Click **Deploy**.

[View a larger version of this image](/images/kubectl-preflight-command.png)
The Admin Console dashboard opens.
On the Admin Console dashboard, the application status changes from Missing to Unavailable while the Deployment is being created. When the installation is complete, the status changes to Ready. For example:

[View a larger version of this image](/images/kotsadm-dashboard-graph.png)
---
# Install with the KOTS CLI
This topic describes how to install an application with Replicated KOTS in an existing cluster using the KOTS CLI.
## Overview
You can use the KOTS CLI to install an application with Replicated KOTS. A common use case for installing from the command line is to automate installation, such as performing headless installations as part of CI/CD pipelines.
To install with the KOTS CLI, you provide all the necessary installation assets, such as the license file and the application config values, with the installation command rather than through the Admin Console UI. Any preflight checks defined for the application run automatically from the CLI rather than being displayed in the Admin Console.
The following shows an example of the output from the `kots install` command:
```
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
• Waiting for Admin Console to be ready ✓
• Waiting for installation to complete ✓
• Waiting for preflight checks to complete ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
• Go to http://localhost:8888 to access the application
```
## Prerequisite
Create a ConfigValues YAML file to define the configuration values for the application release. The ConfigValues file allows you to pass the configuration values for an application from the command line with the install command, rather than through the Admin Console UI. For air-gapped environments, ensure that the ConfigValues file can be accessed from the installation environment.
The KOTS ConfigValues file includes the fields that are defined in the KOTS Config custom resource for an application release, along with the user-supplied and default values for each field, as shown in the example below:
```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
spec:
  values:
    text_config_field_name:
      default: Example default value
      value: Example user-provided value
    boolean_config_field_name:
      value: "1"
    password_config_field_name:
      valuePlaintext: examplePassword
```
To get the ConfigValues file from an installed application instance:
1. Install the target release in a development environment. You can either install the release with Replicated Embedded Cluster or install in an existing cluster with KOTS. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Online Installation in Existing Clusters](/enterprise/installing-existing-cluster).
1. Depending on the installer that you used, do one of the following to get the ConfigValues for the installed instance:
* **For Embedded Cluster installations**: In the Admin Console, go to the **View files** tab. In the filetree, go to **upstream > userdata** and open **config.yaml**, as shown in the image below:

[View a larger version of this image](/images/admin-console-view-files-configvalues.png)
* **For KOTS installations in an existing cluster**: Run the `kubectl kots get config` command to view the generated ConfigValues file:
```bash
kubectl kots get config --namespace APP_NAMESPACE --decrypt
```
Where:
* `APP_NAMESPACE` is the cluster namespace where KOTS is running.
* The `--decrypt` flag decrypts all configuration fields with `type: password`. In the downloaded ConfigValues file, the decrypted value is stored in a `valuePlaintext` field.
The output of the `kots get config` command shows the contents of the ConfigValues file. For more information about the `kots get config` command, including additional flags, see [kots get config](/reference/kots-cli-get-config).
## Online (Internet-Connected) Installation
To install with KOTS in an online existing cluster:
1. Install the KOTS CLI:
```
curl https://kots.io/install | bash
```
For more installation options, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
1. Install the application:
```bash
kubectl kots install APP_NAME \
--shared-password PASSWORD \
--license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--namespace NAMESPACE \
--no-port-forward
```
Replace:
* `APP_NAME` with a name for the application. This is the unique name that KOTS will use to refer to the application that you install.
* `PASSWORD` with a shared password for accessing the Admin Console.
* `PATH_TO_LICENSE` with the path to your license file. See [Downloading Customer Licenses](/vendor/licenses-download). For information about how to download licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `NAMESPACE` with the namespace where you want to install both the application and KOTS.
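For example, a headless installation of a hypothetical application named `my-app` might look like the following (the password, paths, and namespace are placeholders):
```bash
kubectl kots install my-app \
  --shared-password 'MySecurePassword123' \
  --license-file ./license.yaml \
  --config-values ./configvalues.yaml \
  --namespace my-app-namespace \
  --no-port-forward
```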
## Air Gap Installation {#air-gap}
To install with KOTS in an air-gapped existing cluster:
1. Install the KOTS CLI. See [Manually Download and Install](/reference/kots-cli-getting-started#manually-download-and-install) in _Installing the KOTS CLI_.
1. Download the `kotsadm.tar.gz` air gap bundle from the [Releases](https://github.com/replicatedhq/kots/releases) page in the kots repository in GitHub. Ensure that you can access the downloaded bundle from the environment where you will install the application.
:::note
The version of the `kotsadm.tar.gz` air gap bundle used must be compatible with the version of the `.airgap` bundle for the given application release.
:::
:::note
The versions of the KOTS CLI and the `kotsadm.tar.gz` bundle must match. You can check the version of the KOTS CLI with `kubectl kots version`.
:::
1. Extract the KOTS Admin Console container images from the `kotsadm.tar.gz` bundle and push the images to your private registry:
```
kubectl kots admin-console push-images ./kotsadm.tar.gz REGISTRY_HOST \
--registry-username RW_USERNAME \
--registry-password RW_PASSWORD
```
Replace:
* `REGISTRY_HOST` with the hostname for the private registry. For example, `private.registry.host` or `my-registry.example.com/my-namespace`.
* `RW_USERNAME` and `RW_PASSWORD` with the username and password for an account that has read and write access to the private registry.
:::note
KOTS does not store or reuse these read-write credentials.
:::
1. Install the application:
```bash
kubectl kots install APP_NAME \
--shared-password PASSWORD \
--license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--airgap-bundle PATH_TO_AIRGAP_BUNDLE \
--namespace NAMESPACE \
--kotsadm-registry REGISTRY_HOST \
--registry-username RO_USERNAME \
--registry-password RO_PASSWORD \
--no-port-forward
```
Replace:
* `APP_NAME` with a name for the application. This is the unique name that KOTS will use to refer to the application that you install.
* `PASSWORD` with a shared password for accessing the Admin Console.
* `PATH_TO_LICENSE` with the path to your license file. See [Downloading Customer Licenses](/vendor/licenses-download). For information about how to download licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `PATH_TO_AIRGAP_BUNDLE` with the path to the `.airgap` bundle for the application release. You can build and download the air gap bundle for a release in the [Vendor Portal](https://vendor.replicated.com) on the **Release history** page for the channel where the release is promoted.
Alternatively, for information about building and downloading air gap bundles with the Vendor API v3, see [Trigger airgap build for a channel's release](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbuild) and [Get airgap bundle download URL for the active release on the channel](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl) in the Vendor API v3 documentation.
* `NAMESPACE` with the namespace where you want to install both the application and KOTS.
* `REGISTRY_HOST` with the same hostname for the private registry where you pushed the Admin Console images.
* `RO_USERNAME` and `RO_PASSWORD` with the username and password for an account that has read-only access to the private registry.
:::note
KOTS stores these read-only credentials in a Kubernetes secret in the same namespace where the Admin Console is installed.
KOTS uses these credentials to pull the images. To allow KOTS to pull images, the credentials are automatically created as an imagePullSecret on all of the Admin Console Pods.
:::
## (Optional) Access the Admin Console
By default, during installation, KOTS automatically opens localhost port 8800 to provide access to the Admin Console. Using the `--no-port-forward` flag with the `kots install` command prevents KOTS from creating a port forward to the Admin Console.
After you install with the `--no-port-forward` flag, you can optionally create a port forward so that you can log in to the Admin Console in a browser window.
To access the Admin Console:
1. If you installed in a VM where you cannot open a browser window, forward a port on your local machine to `localhost:8800` on the remote VM using the SSH client:
```bash
ssh -L LOCAL_PORT:localhost:8800 USERNAME@IP_ADDRESS
```
Replace:
* `LOCAL_PORT` with the port on your local machine to forward. For example, `9900` or `8800`.
* `USERNAME` with your username for the VM.
* `IP_ADDRESS` with the IP address for the VM.
**Example**:
The following example shows using the SSH client to forward port 8800 on your local machine to `localhost:8800` on the remote VM.
```bash
ssh -L 8800:localhost:8800 user@ip-addr
```
1. Run the following KOTS CLI command to open localhost port 8800, which forwards to the Admin Console service:
```bash
kubectl kots admin-console --namespace NAMESPACE
```
Replace `NAMESPACE` with the namespace where the Admin Console was installed.
For more information about the `kots admin-console` command, see [admin-console](/reference/kots-cli-admin-console-index) in the _KOTS CLI_ documentation.
1. Open a browser window and go to `https://localhost:8800`.
1. Log in to the Admin Console using the password that you created as part of the `kots install` command.
---
# Online Installation in Existing Clusters with KOTS
This topic describes how to use Replicated KOTS to install an application in an existing Kubernetes cluster.
## Prerequisites
Complete the following prerequisites:
* Ensure that your cluster meets the minimum system requirements. See [Minimum System Requirements](/enterprise/installing-general-requirements#minimum-system-requirements) in _Installation Requirements_.
* Ensure that you have at least the minimum RBAC permissions in the cluster required to install KOTS. See [RBAC Requirements](/enterprise/installing-general-requirements#rbac-requirements) in _Installation Requirements_.
:::note
If you manually created RBAC resources for KOTS as described in [Namespace-scoped RBAC Requirements](/enterprise/installing-general-requirements#namespace-scoped), include both the `--ensure-rbac=false` and `--skip-rbac-check` flags when you run the `kots install` command, as shown in the example after this list.
These flags prevent KOTS from checking for or attempting to create a Role with `* * *` permissions in the namespace. For more information about these flags, see [install](/reference/kots-cli-install) or [admin-console upgrade](/reference/kots-cli-admin-console-upgrade).
:::
* Review the options available with the `kots install` command before installing. The `kots install` command includes several optional flags to support different installation use cases. For a list of options, see [install](/reference/kots-cli-install) in the _KOTS CLI_ documentation.
* Download your license file. Ensure that you can access the downloaded license file from the environment where you will install the application. See [Downloading Customer Licenses](/vendor/licenses-download).
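For example, if you manually created the namespace-scoped RBAC resources as described in the note above, the install command might look like the following (the application name and namespace are placeholders):
```bash
kubectl kots install my-app \
  --namespace my-app-namespace \
  --ensure-rbac=false \
  --skip-rbac-check
```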
## Install {#online}
To install KOTS and the application in an existing cluster:
1. Run one of these commands to install the Replicated KOTS CLI and KOTS. As part of the command, you also specify a name and version for the application that you will install.
* **For the latest application version**:
```shell
curl https://kots.io/install | bash
kubectl kots install APP_NAME
```
* **For a specific application version**:
```shell
curl https://kots.io/install | bash
kubectl kots install APP_NAME --app-version-label=VERSION_LABEL
```
Replace, where applicable:
* `APP_NAME` with the name of the application. The `APP_NAME` is included in the installation command that your vendor gave you. This is a unique identifier that KOTS will use to refer to the application that you install.
* `VERSION_LABEL` with the label for the version of the application to install. For example, `--app-version-label=3.0.1`.
**Examples:**
```shell
curl https://kots.io/install | bash
kubectl kots install application-name
```
```shell
curl https://kots.io/install | bash
kubectl kots install application-name --app-version-label=3.0.1
```
1. When prompted by the `kots install` command:
1. Provide the namespace where you want to install both KOTS and the application.
1. Create a new password for logging in to the Admin Console.
**Example**:
```shell
$ kubectl kots install application-name
Enter the namespace to deploy to: application-name
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
Enter a new password to be used for the Admin Console: ••••••••
• Waiting for Admin Console to be ready ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
```
After the `kots install` command completes, it creates a port forward to the Admin Console. The Admin Console is exposed internally in the cluster and can only be accessed using a port forward.
1. Access the Admin Console on port 8800. If the port forward is active, go to [http://localhost:8800](http://localhost:8800) to access the Admin Console.
If you need to reopen the port forward to the Admin Console, run the following command:
```shell
kubectl kots admin-console -n NAMESPACE
```
Replace `NAMESPACE` with the namespace where KOTS is installed.
1. Log in with the password that you created during installation.
1. Upload your license file.
1. On the config screen, complete the fields for the application configuration options and then click **Continue**.
1. On the **Preflight checks** page, the application-specific preflight checks run automatically. Preflight checks are conformance tests that run against the target namespace and cluster to ensure that the environment meets the minimum requirements to support the application. Click **Deploy**.
:::note
Replicated recommends that you address any warnings or failures, rather than dismissing them. Preflight checks help ensure that your environment meets the requirements for application deployment.
:::
1. (Minimal RBAC Only) If you are installing with minimal role-based access control (RBAC), KOTS recognizes if the preflight checks failed due to insufficient privileges. When this occurs, the Admin Console displays a kubectl CLI command that you can run to execute the preflight checks manually. The Admin Console then automatically displays the results of the preflight checks. Click **Deploy**.

[View a larger version of this image](/images/kubectl-preflight-command.png)
The Admin Console dashboard opens.
On the Admin Console dashboard, the application status changes from Missing to Unavailable while the Deployment is being created. When the installation is complete, the status changes to Ready. For example:

[View a larger version of this image](/images/kotsadm-dashboard-graph.png)
---
# KOTS Installation Requirements
This topic describes the requirements for installing in a Kubernetes cluster with Replicated KOTS.
:::note
This topic does not include any requirements specific to the application. Ensure that you meet any additional requirements for the application before installing.
:::
## Supported Browsers
The following table lists the browser requirements for the Replicated KOTS Admin Console with the latest version of KOTS.
| Browser | Support |
|----------------------|-------------|
| Chrome | 66+ |
| Firefox | 58+ |
| Opera | 53+ |
| Edge | 80+ |
| Safari (Mac OS only) | 13+ |
| Internet Explorer | Unsupported |
## Kubernetes Version Compatibility
Each release of KOTS maintains compatibility with the current Kubernetes version and the two most recent versions at the time of its release. This includes support for all patch releases of the corresponding Kubernetes versions.
Kubernetes versions 1.29 and earlier are end-of-life (EOL). For more information about Kubernetes versions, see [Release History](https://kubernetes.io/releases/) in the Kubernetes documentation.
Replicated recommends using a version of KOTS that is compatible with Kubernetes 1.30 and higher.
| KOTS Versions | Kubernetes Compatibility |
|------------------------|-----------------------------|
| 1.117.0 and later | 1.31, 1.30 |
| 1.109.1 to 1.116.1 | 1.30 |
## Minimum System Requirements
To install KOTS in an existing cluster, your environment must meet the following minimum requirements:
* **KOTS Admin Console minimum requirements**: Clusters that have LimitRanges specified must support the following minimum requirements for the Admin Console:
* **CPU resources and memory**: The Admin Console pod requests 100m CPU resources and 100Mi memory.
* **Disk space**: The Admin Console requires a minimum of 5GB of disk space on the cluster for persistent storage, including:
* **4GB for S3-compatible object store**: The Admin Console requires 4GB for an S3-compatible object store to store application archives, support bundles, and snapshots that are configured to use a host path and NFS storage destination. By default, KOTS deploys MinIO to satisfy this object storage requirement. During deployment, MinIO is configured with a randomly generated `AccessKeyID` and `SecretAccessKey`, and only exposed as a ClusterIP on the overlay network.
:::note
You can optionally install KOTS without MinIO by passing `--with-minio=false` with the `kots install` command. This installs KOTS as a StatefulSet using a persistent volume (PV) for storage. For more information, see [Installing KOTS in Existing Clusters Without Object Storage](/enterprise/installing-stateful-component-requirements).
:::
* **1GB for rqlite PersistentVolume**: The Admin Console requires 1GB for a rqlite StatefulSet to store version history, application metadata, and other small amounts of data needed to manage the application(s). During deployment, the rqlite component is secured with a randomly generated password, and only exposed as a ClusterIP on the overlay network.
* **Supported operating systems**: The following are the supported operating systems for nodes:
* Linux AMD64
* Linux ARM64
* **Available StorageClass**: The cluster must have an existing StorageClass available. KOTS creates the required stateful components using the default StorageClass in the cluster. For more information, see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) in the Kubernetes documentation.
* **Kubernetes version compatibility**: The version of Kubernetes running on the cluster must be compatible with the version of KOTS that you use to install the application. This compatibility requirement does not include any specific and additional requirements defined by the software vendor for the application.
For more information about the versions of Kubernetes that are compatible with each version of KOTS, see [Kubernetes Version Compatibility](#kubernetes-version-compatibility) above.
* **OpenShift version compatibility**: For Red Hat OpenShift clusters, the version of OpenShift must use a supported Kubernetes version. For more information about supported Kubernetes versions, see [Kubernetes Version Compatibility](#kubernetes-version-compatibility) above.
* **Port forwarding**: To support port forwarding, Kubernetes clusters require that the SOcket CAT (socat) package is installed on each node.
If the package is not installed on each node in the cluster, you see the following error message when the installation script attempts to connect to the Admin Console: `unable to do port forwarding: socat not found`.
To check if the package that provides socat is installed, you can run `which socat`. If the package is installed, the `which socat` command prints the full path to the socat executable file. For example, `/usr/bin/socat`.
If the output of the `which socat` command is `socat not found`, then you must install the package that provides the socat command. The name of this package can vary depending on the node's operating system.
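For example, you might check for socat and install it as follows. The package manager command shown is for Debian or Ubuntu; the package name can vary on other operating systems.
```bash
# Check whether socat is installed
which socat

# Install it if missing (Debian/Ubuntu example)
sudo apt-get install -y socat
```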
## RBAC Requirements
The user that runs the installation command must have at least the minimum role-based access control (RBAC) permissions that are required by KOTS. If the user does not have the required RBAC permissions, then an error message displays: `Current user has insufficient privileges to install Admin Console`.
The required RBAC permissions vary depending on if the user attempts to install KOTS with cluster-scoped access or namespace-scoped access:
* [Cluster-scoped RBAC Requirements (Default)](#cluster-scoped)
* [Namespace-scoped RBAC Requirements](#namespace-scoped)
### Cluster-scoped RBAC Requirements (Default) {#cluster-scoped}
By default, KOTS requires cluster-scoped access. With cluster-scoped access, a Kubernetes ClusterRole and ClusterRoleBinding are created that grant KOTS access to all resources across all namespaces in the cluster.
To install KOTS with cluster-scoped access, the user must meet the following RBAC requirements:
* The user must be able to create workloads, ClusterRoles, and ClusterRoleBindings.
* The user must have cluster-admin permissions to create namespaces and assign RBAC roles across the cluster.
### Namespace-scoped RBAC Requirements {#namespace-scoped}
KOTS can be installed with namespace-scoped access rather than the default cluster-scoped access. With namespace-scoped access, a Kubernetes Role and RoleBinding are automatically created that grant KOTS permissions only in the namespace where it is installed.
:::note
Depending on the application, namespace-scoped access for KOTS is required, optional, or not supported. Contact your software vendor for application-specific requirements.
:::
To install or upgrade KOTS with namespace-scoped access, the user must have _one_ of the following permission levels in the target namespace:
* Wildcard Permissions (Default)
* Minimum KOTS RBAC Permissions
See the sections below for more information.
#### Wildcard Permissions (Default)
By default, when namespace-scoped access is enabled, KOTS attempts to automatically create the following Role to acquire wildcard (`* * *`) permissions in the target namespace:
```yaml
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "Role"
metadata:
  name: "kotsadm-role"
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```
To support this default behavior, the user must also have `* * *` permissions in the target namespace.
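To verify this ahead of time (a minimal sketch; `TARGET_NAMESPACE` is a placeholder for the namespace where KOTS will be installed), you can check for wildcard permissions with `kubectl auth can-i`:
```bash
# Prints "yes" if the current user has wildcard (* * *) permissions in the namespace
kubectl auth can-i '*' '*' --namespace TARGET_NAMESPACE
```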
#### Minimum KOTS RBAC Permissions
In some cases, it is not possible to grant the user `* * *` permissions in the target namespace. For example, an organization might have security policies that prevent this level of permissions.
If the user installing or upgrading KOTS cannot be granted `* * *` permissions in the namespace, then they can instead request the minimum RBAC permissions required by KOTS. Using the minimum KOTS RBAC permissions also requires manually creating a ServiceAccount, Role, and RoleBinding for KOTS, rather than allowing KOTS to automatically create a Role with `* * *` permissions.
To use the minimum KOTS RBAC permissions to install or upgrade:
1. Ensure that the user has the minimum RBAC permissions required by KOTS. The following lists the minimum RBAC permissions:
```yaml
- apiGroups: [""]
  resources: ["configmaps", "persistentvolumeclaims", "pods", "secrets", "services", "limitranges"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["daemonsets", "deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io", "extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["namespaces", "endpoints", "serviceaccounts"]
  verbs: ["get"]
- apiGroups: ["authorization.k8s.io"]
  resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "list", "watch", "create"]
- apiGroups: ["batch"]
  resources: ["jobs/status"]
  verbs: ["get", "list", "watch"]
```
:::note
The minimum RBAC requirements can vary slightly depending on the cluster's Kubernetes distribution and the version of KOTS. Contact your software vendor if you have the required RBAC permissions listed above and you see an error related to RBAC during installation or upgrade.
:::
1. Save the following ServiceAccount, Role, and RoleBinding to a single YAML file, such as `rbac.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm-role
rules:
- apiGroups: [""]
  resources: ["configmaps", "persistentvolumeclaims", "pods", "secrets", "services", "limitranges"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["daemonsets", "deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.k8s.io", "extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["namespaces", "endpoints", "serviceaccounts"]
  verbs: ["get"]
- apiGroups: ["authorization.k8s.io"]
  resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "list", "watch", "create"]
- apiGroups: ["batch"]
  resources: ["jobs/status"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kotsadm-role
subjects:
- kind: ServiceAccount
  name: kotsadm
```
1. If the application contains any Custom Resource Definitions (CRDs), add a rule for the CRDs to the Role in the YAML file that you created in the previous step, granting as many permissions as possible: `["get", "list", "watch", "create", "update", "patch", "delete"]`.
:::note
Contact your software vendor for information about any CRDs that are included in the application.
:::
**Example**
```yaml
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
1. Run the following command to create the RBAC resources for KOTS in the namespace:
```
kubectl apply -f RBAC_YAML_FILE -n TARGET_NAMESPACE
```
Replace:
* `RBAC_YAML_FILE` with the name of the YAML file that you created, which contains the ServiceAccount, Role, and RoleBinding.
* `TARGET_NAMESPACE` with the namespace where the user will install KOTS.
:::note
After manually creating these RBAC resources, the user must include both the `--ensure-rbac=false` and `--skip-rbac-check` flags when installing or upgrading. These flags prevent KOTS from checking for or attempting to create a Role with `* * *` permissions in the namespace. For more information, see [Prerequisites](installing-existing-cluster#prerequisites) in _Online Installation in Existing Clusters with KOTS_.
:::
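For reference, the following is a hedged sketch of an install command that includes these flags. `APP_NAME` and `TARGET_NAMESPACE` are placeholders; follow your software vendor's installation instructions for the full set of required flags:
```bash
kubectl kots install APP_NAME \
  --namespace TARGET_NAMESPACE \
  --ensure-rbac=false \
  --skip-rbac-check
```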
## Compatible Image Registries {#registries}
A private image registry is required for air gap installations with KOTS in existing clusters. You provide the credentials for a compatible private registry during installation. You can also optionally configure a local private image registry for use with installations in online (internet-connected) environments.
Private registry settings can be changed at any time. For more information, see [Configuring Local Image Registries](image-registry-settings).
KOTS has been tested for compatibility with the following registries:
- Docker Hub
:::note
To avoid the November 20, 2020 Docker Hub rate limits, use the `kots docker ensure-secret` CLI command. For more information, see [Avoiding Docker Hub Rate Limits](image-registry-rate-limits).
:::
- Quay
- Amazon Elastic Container Registry (ECR)
- Google Container Registry (GCR)
- Azure Container Registry (ACR)
- Harbor
- Sonatype Nexus
## Firewall Openings for Online Installations with KOTS in an Existing Cluster {#firewall}
The domains for the services listed in the table below need to be accessible from servers performing online installations. No outbound internet access is required for air gap installations.
For services hosted at domains owned by Replicated, the table below includes a link to the list of IP addresses for the domain at [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json) in GitHub. Note that the IP addresses listed in the `replicatedhq/ips` repository also include IP addresses for some domains that are _not_ required for installation.
For any third-party services hosted at domains not owned by Replicated, consult the third-party's documentation for the IP address range for each domain, as needed.
| Domain | Description |
|---|---|
| Docker Hub | Some dependencies of KOTS are hosted as public images in Docker Hub. The required domains for this service are `index.docker.io`, `cdn.auth0.com`, `*.docker.io`, and `*.docker.com`. |
| `proxy.replicated.com` * | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
| `replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
| `registry.replicated.com` ** | Some applications host private images in the Replicated registry at this domain. The on-prem docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
| `kots.io` | Requests are made to this domain when installing the Replicated KOTS CLI. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. |
| `github.com` | Requests are made to this domain when installing the Replicated KOTS CLI. For information about retrieving GitHub IP addresses, see [About GitHub's IP addresses](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses) in the GitHub documentation. |
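As a quick, hedged way to verify outbound access from an installation server, you can test HTTPS reachability for each Replicated-owned domain listed above (this checks basic connectivity only, not every dependency):
```bash
# Print whether each required domain is reachable over HTTPS from this server
for domain in proxy.replicated.com replicated.app registry.replicated.com kots.io github.com; do
  if curl -sSIo /dev/null --connect-timeout 5 "https://$domain"; then
    echo "reachable: $domain"
  else
    echo "blocked:   $domain"
  fi
done
```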
[View a larger version of this image](/images/release-history-link.png)

[View a larger version of this image](/images/release-history-build-airgap-bundle.png)
1. After the build completes, download the bundle. Ensure that you can access the downloaded bundle from the environment where you will install the application.
1. (Optional) View the contents of the downloaded bundle:
```bash
tar -zxvf AIRGAP_BUNDLE
```
Where `AIRGAP_BUNDLE` is the filename for the `.airgap` bundle that you downloaded.
1. Download the `.tar.gz` air gap bundle for the kURL installer, which includes the components needed to run the kURL cluster and install the application with KOTS. kURL air gap bundles can be downloaded from the channel where the given release is promoted:
* To download the kURL air gap bundle for the Stable channel:
```bash
export REPLICATED_APP=APP_SLUG
curl -LS https://k8s.kurl.sh/bundle/$REPLICATED_APP.tar.gz -o $REPLICATED_APP.tar.gz
```
Where `APP_SLUG` is the unique slug for the application.
* To download the kURL bundle for channels other than Stable:
```bash
replicated channel inspect CHANNEL
```
Replace `CHANNEL` with the exact name of the target channel, which can include uppercase letters or special characters, such as `Unstable` or `my-custom-channel`.
In the output of this command, copy the curl command with the air gap URL.
1. In your installation environment, extract the contents of the kURL `.tar.gz` bundle that you downloaded:
```bash
tar -xvzf $REPLICATED_APP.tar.gz
```
1. Run one of the following commands to install in air gap mode:
- For a regular installation, run:
```bash
cat install.sh | sudo bash -s airgap
```
- For high availability, run:
```bash
cat install.sh | sudo bash -s airgap ha
```
1. (HA Installation Only) If you are installing in HA mode and did not already preconfigure a load balancer, you are prompted during the installation. Do one of the following:
- If you are using the internal load balancer, leave the prompt blank and proceed with the installation.
- If you are using an external load balancer, pass the load balancer address.
1. After the installation command finishes, note the `Kotsadm` and `Login with password (will not be shown again)` fields in the output of the command. You use these to log in to the Admin Console.
The following shows an example of the `Kotsadm` and `Login with password (will not be shown again)` fields in the output of the installation command:
```
Installation
Complete ✔
Kotsadm: http://10.128.0.35:8800
Login with password (will not be shown again): 3Hy8WYYid
This password has been set for you by default. It is recommended that you change
this password; this can be done with the following command:
kubectl kots reset-password default
```
1. Go to the address provided in the `Kotsadm` field in the output of the installation command. For example, `Kotsadm: http://34.171.140.123:8800`.
1. On the Bypass Browser TLS warning page, review the information about how to bypass the browser TLS warning, and then click **Continue to Setup**.
1. On the HTTPS page, do one of the following:
- To use the self-signed TLS certificate only, enter the hostname (required) if you are using the identity service. If you are not using the identity service, the hostname is optional. Click **Skip & continue**.
- To use a custom certificate only, enter the hostname (required) if you are using the identity service. If you are not using the identity service, the hostname is optional. Then upload a private key and SSL certificate to secure communication between your browser and the Admin Console. Click **Upload & continue**.
1. Log in to the Admin Console with the password that was provided in the `Login with password (will not be shown again):` field in the output of the installation command.
1. Upload your license file.
1. Upload the `.airgap` bundle for the release that you downloaded in an earlier step.
1. On the **Preflight checks** page, the application-specific preflight checks run automatically. Preflight checks are conformance tests that run against the target namespace and cluster to ensure that the environment meets the minimum requirements to support the application. Click **Deploy**.
:::note
Replicated recommends that you address any warnings or failures, rather than dismissing them. Preflight checks help ensure that your environment meets the requirements for application deployment.
:::
1. (Minimal RBAC Only) If you are installing with minimal role-based access control (RBAC), KOTS recognizes if the preflight checks failed due to insufficient privileges. When this occurs, the Admin Console displays a kubectl CLI preflight command that lets you run the preflight checks manually. The Admin Console then automatically displays the results of the preflight checks. Click **Deploy**.

[View a larger version of this image](/images/kubectl-preflight-command.png)
The Admin Console dashboard opens.
On the Admin Console dashboard, the application status changes from Missing to Unavailable while the Deployment is being created. When the installation is complete, the status changes to Ready.

[View a larger version of this image](/images/gitea-ec-ready.png)
1. (Recommended) Change the Admin Console login password:
1. Click the menu in the top right corner of the Admin Console, then click **Change password**.
1. Enter a new password in the dialog, and click **Change Password** to save.
Replicated strongly recommends that you change the password from the default provided during installation in a kURL cluster. For more information, see [Change an Admin Console Password](auth-changing-passwords).
1. Add primary and secondary nodes to the cluster. You might add nodes to either meet application requirements or to support your usage of the application. See [Adding Nodes to Embedded Clusters](cluster-management-add-nodes).
---
# Install with kURL from the Command Line
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to install an application with Replicated kURL from the command line.
## Overview
You can use the command line to install an application with Replicated kURL. A common use case for installing from the command line is to automate installation, such as performing headless installations as part of CI/CD pipelines.
To install from the command line, you provide all the necessary installation assets, such as the license file and the application config values, with the installation command rather than through the Admin Console UI. Any preflight checks defined for the application run automatically during headless installations from the command line rather than being displayed in the Admin Console.
## Prerequisite
Create a ConfigValues YAML file to define the configuration values for the application release. The ConfigValues file allows you to pass the configuration values for an application from the command line with the install command, rather than through the Admin Console UI. For air-gapped environments, ensure that the ConfigValues file can be accessed from the installation environment.
The KOTS ConfigValues file includes the fields that are defined in the KOTS Config custom resource for an application release, along with the user-supplied and default values for each field, as shown in the example below:
```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
spec:
  values:
    text_config_field_name:
      default: Example default value
      value: Example user-provided value
    boolean_config_field_name:
      value: "1"
    password_config_field_name:
      valuePlaintext: examplePassword
```
To get the ConfigValues file from an installed application instance:
1. Install the target release in a development environment. You can either install the release with Replicated Embedded Cluster or install in an existing cluster with KOTS. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Online Installation in Existing Clusters](/enterprise/installing-existing-cluster).
1. Depending on the installer that you used, do one of the following to get the ConfigValues for the installed instance:
* **For Embedded Cluster installations**: In the Admin Console, go to the **View files** tab. In the filetree, go to **upstream > userdata** and open **config.yaml**, as shown in the image below:

[View a larger version of this image](/images/admin-console-view-files-configvalues.png)
* **For KOTS installations in an existing cluster**: Run the `kubectl kots get config` command to view the generated ConfigValues file:
```bash
kubectl kots get config --namespace APP_NAMESPACE --decrypt
```
Where:
* `APP_NAMESPACE` is the cluster namespace where KOTS is running.
* The `--decrypt` flag decrypts all configuration fields with `type: password`. In the downloaded ConfigValues file, the decrypted value is stored in a `valuePlaintext` field.
The output of the `kots get config` command shows the contents of the ConfigValues file. For more information about the `kots get config` command, including additional flags, see [kots get config](/reference/kots-cli-get-config).
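For example, to save the output to a file that you can later pass to the install command (a minimal sketch using the flags described above; `APP_NAMESPACE` and the filename are placeholders):
```bash
# Write the decrypted ConfigValues to a local file for use with --config-values
kubectl kots get config --namespace APP_NAMESPACE --decrypt > configvalues.yaml
```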
## Online (Internet-Connected) Installation
When you use the KOTS CLI to install an application in a kURL cluster, you first run the kURL installation script to provision the cluster and automatically install KOTS in the cluster. Then, you can run the `kots install` command to install the application.
To install with kURL on a VM or bare metal server:
1. Create the kURL cluster:
```bash
curl -sSL https://k8s.kurl.sh/APP_NAME | sudo bash
```
1. Install the application in the cluster:
```bash
kubectl kots install APP_NAME \
--shared-password PASSWORD \
--license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--namespace default \
--no-port-forward
```
Replace:
* `APP_NAME` with a name for the application. This is the unique name that KOTS will use to refer to the application that you install.
* `PASSWORD` with a shared password for accessing the Admin Console.
* `PATH_TO_LICENSE` with the path to your license file. See [Downloading Customer Licenses](/vendor/licenses-download). For information about how to download licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `default` in the `--namespace` flag with the namespace where Replicated kURL installed Replicated KOTS when creating the cluster, if different. By default, kURL installs KOTS in the `default` namespace.
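The following is a hedged end-to-end example with hypothetical values. The app name `myapp`, the password, and the file paths are placeholders, not values from your environment:
```bash
# Provision the kURL cluster, which also installs KOTS
curl -sSL https://k8s.kurl.sh/myapp | sudo bash

# Install the application headlessly with a license and preset config values
kubectl kots install myapp \
  --shared-password 'use-a-strong-password' \
  --license-file ./license.yaml \
  --config-values ./configvalues.yaml \
  --namespace default \
  --no-port-forward
```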
## Air Gap Installation
To install in an air-gapped kURL cluster:
1. Download the kURL `.tar.gz` air gap bundle:
```bash
export REPLICATED_APP=APP_SLUG
curl -LS https://k8s.kurl.sh/bundle/$REPLICATED_APP.tar.gz -o $REPLICATED_APP.tar.gz
```
Where `APP_SLUG` is the unique slug for the application.
1. In your installation environment, extract the contents of the kURL `.tar.gz` bundle that you downloaded:
```bash
tar -xvzf $REPLICATED_APP.tar.gz
```
1. Create the kURL cluster:
```
cat install.sh | sudo bash -s airgap
```
1. Install the application:
```bash
kubectl kots install APP_NAME \
--shared-password PASSWORD \
--license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--airgap-bundle PATH_TO_AIRGAP_BUNDLE \
--namespace default \
--no-port-forward
```
Replace:
* `APP_NAME` with a name for the application. This is the unique name that KOTS will use to refer to the application that you install.
* `PASSWORD` with a shared password for accessing the Admin Console.
* `PATH_TO_LICENSE` with the path to your license file. See [Downloading Customer Licenses](/vendor/licenses-download). For information about how to download licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `PATH_TO_AIRGAP_BUNDLE` with the path to the `.airgap` bundle for the application release. You can build and download the air gap bundle for a release in the [Vendor Portal](https://vendor.replicated.com) on the **Release history** page for the channel where the release is promoted.
Alternatively, for information about building and downloading air gap bundles with the Vendor API v3, see [Trigger airgap build for a channel's release](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbuild) and [Get airgap bundle download URL for the active release on the channel](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl) in the Vendor API v3 documentation.
* `default` in the `--namespace` flag with the namespace where Replicated kURL installed Replicated KOTS when creating the cluster, if different. By default, kURL installs KOTS in the `default` namespace.
---
# kURL Installation Requirements
This topic lists the installation requirements for Replicated kURL. Ensure that the installation environment meets these requirements before attempting to install.
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
## Minimum System Requirements
* 4 CPUs or equivalent per machine
* 8GB of RAM per machine
* 40GB of disk space per machine
* TCP ports 2379, 2380, 6443, 6783, and 10250 open between cluster nodes (see the example firewall commands after this list)
* UDP port 8472 open between cluster nodes
:::note
If the Kubernetes installer specification uses the deprecated kURL [Weave add-on](https://kurl.sh/docs/add-ons/weave), UDP ports 6783 and 6784 must be open between cluster nodes. Reach out to your software vendor for more information.
:::
* Root access is required
* (Rook Only) The Rook add-on version 1.4.3 and later requires block storage on each node in the cluster. For more information about how to enable block storage for Rook, see [Block Storage](https://kurl.sh/docs/add-ons/rook/#block-storage) in _Rook Add-On_ in the kURL documentation.
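As a hedged example of opening the required node-to-node ports listed above, the following uses firewalld. Adjust for iptables, ufw, or cloud security groups as appropriate, and defer to the kURL networking requirements linked below for your environment:
```bash
# TCP ports required between cluster nodes
sudo firewall-cmd --permanent --add-port={2379,2380,6443,6783,10250}/tcp
# UDP port required between cluster nodes
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload
```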
## Additional System Requirements
You must meet the additional kURL system requirements when applicable:
- **Supported Operating Systems**: For supported operating systems, see [Supported Operating Systems](https://kurl.sh/docs/install-with-kurl/system-requirements#supported-operating-systems) in the kURL documentation.
- **kURL Dependencies Directory**: kURL installs additional dependencies in the directory /var/lib/kurl and the directory requirements must be met. See [kURL Dependencies Directory](https://kurl.sh/docs/install-with-kurl/system-requirements#kurl-dependencies-directory) in the kURL documentation.
- **Networking Requirements**: Networking requirements include firewall openings, host firewall rules, and port availability. See [Networking Requirements](https://kurl.sh/docs/install-with-kurl/system-requirements#networking-requirements) in the kURL documentation.
- **High Availability Requirements**: If you are operating a cluster with high availability, see [High Availability Requirements](https://kurl.sh/docs/install-with-kurl/system-requirements#high-availability-requirements) in the kURL documentation.
- **Cloud Disk Performance**: For a list of cloud VM instance and disk combinations that are known to provide sufficient performance for etcd and pass the write latency preflight, see [Cloud Disk Performance](https://kurl.sh/docs/install-with-kurl/system-requirements#cloud-disk-performance) in the kURL documentation.
## Firewall Openings for Online Installations with kURL {#firewall}
The domains for the services listed in the table below need to be accessible from servers performing online installations. No outbound internet access is required for air gap installations.
For services hosted at domains owned by Replicated, the table below includes a link to the list of IP addresses for the domain at [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json) in GitHub. Note that the IP addresses listed in the `replicatedhq/ips` repository also include IP addresses for some domains that are _not_ required for installation.
For any third-party services hosted at domains not owned by Replicated, consult the third-party's documentation for the IP address range for each domain, as needed.
| Domain | Description |
|---|---|
| Docker Hub | Some dependencies of KOTS are hosted as public images in Docker Hub. The required domains for this service are `index.docker.io`, `cdn.auth0.com`, `*.docker.io`, and `*.docker.com`. |
| `proxy.replicated.com` * | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
| `replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
| `registry.replicated.com` ** | Some applications host private images in the Replicated registry at this domain. The on-prem docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
| `k8s.kurl.sh`, `s3.kurl.sh` | kURL installation scripts and artifacts are served from [kurl.sh](https://kurl.sh). An application identifier is sent in a URL path, and bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `k8s.kurl.sh`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L34-L39) in GitHub. The range of IP addresses for `s3.kurl.sh` is the same as for the `kurl.sh` domain. For the range of IP addresses for `kurl.sh`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L28-L31) in GitHub. |
| `amazonaws.com` | `tar.gz` packages are downloaded from Amazon S3 during installations with kURL. For information about dynamically scraping the IP ranges to allowlist for accessing these packages, see [AWS IP address ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#aws-ip-download) in the AWS documentation. |
[View a larger version of this image](/images/kotsadm-dashboard-graph.png)
## Configure Prometheus Monitoring
For existing cluster installations with KOTS, users can install Prometheus in the cluster and then connect the Admin Console to the Prometheus endpoint to enable monitoring.
### Step 1: Install Prometheus in the Cluster {#configure-existing}
Replicated recommends that you use CoreOS's Kube-Prometheus distribution for installing and configuring highly available Prometheus on an existing cluster. For more information, see the [kube-prometheus](https://github.com/coreos/kube-prometheus) GitHub repository.
This repository collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
To install Prometheus using the recommended Kube-Prometheus distribution:
1. Clone the [kube-prometheus](https://github.com/coreos/kube-prometheus) repository to the device where there is access to the cluster.
1. Use `kubectl` to create the resources on the cluster:
```bash
# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
kubectl create -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl create -f manifests/
```
For advanced and cluster-specific configuration, you can customize Kube-Prometheus by compiling the manifests using jsonnet. For more information, see the [jsonnet website](https://jsonnet.org/).
For more information about advanced Kube-Prometheus configuration options, see [Customize Kube-Prometheus](https://github.com/coreos/kube-prometheus#customizing-kube-prometheus) in the kube-prometheus GitHub repository.
### Step 2: Connect to a Prometheus Endpoint
To view graphs on the Admin Console dashboard, provide the address of a Prometheus instance installed in the cluster.
To connect the Admin Console to a Prometheus endpoint:
1. On the Admin Console dashboard, under Monitoring, click **Configure Prometheus Address**.
1. Enter the address for the Prometheus endpoint in the text box and click **Save**.

Graphs appear on the dashboard shortly after saving the address.
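The following is a hedged example of locating the endpoint to enter, assuming the default kube-prometheus service name and namespace from Step 1:
```bash
# Confirm the Prometheus service created by kube-prometheus
kubectl get svc prometheus-k8s -n monitoring

# A typical in-cluster address to enter in the Admin Console:
#   http://prometheus-k8s.monitoring.svc.cluster.local:9090
```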
---
# Consume Prometheus Metrics Externally
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to consume Prometheus metrics in Replicated kURL clusters from a monitoring service that is outside the cluster.
For information about how to access Prometheus, Grafana, and Alertmanager, see [Accessing Dashboards Using Port Forwarding](/enterprise/monitoring-access-dashboards).
## Overview
The KOTS Admin Console can use the open source systems monitoring tool Prometheus to collect metrics on an application and the cluster where the application is installed. Prometheus components include the main Prometheus server, which scrapes and stores time series data, an Alertmanager for alerting on metrics, and Grafana for visualizing metrics. For more information about Prometheus, see [What is Prometheus?](https://prometheus.io/docs/introduction/overview/) in the Prometheus documentation.
The Admin Console exposes graphs with key metrics collected by Prometheus in the **Monitoring** section of the dashboard. By default, the Admin Console displays the following graphs:
* Cluster disk usage
* Pod CPU usage
* Pod memory usage
In addition to these default graphs, application developers can also expose business and application level metrics and alerts on the dashboard.
The following screenshot shows an example of the **Monitoring** section on the Admin Console dashboard with the Disk Usage, CPU Usage, and Memory Usage default graphs:
[View a larger version of this image](/images/kotsadm-dashboard-graph.png)
For kURL installations, if the [kURL Prometheus add-on](https://kurl.sh/docs/add-ons/prometheus) is included in the kURL installer spec, then the Prometheus monitoring system is installed alongside the application. No additional configuration is required to collect metrics and view any default and custom graphs on the Admin Console dashboard.
Prometheus is deployed in kURL clusters as a NodePort service named `prometheus-k8s` in the `monitoring` namespace. The `prometheus-k8s` service is exposed on the IP address for each node in the cluster at port 30900.
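For example, an external monitoring service or workstation can reach the Prometheus HTTP API through the NodePort. This is a hedged sketch; `NODE_IP` is a placeholder for the IP address of any cluster node that is reachable from your network:
```bash
# Query the Prometheus API through the NodePort service exposed on port 30900
curl -s "http://NODE_IP:30900/api/v1/query?query=up"
```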
You can run the following command to view the `prometheus-k8s` service in your cluster:
```
kubectl get services -l app=kube-prometheus-stack-prometheus -n monitoring
```
The output of the command includes details about the Prometheus service, including the type of service and the ports where the service is exposed. For example:
```
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
prometheus-k8s   NodePort   10.96.2.229   <none>        9090:30900/TCP   ...
```
| | Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet |
|---|---|---|---|---|---|---|
| Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
| Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
| Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
| Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
| Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. | |||||
| Resource Statuses | Aggregate Application Status |
|---|---|
| No status available for any resource | Missing |
| One or more resources Unavailable | Unavailable |
| One or more resources Degraded | Degraded |
| One or more resources Updating | Updating |
| All resources Ready | Ready |
[View a larger version of this image](/images/send-bundle-to-vendor.png)
1. (Optional) Click **Download bundle** to download the support bundle. You can send the bundle to your vendor for assistance.
---
# Perform Updates in Existing Clusters
This topic describes how to perform updates in existing cluster installations with Replicated KOTS. It includes information about how to update applications and the version of KOTS running in the cluster.
## Update an Application
You can perform an application update using the KOTS Admin Console or the KOTS CLI. You can also set up automatic updates. See [Configure Automatic Updates](/enterprise/updating-apps).
### Using the Admin Console
#### Online Environments
To perform an update from the Admin Console:
1. In the Admin Console, go to the **Version History** tab.
1. Click **Check for updates**.
A new upstream version displays in the list of available versions.
[View a larger version of this image](/images/new-version-available.png)
1. (Optional) When there are multiple versions of an application, you can compare
the changes between them by clicking **Diff releases** in the right corner.
You can review changes between any two arbitrary releases by clicking the icon in the header
of the release column. Select the two versions to compare, and click **Diff releases**
to show the relative changes between the two releases.
[View a larger version of this image](/images/diff-releases.png)
[View a larger version of this image](/images/new-changes.png)
1. (Optional) Click the **View preflight checks** icon to view or re-run the preflight checks.
[View a larger version of this image](/images/preflight-checks.png)
1. Return to the **Version History** tab and click **Deploy** next to the target version.
#### Air Gap Environments
To perform an air gap update from the Admin Console:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the channel where the target release is promoted to build and download the new `.airgap` bundle:
* If the **Automatically create airgap builds for newly promoted releases in this channel** setting is enabled on the channel, watch for the build status to complete.
* If automatic air gap builds are not enabled, go to the **Release history** page for the channel and build the air gap bundle manually.
[View a larger version of this image](/images/release-history-link.png)

[View a larger version of this image](/images/release-history-build-airgap-bundle.png)
1. After the build completes, download the bundle. Ensure that you can access the downloaded bundle from the environment where you will install the application.
1. (Optional) View the contents of the downloaded bundle:
```bash
tar -zxvf AIRGAP_BUNDLE
```
Where `AIRGAP_BUNDLE` is the filename for the `.airgap` bundle that you downloaded.
1. In the Admin Console, go to the **Version History** tab.
1. Click **Upload a new version**.
A new upstream version displays in the list of available versions.

1. (Optional) When there are multiple versions of an application, you can compare
the changes between them by clicking **Diff releases** in the right corner.
You can review changes between any two arbitrary releases by clicking the icon in the header
of the release column. Select the two versions to compare, and click **Diff releases**
to show the relative changes between the two releases.


1. (Optional) Click the **View preflight checks** icon to view or re-run the preflight checks.

1. Return to the **Version History** tab and click **Deploy** next to the target version.
### Using the KOTS CLI
You can use the KOTS CLI [upstream upgrade](/reference/kots-cli-upstream-upgrade) command to update an application in existing cluster installations.
#### Online Environments
To update an application in online environments:
```bash
kubectl kots upstream upgrade APP_SLUG -n ADMIN_CONSOLE_NAMESPACE
```
Where:
* `APP_SLUG` is the unique slug for the application. See [Get the Application Slug](/vendor/vendor-portal-manage-app#slug) in _Managing Applications_.
* `ADMIN_CONSOLE_NAMESPACE` is the namespace where the Admin Console is running.
:::note
Add the `--deploy` flag to automatically deploy this version.
:::
#### Air Gap Environments
To update an application in air gap environments:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the channel where the target release is promoted to build and download the new `.airgap` bundle:
* If the **Automatically create airgap builds for newly promoted releases in this channel** setting is enabled on the channel, watch for the build status to complete.
* If automatic air gap builds are not enabled, go to the **Release history** page for the channel and build the air gap bundle manually.
[View a larger version of this image](/images/release-history-link.png)

[View a larger version of this image](/images/release-history-build-airgap-bundle.png)
1. After the build completes, download the bundle. Ensure that you can access the downloaded bundle from the environment where you will install the application.
1. (Optional) View the contents of the downloaded bundle:
```bash
tar -zxvf AIRGAP_BUNDLE
```
Where `AIRGAP_BUNDLE` is the filename for the `.airgap` bundle that you downloaded.
1. Run the following command to update the application:
```bash
kubectl kots upstream upgrade APP_SLUG \
--airgap-bundle NEW_AIRGAP_BUNDLE \
--kotsadm-registry REGISTRY_HOST[/REGISTRY_NAMESPACE] \
--registry-username RO_USERNAME \
--registry-password RO_PASSWORD \
-n ADMIN_CONSOLE_NAMESPACE
```
Replace:
* `APP_SLUG` with the unique slug for the application. See [Get the Application Slug](/vendor/vendor-portal-manage-app#slug) in _Managing Applications_.
* `NEW_AIRGAP_BUNDLE` with the `.airgap` bundle for the target application version.
* `REGISTRY_HOST` with the private registry that contains the Admin Console images.
* `REGISTRY_NAMESPACE` with the registry namespace where the images are hosted (Optional).
* `RO_USERNAME` and `RO_PASSWORD` with the username and password for an account that has read-only access to the private registry.
* `ADMIN_CONSOLE_NAMESPACE` with the namespace where the Admin Console is running.
:::note
Add the `--deploy` flag to automatically deploy this version.
:::
## Update KOTS
This section describes how to update the version of Replicated KOTS running in your cluster. For information about the latest versions of KOTS, see [KOTS Release Notes](/release-notes/rn-app-manager).
:::note
Downgrading KOTS to a version earlier than what is currently deployed is not supported.
:::
### Online Environments
To update KOTS in an online existing cluster:
1. Run _one_ of the following commands to update the KOTS CLI to the target version of KOTS:
- **Install or update to the latest version**:
```
curl https://kots.io/install | bash
```
- **Install or update to a specific version**:
```
curl https://kots.io/install/VERSION | bash
```
Where `VERSION` is the target KOTS version.
For more KOTS CLI installation options, including information about how to install or update without root access, see [Install the KOTS CLI](/reference/kots-cli-getting-started).
1. Run the following command to update the KOTS Admin Console to the same version as the KOTS CLI:
```bash
kubectl kots admin-console upgrade -n NAMESPACE
```
Replace `NAMESPACE` with the namespace in your cluster where KOTS is installed.
### Air Gap Environments
To update KOTS in an existing air gap cluster:
1. Download the target version of the following assets from the [Releases](https://github.com/replicatedhq/kots/releases/latest) page in the KOTS GitHub repository:
* KOTS Admin Console `kotsadm.tar.gz` bundle
* KOTS CLI plugin
Ensure that you can access the downloaded bundles from the environment where the Admin Console is running.
1. Install or update the KOTS CLI to the version that you downloaded. See [Manually Download and Install](/reference/kots-cli-getting-started#manually-download-and-install) in _Installing the KOTS CLI_.
1. Extract the KOTS Admin Console container images from the `kotsadm.tar.gz` bundle and push the images to your private registry:
```
kubectl kots admin-console push-images ./kotsadm.tar.gz REGISTRY_HOST \
--registry-username RW_USERNAME \
--registry-password RW_PASSWORD
```
Replace:
* `REGISTRY_HOST` with the hostname for the private registry. For example, `private.registry.host` or `my-registry.example.com/my-namespace`.
* `RW_USERNAME` and `RW_PASSWORD` with the username and password for an account that has read and write access to the private registry.
:::note
KOTS does not store or reuse these read-write credentials.
:::
1. Run the following command using registry read-only credentials to update the KOTS Admin Console:
```
kubectl kots admin-console upgrade \
--kotsadm-registry REGISTRY_HOST \
--registry-username RO_USERNAME \
--registry-password RO_PASSWORD \
-n NAMESPACE
```
Replace:
* `REGISTRY_HOST` with the same private registry from the previous step.
* `RO_USERNAME` with the username for credentials with read-only permissions to the registry.
* `RO_PASSWORD` with the password associated with the username.
* `NAMESPACE` with the namespace on your cluster where KOTS is installed.
For help information, run `kubectl kots admin-console upgrade -h`.
---
# Configure Automatic Updates
This topic describes how to configure automatic updates for applications installed in online (internet-connected) environments.
## Overview
For applications installed in an online environment, the Replicated KOTS Admin Console automatically checks for new versions once every four hours by default. After the Admin Console checks for updates, it downloads any new versions of the application and displays them on the **Version History** tab.
You can edit this default cadence to customize how often the Admin Console checks for and downloads new versions.
You can also configure the Admin Console to automatically deploy new versions of the application after it downloads them.
The Admin Console only deploys new versions automatically if preflight checks pass. By default, the Admin Console does not automatically deploy any version of an application.
## Limitations
Automatic updates have the following limitations:
* Automatic updates are not supported for [Replicated Embedded Cluster](/vendor/embedded-overview) installations.
* Automatic updates are not supported for applications installed in air gap environments with no outbound internet access.
* Automatically deploying new versions is not supported when KOTS is installed with minimal RBAC. This is because all preflight checks must pass for the new version to be automatically deployed, and preflight checks that require cluster-scoped access will fail in minimal RBAC environments.
## Set Up Automatic Updates
To configure automatic updates:
1. In the Admin Console, go to the **Version History** tab and click **Configure automatic updates**.
The **Configure automatic updates** dialog opens.
1. Under **Automatically check for updates**, use the default or select a cadence (Hourly, Daily, Weekly, Never, Custom) from the dropdown list.
To turn off automatic updates, select **Never**.
To define a custom cadence, select **Custom**, then enter a cron expression in the text field. For more information about cron expressions, see [Cron Expressions](/reference/cron-expressions). Configured automatic update checks use the local server time.

1. Under **Automatically deploy new versions**, select an option. The available options depend on whether semantic versioning is enabled for the channel.
* **For channels that use semantic versioning**: (v1.58.0 and later) Select an option in the dropdown
to specify the versions that the Admin Console automatically deploys. For example,
to automatically deploy only new patch and minor versions, select
**Automatically deploy new patch and minor versions**.
* **For channels that do not use semantic versioning**: (v1.67.0 and later) Optionally select **Enable automatic deployment**.
When this checkbox is enabled, the Admin Console automatically deploys each new version of the application that it downloads.
---
# Perform Updates in Embedded Clusters
This topic describes how to perform updates for [Replicated Embedded Cluster](/vendor/embedded-overview) installations.
:::note
If you are instead looking for information about Replicated kURL, see [Perform Updates in kURL Clusters](updating-kurl).
:::
## Overview
When you update an application installed with Embedded Cluster, you update both the application and the cluster infrastructure together, including Kubernetes, KOTS, and other components running in the cluster. There is no need or mechanism to update the infrastructure on its own.
When you deploy a new version, any changes to the cluster are deployed first. The Admin Console waits until the cluster is ready before updating the application.
Any changes made to the Embedded Cluster Config, including changes to the Embedded Cluster version, Helm extensions, and unsupported overrides, trigger a cluster update.
When performing an upgrade with Embedded Cluster, the user is able to change the application config before deploying the new version. Additionally, the user's license is synced automatically. Users can also make config changes and sync their license outside of performing an update. This requires deploying a new version to apply the config change or license sync.
The following diagram demonstrates how updates are performed with Embedded Cluster in online (internet-connected) environments:

[View a larger version of this image](/images/embedded-cluster-update.png)
As shown in the diagram above, users check for available updates from the KOTS Admin Console. When deploying the new version, both the application and the cluster infrastructure are updated as needed.
## Update in Online Clusters
:::important
Do not downgrade the Embedded Cluster version. Downgrading is not supported and, although it is not prohibited, it can lead to unexpected behavior.
:::
To perform an update with Embedded Cluster:
1. In the Admin Console, go to the **Version history** tab.
All versions available for upgrade are listed in the **Available Updates** section:

[View a larger version of this image](/images/ec-upgrade-version-history.png)
1. Click **Deploy** next to the target version.
1. On the **Config** screen of the upgrade wizard, make any necessary changes to the configuration for the application. Click **Next**.

[View a larger version of this image](/images/ec-upgrade-wizard-config.png)
:::note
Any changes made on the **Config** screen of the upgrade wizard are not set until the new version is deployed.
:::
1. On the **Preflight** screen, view the results of the preflight checks.

[View a larger version of this image](/images/ec-upgrade-wizard-preflights.png)
1. On the **Confirm** screen, click **Deploy**.

[View a larger version of this image](/images/ec-upgrade-wizard-confirm.png)
During updates, the Admin Console is unavailable. A modal is displayed with a message that the update is in progress.
:::note
KOTS can experience downtime during an update, such as in single-node installations. If downtime occurs, refreshing the page results in a connection error. Users can refresh the page again after the update is complete to access the Admin Console.
:::
## Update in Air Gap Clusters
:::important
Do not downgrade the Embedded Cluster version. Downgrading is not supported and, although it is not prohibited, it can lead to unexpected behavior.
:::
To upgrade an installation, new air gap bundles can be uploaded to the Admin Console from the browser or with the Embedded Cluster binary from the command line.
Using the binary is faster and allows the user to download the air gap bundle directly to the machine where Embedded Cluster is running. Using the browser is slower because the user must first download the air gap bundle to a machine with a browser, upload that bundle to the Admin Console, and then wait for the Admin Console to process it.
### Upload the New Version From the Command Line
To update by uploading the air gap bundle for the new version from the command line:
1. SSH onto a controller node in the cluster and download the air gap bundle for the new version using the same curl command that you used to install. For example:
```bash
curl -f https://replicated.app/embedded/APP_SLUG/CHANNEL_SLUG?airgap=true -H "Authorization: LICENSE_ID" -o APP_SLUG-CHANNEL_SLUG.tgz
```
For more information, see [Install](/enterprise/installing-embedded-air-gap#install).
1. Untar the tarball. For example:
```bash
tar -xvzf APP_SLUG-CHANNEL_SLUG.tgz
```
Ensure that the `.airgap` air gap bundle is present.
1. Use the `update` command to upload the air gap bundle and make this new version available in the Admin Console. For example:
```bash
./APP_SLUG update --airgap-bundle APP_SLUG.airgap
```
1. When the air gap bundle has been uploaded, open a browser on the same machine and go to the Admin Console.
1. On the **Version history** page, click **Deploy** next to the new version.

[View a larger version of this image](/images/ec-upgrade-version-history.png)
1. On the **Config** screen of the upgrade wizard, make any necessary changes to the configuration for the application. Click **Next**.

[View a larger version of this image](/images/ec-upgrade-wizard-config.png)
:::note
Any changes made on the **Config** screen of the upgrade wizard are not set until the new version is deployed.
:::
1. On the **Preflight** screen, view the results of the preflight checks.

[View a larger version of this image](/images/ec-upgrade-wizard-preflights.png)
1. On the **Confirm** screen, click **Deploy**.

[View a larger version of this image](/images/ec-upgrade-wizard-confirm.png)
### Upload the New Version From the Admin Console
To update by uploading the air gap bundle for the new version from the Admin Console:
1. On a machine with browser access (for example, where you accessed the Admin Console to configure the application), download the air gap bundle for the new version using the same curl command that you used to install. For example:
```bash
curl -f https://replicated.app/embedded/APP_SLUG/CHANNEL_SLUG?airgap=true -H "Authorization: LICENSE_ID" -o APP_SLUG-CHANNEL_SLUG.tgz
```
For more information, see [Install](/enterprise/installing-embedded-air-gap#install).
1. Untar the tarball. For example:
```bash
tar -xvzf APP_SLUG-CHANNEL_SLUG.tgz
```
Ensure that the `.airgap` air gap bundle is present.
1. On the same machine, use a browser to access the Admin Console.
1. On the **Version history** page, click **Upload new version** and choose the `.airgap` air gap bundle you downloaded.
1. When the air gap bundle has been uploaded, click **Deploy** next to the new version.
1. On the **Config** screen of the upgrade wizard, make any necessary changes to the configuration for the application. Click **Next**.

[View a larger version of this image](/images/ec-upgrade-wizard-config.png)
:::note
Any changes made on the **Config** screen of the upgrade wizard are not set until the new version is deployed.
:::
1. On the **Preflight** screen, view the results of the preflight checks.

[View a larger version of this image](/images/ec-upgrade-wizard-preflights.png)
1. On the **Confirm** screen, click **Deploy**.

[View a larger version of this image](/images/ec-upgrade-wizard-confirm.png)
---
# About kURL Cluster Updates
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic provides an overview of Replicated kURL cluster updates. For information about how to perform updates in kURL clusters, see [Perform Updates in kURL Clusters](updating-kurl).
## Overview
The Replicated kURL installer spec specifies the kURL add-ons and the Kubernetes version that are deployed in kURL clusters. You can run the kURL installation script to apply the latest installer spec and update the cluster.
## About Kubernetes Updates {#kubernetes}
The version of Kubernetes running in a kURL cluster can be upgraded by one or more minor versions.
The Kubernetes upgrade process in kURL clusters steps through one minor version at a time. For example, upgrades from Kubernetes 1.19.x to 1.26.x install versions 1.20.x, 1.21.x, 1.22.x, 1.23.x, 1.24.x, and 1.25.x before installing 1.26.x.
The installation script automatically detects when the Kubernetes version in your cluster must be updated. When a Kubernetes upgrade is required, the script first prints a prompt: `Drain local node and apply upgrade?`. When you confirm the prompt, it drains and upgrades the local primary node where the script is running.
Then, if there are any remote primary nodes to upgrade, the script drains each sequentially and prints a command that you must run on the node to upgrade. For example, the command that the script prints might look like the following: `curl -sSL https://kurl.sh/myapp/upgrade.sh | sudo bash -s hostname-check=master-node-2 kubernetes-version=v1.24.3`.
The script polls the status of each remote node until it detects that the Kubernetes upgrade is complete. Then, it uncordons the node and proceeds to cordon and drain the next node. This process ensures that only one node is cordoned at a time. After upgrading all primary nodes, the script performs the same operation sequentially on all remote secondary nodes.
### Air Gap Multi-Version Kubernetes Updates {#kubernetes-multi}
To upgrade Kubernetes by more than one minor version in air gapped kURL clusters, you must provide a package that includes the assets required for the upgrade.
When you run the installation script to upgrade, the script searches for the package in the `/var/lib/kurl/assets/` directory. The script then lists any required assets that are missing, prints a command to download the missing assets as a `.tar.gz` package, and prompts you to provide an absolute path to the package in your local directory. For example:
```
⚙ Upgrading Kubernetes from 1.23.17 to 1.26.3
This involves upgrading from 1.23 to 1.24, 1.24 to 1.25, and 1.25 to 1.26.
This may take some time.
⚙ Downloading assets required for Kubernetes 1.23.17 to 1.26.3 upgrade
The following packages are not available locally, and are required:
kubernetes-1.24.12.tar.gz
kubernetes-1.25.8.tar.gz
You can download them with the following command:
curl -LO https://kurl.sh/bundle/version/v2023.04.24-0/19d41b7/packages/kubernetes-1.24.12,kubernetes-1.25.8.tar.gz
Please provide the path to the file on the server.
Absolute path to file:
```
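For example, based on the output above, the workflow might look like the following (the exact download URL is printed by the script for your installer version):
```bash
# On a machine with internet access, download the packages listed by the script
curl -LO https://kurl.sh/bundle/version/v2023.04.24-0/19d41b7/packages/kubernetes-1.24.12,kubernetes-1.25.8.tar.gz

# Transfer the .tar.gz package to the air gapped node, then provide its
# absolute path at the "Absolute path to file:" prompt, for example:
#   /root/kubernetes-1.24.12,kubernetes-1.25.8.tar.gz
```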
## About Add-ons and KOTS Updates {#add-ons}
If the application vendor updated any add-ons in the kURL installer spec since the last time that you ran the installation script in your cluster, the script automatically updates the add-ons after updating Kubernetes (if required).
For a complete list of add-ons that can be included in the kURL installer spec, including the KOTS add-on, see [Add-ons](https://kurl.sh/docs/add-ons/antrea) in the kURL documentation.
### Containerd and Docker Add-on Updates
The installation script upgrades the version of the Containerd or Docker container runtime if required by the installer spec. For example, if your cluster uses Containerd version 1.6.4 and the spec is updated to use 1.6.18, then Containerd is updated to 1.6.18 in your cluster when you run the installation script.
The installation script also supports migrating from Docker to Containerd because Docker is not supported in Kubernetes versions 1.24 and later. If the installation script detects a change from Docker to Containerd, it installs Containerd, loads the images found in Docker, and removes Docker.
For information about the container runtime add-ons, see [Containerd Add-On](https://kurl.sh/docs/add-ons/containerd) and [Docker Add-On](https://kurl.sh/docs/add-ons/docker) in the kURL documentation.
### KOTS Updates (KOTS Add-on)
The version of KOTS that is installed in a kURL cluster is set by the [KOTS add-on](https://kurl.sh/docs/add-ons/kotsadm), which is defined in the kURL installer spec.
For example, if the version of KOTS running in your cluster is 1.109.0, and the KOTS add-on in the kURL installer spec is updated to 1.109.12, then the KOTS version in your cluster is updated to 1.109.12 when you update the cluster.
---
# Perform Updates in kURL Clusters
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to perform updates in Replicated kURL installations. It includes procedures for updating an application, as well as for updating the versions of Kubernetes, Replicated KOTS, and add-ons in a kURL cluster.
For more information about managing nodes in kURL clusters, including how to safely reset, reboot, and remove nodes when performing maintenance tasks, see [Manage Nodes](https://kurl.sh/docs/install-with-kurl/managing-nodes) in the open source kURL documentation.
## Update an Application
For kURL installations, you can update an application from the Admin Console. You can also set up automatic updates. See [Configure Automatic Updates](/enterprise/updating-apps).
### Online Environments
To perform an update from the Admin Console:
1. In the Admin Console, go to the **Version History** tab.
1. Click **Check for updates**.
A new upstream version displays in the list of available versions.
[View a larger version of this image](/images/new-version-available.png)
1. (Optional) When there are multiple versions of an application, you can compare
the changes between them by clicking **Diff releases** in the right corner.
You can review changes between any two arbitrary releases by clicking the icon in the header
of the release column. Select the two versions to compare, and click **Diff releases**
to show the relative changes between the two releases.
[View a larger version of this image](/images/diff-releases.png)
[View a larger version of this image](/images/new-changes.png)
1. (Optional) Click the **View preflight checks** icon to view or re-run the preflight checks.
[View a larger version of this image](/images/preflight-checks.png)
1. Return to the **Version History** tab and click **Deploy** next to the target version.
### Air Gap Environments
import BuildAirGapBundle from "../install/_airgap-bundle-build.mdx"
import DownloadAirGapBundle from "../install/_airgap-bundle-download.mdx"
import ViewAirGapBundle from "../install/_airgap-bundle-view-contents.mdx"
To perform an air gap update from the Admin Console:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the channel where the target release is promoted to build and download the new `.airgap` bundle:
| Directory | Changes Persist? | Description |
|---|---|---|
| `upstream` | No, except for the `userdata` subdirectory | Contains the template functions, preflight checks, support bundle, config options, license, and so on. |
| `base` | No | After KOTS processes and renders the upstream, only the deployable application files, such as files deployable with `kubectl apply`, are included. Any non-deployable manifests, such as template functions, preflight checks, and configuration options, are removed. |

| Subdirectory | Changes Persist? | Description |
|---|---|---|
| `midstream` | No | Contains KOTS-specific kustomizations. |
| `downstream` | Yes | Contains user-defined kustomizations that are applied to the midstream directory. Only one downstream subdirectory is supported. To add kustomizations, see Patch an Application. |
| `midstream/charts` | No | Appears only when the application includes Helm charts. Contains a subdirectory for each Helm chart. Each Helm chart has its own kustomizations because each chart is rendered and deployed separately from other charts and manifests. The subcharts of each Helm chart also have their own kustomizations and are rendered separately. However, these subcharts are included and deployed as part of the parent chart. |
| `downstream/charts` | Yes | Appears only when the application includes Helm charts. Contains a subdirectory for each Helm chart. Each Helm chart has its own kustomizations because each chart is rendered and deployed separately from other charts and manifests. The subcharts of each Helm chart also have their own kustomizations and are rendered separately. However, these subcharts are included and deployed as part of the parent chart. |

| Directory | Changes Persist? | Description |
|---|---|---|
| `rendered` | No | Contains the final rendered application manifests that are deployed to the cluster. The rendered files are created when KOTS processes the base and applies the midstream and downstream kustomizations. |
| `rendered/charts` | No | Appears only when the application includes Helm charts. Contains a subdirectory for each rendered Helm chart. Each Helm chart is deployed separately from other charts and manifests. The rendered subcharts of each Helm chart are included and deployed as part of the parent chart. |
[View a larger version of this image](/images/kots-installation-overview.png)
As shown in the diagram above:
* For installations in existing online (internet-connected) clusters, users run a command to install KOTS in their cluster.
* For installations on VMs or bare metal servers, users run an Embedded Cluster or kURL installation script that both provisions a cluster in their environment and installs KOTS in the cluster.
* For installations in air-gapped clusters, users download air gap bundles for KOTS and the application from the Replicated Download Portal and then provide the bundles during installation.
All users must have a valid license file to install with KOTS. After KOTS is installed in the cluster, users can access the KOTS Admin Console to provide their license and deploy the application.
For more information about how to install applications with KOTS, see the [Installing an Application](/enterprise/installing-overview) section.
## KOTS User Interfaces
This section describes the KOTS interfaces available to users for installing and managing applications.
### KOTS Admin Console
KOTS provides an Admin Console to make it easy for users to install, manage, update, configure, monitor, back up and restore, and troubleshoot their application instance from a GUI.
The following shows an example of the Admin Console dashboard for an application:

[View a larger version of this image](/images/guides/kots/application.png)
For applications installed with Replicated Embedded Cluster in a VM or bare metal server, the Admin Console also includes a **Cluster Management** tab where users can add and manage nodes in the embedded cluster, as shown below:

[View a larger version of this image](/images/gitea-ec-ready.png)
### KOTS CLI
The KOTS command-line interface (CLI) is a kubectl plugin. Customers can run commands with the KOTS CLI to install and manage their application instances with KOTS programmatically.
For information about getting started with the KOTS CLI, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
The KOTS CLI can also be used to install an application without needing to access the Admin Console. This can be useful for automating installations and upgrades, such as in CI/CD pipelines. For information about how to perform headless installations from the command line, see [Install with the KOTS CLI](/enterprise/installing-existing-cluster-automation).
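For example, a headless installation from the command line might look like the following. This is a sketch only: `APP_SLUG`, the namespace, the password, and the license and ConfigValues file paths are placeholders, and the exact flags depend on your application and environment.
```bash
kubectl kots install APP_SLUG \
  --namespace APP_NAMESPACE \
  --license-file ./license.yaml \
  --config-values ./configvalues.yaml \
  --shared-password ADMIN_CONSOLE_PASSWORD \
  --no-port-forward
```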
---
---
pagination_prev: null
---
# Introduction to Replicated
This topic provides an introduction to the Replicated Platform, including a platform overview and a list of key features. It also describes the Commercial Software Distribution Lifecycle and how Replicated features can be used in each phase of the lifecycle.
## About the Replicated Platform
Replicated is a commercial software distribution platform. Independent software vendors (ISVs) can use features of the Replicated Platform to distribute modern commercial software into complex, customer-controlled environments, including on-prem and air gap.
The Replicated Platform features are designed to support ISVs during each phase of the Commercial Software Distribution Lifecycle. For more information, see [Commercial Software Distribution Lifecycle](#csdl) below.
The following diagram demonstrates the process of using the Replicated Platform to distribute an application, install the application in a customer environment, and support the application after installation:

[View a larger version of this image](/images/replicated-platform.png)
The diagram above shows an application that is packaged with the [**Replicated SDK**](/vendor/replicated-sdk-overview). The application is tested in clusters provisioned with the [**Replicated Compatibility Matrix**](/vendor/testing-about), then added to a new release in the [**Vendor Portal**](/vendor/releases-about) using an automated CI/CD pipeline.
The application is then installed by a customer ("Big Bank") on a VM. To install, the customer downloads their license, which grants proxy access to the application images through the [**Replicated proxy registry**](/vendor/private-images-about). They also download the installation assets for the [**Replicated Embedded Cluster**](/vendor/embedded-overview) installer.
Embedded Cluster runs [**preflight checks**](/vendor/preflight-support-bundle-about) to verify that the environment meets the installation requirements, provisions a cluster on the VM, and installs [**Replicated KOTS**](intro-kots) in the cluster. KOTS provides an [**Admin Console**](intro-kots#kots-admin-console) where the customer enters application-specific configurations, runs application preflight checks, optionally joins nodes to the cluster, and then deploys the application. After installation, customers can manage both the application and the cluster from the Admin Console.
Finally, the diagram shows how [**instance data**](/vendor/instance-insights-event-data) is automatically sent from the customer environment to the Vendor Portal by the Replicated SDK API and the KOTS Admin Console. Additionally, tooling from the open source [**Troubleshoot**](https://troubleshoot.sh/docs/collect/) project is used to generate and send [**support bundles**](/vendor/preflight-support-bundle-about), which include logs and other important diagnostic data.
## Replicated Platform Features
The following describes the key features of the Replicated Platform.
### Compatibility Matrix
Replicated Compatibility Matrix can be used to get kubectl access to running clusters within minutes. Compatibility Matrix supports various Kubernetes distributions and versions and can be used through the Vendor Portal or the Replicated CLI.
For more information, see [About Compatibility Matrix](/vendor/testing-about).
### Embedded Cluster
Replicated Embedded Cluster is a Kubernetes installer based on the open source Kubernetes distribution k0s. With Embedded Cluster, users install and manage both the cluster and the application together as a single appliance on a VM or bare metal server. In this way, Kubernetes is _embedded_ with the application.
Additionally, each version of Embedded Cluster includes a specific version of [Replicated KOTS](#kots) that is installed in the cluster during installation. KOTS is used by Embedded Cluster to deploy the application and also provides the Admin Console UI where users can manage both the application and the cluster.
For more information, see [Embedded Cluster Overview](/vendor/embedded-overview).
### KOTS (Admin Console) {#kots}
KOTS is a kubectl plugin and in-cluster Admin Console that installs Kubernetes applications in customer-controlled environments.
KOTS is used by [Replicated Embedded Cluster](#embedded-cluster) to deploy applications and also to provide the Admin Console UI where users can manage both the application and the cluster. KOTS can also be used to install applications in existing Kubernetes clusters in customer-controlled environments, including clusters in air-gapped environments with limited or no outbound internet access.
For more information, see [Introduction to KOTS](intro-kots).
### Preflight Checks and Support Bundles
Preflight checks and support bundles are provided by the Troubleshoot open source project, which is maintained by Replicated. Troubleshoot is a kubectl plugin that provides diagnostic tools for Kubernetes applications. For more information, see the open source [Troubleshoot](https://troubleshoot.sh/docs/collect/) documentation.
Preflight checks and support bundles analyze data from customer environments to provide insights that help users to avoid or troubleshoot common issues with an application:
* **Preflight checks** run before an application is installed to check that the customer environment meets the application requirements.
* **Support bundles** collect troubleshooting data from customer environments to help users diagnose problems with application deployments.
For more information, see [About Preflight Checks and Support Bundles](/vendor/preflight-support-bundle-about).
### Proxy Registry
The Replicated proxy registry grants proxy access to an application's images using the customer's unique license. This means that customers can get access to application images during installation without the vendor needing to provide registry credentials.
For more information, see [About the Replicated Proxy Registry](/vendor/private-images-about).
### Replicated SDK
The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. It provides an in-cluster API that can be used to communicate with the Vendor Portal. For example, the SDK API can return details about the customer's license or report telemetry on the application instance back to the Vendor Portal.
For more information, see [About the Replicated SDK](/vendor/replicated-sdk-overview).
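For example, after the SDK is installed alongside an application, other workloads in the same namespace might query its in-cluster API as shown below. This is a sketch that assumes the SDK's default service name (`replicated`) and port (`3000`):
```bash
# Return details about the customer's license from the SDK in-cluster API
curl http://replicated:3000/api/v1/license/info
```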
### Vendor Portal
The Replicated Vendor Portal is the web-based user interface that you can use to configure and manage all of the Replicated features for distributing and managing application releases, supporting your release, viewing customer insights and reporting, and managing teams.
The Vendor Portal can also be interacted with programmatically using the following developer tools:
* **Replicated CLI**: The Replicated CLI can be used to complete tasks programmatically, including all tasks for packaging and managing applications, and managing artifacts such as teams, license files, and so on. For more information, see [Installing the Replicated CLI](/reference/replicated-cli-installing).
* **Vendor API v3**: The Vendor API can be used to complete tasks programmatically, including all tasks for packaging and managing applications, and managing artifacts such as teams and license files. For more information, see [Using the Vendor API v3](/reference/vendor-api-using).
## Commercial Software Distribution Lifecycle {#csdl}
Replicated Platform features are designed to support ISVs in each phase of the Commercial Software Distribution Lifecycle shown below:

[View a larger version of this image](/images/software-dev-lifecycle.png)
Commercial software distribution is the business process that independent software vendors (ISVs) use to enable enterprise customers to self-host a fully private instance of the vendor's application in an environment controlled by the customer.
Replicated has developed the Commercial Software Distribution Lifecycle to represent the stages that are essential for every company that wants to deliver its software securely and reliably to customer-controlled environments.
This lifecycle was inspired by the DevOps lifecycle and the Software Development Lifecycle (SDLC), but it focuses on the work that is unique to distributing third-party, commercial software to tens, hundreds, or thousands of enterprise customers.
To download a copy of The Commercial Software Distribution Handbook, see [The Commercial Software Distribution Handbook](https://www.replicated.com/the-commercial-software-distribution-handbook).
The following describes the phases of the software distribution lifecycle:
* **[Develop](#develop)**: Application design and architecture decisions align with customer needs, and development teams can quickly iterate on new features.
* **[Test](#test)**: Run automated tests in several customer-representative environments as part of continuous integration and continuous delivery (CI/CD) workflows.
* **[Release](#release)**: Use channels to share releases with external and internal users, publish release artifacts securely, and use consistent versioning.
* **[License](#license)**: Licenses are customized to each customer and are easy to issue, manage, and update.
* **[Install](#install)**: Provide unique installation options depending on customers' preferences and experience levels.
* **[Report](#report)**: Make more informed prioritization decisions by collecting usage and performance metadata for application instances running in customer environments.
* **[Support](#support)**: Diagnose and resolve support issues quickly.
For more information about the Replicated features that support each of these phases, see the sections below.
### Develop
The Replicated SDK exposes an in-cluster API that can be developed against to quickly integrate and test core functionality with an application. For example, when the SDK is installed alongside an application in a customer environment, the in-cluster API can be used to send custom metrics from the instance to the Replicated vendor platform.
For more information about using the Replicated SDK, see [About the Replicated SDK](/vendor/replicated-sdk-overview).
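For example, an application might report a custom metric to the SDK's in-cluster API with a request like the following (a sketch that assumes the SDK's default service name, port, and custom metrics endpoint; the metric names are hypothetical):
```bash
curl -X POST http://replicated:3000/api/v1/app/custom-metrics \
  -H "Content-Type: application/json" \
  -d '{"data": {"active_users": 25, "projects_created": 4}}'
```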
### Test
The Replicated Compatibility Matrix rapidly provisions ephemeral Kubernetes clusters, including multi-node and OpenShift clusters. When integrated into existing CI/CD pipelines for an application, the Compatibility Matrix can be used to automatically create a variety of customer-representative environments for testing code changes.
For more information, see [About Compatibility Matrix](/vendor/testing-about).
### Release
Release channels in the Replicated Vendor Portal allow ISVs to make different application versions available to different customers, without needing to maintain separate code bases. For example, a "Beta" channel can be used to share beta releases of an application with only a certain subset of customers.
For more information about working with channels, see [About Channels and Releases](/vendor/releases-about).
Additionally, the Replicated proxy registry grants proxy access to private application images using the customers' license. This ensures that customers have the right access to images based on the channel they are assigned. For more information about using the proxy registry, see [About the Replicated Proxy Registry](/vendor/private-images-about).
### License
Create customers in the Replicated Vendor Portal to handle licensing for your application in both online and air gap environments. For example:
* License free trials and different tiers of product plans
* Create and manage custom license entitlements
* Verify license entitlements both before installation and during runtime
* Measure and report usage
For more information about working with customers and custom license fields, see [About Customers](/vendor/licenses-about).
### Install
Applications distributed with the Replicated Platform can support multiple different installation methods from the same application release, helping you to meet your customers where they are. For example:
* Customers who are not experienced with Kubernetes or who prefer to deploy to a dedicated cluster in their environment can install on a VM or bare metal server with the Replicated Embedded Cluster installer. For more information, see [Embedded Cluster Overview](/vendor/embedded-overview).
* Customers familiar with Kubernetes and Helm can install in their own existing cluster using Helm. For more information, see [Installing with Helm](/vendor/install-with-helm).
* Customers installing into environments with limited or no outbound internet access (often referred to as air-gapped environments) can securely access and push images to their own internal registry, then install using Helm or a Replicated installer. For more information, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap) and [Installing and Updating with Helm in Air Gap Environments (Alpha)](/vendor/helm-install-airgap).
### Report
When installed alongside an application, the Replicated SDK and Replicated KOTS automatically send instance data from the customer environment to the Replicated Vendor Portal. This instance data includes health and status indicators, adoption metrics, and performance metrics. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
ISVs can also set up email and Slack notifications to get alerted of important instance issues or performance trends. For more information, see [Configuring Instance Notifications](/vendor/instance-notifications-config).
### Support
Support teams can use Replicated features to more quickly diagnose and resolve application issues. For example:
- Customize and generate support bundles, which collect and analyze redacted information from the customer's cluster, environment, and application instance. See [About Preflight Checks and Support Bundles](/vendor/preflight-support-bundle-about).
- Provision customer-representative environments with Compatibility Matrix to recreate and diagnose issues. See [About Compatibility Matrix](/vendor/testing-about).
- Get insights into an instance's status by accessing telemetry data, which covers the health of the application, the current application version, and details about the infrastructure and cluster where the application is running. For more information, see [Customer Reporting](/vendor/customer-reporting).
---
---
slug: /
pagination_next: null
---
# Home
What's New?
Update the Embedded Cluster Config to alias the `replicated.app` and `proxy.replicated.com` endpoints with your custom domains.
Did You Know?
To help troubleshoot Embedded Cluster deployments, you can view logs for both Embedded Cluster and the k0s systemd service.
Getting Started with Replicated
Onboarding workflows, tutorials, and labs to help you get started with Replicated quickly.
Vendor Platform
Create and manage your account and team.
Compatibility Matrix
Rapidly create Kubernetes clusters, including OpenShift.
Helm Charts
Distribute Helm charts with Replicated.
Replicated KOTS
A kubectl plugin and in-cluster Admin Console that installs applications in customer-controlled environments.
Embedded Cluster
Embed Kubernetes with your application to support installations on VMs or bare metal servers.
Insights and Telemetry
Get insights on installed instances of your application.
Channels and Releases
Manage application releases with the vendor platform.
Customer Licensing
Create, customize, and issue customer licenses.
Preflight Checks
Define and verify installation environment requirements.
Support Bundles
Gather information about customer environments for troubleshooting.
Developer Tools
APIs, CLIs, and an SDK for interacting with the Replicated platform.
| Required Field | Allowed Values | Allowed Special Characters |
|---|---|---|
| Minute | 0 through 59 | , - * |
| Hour | 0 through 23 | , - * |
| Day-of-month | 1 through 31 | , - * ? |
| Month | 1 through 12 or JAN through DEC | , - * |
| Day-of-week | 1 through 7 or SUN through SAT | , - * ? |
| Special Character | Description |
|---|---|
| Comma (,) | Specifies a list or multiple values, which can be consecutive or not. For example, 1,2,4 in the Day-of-week field signifies every Monday, Tuesday, and Thursday. |
| Dash (-) | Specifies a contiguous range. For example, 4-6 in the Month field signifies April through June. |
| Asterisk (*) | Specifies that all of the values for the field are used. For example, using * in the Month field means that all of the months are included in the schedule. |
| Question mark (?) | Specifies that one or another value can be used. For example, enter 5 for Day-of-the-month and ? for Day-of-the-week to check for updates on the 5th day of the month, regardless of which day of the week it is. |
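For example, combining these fields and special characters, the following expression checks for updates at 10:30 PM every Monday, Wednesday, and Friday:
```
30 22 * * 1,3,5
```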
| Schedule Value | Description | Equivalent Cron Expression |
|---|---|---|
| @yearly (or @annually) | Runs once a year, at midnight on January 1. | 0 0 1 1 * |
| @monthly | Runs once a month, at midnight on the first of the month. | 0 0 1 * * |
| @weekly | Runs once a week, at midnight on Saturday. | 0 0 * * 0 |
| @daily (or @midnight) | Runs once a day, at midnight. | 0 0 * * * |
| @hourly | Runs once an hour, at the beginning of the hour. | 0 * * * * |
| @never | Disables the schedule completely. Only used by KOTS. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | 0 * * * * |
| @default | Selects the default schedule option (every 4 hours). Begins when the Admin Console starts up. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | 0 * * * * |
| Description | The application title. Used on the license upload and in various places in the Replicated Admin Console. |
|---|---|
| Example | ```yaml title: My Application ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | The icon file for the application. Used on the license upload, in various places in the Admin Console, and in the Download Portal. The icon can be a remote URL or a Base64 encoded image. Base64 encoded images are required to display the image in air gap installations with no outbound internet access. |
|---|---|
| Example | ```yaml icon: https://support.io/img/logo.png ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | The release notes for this version. These can also be set when promoting a release. |
|---|---|
| Example | ```yaml releaseNotes: Fixes a bug and adds a new feature. ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | Enable this flag to create a Rollback button on the Admin Console Version History page. If an application is guaranteed not to introduce backwards-incompatible versions, such as through database migrations, then the Rollback does not revert any state. Rather, it recovers the YAML manifests that are applied to the cluster. |
|---|---|
| Example | ```yaml allowRollback: false ``` |
| Default | false |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Embedded Cluster 1.17.0 and later supports partial rollbacks of the application version. Partial rollbacks are supported only when rolling back to a version where there is no change to the [Embedded Cluster Config](/reference/embedded-config) compared to the currently-installed version. For example, users can roll back to release version 1.0.0 after upgrading to 1.1.0 only if both 1.0.0 and 1.1.0 use the same Embedded Cluster Config. |
| Description | An array of additional namespaces as strings that Replicated KOTS creates on the cluster. For more information, see Defining Additional Namespaces. In addition to creating the additional namespaces, KOTS ensures that the application secret exists in the namespaces. KOTS also ensures that this application secret has access to pull the application images, including both images that are used and any images you add in the `additionalImages` field. For dynamically created namespaces, specify `"*"`. |
|---|---|
| Example | ```yaml additionalNamespaces: - "*" ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | An array of strings that reference images to be included in air gap bundles and pushed to the local registry during installation. KOTS detects images from the PodSpecs in the application. Some applications, such as Operators, might need to include additional images that are not referenced until runtime. For more information, see Defining Additional Images. |
|---|---|
| Example | ```yaml additionalImages: - jenkins/jenkins:lts ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | Requires that minimal role-based access control (RBAC) is used for all customer installations. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
|---|---|
| Example | ```yaml requireMinimalRBACPrivileges: false ``` |
| Default | false |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No |
| Description | Allows minimal role-based access control (RBAC) to be used for all customer installations. Minimal RBAC is not used by default; it is used only when the end user enables it during installation. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
|---|---|
| Example | ```yaml supportMinimalRBACPrivileges: true ``` |
| Default | false |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No |
| Description | Extra ports (additional to the Admin Console port) that are port forwarded when running the `kots admin-console` command. |
|---|---|
| Example | ```yaml ports: - serviceName: web servicePort: 9000 localPort: 9000 applicationUrl: "http://web" ``` |
| Supports Go templates? | Go templates are supported in the `serviceName` and `applicationUrl` fields only. Using Go templates in the `localPort` or `servicePort` fields results in an installation error similar to the following: `json: cannot unmarshal string into Go struct field ApplicationPort.spec.ports.servicePort of type int`. |
| Supported for Embedded Cluster? | Yes |
| Description | Resources to watch and report application status back to the user. When you include statusInformers, the application status is displayed on the Admin Console dashboard. For more information about including statusInformers, see Adding Resource Status Informers. |
|---|---|
| Example | ```yaml statusInformers: - deployment/my-web-svc - deployment/my-worker ``` The following example shows excluding a specific status informer based on a user-supplied value from the Admin Console Configuration screen: ```yaml statusInformers: - deployment/my-web-svc - '{{repl if ConfigOptionEquals "option" "value"}}deployment/my-worker{{repl else}}{{repl end}}' ``` |
| Supports Go templates? | Yes |
| Supported for Embedded Cluster? | Yes |
| Description | Custom graphs to include on the Admin Console application dashboard. For more information about how to create custom graphs, see Adding Custom Graphs. |
|---|---|
| Example | ```yaml graphs: - title: User Signups query: 'sum(user_signup_events_total)' ``` |
| Supports Go templates? | Yes |
| Supported for Embedded Cluster? | No |
| Description | The custom domain used for proxy.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
|---|---|
| Example | ```yaml proxyRegistryDomain: "proxy.yourcompany.com" ``` |
| Supports Go templates? | No |
| Description | The custom domain used for registry.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
|---|---|
| Example | ```yaml replicatedRegistryDomain: "registry.yourcompany.com" ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |
| Description | The KOTS version that is targeted by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
|---|---|
| Example | ```yaml targetKotsVersion: "1.85.0" ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No. Setting targetKotsVersion to a version earlier than the KOTS version included in the specified version of Embedded Cluster will cause Embedded Cluster installations to fail with an error message like: Error: This version of App Name requires a different version of KOTS from what you currently have installed.. To avoid installation failures, do not use targetKotsVersion in releases that support installation with Embedded Cluster. |
| Description | The minimum KOTS version that is required by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
|---|---|
| Example | ```yaml minKotsVersion: "1.71.0" ``` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No. Setting minKotsVersion to a version later than the KOTS version included in the specified version of Embedded Cluster will cause Embedded Cluster installations to fail with an error message like: Error: This version of App Name requires a different version of KOTS from what you currently have installed.. To avoid installation failures, do not use minKotsVersion in releases that support installation with Embedded Cluster. |
| Field Name | Description |
|---|---|
| includedNamespaces | (Optional) Specifies an array of namespaces to include in the backup. If unspecified, all namespaces are included. |
| excludedNamespaces | (Optional) Specifies an array of namespaces to exclude from the backup. |
| orderedResources | (Optional) Specifies the order of the resources to collect during the backup process. This is a map that uses a key as the plural resource. Each resource name has the format NAMESPACE/OBJECTNAME. The object names are a comma delimited list. For cluster resources, use OBJECTNAME only. |
| ttl | Specifies the amount of time before this backup is eligible for garbage collection. Default: 720h (equivalent to 30 days). This value is configurable only by the customer. |
| hooks | (Optional) Specifies the actions to perform at different times during a backup. The only supported hook is executing a command in a container in a pod (uses the pod exec API). Supports pre and post hooks. |
| hooks.resources | (Optional) Specifies an array of hooks that are applied to specific resources. |
| hooks.resources.name | Specifies the name of the hook. This value displays in the backup log. |
| hooks.resources.includedNamespaces | (Optional) Specifies an array of namespaces that this hook applies to. If unspecified, the hook is applied to all namespaces. |
| hooks.resources.excludedNamespaces | (Optional) Specifies an array of namespaces to which this hook does not apply. |
| hooks.resources.includedResources | Specifies an array of pod resources to which this hook applies. |
| hooks.resources.excludedResources | (Optional) Specifies an array of resources to which this hook does not apply. |
| hooks.resources.labelSelector | (Optional) Specifies that this hook only applies to objects that match this label selector. |
| hooks.resources.pre | Specifies an array of exec hooks to run before executing custom actions. |
| hooks.resources.post | Specifies an array of exec hooks to run after executing custom actions. Supports the same arrays and fields as pre hooks. |
| hooks.resources.[post/pre].exec | Specifies the type of the hook. exec is the only supported type. |
| hooks.resources.[post/pre].exec.container | (Optional) Specifies the name of the container where the specified command will be executed. If unspecified, the first container in the pod is used. |
| hooks.resources.[post/pre].exec.command | Specifies the command to execute. The format is an array. |
| hooks.resources.[post/pre].exec.onError | (Optional) Specifies how to handle an error that might occur when executing the command. Valid values: Fail and Continue. Default: Fail |
| hooks.resources.[post/pre].exec.timeout | (Optional) Specifies how many seconds to wait for the command to finish executing before the action times out. Default: 30s |
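The following is a minimal sketch of a Backup resource that uses several of these fields. The namespace, labels, container name, and command are hypothetical:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
spec:
  ttl: 720h
  hooks:
    resources:
    - name: pg-dump-hook                 # displayed in the backup log
      includedNamespaces:
      - my-app                           # hypothetical namespace
      labelSelector:
        matchLabels:
          app: postgres                  # hypothetical label
      pre:
      - exec:
          container: postgres            # hypothetical container name
          command: ["/bin/bash", "-c", "pg_dump -U postgres > /backups/db.sql"]
          onError: Fail
          timeout: 3m
```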
| Description | Items can be affixed to the left or right so that they display side by side. Specify the `affix` property as `left` or `right`. |
|---|---|
| Required? | No |
| Example | ```yaml groups: - name: example_settings title: My Example Config description: Configuration to serve as an example for creating your own. items: - name: username title: Username type: text required: true affix: left - name: password title: Password type: password required: true affix: right ``` |
| Supports Go templates? | Yes |
| Description | Defines the default value for the config item. If the user does not provide a value for the item, then the `default` value is applied. |
|---|---|
| Required? | No |
| Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "" default: change me ```  [View a larger version of this image](/images/config-default.png) |
| Supports Go templates? | Yes. Every time the user makes a change to their configuration settings for the application, any template functions used in the |
| Description | Displays a helpful message below the title for the config item. Markdown syntax is supported. For more information about markdown syntax, see Basic writing and formatting syntax in the GitHub Docs. |
|---|---|
| Required? | No |
| Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled help_text: Check to enable the HTTP listener type: bool ```  [View a larger version of this image](/images/config-help-text.png) |
| Supports Go templates? | Yes |
| Description | Hidden items are not visible in the Admin Console. :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
|---|---|
| Required? | No |
| Example | ```yaml - name: secret_key title: Secret Key type: password hidden: true value: "{{repl RandomString 40}}" ``` |
| Supports Go templates? | No |
| Description | A unique identifier for the config item. Item names must be unique both within the group and across all groups. |
|---|---|
| Required? | Yes |
| Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled type: bool ``` |
| Supports Go templates? | Yes |
| Description | Readonly items are displayed in the Admin Console and users cannot edit their value. :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
|---|---|
| Required? | No |
| Example | ```yaml - name: key title: Key type: text value: "" default: change me - name: unique_key title: Unique Key type: text value: "{{repl RandomString 20}}" readonly: true ```  [View a larger version of this image](/images/config-readonly.png) |
| Supports Go templates? | No |
| Description | Displays a Recommended tag for the config item in the Admin Console. |
|---|---|
| Required? | No |
| Example | ```yaml - name: recommended_field title: My recommended field type: bool default: "0" recommended: true ```  [View a larger version of this image](/images/config-recommended-item.png) |
| Supports Go templates? | No |
| Description | Displays a Required tag for the config item in the Admin Console. A required item prevents the application from starting until it has a value. |
|---|---|
| Required? | No |
| Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "" default: change me required: true ```  [View a larger version of this image](/images/config-required-item.png) |
| Supports Go templates? | No |
| Description | The title of the config item that displays in the Admin Console. |
|---|---|
| Required? | Yes |
| Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled help_text: Check to enable the HTTP listener type: bool ```  [View a larger version of this image](/images/config-help-text.png) |
| Supports Go templates? | Yes |
| Description | Each item has a `type` property that defines the type of input for the item. For information about each type, see Item Types. |
|---|---|
| Required? | Yes |
| Example | ```yaml - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: bool default: "0" ```  [View a larger version of this image](/images/config-screen-bool.png) |
| Supports Go templates? | No |
| Description | Defines the value of the config item. Data that you add to `value` appears as the value for the item in the Admin Console. If the config item is not readonly, then the data that you add to `value` can be modified by the user. |
|---|---|
| Required? | No |
| Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "{{repl RandomString 20}}" ```  [View a larger version of this image](/images/config-value-randomstring.png) |
| Supports Go templates? | Yes :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
| Description | The `when` property is used to conditionally show or hide a config item. The `when` item property has the following requirements and limitations: * The `when` property accepts the following types of values: * Booleans * Strings that match "true", "True", "false", or "False" [KOTS template functions](/reference/template-functions-about) can be used to render these supported value types. * For the `when` property to evaluate to true, the values compared in the conditional statement must match exactly without quotes |
|---|---|
| Required? | No |
| Example |
Display the `database_host` and `database_password` items only when the user selects the external database option:
```yaml
- name: database_settings_group
title: Database Settings
items:
- name: db_type
title: Database Type
type: radio
default: external
items:
- name: external
title: External
- name: embedded
title: Embedded DB
- name: database_host
title: Database Hostname
type: text
when: repl{{ (ConfigOptionEquals "db_type" "external")}}
- name: database_password
title: Database Password
type: password
when: repl{{ (ConfigOptionEquals "db_type" "external")}}
```
For additional examples, see Using Conditional Statements in Configuration Fields. |
| Supports Go templates? | Yes |
| Description | The `validation` property defines rules for validating the value of the config item. |
|---|---|
| Required? | No |
| Example |
Validates and returns if |
| Supports Go templates? | No |
| Level | Description |
|---|---|
| error | The rule is enabled and shows as an error. |
| warn | The rule is enabled and shows as a warning. |
| info | The rule is enabled and shows an informational message. |
| off | The rule is disabled. |
| Field Name | Description |
|---|---|
| collectorName | (Optional) A collector can specify the collectorName field. In some collectors, this field controls the path where result files are stored in the support bundle. |
| exclude | (Optional) (KOTS Only) Based on the runtime available configuration, a conditional can be specified in the exclude field. This is useful for deployment techniques that allow templating for Replicated KOTS and the optional KOTS Helm component. When this value is true, the collector is not included. |
| Field Name | Description |
|---|---|
| collectorName | (Optional) An analyzer can specify the collectorName field. |
| exclude | (Optional) (KOTS Only) A condition based on the runtime available configuration can be specified in the exclude field. This is useful for deployment techniques that allow templating for KOTS and the optional KOTS Helm component. When this value is true, the analyzer is not included. |
| strict | (Optional) (KOTS Only) An analyzer can be set to strict: true so that fail outcomes for that analyzer prevent the release from being deployed by KOTS until the vendor-specified requirements are met. When exclude: true is also specified, exclude overrides strict and the analyzer is not executed. |
| Field Name | Description |
|---|---|
| file | (Optional) Specifies a single file for redaction. |
| files | (Optional) Specifies multiple files for redaction. Glob patterns are supported. For example, /my/test/glob/* matches /my/test/glob/file, but does not match /my/test/glob/subdir/file. |
### removals
The `removals` object is required and defines the redactions that occur. This object supports the following fields. At least one of these fields must be specified:
| Field Name | Description |
|---|---|
| regex | (Optional) Allows a regular expression to be applied for removal and redaction on lines that immediately follow a line that matches a filter. The selector field is used to identify lines, and the redactor field specifies a regular expression that runs on the line after any line identified by selector. If selector is empty, the redactor runs on every line. Using a selector is useful for removing values from pretty-printed JSON, where the value to be redacted is pretty-printed on the line beneath another value. Matches to the regex are removed or redacted, depending on the construction of the regex. Any portion of a match not contained within a capturing group is removed entirely. The contents of capturing groups tagged mask are masked with ***HIDDEN***. Capturing groups tagged drop are dropped. |
| values | (Optional) Specifies values to replace with the string ***HIDDEN***. |
| yamlPath | (Optional) Specifies a .-delimited path to the items to be redacted from a YAML document. If an item in the path is the literal string *, the redactor is applied to all options at that level. Files that fail to parse as YAML or do not contain any matches are not modified. Files that do contain matches are re-rendered, which removes comments and custom formatting. Multi-document YAML is not fully supported. Only the first document is checked for matches, and if a match is found, later documents are discarded entirely. |
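The following is a minimal sketch of a redactor spec that uses each of these removal types. The file glob, selector, regular expression, and YAML path are hypothetical:
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Redactor
metadata:
  name: example-redactor
spec:
  redactors:
  - name: redact-example-values
    fileSelector:
      files:
      - "cluster-resources/secrets/*.json"     # hypothetical file glob
    removals:
      values:
      - abc123                                  # literal value replaced with ***HIDDEN***
      regex:
      - selector: '"name": "DB_PASSWORD"'       # the redactor runs on the line after any match
        redactor: '("value": ").*(")'           # text outside the capturing groups is removed
      yamlPath:
      - "spec.template.spec.containers.*.env"   # hypothetical .-delimited path
```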
| Flag | Description |
|---|---|
| `--admin-console-password` |
Set the password for the Admin Console. The password must be at least six characters in length. If not set, the user is prompted to provide an Admin Console password. |
| `--admin-console-port` |
Port on which to run the KOTS Admin Console. **Default**: By default, the Admin Console runs on port 30000. **Limitation:** It is not possible to change the port for the Admin Console during a restore with Embedded Cluster. For more information, see [Disaster Recovery for Embedded Cluster (Alpha)](/vendor/embedded-disaster-recovery). |
| `--airgap-bundle` | The Embedded Cluster air gap bundle used for installations in air-gapped environments with no outbound internet access. For information about how to install in an air-gapped environment, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap). |
| `--cidr` |
The range of IP addresses that can be assigned to Pods and Services, in CIDR notation. **Default:** By default, the CIDR block is `10.244.0.0/16`. **Requirement**: Embedded Cluster 1.16.0 or later. |
| `--config-values` |
Path to the ConfigValues file for the application. The ConfigValues file can be used to pass the application configuration values from the command line during installation, such as when performing automated installations as part of CI/CD pipelines. For more information, see [Automate Installation with Embedded Cluster](/enterprise/installing-embedded-automation). Requirement: Embedded Cluster 1.18.0 and later. |
| `--data-dir` |
The data directory used by Embedded Cluster. **Default**: `/var/lib/embedded-cluster` **Requirement**: Embedded Cluster 1.16.0 or later. |
| `--http-proxy` |
Proxy server to use for HTTP. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
| `--https-proxy` |
Proxy server to use for HTTPS. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
| `--ignore-host-preflights` |
When `--ignore-host-preflights` is passed, the host preflight checks are still run, but the user is prompted and can choose to continue with the installation if preflight failures occur. If there are no failed preflights, no user prompt is displayed. Additionally, the Admin Console still runs any application-specific preflight checks before the application is deployed. For more information about the Embedded Cluster host preflight checks, see [About Host Preflight Checks](/vendor/embedded-using#about-host-preflight-checks) in _Using Embedded Cluster_ Ignoring host preflight checks is _not_ recommended for production installations. |
| `-l, --license` |
Path to the customer license file |
| `--local-artifact-mirror-port` |
Port on which to run the Local Artifact Mirror (LAM). **Default**: By default, the LAM runs on port 50000. |
| `--network-interface` |
The name of the network interface to bind to for the Kubernetes API. A common use case of `--network-interface` is for multi-node clusters where node communication should happen on a particular network. **Default**: If a network interface is not provided, the first valid, non-local network interface is used. |
| `--no-proxy` |
Comma-separated list of hosts for which not to use a proxy. For single-node installations, pass the IP address of the node where you are installing. For multi-node installations, when deploying the first node, pass the list of IP addresses for all nodes in the cluster (typically in CIDR notation). The network interface's subnet will automatically be added to the no-proxy list if the node's IP address is not already included. Internal cluster communication is never proxied.
To ensure your application's internal cluster communication is not proxied, use fully qualified domain names like `my-service.my-namespace.svc` or `my-service.my-namespace.svc.cluster.local`. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
| `--private-ca` |
The path to trusted certificate authority (CA) certificates. Using the `--private-ca` flag ensures that the CA is trusted by the installation. KOTS writes the CA certificates provided with the `--private-ca` flag to a ConfigMap in the cluster. The KOTS [PrivateCACert](/reference/template-functions-static-context#privatecacert) template function returns the ConfigMap containing the private CA certificates supplied with the `--private-ca` flag. You can use this template function to mount the ConfigMap so your containers trust the CA too. |
| `-y, --yes` |
In Embedded Cluster 1.21.0 and later, pass the `--yes` flag to provide an affirmative response to any user prompts for the command. For example, you can pass `--yes` with the `--ignore-host-preflights` flag to ignore host preflight checks during automated installations. **Requirement:** Embedded Cluster 1.21.0 and later |
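For example, an Embedded Cluster installation that sets several of these flags might look like the following. This is a sketch: `APP_SLUG`, the license path, the proxy address, and the password are placeholders, and only the flags relevant to your environment are needed:
```bash
sudo ./APP_SLUG install \
  --license ./license.yaml \
  --admin-console-password 'ADMIN_CONSOLE_PASSWORD' \
  --http-proxy http://10.128.0.50:3128 \
  --https-proxy http://10.128.0.50:3128 \
  --no-proxy 10.128.0.0/24
```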
| Flag | Type | Description |
--rootdir |
string | Root directory where the YAML will be written (default `${HOME}` or `%USERPROFILE%`) |
--namespace |
string | Target namespace for the Admin Console |
--shared-password |
string | Shared password to use when deploying the Admin Console |
--http-proxy |
string | Sets HTTP_PROXY environment variable in all KOTS Admin Console components |
--https-proxy |
string | Sets HTTPS_PROXY environment variable in all KOTS Admin Console components
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead of `--kotsadm-namespace`. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--no-proxy |
string | Sets NO_PROXY environment variable in all KOTS Admin Console components |
--private-ca-configmap |
string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
--with-minio |
bool | Set to true to include a local minio instance to be used for storage (default true) |
--minimal-rbac |
bool | Set to true to include a local minio instance to be used for storage (default true) |
--additional-namespaces |
string | Comma delimited list to specify additional namespace(s) managed by KOTS outside where it is to be deployed. Ignored without with --minimal-rbac=true |
--storage-class |
string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used |
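For example, assuming this table documents the `kubectl kots admin-console generate-manifests` command (referenced in the table that follows), a sketch that writes the Admin Console manifests to a local directory might look like the following. The directory, namespace, and password values are placeholders:

```bash
kubectl kots admin-console generate-manifests \
  --rootdir ./kots-manifests \
  --namespace kots-admin \
  --shared-password 'choose-a-strong-password' \
  --with-minio=false
```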
| Flag | Type | Description |
|---|---|---|
| `--ensure-rbac` | bool | When false, KOTS does not attempt to create the RBAC resources necessary to manage applications. Default: true. If a role specification is needed, use the generate-manifests command. |
| `-h, --help` | | Help for the command. |
| `--kotsadm-namespace` | string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use |
| `--kotsadm-registry` | string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
| `--registry-password` | string | Password to use to authenticate with the application registry. Used for air gap installations. |
| `--registry-username` | string | Username to use to authenticate with the application registry. Used for air gap installations. |
| `--skip-rbac-check` | bool | When true, KOTS does not validate RBAC permissions. Default: false |
| `--strict-security-context` | bool | Set to By default, KOTS Pods and containers are not deployed with a specific security context. When The following shows the Default: |
| `--wait-duration` | string | Timeout to be used while waiting for individual components to be ready. Must be in Go duration format. Example: 10s, 2m |
| `--with-minio` | bool | When true, KOTS deploys a local MinIO instance for storage and attempts to change any MinIO-based snapshots (hostpath and NFS) to the local-volume-provider plugin. See local-volume-provider in GitHub. Default: true |
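Assuming these flags belong to a `kubectl kots admin-console` command such as `upgrade` (the command name is not shown in this excerpt), a usage sketch might look like the following; the namespace and timeout values are placeholders:

```bash
kubectl kots admin-console upgrade \
  -n kots-admin \
  --wait-duration 5m \
  --with-minio=false
```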
| Flag | Type | Description |
|---|---|---|
| `--additional-annotations` | bool | Additional annotations to add to kotsadm pods. |
| `--additional-labels` | bool | Additional labels to add to kotsadm pods. |
| `--airgap` | bool | Set to true to run install in air gapped mode. Setting --airgap-bundle implies --airgap=true. Default: false. For more information, see Air Gap Installation in Existing Clusters with KOTS. |
| `--airgap-bundle` | string | Path to the application air gap bundle where application metadata will be loaded from. Setting --airgap-bundle implies --airgap=true. For more information, see Air Gap Installation in Existing Clusters with KOTS. |
| `--app-version-label` | string | The application version label to install. If not specified, the latest version is installed. |
| `--config-values` | string | Path to a manifest file containing configuration values. This manifest must be apiVersion: kots.io/v1beta1 and kind: ConfigValues. For more information, see Install with the KOTS CLI. |
| `--copy-proxy-env` | bool | Copy proxy environment variables from current environment into all Admin Console components. Default: false |
| `--disable-image-push` | bool | Set to true to disable images from being pushed to private registry. Default: false |
| `--ensure-rbac` | bool | When false, KOTS does not attempt to create the RBAC resources necessary to manage applications. Default: true. If a role specification is needed, use the [generate-manifests](kots-cli-admin-console-generate-manifests) command. |
| `-h, --help` | | Help for the command. |
| `--http-proxy` | string | Sets HTTP_PROXY environment variable in all Admin Console components. |
| `--https-proxy` | string | Sets HTTPS_PROXY environment variable in all Admin Console components. |
| `--kotsadm-namespace` | string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use |
| `--kotsadm-registry` | string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
| `--license-file` | string | Path to a license file. |
| `--local-path` | string | Specify a local-path to test the behavior of rendering a Replicated application locally. Only supported on Replicated application types. |
| `--name` | string | Name of the application to use in the Admin Console. |
| `--no-port-forward` | bool | Set to true to disable automatic port forward. Default: false |
| `--no-proxy` | string | Sets NO_PROXY environment variable in all Admin Console components. |
| `--port` | string | Override the local port to access the Admin Console. Default: 8800 |
| `--private-ca-configmap` | string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment. |
| `--preflights-wait-duration` | string | Timeout to be used while waiting for preflights to complete. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 15m |
| `--registry-password` | string | Password to use to authenticate with the application registry. Used for air gap installations. |
| `--registry-username` | string | Username to use to authenticate with the application registry. Used for air gap installations. |
| `--repo` | string | Repo URI to use when installing a Helm chart. |
| `--shared-password` | string | Shared password to use when deploying the Admin Console. |
| `--skip-compatibility-check` | bool | Set to true to skip compatibility checks between the current KOTS version and the application. Default: false |
| `--skip-preflights` | bool | Set to true to skip preflight checks. Default: false. If any strict preflight checks are configured, the --skip-preflights flag is not honored because strict preflight checks must run and contain no failures before the application is deployed. For more information, see [Define Preflight Checks](/vendor/preflight-defining). |
| `--skip-rbac-check` | bool | Set to true to bypass RBAC check. Default: false |
| `--skip-registry-check` | bool | Set to true to skip the connectivity test and validation of the provided registry information. Default: false |
| `--strict-security-context` | bool | Set to By default, KOTS Pods and containers are not deployed with a specific security context. When The following shows the Default: |
| `--use-minimal-rbac` | bool | When set to true, KOTS RBAC permissions are limited to the namespace where it is installed. To use --use-minimal-rbac, the application must support namespace-scoped installations and the user must have the minimum RBAC permissions required by KOTS in the target namespace. For a complete list of requirements, see [Namespace-scoped RBAC Requirements](/enterprise/installing-general-requirements#namespace-scoped) in _Installation Requirements_. Default: false |
| `--wait-duration` | string | Timeout to be used while waiting for individual components to be ready. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 2m |
| `--with-minio` | bool | When set to true, KOTS deploys a local MinIO instance for storage and uses MinIO for host path and NFS snapshot storage. Default: true |
| `--storage-class` | string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used |
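For example, the following `kubectl kots install` sketch uses several of the flags above. The application slug, file paths, and password are placeholders; adjust them for your application and environment:

```bash
kubectl kots install APP_SLUG \
  --license-file ./license.yaml \
  --config-values ./configvalues.yaml \
  --shared-password 'choose-a-strong-password' \
  --no-port-forward
```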
| Flag | Type | Description |
|---|---|---|
| `--force` | bool | Removes the reference even if the application has already been deployed. |
| `--undeploy` | bool | Un-deploys the application by deleting all its resources from the cluster. When Note: The following describes how |
| `-n` | string | The namespace where the target application is deployed. Use |
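Assuming these flags belong to the `kubectl kots remove` command (the command name is not shown in this excerpt), a usage sketch might look like the following; the application slug and namespace are placeholders:

```bash
kubectl kots remove APP_SLUG -n APP_NAMESPACE --undeploy
```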
| Flag | Type | Description |
|---|---|---|
| `-h, --help` | | Help for the command. |
| `-n, --namespace` | string | The namespace of the Admin Console (required) |
| `--hostpath` | string | A local host path on the node |
| `--kotsadm-namespace` | string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use |
| `--kotsadm-registry` | string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
| `--registry-password` | string | Password to use to authenticate with the application registry. Used for air gap installations. |
| `--registry-username` | string | Username to use to authenticate with the application registry. Used for air gap installations. |
| `--force-reset` | bool | Bypass the reset prompt and force resetting the nfs path. (default `false`) |
| `--output` | string | Output format. Supported values: `json` |
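Assuming this table documents the `kubectl kots velero configure-hostpath` command, a usage sketch might look like the following; the namespace and host path are placeholders:

```bash
kubectl kots velero configure-hostpath \
  --namespace APP_NAMESPACE \
  --hostpath /var/lib/kots-snapshots
```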
| Flag | Type | Description |
|---|---|---|
| `-h, --help` | | Help for the command. |
| `-n, --namespace` | string | The namespace of the Admin Console (required) |
| `--nfs-server` | string | The hostname or IP address of the NFS server (required) |
| `--nfs-path` | string | The path that is exported by the NFS server (required) |
| `--kotsadm-namespace` | string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use |
| `--kotsadm-registry` | string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
| `--registry-password` | string | Password to use to authenticate with the application registry. Used for air gap installations. |
| `--registry-username` | string | Username to use to authenticate with the application registry. Used for air gap installations. |
| `--force-reset` | bool | Bypass the reset prompt and force resetting the nfs path. (default `false`) |
| `--output` | string | Output format. Supported values: `json` |
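Assuming this table documents the `kubectl kots velero configure-nfs` command, a usage sketch might look like the following; the namespace, server, and export path are placeholders:

```bash
kubectl kots velero configure-nfs \
  --namespace APP_NAMESPACE \
  --nfs-server nfs.example.com \
  --nfs-path /exports/kots-snapshots
```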
| Flag | Type | Description |
|---|---|---|
| `-h, --help` | | Help for the command. |
| `-n, --namespace` | string | The namespace of the Admin Console (required) |
| `--access-key-id` | string | The AWS access key ID to use for accessing the bucket (required) |
| `--bucket` | string | Name of the object storage bucket where backups should be stored (required) |
| `--endpoint` | string | The S3 endpoint (for example, http://some-other-s3-endpoint) (required) |
| `--path` | string | Path to a subdirectory in the object store bucket |
| `--region` | string | The region where the bucket exists (required) |
| `--secret-access-key` | string | The AWS secret access key to use for accessing the bucket (required) |
| `--cacert` | string | File containing a certificate bundle to use when verifying TLS connections to the object store |
| `--skip-validation` | bool | Skip the validation of the S3 bucket (default `false`) |
| `--kotsadm-namespace` | string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use |
| `--kotsadm-registry` | string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
| `--registry-password` | string | Password to use to authenticate with the application registry. Used for air gap installations. |
| `--registry-username` | string | Username to use to authenticate with the application registry. Used for air gap installations. |
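Assuming this table documents the `kubectl kots velero configure-other-s3` command, a usage sketch might look like the following; the namespace, endpoint, bucket, and credentials are placeholders:

```bash
kubectl kots velero configure-other-s3 \
  --namespace APP_NAMESPACE \
  --endpoint http://minio.example.com \
  --region us-east-1 \
  --bucket kots-snapshots \
  --access-key-id EXAMPLE_ACCESS_KEY_ID \
  --secret-access-key EXAMPLE_SECRET_ACCESS_KEY
```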
| Description | Notifies if any manifest file has allowPrivilegeEscalation set to true. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: allowPrivilegeEscalation: true ``` |
| Description | Requires an application icon. |
|---|---|
| Level | Warn |
| Applies To | Files with kind: Application and apiVersion: kots.io/v1beta1. |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: icon: https://example.com/app-icon.png ``` |
| Description | Requires an Application custom resource manifest file. Accepted value for |
|---|---|
| Level | Warn |
| Example | Example of matching YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application ``` |
| Description | Requires statusInformers. |
|---|---|
| Level | Warn |
| Applies To | Files with kind: Application and apiVersion: kots.io/v1beta1. |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: statusInformers: - deployment/example-nginx ``` |
| Description | Enforces valid types for Config items. For more information, see Items in Config. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: bool # bool is a valid type ``` **Incorrect**:: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: unknown_type # unknown_type is not a valid type ``` |
| Description | Enforces that all ConfigOption items do not reference themselves. |
|---|---|
| Level | Error |
| Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1. |
| Example | **Incorrect**: ```yaml spec: groups: - name: example_settings items: - name: example_default_value type: text value: repl{{ ConfigOption "example_default_value" }} ``` |
| Description | Requires all ConfigOption items to be defined in the Config custom resource manifest file. |
|---|---|
| Level | Warn |
| Applies To | All files |
| Description | Enforces that sub-templated ConfigOption items must be repeatable. |
|---|---|
| Level | Error |
| Applies To | All files |
| Description | Requires ConfigOption items with any of the following names to have |
|---|---|
| Level | Warn |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: my_secret type: password ``` |
| Description | Enforces valid For more information, see when in Config. |
|---|---|
| Level | Error |
| Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1. |
| Description | Enforces valid RE2 regular expressions pattern when regex validation is present. For more information, see Validation in Config. |
|---|---|
| Level | Error |
| Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1. |
| Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" // valid RE2 regular expression message: "JWT is invalid" ``` **Incorrect**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file validation: regex: pattern: "^/path/([A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" // invalid RE2 regular expression message: "JWT is invalid" ``` |
| Description | Enforces valid item type when regex validation is present. Item type should be For more information, see Validation in Config. |
|---|---|
| Level | Error |
| Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1. |
| Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file // valid item type validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" message: "JWT is invalid" ``` **Incorrect**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: bool // invalid item type validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" message: "JWT is invalid" ``` |
| Description | Requires a Config custom resource manifest file. Accepted value for Accepted value for |
|---|---|
| Level | Warn |
| Example | Example of matching YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Config ``` |
| Description | Notifies if any manifest file has a container image tag appended with `:latest`. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - image: nginx:latest ``` |
| Description | Disallows any manifest file having a container image tag that includes LocalImageName. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - image: LocalImageName ``` |
| Description | Notifies if a spec.container has no resources.limits field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: memory: '32Mi' cpu: '100m' # note the lack of a limit field ``` |
| Description | Notifies if a spec.container has no resources.requests field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: memory: '256Mi' cpu: '500m' # note the lack of a requests field ``` |
| Description | Notifies if a manifest file has no resources field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx # note the lack of a resources field ``` |
| Description | Disallows using the deprecated kURL installer |
|---|---|
| Level | Warn |
| Applies To | Files with kind: Installer and apiVersion: kurl.sh/v1beta1. |
| Example | **Correct**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer ``` **Incorrect**: ```yaml apiVersion: kurl.sh/v1beta1 kind: Installer ``` |
| Description | Enforces unique |
|---|---|
| Level | Error |
| Applies To | Files with kind: HelmChart and apiVersion: kots.io/v1beta1. |
| Description | Disallows duplicate Replicated custom resources. A release can only include one of each This rule disallows inclusion of more than one file with: |
|---|---|
| Level | Error |
| Applies To | All files |
| Description | Notifies if any manifest file has a Replicated strongly recommends not specifying a namespace to allow for flexibility when deploying into end user environments. For more information, see Managing Application Namespaces. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml metadata: name: spline-reticulator namespace: graphviz-pro ``` |
| Description | Requires that a |
|---|---|
| Level | Error |
| Applies To | Releases with a HelmChart custom resource manifest file containing kind: HelmChart and apiVersion: kots.io/v1beta1. |
| Description | Enforces that a HelmChart custom resource manifest file with |
|---|---|
| Level | Error |
| Applies To | Releases with a *.tar.gz archive file present. |
| Description | Enforces valid |
|---|---|
| Level | Warn |
| Applies To | Files with kind: HelmChart and apiVersion: kots.io/v1beta1. |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart spec: chart: releaseName: samplechart-release-1 ``` |
| Description | Enforces valid Replicated kURL add-on versions. kURL add-ons included in the kURL installer must pin specific versions rather than |
|---|---|
| Level | Error |
| Applies To | Files with |
| Example | **Correct**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer spec: kubernetes: version: 1.24.5 ``` **Incorrect**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer spec: kubernetes: version: 1.24.x ekco: version: latest ``` |
| Description | Requires Accepts a |
|---|---|
| Level | Error |
| Applies To | Files with kind: Application and apiVersion: kots.io/v1beta1. |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: minKotsVersion: 1.0.0 ``` |
| Description | Enforces valid YAML after rendering the manifests using the Config spec. |
|---|---|
| Level | Error |
| Applies To | YAML files |
| Example | **Example Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: repl{{ ConfigOption `nginx_image`}} ``` **Correct Config**: ```yaml apiVersion: kots.io/v1beta1 kind: Config metadata: name: nginx-config spec: groups: - name: nginx-deployment-config title: nginx deployment config items: - name: nginx_image title: image type: text default: "nginx" ``` **Resulting Rendered Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: nginx ``` **Incorrect Config**: ```yaml apiVersion: kots.io/v1beta1 kind: Config metadata: name: nginx-config spec: groups: - name: nginx-deployment-config items: - name: nginx_image title: image type: text default: "***HIDDEN***" ``` **Resulting Lint Error**: ```json { "lintExpressions": [ { "rule": "invalid-rendered-yaml", "type": "error", "message": "yaml: did not find expected alphabetic or numeric character: image: ***HIDDEN***", "path": "nginx-chart.yaml", "positions": null } ], "isLintingComplete": false } ``` **Incorrectly Rendered Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: ***HIDDEN*** ``` |
| Description | Requires Accepts a |
|---|---|
| Level | Error |
| Applies To | Files with kind: Application and apiVersion: kots.io/v1beta1 |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: targetKotsVersion: 1.0.0 ``` |
| Description | Requires that the value of a property matches that property's expected type. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | **Correct**: ```yaml ports: - serviceName: "example" servicePort: 80 ``` **Incorrect**: ```yaml ports: - serviceName: "example" servicePort: "80" ``` |
| Description | Enforces valid YAML. |
|---|---|
| Level | Error |
| Applies To | YAML files |
| Example | **Correct**: ```yaml spec: kubernetes: version: 1.24.5 ``` **Incorrect**: ```yaml spec: kubernetes: version 1.24.x ``` |
| Description | Notifies if any manifest file may contain secrets. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml data: ENV_VAR_1: "y2X4hPiAKn0Pbo24/i5nlInNpvrL/HJhlSCueq9csamAN8g5y1QUjQnNL7btQ==" ``` |
| Description | Requires the apiVersion: field in all files. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 ``` |
| Description | Requires the kind: field in all files. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml kind: Config ``` |
| Description | Requires that each The linter cannot evaluate If you configure status informers for Helm-managed resources, you can ignore |
|---|---|
| Level | Warning |
| Applies To | Compares |
| Description | Requires a Preflight custom resource manifest file with: and one of the following: |
|---|---|
| Level | Warn |
| Example | Example of matching YAML for this rule: ```yaml apiVersion: troubleshoot.sh/v1beta2 kind: Preflight ``` |
| Description | Notifies if any manifest file has privileged set to true. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: privileged: true ``` |
| Description | Enforces ConfigOption For more information, see Repeatable Item Template Targets in Config. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port yamlPath: 'spec.ports[0]' ``` |
| Description | Disallows repeating Config item with undefined For more information, see Repeatable Item Template Targets in Config. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port title: Service Port type: text repeatable: true templates: - apiVersion: v1 kind: Service name: my-service namespace: my-app yamlPath: 'spec.ports[0]' - apiVersion: v1 kind: Service name: my-service namespace: my-app ``` |
| Description | Disallows repeating Config item with undefined For more information, see Repeatable Items in Config. |
|---|---|
| Level | Error |
| Applies To | All files |
| Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port title: Service Port type: text repeatable: true valuesByGroup: ports: port-default-1: "80" ``` |
| Description | Notifies if any manifest file has replicas set to 1. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: replicas: 1 ``` |
| Description | Notifies if a spec.container has no resources.limits.cpu field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: memory: '256Mi' # note the lack of a cpu field ``` |
| Description | Notifies if a spec.container has no resources.limits.memory field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: cpu: '500m' # note the lack of a memory field ``` |
| Description | Notifies if a spec.container has no resources.requests.cpu field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: memory: '32Mi' # note the lack of a cpu field ``` |
| Description | Notifies if a spec.container has no resources.requests.memory field. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: cpu: '100m' # note the lack of a memory field ``` |
| Description | Requires a Troubleshoot manifest file. Accepted values for Accepted values for |
|---|---|
| Level | Warn |
| Example | Example of matching YAML for this rule: ```yaml apiVersion: troubleshoot.sh/v1beta2 kind: SupportBundle ``` |
| Description | Notifies if a spec.volumes has hostPath set to /var/run/docker.sock. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: volumes: - hostPath: path: /var/run/docker.sock ``` |
| Description | Notifies if a spec.volumes has defined a hostPath. |
|---|---|
| Level | Info |
| Applies To | All files |
| Example | Example of matching YAML for this rule: ```yaml spec: volumes: - hostPath: path: /data ``` |
[View a larger version of this image](/images/authorize-repl-cli.png)
:::note
The `replicated login` command creates a token after you log in to your vendor account in a browser and saves it to a config file. Alternatively, if you do not have access to a browser, you can set the `REPLICATED_API_TOKEN` environment variable to authenticate. For more information, see [(Optional) Set Environment Variables](#env-var) below.
:::
1. (Optional) When you are done using the Replicated CLI, remove any stored credentials created by the `replicated login` command:
```
replicated logout
```
### Linux
To install and run the latest Replicated CLI on Linux:
1. Run the following command:
```shell
curl -s https://api.github.com/repos/replicatedhq/replicated/releases/latest \
| grep "browser_download_url.*linux_amd64.tar.gz" \
| cut -d : -f 2,3 \
| tr -d \" \
| wget -O replicated.tar.gz -qi -
tar xf replicated.tar.gz replicated && rm replicated.tar.gz
mv replicated /usr/local/bin/replicated
```
:::note
If you do not have write access to the `/usr/local/bin` directory, you can install with sudo by running `sudo mv replicated /usr/local/bin/replicated` instead of `mv replicated /usr/local/bin/replicated`.
:::
1. Verify that the installation was successful:
```
replicated --help
```
1. Authorize the Replicated CLI:
```
replicated login
```
In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
[View a larger version of this image](/images/authorize-repl-cli.png)
:::note
The `replicated login` command creates a token after you log in to your vendor account in a browser and saves it to a config file. Alternatively, if you do not have access to a browser, you can set the `REPLICATED_API_TOKEN` environment variable to authenticate. For more information, see [(Optional) Set Environment Variables](#env-var) below.
:::
1. (Optional) When you are done using the Replicated CLI, remove any stored credentials created by the `replicated login` command:
```
replicated logout
```
### Docker / Windows
Installing in Docker environments requires that you set the `REPLICATED_API_TOKEN` environment variable to authorize the Replicated CLI with an API token. For more information, see [(Optional) Set Environment Variables](#env-var) below.
To install and run the latest Replicated CLI in Docker environments:
1. Generate a service account or user API token in the vendor portal. To create new releases, the token must have `Read/Write` access. See [Generating API Tokens](/vendor/replicated-api-tokens).
1. Get the latest Replicated CLI installation files from the [replicatedhq/replicated repository](https://github.com/replicatedhq/replicated/releases) on GitHub.
Download and install the files. For simplicity, the usage in the next step assumes that the CLI is downloaded and installed to the desktop.
1. Authorize the Replicated CLI:
- Through a Docker container:
```shell
docker run \
-e REPLICATED_API_TOKEN=$TOKEN \
replicated/vendor-cli --help
```
Replace `TOKEN` with your API token.
- On Windows:
```dos
docker.exe run \
-e REPLICATED_API_TOKEN=%TOKEN% \
replicated/vendor-cli --help
```
Replace `TOKEN` with your API token.
For more information about the `docker run` command, see [docker run](https://docs.docker.com/engine/reference/commandline/run/) in the Docker documentation.
## (Optional) Set Environment Variables {#env-var}
The Replicated CLI supports setting the following environment variables:
* **`REPLICATED_API_TOKEN`**: A service account or user API token generated from a vendor portal team or individual account. The `REPLICATED_API_TOKEN` environment variable has the following use cases:
* To use Replicated CLI commands as part of automation (such as from continuous integration and continuous delivery pipelines), authenticate by providing the `REPLICATED_API_TOKEN` environment variable.
* To authorize the Replicated CLI when installing and running the CLI in Docker containers.
* To optionally authorize the Replicated CLI in macOS or Linux environments instead of using the `replicated login` command.
* **`REPLICATED_APP`**: The slug of the target application.
When using the Replicated CLI to manage applications through your vendor account (including channels, releases, customers, or other objects associated with an application), you can set the `REPLICATED_APP` environment variable to avoid passing the application slug with each command.
### `REPLICATED_API_TOKEN`
To set the `REPLICATED_API_TOKEN` environment variable:
1. Generate a service account or user API token in the vendor portal. To create new releases, the token must have `Read/Write` access. See [Generating API Tokens](/vendor/replicated-api-tokens).
1. Set the environment variable, replacing `TOKEN` with the token you generated in the previous step:
* **macOS or Linux**:
```
export REPLICATED_API_TOKEN=TOKEN
```
* **Docker**:
```
docker run \
-e REPLICATED_API_TOKEN=$TOKEN \
replicated/vendor-cli --help
```
* **Windows**:
```
docker.exe run \
-e REPLICATED_API_TOKEN=%TOKEN% \
replicated/vendor-cli --help
```
### `REPLICATED_APP`
To set the `REPLICATED_APP` environment variable:
1. In the [vendor portal](https://vendor.replicated.com), go to the **Application Settings** page and copy the slug for the target application. For more information, see [Get the Application Slug](/vendor/vendor-portal-manage-app#slug) in _Managing Application_.
1. Set the environment variable, replacing `APP_SLUG` with the slug for the target application that you retrieved in the previous step:
* **macOS or Linux**:
```
export REPLICATED_APP=APP_SLUG
```
* **Docker**:
```
docker run \
-e REPLICATED_APP=$APP_SLUG \
replicated/vendor-cli --help
```
* **Windows**:
```
docker.exe run \
-e REPLICATED_APP=%APP_SLUG% \
replicated/vendor-cli --help
```
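With both environment variables set, Replicated CLI commands can run without passing `--token` or `--app` on each call. The following is a sketch; `replicated release ls` is used only as an illustrative command, and the token and slug values are placeholders:

```bash
export REPLICATED_API_TOKEN=TOKEN
export REPLICATED_APP=APP_SLUG
# Both values are read from the environment, so no --token or --app flags are needed
replicated release ls
```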
---
# replicated instance inspect
Show full details for a customer instance
### Synopsis
Show full details for a customer instance
```
replicated instance inspect [flags]
```
### Options
```
--customer string Customer Name or ID
-h, --help help for inspect
--instance string Instance Name or ID
-o, --output string The output format to use. One of: json|table (default "table")
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated instance](replicated-cli-instance) - Manage instances
---
# replicated instance ls
list customer instances
### Synopsis
list customer instances
```
replicated instance ls [flags]
```
### Aliases
```
ls, list
```
### Options
```
--customer string Customer Name or ID
-h, --help help for ls
-o, --output string The output format to use. One of: json|table (default "table")
--tag stringArray Tags to use to filter instances (key=value format, can be specified multiple times). Only one tag needs to match (an OR operation)
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated instance](replicated-cli-instance) - Manage instances
---
# replicated instance tag
tag an instance
### Synopsis
remove or add instance tags
```
replicated instance tag [flags]
```
### Options
```
--customer string Customer Name or ID
-h, --help help for tag
--instance string Instance Name or ID
-o, --output string The output format to use. One of: json|table (default "table")
--tag stringArray Tags to apply to instance. Leave value empty to remove tag. Tags not specified will not be removed.
```
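For example, the following sketch adds an `environment=production` tag to an instance; the customer name, instance ID, and tag value are placeholders:

```bash
replicated instance tag \
  --customer "Example Customer" \
  --instance i-1234567890 \
  --tag environment=production
```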
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated instance](replicated-cli-instance) - Manage instances
---
# replicated instance
Manage instances
### Synopsis
The instance command allows vendors to display and tag customer instances.
### Options
```
-h, --help help for instance
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
* [replicated instance inspect](replicated-cli-instance-inspect) - Show full details for a customer instance
* [replicated instance ls](replicated-cli-instance-ls) - list customer instances
* [replicated instance tag](replicated-cli-instance-tag) - tag an instance
---
# replicated login
Log in to Replicated
### Synopsis
This command will open your browser to ask you for authentication details and create or retrieve an API token for the CLI to use.
```
replicated login [flags]
```
### Options
```
-h, --help help for login
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
---
# replicated logout
Logout from Replicated
### Synopsis
This command will remove any stored credentials from the CLI.
```
replicated logout [flags]
```
### Options
```
-h, --help help for logout
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
---
# replicated registry add dockerhub
Add a DockerHub registry
### Synopsis
Add a DockerHub registry using a username/password or an account token
```
replicated registry add dockerhub [flags]
```
### Options
```
--authtype string Auth type for the registry (default "password")
-h, --help help for dockerhub
-o, --output string The output format to use. One of: json|table (default "table")
--password string The password to authenticate to the registry with
--password-stdin Take the password from stdin
--token string The token to authenticate to the registry with
--token-stdin Take the token from stdin
--username string The username to authenticate to the registry with
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add ecr
Add an ECR registry
### Synopsis
Add an ECR registry using an Access Key ID and Secret Access Key
```
replicated registry add ecr [flags]
```
### Options
```
--accesskeyid string The access key id to authenticate to the registry with
--endpoint string The ECR endpoint
-h, --help help for ecr
-o, --output string The output format to use. One of: json|table (default "table")
--secretaccesskey string The secret access key to authenticate to the registry with
--secretaccesskey-stdin Take the secret access key from stdin
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add gar
Add a Google Artifact Registry
### Synopsis
Add a Google Artifact Registry using a service account key
```
replicated registry add gar [flags]
```
### Options
```
--authtype string Auth type for the registry (default "serviceaccount")
--endpoint string The GAR endpoint
-h, --help help for gar
-o, --output string The output format to use. One of: json|table (default "table")
--serviceaccountkey string The service account key to authenticate to the registry with
--serviceaccountkey-stdin Take the service account key from stdin
--token string The token to use to auth to the registry with
--token-stdin Take the token from stdin
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add gcr
Add a Google Container Registry
### Synopsis
Add a Google Container Registry using a service account key
```
replicated registry add gcr [flags]
```
### Options
```
--endpoint string The GCR endpoint
-h, --help help for gcr
-o, --output string The output format to use. One of: json|table (default "table")
--serviceaccountkey string The service account key to authenticate to the registry with
--serviceaccountkey-stdin Take the service account key from stdin
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add ghcr
Add a GitHub Container Registry
### Synopsis
Add a GitHub Container Registry using a username and personal access token (PAT)
```
replicated registry add ghcr [flags]
```
### Options
```
-h, --help help for ghcr
-o, --output string The output format to use. One of: json|table (default "table")
--token string The token to use to auth to the registry with
--token-stdin Take the token from stdin
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add other
Add a generic registry
### Synopsis
Add a generic registry using a username/password
```
replicated registry add other [flags]
```
### Options
```
--endpoint string endpoint for the registry
-h, --help help for other
-o, --output string The output format to use. One of: json|table (default "table")
--password string The password to authenticate to the registry with
--password-stdin Take the password from stdin
--username string The username to authenticate to the registry with
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add quay
Add a quay.io registry
### Synopsis
Add a quay.io registry using a username/password (or a robot account)
```
replicated registry add quay [flags]
```
### Options
```
-h, --help help for quay
-o, --output string The output format to use. One of: json|table (default "table")
--password string The password to authenticate to the registry with
--password-stdin Take the password from stdin
--username string The username to authenticate to the registry with
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--skip-validation Skip validation of the registry (not recommended)
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry add](replicated-cli-registry-add) - add
---
# replicated registry add
add
### Synopsis
add
### Options
```
-h, --help help for add
--skip-validation Skip validation of the registry (not recommended)
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry](replicated-cli-registry) - Manage registries
* [replicated registry add dockerhub](replicated-cli-registry-add-dockerhub) - Add a DockerHub registry
* [replicated registry add ecr](replicated-cli-registry-add-ecr) - Add an ECR registry
* [replicated registry add gar](replicated-cli-registry-add-gar) - Add a Google Artifact Registry
* [replicated registry add gcr](replicated-cli-registry-add-gcr) - Add a Google Container Registry
* [replicated registry add ghcr](replicated-cli-registry-add-ghcr) - Add a GitHub Container Registry
* [replicated registry add other](replicated-cli-registry-add-other) - Add a generic registry
* [replicated registry add quay](replicated-cli-registry-add-quay) - Add a quay.io registry
---
# replicated registry ls
list registries
### Synopsis
list registries, or a single registry by name
```
replicated registry ls [NAME] [flags]
```
### Aliases
```
ls, list
```
### Options
```
-h, --help help for ls
-o, --output string The output format to use. One of: json|table (default "table")
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry](replicated-cli-registry) - Manage registries
---
# replicated registry rm
remove registry
### Synopsis
remove registry by endpoint
```
replicated registry rm [ENDPOINT] [flags]
```
### Aliases
```
rm, delete
```
### Options
```
-h, --help help for rm
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry](replicated-cli-registry) - Manage registries
---
# replicated registry test
test registry
### Synopsis
test registry
```
replicated registry test HOSTNAME [flags]
```
### Options
```
-h, --help help for test
--image string The image to test pulling
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated registry](replicated-cli-registry) - Manage registries
---
# replicated registry
Manage registries
### Synopsis
registry can be used to manage existing registries and add new registries to a team
### Options
```
-h, --help help for registry
```
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
* [replicated registry add](replicated-cli-registry-add) - add
* [replicated registry ls](replicated-cli-registry-ls) - list registries
* [replicated registry rm](replicated-cli-registry-rm) - remove registry
* [replicated registry test](replicated-cli-registry-test) - test registry
---
# replicated release compatibility
Report release compatibility
### Synopsis
Report release compatibility for a kubernetes distribution and version
```
replicated release compatibility SEQUENCE [flags]
```
### Options
```
--distribution string Kubernetes distribution of the cluster to report on.
--failure If set, the compatibility will be reported as a failure.
-h, --help help for compatibility
--notes string Additional notes to report.
--success If set, the compatibility will be reported as a success.
--version string Kubernetes version of the cluster to report on (format is distribution dependent)
```
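For example, the following sketch reports a successful compatibility result for a release sequence; the sequence number, distribution, and version values are placeholders and must match a distribution and version supported for your cluster testing:

```bash
replicated release compatibility 42 \
  --distribution eks \
  --version 1.28 \
  --success \
  --notes "Passed integration tests"
```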
### Options inherited from parent commands
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated release](replicated-cli-release) - Manage app releases
---
# replicated release create
Create a new release
### Synopsis
Create a new release by providing application manifests for the next release in
your sequence.
```
replicated release create [flags]
```
### Options
```
--auto generate default values for use in CI
-y, --confirm-auto auto-accept the configuration generated by the --auto flag
    --ensure-channel When used with --promote
```
[View a larger version of this image](/images/customer-expiration-policy.png)
1. Install the Replicated SDK as a standalone component in your cluster. This is called _integration mode_. Installing in integration mode allows you to develop locally against the SDK API without needing to create releases for your application in the vendor portal. See [Develop Against the SDK API](/vendor/replicated-sdk-development).
1. In your application, use the `/api/v1/license/fields/expires_at` endpoint to get the `expires_at` field that you defined in the previous step.
**Example:**
```bash
curl replicated:3000/api/v1/license/fields/expires_at
```
```json
{
"name": "expires_at",
"title": "Expiration",
"description": "License Expiration",
"value": "2023-05-30T00:00:00Z",
"valueType": "String",
"signature": {
"v1": "c6rsImpilJhW0eK+Kk37jeRQvBpvWgJeXK2M..."
}
}
```
1. Add logic to your application to revoke access if the current date and time is more recent than the expiration date of the license.
1. (Recommended) Use signature verification in your application to ensure the integrity of the license field. See [Verify License Field Signatures with the Replicated SDK API](/vendor/licenses-verify-fields-sdk-api).
---
# replicated
Manage your Commercial Software Distribution Lifecycle using Replicated
### Synopsis
The 'replicated' CLI allows Replicated customers (vendors) to manage their Commercial Software Distribution Lifecycle (CSDL) using the Replicated API.
### Options
```
--app string The app slug or app id to use in all calls
--debug Enable debug output
-h, --help help for replicated
--token string The API token to use to access your app in the Vendor API
```
### SEE ALSO
* [replicated api](replicated-cli-api) - Make ad-hoc API calls to the Replicated API
* [replicated app](replicated-cli-app) - Manage applications
* [replicated channel](replicated-cli-channel) - List channels
* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
* [replicated completion](replicated-cli-completion) - Generate completion script
* [replicated customer](replicated-cli-customer) - Manage customers
* [replicated default](replicated-cli-default) - Manage default values used by other commands
* [replicated installer](replicated-cli-installer) - Manage Kubernetes installers
* [replicated instance](replicated-cli-instance) - Manage instances
* [replicated login](replicated-cli-login) - Log in to Replicated
* [replicated logout](replicated-cli-logout) - Logout from Replicated
* [replicated registry](replicated-cli-registry) - Manage registries
* [replicated release](replicated-cli-release) - Manage app releases
* [replicated version](replicated-cli-version) - Print the current version and exit
* [replicated vm](replicated-cli-vm) - Manage test virtual machines.
---
# About Template Functions
This topic describes Replicated KOTS template functions, including use cases, template function contexts, and syntax.
## Overview
Replicated provides a set of custom template functions, based on the Go text/template library, for use in the Kubernetes manifest files of applications deployed by Replicated KOTS.
Common use cases for KOTS template functions include rendering values during installation or upgrade, such as:
* Customer-specific license field values
* User-provided configuration values
* Information about the customer environment, such as the number of nodes or the Kubernetes version in the cluster where the application is installed
* Random strings
KOTS template functions can also be used to work with integer, boolean, float, and string values, such as doing mathematical operations, trimming leading and trailing spaces, or converting string values to integers or booleans.
All functionality of the Go templating language, including if statements, loops, and variables, is supported with KOTS template functions. For more information about the Go library, see [text/template](https://golang.org/pkg/text/template/) in the Go documentation.
### Supported File Types
You can use KOTS template functions in Kubernetes manifest files for applications deployed by KOTS, such as:
* Custom resources in the `kots.io` API group like Application, Config, or HelmChart
* Custom resources in other API groups like Preflight, SupportBundle, or Backup
* Kubernetes objects like Deployments, Services, Secrets, or ConfigMaps
* Kubernetes Operators
### Limitations
* Not all fields in the Config and Application custom resources support templating. For more information, see [Application](/reference/custom-resource-application) and [Item Properties](/reference/custom-resource-config#item-properties) in _Config_.
* Templating is not supported in the [Embedded Cluster Config](/reference/embedded-config) resource.
* KOTS template functions are not directly supported in Helm charts. For more information, see [Helm Charts](#helm-charts) below.
### Helm Charts
KOTS template functions are _not_ directly supported in Helm charts. However, the HelmChart custom resource provides a way to map values rendered by KOTS template functions to Helm chart values. This allows you to use KOTS template functions with Helm charts without making changes to those Helm charts.
For information about how to map values from the HelmChart custom resource to Helm chart `values.yaml` files, see [Setting Helm Chart Values with KOTS](/vendor/helm-optional-value-keys).
### Template Function Rendering
During application installation and upgrade, KOTS templates all Kubernetes manifest files in a release (except for the Config custom resource) at the same time during a single process.
For the [Config](/reference/custom-resource-config) custom resource, KOTS templates each item separately so that config items can be used in templates for other items. For examples of this, see [Using Conditional Statements in Configuration Fields](/vendor/config-screen-conditional) and [Template Function Examples](/reference/template-functions-examples).
## Syntax {#syntax}
The KOTS template function syntax supports the following functionally equivalent delimiters:
* [`repl{{ ... }}`](#syntax-integer)
* [`{{repl ... }}`](#syntax-string)
### Syntax Requirements
KOTS template function syntax has the following requirements:
* In both the `repl{{ ... }}` and `{{repl ... }}` syntaxes, there must be no whitespace between `repl` and the `{{` delimiter.
* The manifests where KOTS template functions are used must be valid YAML. This is because the YAML manifests are linted before KOTS template functions are rendered.
### `repl{{ ... }}` {#syntax-integer}
This syntax is recommended for most use cases.
Any quotation marks wrapped around this syntax are stripped during rendering. If you need the rendered value to be quoted, you can pipe into quote (`| quote`) or use the [`{{repl ... }}`](#syntax-string) syntax instead.
#### Integer Example
```yaml
http:
port: repl{{ ConfigOption "load_balancer_port" }}
```
```yaml
http:
port: 8888
```
#### Example with `| quote`
```yaml
customTag: repl{{ ConfigOption "tag" | quote }}
```
```yaml
customTag: 'key: value'
```
#### If-Else Example
```yaml
http:
port: repl{{ if ConfigOptionEquals "ingress_type" "load_balancer" }}repl{{ ConfigOption "load_balancer_port" }}repl{{ else }}8081repl{{ end }}
```
```yaml
http:
port: 8081
```
For more examples, see [Template Function Examples](/reference/template-functions-examples).
### `{{repl ... }}` {#syntax-string}
This syntax can be useful when having the delimiters outside the template function improves readability of the YAML, such as in multi-line statements or if-else statements.
To use this syntax at the beginning of a value in YAML, it _must_ be wrapped in quotes because you cannot start a YAML value with the `{` character and manifests consumed by KOTS must be valid YAML. When this syntax is wrapped in quotes, the rendered value is also wrapped in quotes.
#### Example With Quotes
The following example is wrapped in quotes because it is used at the beginning of a statement in YAML:
```yaml
customTag: '{{repl ConfigOption "tag" }}'
```
```yaml
customTag: 'key: value'
```
#### If-Else Example
```yaml
my-service:
type: '{{repl if ConfigOptionEquals "ingress_type" "load_balancer" }}LoadBalancer{{repl else }}ClusterIP{{repl end }}'
```
```yaml
my-service:
type: 'LoadBalancer'
```
For more examples, see [Template Function Examples](/reference/template-functions-examples).
## Contexts {#contexts}
KOTS template functions are grouped into different contexts, depending on the phase of the application lifecycle when the function is available and the context of the data that is provided.
### Static Context
The context necessary to render the static template functions is always available.
The static context also includes the Masterminds Sprig function library. For more information, see [Sprig Function Documentation](http://masterminds.github.io/sprig/) on the sprig website.
For a list of all KOTS template functions available in the static context, see [Static Context](template-functions-static-context).
### Config Context
Template functions in the config context are available when rendering an application that includes the KOTS [Config](/reference/custom-resource-config) custom resource, which defines the KOTS Admin Console config screen. At execution time, template functions in the config context also can use the static context functions. For more information about configuring the Admin Console config screen, see [About the Configuration Screen](/vendor/config-screen-about).
For a list of all KOTS template functions available in the config context, see [Config Context](template-functions-config-context).
### License Context
Template functions in the license context have access to customer license and version data. For more information about managing customer licenses, see [About Customers and Licensing](/vendor/licenses-about).
For a list of all KOTS template functions available in the license context, see [License Context](template-functions-license-context).
### kURL Context
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
Template functions in the kURL context have access to information about applications installed with Replicated kURL. For more information about kURL, see [Introduction to kURL](/vendor/kurl-about).
For a list of all KOTS template functions available in the kURL context, see [kURL Context](template-functions-kurl-context).
### Identity Context
:::note
The KOTS identity service feature is deprecated and is not available to new users.
:::
Template functions in the Identity context have access to Replicated KOTS identity service information.
For a list of all KOTS template functions available in the identity context, see [Identity Context](template-functions-identity-context).
---
# Config Context
This topic provides a list of the KOTS template functions in the Config context.
Template functions in the config context are available when rendering an application that includes the KOTS [Config](/reference/custom-resource-config) custom resource, which defines the KOTS Admin Console config screen. At execution time, template functions in the config context also can use the static context functions. For more information about configuring the Admin Console config screen, see [About the Configuration Screen](/vendor/config-screen-about).
## ConfigOption
```go
func ConfigOption(optionName string) string
```
Returns the value of the specified option from the KOTS Config custom resource as a string.
For the `file` config option type, `ConfigOption` returns the base64 encoded file. To return the decoded contents of a file, use [ConfigOptionData](#configoptiondata) instead.
```yaml
'{{repl ConfigOption "hostname" }}'
```
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the ConfigOption template function to set the port, node port, and annotations for a LoadBalancer service using the values supplied by the user on the KOTS Admin Console config screen. These values are then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
myapp:
service:
type: LoadBalancer
port: repl{{ ConfigOption "myapp_load_balancer_port"}}
nodePort: repl{{ ConfigOption "myapp_load_balancer_node_port"}}
annotations: repl{{ ConfigOption `myapp_load_balancer_annotations` | nindent 14 }}
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
## ConfigOptionData
```go
func ConfigOptionData(optionName string) string
```
For the `file` config option type, `ConfigOptionData` returns the base64 decoded contents of the file. To return the base64 encoded file, use [ConfigOption](#configoption) instead.
```yaml
'{{repl ConfigOptionData "ssl_key"}}'
```
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the ConfigOptionData template function to set the TLS cert and key using the files supplied by the user on the KOTS Admin Console config screen. These values are then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
myapp:
tls:
enabled: true
genSelfSignedCert: repl{{ ConfigOptionEquals "myapp_ingress_tls_type" "self_signed" }}
cert: repl{{ print `|`}}repl{{ ConfigOptionData `tls_certificate_file` | nindent 12 }}
key: repl{{ print `|`}}repl{{ ConfigOptionData `tls_private_key_file` | nindent 12 }}
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
## ConfigOptionFilename
```go
func ConfigOptionFilename(optionName string) string
```
`ConfigOptionFilename` returns the filename associated with a `file` config option.
It returns an empty string if used with config option types other than `file`.
```yaml
'{{repl ConfigOptionFilename "pom_file"}}'
```
#### Example
For example, if you have the following KOTS Config defined:
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: my-application
spec:
groups:
- name: java_settings
title: Java Settings
description: Configures the Java Server build parameters
items:
- name: pom_file
type: file
required: true
```
The following example shows how to use `ConfigOptionFilename` in a Pod Spec to mount a file:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: configmap-demo-pod
spec:
containers:
- name: some-java-app
image: busybox
command: ["sh"]
args:
- "-c"
- 'cat /config/{{repl ConfigOptionFilename "pom_file" }}'
volumeMounts:
- name: config
mountPath: "/config"
readOnly: true
volumes:
- name: config
configMap:
name: demo-configmap
items:
- key: data_key_one
path: repl{{ ConfigOptionFilename "pom_file" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: demo-configmap
data:
data_key_one: repl{{ ConfigOptionData "pom_file" }}
```
## ConfigOptionEquals
```go
func ConfigOptionEquals(optionName string, expectedValue string) bool
```
Returns true if the configuration option value is equal to the supplied value.
```yaml
'{{repl ConfigOptionEquals "http_enabled" "1" }}'
```
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the ConfigOptionEquals template function to set the `postgres.enabled` value depending on if the user selected the `embedded_postgres` option on the KOTS Admin Console config screen. This value is then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
postgresql:
enabled: repl{{ ConfigOptionEquals `postgres_type` `embedded_postgres`}}
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
## ConfigOptionNotEquals
```go
func ConfigOptionNotEquals(optionName string, expectedValue string) bool
```
Returns true if the configuration option value is not equal to the supplied value.
```yaml
'{{repl ConfigOptionNotEquals "http_enabled" "1" }}'
```
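For example, the following sketch (the group and item names are hypothetical) uses ConfigOptionNotEquals to show an external database host field only when the embedded database is not selected:
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: config-sample
spec:
  groups:
    - name: database_settings
      title: Database
      items:
        - name: postgres_type
          title: Postgres
          type: radio
          default: embedded_postgres
          items:
            - name: embedded_postgres
              title: Embedded Postgres
            - name: external_postgres
              title: External Postgres
        - name: external_postgres_host
          title: External Postgres Host
          type: text
          # Show this item only when the embedded database is not selected
          when: repl{{ ConfigOptionNotEquals "postgres_type" "embedded_postgres" }}
```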
## LocalRegistryAddress
```go
func LocalRegistryAddress() string
```
Returns the local registry host or host/namespace that's configured.
This will always return everything before the image name and tag.
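For example, a minimal sketch that prepends the configured registry address to an image name (the image name is hypothetical, and a local registry is assumed to be configured):
```yaml
image: repl{{ LocalRegistryAddress }}/my-image:1.0.0
```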
## LocalRegistryHost
```go
func LocalRegistryHost() string
```
Returns the host of the local registry that the user configured. Alternatively, for air gap installations with Replicated Embedded Cluster or Replicated kURL, LocalRegistryHost returns the host of the built-in registry.
Includes the port if one is specified.
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the HasLocalRegistry, LocalRegistryHost, and LocalRegistryNamespace template functions to conditionally rewrite an image registry and repository depending on if a local registry is used. These values are then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
myapp:
image:
registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "images.mycompany.com" }}'
repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/myapp/quay.io/my-org" }}/nginx'
tag: v1.0.1
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
## LocalRegistryNamespace
```go
func LocalRegistryNamespace() string
```
Returns the namespace of the local registry that the user configured. Alternatively, for air gap installations with Embedded Cluster or kURL, LocalRegistryNamespace returns the namespace of the built-in registry.
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the HasLocalRegistry, LocalRegistryHost, and LocalRegistryNamespace template functions to conditionally rewrite an image registry and repository depending on if a local registry is used. These values are then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
myapp:
image:
registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "images.mycompany.com" }}'
repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/myapp/quay.io/my-org" }}/nginx'
tag: v1.0.1
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
## LocalImageName
```go
func LocalImageName(remoteImageName string) string
```
Given a `remoteImageName`, LocalImageName rewrites the image name so that the image can be pulled in the local environment.
A common use case for the `LocalImageName` function is to ensure that a Kubernetes Operator can determine the names of container images on Pods created at runtime. For more information, see [Reference Images](/vendor/operator-referencing-images) in the _Packaging a Kubernetes Operator Application_ section.
`LocalImageName` rewrites the `remoteImageName` in one of the following ways, depending on if a private registry is configured and if the image must be proxied:
* If there is a private registry configured in the customer's environment, such as in air gapped environments, rewrite `remoteImageName` to reference the private registry locally. For example, rewrite `elasticsearch:7.6.0` as `registry.somebigbank.com/my-app/elasticsearch:7.6.0`.
* If there is no private registry configured in the customer's environment, but the image must be proxied, rewrite `remoteImageName` so that the image can be pulled through the proxy registry. For example, rewrite `"quay.io/orgname/private-image:v1.2.3"` as `proxy.replicated.com/proxy/app-name/quay.io/orgname/private-image:v1.2.3`.
* If there is no private registry configured in the customer's environment and the image does not need to be proxied, return `remoteImageName` without changes.
For more information about the Replicated proxy registry, see [About the Proxy Registry](/vendor/private-images-about).
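For example, a minimal usage sketch (the image name is hypothetical):
```yaml
image: repl{{ LocalImageName "quay.io/my-org/nginx:v1.2.3" }}
```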
## LocalRegistryImagePullSecret
```go
func LocalRegistryImagePullSecret() string
```
Returns the base64 encoded local registry image pull secret value.
This is often needed when an operator is deploying images to a namespace that is not managed by Replicated KOTS.
Image pull secrets must be present in the namespace of the pod.
#### Example
```yaml
apiVersion: v1
kind: Secret
metadata:
name: my-image-pull-secret
namespace: my-namespace
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: '{{repl LocalRegistryImagePullSecret }}'
---
apiVersion: v1
kind: Pod
metadata:
name: dynamic-pod
namespace: my-namespace
spec:
containers:
- image: '{{repl LocalImageName "registry.replicated.com/my-app/my-image:abcdef" }}'
name: my-container
imagePullSecrets:
- name: my-image-pull-secret
```
## ImagePullSecretName
```go
func ImagePullSecretName() string
```
Returns the name of the image pull secret that can be added to pod specs that use private images.
The secret will be automatically created in all application namespaces.
It will contain authentication information for any private registry used with the application.
#### Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
spec:
imagePullSecrets:
- name: repl{{ ImagePullSecretName }}
```
## HasLocalRegistry
```go
func HasLocalRegistry() bool
```
Returns true if the environment is configured to rewrite images to a local registry.
HasLocalRegistry is always true for air gap installations. HasLocalRegistry is true in online installations if the user pushed images to a local registry.
#### Example
The following KOTS [HelmChart](/reference/custom-resource-helmchart-v2) custom resource uses the HasLocalRegistry, LocalRegistryHost, and LocalRegistryNamespace template functions to conditionally rewrite an image registry and repository depending on if a local registry is used. These values are then mapped to the `values.yaml` file for the associated Helm chart during deployment.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
chart:
name: samplechart
chartVersion: 3.1.7
values:
myapp:
image:
registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "images.mycompany.com" }}'
repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/myapp/quay.io/my-org" }}/nginx'
tag: v1.0.1
```
For more information, see [Set Helm Values with KOTS](/vendor/helm-optional-value-keys).
---
# Template Function Examples
This topic provides examples of how to use Replicated KOTS template functions in various common use cases. For more information about working with KOTS template functions, including the supported syntax and the types of files where KOTS template functions can be used, see [About Template Functions](template-functions-about).
## Overview
KOTS template functions are based on the Go text/template library. All functionality of the Go templating language, including if statements, loops, and variables, is supported with KOTS template functions. For more information, see [text/template](https://golang.org/pkg/text/template/) in the Go documentation.
Additionally, KOTS template functions can be used with all functions in the Sprig library. Sprig provides several template functions for the Go templating language, such as type conversion, string, and integer math functions. For more information, see [Sprig Function Documentation](https://masterminds.github.io/sprig/).
Common use cases for KOTS template functions include rendering values during installation or upgrade, such as:
* Customer-specific license field values
* User-provided configuration values
* Information about the customer environment, such as the number of nodes or the Kubernetes version in the cluster where the application is installed
* Random strings
KOTS template functions can also be used to work with integer, boolean, float, and string values, such as doing mathematical operations, trimming leading and trailing spaces, or converting string values to integers or booleans.
For examples demonstrating these use cases and more, see the sections below.
## Comparison Examples
This section includes examples of how to use KOTS template functions to compare different types of data.
### Boolean Comparison
Boolean values can be used in comparisons to evaluate if a given statement is true or false. Because many KOTS template functions return string values, comparing boolean values often requires using the KOTS [ParseBool](/reference/template-functions-static-context#parsebool) template function to return the boolean represented by the string.
One common use case for working with boolean values is to check that a given field is present in the customer's license. For example, you might need to show a configuration option on the KOTS Admin Console **Config** page only when the customer's license has a certain entitlement.
The following example creates a conditional statement in the KOTS Config custom resource that evaluates to true when a specified license field is present in the customer's license _and_ the customer enables a specified configuration option on the Admin Console **Config** page.
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_group
title: Example Config
items:
- name: radio_example
title: Select One
type: radio
items:
- name: option_one
title: Option One
- name: option_two
title: Option Two
- name: conditional_item
title: Conditional Item
type: text
# Display this item only when the customer enables the option_one config field *and*
# has the feature-1 entitlement in their license
when: repl{{ and (LicenseFieldValue "feature-1" | ParseBool) (ConfigOptionEquals "radio_example" "option_one")}}
```
This example uses the following KOTS template functions:
* [LicenseFieldValue](/reference/template-functions-license-context#licensefieldvalue) to return the string value of a boolean type license field named `feature-1`
:::note
The LicenseFieldValue template function always returns a string, regardless of the license field type.
:::
* [ParseBool](/reference/template-functions-static-context#parsebool) to convert the string returned by the LicenseFieldValue template function to a boolean
* [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) to return a boolean that evaluates to true if the configuration option value is equal to the supplied value
### Integer Comparison
Integer values can be compared using operators such as greater than, less than, equal to, and so on. Because many KOTS template functions return string values, working with integer values often requires using another function to return the integer represented by the string, such as:
* KOTS [ParseInt](/reference/template-functions-static-context#parseint), which returns the integer value represented by the string with the option to provide a `base` other than 10
* Sprig [atoi](https://masterminds.github.io/sprig/conversion.html), which is equivalent to `ParseInt(s, 10, 0)`, converted to type int
A common use case for comparing integer values with KOTS template functions is to display different configuration options on the KOTS Admin Console **Config** page depending on integer values from the customer's license. For example, licenses might include an entitlement that defines the number of seats the customer is entitled to. In this case, it can be useful to conditionally display or hide certain fields on the **Config** page depending on the customer's team size.
The following example uses:
* KOTS [LicenseFieldValue](/reference/template-functions-license-context#licensefieldvalue) template function to evaluate the number of seats permitted by the license
* Sprig [atoi](https://masterminds.github.io/sprig/conversion.html) function to convert the string values returned by LicenseFieldValue to integers
* [Go binary comparison operators](https://pkg.go.dev/text/template#hdr-Functions) `gt`, `lt`, `ge`, and `le` to compare the integers
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_group
title: Example Config
items:
- name: small
title: Small (100 or Fewer Seats)
type: text
default: Default for small teams
# Use le and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# less than or equal to 100
when: repl{{ le (atoi (LicenseFieldValue "numSeats")) 100 }}
- name: medium
title: Medium (101-1000 Seats)
type: text
default: Default for medium teams
# Use ge, le, and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# greater than or equal to 101 and less than or equal to 1000
when: repl{{ (and (ge (atoi (LicenseFieldValue "numSeats")) 101) (le (atoi (LicenseFieldValue "numSeats")) 1000)) }}
- name: large
title: Large (More Than 1000 Seats)
type: text
default: Default for large teams
# Use gt and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# greater than 1000
when: repl{{ gt (atoi (LicenseFieldValue "numSeats")) 1000 }}
```
As shown in the image below, if the user's license contains `numSeats: 150`, then the `medium` item is displayed on the **Config** page and the `small` and `large` items are not displayed:
[View a larger version of this image](/images/config-example-numseats.png)
### String Comparison
A common use case for string comparison is to compare the rendered value of a KOTS template function against a string to conditionally show or hide fields on the KOTS Admin Console **Config** page depending on details about the customer's environment. For example, a string comparison can be used to check the Kubernetes distribution of the cluster where an application is deployed.
The following example uses:
* KOTS [Distribution](/reference/template-functions-static-context#distribution) template function to return the Kubernetes distribution of the cluster where KOTS is running
* [eq](https://pkg.go.dev/text/template#hdr-Functions) (_equal_) Go binary operator to compare the rendered value of the Distribution template function to a string, then return the boolean truth of the comparison
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_settings
title: My Example Config
description: Example fields for using Distribution template function
items:
- name: gke_distribution
type: label
title: "You are deploying to GKE"
# Use the eq binary operator to check if the rendered value
# of the KOTS Distribution template function is equal to gke
when: repl{{ eq Distribution "gke" }}
- name: openshift_distribution
type: label
title: "You are deploying to OpenShift"
when: repl{{ eq Distribution "openShift" }}
- name: eks_distribution
type: label
title: "You are deploying to EKS"
when: repl{{ eq Distribution "eks" }}
...
```
The following image shows how only the `gke_distribution` item is displayed on the **Config** page when KOTS is running in a GKE cluster:
### Not Equal To Comparison
It can be useful to compare the rendered value of a KOTS template function against another value to check if the two values are different. For example, you can conditionally show fields on the KOTS Admin Console **Config** page only when the Kubernetes distribution of the cluster where the application is deployed is _not_ [Replicated Embedded Cluster](/vendor/embedded-overview).
In the example below, the `ingress_type` field is displayed on the **Config** page only when the distribution of the cluster is _not_ [Replicated Embedded Cluster](/vendor/embedded-overview). This ensures that only users deploying to their own existing cluster are able to select the method for ingress.
The following example uses:
* KOTS [Distribution](/reference/template-functions-static-context#distribution) template function to return the Kubernetes distribution of the cluster where KOTS is running
* [ne](https://pkg.go.dev/text/template#hdr-Functions) (_not equal_) Go binary operator to compare the rendered value of the Distribution template function to a string, then return `true` if the values are not equal to one another
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config
spec:
groups:
# Ingress settings
- name: ingress_settings
title: Ingress Settings
description: Configure Ingress
items:
- name: ingress_type
title: Ingress Type
help_text: |
Select how traffic will ingress to the application.
type: radio
items:
- name: ingress_controller
title: Ingress Controller
- name: load_balancer
title: Load Balancer
default: "ingress_controller"
required: true
when: 'repl{{ ne Distribution "embedded-cluster" }}'
# Database settings
- name: database_settings
title: Database
items:
- name: postgres_type
help_text: Would you like to use an embedded postgres instance, or connect to an external instance that you manage?
type: radio
title: Postgres
default: embedded_postgres
items:
- name: embedded_postgres
title: Embedded Postgres
- name: external_postgres
title: External Postgres
```
The following image shows how the `ingress_type` field is hidden when the distribution of the cluster is `embedded-cluster`. Only the `postgres_type` item is displayed:
[View a larger version of this image](/images/config-example-distribution-not-ec.png)
Conversely, when the distribution of the cluster is not `embedded-cluster`, both fields are displayed:
[View a larger version of this image](/images/config-example-distribution-not-ec-2.png)
### Logical AND Comparison
Logical comparisons such as AND, OR, and NOT can be used with KOTS template functions. A common use case for logical AND comparisons is to construct more complex conditional statements where it is necessary that two different conditions are both true.
The following example shows how to use an `and` operator that evaluates to true when two different configuration options on the Admin Console **Config** page are both enabled. This example uses the KOTS [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) template function to return a boolean that evaluates to true if the configuration option value is equal to the supplied value.
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_group
title: Example Config
items:
- name: radio_example
title: Select One Example
type: radio
items:
- name: option_one
title: Option One
- name: option_two
title: Option Two
- name: boolean_example
title: Boolean Example
type: bool
default: "0"
- name: conditional_item
title: Conditional Item
type: text
# Display this item only when *both* specified config options are enabled
when: repl{{ and (ConfigOptionEquals "radio_example" "option_one") (ConfigOptionEquals "boolean_example" "1")}}
```
As shown below, when both `Option One` and `Boolean Example` are selected, the conditional statement evaluates to true and the `Conditional Item` field is displayed:
[View a larger version of this image](/images/conditional-item-true.png)
Alternatively, if either `Option One` or `Boolean Example` is not selected, then the conditional statement evaluates to false and the `Conditional Item` field is not displayed:
[View a larger version of this image](/images/conditional-item-false-option-two.png)
[View a larger version of this image](/images/conditional-item-false-boolean.png)
## Conditional Statement Examples
This section includes examples of using KOTS template functions to construct conditional statements. Conditional statements can be used with KOTS template functions to render different values depending on a given condition.
### If-Else Statements
A common use case for if-else statements with KOTS template functions is to set values for resources or objects deployed by your application, such as custom annotations or service types, based on user-specific data.
This section includes examples of both single line and multi-line if-else statements. Using multi-line formatting can be useful to improve the readability of YAML files when longer or more complex if-else statements are needed.
Multi-line if-else statements can be constructed using YAML block scalars and block chomping characters to ensure the rendered result is valid YAML. A _folded_ block scalar style is denoted using the greater than (`>`) character. With the folded style, single line breaks in the string are treated as a space. Additionally, the block chomping minus (`-`) character is used to remove all the line breaks at the end of a string. For more information about working with these characters, see [Block Style Productions](https://yaml.org/spec/1.2.2/#chapter-8-block-style-productions) in the YAML documentation.
:::note
For Helm-based applications that need to use more complex or nested if-else statements, you can alternatively use templating within your Helm chart `templates` rather than in the KOTS HelmChart custom resource. For more information, see [If/Else](https://helm.sh/docs/chart_template_guide/control_structures/#ifelse) in the Helm documentation.
:::
#### Single Line
The following example shows if-else statements used in the KOTS HelmChart custom resource `values` field to render different values depending on if the user selects a load balancer or an ingress controller as the ingress type for the application. This example uses the KOTS [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) template function to return a boolean that evaluates to true if the configuration option value is equal to the supplied value.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: my-app
spec:
chart:
name: my-app
chartVersion: 0.23.0
values:
services:
my-service:
enabled: true
appName: ["my-app"]
# Render the service type based on the user's selection
# '{{repl ...}}' syntax is used for `type` to improve readability of the if-else statement and render a string
type: '{{repl if ConfigOptionEquals "ingress_type" "load_balancer" }}LoadBalancer{{repl else }}ClusterIP{{repl end }}'
ports:
http:
enabled: true
# Render the HTTP port for the service depending on the user's selection
# repl{{ ... }} syntax is used for `port` to render an integer value
port: repl{{ if ConfigOptionEquals "ingress_type" "load_balancer" }}repl{{ ConfigOption "load_balancer_port" }}repl{{ else }}8081repl{{ end }}
protocol: HTTP
targetPort: 8081
```
#### Multi-Line in KOTS HelmChart Values
The following example uses a multi-line if-else statement in the KOTS HelmChart custom resource to render the path to the Replicated SDK image depending on if the user pushed images to a local private registry.
This example uses the following KOTS template functions:
* [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry) to return true if the environment is configured to rewrite images to a local registry
* [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost) to return the local registry host configured by the user
* [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) to return the local registry namespace configured by the user
:::note
This example uses the `{{repl ...}}` syntax rather than the `repl{{ ... }}` syntax to improve readability in the YAML file. However, both syntaxes are supported for this use case. For more information, see [Syntax](/reference/template-functions-about#syntax) in _About Template Functions_.
:::
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
values:
images:
replicated-sdk: >-
{{repl if HasLocalRegistry -}}
{{repl LocalRegistryHost }}/{{repl LocalRegistryNamespace }}/replicated-sdk:1.0.0-beta.29
{{repl else -}}
docker.io/replicated/replicated-sdk:1.0.0-beta.29
{{repl end}}
```
Given the example above, if the user is _not_ using a local registry, then the `replicated-sdk` value in the Helm chart is set to the location of the image on the default docker registry, as shown below:
```yaml
# Helm chart values file
images:
replicated-sdk: 'docker.io/replicated/replicated-sdk:1.0.0-beta.29'
```
#### Multi-Line in Secret Object
The following example uses multi-line if-else statements in a Secret object deployed by KOTS to conditionally set the database hostname, port, username, and password depending on if the customer uses the database embedded with the application or brings their own external database.
This example uses the following KOTS template functions:
* [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) to return a boolean that evaluates to true if the configuration option value is equal to the supplied value
* [ConfigOption](/reference/template-functions-config-context#configoption) to return the user-supplied value for the specified configuration option
* [Base64Encode](/reference/template-functions-static-context#base64encode) to encode the string with base64
:::note
This example uses the `{{repl ...}}` syntax rather than the `repl{{ ... }}` syntax to improve readability in the YAML file. However, both syntaxes are supported for this use case. For more information, see [Syntax](/reference/template-functions-about#syntax) in _About Template Functions_.
:::
```yaml
# Postgres Secret
apiVersion: v1
kind: Secret
metadata:
name: postgres
data:
# Render the value for the database hostname depending on if an embedded or
# external db is used.
# Also, base64 encode the rendered value.
DB_HOST: >-
{{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
{{repl Base64Encode "postgres" }}
{{repl else -}}
{{repl ConfigOption "external_postgres_host" | Base64Encode }}
{{repl end}}
DB_PORT: >-
{{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
{{repl Base64Encode "5432" }}
{{repl else -}}
{{repl ConfigOption "external_postgres_port" | Base64Encode }}
{{repl end}}
DB_USER: >-
{{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
{{repl Base64Encode "postgres" }}
{{repl else -}}
{{repl ConfigOption "external_postgres_user" | Base64Encode }}
{{repl end}}
DB_PASSWORD: >-
{{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
{{repl ConfigOption "embedded_postgres_password" | Base64Encode }}
{{repl else -}}
{{repl ConfigOption "external_postgres_password" | Base64Encode }}
{{repl end}}
```
### Ternary Operators
Ternary operators are useful for templating strings where certain values within the string must be rendered differently depending on a given condition. Compared to if-else statements, ternary operators are useful when a small portion of a string needs to be conditionally rendered, as opposed to rendering different values based on a conditional statement. For example, a common use case for ternary operators is to template the path to an image repository based on user-supplied values.
The following example uses ternary operators to render the registry and repository for a private nginx image depending on if a local image registry is used. This example uses the following KOTS template functions:
* [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry) to return true if the environment is configured to rewrite images to a local registry
* [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost) to return the local registry host configured by the user
* [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) to return the local registry namespace configured by the user
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: samplechart
spec:
values:
image:
# If a local registry is configured, use the local registry host.
# Otherwise, use proxy.replicated.com
registry: repl{{ HasLocalRegistry | ternary LocalRegistryHost "proxy.replicated.com" }}
# If a local registry is configured, use the local registry's namespace.
# Otherwise, use proxy/my-app/quay.io/my-org
repository: repl{{ HasLocalRegistry | ternary LocalRegistryNamespace "proxy/my-app/quay.io/my-org" }}/nginx
tag: v1.0.1
```
## Formatting Examples
This section includes examples of how to format the rendered output of KOTS template functions.
In addition to the examples in this section, KOTS template functions in the Static context include several options for formatting values, such as converting strings to upper or lower case and trimming leading and trailing space characters. For more information, see [Static Context](/reference/template-functions-static-context).
### Indentation
When using template functions within nested YAML, it is important that the rendered template functions are indented correctly so that the rendered YAML is valid. A common use case for adding indentation to KOTS template functions is when templating annotations in the metadata of resources or objects deployed by your application based on user-supplied values.
The [nindent](https://masterminds.github.io/sprig/strings.html) function can be used to prepend a new line to the beginning of the string and indent the string by a specified number of spaces.
#### Indent Templated Helm Chart Values
The following example shows templating a Helm chart value that sets annotations for an Ingress object. This example uses the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to return user-supplied annotations from the Admin Console **Config** page. It also uses [nindent](https://masterminds.github.io/sprig/strings.html) to indent the rendered value ten spaces.
```yaml
# KOTS HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
name: myapp
spec:
values:
services:
myservice:
annotations: repl{{ ConfigOption "additional_annotations" | nindent 10 }}
```
#### Indent Templated Annotations in Manifest Files
The following example shows templating annotations for an Ingress object. This example uses the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to return user-supplied annotations from the Admin Console **Config** page. It also uses [nindent](https://masterminds.github.io/sprig/strings.html) to indent the rendered value four spaces.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kots.io/placeholder: |-
repl{{ ConfigOption "ingress_annotations" | nindent 4 }}
```
### Render Quoted Values
To wrap a rendered value in quotes, you can pipe the result from KOTS template functions with the `repl{{ ... }}` syntax into quotes using `| quote`. Or, you can use the `'{{repl ... }}'` syntax instead.
One use case for quoted values in YAML is when indicator characters are included in values. In YAML, indicator characters (`-`, `?`, `:`) have special semantics and must be escaped if used in values. For more information, see [Indicator Characters](https://yaml.org/spec/1.2.2/#53-indicator-characters) in the YAML documentation.
#### Example with `'{{repl ... }}'` Syntax
```yaml
customTag: '{{repl ConfigOption "tag" }}'
```
#### Example with `| quote`
```yaml
customTag: repl{{ ConfigOption "tag" | quote }}
```
The result for both examples is:
```yaml
customTag: 'key: value'
```
## Variables Example
This section includes an example of using variables with KOTS template functions. For more information, see [Variables](https://pkg.go.dev/text/template#hdr-Variables) in the Go documentation.
### Using Variables to Generate TLS Certificates in JSON
You can use the Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions with KOTS template functions to generate certificate authorities (CAs) and signed certificates in JSON. One use case for this is to generate default CAs, certificates, and keys that users can override with their own values on the Admin Console **Config** page.
The Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions require the subject's common name and the certificate's validity duration in days. The `genSignedCert` function also requires the CA that will sign the certificate. You can use variables and KOTS template functions to provide the necessary parameters when calling these functions.
The following example shows how to use variables and KOTS template functions in the `default` property of a [`hidden`](/reference/custom-resource-config#hidden) item to pass parameters to the `genCA` and `genSignedCert` functions and generate a CA, certificate, and key. This example uses a `hidden` item (which is an item that is not displayed on the **Config** page) to generate the certificate chain because variables used in the KOTS Config custom resource can only be accessed from the same item where they were declared. For this reason, `hidden` items can be useful for evaluating complex templates.
This example uses the following:
* KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to render the user-supplied value for the ingress hostname. This is passed as a parameter to the [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions
* Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions to generate a CA and a certificate signed by the CA
* Sprig [dict](https://masterminds.github.io/sprig/dicts.html), [set](https://masterminds.github.io/sprig/dicts.html), and [dig](https://masterminds.github.io/sprig/dicts.html) dictionary functions to create a dictionary with entries for both the CA and the certificate, then traverse the dictionary to return the values of the CA, certificate, and key.
* [toJson](https://masterminds.github.io/sprig/defaults.html) and [fromJson](https://masterminds.github.io/sprig/defaults.html) Sprig functions to encode the CA and certificate into a JSON string, then decode the JSON for the purpose of displaying the values on the **Config** page as defaults
:::important
Default values are treated as ephemeral. The following certificate chain is recalculated each time the application configuration is modified. Before using this example with your application, be sure that your application can handle updating these parameters dynamically.
:::
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_settings
title: My Example Config
items:
- name: ingress_hostname
title: Ingress Hostname
help_text: Enter a DNS hostname to use as the cert's CN.
type: text
- name: tls_json
title: TLS JSON
type: textarea
hidden: true
default: |-
repl{{ $ca := genCA (ConfigOption "ingress_hostname") 365 }}
repl{{ $tls := dict "ca" $ca }}
repl{{ $cert := genSignedCert (ConfigOption "ingress_hostname") (list ) (list (ConfigOption "ingress_hostname")) 365 $ca }}
repl{{ $_ := set $tls "cert" $cert }}
repl{{ toJson $tls }}
- name: tls_ca
title: Signing Authority
type: textarea
default: repl{{ fromJson (ConfigOption "tls_json") | dig "ca" "Cert" "" }}
- name: tls_cert
title: TLS Cert
type: textarea
default: repl{{ fromJson (ConfigOption "tls_json") | dig "cert" "Cert" "" }}
- name: tls_key
title: TLS Key
type: textarea
default: repl{{ fromJson (ConfigOption "tls_json") | dig "cert" "Key" "" }}
```
The following image shows how the default values for the CA, certificate, and key are displayed on the **Config** page:
[View a larger version of this image](/images/certificate-chain-default-values.png)
## Additional Examples
The following topics include additional examples of using KOTS template functions in Kubernetes manifests deployed by KOTS or in KOTS custom resources:
* [Add Status Informers](/vendor/admin-console-display-app-status#add-status-informers) in _Adding Resource Status Informers_
* [Conditionally Including or Excluding Resources](/vendor/packaging-include-resources)
* [Example: Including Optional Helm Charts](/vendor/helm-optional-charts)
* [Example: Adding Database Configuration Options](/vendor/tutorial-adding-db-config)
* [Templating Annotations](/vendor/resources-annotations-templating)
* [Tutorial: Set Helm Chart Values with KOTS](/vendor/tutorial-config-setup)
---
# Identity Context
This topic provides a list of the KOTS template functions in the Identity context.
:::note
The KOTS identity service feature is deprecated and is not available to new users.
:::
Template functions in the Identity context have access to Replicated KOTS identity service information.
## IdentityServiceEnabled
```go
func IdentityServiceEnabled() bool
```
Returns true if the Replicated identity service has been enabled and configured by the end customer.
```yaml
apiVersion: apps/v1
kind: Deployment
...
env:
- name: IDENTITY_ENABLED
value: repl{{ IdentityServiceEnabled }}
```
## IdentityServiceClientID
```go
func IdentityServiceClientID() string
```
Returns the client ID required for the application to connect to the identity service OIDC server.
```yaml
apiVersion: apps/v1
kind: Deployment
...
env:
- name: CLIENT_ID
value: repl{{ IdentityServiceClientID }}
```
## IdentityServiceClientSecret
```go
func IdentityServiceClientSecret() (string, error)
```
Returns the client secret required for the application to connect to the identity service OIDC server.
```yaml
apiVersion: v1
kind: Secret
...
data:
CLIENT_SECRET: repl{{ IdentityServiceClientSecret | b64enc }}
```
## IdentityServiceRoles
```go
func IdentityServiceRoles() map[string][]string
```
Returns the groups specified by the customer, each mapped to a list of roles as defined in the Identity custom resource manifest file.
For more information about roles in the Identity custom resource, see [Identity](custom-resource-identity#roles) in the _Custom resources_ section.
```yaml
apiVersion: apps/v1
kind: Deployment
...
env:
- name: RESTRICTED_GROUPS
value: repl{{ IdentityServiceRoles | keys | toJson }}
```
## IdentityServiceName
```go
func IdentityServiceName() string
```
Returns the Service name for the identity service OIDC server.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
...
- path: /dex
backend:
service:
name: repl{{ IdentityServiceName }}
port:
number: repl{{ IdentityServicePort }}
```
## IdentityServicePort
```go
func IdentityServicePort() string
```
Returns the Service port number for the identity service OIDC server.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
...
- path: /dex
backend:
service:
name: repl{{ IdentityServiceName }}
port:
number: repl{{ IdentityServicePort }}
```
---
# kURL Context
This topic provides a list of the KOTS template functions in the kURL context.
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
## Overview
Template functions in the kURL context have access to information about applications installed with Replicated kURL. For more information about kURL, see [Introduction to kURL](/vendor/kurl-about).
The kURL Installer custom resource reflects both any changes made to the install script by posting YAML to the kURL API and any changes made with `-s` flags at runtime. These functions are not available on the KOTS Admin Console config page.
`KurlBool`, `KurlInt`, `KurlString`, and `KurlOption` all take a string `yamlPath` as a parameter.
This is the path within the Installer manifest file, with the add-on and subfield separated by a period (`.`).
For example, the kURL Kubernetes version can be accessed as `{{repl KurlString "Kubernetes.Version" }}`.
`KurlBool`, `KurlInt`, and `KurlString` return a bool, integer, and string value, respectively.
If used on a valid field of the wrong type, these functions return the falsy value for their type: `false`, `0`, and `""`, respectively.
The `KurlOption` function converts all bool, int, and string fields to string.
All of these functions return falsy values if there is nothing at the specified `yamlPath`, or if they are run in a cluster that has no Installer custom resource (that is, a cluster not created by kURL).
## KurlBool
```go
func KurlBool(yamlPath string) bool
```
Returns the value at the yamlPath if there is a valid boolean there, or false if there is not.
```yaml
'{{repl KurlBool "Docker.NoCEonEE" }}'
```
## KurlInt
```go
func KurlInt(yamlPath string) int
```
Returns the value at the yamlPath if there is a valid integer there, or 0 if there is not.
```yaml
'{{repl KurlInt "Rook.CephReplicaCount" }}'
```
## KurlString
```go
func KurlString(yamlPath string) string
```
Returns the value at the yamlPath if there is a valid string there, or "" if there is not.
```yaml
'{{repl KurlString "Kubernetes.Version" }}'
```
## KurlOption
```go
func KurlOption(yamlPath string) string
```
Returns the value at the yamlPath if there is a valid string, int, or bool value there, or "" if there is not.
Int and Bool values will be converted to string values.
```yaml
'{{repl KurlOption "Rook.CephReplicaCount" }}'
```
## KurlAll
```go
func KurlAll() string
```
Returns all values in the Installer custom resource as key:value pairs, sorted by key.
```yaml
'{{repl KurlAll }}'
```
---
# License Context
This topic provides a list of the KOTS template functions in the License context.
Template functions in the license context have access to customer license and version data. For more information about managing customer licenses, see [About Customers and Licensing](/vendor/licenses-about).
## LicenseFieldValue
```go
func LicenseFieldValue(name string) string
```
LicenseFieldValue returns the value of the specified license field. LicenseFieldValue accepts custom license fields and all built-in license fields. For a list of all built-in fields, see [Built-In License Fields](/vendor/licenses-using-builtin-fields).
LicenseFieldValue always returns a string, regardless of the license field type. To return integer or boolean values, you need to use the [ParseInt](/reference/template-functions-static-context#parseint) or [ParseBool](/reference/template-functions-static-context#parsebool) template function to convert the string value.
#### String License Field
The following example returns the value of the built-in `customerName` license field:
```yaml
customerName: '{{repl LicenseFieldValue "customerName" }}'
```
#### Integer License Field
The following example returns the value of a custom integer license field named `numSeats`:
```yaml
numSeats: repl{{ LicenseFieldValue "numSeats" | ParseInt }}
```
This example uses [ParseInt](/reference/template-functions-static-context#parseint) to convert the returned value to an integer.
#### Boolean License Field
The following example returns the value of a custom boolean license field named `feature-1`:
```yaml
feature-1: repl{{ LicenseFieldValue "feature-1" | ParseBool }}
```
This example uses [ParseBool](/reference/template-functions-static-context#parsebool) to convert the returned value to a boolean.
## LicenseDockerCfg
```go
func LicenseDockerCfg() string
```
LicenseDockerCfg returns a value that can be written to a secret if needed to deploy manually.
Replicated KOTS creates and injects this secret automatically in normal conditions, but some deployments (with static, additional namespaces) may need to include this.
```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
name: myapp-registry
namespace: my-other-namespace
data:
.dockerconfigjson: repl{{ LicenseDockerCfg }}
```
## Sequence
```go
func Sequence() int64
```
Sequence is the sequence number of the deployed application.
This starts at 0 for each installation and increases with every app update, config change, license update, and registry setting change.
```yaml
'{{repl Sequence }}'
```
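For example, the following sketch (the Deployment name and annotation key are hypothetical) records the current sequence as a pod template annotation so that pods are rolled whenever a new sequence is deployed:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    metadata:
      annotations:
        # The annotation value changes with every deployed sequence,
        # which triggers a rolling update
        my-app/deploy-sequence: '{{repl Sequence }}'
```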
## Cursor
```go
func Cursor() string
```
Cursor is the channel sequence of the app.
For example, if 5 releases have been promoted to the channel that the app is running on, then this returns the string `5`.
```yaml
'{{repl Cursor }}'
```
## ChannelName
```go
func ChannelName() string
```
ChannelName is the name of the deployed channel of the app.
```yaml
'{{repl ChannelName }}'
```
## VersionLabel
```go
func VersionLabel() string
```
VersionLabel is the semantic version of the app, as specified when promoting a release to a channel.
```yaml
'{{repl VersionLabel }}'
```
## ReleaseNotes
```go
func ReleaseNotes() string
```
ReleaseNotes is the release notes of the current version of the app.
```yaml
'{{repl ReleaseNotes }}'
```
## IsAirgap
```go
func IsAirgap() bool
```
IsAirgap is `true` when the app was installed by uploading an air gap bundle, and false otherwise.
```yaml
'{{repl IsAirgap }}'
```
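For example, the following sketch (the group and item names are hypothetical) uses IsAirgap to display a label on the **Config** page only for air gap installations:
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: config-sample
spec:
  groups:
    - name: registry_settings
      title: Registry Settings
      items:
        - name: airgap_note
          type: label
          title: "This is an air gap installation."
          # Show this item only for air gap installations
          when: repl{{ IsAirgap }}
```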
---
# Static Context
This topic provides a list of the KOTS template functions in the Static context.
The context necessary to render the static template functions is always available.
The static context also includes the Masterminds Sprig function library. For more information, see [Sprig Function Documentation](http://masterminds.github.io/sprig/) on the sprig website.
## Certificate Functions
### PrivateCACert
>Introduced in KOTS v1.117.0
```yaml
func PrivateCACert() string
```
PrivateCACert returns the name of a ConfigMap that contains private CA certificates provided by the end user. For Embedded Cluster installations, these certificates are provided with the `--private-ca` flag for the `install` command. For KOTS installations, the user provides the ConfigMap using the `--private-ca-configmap` flag for the `install` command.
You can use this template function to mount the specified ConfigMap so your containers can access the internet through enterprise proxies that issue their own TLS certificates in order to inspect traffic.
:::note
This function will return the name of the ConfigMap even if the ConfigMap has no entries. If no ConfigMap exists, this function returns the empty string.
:::
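For example, the following sketch mounts the ConfigMap into a container. The Deployment, container, and mount path are hypothetical, and the sketch assumes that the end user provided private CA certificates at install time (otherwise the function returns an empty string and the volume should be omitted):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
          volumeMounts:
            - name: private-cas
              mountPath: /etc/ssl/private-cas
              readOnly: true
      volumes:
        - name: private-cas
          configMap:
            # Name of the ConfigMap that contains the user-provided CA certificates
            name: repl{{ PrivateCACert }}
            optional: true
```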
## Cluster Information Functions
### Distribution
```go
func Distribution() string
```
Distribution returns the Kubernetes distribution detected. The possible return values are:
* aks
* digitalOcean
* dockerDesktop
* eks
* embedded-cluster
* gke
* ibm
* k0s
* k3s
* kind
* kurl
* microk8s
* minikube
* oke
* openShift
* rke2
:::note
[IsKurl](#iskurl) can also be used to detect kURL instances.
:::
#### Detect the Distribution
```yaml
repl{{ Distribution }}
```
#### Equal To Comparison
```yaml
repl{{ eq Distribution "gke" }}
```
#### Not Equal To Comparison
```yaml
repl{{ ne Distribution "embedded-cluster" }}
```
See [Functions](https://pkg.go.dev/text/template#hdr-Functions) in the Go documentation.
### IsKurl
```go
func IsKurl() bool
```
IsKurl returns `true` if KOTS is running within a kURL-based installation.
#### Detect kURL Installations
```yaml
repl{{ IsKurl }}
```
#### Detect Non-kURL Installations
```yaml
repl{{ not IsKurl }}
```
See [Functions](https://pkg.go.dev/text/template#hdr-Functions) in the Go documentation.
### KotsVersion
```go
func KotsVersion() string
```
KotsVersion returns the current version of KOTS.
```yaml
repl{{ KotsVersion }}
```
You can compare the KOTS version as follows:
```yaml
repl{{KotsVersion | semverCompare ">= 1.19"}}
```
This returns `true` if the KOTS version is greater than or equal to `1.19`.
For more complex comparisons, see [Semantic Version Functions](https://masterminds.github.io/sprig/semver.html) in the sprig documentation.
### KubernetesMajorVersion
> Introduced in KOTS v1.92.0
```go
func KubernetesMajorVersion() string
```
KubernetesMajorVersion returns the Kubernetes server *major* version.
```yaml
repl{{ KubernetesMajorVersion }}
```
You can compare the Kubernetes major version as follows:
```yaml
repl{{lt (KubernetesMajorVersion | ParseInt) 2 }}
```
This returns `true` if the Kubernetes major version is less than `2`.
### KubernetesMinorVersion
> Introduced in KOTS v1.92.0
```go
func KubernetesMinorVersion() string
```
KubernetesMinorVersion returns the Kubernetes server *minor* version.
```yaml
repl{{ KubernetesMinorVersion }}
```
You can compare the Kubernetes minor version as follows:
```yaml
repl{{gt (KubernetesMinorVersion | ParseInt) 19 }}
```
This returns `true` if the Kubernetes minor version is greater than `19`.
### KubernetesVersion
> Introduced in KOTS v1.92.0
```go
func KubernetesVersion() string
```
KubernetesVersion returns the Kubernetes server version.
```yaml
repl{{ KubernetesVersion }}
```
You can compare the Kubernetes version as follows:
```yaml
repl{{KubernetesVersion | semverCompare ">= 1.19"}}
```
This returns `true` if the Kubernetes version is greater than or equal to `1.19`.
For more complex comparisons, see [Semantic Version Functions](https://masterminds.github.io/sprig/semver.html) in the sprig documentation.
### Namespace
```go
func Namespace() string
```
Namespace returns the Kubernetes namespace that the application belongs to.
```yaml
'{{repl Namespace}}'
```
### NodeCount
```go
func NodeCount() int
```
NodeCount returns the number of nodes detected within the Kubernetes cluster.
```yaml
repl{{ NodeCount }}
```
### Lookup
> Introduced in KOTS v1.103.0
```go
func Lookup(apiversion string, resource string, namespace string, name string) map[string]interface{}
```
Lookup searches resources in a running cluster and returns a resource or resource list.
Lookup uses the Helm lookup function to search resources and has the same functionality as the Helm lookup function. For more information, see [lookup](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function) in the Helm documentation.
```yaml
repl{{ Lookup "API_VERSION" "KIND" "NAMESPACE" "NAME" }}
```
Both `NAME` and `NAMESPACE` are optional and can be passed as an empty string ("").
The following combinations of parameters are possible:
| Behavior | Lookup function |
|---|---|
| `kubectl get pod mypod -n mynamespace` | `repl{{ Lookup "v1" "Pod" "mynamespace" "mypod" }}` |
| `kubectl get pods -n mynamespace` | `repl{{ Lookup "v1" "Pod" "mynamespace" "" }}` |
| `kubectl get pods --all-namespaces` | `repl{{ Lookup "v1" "Pod" "" "" }}` |
| `kubectl get namespace mynamespace` | `repl{{ Lookup "v1" "Namespace" "" "mynamespace" }}` |
| `kubectl get namespaces` | `repl{{ Lookup "v1" "Namespace" "" "" }}` |
| readonly | hidden | Outcome |
|---|---|---|
| false | true | Persistent |
| true | false | Ephemeral |
| true | true | Ephemeral |
| false | false | Persistent |
[View a larger version of this image](/images/gitea-open-app.png)
:::note
KOTS uses the Kubernetes SIG Application custom resource as metadata and does not require or use an in-cluster controller to handle this custom resource. An application that follows best practices does not require cluster admin privileges or any cluster-wide components to be installed.
:::
## Add a Link
To add a link to the Admin Console dashboard, include a [Kubernetes SIG Application](https://github.com/kubernetes-sigs/application#kubernetes-applications) custom resource in the release with a `spec.descriptor.links` field. The `spec.descriptor.links` field is an array of links that are displayed on the Admin Console dashboard after the application is deployed.
Each link in the `spec.descriptor.links` array contains two fields:
* `description`: The link text that will appear on the Admin Console dashboard.
* `url`: The target URL.
For example:
```yaml
# app.k8s.io/v1beta1 Application Custom resource
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: "gitea"
spec:
descriptor:
links:
- description: About Wordpress
url: "https://wordpress.org/"
```
When the application is deployed, the "About Wordpress" link is displayed on the Admin Console dashboard as shown below:
[View a larger version of this image](/images/dashboard-link-about-wordpress.png)
For an additional example of a Kubernetes SIG Application custom resource, see [application.yaml](https://github.com/kubernetes-sigs/application/blob/master/docs/examples/wordpress/application.yaml) in the kubernetes-sigs GitHub repository.
### Create URLs with User-Supplied Values Using KOTS Template Functions {#url-template}
You can use KOTS template functions to template URLs in the Kubernetes SIG Application custom resource. This can be useful when all or some of the URL is a user-supplied value. For example, an application might allow users to provide their own ingress controller or load balancer. In this case, the URL can be templated to render the hostname that the user provides on the Admin Console Config screen.
The following examples show how to use the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function in the Kubernetes SIG Application custom resource `spec.descriptor.links.url` field to render one or more user-supplied values:
* In the example below, the URL hostname is a user-supplied value for an ingress controller that the user configures during installation.
```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: "my-app"
spec:
descriptor:
links:
- description: Open App
url: 'http://{{repl ConfigOption "ingress_host" }}'
```
* In the example below, both the URL hostname and a node port are user-supplied values. It might be necessary to include a user-provided node port if you are exposing NodePort services for installations on VMs or bare metal servers with [Replicated Embedded Cluster](/vendor/embedded-overview) or [Replicated kURL](/vendor/kurl-about).
```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: "my-app"
spec:
descriptor:
links:
- description: Open App
url: 'http://{{repl ConfigOption "hostname" }}:{{repl ConfigOption "node_port"}}'
```
For more information about working with KOTS template functions, see [About Template Functions](/reference/template-functions-about).
---
# Customize the Application Icon
You can add a custom application icon that displays in the Replicated Admin Console and the download portal. Adding a custom icon helps ensure that your brand is reflected for your customers.
:::note
You can also use a custom domain for the download portal. For more information, see [About Custom Domains](custom-domains).
:::
## Add a Custom Icon
For information about how to choose an image file for your custom application icon that displays well in the Admin Console, see [Icon Image File Recommendations](#icon-image-file-recommendations) below.
To add a custom application icon:
1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Click **Create release** to create a new release, or click **Edit YAML** to edit an existing release.
1. Create or open the Application custom resource manifest file. An Application custom resource manifest file has `apiVersion: kots.io/v1beta1` and `kind: Application`.
1. In the preview section of the Help pane:
1. If your Application manifest file is already populated with an `icon` key, the icon displays in the preview. Click **Preview a different icon** to access the preview options.
1. Drag and drop an icon image file to the drop zone. Alternatively, paste a link or Base64 encoded data URL in the text box. Click **Preview**.

1. (Air gap only) If you paste a link to the image in the text box, click **Preview** and **Base64 encode icon** to convert the image to a Base64 encoded data URL. An encoded URL displays that you can copy and paste into the Application manifest. Base64 encoding is required for images used with air gap installations.
:::note
If you pasted a Base64 encoded data URL into the text box, the **Base64 encode icon** button does not display because the image is already encoded. If you drag and drop an icon, the icon is automatically encoded for you.
:::

1. Click **Preview a different icon** to preview a different icon if needed.
1. In the Application manifest, under `spec`, add an `icon` key that includes a link or the Base64 encoded data URL to the desired image.
**Example**:
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
name: my-application
spec:
title: My Application
icon: https://kots.io/images/kotsadm-logo-large@2x.png
```
1. Click **Save Release**.
## Icon Image File Recommendations
For your custom application icon to look best in the Admin Console, consider the following recommendations:
* Use a PNG or JPG file.
* Use an image that is at least 250 by 250 pixels.
* Export the image file at 2x.
---
# Create and Edit Configuration Fields
This topic describes how to use the KOTS Config custom resource manifest file to add and edit fields in the KOTS Admin Console configuration screen.
## About the Config Custom Resource
Applications distributed with Replicated KOTS can include a configuration screen in the Admin Console to collect required or optional values from your users that are used to run your application. For more information about the configuration screen, see [About the Configuration Screen](config-screen-about).
To include a configuration screen in the Admin Console for your application, you add a Config custom resource manifest file to a release for the application.
You define the fields that appear on the configuration screen as an array of `groups` and `items` in the Config custom resource:
* `groups`: A set of `items`. Each group must have a `name`, `title`, `description`, and `items`. For example, you can create a group of several user input fields that are all related to configuring an SMTP mail server.
* `items`: An array of user input fields. Each array under `items` must have a `name`, `title`, and `type`. You can also include several optional properties. For example, in a group for configuring an SMTP mail server, you can have user input fields under `items` for the SMTP hostname, port, username, and password.
There are several types of `items` supported in the Config manifest that allow you to collect different types of user inputs. For example, you can use the `password` input type to create a text field on the configuration screen that hides user input.
For more information about the syntax of the Config custom resource manifest, see [Config](/reference/custom-resource-config).
## About Regular Expression Validation
You can use [RE2 regular expressions](https://github.com/google/re2/wiki/Syntax) (regex) to validate user input for config items, ensuring conformity to certain standards, such as valid email addresses, password complexity rules, IP addresses, and URLs. This prevents users from deploying an application with a verifiably invalid configuration.
You add the `validation`, `regex`, `pattern` and `message` fields to items in the Config custom resource. Validation is supported for `text`, `textarea`, `password` and `file` config item types. For more information about regex validation fields, see [Item Validation](/reference/custom-resource-config#item-validation) in _Config_.
The following example shows a common password complexity rule:
```yaml
- name: smtp-settings
title: SMTP Settings
items:
- name: smtp_password
title: SMTP Password
type: password
help_text: Set SMTP password
validation:
regex:
pattern: ^(?:[\w@#$%^&+=!*()_\-{}[\]:;"'<>,.?\/|]){8,16}$
message: The password must be between 8 and 16 characters long and can contain a combination of uppercase letters, lowercase letters, digits, and special characters.
```
## Add Fields to the Configuration Screen
To add fields to the Admin Console configuration screen:
1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Then, either click **Create release** to create a new release, or click **Edit YAML** to edit an existing release.
1. Create or open the Config custom resource manifest file in the desired release. A Config custom resource manifest file has `kind: Config`.
1. In the Config custom resource manifest file, define custom user-input fields in an array of `groups` and `items`.
**Example**:
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: my-application
spec:
groups:
- name: smtp_settings
title: SMTP Settings
description: Configure SMTP Settings
items:
- name: enable_smtp
title: Enable SMTP
help_text: Enable SMTP
type: bool
default: "0"
- name: smtp_host
title: SMTP Hostname
help_text: Set SMTP Hostname
type: text
- name: smtp_port
title: SMTP Port
help_text: Set SMTP Port
type: text
- name: smtp_user
title: SMTP User
help_text: Set SMTP User
type: text
- name: smtp_password
title: SMTP Password
type: password
default: 'password'
```
The example above includes a single group with the name `smtp_settings`.
The `items` array for the `smtp_settings` group includes the following user-input fields: `enable_smtp`, `smtp_host`, `smtp_port`, `smtp_user`, and `smtp_password`. Additional item properties are available, such as `affix` to make items appear horizontally on the same line. For more information about item properties, see [Item Properties](/reference/custom-resource-config#item-properties) in Config.
The following screenshot shows how the SMTP Settings group from the example YAML above displays in the Admin Console configuration screen during application installation:

1. (Optional) Add default values for the fields. You can add default values using one of the following properties:
* **With the `default` property**: When you include the `default` key, KOTS uses this value when rendering the manifest files for your application. The value then displays as a placeholder on the configuration screen in the Admin Console for your users. KOTS only uses the default value if the user does not provide a different value.
:::note
If you change the `default` value in a later release of your application, installed instances of your application receive the updated value only if your users did not change the default from what it was when they initially installed the application.
If a user did change a field from its default, the Admin Console does not overwrite the value they provided.
:::
* **With the `value` property**: When you include the `value` key, KOTS does not overwrite this value during an application update. The value that you provide for the `value` key is visually indistinguishable from other values that your user provides on the Admin Console configuration screen. KOTS treats the value that you provide for the `value` key the same as a user-supplied value. See the example below.
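**Example**:
The following sketch (the item names and values are illustrative) shows one item that uses `default` and one that uses `value`:
```yaml
items:
  - name: smtp_from_address
    title: SMTP From Address
    type: text
    # Shown as a placeholder; updated on upgrade if the user never changed it
    default: no-reply@example.com
  - name: smtp_sender_name
    title: SMTP Sender Name
    type: text
    # Treated like a user-supplied value; not overwritten on application update
    value: Example App
```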
2. (Optional) Add regular expressions to validate user input for `text`, `textarea`, `password` and `file` config item types. For more information, see [About Regular Expression Validation](#about-regular-expression-validation).
**Example**:
```yaml
- name: smtp_host
title: SMTP Hostname
help_text: Set SMTP Hostname
type: text
validation:
regex:
pattern: ^[a-zA-Z]([a-zA-Z0-9\-]+[\.]?)*[a-zA-Z0-9]$
message: Valid hostname starts with a letter (uppercase/lowercase), followed by zero or more groups of letters (uppercase/lowercase), digits, or hyphens, optionally followed by a period. Ends with a letter or digit.
```
3. (Optional) Mark fields as required by including `required: true`. When a release includes required fields, the user cannot proceed with the installation until they provide a valid value for each required field.
**Example**:
```yaml
- name: smtp_password
title: SMTP Password
type: password
required: true
```
4. Save and promote the release to a development environment to test your changes.
## Next Steps
After you add user input fields to the configuration screen, you use template functions to map the user-supplied values to manifest files in your release. If you use a Helm chart for your application, you map the values to the Helm chart `values.yaml` file using the HelmChart custom resource.
For more information, see [Map User-Supplied Values](config-screen-map-inputs).
---
# Add Resource Status Informers
This topic describes how to add status informers for your application. Status informers apply only to applications installed with Replicated KOTS. For information about how to collect application status data for applications installed with Helm, see [Enabling and Understanding Application Status](insights-app-status).
## About Status Informers
_Status informers_ are a feature of KOTS that report on the status of supported Kubernetes resources deployed as part of your application. You enable status informers by listing the target resources under the `statusInformers` property in the Replicated Application custom resource. KOTS watches all of the resources that you add to the `statusInformers` property for changes in state.
Possible resource statuses are Ready, Updating, Degraded, Unavailable, and Missing. For more information, see [Understanding Application Status](#understanding-application-status).
When you add one or more status informers to your application, KOTS automatically does the following:
* Displays application status for your users on the dashboard of the Admin Console. This can help users diagnose and troubleshoot problems with their instance. The following shows an example of how an Unavailable status displays on the Admin Console dashboard:
* Sends application status data to the Vendor Portal. This is useful for viewing insights on instances of your application running in customer environments, such as the current status and the average uptime. For more information, see [Instance Details](instance-insights-details).
The following shows an example of the Vendor Portal **Instance details** page with data about the status of an instance over time:
[View a larger version of this image](/images/instance-details.png)
## Add Status Informers
To create status informers for your application, add one or more supported resource types to the `statusInformers` property in the Application custom resource. See [`statusInformers`](/reference/custom-resource-application#statusinformers) in _Application_.
The following resource types are supported:
* Deployment
* StatefulSet
* Service
* Ingress
* PersistentVolumeClaims (PVC)
* DaemonSet
You can target resources of the supported types that are deployed in any of the following ways:
* Deployed by KOTS.
* Deployed by a Kubernetes Operator that is deployed by KOTS. For more information, see [About Packaging a Kubernetes Operator Application](operator-packaging-about).
* Deployed by Helm. For more information, see [About Distributing Helm Charts with KOTS](/vendor/helm-native-about).
### Examples
Status informers are in the format `[namespace/]type/name`, where namespace is optional and defaults to the current namespace.
**Example**:
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
name: my-application
spec:
statusInformers:
- deployment/my-web-svc
- deployment/my-worker
```
The `statusInformers` property also supports template functions. Using template functions allows you to include or exclude a status informer based on a customer-provided configuration value:
**Example**:
```yaml
statusInformers:
- deployment/my-web-svc
- '{{repl if ConfigOptionEquals "option" "value"}}deployment/my-worker{{repl else}}{{repl end}}'
```
In the example above, the `deployment/my-worker` status informer is excluded unless the statement in the `ConfigOptionEquals` template function evaluates to true.
For more information about using template functions in application manifest files, see [About Template Functions](/reference/template-functions-about).
## Understanding Application Status
This section provides information about how Replicated interprets and aggregates the status of Kubernetes resources for your application to report an application status.
### Resource Statuses
Possible resource statuses are Ready, Updating, Degraded, Unavailable, and Missing.
The following table lists the supported Kubernetes resources and the conditions that contribute to each status:
| | Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet |
|---|---|---|---|---|---|---|
| Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
| Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
| Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
| Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
| Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. | |||||
### Aggregate Application Status
The application status reported by KOTS is aggregated from the individual resource statuses as follows:
| Resource Statuses | Aggregate Application Status |
|---|---|
| No status available for any resource | Missing |
| One or more resources Unavailable | Unavailable |
| One or more resources Degraded | Degraded |
| One or more resources Updating | Updating |
| All resources Ready | Ready |
[View a larger version of this image](/images/gitea-open-app.png)
## Access Port-Forwarded Services
This section describes how to access port-forwarded services.
### Command Line
Run [`kubectl kots admin-console`](/reference/kots-cli-admin-console-index) to open the KOTS port forward tunnel.
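For example, where `APP_NAMESPACE` is a placeholder for the namespace in which the application is installed:
```bash
kubectl kots admin-console --namespace APP_NAMESPACE
```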
The `kots admin-console` command runs the equivalent of `kubectl port-forward svc/myapplication-service ...`.
[View a larger version of this image](/images/gitea-open-app.png)
## Examples
This section provides examples of how to configure the `ports` key to port-forward a service in existing cluster installations and add links to services on the Admin Console dashboard.
### Example: Bitnami Gitea Helm Chart with LoadBalancer Service
This example uses a KOTS Application custom resource and a Kubernetes SIG Application custom resource to configure port forwarding for the Bitnami Gitea Helm chart in existing cluster installations, and add a link to the port-forwarded service on the Admin Console dashboard. To view the Gitea Helm chart source, see [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea) in GitHub.
To test this example:
1. Pull version 1.0.6 of the Gitea Helm chart from Bitnami:
```bash
helm pull oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
```
1. Add the `gitea-1.0.6.tgz` chart archive to a new, empty release in the Vendor Portal along with the `kots-app.yaml`, `k8s-app.yaml`, and `gitea.yaml` files provided below. Promote to the channel that you use for internal testing. For more information, see [Manage Releases with the Vendor Portal](releases-creating-releases).
Based on the `templates/svc.yaml` and `values.yaml` files in the Gitea Helm chart, the following KOTS Application custom resource adds port 3000 to the port forward tunnel and maps local port 8888. Port 3000 is the container port of the Pod where the gitea service runs.
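The following is a sketch of what that `kots-app.yaml` might look like. The `title` value and the assumption that the chart's Service is named `gitea` are illustrative; the `ports` mapping follows the description above:
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  title: Gitea
  ports:
    - serviceName: "gitea"
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://gitea"
```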
The Kubernetes Application custom resource lists the same URL as the `ports.applicationUrl` field in the KOTS Application custom resource (`"http://gitea"`). This adds a link to the port-forwarded service from the Admin Console dashboard. It also triggers KOTS to rewrite the URL to use the hostname in the browser and append the specified `localPort`. The label to be used for the link in the Admin Console is "Open App".
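A corresponding sketch of the `k8s-app.yaml`:
```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: "gitea"
spec:
  descriptor:
    links:
      - description: Open App
        url: "http://gitea"
```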
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The name and chartVersion listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
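A minimal sketch of the `gitea.yaml` HelmChart custom resource, assuming the `kots.io/v1beta2` HelmChart API:
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    # Must match the name and version of the gitea-1.0.6.tgz chart archive
    name: gitea
    chartVersion: 1.0.6
```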
### Example: NGINX Application with ClusterIP and NodePort Services
The YAML below contains ClusterIP and NodePort specifications for a service named nginx. Each specification uses the `kots.io/when` annotation with the Replicated IsKurl template function to conditionally include the service based on the installation type (existing cluster or kURL cluster). For more information, see [Conditionally Including or Excluding Resources](packaging-include-resources) and [IsKurl](/reference/template-functions-static-context#iskurl).
As shown below, both the ClusterIP and NodePort nginx services are exposed on port 80.
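The following is a sketch of the two conditional Service specifications described above. It assumes that the ClusterIP service is used for existing cluster installations, that the NodePort service is used for kURL installations, and that the Deployment's Pods are labeled `app: nginx`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Included only when the cluster is not provisioned by kURL
    kots.io/when: '{{repl not IsKurl }}'
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Included only when the cluster is provisioned by kURL
    kots.io/when: '{{repl IsKurl }}'
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
```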
A basic Deployment specification for the NGINX application.
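A sketch of such a Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```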
The KOTS Application custom resource below adds port 80 to the KOTS port forward tunnel and maps port 8888 on the local machine. The specification also includes applicationUrl: "http://nginx" so that a link to the service can be added to the Admin Console dashboard.
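A sketch of that KOTS Application custom resource (the `title` value is illustrative):
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: nginx
spec:
  title: NGINX
  ports:
    - serviceName: "nginx"
      servicePort: 80
      localPort: 8888
      applicationUrl: "http://nginx"
```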
The Kubernetes Application custom resource lists the same URL as the `ports.applicationUrl` field in the KOTS Application custom resource (`"http://nginx"`). This adds a link to the port-forwarded service on the Admin Console dashboard that uses the hostname in the browser and appends the specified `localPort`. The label to be used for the link in the Admin Console is "Open App".
[View a larger version of this image](/images/kotsadm-dashboard-graph.png)
## About Customizing Graphs
If your application exposes Prometheus metrics, you can add custom graphs to the Admin Console dashboard to expose these metrics to your users. You can also modify or remove the default graphs.
To customize the graphs that are displayed on the Admin Console, edit the [`graphs`](/reference/custom-resource-application#graphs) property in the KOTS Application custom resource manifest file. At a minimum, each graph in the `graphs` property must include the following fields:
* `title`: Defines the graph title that is displayed on the Admin Console.
* `query`: A valid PromQL Prometheus query. You can also include a list of multiple queries by using the `queries` property. For more information about querying Prometheus with PromQL, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/) in the Prometheus documentation.
:::note
By default, a kURL cluster exposes the Prometheus expression browser at NodePort 30900. For more information, see [Expression Browser](https://prometheus.io/docs/visualization/browser/) in the Prometheus documentation.
:::
## Limitation
Monitoring applications with Prometheus is not supported for installations with [Replicated Embedded Cluster](/vendor/embedded-overview).
## Add and Modify Graphs
To customize graphs on the Admin Console dashboard:
1. In the [Vendor Portal](https://vendor.replicated.com/), click **Releases**. Then, either click **Create release** to create a new release, or click **Edit YAML** to edit an existing release.
1. Create or open the [KOTS Application](/reference/custom-resource-application) custom resource manifest file.
1. In the Application manifest file, under `spec`, add a `graphs` property. Edit the `graphs` property to modify or remove existing graphs or add a new custom graph. For more information, see [graphs](/reference/custom-resource-application#graphs) in _Application_.
**Example**:
The following example shows the YAML for adding a custom graph that displays the total number of user signups for an application.
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
name: my-application
spec:
graphs:
- title: User Signups
query: 'sum(user_signup_events_total)'
```
1. (Optional) Under `graphs`, copy and paste the specs for the default Disk Usage, CPU Usage, and Memory Usage Admin Console graphs provided in the YAML below.
Adding these default graphs to the Application custom resource manifest ensures that they are not overwritten when you add one or more custom graphs. When the default graphs are included in the Application custom resource, the Admin Console displays them in addition to any custom graphs.
Alternatively, you can exclude the YAML specs for the default graphs to remove them from the Admin Console dashboard.
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
name: my-application
spec:
graphs:
- title: User Signups
query: 'sum(user_signup_events_total)'
# Disk Usage, CPU Usage, and Memory Usage below are the default graphs
- title: Disk Usage
queries:
- query: 'sum((node_filesystem_size_bytes{job="node-exporter",fstype!="",instance!=""} - node_filesystem_avail_bytes{job="node-exporter", fstype!=""})) by (instance)'
legend: 'Used: {{ instance }}'
- query: 'sum((node_filesystem_avail_bytes{job="node-exporter",fstype!="",instance!=""})) by (instance)'
legend: 'Available: {{ instance }}'
yAxisFormat: bytes
- title: CPU Usage
query: 'sum(rate(container_cpu_usage_seconds_total{namespace="{{repl Namespace}}",container!="POD",pod!=""}[5m])) by (pod)'
legend: '{{ pod }}'
- title: Memory Usage
query: 'sum(container_memory_usage_bytes{namespace="{{repl Namespace}}",container!="POD",pod!=""}) by (pod)'
legend: '{{ pod }}'
yAxisFormat: bytes
```
1. Save and promote the release to a development environment to test your changes.
---
# About Integrating with CI/CD
This topic provides an introduction to integrating Replicated CLI commands in your continuous integration and continuous delivery (CI/CD) pipelines, including Replicated's best practices and recommendations.
## Overview
Using CI/CD workflows to automatically compile code and run tests improves the speed at which teams can test, iterate on, and deliver releases to customers. When you integrate Replicated CLI commands into your CI/CD workflows, you can automate the process of deploying your application to clusters for testing, rather than needing to manually create and then archive channels, customers, and environments for testing.
You can also include continuous delivery workflows to automatically promote a release to a shared channel in your Replicated team. This allows you to more easily share releases with team members for internal testing and iteration, and then to promote releases when they are ready to be shared with customers.
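For example, a continuous delivery step might use the Replicated CLI to create a release from a local directory of manifests and promote it to a shared channel. The following is a sketch only; the directory path, version label, and environment variable names are placeholders:
```bash
# Authenticate the Replicated CLI with a service account token and
# target the application by its slug (both values are placeholders)
export REPLICATED_API_TOKEN="${RELEASE_BOT_TOKEN}"
export REPLICATED_APP="my-app-slug"

# Create a release from ./manifests and promote it to the Unstable channel
replicated release create \
  --yaml-dir ./manifests \
  --promote Unstable \
  --version "0.1.${BUILD_NUMBER}" \
  --release-notes "CI build ${GIT_COMMIT}"
```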
## Best Practices and Recommendations
The following are Replicated's best practices and recommendations for CI/CD:
* Include unique workflows for development and for releasing your application. This allows you to run tests on every commit, and then to promote releases to internal and customer-facing channels only when ready. For more information about the workflows that Replicated recommends, see [Recommended CI/CD Workflows](ci-workflows).
* Integrate Replicated Compatibility Matrix into your CI/CD workflows to quickly create multiple different types of clusters where you can deploy and test your application. Supported distributions include OpenShift, GKE, EKS, and more. For more information, see [About Compatibility Matrix](testing-about).
* If you use the GitHub Actions CI/CD platform, integrate the custom GitHub actions that Replicated maintains to replace repetitive tasks related to distributing your application with Replicated or using Compatibility Matrix. For more information, see [Use Replicated GitHub Actions in CI/CD](/vendor/ci-workflows-github-actions).
* To help demonstrate that you conform to a secure software supply chain, sign all commits and container images. Additionally, provide a verification mechanism for container images.
* Use custom RBAC policies to control the actions that can be performed in your CI/CD workflows. For example, you can create a policy that blocks the ability to promote releases to your production channel. For more information about creating custom RBAC policies in the Vendor Portal, see [Configure RBAC Policies](/vendor/team-management-rbac-configuring). For a full list of available RBAC resources, see [RBAC Resource Names](/vendor/team-management-rbac-resource-names).
* Incorporating code tests into your CI/CD workflows is important for ensuring that developers receive quick feedback and can make updates in small iterations. Replicated recommends that you create and run all of the following test types as part of your CI/CD workflows:
* **Application Testing:** Traditional application testing includes unit, integration, and end-to-end tests. These tests are critical for application reliability, and Compatibility Matrix is designed to incorporate and use your application testing.
* **Performance Testing:** Performance testing is used to benchmark your application to ensure it can handle the expected load and scale gracefully. Test your application under a range of workloads and scenarios to identify any bottlenecks or performance issues. Make sure to optimize your application for different Kubernetes distributions and configurations by creating all of the environments you need to test in.
* **Smoke Testing:** Using a single, conformant Kubernetes distribution to test basic functionality of your application with default (or standard) configuration values is a quick way to get feedback if something is likely to be broken for all or most customers. Replicated also recommends that you include each Kubernetes version that you intend to support in your smoke tests.
* **Compatibility Testing:** Because applications run on various Kubernetes distributions and configurations, it is important to test compatibility across different environments. Compatibility Matrix provides this infrastructure.
* **Canary Testing:** Before releasing to all customers, consider deploying your application to a small subset of your customer base as a _canary_ release. This lets you monitor the application's performance and stability in real-world environments, while minimizing the impact of potential issues. Compatibility Matrix enables canary testing by simulating exact (or near) customer environments and configurations to test your application with.
---
# Use Replicated GitHub Actions in CI/CD
This topic describes how to integrate Replicated's custom GitHub actions into continuous integration and continuous delivery (CI/CD) workflows that use the GitHub Actions platform.
## Overview
Replicated maintains a set of custom GitHub actions that are designed to replace repetitive tasks related to distributing your application with Replicated and related to using the Compatibility Matrix, such as:
* Creating and removing customers, channels, and clusters
* Promoting releases
* Creating a matrix of clusters for testing based on the Kubernetes distributions and versions where your customers are running application instances
* Reporting the success or failure of tests
If you use GitHub Actions as your CI/CD platform, you can include these custom actions in your workflows rather than using Replicated CLI commands. Integrating the Replicated GitHub actions into your CI/CD pipeline helps you quickly build workflows with the required inputs and outputs, without needing to manually create the required CLI commands for each step.
To view all the available GitHub actions that Replicated maintains, see the [replicatedhq/replicated-actions](https://github.com/replicatedhq/replicated-actions/) repository in GitHub.
## GitHub Actions Workflow Examples
The [replicatedhq/replicated-actions](https://github.com/replicatedhq/replicated-actions#examples) repository in GitHub contains example workflows that use the Replicated GitHub actions. You can use these workflows as a template for your own GitHub Actions CI/CD workflows:
* For a simplified development workflow, see [development-helm-prepare-cluster.yaml](https://github.com/replicatedhq/replicated-actions/blob/main/example-workflows/development-helm-prepare-cluster.yaml).
* For a customizable development workflow for applications installed with the Helm CLI, see [development-helm.yaml](https://github.com/replicatedhq/replicated-actions/blob/main/example-workflows/development-helm.yaml).
* For a customizable development workflow for applications installed with KOTS, see [development-kots.yaml](https://github.com/replicatedhq/replicated-actions/blob/main/example-workflows/development-kots.yaml).
* For a release workflow, see [release.yaml](https://github.com/replicatedhq/replicated-actions/blob/main/example-workflows/release.yaml).
## Integrate GitHub Actions
The following table lists the GitHub actions maintained by Replicated that you can integrate into your CI/CD workflows. The table also describes when to use the action in a workflow and indicates the related Replicated CLI command, where applicable.
:::note
For an up-to-date list of the available custom GitHub actions, see the [replicatedhq/replicated-actions](https://github.com/replicatedhq/replicated-actions/) repository in GitHub.
:::
| GitHub Action | When to Use | Related Replicated CLI Commands |
|---|---|---|
| `archive-channel` | In release workflows, a temporary channel is created to promote a release for testing. This action archives the temporary channel after tests complete. See Archive the temporary channel and customer in [Recommended CI/CD Workflows](ci-workflows). | `channel delete` |
| `archive-customer` | In release workflows, a temporary customer is created so that a release can be installed for testing. This action archives the temporary customer after tests complete. See Archive the temporary channel and customer in [Recommended CI/CD Workflows](ci-workflows). | N/A |
| `create-cluster` | In release workflows, use this action to create one or more clusters for testing. See Create cluster matrix, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | `cluster create` |
| `create-release` | In release workflows, use this action to create a release to be installed and tested, and optionally to be promoted to a shared channel after tests complete. See Create a release and promote to a temporary channel in [Recommended CI/CD Workflows](ci-workflows). | `release create` |
| `get-customer-instances` | In release workflows, use this action to create a matrix of clusters for running tests based on the Kubernetes distributions and versions of active instances of your application running in customer environments. See Create cluster matrix, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | N/A |
| `helm-install` | In development or release workflows, use this action to install a release using the Helm CLI in one or more clusters for testing. See Create cluster matrix, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | N/A |
| `kots-install` | In development or release workflows, use this action to install a release with Replicated KOTS in one or more clusters for testing. See Create cluster matrix, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | N/A |
| `prepare-cluster` | In development workflows, use this action to create a cluster, create a temporary customer, and install a release for testing. See Prepare clusters, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | `cluster prepare` |
| `promote-release` | In release workflows, use this action to promote a release to an internal or customer-facing channel (such as Unstable, Beta, or Stable) after tests pass. See Promote to a shared channel in [Recommended CI/CD Workflows](ci-workflows). | `release promote` |
| `remove-cluster` | In development or release workflows, use this action to remove a cluster after tests complete. See Prepare clusters, deploy, and test and Create cluster matrix, deploy, and test in [Recommended CI/CD Workflows](ci-workflows). | `cluster rm` |
| `report-compatibility-result` | In development or release workflows, use this action to report the success or failure of tests that ran in clusters provisioned by the Compatibility Matrix. | `release compatibility` |
| `upgrade-cluster` | In release workflows, use this action to test your application's compatibility with Kubernetes API resource version migrations after upgrading. | `cluster upgrade` |
### Embedded Cluster Distribution Check
It can be useful to show or hide configuration fields if the distribution of the cluster is [Replicated Embedded Cluster](/vendor/embedded-overview) because you can include extensions in embedded cluster distributions to manage functionality such as ingress and storage. This means that embedded clusters frequently have fewer configuration options for the user.
In the example below, the `ingress_type` field is displayed on the **Config** page only when the distribution of the cluster is _not_ [Replicated Embedded Cluster](/vendor/embedded-overview). This ensures that only users deploying to their own existing cluster are able to select the method for ingress.
The following example uses:
* KOTS [Distribution](/reference/template-functions-static-context#distribution) template function to return the Kubernetes distribution of the cluster where KOTS is running
* [ne](https://pkg.go.dev/text/template#hdr-Functions) (_not equal_) Go binary operator to compare the rendered value of the Distribution template function to a string, then return `true` if the values are not equal to one another
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config
spec:
groups:
# Ingress settings
- name: ingress_settings
title: Ingress Settings
description: Configure Ingress
items:
- name: ingress_type
title: Ingress Type
help_text: |
Select how traffic will ingress to the application.
type: radio
items:
- name: ingress_controller
title: Ingress Controller
- name: load_balancer
title: Load Balancer
default: "ingress_controller"
required: true
when: 'repl{{ ne Distribution "embedded-cluster" }}'
# Database settings
- name: database_settings
title: Database
items:
- name: postgres_type
help_text: Would you like to use an embedded postgres instance, or connect to an external instance that you manage?
type: radio
title: Postgres
default: embedded_postgres
items:
- name: embedded_postgres
title: Embedded Postgres
- name: external_postgres
title: External Postgres
```
The following image shows how the `ingress_type` field is hidden when the distribution of the cluster is `embedded-cluster`. Only the `postgres_type` item is displayed:
[View a larger version of this image](/images/config-example-distribution-not-ec.png)
Conversely, when the distribution of the cluster is not `embedded-cluster`, both fields are displayed:
[View a larger version of this image](/images/config-example-distribution-not-ec-2.png)
### kURL Distribution Check
It can be useful to show or hide configuration fields if the cluster was provisioned by Replicated kURL because kURL distributions often include add-ons to manage functionality such as ingress and storage. This means that kURL clusters frequently have fewer configuration options for the user.
In the following example, the `when` property of the `not_kurl` group uses the IsKurl template function to evaluate if the cluster was provisioned by kURL. For more information about the IsKurl template function, see [IsKurl](/reference/template-functions-static-context#iskurl) in _Static Context_.
```yaml
# Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: all_distributions
title: Example Group
description: This group always displays.
items:
- name: example_item
title: This item always displays.
type: text
- name: not_kurl
title: Non-kURL Cluster Group
description: This group displays only if the cluster is not provisioned by kURL.
when: 'repl{{ not IsKurl }}'
items:
- name: example_item_non_kurl
title: The cluster is not provisioned by kURL.
type: label
```
As shown in the image below, both the `all_distributions` and `not_kurl` groups are displayed on the **Config** page when KOTS is _not_ running in a kURL cluster:

[View a larger version of this image](/images/config-example-iskurl-false.png)
However, when KOTS is running in a kURL cluster, only the `all_distributions` group is displayed, as shown below:

[View a larger version of this image](/images/config-example-iskurl-true.png)
### License Field Value Equality Check
You can show or hide configuration fields based on the values in a license to ensure that users only see configuration options for the features and entitlements granted by their license.
In the following example, the `when` property of the `new_feature_config` item uses the LicenseFieldValue template function to determine if the user's license contains a `newFeatureEntitlement` field that is set to `true`. For more information about the LicenseFieldValue template function, see [LicenseFieldValue](/reference/template-functions-license-context#licensefieldvalue) in _License Context_.
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_settings
title: My Example Config
description: Example fields for using LicenseFieldValue template function
items:
- name: new_feature_config
type: label
title: "You have the new feature entitlement"
when: '{{repl (LicenseFieldValue "newFeatureEntitlement") }}'
```
As shown in the image below, the **Config** page displays the `new_feature_config` item when the user's license contains `newFeatureEntitlement: true`:

[View a larger version of this image](/images/config-example-newfeature.png)
### License Field Value Integer Comparison
You can show or hide configuration fields based on the values in a license to ensure that users only see configuration options for the features and entitlements granted by their license. You can also compare integer values from license fields to control the configuration experience for your users.
The following example uses:
* KOTS [LicenseFieldValue](/reference/template-functions-license-context#licensefieldvalue) template function to evaluate the number of seats permitted by the license
* Sprig [atoi](https://masterminds.github.io/sprig/conversion.html) function to convert the string values returned by LicenseFieldValue to integers
* [Go binary comparison operators](https://pkg.go.dev/text/template#hdr-Functions) `gt`, `lt`, `ge`, and `le` to compare the integers
```yaml
# KOTS Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: example_group
title: Example Config
items:
- name: small
title: Small (100 or Fewer Seats)
type: text
default: Default for small teams
# Use le and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# less than or equal to 100
when: repl{{ le (atoi (LicenseFieldValue "numSeats")) 100 }}
- name: medium
title: Medium (101-1000 Seats)
type: text
default: Default for medium teams
# Use ge, le, and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# greater than or equal to 101 and less than or equal to 1000
when: repl{{ (and (ge (atoi (LicenseFieldValue "numSeats")) 101) (le (atoi (LicenseFieldValue "numSeats")) 1000)) }}
- name: large
title: Large (More Than 1000 Seats)
type: text
default: Default for large teams
# Use gt and atoi functions to display this config item
# only when the value of the numSeats entitlement is
# greater than 1000
when: repl{{ gt (atoi (LicenseFieldValue "numSeats")) 1000 }}
```
As shown in the image below, if the user's license contains `numSeats: 150`, then the `medium` item is displayed on the **Config** page and the `small` and `large` items are not displayed:
[View a larger version of this image](/images/config-example-numseats.png)
### User-Supplied Value Check
You can show or hide configuration fields based on user-supplied values on the **Config** page to ensure that users only see options that are relevant to their selections.
In the following example, the `database_host` and `database_password` items use the ConfigOptionEquals template function to evaluate whether the user selected the `external` database option for the `db_type` item. For more information about the ConfigOptionEquals template function, see [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) in _Config Context_.
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: database_settings_group
title: Database Settings
items:
- name: db_type
title: Database Type
type: radio
default: external
items:
- name: external
title: External Database
- name: embedded
title: Embedded Database
- name: database_host
title: Database Hostname
type: text
when: '{{repl (ConfigOptionEquals "db_type" "external")}}'
- name: database_password
title: Database Password
type: password
when: '{{repl (ConfigOptionEquals "db_type" "external")}}'
```
As shown in the images below, when the user selects the external database option, the `database_host` and `database_password` items are displayed. Alternatively, when the user selects the embedded database option, these items are _not_ displayed:

[View a larger version of this image](/images/config-example-external-db.png)

[View a larger version of this image](/images/config-example-embedded-db.png)
## Use Multiple Conditions in the `when` Property
You can use more than one template function in the `when` property to create more complex conditional statements. This allows you to show or hide configuration fields based on multiple conditions being true.
The following example includes `when` properties that use both the ConfigOptionEquals and IsKurl template functions:
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
name: config-sample
spec:
groups:
- name: ingress_settings
title: Ingress Settings
description: Configure Ingress
items:
- name: ingress_type
title: Ingress Type
help_text: |
Select how traffic will ingress to the application.
type: radio
items:
- name: ingress_controller
title: Ingress Controller
- name: load_balancer
title: Load Balancer
default: "ingress_controller"
required: true
when: 'repl{{ not IsKurl }}'
- name: ingress_host
title: Hostname
help_text: Hostname used to access the application.
type: text
default: "hostname.example.com"
required: true
when: 'repl{{ and (not IsKurl) (ConfigOptionEquals "ingress_type" "ingress_controller") }}'
- name: ingress_annotations
type: textarea
title: Ingress Annotations
help_text: See your ingress controller’s documentation for the required annotations.
when: 'repl{{ and (not IsKurl) (ConfigOptionEquals "ingress_type" "ingress_controller") }}'
- name: ingress_tls_type
title: Ingress TLS Type
type: radio
items:
- name: self_signed
title: Self Signed (Generate Self Signed Certificate)
- name: user_provided
title: User Provided (Upload a TLS Certificate and Key Pair)
required: true
default: self_signed
when: 'repl{{ and (not IsKurl) (ConfigOptionEquals "ingress_type" "ingress_controller") }}'
- name: ingress_tls_cert
title: TLS Cert
type: file
when: '{{repl and (ConfigOptionEquals "ingress_type" "ingress_controller") (ConfigOptionEquals "ingress_tls_type" "user_provided") }}'
required: true
- name: ingress_tls_key
title: TLS Key
type: file
when: '{{repl and (ConfigOptionEquals "ingress_type" "ingress_controller") (ConfigOptionEquals "ingress_tls_type" "user_provided") }}'
required: true
- name: load_balancer_port
title: Load Balancer Port
help_text: Port used to access the application through the Load Balancer.
type: text
default: "443"
required: true
when: 'repl{{ and (not IsKurl) (ConfigOptionEquals "ingress_type" "load_balancer") }}'
- name: load_balancer_annotations
type: textarea
title: Load Balancer Annotations
help_text: See your cloud provider’s documentation for the required annotations.
when: 'repl{{ and (not IsKurl) (ConfigOptionEquals "ingress_type" "load_balancer") }}'
```
As shown in the image below, the configuration fields that are specific to the ingress controller display only when the user selects the ingress controller option and KOTS is _not_ running in a kURL cluster:

[View a larger version of this image](/images/config-example-ingress-controller.png)
Additionally, the options relevant to the load balancer display when the user selects the load balancer option and KOTS is _not_ running in a kURL cluster:

[View a larger version of this image](/images/config-example-ingress-load-balancer.png)
---
# Map User-Supplied Values
This topic describes how to map the values that your users provide in the Replicated Admin Console configuration screen to your application.
This topic assumes that you have already added custom fields to the Admin Console configuration screen by editing the Config custom resource. For more information, see [Create and Edit Configuration Fields](admin-console-customize-config-screen).
## Overview of Mapping Values
You use the values that your users provide in the Admin Console configuration screen to render YAML in the manifest files for your application.
For example, if you provide an embedded database with your application, you might add a field on the Admin Console configuration screen where users input a password for the embedded database. You can then map the password that your user supplies in this field to the Secret manifest file for the database in your application.
For an example of mapping database configuration options in a sample application, see [Example: Adding Database Configuration Options](tutorial-adding-db-config).
You can also conditionally deploy custom resources depending on the user input for a given field. For example, if a customer chooses to use their own database with your application rather than an embedded database option, it is not desirable to deploy the optional database resources such as a StatefulSet and a Service.
For more information about including optional resources conditionally based on user-supplied values, see [Conditionally Including or Excluding Resources](packaging-include-resources).
## About Mapping Values with Template Functions
To map user-supplied values, you use Replicated KOTS template functions. The template functions are based on the Go text/template libraries. To use template functions, you add them as strings in the custom resource manifest files in your application.
For more information about template functions, including use cases and examples, see [About Template Functions](/reference/template-functions-about).
For more information about the syntax of the template functions for mapping configuration values, see [Config Context](/reference/template-functions-config-context) in the _Template Functions_ section.
## Map User-Supplied Values
Follow one of these procedures to map user inputs from the configuration screen, depending on if you use a Helm chart for your application:
* **Without Helm**: See [Map Values to Manifest Files](#map-values-to-manifest-files).
* **With Helm**: See [Map Values to a Helm Chart](#map-values-to-a-helm-chart).
### Map Values to Manifest Files
To map user-supplied values from the configuration screen to manifest files in your application:
1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Then, click **View YAML** next to the desired release.
1. Open the Config custom resource manifest file that you created in the [Add Fields to the Configuration Screen](admin-console-customize-config-screen#add-fields-to-the-configuration-screen) procedure. The Config custom resource manifest file has `kind: Config`.
1. In the Config manifest file, locate the name of the user-input field that you want to map.
**Example**:
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: my-application
spec:
  groups:
    - name: smtp_settings
      title: SMTP Settings
      description: Configure SMTP Settings
      items:
        - name: smtp_host
          title: SMTP Hostname
          help_text: Set SMTP Hostname
          type: text
```
In the example above, the field name to map is `smtp_host`.
1. In the same release in the Vendor Portal, open the manifest file where you want to map the value for the field that you selected.
1. In the manifest file, use the ConfigOption template function to map the user-supplied value in a key value pair. For example:
```yaml
hostname: '{{repl ConfigOption "smtp_host"}}'
```
For more information about the ConfigOption template function, see [Config Context](../reference/template-functions-config-context#configoption) in the _Template Functions_ section.
**Example**:
The following example shows mapping user-supplied TLS certificate and TLS private key files to the `tls.crt` and `tls.key` keys in a Secret custom resource manifest file.
For more information about working with TLS secrets, including a strategy for re-using the certificates uploaded for the Admin Console itself, see the [Configuring Cluster Ingress](packaging-ingress) example.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: '{{repl ConfigOption "tls_certificate_file" }}'
  tls.key: '{{repl ConfigOption "tls_private_key_file" }}'
```
1. Save and promote the release to a development environment to test your changes.
### Map Values to a Helm Chart
The `values.yaml` file in a Helm chart defines parameters that are specific to each environment in which the chart will be deployed. With Replicated KOTS, your users provide these values through the configuration screen in the Admin Console. You customize the configuration screen based on the required and optional configuration fields that you want to expose to your users.
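For reference, the examples in this procedure assume a hypothetical chart with a `values.yaml` similar to the following, where the `hostname` default is overridden by the user-supplied value at install time:
```yaml
# values.yaml in your Helm chart (hypothetical)
hostname: smtp.example.com
smtpPort: 587
```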
To map the values that your users provide in the Admin Console configuration screen to your Helm chart `values.yaml` file, you create a HelmChart custom resource.
For a tutorial that shows how to set values in a sample Helm chart during installation with KOTS, see [Set Helm Chart Values with KOTS](/vendor/tutorial-config-setup).
To map user inputs from the configuration screen to the `values.yaml` file:
1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Then, click **View YAML** next to the desired release.
1. Open the Config custom resource manifest file that you created in the [Add Fields to the Configuration Screen](admin-console-customize-config-screen#add-fields-to-the-configuration-screen) procedure. The Config custom resource manifest file has `kind: Config`.
1. In the Config manifest file, locate the name of the user-input field that you want to map.
**Example**:
```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: my-application
spec:
  groups:
    - name: smtp_settings
      title: SMTP Settings
      description: Configure SMTP Settings
      items:
        - name: smtp_host
          title: SMTP Hostname
          help_text: Set SMTP Hostname
          type: text
```
In the example above, the field name to map is `smtp_host`.
1. In the same release, create a HelmChart custom resource manifest file. A HelmChart custom resource manifest file has `kind: HelmChart`.
For more information about the HelmChart custom resource, see [HelmChart](../reference/custom-resource-helmchart) in the _Custom Resources_ section.
1. In the HelmChart manifest file, under `values`, add the name of the property from your `values.yaml` file that corresponds to the field that you selected in the Config manifest file:
```yaml
values:
  HELM_VALUE_KEY:
```
Replace `HELM_VALUE_KEY` with the property name from the `values.yaml` file.
1. Use the ConfigOption template function to set the property from the `values.yaml` file equal to the corresponding configuration screen field:
```yaml
values:
  HELM_VALUE_KEY: '{{repl ConfigOption "CONFIG_SCREEN_FIELD_NAME" }}'
```
Replace `CONFIG_SCREEN_FIELD_NAME` with the name of the field that you created in the Config custom resource.
For more information about the KOTS ConfigOption template function, see [Config Context](../reference/template-functions-config-context#configoption) in the _Template Functions_ section.
**Example:**
```yaml
apiVersion: kots.io/v1beta1
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 3.1.7
  helmVersion: v3
  useHelmInstall: true
  values:
    hostname: '{{repl ConfigOption "smtp_host" }}'
```
1. Save and promote the release to a development environment to test your changes.
---
# Use Custom Domains
This topic describes how to use the Replicated Vendor Portal to add and manage custom domains to alias the Replicated registry, the Replicated proxy registry, the Replicated app service, and the Download Portal.
For information about adding and managing custom domains with the Vendor API v3, see the [customHostnames](https://replicated-vendor-api.readme.io/reference/createcustomhostname) section in the Vendor API v3 documentation.
For more information about custom domains, see [About Custom Domains](custom-domains).
## Add a Custom Domain in the Vendor Portal {#add-domain}
To add and verify a custom domain:
1. In the [Vendor Portal](https://vendor.replicated.com), go to **Custom Domains**.
1. In the **Add custom domain** dropdown, select the target Replicated endpoint.
The **Configure a custom domain** wizard opens.
[View a larger version of this image](/images/custom-domains-download-configure.png)
1. For **Domain**, enter the custom domain. Click **Save & continue**.
1. For **Create CNAME**, copy the text string and use it to create a CNAME record in your DNS account. Click **Continue**.
1. For **Verify ownership**, ownership will be validated automatically using an HTTP token when possible.
If ownership cannot be validated automatically, copy the text string provided and use it to create a TXT record in your DNS account. Click **Validate & continue**. Your changes can take up to 24 hours to propagate.
1. For **TLS cert creation verification**, TLS verification will be performed automatically using an HTTP token when possible.
If TLS verification cannot be performed automatically, copy the text string provided and use it to create a TXT record in your DNS account. Click **Validate & continue**. Your changes can take up to 24 hours to propagate.
:::note
If you set up a [CAA record](https://letsencrypt.org/docs/caa/) for this hostname, you must include all Certificate Authorities (CAs) that Cloudflare partners with. The following CAA records are required to ensure proper certificate issuance and renewal:
```dns
@ IN CAA 0 issue "letsencrypt.org"
@ IN CAA 0 issue "pki.goog; cansignhttpexchanges=yes"
@ IN CAA 0 issue "ssl.com"
@ IN CAA 0 issue "amazon.com"
@ IN CAA 0 issue "cloudflare.com"
@ IN CAA 0 issue "google.com"
```
Failing to include any of these CAs might prevent certificate issuance or renewal, which can result in downtime for your customers. For additional security, you can add an IODEF record to receive notifications about certificate requests:
```dns
@ IN CAA 0 iodef "mailto:your-security-team@example.com"
```
:::
1. For **Use Domain**, to set the new domain as the default, click **Yes, set as default**. Otherwise, click **Not now**.
:::note
Replicated recommends that you do _not_ set a domain as the default until you are ready for it to be used by customers.
:::
After the verification checks for ownership and TLS certificate creation are complete, the Vendor Portal marks the domain as **Configured**.
1. (Optional) After a domain is marked as **Configured**, you can remove any TXT records that you created in your DNS account.
## Use Custom Domains
After you add one or more custom domains in the Vendor Portal, you can configure your application to use the domains.
### Configure Embedded Cluster to Use Custom Domains {#ec}
You can configure Replicated Embedded Cluster to use your custom domains for the Replicated proxy registry and Replicated app service. For more information about Embedded Cluster, see [Embedded Cluster Overview](/vendor/embedded-overview).
To configure Embedded Cluster to use your custom domains for the proxy registry and app service:
1. In the [Embedded Cluster Config](/reference/embedded-config) spec for your application, add `domains.proxyRegistryDomain` and `domains.replicatedAppDomain`. Set each field to your custom domain for the given service.
**Example:**
```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  domains:
    # Your proxy registry custom domain
    proxyRegistryDomain: proxy.yourcompany.com
    # Your app service custom domain
    replicatedAppDomain: updates.yourcompany.com
```
For more information, see [domains](/reference/embedded-config#domains) in _Embedded Cluster Config_.
1. Add the Embedded Cluster Config to a new release. Promote the release to a channel that your team uses for testing, and install with Embedded Cluster in a development environment to test your changes.
### Set a Default Domain
Setting a default domain is useful for ensuring that the same domain is used across channels for all your customers.
When you set a custom domain as the default, it is used by default for all new releases promoted to any channel, as long as the channel does not have a different domain assigned in its channel settings.
Only releases that are promoted to a channel _after_ you set a default domain use the new default domain. Any existing releases that were promoted before you set the default continue to use the same domain that they used previously.
:::note
In Embedded Cluster installations, the KOTS Admin Console will use the domains specified in the `domains.proxyRegistryDomain` and `domains.replicatedAppDomain` fields of the Embedded Cluster Config when making requests to the proxy registry and app service, regardless of the default domain or the domain assigned to the given release channel. For more information about using custom domains in Embedded Cluster installations, see [Configure Embedded Cluster to Use Custom Domains](#ec) above.
:::
To set a custom domain as the default:
1. In the Vendor Portal, go to **Custom Domains**.
1. Next to the target domain, click **Set as default**.
1. In the confirmation dialog that opens, click **Yes, set as default**.
### Assign a Domain to a Channel {#channel-domain}
You can assign a domain to an individual channel by editing the channel settings. When you specify a domain in the channel settings, new releases promoted to the channel use the selected domain even if there is a different domain set as the default on the **Custom Domains** page.
Assigning a domain to a release channel is useful when you need to override either the default Replicated domain or a default custom domain for a specific channel. For example:
* You need to use a different domain for releases promoted to your Beta and Stable channels.
* You need to test a domain in a development environment before you set the domain as the default for all channels.
:::note
In Embedded Cluster installations, the KOTS Admin Console will use the domains specified in the `domains.proxyRegistryDomain` and `domains.replicatedAppDomain` fields of the Embedded Cluster Config when making requests to the proxy registry and app service, regardless of the default domain or the domain assigned to the given release channel. For more information about using custom domains in Embedded Cluster installations, see [Configure Embedded Cluster to Use Custom Domains](#ec) above.
:::
To assign a custom domain to a channel:
1. In the Vendor Portal, go to **Channels** and click the settings icon for the target channel.
1. Under **Custom domains**, in the drop-down for the target Replicated endpoint, select the domain to use for the channel. For more information about channel settings, see [Settings](releases-about#settings) in _About Channels and Releases_.
[View a larger version of this image](/images/channel-settings.png)
## Reuse a Custom Domain for Another Application
If you have configured a custom domain for one application, you can reuse the custom domain for another application in the same team without going through the ownership and TLS certificate verification process again.
To reuse a custom domain for another application:
1. In the Vendor Portal, select the application from the dropdown list.
1. Click **Custom Domains**.
1. In the section for the target endpoint, click **Add your first custom domain** for your first domain, or click **Add new domain** for additional domains.
The **Configure a custom domain** wizard opens.
1. In the text box, enter the custom domain name that you want to reuse. Click **Save & continue**.
The last page of the wizard opens because the custom domain was verified previously.
1. Do one of the following:
- Click **Set as default**. In the confirmation dialog that opens, click **Yes, set as default**.
- Click **Not now**. You can come back later to set the domain as the default. The Vendor Portal shows that the domain has a Configured status because it was configured for a previous application, though it is not yet assigned as the default for this application.
## Remove a Custom Domain
You can remove a custom domain at any time, but you should plan the transition so that you do not break any existing installations or documentation.
Removing a custom domain for the Replicated registry, proxy registry, or Replicated app service will break existing installations that use the custom domain. Existing installations need to be upgraded to a version that does not use the custom domain before it can be removed safely.
If you remove a custom domain for the Download Portal, it is no longer accessible using the custom URL. You will need to point customers to an updated URL.
To remove a custom domain:
1. Log in to the [Vendor Portal](https://vendor.replicated.com) and click **Custom Domains**.
1. Verify that the domain is neither set as the default nor in use on any channels. You can edit the domains in use on a channel in the channel settings. For more information, see [Settings](releases-about#settings) in _About Channels and Releases_.
:::important
When you remove a registry or Replicated app service custom domain, any installations that reference that custom domain will break. Ensure that the custom domain is no longer in use before you remove it from the Vendor Portal.
:::
1. Click **Remove** next to the unused domain in the list, and then click **Yes, remove domain**.
---
# About Custom Domains
This topic provides an overview and the limitations of using custom domains to alias the Replicated proxy registry, the Replicated app service, the Replicated Download Portal, and the Replicated registry.
For information about adding and managing custom domains, see [Use Custom Domains](custom-domains-using).
## Overview
You can use custom domains to alias Replicated endpoints by creating Canonical Name (CNAME) records for your domains.
Replicated domains are external to your domain and can require additional security reviews by your customer. Using custom domains as aliases can bring the domains inside an existing security review and reduce your exposure.
You can configure custom domains for the following services:
- **Proxy registry:** Images can be proxied from external private registries using the Replicated proxy registry. By default, the proxy registry uses the domain `proxy.replicated.com`. Replicated recommends using a CNAME such as `proxy.{your app name}.com`.
- **Replicated app service:** Upstream application YAML and metadata, including a license ID, are pulled from the app service. By default, this service uses the domain `replicated.app`. Replicated recommends using a CNAME such as `updates.{your app name}.com`.
- **Download Portal:** The Download Portal can be used to share customer license files, air gap bundles, and so on. By default, the Download Portal uses the domain `get.replicated.com`. Replicated recommends using a CNAME such as `portal.{your app name}.com` or `enterprise.{your app name}.com`.
- **Replicated registry:** Images and Helm charts can be pulled from the Replicated registry. By default, this registry uses the domain `registry.replicated.com`. Replicated recommends using a CNAME such as `registry.{your app name}.com`.
## Limitations
Using custom domains has the following limitations:
- A single custom domain cannot be used for multiple endpoints. For example, a single domain can map to `registry.replicated.com` for any number of applications, but cannot map to both `registry.replicated.com` and `proxy.replicated.com`, even if the applications are different.
- Custom domains cannot be used to alias `api.replicated.com` (legacy customer-facing APIs) or kURL.
- Multiple custom domains can be configured, but only one custom domain can be the default for each Replicated endpoint. All configured custom domains work whether or not they are the default.
- Each custom domain can only be used by one team.
- For [Replicated Embedded Cluster](/vendor/embedded-overview) installations, any Helm [`extensions`](/reference/embedded-config) that you add in the Embedded Cluster Config do not use custom domains. During deployment, Embedded Cluster pulls both the repo for the given chart and any images in the chart as written. Embedded Cluster does not rewrite image names to use custom domains.
---
# Configure Custom Metrics (Beta)
This topic describes how to configure an application to send custom metrics to the Replicated Vendor Portal.
## Overview
In addition to the built-in insights displayed in the Vendor Portal by default (such as uptime and time to install), you can also configure custom metrics to measure instances of your application running in customer environments. Custom metrics can be collected for application instances running in online or air gap environments.
Custom metrics can be used to generate insights on customer usage and adoption of new features, which can help your team to make more informed prioritization decisions. For example:
* Decreased or plateaued usage for a customer can indicate a potential churn risk
* Increased usage for a customer can indicate the opportunity to invest in growth, co-marketing, and upsell efforts
* Low feature usage and adoption overall can indicate the need to invest in usability, discoverability, documentation, education, or in-product onboarding
* High usage volume for a customer can indicate that the customer might need help in scaling their instance infrastructure to keep up with projected usage
## How the Vendor Portal Collects Custom Metrics
The Vendor Portal collects custom metrics through the Replicated SDK that is installed in the cluster alongside the application.
The SDK exposes an in-cluster API where you can configure your application to POST metric payloads. When an application instance sends data to the API, the SDK sends the data (including any custom and built-in metrics) to the Replicated app service. The app service is located at `replicated.app` or at your custom domain.
If any values in the metric payload are different from the current values for the instance, then a new event is generated and displayed in the Vendor Portal. For more information about how the Vendor Portal generates events, see [How the Vendor Portal Generates Events and Insights](/vendor/instance-insights-event-data#about-events) in _About Instance and Event Data_.
The following diagram demonstrates how a custom `activeUsers` metric is sent to the in-cluster API and ultimately displayed in the Vendor Portal, as described above:
[View a larger version of this image](/images/custom-metrics-flow.png)
## Requirements
To support the collection of custom metrics in online and air gap environments, the Replicated SDK version 1.0.0-beta.12 or later must be running in the cluster alongside the application instance.
The `PATCH` and `DELETE` methods described below are available in the Replicated SDK version 1.0.0-beta.23 or later.
For more information about the Replicated SDK, see [About the Replicated SDK](/vendor/replicated-sdk-overview).
If you have any customers running earlier versions of the SDK, Replicated recommends that you add logic to your application to gracefully handle a 404 from the in-cluster APIs.
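If you have not yet added the SDK to your application, note that it is distributed as a Helm chart that is typically declared as a dependency of your application chart. The following is a minimal sketch; the chart name and version shown are illustrative, so confirm the latest SDK version before using it:
```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 1.0.0
dependencies:
  - name: replicated
    repository: oci://registry.replicated.com/library
    version: 1.0.0-beta.23
```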
## Limitations
Custom metrics have the following limitations:
* The label that is used to display metrics in the Vendor Portal cannot be customized. Metrics are sent to the Vendor Portal with the same name that is sent in the `POST` or `PATCH` payload. The Vendor Portal then converts camel case to title case: for example, `activeUsers` is displayed as **Active Users**.
* The in-cluster APIs accept only JSON scalar values for metrics. Any requests containing nested objects or arrays are rejected.
* When using the `POST` method, any existing keys that are not included in the payload are removed from the instance summary. To create new metrics or update existing values without sending the entire dataset, use the `PATCH` method.
## Configure Custom Metrics
You can configure your application to `POST` or `PATCH` a set of metrics as key value pairs to the API that is running in the cluster alongside the application instance.
To remove an existing custom metric use the `DELETE` endpoint with the custom metric name.
The Replicated SDK provides an in-cluster API custom metrics endpoint at `http://replicated:3000/api/v1/app/custom-metrics`.
**Example:**
```bash
POST http://replicated:3000/api/v1/app/custom-metrics
```
```json
{
  "data": {
    "num_projects": 5,
    "weekly_active_users": 10
  }
}
```
```bash
PATCH http://replicated:3000/api/v1/app/custom-metrics
```
```json
{
  "data": {
    "num_projects": 54,
    "num_error": 2
  }
}
```
```bash
DELETE http://replicated:3000/api/v1/app/custom-metrics/num_projects
```
### POST vs PATCH
The `POST` method will always replace the existing data with the most recent payload received. Any existing keys not included in the most recent payload will still be accessible in the instance events API, but they will no longer appear in the instance summary.
The `PATCH` method accepts partial updates, and it adds new custom metrics when the payload includes a key that does not already exist.
In most cases, using the `PATCH` method is recommended.
For example, if a component of your application sends the following via the `POST` method:
```json
{
  "numProjects": 5,
  "activeUsers": 10
}
```
Then, the component later sends the following also via the `POST` method:
```json
{
  "activeUsers": 10,
  "usingCustomReports": false
}
```
The instance detail will show `Active Users: 10` and `Using Custom Reports: false`, which represents the most recent payload received. The previously sent `numProjects` value is removed from the instance summary, but remains available in the instance events payload. To preserve `numProjects` from the initial payload and upsert `usingCustomReports` and `activeUsers`, use the `PATCH` method instead of `POST` on subsequent calls to the endpoint.
For example, if a component of your application initially sends the following via the `POST` method:
```json
{
  "numProjects": 5,
  "activeUsers": 10
}
```
Then, the component later sends the following also via the `PATCH` method:
```json
{
  "usingCustomReports": false
}
```
The instance detail will show `Num Projects: 5`, `Active Users: 10`, and `Using Custom Reports: false`, which represents the merged and upserted payload.
### NodeJS Example
The following example shows a NodeJS application that sends metrics on a weekly interval to the in-cluster API exposed by the SDK:
```javascript
async function sendMetrics(db) {
  const projectsQuery = "SELECT COUNT(*) as num_projects from projects";
  const numProjects = (await db.getConnection().queryOne(projectsQuery)).num_projects;

  const usersQuery =
    "SELECT COUNT(*) as active_users from users where DATEDIFF('day', last_active, CURRENT_TIMESTAMP) < 7";
  const activeUsers = (await db.getConnection().queryOne(usersQuery)).active_users;

  const metrics = { data: { numProjects, activeUsers }};

  // POST the metrics to the in-cluster API exposed by the Replicated SDK
  const res = await fetch('http://replicated:3000/api/v1/app/custom-metrics', {
    method: 'POST',
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(metrics),
  });
  if (res.status !== 200) {
    throw new Error(`Failed to send metrics: ${res.statusText}`);
  }
}

async function startMetricsLoop(db) {
  const ONE_WEEK_IN_MS = 1000 * 60 * 60 * 24 * 7;

  // send metrics once on startup
  await sendMetrics(db)
    .catch((e) => { console.log("error sending metrics: ", e) });

  // schedule weekly metrics payload
  setInterval(() => {
    sendMetrics(db)
      .catch((e) => { console.log("error sending metrics: ", e) });
  }, ONE_WEEK_IN_MS);
}

startMetricsLoop(getDatabase());
```
## View Custom Metrics
You can view the custom metrics that you configure for each active instance of your application on the **Instance Details** page in the Vendor Portal.
The following shows an example of an instance with custom metrics:
[View a larger version of this image](/images/instance-custom-metrics.png)
As shown in the image above, the **Custom Metrics** section of the **Instance Details** page includes the following information:
* The timestamp when the custom metric data was last updated.
* Each custom metric that you configured, along with the most recent value for the metric.
* A time-series graph depicting the historical data trends for the selected metric.
Custom metrics are also included in the **Instance activity** stream of the **Instance Details** page. For more information, see [Instance Activity](/vendor/instance-insights-details#instance-activity) in _Instance Details_.
## Export Custom Metrics
You can use the Vendor API v3 `/app/{app_id}/events` endpoint to programmatically access historical time series data containing instance-level events, including any custom metrics that you have defined. For more information about the endpoint, see [Export Customer and Instance Data](/vendor/instance-data-export).
---
# Adoption Report
This topic describes the insights in the **Adoption** section on the Replicated Vendor Portal **Dashboard** page.
## About Adoption Rate
The **Adoption** section on the **Dashboard** provides insights about the rate at which your customers upgrade their instances and adopt the latest versions of your application. As an application vendor, you can use these adoption rate metrics to learn if your customers are completing upgrades regularly, which is a key indicator of the discoverability and ease of application upgrades.
The Vendor Portal generates adoption rate data from all of your customers' application instances that have checked in during the selected time period. For more information about instance check-ins, see [How the Vendor Portal Collects Instance Data](instance-insights-event-data#about-reporting) in _About Instance and Event Data_.
The following screenshot shows an example of the **Adoption** section on the **Dashboard**:

[View a larger version of this image](/images/customer_adoption_rates.png)
As shown in the screenshot above, the **Adoption** report includes a graph and key adoption rate metrics. For more information about how to interpret this data, see [Adoption Graph](#graph) and [Adoption Metrics](#metrics) below.
The **Adoption** report also displays the number of customers assigned to the selected channel and a link to the report that you can share with other members of your team.
You can filter the graph and metrics in the **Adoption** report by:
* License type (Paid, Trial, Dev, or Community)
* Time period (the previous month, three months, six months, or twelve months)
* Release channel to which instance licenses are assigned, such as Stable or Beta
## Adoption Graph {#graph}
The **Adoption** report includes a graph that shows the percent of active instances that are running different versions of your application within the selected time period.
The following shows an example of an adoption rate graph with three months of data:

[View a larger version of this image](/images/adoption_rate_graph.png)
As shown in the image above, the graph plots the number of active instances in each week in the selected time period, grouped by the version each instance is running. The key to the left of the graph shows the unique color that is assigned to each application version. You can use this color-coding to see at a glance the percent of active instances that were running different versions of your application across the selected time period.
Newer versions will enter at the bottom of the area chart, with older versions shown higher up.
You can also hover over a color-coded section in the graph to view the number and percentage of active instances that were running the version in a given period.
If there are no active instances of your application, then the adoption rate graph displays a "No Instances" message.
## Adoption Metrics {#metrics}
The **Adoption** section includes metrics that show how frequently your customers discover and complete upgrades to new versions of your application. It is important that your users adopt new versions of your application so that they have access to the latest features and bug fixes. Additionally, when most of your users are on the latest versions, you can also reduce the number of versions for which you provide support and maintain documentation.
The following shows an example of the metrics in the **Adoption** section:

[View a larger version of this image](/images/adoption_rate_metrics.png)
As shown in the image above, the **Adoption** section displays the following metrics:
* Instances on last three versions
* Unique versions
* Median relative age
* Upgrades completed
Based on the time period selected, each metric includes an arrow that shows the change in value compared to the previous period. For example, if the median relative age today is 68 days, the selected time period is three months, and three months ago the median relative age was 55 days, then the metric would show an upward-facing arrow with an increase of 13 days.
The following table describes each metric in the **Adoption** section, including the recommended trend for the metric over time:

| Metric | Description | Target Trend |
|---|---|---|
| Instances on last three versions | Percent of active instances that are running one of the latest three versions of your application. | Increase towards 100% |
| Unique versions | Number of unique versions of your application running in active instances. | Decrease towards less than or equal to three |
| Median relative age | The relative age of a single instance is the number of days between the date that the instance's version was promoted to the channel and the date when the latest available application version was promoted to the channel. Median relative age is the median value across all active instances for the selected time period and channel. | Depends on release cadence. For vendors who ship every four to eight weeks, decrease the median relative age towards 60 days or fewer. |
| Upgrades completed | Total number of completed upgrades across active instances for the selected time period and channel. An upgrade is a single version change for an instance. An upgrade is considered complete when the instance deploys the new application version. The instance does not need to become available (as indicated by reaching a Ready state) after deploying the new version for the upgrade to be counted as complete. | Increase compared to any previous period, unless you reduce your total number of live instances. |
---
# Disaster Recovery for Embedded Cluster (Alpha)
This topic describes the disaster recovery feature for Replicated Embedded Cluster installations, including how to configure the required Velero custom resources and how your customers can take backups and restore from them.
## Limitations and Known Issues {#limitations-and-known-issues}
Embedded Cluster disaster recovery has the following limitations and known issues:
* Any Helm extensions included in the `extensions` field of the Embedded Cluster Config are _not_ included in backups. Helm extensions are reinstalled as part of the restore process. To include Helm extensions in backups, configure the Velero Backup resource to include the extensions using namespace-based or label-based selection. For more information, see [Configure the Velero Custom Resources](#config-velero-resources) below.
* Users can only restore from the most recent backup.
* Velero is installed only during the initial installation. Enabling the disaster recovery license field for a customer after they have already installed has no effect on the existing installation.
* If the `--admin-console-port` flag was used during install to change the port for the Admin Console, note that the Admin Console port from the backup is used during the restore and cannot be changed. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
## Configure Disaster Recovery
This section describes how to configure disaster recovery for Embedded Cluster installations. It also describes how to enable access to the disaster recovery feature on a per-customer basis.
### Configure the Velero Custom Resources {#config-velero-resources}
This section describes how to set up Embedded Cluster disaster recovery for your application by configuring Velero [Backup](https://velero.io/docs/latest/api-types/backup/) and [Restore](https://velero.io/docs/latest/api-types/restore/) custom resources in a release.
To configure Velero Backup and Restore custom resources for Embedded Cluster disaster recovery:
1. In a new release containing your application files, add a Velero Backup resource. In the Backup resource, use namespace-based or label-based selection to indicate the application resources that you want to be included in the backup. For more information, see [Backup API Type](https://velero.io/docs/latest/api-types/backup/) in the Velero documentation.
:::important
If you use namespace-based selection to include all of your application resources deployed in the `kotsadm` namespace, ensure that you exclude the Replicated resources that are also deployed in the `kotsadm` namespace. The Embedded Cluster infrastructure components are always included in backups automatically, so excluding them from your Backup resource avoids duplication.
:::
**Example:**
The following Backup resource uses namespace-based selection to include application resources deployed in the `kotsadm` namespace:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
spec:
  # Back up the resources in the kotsadm namespace
  includedNamespaces:
    - kotsadm
  orLabelSelectors:
    - matchExpressions:
        # Exclude Replicated resources from the backup
        - { key: kots.io/kotsadm, operator: NotIn, values: ["true"] }
```
1. In the same release, add a Velero Restore resource. In the `backupName` field of the Restore resource, include the name of the Backup resource that you created. For more information, see [Restore API Type](https://velero.io/docs/latest/api-types/restore/) in the Velero documentation.
**Example**:
```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore
spec:
  # The name of the Backup resource that you created
  backupName: backup
  includedNamespaces:
    - '*'
```
1. For any image names that you include in your Backup and Restore resources, rewrite the image name using the Replicated KOTS [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry), [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost), and [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) template functions. This ensures that the image name is rendered correctly during deployment, allowing the image to be pulled from the user's local image registry (such as in air gap installations) or through the Replicated proxy registry.
**Example:**
```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore
spec:
  hooks:
    resources:
      - name: restore-hook-1
        includedNamespaces:
          - kotsadm
        labelSelector:
          matchLabels:
            app: example
        postHooks:
          - init:
              initContainers:
                - name: restore-hook-init1
                  image:
                    # Use HasLocalRegistry, LocalRegistryHost, and LocalRegistryNamespace
                    # to template the image name
                    registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "proxy.replicated.com" }}'
                    repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/my-app/quay.io/my-org" }}/nginx'
                    tag: 1.24-alpine
```
For more information about how to rewrite image names using the KOTS [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry), [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost), and [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) template functions, including additional examples, see [Task 1: Rewrite Image Names](helm-native-v2-using#rewrite-image-names) in _Configuring the HelmChart v2 Custom Resource_.
1. If you support air gap installations, add any images that are referenced in your Backup and Restore resources to the `additionalImages` field of the KOTS Application custom resource. This ensures that the images are included in the air gap bundle for the release so they can be used during the backup and restore process in environments with limited or no outbound internet access. For more information, see [additionalImages](/reference/custom-resource-application#additionalimages) in _Application_.
**Example:**
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  additionalImages:
    - elasticsearch:7.6.0
    - quay.io/orgname/private-image:v1.2.3
```
1. (Optional) Use Velero functionality like [backup](https://velero.io/docs/main/backup-hooks/) and [restore](https://velero.io/docs/main/restore-hooks/) hooks to customize the backup and restore process as needed.
**Example:**
A Postgres database might be backed up by using `pg_dump` to extract the database to a file in a backup hook. The file can then be used to restore the database in a restore hook:
```yaml
podAnnotations:
  backup.velero.io/backup-volumes: backup
  pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U {{repl ConfigOption "postgresql_username" }} -d {{repl ConfigOption "postgresql_database" }} -h 127.0.0.1 > /scratch/backup.sql"]'
  pre.hook.backup.velero.io/timeout: 3m
  post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f \"/scratch/backup.sql\" ] && PGPASSWORD=$POSTGRES_PASSWORD psql -U {{repl ConfigOption "postgresql_username" }} -h 127.0.0.1 -d {{repl ConfigOption "postgresql_database" }} -f /scratch/backup.sql && rm -f /scratch/backup.sql;"]'
  post.hook.restore.velero.io/wait-for-ready: 'true' # waits for the pod to be ready before running the post-restore hook
```
1. Save and promote the release to a development channel for testing.
### Enable the Disaster Recovery Feature for Your Customers
After configuring disaster recovery for your application, you can enable it on a per-customer basis with the **Allow Disaster Recovery (Alpha)** license field.
To enable disaster recovery for a customer:
1. In the Vendor Portal, go to the [Customers](https://vendor.replicated.com/customers) page and select the target customer.
1. On the **Manage customer** page, under **License options**, enable the **Allow Disaster Recovery (Alpha)** field.
When your customer installs with Embedded Cluster, Velero will be deployed if the **Allow Disaster Recovery (Alpha)** license field is enabled.
## Take Backups and Restore
This section describes how your customers can configure backup storage, take backups, and restore from backups.
### Configure Backup Storage and Take Backups in the Admin Console
Customers with the **Allow Disaster Recovery (Alpha)** license field can configure their backup storage location and take backups from the Admin Console.
To configure backup storage and take backups:
1. After installing the application and logging in to the Admin Console, click the **Disaster Recovery** tab at the top of the Admin Console.
1. For the desired S3-compatible backup storage location, enter the bucket, prefix (optional), access key ID, access key secret, endpoint, and region. Click **Update storage settings**.
[View a larger version of this image](/images/dr-backup-storage-settings.png)
1. (Optional) From this same page, configure scheduled backups and a retention policy for backups.
[View a larger version of this image](/images/dr-scheduled-backups.png)
1. In the **Disaster Recovery** submenu, click **Backups**. Backups can be taken from this screen.
[View a larger version of this image](/images/dr-backups.png)
### Restore from a Backup
To restore from a backup:
1. SSH onto a new machine where you want to restore from a backup.
1. Download the Embedded Cluster installation assets for the version of the application that was included in the backup. You can find the command for downloading Embedded Cluster installation assets in the **Embedded Cluster install instructions** dialog for the customer. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
:::note
The version of the Embedded Cluster installation assets must match the version that is in the backup. For more information, see [Limitations and Known Issues](#limitations-and-known-issues).
:::
1. Run the restore command:
```bash
sudo ./APP_SLUG restore
```
Where `APP_SLUG` is the unique application slug.
Note the following requirements and guidance for the `restore` command:
* If the installation is behind a proxy, the same proxy settings provided during install must be provided to the restore command using `--http-proxy`, `--https-proxy`, and `--no-proxy`. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
* If the `--cidr` flag was used during install to set the IP address ranges for Pods and Services, this flag must be provided with the same CIDR during the restore. If this flag is not provided or is provided with a different CIDR, the restore will fail with an error message telling you to rerun with the appropriate value. However, it will take some time before that error occurs. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
* If the `--local-artifact-mirror-port` flag was used during install to change the port for the Local Artifact Mirror (LAM), you can optionally use the `--local-artifact-mirror-port` flag to choose a different LAM port during restore. For example, `restore --local-artifact-mirror-port=50000`. If no LAM port is provided during restore, the LAM port that was supplied during installation will be used. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
You will be guided through the process of restoring from a backup.
1. When prompted, enter the information for the backup storage location.

[View a larger version of this image](/images/dr-restore.png)
1. When prompted, confirm that you want to restore from the detected backup.

[View a larger version of this image](/images/dr-restore-from-backup-confirmation.png)
After some time, the Admin Console URL is displayed:

[View a larger version of this image](/images/dr-restore-admin-console-url.png)
1. (Optional) If the cluster should have multiple nodes, go to the Admin Console to get a join command and join additional nodes to the cluster. For more information, see [Manage Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
1. Type `continue` when you are ready to proceed with the restore process.

[View a larger version of this image](/images/dr-restore-continue.png)
After some time, the restore process completes.
If the `restore` command is interrupted during the restore process, you can resume by rerunning the `restore` command and selecting to resume the previous restore. This is useful if your SSH session is interrupted during the restore.
---
# Embedded Cluster Overview
This topic provides an introduction to Replicated Embedded Cluster, including a description of the built-in extensions installed by Embedded Cluster, an overview of the Embedded Cluster single-node and multi-node architecture, and requirements and limitations.
:::note
If you are instead looking for information about creating Kubernetes Installers with Replicated kURL, see the [Replicated kURL](/vendor/packaging-embedded-kubernetes) section.
:::
## Overview
Replicated Embedded Cluster allows you to distribute a Kubernetes cluster and your application together as a single appliance, making it easy for enterprise users to install, update, and manage the application and the cluster in tandem. Embedded Cluster is based on the open source Kubernetes distribution k0s. For more information, see the [k0s documentation](https://docs.k0sproject.io/stable/).
For software vendors, Embedded Cluster provides a Config for defining characteristics of the cluster that will be created in the customer environment. Additionally, each version of Embedded Cluster includes a specific version of Replicated KOTS, ensuring compatibility between KOTS and the cluster. For enterprise users, cluster updates are done automatically at the same time as application updates, allowing users to more easily keep the cluster up-to-date without needing to use kubectl.
## Architecture
This section describes the Embedded Cluster architecture, including the built-in extensions deployed by Embedded Cluster.
### Single-Node Architecture
The following diagram shows the architecture of a single-node Embedded Cluster installation for an application named Gitea:

[View a larger version of this image](/images/embedded-architecture-single-node.png)
As shown in the diagram above, the user downloads the Embedded Cluster installation assets as a `.tgz` in their installation environment. These installation assets include the Embedded Cluster binary, the user's license file, and (for air gap installations) an air gap bundle containing the images needed to install and run the release in an environment with limited or no outbound internet access.
When the user runs the Embedded Cluster install command, the Embedded Cluster binary first installs the k0s cluster as a systemd service.
After all the Kubernetes components for the cluster are available, the Embedded Cluster binary then installs the Embedded Cluster built-in extensions. For more information about these extensions, see [Built-In Extensions](#built-in-extensions) below.
Any Helm extensions that were included in the [`extensions`](/reference/embedded-config#extensions) field of the Embedded Cluster Config are also installed. The namespace or namespaces where Helm extensions are installed are defined by the vendor in the Embedded Cluster Config.
Finally, Embedded Cluster also installs Local Artifact Mirror (LAM). In air gap installations, LAM is used to store and update images.
### Multi-Node Architecture
The following diagram shows the architecture of a multi-node Embedded Cluster installation:

[View a larger version of this image](/images/embedded-architecture-multi-node.png)
As shown in the diagram above, in multi-node installations, the Embedded Cluster Operator, KOTS, and the image registry for air gap installations are all installed on one controller node.
For installations that include disaster recovery with Velero, the Velero Node Agent runs on each node in the cluster. The Node Agent is a Kubernetes DaemonSet that performs backup and restore tasks such as creating snapshots and transferring data during restores.
Additionally, any Helm [`extensions`](/reference/embedded-config#extensions) that you include in the Embedded Cluster Config are installed in the cluster depending on the given chart and how it is configured to be deployed.
### Multi-Node Architecture with High Availability
:::note
High availability (HA) for multi-node installations with Embedded Cluster is Alpha and is not enabled by default. For more information about enabling HA, see [Enable High Availability for Multi-Node Clusters (Alpha)](/enterprise/embedded-manage-nodes#ha).
:::
The following diagram shows the architecture of an HA multi-node Embedded Cluster installation:

[View a larger version of this image](/images/embedded-architecture-multi-node-ha.png)
As shown in the diagram above, in HA installations with Embedded Cluster:
* A single replica of the Embedded Cluster Operator is deployed and runs on a controller node.
* A single replica of the KOTS Admin Console is deployed and runs on a controller node.
* Three replicas of rqlite are deployed in the kotsadm namespace. Rqlite is used by KOTS to store information such as support bundles, version history, application metadata, and other small amounts of data needed to manage the application.
* For installations that include disaster recovery, the Velero pod is deployed on one node. The Velero Node Agent runs on each node in the cluster. The Node Agent is a Kubernetes DaemonSet that performs backup and restore tasks such as creating snapshots and transferring data during restores.
* For air gap installations, two replicas of the air gap image registry are deployed.
Any Helm [`extensions`](/reference/embedded-config#extensions) that you include in the Embedded Cluster Config are installed in the cluster depending on the given chart and whether or not it is configured to be deployed with high availability.
## Built-In Extensions {#built-in-extensions}
Embedded Cluster includes several built-in extensions. The built-in extensions provide capabilities such as application management and storage. Each built-in extension is installed in its own namespace.
The built-in extensions installed by Embedded Cluster include:
* **Embedded Cluster Operator**: The Operator is used for reporting purposes as well as some clean up operations.
* **KOTS:** Embedded Cluster installs the KOTS Admin Console in the kotsadm namespace. End customers use the Admin Console to configure and install the application. Rqlite is also installed in the kotsadm namespace alongside KOTS. Rqlite is a distributed relational database that uses SQLite as its storage engine. KOTS uses rqlite to store information such as support bundles, version history, application metadata, and other small amounts of data needed to manage the application. For more information about rqlite, see the [rqlite](https://rqlite.io/) website.
* **OpenEBS:** Embedded Cluster uses OpenEBS to provide local PersistentVolume (PV) storage, including the PV storage for rqlite used by KOTS. For more information, see the [OpenEBS](https://openebs.io/docs/) documentation.
* **(Disaster Recovery Only) Velero:** If the installation uses the Embedded Cluster disaster recovery feature, Embedded Cluster installs Velero, which is an open-source tool that provides backup and restore functionality. For more information about Velero, see the [Velero](https://velero.io/docs/latest/) documentation. For more information about the disaster recovery feature, see [Disaster Recovery for Embedded Cluster (Alpha)](/vendor/embedded-disaster-recovery).
* **(Air Gap Only) Image registry:** For air gap installations in environments with limited or no outbound internet access, Embedded Cluster installs an image registry where the images required to install and run the application are pushed. For more information about installing in air-gapped environments, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap).
## Comparison to kURL
Embedded Cluster is a successor to Replicated kURL. Compared to kURL, Embedded Cluster offers several improvements such as:
* Significantly faster installation, updates, and node joins
* A redesigned Admin Console UI for managing the cluster
* Improved support for multi-node clusters
* One-click updates of both the application and the cluster at the same time
Additionally, Embedded Cluster automatically deploys several built-in extensions like KOTS and OpenEBS to provide capabilities such as application management and storage. This represents an improvement over kURL because vendors distributing their application with Embedded Cluster no longer need to choose and define various add-ons in the installer spec. For additional functionality that is not included in the built-in extensions, such as an ingress controller, vendors can provide their own [`extensions`](/reference/embedded-config#extensions) that will be deployed alongside the application, as shown in the sketch below.
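For illustration, the following is a minimal sketch of the Helm `extensions` field in the Embedded Cluster Config, assuming a hypothetical ingress-nginx extension. Refer to the [Embedded Cluster Config](/reference/embedded-config#extensions) reference for the full schema:
```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  extensions:
    helm:
      repositories:
        - name: ingress-nginx
          url: https://kubernetes.github.io/ingress-nginx
      charts:
        - name: ingress-nginx
          chartname: ingress-nginx/ingress-nginx
          namespace: ingress-nginx
          version: "4.11.1"
```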
## Requirements
### System Requirements
* Linux operating system
* x86-64 architecture
* systemd
* At least 2GB of memory and 2 CPU cores
* The disk on the host must have a maximum P99 write latency of 10 ms. This supports etcd performance and stability. For more information about the disk write latency requirements for etcd, see [Disks](https://etcd.io/docs/latest/op-guide/hardware/#disks) in _Hardware recommendations_ and [What does the etcd warning “failed to send out heartbeat on time” mean?](https://etcd.io/docs/latest/faq/) in the etcd documentation.
* The data directory used by Embedded Cluster must have 40Gi or more of total space and be less than 80% full. By default, the data directory is `/var/lib/embedded-cluster`. The directory can be changed by passing the `--data-dir` flag with the Embedded Cluster `install` command. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
Note that in addition to the primary data directory, Embedded Cluster creates directories and files in the following locations:
- `/etc/cni`
- `/etc/k0s`
- `/opt/cni`
- `/opt/containerd`
- `/run/calico`
- `/run/containerd`
- `/run/k0s`
- `/sys/fs/cgroup/kubepods`
- `/sys/fs/cgroup/system.slice/containerd.service`
- `/sys/fs/cgroup/system.slice/k0scontroller.service`
- `/usr/libexec/k0s`
- `/var/lib/calico`
- `/var/lib/cni`
- `/var/lib/containers`
- `/var/lib/kubelet`
- `/var/log/calico`
- `/var/log/containers`
- `/var/log/embedded-cluster`
- `/var/log/pods`
- `/usr/local/bin/k0s`
* (Online installations only) Access to replicated.app and proxy.replicated.com, or to your custom domains for each of these services
* Embedded Cluster is based on k0s, so all k0s system requirements and external runtime dependencies apply. See [System requirements](https://docs.k0sproject.io/stable/system-requirements/) and [External runtime dependencies](https://docs.k0sproject.io/stable/external-runtime-deps/) in the k0s documentation.
### Port Requirements
This section lists the ports used by Embedded Cluster. These ports must be open and available for both single- and multi-node installations.
#### Ports Used by Local Processes
The following ports must be open and available for use by local processes running on the same node. It is not necessary to create firewall openings for these ports.
* 2379/TCP
* 7443/TCP
* 9099/TCP
* 10248/TCP
* 10257/TCP
* 10259/TCP
#### Ports Required for Bidirectional Communication Between Nodes
The following ports are used for bidirectional communication between nodes.
For multi-node installations, create firewall openings between nodes for these ports.
For single-node installations, ensure that there are no other processes using these ports. Although there is no communication between nodes in single-node installations, these ports are still required.
* 2380/TCP
* 4789/UDP
* 6443/TCP
* 9091/TCP
* 9443/TCP
* 10249/TCP
* 10250/TCP
* 10256/TCP
#### Admin Console Port
The KOTS Admin Console requires that port 30000/TCP is open and available. Create a firewall opening for port 30000/TCP so that the Admin Console can be accessed by the end user.
Additionally, port 30000 must be accessible by nodes joining the cluster.
If port 30000 is occupied, you can select a different port for the Admin Console during installation. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
#### LAM Port
The Local Artifact Mirror (LAM) requires that port 50000/TCP is open and available.
If port 50000 is occupied, you can select a different port for the LAM during installation. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
## Limitations
Embedded Cluster has the following limitations:
* **Migration from kURL**: We are helping several customers migrate from kURL to Embedded Cluster. For more information about migrating from kURL to Embedded Cluster, including key considerations before migrating and an example step-by-step migration process, see [Replicated kURL to Embedded Cluster Migration](https://docs.google.com/document/d/1Qw9owCK4xNXHRRmxDgAq_NJdxQ4O-6w2rWk_luzBD7A/edit?tab=t.0). For additional questions and to begin the migration process for your application, reach out to Alex Parker at alexp@replicated.com.
* **Multi-node support is in beta**: Support for multi-node embedded clusters is in beta, and enabling high availability for multi-node clusters is in alpha. Only single-node embedded clusters are generally available. For more information, see [Manage Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
* **Disaster recovery is in alpha**: Disaster Recovery for Embedded Cluster installations is in alpha. For more information, see [Disaster Recovery for Embedded Cluster (Alpha)](/vendor/embedded-disaster-recovery).
* **Partial rollback support**: In Embedded Cluster 1.17.0 and later, rollbacks are supported only when rolling back to a version where there is no change to the [Embedded Cluster Config](/reference/embedded-config) compared to the currently-installed version. For example, users can roll back to release version 1.0.0 after upgrading to 1.1.0 only if both 1.0.0 and 1.1.0 use the same Embedded Cluster Config. For more information about how to enable rollbacks for your application in the KOTS Application custom resource, see [allowRollback](/reference/custom-resource-application#allowrollback) in _Application_.
* **Changing node hostnames is not supported**: After a host is added to a Kubernetes cluster, Kubernetes assumes that the hostname and IP address of the host will not change. If you need to change the hostname or IP address of a node, you must first remove the node from the cluster. For more information about the requirements for naming nodes, see [Node name uniqueness](https://kubernetes.io/docs/concepts/architecture/nodes/#node-name-uniqueness) in the Kubernetes documentation.
* **Automatic updates not supported**: Configuring automatic updates from the Admin Console so that new versions are automatically deployed is not supported for Embedded Cluster installations. For more information, see [Configure Automatic Updates](/enterprise/updating-apps).
* **`minKotsVersion` and `targetKotsVersion` not supported**: The [`minKotsVersion`](/reference/custom-resource-application#minkotsversion-beta) and [`targetKotsVersion`](/reference/custom-resource-application#targetkotsversion) fields in the KOTS Application custom resource are not supported for Embedded Cluster installations. This is because each version of Embedded Cluster includes a particular version of KOTS. Setting `targetKotsVersion` or `minKotsVersion` to a version of KOTS that does not coincide with the version that is included in the specified version of Embedded Cluster will cause Embedded Cluster installations to fail with an error message like: `Error: This version of App Name requires a different version of KOTS from what you currently have installed`. To avoid installation failures, do not use `targetKotsVersion` or `minKotsVersion` in releases that support installation with Embedded Cluster.
* **Support bundles over 100MB in the Admin Console**: Support bundles are stored in rqlite. Bundles over 100MB could cause rqlite to crash, causing errors in the installation. You can still generate a support bundle from the command line. For more information, see [Generating Support Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
* **Kubernetes version template functions not supported**: The KOTS [KubernetesVersion](/reference/template-functions-static-context#kubernetesversion), [KubernetesMajorVersion](/reference/template-functions-static-context#kubernetesmajorversion), and [KubernetesMinorVersion](/reference/template-functions-static-context#kubernetesminorversion) template functions do not provide accurate Kubernetes version information for Embedded Cluster installations. This is because these template functions are rendered before the Kubernetes cluster has been updated to the intended version. However, `KubernetesVersion` is not necessary for Embedded Cluster because vendors specify the Embedded Cluster version, which includes a known Kubernetes version.
* **KOTS Auto-GitOps workflow not supported**: Embedded Cluster does not support the KOTS Auto-GitOps workflow. If an end-user is interested in GitOps, consider the Helm install method instead. For more information, see [Install with Helm](/vendor/install-with-helm).
* **Downgrading Kubernetes not supported**: Embedded Cluster does not support downgrading Kubernetes. The Admin Console will not prevent end users from attempting to downgrade Kubernetes if a more recent version of your application specifies a previous Embedded Cluster version. You must ensure that you do not promote new versions with previous Embedded Cluster versions.
* **Templating not supported in Embedded Cluster Config**: The [Embedded Cluster Config](/reference/embedded-config) resource does not support the use of Go template functions, including [KOTS template functions](/reference/template-functions-about). This only applies to the Embedded Cluster Config. You can still use template functions in the rest of your release as usual.
* **Policy enforcement on Embedded Cluster workloads is not supported**: The Embedded Cluster runs workloads that require higher levels of privilege. If your application installs a policy enforcement engine such as Gatekeeper or Kyverno, ensure that its policies are not enforced in the namespaces used by Embedded Cluster.
* **Installing on STIG- and CIS-hardened OS images is not supported**: Embedded Cluster isn't tested on these images, and issues have arisen when trying to install on them.
---
# Troubleshoot Embedded Cluster
This topic provides information about troubleshooting Replicated Embedded Cluster installations. For more information about Embedded Cluster, including built-in extensions and architecture, see [Embedded Cluster Overview](/vendor/embedded-overview).
## Troubleshoot with Support Bundles
This section includes information about how to collect support bundles for Embedded Cluster installations. For more information about support bundles, see [About Preflight Checks and Support Bundles](/vendor/preflight-support-bundle-about).
### About the Default Embedded Cluster Support Bundle Spec
Embedded Cluster includes a default support bundle spec that collects both host- and cluster-level information:
* The host-level information is useful for troubleshooting failures related to host configuration like DNS, networking, or storage problems.
* Cluster-level information includes details about the components provided by Replicated, such as the Admin Console and Embedded Cluster Operator that manage install and upgrade operations. If the cluster has not installed successfully and cluster-level information is not available, then it is excluded from the bundle.
In addition to the host- and cluster-level details provided by the default Embedded Cluster spec, support bundles generated for Embedded Cluster installations also include app-level details provided by any custom support bundle specs that you included in the application release.
### Generate a Bundle For Versions 1.17.0 and Later
For Embedded Cluster 1.17.0 and later, you can run the Embedded Cluster `support-bundle` command to generate a support bundle.
The `support-bundle` command uses the default Embedded Cluster support bundle spec to collect both cluster- and host-level information. It also automatically includes any application-specific support bundle specs in the generated bundle.
To generate a support bundle:
1. SSH onto a controller node.
:::note
You can SSH onto a worker node to generate a support bundle that contains information specific to that node. However, when run on a worker node, the `support-bundle` command does not capture cluster-wide information.
:::
1. Run the following command:
```bash
sudo ./APP_SLUG support-bundle
```
Where `APP_SLUG` is the unique slug for the application.
### Generate a Bundle For Versions Earlier Than 1.17.0
For Embedded Cluster versions earlier than 1.17.0, you can generate a support bundle from the shell using the kubectl support-bundle plugin.
To generate a bundle with the support-bundle plugin, you pass the default Embedded Cluster spec to collect both cluster- and host-level information. You also pass the `--load-cluster-specs` flag, which discovers all support bundle specs that are defined in Secrets or ConfigMaps in the cluster. This ensures that any application-specific specs are also included in the bundle. For more information, see [Discover Cluster Specs](https://troubleshoot.sh/docs/support-bundle/discover-cluster-specs/) in the Troubleshoot documentation.
To generate a bundle:
1. SSH onto a controller node.
1. Use the Embedded Cluster shell command to start a shell with access to the cluster:
```bash
sudo ./APP_SLUG shell
```
Where `APP_SLUG` is the unique slug for the application.
The output looks similar to the following:
```bash
__4___
_ \ \ \ \ Welcome to APP_SLUG debug shell.
<'\ /_/_/_/ This terminal is now configured to access your cluster.
((____!___/) Type 'exit' (or CTRL+d) to exit.
\0\0\0\0\/ Happy hacking.
~~~~~~~~~~~
root@alex-ec-2:/home/alex# export KUBECONFIG="/var/lib/embedded-cluster/k0s/pki/admin.conf"
root@alex-ec-2:/home/alex# export PATH="$PATH:/var/lib/embedded-cluster/bin"
root@alex-ec-2:/home/alex# source <(kubectl completion bash)
root@alex-ec-2:/home/alex# source /etc/bash_completion
```
The appropriate kubeconfig is exported, and the location of useful binaries like kubectl and the preflight and support-bundle plugins is added to PATH.
:::note
The shell command cannot be run on non-controller nodes.
:::
1. Generate the support bundle using the default Embedded Cluster spec and the `--load-cluster-specs` flag:
```bash
kubectl support-bundle --load-cluster-specs /var/lib/embedded-cluster/support/host-support-bundle.yaml
```
## View Logs
You can view logs for both Embedded Cluster and the k0s systemd service to help troubleshoot Embedded Cluster deployments.
### View Installation Logs for Embedded Cluster
To view installation logs for Embedded Cluster:
1. SSH onto a controller node.
1. Navigate to `/var/log/embedded-cluster` and open the `.log` file to view logs.
### View k0s Logs
You can use the journalctl command line tool to access logs for systemd services, including k0s. For more information about k0s, see the [k0s documentation](https://docs.k0sproject.io/stable/).
To use journalctl to view k0s logs:
1. SSH onto a controller node or a worker node.
1. Use journalctl to view logs for the k0s systemd service that was deployed by Embedded Cluster.
**Examples:**
```bash
journalctl -u k0scontroller
```
```bash
journalctl -u k0sworker
```
## Access the Cluster
When troubleshooting, it can be useful to list the cluster and view logs using the kubectl command line tool. For additional suggestions related to troubleshooting applications, see [Troubleshooting Applications](https://kubernetes.io/docs/tasks/debug/debug-application/) in the Kubernetes documentation.
To access the cluster and use other included binaries:
1. SSH onto a controller node.
:::note
You cannot run the `shell` command on worker nodes.
:::
1. Use the Embedded Cluster shell command to start a shell with access to the cluster:
```
sudo ./APP_SLUG shell
```
Where `APP_SLUG` is the unique slug for the application.
The output looks similar to the following:
```
__4___
_ \ \ \ \ Welcome to APP_SLUG debug shell.
<'\ /_/_/_/ This terminal is now configured to access your cluster.
((____!___/) Type 'exit' (or CTRL+d) to exit.
\0\0\0\0\/ Happy hacking.
~~~~~~~~~~~
root@alex-ec-1:/home/alex# export KUBECONFIG="/var/lib/embedded-cluster/k0s/pki/admin.conf"
root@alex-ec-1:/home/alex# export PATH="$PATH:/var/lib/embedded-cluster/bin"
root@alex-ec-1:/home/alex# source <(k0s completion bash)
root@alex-ec-1:/home/alex# source <(cat /var/lib/embedded-cluster/bin/kubectl_completion_bash.sh)
root@alex-ec-1:/home/alex# source /etc/bash_completion
```
The appropriate kubeconfig is exported, and the location of useful binaries like kubectl and Replicated’s preflight and support-bundle plugins is added to PATH.
1. Use the available binaries as needed.
**Example**:
```bash
kubectl version
```
```
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.1+k0s
```
1. Type `exit` or **Ctrl + D** to exit the shell.
## Troubleshoot Errors
This section provides troubleshooting advice for common errors.
### Installation failure when NVIDIA GPU Operator is included as Helm extension {#nvidia}
#### Symptom
A release that includes the NVIDIA GPU Operator as a Helm extension fails to install.
#### Cause
If there are any containerd services on the host, the NVIDIA GPU Operator will generate an invalid containerd config, causing the installation to fail.
This is the result of a known issue with v24.9.x of the NVIDIA GPU Operator. For more information about the known issue, see [container-toolkit does not modify the containerd config correctly when there are multiple instances of the containerd binary](https://github.com/NVIDIA/nvidia-container-toolkit/issues/982) in the nvidia-container-toolkit repository in GitHub.
For more information about including the GPU Operator as a Helm extension, see [NVIDIA GPU Operator](/vendor/embedded-using#nvidia-gpu-operator) in _Using Embedded Cluster_.
#### Solution
To troubleshoot:
1. Remove any existing containerd services that are running on the host (such as those deployed by Docker).
1. Reset and reboot the node:
```bash
sudo ./APP_SLUG reset
```
Where `APP_SLUG` is the unique slug for the application.
For more information, see [Reset a Node](/vendor/embedded-using#reset-a-node) in _Using Embedded Cluster_.
1. Re-install with Embedded Cluster.
### Calico networking issues
#### Symptom
Symptoms of Calico networking issues can include:
* The pod is stuck in a CrashLoopBackOff state with failed health checks:
```
Warning Unhealthy 6h51m (x3 over 6h52m) kubelet Liveness probe failed: Get "http://
```
1. Enter an Admin Console password when prompted.
The Admin Console URL is printed when the installation finishes. Access the Admin Console to begin installing your application. During the installation process in the Admin Console, you have the opportunity to add nodes if you want a multi-node cluster. Then you can provide application config, run preflights, and deploy your application.
## About Configuring Embedded Cluster
To install an application with Embedded Cluster, an Embedded Cluster Config must be present in the application release. The Embedded Cluster Config lets you define several characteristics about the cluster that will be created.
For more information, see [Embedded Cluster Config](/reference/embedded-config).
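For example, a minimal Embedded Cluster Config might only pin the Embedded Cluster version to install. The version shown below is illustrative:
```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 2.1.3+k8s-1.30
```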
## About Installing with Embedded Cluster
This section provides an overview of installing applications with Embedded Cluster.
### Installation Overview
The following diagram demonstrates how Kubernetes and an application are installed into a customer environment using Embedded Cluster:

[View a larger version of this image](/images/embedded-cluster-install.png)
As shown in the diagram above, the Embedded Cluster Config is included in the application release in the Replicated Vendor Portal and is used to generate the Embedded Cluster installation assets. Users can download these installation assets from the Replicated app service (`replicated.app`) on the command line, then run the Embedded Cluster installation command to install Kubernetes and the KOTS Admin Console. Finally, users access the Admin Console to optionally add nodes to the cluster and to configure and install the application.
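For reference, the commands that users run follow this general shape. The Vendor Portal provides the exact, customer-specific commands; `APP_SLUG`, `CHANNEL`, and `LICENSE_ID` are placeholders.
```bash
# Download and extract the installation assets from the Replicated app service
curl -f "https://replicated.app/embedded/APP_SLUG/CHANNEL" \
  -H "Authorization: LICENSE_ID" \
  -o APP_SLUG-CHANNEL.tgz
tar -xvzf APP_SLUG-CHANNEL.tgz

# Install Kubernetes and the KOTS Admin Console
sudo ./APP_SLUG install --license license.yaml
```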
### Installation Options
Embedded Cluster supports installations in online (internet-connected) environments and air gap environments with no outbound internet access.
For online installations, Embedded Cluster also supports installing behind a proxy server.
For more information about how to install with Embedded Cluster, see:
* [Online Installation with Embedded Cluster](/enterprise/installing-embedded)
* [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap)
### Customer-Specific Installation Instructions
To install with Embedded Cluster, you can follow the customer-specific instructions provided on the **Customer** page in the Vendor Portal. For example:
[View a larger version of this image](/images/embedded-cluster-install-dialog.png)
### (Optional) Serve Installation Assets Using the Vendor API
To install with Embedded Cluster, you need to download the Embedded Cluster installer binary and a license. Air gap installations also require an air gap bundle. Some vendors already have a portal where their customers can log in to access documentation or download artifacts. In cases like this, you can serve the Embedded Cluster installation assets yourself using the Replicated Vendor API, rather than having customers download the assets from the Replicated app service using a curl command during installation.
To serve Embedded Cluster installation assets with the Vendor API:
1. If you have not done so already, create an API token for the Vendor API. See [Use the Vendor API v3](/reference/vendor-api-using#api-token-requirement).
1. Call the [Get an Embedded Cluster release](https://replicated-vendor-api.readme.io/reference/getembeddedclusterrelease) endpoint to download the assets needed to install your application with Embedded Cluster. Your customers must take this binary and their license and copy them to the machine where they will install your application.
Note the following:
* (Recommended) Provide the `customerId` query parameter so that the customer’s license is included in the downloaded tarball. This mirrors what is returned when a customer downloads the binary directly using the Replicated app service and is the most useful option. Excluding the `customerId` is useful if you plan to distribute the license separately.
* If you do not provide any query parameters, this endpoint downloads the Embedded Cluster binary for the latest release on the specified channel. You can provide the `channelSequence` query parameter to download the binary for a particular release.
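As a sketch, a call to this endpoint with `curl` might look like the following. The endpoint path and the placeholders (`APP_ID`, `CHANNEL_ID`, `CUSTOMER_ID`, `VENDOR_API_TOKEN`) are illustrative; confirm the exact URL and parameters against the endpoint reference linked above.
```bash
# Illustrative only -- verify the path and parameters in the Vendor API reference
curl -f "https://api.replicated.com/vendor/v3/app/APP_ID/channel/CHANNEL_ID/embedded-cluster?customerId=CUSTOMER_ID" \
  -H "Authorization: VENDOR_API_TOKEN" \
  -o embedded-cluster.tgz
```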
### About Host Preflight Checks
During installation, Embedded Cluster automatically runs a default set of _host preflight checks_. The default host preflight checks are designed to verify that the installation environment meets the requirements for Embedded Cluster, such as:
* The system has sufficient disk space
* The system has at least 2GB of memory and 2 CPU cores
* The system clock is synchronized
For Embedded Cluster requirements, see [Embedded Cluster Installation Requirements](/enterprise/installing-embedded-requirements). For the full default host preflight spec for Embedded Cluster, see [`host-preflight.yaml`](https://github.com/replicatedhq/embedded-cluster/blob/main/pkg/preflights/host-preflight.yaml) in the `embedded-cluster` repository in GitHub.
If any of the host preflight checks fail, installation is blocked and a message describing the failure is displayed. For more information about host preflight checks for installations on VMs or bare metal servers, see [About Host Preflights](preflight-support-bundle-about#host-preflights).
#### Limitations
Embedded Cluster host preflight checks have the following limitations:
* The default host preflight checks for Embedded Cluster cannot be modified, and vendors cannot provide their own custom host preflight spec for Embedded Cluster.
* Host preflight checks do not check that any application-specific requirements are met. For more information about defining preflight checks for your application, see [Define Preflight Checks](/vendor/preflight-defining).
#### Ignore Host Preflight Checks
You can pass the `--ignore-host-preflights` flag with the install command to ignore the Embedded Cluster host preflight checks. When `--ignore-host-preflights` is passed, the host preflight checks are still run, but the user is prompted and can choose to continue with the installation if preflight failures occur. If there are no failed preflights, no user prompt is displayed. The `--ignore-host-preflights` flag allows users to see any incompatibilities in their environment, while enabling them to bypass failures if necessary.
Additionally, if users choose to ignore the host preflight checks during installation, the Admin Console still runs any application-specific preflight checks before the application is deployed.
:::note
Ignoring host preflight checks is _not_ recommended for production installations.
:::
To ignore the Embedded Cluster host preflight checks:
* **During installation:**
1. Pass the `--ignore-host-preflights` flag with the install command:
```bash
sudo ./APP_SLUG install --license license.yaml --ignore-host-preflights
```
Where `APP_SLUG` is the unique slug for the application.
1. Review the results of the preflight checks. When prompted, type `yes` to ignore the preflight checks.
* **In automation:** Use both the `--ignore-host-preflights` and `--yes` flags to address the prompt for `--ignore-host-preflights`.
```bash
sudo ./APP_SLUG install --license license.yaml --ignore-host-preflights --yes
```
## About Managing Multi-Node Clusters with Embedded Cluster
This section describes managing nodes in multi-node clusters created with Embedded Cluster.
### Defining Node Roles for Multi-Node Clusters
You can optionally define node roles in the Embedded Cluster Config. For multi-node clusters, roles can be useful for assigning specific application workloads to nodes. If node roles are defined, users access the Admin Console to assign one or more roles to a node when it is joined to the cluster.
For more information, see [roles](/reference/embedded-config#roles) in _Embedded Cluster Config_.
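As a sketch, role definitions in the Embedded Cluster Config might look like the following. The role names and labels shown here are illustrative; see the roles reference for the full schema.
```yaml
spec:
  roles:
    controller:
      name: management
      labels:
        management: "true"
    custom:
      - name: app
        labels:
          app: "true"
```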
### Adding Nodes
Users can add nodes to a cluster with Embedded Cluster from the Admin Console. The Admin Console provides the join command used to add nodes to the cluster.
For more information, see [Manage Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
### High Availability for Multi-Node Clusters (Alpha)
Multi-node clusters are not highly available by default. Enabling high availability (HA) requires that at least three controller nodes are present in the cluster. Users can enable HA when joining the third node.
For more information about creating HA multi-node clusters with Embedded Cluster, see [Enable High Availability for Multi-Node Clusters (Alpha)](/enterprise/embedded-manage-nodes#ha) in _Managing Multi-Node Clusters with Embedded Cluster_.
## About Performing Updates with Embedded Cluster
When you update an application installed with Embedded Cluster, you update both the application and the cluster infrastructure together, including Kubernetes, KOTS, and other components running in the cluster. There is no need or mechanism to update the infrastructure on its own.
When you deploy a new version, any changes to the cluster are deployed first. The Admin Console waits until the cluster is ready before updating the application.
Any changes made to the Embedded Cluster Config, including changes to the Embedded Cluster version, Helm extensions, and unsupported overrides, trigger a cluster update.
When performing an upgrade with Embedded Cluster, the user can change the application config before deploying the new version. Additionally, the user's license is synced automatically. Users can also make config changes and sync their license outside of performing an update; in that case, they must deploy a new version to apply the config change or license sync.
For more information about updating, see [Perform Updates with Embedded Cluster](/enterprise/updating-embedded).
## Access the Cluster
With Embedded Cluster, end users rarely need to use the CLI. Typical workflows, like updating the application and the cluster, can be done through the Admin Console. Nonetheless, there are times when vendors or their customers need to use the CLI for development or troubleshooting.
:::note
If you encounter a typical workflow where your customers have to use the Embedded Cluster shell, reach out to Alex Parker at alexp@replicated.com. These workflows might be candidates for additional Admin Console functionality.
:::
To access the cluster and use other included binaries:
1. SSH onto a controller node.
:::note
You cannot run the `shell` command on worker nodes.
:::
1. Use the Embedded Cluster shell command to start a shell with access to the cluster:
```
sudo ./APP_SLUG shell
```
Where `APP_SLUG` is the unique slug for the application.
The output looks similar to the following:
```
__4___
_ \ \ \ \ Welcome to APP_SLUG debug shell.
<'\ /_/_/_/ This terminal is now configured to access your cluster.
((____!___/) Type 'exit' (or CTRL+d) to exit.
\0\0\0\0\/ Happy hacking.
~~~~~~~~~~~
root@alex-ec-1:/home/alex# export KUBECONFIG="/var/lib/embedded-cluster/k0s/pki/admin.conf"
root@alex-ec-1:/home/alex# export PATH="$PATH:/var/lib/embedded-cluster/bin"
root@alex-ec-1:/home/alex# source <(k0s completion bash)
root@alex-ec-1:/home/alex# source <(cat /var/lib/embedded-cluster/bin/kubectl_completion_bash.sh)
root@alex-ec-1:/home/alex# source /etc/bash_completion
```
The appropriate kubeconfig is exported, and the location of useful binaries like kubectl and Replicated’s preflight and support-bundle plugins is added to PATH.
1. Use the available binaries as needed.
**Example**:
```bash
kubectl version
```
```
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.1+k0s
```
1. Type `exit` or **Ctrl + D** to exit the shell.
## Reset a Node
Resetting a node removes the cluster and your application from that node. This is useful for iteration, development, and when mistakes are made, so you can reset a machine and reuse it instead of having to procure another machine.
If you want to completely remove a cluster, you need to reset each node individually.
When resetting a node, OpenEBS PVCs on the node are deleted. Only PVCs created as part of a StatefulSet will be recreated automatically on another node. To recreate other PVCs, the application will need to be redeployed.
To reset a node:
1. SSH onto the machine. Ensure that the Embedded Cluster binary is still available on that machine.
1. Run the following command to reset the node and automatically reboot the machine to ensure that transient configuration is also reset:
```
sudo ./APP_SLUG reset
```
Where `APP_SLUG` is the unique slug for the application.
:::note
Pass the `--no-prompt` flag to disable interactive prompts. Pass the `--force` flag to ignore any errors encountered during the reset.
:::
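For example, a fully non-interactive reset, which can be useful in automation, might look like the following:
```bash
sudo ./APP_SLUG reset --no-prompt --force
```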
## Additional Use Cases
This section outlines some additional use cases for Embedded Cluster. These are not officially supported features from Replicated, but are ways of using Embedded Cluster that we or our customers have experimented with that might be useful to you.
### NVIDIA GPU Operator
The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs. For more information about this operator, see the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html) documentation.
You can include the NVIDIA GPU Operator in your release as an additional Helm chart, or using Embedded Cluster Helm extensions. For information about adding Helm extensions, see [extensions](/reference/embedded-config#extensions) in _Embedded Cluster Config_.
Using the NVIDIA GPU Operator with Embedded Cluster requires configuring the containerd options in the operator as follows:
```yaml
# Embedded Cluster Config
extensions:
  helm:
    repositories:
      - name: nvidia
        url: https://nvidia.github.io/gpu-operator
    charts:
      - name: gpu-operator
        chartname: nvidia/gpu-operator
        namespace: gpu-operator
        version: "v24.9.1"
        values: |
          # configure the containerd options
          toolkit:
            env:
              - name: CONTAINERD_CONFIG
                value: /etc/k0s/containerd.d/nvidia.toml
              - name: CONTAINERD_SOCKET
                value: /run/k0s/containerd.sock
```
When the containerd options are configured as shown above, the NVIDIA GPU Operator automatically creates the required configurations in the `/etc/k0s/containerd.d/nvidia.toml` file. It is not necessary to create this file manually, or modify any other configuration on the hosts.
:::note
If you include the NVIDIA GPU Operator as a Helm extension, remove any existing containerd services that are running on the host (such as those deployed by Docker) before attempting to install the release with Embedded Cluster. If there are any containerd services on the host, the NVIDIA GPU Operator will generate an invalid containerd config, causing the installation to fail. For more information, see [Installation failure when NVIDIA GPU Operator is included as Helm extension](#nvidia) in _Troubleshooting Embedded Cluster_.
This is the result of a known issue with v24.9.x of the NVIDIA GPU Operator. For more information about the known issue, see [container-toolkit does not modify the containerd config correctly when there are multiple instances of the containerd binary](https://github.com/NVIDIA/nvidia-container-toolkit/issues/982) in the nvidia-container-toolkit repository in GitHub.
:::
---
# Use the Proxy Registry with Helm Installations
This topic describes how to use the Replicated proxy registry to proxy images for installations with the Helm CLI. For more information about the proxy registry, see [About the Replicated Proxy Registry](private-images-about).
## Overview
With the Replicated proxy registry, each customer's unique license can grant proxy access to images in an external private registry.
During Helm installations, after customers provide their license ID, a `global.replicated.dockerconfigjson` field that contains a base64 encoded Docker configuration file is automatically injected in the Helm chart values. You can use this `global.replicated.dockerconfigjson` field to create the pull secret required to authenticate with the proxy registry.
## Pull Private Images Through the Proxy Registry in Helm Installations
To use the Replicated proxy registry for applications installed with Helm:
1. In the Vendor Portal, go to **Images > Add external registry** and provide read-only credentials for your registry. This allows Replicated to access the images through the proxy registry. See [Add Credentials for an External Registry](packaging-private-images#add-credentials-for-an-external-registry) in _Connecting to an External Registry_.
1. (Recommended) Go to **Custom Domains > Add custom domain** and add a custom domain for the proxy registry. See [Use Custom Domains](custom-domains-using).
1. In your Helm chart values file, set your image repository URL to the location of the image on the proxy registry. If you added a custom domain, use your custom domain. Otherwise, use `proxy.replicated.com`.
The proxy registry URL has the following format: `DOMAIN/proxy/APP_SLUG/EXTERNAL_REGISTRY_IMAGE_URL`
Where:
* `DOMAIN` is either `proxy.replicated.com` or your custom domain.
* `APP_SLUG` is the unique slug of your application.
* `EXTERNAL_REGISTRY_IMAGE_URL` is the path to the private image on your external registry.
**Example:**
```yaml
# values.yaml
images:
  api:
    # Use proxy.replicated.com or your custom domain
    registry: proxy.replicated.com
    repository: proxy/app/ghcr.io/cloudnative-pg/cloudnative-pg
    tag: catalog-1.24.0
```
1. Ensure that any references to the image in your Helm chart access the field from your values file.
**Example**:
```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: api
      # Access the registry, repository, and tag fields from the values file
      image: {{ .Values.images.api.registry }}/{{ .Values.images.api.repository }}:{{ .Values.images.api.tag }}
```
1. In your Helm chart templates, create a Kubernetes Secret to evaluate if the `global.replicated.dockerconfigjson` value is set and then write the rendered value into a Secret on the cluster, as shown below.
This Secret is used to authenticate with the proxy registry. For information about how Kubernetes uses the `kubernetes.io/dockerconfigjson` Secret type to provide authentication for a private registry, see [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) in the Kubernetes documentation.
:::note
Do not use `replicated` for the name of the image pull secret because the Replicated SDK automatically creates a Secret named `replicated`. Using the same name causes an error.
:::
```yaml
# templates/replicated-pull-secret.yaml
{{ if .Values.global.replicated.dockerconfigjson }}
apiVersion: v1
kind: Secret
metadata:
  # Note: Do not use "replicated" for the name of the pull secret
  name: replicated-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ .Values.global.replicated.dockerconfigjson }}
{{ end }}
```
1. Add the image pull secret that you created to any manifests that reference the image:
**Example:**
```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: api
      # Access the registry, repository, and tag fields from the values file
      image: {{ .Values.images.api.registry }}/{{ .Values.images.api.repository }}:{{ .Values.images.api.tag }}
  # Add the pull secret
  {{ if .Values.global.replicated.dockerconfigjson }}
  imagePullSecrets:
    - name: replicated-pull-secret
  {{ end }}
```
1. Package your Helm chart and add it to a release. Promote the release to a development channel. See [Managing Releases with Vendor Portal](releases-creating-releases).
1. Install in a development environment to test your changes. See [Install with Helm](/vendor/install-with-helm).
---
# Install and Update with Helm in Air Gap Environments
This topic describes how to use Helm to install releases that contain one or more Helm charts in air-gapped environments.
## Overview
Replicated supports installing and updating Helm charts in air-gapped environments with no outbound internet access. In air gap Helm installations, customers are guided through the process with instructions provided in the [Replicated Download Portal](/vendor/releases-share-download-portal).
When air gap Helm installations are enabled, an **Existing cluster with Helm** option is displayed in the Download Portal on the left nav. When selected, **Existing cluster with Helm** displays three tabs (**Install**, **Manual Update**, **Automate Updates**), as shown in the screenshot below:

[View a larger version of this image](/images/download-helm.png)
Each tab provides instructions for how to install, perform a manual update, or configure automatic updates, respectively.
These installing and updating instructions assume that your customer is accessing the Download Portal from a workstation that can access the internet and their internal private registry. Direct access to the target cluster is not required.
Each method assumes that your customer is familiar with `curl`, `docker`, `helm`, `kubernetes`, and a bit of `bash`, particularly for automating updates.
## Prerequisites
Before you install, complete the following prerequisites:
* Reach out to your account rep to enable the Helm air gap installation feature.
* The customer used for the installation must have a valid email address. This email address is only used as a username for the Replicated registry and is never contacted. For more information about creating and editing customers in the Vendor Portal, see [Creating a Customer](/vendor/releases-creating-customer).
* The customer used for the installation must have the **Existing Cluster (Helm CLI)** install type enabled. For more information about enabling install types for customers in the Vendor Portal, see [Manage Install Types for a License](licenses-install-types).
* To ensure that the Replicated proxy registry can be used to grant proxy access to your application images during Helm installations, you must create an image pull secret for the proxy registry and add it to your Helm chart. To do so, follow the steps in [Using the Proxy Registry with Helm Installations](/vendor/helm-image-registry).
* Declare the SDK as a dependency in your Helm chart. For more information, see [Install the SDK as a Subchart](replicated-sdk-installing#install-the-sdk-as-a-subchart) in _Installing the Replicated SDK_.
## Install
The installation instructions provided in the Download Portal are designed to walk your customer through the first installation of your chart in an air gap environment.
To install with Helm in an air gap environment:
1. In the [Vendor Portal](https://vendor.replicated.com), go to **Customers > [Customer Name] > Reporting**.
1. In the **Download portal** section, click **Visit download portal** to log in to the Download Portal for the customer.
1. In the Download Portal left nav, click **Existing cluster with Helm**.

[View a larger version of this image](/images/download-helm.png)
1. On the **Install** tab, in the **App version** dropdown, select the target application version to install.
1. Run the first command to authenticate into the Replicated proxy registry with the customer's credentials (the `license_id`).
1. Under **Get the list of images**, run the command provided to generate the list of images needed to install.
1. For **(Optional) Specify registry URI**, provide the URI for an internal image registry where you want to push images. If a registry URI is provided, Replicated automatically updates the commands for tagging and pushing images with the URI.
1. For **Pull, tag, and push each image to your private registry**, copy and paste the docker commands provided to pull, tag, and push each image to your internal registry.
:::note
If you did not provide a URI in the previous step, ensure that you manually replace the image names in the `tag` and `push` commands with the target registry URI.
:::
1. Run the command to authenticate into the OCI registry that contains your Helm chart.
1. Run the command to install the `preflight` plugin. This allows you to run preflight checks before installing to ensure that the installation environment meets the requirements for the application.
1. For **Download a copy of the values.yaml file** and **Edit the values.yaml file**, run the `helm show values` command provided to download the values file for the Helm chart. Then, edit the values file as needed to customize the configuration of the given chart.
If you are installing a release that contains multiple Helm charts, repeat these steps to download and edit each values file.
:::note
For installations with multiple charts where two or more of the top-level charts in the release use the same name, ensure that each values file has a unique name to avoid installation errors. For more information, see [Installation Fails for Release With Multiple Helm Charts](helm-install-troubleshooting#air-gap-values-file-conflict) in _Troubleshooting Helm Installations_.
:::
1. For **Determine install method**, select one of the options depending on your ability to access the internet and the cluster from your workstation.
1. Use the commands provided and the values file or files that you edited to run preflight checks and then install the release.
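Taken together, the commands provided in the Download Portal follow roughly the shape below. This is a sketch only; the portal provides the exact, customer-specific commands, and the image names, internal registry, and `EMAIL`, `LICENSE_ID`, `APP_SLUG`, `CHANNEL`, and `CHART_NAME` values shown here are placeholders.
```bash
# Authenticate to the Replicated proxy registry with the customer credentials
docker login proxy.replicated.com --username EMAIL --password LICENSE_ID

# Pull each required image, then tag and push it to the internal registry
docker pull proxy.replicated.com/proxy/APP_SLUG/ghcr.io/example-org/example-api:1.2.3
docker tag proxy.replicated.com/proxy/APP_SLUG/ghcr.io/example-org/example-api:1.2.3 registry.internal.example.com/example-api:1.2.3
docker push registry.internal.example.com/example-api:1.2.3

# Authenticate to the Replicated registry, download the default values, then install
helm registry login registry.replicated.com --username EMAIL --password LICENSE_ID
helm show values oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART_NAME > CHART_NAME-values.yaml
helm install my-release oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART_NAME -f CHART_NAME-values.yaml
```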
## Perform Updates
This section describes the processes of performing manual and automatic updates with Helm in air gap environments using the instructions provided in the Download Portal.
### Manual Updates
The manual update instructions provided in the Download Portal are similar to the installation instructions.
However, the first step prompts the customer to select their current version and the target version to install. This step takes [required releases](/vendor/releases-about#properties) into consideration, thereby guiding the customer to the versions that are upgradable from their current version.
The remaining steps are consistent with the installation process until the `preflight` and `install` commands, where customers provide the existing values from the cluster using the `helm get values` command. Your customer will then need to edit the `values.yaml` to reference the new image tags.
If the new version introduces new images or other values, Replicated recommends that you explain this at the top of your release notes so that customers know they will need to make additional edits to the `values.yaml` before installing.
### Automate Updates
The instructions in the Download Portal for automating updates use API endpoints that your customers can automate against.
The instructions in the Download Portal provide customers with example commands that can be put into a script that they run periodically (nightly, weekly) using GitHub Actions, Jenkins, or other platforms.
This method assumes that the customer has already done a successful manual installation, including the configuration of the appropriate `values`.
After logging into the registry, the customer exports their current version and uses that to query an endpoint that provides the latest installable version number (either the next required release or the latest release), which they export as the target version. With the target version, they can query an API for the list of images.
With the list of images, the provided `bash` script automates the process of pulling updated images from the repository, tagging them with a name for an internal registry, and pushing the newly tagged images to that internal registry.
Unless the customer has set up the `values` to preserve the updated tag (for example, by using the `latest` tag), they need to edit the `values.yaml` to reference the new image tags. After doing so, they can log in to the OCI registry and perform the commands to install the updated chart.
## Use a Harbor or Artifactory Registry Proxy
You can integrate the Replicated proxy registry with an existing Harbor or jFrog Artifactory instance to proxy and cache images on demand. For more information, see [Using a Registry Proxy for Helm Air Gap Installations](using-third-party-registry-proxy).
---
# About Helm Installations with Replicated
This topic provides an introduction to Helm installations for applications distributed with Replicated.
## Overview
Helm is a popular open source package manager for Kubernetes applications. Many ISVs use Helm to configure and deploy Kubernetes applications because it provides a consistent, reusable, and sharable packaging format. For more information, see the [Helm documentation](https://helm.sh/docs).
Replicated strongly recommends that all applications are packaged using Helm because many enterprise users expect to be able to install an application with the Helm CLI.
Existing releases in the Replicated Platform that already support installation with Replicated KOTS and Replicated Embedded Cluster (and that include one or more Helm charts) can also be installed with the Helm CLI; it is not necessary to create and manage separate releases or channels for each installation method.
For information about how to install with Helm, see:
* [Installing with Helm](/vendor/install-with-helm)
* [Installing and Updating with Helm in Air Gap Environments](helm-install-airgap)
The following diagram shows how Helm charts distributed with Replicated are installed with Helm in online (internet-connected) customer environments:
[View a larger version of this image](/images/helm-install-diagram.png)
As shown in the diagram above, when a release containing one or more Helm charts is promoted to a channel, the Replicated Vendor Portal automatically extracts any Helm charts included in the release. These charts are pushed as OCI objects to the Replicated registry. The Replicated registry is a private OCI registry hosted by Replicated at `registry.replicated.com`. For information about security for the Replicated registry, see [Replicated Registry Security](packaging-private-registry-security).
For example, if your application in the Vendor Portal is named My App and you promote a release containing a Helm chart with `name: my-chart` to a channel with the slug `beta`, then the Vendor Portal pushes the chart to the following location: `oci://registry.replicated.com/my-app/beta/my-chart`.
Customers can install your Helm chart by first logging in to the Replicated registry with their unique license ID. This step ensures that any customer who installs your chart from the registry has a valid, unexpired license. After the customer logs in to the Replicated registry, they can run `helm install` to install the chart from the registry.
During installation, the Replicated registry injects values into the `global.replicated` key of the parent Helm chart's values file. For more information about the values schema, see [Helm global.replicated Values Schema](helm-install-values-schema).
## Limitations
Helm installations have the following limitations:
* Helm CLI installations do not provide access to any of the features of the Replicated KOTS installer, such as:
* The KOTS Admin Console
* Strict preflight checks that block installation
* Backup and restore with snapshots
* Required releases with the **Prevent this release from being skipped during upgrades** option
---
# Package a Helm Chart for a Release
This topic describes how to package a Helm chart and the Replicated SDK into a chart archive that can be added to a release.
## Overview
To add a Helm chart to a release, you first add the Replicated SDK as a dependency of the Helm chart and then package the chart and its dependencies as a `.tgz` chart archive.
The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. The SDK provides access to key Replicated features, such as support for collecting custom metrics on application instances. For more information, see [About the Replicated SDK](replicated-sdk-overview).
## Requirements and Recommendations
This section includes requirements and recommendations for Helm charts.
### Chart Version Requirement
The chart version in your Helm chart must comply with image tag format requirements. A valid tag can contain only lowercase and uppercase letters, digits, underscores, periods, and dashes.
The chart version must also comply with the Semantic Versioning (SemVer) specification. When you run the `helm install` command without the `--version` flag, Helm retrieves the list of all available image tags for the chart from the registry and compares them using the SemVer comparison rules described in the SemVer specification. The version that is installed is the version with the largest tag value. For more information about the SemVer specification, see the [Semantic Versioning](https://semver.org) documentation.
### Chart Naming
For releases that contain more than one Helm chart, Replicated recommends that you use unique names for each top-level Helm chart in the release. This aligns with Helm best practices and also avoids potential conflicts in filenames during installation that could cause the installation to fail. For more information, see [Installation Fails for Release With Multiple Helm Charts](helm-install-troubleshooting#air-gap-values-file-conflict) in _Troubleshooting Helm Installations_.
### Helm Best Practices
Replicated recommends that you review the [Best Practices](https://helm.sh/docs/chart_best_practices/) guide in the Helm documentation to ensure that your Helm chart or charts follow the required and recommended conventions.
## Package a Helm Chart {#release}
This procedure shows how to create a Helm chart archive to add to a release. For more information about the Helm CLI commands in this procedure, see the [Helm Commands](https://helm.sh/docs/helm/helm/) section in the Helm documentation.
To package a Helm chart so that it can be added to a release:
1. In your application Helm chart `Chart.yaml` file, add the YAML below to declare the SDK as a dependency. If your application is installed as multiple charts, declare the SDK as a dependency of the chart that customers install first. Do not declare the SDK in more than one chart.
```yaml
# Chart.yaml
dependencies:
  - name: replicated
    repository: oci://registry.replicated.com/library
    version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
For additional guidelines related to adding the SDK as a dependency, see [Install the SDK as a Subchart](replicated-sdk-installing#install-the-sdk-as-a-subchart) in _Installing the Replicated SDK_.
1. Update dependencies and package the chart as a `.tgz` file:
```bash
helm package -u PATH_TO_CHART
```
Where:
* `-u` or `--dependency-update` is an option for the `helm package` command that updates chart dependencies before packaging. For more information, see [Helm Package](https://helm.sh/docs/helm/helm_package/) in the Helm documentation.
* `PATH_TO_CHART` is the path to the Helm chart in your local directory. For example, `helm package -u .`.
The Helm chart, including any dependencies, is packaged and copied to your current directory in a `.tgz` file. The file uses the naming convention: `CHART_NAME-VERSION.tgz`. For example, `postgresql-8.1.2.tgz`.
:::note
If you see a 401 Unauthorized error after running `helm dependency update`, run the following command to remove credentials from the Replicated registry, then re-run `helm dependency update`:
```bash
helm registry logout registry.replicated.com
```
For more information, see [401 Unauthorized Error When Updating Helm Dependencies](replicated-sdk-installing#401).
:::
1. Add the `.tgz` file to a release. For more information, see [Manage Releases with the Vendor Portal](releases-creating-releases) or [Managing Releases with the CLI](releases-creating-cli).
After the release is promoted, your Helm chart is automatically pushed to the Replicated registry. For information about how to install a release with the Helm CLI, see [Install with Helm](install-with-helm). For information about how to install Helm charts with KOTS, see [About Distributing Helm Charts with KOTS](/vendor/helm-native-about).
---
# Troubleshoot Helm Installations with Replicated
This topic provides troubleshooting information for common issues related to performing installations and upgrades with the Helm CLI.
## Installation Fails for Release With Multiple Helm Charts {#air-gap-values-file-conflict}
#### Symptom
When installing a release that contains multiple Helm charts, the installation fails. You might also see the following error message:
```
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
```
#### Cause
In the Download Portal, each chart's values file is named according to the chart's name. For example, the values file for the Helm chart Gitea would be named `gitea-values.yaml`.
If any top-level charts in the release use the same name, the associated values files will also be assigned the same name. This causes each new values file downloaded with the `helm show values` command to overwrite any previously-downloaded values file of the same name.
#### Solution
Replicated recommends that you use unique names for top-level Helm charts in the same release.
Alternatively, if a release contains charts that must use the same name, convert one or both of the charts into subcharts and use Helm conditions to differentiate them. See [Conditions and Tags](https://helm.sh/docs/chart_best_practices/dependencies/#conditions-and-tags) in the Helm documentation.
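As a sketch, a parent chart could declare both same-named charts as aliased dependencies and toggle them with conditions. Here, `alias` is how Helm distinguishes two dependencies that share a chart name; the names, versions, and paths shown are illustrative.
```yaml
# Parent Chart.yaml (illustrative)
dependencies:
  - name: gitea
    alias: gitea-primary
    version: 1.0.0
    repository: file://./charts/gitea
    condition: gitea-primary.enabled
  - name: gitea
    alias: gitea-secondary
    version: 1.0.0
    repository: file://./charts/gitea
    condition: gitea-secondary.enabled
```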
---
# Helm global.replicated Values Schema
This topic describes the `global.replicated` values that are injected in the values file of an application's parent Helm chart during Helm installations with Replicated.
## Overview
When a user installs a Helm application with the Helm CLI, the Replicated registry injects a set of customer-specific values into the `global.replicated` key of the parent Helm chart's values file.
The values in the `global.replicated` field include the following:
* The fields in the customer's license, such as the field names, descriptions, signatures, values, and any custom license fields that you define. Vendors can use this license information to check entitlements before the application is installed. For more information, see [Check Entitlements in Helm Charts Before Deployment](/vendor/licenses-reference-helm).
* A base64 encoded Docker configuration file. To proxy images from an external private registry with the Replicated proxy registry, you can use the `global.replicated.dockerconfigjson` field to create an image pull secret for the proxy registry. For more information, see [Proxying Images for Helm Installations](/vendor/helm-image-registry).
The following is an example of a Helm values file containing the `global.replicated` values:
```yaml
# Helm values.yaml
global:
  replicated:
    channelName: Stable
    customerEmail: username@example.com
    customerName: Example Customer
    dockerconfigjson: eyJhdXRocyI6eyJd1dIRk5NbEZFVGsxd2JGUmFhWGxYWm5scloyNVRSV1pPT2pKT2NGaHhUVEpSUkU1...
    licenseFields:
      expires_at:
        description: License Expiration
        name: expires_at
        signature:
          v1: iZBpESXx7fpdtnbMKingYHiJH42rP8fPs0x8izy1mODckGBwVoA...
        title: Expiration
        value: "2023-05-30T00:00:00Z"
        valueType: String
    licenseID: YiIXRTjiB7R...
    licenseType: dev
```
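As a sketch, a chart template could reference these injected values directly. The ConfigMap below is illustrative only; the field paths follow the schema described in the next section.
```yaml
# templates/license-info-configmap.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: license-info
data:
  customerName: {{ .Values.global.replicated.customerName | quote }}
  channelName: {{ .Values.global.replicated.channelName | quote }}
  licenseExpiresAt: {{ .Values.global.replicated.licenseFields.expires_at.value | quote }}
```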
## `global.replicated` Values Schema
The `global.replicated` values schema contains the following fields:
| Field | Type | Description |
| --- | --- | --- |
| `channelName` | String | The name of the release channel |
| `customerEmail` | String | The email address of the customer |
| `customerName` | String | The name of the customer |
| `dockerconfigjson` | String | Base64 encoded docker config json for pulling images |
| `licenseFields` | Object | An object containing each license field in the customer's license. Each element under `licenseFields` has the following properties: `description`, `signature`, `title`, `value`, `valueType`. `expires_at` is the default field that all licenses include; other elements are the custom license fields added by vendors in the Vendor Portal. For more information, see [Manage Customer License Fields](/vendor/licenses-adding-custom-fields). |
| `licenseFields.[FIELD_NAME].description` | String | Description of the license field |
| `licenseFields.[FIELD_NAME].signature.v1` | Object | Signature of the license field |
| `licenseFields.[FIELD_NAME].title` | String | Title of the license field |
| `licenseFields.[FIELD_NAME].value` | String | Value of the license field |
| `licenseFields.[FIELD_NAME].valueType` | String | Type of the license field value |
| `licenseID` | String | The unique identifier for the license |
| `licenseType` | String | The type of license, such as "dev" or "prod". For more information, see [Customer Types](/vendor/licenses-about#customer-types) in _About Customers and Licensing_. |
## Replicated SDK Helm Values
When a user installs a Helm chart that includes the Replicated SDK as a dependency, a set of default SDK values are included in the `replicated` key of the parent chart's values file.
For example:
```yaml
# values.yaml
replicated:
  enabled: true
  appName: gitea
  channelID: 2jKkegBMseH5w...
  channelName: Beta
  channelSequence: 33
  integration:
    enabled: true
  license: {}
  parentChartURL: oci://registry.replicated.com/gitea/beta/gitea
  releaseCreatedAt: "2024-11-25T20:38:22Z"
  releaseNotes: 'CLI release'
  releaseSequence: 88
  replicatedAppEndpoint: https://replicated.app
  versionLabel: Beta-1234
```
These `replicated` values can be referenced by the application or set during installation as needed. For example, if users need to add labels or annotations to everything that runs in their cluster, then they can pass the labels or annotations to the relevant value in the SDK subchart.
For the default Replicated SDK Helm chart values file, see [values.yaml](https://github.com/replicatedhq/replicated-sdk/blob/main/chart/values.yaml) in the [replicated-sdk](https://github.com/replicatedhq/replicated-sdk) repository in GitHub.
The SDK Helm values also include a `replicated.license` field, which is a string that contains the YAML representation of the customer license. For more information about the built-in fields in customer licenses, see [Built-In License Fields](licenses-using-builtin-fields).
---
# About Distributing Helm Charts with KOTS
This topic provides an overview of how Replicated KOTS deploys Helm charts, including an introduction to the KOTS HelmChart custom resource, limitations of deploying Helm charts with KOTS, and more.
## Overview
Helm is a popular open source package manager for Kubernetes applications. Many ISVs use Helm to configure and deploy Kubernetes applications because it provides a consistent, reusable, and sharable packaging format. For more information, see the [Helm documentation](https://helm.sh/docs).
KOTS can install applications that include:
* One or more Helm charts
* More than a single instance of any chart
* A combination of Helm charts and Kubernetes manifests
Replicated strongly recommends that all applications are packaged as Helm charts because many enterprise users expect to be able to install an application with the Helm CLI.
Deploying Helm charts with KOTS provides additional functionality not directly available with the Helm CLI, such as:
* The KOTS Admin Console
* Backup and restore with snapshots
* Support for air gap installations
* Support for embedded cluster installations on VMs or bare metal servers
Additionally, for applications packaged as Helm charts, you can support Helm CLI and KOTS installations from the same release without having to maintain separate sets of Helm charts and application manifests. The following diagram demonstrates how a single release containing one or more Helm charts can be installed using the Helm CLI and KOTS:
[View a larger version of this image](/images/helm-kots-install-options.png)
For a tutorial that demonstrates how to add a sample Helm chart to a release and then install the release using both KOTS and the Helm CLI, see [Install a Helm Chart with KOTS and the Helm CLI](/vendor/tutorial-kots-helm-setup).
## How KOTS Deploys Helm Charts
This section describes how KOTS uses the HelmChart custom resource to deploy Helm charts.
### About the HelmChart Custom Resource
To deploy Helm charts, KOTS requires a unique HelmChart custom resource for each Helm chart `.tgz` archive in the release. You configure the HelmChart custom resource to provide the necessary instructions to KOTS for processing and preparing the chart for deployment. Additionally, the HelmChart custom resource creates a mapping between KOTS and your Helm chart to allow Helm values to be dynamically set during installation or upgrade.
The HelmChart custom resource with `apiVersion: kots.io/v1beta2` (HelmChart v2) is supported with KOTS v1.99.0 and later. For more information, see [About the HelmChart kots.io/v1beta2 Installation Method](#v2-install) below.
KOTS versions earlier than v1.99.0 can install Helm charts with `apiVersion: kots.io/v1beta1` of the HelmChart custom resource. The `kots.io/v1beta1` HelmChart custom resource is deprecated. For more information, see [Deprecated HelmChart kots.io/v1beta1 Installation Methods](#deprecated-helmchart-kotsiov1beta1-installation-methods) below.
### About the HelmChart v2 Installation Method {#v2-install}
When you include a HelmChart custom resource with `apiVersion: kots.io/v1beta2` in a release, KOTS v1.99.0 or later does a Helm install or upgrade of the associated Helm chart directly.
The `kots.io/v1beta2` HelmChart custom resource does _not_ modify the chart during installation. This results in Helm chart installations that are consistent, reliable, and easy to troubleshoot. For example, you can reproduce the exact installation outside of KOTS by downloading a copy of the application files from the cluster with `kots download`, then using those files to install with `helm install`. And, you can use `helm get values` to view the values that were used to install.
The `kots.io/v1beta2` HelmChart custom resource requires configuration. For more information, see [Configure the HelmChart Custom Resource v2](helm-native-v2-using).
For information about the fields and syntax of the HelmChart custom resource, see [HelmChart v2](/reference/custom-resource-helmchart-v2).
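For reference, the following is a minimal sketch of a `kots.io/v1beta2` HelmChart custom resource. The chart name, version, and values are placeholders; see the HelmChart v2 reference linked above for the full set of fields.
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: samplechart
spec:
  # Must match the name and version of the chart .tgz archive in the release
  chart:
    name: samplechart
    chartVersion: 3.1.7
  # Release name that KOTS passes to Helm for the install or upgrade
  releaseName: samplechart-release
  # Values passed to the chart at deploy time
  values:
    replicaCount: 2
```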
### Limitations
The following limitations apply when deploying Helm charts with the `kots.io/v1beta2` HelmChart custom resource:
* Available only for Helm v3.
* Available only for KOTS v1.99.0 and later.
* The rendered manifests shown in the `rendered` directory might not reflect the final manifests that will be deployed to the cluster. This is because the manifests in the `rendered` directory are generated using `helm template`, which is not run with cluster context. So values returned by the `lookup` function and the built-in `Capabilities` object might differ.
* When updating the HelmChart custom resource in a release from `kots.io/v1beta1` to `kots.io/v1beta2`, the diff viewer shows a large diff because the underlying file structure of the rendered manifests is different.
* Editing downstream Kustomization files to make changes to the application before deploying is not supported. This is because KOTS does not use Kustomize when installing Helm charts with the `kots.io/v1beta2` HelmChart custom resource. For more information about patching applications with Kustomize, see [Patch with Kustomize](/enterprise/updating-patching-with-kustomize).
* The KOTS Auto-GitOps workflow is not supported for installations with the HelmChart custom resource `apiVersion: kots.io/v1beta2` or the HelmChart custom resource `apiVersion: kots.io/v1beta1` with `useHelmInstall: true`.
:::important
KOTS Auto-GitOps is a legacy feature and is **not recommended** for use. For modern enterprise customers that prefer software deployment processes that use CI/CD pipelines, Replicated recommends the [Helm CLI installation method](/vendor/install-with-helm), which is more commonly used in these types of enterprise environments.
:::
For more information, see [KOTS Auto-GitOps Workflow](/enterprise/gitops-workflow).
## Support for Helm Hooks {#hooks}
KOTS supports the following hooks for Helm charts:
* `pre-install`: Executes after resources are rendered but before any resources are installed.
* `post-install`: Executes after resources are installed.
* `pre-upgrade`: Executes after resources are rendered but before any resources are upgraded.
* `post-upgrade`: Executes after resources are upgraded.
* `pre-delete`: Executes before any resources are deleted.
* `post-delete`: Executes after resources are deleted.
The following limitations apply to using hooks with Helm charts deployed by KOTS:
* The following hooks are not supported and are ignored if they are present:
* `test`
* `pre-rollback`
* `post-rollback`
* Hook weights below -9999 are not supported. All hook weights must be set to a value above -9999 to ensure the Replicated image pull secret is deployed before any resources are pulled.
For more information about Helm hooks, see [Chart Hooks](https://helm.sh/docs/topics/charts_hooks/) in the Helm documentation.
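For example, a chart might use a standard Helm hook annotation on a Job to run a task before an upgrade. This is a generic Helm sketch (the Job name, image, and command are placeholders); note that the hook weight is kept above -9999, per the limitation described above.
```yaml
# templates/pre-upgrade-job.yaml -- illustrative only
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-pre-upgrade-task
  annotations:
    "helm.sh/hook": pre-upgrade
    # Keep hook weights above -9999 so the Replicated image pull secret
    # is deployed before any hook images are pulled
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo running pre-upgrade task"]
```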
## Air Gap Installations
KOTS supports installation of Helm charts into air gap environments with configuration of the HelmChart custom resource [`builder`](/reference/custom-resource-helmchart-v2#builder) key. The `builder` key specifies the Helm values to use when building the air gap bundle for the application.
For more information about how to configure the `builder` key to support air gap installations, see [Package Air Gap Bundles for Helm Charts](/vendor/helm-packaging-airgap-bundles).
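For illustration, the `builder` key is set under `spec` in the same HelmChart custom resource. The following is a minimal sketch, assuming a chart with an optional component that must be enabled so that its images are included in the air gap bundle; the chart and value names are placeholders.
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 3.1.7
  releaseName: samplechart-release
  # Values used only when building the air gap bundle, so that images
  # for optional components are discovered and included
  builder:
    postgresql:
      enabled: true
```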
## Resource Deployment Order
When installing an application that includes one or more Helm charts, KOTS always deploys standard Kubernetes manifests to the cluster _before_ deploying any Helm charts. For example, if your release contains a Helm chart, a CRD, and a ConfigMap, then the CRD and ConfigMap resources are deployed before the Helm chart.
For information about how to set the deployment order for Helm charts with KOTS, see [Orchestrate Resource Deployment](/vendor/orchestrating-resource-deployment).
## Deprecated HelmChart kots.io/v1beta1 Installation Methods
This section describes the deprecated Helm chart installation methods that use the HelmChart custom resource `apiVersion: kots.io/v1beta1`.
:::important
The HelmChart custom resource `apiVersion: kots.io/v1beta1` is deprecated. For installations with Replicated KOTS v1.99.0 and later, use the HelmChart custom resource with `apiVersion: kots.io/v1beta2` instead. See [HelmChart v2](/reference/custom-resource-helmchart-v2) and [Configure the HelmChart Custom Resource v2](/vendor/helm-native-v2-using).
:::
### useHelmInstall: true {#v1beta1}
:::note
This method was previously referred to as _Native Helm_.
:::
When you include version `kots.io/v1beta1` of the HelmChart custom resource with `useHelmInstall: true`, KOTS uses Kustomize to render the chart with configuration values, license field values, and rewritten image names. KOTS then packages the resulting manifests into a new Helm chart to install. For more information about Kustomize, see the [Kustomize documentation](https://kubectl.docs.kubernetes.io/).
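For reference, the following is a minimal sketch of the deprecated `kots.io/v1beta1` HelmChart custom resource with `useHelmInstall: true`. The chart name, version, and values are placeholders.
```yaml
apiVersion: kots.io/v1beta1
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 3.1.7
  helmVersion: v3
  useHelmInstall: true
  values:
    replicaCount: 1
```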
The following diagram shows how KOTS processes Helm charts for deployment with the `kots.io/v1beta1` method:

[View a larger image](/images/native-helm-flowchart.png)
As shown in the diagram above, when given a Helm chart, KOTS:
- Uses Kustomize to merge instructions from KOTS and the end user into the chart resources (see steps 2 - 4 below)
- Packages the resulting manifest files into a new Helm chart (see step 5 below)
- Deploys the new Helm chart (see step 5 below)
To deploy Helm charts with version `kots.io/v1beta1` of the HelmChart custom resource, KOTS does the following:
1. **Checks for previous installations of the chart**: If the Helm chart has already been deployed with a HelmChart custom resource that has `useHelmInstall: false`, then KOTS does not attempt to install the chart. If this check fails, an error message beginning with `Deployment method for chart` is displayed.
The following table summarizes how the HelmChart custom resource fields changed between `kots.io/v1beta1` and `kots.io/v1beta2`:
| HelmChart v1beta2 | HelmChart v1beta1 | Description |
|---|---|---|
| `apiVersion: kots.io/v1beta2` | `apiVersion: kots.io/v1beta1` | `apiVersion` is updated to `kots.io/v1beta2` |
| `releaseName` | `chart.releaseName` | `releaseName` is a top-level field under `spec` |
| N/A | `helmVersion` | `helmVersion` field is removed |
| N/A | `useHelmInstall` | `useHelmInstall` field is removed |
[View a larger version of this image](/images/resource-status-hover-current-state.png)
Viewing these resource status details is helpful for understanding which resources are contributing to the aggregate application status. For example, when an application has an Unavailable status, that means that one or more resources are Unavailable. By viewing the resource status insights on the **Instance details** page, you can quickly understand which resource or resources are Unavailable for the purpose of troubleshooting.
Granular resource status details are automatically available when the Replicated SDK is installed alongside the application. For information about how to distribute and install the SDK with your application, see [Install the Replicated SDK](/vendor/replicated-sdk-installing).
## Understanding Application Status
This section provides information about how Replicated interprets and aggregates the status of Kubernetes resources for your application to report an application status.
### About Resource Statuses {#resource-statuses}
Possible resource statuses are Ready, Updating, Degraded, Unavailable, and Missing.
The following table lists the supported Kubernetes resources and the conditions that contribute to each status:
| Status | Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet |
|---|---|---|---|---|---|---|
| Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
| Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
| Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
| Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
| Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. | | | | | |
Replicated aggregates the individual resource statuses into a single application status as follows:
| Resource Statuses | Aggregate Application Status |
|---|---|
| No status available for any resource | Missing |
| One or more resources Unavailable | Unavailable |
| One or more resources Degraded | Degraded |
| One or more resources Updating | Updating |
| All resources Ready | Ready |
| Domain | Description |
|---|---|
| `replicated.app` * | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
| `registry.replicated.com` | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
| `proxy.replicated.com` | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
[View a larger image](/images/helm-install-button.png)
1. In the **Helm install instructions** dialog, run the first command to log in to the Replicated registry:
```bash
helm registry login registry.replicated.com --username EMAIL_ADDRESS --password LICENSE_ID
```
Where:
* `EMAIL_ADDRESS` is the customer's email address
* `LICENSE_ID` is the ID of the customer's license
:::note
You can safely ignore the following warning message: `WARNING: Using --password via the CLI is insecure.` This message is displayed because using the `--password` flag stores the password in bash history. This login method is not insecure.
Alternatively, to avoid the warning message, you can click **(show advanced)** in the **Helm install instructions** dialog to display a login command that excludes the `--password` flag. With the advanced login command, you are prompted for the password after running the command.
:::
1. (Optional) Run the second and third commands to install the preflight plugin and run preflight checks. If no preflight checks are defined, these commands are not displayed. For more information about defining and running preflight checks, see [About Preflight Checks and Support Bundles](preflight-support-bundle-about).
1. Run the fourth command to install using Helm:
```bash
helm install RELEASE_NAME oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART_NAME
```
Where:
* `RELEASE_NAME` is the name of the Helm release.
* `APP_SLUG` is the slug for the application. For information about how to find the application slug, see [Get the Application Slug](/vendor/vendor-portal-manage-app#slug).
* `CHANNEL` is the lowercased name of the channel where the release was promoted, such as `beta` or `unstable`. Channel is not required for releases promoted to the Stable channel.
* `CHART_NAME` is the name of the Helm chart.
:::note
To install the SDK with custom RBAC permissions, include the `--set` flag with the `helm install` command to override the value of the `replicated.serviceAccountName` field with a custom service account. For more information, see [Customizing RBAC for the SDK](/vendor/replicated-sdk-customizing#customize-rbac-for-the-sdk).
:::
1. (Optional) In the Vendor Portal, click **Customers**. You can see that the customer you used to install is marked as **Active** and the details about the application instance are listed under the customer name.
**Example**:

[View a larger version of this image](/images/sdk-customer-active-example.png)
---
# Installer History
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to access the installation commands for all active and inactive kURL installers promoted to a channel.
## About Using Inactive Installers
Each release channel in the Replicated Vendor Portal saves the history of kURL installers that were promoted to the channel. You can view the list of historical installers on the **kURL Installer History** page for each channel. For more information, see [About the Installer History Page](#about) below.
It can be useful to access the installation commands for inactive installers to reproduce an issue that a user is experiencing for troubleshooting purposes. For example, if the user's cluster is running the inactive installer version 1.0.0, then you can install with version 1.0.0 in a test environment to troubleshoot.
You can also send the installation commands for inactive installers to your users as needed. For example, a user might have unique requirements for specific versions of Kubernetes or add-ons.
## About the Installer History Page {#about}
The **kURL Installer History** page for each channel includes a list of all the kURL installers that have been promoted to the channel, including the active installer and any inactive installers.
To access the **kURL Installer History** page, go to **Channels** and click the **Installer history** button on the target channel.
The following image shows an example **kURL Installer History** page with three installers listed:

[View a larger version of this image](/images/installer-history-page.png)
The installers are listed in the order in which they were promoted to the channel. The installer at the top of the list is the active installer for the channel.
The **kURL Installer History** page includes the following information for each installer listed:
* Version label, if provided when the installer was promoted
* Sequence number
* Installation command
* Installer YAML content
---
# Export Customer and Instance Data
This topic describes how to download and export customer and instance data from the Replicated Vendor Portal.
## Overview
While you can always consume customer and instance insight data directly in the Replicated Vendor Portal, the data is also available in a CSV format so that it can be imported into any other system, such as:
* Customer Relationship Management (CRM) systems like Salesforce or Gainsight
* Data warehouses like Redshift, Snowflake, or BigQuery
* Business intelligence (BI) tools like Looker, Tableau, or PowerBI
By collecting and organizing this data wherever it is most visible and valuable, you can enable your team to make better decisions about where to focus efforts across product, sales, engineering, and customer success.
## Bulk Export Instance Event Timeseries Data
You can use the Vendor API v3 `/app/{app_id}/events` endpoint to programmatically access historical timeseries data containing instance-level events, including any custom metrics that you have defined. For more information about the endpoint, see [Get instance events in either JSON or CSV format](https://replicated-vendor-api.readme.io/reference/listappinstanceevents) in the Vendor API v3 documentation.
The `/app/{app_id}/events` endpoint returns data scoped to a given application identifier. It also allows filtering based on time periods, instance identifiers, customer identifiers, and event types. You must provide at least **one** query parameter to scope the query in order to receive a response.
By bulk exporting this instance event data with the `/app/{app_id}/events` endpoint, you can:
* Identify trends and potential problem areas
* Demonstrate the impact, adoption, and usage of recent product features
### Filter Bulk Data Exports
You can use the following types of filters to filter timeseries data for bulk export:
- **Filter by date**:
- Get instance events recorded _at or before_ the query date. For example:
```bash
curl -H "Authorization: $REPLICATED_API_TOKEN" \
"https://api.replicated.com/vendor/v3/app/:appID/events?before=2023-10-15"
```
- Get instance events recorded _at or after_ the query date. For example:
```shell
curl -H "Authorization: $REPLICATED_API_TOKEN" \
"https://api.replicated.com/vendor/v3/app/:appID/events?after=2023-10-15"
```
- Get instance events recorded within a range of dates [after, before]. For example:
```shell
curl -H "Authorization: $REPLICATED_API_TOKEN" \
"https://api.replicated.com/vendor/v3/app/:appID/events?after=2023-05-02&before=2023-10-15"
```
- **Filter by customer**: Get instance events from one or more customers using a comma-separated list of customer IDs. For example:
```bash
curl -H "Authorization: $REPLICATED_API_TOKEN" \
"https://api.replicated.com/vendor/v3/app/:appID/events?customerIDs=1b13241,2Rjk2923481"
```
- **Filter by event type**: Get instance events by event type using a comma-separated list of event types. For example:
```bash
curl -H "Authorization: $REPLICATED_API_TOKEN" \
"https://api.replicated.com/vendor/v3/app/:appID/events?eventTypes=numUsers,numProjects"
```
:::note
If any filter is passed for an object that does not exist, no warning is given. For example, if a `customerIDs` filter is passed for an ID that does not exist, or for an ID that the user does not have access to, then an empty array is returned.
:::
## Download Customer Instance Data CSVs
You can download customer and instance data from the **Download CSV** dropdown on the **Customers** page:

[View a larger version of this image](/images/customers-download-csv.png)
The **Download CSV** dropdown has the following options:
* **Customers**: Includes details about your customers, such as the customer's channel assignment, license entitlements, expiration date, last active timestamp, and more.
* (Recommended) **Customers + Instances**: Includes details about the instances associated with each customer, such as the Kubernetes distribution and cloud provider of the cluster where the instance is running, the most recent application instance status, whether the instance is active or inactive, and more. The **Customers + Instances** data is a superset of the customer data and is the recommended download for most use cases.
You can also export customer instance data as JSON using the Vendor API v3 `customer_instances` endpoint. For more information, see [Get customer instance report in CSV or JSON format](https://replicated-vendor-api.readme.io/reference/listappcustomerinstances) in the Vendor API v3 documentation.
### Data Dictionary
The following table lists the data fields that can be included in the customers and instances CSV downloads, including the label, data type, and description.
| Label | Type | Description |
|---|---|---|
| customer_id | string | Customer identifier |
| customer_name | string | The customer name |
| customer_created_date | timestamptz | The date the license was created |
| customer_license_expiration_date | timestamptz | The expiration date of the license |
| customer_channel_id | string | The ID of the channel the customer is assigned to |
| customer_channel_name | string | The name of the channel the customer is assigned to |
| customer_app_id | string | App identifier |
| customer_last_active | timestamptz | The date the customer was last active |
| customer_type | string | One of prod, trial, dev, or community |
| customer_status | string | The current status of the customer |
| customer_is_airgap_enabled | boolean | Indicates whether the Airgap feature is enabled for the customer |
| customer_is_geoaxis_supported | boolean | Indicates whether the GeoAxis feature is enabled for the customer |
| customer_is_gitops_supported | boolean | Indicates whether the KOTS Auto-GitOps feature is enabled for the customer |
| customer_is_embedded_cluster_download_enabled | boolean | Indicates whether the Embedded Cluster feature is enabled for the customer |
| customer_is_identity_service_supported | boolean | Indicates whether the Identity feature is enabled for the customer |
| customer_is_snapshot_supported | boolean | Indicates whether the Snapshot feature is enabled for the customer |
| customer_has_entitlements | boolean | Indicates the presence or absence of entitlements and customer_entitlement__* columns |
| customer_entitlement__* | string/integer/boolean | The values of any custom license fields configured for the customer. For example, customer_entitlement__active-users. |
| customer_created_by_id | string | The ID of the actor that created this customer: user ID or a hashed value of a token. |
| customer_created_by_type | string | The type of the actor that created this customer, such as user or service-account. |
| customer_created_by_description | string | The description of the actor that created this customer. Includes username or token name depending on actor type. |
| customer_created_by_link | string | The link to the actor that created this customer. |
| customer_created_by_timestamp | timestamptz | The date the customer was created by this actor. When available, matches the value in the customer_created_date column |
| customer_updated_by_id | string | The ID of the actor that updated this customer: user ID or a hashed value of a token. |
| customer_updated_by_type | string | The type of the actor that updated this customer, such as user or service-account. |
| customer_updated_by_description | string | The description of the actor that updated this customer. Includes username or token name depending on actor type. |
| customer_updated_by_link | string | The link to the actor that updated this customer. |
| customer_updated_by_timestamp | timestamptz | The date the customer was updated by this actor. |
| instance_id | string | Instance identifier |
| instance_is_active | boolean | The instance has pinged within the last 24 hours |
| instance_first_reported_at | timestamptz | The timestamp of the first recorded check-in for the instance. |
| instance_last_reported_at | timestamptz | The timestamp of the last recorded check-in for the instance. |
| instance_first_ready_at | timestamptz | The timestamp of when the cluster was considered ready |
| instance_kots_version | string | The version of KOTS or the Replicated SDK that the instance is running. The version is displayed as a Semantic Versioning compliant string. |
| instance_k8s_version | string | The version of Kubernetes running in the cluster. |
| instance_is_airgap | boolean | The cluster is air gapped |
| instance_is_kurl | boolean | The instance is installed in a Replicated kURL cluster (embedded cluster) |
| instance_last_app_status | string | The instance's last reported app status |
| instance_client | string | Indicates whether this instance is managed by KOTS or if it's a Helm CLI deployed instance using the SDK. |
| instance_kurl_node_count_total | integer | Total number of nodes in the cluster. Applies only to kURL clusters. |
| instance_kurl_node_count_ready | integer | Number of nodes in the cluster that are in a healthy state and ready to run Pods. Applies only to kURL clusters. |
| instance_cloud_provider | string | The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. |
| instance_cloud_provider_region | string | The cloud provider region where the instance is running. For example, `us-central1-b`. |
| instance_app_version | string | The current application version |
| instance_version_age | string | The age (in days) of the currently deployed release. This is relative to the latest available release on the channel. |
| instance_is_gitops_enabled | boolean | Reflects whether the end user has enabled KOTS Auto-GitOps for deployments in their environment |
| instance_gitops_provider | string | If KOTS Auto-GitOps is enabled, reflects the GitOps provider in use. For example, GitHub Enterprise. |
| instance_is_skip_preflights | boolean | Indicates whether an end user elected to skip preflight check warnings or errors |
| instance_preflight_status | string | The last reported preflight check status for the instance |
| instance_k8s_distribution | string | The Kubernetes distribution of the cluster. |
| instance_has_custom_metrics | boolean | Indicates the presence or absence of custom metrics and custom_metric__* columns |
| instance_custom_metrics_reported_at | timestamptz | Timestamp of latest custom_metric |
| custom_metric__* | string/integer/boolean | The values of any custom metrics that have been sent by the instance. For example, custom_metric__active_users |
| instance_has_tags | boolean | Indicates the presence or absence of instance tags and instance_tag__* columns |
| instance_tag__* | string/integer/boolean | The values of any instance tag that have been set by the vendor. For example, instance_tag__name |
[View a larger version of this image](/images/resource-status-hover-current-state.png)
* **App version**: The version label of the currently running release. You define the version label in the release properties when you promote the release. For more information about defining release properties, see [Properties](releases-about#properties) in _About Channels and Releases_.
If there is no version label for the release, then the Vendor Portal displays the release sequence in the **App version** field. You can find the sequence number associated with a release by running the `replicated release ls` command. See [release ls](/reference/replicated-cli-release-ls) in the _Replicated CLI_ documentation.
* **Version age**: The absolute and relative ages of the instance:
* **Absolute age**: `now - current_release.promoted_date`
The number of days since the currently running application version was promoted to the channel. For example, if the instance is currently running version 1.0.0, and version 1.0.0 was promoted to the channel 30 days ago, then the absolute age is 30.
* **Relative age (Days Behind Latest)**: `channel.latest_release.promoted_date - current_release.promoted_date`
The number of days between when the currently running application version was promoted to the channel and when the latest available version on the channel was promoted.
For example, the instance is currently running version 1.0.0, which was promoted to the Stable channel. The latest version available on the Stable channel is 1.5.0. If 1.0.0 was promoted 30 days ago and 1.5.0 was promoted 10 days ago, then the relative age of the application instance is 20 days.
* **Versions behind**: The number of versions between the currently running version and the latest version available on the channel where the instance is assigned.
For example, the instance is currently running version 1.0.0, which was promoted to the Stable channel. If the later versions 1.1.0, 1.2.0, 1.3.0, 1.4.0, and 1.5.0 were also promoted to the Stable channel, then the instance is five versions behind.
* **Last check-in**: The timestamp when the instance most recently sent data to the Vendor Portal.
### Instance Insights {#insights}
The **Insights** section includes the following metrics computed by the Vendor Portal:
* [Uptime](#uptime)
* [Time to Install](#time-to-install)
#### Uptime
The Vendor Portal computes the total uptime for the instance as the fraction of time that the instance spends with a Ready, Updating, or Degraded status. The Vendor Portal also provides more granular details about uptime in the **Instance Uptime** graph. See [Instance Uptime](#instance-uptime) below.
High uptime indicates that the application is reliable and able to handle the demands of the customer environment. Low uptime might indicate that the application is prone to errors or failures. By measuring the total uptime, you can better understand the performance of your application.
The following table lists the application statuses that are associated with an Up or Down state in the total uptime calculation:
| Uptime State | Application Statuses |
|---|---|
| Up | Ready, Updating, or Degraded |
| Down | Missing or Unavailable |
The **Instance Uptime** graph breaks these statuses down further into Up, Degraded, and Down states:
| Uptime State | Application Statuses |
|---|---|
| Up | Ready or Updating |
| Degraded | Degraded |
| Down | Missing or Unavailable |
| Label | Description |
|---|---|
| App Channel | The ID of the channel the application instance is assigned. |
| App Version | The version label of the release that the instance is currently running. The version label is the version that you assigned to the release when promoting it to a channel. |
| Label | Description |
|---|---|
| App Status | A string that represents the status of the application. Possible values: Ready, Updating, Degraded, Unavailable, Missing. For applications that include the Replicated SDK, hover over the application status to view the statuses of the individual resources deployed by the application. For more information, see Enabling and Understanding Application Status. |
| Label | Description |
|---|---|
| Cluster Type | Indicates if the cluster was provisioned by kURL. For more information about kURL clusters, see Creating a kURL installer. |
| Kubernetes Version | The version of Kubernetes running in the cluster. |
| Kubernetes Distribution | The Kubernetes distribution of the cluster. |
| kURL Nodes Total | Total number of nodes in the cluster. Note: Applies only to kURL clusters. |
| kURL Nodes Ready | Number of nodes in the cluster that are in a healthy state and ready to run Pods. Note: Applies only to kURL clusters. |
| New kURL Installer | The ID of the kURL installer specification that kURL used to provision the cluster. Indicates that a new installer specification was added. An installer specification is a manifest file that has `apiVersion: cluster.kurl.sh/v1beta1` and `kind: Installer`. For more information about installer specifications for kURL, see Creating a kURL installer. Note: Applies only to kURL clusters. |
| Label | Description |
|---|---|
| Cloud Provider | The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. |
| Cloud Region | The cloud provider region where the instance is running. For example, `us-central1-b`. |
| Label | Description |
|---|---|
| KOTS Version | The version of KOTS that the instance is running. KOTS version is displayed as a Semantic Versioning compliant string. |
| Label | Description |
|---|---|
| Replicated SDK Version | The version of the Replicated SDK that the instance is running. SDK version is displayed as a Semantic Versioning compliant string. |
| Label | Description |
|---|---|
| Versions Behind | The number of versions between the version that the instance is currently running and the latest version available on the channel. Computed by the Vendor Portal each time it receives instance data. |
1. On the Instance Details page, click **Notifications**.
1. From the **Configure Instance Notifications** dialog, select the types of notifications to enable.

[View a larger version of this image](/images/instance-notifications-dialog.png)
1. Click **Save**.
1. Repeat these steps to configure notifications for other application instances.
## Test Notifications
After you enable notifications for a running development instance, test that your notifications are working as expected.
Do this by forcing your application into a non-ready state. For example, you can delete one or more application Pods and wait for a ReplicationController to recreate them.
Then, look for notifications in the assigned Slack channel. You also receive an email if you enabled email notifications.
:::note
There is a 30-second buffer between event detection and notifications being sent. This buffer provides better roll-ups and reduces noise.
:::
---
# Replicated FAQs
This topic lists frequently asked questions (FAQs) for different components of the Replicated Platform.
## Getting Started FAQs
### What are the supported application packaging options?
Replicated strongly recommends that all applications are packaged using Helm.
Helm is a popular open source package manager for Kubernetes applications. Many ISVs use Helm to configure and deploy Kubernetes applications because it provides a consistent, reusable, and sharable packaging format. For more information, see the [Helm documentation](https://helm.sh/docs).
Many enterprise customers expect to be able to install an application with Helm in their own cluster. Packaging with Helm allows you to support installation with the Helm CLI and with the Replicated installers (Replicated Embedded Cluster and Replicated KOTS) from a single release in the Replicated Platform.
For vendors that do not want to use Helm, applications distributed with Replicated can be packaged as Kubernetes manifest files.
### How do I get started with Replicated?
Replicated recommends that new users start by completing one or more labs or tutorials to get familiar with the processes of creating, installing, and iterating on releases for an application with the Replicated Platform.
Then, when you are ready to begin onboarding your own application to the Replicated Platform, see [Onboard to the Replicated Platform](replicated-onboarding) for a list of Replicated features to begin integrating.
#### Labs
The following labs in Instruqt provide a hands-on introduction to working with Replicated features, without needing your own sample application or development environment:
* [Distributing Your Application with Replicated](https://play.instruqt.com/embed/replicated/tracks/distributing-with-replicated?token=em_VHOEfNnBgU3auAnN): Learn how to quickly get value from the Replicated Platform for your application.
* [Delivering Your Application as a Kubernetes Appliance](https://play.instruqt.com/embed/replicated/tracks/delivering-as-an-appliance?token=em_lUZdcv0LrF6alIa3): Use Embedded Cluster to distribute Kubernetes and an application together as a single appliance.
* [Avoiding Installation Pitfalls](https://play.instruqt.com/embed/replicated/tracks/avoiding-installation-pitfalls?token=em_gJjtIzzTTtdd5RFG): Learn how to use preflight checks to avoid common installation issues and assure your customer is installing into a supported environment.
* [Closing the Support Information Gap](https://play.instruqt.com/embed/replicated/tracks/closing-information-gap?token=em_MO2XXCz3bAgwtEca): Learn how to use support bundles to close the information gap between your customers and your support team.
* [Protecting Your Assets](https://play.instruqt.com/embed/replicated/tracks/protecting-your-assets?token=em_7QjY34G_UHKoREBd): Assure your customers have the right access to your application artifacts and features using Replicated licensing.
#### Tutorials
The following getting started tutorials demonstrate how to integrate key Replicated features with a sample Helm chart application:
* [Install a Helm Chart on a VM with Embedded Cluster](/vendor/tutorial-embedded-cluster-setup): Create a release that can be installed on a VM with the Embedded Cluster installer.
* [Install a Helm Chart with KOTS and the Helm CLI](/vendor/tutorial-kots-helm-setup): Create a release that can be installed with both the KOTS installer and the Helm CLI.
* [Set Helm Chart Values with KOTS](/vendor/tutorial-config-setup): Configure the Admin Console Config screen to collect user-supplied values.
* [Add Preflight Checks to a Helm Chart](/vendor/tutorial-preflight-helm-setup): Create preflight checks for your application by adding a spec for preflight checks to a Secret in the Helm templates.
### What are air gap installations?
_Air gap_ refers to a computer or network that does not have outbound internet access. Air-gapped environments are common for enterprises that require high security, such as government agencies or financial institutions.
Traditionally, air-gapped systems are physically isolated from the network. For example, an air-gapped server might be stored in a separate location away from network-connected servers. Physical access to air-gapped servers is often restricted as well.
It is also possible to use _virtual_ or _logical_ air gaps, in which security controls such as firewalls, role-based access control (RBAC), and encryption are used to logically isolate a device from a network. In this way, network access is still restricted, but there is not a physical air gap that disconnects the device from the network.
Replicated supports installations into air-gapped environments. In an air gap installation, users first download the images and other assets required for installation on an internet-connected device. These installation assets are usually provided in an _air gap bundle_ that ISVs can build in the Replicated Vendor Portal. Then, users transfer the installation assets to their air-gapped machine where they can push the images to an internal private registry and install.
For more information, see:
* [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap)
* [Installing and Updating with Helm in Air Gap Environments](/vendor/helm-install-airgap)
### What is the Commercial Software Distribution Lifecycle?
Commercial software distribution is the business process that independent software vendors (ISVs) use to enable enterprise customers to self-host a fully private instance of the vendor's application in an environment controlled by the customer.
Replicated has developed the Commercial Software Distribution Lifecycle to represent the stages that are essential for every company that wants to deliver their software securely and reliably to customer-controlled environments.
This lifecycle was inspired by the DevOps lifecycle and the Software Development Lifecycle (SDLC), but it focuses on the unique requirements of successfully distributing commercial software to tens, hundreds, or thousands of enterprise customers.
The phases are:
* Develop
* Test
* Release
* License
* Install
* Report
* Support
For more information about the Replicated features that enhance each phase of the lifecycle, see [Introduction to Replicated](../intro-replicated).
## Compatibility Matrix FAQs
### What types of clusters can I create with Compatibility Matrix?
You can use Compatibility Matrix to get kubectl access to running clusters within minutes. Compatibility Matrix supports a variety of VM and cloud distributions, including Red Hat OpenShift, Replicated Embedded Cluster, and Oracle Container Engine for Kubernetes (OKE). For a complete list, see [Supported Compatibility Matrix Cluster Types](/vendor/testing-supported-clusters).
### How does billing work?
Clusters created with Compatibility Matrix are billed by the minute. Per-minute billing begins when the cluster reaches a running status and ends when the cluster is deleted. For more information, see [Billing and Credits](/vendor/testing-about#billing-and-credits).
### How do I buy credits?
To create clusters with Compatibility Matrix, you must have credits in your Vendor Portal account. If you have a contract, you can purchase credits by logging in to the Vendor Portal and going to **[Compatibility Matrix > Buy additional credits](https://vendor.replicated.com/compatibility-matrix)**. Otherwise, to request credits, log in to the Vendor Portal and go to **[Compatibility Matrix > Request more credits](https://vendor.replicated.com/compatibility-matrix)**.
### How do I add Compatibility Matrix to my CI/CD pipelines?
You can use Replicated CLI commands to integrate Compatibility Matrix into your CI/CD development and production workflows. This allows you to programmatically create multiple different types of clusters where you can deploy and test your application before releasing.
For more information, see [About Integrating with CI/CD](/vendor/ci-overview).
## KOTS and Embedded Cluster FAQs
### What is the Admin Console?
The Admin Console is the user interface deployed by the Replicated KOTS installer. Users log in to the Admin Console to configure and install the application. Users also access the Admin Console after installation to complete application management tasks such as performing updates, syncing their license, and generating support bundles. For installations with Embedded Cluster, the Admin Console also includes a **Cluster Management** tab where users can manage the nodes in the cluster.
The Admin Console is available in installations with Replicated Embedded Cluster and Replicated KOTS.
The following shows an example of the Admin Console dashboard for an Embedded Cluster installation of an application named "Gitea":
[View a larger version of this image](/images/gitea-ec-ready.png)
### How do Embedded Cluster installations work?
To install with Embedded Cluster, users first download and extract the Embedded Cluster installation assets for the target application release on their VM or bare metal server. Then, they run an Embedded Cluster installation command to provision the cluster. During installation, Embedded Cluster also installs Replicated KOTS in the cluster, which deploys the Admin Console.
After the installation command finishes, users log in to the Admin Console to provide application configuration values, optionally join more nodes to the cluster, run preflight checks, and deploy the application.
Customer-specific Embedded Cluster installation instructions are provided in the Replicated Vendor Portal. For more information, see [Install with Embedded Cluster](/enterprise/installing-embedded).
### Does Replicated support installations into air gap environments?
Yes. The Embedded Cluster and KOTS installers support installation in _air gap_ environments with no outbound internet access.
To support air gap installations, vendors can build air gap bundles for their application in the Vendor Portal that contain all the required assets for a specific release of the application. Additionally, Replicated provides bundles that contain the assets for the Replicated installers.
For more information about how to install with Embedded Cluster and KOTS in air gap environments, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap) and [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped).
### Can I deploy Helm charts with KOTS?
Yes. An application deployed with KOTS can use one or more Helm charts, can include Helm charts as components, and can use more than a single instance of any Helm chart. Each Helm chart requires a unique HelmChart custom resource (`apiVersion: kots.io/v1beta2`) in the release.
For more information, see [About Distributing Helm Charts with KOTS](/vendor/helm-native-about).
### What's the difference between Embedded Cluster and kURL?
Replicated Embedded Cluster is a successor to Replicated kURL. Compared to kURL, Embedded Cluster offers significantly faster installation, updates, and node joins, a redesigned Admin Console UI, improved support for multi-node clusters, one-click updates that update the application and the cluster at the same time, and more.
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
For more information, see [Embedded Cluster Overview](/vendor/embedded-overview).
### How do I enable Embedded Cluster and KOTS installations for my application?
Releases that support installation with KOTS include the manifests required by KOTS to define the Admin Console experience and install the application.
In addition to the KOTS manifests, releases that support installation with Embedded Cluster also include the Embedded Cluster Config. The Embedded Cluster Config defines aspects of the cluster that will be provisioned and also sets the version of KOTS that will be installed.
For more information, see [Embedded Cluster Overview](/vendor/embedded-overview).
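For illustration, a minimal Embedded Cluster Config looks roughly like the following. The `version` value shown is a placeholder for a specific Embedded Cluster release; see the Embedded Cluster documentation for the full set of configuration options.
```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  # Placeholder: pins the Embedded Cluster version used by the installation
  # assets (which in turn determines the bundled KOTS and Kubernetes versions)
  version: 2.5.0+k8s-1.30
```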
### Can I use my own branding?
The KOTS Admin Console and the Replicated Download Portal support the use of a custom logo. Additionally, software vendors can use custom domains to alias the endpoints for Replicated services.
For more information, see [Customize the Admin Console and Download Portal](/vendor/admin-console-customize-app-icon), [About Custom Domains](custom-domains), and [Use Custom Domains](/vendor/custom-domains-using).
## Replicated SDK FAQs
### What is the SDK?
The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. The SDK can be installed alongside applications packaged as Helm charts or Kubernetes manifests. The SDK can be installed using the Helm CLI or KOTS.
For information about how to distribute and install the SDK with your application, see [Install the Replicated SDK](/vendor/replicated-sdk-installing).
Replicated recommends that the SDK is distributed with all applications because it provides access to key Replicated functionality, such as:
* Automatic access to insights and operational telemetry for instances running in customer environments, including granular details about the status of different application resources. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
* An in-cluster API that you can use to embed Replicated features into your application, including:
* Collect custom metrics on instances running in online or air gap environments. See [Configure Custom Metrics](/vendor/custom-metrics).
* Check customer license entitlements at runtime. See [Query Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk) and [Verify License Field Signatures with the Replicated SDK API](/vendor/licenses-verify-fields-sdk-api).
* Provide update checks to alert customers when new versions of your application are available for upgrade. See [Support Update Checks in Your Application](/reference/replicated-sdk-apis#support-update-checks-in-your-application) in _Replicated SDK API_.
* Programmatically name or tag instances from the instance itself. See [Programatically Set Tags](/reference/replicated-sdk-apis#post-appinstance-tags).
### Is the SDK supported in air gap environments?
Yes. The Replicated SDK has an _air gap mode_ that allows it to run in environments with no outbound internet access. When installed in air gap mode, the SDK does not attempt to connect to the internet. This avoids any failures that would occur when the SDK is unable to make outbound requests in air gap environments.
For more information, see [Install the SDK in Air Gap Environments](/vendor/replicated-sdk-airgap).
### How do I develop against the SDK API?
You can use the Replicated SDK in _integration mode_ to develop locally against the SDK API without needing to make real changes in the Replicated Vendor Portal or in your environment.
For more information, see [Develop Against the SDK API](/vendor/replicated-sdk-development).
### How does the Replicated SDK work with KOTS?
The Replicated SDK is a Helm chart that can be installed as a small service alongside an application, or as a standalone component. The SDK can be installed using the Helm CLI or KOTS.
Replicated recommends that all applications include the SDK because it provides access to key functionality not available through KOTS, such as support for sending custom metrics from application instances. When both the SDK and KOTS are installed in a cluster alongside an application, both send instance telemetry to the Vendor Portal.
For more information about the SDK installation options, see [Install the Replicated SDK](/vendor/replicated-sdk-installing).
## Vendor Portal FAQs
### How do I add and remove team members?
Admins can add, remove, and manage team members from the Vendor Portal. For more information, see [Manage Team Members](/vendor/team-management).
### How do I manage RBAC policies for my team members?
By default, every team has two policies created automatically: Admin and Read Only. If you have an Enterprise plan, you will also have the Sales and Support policies created automatically. These default policies are not configurable.
You can also configure custom RBAC policies if you are on the Enterprise pricing plan. Creating custom RBAC policies lets you limit which areas of the Vendor Portal are accessible to team members, and control read and read/write privileges to groups based on their role.
For more information, see [Configure RBAC Policies](/vendor/team-management-rbac-configuring).
### Can I alias Replicated endpoints?
Yes. Replicated supports the use of custom domains to alias the endpoints for Replicated services, such as the Replicated app service and the Replicated proxy registry.
Replicated domains are external to your domain and can require additional security reviews by your customer. Using custom domains as aliases can bring the domains inside an existing security review and reduce your exposure.
For more information, see [Use Custom Domains](/vendor/custom-domains-using).
### How does Replicated collect telemetry from instances of my application?
For instances running in online (internet-connected) customer environments, either Replicated KOTS or the Replicated SDK periodically sends a small amount of data to the Vendor Portal, depending on which is installed in the cluster alongside the application. If both KOTS and the SDK are installed in the cluster, then both send instance data.
For air gap instances, Replicated KOTS and the Replicated SDK collect and store instance telemetry in a Kubernetes Secret in the customer environment. The telemetry stored in the Secret is collected when a support bundle is generated in the environment. When the support bundle is uploaded to the Vendor Portal, the telemetry is associated with the correct customer and instance ID, and the Vendor Portal updates the instance insights and event data accordingly.
For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
---
# Introduction to kURL
This topic provides an introduction to the Replicated kURL installer, including information about kURL specifications and installations.
:::note
The Replicated KOTS entitlement is required to install applications with KOTS and kURL. For more information, see [Pricing](https://www.replicated.com/pricing) on the Replicated website.
:::
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
## Overview
kURL is an open source project maintained by Replicated that software vendors can use to create custom Kubernetes distributions that are embedded with their application. Enterprise customers can then run a kURL installation script on their virtual machine (VM) or bare metal server to provision a cluster and install the application. This allows software vendors to distribute Kubernetes applications to customers that do not have access to a cluster in their environment.
For more information about the kURL open source project, see the [kURL website](https://kurl.sh).
### kURL Installers
To provision a cluster on a VM or bare metal server, kURL uses a spec that is defined in a manifest file with `apiVersion: cluster.kurl.sh/v1beta1` and `kind: Installer`. This spec (called a _kURL installer_) lists the kURL add-ons that will be included in the cluster. kURL provides add-ons for networking, storage, ingress, and more. kURL also provides a KOTS add-on, which installs KOTS in the cluster and deploys the KOTS Admin Console. You can customize the kURL installer according to your application requirements.
To distribute a kURL installer alongside your application, you can promote the installer to a channel or include the installer as a manifest file within a given release. For more information about creating kURL installers, see [Create a kURL Installer](/vendor/packaging-embedded-kubernetes).
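For illustration, a kURL installer spec looks roughly like the following. The add-ons and version strings shown are placeholders; choose add-ons according to your application requirements.
```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: my-installer
spec:
  # Illustrative add-on selection; versions use kURL's x-range notation
  kubernetes:
    version: 1.27.x
  containerd:
    version: 1.6.x
  flannel:
    version: 0.25.x
  openebs:
    version: 3.10.x
  # The KOTS add-on installs KOTS and the Admin Console in the cluster
  kotsadm:
    version: latest
```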
### kURL Installations
To install with kURL, users run a kURL installation script on their VM or bare metal server to provision a cluster.
When the KOTS add-on is included in the kURL installer spec, the kURL installation script installs the KOTS CLI and KOTS Admin Console in the cluster. After the installation script completes, users can access the Admin Console at the URL provided in the output of the command to configure and deploy the application with KOTS.
The following shows an example of the output of the kURL installation script:
```bash
Installation
Complete ✔
Kotsadm: http://10.128.0.35:8800
Login with password (will not be shown again): 3Hy8WYYid
This password has been set for you by default. It is recommended that you change
this password; this can be done with the following command:
kubectl kots reset-password default
```
kURL installations are supported in online (internet-connected) and air gapped environments.
For information about how to install applications with kURL, see [Online Installation with kURL](/enterprise/installing-kurl).
## About the Open Source kURL Documentation
The open source documentation for the kURL project is available at [kurl.sh](https://kurl.sh/docs/introduction/).
The open source kURL documentation contains additional information including kURL installation options, kURL add-ons, and procedural content such as how to add and manage nodes in kURL clusters. Software vendors can use the open source kURL documentation to find detailed reference information when creating kURL installer specs or testing installation.
---
# Expose Services Using NodePorts
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to expose NodePort services in [Replicated Embedded Cluster](/vendor/embedded-overview) or [Replicated kURL](/vendor/kurl-about) installations on VMs or bare metal servers.
## Overview
For installations into existing clusters, KOTS automatically creates a port forward tunnel to expose the Admin Console. Unlike installations into existing clusters, KOTS does _not_ automatically open the port forward tunnel for installations in embedded clusters provisioned on virtual machines (VMs) or bare metal servers. This is because it cannot be verified that the ports are secure and authenticated. For more information about the KOTS port forward tunnel, see [Port Forwarding Services with KOTS](/vendor/admin-console-port-forward).
Instead, to expose the Admin Console in installations with [Embedded Cluster](/vendor/embedded-overview) or [kURL](/vendor/kurl-about), KOTS creates the Admin Console as a NodePort service so it can be accessed at the node's IP address on a node port (port 8800 for kURL installations and port 30000 for Embedded Cluster installations). Additionally, for kURL installations, the UIs of Prometheus, Grafana, and Alertmanager are also exposed using NodePorts.
For installations on VMs or bare metal servers where your application must be accessible from the user's local machine rather than from inside the cluster, you can expose application services as NodePorts to provide access to the application after installation.
## Add a NodePort Service
Services with `type: NodePort` can be reached from outside the cluster by connecting to any node on the assigned node port using the appropriate protocol. For more information about working with the NodePort service type, see [type: NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) in the Kubernetes documentation.
The following shows an example of a NodePort type service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: sentry
  labels:
    app: sentry
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    nodePort: 9000
    protocol: TCP
    name: sentry
  selector:
    app: sentry
    role: web
```
After configuring a NodePort service for your application, you can add a link to the service on the Admin Console dashboard where it can be accessed by users after the application is installed. For more information, see [About Accessing NodePort Services](#about-accessing-nodeport-services) below.
### Use KOTS Annotations to Conditionally Deploy NodePort Services
You can use the KOTS [`kots.io/when`](/vendor/packaging-include-resources#kotsiowhen) annotation to conditionally deploy a service. This is useful when you want to deploy a ClusterIP or LoadBalancer service for existing cluster installations, and deploy a NodePort service for Embedded Cluster or kURL installations.
To conditionally deploy a service based on the installation method, you can use the following KOTS template functions in the `kots.io/when` annotation:
* [IsKurl](/reference/template-functions-static-context#iskurl): Detects kURL installations. For example, `repl{{ IsKurl }}` returns true for kURL installations, and `repl{{ not IsKurl }}` returns true for non-kURL installations.
* [Distribution](/reference/template-functions-static-context#distribution): Returns the distribution of the cluster where KOTS is running. For example, `repl{{ eq Distribution "embedded-cluster" }}` returns true for Embedded Cluster installations and `repl{{ ne Distribution "embedded-cluster" }}` returns true for non-Embedded Cluster installations.
For example, the following `sentry` service with `type: NodePort` includes the annotation `kots.io/when: repl{{ eq Distribution "embedded-cluster" }}`. This creates the NodePort service _only_ when installing with Embedded Cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: sentry
  labels:
    app: sentry
  annotations:
    # This annotation ensures that the NodePort service
    # is only created in Embedded Cluster installations
    kots.io/when: repl{{ eq Distribution "embedded-cluster" }}
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    nodePort: 9000
    protocol: TCP
    name: sentry
  selector:
    app: sentry
    role: web
```
Similarly, to ensure that a `sentry` service with `type: ClusterIP` is created only in existing cluster installations, add the annotation `kots.io/when: repl{{ ne Distribution "embedded-cluster" }}` to the ClusterIP service specification:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: sentry
  labels:
    app: sentry
  annotations:
    # This annotation ensures that the ClusterIP service
    # is only created in existing cluster installations
    kots.io/when: repl{{ ne Distribution "embedded-cluster" }}
spec:
  type: ClusterIP
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
    name: sentry
  selector:
    app: sentry
    role: web
```
## About Accessing NodePort Services
This section describes how to provide users with access to NodePort services after installation.
### VM Firewall Requirements
To access the Admin Console and any NodePort services for your application, the firewall for the VM where the user installs must allow HTTP traffic and accept inbound traffic from the user's workstation on the ports where the services are exposed. Users can consult their cloud provider's documentation for more information about updating firewall rules.
### Add a Link on the Admin Console Dashboard {#add-link}
You can provide a link to a NodePort service on the Admin Console dashboard by configuring the `links` array in the Kubernetes SIG Application custom resource. This provides users with an easy way to access the application after installation. For more information, see [Adding Links to the Dashboard](admin-console-adding-buttons-links).
For example:
[View a larger version of this image](/images/gitea-open-app.png)
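The following is a minimal sketch of the Kubernetes SIG Application custom resource with a `links` entry. The names and URL are illustrative and assume the `sentry` NodePort service shown earlier; see the linked topic for the full configuration, including how the link is mapped to the exposed port:
```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: sentry
spec:
  descriptor:
    links:
      # Text and target for the dashboard link
      - description: Open Sentry
        url: "http://sentry"
```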
---
# Reset a kURL Cluster
:::note
Replicated kURL is available only for existing customers. If you are not an existing kURL user, use Replicated Embedded Cluster instead. For more information, see [Use Embedded Cluster](/vendor/embedded-overview).
kURL is a Generally Available (GA) product for existing customers. For more information about the Replicated product lifecycle phases, see [Support Lifecycle Policy](/vendor/policies-support-lifecycle).
:::
This topic describes how to use the kURL `reset` command to reset a kURL cluster.
## Overview
If you need to reset a kURL installation, such as when you are testing releases with kURL, you can use the kURL `tasks.sh` `reset` command to remove Kubernetes from the system.
Alternatively, you can discard your current VM (if you are using one) and recreate the VM with a new OS to reinstall with kURL.
For more information about the `reset` command, see [Resetting a Node](https://kurl.sh/docs/install-with-kurl/managing-nodes#reset-a-node) in the kURL documentation.
To reset a kURL installation:
1. Access the machine where you installed with kURL.
1. Run the following command to remove Kubernetes from the system:
```bash
curl -sSL https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s reset
```
1. Follow the instructions in the output of the command to manually remove any files that the `reset` command does not remove.
If the `reset` command is unsuccessful, discard your current VM, and recreate the VM with a new OS to reinstall the Admin Console and the application.
---
# About Community Licenses
This topic describes community licenses. For more information about other types of licenses, see [Customer Types](licenses-about#customer-types) in _About Customers_.
## Overview
Community licenses are intended for use with a free or low cost version of your application. For example, you could use community licenses for an open source version of your application.
After installing an application with a community license, users can replace their community license with a new license of a different type without having to completely reinstall the application. This means that, if you have several community users who install with the same license, then you can upgrade a single community user without editing the license for all community users.
Community licenses are supported for applications that are installed with Replicated KOTS or with the Helm CLI.
For applications installed with KOTS, community license users can upload a new license file of a different type in the Replicated admin console. For more information, see [Upgrade from a Community License](/enterprise/updating-licenses#upgrade-from-a-community-license) in _Updating Licenses in the Admin Console_.
## Limitations
Community licenses function in the same way as the other types of licenses, with the following
exceptions:
* Changing a community license to another license type cannot be reverted.
* Community license users are not supported by the Replicated Support team.
* Community licenses do not support air gap installations.
* Community licenses cannot include an expiration date.
## Community License Admin Console Branding
For applications installed with KOTS, the branding in the admin console for community users differs in the following ways:
* The license tile on the admin console **Dashboard** page is highlighted in yellow and with the words **Community Edition**.

[View a larger version of this image](/images/community-license-dashboard.png)
* All support bundles and analysis in the admin console are clearly marked as **Community Edition**.

[View a larger version of this image](/images/community-license-bundle.png)
---
# About Customers and Licensing
This topic provides an overview of customers and licenses in the Replicated Platform.
## Overview
The licensing features of the Replicated Platform allow vendors to securely grant access to software, making license agreements available to the application in end customer environments at startup and runtime.
The Replicated Vendor Portal also allows vendors to create and manage customer records. Each customer record includes several fields that uniquely identify the customer and the application, specify the customer's assigned release channel, and define the customer's entitlements.
Vendors can use these licensing features to enforce entitlements such as license expiration dates, and to track and report on software usage for the purpose of surfacing insights to both internal teams and customers.
The following diagram provides an overview of licensing with the Replicated Platform:

[View a larger version of this image](/images/licensing-overview.png)
As shown in the diagram above, the Replicated license and update server manages and distributes customer license information. The license server retrieves this license information from customer records managed by vendors in the Vendor Portal.
During installation or upgrade, the customer's license ID is used to authenticate with the license server. The license ID also provides authentication for the Replicated proxy registry, securely granting proxy access to images in the vendor's external registry.
The license server is identified with a CNAME record where it can be accessed from end customer environments. When running alongside an application in a customer environment, the Replicated SDK retrieves up-to-date customer license information from the license server during runtime. The in-cluster SDK API `/license/` endpoints can be used to get customer license information on-demand, allowing vendors to programmatically enforce and report on license agreements.
Vendors can also integrate internal Customer Relationship Management (CRM) tools such as Salesforce with the Replicated Platform so that any changes to a customer's entitlements are automatically reflected in the Vendor Portal. This ensures that updates to license agreements are reflected in the customer environment in real time.
## About Customers
Each customer that you create in the Replicated Vendor Portal has a unique license ID. Your customers use their license when they install or update your application.
You assign customers to channels in the Vendor Portal to control their access to your application releases. Customers can install or upgrade to releases that are promoted to the channel they are assigned. For example, assigning a customer to your Beta channel allows that customer to install or upgrade to only releases promoted to the Beta channel.
Each customer license includes several fields that uniquely identify the customer and the application, specify the customer's assigned release channel, and define the customer's entitlements, such as if the license has an expiration date or what application functionality the customer can access. Replicated securely delivers these entitlements to the application and makes them available at installation or at runtime.
For more information about how to create and manage customers, see [Create and Manage Customers](releases-creating-customer).
### Customer Channel Assignment {#channel-assignment}
You can change the channel a customer is assigned to at any time. For installations with Replicated KOTS, when you change the customer's channel, the customer can synchronize their license in the Replicated Admin Console to fetch the latest release on the new channel and then upgrade. The Admin Console always fetches the latest release on the new channel, regardless of the presence of any releases on the channel that are marked as required.
For example, if the latest release promoted to the Beta channel is version 1.25.0 and version 1.10.0 is marked as required, when you edit an existing customer to assign them to the Beta channel, then the KOTS Admin Console always fetches 1.25.0, even though 1.10.0 is marked as required. The required release 1.10.0 is ignored and is not available to the customer for upgrade.
For more information about how to mark a release as required, see [Properties](releases-about#properties) in _About Channels and Releases_. For more information about how to synchronize licenses in the Admin Console, see [Update Licenses in the Admin Console](/enterprise/updating-licenses).
### Customer Types
Each customer is assigned one of the following types:
* **Development**: The Development type can be used internally by the development
team for testing and integration.
* **Trial**: The Trial type can be used for customers who are on 2-4 week trials
of your software.
* **Paid**: The Paid type identifies the customer as a paying customer for which
additional information can be provided.
* **Community**: The Community type is designed for a free or low cost version of your application. For more details about this type, see [Community Licenses](licenses-about-types).
* (Beta) **Single Tenant Vendor Managed**: The Single Tenant Vendor Managed type is for customers for whom your team is operating the application in infrastructure you fully control and operate. Single Tenant Vendor Managed licenses are free to use, but come with limited support. The Single Tenant Vendor Managed type is a Beta feature. Reach out to your Replicated account representative to get access.
Except for Community licenses, the license type is used solely for reporting purposes. A customer's access to your application is not affected by the type that you assign.
You can change the type of a license at any time in the Vendor Portal. For example, if a customer upgraded from a trial to a paid account, then you could change their license type from Trial to Paid for reporting purposes.
### About Managing Customers
Each customer record in the Vendor Portal has built-in fields and also supports custom fields:
* The built-in fields include values such as the customer name, customer email, and the license expiration date. You can optionally set initial values for the built-in fields so that each new customer created in the Vendor Portal starts with the same set of values.
* You can also create custom fields to define entitlements for your application. For example, you can create a custom field to set the number of active users permitted.
For more information, see [Manage Customer License Fields](/vendor/licenses-adding-custom-fields).
You can make changes to a customer record in the Vendor Portal at any time. The license ID, which is the unique identifier for the customer, never changes. For more information about managing customers in the Vendor Portal, see [Create and Manage Customers](releases-creating-customer).
### About the Customers Page
The following shows an example of the **Customers** page:

[View a larger version of this image](/images/customers-page.png)
From the **Customers** page, you can do the following:
* Create new customers.
* Download CSVs with customer and instance data.
* Search and filter customers.
* Click the **Manage customer** button to edit details such as the customer name and email, the custom license fields assigned to the customer, and the license expiration policy. For more information, see [Create and Manage Customers](releases-creating-customer).
* Download the license file for each customer.
* Click the **Customer reporting** button to view data about the active application instances associated with each customer. For more information, see [Customer Reporting](customer-reporting).
* View instance details for each customer, including the version of the application that this instance is running, the Kubernetes distribution of the cluster, the last check-in time, and more:
[View a larger version of this image](/images/customer-reporting-details.png)
* Archive customers. For more information, see [Create and Manage Customers](releases-creating-customer).
* Click on a customer on the **Customers** page to access the following customer-specific pages:
* [Reporting](#about-the-customer-reporting-page)
* [Manage customer](#about-the-manage-customer-page)
* [Support bundles](#about-the-customer-support-bundles-page)
### About the Customer Reporting Page
The **Reporting** page for a customer displays data about the active application instances associated with each customer. The following shows an example of the **Reporting** page for a customer that has two active application instances:

[View a larger version of this image](/images/customer-reporting-page.png)
For more information about interpreting the data on the **Reporting** page, see [Customer Reporting](customer-reporting).
### About the Manage Customer Page
The **Manage customer** page for a customer displays details about the customer license, including the customer name and email, the license expiration policy, custom license fields, and more.
The following shows an example of the **Manage customer** page:

[View a larger version of this image](/images/customer-details.png)
From the **Manage customer** page, you can view and edit the customer's license fields or archive the customer. For more information, see [Create and Manage Customers](releases-creating-customer).
### About the Customer Support Bundles Page
The **Support bundles** page for a customer displays details about the support bundles collected from the customer. Customers with the **Support Bundle Upload Enabled** entitlement can provide support bundles through the KOTS Admin Console, or you can upload support bundles manually in the Vendor Portal by going to **Troubleshoot > Upload a support bundle**. For more information about uploading and analyzing support bundles, see [Inspecting Support Bundles](support-inspecting-support-bundles).
The following shows an example of the **Support bundles** page:

[View a larger version of this image](/images/customer-support-bundles.png)
As shown in the screenshot above, the **Support bundles** page lists details about the collected support bundles, such as the date the support bundle was collected and the debugging insights found. You can click on a support bundle to view it in the **Support bundle analysis** page. You can also click **Delete** to delete the support bundle, or click **Customer Reporting** to view the **Reporting** page for the customer.
## About Licensing with Replicated
### About Syncing Licenses
When you edit customer licenses for an application installed with a Replicated installer (Embedded Cluster, KOTS, kURL), your customers can use the KOTS Admin Console to get the latest license details from the Vendor Portal, then deploy a new version that includes the license changes. Deploying a new version with the license changes ensures that any license fields that you have templated in your release using [KOTS template functions](/reference/template-functions-about) are rendered with the latest license details.
For online instances, KOTS pulls license details from the Vendor Portal when:
* A customer clicks **Sync license** in the Admin Console.
* An automatic or manual update check is performed by KOTS.
* An update is performed with Replicated Embedded Cluster. See [Perform Updates with Embedded Cluster](/enterprise/updating-embedded).
* An application status changes. See [Current State](instance-insights-details#current-state) in _Instance Details_.
For more information, see [Update Licenses in the Admin Console](/enterprise/updating-licenses).
### About Syncing Licenses in Air-Gapped Environments
To update licenses in air gap installations, customers need to upload the updated license file to the Admin Console.
After you update the license fields in the Vendor Portal, you can notify customers by either sending them a new license file or instructing them to log in to their Download Portal to download the new license.
For more information, see [Update Licenses in the Admin Console](/enterprise/updating-licenses).
### Retrieving License Details with the SDK API
The [Replicated SDK](replicated-sdk-overview) includes an in-cluster API that can be used to retrieve up-to-date customer license information from the Vendor Portal during runtime through the [`license`](/reference/replicated-sdk-apis#license) endpoints. This means that you can add logic to your application to get the latest license information without the customer needing to perform a license update. The SDK API polls the Vendor Portal for updated data every four hours.
In KOTS installations that include the SDK, users still need to update their licenses from the Admin Console as described in [About Syncing Licenses](#about-syncing-licenses) above. However, any logic in your application that uses the SDK API receives up-to-date license information without the customer needing to deploy a license update in the Admin Console.
For information about how to use the SDK API to query license entitlements at runtime, see [Query Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk).
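For example, the following is a minimal sketch of querying the SDK API from a pod inside the cluster. It assumes the SDK is reachable at its default in-cluster service (`replicated` on port 3000) and that a custom license field named `max_seats` exists; adjust both for your environment:
```bash
# Get general license information from the Replicated SDK in-cluster API
curl -s http://replicated:3000/api/v1/license/info

# Get the value of a custom license field (assumes a field named max_seats exists)
curl -s http://replicated:3000/api/v1/license/fields/max_seats
```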
### License Expiration Handling {#expiration}
The built-in `expires_at` license field defines the expiration date for a customer license. When you set an expiration date in the Vendor Portal, the `expires_at` field is encoded in ISO 8601 format (`2026-01-23T00:00:00Z`) and is set to midnight UTC at the beginning of the calendar day (`00:00:00`) on the date selected.
Replicated enforces the following logic when a license expires:
* By default, instances with expired licenses continue to run.
To change the behavior of your application when a license expires, you can add custom logic in your application that queries the `expires_at` field using the Replicated SDK in-cluster API. For more information, see [Query Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk).
* Customers with expired licenses cannot log in to the Replicated registry to pull a Helm chart for installation or upgrade.
* Customers with expired licenses cannot pull application images through the Replicated proxy registry or from the Replicated registry.
* In Replicated KOTS installations, KOTS prevents instances with expired licenses from receiving updates.
### Replacing Licenses for Existing Installations
Community licenses are the only license type that can be replaced with a new license without needing to reinstall the application. For more information, see [Community Licenses](licenses-about-types).
Unless the existing customer is using a community license, it is not possible to replace one license with another license without reinstalling the application. When you need to make changes to a customer's entitlements, Replicated recommends that you edit the customer's license details in the Vendor Portal, rather than issuing a new license.
---
# Manage Customer License Fields
This topic describes how to manage customer license fields in the Replicated Vendor Portal, including how to add custom fields and set initial values for the built-in fields.
## Set Initial Values for Built-In License Fields (Beta)
You can set initial values to populate the **Create Customer** form in the Vendor Portal when a new customer is created. This ensures that each new customer created from the Vendor Portal UI starts with the same set of built-in license field values.
:::note
Initial values are not applied to new customers created through the Vendor API v3. For more information, see [Create a customer](https://replicated-vendor-api.readme.io/reference/createcustomer-1) in the Vendor API v3 documentation.
:::
These _initial_ values differ from _default_ values in that setting initial values does not update the license field values for any existing customers.
To set initial values for built-in license fields:
1. In the Vendor Portal, go to **License Fields**.
1. Under **Built-in license options**, click **Edit** next to each license field where you want to set an initial value.

[View a larger version of this image](/images/edit-initial-value.png)
## Manage Custom License Fields
You can create custom license fields in the Vendor Portal. For example, you can create a custom license field to set the number of active users permitted. Or, you can create a field that sets the number of nodes a customer is permitted on their cluster.
The custom license fields that you create are displayed in the Vendor Portal for all new and existing customers. If the custom field is not hidden, it is also displayed to customers under the **Licenses** tab in the Replicated Admin Console.
### Limitation
The maximum size for a license field value is 64KB.
### Create Custom License Fields
To create a custom license field:
1. Log in to the Vendor Portal and select the application.
1. On the **License Fields** page, click **Create license field**.
[View a larger version of this image](/images/license-add-custom-field.png)
1. Complete the following fields:
| Field | Description |
|-----------------------|------------------------|
| Field | The name used to reference the field. This value cannot be changed. |
| Title| The display name for the field. This is how the field appears in the Vendor Portal and the Admin Console. You can change the title in the Vendor Portal. |
| Type| The field type. Supported formats include integer, string, text (multi-line string), and boolean values. This value cannot be changed. |
| Default | The default value for the field for both existing and new customers. It is a best practice to provide a default value when possible. The maximum size for a license field value is 64KB. |
| Required | If checked, this prevents the creation of customers unless this field is explicitly defined with a value. |
| Hidden | If checked, the field is not visible to your customer in the Replicated Admin Console. The field is still visible to you in the Vendor Portal. **Note**: The Hidden field is displayed only for vendors with access to the Replicated installers (KOTS, kURL, Embedded Cluster). |
### Update Custom License Fields
To update a custom license field:
1. Log in to the Vendor Portal and select the application.
1. On the **License Fields** page, click **Edit Field** on the right side of the target row. Changing the default value for a field updates the value for each existing customer record that has not overridden the default value.
:::important
Enabling **Is this field required?** updates the license field to be required on all new and existing customers. If you enable **Is this field required?**, you must either set a default value for the field or manually update each existing customer to provide a value for the field.
:::
### Set Customer-Specific Values for Custom License Fields
To set a customer-specific value for a custom license field:
1. Log in to the Vendor Portal and select the application.
1. Click **Customers**.
1. For the target customer, click the **Manage customer** button.
1. Under **Custom fields**, enter values for the target custom license fields for the customer.
:::note
The maximum size for a license field value is 64KB.
:::
[View a larger version of this image](/images/customer-license-custom-fields.png)
### Delete Custom License Fields
Deleted license fields and their values do not appear in the customer's license in any location, including your view in the Vendor Portal, the downloaded YAML version of the license, and the Admin Console **License** screen.
By default, deleting a custom license field also deletes all of the values associated with the field in each customer record.
Only administrators can delete license fields.
:::important
Replicated recommends that you take care when deleting license fields.
Outages can occur for existing deployments if your application or the Admin Console **Config** page expect a license file to provide a required value.
:::
To delete a custom license field:
1. Log in to the Vendor Portal and select the application.
1. On the **License Fields** page, click **Edit Field** on the right side of the target row.
1. Click **Delete** on the bottom left of the dialog.
1. (Optional) Enable **Preserve License Values** to save values for the license field that were not set by the default in each customer record. Preserved license values are not visible to you or the customer.
:::note
If you enable **Preserve License Values**, you can create a new field with the same name and `type` as the deleted field to reinstate the preserved values.
:::
1. Follow the instructions in the dialog and click **Delete**.
---
# Download Customer Licenses
This topic describes how to download a license file from the Replicated Vendor Portal.
For information about how to download customer licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
## Download Licenses
You can download license files for your customers from the **Customer** page in the Vendor Portal.
To download a license:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the **Customers** page.
1. In the row for the target customer, click the **Download License** button.

[View a larger version of this image](/images/download-license-button.png)
## Enable and Download Air Gap Licenses {#air-gap-license}
The **Airgap Download Enabled** license option allows KOTS to install an application without outbound internet access using the `.airgap` bundle.
To enable the air gap entitlement and download the license:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the **Customers** page.
1. Click on the name of the target customer and go to the **Manage customer** tab.
1. Under **License options**, enable the **Airgap Download Enabled** option. Click **Save Changes**.

[View a larger version of this image](/images/airgap-download-enabled.png)
1. At the top of the screen, click **Download license** to download the air gap enabled license.

[View a larger version of this image](/images/download-airgap-license.png)
---
# Manage Install Types for a License
This topic describes how to manage which installation types and options are enabled for a license.
## Overview
You can control which installation methods are available to each of your customers by enabling or disabling **Install types** fields in the customer's license.
The following shows an example of the **Install types** field in a license:

[View a larger version of this image](/images/license-install-types.png)
The installation types that are enabled or disabled for a license determine the following:
* The Replicated installers ([Replicated KOTS](../intro-kots), [Replicated Embedded Cluster](/vendor/embedded-overview), [Replicated kURL](/vendor/kurl-about)) that the customer's license entitles them to use
* The installation assets and/or instructions provided in the Replicated Download Portal for the customer
* The customer's KOTS Admin Console experience
Setting the supported installation types on a per-customer basis gives you greater control over the installation method used by each customer. It also allows you to provide a more curated Download Portal experience, in that customers will only see the installation assets and instructions that are relevant to them.
## Understanding Install Types {#install-types}
In the customer license, under **Install types**, the **Available install types** field allows you to enable and disable different installation methods for the customer.
You can enable one or more installation types for a license.
The following describes each installation type available, as well as the requirements for enabling each type:
| Install Type | Description | Requirements |
|---|---|---|
| Existing Cluster (Helm CLI) | Allows the customer to install with Helm in an existing cluster. The customer does not have access to the Replicated installers (Embedded Cluster, KOTS, and kURL). When the Helm CLI Air Gap Instructions (Helm CLI only) install option is also enabled, the Download Portal displays instructions on how to pull Helm installable images into a local repository. See Understanding Additional Install Options below. | The latest release promoted to the channel where the customer is assigned must contain one or more Helm charts. It can also include Replicated custom resources, such as the Embedded Cluster Config custom resource, the KOTS HelmChart, Config, and Application custom resources, or the Troubleshoot Preflight and SupportBundle custom resources. Any other Kubernetes resources in the release (such as Kubernetes Deployments or Services) must include the `kots.io/installer-only` annotation. The `kots.io/installer-only` annotation indicates that the Kubernetes resource is used only by the Replicated installers (Embedded Cluster, KOTS, and kURL). Example: `apiVersion: v1 kind: Service metadata: name: my-service annotations: kots.io/installer-only: "true"` |
| Existing Cluster (KOTS install) | Allows the customer to install with Replicated KOTS in an existing cluster. | |
| kURL Embedded Cluster (first generation product) | Allows the customer to install with Replicated kURL on a VM or bare metal server. Note: For new installations, enable Replicated Embedded Cluster (current generation product) instead of Replicated kURL (first generation product). | |
| Embedded Cluster (current generation product) | Allows the customer to install with Replicated Embedded Cluster on a VM or bare metal server. | |

## Understanding Additional Install Options {#install-options}

| Install Option | Description | Requirements |
|---|---|---|
| Helm CLI Air Gap Instructions (Helm CLI only) | When enabled, a customer will see instructions on the Download Portal on how to pull Helm installable images into their local repository. Helm CLI Air Gap Instructions is enabled by default when you select the Existing Cluster (Helm CLI) install type. For more information, see [Installing with Helm in Air Gap Environments](/vendor/helm-install-airgap). | The Existing Cluster (Helm CLI) install type must be enabled. |
| Air Gap Installation Option (Replicated Installers only) | When enabled, new installations with this license have an option in their Download Portal to install from an air gap package or do a traditional online installation. | At least one of the following Replicated install types must be enabled: Existing Cluster (KOTS install), kURL Embedded Cluster (first generation product), or Embedded Cluster (current generation product). |

| Field Name | Description |
|---|---|
| `appSlug` | The unique application slug that the customer is associated with. This value never changes. |
| `channelID` | The ID of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
| `channelName` | The name of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
| `licenseID`, `licenseId` | Unique ID for the installed license. This value never changes. |
| `customerEmail` | The customer email address. |
| `endpoint` | For applications installed with a Replicated installer (KOTS, kURL, Embedded Cluster), this is the endpoint that the KOTS Admin Console uses to synchronize the licenses and download updates. This is typically `https://replicated.app`. |
| `entitlementValues` | Includes both the built-in `expires_at` field and any custom license fields. For more information about adding custom license fields, see [Manage Customer License Fields](licenses-adding-custom-fields). |
| `expires_at` | Defines the expiration date for the license. The date is encoded in ISO 8601 format (`2026-01-23T00:00:00Z`) and is set to midnight UTC at the beginning of the calendar day (`00:00:00`) on the date selected. If a license does not expire, this field is missing. For information about the default behavior when a license expires, see [License Expiration Handling](licenses-about#expiration) in _About Customers_. |
| `licenseSequence` | Every time a license is updated, its sequence number is incremented. This value represents the license sequence that the client currently has. |
| `customerName` | The name of the customer. |
| `signature` | The base64-encoded license signature. This value will change when the license is updated. |
| `licenseType` | A string value that describes the type of the license, which is one of the following: `paid`, `trial`, `dev`, `single-tenant-vendor-managed` or `community`. For more information about license types, see [Managing License Type](licenses-about-types). |

| Field Name | Description |
|---|---|
| `isEmbeddedClusterDownloadEnabled` | If a license supports installation with Replicated Embedded Cluster, this field is set to `true` or missing. If Embedded Cluster installations are not supported, this field is `false`. This field requires that the vendor has the Embedded Cluster entitlement and that the release at the head of the channel includes an [Embedded Cluster Config](/reference/embedded-config) custom resource. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
| `isHelmInstallEnabled` | If a license supports Helm installations, this field is set to `true` or missing. If Helm installations are not supported, this field is set to `false`. This field requires that the vendor packages the application as Helm charts and, optionally, Replicated custom resources. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
| `isKotsInstallEnabled` | If a license supports installation with Replicated KOTS, this field is set to `true`. If KOTS installations are not supported, this field is either `false` or missing. This field requires that the vendor has the KOTS entitlement. |
| `isKurlInstallEnabled` | If a license supports installation with Replicated kURL, this field is set to `true` or missing. If kURL installations are not supported, this field is `false`. This field requires that the vendor has the kURL entitlement and a promoted kURL installer spec. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |

| Field Name | Description |
|---|---|
| `isAirgapSupported` | If a license supports air gap installations with the Replicated installers (KOTS, kURL, Embedded Cluster), then this field is set to `true`. If Replicated installer air gap installations are not supported, this field is missing. When you enable this field for a license, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. |
| `isHelmAirgapEnabled` | If a license supports Helm air gap installations, then this field is set to `true` or missing. If Helm air gap is not supported, this field is missing. When you enable this feature, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |

| Field Name | Description |
|---|---|
| `isDisasterRecoverySupported` | If a license supports the Embedded Cluster disaster recovery feature, this field is set to `true`. If a license does not support disaster recovery for Embedded Cluster, this field is either missing or `false`. **Note**: Embedded Cluster Disaster Recovery is in Alpha. To get access to this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com). For more information, see [Disaster Recovery for Embedded Cluster](/vendor/embedded-disaster-recovery). |
| `isGeoaxisSupported` | (kURL Only) If a license supports integration with GeoAxis, this field is set to `true`. If GeoAxis is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor has the GeoAxis entitlement. It also requires that the vendor has access to the Identity Service entitlement. |
| `isGitOpsSupported` | :::important KOTS Auto-GitOps is a legacy feature and is **not recommended** for use. For modern enterprise customers that prefer software deployment processes that use CI/CD pipelines, Replicated recommends the [Helm CLI installation method](/vendor/install-with-helm), which is more commonly used in these types of enterprise environments. ::: If a license supports the KOTS AutoGitOps workflow in the Admin Console, this field is set to `true`. If Auto-GitOps is not supported, this field is either `false` or missing. See [KOTS Auto-GitOps Workflow](/enterprise/gitops-workflow). |
| `isIdentityServiceSupported` | If a license supports identity-service enablement for the Admin Console, this field is set to `true`. If identity service is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor have access to the Identity Service entitlement. |
| `isSemverRequired` | If set to `true`, this field requires that the Admin Console orders releases according to Semantic Versioning. This field is controlled at the channel level. For more information about enabling Semantic Versioning on a channel, see [Semantic Versioning](releases-about#semantic-versioning) in _About Releases_. |
| `isSnapshotSupported` | If a license supports the snapshots backup and restore feature, this field is set to `true`. If a license does not support snapshots, this field is either missing or `false`. **Note**: This field requires that the vendor have access to the Snapshots entitlement. |
| `isSupportBundleUploadSupported` | If a license supports uploading a support bundle in the Admin Console, this field is set to `true`. If a license does not support uploading a support bundle, this field is either missing or `false`. |

| Method | Description |
|---|---|
| Promote the installer to a channel | The installer is promoted to one or more channels. All releases on the channel use the kURL installer that is currently promoted to that channel. There can be only one active kURL installer on each channel at a time. The benefit of promoting an installer to one or more channels is that you can create a single installer without needing to add a separate installer for each release. However, because all the releases on the channel will use the same installer, problems can occur if all releases are not tested with the given installer. |
| Include the installer in a release (Beta) | The installer is included as a manifest file in a release. This makes it easier to test the installer and release together. It also makes it easier to know which installer spec customers are using based on the application version that they have installed. |
[View a larger version of this image](/images/kurl-installers-page.png)
1. Edit the file to customize the installer. For guidance on which add-ons to choose, see [Requirements and Recommendations](#requirements-and-recommendations) below.
You can also go to the landing page at [kurl.sh](https://kurl.sh/) to build an installer then copy the provided YAML:
[View a larger version of this image](/images/kurl-build-an-installer.png)
1. Click **Save installer**. You can continue to edit your file until it is promoted.
1. Click **Promote**. In the **Promote Installer** dialog that opens, edit the fields:
[View a larger version of this image](/images/promote-installer.png)
| Field | Description |
|---|---|
| Channel | Select the channel or channels where you want to promote the installer. |
| Version label | Enter a version label for the installer. |
[View a larger version of this image](/images/kurl-build-an-installer.png)
1. Click **Save**. This saves a draft that you can continue to edit until you promote it.
1. Click **Promote**.
To make changes after promoting, create a new release.
## kURL Add-on Requirements and Recommendations {#requirements-and-recommendations}
kURL includes several add-ons for networking, storage, ingress, and more. The add-ons that you choose depend on the requirements for KOTS and the unique requirements for your application. For more information about each add-on, see the open source [kURL documentation](https://kurl.sh/docs/introduction/).
When creating a kURL installer, consider the following requirements and guidelines for kURL add-ons:
- You must include the KOTS add-on to support installation with KOTS and provision the KOTS Admin Console. See [KOTS add-on](https://kurl.sh/docs/add-ons/kotsadm) in the kURL documentation.
- To support the use of KOTS snapshots, Velero must be installed in the cluster. Replicated recommends that you include the Velero add-on in your kURL installer so that your customers do not have to manually install Velero.
:::note
During installation, the Velero add-on automatically deploys internal storage for backups. The Velero add-on requires the MinIO or Rook add-on to deploy this internal storage. If you include the Velero add-on without either the MinIO add-on or the Rook add-on, installation fails with the following error message: `Only Rook and Longhorn are supported for Velero Internal backup storage`.
:::
- You must select storage add-ons based on the KOTS requirements and the unique requirements for your application. For more information, see [About Selecting Storage Add-ons](packaging-installer-storage).
- kURL installers that are included in releases must pin specific add-on versions and cannot pin `latest` versions or x-ranges (such as 1.2.x). Pinning specific versions ensures the most testable and reproducible installations. For example, pin `Kubernetes 1.23.0` in your manifest to ensure that version 1.23.0 of Kubernetes is installed. For more information about pinning Kubernetes versions, see [Versions](https://kurl.sh/docs/create-installer/#versions) and [Versioned Releases](https://kurl.sh/docs/install-with-kurl/#versioned-releases) in the kURL open source documentation.
:::note
For kURL installers that are _not_ included in a release, pinning specific versions of Kubernetes and Kubernetes add-ons in the kURL installer manifest is not required, though is highly recommended.
:::
- After you configure a kURL installer, Replicated recommends that you customize host preflight checks to support the installation experience with kURL. Host preflight checks help ensure successful installation and the ongoing health of the cluster. For more information about customizing host preflight checks, see [Customize Host Preflight Checks for Kubernetes Installers](preflight-host-preflights).
- For installers included in a release, Replicated recommends that you define a preflight check in the release to ensure that the target kURL installer is deployed before the release is installed. For more information about how to define preflight checks, see [Define Preflight Checks](preflight-defining).
For example, the following preflight check uses the `yamlCompare` analyzer with the `kots.io/installer: "true"` annotation to compare the target kURL installer that is included in the release against the kURL installer that is currently deployed in the customer's environment. For more information about the `yamlCompare` analyzer, see [`yamlCompare`](https://troubleshoot.sh/docs/analyze/yaml-compare/) in the open source Troubleshoot documentation.
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: installer-preflight-example
spec:
  analyzers:
    - yamlCompare:
        annotations:
          kots.io/installer: "true"
        checkName: Kubernetes Installer
        outcomes:
          - fail:
              message: The kURL installer for this version differs from what you have installed. It is recommended that you run the updated kURL installer before deploying this version.
              uri: https://kurl.sh/my-application
          - pass:
              message: The kURL installer for this version matches what is currently installed.
```
---
# Conditionally Including or Excluding Resources
This topic describes how to include or exclude optional application resources based on one or more conditional statements. The information in this topic applies to Helm chart- and standard manifest-based applications.
## Overview
Software vendors often need a way to conditionally deploy resources for an application depending on users' configuration choices. For example, a common use case is giving the user the choice to use an external database or an embedded database. In this scenario, when a user chooses to use their own external database, it is not desirable to deploy the embedded database resources.
There are different options for creating conditional statements to include or exclude resources based on the application type (Helm chart- or standard manifest-based) and the installation method (Replicated KOTS or Helm CLI).
### About Replicated Template Functions
For applications deployed with KOTS, Replicated template functions are available for creating the conditional statements that control which optional resources are deployed for a given user. Replicated template functions can be used in standard manifest files such as Replicated custom resources or Kubernetes resources like StatefulSets, Secrets, and Services.
For example, the Replicated ConfigOptionEquals template function returns true if the specified configuration option value is equal to a supplied value. This is useful for creating conditional statements that include or exclude a resource based on a user's application configuration choices.
For more information about the available Replicated template functions, see [About Template Functions](/reference/template-functions-about).
## Include or Exclude Helm Charts
This section describes methods for including or excluding Helm charts from your application deployment.
### Helm Optional Dependencies
Helm supports adding a `condition` field to dependencies in the Helm chart `Chart.yaml` file to include subcharts based on one or more boolean values evaluating to true.
For more information about working with dependencies and defining optional dependencies for Helm charts, see [Dependencies](https://helm.sh/docs/chart_best_practices/dependencies/) in the Helm documentation.
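For example, the following is a sketch of a `Chart.yaml` file that declares an optional `postgresql` subchart. The chart name, version, and repository are illustrative; the subchart is rendered only when the `postgresql.enabled` value evaluates to true in the parent chart's values:
```yaml
apiVersion: v2
name: my-app
version: 1.0.0
dependencies:
  - name: postgresql
    version: "12.1.7"
    repository: https://charts.bitnami.com/bitnami
    # Include the subchart only when this value is true
    condition: postgresql.enabled
```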
### HelmChart `exclude` Field
For Helm chart-based applications installed with KOTS, you can configure KOTS to exclude certain Helm charts from deployment using the HelmChart custom resource [`exclude`](/reference/custom-resource-helmchart#exclude) field. When the `exclude` field is set to a conditional statement, KOTS excludes the chart if the condition evaluates to `true`.
The following example uses the `exclude` field and the ConfigOptionEquals template function to exclude a postgresql Helm chart when the `external_postgres` option is selected on the Replicated Admin Console **Config** page:
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: postgresql
spec:
  exclude: 'repl{{ ConfigOptionEquals `postgres_type` `external_postgres` }}'
  chart:
    name: postgresql
    chartVersion: 12.1.7
  releaseName: samplechart-release-1
```
## Include or Exclude Standard Manifests
For standard manifest-based applications installed with KOTS, you can use the `kots.io/exclude` or `kots.io/when` annotations to include or exclude resources based on a conditional statement.
By default, if neither `kots.io/exclude` nor `kots.io/when` is present on a resource, the resource is included.
### Requirements
The `kots.io/exclude` and `kots.io/when` annotations have the following requirements:
* Only one of the `kots.io/exclude` or `kots.io/when` annotations can be present on a single resource. If both are present, the `kots.io/exclude` annotation is applied, and the `kots.io/when` annotation is ignored.
* The values of the `kots.io/exclude` and `kots.io/when` annotations must be wrapped in quotes. This is because Kubernetes annotations must be strings. For more information about working with Kubernetes annotations, see [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) in the Kubernetes documentation.
### `kots.io/exclude`
When the `kots.io/exclude` annotation is present on a resource and its value evaluates to true, the resource is excluded from the deployment.
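For example, the following sketch excludes an illustrative ConfigMap for an embedded database when the user selects an external database on the Admin Console **Config** page. The resource and config option names are assumptions for the example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: embedded-postgres-config
  annotations:
    # Exclude this resource when the user selects an external database
    kots.io/exclude: 'repl{{ ConfigOptionEquals `postgres_type` `external_postgres` }}'
data:
  POSTGRES_DB: appdb
```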
[View a larger version of this image](/images/add-external-registry.png)
1. In the **Provider** drop-down, select your registry provider.
1. Complete the fields in the dialog, depending on the provider that you chose:
:::note
Replicated stores your credentials encrypted and securely. Your credentials and the encryption key do not leave Replicated servers.
:::
* **Amazon ECR**
| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as 123456689.dkr.ecr.us-east-1.amazonaws.com |
| Access Key ID | Enter the Access Key ID for a Service Account User that has pull access to the registry. See Setting up the Service Account User. |
| Secret Access Key | Enter the Secret Access Key for the Service Account User. |

* **DockerHub**

| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as index.docker.io. |
| Auth Type | Select the authentication type for a DockerHub account that has pull access to the registry. |
| Username | Enter the username for the account. |
| Password or Token | Enter the password or token for the account, depending on the authentication type you selected. |

* **GitHub Container Registry**

| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry. |
| Username | Enter the username for an account that has pull access to the registry. |
| GitHub Token | Enter the token for the account. |

* **Google Artifact Registry**

| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as us-east1-docker.pkg.dev |
| Auth Type | Select the authentication type for a Google Cloud Platform account that has pull access to the registry. |
| Service Account JSON Key or Token | Enter the JSON Key from Google Cloud Platform assigned with the Artifact Registry Reader role, or the token for the account, depending on the authentication type you selected. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
* **Google Container Registry**
| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as gcr.io. |
| Service Account JSON Key | Enter the JSON Key for a Service Account in Google Cloud Platform that is assigned the Storage Object Viewer role. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
* **Quay.io**
| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as quay.io. |
| Username and Password | Enter the username and password for an account that has pull access to the registry. |
* **Sonatype Nexus**
| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as nexus.example.net. |
| Username and Password | Enter the username and password for an account that has pull access to the registry. |
* **Other**
| Field | Instructions |
|---|---|
| Hostname | Enter the host name for the registry, such as example.registry.com. |
| Username and Password | Enter the username and password for an account that has pull access to the registry. |
| Product Phase | Definition |
|---|---|
| Alpha | A product or feature that is exploratory or experimental. Typically, access to alpha features and their documentation is limited to customers providing early feedback. While most alpha features progress to beta and general availability (GA), some are deprecated based on assessment learnings. |
| Beta | A product or feature that is typically production-ready, but has not yet met Replicated’s definition of GA. Documentation for beta products and features is published on the Replicated Documentation site with a "(Beta)" label. Beta products or features follow the same build and test processes required for GA. Contact your Replicated account representative if you have questions about why a product or feature is beta. |
| “GA” - General Availability | A product or feature that has been validated as both production-ready and value-additive by a percentage of Replicated customers. Products in the GA phase are typically those that are available for purchase from Replicated. |
| “LA” - Limited Availability | A product has reached the Limited Availability phase when it is no longer available for new purchases from Replicated. Updates will be primarily limited to security patches, critical bugs and features that enable migration to GA products. |
| “EOA” - End of Availability | A product has reached the End of Availability phase when it is no longer available for renewal purchase by existing customers. This date may coincide with the Limited Availability phase. This product is considered deprecated, and will move to End of Life after a determined support window. Product maintenance is limited to critical security issues only. |
| “EOL” - End of Life | A product has reached its End of Life, and will no longer be supported, patched, or fixed by Replicated. Associated product documentation may no longer be available. The Replicated team will continue to engage to migrate end customers to GA product based deployments of your application. |
| Replicated Product | Product Phase | End of Availability | End of Life |
|---|---|---|---|
| Compatibility Matrix | GA | N/A | N/A |
| Replicated SDK | Beta | N/A | N/A |
| Replicated KOTS Installer | GA | N/A | N/A |
| Replicated kURL Installer | GA | N/A | N/A |
| Replicated Embedded Cluster Installer | GA | N/A | N/A |
| Replicated Classic Native Installer | EOL | 2023-12-31* | 2024-12-31* |
| Kubernetes Version | Embedded Cluster Versions | KOTS Versions | kURL Versions | End of Replicated Support |
|---|---|---|---|---|
| 1.32 | N/A | N/A | N/A | 2026-02-28 |
| 1.31 | N/A | 1.117.0 and later | v2024.08.26-0 and later | 2025-10-28 |
| 1.30 | 1.16.0 and later | 1.109.1 and later | v2024.05.03-0 and later | 2025-06-28 |
This example uses Helm template functions to render the credentials and connection details for the MySQL server that were supplied by the user. Additionally, it uses Helm template functions to create a conditional statement so that the MySQL collector and analyzer are included in the preflight checks only when MySQL is deployed, as indicated by the `.Values.global.mysql.enabled` field evaluating to `true`.
For more information about using Helm template functions to access values from the values file, see Values Files.
```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    troubleshoot.sh/kind: preflight
  name: "{{ .Release.Name }}-preflight-config"
stringData:
  preflight.yaml: |
    apiVersion: troubleshoot.sh/v1beta2
    kind: Preflight
    metadata:
      name: preflight-sample
    spec:
      {{ if eq .Values.global.mysql.enabled true }}
      collectors:
        - mysql:
            collectorName: mysql
            uri: '{{ .Values.global.externalDatabase.user }}:{{ .Values.global.externalDatabase.password }}@tcp({{ .Values.global.externalDatabase.host }}:{{ .Values.global.externalDatabase.port }})/{{ .Values.global.externalDatabase.database }}?tls=false'
      {{ end }}
      analyzers:
        {{ if eq .Values.global.mysql.enabled true }}
        - mysql:
            checkName: Must be MySQL 8.x or later
            collectorName: mysql
            outcomes:
              - fail:
                  when: connected == false
                  message: Cannot connect to MySQL server
              - fail:
                  when: version < 8.x
                  message: The MySQL server must be at least version 8
              - pass:
                  message: The MySQL server is ready
        {{ end }}
```
This example uses KOTS template functions in the Config context to render the credentials and connection details for the MySQL server that were supplied by the user on the Replicated Admin Console **Config** page. Replicated recommends using a template function for the URI, as shown above, to avoid exposing sensitive information. For more information about template functions, see About Template Functions.
This example also uses an analyzer with `strict: true`, which prevents installation from continuing if the preflight check fails.
The Admin Console displays the fail, warn, or pass outcome for this preflight check during KOTS installation or upgrade. Because `strict: true` is set, a fail outcome blocks the deployment.
| Flag: Value | Description |
|---|---|
| `hostPreflightIgnore: true` | Ignores host preflight failures and warnings. The installation proceeds regardless of host preflight outcomes. |
| `hostPreflightEnforceWarnings: true` | Blocks an installation if the results include a warning. |
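For reference, these flags are set in the kURL Installer spec. The following is a minimal sketch; the placement under the `kurl` add-on is an assumption, so confirm it against the kURL documentation:
```yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: my-installer
spec:
  kurl:
    # Assumed placement: proceed even if host preflights fail or warn
    hostPreflightIgnore: true
```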
[View a larger version of this image](/images/helm-install-preflights.png)
1. Run the commands provided in the dialog:
1. Run the first command to log in to the Replicated registry:
```
helm registry login registry.replicated.com --username USERNAME --password PASSWORD
```
Where:
- `USERNAME` is the customer's email address.
- `PASSWORD` is the customer's license ID.
**Example:**
```
helm registry login registry.replicated.com --username example@companyname.com --password 1234abcd
```
1. Run the second command to install the kubectl plugin with krew:
```
curl https://krew.sh/preflight | bash
```
1. Run the third command to run preflight checks:
```
helm template oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART | kubectl preflight -
```
Where:
- `APP_SLUG` is the name of the application.
- `CHANNEL` is the lowercased name of the release channel.
- `CHART` is the name of the Helm chart.
**Examples:**
```
helm template oci://registry.replicated.com/gitea-app/unstable/gitea | kubectl preflight -
```
```
helm template oci://registry.replicated.com/gitea-app/unstable/gitea --values values.yaml | kubectl preflight -
```
For all available options with this command, see [Run Preflight Checks using the CLI](https://troubleshoot.sh/docs/preflight/cli-usage/#options) in the open source Troubleshoot documentation.
1. (Optional) Run the fourth command to install the application. For more information, see [Install with Helm](install-with-helm).
## (Optional) Save Preflight Check Results
The output of the preflight plugin shows the pass, warn, or fail message for each preflight check, depending on how the outcomes were configured. You can ask your users to send you the results of the preflight checks if needed.
To save the results of preflight checks to a `.txt` file, users can press `s` when viewing the results from the CLI, as shown in the example below:

[View a larger version of this image](/images/helm-preflight-save-output.png)
---
# About Preflight Checks and Support Bundles
This topic provides an introduction to preflight checks and support bundles, which are provided by the Troubleshoot open source project.
For more information, see the [Troubleshoot](https://troubleshoot.sh/docs/collect/) documentation.
## Overview
Preflight checks and support bundles are provided by the Troubleshoot open source project, which is maintained by Replicated. Troubleshoot is a kubectl plugin that provides diagnostic tools for Kubernetes applications. For more information, see the open source [Troubleshoot](https://troubleshoot.sh/docs/collect/) documentation.
Preflight checks and support bundles analyze data from customer environments to provide insights that help users to avoid or troubleshoot common issues with an application:
* **Preflight checks** run before an application is installed to check that the customer environment meets the application requirements.
* **Support bundles** collect troubleshooting data from customer environments to help users diagnose problems with application deployments.
Preflight checks and support bundles consist of _collectors_, _redactors_, and _analyzers_ that are defined in a YAML specification. When preflight checks or support bundles are executed, data is collected, redacted, then analyzed to provide insights to users, as illustrated in the following diagram:

[View a larger version of this image](/images/troubleshoot-workflow-diagram.png)
For more information about each step in this workflow, see the sections below.
### Collect
During the collection phase, _collectors_ gather information from the cluster, the environment, the application, and other sources.
The data collected depends on the types of collectors that are included in the preflight or support bundle specification. For example, the Troubleshoot project provides collectors that can gather information about the Kubernetes version that is running in the cluster, information about database servers, logs from pods, and more.
For more information, see the [Collect](https://troubleshoot.sh/docs/collect/) section in the Troubleshoot documentation.
### Redact
During the redact phase, _redactors_ censor sensitive customer information from the data before analysis. By default, the following information is automatically redacted:
- Passwords
- API token environment variables in JSON
- AWS credentials
- Database connection strings
- URLs that include usernames and passwords
For Replicated KOTS installations, it is also possible to add custom redactors to redact additional data. For more information, see the [Redact](https://troubleshoot.sh/docs/redact/) section in the Troubleshoot documentation.
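As an illustration, a custom redactor might look like the following minimal sketch (the value to redact is hypothetical):
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Redactor
metadata:
  name: example-redactor
spec:
  redactors:
    - name: redact internal hostnames
      removals:
        values:
          # Hypothetical value to censor from collected files
          - internal.example.com
```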
### Analyze
During the analyze phase, _analyzers_ use the redacted data to provide insights to users.
For preflight checks, analyzers define the pass, fail, and warning outcomes, and can also display custom messages to the user. For example, you can define a preflight check that fails if the cluster's Kubernetes version does not meet the minimum version that your application supports.
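For example, a minimal preflight spec for this kind of check might look like the following sketch (the version threshold and messages are hypothetical):
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: kubernetes-version-check
spec:
  analyzers:
    - clusterVersion:
        checkName: Kubernetes version
        outcomes:
          - fail:
              when: "< 1.25.0"
              message: This application requires Kubernetes 1.25.0 or later.
          - pass:
              message: The Kubernetes version meets the minimum requirement.
```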
For support bundles, analyzers can be used to identify potential problems and share relevant troubleshooting guidance with users. Additionally, when a support bundle is uploaded to the Vendor Portal, it is extracted and automatically analyzed. The goal of analyzers in support bundles is to surface known issues or hints of what might be a problem to make troubleshooting easier.
For more information, see the [Analyze](https://troubleshoot.sh/docs/analyze/) section in the Troubleshoot documentation.
## Preflight Checks
This section provides an overview of preflight checks, including how preflights are defined and run.
### Overview
Preflight checks let you define requirements for the cluster where your application is installed. When run, preflight checks provide clear feedback to your customer about any missing requirements or incompatibilities in the cluster before they install or upgrade your application. For KOTS installations, preflight checks can also be used to block the deployment of the application if one or more requirements are not met.
Thorough preflight checks provide increased confidence that an installation or upgrade will succeed and help prevent support escalations.
### About Host Preflights {#host-preflights}
_Host preflight checks_ automatically run during [Replicated Embedded Cluster](/vendor/embedded-overview) and [Replicated kURL](/vendor/kurl-about) installations on a VM or bare metal server. The purpose of host preflight checks is to verify that the user's installation environment meets the requirements of the Embedded Cluster or kURL installer, such as checking the number of CPU cores in the system, available disk space, and memory usage. If any of the host preflight checks fail, installation is blocked and a message describing the failure is displayed.
Host preflight checks are separate from any application-specific preflight checks that are defined in the release, which run in the Admin Console before the application is deployed with KOTS. Both Embedded Cluster and kURL have default host preflight checks that are specific to the requirements of the given installer. For kURL installations, it is possible to customize the default host preflight checks.
For more information about the default Embedded Cluster host preflight checks, see [Host Preflight Checks](/vendor/embedded-using#about-host-preflight-checks) in _Using Embedded Cluster_.
For more information about kURL host preflight checks, including information about how to customize the defaults, see [Customize Host Preflight Checks for kURL](/vendor/preflight-host-preflights).
### Defining Preflights
To add preflight checks for your application, create a Preflight YAML specification that defines the collectors and analyzers that you want to include.
For information about how to add preflight checks to your application, including examples, see [Define Preflight Checks](preflight-defining).
### Blocking Installation with Required (Strict) Preflights
For applications installed with KOTS, it is possible to block the deployment of a release if a preflight check fails. This is helpful when it is necessary to prevent an installation or upgrade from continuing unless a given requirement is met.
You can add required preflight checks for an application by including `strict: true` for the target analyzer in the preflight specification. For more information, see [Block Installation with Required (Strict) Preflights](preflight-defining#strict) in _Define Preflight Checks_.
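For example, marking an analyzer as required might look like the following sketch (the analyzer and threshold are hypothetical):
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
  name: required-checks
spec:
  analyzers:
    - nodeResources:
        checkName: Total CPU cores in the cluster
        # Marks this check as required; KOTS blocks deployment if it fails
        strict: true
        outcomes:
          - fail:
              when: "sum(cpuCapacity) < 4"
              message: The cluster must contain at least 4 CPU cores.
          - pass:
              message: The cluster has sufficient CPU cores.
```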
### Running Preflights
This section describes how users can run preflight checks for KOTS and Helm installations.
#### Replicated Installations
For Replicated installations with Embedded Cluster, KOTS, or kURL, preflight checks run automatically as part of the installation process. The results of the preflight checks are displayed either in the KOTS Admin Console or in the KOTS CLI, depending on the installation method.
Additionally, users can access preflight checks from the Admin Console after installation to view their results and optionally re-run the checks.
The following shows an example of the results of preflight checks displayed in the Admin Console during installation:

[View a larger version of this image](/images/preflight-warning.png)
#### Helm Installations
For installations with Helm, the preflight kubectl plugin is required to run preflight checks. The preflight plugin is a client-side utility that adds a single binary to the path. For more information, see [Getting Started](https://troubleshoot.sh/docs/) in the Troubleshoot documentation.
Users can optionally run preflight checks before they run `helm install`. The results of the preflight checks are then displayed through the CLI, as shown in the example below:

[View a larger version of this image](/images/helm-preflight-save-output.png)
For more information, see [Run Preflight Checks for Helm Installations](preflight-running).
## Support Bundles
This section provides an overview of support bundles, including how support bundles are customized and generated.
### Overview
Support bundles collect and analyze troubleshooting data from customer environments, helping both users and support teams diagnose problems with application deployments.
Support bundles can collect a variety of important cluster-level data from customer environments, such as:
* Pod logs
* Node resources and status
* The status of replicas in a Deployment
* Cluster information
* Resources deployed to the cluster
* The history of Helm releases installed in the cluster
Support bundles can also be used for more advanced use cases, such as checking that a command successfully executes in a pod in the cluster, or that an HTTP request returns a successful response.
Support bundles then use the data collected to provide insights to users on potential problems or suggested troubleshooting steps. The troubleshooting data collected and analyzed by support bundles not only helps users to self-resolve issues with their application deployment, but also helps reduce the amount of time required by support teams to resolve requests by ensuring they have access to all the information they need up front.
### About Host Support Bundles
For installations on VMs or bare metal servers with [Replicated Embedded Cluster](/vendor/embedded-overview) or [Replicated kURL](/vendor/kurl-about), it is possible to generate a support bundle that includes host-level information to help troubleshoot failures related to host configuration like DNS, networking, or storage problems.
For Embedded Cluster installations, a default spec can be used to generate support bundles that include cluster- and host-level information. See [Generate Host Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
For kURL installations, vendors can customize a host support bundle spec for their application. See [Generate Host Bundles for kURL](/vendor/support-host-support-bundles).
### Customizing Support Bundles
To enable support bundles for your application, add a support bundle YAML specification to a release. An empty support bundle specification automatically includes several default collectors and analyzers. You can also optionally customize the support bundle specification by adding, removing, or editing collectors and analyzers.
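For example, the following minimal specification (a sketch; the metadata name is arbitrary) relies entirely on the default collectors and analyzers:
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: example
spec:
  collectors: []
```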
For more information, see [Add and Customize Support Bundles](support-bundle-customizing).
### Generating Support Bundles
Users generate support bundles as `tar.gz` files from the command line, using the support-bundle kubectl plugin. Your customers can share their support bundles with your team by sending you the resulting `tar.gz` file.
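For example, assuming a support bundle spec saved locally as `support-bundle.yaml`, a user might run:
```bash
# Collects, redacts, and analyzes data, then writes a support bundle .tar.gz archive to the current directory
kubectl support-bundle ./support-bundle.yaml
```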
KOTS users can also generate and share support bundles from the KOTS Admin Console.
For more information, see [Generate Support Bundles](support-bundle-generating).
---
# About the Replicated Proxy Registry
This topic describes how the Replicated proxy registry can be used to grant proxy access to your application's private images or allow pull through access of public images.
## Overview
If your application images are available in a private image registry exposed to the internet such as Docker Hub or Amazon Elastic Container Registry (ECR), then the Replicated proxy registry can grant proxy, or _pull-through_, access to the images without exposing registry credentials to your customers. When you use the proxy registry, you do not have to modify the process that you already use to build and push images to deploy your application.
To grant proxy access, the proxy registry uses the customer licenses that you create in the Replicated vendor portal. This allows you to revoke a customer’s ability to pull private images by editing their license, rather than having to manage image access through separate identity or authentication systems. For example, when a trial license expires, the customer's ability to pull private images is automatically revoked.
The following diagram demonstrates how the proxy registry pulls images from your external registry, and how deployed instances of your application pull images from the proxy registry:

[View a larger version of this image](/images/private-registry-diagram-large.png)
## About Enabling the Proxy Registry
The proxy registry requires read-only credentials to your private registry to access your application images. See [Connect to an External Registry](/vendor/packaging-private-images).
After connecting your registry, the steps to enable the proxy registry vary depending on your application deployment method. For more information, see:
* [Using the Proxy Registry with KOTS Installations](/vendor/private-images-kots)
* [Using the Proxy Registry with Helm Installations](/vendor/helm-image-registry)
## About Allowing Pull-Through Access of Public Images
Using the Replicated proxy registry to grant pull-through access to public images can simplify network access requirements for your customers, as they only need to whitelist a single domain (either `proxy.replicated.com` or your custom domain) instead of multiple registry domains.
For more information about how to pull public images through the proxy registry, see [Connecting to a Public Registry through the Proxy Registry](/vendor/packaging-public-images).
---
# Use the Proxy Registry with KOTS Installations
This topic describes how to use the Replicated proxy registry with applications deployed with Replicated KOTS.
## Overview
Replicated KOTS automatically creates the required image pull secret for accessing the Replicated proxy registry during application deployment. When possible, KOTS also automatically rewrites image names in the application manifests to the location of the image at `proxy.replicated.com` or your custom domain.
### Image Pull Secret
During application deployment, KOTS automatically creates an `imagePullSecret` with `type: kubernetes.io/dockerconfigjson` that is based on the customer license. This secret is used to authenticate with the proxy registry and grant proxy access to private images.
For information about how Kubernetes uses the `kubernetes.io/dockerconfigjson` Secret type to authenticate to a private image registry, see [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) in the Kubernetes documentation.
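For illustration only, a Pod that pulls a private image through the proxy registry references a pull secret of this type (the secret and image names below are hypothetical; KOTS creates and injects the real secret automatically):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: proxy.replicated.com/proxy/my-kots-app/quay.io/my-org/api:v1.0.1
  imagePullSecrets:
    # Hypothetical name; KOTS manages the actual secret
    - name: my-kots-app-registry
```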
### Image Location Patching (Standard Manifests and HelmChart v1)
For applications packaged with standard Kubernetes manifests (or Helm charts deployed with the [HelmChart v1](/reference/custom-resource-helmchart) custom resource), KOTS automatically patches image names to the location of the image at `proxy.replicated.com` or your custom domain during deployment. If KOTS receives a 401 response when attempting to load image manifests using the image reference from the PodSpec, it assumes that this is a private image that must be proxied through the proxy registry.
KOTS uses Kustomize to patch the `midstream/kustomization.yaml` file to change the image name during deployment to reference the proxy registry. For example, a PodSpec for a Deployment references a private image hosted at `quay.io/my-org/api:v1.0.1`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- name: api
image: quay.io/my-org/api:v1.0.1
```
When this application is deployed, KOTS detects that it cannot access the image at `quay.io`, so it creates a patch in the `midstream/kustomization.yaml` file that changes the image name in all manifest files for the application. This causes the container runtime in the cluster to pull the images from the proxy registry, using the license information provided to KOTS for authentication:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
bases:
- ../../base
images:
- name: quay.io/my-org/api:v1.0.1
newName: proxy.replicated.com/proxy/my-kots-app/quay.io/my-org/api
```
## Enable the Proxy Registry
This section describes how to enable the proxy registry for applications deployed with KOTS, including how to ensure that image names are rewritten and that the required image pull secret is provided.
To enable the proxy registry:
1. In the Vendor Portal, go to **Images > Add external registry** and provide read-only credentials for your registry. This allows Replicated to access the images through the proxy registry. See [Add Credentials for an External Registry](packaging-private-images#add-credentials-for-an-external-registry) in _Connecting to an External Registry_.
1. (Recommended) Go to **Custom Domains > Add custom domain** and add a custom domain for the proxy registry. See [Use Custom Domains](custom-domains-using).
1. Rewrite images names to the location of the image at `proxy.replicated.com` or your custom domain. Also, ensure that the correct image pull secret is provided for all private images. The steps required to configure image names and add the image pull secret vary depending on your application type:
* **HelmChart v2**: For Helm charts deployed with the [HelmChart v2](/reference/custom-resource-helmchart-v2) custom resource, configure the HelmChart v2 custom resource to dynamically update image names in your Helm chart and to inject the image pull secret that is automatically created by KOTS. For instructions, see [Configure the HelmChart Custom Resource v2](/vendor/helm-native-v2-using). A brief sketch is also provided after these steps.
* **Standard Manifests or HelmChart v1**: For standard manifest-based applications or Helm charts deployed with the [HelmChart v1](/reference/custom-resource-helmchart) custom resource, no additional configuration is required. KOTS automatically rewrites image names and injects image pull secrets during deployment for these application types.
:::note
The HelmChart custom resource `apiVersion: kots.io/v1beta1` is deprecated. For installations with Replicated KOTS v1.99.0 and later, use the HelmChart custom resource with `apiVersion: kots.io/v1beta2` instead. See [HelmChart v2](/reference/custom-resource-helmchart-v2) and [Configuring the HelmChart Custom Resource v2](/vendor/helm-native-v2-using).
:::
* **Kubernetes Operators**: For applications packaged with Kubernetes Operators, KOTS cannot modify pods that are created at runtime by the Operator. To support the use of private images in all environments, the Operator code should use KOTS functionality to determine the image name and image pull secrets for all pods when they are created. For instructions, see [Reference Images](/vendor/operator-referencing-images) in the _Packaging Kubernetes Operators_ section.
1. If you are deploying Pods to namespaces other than the application namespace, add the namespace to the `additionalNamespaces` attribute of the KOTS Application custom resource. This ensures that KOTS can provision the `imagePullSecret` in the namespace to allow the Pod to pull the image. For instructions, see [Define Additional Namespaces](operator-defining-additional-namespaces).
1. (Optional) Add a custom domain for the proxy registry instead of `proxy.replicated.com`. See [Use Custom Domains](custom-domains-using).
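The following is a minimal sketch of how the HelmChart v2 `values` field might rewrite an image and inject the pull secret using KOTS template functions. The chart value keys (`image.registry`, `image.repository`, `imagePullSecrets`) are assumptions that depend on your chart:
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 1.0.0
  values:
    image:
      # Use the end customer's local registry if configured, otherwise the proxy registry
      registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "proxy.replicated.com" }}'
      repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/my-kots-app/quay.io/my-org" }}/api'
    imagePullSecrets:
      # Inject the pull secret that KOTS creates automatically
      - name: '{{repl ImagePullSecretName }}'
```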
---
# Use the Replicated Registry for KOTS Installations
This topic describes how to push images to the Replicated private registry.
## Overview
For applications installed with KOTS, you can host private images on the Replicated registry. Hosting your images on the Replicated registry is useful if you do not already have your images in an existing private registry. It is also useful for testing purposes.
Images pushed to the Replicated registry are displayed on the **Images** page in the Vendor Portal:

[View a larger version of this image](/images/images-replicated-registry.png)
For information about security for the Replicated registry, see [Replicated Registry Security](packaging-private-registry-security).
## Limitations
The Replicated registry has the following limitations:
* You cannot delete images from the Replicated registry. As a workaround, you can push a new, empty image to the registry using the same tags as the target image. Replicated does not recommend removing tags from the registry because it could break older releases of your application.
* When using Docker Build to build and push images to the Replicated registry, provenance attestations are not supported. To avoid a 400 error, include the `--provenance=false` flag to disable all provenance attestations. For more information, see [docker buildx build](https://docs.docker.com/engine/reference/commandline/buildx_build/#provenance) and [Provenance Attestations](https://docs.docker.com/build/attestations/slsa-provenance/) in the Docker documentation.
* You might encounter a timeout error when pushing images with layers close to or exceeding 2GB in size, such as: "received unexpected HTTP status: 524." To work around this, reduce the size of the image layers and push the image again. If the 524 error persists, continue decreasing the layer sizes until the push is successful.
## Push Images to the Replicated Registry
This procedure describes how to tag and push images to the Replicated registry. For more information about building, tagging, and pushing Docker images, see the
[Docker CLI documentation](https://docs.docker.com/engine/reference/commandline/cli/).
To push images to the Replicated registry:
1. Do one of the following to connect with the `registry.replicated.com` container registry:
* **(Recommended) Log in with a user token**: Use `docker login registry.replicated.com` with your Vendor Portal email as the username and a Vendor Portal user token as the password. For more information, see [User API Tokens](replicated-api-tokens#user-api-tokens) in _Generating API Tokens_.
* **Log in with a service account token:** Use `docker login registry.replicated.com` with a Replicated Vendor Portal service account token as the password. If you have an existing team token, you can use that instead. You can use any string as the username. For more information, see [Service Accounts](replicated-api-tokens#service-accounts) in _Generating API Tokens_.
:::note
Team API tokens are deprecated and cannot be generated. If you are already using team API tokens, Replicated recommends that you migrate to Service Accounts or User API tokens instead because these options provide better granular control over token access.
:::
* **Log in with your credentials**: Use `docker login registry.replicated.com` with your Vendor Portal email and password as the credentials.
1. Tag your private image with the Replicated registry hostname in the standard
Docker format:
```
docker tag IMAGE_NAME registry.replicated.com/APPLICATION_SLUG/TARGET_IMAGE_NAME:TAG
```
Where:
* `IMAGE_NAME` is the name of the existing private image for your application.
* `APPLICATION_SLUG` is the unique slug for the application. You can find the application slug on the **Application Settings** page in the Vendor Portal. For more information, see [Get the Application Slug](/vendor/vendor-portal-manage-app#slug) in _Managing Applications_.
* `TARGET_IMAGE_NAME` is a name for the image. Replicated recommends that the `TARGET_IMAGE_NAME` is the same as the `IMAGE_NAME`.
* `TAG` is a tag for the image.
For example:
```bash
docker tag worker registry.replicated.com/myapp/worker:1.0.1
```
1. Push your private image to the Replicated registry using the following format:
```
docker push registry.replicated.com/APPLICATION_SLUG/TARGET_IMAGE_NAME:TAG
```
Where:
* `APPLICATION_SLUG` is the unique slug for the application.
* `TARGET_IMAGE_NAME` is a name for the image. Use the same name that you used when tagging the image in the previous step.
* `TAG` is a tag for the image. Use the same tag that you used when tagging the image in the previous step.
For example:
```bash
docker push registry.replicated.com/myapp/worker:1.0.1
```
1. In the [Vendor Portal](https://vendor.replicated.com/), go to **Images** and scroll down to the **Replicated Private Registry** section to confirm that the image was pushed.
---
# Use Image Tags and Digests
This topic describes using image tags and digests with your application images. It includes information about when image tags and digests are supported, and how to enable support for image digests in air gap bundles.
## Support for Image Tags and Digests
The following table describes the use cases in which image tags and digests are supported:
| Installation | Support for Image Tags | Support for Image Digests |
|---|---|---|
| Online | Supported by default | Supported by default |
| Air Gap | Supported by default for Replicated KOTS installations | Supported for applications on KOTS v1.82.0 and later when the **Enable new air gap bundle format** toggle is enabled on the channel. For more information, see Using Image Digests in Air Gap Installations below. |
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The `optionalValues` field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This allows Gitea to be accessed from the local machine after deployment for the purposes of this quick start.
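A sketch of what that HelmChart custom resource might look like is shown below. The Gitea value keys and chart version are assumptions for this quick start:
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    name: gitea
    chartVersion: 1.0.6
  optionalValues:
    # Only applied when the release is installed with Embedded Cluster
    - when: 'repl{{ eq Distribution "embedded-cluster" }}'
      recursiveMerge: false
      values:
        service:
          type: NodePort
          nodePorts:
            http: "32000"
```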
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the gitea Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
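A minimal sketch of that Application custom resource, with hypothetical values for the icon and port:
```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  title: Gitea
  statusInformers:
    - deployment/gitea
  # Hypothetical icon URL
  icon: https://example.com/gitea-icon.png
  ports:
    - serviceName: gitea
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://gitea"
```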
The Kubernetes SIG Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
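A sketch of the corresponding Kubernetes SIG Application resource, reusing the hypothetical URL above:
```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  descriptor:
    links:
      - description: Open App
        # Matches the applicationUrl in the KOTS Application custom resource
        url: "http://gitea"
```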
To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
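At its simplest, an Embedded Cluster Config only needs to pin the installer version, as in the following sketch (the version shown is a placeholder):
```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  # Placeholder; use a real Embedded Cluster release version
  version: 2.1.3+k8s-1.30
```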
[View a larger version of this image](/images/quick-start-select-gitea-app.png)
1. Click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `Example Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **License type**, select **Development**.
1. For **License options**, enable the following entitlements:
* **KOTS Install Enabled**
* **Embedded Cluster Enabled**
1. Click **Save Changes**.
1. Install the application with Embedded Cluster:
1. On the page for the customer that you created, click **Install instructions > Embedded Cluster**.

[View a larger image](/images/customer-install-instructions-dropdown.png)
1. On the command line, SSH onto your VM and run the commands in the **Embedded cluster install instructions** dialog to download the latest release, extract the installation assets, and install.
[View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
1. When prompted, enter a password for accessing the Admin Console.
The installation command takes a few minutes to complete.
**Example output:**
```bash
? Enter an Admin Console password: ********
? Confirm password: ********
✔ Host files materialized!
✔ Running host preflights
✔ Node installation finished!
✔ Storage is ready!
✔ Embedded Cluster Operator is ready!
✔ Admin Console is ready!
✔ Additional components are ready!
Visit the Admin Console to configure and install gitea-kite: http://104.155.145.60:30000
```
At this point, the cluster is provisioned and the Admin Console is deployed, but the application is not yet installed.
1. Go to the URL provided in the output to access the Admin Console.
1. On the Admin Console landing page, click **Start**.
1. On the **Secure the Admin Console** screen, review the instructions and click **Continue**. In your browser, follow the instructions that were provided on the **Secure the Admin Console** screen to bypass the warning.
1. On the **Certificate type** screen, either select **Self-signed** to continue using the self-signed Admin Console certificate or click **Upload your own** to upload your own private key and certificate.
By default, a self-signed TLS certificate is used to secure communication between your browser and the Admin Console. You will see a warning in your browser every time you access the Admin Console unless you upload your own certificate.
1. On the login page, enter the Admin Console password that you created during installation and click **Log in**.
1. On the **Configure the cluster** screen, you can view details about the VM where you installed, including its node role, status, CPU, and memory. You can also optionally add more nodes on this page before deploying the application. Click **Continue**.
The Admin Console dashboard opens.
1. On the Admin Console dashboard, next to the version, click **Deploy** and then **Yes, Deploy**.
The application status changes from Missing to Unavailable while the `gitea` Deployment is being created.
1. After a few minutes, when the application status is Ready, click **Open App** to view the Gitea application in a browser.
For example:

[View a larger version of this image](/images/gitea-ec-ready.png)
[View a larger version of this image](/images/gitea-app.png)
1. Return to the Vendor Portal and go to **Customers**. Under the name of the customer, confirm that you can see an active instance.
This instance telemetry is automatically collected and sent back to the Vendor Portal by both KOTS and the Replicated SDK. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
1. Under **Instance ID**, click on the ID to view additional insights including the versions of Kubernetes and the Replicated SDK running in the cluster where you installed the application. For more information, see [Instance Details](/vendor/instance-insights-details).
1. Create a new release that adds preflight checks to the application:
1. In your local filesystem, go to the `gitea` directory.
1. Create a `gitea-preflights.yaml` file in the `templates` directory:
```
touch templates/gitea-preflights.yaml
```
1. In the `gitea-preflights.yaml` file, add the following YAML to create a Kubernetes Secret with a simple preflight spec:
```yaml
apiVersion: v1
kind: Secret
metadata:
labels:
troubleshoot.sh/kind: preflight
name: "{{ .Release.Name }}-preflight-config"
stringData:
preflight.yaml: |
apiVersion: troubleshoot.sh/v1beta2
kind: Preflight
metadata:
name: preflight-sample
spec:
collectors:
- http:
collectorName: slack
get:
url: https://api.slack.com/methods/api.test
analyzers:
- textAnalyze:
checkName: Slack Accessible
fileName: slack.json
regex: '"status": 200,'
outcomes:
- pass:
when: "true"
message: "Can access the Slack API"
- fail:
when: "false"
message: "Cannot access the Slack API. Check that the server can reach the internet and check [status.slack.com](https://status.slack.com)."
```
The YAML above defines a preflight check that confirms that an HTTP request to the Slack API at `https://api.slack.com/methods/api.test` made from the cluster returns a successful response of `"status": 200,`.
1. In the `Chart.yaml` file, increment the version to 1.0.7:
```yaml
# Chart.yaml
version: 1.0.7
```
1. Update dependencies and package the chart to a `.tgz` chart archive:
```bash
helm package -u .
```
1. Move the chart archive to the `manifests` directory:
```bash
mv gitea-1.0.7.tgz manifests
```
1. In the `manifests` directory, open the KOTS HelmChart custom resource (`gitea.yaml`) and update the `chartVersion`:
```yaml
# gitea.yaml KOTS HelmChart
chartVersion: 1.0.7
```
1. Remove the chart archive for version 1.0.6 of the Gitea chart from the `manifests` directory:
```
rm gitea-1.0.6.tgz
```
1. From the `manifests` directory, create and promote a new release, setting the version label of the release to `0.0.2`:
```bash
replicated release create --yaml-dir . --promote Unstable --version 0.0.2
```
**Example output**:
```bash
• Reading manifests from . ✓
• Creating Release ✓
• SEQUENCE: 2
• Promoting ✓
• Channel 2kvjwEj4uBaCMoTigW5xty1iiw6 successfully set to release 2
```
1. On your VM, update the application instance to the new version that you just promoted:
1. In the Admin Console, go to the **Version history** tab.
The new version is displayed automatically.
1. Click **Deploy** next to the new version.
The Embedded Cluster upgrade wizard opens.
1. In the Embedded Cluster upgrade wizard, on the **Preflight checks** screen, note that the "Slack Accessible" preflight check that you added was successful. Click **Next: Confirm and deploy**.

[View a larger version of this image](/images/quick-start-ec-upgrade-wizard-preflight.png)
:::note
The **Config** screen in the upgrade wizard is bypassed because this release does not contain a KOTS Config custom resource. The KOTS Config custom resource is used to set up the Config screen in the KOTS Admin Console.
:::
1. On the **Confirm and Deploy** page, click **Deploy**.
1. Reset and reboot the VM to remove the installation:
```bash
sudo ./APP_SLUG reset
```
Where `APP_SLUG` is the unique slug for the application.
:::note
You can find the application slug by running `replicated app ls` on your local machine.
:::
## Next Steps
Congratulations! As part of this quick start, you:
* Added the Replicated SDK to a Helm chart
* Created a release with the Helm chart
* Installed the release on a VM with Embedded Cluster
* Viewed telemetry for the installed instance in the Vendor Portal
* Created a new release to add preflight checks to the application
* Updated the application from the Admin Console
Now that you are familiar with the workflow of creating, installing, and updating releases, you can begin onboarding your own application to the Replicated Platform.
To get started, see [Onboard to the Replicated Platform](replicated-onboarding).
## Related Topics
For more information about the Replicated Platform features mentioned in this quick start, see:
* [About Distributing Helm Charts with KOTS](/vendor/helm-native-about)
* [About Preflight Checks and Support Bundles](/vendor/preflight-support-bundle-about)
* [About the Replicated SDK](/vendor/replicated-sdk-overview)
* [Introduction to KOTS](/intro-kots)
* [Managing Releases with the CLI](/vendor/releases-creating-cli)
* [Packaging a Helm Chart for a Release](/vendor/helm-install-release)
* [Using Embedded Cluster](/vendor/embedded-overview)
## Related Tutorials
For additional tutorials related to this quick start, see:
* [Deploying a Helm Chart on a VM with Embedded Cluster](/vendor/tutorial-embedded-cluster-setup)
* [Adding Preflight Checks to a Helm Chart](/vendor/tutorial-preflight-helm-setup)
* [Deploying a Helm Chart with KOTS and the Helm CLI](/vendor/tutorial-kots-helm-setup)
---
# About Channels and Releases
This topic describes channels and releases, including information about the **Releases** and **Channels** pages in the Replicated Vendor Portal.
## Overview
A _release_ represents a single version of your application. Each release is promoted to one or more _channels_. Channels provide a way to progress releases through the software development lifecycle: from internal testing, to sharing with early-adopters, and finally to making the release generally available.
Channels also control which customers are able to install a release. You assign each customer to a channel to define the releases that the customer can access. For example, a customer assigned to the Stable channel can only install releases that are promoted to the Stable channel, and cannot see any releases promoted to other channels. For more information about assigning customers to channels, see [Channel Assignment](licenses-about#channel-assignment) in _About Customers_.
Using channels and releases helps you distribute versions of your application to the right customer segments, without needing to manage different release workflows.
You can manage channels and releases with the Vendor Portal, the Replicated CLI, or the Vendor API v3. For more information about creating and managing releases or channels, see [Manage Releases with the Vendor Portal](releases-creating-releases) or [Creating and Editing Channels](releases-creating-channels).
## About Channels
This section provides additional information about channels, including details about the default channels in the Vendor Portal and channel settings.
### Unstable, Beta, and Stable Channels
Replicated includes the following channels by default:
* **Unstable**: The Unstable channel is designed for internal testing and development. You can create and assign an internal test customer to the Unstable channel to install in a development environment. Replicated recommends that you do not license any of your external users against the Unstable channel.
* **Beta**: The Beta channel is designed for release candidates and early-adopting customers. Replicated recommends that you promote a release to the Beta channel after it has passed automated testing in the Unstable channel. You can also choose to license early-adopting customers against this channel.
* **Stable**: The Stable channel is designed for releases that are generally available. Replicated recommends that you assign most of your customers to the Stable channel. Customers licensed against the Stable channel only receive application updates when you promote a new release to the Stable channel.
You can archive or edit any of the default channels, and create new channels. For more information, see [Create and Edit Channels](releases-creating-channels).
### Settings
Each channel has settings. You can customize the settings for a channel to control some of the behavior of releases promoted to the channel.
The following shows the **Channel Settings** dialog, accessed by clicking the settings icon on a channel:
[View a larger version of this image](/images/channel-settings.png)
The following describes each of the channel settings:
* **Channel name**: The name of the channel. You can change the channel name at any time. Each channel also has a unique ID listed below the channel name.
* **Description**: Optionally, add a description of the channel.
* **Set this channel to default**: When enabled, sets the channel as the default channel. The default channel cannot be archived.
* **Custom domains**: Select the customer-facing domains that releases promoted to this channel use for the Replicated registry, Replicated proxy registry, Replicated app service, or Replicated Download Portal endpoints. If a default custom domain exists for any of these endpoints, choosing a different domain in the channel settings overrides the default. If no custom domains are configured for an endpoint, the drop-down for the endpoint is disabled.
For more information about configuring custom domains and assigning default domains, see [Use Custom Domains](custom-domains-using).
* The following channel settings apply only to applications that support KOTS:
* **Automatically create airgap builds for newly promoted releases in this channel**: When enabled, the Vendor Portal automatically builds an air gap bundle when a new release is promoted to the channel. When disabled, you can generate an air gap bundle manually for a release on the **Release History** page for the channel.
* **Enable semantic versioning**: When enabled, the Vendor Portal verifies that the version label for any releases promoted to the channel uses a valid semantic version. For more information, see [Semantic Versioning](releases-about#semantic-versioning) in _About Releases_.
* **Enable new airgap bundle format**: When enabled, air gap bundles built for releases promoted to the channel use a format that supports image digests. This air gap bundle format also ensures that identical image layers are not duplicated, resulting in a smaller air gap bundle size. For more information, see [Use Image Digests in Air Gap Installations](private-images-tags-digests#digests-air-gap) in _Use Image Tags and Digests_.
:::note
The new air gap bundle format is supported for applications installed with KOTS v1.82.0 or later.
:::
## About Releases
This section provides additional information about releases, including details about release promotion, properties, sequencing, and versioning.
### Release Files
A release contains your application files as well as the manifests required to install the application with the Replicated installers ([Replicated Embedded Cluster](/vendor/embedded-overview) and [Replicated KOTS](../intro-kots)).
The application files in releases can be Helm charts and/or Kubernetes manifests. Replicated strongly recommends that all applications are packaged as Helm charts because many enterprise customers will expect to be able to install with Helm.
### Promotion
Each release is promoted to one or more channels. While you are developing and testing releases, Replicated recommends promoting to a channel that does not have any real customers assigned, such as the default Unstable channel. When the release is ready to be shared externally with customers, you can then promote to a channel that has the target customers assigned, such as the Beta or Stable channel.
A release cannot be edited after it is promoted to a channel. This means that you can test a release on an internal development channel, and know with confidence that the same release will be available to your customers when you promote it to a channel where real customers are assigned.
### Properties
Each release has properties. You define release properties when you promote a release to a channel. You can edit release properties at any time from the channel **Release History** page in the Vendor Portal. For more information, see [Edit Release Properties](releases-creating-releases#edit-release-properties) in _Managing Releases with the Vendor Portal_.
The following shows an example of the release properties dialog:
[View a larger version of this image](/images/release-properties.png)
As shown in the screenshot above, the release has the following properties:
* **Version label**: The version label for the release. Version labels have the following requirements:
* If semantic versioning is enabled for the channel, you must use a valid semantic version. For more information, see [Semantic Versioning](#semantic-versioning).
* The version label for the release must match the version label from one of the `Chart.yaml` files in the release.
* If there is one Helm chart in the release, Replicated automatically uses the version from the `Chart.yaml` file.
* If there is more than one Helm chart in the release, Replicated uses the version label from one of the `Chart.yaml` files. You can edit the version label for the release to use the version label from a different `Chart.yaml` file.
* **Requirements**: Select **Prevent this release from being skipped during upgrades** to mark the release as required.
When a release is required, KOTS requires users to upgrade to that version before they can upgrade to a later version. For example, if you select **Prevent this release from being skipped during upgrades** for release v2.0.0, users with v1.0.0 deployed must upgrade to v2.0.0 before they can upgrade to a version later than v2.0.0, such as v2.1.0.
Required releases have the following limitations:
* Required releases are supported in KOTS v1.68.0 and later.
* After users deploy a required version, they can no longer redeploy (roll back to) versions earlier than the required version, even if `allowRollback` is true in the Application custom resource manifest. For more information, see [`allowRollback`](/reference/custom-resource-application#allowrollback) in the Application custom resource topic.
* If you change the channel an existing customer is assigned to, the Admin Console always fetches the latest release on the new channel, regardless of any required releases on the channel. For more information, see [Channel Assignment](licenses-about#channel-assignment) in _About Customers_.
* Required releases are supported for KOTS installations only and are not supported for releases installed with Helm. The **Prevent this release from being skipped during upgrades** option has no effect if the user installs with Helm.
* **Release notes (supports markdown)**: Detailed release notes for the release. The release notes support markdown and are shown to your customer.
### Sequencing
By default, Replicated uses release sequence numbers to organize and order releases, and uses instance sequence numbers in an instance's internal version history.
#### Release Sequences
In the Vendor Portal, each release is automatically assigned a unique, monotonically-increasing sequence number. You can use this number as a fallback to identify a promoted or draft release, if you do not set the `Version label` field during promotion. For more information, see [Manage Releases with the Vendor Portal](releases-creating-releases).
The following graphic shows release sequence numbers in the Vendor Portal:
[View a larger version of this image](/images/release-sequences.png)
#### Instance Sequences
When a new version is available for upgrade (for example, when KOTS checks for upstream updates, or when the user syncs their license or makes a config change), the KOTS Admin Console assigns a unique instance sequence number to that version. The instance sequence in the Admin Console starts at 0 and increments for each identifier that is returned when a new version is available.
This instance sequence is unrelated to the release sequence displayed in the Vendor Portal, and it is likely that the instance sequence will differ from the release sequence. Instance sequences are only tracked by KOTS instances, and the Vendor Portal has no knowledge of these numbers.
The following graphic shows instance sequence numbers on the Admin Console dashboard:
[View a larger version of this image](/images/instance-sequences.png)
#### Channel Sequences
When a release is promoted to a channel, a channel sequence number is assigned. This unique sequence number increments by one and tracks the order in which releases were promoted to a channel. You can view the channel sequence on the **Release History** page in the Vendor Portal, as shown in the image below:
[View a larger version of this image](/images/release-history-channel-sequence.png)
The channel sequence is also used in certain URLs. For example, a release with a *release sequence* of `170` can have a *channel sequence* of `125`. The air gap download URL for that release can contain `125` in the URL, even though the release sequence is `170`.
Ordering is more complex if some or all of the releases in a channel have a semantic version label and semantic versioning is enabled for the channel. For more information, see [Semantic Versioning Sequence](#semantic-versioning-sequence).
#### Semantic Versioning Sequence
For channels with semantic versioning enabled, the Admin Console sequences instance releases by their semantic versions instead of their promotion dates.
If releases without a valid semantic version are already promoted to a channel, the Admin Console sorts the releases that do have semantic versions starting with the earliest version and proceeding to the latest. The releases with non-semantic versioning stay in the order of their promotion dates. For example, assume that you promote these releases in the following order to a channel:
- 1.0.0
- abc
- 0.1.0
- xyz
- 2.0.0
Then, you enable semantic versioning on that channel. The Admin Console sequences the version history for the channel as follows:
- 0.1.0
- 1.0.0
- abc
- xyz
- 2.0.0
### Semantic Versioning
Semantic versioning is available with Replicated KOTS v1.58.0 and later. Note the following:
- For applications created in the Vendor Portal on or after February 23, 2022, semantic versioning is enabled by default on the Stable and Beta channels. Semantic versioning is disabled on the Unstable channel by default.
- For existing applications created before February 23, 2022, semantic versioning is disabled by default on all channels.
Semantic versioning is recommended because it makes versioning more predictable for users and lets you enforce valid version labels so that releases cannot be promoted with malformed versions.
To use semantic versioning:
1. Enable semantic versioning on a channel, if it is not enabled by default. Click the **Edit channel settings** icon, and turn on the **Enable semantic versioning** toggle.
1. Assign a semantic version number when you promote a release.
Releases promoted to a channel with semantic versioning enabled are verified to ensure that the release version label is a valid semantic version. For more information about valid semantic versions, see [Semantic Versioning 2.0.0](https://semver.org).
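For example, when promoting with the Replicated CLI to a channel that has semantic versioning enabled, the version label you pass must be a valid semantic version. A minimal sketch, assuming the `--version` flag of `replicated release promote`:
```bash
# Promote release sequence 42 to the Beta channel with a valid SemVer label
replicated release promote 42 Beta --version 1.2.3
```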
If you enable semantic versioning for a channel and then promote releases to it, Replicated recommends that you do not later disable semantic versioning for that channel.
You can enable semantic versioning on a channel that already has releases promoted to it without semantic versioning. Any subsequently promoted releases must use semantic versioning. In this case, the channel will have releases with and without semantic version numbers. For information about how Replicated organizes these release sequences, see [Semantic Versioning Sequence](#semantic-versioning-sequence).
### Demotion
A channel release can be demoted from a channel. When a channel release is demoted, the release is no longer available for download, but is not withdrawn from environments where it was already downloaded or installed.
The demoted release's channel sequence and version are not reused. For customers, the release appears to have been skipped. Un-demoting a release restores its place in the channel sequence, making it available again for download and installation.
For information about how to demote a release, see [Demote a Release](/vendor/releases-creating-releases#demote-a-release) in _Managing Releases with the Vendor Portal_.
## Vendor Portal Pages
This section provides information about the channels and releases pages in the Vendor Portal.
### Channels Page
The **Channels** page in the Vendor Portal includes information about each channel. From the **Channels** page, you can edit and archive your channels. You can also edit the properties of the releases promoted to each channel, and view and edit the customers assigned to each channel.
The following shows an example of a channel in the Vendor Portal **Channels** page:
[View a larger version of this image](/images/channel-card.png)
As shown in the image above, you can do the following from the **Channels** page:
* Edit the channel settings by clicking on the settings icon, or archive the channel by clicking on the trash can icon. For information about channel settings, see [Settings](#settings).
* In the **Adoption rate** section, view data on the adoption rate of releases promoted to the channel among customers assigned to the channel.
* In the **Customers** section, view the number of active and inactive customers assigned to the channel. Click **Details** to go to the **Customers** page, where you can view details about the customers assigned to the channel.
* In the **Latest release** section, view the properties of the latest release, and get information about any warnings or errors in the YAML files for the latest release.
Click **Release history** to access the history of all releases promoted to the channel. From the **Release History** page, you can view the version labels and files in each release that has been promoted to the selected channel.
You can also build and download air gap bundles to be used in air gap installations with Replicated installers (Embedded Cluster, KOTS, kURL), edit the release properties for each release promoted to the channel from the **Release History** page, and demote a release from the channel.
The following shows an example of the **Release History** page:
[View a larger version of this image](/images/channel-card.png)
* For applications that support KOTS, you can also do the following from the **Channels** page:
* In the **kURL installer** section, view the current kURL installer promoted to the channel. Click **Installer history** to view the history of kURL installers promoted to the channel. For more information about creating kURL installers, see [Create a kURL Installer](packaging-embedded-kubernetes).
* In the **Install** section, view and copy the installation commands for the latest release on the channel.
### Draft Release Page
For applications that support installation with KOTS, the **Draft** page provides a YAML editor to add, edit, and delete your application files and Replicated custom resources. To open the **Draft** page, click **Releases > Create Release** in the Vendor Portal.
The following shows an example of the **Draft** page in the Vendor Portal:
[View a larger version of this image](/images/guides/kots/default-yaml.png)
You can do the following tasks on the **Draft** page:
- In the file directory, manage the file directory structure. Replicated custom resource files are grouped together above the white line of the file directory. Application files are grouped together underneath the white line in the file directory.
Delete files using the trash icon that displays when you hover over a file. Create a new file or folder using the corresponding icons at the bottom of the file directory pane. You can also drag and drop files in and out of the folders.

- Edit the YAML files by selecting a file in the directory and making changes in the YAML editor.
- In the **Help** or **Config help** pane, view the linter for any errors. If there are no errors, you get an **Everything looks good!** message. If an error displays, you can click the **Learn how to configure** link. For more information, see [Linter Rules](/reference/linter).
- Select the Config custom resource to preview how your application's Config page will look to your customers. The **Config preview** pane only appears when you select that file. For more information, see [About the Configuration Screen](config-screen-about).
- Select the Application custom resource to preview how your application icon will look in the Admin Console. The **Application icon preview** only appears when you select that file. For more information, see [Customizing the Application Icon](admin-console-customize-app-icon).
---
# Create and Edit Channels
This topic describes how to create and edit channels using the Replicated Vendor Portal. For more information about channels, see [About Channels and Releases](releases-about).
For information about creating channels with the Replicated CLI, see [channel create](/reference/replicated-cli-channel-create).
For information about creating and managing channels with the Vendor API v3, see the [channels](https://replicated-vendor-api.readme.io/reference/createchannel) section in the Vendor API v3 documentation.
## Create a Channel
To create a channel:
1. From the Replicated [Vendor Portal](https://vendor.replicated.com), select **Channels** from the left menu.
1. Click **Create Channel**.
The **Create a new channel** dialog opens.
1. Enter a name and description for the channel.
1. (Recommended) Enable semantic versioning on the channel if it is not enabled by default by turning on **Enable semantic versioning**. For more information about semantic versioning and defaults, see [Semantic Versioning](releases-about#semantic-versioning).
1. (Recommended) Enable an air gap bundle format that supports image digests and deduplication of image layers, by turning on **Enable new air gap bundle format**. For more information, see [Use Image Tags and Digests](private-images-tags-digests).
1. Click **Create Channel**.
## Edit a Channel
To edit the settings of an existing channel:
1. In the Vendor Portal, select **Channels** from the left menu.
1. Click the gear icon on the top right of the channel that you want to modify.
The **Channel settings** dialog opens.
1. Edit the fields and click **Save**.
For more information about channel settings, see [Settings](releases-about#settings) in _About Channels and Releases_.
## Archive a Channel
You can archive an existing channel to prevent any new releases from being promoted to the channel.
:::note
You cannot archive a channel if:
* There are customers assigned to the channel.
* The channel is set as the default channel.
Assign customers to a different channel and set a different channel as the default before archiving.
:::
To archive a channel with the Vendor Portal or the Replicated CLI:
* **Vendor Portal**: In the Vendor Portal, go to the **Channels** page and click the trash can icon in the top right corner of the card for the channel that you want to archive.
* **Replicated CLI**:
1. Run the following command to find the ID for the channel that you want to archive:
```
replicated channel ls
```
The output of this command includes the ID and name for each channel, as well as information about the latest release version on the channels.
1. Run the following command to archive the channel:
```
replicated channel rm CHANNEL_ID
```
Replace `CHANNEL_ID` with the channel ID that you retrieved in the previous step.
For more information, see [channel rm](/reference/replicated-cli-channel-rm) in the Replicated CLI documentation.
---
# Manage Releases with the CLI
This topic describes how to use the Replicated CLI to create and promote releases.
For information about creating and managing releases with the Vendor Portal, see [Manage Releases with the Vendor Portal](/vendor/releases-creating-releases).
For information about creating and managing releases with the Vendor API v3, see the [releases](https://replicated-vendor-api.readme.io/reference/createrelease) section in the Vendor API v3 documentation.
## Prerequisites
Before you create a release using the Replicated CLI, complete the following prerequisites:
* Install the Replicated CLI and then log in to authorize the CLI. See [Install the Replicated CLI](/reference/replicated-cli-installing).
* Create a new application using the `replicated app create APP_NAME` command. You only need to do this procedure one time for each application that you want to deploy. See [`app create`](/reference/replicated-cli-app-create) in _Reference_.
* Set the `REPLICATED_APP` environment variable to the slug of the target application. See [Set Environment Variables](/reference/replicated-cli-installing#env-var) in _Installing the Replicated CLI_.
**Example**:
```bash
export REPLICATED_APP=my-app-slug
```
## Create a Release From a Local Directory {#dir}
You can use the Replicated CLI to create a release from a local directory that contains the release files.
To create and promote a release:
1. (Helm Charts Only) If your release contains any Helm charts:
1. Package each Helm chart as a `.tgz` file. See [Package a Helm Chart for a Release](/vendor/helm-install-release).
1. Move the `.tgz` file or files to the local directory that contains the release files:
```bash
mv CHART_TGZ PATH_TO_RELEASE_DIR
```
Where:
* `CHART_TGZ` is the `.tgz` Helm chart archive.
* `PATH_TO_RELEASE_DIR` is the path to the directory that contains the release files.
**Example**
```bash
mv wordpress-1.3.5.tgz manifests
```
1. In the same directory that contains the release files, add a HelmChart custom resource for each Helm chart in the release. See [Configuring the HelmChart Custom Resource](helm-native-v2-using).
1. Lint the application manifest files and ensure that there are no errors in the YAML:
```bash
replicated release lint --yaml-dir=PATH_TO_RELEASE_DIR
```
Where `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
For more information, see [release lint](/reference/replicated-cli-release-lint) and [Linter Rules](/reference/linter).
1. Do one of the following:
* **Create and promote the release with one command**:
```bash
replicated release create --yaml-dir PATH_TO_RELEASE_DIR --lint --promote CHANNEL
```
Where:
* `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
* `CHANNEL` is the channel ID or the case-sensitive name of the channel.
* **Create and edit the release before promoting**:
1. Create the release:
```bash
replicated release create --yaml-dir PATH_TO_RELEASE_DIR
```
Where `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
For more information, see [release create](/reference/replicated-cli-release-create).
1. Edit and update the release as desired:
```
replicated release update SEQUENCE --yaml-dir PATH_TO_RELEASE_DIR
```
Where:
- `SEQUENCE` is the release sequence number. This identifies the existing release to be updated.
- `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
For more information, see [release update](/reference/replicated-cli-release-update).
1. Promote the release when you are ready to test it. Releases cannot be edited after they are promoted. To make changes after promotion, create a new release.
```
replicated release promote SEQUENCE CHANNEL
```
Where:
- `SEQUENCE` is the release sequence number.
- `CHANNEL` is the channel ID or the case-sensitive name of the channel.
For more information, see [release promote](/reference/replicated-cli-release-promote).
1. Verify that the release was promoted to the target channel:
```
replicated release ls
```
---
# Create and Manage Customers
This topic describes how to create and manage customers in the Replicated Vendor Portal. For more information about customer licenses, see [About Customers](licenses-about).
## Create a Customer
This procedure describes how to create a new customer in the Vendor Portal. You can edit customer details at any time.
For information about creating a customer with the Replicated CLI, see [customer create](/reference/replicated-cli-customer-create).
For information about creating and managing customers with the Vendor API v3, see the [customers](https://replicated-vendor-api.readme.io/reference/getcustomerentitlements) section in the Vendor API v3 documentation.
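You can also script customer creation. The following is a minimal sketch with the Replicated CLI; the `--name` and `--channel` flags are assumed to be available in your CLI version (see the CLI reference linked above for the full set of options):
```bash
# Sketch: create a customer named "Example Co" assigned to the Stable channel
replicated customer create --name "Example Co" --channel Stable
```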
To create a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer.
1. For **Customer email**, enter the email address for the customer.
:::note
A customer email address is required for Helm installations. This email address is never used to send emails to customers.
:::
1. For **Assigned channel**, assign the customer to one of your channels. You can select any channel that has at least one release. The channel a customer is assigned to determines the application releases that they can install. For more information, see [Channel Assignment](licenses-about#channel-assignment) in _About Customers_.
:::note
You can change the channel a customer is assigned to at any time. For installations with Replicated KOTS, when you change the customer's channel, the customer can synchronize their license in the Replicated Admin Console to fetch the latest release on the new channel and then upgrade. The Admin Console always fetches the latest release on the new channel, regardless of the presence of any releases on the channel that are marked as required.
:::
1. For **Custom ID**, you can enter a custom ID for the customer. Setting a custom ID allows you to easily associate this Replicated customer record to your own internal customer data systems during data exports. Replicated recommends using an alphanumeric value such as your Salesforce ID or Hubspot ID.
:::note
Replicated does _not_ require that the custom ID is unique. The custom ID is for vendor data reconciliation purposes, and is not used by Replicated for any functionality purposes.
:::
1. For **Expiration policy**, by default, **Customer's license does not expire** is enabled. To set an expiration date for the license, enable **Customer's license has an expiration date** and specify an expiration date in the **When does this customer expire?** calendar.
1. For **Customer type**, set the customer type. Customer type is used only for reporting purposes. Customer access to your application is not affected by the type you assign to them. By default, **Trial** is selected. For more information, see [About Customer License Types](licenses-about-types).
1. Enable any of the available options for the customer. For more information about the license options, see [Built-in License Fields](/vendor/licenses-using-builtin-fields). For more information about enabling install types, see [Managing Install Types for a License (Beta)](/vendor/licenses-install-types).
1. For **Custom fields**, configure any custom fields that you have added for your application. For more information about how to create custom fields for your application, see [Manage Customer License Fields](licenses-adding-custom-fields).
1. Click **Save Changes**.
## Edit a Customer
You can edit the built-in and custom license fields for a customer at any time by going to the customer's **Manage customer** page. For more information, see [Manage Customer Page](licenses-about#about-the-manage-customer-page) in _About Customers and Licensing_.
Replicated recommends that you test any license changes in a development environment. If needed, install the application using a developer license matching the current customer's entitlements before editing the developer license. Then validate the updated license.
:::important
For online environments, changing license entitlements can trigger changes to the customer's installed application instance during runtime. Replicated recommends that you verify the logic your application uses to query and enforce the target entitlement before making any changes.
:::
To edit license fields:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers**.
1. Select the target customer and click the **Manage customer** tab.
1. On the **Manage customer** page, edit the desired fields and click **Save**.

1. Test the changes by installing or updating in a development environment. Do one of the following, depending on the installation method for your application:
* For applications installed with Helm that use the Replicated SDK, you can add logic to your application to enforce entitlements before installation or during runtime using the Replicated SDK API license endpoints. See [Check Entitlements in Helm Charts Before Deployment](licenses-reference-helm).
* For applications installed with Replicated KOTS, update the license in the admin console. See [Update Online Licenses](/enterprise/updating-licenses#update-online-licenses) and [Update Air Gap Licenses](/enterprise/updating-licenses#update-air-gap-licenses) in _Updating Licenses in the Admin Console_.
## Archive a Customer
When you archive a customer in the Vendor Portal, the customer is hidden from search by default and becomes read-only. Archival does not affect the utility of license files downloaded before the customer was archived.
To expire a license, set an expiration date and policy in the **Expiration policy** field before you archive the customer.
To archive a customer:
1. In the Vendor Portal, click **Customers**. Select the target customer then click the **Manage customer** tab.
1. Click **Archive Customer**. In the confirmation dialog, click **Archive Customer** again.
You can unarchive by clicking **Unarchive Customer** in the customer's **Manage customer** page.
## Export Customer and Instance Data {#export}
You can download customer and instance data from the **Download CSV** dropdown on the **Customers** page:

[View a larger version of this image](/images/customers-download-csv.png)
The **Download CSV** dropdown has the following options:
* **Customers**: Includes details about your customers, such as the customer's channel assignment, license entitlements, expiration date, last active timestamp, and more.
* (Recommended) **Customers + Instances**: Includes details about the instances associated with each customer, such as the Kubernetes distribution and cloud provider of the cluster where the instance is running, the most recent application instance status, whether the instance is active or inactive, and more. The **Customers + Instances** data is a superset of the customer data, and is the recommended download for most use cases.
You can also export customer instance data as JSON using the Vendor API v3 `customer_instances` endpoint. For more information, see [Get customer instance report in CSV or JSON format](https://replicated-vendor-api.readme.io/reference/listappcustomerinstances) in the Vendor API v3 documentation.
For more information about the data fields in the CSV downloads, see [Data Dictionary](/vendor/instance-data-export#data-dictionary) in _Export Customers and Instance Data_.
## Filter and Search Customers
The **Customers** page provides a search box and filters that help you find customers:
[View a larger version of this image](/images/customers-filter.png)
You can filter customers based on whether they are active, by license type, and by channel name. You can filter using more than one criterion, such as Active, Paid, and Stable. However, you can select only one license type and one channel at a time.
If there is adoption rate data available for the channel that you are filtering by, you can also filter by current version, previous version, and older versions.
You can also filter customers by custom ID or email address. To filter customers by custom ID or email, use the search box and prepend your search term with "customId:" (ex: `customId:1234`) or "email:" (ex: `email:bob@replicated.com`).
If you want to filter information using multiple license types or channels, you can download a CSV file instead. For more information, see [Export Customer and Instance Data](#export) above.
---
# Manage Releases with the Vendor Portal
This topic describes how to use the Replicated Vendor Portal to create and promote releases, edit releases, edit release properties, and archive releases.
For information about creating and managing releases with the CLI, see [Manage Releases with the CLI](/vendor/releases-creating-cli).
For information about creating and managing releases with the Vendor API v3, see the [releases](https://replicated-vendor-api.readme.io/reference/createrelease) and [channelReleases](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl) sections in the Vendor API v3 documentation.
## Create a Release
To create and promote a release in the Vendor Portal:
1. From the **Applications** dropdown list, select **Create an app** or select an existing application to update.
1. Click **Releases > Create release**.

[View a larger version of this image](/images/release-create-new.png)
1. Add your files to the release. You can do this by dragging and dropping files to the file directory in the YAML editor or clicking the plus icon to add a new, untitled YAML file.
1. For any Helm charts that you add to the release, in the **Select Installation Method** dialog, select the version of the HelmChart custom resource that KOTS will use to install the chart. `kots.io/v1beta2` is recommended. For more information about the HelmChart custom resource, see [Configuring the HelmChart Custom Resource](helm-native-v2-using).
[View a larger version of this image](/images/helm-select-install-method.png)
1. Click **Save release**. This saves a draft that you can continue to edit until you promote it.
1. Click **Promote**. In the **Promote Release** dialog, edit the fields:
For more information about the requirements and limitations of each field, see [Properties](releases-about#properties) in _About Channels and Releases_.

| Field | Description |
|---|---|
| Channel | Select the channel where you want to promote the release. If you are not sure which channel to use, use the default Unstable channel. |
| Version label | Enter a version label. If you have one or more Helm charts in your release, the Vendor Portal automatically populates this field. You can change the version label. |
| Requirements | Select **Prevent this release from being skipped during upgrades** to mark the release as required for KOTS installations. This option does not apply to installations with Helm. |
| Release notes | Add release notes. The release notes support markdown and are shown to your customer. |
[View a larger image](/images/releases-edit-draft.png)
1. Click **Save** to save your updated draft.
1. (Optional) Click **Promote**.
## Edit Release Properties
You can edit the properties of a release at any time. For more information about release properties, see [Properties](releases-about#properties) in _About Channels and Releases_.
To edit release properties:
1. Go to **Channels**.
1. In the channel where the release was promoted, click **Release History**.
1. For the release sequence that you want to edit, open the dot menu and click **Edit release**.
1. Edit the properties as needed.
[View a larger image](/images/release-properties.png)
1. Click **Update Release**.
## Archive a Release
You can archive releases to remove them from view on the **Releases** page. Archiving a release that has been promoted does _not_ remove the release from the channel's **Release History** page or prevent KOTS from downloading the archived release.
To archive one or more releases:
1. From the **Releases** page, click the trash can icon in the upper right corner.
1. Select one or more releases.
1. Click **Archive Releases**.
1. Confirm the archive action when prompted.
## Demote a Release
A channel release can be demoted from a channel. When a channel release is demoted, the release is no longer available for download, but is not withdrawn from environments where it was already downloaded or installed. For more information, see [Demotion](/vendor/releases-about#demotion) in _About Channels and Releases_.
For information about demoting and un-demoting releases with the Replicated CLI, see [channel demote](/reference/replicated-cli-channel-demote) and [channel un-demote](/reference/replicated-cli-channel-un-demote).
To demote a release in the Vendor Portal:
1. Go to **Channels**.
1. In the channel where the release was promoted, click **Release History**.
1. For the release sequence that you want to demote, open the dot menu and select **Demote Release**.

[View a larger version of this image](/images/channels-release-history.png)
After the release is demoted, the given release sequence is greyed out and a **Demoted** label is displayed next to the release on the **Release History** page.
---
# Access a Customer's Download Portal
This topic describes how to access installation instructions and download installation assets (such as customer license files and air gap bundles) from the Replicated Download Portal.
For information about downloading air gap bundles and licenses with the Vendor API v3, see the following pages in the Vendor API v3 documentation:
* [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense)
* [Trigger airgap build for a channel's release](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbuild)
* [Get airgap bundle download URL for the active release on the channel](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl)
## Overview
The Replicated Download Portal can be used to share license files, air gap bundles, and other assets with customers. From the Download Portal, customers can also view instructions for how to install a release with Replicated Embedded Cluster or the Helm CLI.
A unique Download Portal link is available for each customer. The Download Portal uses information from the customer's license to make the relevant assets and installation instructions available.
## Limitation
Sessions in the Download Portal are valid for 72 hours. After the session expires, your customer must log in again. The Download Portal session length is not configurable.
## Access the Download Portal
To access the Download Portal for a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), on the **Customers** page, click on the name of the customer.
1. On the **Manage customer** tab, under **Install types**, enable the installation types that are supported for the customer. This determines the installation instructions and assets that are available for the customer in the Download Portal. For more information about install types, see [Manage Install Types for a License](/vendor/licenses-install-types).
[View a larger version of this image](/images/license-install-types.png)
1. (Optional) Under **Advanced install options**, enable the air gap installation options that are supported for the customer:
* Enable **Helm CLI Air Gap Instructions (Helm CLI only)** to display Helm air gap installation instructions in the Download Portal for the customer.
* Enable **Air Gap Installation Option (Replicated Installers only)** to allow the customer to install from an air gap bundle using one of the Replicated installers (KOTS, kURL, Embedded Cluster).
[View a larger version of this image](/images/airgap-download-enabled.png)
1. Save your changes.
1. On the **Reporting** tab, in the **Download portal** section, click **Manage customer password**.
[View a larger version of this image](/images/download-portal-link.png)
1. In the pop-up window, enter a password or click **Generate**.
[View a larger version of this image](/images/download-portal-password-popup.png)
1. Click **Copy** to copy the password to your clipboard.
After the password is saved, it cannot be retrieved again. If you lose the password, you can generate a new one.
1. Click **Save** to set the password.
1. Click **Visit download portal** to log in to the Download Portal and preview your customer's experience.
:::note
By default, the Download Portal uses the domain `get.replicated.com`. You can optionally use a custom domain for the Download Portal. For more information, see [Use Custom Domains](/vendor/custom-domains-using).
:::
1. In the Download Portal, on the left side of the screen, select the installation type. The options displayed vary depending on the **Install types** and **Advanced install options** that you enabled in the customer's license.
The following is an example of the Download Portal with options for Helm and Embedded Cluster installations:

[View a larger version of this image](/images/download-portal-helm-ec.png)
1. To share installation instructions and assets with a customer, send the customer their unique link and password for the Download Portal.
---
# Find Installation Commands for a Release
This topic describes where to find the installation commands and instructions for releases in the Replicated Vendor Portal.
For information about getting installation commands with the Replicated CLI, see [channel inspect](/reference/replicated-cli-channel-inspect). For information about getting installation commands with the Vendor API v3, see [Get install commands for a specific channel release](https://replicated-vendor-api.readme.io/reference/getchannelreleaseinstallcommands) in the Vendor API v3 documentation.
## Get Commands for the Latest Release
Every channel in the Vendor Portal has an **Install** section where you can find installation commands for the latest release on the channel.
To get the installation commands for the latest release:
1. In the [Vendor Portal](https://vendor.replicated.com), go to the **Channels** page.
1. On the target channel card, under **Install**, click the tab for the type of installation command that you want to view:
View the command for installing with Replicated KOTS in existing clusters.
[View a larger version of this image](/images/channel-card-install-kots.png)
View the commands for installing with Replicated Embedded Cluster or Replicated kURL on VMs or bare metal servers.
In the dropdown, choose **kURL** or **Embedded Cluster** to view the command for the target installer:
[View a larger version of this image](/images/channel-card-install-kurl.png)
[View a larger version of this image](/images/channel-card-install-ec.png)
:::note
The Embedded Cluster installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
:::
View the command for installing with the Helm CLI in an existing cluster.
[View a larger version of this image](/images/channel-card-install-helm.png)
:::note
The Helm installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
:::
[View a larger version of this image](/images/release-history-link.png)
1. For the target release version, open the dot menu and click **Install Commands**.

[View a larger version of this image](/images/channels-release-history.png)
1. In the **Install Commands** dialog, click the tab for the type of installation command that you want to view:
View the command for installing with Replicated KOTS in existing clusters.
[View a larger version of this image](/images/release-history-install-kots.png)
View the commands for installing with Replicated Embedded Cluster or Replicated kURL on VMs or bare metal servers.
In the dropdown, choose **kURL** or **Embedded Cluster** to view the command for the target installer:
[View a larger version of this image](/images/release-history-install-kurl.png)
[View a larger version of this image](/images/release-history-install-embedded-cluster.png)
:::note
The Embedded Cluster installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
:::
View the command for installing with the Helm CLI in an existing cluster.
[View a larger version of this image](/images/release-history-install-helm.png)
:::note
The Helm installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
:::
## Get Customer-Specific Installation Instructions for Helm or Embedded Cluster {#customer-specific}
View the customer-specific Helm CLI installation instructions. For more information about installing with the Helm CLI, see [Install with Helm](/vendor/install-with-helm).
[View a larger version of this image](/images/helm-install-instructions-dialog.png)
View the customer-specific Embedded Cluster installation instructions. For more information about installing with Embedded Cluster, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
[View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
[View a larger version of this image](/images/service-accounts.png)
1. For **Nickname**, enter a name for the token. Names for service accounts must be unique within a given team.
1. For **RBAC**, select the RBAC policy from the dropdown list. The token must have `Admin` access to create new releases.
This list includes the Vendor Portal default policies `Admin` and `Read Only`. Any custom policies also display in this list. For more information, see [Configure RBAC Policies](team-management-rbac-configuring).
Users with a non-admin RBAC role cannot select any other RBAC role when creating a token. They are restricted to creating a token with their same level of access to avoid permission elevation.
1. (Optional) For custom RBAC policies, select the **Limit to read-only version of above policy** checkbox if you want to use a policy that has Read/Write permissions but limit this service account to read-only access. This option lets you maintain one version of a custom RBAC policy and use it in two ways: as read/write and as read-only.
1. Select **Create Service Account**.
1. Copy the service account token and save it in a secure location. The token will not be available to view again.
:::note
To remove a service account, select **Remove** for the service account that you want to delete.
:::
### Generate a User API Token
To generate a user API token:
1. Log in to the Vendor Portal and go to the [Account Settings](https://vendor.replicated.com/account-settings) page.
1. Under **User API Tokens**, select **Create a user API token**. If one or more tokens already exist, you can add another by selecting **New user API token**.
[View a larger version of this image](/images/user-token-list.png)
1. In the **New user API token** dialog, enter a name for the token in the **Nickname** field. Names for user API tokens must be unique per user.
[View a larger version of this image](/images/user-token-create.png)
1. Select the required permissions or use the default **Read and Write** permissions. Then select **Create token**.
:::note
The token must have `Read and Write` access to create new releases.
:::
1. Copy the user API token that displays and save it in a secure location. The token will not be available to view again.
:::note
To revoke a token, select **Revoke token** for the token that you want to delete.
:::
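After you create a token, you can use it to authenticate non-interactive tooling such as the Replicated CLI. The following is a minimal sketch; the `REPLICATED_API_TOKEN` environment variable is assumed based on common Replicated CI workflows, and `my-app-slug` is a hypothetical application slug:
```bash
# Authenticate the Replicated CLI with the token you copied above
export REPLICATED_API_TOKEN=GENERATED_TOKEN
# Target application slug (hypothetical value)
export REPLICATED_APP=my-app-slug
replicated release ls
```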
---
# Onboard to the Replicated Platform
This topic describes how to onboard applications to the Replicated Platform.
## Before You Begin
This section includes guidance and prerequisites to review before you begin onboarding your application.
### Best Practices and Recommendations
The following are some best practices and recommendations for successfully onboarding with Replicated:
* When integrating new Replicated features with an application, make changes in small iterations and test frequently by installing or upgrading the application in a development environment. This will help you to more easily identify issues and troubleshoot. This onboarding workflow will guide you through the process of integrating features in small iterations.
* Use the Replicated CLI to create and manage your application and releases. Getting familiar with the Replicated CLI will also help later on when integrating Replicated workflows into your CI/CD pipelines. For more information, see [Install the Replicated CLI](/reference/replicated-cli-installing).
* These onboarding tasks assume that you will test the installation of each release on a VM with the Replicated Embedded Cluster installer _and_ in a cluster with the Replicated KOTS installer. If you do not intend to offer existing cluster installations with KOTS (for example, if you intend to support only Embedded Cluster and Helm installations for your users), then you can choose to test with Embedded Cluster only.
* Ask for help from the Replicated community. For more information, see [Get Help from the Community](#community) below.
### Get Help from the Community {#community}
The [Replicated community site](https://community.replicated.com/) is a forum where Replicated team members and users can post questions and answers related to working with the Replicated Platform. It is designed to help Replicated users troubleshoot and learn more about common tasks involved with distributing, installing, observing, and supporting their application.
Before posting in the community site, use the search to find existing knowledge base articles related to your question. If you are not able to find an existing article that addresses your question, create a new topic or add a reply to an existing topic so that a member of the Replicated community or team can respond.
To search and participate in the Replicated community, see https://community.replicated.com/.
### Prerequisites
* Create an account in the Vendor Portal. You can either create a new team or join an existing team. For more information, see [Create a Vendor Account](vendor-portal-creating-account).
* Install the Replicated CLI. See [Install the Replicated CLI](/reference/replicated-cli-installing).
* Complete a basic quick start workflow to create an application with a sample Helm chart and then promote and install releases in a development environment. This helps you get familiar with the process of creating, installing, and updating releases in the Replicated Platform. See [Replicated Quick Start](/vendor/quick-start).
* Ensure that you have access to a VM that meets the requirements for the Replicated Embedded Cluster installer. You will use this VM to test installation with Embedded Cluster.
Embedded Cluster has the following requirements:
* Linux operating system
* x86-64 architecture
* systemd
* At least 2GB of memory and 2 CPU cores
* The disk on the host must have a maximum P99 write latency of 10 ms. This supports etcd performance and stability. For more information about the disk write latency requirements for etcd, see [Disks](https://etcd.io/docs/latest/op-guide/hardware/#disks) in _Hardware recommendations_ and [What does the etcd warning “failed to send out heartbeat on time” mean?](https://etcd.io/docs/latest/faq/) in the etcd documentation.
* The data directory used by Embedded Cluster must have 40Gi or more of total space and be less than 80% full. By default, the data directory is `/var/lib/embedded-cluster`. The directory can be changed by passing the `--data-dir` flag with the Embedded Cluster `install` command (see the sketch after this list). For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
Note that in addition to the primary data directory, Embedded Cluster creates directories and files in the following locations:
- `/etc/cni`
- `/etc/k0s`
- `/opt/cni`
- `/opt/containerd`
- `/run/calico`
- `/run/containerd`
- `/run/k0s`
- `/sys/fs/cgroup/kubepods`
- `/sys/fs/cgroup/system.slice/containerd.service`
- `/sys/fs/cgroup/system.slice/k0scontroller.service`
- `/usr/libexec/k0s`
- `/var/lib/calico`
- `/var/lib/cni`
- `/var/lib/containers`
- `/var/lib/kubelet`
- `/var/log/calico`
- `/var/log/containers`
- `/var/log/embedded-cluster`
- `/var/log/pods`
- `/usr/local/bin/k0s`
* (Online installations only) Access to replicated.app and proxy.replicated.com, or your custom domain for each of these endpoints.
* Embedded Cluster is based on k0s, so all k0s system requirements and external runtime dependencies apply. See [System requirements](https://docs.k0sproject.io/stable/system-requirements/) and [External runtime dependencies](https://docs.k0sproject.io/stable/external-runtime-deps/) in the k0s documentation.
* (Optional) Ensure that you have kubectl access to a Kubernetes cluster. You will use this cluster to test installation with KOTS. If you do not intend to offer existing cluster installations with KOTS (for example, if you intend to support only Embedded Cluster and Helm installations for your users), then you do not need access to a cluster for the main onboarding tasks.
You can use any cloud provider or tool that you prefer to create a cluster, such as [Replicated Compatibility Matrix](/vendor/testing-how-to), Google Kubernetes Engine (GKE), or minikube.
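For reference, the following is a minimal sketch of changing the Embedded Cluster data directory at install time. The `--data-dir` flag is described in the prerequisites above; the `./APP_SLUG install` form and the `--license` flag are assumed to match the standard Embedded Cluster install command for your release:
```bash
# Install Embedded Cluster with a non-default data directory (assumed flags shown)
sudo ./APP_SLUG install --license license.yaml --data-dir /data/embedded-cluster
```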
## Onboard
Complete the tasks in this section to onboard your application. When you are done, you can continue to [Next Steps](#next-steps) to integrate other Replicated features with your application.
### Task 1: Create an Application
To get started with onboarding, first create a new application. This will be the official Vendor Portal application used by your team to create and promote both internal and customer-facing releases.
To create an application:
1. Create a new application using the Replicated CLI or the Vendor Portal. Use an official name for your application. See [Create an Application](/vendor/vendor-portal-manage-app#create-an-application).
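For example, a minimal sketch using the Replicated CLI (substitute your application's official name):
```bash
# Create the application in the Vendor Portal
replicated app create "My App"
```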
## Port Forwarding the SDK API Service {#port-forward}
After the Replicated SDK is installed and initialized in a cluster, the Replicated SDK API is exposed at `replicated:3000`. You can access the SDK API for testing by forwarding port 3000 to your local machine.
To port forward the SDK API service to your local machine:
1. Run the following command to port forward to the SDK API service:
```bash
kubectl port-forward service/replicated 3000
```
The output is similar to the following:
```
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```
1. With the port forward running, test the SDK API endpoints as desired. For example:
```bash
curl localhost:3000/api/v1/license/fields/expires_at
curl localhost:3000/api/v1/license/fields/{field}
```
For more information, see [Replicated SDK API](/reference/replicated-sdk-apis).
:::note
When the SDK is installed in integration mode, requests to the `license` endpoints use your actual development license data, while requests to the `app` endpoints use the default mock data.
:::
---
# Install the Replicated SDK
This topic describes the methods for distributing and installing the Replicated SDK.
It includes information about how to install the SDK alongside Helm charts or Kubernetes manifest-based applications using the Helm CLI or a Replicated installer (Replicated KOTS, kURL, Embedded Cluster). It also includes information about installing the SDK as a standalone component in integration mode.
For information about installing the SDK in air gap mode, see [Install the SDK in Air Gap Environments](replicated-sdk-airgap).
## Requirement
To install the SDK with a Replicated installer, KOTS v1.104.0 or later and the SDK version 1.0.0-beta.12 or later are required. You can verify the version of KOTS installed with `kubectl kots version`. For Replicated Embedded Cluster installations, you can see the version of KOTS that is installed by your version of Embedded Cluster in the [Embedded Cluster Release Notes](/release-notes/rn-embedded-cluster).
## Install the SDK as a Subchart
When included as a dependency of your application Helm chart, the SDK is installed as a subchart alongside the application.
To install the SDK as a subchart:
1. In your application Helm chart `Chart.yaml` file, add the YAML below to declare the SDK as a dependency. If your application is installed as multiple charts, declare the SDK as a dependency of the chart that customers install first. Do not declare the SDK in more than one chart.
```yaml
# Chart.yaml
dependencies:
  - name: replicated
    repository: oci://registry.replicated.com/library
    version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
1. Update the `charts/` directory:
```
helm dependency update
```
:::note
If you see a 401 Unauthorized error after running `helm dependency update`, run the following command to remove credentials from the Replicated registry, then re-run `helm dependency update`:
```bash
helm registry logout registry.replicated.com
```
For more information, see [401 Unauthorized Error When Updating Helm Dependencies](replicated-sdk-installing#401).
:::
1. Package the Helm chart into a `.tgz` archive:
```
helm package .
```
1. Add the chart archive to a new release. For more information, see [Manage Releases with the CLI](/vendor/releases-creating-cli) or [Managing Releases with the Vendor Portal](/vendor/releases-creating-releases).
1. (Optional) Add a KOTS HelmChart custom resource to the release to support installation with Embedded Cluster, KOTS, or kURL. For more information, see [Configure the HelmChart Custom Resource v2](/vendor/helm-native-v2-using).
1. Save and promote the release to an internal-only channel used for testing, such as the default Unstable channel.
1. Install the release using Helm or a Replicated installer. For more information, see:
* [Online Installation with Embedded Cluster](/enterprise/installing-embedded)
* [Installing with Helm](/vendor/install-with-helm)
* [Online Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster)
* [Online Installation with kURL](/enterprise/installing-kurl)
1. Confirm that the SDK was installed by seeing that the `replicated` Deployment was created:
```
kubectl get deploy --namespace NAMESPACE
```
Where `NAMESPACE` is the namespace in the cluster where the application and the SDK are installed.
**Example output**:
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
my-app       1/1     1            1           35s
replicated   1/1     1            1           35s
```
## Install the SDK Alongside a Kubernetes Manifest-Based Application {#manifest-app}
For applications that use Kubernetes manifest files instead of Helm charts, the SDK Helm chart can be added to a release and then installed by KOTS alongside the application.
To install the SDK with a Replicated installer, KOTS v1.104.0 or later and the SDK version 1.0.0-beta.12 or later are required. You can verify the version of KOTS installed with `kubectl kots version`. For Replicated Embedded Cluster installations, you can see the version of KOTS that is installed by your version of Embedded Cluster in the [Embedded Cluster Release Notes](/release-notes/rn-embedded-cluster).
To add the SDK Helm chart to a release for a Kubernetes manifest-based application:
1. Install the Helm CLI using Homebrew:
```
brew install helm
```
For more information, including alternative installation options, see [Install Helm](https://helm.sh/docs/intro/install/) in the Helm documentation.
1. Download the `.tgz` chart archive for the SDK Helm chart:
```
helm pull oci://registry.replicated.com/library/replicated --version SDK_VERSION
```
Where `SDK_VERSION` is the version of the SDK to install. For a list of available SDK versions, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/tags) in GitHub.
The output of this command is a `.tgz` file with the naming convention `CHART_NAME-CHART_VERSION.tgz`. For example, `replicated-1.5.0.tgz`.
For more information and additional options, see [Helm Pull](https://helm.sh/docs/helm/helm_pull/) in the Helm documentation.
1. Add the SDK `.tgz` chart archive to a new release. For more information, see [Manage Releases with the CLI](/vendor/releases-creating-cli) or [Managing Releases with the Vendor Portal](/vendor/releases-creating-releases).
The following shows an example of the SDK Helm chart added to a draft release for a standard manifest-based application:

[View a larger version of this image](/images/sdk-kots-release.png)
1. If one was not created automatically, add a KOTS HelmChart custom resource to the release. HelmChart custom resources have `apiVersion: kots.io/v1beta2` and `kind: HelmChart`.
**Example:**
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: replicated
spec:
  # chart identifies a matching chart from a .tgz
  chart:
    # for name, enter replicated
    name: replicated
    # for chartVersion, enter the version of the
    # SDK Helm chart in the release
    chartVersion: 1.5.1
```
As shown in the example above, the HelmChart custom resource requires the name and version of the SDK Helm chart that you added to the release:
* **`chart.name`**: The name of the SDK Helm chart is `replicated`. You can find the chart name in the `name` field of the SDK Helm chart `Chart.yaml` file.
* **`chart.chartVersion`**: The chart version varies depending on the version of the SDK that you pulled and added to the release. You can find the chart version in the `version` field of SDK Helm chart `Chart.yaml` file.
For more information about configuring the HelmChart custom resource to support KOTS installations, see [About Distributing Helm Charts with KOTS](/vendor/helm-native-about) and [HelmChart v2](/reference/custom-resource-helmchart-v2).
1. Save and promote the release to an internal-only channel used for testing, such as the default Unstable channel.
1. Install the release using a Replicated installer. For more information, see:
* [Online Installation with Embedded Cluster](/enterprise/installing-embedded)
* [Online Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster)
* [Online Installation with kURL](/enterprise/installing-kurl)
1. Confirm that the SDK was installed by seeing that the `replicated` Deployment was created:
```
kubectl get deploy --namespace NAMESPACE
```
Where `NAMESPACE` is the namespace in the cluster where the application, the Admin Console, and the SDK are installed.
**Example output**:
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
kotsadm      1/1     1            1           112s
my-app       1/1     1            1           28s
replicated   1/1     1            1           27s
```
## Install the SDK in Integration Mode
You can install the Replicated SDK in integration mode to develop locally against the SDK API without needing to add the SDK to your application, create a release in the Replicated Vendor Portal, or make changes in your environment. You can also use integration mode to test sending instance data to the Vendor Portal, including any custom metrics that you configure.
To use integration mode, install the Replicated SDK as a standalone component using a valid Development license created in the Vendor Portal. After you install in integration mode, the SDK provides default mock data for requests to the SDK API `app` endpoints. Requests to the `license` endpoints use the real data from your Development license.
To install the SDK in integration mode:
1. Create a Development license that you can use to install the SDK in integration mode:
1. In the Vendor Portal, go to **Customers** and click **Create customer**.
1. Complete the following fields:
1. For **Customer name**, add a name for the customer.
1. For **Assigned channel**, assign the customer to the channel that you use for testing. For example, Unstable.
1. For **Customer type**, select **Development**.
1. For **Customer email**, add the email address that you want to use for the license.
1. For **Install types**, ensure that the **Existing Cluster (Helm CLI)** option is enabled.
1. (Optional) Add any license field values that you want to use for testing:
1. For **Expiration policy**, you can add an expiration date for the license.
1. For **Custom fields**, you can add values for any custom license fields in your application. For information about how to create custom license fields, see [Manage Customer License Fields](/vendor/licenses-adding-custom-fields).
1. Click **Save Changes**.
1. On the **Manage customer** page for the customer you created, click **Helm install instructions**.
[View a larger version of this image](/images/helm-install-instructions-button.png)
1. In the **Helm install instructions** dialog, copy and run the command to log in to the Replicated registry.
[View a larger version of this image](/images/helm-install-instructions-registry-login.png)
1. From the same dialog, copy and run the command to install the SDK in integration mode:
[View a larger version of this image](/images/helm-install-instructions-sdk-integration.png)
1. Make requests to the SDK API from your application. You can access the SDK API for testing by forwarding the API service to your local machine. For more information, see [Port Forwarding the SDK API Service](/vendor/replicated-sdk-development#port-forward).
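For example, the following is a minimal sketch of forwarding the SDK API service and making a few requests from your local machine. It assumes the default service name `replicated` listening on port 3000 in the namespace where you installed the SDK in integration mode:
```bash
# Forward the SDK API service to your local machine
# (assumes the default Service name "replicated" on port 3000)
kubectl port-forward service/replicated 3000:3000 --namespace NAMESPACE

# In a separate terminal, query the SDK API. In integration mode, the /app
# endpoints return mock data and the /license endpoints reflect the real
# values from your Development license.
curl http://localhost:3000/api/v1/app/info
curl http://localhost:3000/api/v1/license/info
```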
## Troubleshoot
### 401 Unauthorized Error When Updating Helm Dependencies {#401}
#### Symptom
You see an error message similar to the following after adding the Replicated SDK as a dependency in your Helm chart then running `helm dependency update`:
```
Error: could not download oci://registry.replicated.com/library/replicated-sdk: failed to authorize: failed to fetch oauth token: unexpected status from GET request to https://registry.replicated.com/v2/token?scope=repository%3Alibrary%2Freplicated-sdk%3Apull&service=registry.replicated.com: 401 Unauthorized
```
#### Cause
When you run `helm dependency update`, Helm attempts to pull the Replicated SDK chart from the Replicated registry. An error can occur if you are already logged in to the Replicated registry with a license that has expired, such as when testing application releases.
#### Solution
To solve this issue:
1. Run the following command to remove login credentials for the Replicated registry:
```
helm registry logout registry.replicated.com
```
1. Re-run `helm dependency update` for your Helm chart.
---
# About the Replicated SDK
This topic provides an introduction to using the Replicated SDK with your application.
## Overview
The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. It can be installed alongside applications packaged as Helm charts or Kubernetes manifests, using either the Helm CLI or KOTS.
For information about how to distribute and install the SDK with your application, see [Install the Replicated SDK](/vendor/replicated-sdk-installing).
Replicated recommends that the SDK is distributed with all applications because it provides access to key Replicated functionality, such as:
* Automatic access to insights and operational telemetry for instances running in customer environments, including granular details about the status of different application resources. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
* An in-cluster API that you can use to embed Replicated features into your application, including:
* Collect custom metrics on instances running in online or air gap environments. See [Configure Custom Metrics](/vendor/custom-metrics).
* Check customer license entitlements at runtime. See [Query Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk) and [Verify License Field Signatures with the Replicated SDK API](/vendor/licenses-verify-fields-sdk-api).
* Provide update checks to alert customers when new versions of your application are available for upgrade. See [Support Update Checks in Your Application](/reference/replicated-sdk-apis#support-update-checks-in-your-application) in _Replicated SDK API_.
* Programmatically name or tag instances from the instance itself. See [Programmatically Set Tags](/reference/replicated-sdk-apis#post-appinstance-tags).
For more information about the Replicated SDK API, see [Replicated SDK API](/reference/replicated-sdk-apis). For information about developing against the SDK API locally, see [Develop Against the SDK API](replicated-sdk-development).
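As a rough sketch of how an application might call the in-cluster API, the following requests assume the default SDK service name `replicated` on port 3000 in the same namespace as the application, and a hypothetical license field named `expires_at`:
```bash
# Read a single license entitlement at runtime
# ("expires_at" is a hypothetical field name)
curl http://replicated:3000/api/v1/license/fields/expires_at

# Check whether a newer version of the application is available
curl http://replicated:3000/api/v1/app/updates
```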
## Limitations
The Replicated SDK has the following limitations:
* Some popular enterprise continuous delivery tools, such as ArgoCD and Pulumi, deploy Helm charts by running `helm template` then `kubectl apply` on the generated manifests, rather than running `helm install` or `helm upgrade`. The following limitations apply to applications installed by running `helm template` then `kubectl apply`:
* The `/api/v1/app/history` SDK API endpoint always returns an empty array because there is no Helm history in the cluster. See [GET /app/history](/reference/replicated-sdk-apis#get-apphistory) in _Replicated SDK API_.
* The SDK does not automatically generate status informers to report status data for installed instances of the application. To get instance status data, you must enable custom status informers by overriding the `replicated.statusInformers` Helm value. See [Enable Application Status Insights](/vendor/insights-app-status#enable-application-status-insights) in _Enabling and Understanding Application Status_.
## SDK Resiliency
At startup and when serving requests, the SDK retrieves and caches the latest information from the upstream Replicated APIs, including customer license information.
If the upstream APIs are not available at startup, the SDK does not accept connections or serve requests until it is able to communicate with the upstream APIs. If communication fails, the SDK retries every 10 seconds and the SDK pod reports `0/1` ready.
When serving requests, if the upstream APIs become unavailable, the SDK serves from the memory cache and sets the `X-Replicated-Served-From-Cache` header to `true`. Additionally, rapid successive requests to the same SDK endpoint with the same request properties are rate limited: the SDK returns the last cached payload and status code without reaching out to the upstream APIs, and sets the `X-Replicated-Rate-Limited` header to `true`.
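To see whether a given response was served from the cache or rate limited, you can inspect these headers directly. The following is a minimal sketch that assumes the SDK API has been port forwarded to `localhost:3000`:
```bash
# Print any X-Replicated-* headers returned with the response
curl -si http://localhost:3000/api/v1/license/info | grep -i 'x-replicated'
```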
## Replicated SDK Helm Values
When a user installs a Helm chart that includes the Replicated SDK as a dependency, a set of default SDK values are included in the `replicated` key of the parent chart's values file.
For example:
```yaml
# values.yaml
replicated:
  enabled: true
  appName: gitea
  channelID: 2jKkegBMseH5w...
  channelName: Beta
  channelSequence: 33
  integration:
    enabled: true
  license: {}
  parentChartURL: oci://registry.replicated.com/gitea/beta/gitea
  releaseCreatedAt: "2024-11-25T20:38:22Z"
  releaseNotes: 'CLI release'
  releaseSequence: 88
  replicatedAppEndpoint: https://replicated.app
  versionLabel: Beta-1234
```
These `replicated` values can be referenced by the application or set during installation as needed. For example, if users need to add labels or annotations to everything that runs in their cluster, then they can pass the labels or annotations to the relevant value in the SDK subchart.
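For example, the following is a sketch of overriding an SDK subchart value at install time. The chart URL matches the `parentChartURL` shown above, and `replicated.statusInformers` is the value described in the Limitations section of this topic; both are illustrative and depend on your application:
```bash
# Log in to the Replicated registry first (see the install instructions for
# your customer), then install the parent chart and set a value in the SDK subchart
helm install gitea oci://registry.replicated.com/gitea/beta/gitea \
  --namespace gitea --create-namespace \
  --set 'replicated.statusInformers[0]=deployment/gitea'
```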
For the default Replicated SDK Helm chart values file, see [values.yaml](https://github.com/replicatedhq/replicated-sdk/blob/main/chart/values.yaml) in the [replicated-sdk](https://github.com/replicatedhq/replicated-sdk) repository in GitHub.
The SDK Helm values also include a `replicated.license` field, which is a string that contains the YAML representation of the customer license. For more information about the built-in fields in customer licenses, see [Built-In License Fields](licenses-using-builtin-fields).
---
# Validate Provenance of Releases for the Replicated SDK
This topic describes the process to perform provenance validation on the Replicated SDK.
## About Supply Chain Levels for Software Artifacts (SLSA)
[Supply Chain Levels for Software Artifacts (SLSA)](https://slsa.dev/), pronounced “salsa,” is a security framework that comprises standards and controls designed to prevent tampering, enhance integrity, and secure software packages and infrastructure.
## Purpose of Attestations
Attestations enable the inspection of an image to determine its origin, the identity of its creator, the creation process, and its contents. When building software using the Replicated SDK, the image’s Software Bill of Materials (SBOM) and SLSA-based provenance attestations empower your customers to make informed decisions regarding the impact of an image on the supply chain security of your application. This process ultimately enhances the security and assurances provided to both vendors and end customers.
## Prerequisite
Before you perform these tasks, you must install [slsa-verifier](https://github.com/slsa-framework/slsa-verifier) and [crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md).
## Validate the SDK SLSA Attestations
The Replicated SDK build process utilizes Wolfi-based images to minimize the number of CVEs. The build process automatically generates SBOMs and attestations, and then publishes the image along with these metadata components. For instance, you can find all of the artifacts on [DockerHub](https://hub.docker.com/r/replicated/replicated-sdk/tags). The following shell script validates the SLSA attestations for a given Replicated SDK image.
```bash
#!/bin/bash
# This script verifies the SLSA metadata of a container image
#
# Requires
# - slsa-verifier (https://github.com/slsa-framework/slsa-verifier)
# - crane (https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md)
#
# Define the image and version to verify
VERSION=v1.0.0-beta.20
IMAGE=replicated/replicated-sdk:${VERSION}
# expected source repository that should have produced the artifact, e.g. github.com/some/repo
SOURCE_REPO=github.com/replicatedhq/replicated-sdk
# Use `crane` to retrieve the digest of the image without pulling the image
IMAGE_WITH_DIGEST="${IMAGE}@"$(crane digest "${IMAGE}")
echo "Verifying artifact"
echo "Image: ${IMAGE_WITH_DIGEST}"
echo "Source Repo: ${SOURCE_REPO}"
slsa-verifier verify-image "${IMAGE_WITH_DIGEST}" \
  --source-uri ${SOURCE_REPO} \
  --source-tag ${VERSION}
```
---
# Template Annotations
This topic describes how to use Replicated KOTS template functions to template annotations for resources and objects based on user-supplied values.
## Overview
It is common for users to need to set custom annotations for a resource or object deployed by your application. For example, you might need to allow your users to provide annotations to apply to a Service or Ingress object in public cloud environments.
For applications installed with Replicated KOTS, you can apply user-supplied annotations to resources or objects by first adding a field to the Replicated Admin Console **Config** page where users can enter one or more annotations. For information about how to add fields on the **Config** page, see [Create and Edit Configuration Fields](/vendor/admin-console-customize-config-screen).
You can then map these user-supplied values from the **Config** page to resources and objects in your release using KOTS template functions. KOTS template functions are a set of custom template functions based on the Go text/template library that can be used to generate values specific to customer environments. The template functions in the Config context return user-supplied values on the **Config** page.
For more information about KOTS template functions in the Config context, see [Config Context](/reference/template-functions-config-context). For more information about the Go library, see [text/template](https://pkg.go.dev/text/template) in the Go documentation.
## About `kots.io/placeholder`
For applications installed with KOTS that use standard Kubernetes manifests, the `kots.io/placeholder` annotation allows you to template annotations in resources and objects without breaking the base YAML or needing to include the annotation key.
The `kots.io/placeholder` annotation uses the format `kots.io/placeholder 'bool' 'string'`. For example:
```yaml
# Example manifest file
annotations:
  kots.io/placeholder: |-
    repl{{ ConfigOption "additional_annotations" | nindent 4 }}
```
:::note
For Helm chart-based applications installed with KOTS, Replicated recommends that you map user-supplied annotations to the Helm chart `values.yaml` file using the Replicated HelmChart custom resource, rather than using `kots.io/placeholder`. This allows you to access user-supplied values in your Helm chart without needing to include KOTS template functions directly in the Helm chart templates.
For an example, see [Map User-Supplied Annotations to Helm Chart Values](#map-user-supplied-annotations-to-helm-chart-values) below.
:::
## Annotation Templating Examples
This section includes common examples of templating annotations in resources and objects to map user-supplied values.
For additional examples of how to map values to Helm chart-based applications, see [Applications](https://github.com/replicatedhq/platform-examples/tree/main/applications) in the platform-examples repository in GitHub.
### Map Multiple Annotations from a Single Configuration Field
You can map one or more annotations from a single `textarea` field on the **Config** page. The `textarea` type defines multi-line text input and supports properties such as `rows` and `cols`. For more information, see [textarea](/reference/custom-resource-config#textarea) in _Config_.
For example, the following Config custom resource adds an `ingress_annotations` field of type `textarea`:
```yaml
# Config custom resource
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: config
spec:
  groups:
    - name: ingress_settings
      title: Ingress Settings
      description: Configure Ingress
      items:
        - name: ingress_annotations
          type: textarea
          title: Ingress Annotations
          help_text: See your cloud provider’s documentation for the required annotations.
```
On the **Config** page, users can enter one or more key value pairs in the `ingress_annotations` field, as shown in the example below:

[View a larger version of this image](/images/config-map-annotations.png)
The following Ingress object uses the `kots.io/placeholder` annotation with a ConfigOption template function to map the user-supplied values from the `ingress_annotations` field:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      repl{{ ConfigOption "ingress_annotations" | nindent 4 }}
```
During installation, KOTS renders the YAML with the multi-line input from the configuration field as shown below:
```yaml
# Rendered Ingress object
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      key1: value1
      key2: value2
      key3: value3
```
### Map Annotations from Multiple Configuration Fields
You can specify multiple annotations using the same `kots.io/placeholder` annotation.
For example, the following Ingress object includes ConfigOption template functions that render the user-supplied values for the `ingress_annotation` and `ingress_hostname` fields:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      repl{{ ConfigOption "ingress_annotation" | nindent 4 }}
      repl{{ printf "my.custom/annotation.ingress.hostname: %s" (ConfigOption "ingress_hostname") | nindent 4 }}
```
During installation, KOTS renders the YAML as shown below:
```yaml
# Rendered Ingress object
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      key1: value1
      my.custom/annotation.ingress.hostname: example.hostname.com
```
### Map User-Supplied Value to a Key
You can map a user-supplied value from the **Config** page to a pre-defined annotation key.
For example, in the following Ingress object, `my.custom/annotation.ingress.hostname` is the key for the templated annotation. The annotation also uses the ConfigOption template function to map the user-supplied value from a `ingress_hostname` configuration field:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      repl{{ printf "my.custom/annotation.ingress.hostname: %s" (ConfigOption "ingress_hostname") | nindent 4 }}
```
During installation, KOTS renders the YAML as shown below:
```yaml
# Rendered Ingress object
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-annotation
  annotations:
    kots.io/placeholder: |-
      my.custom/annotation.ingress.hostname: example.hostname.com
```
### Include Conditional Statements in Templated Annotations
You can include or exclude templated annotations based on a conditional statement.
For example, the following Ingress object includes a conditional statement for `kots.io/placeholder` that renders `my.custom/annotation.class: somevalue` if the user enables a `custom_annotation` field on the **Config** page:
```yaml
apiVersion: v1
kind: Ingress
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    kots.io/placeholder: |-
      repl{{if ConfigOptionEquals "custom_annotation" "1" }}repl{{ printf "my.custom/annotation.class: somevalue" | nindent 4 }}repl{{end}}
spec:
  ...
```
During installation, if the user enables the `custom_annotation` configuration field, KOTS renders the YAML as shown below:
```yaml
# Rendered Ingress object
apiVersion: v1
kind: Ingress
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    kots.io/placeholder: |-
      my.custom/annotation.class: somevalue
spec:
  ...
```
Alternatively, if the condition evaluates to false, the annotation does not appear in the rendered YAML:
```yaml
apiVersion: v1
kind: Ingress
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    kots.io/placeholder: |-
spec:
  ...
```
### Map User-Supplied Annotations to Helm Chart Values
For Helm chart-based applications installed with KOTS, Replicated recommends that you map user-supplied annotations to the Helm chart `values.yaml` file, rather than using `kots.io/placeholder`. This allows you to access user-supplied values in your Helm chart without needing to include KOTS template functions directly in the Helm chart templates.
To map user-supplied annotations from the **Config** page to the Helm chart `values.yaml` file, you use the `values` field of the Replicated HelmChart custom resource. For more information, see [values](/reference/custom-resource-helmchart-v2#values) in _HelmChart v2_.
For example, the following HelmChart custom resource uses a ConfigOption template function in `values.services.myservice.annotations` to map the value of a configuration field named `additional_annotations`:
```yaml
# HelmChart custom resource
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: myapp
spec:
  values:
    services:
      myservice:
        annotations: repl{{ ConfigOption "additional_annotations" | nindent 10 }}
```
The `values.services.myservice.annotations` field in the HelmChart custom resource corresponds to the `services.myservice.annotations` field in the `values.yaml` file of the application Helm chart, as shown in the example below:
```yaml
# Helm chart values.yaml
services:
  myservice:
    annotations: {}
```
During installation, the ConfigOption template function in the HelmChart custom resource renders the user-supplied values from the `additional_annotations` configuration field.
Then, KOTS replaces the value in the corresponding field in the `values.yaml` in the chart archive, as shown in the example below.
```yaml
# Rendered Helm chart values.yaml
services:
  myservice:
    annotations:
      key1: value1
```
In your Helm chart templates, you can access these values from the `values.yaml` file to apply the user-supplied annotations to the target resources or objects. For information about how to access values from a `values.yaml` file, see [Values Files](https://helm.sh/docs/chart_template_guide/values_files/) in the Helm documentation.
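As a quick local spot check, you can render the chart with the annotation value set and confirm that it lands on the target object. This sketch assumes a local copy of the chart in a hypothetical `./myapp-chart` directory:
```bash
# Render the chart locally with an example annotation and confirm that it
# appears in the rendered manifests
helm template myapp ./myapp-chart \
  --set services.myservice.annotations.key1=value1 \
  | grep 'key1'
```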
---
# Configure Snapshots
This topic provides information about how to configure the Velero Backup resource to enable Replicated KOTS snapshots for an application.
For more information about snapshots, see [About Backup and Restore with snapshots](/vendor/snapshots-overview).
## Configure Snapshots
Add a Velero Backup custom resource (`kind: Backup`, `apiVersion: velero.io/v1`) to your release and configure it as needed. After configuring the Backup resource, add annotations for each volume that you want to be included in backups.
To configure snapshots for your application:
1. In a new release containing your application files, add a Velero Backup resource (`kind: Backup` and `apiVersion: velero.io/v1`):
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
spec: {}
```
1. Configure the Backup resource to specify the resources that will be included in backups.
For more information about the Velero Backup resource, including limitations, the list of supported fields for snapshots, and an example, see [Velero Backup Resource for Snapshots](/reference/custom-resource-backup).
1. (Optional) Configure backup and restore hooks. For more information, see [Configure Backup and Restore Hooks for Snapshots](snapshots-hooks).
1. For each volume that requires a backup, add the `backup.velero.io/backup-volumes` annotation to the Pod. The value of the annotation is a comma-separated list of the volumes to include in the backup.
By default, no volumes are included in the backup. If any pods mount a volume that should be backed up, you must add this annotation listing the specific volumes to include.
| Pod Annotation | Description |
|---|---|
| `backup.velero.io/backup-volumes` | A comma-separated list of volumes from the Pod to include in the backup. The primary data volume is not included in this field because data is exported using the backup hook. |
| `pre.hook.backup.velero.io/command` | A stringified JSON array containing the command for the backup hook. This command is a `pg_dump` from the running database to the backup volume. |
| `pre.hook.backup.velero.io/timeout` | A duration for the maximum time to let this script run. |
| `post.hook.restore.velero.io/command` | A Velero exec restore hook that runs a script to check if the database file exists, and restores only if it exists. Then, the script deletes the file after the operation is complete. |
| Restore Type | Description | Interface to Use |
|---|---|---|
| Full restore | Restores the Admin Console and the application. | KOTS CLI |
| Partial restore | Restores the application only. | KOTS CLI or Admin Console |
| Admin console | Restores the Admin Console only. | KOTS CLI |
After clicking this button, the bundle will be immediately available under the Troubleshoot tab in the Vendor Portal team account associated with this customer.
For more information on how your customer can use this feature, see [Generate Support Bundles from the Admin Console](/enterprise/troubleshooting-an-app).
### How to Enable Direct Bundle Uploads
Direct bundle uploads are disabled by default. To enable this feature for your customer:
1. Log in to the Vendor Portal and navigate to your customer's **Manage Customer** page.
1. Under the **License options** section, make sure your customer has **KOTS Install Enabled** checked, and then check the **Support Bundle Upload Enabled (Beta)** option.
[View a larger version of this image](/images/configure-direct-support-bundle-upload.png)
1. Click **Save**.
### Limitations
- You will not receive a notification when a customer sends a support bundle to the Vendor Portal. To avoid overlooking these uploads, activate this feature only if there is a reliable escalation process already in place for the customer license.
- This feature supports online KOTS installations only. If the feature is enabled but the application is installed in air gap mode, the upload button does not appear.
- There is a 500 MB limit for support bundles uploaded directly from the Admin Console.
---
# Generate Host Bundles for kURL
This topic describes how to configure a host support bundle spec for Replicated kURL installations. For information about generating host support bundles for Replicated Embedded Cluster installations, see [Generate Host Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
## Overview
Host support bundles can be used to collect information directly from the host where a kURL cluster is running, such as CPU, memory, available block devices, and the operating system. Host support bundles can also be used for testing network connectivity and gathering the output of provided commands.
Host bundles for kURL are useful when:
- The kURL cluster is offline
- The kURL installer failed before the control plane was initialized
- The Admin Console is not working
- You want to debug host-specific performance and configuration problems even when the cluster is running
You can create a YAML spec to allow users to generate host support bundles for kURL installations. For information, see [Create a Host Support Bundle Spec](#create-a-host-support-bundle-spec) below.
Replicated also provides a default support bundle spec to collect host-level information for installations with the Embedded Cluster installer. For more information, see [Generate Host Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
## Create a Host Support Bundle Spec
To allow users to generate host support bundles for kURL installations, create a host support bundle spec in a YAML manifest that is separate from your application release and then share the file with customers to run on their hosts. This spec is separate from your application release because host collectors and analyzers are intended to run directly on the host and not with Replicated KOTS. If KOTS runs host collectors, the collectors are unlikely to produce the desired results because they run in the context of the kotsadm Pod.
To configure a host support bundle spec for kURL:
1. Create a SupportBundle custom resource manifest file (`kind: SupportBundle`).
1. Configure all of your host collectors and analyzers in one manifest file. You can use the following resources to help create your specification:
- Access sample specifications in the Replicated troubleshoot-specs repository, which provides specifications for supporting your customers. See [troubleshoot-specs/host](https://github.com/replicatedhq/troubleshoot-specs/tree/main/host) in GitHub.
- View a list and details of the available host collectors and analyzers. See [All Host Collectors and Analyzers](https://troubleshoot.sh/docs/host-collect-analyze/all/) in the Troubleshoot documentation.
**Example:**
The following example shows host collectors and analyzers for the number of CPUs and the amount of memory.
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: host-collectors
spec:
  hostCollectors:
    - cpu: {}
    - memory: {}
  hostAnalyzers:
    - cpu:
        checkName: "Number of CPUs"
        outcomes:
          - fail:
              when: "count < 2"
              message: At least 2 CPU cores are required, and 4 CPU cores are recommended.
          - pass:
              message: This server has at least 4 CPU cores.
    - memory:
        checkName: "Amount of Memory"
        outcomes:
          - fail:
              when: "< 4G"
              message: At least 4G of memory is required, and 8G is recommended.
          - pass:
              message: The system has at least 8G of memory.
```
1. Share the file with your customers to run on their hosts.
:::important
Do not store support bundles on public shares, as they may still contain information that could be used to infer private data about the installation, even if some values are redacted.
:::
## Generate a Host Bundle for kURL
To generate a kURL host support bundle:
1. Do one of the following:
- Save the host support bundle YAML file on the host. For more information about creating a YAML spec for a host support bundle, see [Create a Host Support Bundle Spec](/vendor/support-host-support-bundles#create-a-host-support-bundle-spec).
- Run the following command to download the default kURL host support bundle YAML file from the Troubleshoot repository:
```
kubectl support-bundle https://raw.githubusercontent.com/replicatedhq/troubleshoot-specs/main/host/default.yaml
```
:::note
For air gap environments, download the YAML file and copy it to the air gap machine.
:::
1. Run the following command on the host to generate a support bundle:
```
./support-bundle --interactive=false PATH/FILE.yaml
```
Replace:
- `PATH` with the path to the host support bundle YAML file.
- `FILE` with the name of the host support bundle YAML file from your vendor.
:::note
Root access is typically not required to run the host collectors and analyzers. However, depending on what is being collected, you might need to run the support-bundle binary with elevated permissions. For example, if you run the `filesystemPerformance` host collector against `/var/lib/etcd` and the user running the binary does not have permissions on this directory, the collection process fails.
:::
1. Share the host support bundle with your vendor's support team, if needed.
1. Repeat these steps on every node. There is no method for generating host support bundles on remote hosts, so for a multi-node kURL cluster you must run the support-bundle binary on each node to generate a host support bundle for that node, as in the sketch below.
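The following is a rough sketch of looping over the nodes of a cluster, assuming the support-bundle binary and the spec file (here a hypothetical `host-support-bundle.yaml`) are already present on each node and that you have SSH access. Node names are illustrative:
```bash
# Run the host support bundle spec on each node over SSH
for node in node1 node2 node3; do
  ssh "$node" './support-bundle --interactive=false host-support-bundle.yaml'
done
```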
---
# Inspect Support Bundles
You can use the Vendor Portal to get a visual analysis of customer support bundles and use the file inspector to drill down into the details and logs files. Use this information to get insights and help troubleshoot your customer issues.
To inspect a support bundle:
1. In the Vendor Portal, go to the [**Troubleshoot**](https://vendor.replicated.com/troubleshoot) page and click **Add support bundle > Upload a support bundle**.
1. In the **Upload a support bundle** dialog, drag and drop or use the file selector to upload a support bundle file to the Vendor Portal.
[View a larger version of this image](/images/support-bundle-analyze.png)
1. (Optional) If the support bundle relates to an open support issue, select the support issue from the dropdown to share the bundle with Replicated.
1. Click **Upload support bundle**.
The **Support bundle analysis** page opens. The **Support bundle analysis** page includes information about the bundle, any available instance reporting data from the point in time when the bundle was collected, an analysis overview that can be filtered to show errors and warnings, and a file inspector.

[View a larger version of this image](/images/support-bundle-analysis-overview.png)
1. On the **File inspector** tab, select any files from the directory tree to inspect the details of any files included in the support bundle, such as log files.
1. (Optional) Click **Download bundle** to download the bundle. This can be helpful if you want to access the bundle from another system or if other team members want to access the bundle and use other tools to examine the files.
1. (Optional) Navigate back to the [**Troubleshoot**](https://vendor.replicated.com/troubleshoot) page and click **Create cluster** to provision a cluster with Replicated Compatibility Matrix. This can be helpful for creating customer-representative environments for troubleshooting. For more information about creating clusters with Compatibility Matrix, see [Use Compatibility Matrix](testing-how-to).
[View a larger version of this image](/images/cmx-cluster-configuration.png)
1. If you cannot resolve your customer's issue and need to submit a support request, go to the [**Support**](https://vendor.replicated.com/) page and click **Open a support request**. For more information, see [Submit a Support Request](support-submit-request).
:::note
The **Share with Replicated** button on the support bundle analysis page does _not_ open a support request. You might be directed to use the **Share with Replicated** option when you are already interacting with a Replicated team member.
:::

[View larger version of this image](/images/support.png)
---
# About Creating Modular Support Bundle Specs
This topic describes how to use a modular approach to creating support bundle specs.
## Overview
Support bundle specifications can be designed using a modular approach. This refers to creating multiple different specs that are scoped to individual components or microservices, rather than creating a single, large spec. For example, for applications that are deployed as multiple Helm charts, vendors can create a separate support bundle spec in the `templates` directory in the parent chart as well as in each subchart.
This modular approach helps teams develop specs that are easier to maintain, and it helps teams avoid the merge conflicts that are more likely to occur when making changes to a single large spec. When generating support bundles for an application that includes multiple modular specs, the specs are merged so that only one support bundle archive is generated.
## Example: Support Bundle Specifications by Component {#component}
Using a modular approach for an application that ships MySQL, NGINX, and Redis, your team can add collectors and analyzers in a separate support bundle specification for each component.
`manifests/nginx/troubleshoot.yaml`
This collector and analyzer checks compliance for the minimum number of replicas for the NGINX component:
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: nginx
spec:
  collectors:
    - logs:
        selector:
          - app=nginx
  analyzers:
    - deploymentStatus:
        name: nginx
        outcomes:
          - fail:
              when: replicas < 2
```
`manifests/mysql/troubleshoot.yaml`
This collector and analyzer checks compliance for the minimum version of the MySQL component:
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: mysql
spec:
  collectors:
    - mysql:
        uri: 'dbuser:**REDACTED**@tcp(db-host)/db'
  analyzers:
    - mysql:
        checkName: Must be version 8.x or later
        outcomes:
          - fail:
              when: version < 8.x
```
`manifests/redis/troubleshoot.yaml`
This collector and analyzer checks that the Redis server is responding:
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: redis
spec:
  collectors:
    - redis:
        collectorName: redis
        uri: rediss://default:password@hostname:6379
```
A single support bundle archive can be generated from a combination of these manifests using the `kubectl support-bundle --load-cluster-specs` command.
For more information and additional options, see [Generate Support Bundles](support-bundle-generating).
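For example, the following is a minimal sketch, assuming each of the specs above has been applied to the cluster in a form that the CLI can discover (such as a Secret or ConfigMap labeled `troubleshoot.sh/kind: support-bundle`):
```bash
# Discover all support bundle specs in the cluster, merge them, and
# generate a single support bundle archive
kubectl support-bundle --load-cluster-specs
```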
---
# Make Support Bundle Specs Available Online
This topic describes how to make your application's support bundle specs available online as well as how to link to online specs.
## Overview
You can make the definition of one or more support bundle specs available online in a source repository and link to it from the specs in the cluster. This approach lets you update collectors and analyzers outside of the application release and notify customers of potential problems and fixes in between application updates.
The schema supports a `uri:` field that, when set, causes support bundle generation to use the online specification. If the URI is unreachable or cannot be parsed, the collectors and analyzers in the in-cluster specification are used as a fallback.
You update the collectors and analyzers in the online specification to address bug fixes. When a customer generates a support bundle, the online specification can detect those potential problems in the cluster and tell the customer how to fix them. Without the URI link option, you must wait for customers to update their application or Kubernetes version before they are notified of potential problems. The URI link option is particularly useful for customers who do not update their application routinely.
If you are using a modular approach to designing support bundles, you can use multiple online specs. Each specification supports one URI link. For more information about modular specs, see [About Creating Modular Support Bundle Specs](support-modular-support-bundle-specs).
## Example: URI Linking to a Source Repository
This example shows how Replicated could set up a URI link for one of its own components. You can follow a similar process to link to your own online repository for your support bundles.
Replicated kURL includes an EKCO add-on for maintenance on embedded clusters, such as automating certificate rotation or data migration tasks. Replicated can ship this component with a support bundle manifest that warns users if they do not have this add-on installed or if it is not running in the cluster.
**Example: Release v1.0.0**
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: ekco
spec:
  collectors:
  analyzers:
    - deploymentStatus:
        checkName: Check EKCO is operational
        name: ekc-operator
        namespace: kurl
        outcomes:
          - fail:
              when: absent
              message: EKCO is not installed - please add the EKCO component to your kURL spec and re-run the installer script
          - fail:
              when: "< 1"
              message: EKCO does not have any ready replicas
          - pass:
              message: EKCO has at least 1 replica
```
If a bug is discovered at any time after the release of the specification above, Replicated can write an analyzer for it in an online specification. By adding a URI link to the online specification, the support bundle uses the assets hosted in the online repository, which is kept current.
The `uri` field is added to the specification as a raw file link. Replicated hosts the online specification on [GitHub](https://github.com/replicatedhq/troubleshoot-specs/blob/main/in-cluster/default.yaml).
**Example: Release v1.1.0**
```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: ekco
spec:
  uri: https://raw.githubusercontent.com/replicatedhq/troubleshoot-specs/main/in-cluster/default.yaml
  collectors: [...]
  analyzers: [...]
```
Using the `uri:` property, the support bundle gets the latest online specification if it can, or falls back to the collectors and analyzers listed in the specification that is in the cluster.
Note that because the release version 1.0.0 did not contain the URI, Replicated would have to wait until existing users upgrade a cluster before getting the benefit of the new analyzer. Then, going forward, those users get any future online analyzers without having to upgrade. New users who install the version containing the URI as their initial installation automatically get any online analyzers when they generate a support bundle.
For more information about the URI, see [Troubleshoot schema supports a `uri://` field](https://troubleshoot.sh/docs/support-bundle/supportbundle/#uri) in the Troubleshoot documentation. For a complete example, see [Debugging Kubernetes: Enhancements to Troubleshoot](https://www.replicated.com/blog/debugging-kubernetes-enhancements-to-troubleshoot/#Using-online-specs-for-support-bundles) in The Replicated Blog.
---
# Submit a Support Request
You can submit a support request and a support bundle using the Replicated Vendor Portal. Uploading a support bundle is secure and helps the Replicated support team troubleshoot your application faster. Severity 1 issues are resolved three times faster when you submit a support bundle with your support request.
### Prerequisites
The following prerequisites must be met to submit support requests:
* Your Vendor Portal account must be configured for access to support before you can submit support requests. Contact your administrator to ensure that you are added to the correct team.
* Your team must have a replicated-collab repository configured. If you are a team administrator and need information about getting a collab repository set up and adding users, see [Adding Users to the Collab Repository](team-management-github-username#add).
### Submit a Support Request
To submit a support request:
1. From the [Vendor Portal](https://vendor.replicated.com), click **Support > Submit a Support Request** or go directly to the [Support page](https://vendor.replicated.com/support).
1. In section 1 of the Support Request form, complete the fields with information about your issue.
1. In section 2, do _one_ of the following actions:
- Use your pre-selected support bundle or select a different bundle in the pick list
- Select **Upload and attach a new support bundle** and attach a bundle from your file browser
1. Click **Submit Support Request**. You receive a link to your support issue, where you can interact with the support team.
:::note
Click **Back** to exit without submitting a support request.
:::
---
# Manage Collab Repository Access
This topic describes how to add users to the Replicated collab GitHub repository automatically through the Replicated Vendor Portal. It also includes information about managing user roles in this repository using Vendor Portal role-based access control (RBAC) policies.
## Overview {#overview}
The replicated-collab organization in GitHub is used for tracking and collaborating on escalations, bug reports, and feature requests that are sent by members of a Vendor Portal team to the Replicated team. Replicated creates a unique repository in the replicated-collab organization for each Vendor Portal team. Members of a Vendor Portal team submit issues to their unique collab repository on the Support page in the [Vendor Portal](https://vendor.replicated.com/support).
For more information about the collab repositories and how they are used, see [Replicated Support Paths and Processes](https://community.replicated.com/t/replicated-vendor-support-paths-and-processes/850) in _Replicated Community_.
To get access to the collab repository, members of a Vendor Portal team can add their GitHub username to the [Account Settings](https://vendor.replicated.com/account-settings) page in the Vendor Portal. The Vendor Portal then automatically provisions the team member as a user in the collab repository in GitHub. The RBAC policy that the member is assigned in the Vendor Portal determines the GitHub role that they have in the collab repository.
Replicated recommends that Vendor Portal admins manage user access to the collab repository through the Vendor Portal, rather than manually managing users through GitHub. Managing access through the Vendor Portal has the following benefits:
* Users are automatically added to the collab repository when they add their GitHub username in the Vendor Portal.
* Users are automatically removed from the collab repository when they are removed from the Vendor Portal team.
* Vendor Portal and collab repository RBAC policies are managed from a single location.
## Add Users to the Collab Repository {#add}
This procedure describes how to use the Vendor Portal to access the collab repository for the first time as an Admin, then automatically add new and existing users to the repository. This allows you to use the Vendor Portal to manage the GitHub roles for users in the collab repository, rather than manually adding, managing, and removing users from the repository through GitHub.
### Prerequisite
Your team must have a replicated-collab repository configured to add users to
the repository and to manage repository access through the Vendor Portal. To get
a collab support repository configured in GitHub for your team, complete the onboarding
instructions in the email you received from Replicated. You can also access the [Replicated community help forum](https://community.replicated.com/) for assistance.
### Procedure
To add new and existing users to the collab repository through the Vendor Portal:
1. As a Vendor Portal admin, log in to your Vendor Portal account. In the [Account Settings](https://vendor.replicated.com/account-settings) page, add your GitHub username and click **Save Changes**.
The Vendor Portal automatically adds your GitHub username to the collab repository and assigns it the Admin role. You receive an email with details about the collab repository when you are added.
1. Follow the collab repository link from the email that you receive to log in to your GitHub account and access the repository.
1. (Recommended) Manually remove any users in the collab repository that were previously added through GitHub.
:::note
If a team member adds a GitHub username to their Vendor Portal account that already exists in the collab repository, then the Vendor Portal does _not_ change the role that the existing user is assigned in the collab repository.
However, if the RBAC policy assigned to this member in the Vendor Portal later changes, or if the member is removed from the Vendor Portal team, then the Vendor Portal updates or removes the user in the collab repository accordingly.
:::
1. (Optional) In the Vendor Portal, go to the [Team](https://vendor.replicated.com/team/members) page. For each team member, click **Edit permissions** as necessary to specify their GitHub role in the collab repository.
For information about which policies to select, see [About GitHub Roles](#about-github-roles).
1. Instruct each Vendor Portal team member to add their GitHub username to the [Account Settings](https://vendor.replicated.com/account-settings) page in the Vendor Portal.
The Vendor Portal adds the username to the collab repository and assigns a GitHub role to the user based on their Vendor Portal policy.
Users receive an email when they are added to the collab repository.
## About GitHub Roles
When team members add a GitHub username to their Vendor Portal account, the Vendor Portal determines how to assign the user a default GitHub role in the collab repository based on the following criteria:
* If the GitHub username already exists in the collab repository
* The RBAC policy assigned to the member in the Vendor Portal
You can also update any custom RBAC policies in the Vendor Portal to change the default GitHub roles for those policies.
### Default Roles for Existing Users {#existing-username}
If a team member adds a GitHub username to their Vendor Portal account that already exists in the collab repository, then the Vendor Portal does _not_ change the role that the existing user is assigned in the collab repository.
However, if the RBAC policy assigned to this member in the Vendor Portal later changes, or if the member is removed from the Vendor Portal team, then the Vendor Portal updates or removes the user in the collab repository accordingly.
### Default Role Mapping {#role-mapping}
When team members add a GitHub username to their Vendor Portal account, the Vendor Portal assigns them to a GitHub role in the collab repository that corresponds to their Vendor Portal policy. For example, users with the default Read Only policy in the Vendor Portal are assigned the Read GitHub role in the collab repository.
For team members assigned custom RBAC policies in the Vendor Portal, you can edit the custom policy to change their GitHub role in the collab repository. For more information, see [About Changing the Default GitHub Role](#custom) below.
The table below describes how each default and custom Vendor Portal policy corresponds to a role in the collab repository in GitHub. For more information about each of the GitHub roles described in this table, see [Permissions for each role](https://docs.github.com/en/organizations/managing-user-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role) in the GitHub documentation.
| Vendor Portal Role | GitHub collab Role | Description |
|---|---|---|
| Admin | Admin | Members assigned the default Admin role in the Vendor Portal are assigned the GitHub Admin role in the collab repository. |
| Support Engineer | Triage | Members assigned the custom Support Engineer role in the Vendor Portal are assigned the GitHub Triage role in the collab repository. For information about creating a custom Support Engineer policy in the Vendor Portal, see Support Engineer in Configuring RBAC Policies. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
| Read Only | Read | Members assigned the default Read Only role in the Vendor Portal are assigned the GitHub Read role in the collab repository. |
| Sales | N/A | Users assigned the custom Sales role in the Vendor Portal do not have access to the collab repository. For information about creating a custom Sales policy in the Vendor Portal, see Sales in Configuring RBAC Policies. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
| Custom policies with `**/admin` under `allowed:` | Admin | By default, members assigned to a custom RBAC policy that specifies `**/admin` under `allowed:` are assigned the GitHub Admin role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
| Custom policies without `**/admin` under `allowed:` | Read | By default, members assigned to any custom RBAC policies that do not specify `**/admin` under `allowed:` are assigned the GitHub Read role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
[View a larger version of this image](/images/vendor-portal-account-settings.png)
1. In the **Two-Factor Authentication** pane, click **Turn on two-factor authentication**.
[View a larger version of this image](/images/vendor-portal-password-2fa.png)
1. In the **Confirm password** dialog, enter your Vendor Portal account password. Click **Confirm password**.
1. Scan the QR code that displays using a supported two-factor authentication application on your mobile device, such as Google Authenticator. Alternatively, click **Use this text code** in the Vendor Portal to generate an alphanumeric code that you enter in the mobile application.
[View a larger version of this image](/images/vendor-portal-scan-qr.png)
Your mobile application displays an authentication code.
1. Enter the authentication code in the Vendor Portal.
Two-factor authentication is enabled and a list of recovery codes is displayed at the bottom of the **Two-Factor Authentication** pane.
1. Save the recovery codes in a secure location. If you lose your mobile device, you can use each recovery code one time to log in.
1. Log out of your account, then log back in to test that two-factor authentication is enabled. You are prompted to enter a one-time code generated by the application on your mobile device.
## Disable 2FA on Individual Accounts
To disable two-factor authentication on your individual account:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Account Settings** from the dropdown list in the upper right corner of the screen.
[View a larger version of this image](/images/vendor-portal-account-settings.png)
1. In the **Two-Factor Authentication** pane, click **Turn off two-factor authentication**.
1. In the **Confirm password** dialog, enter your Vendor Portal account password. Click **Confirm password**.
## Enable or Disable 2FA for a Team
As an administrator, you can enable and disable 2FA for teams. You must first enable 2FA on your individual account before you can enable 2FA for teams. After you enable 2FA for your team, team members can enable 2FA on their individual accounts.
To enable or disable 2FA for a team:
1. In the [Vendor Portal](https://vendor.replicated.com), select the **Team** tab, then select **Multifactor Auth**.
[View a larger image](/images/team-2fa-auth.png)
1. On the **Multifactor Authentication** page, do one of the following with the **Require Two-Factor Authentication for all Username/Password authenticating users** toggle:
- Turn on the toggle to enable 2FA
- Turn off the toggle to disable 2FA
1. Click **Save changes**.
---
# Manage Team Members
This topic describes how to manage team members in the Replicated Vendor Portal, such as inviting and removing members, and editing permissions. For information about managing user access to the Replicated collab repository in GitHub, see [Manage Collab Repository Access](team-management-github-username).
## Viewing Team Members
The [Team](https://vendor.replicated.com/team/members) page provides a list of all accounts currently associated with or invited to your team. Each row contains information about the user, including their two-factor authentication (2FA) status and role-based access control (RBAC) role, and lets administrators take additional actions, such as remove, re-invite, and edit permissions.
[View a larger image](/images/teams-view.png)
All users, including those with the Read Only policy, can see the name of the RBAC role assigned to each team member. However, when SAML authentication is enabled, users with the built-in Read Only policy cannot see the RBAC roles assigned to team members.
## Invite Members
By default, team administrators can invite more team members to collaborate. Invited users receive an email to activate their account. The activation link in the email is unique to the invited user. Following the activation link in the email also ensures that the invited user joins the team from which the invitation originated.
:::note
Teams that have enforced SAML-only authentication do not use the email invitation flow described in this procedure. These teams and their users must log in through their SAML provider.
:::
To invite a new team member:
1. From the [Team Members](https://vendor.replicated.com/team/members) page, click **Invite team member**.
The Invite team member dialog opens.
[Invite team member dialog](/images/teams-invite-member.png)
1. Enter the email address of the member.
1. In the **Permissions** field, assign an RBAC policy from the dropdown list.
:::important
The RBAC policy that you specify also determines the level of access that the user has to the Replicated collab repository in GitHub. By default, the Read Only policy grants the user read access to the collab repository.
For more information about managing user access to the collab repository from the Vendor Portal, see [Manage Access to the Collab Repository](team-management-github-username).
:::
1. Click **Invite member**.
People invited to join your team receive an email notification to accept the invitation. They must follow the link in the email to accept the invitation and join the team. If they do not have a Replicated account already, they can create one that complies with your password policies, 2FA, and Google authentication requirements. If an invited user's email address is already associated with a Replicated account, by accepting your invitation, they automatically leave their current team and join the team that you have invited them to.
## Managing Invitations
Invitations expire after 7 days. If a prospective member has not accepted their invitation in this time frame, you can re-invite them without having to reenter their details. You can also remove the prospective member from the list.
You must be an administrator to perform this action.
To re-invite or remove a prospective member, do one of the following on the **Team Members** page:
* Click **Reinvite** from the row with the user's email address, and then click **Reinvite** in the confirmation dialog.
* Click **Remove** from the row with the user's email address, and then click **Delete Invitation** in the confirmation dialog.
## Edit Policy Permissions
You can edit the RBAC policy that is assigned to a member at any time.
:::important
The RBAC policy that you specify also determines the level of access that the user has to the Replicated collab repository in GitHub. By default, the Read Only policy grants the user read access to the collab repository.
For more information about managing user access to the collab repository from the Vendor Portal, see [Manage Access to the Collab Repository](team-management-github-username).
:::
To edit policy permissions for individual team members:
1. From the Team Members list, click **Edit permissions** next to a member's name.
:::note
The two-factor authentication (2FA) status displays on the **Team members** page, but it is not configured on this page. For more information about configuring 2FA, see [Manage Two-Factor Authentication](team-management-two-factor-auth).
:::
1. Select an RBAC policy from the **Permissions** dropdown list, and click **Save**. For information about configuring the RBAC policies that display in this list, see [Configure RBAC Policies](team-management-rbac-configuring).
## Enable Users to Auto-join Your Team
By default, users must be invited to your team. Team administrators can use the auto-join feature to allow users from the same email domain to join their team automatically. This applies to users registering with an email, or with Google authentication if it is enabled for the team. The auto-join feature does not apply to SAML authentication because SAML users log in using their SAML provider's application portal instead of the Vendor Portal.
To add, edit, or delete custom RBAC policies, see [Configure RBAC Policies](team-management-rbac-configuring).
To enable users to auto-join your team:
1. From the Team Members page, click **Auto-join** from the left navigation.
1. Enable the **Allow all users from my domain to be added to my team** toggle.
[View a larger image](/images/teams-auto-join.png)
1. For **Default RBAC policy level for new accounts**, you can use the default Read Only policy or select another policy from the list. This RBAC policy is applied to all users who join the team with the auto-join feature.
:::important
The RBAC policy that you specify also determines the level of access that the user has to the Replicated collab repository in GitHub. By default, the Read Only policy grants the user read access to the collab repository.
For more information about managing user access to the collab repository from the Vendor Portal, see [Manage Access to the Collab Repository](team-management-github-username).
:::
## Remove Members and End Sessions
As a Vendor Portal team admin, you can remove team members, except for the account you are currently logged in with.
If the team member that you remove added their GitHub username to their Account Settings page in the Vendor Portal to access the Replicated collab repository, then the Vendor Portal also automatically removes their username from the collab repository. For more information, see [Manage Access to the Collab Repository](team-management-github-username).
SAML-created users must be removed using this method to expire their existing sessions because Replicated does not support System for Cross-domain Identity Management (SCIM).
To remove a member:
1. From the Team Members page, click **Remove** on the right side of a user's row.
1. Click **Remove** in the confirmation dialog.
The member is removed. All of their current user sessions are deleted and their next attempt at communicating with the server logs them out of their browser's session.
If the member added their GitHub username to the Vendor Portal to access the collab repository, then the Vendor Portal also removes their GitHub username from the collab repository. For more information, see [Manage Collab Repository Access](team-management-github-username).
For Google-authenticated users, if the user's Google account is suspended or deleted, Replicated logs that user out of all Google-authenticated Vendor Portal sessions within 10 minutes. The user remains in the team list, but they cannot log in to the Vendor Portal unless username and password authentication is allowed for the team.
## Update Email Addresses
:::important
Changing team member email addresses has security implications. Replicated advises that you avoid changing team member email addresses if possible.
:::
Updating the email address for a team member requires creating a new account with the updated email address, and then deactivating the previous account.
To update the email address for a team member:
1. From the Team Members page, click **Invite team member**.
1. Assign the required RBAC policies to the new user.
1. Deactivate the previous team member account.
---
# Collect Telemetry for Air Gap Instances
This topic describes how to collect telemetry for instances in air gap environments.
## Overview
Air gap instances run in environments without outbound internet access. This limitation prevents these instances from periodically sending telemetry to the Replicated Vendor Portal through the Replicated SDK or Replicated KOTS. For more information about how the Vendor Portal collects telemetry from online (internet-connected) instances, see [About Instance and Event Data](/vendor/instance-insights-event-data#about-reporting).
For air gap instances, Replicated KOTS and the Replicated SDK collect and store instance telemetry in a Kubernetes Secret in the customer environment. The Replicated SDK also stores any custom metrics within its Secret.
The telemetry and custom metrics stored in the Secret are collected when a support bundle is generated in the environment. When the support bundle is uploaded to the Vendor Portal, the telemetry and custom metrics are associated with the correct customer and instance ID, and the Vendor Portal updates the instance insights and event data accordingly.
The following diagram demonstrates how air gap telemetry is collected and stored by the Replicated SDK in a customer environment, and then shared to the Vendor Portal in a support bundle:
[View a larger version of this image](/images/airgap-telemetry.png)
All support bundles uploaded to the Vendor Portal from air gap customers contribute to a comprehensive dataset, providing parity in the telemetry for air gap and online instances. Replicated recommends that you collect support bundles from air gap customers regularly (monthly or quarterly) to improve the completeness of the dataset. The Vendor Portal handles any overlapping event archives idempotently, ensuring data integrity.
## Requirements
Air gap telemetry has the following requirements:
* To collect telemetry from air gap instances, one of the following must be installed in the cluster where the instance is running:
* The Replicated SDK installed in air gap mode. See [Install the SDK in Air Gap Environments](/vendor/replicated-sdk-airgap).
* KOTS v1.92.1 or later
:::note
When both the Replicated SDK and KOTS v1.92.1 or later are installed in the cluster (such as when a Helm chart that includes the SDK is installed by KOTS), both collect and store instance telemetry in their own dedicated secret, subject to the size limitation noted below. In the case of any overlapping data points, the Vendor Portal will report these data points chronologically based on their timestamp.
:::
* To collect custom metrics from air gap instances, the Replicated SDK must be installed in the cluster in air gap mode. See [Install the SDK in Air Gap Environments](/vendor/replicated-sdk-airgap).
For more information about custom metrics, see [Configure Custom Metrics](https://docs.replicated.com/vendor/custom-metrics).
Replicated strongly recommends that all applications include the Replicated SDK because it enables access to both standard instance telemetry and custom metrics for air gap instances.
## Limitation
Telemetry data is capped at 4,000 events or 1 MB per Secret, whichever limit is reached first.
When a limit is reached, the oldest events are purged until the payload is within the limit. For optimal use, consider collecting support bundles regularly (monthly or quarterly) from air gap customers.
## Collect and View Air Gap Telemetry
To collect telemetry from air gap instances:
1. Ask your customer to collect a support bundle. See [Generate Support Bundles](/vendor/support-bundle-generating).
1. After receiving the support bundle from your customer, go to the Vendor Portal **Customers**, **Customer Reporting**, or **Instance Details** page and upload the support bundle:

The telemetry collected from the support bundle appears in the instance data shortly. Allow a few minutes for all data to be processed.
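For reference, the following is a minimal sketch of one way a customer might generate a support bundle from the command line. It assumes the open source Troubleshoot `support-bundle` kubectl plugin is installed (for example, through krew) and that support bundle specs are available in the cluster; the exact command your customers should run is covered in [Generate Support Bundles](/vendor/support-bundle-generating).
```bash
# Install the support-bundle plugin (one time), assuming krew is available
kubectl krew install support-bundle

# Generate a support bundle using the specs discovered in the cluster.
# The resulting .tar.gz archive is what gets uploaded to the Vendor Portal.
kubectl support-bundle --load-cluster-specs
```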
---
# About Compatibility Matrix
This topic describes Replicated Compatibility Matrix, including use cases, billing, limitations, and more.
## Overview
The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. The SDK can be installed alongside applications packaged as Helm charts or Kubernetes manifests. The SDK can be installed using the Helm CLI or KOTS.
For information about how to distribute and install the SDK with your application, see [Install the Replicated SDK](/vendor/replicated-sdk-installing).
Replicated recommends that the SDK is distributed with all applications because it provides access to key Replicated functionality, such as:
* Automatic access to insights and operational telemetry for instances running in customer environments, including granular details about the status of different application resources. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
* An in-cluster API that you can use to embed Replicated features into your application, including:
* Collect custom metrics on instances running in online or air gap environments. See [Configure Custom Metrics](/vendor/custom-metrics).
* Check customer license entitlements at runtime. See [Query Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk) and [Verify License Field Signatures with the Replicated SDK API](/vendor/licenses-verify-fields-sdk-api).
* Provide update checks to alert customers when new versions of your application are available for upgrade. See [Support Update Checks in Your Application](/reference/replicated-sdk-apis#support-update-checks-in-your-application) in _Replicated SDK API_.
* Programmatically name or tag instances from the instance itself. See [Programatically Set Tags](/reference/replicated-sdk-apis#post-appinstance-tags).
You can use Compatibility Matrix with the Replicated CLI or the Replicated Vendor Portal. For more information about how to use Compatibility Matrix, see [Use Compatibility Matrix](testing-how-to).
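For example, the following is a minimal sketch of creating and listing a cluster with the Replicated CLI. The distribution, version, and TTL values are illustrative; see [Use Compatibility Matrix](testing-how-to) for complete workflows.
```bash
# Create a single-node kind cluster that is automatically deleted after one hour
replicated cluster create --distribution kind --version 1.32.3 --ttl 1h

# View the cluster ID and status
replicated cluster ls
```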
### Supported Clusters
Compatibility Matrix can create clusters on virtual machines (VMs), such as kind, k3s, RKE2, and Red Hat OpenShift OKD, and also create cloud-managed clusters, such as EKS, GKE, and AKS:
* Cloud-based Kubernetes distributions run in a cloud account that Replicated manages and controls, which lets clusters be delivered quickly and reliably. The Replicated account keeps control planes ready and adds a node group when you request one, making the cluster available much faster than if you created your own cluster in your own cloud account.
* VMs run on Replicated bare metal servers located in several data centers, including data centers physically in the European Union.
To view an up-to-date list of the available cluster distributions, including the supported Kubernetes versions, instance types, and maximum nodes for each distribution, run [`replicated cluster versions`](/reference/replicated-cli-cluster-versions).
For detailed information about the available cluster distributions, see [Supported Compatibility Matrix Cluster Types](testing-supported-clusters).
### Billing and Credits
Clusters created with Compatibility Matrix are billed by the minute. Per-minute billing begins when the cluster reaches a `running` status and ends when the cluster is deleted. Compatibility Matrix marks a cluster as `running` when a working kubeconfig for the cluster is accessible.
You are billed only for the time that the cluster is in a `running` status. You are _not_ billed for the time that it takes Compatibility Matrix to create and tear down clusters, including when the cluster is in an `assigned` status.
For more information about pricing, see [Compatibility Matrix Pricing](testing-pricing).
To create clusters with Compatibility Matrix, you must have credits in your Vendor Portal account.
If you have a contract, you can purchase credits by logging in to the Vendor Portal and going to [**Compatibility Matrix > Buy additional credits**](https://vendor.replicated.com/compatibility-matrix).
Otherwise, to request credits, log in to the Vendor Portal and go to [**Compatibility Matrix > Request more credits**](https://vendor.replicated.com/compatibility-matrix).
### Quotas and Capacity
By default, Compatibility Matrix sets quotas for the capacity that can be used concurrently by each Vendor Portal team. These quotas are designed to ensure that Replicated maintains a minimum amount of capacity for provisioning both VM and cloud-based clusters.
By default, the quota for cloud-based cluster distributions (AKS, GKE, EKS) is three clusters running concurrently.
VM-based cluster distributions (such as kind, OpenShift, and Replicated Embedded Cluster) have the following default quotas:
* 32 vCPUs
* 128 GiB memory
* 800 GiB disk size
You can request increased quotas at any time at no additional cost. To view your team's current quota and capacity usage, or to request a quota increase, go to [**Compatibility Matrix > Settings**](https://vendor.replicated.com/compatibility-matrix/settings) in the Vendor Portal:

[View a larger version of this image](/images/compatibility-matrix-settings.png)
### Cluster Status
Clusters created with Compatibility Matrix can have the following statuses:
* `assigned`: The cluster resources were requested and Compatibility Matrix is provisioning the cluster. You are not billed for the time that a cluster spends in the `assigned` status.
* `running`: A working kubeconfig for the cluster is accessible. Billing begins when the cluster reaches a `running` status.
Additionally, clusters are verified prior to transitioning to a `running` status. Verification includes checking that the cluster is healthy and running with the correct number of nodes, as well as passing [sonobuoy](https://sonobuoy.io/) tests in `--quick` mode.
* `terminated`: The cluster is deleted. Billing ends when the cluster status is changed from `running` to `terminated`.
* `error`: An error occurred when attempting to provision the cluster.
You can view the status of clusters using the `replicated cluster ls` command. For more information, see [cluster ls](/reference/replicated-cli-cluster-ls).
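For example, a CI script might poll for the `running` status before starting tests. The following is a minimal sketch that assumes a cluster ID of `1e616c55` and relies only on the `replicated cluster ls` output described above.
```bash
# Wait until the cluster reports a `running` status (cluster ID is illustrative)
CLUSTER_ID=1e616c55

until replicated cluster ls | grep "$CLUSTER_ID" | grep -q running; do
  echo "Waiting for cluster $CLUSTER_ID to reach running status..."
  sleep 15
done
echo "Cluster $CLUSTER_ID is running"
```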
### Cluster Add-ons
Replicated Compatibility Matrix enables you to extend your clusters with add-ons for use by your application, such as an AWS S3 object store.
This makes it easier to provision dependencies that your application requires.
For more information about how to use the add-ons, see [Compatibility Matrix Cluster Add-ons](testing-cluster-addons).
## Limitations
Compatibility Matrix has the following limitations:
- Clusters cannot be resized. Create another cluster if you want to make changes, such as adding another node.
- Clusters cannot be rebooted. Create another cluster if you need to reset/reboot the cluster.
- On cloud clusters, node groups are not available for every distribution. For distribution-specific details, see [Supported Compatibility Matrix Cluster Types](/vendor/testing-supported-clusters).
- Multi-node support is not available for every distribution. For distribution-specific details, see [Supported Compatibility Matrix Cluster Types](/vendor/testing-supported-clusters).
- ARM instance types are only supported on Cloud Clusters. For distribution-specific details, see [Supported Compatibility Matrix Cluster Types](/vendor/testing-supported-clusters).
- GPU instance types are only supported on Cloud Clusters. For distribution-specific details, see [Supported Compatibility Matrix Cluster Types](/vendor/testing-supported-clusters).
- There is no support for IPv6 as a single stack. Dual stack support is available on kind clusters.
- There is no support for air gap testing.
- The `cluster upgrade` feature is available only for kURL distributions. See [cluster upgrade](/reference/replicated-cli-cluster-upgrade).
- Cloud clusters do not allow for the configuration of CNI, CSI, CRI, Ingress, or other plugins, add-ons, services, and interfaces.
- The node operating systems for clusters created with Compatibility Matrix cannot be configured nor replaced with different operating systems.
- The Kubernetes scheduler for clusters created with Compatibility Matrix cannot be replaced with a different scheduler.
- Each team has a quota limit on the amount of resources that can be used simultaneously. This limit can be raised by messaging your account representative.
- Team actions with Compatibility Matrix (for example, creating and deleting clusters and requesting quota increases) are not logged and displayed in the [Vendor Team Audit Log](https://vendor.replicated.com/team/audit-log).
For additional distribution-specific limitations, see [Supported Compatibility Matrix Cluster Types](testing-supported-clusters).
---
# Compatibility Matrix Cluster Add-ons (Alpha)
This topic describes the supported cluster add-ons for Replicated Compatibility Matrix.
## Overview
Replicated Compatibility Matrix enables you to extend your clusters with add-ons for use by your application, such as an AWS S3 object store.
This makes it easier to provision dependencies that your application requires.
## CLI
The Replicated CLI can be used to [create](/reference/replicated-cli-cluster-addon-create), [manage](/reference/replicated-cli-cluster-addon-ls) and [remove](/reference/replicated-cli-cluster-addon-rm) cluster add-ons.
## Supported Add-ons
This section lists the supported cluster add-ons for clusters created with Compatibility Matrix.
### object-store (Alpha)
The Replicated cluster object store add-on can be used to create S3-compatible object store buckets for clusters (currently only AWS S3 is supported for EKS clusters).
Assuming you already have a cluster, run the following command with the cluster ID to create an object store bucket:
```bash
$ replicated cluster addon create object-store 4d2f7e70 --bucket-prefix mybucket
05929b24 Object Store pending {"bucket_prefix":"mybucket"}
$ replicated cluster addon ls 4d2f7e70
ID TYPE STATUS DATA
05929b24 Object Store ready {"bucket_prefix":"mybucket","bucket_name":"mybucket-05929b24-cmx","service_account_namespace":"cmx","service_account_name":"mybucket-05929b24-cmx","service_account_name_read_only":"mybucket-05929b24-cmx-ro"}
```
This creates two service accounts in a namespace: one with read-write access and one with read-only access to the object store bucket.
Additional service accounts can be created in any namespace with access to the object store by annotating the new service account with the same `eks.amazonaws.com/role-arn` annotation found in the predefined ones (`service_account_name` and `service_account_name_read_only`).
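For example, the following is a minimal sketch of granting a hypothetical `my-app` service account in a hypothetical `my-app-namespace` the same access, using the service account names from the example `addon ls` output above.
```bash
# Inspect the predefined read-write service account to find its
# eks.amazonaws.com/role-arn annotation (names match the example output above)
kubectl get serviceaccount mybucket-05929b24-cmx --namespace cmx -o yaml

# Create a service account in your application's namespace and copy the same
# annotation onto it so that it has the same access to the bucket
kubectl create serviceaccount my-app --namespace my-app-namespace
kubectl annotate serviceaccount my-app --namespace my-app-namespace \
  eks.amazonaws.com/role-arn="<role ARN copied from the predefined service account>"
```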
| Type | Description |
|---|---|
| Supported Kubernetes Distributions | EKS (AWS S3) |
| Cost | Flat fee of $0.50 per bucket. |
| Options | |
| Data | |
[View a larger version of this image](/images/create-a-cluster.png)
1. On the **Create a cluster** page, complete the following fields:
| Field | Description |
|---|---|
| Kubernetes distribution | Select the Kubernetes distribution for the cluster. |
| Version | Select the Kubernetes version for the cluster. The options available are specific to the distribution selected. |
| Name (optional) | Enter an optional name for the cluster. |
| Tags | Add one or more tags to the cluster as key-value pairs. |
| Set TTL | Select the Time to Live (TTL) for the cluster. When the TTL expires, the cluster is automatically deleted. TTL can be adjusted after cluster creation with [cluster update ttl](/reference/replicated-cli-cluster-update-ttl). |
| Instance type | Select the instance type to use for the nodes in the node group. The options available are specific to the distribution selected. |
| Disk size | Select the disk size in GiB to use per node. |
| Nodes | Select the number of nodes to provision in the node group. The options available are specific to the distribution selected. |
[View a larger version of this image](/images/cmx-assigned-cluster.png)
### Prepare Clusters
For applications distributed with the Replicated Vendor Portal, the [`cluster prepare`](/reference/replicated-cli-cluster-prepare) command reduces the number of steps required to provision a cluster and then deploy a release to the cluster for testing. This is useful in continuous integration (CI) workflows that run multiple times a day. For an example workflow that uses the `cluster prepare` command, see [Recommended CI/CD Workflows](/vendor/ci-workflows).
The `cluster prepare` command does the following:
* Creates a cluster
* Creates a release for your application based on either a Helm chart archive or a directory containing the application YAML files
* Creates a temporary customer of type `test`
:::note
Test customers created by the `cluster prepare` command are not saved in your Vendor Portal team.
:::
* Installs the release in the cluster using either the Helm CLI or Replicated KOTS
The `cluster prepare` command requires either a Helm chart archive or a directory containing the application YAML files to be installed:
* **Install a Helm chart with the Helm CLI**:
```bash
replicated cluster prepare \
--distribution K8S_DISTRO \
--version K8S_VERSION \
--chart HELM_CHART_TGZ
```
The following example creates a kind cluster and installs a Helm chart in the cluster using the `nginx-chart-0.0.14.tgz` chart archive:
```bash
replicated cluster prepare \
--distribution kind \
--version 1.27.0 \
--chart nginx-chart-0.0.14.tgz \
--set key1=val1,key2=val2 \
--set-string s1=val1,s2=val2 \
--set-json j1='{"key1":"val1","key2":"val2"}' \
--set-literal l1=val1,l2=val2 \
--values values.yaml
```
* **Install with KOTS from a YAML directory**:
```bash
replicated cluster prepare \
--distribution K8S_DISTRO \
--version K8S_VERSION \
--yaml-dir PATH_TO_YAML_DIR
```
The following example creates a k3s cluster and installs an application in the cluster using the manifest files in a local directory named `config-validation`:
```bash
replicated cluster prepare \
--distribution k3s \
--version 1.26 \
--namespace config-validation \
--shared-password password \
--app-ready-timeout 10m \
--yaml-dir config-validation \
--config-values-file config-values.yaml \
--entitlements "num_of_queues=5"
```
For command usage, including additional options, see [cluster prepare](/reference/replicated-cli-cluster-prepare).
### Access Clusters
Compatibility Matrix provides the kubeconfig for clusters so that you can access clusters with the kubectl command line tool. For more information, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.
To access a cluster from the command line:
1. Verify that the cluster is in a `running` state:
```bash
replicated cluster ls
```
In the output of the command, verify that the `STATUS` for the target cluster is `running`. For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
1. Run the following command to open a new shell session with the kubeconfig configured for the cluster:
```bash
replicated cluster shell CLUSTER_ID
```
Where `CLUSTER_ID` is the unique ID for the running cluster that you want to access.
For command usage, see [cluster shell](/reference/replicated-cli-cluster-shell).
1. Verify that you can interact with the cluster through kubectl by running a command. For example:
```bash
kubectl get ns
```
1. Press Ctrl-D or type `exit` when done to end the shell and the connection to the server.
### Upgrade Clusters (kURL Only)
For kURL clusters provisioned with Compatibility Matrix, you can use the `cluster upgrade` command to upgrade the version of the kURL installer specification used to provision the cluster. A recommended use case for the `cluster upgrade` command is testing your application's compatibility with Kubernetes API resource version migrations after an upgrade.
The following example upgrades a kURL cluster from its previous version to version `9d5a44c`:
```bash
replicated cluster upgrade cabb74d5 --version 9d5a44c
```
For command usage, see [cluster upgrade](/reference/replicated-cli-cluster-upgrade).
### Delete Clusters
You can delete clusters using the Replicated CLI or the Vendor Portal.
#### Replicated CLI
To delete a cluster using the Replicated CLI:
1. Get the ID of the target cluster:
```
replicated cluster ls
```
In the output of the command, copy the ID for the cluster.
**Example:**
```
ID NAME DISTRIBUTION VERSION STATUS CREATED EXPIRES
1234abc My Test Cluster eks 1.27 running 2023-10-09 17:08:01 +0000 UTC -
```
For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
1. Run the following command:
```
replicated cluster rm CLUSTER_ID
```
Where `CLUSTER_ID` is the ID of the target cluster.
For command usage, see [cluster rm](/reference/replicated-cli-cluster-rm).
1. Confirm that the cluster was deleted:
```
replicated cluster ls CLUSTER_ID --show-terminated
```
Where `CLUSTER_ID` is the ID of the target cluster.
In the output of the command, you can see that the `STATUS` of the cluster is `terminated`. For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
#### Vendor Portal
To delete a cluster using the Vendor Portal:
1. Go to **Compatibility Matrix**.
1. Under **Clusters**, in the vertical dots menu for the target cluster, click **Delete cluster**.
[View a larger version of this image](/images/cmx-delete-cluster.png)
## About Using Compatibility Matrix with CI/CD
Replicated recommends that you integrate Compatibility Matrix into your existing CI/CD workflow to automate the process of creating clusters to install your application and run tests. For more information, including additional best practices and recommendations for CI/CD, see [About Integrating with CI/CD](/vendor/ci-overview).
### Replicated GitHub Actions
Replicated maintains a set of custom GitHub actions that are designed to replace repetitive tasks related to using Compatibility Matrix and distributing applications with Replicated.
If you use GitHub Actions as your CI/CD platform, you can include these custom actions in your workflows rather than using Replicated CLI commands. Integrating the Replicated GitHub actions into your CI/CD pipeline helps you quickly build workflows with the required inputs and outputs, without needing to manually create the required CLI commands for each step.
To view all the available GitHub actions that Replicated maintains, see the [replicatedhq/replicated-actions](https://github.com/replicatedhq/replicated-actions/) repository in GitHub.
For more information, see [Use Replicated GitHub Actions in CI/CD](/vendor/ci-workflows-github-actions).
### Recommended Workflows
Replicated recommends that you maintain unique CI/CD workflows for development (continuous integration) and for releasing your software (continuous delivery). For example development and release workflows that integrate Compatibility Matrix for testing, see [Recommended CI/CD Workflows](/vendor/ci-workflows).
### Test Script Recommendations
Incorporating code tests into your CI/CD workflows is important for ensuring that developers receive quick feedback and can make updates in small iterations. Replicated recommends that you create and run all of the following test types as part of your CI/CD workflows:
* **Application Testing:** Traditional application testing includes unit, integration, and end-to-end tests. These tests are critical for application reliability, and Compatibility Matrix is designed to incorporate and run your application tests.
* **Performance Testing:** Performance testing is used to benchmark your application to ensure it can handle the expected load and scale gracefully. Test your application under a range of workloads and scenarios to identify any bottlenecks or performance issues. Make sure to optimize your application for different Kubernetes distributions and configurations by creating all of the environments you need to test in.
* **Smoke Testing:** Using a single, conformant Kubernetes distribution to test basic functionality of your application with default (or standard) configuration values is a quick way to get feedback if something is likely to be broken for all or most customers. Replicated also recommends that you include each Kubernetes version that you intend to support in your smoke tests.
* **Compatibility Testing:** Because applications run on various Kubernetes distributions and configurations, it is important to test compatibility across different environments. Compatibility Matrix provides this infrastructure.
* **Canary Testing:** Before releasing to all customers, consider deploying your application to a small subset of your customer base as a _canary_ release. This lets you monitor the application's performance and stability in real-world environments, while minimizing the impact of potential issues. Compatibility Matrix enables canary testing by simulating exact (or near) customer environments and configurations to test your application with.
---
# Access Your Application
This topic describes the networking options for accessing applications deployed on clusters created with Replicated Compatibility Matrix. It also describes how to use and manage Compatibility Matrix tunnels.
## Networking Options
After deploying your application into Compatibility Matrix clusters, you will want to execute your tests using your own test runner.
To do this, you need to access your application.
Compatibility Matrix offers several methods for accessing your application.
Some standard Kubernetes networking options are available, but they vary based on the distribution.
For VM-based distributions, there is no default network route into the cluster, making inbound connections challenging to create.
### Port Forwarding
Port forwarding is a low-cost and portable mechanism to access your application.
Port forwarding works on all clusters supported by Compatibility Matrix because the connection is initiated from the client, over the Kubernetes API server port.
If you have a single service or pod and are not worried about complex routing, this is a good mechanism.
The basic steps are to connect the port-forward, execute your tests against localhost, and then shut down the port-forward.
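For example, a minimal sketch of that flow, assuming a hypothetical `my-app` service listening on port 80:
```bash
# Forward a local port to the service in the cluster
kubectl port-forward svc/my-app 8080:80 &
PF_PID=$!

# Run tests against the forwarded port
curl -fsS http://localhost:8080/

# Shut down the port-forward when the tests finish
kill $PF_PID
```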
### LoadBalancer
If your application is only running on cloud services (EKS, GKE, AKS) you can create a service of type `LoadBalancer`.
This will provision the cloud-provider specific load balancer.
The `LoadBalancer` service will be filled by the in-tree Kubernetes functionality that's integrated with the underlying cloud provider.
You can then query the service definition using `kubectl` and connect to and execute your tests over the `LoadBalancer` IP address.
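As a minimal sketch, assuming a hypothetical `my-app` deployment listening on port 8080:
```bash
# Create a LoadBalancer service for the deployment
kubectl expose deployment my-app --name my-app-lb --type LoadBalancer --port 80 --target-port 8080

# Wait for the cloud provider to assign an address, then read it from the service
kubectl get service my-app-lb -o wide
```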
### Ingress
Ingress is a good way to recreate customer-representative environments, but the problem remains of how to get inbound access to the IP address that the ingress controller allocates.
Ingress is also not perfectly portable; each ingress controller might require different annotations in the ingress resource to work properly.
Supported ingress controllers vary based on the distribution.
Compatibility Matrix supports ingress controllers that are running as a `NodePort` service.
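For example, the following is a minimal sketch of installing the upstream ingress-nginx controller as a `NodePort` service with a deterministic HTTP node port (32456 matches the tunnel example later in this topic); your chart and values may differ.
```bash
# Install ingress-nginx as a NodePort service with a fixed HTTP node port
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32456
```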
### Compatibility Matrix Tunnels
All VM-based Compatibility Matrix clusters support tunneling traffic into a `NodePort` service.
When this option is used, Replicated is responsible for creating the DNS record and TLS certs.
Replicated will route traffic from `:443` and/or `:80` into the `NodePort` service you defined. For more information about using tunnels, see [Managing Compatibility Matrix Tunnels](#manage-nodes) below.
The following diagram shows how the traffic is routed into the service using Compatibility Matrix tunnels:
[View a larger version of this image](/images/compatibility-matrix-ingress.png)
## Managing Compatibility Matrix Tunnels {#manage-nodes}
Tunnels are viewed, created, and removed using the Compatibility Matrix UI within the Vendor Portal, the Replicated CLI, GitHub Actions, or directly with the Vendor API v3. There is no limit to the number of tunnels you can create for a cluster, and multiple tunnels can connect to a single service, if desired.
### Limitations
Compatibility Matrix tunnels have the following limitations:
* One tunnel can only connect to one service. If you need fanout routing into different services, consider installing the nginx ingress controller as a `NodePort` service and exposing it.
* Tunnels are not supported for cloud distributions (EKS, GKE, AKS).
### Supported Protocols
A tunnel can support one or more protocols.
The supported protocols are HTTP, HTTPS, WS, and WSS.
gRPC and other protocols are not routed into the cluster.
### Exposing Ports
Once you have a node port available on the cluster, you can use the Replicated CLI to expose the node port to the public internet.
This can be used multiple times on a single cluster.
Optionally, you can specify the `--wildcard` flag to expose this port with a wildcard DNS entry and TLS certificate.
This feature adds extra time to provision the port, so it should only be used if necessary.
```bash
replicated cluster port expose \
[cluster id] \
--port [node port] \
--protocol [protocol] \
--wildcard
```
For example, if you have the nginx ingress controller installed and the node port is 32456:
```bash
% replicated cluster ls
ID NAME DISTRIBUTION VERSION STATUS
1e616c55 tender_ishizaka k3s 1.29.2 running
% replicated cluster port expose \
1e616c55 \
--port 32456 \
--protocol http \
--protocol https \
--wildcard
```
:::note
You can expose a node port that does not yet exist in the cluster.
This is useful if you have a deterministic node port, but need the DNS name as a value in your Helm chart.
:::
### Viewing Ports
To view all exposed ports, use the Replicated CLI `port ls` subcommand with the cluster ID:
```bash
% replicated cluster port ls 1e616c55
ID CLUSTER PORT PROTOCOL EXPOSED PORT WILDCARD STATUS
d079b2fc 32456 http http://happy-germain.ingress.replicatedcluster.com true ready
d079b2fc 32456 https https://happy-germain.ingress.replicatedcluster.com true ready
```
### Removing Ports
Exposed ports are automatically deleted when a cluster terminates.
If you want to remove a port (and the associated DNS records and TLS certs) prior to cluster termination, run the `port rm` subcommand with the cluster ID:
```bash
% replicated cluster port rm 1e616c55 --id d079b2fc
```
You can remove just one protocol, or all.
Removing all protocols also removes the DNS record and TLS cert.
---
# Compatibility Matrix Pricing
This topic describes the pricing for Replicated Compatibility Matrix.
## Pricing Overview
Compatibility Matrix usage-based pricing includes a $0.50 per-cluster startup cost, plus per-minute pricing based on instance size and count (starting when the cluster state changes to `running` and ending when the cluster expires (TTL) or is removed). Minutes are rounded up, so the minimum charge for any running cluster is $0.50 plus one minute. Each cluster's cost is rounded up to the nearest cent and subtracted from the available credits in the team account.
Remaining credit balance is viewable on the Replicated Vendor Portal [Cluster History](https://vendor.replicated.com/compatibility-matrix/history) page or with the Vendor API v3 [/vendor/v3/cluster/stats](https://replicated-vendor-api.readme.io/reference/getclusterstats) endpoint. Cluster [add-ons](/vendor/testing-cluster-addons) may incur additional charges.
If the team's available credits are insufficient to run the cluster for the full duration of the TTL, the cluster creation will be rejected.
Team members assigned the Admin role receive a warning email when the remaining credit balance falls below the amount of credits consumed by your team in the previous seven days.
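For example, a single r1.small cluster (billed at $0.096 per hour, or $0.0016 per minute, per the VM pricing table below) that runs for 45 minutes costs $0.50 + (45 × $0.0016) = $0.572, which is rounded up to $0.58 and deducted from the team's credits.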
## Cluster Quotas
Each team is limited by the number of clusters that they can run concurrently. To increase the quota, reach out to your account manager.
## VM Cluster Pricing (OpenShift, RKE2, k3s, kind, Embedded Cluster, kURL)
VM-based clusters approximately match the AWS m6i instance type pricing.
| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
|---|---|---|---|
| r1.small | 2 | 8 | $0.096 |
| r1.medium | 4 | 16 | $0.192 |
| r1.large | 8 | 32 | $0.384 |
| r1.xlarge | 16 | 64 | $0.768 |
| r1.2xlarge | 32 | 128 | $1.536 |
| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
|---|---|---|---|
| m6i.large | 2 | 8 | $0.115 |
| m6i.xlarge | 4 | 16 | $0.230 |
| m6i.2xlarge | 8 | 32 | $0.461 |
| m6i.4xlarge | 16 | 64 | $0.922 |
| m6i.8xlarge | 32 | 128 | $1.843 |
| m7i.large | 2 | 8 | $0.121 |
| m7i.xlarge | 4 | 16 | $0.242 |
| m7i.2xlarge | 8 | 32 | $0.484 |
| m7i.4xlarge | 16 | 64 | $0.968 |
| m7i.8xlarge | 32 | 128 | $1.935 |
| m5.large | 2 | 8 | $0.115 |
| m5.xlarge | 4 | 16 | $0.230 |
| m5.2xlarge | 8 | 32 | $0.461 |
| m5.4xlarge | 16 | 64 | $0.922 |
| m5.8xlarge | 32 | 128 | $1.843 |
| m7g.large | 2 | 8 | $0.098 |
| m7g.xlarge | 4 | 16 | $0.195 |
| m7g.2xlarge | 8 | 32 | $0.392 |
| m7g.4xlarge | 16 | 64 | $0.784 |
| m7g.8xlarge | 32 | 128 | $1.567 |
| c5.large | 2 | 4 | $0.102 |
| c5.xlarge | 4 | 8 | $0.204 |
| c5.2xlarge | 8 | 16 | $0.408 |
| c5.4xlarge | 16 | 32 | $0.816 |
| c5.9xlarge | 36 | 72 | $1.836 |
| g4dn.xlarge | 4 | 16 | $0.631 |
| g4dn.2xlarge | 8 | 32 | $0.902 |
| g4dn.4xlarge | 16 | 64 | $1.445 |
| g4dn.8xlarge | 32 | 128 | $2.611 |
| g4dn.12xlarge | 48 | 192 | $4.964 |
| g4dn.16xlarge | 64 | 256 | $5.222 |
| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
|---|---|---|---|
| n2-standard-2 | 2 | 8 | $0.117 |
| n2-standard-4 | 4 | 16 | $0.233 |
| n2-standard-8 | 8 | 32 | $0.466 |
| n2-standard-16 | 16 | 64 | $0.932 |
| n2-standard-32 | 32 | 128 | $1.865 |
| t2a-standard-2 | 2 | 8 | $0.092 |
| t2a-standard-4 | 4 | 16 | $0.185 |
| t2a-standard-8 | 8 | 32 | $0.370 |
| t2a-standard-16 | 16 | 64 | $0.739 |
| t2a-standard-32 | 32 | 128 | $1.478 |
| t2a-standard-48 | 48 | 192 | $2.218 |
| e2-standard-2 | 2 | 8 | $0.081 |
| e2-standard-4 | 4 | 16 | $0.161 |
| e2-standard-8 | 8 | 32 | $0.322 |
| e2-standard-16 | 16 | 64 | $0.643 |
| e2-standard-32 | 32 | 128 | $1.287 |
| n1-standard-1+nvidia-tesla-t4+1 | 1 | 3.75 | $0.321 |
| n1-standard-1+nvidia-tesla-t4+2 | 1 | 3.75 | $0.585 |
| n1-standard-1+nvidia-tesla-t4+4 | 1 | 3.75 | $1.113 |
| n1-standard-2+nvidia-tesla-t4+1 | 2 | 7.50 | $0.378 |
| n1-standard-2+nvidia-tesla-t4+2 | 2 | 7.50 | $0.642 |
| n1-standard-2+nvidia-tesla-t4+4 | 2 | 7.50 | $1.170 |
| n1-standard-4+nvidia-tesla-t4+1 | 4 | 15 | $0.492 |
| n1-standard-4+nvidia-tesla-t4+2 | 4 | 15 | $0.756 |
| n1-standard-4+nvidia-tesla-t4+4 | 4 | 15 | $1.284 |
| n1-standard-8+nvidia-tesla-t4+1 | 8 | 30 | $0.720 |
| n1-standard-8+nvidia-tesla-t4+2 | 8 | 30 | $0.984 |
| n1-standard-8+nvidia-tesla-t4+4 | 8 | 30 | $1.512 |
| n1-standard-16+nvidia-tesla-t4+1 | 16 | 60 | $1.176 |
| n1-standard-16+nvidia-tesla-t4+2 | 16 | 60 | $1.440 |
| n1-standard-16+nvidia-tesla-t4+4 | 16 | 60 | $1.968 |
| n1-standard-32+nvidia-tesla-t4+1 | 32 | 120 | $2.088 |
| n1-standard-32+nvidia-tesla-t4+2 | 32 | 120 | $2.352 |
| n1-standard-32+nvidia-tesla-t4+4 | 32 | 120 | $2.880 |
| n1-standard-64+nvidia-tesla-t4+1 | 64 | 240 | $3.912 |
| n1-standard-64+nvidia-tesla-t4+2 | 64 | 240 | $4.176 |
| n1-standard-64+nvidia-tesla-t4+4 | 64 | 240 | $4.704 |
| n1-standard-96+nvidia-tesla-t4+1 | 96 | 360 | $5.736 |
| n1-standard-96+nvidia-tesla-t4+2 | 96 | 360 | $6.000 |
| n1-standard-96+nvidia-tesla-t4+4 | 96 | 360 | $6.528 |
| Instance Type | VCPUs | Memory (GiB) | Rate | List Price | USD/Credit per hour |
|---|---|---|---|---|---|
| Standard_B2ms | 2 | 8 | 8320 | $0.083 | $0.100 |
| Standard_B4ms | 4 | 16 | 16600 | $0.166 | $0.199 |
| Standard_B8ms | 8 | 32 | 33300 | $0.333 | $0.400 |
| Standard_B16ms | 16 | 64 | 66600 | $0.666 | $0.799 |
| Standard_DS2_v2 | 2 | 7 | 14600 | $0.146 | $0.175 |
| Standard_DS3_v2 | 4 | 14 | 29300 | $0.293 | $0.352 |
| Standard_DS4_v2 | 8 | 28 | 58500 | $0.585 | $0.702 |
| Standard_DS5_v2 | 16 | 56 | 117000 | $1.170 | $1.404 |
| Standard_D2ps_v5 | 2 | 8 | 14600 | $0.077 | $0.092 |
| Standard_D4ps_v5 | 4 | 16 | 7700 | $0.154 | $0.185 |
| Standard_D8ps_v5 | 8 | 32 | 15400 | $0.308 | $0.370 |
| Standard_D16ps_v5 | 16 | 64 | 30800 | $0.616 | $0.739 |
| Standard_D32ps_v5 | 32 | 128 | 61600 | $1.232 | $1.478 |
| Standard_D48ps_v5 | 48 | 192 | 23200 | $1.848 | $2.218 |
| Standard_NC4as_T4_v3 | 4 | 28 | 52600 | $0.526 | $0.631 |
| Standard_NC8as_T4_v3 | 8 | 56 | 75200 | $0.752 | $0.902 |
| Standard_NC16as_T4_v3 | 16 | 110 | 120400 | $1.204 | $1.445 |
| Standard_NC64as_T4_v3 | 64 | 440 | 435200 | $4.352 | $5.222 |
| Standard_D2S_v5 | 2 | 8 | 9600 | $0.096 | $0.115 |
| Standard_D4S_v5 | 4 | 16 | 19200 | $0.192 | $0.230 |
| Standard_D8S_v5 | 8 | 32 | 38400 | $0.384 | $0.461 |
| Standard_D16S_v5 | 16 | 64 | 76800 | $0.768 | $0.922 |
| Standard_D32S_v5 | 32 | 128 | 153600 | $1.536 | $1.843 |
| Standard_D64S_v5 | 64 | 192 | 230400 | $2.304 | $2.765 |
| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
|---|---|---|---|
| VM.Standard2.1 | 1 | 15 | $0.076 |
| VM.Standard2.2 | 2 | 30 | $0.153 |
| VM.Standard2.4 | 4 | 60 | $0.306 |
| VM.Standard2.8 | 8 | 120 | $0.612 |
| VM.Standard2.16 | 16 | 240 | $1.225 |
| VM.Standard3Flex.1 | 1 | 4 | $0.055 |
| VM.Standard3Flex.2 | 2 | 8 | $0.110 |
| VM.Standard3Flex.4 | 4 | 16 | $0.221 |
| VM.Standard3Flex.8 | 8 | 32 | $0.442 |
| VM.Standard3Flex.16 | 16 | 64 | $0.883 |
| VM.Standard.A1.Flex.1 | 1 | 4 | $0.019 |
| VM.Standard.A1.Flex.2 | 2 | 8 | $0.038 |
| VM.Standard.A1.Flex.4 | 4 | 16 | $0.077 |
| VM.Standard.A1.Flex.8 | 8 | 32 | $0.154 |
| VM.Standard.A1.Flex.16 | 16 | 64 | $0.309 |
| Type | Description |
|---|---|
| Supported Kubernetes Versions | {/* START_kind_VERSIONS */}1.26.15, 1.27.16, 1.28.15, 1.29.14, 1.30.10, 1.31.6, 1.32.3{/* END_kind_VERSIONS */} |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | No |
| Node Auto Scaling | No |
| Nodes | Supports a single node. |
| IP Family | Supports `ipv4` or `dual`. |
| Limitations | See Limitations |
| Common Use Cases | Smoke tests |
| Type | Description |
|---|---|
| Supported k3s Versions | The upstream k8s version that matches the Kubernetes version requested. |
| Supported Kubernetes Versions | {/* START_k3s_VERSIONS */}1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.1, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.29.13, 1.29.14, 1.29.15, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.30.9, 1.30.10, 1.30.11, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.31.7, 1.32.0, 1.32.1, 1.32.2, 1.32.3{/* END_k3s_VERSIONS */} |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | Yes |
| Node Auto Scaling | No |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | |
| Type | Description |
|---|---|
| Supported RKE2 Versions | The upstream k8s version that matches the Kubernetes version requested. |
| Supported Kubernetes Versions | {/* START_rke2_VERSIONS */}1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.29.13, 1.29.14, 1.29.15, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.30.9, 1.30.10, 1.30.11, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.31.7, 1.32.0, 1.32.1, 1.32.2, 1.32.3{/* END_rke2_VERSIONS */} |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | Yes |
| Node Auto Scaling | No |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | |
| Type | Description |
|---|---|
| Supported OpenShift Versions | {/* START_openshift_VERSIONS */}4.10.0-okd, 4.11.0-okd, 4.12.0-okd, 4.13.0-okd, 4.14.0-okd, 4.15.0-okd, 4.16.0-okd, 4.17.0-okd{/* END_openshift_VERSIONS */} |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | Yes |
| Node Auto Scaling | No |
| Nodes | Supports multiple nodes for versions 4.13.0-okd and later. |
| IP Family | Supports `ipv4`. |
| Limitations | For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported Embedded Cluster Versions | Any valid release sequence that has previously been promoted to the channel where the customer license is assigned. Version is optional and defaults to the latest available release on the channel. |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | Yes |
| Nodes | Supports multiple nodes (alpha). |
| IP Family | Supports `ipv4`. |
| Limitations | For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported kURL Versions | Any promoted kURL installer. Version is optional. For an installer version other than "latest", you can find the specific Installer ID for a previously promoted installer under the relevant **Install Command** (ID after kurl.sh/) on the **Channels > kURL Installer History** page in the Vendor Portal. For more information about viewing the history of kURL installers promoted to a channel, see [Installer History](/vendor/installer-history). |
| Supported Instance Types | See Replicated Instance Types |
| Node Groups | Yes |
| Node Auto Scaling | No |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | Does not work with the Longhorn add-on. For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported Kubernetes Versions | {/* START_eks_VERSIONS */}1.25, 1.26, 1.27, 1.28, 1.29, 1.30, 1.31, 1.32{/* END_eks_VERSIONS */} Extended Support Versions: 1.25, 1.26, 1.27, 1.28, 1.29 |
| Supported Instance Types | m6i.large, m6i.xlarge, m6i.2xlarge, m6i.4xlarge, m6i.8xlarge, m7i.large, m7i.xlarge, m7i.2xlarge, m7i.4xlarge, m7i.8xlarge, m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m7g.large (arm), m7g.xlarge (arm), m7g.2xlarge (arm), m7g.4xlarge (arm), m7g.8xlarge (arm), c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, g4dn.xlarge (gpu), g4dn.2xlarge (gpu), g4dn.4xlarge (gpu), g4dn.8xlarge (gpu), g4dn.12xlarge (gpu), g4dn.16xlarge (gpu) g4dn instance types depend on available capacity. After a g4dn cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [Amazon EKS optimized accelerated Amazon Linux AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#gpu-ami) in the AWS documentation. |
| Node Groups | Yes |
| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | You can only choose a minor version, not a patch version. The EKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported Kubernetes Versions | {/* START_gke_VERSIONS */}1.30, 1.31, 1.32{/* END_gke_VERSIONS */} |
| Supported Instance Types | n2-standard-2, n2-standard-4, n2-standard-8, n2-standard-16, n2-standard-32, t2a-standard-2 (arm), t2a-standard-4 (arm), t2a-standard-8 (arm), t2a-standard-16 (arm), t2a-standard-32 (arm), t2a-standard-48 (arm), e2-standard-2, e2-standard-4, e2-standard-8, e2-standard-16, e2-standard-32, n1-standard-1+nvidia-tesla-t4+1 (gpu), n1-standard-1+nvidia-tesla-t4+2 (gpu), n1-standard-1+nvidia-tesla-t4+4 (gpu), n1-standard-2+nvidia-tesla-t4+1 (gpu), n1-standard-2+nvidia-tesla-t4+2 (gpu), n1-standard-2+nvidia-tesla-t4+4 (gpu), n1-standard-4+nvidia-tesla-t4+1 (gpu), n1-standard-4+nvidia-tesla-t4+2 (gpu), n1-standard-4+nvidia-tesla-t4+4 (gpu), n1-standard-8+nvidia-tesla-t4+1 (gpu), n1-standard-8+nvidia-tesla-t4+2 (gpu), n1-standard-8+nvidia-tesla-t4+4 (gpu), n1-standard-16+nvidia-tesla-t4+1 (gpu), n1-standard-16+nvidia-tesla-t4+2 (gpu), n1-standard-16+nvidia-tesla-t4+4 (gpu), n1-standard-32+nvidia-tesla-t4+1 (gpu), n1-standard-32+nvidia-tesla-t4+2 (gpu), n1-standard-32+nvidia-tesla-t4+4 (gpu), n1-standard-64+nvidia-tesla-t4+1 (gpu), n1-standard-64+nvidia-tesla-t4+2 (gpu), n1-standard-64+nvidia-tesla-t4+4 (gpu), n1-standard-96+nvidia-tesla-t4+1 (gpu), n1-standard-96+nvidia-tesla-t4+2 (gpu), n1-standard-96+nvidia-tesla-t4+4 (gpu) You can specify more than one node. |
| Node Groups | Yes |
| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | You can choose only a minor version, not a patch version. The GKE installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported Kubernetes Versions | {/* START_aks_VERSIONS */}1.29, 1.30, 1.31{/* END_aks_VERSIONS */} |
| Supported Instance Types | Standard_B2ms, Standard_B4ms, Standard_B8ms, Standard_B16ms, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS2_v5, Standard_DS3_v5, Standard_DS4_v5, Standard_DS5_v5, Standard_D2ps_v5 (arm), Standard_D4ps_v5 (arm), Standard_D8ps_v5 (arm), Standard_D16ps_v5 (arm), Standard_D32ps_v5 (arm), Standard_D48ps_v5 (arm), Standard_NC4as_T4_v3 (gpu), Standard_NC8as_T4_v3 (gpu), Standard_NC16as_T4_v3 (gpu), Standard_NC64as_T4_v3 (gpu) GPU instance types depend on available capacity. After a GPU cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [NVIDIA GPU Operator with Azure Kubernetes Service](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) in the NVIDIA documentation. |
| Node Groups | Yes |
| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | You can choose only a minor version, not a patch version. The AKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Description |
|---|---|
| Supported Kubernetes Versions | {/* START_oke_VERSIONS */}1.30.1, 1.31.1, 1.32.1{/* END_oke_VERSIONS */} |
| Supported Instance Types | VM.Standard2.1, VM.Standard2.2, VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, VM.Standard3.Flex.1, VM.Standard3.Flex.2, VM.Standard3.Flex.4, VM.Standard3.Flex.8, VM.Standard3.Flex.16, VM.Standard.A1.Flex.1 (arm), VM.Standard.A1.Flex.2 (arm), VM.Standard.A1.Flex.4 (arm), VM.Standard.A1.Flex.8 (arm), VM.Standard.A1.Flex.16 (arm) |
| Node Groups | Yes |
| Node Auto Scaling | No. |
| Nodes | Supports multiple nodes. |
| IP Family | Supports `ipv4`. |
| Limitations | Provisioning an OKE cluster takes between 8 and 10 minutes. If needed, you might have to adjust some timeouts in your CI pipelines. For additional limitations that apply to all distributions, see Limitations. |
| Common Use Cases | Customer release tests |
| Type | Memory (GiB) | VCPU Count |
|---|---|---|
| r1.small | 8 | 2 |
| r1.medium | 16 | 4 |
| r1.large | 32 | 8 |
| r1.xlarge | 64 | 16 |
| r1.2xlarge | 128 | 32 |
```
This is text from a user config value: '{{repl ConfigOption "example_default_value"}}'
This is more text from a user config value: '{{repl ConfigOption "more_text"}}'
This is a hidden value: '{{repl ConfigOption "hidden_text"}}'
```

This creates a reference to the `more_text` field using a Replicated KOTS template function. The ConfigOption template function renders the user input from the configuration item that you specify. For more information, see [Config Context](/reference/template-functions-config-context) in _Reference_.

1. Save the changes to both YAML files.

1. Change to the root `replicated-cli-tutorial` directory, then run the following command to verify that there are no errors in the YAML:

   ```
   replicated release lint --yaml-dir=manifests
   ```

1. Create a new release and promote it to the Unstable channel:

   ```
   replicated release create --auto
   ```

   **Example output**:

   ```
   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 2
   • Promoting ✓
     • Channel 2GxpUm7lyB2g0ramqUXqjpLHzK0 successfully set to release 2
   ```

1. Type `y` and press **Enter** to continue with the defaults.

   **Example output**:

   ```
   RULE    TYPE    FILENAME    LINE    MESSAGE

   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 2
   • Promoting ✓
     • Channel 2GmYFUFzj8JOSLYw0jAKKJKFua8 successfully set to release 2
   ```

   The release is created and promoted to the Unstable channel with `SEQUENCE: 2`.

1. Verify that the release was promoted to the Unstable channel:

   ```
   replicated release ls
   ```

   **Example output**:

   ```
   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
   2           2022-11-03T19:16:24Z    0001-01-01T00:00:00Z    Unstable
   1           2022-11-03T18:49:13Z    0001-01-01T00:00:00Z
   ```

## Next Step

Continue to [Step 9: Update the Application](tutorial-cli-update-app) to return to the Admin Console and update the application to the new version that you promoted.

---

# Step 4: Create a Release

Now that you have the manifest files for the sample Kubernetes application, you can create a release for the `cli-tutorial` application and promote the release to the Unstable channel.

By default, the Vendor Portal includes Unstable, Beta, and Stable release channels. The Unstable channel is intended for software vendors to use for internal testing, before promoting a release to the Beta or Stable channels for distribution to customers. For more information about channels, see [About Channels and Releases](releases-about).

To create and promote a release to the Unstable channel:

1. From the `replicated-cli-tutorial` directory, lint the application manifest files and ensure that there are no errors in the YAML:

   ```
   replicated release lint --yaml-dir=manifests
   ```

   If there are no errors, an empty list is displayed with a zero exit code:

   ```text
   RULE    TYPE    FILENAME    LINE    MESSAGE
   ```

   For a complete list of the possible error, warning, and informational messages that can appear in the output of the `release lint` command, see [Linter Rules](/reference/linter).

1. Initialize the project as a Git repository:

   ```
   git init
   git add .
   git commit -m "Initial Commit: CLI Tutorial"
   ```

   Initializing the project as a Git repository allows you to track your history. The Replicated CLI also reads Git metadata to help with the generation of release metadata, such as version labels.

1. From the `replicated-cli-tutorial` directory, create a release with the default settings:

   ```
   replicated release create --auto
   ```

   The `--auto` flag generates release notes and metadata based on the Git status.
   **Example output:**

   ```
   • Reading Environment ✓

   Prepared to create release with defaults:

       yaml-dir        "./manifests"
       promote         "Unstable"
       version         "Unstable-ba710e5"
       release-notes   "CLI release of master triggered by exampleusername [SHA: d4173a4] [31 Oct 22 08:51 MDT]"
       ensure-channel  true
       lint-release    true

   Create with these properties? [Y/n]
   ```

1. Type `y` and press **Enter** to confirm the prompt.

   **Example output:**

   ```text
   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 1
   • Promoting ✓
     • Channel VEr0nhJBBUdaWpPvOIK-SOryKZEwa3Mg successfully set to release 1
   ```

   The release is created and promoted to the Unstable channel.

1. Verify that the release was promoted to the Unstable channel:

   ```
   replicated release ls
   ```

   **Example output:**

   ```text
   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
   1           2022-10-31T14:55:35Z    0001-01-01T00:00:00Z    Unstable
   ```

## Next Step

Continue to [Step 5: Create a Customer](tutorial-cli-create-customer) to create a customer license file that you will upload when installing the application.

---

# Step 7: Configure the Application

After you install KOTS, you can log in to the KOTS Admin Console. This procedure shows you how to make a configuration change for the application from the Admin Console, which is a typical task performed by end users.

To configure the application:

1. Access the Admin Console using `https://localhost:8800` if the installation script is still running. Otherwise, run the following command to access the Admin Console:

   ```bash
   kubectl kots admin-console --namespace NAMESPACE
   ```

   Replace `NAMESPACE` with the namespace where KOTS is installed.

1. Enter the password that you created in [Step 6: Install KOTS and the Application](tutorial-cli-install-app-manager) to log in to the Admin Console.

   The Admin Console dashboard opens. On the Admin Console **Dashboard** tab, users can take various actions, including viewing the application status, opening the application, checking for application updates, syncing their license, and setting up application monitoring on the cluster with Prometheus.

1. On the **Config** tab, select the **Customize Text Inputs** checkbox. In the **Text Example** field, enter any text. For example, `Hello`.

   This page displays configuration settings that are specific to the application. Software vendors define the fields that are displayed on this page in the KOTS Config custom resource. For more information, see [Config](/reference/custom-resource-config) in _Reference_.

1. Click **Save config**. In the dialog that opens, click **Go to updated version**.

   The **Version history** tab opens.

1. Click **Deploy** for the new version. Then click **Yes, deploy** in the confirmation dialog.

1. Click **Open App** to view the application in your browser.

   Notice the text that you entered previously on the configuration page is displayed on the screen.

   :::note
   If you do not see the new text, refresh your browser.
   :::

## Next Step

Continue to [Step 8: Create a New Version](tutorial-cli-create-new-version) to make a change to one of the manifest files for the `cli-tutorial` application, then use the Replicated CLI to create and promote a new release.

---

# Step 6: Install KOTS and the Application

The next step is to test the installation process for the application release that you promoted. Using the KOTS CLI, you will install KOTS and the sample application in your cluster.

KOTS is the Replicated component that allows your users to install, manage, and upgrade your application.
Users can interact with KOTS through the Admin Console or through the KOTS CLI.
To install KOTS and the application:
1. From the `replicated-cli-tutorial` directory, run the following command to get the installation commands for the Unstable channel, where you promoted the release for the `cli-tutorial` application:
```
replicated channel inspect Unstable
```
**Example output:**
```
ID:             2GmYFUFzj8JOSLYw0jAKKJKFua8
NAME:           Unstable
DESCRIPTION:
RELEASE:        1
VERSION:        Unstable-d4173a4
EXISTING:

    curl -fsSL https://kots.io/install | bash
    kubectl kots install cli-tutorial/unstable

EMBEDDED:

    curl -fsSL https://k8s.kurl.sh/cli-tutorial-unstable | sudo bash

AIRGAP:

    curl -fSL -o cli-tutorial-unstable.tar.gz https://k8s.kurl.sh/bundle/cli-tutorial-unstable.tar.gz
    # ... scp or sneakernet cli-tutorial-unstable.tar.gz to airgapped machine, then
    tar xvf cli-tutorial-unstable.tar.gz
    sudo bash ./install.sh airgap
```
This command prints information about the channel, including the commands for installing in:
* An existing cluster
* An _embedded cluster_ created by Replicated kURL
* An air gap cluster that is not connected to the internet
1. If you have not already, configure kubectl access to the cluster you provisioned as part of [Set Up the Environment](tutorial-cli-setup#set-up-the-environment). For more information about setting the context for kubectl, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.
1. Run the `EXISTING` installation script with the following flags to automatically upload the license file and run the preflight checks at the same time you run the installation.
**Example:**
```
curl -fsSL https://kots.io/install | bash
kubectl kots install cli-tutorial/unstable \
  --license-file ./LICENSE_YAML \
  --shared-password PASSWORD \
  --namespace NAMESPACE
```
Replace:
- `LICENSE_YAML` with the local path to your license file.
- `PASSWORD` with a password to access the Admin Console.
- `NAMESPACE` with the namespace where KOTS and the application will be installed.
When the Admin Console is ready, the script prints the `https://localhost:8800` URL where you can access the Admin Console and the `http://localhost:8888` URL where you can access the application.
**Example output**:
```
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
• Waiting for Admin Console to be ready ✓
• Waiting for installation to complete ✓
• Waiting for preflight checks to complete ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
• Go to http://localhost:8888 to access the application
```
1. Verify that the Pods are running for the example NGINX service and for kotsadm:
```bash
kubectl get pods --namespace NAMESPACE
```
Replace `NAMESPACE` with the namespace where KOTS and the application are installed.
**Example output:**
```
NAME                       READY   STATUS    RESTARTS   AGE
kotsadm-7ccc8586b8-n7vf6   1/1     Running   0          12m
kotsadm-minio-0            1/1     Running   0          17m
kotsadm-rqlite-0           1/1     Running   0          17m
nginx-688f4b5d44-8s5v7     1/1     Running   0          11m
```
## Next Step
Continue to [Step 7: Configure the Application](tutorial-cli-deploy-app) to log in to the Admin Console and make configuration changes.
---
# Step 1: Install the Replicated CLI
In this tutorial, you use the Replicated CLI to create and promote releases for a sample application with Replicated. The Replicated CLI is the CLI for the Replicated Vendor Portal.
This procedure describes how to create a Vendor Portal account, install the Replicated CLI on your local machine, and set up a `REPLICATED_API_TOKEN` environment variable for authentication.
To install the Replicated CLI:
1. Do one of the following to create an account in the Replicated Vendor Portal:
* **Join an existing team**: If you have an existing Vendor Portal team, you can ask your team administrator to send you an invitation to join.
* **Start a trial**: Alternatively, go to [vendor.replicated.com](https://vendor.replicated.com/) and click **Sign up** to create a 21-day trial account for completing this tutorial.
1. Run the following command to use [Homebrew](https://brew.sh) to install the CLI:
```
brew install replicatedhq/replicated/cli
```
For the latest Linux or macOS versions of the Replicated CLI, see the [replicatedhq/replicated](https://github.com/replicatedhq/replicated/releases) releases in GitHub.
1. Verify the installation:
```
replicated version
```
**Example output**:
```json
{
  "version": "0.37.2",
  "git": "8664ac3",
  "buildTime": "2021-08-24T17:05:26Z",
  "go": {
      "version": "go1.14.15",
      "compiler": "gc",
      "os": "darwin",
      "arch": "amd64"
  }
}
```
If you run a Replicated CLI command, such as `replicated release ls`, you see the following error message about a missing API token:
```
Error: set up APIs: Please provide your API token
```
1. Create an API token for the Replicated CLI:
1. Log in to the Vendor Portal, and go to the [Account settings](https://vendor.replicated.com/account-settings) page.
1. Under **User API Tokens**, click **Create user API token**. For Nickname, provide a name for the token. For Permissions, select **Read and Write**.
For more information about User API tokens, see [User API Tokens](replicated-api-tokens#user-api-tokens) in _Generating API Tokens_.
1. Click **Create Token**.
1. Copy the string that appears in the dialog.
1. Export the string that you copied in the previous step to an environment variable named `REPLICATED_API_TOKEN`:
```bash
export REPLICATED_API_TOKEN=YOUR_TOKEN
```
Replace `YOUR_TOKEN` with the token string that you copied from the Vendor Portal in the previous step.
1. Verify the User API token:
```
replicated release ls
```
You see the following error message:
```
Error: App not found:
```
## Next Step
Continue to [Step 2: Create an Application](tutorial-cli-create-app) to use the Replicated CLI to create an application.
---
# Step 3: Get the Sample Manifests
To create a release for the `cli-tutorial` application, first create the Kubernetes manifest files for the application. This tutorial provides a set of sample manifest files for a simple Kubernetes application that deploys an NGINX service.
To get the sample manifest files:
1. Run the following command to create and change to a `replicated-cli-tutorial` directory:
```
mkdir replicated-cli-tutorial
cd replicated-cli-tutorial
```
1. Create a `/manifests` directory and download the sample manifest files from the [kots-default-yaml](https://github.com/replicatedhq/kots-default-yaml) repository in GitHub:
```
mkdir ./manifests
curl -fSsL https://github.com/replicatedhq/kots-default-yaml/archive/refs/heads/main.tar.gz | \
  tar xzv --strip-components=1 -C ./manifests \
  --exclude README.md --exclude LICENSE --exclude .gitignore
```
1. Verify that you can see the YAML files in the `replicated-cli-tutorial/manifests` folder:
```
ls manifests/
```
```
example-configmap.yaml    example-service.yaml    kots-app.yaml       kots-lint-config.yaml    kots-support-bundle.yaml
example-deployment.yaml   k8s-app.yaml            kots-config.yaml    kots-preflight.yaml
```
## Next Step
Continue to [Step 4: Create a Release](tutorial-cli-create-release) to create and promote the first release for the `cli-tutorial` application using these manifest files.
---
# Introduction and Setup
This tutorial introduces you to the Replicated features for software vendors and their enterprise users. It is designed to familiarize you with the key concepts and processes that you use as a software vendor when you package and distribute your application with Replicated.
In this tutorial, you use a set of sample manifest files for a basic NGINX application to learn how to:
* Create and promote releases for an application as a software vendor
* Install and update an application on a Kubernetes cluster as an enterprise user
The steps in this KOTS CLI-based tutorial show you how to use the Replicated CLI to perform these tasks. The Replicated CLI is the CLI for the Replicated Vendor Portal. You can use the Replicated CLI as a software vendor to programmatically create, configure, and manage your application artifacts, including application releases, release channels, customer entitlements, private image registries, and more.
:::note
This tutorial assumes that you have a working knowledge of Kubernetes. For an introduction to Kubernetes and free training resources, see [Training](https://kubernetes.io/training/) in the Kubernetes documentation.
:::
## Set Up the Environment
As part of this tutorial, you will install a sample application into a Kubernetes cluster. Before you begin, do the following to set up your environment:
* Create a Kubernetes cluster that meets the minimum system requirements described in [KOTS Installation Requirements](/enterprise/installing-general-requirements). You can use any cloud provider or tool that you prefer to create a cluster, such as Google Kubernetes Engine (GKE), Amazon Web Services (AWS), or minikube.
**Example:**
To create a cluster in GKE, run the following command in the gcloud CLI:
```
gcloud container clusters create NAME --preemptible --no-enable-ip-alias
```
Where `NAME` is any name for the cluster.
* Install kubectl, the Kubernetes command line tool. See [Install Tools](https://kubernetes.io/docs/tasks/tools/) in the Kubernetes documentation.
* Configure kubectl command line access to the cluster that you created. See [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.
## Related Topics
For more information about the subjects in the getting started tutorials, see the following topics:
* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [Linter Rules](/reference/linter)
* [Online Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster)
* [Performing Updates in Existing Clusters](/enterprise/updating-app-manager)
---
# Step 9: Update the Application
To test the new release that you promoted, return to the Admin Console in a browser to update the application.
To update the application:
1. Access the KOTS Admin Console using `https://localhost:8800` if the installation script is still running.
Otherwise, run the following command to access the Admin Console:
```bash
kubectl kots admin-console --namespace NAMESPACE
```
Replace `NAMESPACE` with the namespace where the Admin Console is installed.
1. Go to the Version history page, and click **Check for update**.

The Admin Console loads the new release that you promoted.
1. Click **Deploy**. In the dialog, click **Yes, deploy** to deploy the new version.

1. After the Admin Console deploys the new version, go to the **Config** page where the **Another Text Example** field that you added is displayed.

1. In the new **Another Text Example** field, enter any text. Click **Save config**.
The Admin Console notifies you that the configuration settings for the application have changed.

1. In the dialog, click **Go to updated version**. The Admin Console loads the updated version on the Version history page.
1. On the Version history page, click **Deploy** next to the latest version to deploy the configuration change.

1. Go to the **Dashboard** page and click **Open App**. The application displays the text that you added to the field.

:::note
If you do not see the new text, refresh your browser.
:::
## Summary
Congratulations! As part of this tutorial, you:
* Created and promoted a release for a Kubernetes application using the Replicated CLI
* Installed the application in a Kubernetes cluster
* Edited the manifest files for the application, adding a new configuration field and using template functions to reference the field
* Promoted a new release with your changes
* Used the Admin Console to update the application to the latest version
---
# Step 2: Create an Application
Next, install the Replicated CLI and then create an application.
To create an application:
1. Install the Replicated CLI:
```
brew install replicatedhq/replicated/cli
```
For more installation options, see [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Authorize the Replicated CLI:
```
replicated login
```
In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
1. Create an application named `Grafana`:
```
replicated app create Grafana
```
1. Set the `REPLICATED_APP` environment variable to the application that you created. This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:
1. Get the slug for the application that you created:
```
replicated app ls
```
**Example output**:
```
ID                             NAME       SLUG              SCHEDULER
2WthxUIfGT13RlrsUx9HR7So8bR    Grafana    grafana-python    kots
```
In the example above, the application slug is `grafana-python`.
:::info
The application _slug_ is a unique string that is generated based on the application name. You can use the application slug to interact with the application through the Replicated CLI and the Vendor API v3. The application name and slug are often different from one another because it is possible to create more than one application with the same name.
:::
1. Set the `REPLICATED_APP` environment variable to the application slug.
**MacOS Example:**
```
export REPLICATED_APP=grafana-python
```
## Next Step
Add the Replicated SDK to the Helm chart and package the chart to an archive. See [Step 3: Package the Helm Chart](tutorial-config-package-chart).
## Related Topics
* [Create an Application](/vendor/vendor-portal-manage-app#create-an-application)
* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [replicated app create](/reference/replicated-cli-app-create)
---
# Step 5: Create a KOTS-Enabled Customer
After promoting the release, create a customer with the KOTS entitlement so that you can install the release with KOTS.
To create a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `KOTS Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **License type**, select Development.
1. For **License options**, verify that **KOTS Install Enabled** is enabled. This is the entitlement that allows the customer to install with KOTS.
1. Click **Save Changes**.
1. On the **Manage customer** page for the customer, click **Download license**. You will use the license file to install with KOTS.

[View a larger version of this image](/images/customer-download-license.png)
## Next Step
Get the KOTS installation command and install. See [Step 6: Install the Release with KOTS](tutorial-config-install-kots).
## Related Topics
* [About Customers](/vendor/licenses-about)
* [Creating and Managing Customers](/vendor/releases-creating-customer)
---
# Step 4: Add the Chart Archive to a Release
Next, add the Helm chart archive to a new release for the application in the Replicated vendor platform. The purpose of this step is to configure a release that supports installation with KOTS.
Additionally, this step defines a user-facing application configuration page that displays in the KOTS Admin Console during installation where users can set their own Grafana login credentials.
To create a release:
1. In the `grafana` directory, create a subdirectory named `manifests`:
```
mkdir manifests
```
You will add the files required to support installation with Replicated KOTS to this subdirectory.
1. Move the Helm chart archive that you created to `manifests`:
```
mv grafana-9.6.5.tgz manifests
```
1. In the `manifests` directory, create the following YAML files to configure the release:
```
cd manifests
```
```
touch kots-app.yaml k8s-app.yaml kots-config.yaml grafana.yaml
```
1. In each file, paste the corresponding YAML provided in the tabs below:
The KOTS Application custom resource enables features in the Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the grafana Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Grafana application in a browser.
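For reference, a minimal sketch of a KOTS Application custom resource with these characteristics is shown below. The icon URL, service name, and port values are illustrative placeholders rather than the exact values used in this tutorial.
```yaml
# kots-app.yaml (sketch)
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: grafana
spec:
  title: Grafana
  # Placeholder icon URL
  icon: https://example.com/grafana-icon.png
  # Status informer for the grafana Deployment shown on the dashboard
  statusInformers:
    - deployment/grafana
  # Port forward so that the user can open Grafana in a browser
  ports:
    - serviceName: grafana
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://grafana"
```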
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
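A sketch of the corresponding Kubernetes Application custom resource might look like the following, where the link URL matches the `applicationUrl` set in the KOTS Application custom resource:
```yaml
# k8s-app.yaml (sketch)
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: grafana
spec:
  descriptor:
    links:
      # Adds an Open App button to the Admin Console dashboard
      - description: Open App
        url: "http://grafana"
```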
The Config custom resource specifies a user-facing configuration page in the Admin Console designed for collecting application configuration from users. The YAML below creates "Admin User" and "Admin Password" fields that will be shown to the user on the configuration page during installation. These fields will be used to set the login credentials for Grafana.
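As a rough sketch, a Config custom resource that defines these two fields could look like the following. The group and item names used here (such as `admin_user` and `admin_password`) are illustrative; the item names must match whatever the HelmChart custom resource references with the ConfigOption template function.
```yaml
# kots-config.yaml (sketch)
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: grafana-config
spec:
  groups:
    - name: grafana
      title: Grafana
      description: Grafana Configuration
      items:
        # Shown as "Admin User" on the Admin Console configuration page
        - name: admin_user
          title: Admin User
          type: text
          default: admin
        # Shown as "Admin Password" and masked in the UI
        - name: admin_password
          title: Admin Password
          type: password
          default: change_me
```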
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart.
The HelmChart custom resource below contains a values key, which creates a mapping to the Grafana values.yaml file. In this case, the values.admin.user and values.admin.password fields map to admin.user and admin.password in the Grafana values.yaml file.
During installation, KOTS renders the ConfigOption template functions in the values.admin.user and values.admin.password fields and then sets the corresponding Grafana values accordingly.
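Putting this together, a sketch of the HelmChart custom resource is shown below. The `values.admin.user` and `values.admin.password` mapping follows the description above; the ConfigOption item names are assumed to match the Config custom resource sketch, and the apiVersion may differ depending on which HelmChart schema version the release uses.
```yaml
# grafana.yaml (sketch)
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: grafana
spec:
  chart:
    # Must match the name and version of the grafana-9.6.5.tgz archive
    name: grafana
    chartVersion: 9.6.5
  values:
    admin:
      # Rendered from the Admin Console configuration page at deploy time
      user: 'repl{{ ConfigOption "admin_user" }}'
      password: 'repl{{ ConfigOption "admin_password" }}'
```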
[View a larger version of this image](/images/release-promote.png)
## Next Step
Create a customer with the KOTS entitlement so that you can install the release in your cluster using Replicated KOTS. See [Step 5: Create a KOTS-Enabled Customer](tutorial-config-create-customer).
## Related Topics
* [About Channels and Releases](/vendor/releases-about)
* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
* [Config Custom Resource](/reference/custom-resource-config)
* [Manipulating Helm Chart Values with KOTS](/vendor/helm-optional-value-keys)
---
# Step 1: Get the Sample Chart and Test
To begin, get the sample Grafana Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install and access the application before adding the chart to a release in the Replicated vendor platform.
To get the sample Grafana chart and test installation:
1. Run the following command to pull and untar version 9.6.5 of the Bitnami Grafana Helm chart:
```
helm pull --untar oci://registry-1.docker.io/bitnamicharts/grafana --version 9.6.5
```
For more information about this chart, see the [bitnami/grafana](https://github.com/bitnami/charts/tree/main/bitnami/grafana) repository in GitHub.
1. Change to the new `grafana` directory that was created:
```
cd grafana
```
1. View the files in the directory:
```
ls
```
The directory contains the following files:
```
Chart.lock Chart.yaml README.md charts templates values.yaml
```
1. Install the chart in your cluster:
```
helm install grafana . --namespace grafana --create-namespace
```
To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/grafana/README.md#installing-the-chart) in the `bitnami/grafana` repository.
After running the installation command, the following output is displayed:
```
NAME: grafana
LAST DEPLOYED: Thu Dec 14 14:54:50 2023
NAMESPACE: grafana
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: grafana
CHART VERSION: 9.6.5
APP VERSION: 10.2.2
** Please be patient while the chart is being deployed **
1. Get the application URL by running these commands:
echo "Browse to http://127.0.0.1:8080"
kubectl port-forward svc/grafana 8080:3000 &
2. Get the admin credentials:
echo "User: admin"
echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
```
1. Watch the `grafana` Deployment until it is ready:
```
kubectl get deploy grafana --namespace grafana --watch
```
1. When the Deployment is created, run the commands provided in the output of the installation command to get the Grafana login credentials:
```
echo "User: admin"
echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
```
1. Run the commands provided in the output of the installation command to get the Grafana URL:
```
echo "Browse to http://127.0.0.1:8080"
kubectl port-forward svc/grafana 8080:3000 --namespace grafana
```
:::note
Include `--namespace grafana` in the `kubectl port-forward` command.
:::
1. In a browser, go to the URL to open the Grafana login page:
[View a larger version of this image](/images/grafana-login.png)
1. Log in using the credentials provided to open the Grafana dashboard:
[View a larger version of this image](/images/grafana-dashboard.png)
1. Uninstall the Helm chart:
```
helm uninstall grafana --namespace grafana
```
This command removes all the Kubernetes resources associated with the chart and uninstalls the `grafana` release.
1. Delete the namespace:
```
kubectl delete namespace grafana
```
## Next Step
Log in to the Vendor Portal and create an application. See [Step 2: Create an Application](tutorial-config-create-app).
## Related Topics
* [Helm Install](https://helm.sh/docs/helm/helm_install/)
* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
* [Helm Create](https://helm.sh/docs/helm/helm_create/)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
* [bitnami/grafana](https://github.com/bitnami/charts/blob/main/bitnami/grafana)
---
# Step 6: Install the Release with KOTS
Next, get the KOTS installation command from the Unstable channel in the Vendor Portal and then install the release using the customer license that you downloaded.
As part of installation, you will set Grafana login credentials on the KOTS Admin Console configuration page.
To install the release with KOTS:
1. In the [Vendor Portal](https://vendor.replicated.com), go to **Channels**. From the **Unstable** channel card, under **Install**, copy the **KOTS Install** command.

[View a larger version of this image](/images/grafana-unstable-channel.png)
1. On the command line, run the **KOTS Install** command that you copied:
```bash
curl https://kots.io/install | bash
kubectl kots install $REPLICATED_APP/unstable
```
This installs the latest version of the KOTS CLI and the Admin Console. The Admin Console provides a user interface where you can upload the customer license file and deploy the application.
For additional KOTS CLI installation options, including how to install without root access, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
:::note
KOTS v1.104.0 or later is required to deploy the Replicated SDK. You can verify the version of KOTS installed with `kubectl kots version`.
:::
1. Complete the installation command prompts:
1. For `Enter the namespace to deploy to`, enter `grafana`.
1. For `Enter a new password to be used for the Admin Console`, provide a password to access the Admin Console.
When the Admin Console is ready, the command prints the URL where you can access the Admin Console. At this point, the KOTS CLI is installed and the Admin Console is running, but the application is not yet deployed.
**Example output:**
```bash
Enter the namespace to deploy to: grafana
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
Enter a new password for the Admin Console (6+ characters): ••••••••
• Waiting for Admin Console to be ready ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
```
1. With the port forward running, go to `http://localhost:8800` in a browser to access the Admin Console.
1. On the login page, enter the password that you created for the Admin Console.
1. On the license page, select the license file that you downloaded previously and click **Upload license**.
1. On the **Configure Grafana** page, enter a username and password. You will use these credentials to log in to Grafana.

[View a larger version of this image](/images/grafana-config.png)
1. Click **Continue**.
The Admin Console dashboard opens. The application status changes from Missing to Unavailable while the `grafana` Deployment is being created.

[View a larger version of this image](/images/grafana-unavailable.png)
1. On the command line, press Ctrl+C to exit the port forward.
1. Watch for the `grafana` Deployment to become ready:
```
kubectl get deploy grafana --namespace grafana --watch
```
1. After the Deployment is ready, run the following command to confirm that the `grafana-admin` Secret was updated with the new password that you created on the **Configure Grafana** page:
```
echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
```
The output of this command displays the password that you created.
1. Start the port forward again to access the Admin Console:
```
kubectl kots admin-console --namespace grafana
```
1. Go to `http://localhost:8800` to open the Admin Console.
On the Admin Console dashboard, the application status is now displayed as Ready:

[View a larger version of this image](/images/grafana-ready.png)
1. Click **Open App** to open the Grafana login page in a browser.
[View a larger version of this image](/images/grafana-login.png)
1. On the Grafana login page, enter the username and password that you created on the **Configure Grafana** page. Confirm that you can log in to the application to access the Grafana dashboard:
[View a larger version of this image](/images/grafana-dashboard.png)
1. On the command line, press Ctrl+C to exit the port forward.
1. Uninstall the Grafana application from your cluster:
```bash
kubectl kots remove $REPLICATED_APP --namespace grafana --undeploy
```
**Example output**:
```
• Removing application grafana-python reference from Admin Console and deleting associated resources from the cluster ✓
• Application grafana-python has been removed
```
1. Remove the Admin Console from the cluster:
1. Delete the namespace where the Admin Console is installed:
```
kubectl delete namespace grafana
```
1. Delete the Admin Console ClusterRole and ClusterRoleBinding:
```
kubectl delete clusterrole kotsadm-role
```
```
kubectl delete clusterrolebinding kotsadm-rolebinding
```
## Summary
Congratulations! As part of this tutorial, you used the KOTS Config custom resource to define a configuration page in the Admin Console. You also used the KOTS HelmChart custom resource and KOTS ConfigOption template function to override the default Grafana login credentials with a user-supplied username and password.
To learn more about how to customize the Config custom resource to create configuration fields for your application, see [Config](/reference/custom-resource-config).
## Related Topics
* [kots install](/reference/kots-cli-install/)
* [Install the KOTS CLI](/reference/kots-cli-getting-started/)
* [Delete the Admin Console and Remove Applications](/enterprise/delete-admin-console)
---
# Step 3: Package the Helm Chart
Next, add the Replicated SDK as a dependency of the Helm chart and then package the chart into a `.tgz` archive. The purpose of this step is to prepare the Helm chart to be added to a release.
To add the Replicated SDK and package the Helm chart:
1. In your local file system, go to the `grafana` directory that was created as part of [Step 1: Get the Sample Chart and Test](tutorial-config-get-chart).
1. In the `Chart.yaml` file, add the Replicated SDK as a dependency:
```yaml
# Chart.yaml
dependencies:
- name: replicated
repository: oci://registry.replicated.com/library
version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
1. Update dependencies and package the Helm chart to a `.tgz` chart archive:
```bash
helm package . --dependency-update
```
:::note
If you see a `401 Unauthorized` error message, log out of the Replicated registry by running `helm registry logout registry.replicated.com` and then run `helm package . --dependency-update` again.
:::
## Next Step
Create a release using the Helm chart archive. See [Step 4: Add the Chart Archive to a Release](tutorial-config-create-release).
## Related Topics
* [About the Replicated SDK](/vendor/replicated-sdk-overview)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
---
# Introduction and Setup
This topic provides a summary of the goals and outcomes for the tutorial and also lists the prerequisites to set up your environment before you begin.
## Summary
This tutorial introduces you to mapping user-supplied values from the Replicated KOTS Admin Console configuration page to a Helm chart `values.yaml` file.
In this tutorial, you use a sample Helm chart to learn how to:
* Define a user-facing application configuration page in the KOTS Admin Console
* Set Helm chart values with the user-supplied values from the Admin Console configuration page
## Set Up the Environment
Before you begin, ensure that you have kubectl access to a Kubernetes cluster. You can use any cloud provider or tool that you prefer to create a cluster, such as [Replicated Compatibility Matrix](/vendor/testing-how-to), Google Kubernetes Engine (GKE), or minikube.
## Next Step
Get the sample Bitnami Helm chart and test installation with the Helm CLI. See [Step 1: Get the Sample Chart and Test](/vendor/tutorial-config-get-chart).
---
# Tutorial: Using ECR for Private Images
## Objective
The purpose of this tutorial is to walk you through how to configure Replicated KOTS to pull images from a private registry in Amazon's Elastic Container Registry (ECR). This tutorial demonstrates the differences between using public and private images with KOTS.
## Prerequisites
* To install the application in this tutorial, you must have a virtual machine (VM) that meets the following minimum requirements:
* Ubuntu 18.04
* At least 8 GB of RAM
* 4 CPU cores
* At least 40GB of disk space
* To pull a public NGINX container and push it to a private repository in ECR as part of this tutorial, you must have the following:
* An ECR Repository
* An AWS account to use with Docker to pull and push the public NGINX image to the ECR repository. The AWS account must be able to create a read-only user.
* Docker
* The AWS CLI
## Overview
The guide is divided into the following steps:
1. [Set Up the Testing Environment](#set-up)
2. [Configure Private Registries in Replicated](#2-configure-private-registries-in-replicated)
3. [Update Definition Files](#3-update-definition-files)
4. [Install the New Version](#4-install-the-new-version)
## 1. Set Up the Testing Environment {#set-up}
We are going to use the default NGINX deployment to create our application, then update it to pull the same container from a private repository in ECR and note the differences.
### Create the Sample Application and Deploy the First Release
In this section, we cover at a high level the steps to create a new application and install it on a VM.
To create our sample application, follow these steps:
* Create a new application in the Replicated [vendor portal](https://vendor.replicated.com) and call it 'MySampleECRApp'.
* Create the first release using the default definition files and promote it to the *Unstable* channel.
* Create a customer, assign it to the *Unstable* channel, and download the license file.
* Install the application to a VM.
Log in to the Replicated admin console. To inspect what was deployed, look at the files under **View Files** from the admin console.
The Upstream files (the files from the release created in the vendor portal) show that we are pulling the public image.

We can further validate this if we switch back to the terminal window on the VM where we installed the application.
If we run `kubectl describe pod` on the NGINX pod, we can see that the image is pulled from the public registry.
[View a larger version of this image](/images/add-external-registry.png)
The values for the fields are:
**Endpoint:**
Enter the same URL used to log in to ECR.
For example, to link to the same registry as the one used earlier in this guide, we would enter *4999999999999.dkr.ecr.us-east-2.amazonaws.com*.
**Username:**
Enter the AWS Access Key ID for the user created in the [Setting Up the Service Account User](#setting-up-the-service-account-user) section.
**Password:**
Enter the AWS Secret Key for the user created in the [Setting Up the Service Account User](#setting-up-the-service-account-user) section.
* * *
## 3. Update Definition Files
The last step is to update our definition manifest to pull the image from the ECR repository.
To do this, we'll update the `deployment.yaml` file by adding the ECR registry URL to the `image` value.
Below is an example using the registry URL used in this guide.
```diff
spec:
containers:
- name: nginx
- image: nginx
+ image: 4999999999999.dkr.ecr.us-east-2.amazonaws.com/demo-apps/nginx
envFrom:
```
Save your changes and create the new release and promote it to the *Unstable* channel.
* * *
## 4. Install the New Version
To deploy the new version of the application, go back to the admin console and select the *Version History* tab.
Click on **Check for Updates** and then **Deploy** when the new version is listed.
To confirm that the new version was in fact installed, check that the *Version History* tab looks like the screenshot below.

Now, we can inspect to see the changes in the definition files.
Looking at the `deployment.yaml` upstream file, we see the image path as we set it in the [Update Definition Files](#3-update-definition-files) section.

Because KOTS detects that it cannot pull this image anonymously, it tries to pull the image through the private registries that are configured. Looking at the `kustomization.yaml` downstream file, we can see that the image path is changed to use the Replicated proxy.

The install of the new version should have created a new pod. If we run `kubectl describe pod` on the new NGINX pod, we can confirm that the image was in fact pulled from the ECR repository.

* * *
## Related Topics
- [Connecting to an External Registry](packaging-private-images/)
- [Replicated Community Thread on AWS Roles and Permissions](https://help.replicated.com/community/t/what-are-the-minimal-aws-iam-permissions-needed-to-proxy-images-from-elastic-container-registry-ecr/267)
- [AWS ECR Managed Policies Documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecr_managed_policies.html)
---
# Step 1: Create an Application
To begin, install the Replicated CLI and create an application in the Replicated Vendor Portal.
An _application_ is an object that has its own customers, channels, releases, license fields, and more. A single team can have more than one application. It is common for teams to have multiple applications for the purpose of onboarding, testing, and iterating.
To create an application:
1. Install the Replicated CLI:
```
brew install replicatedhq/replicated/cli
```
For more installation options, see [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Authorize the Replicated CLI:
```
replicated login
```
In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
1. Create an application named `Gitea`:
```
replicated app create Gitea
```
1. Set the `REPLICATED_APP` environment variable to the application that you created. This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:
1. Get the slug for the application that you created:
```
replicated app ls
```
**Example output**:
```
ID NAME SLUG SCHEDULER
2WthxUIfGT13RlrsUx9HR7So8bR Gitea gitea-kite kots
```
In the example above, the application slug is `gitea-kite`.
:::note
The application _slug_ is a unique string that is generated based on the application name. You can use the application slug to interact with the application through the Replicated CLI and the Vendor API v3. The application name and slug are often different from one another because it is possible to create more than one application with the same name.
:::
1. Set the `REPLICATED_APP` environment variable to the application slug.
**Example:**
```
export REPLICATED_APP=gitea-kite
```
## Next Step
Add the Replicated SDK to the Helm chart and package the chart to an archive. See [Step 2: Package the Helm Chart](tutorial-embedded-cluster-package-chart).
## Related Topics
* [Create an Application](/vendor/vendor-portal-manage-app#create-an-application)
* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [replicated app create](/reference/replicated-cli-app-create)
---
# Step 4: Create an Embedded Cluster-Enabled Customer
After promoting the release, create a customer with the Replicated KOTS and Embedded Cluster entitlements so that you can install the release with Embedded Cluster. A _customer_ represents a single licensed user of your application.
To create a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `Example Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **License type**, select **Development**.
1. For **License options**, enable the following entitlements:
* **KOTS Install Enabled**
* **Embedded Cluster Enabled**
1. Click **Save Changes**.
## Next Step
Get the Embedded Cluster installation commands and install. See [Step 5: Install the Release on a VM](tutorial-embedded-cluster-install).
## Related Topics
* [About Customers](/vendor/licenses-about)
* [Creating and Managing Customers](/vendor/releases-creating-customer)
---
# Step 3: Add the Chart Archive to a Release
Next, add the Helm chart archive to a new release for the application in the Replicated Vendor Portal. The purpose of this step is to configure a release that supports installation with Replicated Embedded Cluster.
A _release_ represents a single version of your application and contains your application files. Each release is promoted to one or more _channels_. Channels provide a way to progress releases through the software development lifecycle: from internal testing, to sharing with early-adopters, and finally to making the release generally available.
To create a release:
1. In the `gitea` directory, create a subdirectory named `manifests`:
```
mkdir manifests
```
You will add the files required to support installation with Replicated KOTS to this subdirectory.
1. Move the Helm chart archive that you created to `manifests`:
```
mv gitea-1.0.6.tgz manifests
```
1. In `manifests`, create the YAML manifests required by KOTS:
```
cd manifests
```
```
touch gitea.yaml kots-app.yaml k8s-app.yaml embedded-cluster.yaml
```
1. In each of the files that you created, paste the corresponding YAML provided in the tabs below:
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The name and chartVersion listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The optionalValues field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This will allow Gitea to be accessed from the local machine after deployment.
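As an illustration of this pattern, a sketch of such a HelmChart custom resource is shown below. The `when` condition and the exact Helm value paths for the Bitnami Gitea service are assumptions for the purpose of the example; the tab above contains the YAML used by this tutorial.
```yaml
# gitea.yaml (sketch)
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    name: gitea
    chartVersion: 1.0.6
  optionalValues:
    # Applied only when the release is installed with Embedded Cluster
    - when: 'repl{{ eq Distribution "embedded-cluster" }}'
      values:
        service:
          type: NodePort
          nodePorts:
            http: "32000"
```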
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the gitea Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
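A minimal Embedded Cluster Config only needs to pin the Embedded Cluster version, as in the sketch below. The version string shown is a placeholder; pin it to a current Embedded Cluster release.
```yaml
# embedded-cluster.yaml (sketch)
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  # Placeholder version; replace with a real Embedded Cluster release
  version: 2.1.3+k8s-1.30
```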
[View a larger version of this image](/images/release-promote.png)
## Next Step
Create a customer with the Embedded Cluster entitlement so that you can install the release using Embedded Cluster. See [Step 4: Create an Embedded Cluster-Enabled Customer](tutorial-embedded-cluster-create-customer).
## Related Topics
* [About Channels and Releases](/vendor/releases-about)
* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
* [Embedded Cluster Config](/reference/embedded-config)
* [Setting Helm Values with KOTS](/vendor/helm-optional-value-keys)
---
# Step 5: Install the Release on a VM
Next, get the customer-specific Embedded Cluster installation commands and then install the release on a Linux VM.
To install the release with Embedded Cluster:
1. In the [Vendor Portal](https://vendor.replicated.com), go to **Customers**. Click on the name of the customer you created.
1. Click **Install instructions > Embedded cluster**.
[View a larger version of this image](/images/customer-install-instructions-dropdown.png)
The **Embedded cluster install instructions** dialog opens.
[View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
1. On the command line, SSH onto your Linux VM.
1. Run the first command in the **Embedded cluster install instructions** dialog to download the latest release.
1. Run the second command to extract the release.
1. Run the third command to install the release.
1. When prompted, enter a password for accessing the KOTS Admin Console.
The installation command takes a few minutes to complete.
1. When the installation command completes, go to the URL provided in the output to log in to the Admin Console.
**Example output:**
```bash
✔ Host files materialized
? Enter an Admin Console password: ********
? Confirm password: ********
✔ Node installation finished
✔ Storage is ready!
✔ Embedded Cluster Operator is ready!
✔ Admin Console is ready!
✔ Finished!
Visit the admin console to configure and install gitea-kite: http://104.155.145.60:30000
```
At this point, the cluster is provisioned and the KOTS Admin Console is deployed, but the application is not yet installed.
1. Bypass the browser TLS warning by clicking **Continue to Setup**.
1. Click **Advanced > Proceed**.
1. On the **HTTPS for the Gitea Admin Console** page, select **Self-signed** and click **Continue**.
1. On the login page, enter the Admin Console password that you created during installation and click **Log in**.
1. On the **Nodes** page, you can view details about the VM where you installed, including its node role, status, CPU, and memory. Users can also optionally add additional nodes on this page before deploying the application. Click **Continue**.
The Admin Console dashboard opens.
1. In the **Version** section, for version `0.1.0`, click **Deploy** then **Yes, Deploy**.
The application status changes from Missing to Unavailable while the `gitea` Deployment is being created.
1. After a few minutes when the application status is Ready, click **Open App** to view the Gitea application in a browser:

[View a larger version of this image](/images/gitea-ec-ready.png)
[View a larger version of this image](/images/gitea-app.png)
1. In another browser window, open the [Vendor Portal](https://vendor.replicated.com/) and go to **Customers**. Select the customer that you created.
On the **Reporting** page for the customer, you can see details about the customer's license and installed instances:

[View a larger version of this image](/images/gitea-customer-reporting-ec.png)
1. On the **Reporting** page, under **Instances**, click on the instance that you just installed to open the instance details page.
On the instance details page, you can see additional insights such as the version of Embedded Cluster that is running, instance status and uptime, and more:

[View a larger version of this image](/images/gitea-instance-insights-ec.png)
1. (Optional) Reset the node to remove the cluster and the application from the node. This is useful for iteration and development so that you can reset a machine and reuse it instead of having to procure another machine.
```bash
sudo ./APP_SLUG reset --reboot
```
Where `APP_SLUG` is the unique slug for the application that you created. You can find the application slug by running `replicated app ls` on the command line on your local machine.
## Summary
Congratulations! As part of this tutorial, you created a release in the Replicated Vendor Portal and installed the release with Replicated Embedded Cluster in a VM. To learn more about Embedded Cluster, see [Embedded Cluster Overview](embedded-overview).
## Related Topics
* [Embedded Cluster Overview](embedded-overview)
* [Customer Reporting](/vendor/customer-reporting)
* [Instance Details](/vendor/instance-insights-details)
* [Reset a Node](/vendor/embedded-using#reset-a-node)
---
# Step 2: Package the Gitea Helm Chart
Next, get the sample Gitea Helm chart from Bitnami. Add the Replicated SDK as a dependency of the chart, then package the chart into a `.tgz` archive. The purpose of this step is to prepare the Helm chart to be added to a release.
The Replicated SDK is a Helm chart that can be optionally added as a dependency of your application Helm chart. The SDK is installed as a small service running alongside your application, and provides an in-cluster API that you can use to embed Replicated features into your application. Additionally, the Replicated SDK provides access to insights and telemetry for instances of your application installed with the Helm CLI.
To add the Replicated SDK and package the Helm chart:
1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
```
helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
```
For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
1. Change to the new `gitea` directory that was created:
```
cd gitea
```
1. View the files in the directory:
```
ls
```
The directory contains the following files:
```
Chart.lock Chart.yaml README.md charts templates values.yaml
```
1. In the `Chart.yaml` file, add the Replicated SDK as a dependency:
```yaml
# Chart.yaml
dependencies:
- name: replicated
repository: oci://registry.replicated.com/library
version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
1. Update dependencies and package the Helm chart to a `.tgz` chart archive:
```bash
helm package . --dependency-update
```
:::note
If you see a `401 Unauthorized` error message, log out of the Replicated registry by running `helm registry logout registry.replicated.com` and then run `helm package . --dependency-update` again.
:::
## Next Step
Create a release using the Helm chart archive. See [Step 3: Add the Chart Archive to a Release](tutorial-embedded-cluster-create-release).
## Related Topics
* [Packaging a Helm Chart for a Release](/vendor/helm-install-release.md)
* [About the Replicated SDK](/vendor/replicated-sdk-overview)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
---
# Introduction and Setup
This topic provides a summary of the goals and outcomes for the tutorial and also lists the prerequisites to set up your environment before you begin.
## Summary
This tutorial introduces you to installing an application on a Linux virtual machine (VM) using Replicated Embedded Cluster. Embedded Cluster allows you to distribute a Kubernetes cluster and your application together as a single appliance, making it easy for enterprise users to install, update, and manage the application and the cluster in tandem.
In this tutorial, you use a sample application to learn how to:
* Add the Embedded Cluster Config to a release
* Use Embedded Cluster to install the application on a Linux VM
## Set Up the Environment
Before you begin, ensure that you have access to a VM that meets the requirements for Embedded Cluster:
* Linux operating system
* x86-64 architecture
* systemd
* At least 2GB of memory and 2 CPU cores
* The disk on the host must have a maximum P99 write latency of 10 ms. This supports etcd performance and stability. For more information about the disk write latency requirements for etcd, see [Disks](https://etcd.io/docs/latest/op-guide/hardware/#disks) in _Hardware recommendations_ and [What does the etcd warning “failed to send out heartbeat on time” mean?](https://etcd.io/docs/latest/faq/) in the etcd documentation.
* The data directory used by Embedded Cluster must have 40Gi or more of total space and be less than 80% full. By default, the data directory is `/var/lib/embedded-cluster`. The directory can be changed by passing the `--data-dir` flag with the Embedded Cluster `install` command. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
Note that in addition to the primary data directory, Embedded Cluster creates directories and files in the following locations:
- `/etc/cni`
- `/etc/k0s`
- `/opt/cni`
- `/opt/containerd`
- `/run/calico`
- `/run/containerd`
- `/run/k0s`
- `/sys/fs/cgroup/kubepods`
- `/sys/fs/cgroup/system.slice/containerd.service`
- `/sys/fs/cgroup/system.slice/k0scontroller.service`
- `/usr/libexec/k0s`
- `/var/lib/calico`
- `/var/lib/cni`
- `/var/lib/containers`
- `/var/lib/kubelet`
- `/var/log/calico`
- `/var/log/containers`
- `/var/log/embedded-cluster`
- `/var/log/pods`
- `/usr/local/bin/k0s`
* (Online installations only) Access to replicated.app and proxy.replicated.com or your custom domain for each
* Embedded Cluster is based on k0s, so all k0s system requirements and external runtime dependencies apply. See [System requirements](https://docs.k0sproject.io/stable/system-requirements/) and [External runtime dependencies](https://docs.k0sproject.io/stable/external-runtime-deps/) in the k0s documentation.
## Next Step
Install the Replicated CLI and create an application in the Replicated Vendor Portal. See [Step 1: Create an Application](/vendor/tutorial-embedded-cluster-create-app).
---
# Step 2: Create an Application
Next, install the Replicated CLI and then create an application.
An _application_ is an object that has its own customers, channels, releases, license fields, and more. A single team can have more than one application. It is common for teams to have multiple applications for the purpose of onboarding, testing, and iterating.
To create an application:
1. Install the Replicated CLI:
```
brew install replicatedhq/replicated/cli
```
For more installation options, see [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Authorize the Replicated CLI:
```
replicated login
```
In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
1. Create an application named `Gitea`:
```
replicated app create Gitea
```
1. Set the `REPLICATED_APP` environment variable to the application that you created. This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:
1. Get the slug for the application that you created:
```
replicated app ls
```
**Example output**:
```
ID NAME SLUG SCHEDULER
2WthxUIfGT13RlrsUx9HR7So8bR Gitea gitea-boxer kots
```
In the example above, the application slug is `gitea-boxer`.
:::note
The application _slug_ is a unique string that is generated based on the application name. You can use the application slug to interact with the application through the Replicated CLI and the Vendor API v3. The application name and slug are often different from one another because it is possible to create more than one application with the same name.
:::
1. Set the `REPLICATED_APP` environment variable to the application slug.
**Example:**
```
export REPLICATED_APP=gitea-boxer
```
## Next Step
Add the Replicated SDK to the Helm chart and package the chart to an archive. See [Step 3: Package the Helm Chart](tutorial-kots-helm-package-chart).
## Related Topics
* [Create an Application](/vendor/vendor-portal-manage-app#create-an-application)
* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [replicated app create](/reference/replicated-cli-app-create)
---
# Step 5: Create a KOTS-Enabled Customer
After promoting the release, create a customer with the KOTS entitlement so that you can install the release with KOTS. A _customer_ represents a single licensed user of your application.
To create a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `KOTS Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **License type**, select Development.
1. For **License options**, verify that **KOTS Install Enabled** is enabled. This is the entitlement that allows the customer to install with KOTS.
1. Click **Save Changes**.
1. On the **Manage customer** page for the customer, click **Download license**. You will use the license file to install with KOTS.

[View a larger version of this image](/images/customer-download-license.png)
## Next Step
Get the KOTS installation command and install. See [Step 6: Install the Release with KOTS](tutorial-kots-helm-install-kots).
## Related Topics
* [About Customers](/vendor/licenses-about)
* [Creating and Managing Customers](/vendor/releases-creating-customer)
---
# Step 4: Add the Chart Archive to a Release
Next, add the Helm chart archive to a new release for the application in the Replicated Vendor Portal. The purpose of this step is to configure a release that supports installation with both Replicated KOTS and with the Helm CLI.
A _release_ represents a single version of your application and contains your application files. Each release is promoted to one or more _channels_. Channels provide a way to progress releases through the software development lifecycle: from internal testing, to sharing with early-adopters, and finally to making the release generally available.
To create a release:
1. In the `gitea` directory, create a subdirectory named `manifests`:
```
mkdir manifests
```
You will add the files required to support installation with Replicated KOTS to this subdirectory.
1. Move the Helm chart archive that you created to `manifests`:
```
mv gitea-1.0.6.tgz manifests
```
1. In `manifests`, create the YAML manifests required by KOTS:
```
cd manifests
```
```
touch gitea.yaml kots-app.yaml k8s-app.yaml
```
1. In each of the files that you created, paste the corresponding YAML provided in the tabs below:
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The name and chartVersion listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
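For example, a minimal HelmChart custom resource for the Gitea chart archive in this release might look like the following sketch (the apiVersion may vary depending on which HelmChart schema version the release uses):
```yaml
# gitea.yaml (sketch)
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    # Must match the chart name and version in gitea-1.0.6.tgz
    name: gitea
    chartVersion: 1.0.6
```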
The KOTS Application custom resource enables features in the KOTS Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the gitea Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the KOTS Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
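For reference, the following is a minimal, illustrative sketch of the `gitea.yaml` HelmChart custom resource. It assumes the `kots.io/v1beta2` HelmChart API and a chart archive named `gitea` at version `1.0.6`; use the exact YAML provided in the tabs for this tutorial.
```yaml
# gitea.yaml (illustrative sketch; use the YAML provided in the tabs above)
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    # These values must match the name and version of the chart
    # archive in the release, in this case gitea-1.0.6.tgz
    name: gitea
    chartVersion: 1.0.6
```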
[View a larger version of this image](/images/release-promote.png)
## Next Step
Create a customer with the KOTS entitlement so that you can install the release in your cluster using Replicated KOTS. See [Step 5: Create a KOTS-Enabled Customer](tutorial-kots-helm-create-customer).
## Related Topics
* [About Channels and Releases](/vendor/releases-about)
* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
---
# Step 1: Get the Sample Chart and Test
To begin, get the sample Gitea Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install and access the application before adding the chart to a release in the Replicated Vendor Portal.
To get the sample Gitea Helm chart and test installation:
1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
```
helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
```
For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
1. Change to the new `gitea` directory that was created:
```
cd gitea
```
1. View the files in the directory:
```
ls
```
The directory contains the following files:
```
Chart.lock Chart.yaml README.md charts templates values.yaml
```
1. Install the Gitea chart in your cluster:
```
helm install gitea . --namespace gitea --create-namespace
```
To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/gitea/README.md#installing-the-chart) in the `bitnami/gitea` repository.
When the chart is installed, the following output is displayed:
```
NAME: gitea
LAST DEPLOYED: Tue Oct 24 12:44:55 2023
NAMESPACE: gitea
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: gitea
CHART VERSION: 1.0.6
APP VERSION: 1.20.5
** Please be patient while the chart is being deployed **
1. Get the Gitea URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace gitea -w gitea'
export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Gitea URL: http://$SERVICE_IP/"
WARNING: You did not specify a Root URL for Gitea. The rendered URLs in Gitea may not show correctly. In order to set a root URL use the rootURL value.
2. Get your Gitea login credentials by running:
echo Username: bn_user
echo Password: $(kubectl get secret --namespace gitea gitea -o jsonpath="{.data.admin-password}" | base64 -d)
```
1. Watch the `gitea` LoadBalancer service until an external IP is available:
```
kubectl get svc gitea --namespace gitea --watch
```
1. When the external IP for the `gitea` LoadBalancer service is available, run the commands provided in the output of the installation command to get the Gitea URL:
```
export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Gitea URL: http://$SERVICE_IP/"
```
1. In a browser, go to the Gitea URL to confirm that you can see the welcome page for the application:
[View a larger version of this image](/images/gitea-app.png)
1. Uninstall the Helm chart:
```
helm uninstall gitea --namespace gitea
```
This command removes all the Kubernetes components associated with the chart and uninstalls the `gitea` release.
1. Delete the namespace:
```
kubectl delete namespace gitea
```
## Next Step
Log in to the Vendor Portal and create an application. See [Step 2: Create an Application](tutorial-kots-helm-create-app).
## Related Topics
* [Helm Install](https://helm.sh/docs/helm/helm_install/)
* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
* [Helm Create](https://helm.sh/docs/helm/helm_create/)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
* [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea)
---
# Step 7: Install the Release with the Helm CLI
Next, install the same release using the Helm CLI. All releases that contain one or more Helm charts can be installed with the Helm CLI.
All Helm charts included in a release are automatically pushed to the Replicated registry when the release is promoted to a channel. Helm CLI installations require that the customer has a valid email address to authenticate with the Replicated registry.
To install the release with the Helm CLI:
1. Create a new customer to test the Helm CLI installation:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `Helm Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **Customer email**, enter the email address for the customer. The customer email address is required to install the application with the Helm CLI. This email address is never used to send emails to customers.
1. For **License type**, select Trial.
1. (Optional) For **License options**, _disable_ the **KOTS Install Enabled** entitlement.
1. Click **Save Changes**.
1. On the **Manage customer** page for the new customer, click **Helm install instructions**.

[View a larger version of this image](/images/tutorial-gitea-helm-customer-install-button.png)
You will use the instructions provided in the **Helm install instructions** dialog to install the chart.
1. Before you run the first command in the **Helm install instructions** dialog, create a `gitea` namespace for the installation:
```
kubectl create namespace gitea
```
1. Update the current kubectl context to target the new `gitea` namespace. This ensures that the chart is installed in the `gitea` namespace without requiring you to set the `--namespace` flag with the `helm install` command:
```
kubectl config set-context --namespace=gitea --current
```
1. Run the commands provided in the **Helm install instructions** dialog to log in to the registry and install the Helm chart.
[View a larger version of this image](/images/tutorial-gitea-helm-install-instructions.png)
:::note
You can ignore the **No preflight checks found** warning for the purpose of this tutorial. This warning appears because there are no specifications for preflight checks in the Helm chart archive.
:::
1. After the installation command completes, you can see that both the `gitea` Deployment and the Replicated SDK `replicated` Deployment were created:
```
kubectl get deploy
```
**Example output:**
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
gitea        0/1     1            0           35s
replicated   1/1     1            1           35s
```
1. Watch the `gitea` LoadBalancer service until an external IP is available:
```
kubectl get svc gitea --watch
```
1. After an external IP address is available for the `gitea` LoadBalancer service, follow the instructions in the output of the installation command to get the Gitea URL and then confirm that you can open the application in a browser.
1. In another browser window, open the [Vendor Portal](https://vendor.replicated.com/) and go to **Customers**. Select the customer that you created for the Helm CLI installation.
On the **Reporting** page for the customer, because the Replicated SDK was installed alongside the Gitea Helm chart, you can see details about the customer's license and installed instances:

[View a larger version of this image](/images/tutorial-gitea-helm-reporting.png)
1. On the **Reporting** page, under **Instances**, click on the instance that you just installed to open the instance details page.
On the instance details page, you can see additional insights such as the cluster where the application is installed, the version of the Replicated SDK running in the cluster, instance status and uptime, and more:

[View a larger version of this image](/images/tutorial-gitea-helm-instance.png)
1. Uninstall the Helm chart and the Replicated SDK:
```
helm uninstall gitea
```
1. Delete the `gitea` namespace:
```
kubectl delete namespace gitea
```
## Next Step
Congratulations! As part of this tutorial, you created a release in the Replicated Vendor Portal and installed the release with both KOTS and the Helm CLI.
## Related Topics
* [Installing with Helm](/vendor/install-with-helm)
* [About the Replicated SDK](/vendor/replicated-sdk-overview)
* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
* [Helm Delete](https://helm.sh/docs/helm/helm_delete/)
---
# Step 6: Install the Release with KOTS
Next, get the KOTS installation command from the Unstable channel in the Vendor Portal and then install the release using the customer license that you downloaded.
To install the release with KOTS:
1. In the [Vendor Portal](https://vendor.replicated.com), go to **Channels**. From the **Unstable** channel card, under **Install**, copy the **KOTS Install** command.

[View a larger version of this image](/images/helm-tutorial-unstable-kots-install-command.png)
1. On the command line, run the **KOTS Install** command that you copied:
```bash
curl https://kots.io/install | bash
kubectl kots install $REPLICATED_APP/unstable
```
This installs the latest version of the KOTS CLI and the Replicated KOTS Admin Console. The Admin Console provides a user interface where you can upload the customer license file and deploy the application.
For additional KOTS CLI installation options, including how to install without root access, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
:::note
To install the SDK with a Replicated installer, KOTS v1.104.0 or later and the SDK version 1.0.0-beta.12 or later are required. You can verify the version of KOTS installed with `kubectl kots version`. For Replicated Embedded Cluster installations, you can see the version of KOTS that is installed by your version of Embedded Cluster in the [Embedded Cluster Release Notes](/release-notes/rn-embedded-cluster).
:::
1. Complete the installation command prompts:
1. For `Enter the namespace to deploy to`, enter `gitea`.
1. For `Enter a new password to be used for the Admin Console`, provide a password to access the Admin Console.
When the Admin Console is ready, the command prints the URL where you can access the Admin Console. At this point, the KOTS CLI is installed and the Admin Console is running, but the application is not yet deployed.
**Example output:**
```bash
Enter the namespace to deploy to: gitea
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓
Enter a new password for the admin console (6+ characters): ••••••••
• Waiting for Admin Console to be ready ✓
• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console
```
1. With the port forward running, in a browser, go to `http://localhost:8800` to access the Admin Console.
1. On the login page, enter the password that you created.
1. On the license page, select the license file that you downloaded previously and click **Upload license**.
The Admin Console dashboard opens. The application status changes from Missing to Unavailable while the `gitea` Deployment is being created:

[View a larger version of this image](/images/tutorial-gitea-unavailable.png)
1. While waiting for the `gitea` Deployment to be created, do the following:
1. On the command line, press Ctrl+C to exit the port forward.
1. Watch for the `gitea` Deployment to become ready:
```
kubectl get deploy gitea --namespace gitea --watch
```
1. After the `gitea` Deployment is ready, confirm that an external IP for the `gitea` LoadBalancer service is available:
```
kubectl get svc gitea --namespace gitea
```
1. Start the port forward again to access the Admin Console:
```
kubectl kots admin-console --namespace gitea
```
1. Go to `http://localhost:8800` to open the Admin Console.
1. On the Admin Console dashboard, the application status is now displayed as Ready and you can click **Open App** to view the Gitea application in a browser:

[View a larger version of this image](/images/tutorial-gitea-ready.png)
1. In another browser window, open the [Vendor Portal](https://vendor.replicated.com/) and go to **Customers**. Select the customer that you created.
On the **Reporting** page for the customer, you can see details about the customer's license and installed instances:

[View a larger version of this image](/images/tutorial-gitea-customer-reporting.png)
1. On the **Reporting** page, under **Instances**, click on the instance that you just installed to open the instance details page.
On the instance details page, you can see additional insights such as the cluster where the application is installed, the version of KOTS running in the cluster, instance status and uptime, and more:

[View a larger version of this image](/images/tutorial-gitea-instance-insights.png)
1. Uninstall the Gitea application from your cluster so that you can install the same release again using the Helm CLI:
```bash
kubectl kots remove $REPLICATED_APP --namespace gitea --undeploy
```
**Example output**:
```
• Removing application gitea-boxer reference from Admin Console and deleting associated resources from the cluster ✓
• Application gitea-boxer has been removed
```
1. Remove the Admin Console from the cluster:
1. Delete the namespace where the Admin Console is installed:
```
kubectl delete namespace gitea
```
1. Delete the Admin Console ClusterRole and ClusterRoleBinding:
```
kubectl delete clusterrole kotsadm-role
```
```
kubectl delete clusterrolebinding kotsadm-rolebinding
```
## Next Step
Install the same release with the Helm CLI. See [Step 7: Install the Release with the Helm CLI](tutorial-kots-helm-install-helm).
## Related Topics
* [kots install](/reference/kots-cli-install/)
* [Installing the KOTS CLI](/reference/kots-cli-getting-started/)
* [Delete the Admin Console and Remove Applications](/enterprise/delete-admin-console)
* [Customer Reporting](customer-reporting)
* [Instance Details](instance-insights-details)
---
# Step 3: Package the Helm Chart
Next, add the Replicated SDK as a dependency of the Helm chart and then package the chart into a `.tgz` archive. The purpose of this step is to prepare the Helm chart to be added to a release.
The Replicated SDK is a Helm chart that can be optionally added as a dependency of your application Helm chart. The SDK is installed as a small service running alongside your application, and provides an in-cluster API that you can use to embed Replicated features into your application. Additionally, the Replicated SDK provides access to insights and telemetry for instances of your application installed with the Helm CLI.
To add the Replicated SDK and package the Helm chart:
1. In your local file system, go to the `gitea` directory that was created as part of [Step 1: Get the Sample Chart and Test](tutorial-kots-helm-get-chart).
1. In the `Chart.yaml` file, add the Replicated SDK as a dependency:
```yaml
# Chart.yaml
dependencies:
- name: replicated
  repository: oci://registry.replicated.com/library
  version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
1. Update dependencies and package the Helm chart to a `.tgz` chart archive:
```bash
helm package . --dependency-update
```
:::note
If you see a `401 Unauthorized` error message, log out of the Replicated registry by running `helm registry logout registry.replicated.com` and then run `helm package . --dependency-update` again.
:::
## Next Step
Create a release using the Helm chart archive. See [Step 4: Add the Chart Archive to a Release](tutorial-kots-helm-create-release).
## Related Topics
* [Packaging a Helm Chart for a Release](/vendor/helm-install-release)
* [About the Replicated SDK](/vendor/replicated-sdk-overview)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
---
# Introduction and Setup
This topic provides a summary of the goals and outcomes for the tutorial and also lists the prerequisites to set up your environment before you begin.
## Summary
This tutorial introduces you to the Replicated Vendor Portal, the Replicated CLI, the Replicated SDK, and the Replicated KOTS installer.
In this tutorial, you use a sample Helm chart to learn how to:
* Add the Replicated SDK to a Helm chart as a dependency
* Create a release with the Helm chart using the Replicated CLI
* Add custom resources to the release so that it supports installation with both the Helm CLI and Replicated KOTS
* Install the release in a cluster using KOTS and the KOTS Admin Console
* Install the same release using the Helm CLI
## Set Up the Environment
Before you begin, do the following to set up your environment:
* Ensure that you have kubectl access to a Kubernetes cluster. You can use any cloud provider or tool that you prefer to create a cluster, such as Google Kubernetes Engine (GKE), Amazon Web Services (AWS), or minikube.
For information about installing kubectl and configuring kubectl access to a cluster, see the following in the Kubernetes documentation:
* [Install Tools](https://kubernetes.io/docs/tasks/tools/)
* [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/)
* Install the Helm CLI. To install the Helm CLI using Homebrew, run:
```
brew install helm
```
For more information, including alternative installation options, see [Install Helm](https://helm.sh/docs/intro/install/) in the Helm documentation.
* Create a vendor account to access the Vendor Portal. See [Create a Vendor Account](/vendor/vendor-portal-creating-account).
:::note
If you do not yet have a Vendor Portal team to join, you can sign up for a trial account. By default, trial accounts do not include access to Replicated KOTS. To get access to KOTS with your trial account so that you can complete this and other tutorials, contact Replicated at contact@replicated.com.
:::
## Next Step
Get the sample Bitnami Helm chart and test installation with the Helm CLI. See [Step 1: Get the Sample Chart and Test](/vendor/tutorial-kots-helm-get-chart).
---
# Step 2: Add a Preflight Spec to the Chart
Create a preflight specification that fails if the cluster is running a version of Kubernetes earlier than 1.23.0, and add the specification to the Gitea chart as a Kubernetes Secret.
To add a preflight specification to the Gitea chart:
1. In the `gitea/templates` directory, create a `gitea-preflights.yaml` file:
```
touch templates/gitea-preflights.yaml
```
1. In the `gitea-preflights.yaml` file, add the following YAML to create a Kubernetes Secret with a preflight check specification:
```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    troubleshoot.sh/kind: preflight
  name: gitea-preflight-checks
stringData:
  preflight.yaml: |
    apiVersion: troubleshoot.sh/v1beta2
    kind: Preflight
    metadata:
      name: gitea-preflight-checks
    spec:
      analyzers:
        - clusterVersion:
            outcomes:
              - fail:
                  when: "< 1.23.0"
                  message: |-
                    Your cluster is running a version of Kubernetes that is not supported and your installation will not succeed. To continue, upgrade your cluster to Kubernetes 1.23.0 or later.
                  uri: https://www.kubernetes.io
              - pass:
                  message: Your cluster is running the required version of Kubernetes.
```
The YAML above defines a preflight check that fails if the target cluster is running a version of Kubernetes earlier than 1.23.0. The preflight check also includes a message to the user that describes the failure and lists the required Kubernetes version. The `troubleshoot.sh/kind: preflight` label is required to run preflight checks defined in Secrets.
1. In the Gitea `Chart.yaml` file, add the Replicated SDK as a dependency:
```yaml
# Chart.yaml
dependencies:
- name: replicated
  repository: oci://registry.replicated.com/library
  version: 1.5.1
```
For the latest version information for the Replicated SDK, see the [replicated-sdk repository](https://github.com/replicatedhq/replicated-sdk/releases) in GitHub.
The SDK is installed as a small service running alongside your application, and provides an in-cluster API that you can use to embed Replicated features into your application.
1. Update dependencies and package the chart to a `.tgz` chart archive:
```bash
helm package . --dependency-update
```
:::note
If you see a `401 Unauthorized` error message, log out of the Replicated registry by running `helm registry logout registry.replicated.com` and then run `helm package . --dependency-update` again.
:::
## Next Step
Add the chart archive to a release. See [Add the Chart Archive to a Release](tutorial-preflight-helm-create-release).
## Related Topics
* [Define Preflight Checks](/vendor/preflight-defining)
* [Package a Helm Chart for a Release](/vendor/helm-install-release)
---
# Step 4: Create a Customer
After promoting the release, create a customer so that you can run the preflight checks and install.
To create a customer:
1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
The **Create a new customer** page opens:

[View a larger version of this image](/images/create-customer.png)
1. For **Customer name**, enter a name for the customer. For example, `Preflight Customer`.
1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
1. For **Customer email**, enter the email address for the customer. The customer email address is required to install the application with the Helm CLI. This email address is never used to send emails to customers.
1. For **License type**, select Development.
1. Click **Save Changes**.
## Next Step
Use the Helm CLI to run the preflight checks you defined and install Gitea. See [Run Preflights with the Helm CLI](tutorial-preflight-helm-install).
## Related Topics
* [About Customers](/vendor/licenses-about)
* [Creating and Managing Customers](/vendor/releases-creating-customer)
---
# Step 3: Add the Chart Archive to a Release
Use the Replicated CLI to add the Gitea Helm chart archive to a release in the Replicated vendor platform.
To create a release:
1. Install the Replicated CLI:
```
brew install replicatedhq/replicated/cli
```
For more installation options, see [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Authorize the Replicated CLI:
```
replicated login
```
In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
1. Create an application named `Gitea`:
```
replicated app create Gitea
```
1. Get the slug for the application that you created:
```
replicated app ls
```
**Example output**:
```
ID                            NAME    SLUG          SCHEDULER
2WthxUIfGT13RlrsUx9HR7So8bR   Gitea   gitea-boxer   kots
```
In the example above, the application slug is `gitea-boxer`.
1. Set the `REPLICATED_APP` environment variable to the slug of the application that you created. This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:
**Example:**
```
export REPLICATED_APP=gitea-boxer
```
1. Go to the `gitea` directory.
1. Create a release with the Gitea chart archive:
```
replicated release create --chart=gitea-1.0.6.tgz
```
```bash
You are creating a release that will only be installable with the helm CLI.
For more information, see
https://docs.replicated.com/vendor/helm-install#about-helm-installations-with-replicated
• Reading chart from gitea-1.0.6.tgz ✓
• Creating Release ✓
• SEQUENCE: 1
```
1. Log in to the Vendor Portal and go to **Releases**.
The release that you created is listed under **All releases**.
1. Click **View YAML** to view the files in the release.
1. At the top of the page, click **Promote**.
[View a larger version of this image](/images/release-promote.png)
1. In the dialog, for **Which channels you would like to promote this release to?**, select **Unstable**. Unstable is a default channel that is intended for use with internal testing.
1. For **Version label**, open the dropdown and select **1.0.6**.
1. Click **Promote**.
## Next Step
Create a customer so that you can install the release in a development environment. See [Create a Customer](tutorial-preflight-helm-create-customer).
## Related Topics
* [About Channels and Releases](/vendor/releases-about)
* [Managing Releases with the CLI](/vendor/releases-creating-cli)
---
# Step 1: Get the Sample Chart and Test
To begin, get the sample Gitea Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install the application before adding preflight checks to the chart.
To get the sample Gitea Helm chart and test installation:
1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
```
helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
```
For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
1. Change to the new `gitea` directory that was created:
```
cd gitea
```
1. View the files in the directory:
```
ls
```
The directory contains the following files:
```
Chart.lock Chart.yaml README.md charts templates values.yaml
```
1. Install the Gitea chart in your cluster:
```
helm install gitea . --namespace gitea --create-namespace
```
To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/gitea/README.md#installing-the-chart) in the `bitnami/gitea` repository.
When the chart is installed, the following output is displayed:
```
NAME: gitea
LAST DEPLOYED: Tue Oct 24 12:44:55 2023
NAMESPACE: gitea
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: gitea
CHART VERSION: 1.0.6
APP VERSION: 1.20.5
** Please be patient while the chart is being deployed **
1. Get the Gitea URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace gitea -w gitea'
export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Gitea URL: http://$SERVICE_IP/"
WARNING: You did not specify a Root URL for Gitea. The rendered URLs in Gitea may not show correctly. In order to set a root URL use the rootURL value.
2. Get your Gitea login credentials by running:
echo Username: bn_user
echo Password: $(kubectl get secret --namespace gitea gitea -o jsonpath="{.data.admin-password}" | base64 -d)
```
1. Watch the `gitea` LoadBalancer service until an external IP is available:
```
kubectl get svc gitea --namespace gitea --watch
```
1. When the external IP for the `gitea` LoadBalancer service is available, run the commands provided in the output of the installation command to get the Gitea URL:
```
export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Gitea URL: http://$SERVICE_IP/"
```
:::note
Alternatively, you can run the following command to forward a local port to a port on the Gitea Pod:
```
POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=gitea -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$POD_NAME 8080:3000
```
:::
1. In a browser, go to the Gitea URL to confirm that you can see the welcome page for the application:
[View a larger version of this image](/images/gitea-app.png)
1. Uninstall the Helm chart:
```
helm uninstall gitea --namespace gitea
```
This command removes all the Kubernetes components associated with the chart and uninstalls the `gitea` release.
1. Delete the namespace:
```
kubectl delete namespace gitea
```
## Next Step
Define preflight checks and add them to the Gitea Helm chart. See [Add a Preflight Spec to the Chart](tutorial-preflight-helm-add-spec).
## Related Topics
* [Helm Install](https://helm.sh/docs/helm/helm_install/)
* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
* [Helm Package](https://helm.sh/docs/helm/helm_package/)
* [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea)
---
# Step 6: Run Preflights with KOTS
Create a KOTS-enabled release and then install Gitea with KOTS. The purpose of this step is to see how preflight checks automatically run in the KOTS Admin Console during installation.
To run preflight checks during installation with KOTS:
1. In the `gitea` directory, create a subdirectory named `manifests`:
```
mkdir manifests
```
You will add the files required to support installation with KOTS to this subdirectory.
1. Move the Helm chart archive to `manifests`:
```
mv gitea-1.0.6.tgz manifests
```
1. In `manifests`, create the YAML manifests required by KOTS:
```
cd manifests
```
```
touch gitea.yaml kots-app.yaml k8s-app.yaml
```
1. In each of the files that you created, paste the corresponding YAML provided in the tabs below:
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs. A minimal sketch of this file is shown after these descriptions.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the gitea Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
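For reference, the following is a minimal, illustrative sketch of the `kots-app.yaml` KOTS Application custom resource. It assumes the `kots.io/v1beta1` Application API, a status informer that targets a Deployment named `gitea`, and a port forward to a service named `gitea` on port `3000`; these values are assumptions for illustration only, so use the exact YAML provided in the tabs for this tutorial.
```yaml
# kots-app.yaml (illustrative sketch; use the YAML provided in the tabs above)
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  title: Gitea
  # Displays the status of the gitea Deployment on the Admin Console dashboard
  statusInformers:
    - deployment/gitea
  # Forwards a local port to the gitea service so that the Open App button
  # can open the application in a browser
  ports:
    - serviceName: "gitea"
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://gitea"
```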
[View a larger version of this image](/images/gitea-preflights-cli.png)
1. Run the fourth command listed under **Option 1: Install Gitea** to install the application:
```bash
helm install gitea oci://registry.replicated.com/$REPLICATED_APP/unstable/gitea
```
1. Uninstall and delete the namespace:
```bash
helm uninstall gitea --namespace gitea
```
```bash
kubectl delete namespace gitea
```
## Next Step
Install the application with KOTS to see how preflight checks are run from the KOTS Admin Console. See [Run Preflights with KOTS](tutorial-preflight-helm-install-kots).
## Related Topics
* [Running Preflight Checks](/vendor/preflight-running)
* [Installing with Helm](/vendor/install-with-helm)
---
# Introduction and Setup
This topic provides a summary of the goals and outcomes for the tutorial and also lists the prerequisites to set up your environment before you begin.
## Summary
This tutorial introduces you to preflight checks. The purpose of preflight checks is to provide clear feedback about any missing requirements or incompatibilities in the customer's cluster _before_ they install or upgrade an application. Thorough preflight checks provide increased confidence that an installation or upgrade will succeed and help prevent support escalations.
Preflight checks are part of the [Troubleshoot](https://troubleshoot.sh/) open source project, which is maintained by Replicated.
In this tutorial, you use a sample Helm chart to learn how to:
* Define custom preflight checks in a Kubernetes Secret in a Helm chart
* Package a Helm chart and add it to a release in the Replicated Vendor Portal
* Run preflight checks using the Helm CLI
* Run preflight checks in the Replicated KOTS Admin Console
## Set Up the Environment
Before you begin, do the following to set up your environment:
* Ensure that you have kubectl access to a Kubernetes cluster. You can use any cloud provider or tool that you prefer to create a cluster, such as Google Kubernetes Engine (GKE), Amazon Web Services (AWS), or minikube.
For information about installing kubectl and configuring kubectl access to a cluster, see the following in the Kubernetes documentation:
* [Install Tools](https://kubernetes.io/docs/tasks/tools/)
* [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/)
* Install the Helm CLI. To install the Helm CLI using Homebrew, run:
```
brew install helm
```
For more information, including alternative installation options, see [Install Helm](https://helm.sh/docs/intro/install/) in the Helm documentation.
* Create a vendor account to access the Vendor Portal. See [Create a Vendor Account](/vendor/vendor-portal-creating-account).
:::note
If you do not yet have a Vendor Portal team to join, you can sign up for a trial account. By default, trial accounts do not include access to Replicated KOTS. To get access to KOTS with your trial account so that you can complete this and other tutorials, contact Replicated at contact@replicated.com.
:::
## Next Step
Get the sample Bitnami Helm chart and test installation with the Helm CLI. See [Step 1: Get the Sample Chart and Test](/vendor/tutorial-preflight-helm-get-chart).
---
# Use a Registry Proxy for Helm Air Gap Installations
This topic describes how to connect the Replicated proxy registry to a Harbor or jFrog Artifactory instance to support pull-through image caching. It also includes information about how to set up replication rules in Harbor for image mirroring.
## Overview
For applications distributed with Replicated, the [Replicated proxy registry](/vendor/private-images-about) grants proxy, or _pull-through_, access to application images without exposing registry credentials to customers.
Users can optionally connect the Replicated proxy registry with their own [Harbor](https://goharbor.io) or [jFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation) instance to proxy and cache the images that are required for installation on demand. This can be particularly helpful in Helm installations in air-gapped environments because it allows users to pull and cache images from an internet-connected machine, then access the cached images during installation from a machine with limited or no outbound internet access.
In addition to the support for on-demand pull-through caching, connecting the Replicated proxy registry to a Harbor or Artifactory instance also has the following benefits:
* Registries like Harbor or Artifactory typically support access controls as well as scanning images for security vulnerabilities
* With Harbor, users can optionally set up replication rules for image mirroring, which can be used to improve data availability and reliability
## Limitation
Artifactory does not support mirroring or replication for Docker registries. If you need to set up image mirroring, use Harbor. See [Configure Image Mirroring in Harbor](#harbor-mirror) below.
## Connect the Replicated Proxy Registry to Harbor
[Harbor](https://goharbor.io) is a popular open-source container registry. Users can connect the Replicated proxy registry to Harbor in order to cache images on demand and set up pull-based replication rules to proactively mirror images. Connecting the Replicated proxy registry to Harbor also allows customers to use Harbor's security features.
### Use Harbor for Pull-Through Proxy Caching {#harbor-proxy-cache}
To connect the Replicated proxy registry to Harbor for pull-through proxy caching:
1. Log in to Harbor and create a new replication endpoint. This endpoint connects the Replicated proxy registry to the Harbor instance. For more information, see [Creating Replication Endpoints](https://goharbor.io/docs/2.11.0/administration/configuring-replication/create-replication-endpoints/) in the Harbor documentation.
1. Enter the following details for the endpoint:
* For the provider field, choose Docker Registry.
* For the URL field, enter `https://proxy.replicated.com` or the custom domain that is configured for the Replicated proxy registry. For more information about configuring custom domains in the Vendor Portal, see [Use Custom Domains](/vendor/custom-domains-using).
* For the access ID, enter the email address associated with the customer in the Vendor Portal.
* For the access secret, enter the customer's unique license ID. You can find the license ID in the Vendor Portal by going to **Customers > [Customer Name]**.
1. Verify your configuration by testing the connection and then save the endpoint.
1. After adding the Replicated proxy registry as a replication endpoint in Harbor, set up a proxy cache. This allows for pull-through image caching with Harbor. For more information, see [Configure Proxy Cache](https://goharbor.io/docs/2.11.0/administration/configure-proxy-cache/) in the Harbor documentation.
1. (Optional) Add a pull-based replication rule to support image mirroring. See [Configure Image Mirroring in Harbor](#harbor-mirror) below.
### Configure Image Mirroring in Harbor {#harbor-mirror}
To enable image mirroring with Harbor, users create a pull-based replication rule. This periodically (or when manually triggered) pulls images from the Replicated proxy registry to store them in Harbor.
The Replicated proxy registry exposes standard catalog and tag listing endpoints that are used by Harbor to support image mirroring:
* The catalog endpoint returns a list of repositories built from images of the last 10 releases.
* The tags listing endpoint lists the tags available in a given repository for those same releases.
When image mirroring is enabled, Harbor uses these endpoints to build a list of images to cache and then serve.
#### Limitations
Image mirroring with Harbor has the following limitations:
* Neither the catalog nor the tags listing endpoint exposed by the Replicated proxy registry respects pagination requests. Harbor requests 1000 items at a time.
* Only authenticated users can perform catalog calls or list tags. Authenticated users are those with an email address and license ID associated with a customer in the Vendor Portal.
#### Create a Pull-Based Replication Rule in Harbor for Image Mirroring
To configure image mirroring in Harbor:
1. Follow the steps in [Use Harbor for Pull-Through Proxy Caching](#harbor-proxy-cache) above to add the Replicated proxy registry to Harbor as a replication endpoint.
1. Create a **pull-based** replication rule in Harbor to mirror images proactively. For more information, see [Creating a replication rule](https://goharbor.io/docs/2.11.0/administration/configuring-replication/create-replication-rules/) in the Harbor documentation.
## Use Artifactory for Pull-Through Proxy Caching
[jFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation) supports pull-through caching for Docker registries.
For information about how to configure a pull-through cache with Artifactory, see [Remote Repository](https://jfrog.com/help/r/jfrog-artifactory-documentation/configure-a-remote-repository) in the Artifactory documentation.
---
# Application Settings Page
Each application has its own settings, which include the application name and application slug.
The following shows the **Application Settings** page, which you access by selecting **_Application Name_ > Settings**:
[View a larger version of this image](/images/application-settings.png)
The following describes each of the application settings:
- **Application name:** The application name is initially set when you first create the application in the Vendor Portal. You can change the name at any time so that it displays as a user-friendly name that your team can easily identify.
- **Application slug:** The application slug is used with the Replicated CLI and with some of the KOTS CLI commands. You can click on the link below the slug to toggle between the application ID number and the slug name. The application ID and application slug are unique identifiers that cannot be edited.
- **Service Account Tokens:** Provides a link to the **Service Accounts** page, where you can create or remove a service account. Service accounts are paired with API tokens and are used with the Vendor API to automate tasks. For more information, see [Use Vendor API Tokens](/reference/vendor-api-using).
- **Scheduler:** Displayed if the application has a KOTS entitlement.
- **Danger Zone:** Lets you delete the application, and all of the licenses and data associated with the application. The delete action cannot be undone.
---
# Create a Vendor Account
To get started with Replicated, you must create a Replicated vendor account. When you create your account, you are also prompted to create an application. To create additional applications in the future, log in to the Replicated Vendor Portal and select **Create new app** from the Applications drop-down list.
To create a vendor account:
1. Go to the [Vendor Portal](https://vendor.replicated.com), and select **Sign up**.
The sign up page opens.
2. Enter your email address or continue with Google authentication.
- If registering with an email, the Activate account page opens and you will receive an activation code in your email.
:::note
To resend the code, click **Resend it**.
:::
- Copy and paste the activation code into the text box and click **Activate**. Your account is now activated.
:::note
After your account is activated, you might have the option to accept a pending invitation, or to automatically join an existing team if the auto-join feature is enabled by your administrator. For more information about enabling the auto-join feature, see [Enable Users to Auto-join Your Team](https://docs.replicated.com/vendor/team-management#enable-users-to-auto-join-your-team).
:::
3. On the Create your team page, enter your first name, last name, and company name. Click **Continue** to complete the setup.
:::note
The company name you provide is used as your team name in Vendor Portal.
:::
The Create application page opens.
4. Enter a name for the application, such as `My-Application-Demo`. Click **Create application**.
The application is created and the Channels page opens.
:::important
Replicated recommends that you use a temporary name for the application at this time such as `My-Application-Demo` or `My-Application-Test`.
Only use an official name for your application when you have completed testing and are ready to distribute the application to your customers.
Replicated recommends that you use a temporary application name for testing because you are not able to restore or modify previously-used application names or application slugs in the Vendor Portal.
:::
## Next Step
Invite team members to collaborate with you in Vendor Portal. See [Invite Members](team-management#invite-members).
---
# Manage Applications
This topic provides information about managing applications, including how to create, delete, and retrieve the slug for applications in the Replicated Vendor Portal and with the Replicated CLI.
For information about creating and managing applications with the Vendor API v3, see the [apps](https://replicated-vendor-api.readme.io/reference/createapp) section in the Vendor API v3 documentation.
## Create an Application
Teams can create one or more applications. It is common to create multiple applications for testing purposes.
### Vendor Portal
To create a new application:
1. Log in to the [Vendor Portal](https://vendor.replicated.com/). If you do not have an account, see [Create a Vendor Account](/vendor/vendor-portal-creating-account).
1. In the top left of the page, open the application drop-down and click **Create new app...**.
[View a larger version of this image](/images/create-new-app.png)
1. On the **Create application** page, enter a name for the application.
[View a larger version of this image](/images/create-application-page.png)
:::important
If you intend to use the application for testing purposes, Replicated recommends that you use a temporary name such as `My Application Demo` or `My Application Test`.
You are not able to restore or modify previously-used application names or application slugs.
:::
1. Click **Create application**.
### Replicated CLI
To create an application with the Replicated CLI:
1. Install the Replicated CLI. See [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Run the following command:
```bash
replicated app create APP-NAME
```
Replace `APP-NAME` with the name that you want to use for the new application.
**Example**:
```bash
replicated app create cli-app
ID                            NAME      SLUG      SCHEDULER
1xy9t8G9CO0PRGzTwSwWFkMUjZO   cli-app   cli-app   kots
```
## Get the Application Slug {#slug}
Each application has a slug, which is used for interacting with the application using the Replicated CLI. The slug is automatically generated based on the application name and cannot be changed.
### Vendor Portal
To get an application slug in the Vendor Portal:
1. Log in to the [Vendor Portal](https://vendor.replicated.com/) and go to **_Application Name_ > Settings**.
1. Under **Application Slug**, copy the slug.
[View a larger version of this image](/images/application-settings.png)
### Replicated CLI
To get an application slug with the Replicated CLI:
1. Install the Replicated CLI. See [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Run the following command:
```bash
replicated app ls APP-NAME
```
Replace `APP-NAME` with the name of the target application. Or, exclude `APP-NAME` to list all applications in the team.
**Example:**
```bash
replicated app ls cli-app
ID                            NAME      SLUG      SCHEDULER
1xy9t8G9CO0PRGzTwSwWFkMUjZO   cli-app   cli-app   kots
```
1. Copy the value in the `SLUG` field.
## Delete an Application
When you delete an application, you also delete all licenses and data associated with the application. You can also optionally delete all images associated with the application from the Replicated registry. Deleting an application cannot be undone.
### Vendor Portal
To delete an application in the Vendor Portal:
1. Log in to the [Vendor Portal](https://vendor.replicated.com/) and go to **_Application Name_ > Settings**.
1. Under **Danger Zone**, click **Delete App**.
[View a larger version of this image](/images/application-settings.png)
1. In the **Are you sure you want to delete this app?** dialog, enter the application name. Optionally, enter your password if you want to delete all images associated with the application from the Replicated registry.
[View a larger version of this image](/images/delete-app-dialog.png)
1. Click **Delete app**.
### Replicated CLI
To delete an application with the Replicated CLI:
1. Install the Replicated CLI. See [Install the Replicated CLI](/reference/replicated-cli-installing).
1. Run the following command:
```bash
replicated app delete APP-NAME
```
Replace `APP-NAME` with the name of the target application.
1. When prompted, type `yes` to confirm that you want to delete the application.
**Example:**
```bash
replicated app delete deletion-example
• Fetching App ✓
ID               NAME               SLUG               SCHEDULER
1xyAIzrmbvq...   deletion-example   deletion-example   kots
Delete the above listed application? There is no undo: yes
• Deleting App ✓
```
---