Field Name | Description |
---|---|
Owner & Repository | Enter the owner and repository name where the commit will be made. |
Branch | Enter the branch name or leave the field blank to use the default branch. |
Path | Enter the folder name in the repository where the application deployment file will be committed. If you leave this field blank, Replicated KOTS creates a folder for you. However, the best practice is to manually create a folder in the repository that is named for the application and dedicated to the deployment file only. |
Field | Description |
---|---|
Hostname | Specify a registry domain that uses the Docker V2 protocol. |
Username | Specify the username for the domain. |
Password | Specify the password for the domain. |
Registry Namespace | Specify the registry namespace. The registry namespace is the path between the registry and the image name. For example, `my.registry.com/namespace/image:tag`. For air gap environments, this setting overwrites the registry namespace to which images were pushed when KOTS was installed. |
Disable Pushing Images to Registry | (Optional) Select this option to disable KOTS from pushing images. Make sure that an external process is configured to push images to your registry instead. Your images are still read from your registry when the application is deployed. |
Field | Description |
---|---|
Hostname | Specify a registry domain that uses the Docker V2 protocol. |
Username | Specify the username for the domain. |
Password | Specify the password for the domain. |
Registry Namespace | Specify the registry namespace. For air gap environments, this setting overwrites the registry namespace that you pushed images to when you installed KOTS. |
Domain | Description |
---|---|
`proxy.replicated.com` | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
`replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
`registry.replicated.com` * | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
Port | Protocol |
---|---|
6443 | TCP |
10250 | TCP |
9443 | TCP |
2380 | TCP |
4789 | UDP |
Domain | Description |
---|---|
Docker Hub | Some dependencies of KOTS are hosted as public images in Docker Hub. The required domains for this service are `index.docker.io`, `cdn.auth0.com`, `*.docker.io`, and `*.docker.com`. |
`proxy.replicated.com` * | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
`replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
`registry.replicated.com` ** | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
`kots.io` | Requests are made to this domain when installing the Replicated KOTS CLI. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. |
`github.com` | Requests are made to this domain when installing the Replicated KOTS CLI. For information about retrieving GitHub IP addresses, see [About GitHub's IP addresses](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses) in the GitHub documentation. |
Domain | Description |
---|---|
Docker Hub | Some dependencies of KOTS are hosted as public images in Docker Hub. The required domains for this service are `index.docker.io`, `cdn.auth0.com`, `*.docker.io`, and `*.docker.com`. |
`proxy.replicated.com` * | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
`replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
`registry.replicated.com` ** | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
`k8s.kurl.sh`, `s3.kurl.sh` | kURL installation scripts and artifacts are served from [kurl.sh](https://kurl.sh). An application identifier is sent in a URL path, and bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `k8s.kurl.sh`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L34-L39) in GitHub. The range of IP addresses for `s3.kurl.sh` is the same as for the `kurl.sh` domain. For the range of IP addresses for `kurl.sh`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L28-L31) in GitHub. |
`amazonaws.com` | `tar.gz` packages are downloaded from Amazon S3 during installations with kURL. For information about dynamically scraping the IP ranges to allowlist for accessing these packages, see [AWS IP address ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#aws-ip-download) in the AWS documentation. |
Status | Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet |
---|---|---|---|---|---|---|
Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. |
Resource Statuses | Aggregate Application Status |
---|---|
No status available for any resource | Missing |
One or more resources Unavailable | Unavailable |
One or more resources Degraded | Degraded |
One or more resources Updating | Updating |
All resources Ready | Ready |
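The aggregation rules above can be sketched as a small reduction: the aggregate status is the "worst" status reported by any resource, checked in order of precedence. The following Python is an illustration only (the lowercase status names and the `aggregate_status` helper are hypothetical, not part of KOTS):

```python
# Precedence order from the table above: the first status in this list
# that any resource reports becomes the aggregate application status.
PRECEDENCE = ["missing", "unavailable", "degraded", "updating", "ready"]

def aggregate_status(resource_statuses):
    """Reduce per-resource statuses to a single application status."""
    # No status available for one or more resources -> Missing.
    if not resource_statuses or any(s is None for s in resource_statuses):
        return "missing"
    for status in PRECEDENCE:
        if status in resource_statuses:
            return status
    return "missing"
```

For example, `aggregate_status(["ready", "degraded"])` returns `"degraded"`, and the aggregate is `"ready"` only when every resource is ready.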
Directory | Changes Persist? | Description |
---|---|---|
`upstream` | No, except for the `userdata` subdirectory | Contains the template functions, preflight checks, support bundle, config options, license, and so on. Also contains a `userdata` subdirectory. |
Directory | Changes Persist? | Description |
---|---|---|
`base` | No | After KOTS processes and renders the `upstream` content, only the deployable application files are placed in the `base` directory. Any non-deployable manifests, such as template functions, preflight checks, and configuration options, are removed. |
Subdirectory | Changes Persist? | Description |
---|---|---|
`midstream` | No | Contains KOTS-specific kustomizations. |
`downstream` | Yes | Contains user-defined kustomizations that are applied to the `midstream` directory. To add kustomizations, see Patch an Application. |
`midstream/charts` | No | Appears only when the application contains one or more Helm charts. Contains a subdirectory for each Helm chart. Each Helm chart has its own kustomizations because each chart is rendered and deployed separately from other charts and manifests. The subcharts of each Helm chart also have their own kustomizations and are rendered separately. However, these subcharts are included and deployed as part of the parent chart. |
`downstream/charts` | Yes | Appears only when the application contains one or more Helm charts. Contains a subdirectory for each Helm chart. Each Helm chart has its own kustomizations because each chart is rendered and deployed separately from other charts and manifests. The subcharts of each Helm chart also have their own kustomizations and are rendered separately. However, these subcharts are included and deployed as part of the parent chart. |
Directory | Changes Persist? | Description |
---|---|---|
`rendered` | No | Contains the final rendered application manifests that are deployed to the cluster. The rendered files are created when KOTS processes the `base`, `midstream`, and `downstream` directories. |
`rendered/charts` | No | Appears only when the application contains one or more Helm charts. Contains a subdirectory for each rendered Helm chart. Each Helm chart is deployed separately from other charts and manifests. The rendered subcharts of each Helm chart are included and deployed as part of the parent chart. |
What's New?
The 2.0 release brings improvements to architecture that increase the reliability and stability of Embedded Cluster.
Did You Know?
Control which installation methods are available for each customer from the **Install types** field in the customer's license.
Getting Started with Replicated
Onboarding workflows, tutorials, and labs to help you get started with Replicated quickly.
Vendor Platform
Create and manage your account and team.
Compatibility Matrix
Rapidly create Kubernetes clusters, including OpenShift.
Helm Charts
Distribute Helm charts with Replicated.
Replicated KOTS
A kubectl plugin and in-cluster Admin Console that installs applications in customer-controlled environments.
Embedded Cluster
Embed Kubernetes with your application to support installations on VMs or bare metal servers.
Insights and Telemetry
Get insights on installed instances of your application.
Channels and Releases
Manage application releases with the vendor platform.
Customer Licensing
Create, customize, and issue customer licenses.
Preflight Checks
Define and verify installation environment requirements.
Support Bundles
Gather information about customer environments for troubleshooting.
Developer Tools
APIs, CLIs, and an SDK for interacting with the Replicated platform.
Required Field | Allowed Values | Allowed Special Characters |
---|---|---|
Minute | 0 through 59 | , - * |
Hour | 0 through 23 | , - * |
Day-of-month | 1 through 31 | , - * ? |
Month | 1 through 12 or JAN through DEC | , - * |
Day-of-week | 1 through 7 or SUN through SAT | , - * ? |
Special Character | Description |
---|---|
Comma (,) | Specifies a list or multiple values, which can be consecutive or not. For example, 1,2,4 in the Day-of-week field signifies every Sunday, Monday, and Wednesday. |
Dash (-) | Specifies a contiguous range. For example, 4-6 in the Month field signifies April through June. |
Asterisk (*) | Specifies that all of the values for the field are used. For example, using * in the Month field means that all of the months are included in the schedule. |
Question mark (?) | Specifies that one or another value can be used. For example, enter 5 for Day-of-the-month and ? for Day-of-the-week to check for updates on the 5th day of the month, regardless of which day of the week it is. |
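The comma, dash, and asterisk semantics above can be illustrated by expanding a single cron field into the set of values it covers. This hypothetical Python helper is a sketch for explanation, not KOTS code:

```python
def expand_field(spec: str, lo: int, hi: int) -> set[int]:
    """Expand one numeric cron field (e.g. "1,2,4", "4-6", "*") into its values."""
    values = set()
    for part in spec.split(","):           # comma: a list of values or ranges
        if part == "*":                    # asterisk: every value in the field
            values.update(range(lo, hi + 1))
        elif "-" in part:                  # dash: a contiguous range
            start, end = part.split("-")
            values.update(range(int(start), int(end) + 1))
        else:
            values.add(int(part))
    return values
```

For example, `expand_field("4-6", 1, 12)` covers April through June, and `expand_field("*", 0, 23)` covers every hour.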
Schedule Value | Description | Equivalent Cron Expression |
---|---|---|
@yearly (or @annually) | Runs once a year, at midnight on January 1. | 0 0 1 1 * |
@monthly | Runs once a month, at midnight on the first of the month. | 0 0 1 * * |
@weekly | Runs once a week, at midnight on Sunday. | 0 0 * * 0 |
@daily (or @midnight) | Runs once a day, at midnight. | 0 0 * * * |
@hourly | Runs once an hour, at the beginning of the hour. | 0 * * * * |
@never | Disables the schedule completely. Only used by KOTS. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | Not applicable |
@default | Selects the default schedule option (every 4 hours). Begins when the Admin Console starts up. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | 0 */4 * * * |
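The standard shorthands map directly onto cron expressions, as in this illustrative Python table (`to_cron` is hypothetical; `@never` and `@default` are KOTS-specific and have no standard cron form, so they are omitted here):

```python
# Illustrative mapping of the standard schedule shorthands to their
# equivalent cron expressions, per the table above.
SHORTHANDS = {
    "@yearly":  "0 0 1 1 *",   # midnight, January 1
    "@monthly": "0 0 1 * *",   # midnight, first of the month
    "@weekly":  "0 0 * * 0",   # midnight on day 0 (Sunday in standard cron)
    "@daily":   "0 0 * * *",   # midnight, every day
    "@hourly":  "0 * * * *",   # start of every hour
}

def to_cron(schedule: str) -> str:
    """Resolve a shorthand to a cron expression; pass other values through."""
    return SHORTHANDS.get(schedule, schedule)
```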
Description | The application title. Used on the license upload and in various places in the Replicated Admin Console. |
---|---|
Example | ```yaml title: My Application ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description | The icon file for the application. Used on the license upload, in various places in the Admin Console, and in the Download Portal. The icon can be a remote URL or a Base64 encoded image. Base64 encoded images are required to display the image in air gap installations with no outbound internet access. |
---|---|
Example | ```yaml icon: https://support.io/img/logo.png ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description | The release notes for this version. These can also be set when promoting a release. |
---|---|
Example | ```yaml releaseNotes: Fixes a bug and adds a new feature. ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description |
Enable this flag to create a Rollback button on the Admin Console Version History page. If an application is guaranteed not to introduce backwards-incompatible versions, such as through database migrations, then rollbacks can be used safely. Rollback does not revert any state; rather, it recovers the YAML manifests that are applied to the cluster. |
---|---|
Example | ```yaml allowRollback: false ``` |
Default | false |
Supports Go templates? | No |
Supported for Embedded Cluster? | Embedded Cluster 1.17.0 and later supports partial rollbacks of the application version. Partial rollbacks are supported only when rolling back to a version where there is no change to the [Embedded Cluster Config](/reference/embedded-config) compared to the currently-installed version. For example, users can roll back to release version 1.0.0 after upgrading to 1.1.0 only if both 1.0.0 and 1.1.0 use the same Embedded Cluster Config. |
Description |
An array of additional namespaces, as strings, that Replicated KOTS creates on the cluster. For more information, see Defining Additional Namespaces. In addition to creating the additional namespaces, KOTS ensures that the application secret exists in them, and that this secret has access to pull the application images, including both images that are used and any images you add in the `additionalImages` field. For dynamically created namespaces, specify `"*"`. |
---|---|
Example | ```yaml additionalNamespaces: - "*" ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description | An array of strings that reference images to be included in air gap bundles and pushed to the local registry during installation. KOTS detects images from the PodSpecs in the application. Some applications, such as Operators, might need to include additional images that are not referenced until runtime. For more information, see Defining Additional Images. |
---|---|
Example | ```yaml additionalImages: - jenkins/jenkins:lts ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description |
Requires that minimal role-based access control (RBAC) be used for all customer installations. When set to `true`, KOTS role-based access is limited to the namespace where it is installed. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
---|---|
Example | ```yaml requireMinimalRBACPrivileges: false ``` |
Default | false |
Supports Go templates? | No |
Supported for Embedded Cluster? | No |
Description |
Allows minimal role-based access control (RBAC) to be used for customer installations. When set to `true`, minimal RBAC is supported but is not used by default. It is used only when the user opts in during installation. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
---|---|
Example | ```yaml supportMinimalRBACPrivileges: true ``` |
Default | false |
Supports Go templates? | No |
Supported for Embedded Cluster? | No |
Description |
Extra ports (in addition to the Admin Console port) that can be port forwarded. Each entry defines the `serviceName` and `servicePort` of the Service to forward, the `localPort` to forward to, and an `applicationUrl`. |
---|---|
Example | ```yaml ports: - serviceName: web servicePort: 9000 localPort: 9000 applicationUrl: "http://web" ``` |
Supports Go templates? | Go templates are supported in the `serviceName` and `applicationUrl` fields only. Using Go templates in the `localPort` or `servicePort` fields results in an installation error similar to the following: `json: cannot unmarshal string into Go struct field ApplicationPort.spec.ports.servicePort of type int`. |
Supported for Embedded Cluster? | Yes |
Description |
Resources to watch and report application status back to the user. When you include status informers, the application status displays on the Admin Console dashboard. For more information about including statusInformers, see Adding Resource Status Informers. |
---|---|
Example | ```yaml statusInformers: - deployment/my-web-svc - deployment/my-worker ``` The following example shows excluding a specific status informer based on a user-supplied value from the Admin Console Configuration screen: ```yaml statusInformers: - deployment/my-web-svc - '{{repl if ConfigOptionEquals "option" "value"}}deployment/my-worker{{repl else}}{{repl end}}' ``` |
Supports Go templates? | Yes |
Supported for Embedded Cluster? | Yes |
Description | Custom graphs to include on the Admin Console application dashboard. For more information about how to create custom graphs, see Adding Custom Graphs. |
---|---|
Example | ```yaml graphs: - title: User Signups query: 'sum(user_signup_events_total)' ``` |
Supports Go templates? | Yes |
Supported for Embedded Cluster? | No |
Description | The custom domain used for proxy.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
---|---|
Example | ```yaml proxyRegistryDomain: "proxy.yourcompany.com" ``` |
Supports Go templates? | No |
Description | The custom domain used for registry.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
---|---|
Example | ```yaml replicatedRegistryDomain: "registry.yourcompany.com" ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | Yes |
Description | The KOTS version that is targeted by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
---|---|
Example | ```yaml targetKotsVersion: "1.85.0" ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | No. Setting `targetKotsVersion` to a version earlier than the KOTS version included in the specified version of Embedded Cluster causes Embedded Cluster installations to fail with an error similar to: `Error: This version of App Name requires a different version of KOTS from what you currently have installed.` To avoid installation failures, do not use `targetKotsVersion` in releases that support installation with Embedded Cluster. |
Description | The minimum KOTS version that is required by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
---|---|
Example | ```yaml minKotsVersion: "1.71.0" ``` |
Supports Go templates? | No |
Supported for Embedded Cluster? | No. Setting `minKotsVersion` to a version later than the KOTS version included in the specified version of Embedded Cluster causes Embedded Cluster installations to fail with an error similar to: `Error: This version of App Name requires a different version of KOTS from what you currently have installed.` To avoid installation failures, do not use `minKotsVersion` in releases that support installation with Embedded Cluster. |
Field Name | Description |
---|---|
includedNamespaces | (Optional) Specifies an array of namespaces to include in the backup. If unspecified, all namespaces are included. |
excludedNamespaces | (Optional) Specifies an array of namespaces to exclude from the backup. |
orderedResources | (Optional) Specifies the order of the resources to collect during the backup process. This is a map that uses a key as the plural resource. Each resource name has the format NAMESPACE/OBJECTNAME. The object names are a comma-delimited list. For cluster resources, use OBJECTNAME only. |
ttl | Specifies the amount of time before this backup is eligible for garbage collection. Default: 720h (equivalent to 30 days). This value is configurable only by the customer. |
hooks | (Optional) Specifies the actions to perform at different times during a backup. The only supported hook is executing a command in a container in a pod (uses the pod exec API). Supports pre and post hooks. |
hooks.resources | (Optional) Specifies an array of hooks that are applied to specific resources. |
hooks.resources.name | Specifies the name of the hook. This value displays in the backup log. |
hooks.resources.includedNamespaces | (Optional) Specifies an array of namespaces that this hook applies to. If unspecified, the hook is applied to all namespaces. |
hooks.resources.excludedNamespaces | (Optional) Specifies an array of namespaces to which this hook does not apply. |
hooks.resources.includedResources | Specifies an array of pod resources to which this hook applies. |
hooks.resources.excludedResources | (Optional) Specifies an array of resources to which this hook does not apply. |
hooks.resources.labelSelector | (Optional) Specifies that this hook applies only to objects that match this label selector. |
hooks.resources.pre | Specifies an array of exec hooks to run before executing custom actions. |
hooks.resources.post | Specifies an array of exec hooks to run after executing custom actions. Supports the same arrays and fields as pre hooks. |
hooks.resources.[post/pre].exec | Specifies the type of the hook. exec is the only supported type. |
hooks.resources.[post/pre].exec.container | (Optional) Specifies the name of the container where the specified command will be executed. If unspecified, the first container in the pod is used. |
hooks.resources.[post/pre].exec.command | Specifies the command to execute. The format is an array. |
hooks.resources.[post/pre].exec.onError | (Optional) Specifies how to handle an error that might occur when executing the command. Valid values: Fail and Continue. Default: Fail |
hooks.resources.[post/pre].exec.timeout | (Optional) Specifies how many seconds to wait for the command to finish executing before the action times out. Default: 30s |
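For orientation, the fields above fit together in a Velero `Backup` spec roughly as follows. This is a hedged sketch: the namespace, label, container, and command values are hypothetical placeholders, not values from the documentation.

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
spec:
  includedNamespaces:
    - my-app-namespace          # hypothetical namespace
  ttl: 720h                     # eligible for garbage collection after 30 days
  hooks:
    resources:
      - name: my-db-hook        # displayed in the backup log
        includedNamespaces:
          - my-app-namespace
        includedResources:
          - pods                # hooks execute in pods
        labelSelector:
          matchLabels:
            app: my-app         # hypothetical label
        pre:
          - exec:               # exec is the only supported hook type
              container: postgres
              command: ["/bin/bash", "-c", "pg_dump -U postgres > /backup/db.sql"]
              onError: Fail
              timeout: 3m
```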
Description |
Items can be affixed so that they display in line with each other. Specify the `affix` field as `left` or `right`. |
---|---|
Required? | No |
Example | ```yaml groups: - name: example_settings title: My Example Config description: Configuration to serve as an example for creating your own. items: - name: username title: Username type: text required: true affix: left - name: password title: Password type: password required: true affix: right ``` |
Supports Go templates? | Yes |
Description |
Defines the default value for the config item. If the user does not provide a value for the item, then the `default` value is applied. |
---|---|
Required? | No |
Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "" default: change me ```  [View a larger version of this image](/images/config-default.png) |
Supports Go templates? | Yes. Every time the user makes a change to their configuration settings for the application, any template functions used in the `default` property are reevaluated. |
Description |
Displays a helpful message below the item title in the Admin Console. Markdown syntax is supported. For more information about Markdown syntax, see Basic writing and formatting syntax in the GitHub Docs. |
---|---|
Required? | No |
Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled help_text: Check to enable the HTTP listener type: bool ```  [View a larger version of this image](/images/config-help-text.png) |
Supports Go templates? | Yes |
Description |
Hidden items are not visible in the Admin Console. :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
---|---|
Required? | No |
Example | ```yaml - name: secret_key title: Secret Key type: password hidden: true value: "{{repl RandomString 40}}" ``` |
Supports Go templates? | No |
Description | A unique identifier for the config item. Item names must be unique both within the group and across all groups. |
---|---|
Required? | Yes |
Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled type: bool ``` |
Supports Go templates? | Yes |
Description |
Readonly items are displayed in the Admin Console and users cannot edit their value. :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
---|---|
Required? | No |
Example | ```yaml - name: key title: Key type: text value: "" default: change me - name: unique_key title: Unique Key type: text value: "{{repl RandomString 20}}" readonly: true ```  [View a larger version of this image](/images/config-readonly.png) |
Supports Go templates? | No |
Description | Displays a Recommended tag for the config item in the Admin Console. |
---|---|
Required? | No |
Example | ```yaml - name: recommended_field title: My recommended field type: bool default: "0" recommended: true ```  [View a larger version of this image](/images/config-recommended-item.png) |
Supports Go templates? | No |
Description | Displays a Required tag for the config item in the Admin Console. A required item prevents the application from starting until it has a value. |
---|---|
Required? | No |
Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "" default: change me required: true ```  [View a larger version of this image](/images/config-required-item.png) |
Supports Go templates? | No |
Description | The title of the config item that displays in the Admin Console. |
---|---|
Required? | Yes |
Example | ```yaml - name: http_settings title: HTTP Settings items: - name: http_enabled title: HTTP Enabled help_text: Check to enable the HTTP listener type: bool ```  [View a larger version of this image](/images/config-help-text.png) |
Supports Go templates? | Yes |
Description |
Each item has a `type` property that defines the type of the config item. The default type is `text`. For information about each type, see Item Types. |
---|---|
Required? | Yes |
Example | ```yaml - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: bool default: "0" ```  [View a larger version of this image](/images/config-screen-bool.png) |
Supports Go templates? | No |
Description |
Defines the value of the config item. Data that you add to `value` displays as the value of the config item in the Admin Console. If the config item is not readonly, then the user can overwrite the data that you add to `value`. |
---|---|
Required? | No |
Example | ```yaml - name: custom_key title: Set your secret key for your app description: Paste in your Custom Key items: - name: key title: Key type: text value: "{{repl RandomString 20}}" ```  [View a larger version of this image](/images/config-value-randomstring.png) |
Supports Go templates? | Yes :::note When you assign a template function that generates a value to a `value` property, you can use the `readonly` and `hidden` properties to define whether or not the generated value is ephemeral or persistent between changes to the configuration settings for the application. For more information, see [RandomString](template-functions-static-context#randomstring) in _Static Context_. ::: |
Description | The `when` property denies or allows the item based on a conditional statement. The `when` item property has the following requirements and limitations: * The `when` property accepts the following types of values: * Booleans * Strings that match "true", "True", "false", or "False" [KOTS template functions](/reference/template-functions-about) can be used to render these supported value types. * For the `when` property to evaluate to true, the values compared in the conditional statement must match exactly without quotes
|
---|---|
Required? | No |
Example |
Display the `database_host` and `database_password` items only when the user selects `external` for the `db_type` item:

```yaml
- name: database_settings_group
title: Database Settings
items:
- name: db_type
title: Database Type
type: radio
default: external
items:
- name: external
title: External
- name: embedded
title: Embedded DB
- name: database_host
title: Database Hostname
type: text
when: repl{{ (ConfigOptionEquals "db_type" "external")}}
- name: database_password
title: Database Password
type: password
when: repl{{ (ConfigOptionEquals "db_type" "external")}}
```
For additional examples, see Using Conditional Statements in Configuration Fields. |
Supports Go templates? | Yes |
Description | The `validation` property defines rules for validating the value of the config item. |
---|---|
Required? | No |
Example |
Validates the value of the config item and returns a message if the value does not satisfy the rule. |
Supports Go templates? | No |
Level | Description |
---|---|
error | The rule is enabled and shows as an error. |
warn | The rule is enabled and shows as a warning. |
info | The rule is enabled and shows an informational message. |
off | The rule is disabled. |
Field Name | Description |
---|---|
collectorName |
(Optional) A collector can specify the collectorName field. In some collectors, this field controls the path where result files are stored in the support bundle. |
exclude |
(Optional) (KOTS Only) A conditional based on the configuration available at runtime can be specified in the `exclude` field. This is useful for deployment techniques that allow templating for Replicated KOTS and the optional KOTS Helm component. When this value is `true`, the collector is not included. |
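To illustrate the two fields above, the following is a minimal sketch of a support bundle collector that sets `collectorName` and a templated `exclude` conditional. The `app=example` selector and the `enable_debug` config option are hypothetical:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: example
spec:
  collectors:
    - logs:
        # collectorName controls where the collected log files land in the bundle
        collectorName: app-logs
        selector:
          - app=example   # hypothetical Pod label
        # KOTS only: skip this collector unless the user enabled debug logging
        exclude: 'repl{{ not (ConfigOptionEquals "enable_debug" "1") }}'
```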
Field Name | Description |
---|---|
collectorName |
(Optional) An analyzer can specify the collectorName field. |
exclude |
(Optional) (KOTS Only) A condition based on the configuration available at runtime can be specified in the `exclude` field. This is useful for deployment techniques that allow templating for KOTS and the optional KOTS Helm component. When this value is `true`, the analyzer is not included. |
strict |
(Optional) (KOTS Only) An analyzer can be set to `strict: true` so that `fail` outcomes for that analyzer prevent the release from being deployed by KOTS until the vendor-specified requirements are met. When `exclude: true` is also specified, `exclude` overrides `strict` and the analyzer is not executed. |
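As a sketch of the `strict` field, the following analyzer blocks deployment on a `fail` outcome; the version threshold and messages are illustrative:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: example
spec:
  analyzers:
    - clusterVersion:
        checkName: kubernetes-version
        # KOTS only: a fail outcome for this analyzer blocks the deployment
        strict: true
        outcomes:
          - fail:
              when: "< 1.22.0"   # illustrative minimum version
              message: This application requires Kubernetes 1.22.0 or later.
          - pass:
              message: The Kubernetes version meets the minimum requirement.
```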
Field Name | Description |
---|---|
file |
(Optional) Specifies a single file for redaction. |
files |
(Optional) Specifies multiple files for redaction. |
`/my/test/glob/*` matches `/my/test/glob/file`, but does not match `/my/test/glob/subdir/file`.
### removals
The `removals` object is required and defines the redactions that occur. This object supports the following fields. At least one of these fields must be specified:
Field Name | Description |
---|---|
regex |
(Optional) Allows a regular expression to be applied for removal and redaction on lines that immediately follow a line that matches a filter. The `selector` field is used to identify lines, and the `redactor` field specifies a regular expression that runs on the line after any line identified by `selector`. If `selector` is empty, the redactor runs on every line. Using a selector is useful for removing values from pretty-printed JSON, where the value to be redacted is pretty-printed on the line beneath another value. Matches to the regex are removed or redacted, depending on the construction of the regex. Any portion of a match not contained within a capturing group is removed entirely. The contents of capturing groups tagged `mask` are masked with `***HIDDEN***`. Capturing groups tagged `drop` are dropped. |
values |
(Optional) Specifies values to replace with the string ***HIDDEN*** . |
yamlPath |
(Optional) Specifies a `.`-delimited path to the items to be redacted from a YAML document. If an item in the path is the literal string `*`, the redactor is applied to all options at that level. Files that fail to parse as YAML or do not contain any matches are not modified. Files that do contain matches are re-rendered, which removes comments and custom formatting. Multi-document YAML is not fully supported. Only the first document is checked for matches, and if a match is found, later documents are discarded entirely. |
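Putting the `removals` fields together, the following is a minimal sketch of a redactor spec. The file glob, key names, literal value, and YAML path are all hypothetical:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Redactor
metadata:
  name: example-redactor
spec:
  redactors:
    - name: redact-api-credentials
      fileSelector:
        files:
          - "data/app/*.json"   # hypothetical path glob
      removals:
        regex:
          # Mask the value on the line after a line containing "token":
          - selector: '"token":'
            redactor: '("value": ")(?P<mask>[^"]*)(")'
        values:
          - supersecretpassword   # hypothetical literal, replaced with ***HIDDEN***
        yamlPath:
          - "spec.datasource.*.password"   # hypothetical dot-delimited path
```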
Flag | Description |
---|---|
`--admin-console-password` |
Set the password for the Admin Console. The password must be at least six characters in length. If not set, the user is prompted to provide an Admin Console password. |
`--admin-console-port` |
Port on which to run the KOTS Admin Console. **Default**: By default, the Admin Console runs on port 30000. **Limitation:** It is not possible to change the port for the Admin Console during a restore with Embedded Cluster. For more information, see [Disaster Recovery for Embedded Cluster (Alpha)](/vendor/embedded-disaster-recovery). |
`--airgap-bundle` | The Embedded Cluster air gap bundle used for installations in air-gapped environments with no outbound internet access. For information about how to install in an air-gapped environment, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap). |
`--cidr` |
The range of IP addresses that can be assigned to Pods and Services, in CIDR notation. **Default:** By default, the CIDR block is `10.244.0.0/16`. **Requirement**: Embedded Cluster 1.16.0 or later. |
`--config-values` |
Path to the ConfigValues file for the application. The ConfigValues file can be used to pass the application configuration values from the command line during installation, such as when performing automated installations as part of CI/CD pipelines. For more information, see [Automating Installation with Embedded Cluster](/enterprise/installing-embedded-automation). Requirement: Embedded Cluster 1.18.0 and later. |
`--data-dir` |
The data directory used by Embedded Cluster. **Default**: `/var/lib/embedded-cluster` **Requirement**: Embedded Cluster 1.16.0 or later. **Limitations:**
|
`--http-proxy` |
Proxy server to use for HTTP. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
`--https-proxy` |
Proxy server to use for HTTPS. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
`--ignore-host-preflights` |
When `--ignore-host-preflights` is passed, the host preflight checks are still run, but the user is prompted and can choose to continue with the installation if preflight failures occur. If there are no failed preflights, no user prompt is displayed. Additionally, the Admin Console still runs any application-specific preflight checks before the application is deployed. For more information about the Embedded Cluster host preflight checks, see [About Host Preflight Checks](/vendor/embedded-using#about-host-preflight-checks) in _Using Embedded Cluster_. Ignoring host preflight checks is _not_ recommended for production installations. |
`-l, --license` |
Path to the license file |
`--local-artifact-mirror-port` |
Port on which to run the Local Artifact Mirror (LAM). **Default**: By default, the LAM runs on port 50000. |
`--network-interface` |
The name of the network interface to bind to for the Kubernetes API. A common use case of `--network-interface` is for multi-node clusters where node communication should happen on a particular network. **Default**: If a network interface is not provided, the first valid, non-local network interface is used. |
`--no-proxy` |
Comma-separated list of hosts for which not to use a proxy. For single-node installations, pass the IP address of the node where you are installing. For multi-node installations, when deploying the first node, pass the list of IP addresses for all nodes in the cluster (typically in CIDR notation). The network interface's subnet will automatically be added to the no-proxy list if the node's IP address is not already included. The following are never proxied:
To ensure your application's internal cluster communication is not proxied, use fully qualified domain names like `my-service.my-namespace.svc` or `my-service.my-namespace.svc.cluster.local`. **Requirement:** Proxy installations require Embedded Cluster 1.5.1 or later with Kubernetes 1.29 or later. **Limitations:** * If any of your [Helm extensions](/reference/embedded-config#extensions) make requests to the internet, the given charts need to be manually configured so that those requests are made to the user-supplied proxy server instead. Typically, this requires updating the Helm values to set HTTP proxy, HTTPS proxy, and no proxy. Note that this limitation applies only to network requests made by your Helm extensions. The proxy settings supplied to the install command are used to pull the containers required to run your Helm extensions. * Proxy settings cannot be changed after installation or during upgrade. |
`--private-ca` |
The path to trusted certificate authority (CA) certificates. Using the `--private-ca` flag ensures that the CA is trusted by the installation. KOTS writes the CA certificates provided with the `--private-ca` flag to a ConfigMap in the cluster. The KOTS [PrivateCACert](/reference/template-functions-static-context#privatecacert) template function returns the ConfigMap containing the private CA certificates supplied with the `--private-ca` flag. You can use this template function to mount the ConfigMap so your containers trust the CA too. |
`-y, --yes` |
Pass the `--yes` flag to provide an affirmative response to any user prompts for the command. For example, you can pass `--yes` with the `--ignore-host-preflights` flag to ignore host preflight checks during automated installations. **Requirement:** Embedded Cluster 1.21.0 and later. |
Flag | Type | Description
---|---|---|
--rootdir |
string | Root directory where the YAML will be written (default `${HOME}` or `%USERPROFILE%`) |
--namespace |
string | Target namespace for the Admin Console |
--shared-password |
string | Shared password to use when deploying the Admin Console |
--http-proxy |
string | Sets HTTP_PROXY environment variable in all KOTS Admin Console components |
--https-proxy |
string | Sets HTTPS_PROXY environment variable in all KOTS Admin Console components |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--no-proxy |
string | Sets NO_PROXY environment variable in all KOTS Admin Console components |
--private-ca-configmap |
string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
--with-minio |
bool | Set to true to include a local minio instance to be used for storage (default true) |
--minimal-rbac |
bool | Set to true to limit the generated RBAC resources to the namespace where KOTS is deployed (default `false`) |
--additional-namespaces |
string | Comma-delimited list of additional namespaces managed by KOTS outside of the namespace where it is deployed. Ignored unless `--minimal-rbac=true` is set |
--storage-class |
string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used |
Flag | Type | Description |
---|---|---|
--ensure-rbac |
bool | When false , KOTS does not attempt to create the RBAC resources necessary to manage applications. Default: true . If a role specification is needed, use the generate-manifests command. |
-h, --help |
Help for the command. | |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
--skip-rbac-check |
bool | When true , KOTS does not validate RBAC permissions. Default: false |
--strict-security-context |
bool |
Set to `true` to deploy KOTS Pods and containers with a strict security context. By default, KOTS Pods and containers are not deployed with a specific security context. Default: `false` |
--wait-duration |
string | Timeout to be used while waiting for individual components to be ready. Must be in Go duration format. Example: 10s, 2m |
--with-minio |
bool | When true , KOTS deploys a local MinIO instance for storage and attempts to change any MinIO-based snapshots (hostpath and NFS) to the local-volume-provider plugin. See local-volume-provider in GitHub. Default: true |
Flag | Type | Description
---|---|---|
--additional-annotations |
bool | Additional annotations to add to kotsadm pods. |
--additional-labels |
bool | Additional labels to add to kotsadm pods. |
--airgap |
bool | Set to true to run install in air gapped mode. Setting --airgap-bundle implies --airgap=true . Default: false . For more information, see Air Gap Installation in Existing Clusters with KOTS. |
--airgap-bundle |
string | Path to the application air gap bundle where application metadata will be loaded from. Setting --airgap-bundle implies --airgap=true . For more information, see Air Gap Installation in Existing Clusters with KOTS. |
--app-version-label |
string | The application version label to install. If not specified, the latest version is installed. |
--config-values |
string | Path to a manifest file containing configuration values. This manifest must be apiVersion: kots.io/v1beta1 and kind: ConfigValues . For more information, see Installing with the KOTS CLI. |
--copy-proxy-env |
bool | Copy proxy environment variables from current environment into all Admin Console components. Default: false |
--disable-image-push |
bool | Set to true to disable images from being pushed to private registry. Default: false |
--ensure-rbac |
bool | When false , KOTS does not attempt to create the RBAC resources necessary to manage applications. Default: true . If a role specification is needed, use the [generate-manifests](kots-cli-admin-console-generate-manifests) command. |
-h, --help |
Help for the command. | |
--http-proxy |
string | Sets HTTP_PROXY environment variable in all Admin Console components. |
--https-proxy |
string | Sets HTTPS_PROXY environment variable in all Admin Console components. |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--license-file |
string | Path to a license file. |
--local-path |
string | Specify a local path to test the behavior of rendering a Replicated application locally. Only supported on Replicated application types. |
--name |
string | Name of the application to use in the Admin Console. |
--no-port-forward |
bool | Set to true to disable automatic port forward. Default: false |
--no-proxy |
string | Sets NO_PROXY environment variable in all Admin Console components. |
--port |
string | Override the local port to access the Admin Console. Default: 8800 |
--private-ca-configmap |
string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment. |
--preflights-wait-duration |
string | Timeout to be used while waiting for preflights to complete. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 15m |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
--repo |
string | Repo URI to use when installing a Helm chart. |
--shared-password |
string | Shared password to use when deploying the Admin Console. |
--skip-compatibility-check |
bool | Set to true to skip compatibility checks between the current KOTS version and the application. Default: false |
--skip-preflights |
bool | Set to true to skip preflight checks. Default: false . If any strict preflight checks are configured, the --skip-preflights flag is not honored because strict preflight checks must run and contain no failures before the application is deployed. For more information, see [Defining Preflight Checks](/vendor/preflight-defining). |
--skip-rbac-check |
bool | Set to true to bypass RBAC check. Default: false |
--skip-registry-check |
bool | Set to true to skip the connectivity test and validation of the provided registry information. Default: false |
--strict-security-context |
bool |
Set to `true` to deploy KOTS Pods and containers with a strict security context. By default, KOTS Pods and containers are not deployed with a specific security context. Default: `false` |
--use-minimal-rbac |
bool | When set to true , KOTS RBAC permissions are limited to the namespace where it is installed. To use --use-minimal-rbac , the application must support namespace-scoped installations and the user must have the minimum RBAC permissions required by KOTS in the target namespace. For a complete list of requirements, see [Namespace-scoped RBAC Requirements](/enterprise/installing-general-requirements#namespace-scoped) in _Installation Requirements_. Default: false |
--wait-duration |
string | Timeout to be used while waiting for individual components to be ready. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 2m |
--with-minio |
bool | When set to true , KOTS deploys a local MinIO instance for storage and uses MinIO for host path and NFS snapshot storage. Default: true |
--storage-class |
string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used |
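As noted for the `--config-values` flag, the manifest must use `apiVersion: kots.io/v1beta1` and `kind: ConfigValues`. The following is a minimal sketch of such a file; the metadata name and item names are hypothetical:

```yaml
apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: example-app   # hypothetical application name
spec:
  values:
    db_type:
      value: external           # hypothetical config item
    database_host:
      value: db.example.com     # hypothetical config item
    database_password:
      value: examplepassword    # hypothetical config item
```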
Flag | Type | Description |
---|---|---|
--force |
bool |
Removes the reference even if the application has already been deployed. |
--undeploy |
bool |
Un-deploys the application by deleting all of its resources from the cluster. |
-n |
string |
The namespace where the target application is deployed. Use `default` for the default namespace. |
Flag | Type | Description
---|---|---|
-h, --help |
Help for the command. | |
`-n, --namespace` | string | The namespace of the Admin Console (required) |
`--hostpath` | string | A local host path on the node |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
`--force-reset` | bool | Bypass the reset prompt and force resetting the host path. (default `false`) |
`--output` | string | Output format. Supported values: `json` |
Flag | Type | Description
---|---|---|
-h, --help |
Help for the command. | |
`-n, --namespace` | string | The namespace of the Admin Console (required) |
`--nfs-server` | string | The hostname or IP address of the NFS server (required) |
`--nfs-path` | string | The path that is exported by the NFS server (required) |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
`--force-reset` | bool | Bypass the reset prompt and force resetting the nfs path. (default `false`) |
`--output` | string | Output format. Supported values: `json` |
Flag | Type | Description
---|---|---|
-h, --help |
Help for the command. | |
`-n, --namespace` | string | The namespace of the Admin Console (required) |
`--access-key-id` | string | The AWS access key ID to use for accessing the bucket (required) |
`--bucket` | string | Name of the object storage bucket where backups should be stored (required) |
`--endpoint` | string | The S3 endpoint (for example, http://some-other-s3-endpoint) (required) |
`--path` | string | Path to a subdirectory in the object store bucket |
`--region` | string | The region where the bucket exists (required) |
`--secret-access-key` | string | The AWS secret access key to use for accessing the bucket (required) |
`--cacert` | string | File containing a certificate bundle to use when verifying TLS connections to the object store |
`--skip-validation` | bool | Skip the validation of the S3 bucket (default `false`) |
--kotsadm-namespace |
string | Set to override the registry namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). Note: Replicated recommends that you use `--kotsadm-registry` instead. |
--kotsadm-registry |
string | Set to override the registry hostname and namespace of KOTS Admin Console images. Used for air gap installations. For more information, see [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped). |
--registry-password |
string | Password to use to authenticate with the application registry. Used for air gap installations. |
--registry-username |
string | Username to use to authenticate with the application registry. Used for air gap installations. |
Description | Notifies if any manifest file has allowPrivilegeEscalation set to true . |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: allowPrivilegeEscalation: true ``` |
Description | Requires an application icon. |
---|---|
Level | Warn |
Applies To |
Files with kind: Application and apiVersion: kots.io/v1beta1 .
|
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: icon: https://example.com/app-icon.png ``` |
Description |
Requires an Application custom resource manifest file. Accepted value for `apiVersion`: `kots.io/v1beta1`. Accepted value for `kind`: `Application`. |
---|---|
Level | Warn |
Example | Example of matching YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application ``` |
Description |
Requires statusInformers .
|
---|---|
Level | Warn |
Applies To |
Files with kind: Application and apiVersion: kots.io/v1beta1 .
|
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: statusInformers: - deployment/example-nginx ``` |
Description |
Enforces valid types for Config items. For more information, see Items in Config. |
---|---|
Level | Error |
Applies To | All files |
Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: bool # bool is a valid type ``` **Incorrect**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: group_title title: Group Title items: - name: http_enabled title: HTTP Enabled type: unknown_type # unknown_type is not a valid type ``` |
Description | Enforces that all ConfigOption items do not reference themselves. |
---|---|
Level | Error |
Applies To |
Files with kind: Config and apiVersion: kots.io/v1beta1 .
|
Example | **Incorrect**: ```yaml spec: groups: - name: example_settings items: - name: example_default_value type: text value: repl{{ ConfigOption "example_default_value" }} ``` |
Description |
Requires all ConfigOption items to be defined in the Config custom resource manifest file.
|
---|---|
Level | Warn |
Applies To | All files |
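For illustration, the following sketch would trigger this rule because the referenced item is not defined in the Config custom resource; the resource and item names are hypothetical:

```yaml
# Incorrect: "api_token" is not defined as an item in the Config spec
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # hypothetical resource
data:
  token: repl{{ ConfigOption "api_token" }}
```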
Description | Enforces that sub-templated ConfigOption items must be repeatable. |
---|---|
Level | Error |
Applies To | All files |
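As a sketch of the requirement, an item that is sub-templated into other manifests declares `repeatable: true`. The item and template names below are hypothetical, and the repeatable schema is abbreviated:

```yaml
spec:
  groups:
    - name: ports
      items:
        - name: service_port
          title: Service Port
          type: text
          # Required when the item is sub-templated into other manifests
          repeatable: true
          templates:
            - apiVersion: v1
              kind: Service
              name: example-service   # hypothetical target manifest
              namespace: default
```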
Description |
Requires ConfigOption items with any of the following names to have `type` set to `password`:
|
---|---|
Level | Warn |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: my_secret type: password ``` |
Description |
Enforces valid `when` values. For more information, see when in Config. |
---|---|
Level | Error |
Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1 . |
Description |
Enforces valid RE2 regular expressions pattern when regex validation is present. For more information, see Validation in Config. |
---|---|
Level | Error |
Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1 . |
Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" // valid RE2 regular expression message: "JWT is invalid" ``` **Incorrect**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file validation: regex: pattern: "^/path/([A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" // invalid RE2 regular expression message: "JWT is invalid" ``` |
Description |
Enforces valid item type when regex validation is present. Item type should be `text`, `textarea`, `password`, or `file`. For more information, see Validation in Config. |
---|---|
Level | Error |
Applies To | Files with kind: Config and apiVersion: kots.io/v1beta1 . |
Example | **Correct**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: file // valid item type validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" message: "JWT is invalid" ``` **Incorrect**: ```yaml spec: groups: - name: authentication title: Authentication description: Configure application authentication below. - name: jwt_file title: jwt_file type: bool // invalid item type validation: regex: pattern: "^[A-Za-z0-9-_]+.[A-Za-z0-9-_]+.[A-Za-z0-9-_]*$" message: "JWT is invalid" ``` |
Description |
Requires a Config custom resource manifest file. Accepted value for `apiVersion`: `kots.io/v1beta1`. Accepted value for `kind`: `Config`. |
---|---|
Level | Warn |
Example | Example of matching YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Config ``` |
Description | Notifies if any manifest file has a container image tag appended with `:latest`. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - image: nginx:latest ``` |
Description | Disallows any manifest file having a container image tag that includes LocalImageName . |
---|---|
Level | Error |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - image: LocalImageName ``` |
Description | Notifies if a spec.container has no resources.limits field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: memory: '32Mi' cpu: '100m' # note the lack of a limit field ``` |
Description | Notifies if a spec.container has no resources.requests field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: memory: '256Mi' cpu: '500m' # note the lack of a requests field ``` |
Description | Notifies if a manifest file has no resources field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx # note the lack of a resources field ``` |
Description |
Disallows using the deprecated kURL installer `apiVersion: kurl.sh/v1beta1`. Use `apiVersion: cluster.kurl.sh/v1beta1` instead.
|
---|---|
Level | Warn |
Applies To |
Files with kind: Installer and apiVersion: kurl.sh/v1beta1 .
|
Example | **Correct**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer ``` **Incorrect**: ```yaml apiVersion: kurl.sh/v1beta1 kind: Installer ``` |
Description |
Enforces unique |
---|---|
Level | Error |
Applies To |
Files with kind: HelmChart and apiVersion: kots.io/v1beta1 .
|
Description |
Disallows duplicate Replicated custom resources.
A release can only include one of each Replicated custom resource. This rule disallows inclusion of more than one file with:
|
---|---|
Level | Error |
Applies To | All files |
Description |
Notifies if any manifest file has a `namespace` specified in its `metadata`. Replicated strongly recommends not specifying a namespace to allow for flexibility when deploying into end user environments. For more information, see Managing Application Namespaces. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml metadata: name: spline-reticulator namespace: graphviz-pro ``` |
Description | Requires that a `*.tar.gz` chart archive is present for each HelmChart custom resource manifest file included in the release. |
---|---|
Level | Error |
Applies To |
Releases with a HelmChart custom resource manifest file containing kind: HelmChart and apiVersion: kots.io/v1beta1 .
|
Description | Enforces that a HelmChart custom resource manifest file with `kind: HelmChart` is present for each `*.tar.gz` chart archive included in the release. |
---|---|
Level | Error |
Applies To |
Releases with a *.tar.gz archive file present.
|
Description |
Enforces valid `releaseName` values in the HelmChart custom resource.
|
---|---|
Level | Warn |
Applies To |
Files with kind: HelmChart and apiVersion: kots.io/v1beta1 .
|
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart spec: chart: releaseName: samplechart-release-1 ``` |
Description |
Enforces valid Replicated kURL add-on versions. kURL add-ons included in the kURL installer must pin specific versions rather than `latest` or a partial version range such as `1.24.x`. |
---|---|
Level | Error |
Applies To |
Files with kind: Installer and apiVersion: cluster.kurl.sh/v1beta1 or kurl.sh/v1beta1.
|
Example | **Correct**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer spec: kubernetes: version: 1.24.5 ``` **Incorrect**: ```yaml apiVersion: cluster.kurl.sh/v1beta1 kind: Installer spec: kubernetes: version: 1.24.x ekco: version: latest ``` |
Description |
Requires that minKotsVersion is a valid Semantic Version. Accepts a Semantic Versioning compliant string. |
---|---|
Level | Error |
Applies To |
Files with kind: Application and apiVersion: kots.io/v1beta1.
|
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: minKotsVersion: 1.0.0 ``` |
Description | Enforces valid YAML after rendering the manifests using the Config spec. |
---|---|
Level | Error |
Applies To | YAML files |
Example | **Example Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: repl{{ ConfigOption `nginx_image`}} ``` **Correct Config**: ```yaml apiVersion: kots.io/v1beta1 kind: Config metadata: name: nginx-config spec: groups: - name: nginx-deployment-config title: nginx deployment config items: - name: nginx_image title: image type: text default: "nginx" ``` **Resulting Rendered Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: nginx ``` **Incorrect Config**: ```yaml apiVersion: kots.io/v1beta1 kind: Config metadata: name: nginx-config spec: groups: - name: nginx-deployment-config items: - name: nginx_image title: image type: text default: "***HIDDEN***" ``` **Resulting Lint Error**: ```json { "lintExpressions": [ { "rule": "invalid-rendered-yaml", "type": "error", "message": "yaml: did not find expected alphabetic or numeric character: image: ***HIDDEN***", "path": "nginx-chart.yaml", "positions": null } ], "isLintingComplete": false } ``` **Incorrectly Rendered Helm Chart**: ```yaml apiVersion: kots.io/v1beta1 kind: HelmChart metadata: name: nginx-chart spec: chart: name: nginx-chart chartVersion: 0.1.0 helmVersion: v3 useHelmInstall: true builder: {} values: image: ***HIDDEN*** ``` |
Description |
Requires that targetKotsVersion is a valid Semantic Version. Accepts a Semantic Versioning compliant string. |
---|---|
Level | Error |
Applies To |
Files with kind: Application and apiVersion: kots.io/v1beta1
|
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 kind: Application spec: targetKotsVersion: 1.0.0 ``` |
Description | Requires that the value of a property matches that property's expected type. |
---|---|
Level | Error |
Applies To | All files |
Example | **Correct**: ```yaml ports: - serviceName: "example" servicePort: 80 ``` **Incorrect**: ```yaml ports: - serviceName: "example" servicePort: "80" ``` |
Description | Enforces valid YAML. |
---|---|
Level | Error |
Applies To | YAML files |
Example | **Correct**: ```yaml spec: kubernetes: version: 1.24.5 ``` **Incorrect**: ```yaml spec: kubernetes: version 1.24.x ``` |
Description | Notifies if any manifest file may contain secrets. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml data: ENV_VAR_1: "y2X4hPiAKn0Pbo24/i5nlInNpvrL/HJhlSCueq9csamAN8g5y1QUjQnNL7btQ==" ``` |
Description | Requires the apiVersion: field in all files. |
---|---|
Level | Error |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml apiVersion: kots.io/v1beta1 ``` |
Description | Requires the kind: field in all files. |
---|---|
Level | Error |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml kind: Config ``` |
Description |
Requires that each The linter cannot evaluate If you configure status informers for Helm-managed resources, you can ignore |
---|---|
Level | Warn |
Applies To |
Compares |
Description |
Requires a Preflight custom resource manifest file with:
and one of the following:
|
---|---|
Level | Warn |
Example | Example of matching YAML for this rule: ```yaml apiVersion: troubleshoot.sh/v1beta2 kind: Preflight ``` |
Description | Notifies if any manifest file has privileged set to true. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: privileged: true ``` |
Description |
Enforces valid ConfigOption yamlPath values for repeatable items. For more information, see Repeatable Item Template Targets in Config. |
---|---|
Level | Error |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port yamlPath: 'spec.ports[0]' ``` |
Description |
Disallows repeating Config items with an undefined templates field. For more information, see Repeatable Item Template Targets in Config. |
---|---|
Level | Error |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port title: Service Port type: text repeatable: true templates: - apiVersion: v1 kind: Service name: my-service namespace: my-app yamlPath: 'spec.ports[0]' - apiVersion: v1 kind: Service name: my-service namespace: my-app ``` |
Description |
Disallows repeating Config items with an undefined valuesByGroup field. For more information, see Repeatable Items in Config. |
---|---|
Level | Error |
Applies To | All files |
Example | Example of correct YAML for this rule: ```yaml spec: groups: - name: ports items: - name: service_port title: Service Port type: text repeatable: true valuesByGroup: ports: port-default-1: "80" ``` |
Description | Notifies if any manifest file has replicas set to 1. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: replicas: 1 ``` |
Description | Notifies if a spec.container has no resources.limits.cpu field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: memory: '256Mi' # note the lack of a cpu field ``` |
Description | Notifies if a spec.container has no resources.limits.memory field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: limits: cpu: '500m' # note the lack of a memory field ``` |
Description | Notifies if a spec.container has no resources.requests.cpu field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: memory: '32Mi' # note the lack of a cpu field ``` |
Description | Notifies if a spec.container has no resources.requests.memory field. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: containers: - name: nginx resources: requests: cpu: '100m' # note the lack of a memory field ``` |
Description |
Requires a Troubleshoot manifest file. Accepted values for
Accepted values for
|
---|---|
Level | Warn |
Example | Example of matching YAML for this rule: ```yaml apiVersion: troubleshoot.sh/v1beta2 kind: SupportBundle ``` |
Description | Notifies if a spec.volumes has hostPath set to /var/run/docker.sock. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: volumes: - hostPath: path: /var/run/docker.sock ``` |
Description | Notifies if a spec.volumes has a hostPath defined. |
---|---|
Level | Info |
Applies To | All files |
Example | Example of matching YAML for this rule: ```yaml spec: volumes: - hostPath: path: /data ``` |
Behavior | Lookup function |
---|---|
kubectl get pod mypod -n mynamespace |
repl{{ Lookup "v1" "Pod" "mynamespace" "mypod" }} |
kubectl get pods -n mynamespace |
repl{{ Lookup "v1" "Pod" "mynamespace" "" }} |
kubectl get pods --all-namespaces |
repl{{ Lookup "v1" "Pod" "" "" }} |
kubectl get namespace mynamespace |
repl{{ Lookup "v1" "Namespace" "" "mynamespace" }} |
kubectl get namespaces |
repl{{ Lookup "v1" "Namespace" "" "" }} |
readonly | hidden | Outcome | Use Case |
---|---|---|---|
false | true | Persistent |
Set
|
true | false | Ephemeral |
Set
|
true | true | Ephemeral |
Set
|
false | false | Persistent |
Set
For example, set both |
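As a sketch of how these properties combine, a Config item that generates a value once and persists it between updates sets hidden: true (not shown in the UI) and readonly: false (value is saved). The group and item names below are hypothetical:

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: example-config
spec:
  groups:
    - name: secrets
      title: Secrets
      items:
        # Generated once at install time and persisted between updates:
        # hidden: true hides it from the UI; readonly: false keeps the value.
        - name: generated_password
          title: Generated Password
          type: password
          hidden: true
          readonly: false
          value: "{{repl RandomString 32}}"
```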
Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet | |
---|---|---|---|---|---|---|
Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. |
Resource Statuses | Aggregate Application Status |
---|---|
No status available for any resource | Missing |
One or more resources Unavailable | Unavailable |
One or more resources Degraded | Degraded |
One or more resources Updating | Updating |
All resources Ready | Ready |
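The resource statuses that feed this aggregate status come from status informers declared in the KOTS Application custom resource. A minimal sketch (the resource names are hypothetical):

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  # Each entry is <kind>/<name>; KOTS watches these resources
  # and aggregates their statuses as described above
  statusInformers:
    - deployment/my-app
    - service/my-app
```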
Based on the templates/svc.yaml and values.yaml files in the Gitea Helm chart, the following KOTS Application custom resource adds port 3000 to the port forward tunnel and maps local port 8888. Port 3000 is the container port of the Pod where the gitea service runs.
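A sketch of that ports configuration (the service name and application URL are assumptions based on the Gitea chart defaults):

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  ports:
    # Forward container port 3000 of the gitea service
    # to port 8888 on the local machine
    - serviceName: "gitea"
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://gitea"
```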
The Kubernetes Application custom resource lists the same URL as the `ports.applicationUrl` field in the KOTS Application custom resource (`"http://nginx"`). This adds a link to the port-forwarded service from the Admin Console dashboard. It also triggers KOTS to rewrite the URL to use the hostname in the browser and append the specified `localPort`. The label to be used for the link in the Admin Console is "Open App".
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The name and chartVersion listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
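For example, a HelmChart custom resource for a chart archive named samplechart-3.1.7.tgz might look like the following (the chart name, version, and release name are placeholders):

```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: samplechart
spec:
  # name and chartVersion must match a chart archive in the release
  chart:
    name: samplechart
    chartVersion: 3.1.7
  releaseName: samplechart-release-1
```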
The YAML below contains ClusterIP and NodePort specifications for a service named nginx. Each specification uses the kots.io/when annotation with the Replicated IsKurl template function to conditionally include the service based on the installation type (existing cluster or kURL cluster). For more information, see Conditionally Including or Excluding Resources and IsKurl.
As shown below, both the ClusterIP and NodePort nginx services are exposed on port 80.
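A sketch of the two conditional services described above (the selector labels are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Included only for existing cluster (non-kURL) installations
    kots.io/when: '{{repl not IsKurl }}'
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # Included only for kURL cluster installations
    kots.io/when: '{{repl IsKurl }}'
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
```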
A basic Deployment specification for the NGINX application.
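A minimal sketch of such a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```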
The KOTS Application custom resource below adds port 80 to the KOTS port forward tunnel and maps port 8888 on the local machine. The specification also includes applicationUrl: "http://nginx" so that a link to the service can be added to the Admin Console dashboard.
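A sketch of that configuration:

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: nginx
spec:
  ports:
    # Forward port 80 of the nginx service
    # to port 8888 on the local machine
    - serviceName: "nginx"
      servicePort: 80
      localPort: 8888
      applicationUrl: "http://nginx"
```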
The Kubernetes Application custom resource lists the same URL as the `ports.applicationUrl` field in the KOTS Application custom resource (`"http://nginx"`). This adds a link to the port-forwarded service on the Admin Console dashboard that uses the hostname in the browser and appends the specified `localPort`. The label to be used for the link in the Admin Console is "Open App".
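Under those assumptions, the corresponding Kubernetes Application custom resource might look like the following sketch:

```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: "nginx"
spec:
  descriptor:
    links:
      # description is the label shown for the dashboard link
      - description: Open App
        # url must match ports.applicationUrl in the
        # KOTS Application custom resource
        url: "http://nginx"
```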
GitHub Action | When to Use | Related Replicated CLI Commands |
---|---|---|
archive-channel |
In release workflows, a temporary channel is created to promote a release for testing. This action archives the temporary channel after tests complete. See Archive the temporary channel and customer in Recommended CI/CD Workflows. |
channel delete |
archive-customer |
In release workflows, a temporary customer is created so that a release can be installed for testing. This action archives the temporary customer after tests complete. See Archive the temporary channel and customer in Recommended CI/CD Workflows. |
N/A |
create-cluster |
In release workflows, use this action to create one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. |
cluster create |
create-release |
In release workflows, use this action to create a release to be installed and tested, and optionally to be promoted to a shared channel after tests complete. See Create a release and promote to a temporary channel in Recommended CI/CD Workflows. |
release create |
get-customer-instances |
In release workflows, use this action to create a matrix of clusters for running tests based on the Kubernetes distributions and versions of active instances of your application running in customer environments. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. |
N/A |
helm-install |
In development or release workflows, use this action to install a release using the Helm CLI in one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. |
N/A |
kots-install |
In development or release workflows, use this action to install a release with Replicated KOTS in one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. |
N/A |
prepare-cluster |
In development workflows, use this action to create a cluster and create a temporary customer of type test. See Prepare clusters, deploy, and test in Recommended CI/CD Workflows. |
cluster prepare |
promote-release |
In release workflows, use this action to promote a release to an internal or customer-facing channel (such as Unstable, Beta, or Stable) after tests pass. See Promote to a shared channel in Recommended CI/CD Workflows. |
release promote |
remove-cluster |
In development or release workflows, use this action to remove a cluster after running tests if no ttl is set for the cluster. See Prepare clusters, deploy, and test and Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. |
cluster rm |
report-compatibility-result | In development or release workflows, use this action to report the success or failure of tests that ran in clusters provisioned by the Compatibility Matrix. | release compatibility |
upgrade-cluster | In release workflows, use this action to test your application's compatibility with Kubernetes API resource version migrations after upgrading. | cluster upgrade |
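As an illustration, these actions are typically composed in a workflow job. The sketch below assumes the replicatedhq/replicated-actions repository and its input and output names (api-token, kubernetes-distribution, ttl, cluster-id), so verify them against the action documentation before relying on them:

```yaml
name: test-release
on: push
jobs:
  smoke-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Create a throwaway cluster with the Compatibility Matrix
      - id: cluster
        uses: replicatedhq/replicated-actions/create-cluster@v1
        with:
          api-token: ${{ secrets.REPLICATED_API_TOKEN }}
          kubernetes-distribution: k3s
          ttl: 10m
      # ... deploy the release and run tests here ...
      # Remove the cluster when tests finish, even on failure
      - uses: replicatedhq/replicated-actions/remove-cluster@v1
        if: always()
        with:
          api-token: ${{ secrets.REPLICATED_API_TOKEN }}
          cluster-id: ${{ steps.cluster.outputs.cluster-id }}
```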
Metric | Description | Target Trend |
---|---|---|
Instances on last three versions |
Percent of active instances that are running one of the latest three versions of your application. Formula: |
Increase towards 100% |
Unique versions |
Number of unique versions of your application running in active instances. Formula: |
Decrease towards less than or equal to three |
Median relative age |
The relative age of a single instance is the number of days between the date that the instance's version was promoted to the channel and the date when the latest available application version was promoted to the channel. Median relative age is the median value across all active instances for the selected time period and channel. Formula: |
Depends on release cadence. For vendors who ship every four to eight weeks, decrease the median relative age towards 60 days or fewer. |
Upgrades completed |
Total number of completed upgrades across active instances for the selected time period and channel. An upgrade is a single version change for an instance. An upgrade is considered complete when the instance deploys the new application version. The instance does not need to become available (as indicated by reaching a Ready state) after deploying the new version for the upgrade to be counted as complete. Formula: |
Increase compared to any previous period, unless you reduce your total number of live instances. |
HelmChart v1beta2 | HelmChart v1beta1 | Description |
---|---|---|
apiVersion: kots.io/v1beta2 |
apiVersion: kots.io/v1beta1 |
apiVersion is updated to kots.io/v1beta2 |
releaseName |
chart.releaseName |
releaseName is a top-level field under spec |
N/A | helmVersion |
helmVersion field is removed |
N/A | useHelmInstall |
useHelmInstall field is removed |
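The mapping above can be summarized with a before/after sketch (the chart name, version, and release name are placeholders):

```yaml
# HelmChart v1beta1
apiVersion: kots.io/v1beta1
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 3.1.7
    releaseName: samplechart-release-1
  helmVersion: v3
  useHelmInstall: true
---
# Equivalent HelmChart v1beta2: releaseName moves to the top level
# under spec; helmVersion and useHelmInstall are removed
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: samplechart
spec:
  chart:
    name: samplechart
    chartVersion: 3.1.7
  releaseName: samplechart-release-1
```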
Deployment | StatefulSet | Service | Ingress | PVC | DaemonSet | |
---|---|---|---|---|---|---|
Ready | Ready replicas equals desired replicas | Ready replicas equals desired replicas | All desired endpoints are ready, any load balancers have been assigned | All desired backend service endpoints are ready, any load balancers have been assigned | Claim is bound | Ready daemon pods equals desired scheduled daemon pods |
Updating | The deployed replicas are from a different revision | The deployed replicas are from a different revision | N/A | N/A | N/A | The deployed daemon pods are from a different revision |
Degraded | At least 1 replica is ready, but more are desired | At least 1 replica is ready, but more are desired | At least one endpoint is ready, but more are desired | At least one backend service endpoint is ready, but more are desired | N/A | At least one daemon pod is ready, but more are desired |
Unavailable | No replicas are ready | No replicas are ready | No endpoints are ready, no load balancer has been assigned | No backend service endpoints are ready, no load balancer has been assigned | Claim is pending or lost | No daemon pods are ready |
Missing | Missing is an initial deployment status indicating that informers have not reported their status because the application has just been deployed and the underlying resource has not been created yet. After the resource is created, the status changes. However, if a resource changes from another status to Missing, then the resource was either deleted or the informers failed to report a status. |
Resource Statuses | Aggregate Application Status |
---|---|
No status available for any resource | Missing |
One or more resources Unavailable | Unavailable |
One or more resources Degraded | Degraded |
One or more resources Updating | Updating |
All resources Ready | Ready |
Domain | Description |
---|---|
`replicated.app` * | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
`registry.replicated.com` | Some applications host private images in the Replicated registry at this domain. The on-prem Docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
`proxy.replicated.com` | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
Label | Type | Description |
---|---|---|
customer_id | string | Customer identifier |
customer_name | string | The customer name |
customer_created_date | timestamptz | The date the license was created |
customer_license_expiration_date | timestamptz | The expiration date of the license |
customer_channel_id | string | The channel id the customer is assigned |
customer_channel_name | string | The channel name the customer is assigned |
customer_app_id | string | App identifier |
customer_last_active | timestamptz | The date the customer was last active |
customer_type | string | One of prod, trial, dev, or community |
customer_status | string | The current status of the customer |
customer_is_airgap_enabled | boolean | Indicates whether the Air Gap feature is enabled for the customer |
customer_is_geoaxis_supported | boolean | Indicates whether the GeoAxis feature is enabled for the customer |
customer_is_gitops_supported | boolean | Indicates whether the KOTS Auto-GitOps feature is enabled for the customer |
customer_is_embedded_cluster_download_enabled | boolean | Indicates whether the Embedded Cluster feature is enabled for the customer |
customer_is_identity_service_supported | boolean | Indicates whether the Identity feature is enabled for the customer |
customer_is_snapshot_supported | boolean | Indicates whether the Snapshot feature is enabled for the customer |
customer_has_entitlements | boolean | Indicates the presence or absence of entitlements and customer_entitlement__* columns |
customer_entitlement__* | string/integer/boolean | The values of any custom license fields configured for the customer. For example, customer_entitlement__active-users. |
customer_created_by_id | string | The ID of the actor that created this customer: user ID or a hashed value of a token. |
customer_created_by_type | string | The type of the actor that created this customer: user or service-account. |
customer_created_by_description | string | The description of the actor that created this customer. Includes username or token name depending on actor type. |
customer_created_by_link | string | The link to the actor that created this customer. |
customer_created_by_timestamp | timestamptz | The date the customer was created by this actor. When available, matches the value in the customer_created_date column |
customer_updated_by_id | string | The ID of the actor that updated this customer: user ID or a hashed value of a token. |
customer_updated_by_type | string | The type of the actor that updated this customer: user or service-account. |
customer_updated_by_description | string | The description of the actor that updated this customer. Includes username or token name depending on actor type. |
customer_updated_by_link | string | The link to the actor that updated this customer. |
customer_updated_by_timestamp | timestamptz | The date the customer was updated by this actor. |
instance_id | string | Instance identifier |
instance_is_active | boolean | The instance has pinged within the last 24 hours |
instance_first_reported_at | timestamptz | The timestamp of the first recorded check-in for the instance. |
instance_last_reported_at | timestamptz | The timestamp of the last recorded check-in for the instance. |
instance_first_ready_at | timestamptz | The timestamp of when the cluster was considered ready |
instance_kots_version | string | The version of KOTS or the Replicated SDK that the instance is running. The version is displayed as a Semantic Versioning compliant string. |
instance_k8s_version | string | The version of Kubernetes running in the cluster. |
instance_is_airgap | boolean | Indicates whether the cluster is air gapped |
instance_is_kurl | boolean | The instance is installed in a Replicated kURL cluster (embedded cluster) |
instance_last_app_status | string | The instance's last reported app status |
instance_client | string | Indicates whether this instance is managed by KOTS or is a Helm CLI-deployed instance using the SDK. |
instance_kurl_node_count_total | integer | Total number of nodes in the cluster. Applies only to kURL clusters. |
instance_kurl_node_count_ready | integer | Number of nodes in the cluster that are in a healthy state and ready to run Pods. Applies only to kURL clusters. |
instance_cloud_provider | string | The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. |
instance_cloud_provider_region | string | The cloud provider region where the instance is running. For example, us-central1-b |
instance_app_version | string | The current application version |
instance_version_age | string | The age (in days) of the currently deployed release. This is relative to the latest available release on the channel. |
instance_is_gitops_enabled | boolean | Reflects whether the end user has enabled KOTS Auto-GitOps for deployments in their environment |
instance_gitops_provider | string | If KOTS Auto-GitOps is enabled, reflects the GitOps provider in use. For example, GitHub Enterprise. |
instance_is_skip_preflights | boolean | Indicates whether an end user elected to skip preflight check warnings or errors |
instance_preflight_status | string | The last reported preflight check status for the instance |
instance_k8s_distribution | string | The Kubernetes distribution of the cluster. |
instance_has_custom_metrics | boolean | Indicates the presence or absence of custom metrics and custom_metric__* columns |
instance_custom_metrics_reported_at | timestamptz | Timestamp of latest custom_metric |
custom_metric__* | string/integer/boolean | The values of any custom metrics that have been sent by the instance. For example, custom_metric__active_users |
instance_has_tags | boolean | Indicates the presence or absence of instance tags and instance_tag__* columns |
instance_tag__* | string/integer/boolean | The values of any instance tag that have been set by the vendor. For example, instance_tag__name |
Uptime State | Application Statuses |
---|---|
Up | Ready, Updating, or Degraded |
Down | Missing or Unavailable |
Uptime State | Application Statuses |
---|---|
Up | Ready or Updating |
Degraded | Degraded |
Down | Missing or Unavailable |
Label | Description |
---|---|
App Channel | The ID of the channel the application instance is assigned. |
App Version | The version label of the release that the instance is currently running. The version label is the version that you assigned to the release when promoting it to a channel. |
Label | Description |
---|---|
App Status |
A string that represents the status of the application. Possible values: Ready, Updating, Degraded, Unavailable, Missing. For applications that include the Replicated SDK, hover over the application status to view the statuses of the individual resources deployed by the application. For more information, see Enabling and Understanding Application Status. |
Label | Description |
---|---|
Cluster Type |
Indicates if the cluster was provisioned by kURL. Possible values:
For more information about kURL clusters, see Creating a kURL installer. |
Kubernetes Version | The version of Kubernetes running in the cluster. |
Kubernetes Distribution |
The Kubernetes distribution of the cluster. Possible values:
|
kURL Nodes Total |
Total number of nodes in the cluster. Note: Applies only to kURL clusters. |
kURL Nodes Ready |
Number of nodes in the cluster that are in a healthy state and ready to run Pods. Note: Applies only to kURL clusters. |
New kURL Installer |
The ID of the kURL installer specification that kURL used to provision the cluster. Indicates that a new Installer specification was added. An installer specification is a manifest file that has apiVersion: cluster.kurl.sh/v1beta1 and kind: Installer. For more information about installer specifications for kURL, see Creating a kURL installer. Note: Applies only to kURL clusters. |
Label | Description |
---|---|
Cloud Provider |
The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. Possible values:
|
Cloud Region |
The cloud provider region where the instance is running. For example, |
Label | Description |
---|---|
KOTS Version | The version of KOTS that the instance is running. KOTS version is displayed as a Semantic Versioning compliant string. |
Label | Description |
---|---|
Replicated SDK Version | The version of the Replicated SDK that the instance is running. SDK version is displayed as a Semantic Versioning compliant string. |
Label | Description |
---|---|
Versions Behind |
The number of versions between the version that the instance is currently running and the latest version available on the channel. Computed by the Vendor Portal each time it receives instance data. |
Install Type | Description | Requirements |
---|---|---|
Existing Cluster (Helm CLI) | Allows the customer to install with Helm in an existing cluster. The customer does not have access to the Replicated installers (Embedded Cluster, KOTS, and kURL). When the Helm CLI Air Gap Instructions (Helm CLI only) install option is also enabled, the Download Portal displays instructions on how to pull Helm installable images into a local repository. See Understanding Additional Install Options below. |
The latest release promoted to the channel where the customer is assigned must contain one or more Helm charts. It can also include Replicated custom resources, such as the Embedded Cluster Config custom resource, the KOTS HelmChart, Config, and Application custom resources, or the Troubleshoot Preflight and SupportBundle custom resources. Any other Kubernetes resources in the release (such as Kubernetes Deployments or Services) must include the `kots.io/installer-only` annotation. The `kots.io/installer-only` annotation indicates that the Kubernetes resource is used only by the Replicated installers (Embedded Cluster, KOTS, and kURL). Example: ```yaml apiVersion: v1 kind: Service metadata: name: my-service annotations: kots.io/installer-only: "true" ``` |
Existing Cluster (KOTS install) | Allows the customer to install with Replicated KOTS in an existing cluster. |
|
kURL Embedded Cluster (first generation product) |
Allows the customer to install with Replicated kURL on a VM or bare metal server. Note: For new installations, enable Replicated Embedded Cluster (current generation product) instead of Replicated kURL (first generation product). |
|
Embedded Cluster (current generation product) | Allows the customer to install with Replicated Embedded Cluster on a VM or bare metal server. |
|
Install Type | Description | Requirements |
---|---|---|
Helm CLI Air Gap Instructions (Helm CLI only) | When enabled, customers see instructions on the Download Portal for how to pull Helm installable images into their local repository. Helm CLI Air Gap Instructions is enabled by default when you select the Existing Cluster (Helm CLI) install type. For more information, see [Installing with Helm in Air Gap Environments](/vendor/helm-install-airgap) |
The Existing Cluster (Helm CLI) install type must be enabled |
Air Gap Installation Option (Replicated Installers only) | When enabled, new installations with this license have an option in their Download Portal to install from an air gap package or do a traditional online installation. |
At least one of the following Replicated install types must be enabled:
|
Field Name | Description |
`appSlug` | The unique application slug that the customer is associated with. This value never changes. |
`channelID` | The ID of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
`channelName` | The name of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
`licenseID`, `licenseId` | Unique ID for the installed license. This value never changes. |
`customerEmail` | The customer email address. |
`endpoint` | For applications installed with a Replicated installer (KOTS, kURL, Embedded Cluster), this is the endpoint that the KOTS Admin Console uses to synchronize the licenses and download updates. This is typically `https://replicated.app`. |
`entitlementValues` | Includes both the built-in `expires_at` field and any custom license fields. For more information about adding custom license fields, see [Managing Customer License Fields](licenses-adding-custom-fields). |
`expires_at` | Defines the expiration date for the license. The date is encoded in ISO 8601 format (`2026-01-23T00:00:00Z`) and is set to midnight UTC at the beginning of the calendar day (`00:00:00`) on the date selected. If a license does not expire, this field is missing. For information about the default behavior when a license expires, see [License Expiration Handling](licenses-about#expiration) in _About Customers_. |
`licenseSequence` | Every time a license is updated, its sequence number is incremented. This value represents the license sequence that the client currently has. |
`customerName` | The name of the customer. |
`signature` | The base64-encoded license signature. This value will change when the license is updated. |
`licenseType` | A string value that describes the type of the license, which is one of the following: `paid`, `trial`, `dev`, `single-tenant-vendor-managed`, or `community`. For more information about license types, see [Managing License Type](licenses-about-types). |
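For context, these fields appear under `spec` in the customer's downloaded license file. The sketch below is illustrative only: the field names are taken from the table above, the values are placeholders, and the exact set of fields varies by license:

```yaml
# Sketch only; values are placeholders, not a real license.
apiVersion: kots.io/v1beta1
kind: License
spec:
  appSlug: my-app
  channelID: "<channel-id>"
  channelName: Stable
  customerEmail: user@example.com
  customerName: Example Customer
  endpoint: https://replicated.app
  licenseID: "<license-id>"
  licenseSequence: 3
  licenseType: trial
  signature: "<base64-encoded-signature>"
```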
Field Name | Description |
---|---|
`isEmbeddedClusterDownloadEnabled` | If a license supports installation with Replicated Embedded Cluster, this field is set to `true` or missing. If Embedded Cluster installations are not supported, this field is `false`. This field requires that the vendor has the Embedded Cluster entitlement and that the release at the head of the channel includes an [Embedded Cluster Config](/reference/embedded-config) custom resource. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
`isHelmInstallEnabled` | If a license supports Helm installations, this field is set to `true` or missing. If Helm installations are not supported, this field is set to `false`. This field requires that the vendor packages the application as Helm charts and, optionally, Replicated custom resources. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
`isKotsInstallEnabled` | If a license supports installation with Replicated KOTS, this field is set to `true`. If KOTS installations are not supported, this field is either `false` or missing. This field requires that the vendor has the KOTS entitlement. |
`isKurlInstallEnabled` | If a license supports installation with Replicated kURL, this field is set to `true` or missing. If kURL installations are not supported, this field is `false`. This field requires that the vendor has the kURL entitlement and a promoted kURL installer spec. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
Field Name | Description |
---|---|
`isAirgapSupported` | If a license supports air gap installations with the Replicated installers (KOTS, kURL, Embedded Cluster), then this field is set to `true`. If Replicated installer air gap installations are not supported, this field is missing. When you enable this field for a license, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. |
`isHelmAirgapEnabled` | If a license supports Helm air gap installations, then this field is set to `true` or missing. If Helm air gap installations are not supported, this field is `false`. When you enable this feature, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
Field Name | Description |
---|---|
`isDisasterRecoverySupported` | If a license supports the Embedded Cluster disaster recovery feature, this field is set to `true`. If a license does not support disaster recovery for Embedded Cluster, this field is either missing or `false`. **Note**: Embedded Cluster Disaster Recovery is in Alpha. To get access to this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com). For more information, see [Disaster Recovery for Embedded Cluster](/vendor/embedded-disaster-recovery). |
`isGeoaxisSupported` | (kURL Only) If a license supports integration with GeoAxis, this field is set to `true`. If GeoAxis is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor has the GeoAxis entitlement. It also requires that the vendor has access to the Identity Service entitlement. |
`isGitOpsSupported` | :::important KOTS Auto-GitOps is a legacy feature and is **not recommended** for use. For enterprise customers that prefer software deployment processes that use CI/CD pipelines, Replicated recommends the [Helm CLI installation method](/vendor/install-with-helm), which is more commonly used in these environments. ::: If a license supports the KOTS Auto-GitOps workflow in the Admin Console, this field is set to `true`. If Auto-GitOps is not supported, this field is either `false` or missing. See [KOTS Auto-GitOps Workflow](/enterprise/gitops-workflow). |
`isIdentityServiceSupported` | If a license supports identity-service enablement for the Admin Console, this field is set to `true`. If identity service is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor have access to the Identity Service entitlement. |
`isSemverRequired` | If set to `true`, this field requires that the Admin Console orders releases according to Semantic Versioning. This field is controlled at the channel level. For more information about enabling Semantic Versioning on a channel, see [Semantic Versioning](releases-about#semantic-versioning) in _About Releases_. |
`isSnapshotSupported` | If a license supports the snapshots backup and restore feature, this field is set to `true`. If a license does not support snapshots, this field is either missing or `false`. **Note**: This field requires that the vendor have access to the Snapshots entitlement. |
`isSupportBundleUploadSupported` | If a license supports uploading a support bundle in the Admin Console, this field is set to `true`. If a license does not support uploading a support bundle, this field is either missing or `false`. |
Method | Description |
---|---|
Promote the installer to a channel | The installer is promoted to one or more channels. All releases on the channel use the kURL installer that is currently promoted to that channel. There can be only one active kURL installer on each channel at a time. The benefit of promoting an installer to one or more channels is that you can create a single installer without needing to add a separate installer for each release. However, because all the releases on the channel will use the same installer, problems can occur if all releases are not tested with the given installer. |
Include the installer in a release (Beta) | The installer is included as a manifest file in a release. This makes it easier to test the installer and release together. It also makes it easier to know which installer spec customers are using based on the application version that they have installed. |
Field | Description |
---|---|
Channel | Select the channel or channels where you want to promote the installer. |
Version label | Enter a version label for the installer. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as 123456689.dkr.ecr.us-east-1.amazonaws.com. |
Access Key ID | Enter the Access Key ID for a Service Account User that has pull access to the registry. See Setting up the Service Account User. |
Secret Access Key | Enter the Secret Access Key for the Service Account User. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as index.docker.io. |
Auth Type | Select the authentication type for a DockerHub account that has pull access to the registry. |
Username | Enter the username for the account. |
Password or Token | Enter the password or token for the account, depending on the authentication type you selected. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry. |
Username | Enter the username for an account that has pull access to the registry. |
GitHub Token | Enter the token for the account. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as us-east1-docker.pkg.dev. |
Auth Type | Select the authentication type for a Google Cloud Platform account that has pull access to the registry. |
Service Account JSON Key or Token | Enter the JSON Key from Google Cloud Platform assigned with the Artifact Registry Reader role, or the token for the account, depending on the authentication type you selected. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as gcr.io. |
Service Account JSON Key | Enter the JSON Key for a Service Account in Google Cloud Platform that is assigned the Storage Object Viewer role. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as quay.io. |
Username and Password | Enter the username and password for an account that has pull access to the registry. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as nexus.example.net. |
Username and Password | Enter the username and password for an account that has pull access to the registry. |
Field | Instructions |
---|---|
Hostname | Enter the host name for the registry, such as example.registry.com. |
Username and Password | Enter the username and password for an account that has pull access to the registry. |
Product Phase | Definition |
---|---|
Alpha | A product or feature that is exploratory or experimental. Typically, access to alpha features and their documentation is limited to customers providing early feedback. While most alpha features progress to beta and general availability (GA), some are deprecated based on assessment learnings. |
Beta | A product or feature that is typically production-ready, but has not met Replicated’s definition of GA for one or more reasons. Documentation for beta products and features is published on the Replicated Documentation site with a "(Beta)" label. Beta products or features follow the same build and test processes required for GA. Contact your Replicated account representative if you have questions about why a product or feature is beta. |
“GA” - General Availability | A product or feature that has been validated as both production-ready and value-additive by a percentage of Replicated customers. Products in the GA phase are typically those that are available for purchase from Replicated. |
“LA” - Limited Availability | A product has reached the Limited Availability phase when it is no longer available for new purchases from Replicated. Updates are primarily limited to security patches, critical bugs, and features that enable migration to GA products. |
“EOA” - End of Availability | A product has reached the End of Availability phase when it is no longer available for renewal purchase by existing customers. This date may coincide with the Limited Availability phase. This product is considered deprecated, and will move to End of Life after a determined support window. Product maintenance is limited to critical security issues only. |
“EOL” - End of Life | A product has reached its End of Life, and will no longer be supported, patched, or fixed by Replicated. Associated product documentation may no longer be available. The Replicated team will continue to engage to migrate end customers to GA product-based deployments of your application. |
Replicated Product | Product Phase | End of Availability | End of Life |
---|---|---|---|
Compatibility Matrix | GA | N/A | N/A |
Replicated SDK | Beta | N/A | N/A |
Replicated KOTS Installer | GA | N/A | N/A |
Replicated kURL Installer | GA | N/A | N/A |
Replicated Embedded Cluster Installer | GA | N/A | N/A |
Replicated Classic Native Installer | EOL | 2023-12-31* | 2024-12-31* |
Kubernetes Version | Embedded Cluster Versions | KOTS Versions | kURL Versions | End of Replicated Support |
---|---|---|---|---|
1.32 | N/A | N/A | N/A | 2026-02-28 |
1.31 | N/A | 1.117.0 and later | v2024.08.26-0 and later | 2025-10-28 |
1.30 | 1.16.0 and later | 1.109.1 and later | v2024.05.03-0 and later | 2025-06-28 |
1.29 | 1.0.0 and later | 1.105.2 and later | v2024.01.02-0 and later | 2025-02-28 |
The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how the `warn` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
This example uses Helm template functions to render the credentials and connection details for the MySQL server that were supplied by the user. Additionally, it uses Helm template functions to create a conditional statement so that the MySQL collector and analyzer are included in the preflight checks only when MySQL is deployed, as indicated by a `.Values.global.mysql.enabled` field evaluating to true.
For more information about using Helm template functions to access values from the values file, see Values Files.
```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    troubleshoot.sh/kind: preflight
  name: "{{ .Release.Name }}-preflight-config"
stringData:
  preflight.yaml: |
    apiVersion: troubleshoot.sh/v1beta2
    kind: Preflight
    metadata:
      name: preflight-sample
    spec:
      {{ if eq .Values.global.mysql.enabled true }}
      collectors:
        - mysql:
            collectorName: mysql
            uri: '{{ .Values.global.externalDatabase.user }}:{{ .Values.global.externalDatabase.password }}@tcp({{ .Values.global.externalDatabase.host }}:{{ .Values.global.externalDatabase.port }})/{{ .Values.global.externalDatabase.database }}?tls=false'
      {{ end }}
      analyzers:
        {{ if eq .Values.global.mysql.enabled true }}
        - mysql:
            checkName: Must be MySQL 8.x or later
            collectorName: mysql
            outcomes:
              - fail:
                  when: connected == false
                  message: Cannot connect to MySQL server
              - fail:
                  when: version < 8.x
                  message: The MySQL server must be at least version 8
              - pass:
                  message: The MySQL server is ready
        {{ end }}
```

This example uses KOTS template functions in the Config context to render the credentials and connection details for the MySQL server that were supplied by the user in the Replicated Admin Console Config page. Replicated recommends using a template function for the URI, as shown above, to avoid exposing sensitive information. For more information about template functions, see About Template Functions.
This example also uses an analyzer with `strict: true`, which prevents installation from continuing if the preflight check fails.
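As a reference, the sketch below shows where `strict: true` sits on an analyzer; the placement is based on the Troubleshoot analyzer schema, and the collector name and outcomes are placeholders:

```yaml
# Sketch only: strict is set per analyzer, alongside checkName and outcomes.
analyzers:
  - mysql:
      strict: true          # block the installation if this check fails
      checkName: Must be MySQL 8.x or later
      collectorName: mysql
      outcomes:
        - fail:
            when: connected == false
            message: Cannot connect to MySQL server
        - pass:
            message: The MySQL server is ready
```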
The following shows how a `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade when `strict: true` is set for the analyzer:
The following shows how a `warn` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how a `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how a `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how the `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade:
Flag: Value | Description |
---|---|
`hostPreflightIgnore: true` | Ignores host preflight failures and warnings. The installation proceeds regardless of host preflight outcomes. |
`hostPreflightEnforceWarnings: true` | Blocks an installation if the results include a warning. |
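As a sketch of how these flags might be set, the example below places them under the `kurl` field of a kURL Installer spec; the installer name and Kubernetes version are placeholders, and the exact field placement should be confirmed against the kURL add-on reference:

```yaml
# Sketch only; assumes the flags live under spec.kurl in the Installer spec.
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: my-installer        # placeholder name
spec:
  kubernetes:
    version: 1.29.x         # placeholder version
  kurl:
    hostPreflightIgnore: true
```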
Installation | Support for Image Tags | Support for Image Digests |
---|---|---|
Online | Supported by default | Supported by default |
Air Gap | Supported by default for Replicated KOTS installations | Supported for applications on KOTS v1.82.0 and later when the Enable new air gap bundle format toggle is enabled on the channel. For more information, see Using Image Digests in Air Gap Installations below. |
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The `optionalValues` field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This allows Gitea to be accessed from the local machine after deployment for the purpose of this quick start.
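The shape described above can be sketched as follows; the chart version and value paths are placeholders for the Gitea chart in your release, so treat this as illustrative rather than a verbatim spec:

```yaml
# Sketch only; chartVersion and value paths are placeholders.
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    name: gitea
    chartVersion: 1.0.6          # must match the chart archive in the release
  optionalValues:
    - when: 'repl{{ eq Distribution "embedded-cluster" }}'
      recursiveMerge: false
      values:
        service:
          type: NodePort
          nodePorts:
            http: "32000"
```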
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
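A minimal sketch of such a KOTS Application custom resource follows; the icon URL, service port, and local port are placeholder values for illustration:

```yaml
# Sketch only; icon URL and ports are placeholders.
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  title: Gitea
  icon: https://example.com/gitea-icon.png   # placeholder icon URL
  statusInformers:
    - deployment/gitea                       # drives the app status indicator
  ports:
    - serviceName: gitea
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://gitea"
```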
The Kubernetes SIG Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
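A minimal sketch of the SIG Application custom resource with an Open App link follows; the URL is a placeholder that KOTS rewrites using the port mapping from the KOTS Application custom resource:

```yaml
# Sketch only; the url value is a placeholder.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: gitea
spec:
  descriptor:
    links:
      - description: Open App      # rendered as a button on the dashboard
        url: "http://gitea"
```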
To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
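A minimal Embedded Cluster Config that only pins the installer version might look like the sketch below; the version string is a placeholder, not a recommendation:

```yaml
# Sketch only; the version value is a placeholder.
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  version: 2.1.3+k8s-1.30   # placeholder Embedded Cluster version
```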
Field | Description |
---|---|
Channel | Select the channel where you want to promote the release. If you are not sure which channel to use, use the default Unstable channel. |
Version label | Enter a version label. If you have one or more Helm charts in your release, the Vendor Portal automatically populates this field. You can change the version label to any value you choose. |
Requirements | Select the **Prevent this release from being skipped during upgrades** option to mark the release as required for KOTS installations. This option does not apply to installations with Helm. |
Release notes | Add release notes. The release notes support markdown and are shown to your customer. |
View the command for installing with Replicated KOTS in existing clusters.
View the commands for installing with Replicated Embedded Cluster or Replicated kURL on VMs or bare metal servers.
In the dropdown, choose **kURL** or **Embedded Cluster** to view the command for the target installer:
View the command for installing with the Helm CLI in an existing cluster.
View the customer-specific Helm CLI installation instructions. For more information about installing with the Helm CLI, see [Installing with Helm](/vendor/install-with-helm).
View the customer-specific Embedded Cluster installation instructions. For more information about installing with Embedded Cluster, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
By default, no volumes are included in the backup. If any pods mount a volume that should be backed up, you must configure the backup with an annotation listing the specific volumes to include in the backup.
Pod Annotation | Description |
---|---|
`backup.velero.io/backup-volumes` | A comma-separated list of volumes from the Pod to include in the backup. The primary data volume is not included in this field because data is exported using the backup hook. |
`pre.hook.backup.velero.io/command` | A stringified JSON array containing the command for the backup hook. This command is a `pg_dump` from the running database to the backup volume. |
`pre.hook.backup.velero.io/timeout` | A duration for the maximum time to let this script run. |
`post.hook.restore.velero.io/command` | A Velero exec restore hook that runs a script to check if the database file exists, and restores only if it exists. Then, the script deletes the file after the operation is complete. |
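Putting the annotations above together, a database Pod might be annotated as in the sketch below; the Pod name, database name, commands, and volume layout are all hypothetical examples, not the document's actual manifests:

```yaml
# Sketch only; names, commands, and volumes are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  annotations:
    backup.velero.io/backup-volumes: backup
    pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "pg_dump -U postgres mydb > /backup/db.sql"]'
    pre.hook.backup.velero.io/timeout: 3m
    post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f /backup/db.sql ] && psql -U postgres mydb < /backup/db.sql && rm /backup/db.sql"]'
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: backup          # only this volume is backed up
          mountPath: /backup
  volumes:
    - name: backup
      emptyDir: {}
```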
Restore Type | Description | Interface to Use |
---|---|---|
Full restore | Restores the Admin Console and the application. | KOTS CLI |
Partial restore | Restores the application only. | KOTS CLI or Admin Console |
Admin Console restore | Restores the Admin Console only. | KOTS CLI |
Vendor Portal Role | GitHub collab Role | Description |
---|---|---|
Admin | Admin | Members assigned the default Admin role in the Vendor Portal are assigned the GitHub Admin role in the collab repository. |
Support Engineer | Triage | Members assigned the custom Support Engineer role in the Vendor Portal are assigned the GitHub Triage role in the collab repository. For information about creating a custom Support Engineer policy in the Vendor Portal, see Support Engineer in Configuring RBAC Policies. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
Read Only | Read | Members assigned the default Read Only role in the Vendor Portal are assigned the GitHub Read role in the collab repository. |
Sales | N/A | Users assigned the custom Sales role in the Vendor Portal do not have access to the collab repository. For information about creating a custom Sales policy in the Vendor Portal, see Sales in Configuring RBAC Policies. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
Custom policies with `**/admin` under `allowed:` | Admin | By default, members assigned to a custom RBAC policy that specifies `**/admin` under `allowed:` are assigned the GitHub Admin role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
Custom policies without `**/admin` under `allowed:` | Read Only | By default, members assigned to any custom RBAC policies that do not specify `**/admin` under `allowed:` are assigned the GitHub Read role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
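To make the two custom-policy rows above concrete, the sketch below shows the general shape of a Vendor Portal custom RBAC policy document with `**/admin` under `allowed:`; the policy name and resource paths are assumptions for illustration, not a verbatim policy:

```json
{
  "v1": {
    "name": "Custom Admin Policy",
    "resources": {
      "allowed": ["**/admin"],
      "denied": []
    }
  }
}
```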
Type | Description |
---|---|
Supported Kubernetes Distributions | EKS (AWS S3) |
Cost | Flat fee of $0.50 per bucket. |
Options |
|
Data |
|
Field | Description |
---|---|
Kubernetes distribution | Select the Kubernetes distribution for the cluster. |
Version | Select the Kubernetes version for the cluster. The options available are specific to the distribution selected. |
Name (optional) | Enter an optional name for the cluster. |
Tags | Add one or more tags to the cluster as key-value pairs. |
Set TTL | Select the Time to Live (TTL) for the cluster. When the TTL expires, the cluster is automatically deleted. TTL can be adjusted after cluster creation with [cluster update ttl](/reference/replicated-cli-cluster-update-ttl). |
Instance type | Select the instance type to use for the nodes in the node group. The options available are specific to the distribution selected. |
Disk size | Select the disk size in GiB to use per node. |
Nodes | Select the number of nodes to provision in the node group. The options available are specific to the distribution selected. |
Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
---|---|---|---|
r1.small | 2 | 8 | $0.096 |
r1.medium | 4 | 16 | $0.192 |
r1.large | 8 | 32 | $0.384 |
r1.xlarge | 16 | 64 | $0.768 |
r1.2xlarge | 32 | 128 | $1.536 |
Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
---|---|---|---|
m6i.large | 2 | 8 | $0.115 |
m6i.xlarge | 4 | 16 | $0.230 |
m6i.2xlarge | 8 | 32 | $0.461 |
m6i.4xlarge | 16 | 64 | $0.922 |
m6i.8xlarge | 32 | 128 | $1.843 |
m7i.large | 2 | 8 | $0.121 |
m7i.xlarge | 4 | 16 | $0.242 |
m7i.2xlarge | 8 | 32 | $0.484 |
m7i.4xlarge | 16 | 64 | $0.968 |
m7i.8xlarge | 32 | 128 | $1.935 |
m5.large | 2 | 8 | $0.115 |
m5.xlarge | 4 | 16 | $0.230 |
m5.2xlarge | 8 | 32 | $0.461 |
m5.4xlarge | 16 | 64 | $0.922 |
m5.8xlarge | 32 | 128 | $1.843 |
m7g.large | 2 | 8 | $0.098 |
m7g.xlarge | 4 | 16 | $0.195 |
m7g.2xlarge | 8 | 32 | $0.392 |
m7g.4xlarge | 16 | 64 | $0.784 |
m7g.8xlarge | 32 | 128 | $1.567 |
c5.large | 2 | 4 | $0.102 |
c5.xlarge | 4 | 8 | $0.204 |
c5.2xlarge | 8 | 16 | $0.408 |
c5.4xlarge | 16 | 32 | $0.816 |
c5.9xlarge | 36 | 72 | $1.836 |
g4dn.xlarge | 4 | 16 | $0.631 |
g4dn.2xlarge | 8 | 32 | $0.902 |
g4dn.4xlarge | 16 | 64 | $1.445 |
g4dn.8xlarge | 32 | 128 | $2.611 |
g4dn.12xlarge | 48 | 192 | $4.964 |
g4dn.16xlarge | 64 | 256 | $5.222 |
Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
---|---|---|---|
n2-standard-2 | 2 | 8 | $0.117 |
n2-standard-4 | 4 | 16 | $0.233 |
n2-standard-8 | 8 | 32 | $0.466 |
n2-standard-16 | 16 | 64 | $0.932 |
n2-standard-32 | 32 | 128 | $1.865 |
t2a-standard-2 | 2 | 8 | $0.092 |
t2a-standard-4 | 4 | 16 | $0.185 |
t2a-standard-8 | 8 | 32 | $0.370 |
t2a-standard-16 | 16 | 64 | $0.739 |
t2a-standard-32 | 32 | 128 | $1.478 |
t2a-standard-48 | 48 | 192 | $2.218 |
e2-standard-2 | 2 | 8 | $0.081 |
e2-standard-4 | 4 | 16 | $0.161 |
e2-standard-8 | 8 | 32 | $0.322 |
e2-standard-16 | 16 | 64 | $0.643 |
e2-standard-32 | 32 | 128 | $1.287 |
n1-standard-1+nvidia-tesla-t4+1 | 1 | 3.75 | $0.321 |
n1-standard-1+nvidia-tesla-t4+2 | 1 | 3.75 | $0.585 |
n1-standard-1+nvidia-tesla-t4+4 | 1 | 3.75 | $1.113 |
n1-standard-2+nvidia-tesla-t4+1 | 2 | 7.50 | $0.378 |
n1-standard-2+nvidia-tesla-t4+2 | 2 | 7.50 | $0.642 |
n1-standard-2+nvidia-tesla-t4+4 | 2 | 7.50 | $1.170 |
n1-standard-4+nvidia-tesla-t4+1 | 4 | 15 | $0.492 |
n1-standard-4+nvidia-tesla-t4+2 | 4 | 15 | $0.756 |
n1-standard-4+nvidia-tesla-t4+4 | 4 | 15 | $1.284 |
n1-standard-8+nvidia-tesla-t4+1 | 8 | 30 | $0.720 |
n1-standard-8+nvidia-tesla-t4+2 | 8 | 30 | $0.984 |
n1-standard-8+nvidia-tesla-t4+4 | 8 | 30 | $1.512 |
n1-standard-16+nvidia-tesla-t4+1 | 16 | 60 | $1.176 |
n1-standard-16+nvidia-tesla-t4+2 | 16 | 60 | $1.440 |
n1-standard-16+nvidia-tesla-t4+4 | 16 | 60 | $1.968 |
n1-standard-32+nvidia-tesla-t4+1 | 32 | 120 | $2.088 |
n1-standard-32+nvidia-tesla-t4+2 | 32 | 120 | $2.352 |
n1-standard-32+nvidia-tesla-t4+4 | 32 | 120 | $2.880 |
n1-standard-64+nvidia-tesla-t4+1 | 64 | 240 | $3.912 |
n1-standard-64+nvidia-tesla-t4+2 | 64 | 240 | $4.176 |
n1-standard-64+nvidia-tesla-t4+4 | 64 | 240 | $4.704 |
n1-standard-96+nvidia-tesla-t4+1 | 96 | 360 | $5.736 |
n1-standard-96+nvidia-tesla-t4+2 | 96 | 360 | $6.000 |
n1-standard-96+nvidia-tesla-t4+4 | 96 | 360 | $6.528 |
Instance Type | VCPUs | Memory (GiB) | Rate | List Price | USD/Credit per hour |
---|---|---|---|---|---|
Standard_B2ms | 2 | 8 | 8320 | $0.083 | $0.100 |
Standard_B4ms | 4 | 16 | 16600 | $0.166 | $0.199 |
Standard_B8ms | 8 | 32 | 33300 | $0.333 | $0.400 |
Standard_B16ms | 16 | 64 | 66600 | $0.666 | $0.799 |
Standard_DS2_v2 | 2 | 7 | 14600 | $0.146 | $0.175 |
Standard_DS3_v2 | 4 | 14 | 29300 | $0.293 | $0.352 |
Standard_DS4_v2 | 8 | 28 | 58500 | $0.585 | $0.702 |
Standard_DS5_v2 | 16 | 56 | 117000 | $1.170 | $1.404 |
Standard_D2ps_v5 | 2 | 8 | 14600 | $0.077 | $0.092 |
Standard_D4ps_v5 | 4 | 16 | 7700 | $0.154 | $0.185 |
Standard_D8ps_v5 | 8 | 32 | 15400 | $0.308 | $0.370 |
Standard_D16ps_v5 | 16 | 64 | 30800 | $0.616 | $0.739 |
Standard_D32ps_v5 | 32 | 128 | 61600 | $1.232 | $1.478 |
Standard_D48ps_v5 | 48 | 192 | 23200 | $1.848 | $2.218 |
Standard_NC4as_T4_v3 | 4 | 28 | 52600 | $0.526 | $0.631 |
Standard_NC8as_T4_v3 | 8 | 56 | 75200 | $0.752 | $0.902 |
Standard_NC16as_T4_v3 | 16 | 110 | 120400 | $1.204 | $1.445 |
Standard_NC64as_T4_v3 | 64 | 440 | 435200 | $4.352 | $5.222 |
Standard_D2S_v5 | 2 | 8 | 9600 | $0.096 | $0.115 |
Standard_D4S_v5 | 4 | 16 | 19200 | $0.192 | $0.230 |
Standard_D8S_v5 | 8 | 32 | 38400 | $0.384 | $0.461 |
Standard_D16S_v5 | 16 | 64 | 76800 | $0.768 | $0.922 |
Standard_D32S_v5 | 32 | 128 | 153600 | $1.536 | $1.843 |
Standard_D64S_v5 | 64 | 192 | 230400 | $2.304 | $2.765 |
Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
---|---|---|---|
VM.Standard2.1 | 1 | 15 | $0.076 |
VM.Standard2.2 | 2 | 30 | $0.153 |
VM.Standard2.4 | 4 | 60 | $0.306 |
VM.Standard2.8 | 8 | 120 | $0.612 |
VM.Standard2.16 | 16 | 240 | $1.225 |
VM.Standard3Flex.1 | 1 | 4 | $0.055 |
VM.Standard3Flex.2 | 2 | 8 | $0.110 |
VM.Standard3Flex.4 | 4 | 16 | $0.221 |
VM.Standard3Flex.8 | 8 | 32 | $0.442 |
VM.Standard3Flex.16 | 16 | 64 | $0.883 |
VM.Standard.A1.Flex.1 | 1 | 4 | $0.019 |
VM.Standard.A1.Flex.2 | 2 | 8 | $0.038 |
VM.Standard.A1.Flex.4 | 4 | 16 | $0.077 |
VM.Standard.A1.Flex.8 | 8 | 32 | $0.154 |
VM.Standard.A1.Flex.16 | 16 | 64 | $0.309 |
Type | Description |
---|---|
Supported Kubernetes Versions | {/* START_kind_VERSIONS */}1.26.15, 1.27.16, 1.28.15, 1.29.14, 1.30.10, 1.31.6, 1.32.3{/* END_kind_VERSIONS */} |
Supported Instance Types | See Replicated Instance Types |
Node Groups | No |
Node Auto Scaling | No |
Nodes | Supports a single node. |
IP Family | Supports `ipv4` or `dual`. |
Limitations | See Limitations |
Common Use Cases | Smoke tests |
Type | Description |
---|---|
Supported k3s Versions | The upstream k8s version that matches the Kubernetes version requested. |
Supported Kubernetes Versions | {/* START_k3s_VERSIONS */}1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.1, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.29.13, 1.29.14, 1.29.15, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.30.9, 1.30.10, 1.30.11, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.31.7, 1.32.0, 1.32.1, 1.32.2, 1.32.3{/* END_k3s_VERSIONS */} |
Supported Instance Types | See Replicated Instance Types |
Node Groups | Yes |
Node Auto Scaling | No |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | |
Type | Description |
---|---|
Supported RKE2 Versions | The upstream RKE2 version that matches the requested Kubernetes version. |
Supported Kubernetes Versions | {/* START_rke2_VERSIONS */}1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.29.13, 1.29.14, 1.29.15, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.30.9, 1.30.10, 1.30.11, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.31.7, 1.32.0, 1.32.1, 1.32.2, 1.32.3{/* END_rke2_VERSIONS */} |
Supported Instance Types | See Replicated Instance Types |
Node Groups | Yes |
Node Auto Scaling | No |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | |
Type | Description |
---|---|
Supported OpenShift Versions | {/* START_openshift_VERSIONS */}4.10.0-okd, 4.11.0-okd, 4.12.0-okd, 4.13.0-okd, 4.14.0-okd, 4.15.0-okd, 4.16.0-okd, 4.17.0-okd{/* END_openshift_VERSIONS */} |
Supported Instance Types | See Replicated Instance Types |
Node Groups | Yes |
Node Auto Scaling | No |
Nodes | Supports multiple nodes for versions 4.13.0-okd and later. |
IP Family | Supports `ipv4`. |
Limitations | For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported Embedded Cluster Versions | Any valid release sequence that has previously been promoted to the channel where the customer license is assigned. Version is optional and defaults to the latest available release on the channel. |
Supported Instance Types | See Replicated Instance Types |
Node Groups | Yes |
Nodes | Supports multiple nodes (alpha). |
IP Family | Supports `ipv4`. |
Limitations | For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported kURL Versions | Any promoted kURL installer. Version is optional. For an installer version other than "latest", you can find the specific Installer ID for a previously promoted installer under the relevant **Install Command** (ID after kurl.sh/) on the **Channels > kURL Installer History** page in the Vendor Portal. For more information about viewing the history of kURL installers promoted to a channel, see [Installer History](/vendor/installer-history). |
Supported Instance Types | See Replicated Instance Types |
Node Groups | Yes |
Node Auto Scaling | No |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | Does not work with the Longhorn add-on. For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported Kubernetes Versions | {/* START_eks_VERSIONS */}1.25, 1.26, 1.27, 1.28, 1.29, 1.30, 1.31, 1.32{/* END_eks_VERSIONS */} Extended Support Versions: 1.25, 1.26, 1.27, 1.28, 1.29 |
Supported Instance Types | m6i.large, m6i.xlarge, m6i.2xlarge, m6i.4xlarge, m6i.8xlarge, m7i.large, m7i.xlarge, m7i.2xlarge, m7i.4xlarge, m7i.8xlarge, m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m7g.large (arm), m7g.xlarge (arm), m7g.2xlarge (arm), m7g.4xlarge (arm), m7g.8xlarge (arm), c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, g4dn.xlarge (gpu), g4dn.2xlarge (gpu), g4dn.4xlarge (gpu), g4dn.8xlarge (gpu), g4dn.12xlarge (gpu), g4dn.16xlarge (gpu). g4dn instance types depend on available capacity. After a g4dn cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [Amazon EKS optimized accelerated Amazon Linux AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#gpu-ami) in the AWS documentation. |
Node Groups | Yes |
Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | You can only choose a minor version, not a patch version. The EKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported Kubernetes Versions | {/* START_gke_VERSIONS */}1.30, 1.31, 1.32.2{/* END_gke_VERSIONS */} |
Supported Instance Types | n2-standard-2, n2-standard-4, n2-standard-8, n2-standard-16, n2-standard-32, t2a-standard-2 (arm), t2a-standard-4 (arm), t2a-standard-8 (arm), t2a-standard-16 (arm), t2a-standard-32 (arm), t2a-standard-48 (arm), e2-standard-2, e2-standard-4, e2-standard-8, e2-standard-16, e2-standard-32, n1-standard-1+nvidia-tesla-t4+1 (gpu), n1-standard-1+nvidia-tesla-t4+2 (gpu), n1-standard-1+nvidia-tesla-t4+4 (gpu), n1-standard-2+nvidia-tesla-t4+1 (gpu), n1-standard-2+nvidia-tesla-t4+2 (gpu), n1-standard-2+nvidia-tesla-t4+4 (gpu), n1-standard-4+nvidia-tesla-t4+1 (gpu), n1-standard-4+nvidia-tesla-t4+2 (gpu), n1-standard-4+nvidia-tesla-t4+4 (gpu), n1-standard-8+nvidia-tesla-t4+1 (gpu), n1-standard-8+nvidia-tesla-t4+2 (gpu), n1-standard-8+nvidia-tesla-t4+4 (gpu), n1-standard-16+nvidia-tesla-t4+1 (gpu), n1-standard-16+nvidia-tesla-t4+2 (gpu), n1-standard-16+nvidia-tesla-t4+4 (gpu), n1-standard-32+nvidia-tesla-t4+1 (gpu), n1-standard-32+nvidia-tesla-t4+2 (gpu), n1-standard-32+nvidia-tesla-t4+4 (gpu), n1-standard-64+nvidia-tesla-t4+1 (gpu), n1-standard-64+nvidia-tesla-t4+2 (gpu), n1-standard-64+nvidia-tesla-t4+4 (gpu), n1-standard-96+nvidia-tesla-t4+1 (gpu), n1-standard-96+nvidia-tesla-t4+2 (gpu), n1-standard-96+nvidia-tesla-t4+4 (gpu). You can specify more than one node. |
Node Groups | Yes |
Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | You can choose only a minor version, not a patch version. The GKE installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported Kubernetes Versions | {/* START_aks_VERSIONS */}1.29, 1.30, 1.31{/* END_aks_VERSIONS */} |
Supported Instance Types | Standard_B2ms, Standard_B4ms, Standard_B8ms, Standard_B16ms, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS2_v5, Standard_DS3_v5, Standard_DS4_v5, Standard_DS5_v5, Standard_D2ps_v5 (arm), Standard_D4ps_v5 (arm), Standard_D8ps_v5 (arm), Standard_D16ps_v5 (arm), Standard_D32ps_v5 (arm), Standard_D48ps_v5 (arm), Standard_NC4as_T4_v3 (gpu), Standard_NC8as_T4_v3 (gpu), Standard_NC16as_T4_v3 (gpu), Standard_NC64as_T4_v3 (gpu). GPU instance types depend on available capacity. After a GPU cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [NVIDIA GPU Operator with Azure Kubernetes Service](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) in the NVIDIA documentation. |
Node Groups | Yes |
Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | You can choose only a minor version, not a patch version. The AKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Description |
---|---|
Supported Kubernetes Versions | {/* START_oke_VERSIONS */}1.29.1, 1.29.10, 1.30.1, 1.31.1, 1.32.1{/* END_oke_VERSIONS */} |
Supported Instance Types | VM.Standard2.1, VM.Standard2.2, VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, VM.Standard3.Flex.1, VM.Standard3.Flex.2, VM.Standard3.Flex.4, VM.Standard3.Flex.8, VM.Standard3.Flex.16, VM.Standard.A1.Flex.1 (arm), VM.Standard.A1.Flex.2 (arm), VM.Standard.A1.Flex.4 (arm), VM.Standard.A1.Flex.8 (arm), VM.Standard.A1.Flex.16 (arm) |
Node Groups | Yes |
Node Auto Scaling | No |
Nodes | Supports multiple nodes. |
IP Family | Supports `ipv4`. |
Limitations | Provisioning an OKE cluster takes between 8 and 10 minutes. If needed, adjust any timeouts in your CI pipelines accordingly. For additional limitations that apply to all distributions, see Limitations. |
Common Use Cases | Customer release tests |
Type | Memory (GiB) | VCPU Count |
---|---|---|
r1.small | 8 | 2 |
r1.medium | 16 | 4 |
r1.large | 32 | 8 |
r1.xlarge | 64 | 16 |
r1.2xlarge | 128 | 32 |
This is text from a user config value: '{{repl ConfigOption "example_default_value"}}'
This is more text from a user config value: '{{repl ConfigOption "more_text"}}'
This is a hidden value: '{{repl ConfigOption "hidden_text"}}'
```

   This creates a reference to the `more_text` field using a Replicated KOTS template function. The ConfigOption template function renders the user input from the configuration item that you specify. For more information, see [Config Context](/reference/template-functions-config-context) in _Reference_.

1. Save the changes to both YAML files.

1. Change to the root `replicated-cli-tutorial` directory, then run the following command to verify that there are no errors in the YAML:

   ```
   replicated release lint --yaml-dir=manifests
   ```

1. Create a new release and promote it to the Unstable channel:

   ```
   replicated release create --auto
   ```

   **Example output**:

   ```
   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 2
   • Promoting ✓
     • Channel 2GxpUm7lyB2g0ramqUXqjpLHzK0 successfully set to release 2
   ```

1. Type `y` and press **Enter** to continue with the defaults.

   **Example output**:

   ```
   RULE    TYPE    FILENAME    LINE    MESSAGE

   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 2
   • Promoting ✓
     • Channel 2GmYFUFzj8JOSLYw0jAKKJKFua8 successfully set to release 2
   ```

   The release is created and promoted to the Unstable channel with `SEQUENCE: 2`.

1. Verify that the release was promoted to the Unstable channel:

   ```
   replicated release ls
   ```

   **Example output**:

   ```
   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
   2           2022-11-03T19:16:24Z    0001-01-01T00:00:00Z    Unstable
   1           2022-11-03T18:49:13Z    0001-01-01T00:00:00Z
   ```

## Next Step

Continue to [Step 9: Update the Application](tutorial-cli-update-app) to return to the Admin Console and update the application to the new version that you promoted.

---

# Step 4: Create a Release

Now that you have the manifest files for the sample Kubernetes application, you can create a release for the `cli-tutorial` application and promote the release to the Unstable channel.

By default, the Vendor Portal includes Unstable, Beta, and Stable release channels.
The Unstable channel is intended for software vendors to use for internal testing, before promoting a release to the Beta or Stable channels for distribution to customers. For more information about channels, see [About Channels and Releases](releases-about).

To create and promote a release to the Unstable channel:

1. From the `replicated-cli-tutorial` directory, lint the application manifest files and ensure that there are no errors in the YAML:

   ```
   replicated release lint --yaml-dir=manifests
   ```

   If there are no errors, an empty list is displayed with a zero exit code:

   ```text
   RULE    TYPE    FILENAME    LINE    MESSAGE
   ```

   For a complete list of the possible error, warning, and informational messages that can appear in the output of the `release lint` command, see [Linter Rules](/reference/linter).

1. Initialize the project as a Git repository:

   ```
   git init
   git add .
   git commit -m "Initial Commit: CLI Tutorial"
   ```

   Initializing the project as a Git repository allows you to track your history. The Replicated CLI also reads Git metadata to help with the generation of release metadata, such as version labels.

1. From the `replicated-cli-tutorial` directory, create a release with the default settings:

   ```
   replicated release create --auto
   ```

   The `--auto` flag generates release notes and metadata based on the Git status.

   **Example output:**

   ```
   • Reading Environment ✓

   Prepared to create release with defaults:

       yaml-dir        "./manifests"
       promote         "Unstable"
       version         "Unstable-ba710e5"
       release-notes   "CLI release of master triggered by exampleusername [SHA: d4173a4] [31 Oct 22 08:51 MDT]"
       ensure-channel  true
       lint-release    true

   Create with these properties? [Y/n]
   ```

1. Type `y` and press **Enter** to confirm the prompt.

   **Example output:**

   ```text
   • Reading manifests from ./manifests ✓
   • Creating Release ✓
     • SEQUENCE: 1
   • Promoting ✓
     • Channel VEr0nhJBBUdaWpPvOIK-SOryKZEwa3Mg successfully set to release 1
   ```

   The release is created and promoted to the Unstable channel.

1.
Verify that the release was promoted to the Unstable channel:

   ```
   replicated release ls
   ```

   **Example output:**

   ```text
   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
   1           2022-10-31T14:55:35Z    0001-01-01T00:00:00Z    Unstable
   ```

## Next Step

Continue to [Step 5: Create a Customer](tutorial-cli-create-customer) to create a customer license file that you will upload when installing the application.

---

# Step 7: Configure the Application

After you install KOTS, you can log in to the KOTS Admin Console. This procedure shows you how to make a configuration change for the application from the Admin Console, which is a typical task performed by end users.

To configure the application:

1. Access the Admin Console using `https://localhost:8800` if the installation script is still running. Otherwise, run the following command to access the Admin Console:

   ```bash
   kubectl kots admin-console --namespace NAMESPACE
   ```

   Replace `NAMESPACE` with the namespace where KOTS is installed.

1. Enter the password that you created in [Step 6: Install KOTS and the Application](tutorial-cli-install-app-manager) to log in to the Admin Console.

   The Admin Console dashboard opens. On the Admin Console **Dashboard** tab, users can take various actions, including viewing the application status, opening the application, checking for application updates, syncing their license, and setting up application monitoring on the cluster with Prometheus.

1. On the **Config** tab, select the **Customize Text Inputs** checkbox. In the **Text Example** field, enter any text. For example, `Hello`.

   This page displays configuration settings that are specific to the application. Software vendors define the fields that are displayed on this page in the KOTS Config custom resource. For more information, see [Config](/reference/custom-resource-config) in _Reference_.

1. Click **Save config**. In the dialog that opens, click **Go to updated version**.

   The **Version history** tab opens.

1. Click **Deploy** for the new version.
Then click **Yes, deploy** in the confirmation dialog.

1. Click **Open App** to view the application in your browser.

   Notice the text that you entered previously on the configuration page is displayed on the screen.

   :::note
   If you do not see the new text, refresh your browser.
   :::

## Next Step

Continue to [Step 8: Create a New Version](tutorial-cli-create-new-version) to make a change to one of the manifest files for the `cli-tutorial` application, then use the Replicated CLI to create and promote a new release.

---

# Step 6: Install KOTS and the Application

The next step is to test the installation process for the application release that you promoted. Using the KOTS CLI, you will install KOTS and the sample application in your cluster.

KOTS is the Replicated component that allows your users to install, manage, and upgrade your application. Users can interact with KOTS through the Admin Console or through the KOTS CLI.

To install KOTS and the application:

1. From the `replicated-cli-tutorial` directory, run the following command to get the installation commands for the Unstable channel, where you promoted the release for the `cli-tutorial` application:

   ```
   replicated channel inspect Unstable
   ```

   **Example output:**

   ```
   ID:             2GmYFUFzj8JOSLYw0jAKKJKFua8
   NAME:           Unstable
   DESCRIPTION:
   RELEASE:        1
   VERSION:        Unstable-d4173a4
   EXISTING:

       curl -fsSL https://kots.io/install | bash
       kubectl kots install cli-tutorial/unstable

   EMBEDDED:

       curl -fsSL https://k8s.kurl.sh/cli-tutorial-unstable | sudo bash

   AIRGAP:

       curl -fSL -o cli-tutorial-unstable.tar.gz https://k8s.kurl.sh/bundle/cli-tutorial-unstable.tar.gz
       # ... scp or sneakernet cli-tutorial-unstable.tar.gz to airgapped machine, then
       tar xvf cli-tutorial-unstable.tar.gz
       sudo bash ./install.sh airgap
   ```

   This command prints information about the channel, including the commands for installing in:
   * An existing cluster
   * An _embedded cluster_ created by Replicated kURL
   * An air gap cluster that is not connected to the internet

1.
If you have not already, configure kubectl access to the cluster you provisioned as part of [Set Up the Environment](tutorial-cli-setup#set-up-the-environment). For more information about setting the context for kubectl, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.

1. Run the `EXISTING` installation script with the following flags to automatically upload the license file and run the preflight checks at the same time you run the installation.

   **Example:**

   ```
   curl -fsSL https://kots.io/install | bash
   kubectl kots install cli-tutorial/unstable \
     --license-file ./LICENSE_YAML \
     --shared-password PASSWORD \
     --namespace NAMESPACE
   ```

   Replace:

   - `LICENSE_YAML` with the local path to your license file.
   - `PASSWORD` with a password to access the Admin Console.
   - `NAMESPACE` with the namespace where KOTS and the application will be installed.

   When the Admin Console is ready, the script prints the `https://localhost:8800` URL where you can access the Admin Console and the `http://localhost:8888` URL where you can access the application.

   **Example output**:

   ```
   • Deploying Admin Console
     • Creating namespace ✓
     • Waiting for datastore to be ready ✓
   • Waiting for Admin Console to be ready ✓
   • Waiting for installation to complete ✓
   • Waiting for preflight checks to complete ✓
   • Press Ctrl+C to exit
   • Go to http://localhost:8800 to access the Admin Console
   • Go to http://localhost:8888 to access the application
   ```

1. Verify that the Pods are running for the example NGINX service and for kotsadm:

   ```bash
   kubectl get pods --namespace NAMESPACE
   ```

   Replace `NAMESPACE` with the namespace where KOTS and the application were installed.
**Example output:**

   ```
   NAME                       READY   STATUS    RESTARTS   AGE
   kotsadm-7ccc8586b8-n7vf6   1/1     Running   0          12m
   kotsadm-minio-0            1/1     Running   0          17m
   kotsadm-rqlite-0           1/1     Running   0          17m
   nginx-688f4b5d44-8s5v7     1/1     Running   0          11m
   ```

## Next Step

Continue to [Step 7: Configure the Application](tutorial-cli-deploy-app) to log in to the Admin Console and make configuration changes.

---

# Step 1: Install the Replicated CLI

In this tutorial, you use the Replicated CLI to create and promote releases for a sample application with Replicated. The Replicated CLI is the CLI for the Replicated Vendor Portal.

This procedure describes how to create a Vendor Portal account, install the Replicated CLI on your local machine, and set up a `REPLICATED_API_TOKEN` environment variable for authentication.

To install the Replicated CLI:

1. Do one of the following to create an account in the Replicated Vendor Portal:
   * **Join an existing team**: If you have an existing Vendor Portal team, you can ask your team administrator to send you an invitation to join.
   * **Start a trial**: Alternatively, go to [vendor.replicated.com](https://vendor.replicated.com/) and click **Sign up** to create a 21-day trial account for completing this tutorial.

1. Run the following command to use [Homebrew](https://brew.sh) to install the CLI:

   ```
   brew install replicatedhq/replicated/cli
   ```

   For the latest Linux or macOS versions of the Replicated CLI, see the [replicatedhq/replicated](https://github.com/replicatedhq/replicated/releases) releases in GitHub.

1. Verify the installation:

   ```
   replicated version
   ```

   **Example output**:

   ```json
   {
     "version": "0.37.2",
     "git": "8664ac3",
     "buildTime": "2021-08-24T17:05:26Z",
     "go": {
         "version": "go1.14.15",
         "compiler": "gc",
         "os": "darwin",
         "arch": "amd64"
     }
   }
   ```

   If you run a Replicated CLI command, such as `replicated release ls`, you see the following error message about a missing API token:

   ```
   Error: set up APIs: Please provide your API token
   ```

1.
Create an API token for the Replicated CLI:

   1. Log in to the Vendor Portal, and go to the [Account settings](https://vendor.replicated.com/account-settings) page.

   1. Under **User API Tokens**, click **Create user API token**. For Nickname, provide a name for the token. For Permissions, select **Read and Write**.

      For more information about User API tokens, see [User API Tokens](replicated-api-tokens#user-api-tokens) in _Generating API Tokens_.

   1. Click **Create Token**.

   1. Copy the string that appears in the dialog.

1. Export the string that you copied in the previous step to an environment variable named `REPLICATED_API_TOKEN`:

   ```bash
   export REPLICATED_API_TOKEN=YOUR_TOKEN
   ```

   Replace `YOUR_TOKEN` with the token string that you copied from the Vendor Portal in the previous step.

1. Verify the User API token:

   ```
   replicated release ls
   ```

   You see the following error message:

   ```
   Error: App not found:
   ```

## Next Step

Continue to [Step 2: Create an Application](tutorial-cli-create-app) to use the Replicated CLI to create an application.

---

# Step 3: Get the Sample Manifests

To create a release for the `cli-tutorial` application, first create the Kubernetes manifest files for the application. This tutorial provides a set of sample manifest files for a simple Kubernetes application that deploys an NGINX service.

To get the sample manifest files:

1. Run the following command to create and change to a `replicated-cli-tutorial` directory:

   ```
   mkdir replicated-cli-tutorial
   cd replicated-cli-tutorial
   ```

1. Create a `/manifests` directory and download the sample manifest files from the [kots-default-yaml](https://github.com/replicatedhq/kots-default-yaml) repository in GitHub:

   ```
   mkdir ./manifests
   curl -fSsL https://github.com/replicatedhq/kots-default-yaml/archive/refs/heads/main.tar.gz | \
     tar xzv --strip-components=1 -C ./manifests \
     --exclude README.md --exclude LICENSE --exclude .gitignore
   ```

1.
Verify that you can see the YAML files in the `replicated-cli-tutorial/manifests` folder:

   ```
   ls manifests/
   ```

   ```
   example-configmap.yaml    example-service.yaml    kots-app.yaml       kots-lint-config.yaml    kots-support-bundle.yaml
   example-deployment.yaml   k8s-app.yaml            kots-config.yaml    kots-preflight.yaml
   ```

## Next Step

Continue to [Step 4: Create a Release](tutorial-cli-create-release) to create and promote the first release for the `cli-tutorial` application using these manifest files.

---

# Introduction and Setup

This tutorial introduces you to the Replicated features for software vendors and their enterprise users. It is designed to familiarize you with the key concepts and processes that you use as a software vendor when you package and distribute your application with Replicated.

In this tutorial, you use a set of sample manifest files for a basic NGINX application to learn how to:

* Create and promote releases for an application as a software vendor
* Install and update an application on a Kubernetes cluster as an enterprise user

The steps in this KOTS CLI-based tutorial show you how to use the Replicated CLI to perform these tasks. The Replicated CLI is the CLI for the Replicated Vendor Portal. You can use the Replicated CLI as a software vendor to programmatically create, configure, and manage your application artifacts, including application releases, release channels, customer entitlements, private image registries, and more.

:::note
This tutorial assumes that you have a working knowledge of Kubernetes. For an introduction to Kubernetes and free training resources, see [Training](https://kubernetes.io/training/) in the Kubernetes documentation.
:::

## Set Up the Environment

As part of this tutorial, you will install a sample application into a Kubernetes cluster.
Before you begin, do the following to set up your environment:

* Create a Kubernetes cluster that meets the minimum system requirements described in [KOTS Installation Requirements](/enterprise/installing-general-requirements). You can use any cloud provider or tool that you prefer to create a cluster, such as Google Kubernetes Engine (GKE), Amazon Web Services (AWS), or minikube.

  For example, to create a cluster in GKE, run the following command in the gcloud CLI:

  ```
  gcloud container clusters create NAME --preemptible --no-enable-ip-alias
  ```

  Where `NAME` is any name for the cluster.

* Install kubectl, the Kubernetes command line tool. See [Install Tools](https://kubernetes.io/docs/tasks/tools/) in the Kubernetes documentation.

* Configure kubectl command line access to the cluster that you created. See [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.

## Related Topics

For more information about the subjects in the getting started tutorials, see the following topics:

* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [Linter Rules](/reference/linter)
* [Online Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster)
* [Performing Updates in Existing Clusters](/enterprise/updating-app-manager)

---

# Step 9: Update the Application

To test the new release that you promoted, return to the Admin Console in a browser to update the application.

To update the application:

1. Access the KOTS Admin Console using `https://localhost:8800` if the installation script is still running. Otherwise, run the following command to access the Admin Console:

   ```bash
   kubectl kots admin-console --namespace NAMESPACE
   ```

   Replace `NAMESPACE` with the namespace where the Admin Console is installed.

1. Go to the Version history page, and click **Check for update**.

   The Admin Console loads the new release that you promoted.

1. Click **Deploy**.
In the dialog, click **Yes, deploy** to deploy the new version.

1. After the Admin Console deploys the new version, go to the **Config** page where the **Another Text Example** field that you added is displayed.

1. In the new **Another Text Example** field, enter any text. Click **Save config**.

   The Admin Console notifies you that the configuration settings for the application have changed.

1. In the dialog, click **Go to updated version**.

   The Admin Console loads the updated version on the Version history page.

1. On the Version history page, click **Deploy** next to the latest version to deploy the configuration change.

1. Go to the **Dashboard** page and click **Open App**. The application displays the text that you added to the field.

   :::note
   If you do not see the new text, refresh your browser.
   :::

## Summary

Congratulations! As part of this tutorial, you:

* Created and promoted a release for a Kubernetes application using the Replicated CLI
* Installed the application in a Kubernetes cluster
* Edited the manifest files for the application, adding a new configuration field and using template functions to reference the field
* Promoted a new release with your changes
* Used the Admin Console to update the application to the latest version

---

# Step 2: Create an Application

Next, install the Replicated CLI and then create an application.

To create an application:

1. Install the Replicated CLI:

   ```
   brew install replicatedhq/replicated/cli
   ```

   For more installation options, see [Installing the Replicated CLI](/reference/replicated-cli-installing).

1. Authorize the Replicated CLI:

   ```
   replicated login
   ```

   In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.

1. Create an application named `Grafana`:

   ```
   replicated app create Grafana
   ```

1. Set the `REPLICATED_APP` environment variable to the application that you created.
This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:

   1. Get the slug for the application that you created:

      ```
      replicated app ls
      ```

      **Example output**:

      ```
      ID                             NAME       SLUG              SCHEDULER
      2WthxUIfGT13RlrsUx9HR7So8bR    Grafana    grafana-python    kots
      ```

      In the example above, the application slug is `grafana-python`.

      :::info
      The application _slug_ is a unique string that is generated based on the application name. You can use the application slug to interact with the application through the Replicated CLI and the Vendor API v3. The application name and slug are often different from one another because it is possible to create more than one application with the same name.
      :::

   1. Set the `REPLICATED_APP` environment variable to the application slug.

      **MacOS Example:**

      ```
      export REPLICATED_APP=grafana-python
      ```

## Next Step

Add the Replicated SDK to the Helm chart and package the chart to an archive. See [Step 3: Package the Helm Chart](tutorial-config-package-chart).

## Related Topics

* [Create an Application](/vendor/vendor-portal-manage-app#create-an-application)
* [Installing the Replicated CLI](/reference/replicated-cli-installing)
* [replicated app create](/reference/replicated-cli-app-create)

---

# Step 5: Create a KOTS-Enabled Customer

After promoting the release, create a customer with the KOTS entitlement so that you can install the release with KOTS.

To create a customer:

1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.

   The **Create a new customer** page opens:

   [View a larger version of this image](/images/create-customer.png)

1. For **Customer name**, enter a name for the customer. For example, `KOTS Customer`.

1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.

1. For **License type**, select Development.

1.
For **License options**, verify that **KOTS Install Enabled** is enabled. This is the entitlement that allows the customer to install with KOTS.

1. Click **Save Changes**.

1. On the **Manage customer** page for the customer, click **Download license**. You will use the license file to install with KOTS.

   [View a larger version of this image](/images/customer-download-license.png)

## Next Step

Get the KOTS installation command and install. See [Step 6: Install the Release with KOTS](tutorial-config-install-kots).

## Related Topics

* [About Customers](/vendor/licenses-about)
* [Creating and Managing Customers](/vendor/releases-creating-customer)

---

# Step 4: Add the Chart Archive to a Release

Next, add the Helm chart archive to a new release for the application in the Replicated vendor platform.

The purpose of this step is to configure a release that supports installation with KOTS. Additionally, this step defines a user-facing application configuration page that displays in the KOTS Admin Console during installation where users can set their own Grafana login credentials.

To create a release:

1. In the `grafana` directory, create a subdirectory named `manifests`:

   ```
   mkdir manifests
   ```

   You will add the files required to support installation with Replicated KOTS to this subdirectory.

1. Move the Helm chart archive that you created to `manifests`:

   ```
   mv grafana-9.6.5.tgz manifests
   ```

1. In the `manifests` directory, create the following YAML files to configure the release:

   ```
   cd manifests
   ```

   ```
   touch kots-app.yaml k8s-app.yaml kots-config.yaml grafana.yaml
   ```

1. In each file, paste the corresponding YAML provided in the tabs below:

The KOTS Application custom resource enables features in the Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `grafana` Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Grafana application in a browser.
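As a rough sketch of the `kots-app.yaml` described above (the icon URL, service name, and port are illustrative assumptions, not values from this tutorial):

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: grafana
spec:
  title: Grafana
  icon: https://example.com/grafana-icon.png   # illustrative icon URL
  statusInformers:
    # Drives the application status shown on the Admin Console dashboard
    - deployment/grafana
  ports:
    # Creates a port forward so Grafana can be opened in a browser
    - serviceName: grafana
      servicePort: 3000
      localPort: 3000
      applicationUrl: "http://grafana"
```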
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
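A minimal sketch of the `k8s-app.yaml` this describes, assuming the application URL matches the one declared in the KOTS Application custom resource:

```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: grafana
spec:
  descriptor:
    links:
      # Adds the Open App button to the Admin Console dashboard;
      # the URL is matched against the KOTS Application port forward
      - description: Open App
        url: "http://grafana"
```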
The Config custom resource specifies a user-facing configuration page in the Admin Console designed for collecting application configuration from users. The YAML below creates "Admin User" and "Admin Password" fields that will be shown to the user on the configuration page during installation. These fields will be used to set the login credentials for Grafana.
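A hedged sketch of what `kots-config.yaml` might look like; the group and item names (`admin_user`, `admin_password`) are assumptions for illustration:

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: grafana-config
spec:
  groups:
    - name: grafana
      title: Grafana
      description: Grafana login credentials
      items:
        # Shown to the user on the Admin Console configuration page
        - name: admin_user
          title: Admin User
          type: text
          default: admin
        - name: admin_password
          title: Admin Password
          type: password
```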
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The HelmChart custom resource below contains a `values` key, which creates a mapping to the Grafana `values.yaml` file. In this case, the `values.admin.user` and `values.admin.password` fields map to `admin.user` and `admin.password` in the Grafana `values.yaml` file. During installation, KOTS renders the ConfigOption template functions in the `values.admin.user` and `values.admin.password` fields and then sets the corresponding Grafana values accordingly.
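The mapping described above could be sketched in `grafana.yaml` as follows; the Config item names `admin_user` and `admin_password` are assumed to match the fields defined in the Config custom resource:

```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: grafana
spec:
  chart:
    name: grafana
    chartVersion: 9.6.5
  values:
    admin:
      # Rendered at install time from the user's entries
      # on the Admin Console configuration page
      user: repl{{ ConfigOption "admin_user" }}
      password: repl{{ ConfigOption "admin_password" }}
```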
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The `optionalValues` field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This will allow Gitea to be accessed from the local machine after deployment.
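The conditional described above might be sketched as follows; the chart version shown is illustrative and must match the chart archive actually included in the release:

```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: gitea
spec:
  chart:
    name: gitea
    chartVersion: 1.0.6   # illustrative; must match the archive in the release
  optionalValues:
    # Applied only when installing with Embedded Cluster
    - when: 'repl{{ eq Distribution "embedded-cluster" }}'
      recursiveMerge: false
      values:
        service:
          type: NodePort
          nodePorts:
            http: "32000"
```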
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
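At its simplest, the Embedded Cluster Config can be a few lines; the version string below is illustrative, not a recommendation:

```yaml
apiVersion: embeddedcluster.replicated.com/v1beta1
kind: Config
spec:
  # Pins the Embedded Cluster version that will be installed;
  # replace with a real released version
  version: 2.1.3+k8s-1.30
```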
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
The KOTS Application custom resource enables features in the KOTS Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the KOTS Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs.
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.