Using the Compatibility Matrix (Beta)

This topic describes how to use the Replicated compatibility matrix to create ephemeral clusters that you can use for manual and CI/CD testing.


The compatibility matrix add-on is Beta. The features, limitations, and requirements of the compatibility matrix are subject to change. As the compatibility matrix add-on progresses towards general availability, many of its limitations will be removed.


The compatibility matrix has the following limitations:

  • Clusters cannot be resized. Create another cluster if you want to make changes, such as adding another node.
  • On cloud clusters, only one node group per cluster is supported.
  • Multi-node support is available only for GKE and EKS.
  • There is no support for IPv6.
  • The cluster upgrade feature is available only for kURL distributions. See cluster upgrade.
  • Clusters have a maximum Time To Live (TTL) of 48 hours. See Setting TTL below.
  • Cloud clusters do not allow for the configuration of CNI, CSI, CRI, Ingress, or other plugins, add-ons, services, and interfaces.
  • The node operating systems for clusters created with the compatibility matrix cannot be configured or replaced with different operating systems.
  • The Kubernetes scheduler for clusters created with the compatibility matrix cannot be replaced with a different scheduler.
  • Each team has a quota limit on the amount of resources that can be used simultaneously. This limit can be raised by messaging your account representative.

For additional distribution-specific limitations, see Supported Compatibility Matrix Cluster Types (Beta).


Before you can use the compatibility matrix, you must complete the following prerequisites:

Creating and Preparing Clusters

The Replicated compatibility matrix functionality is provided through the replicated CLI cluster commands.

You can run the commands to manually create a cluster when you need one for a short period of time, such as when debugging a support issue or to use testing as part of an inner development loop. You can also integrate the commands in your continuous integration and continuous delivery (CI/CD) workflows to automatically provision clusters for running tests. For more information, see Integrating with CI/CD.

You can use both cluster create and cluster prepare to provision clusters. The following describes the use cases for each command:

  • cluster create: Provisions a cluster based on the parameters specified. After a cluster is provisioned, an application can be installed in the cluster by creating a release, promoting the release to a temporary channel, and creating a temporary customer in the Replicated platform. A recommended use case for the cluster create command is provisioning clusters for testing in CD workflows that release your software to customers.

    The following example creates a kind cluster with Kubernetes version 1.27.0, a disk size of 100 GiB, and an instance type of r1.small.

    replicated cluster create --name kind-example --distribution kind --version 1.27.0 --disk 100 --instance-type r1.small

    For command usage, see cluster create in the replicated CLI reference.

  • cluster prepare: Provisions a cluster based on the parameters specified, creates a release, and then installs the release in the cluster. The cluster prepare command allows you to install an application in a cluster for testing without needing to create a temporary channel or a temporary customer in the Replicated platform. A recommended use case for the cluster prepare command is provisioning clusters for testing in CI workflows that run on every commit.

    The following example creates a kind cluster and installs a Helm chart in the cluster using the nginx-chart-0.0.14.tgz chart archive:

    replicated cluster prepare \
    --distribution kind \
    --version 1.27.0 \
    --chart nginx-chart-0.0.14.tgz \
    --set key1=val1,key2=val2 \
    --set-string s1=val1,s2=val2 \
    --set-json j1='{"key1":"val1","key2":"val2"}' \
    --set-literal l1=val1,l2=val2 \
    --values values.yaml

    For command usage, see cluster prepare in the replicated CLI reference.

  • cluster upgrade: Upgrades an existing cluster version. A recommended use case for the cluster upgrade command is testing your application's compatibility with Kubernetes API resource version migrations after an upgrade, in CD workflows that release your software to customers.

    The following example upgrades a kURL cluster from its previous version to version 9d5a44c.

    replicated cluster upgrade cabb74d5 --version 9d5a44c

    For command usage, see cluster upgrade in the replicated CLI reference.
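As a sketch of how these commands can fit together in a CD testing workflow, the following example provisions a cluster, targets it with kubectl, and removes it when testing finishes. The `--output json` flag, the `.id` field, and the `cluster kubeconfig` step are assumptions about the CLI's output format; verify them against `replicated cluster create --help` for your CLI version:

```shell
#!/bin/sh
# Sketch of a CD testing flow using the compatibility matrix.
# The --output json flag and the .id field are assumptions; check
# your replicated CLI version's actual output format before using.
set -e

# Provision an ephemeral kind cluster and capture its ID.
CLUSTER_ID=$(replicated cluster create \
  --name cd-test \
  --distribution kind \
  --version 1.27.0 \
  --output json | jq -r '.id')

# Fetch the kubeconfig so kubectl and helm can target the cluster.
replicated cluster kubeconfig "$CLUSTER_ID"

# Run your test suite here (placeholder smoke check).
kubectl get nodes

# Delete the cluster rather than waiting for the TTL to expire.
replicated cluster rm "$CLUSTER_ID"
```

Deleting the cluster explicitly at the end of the job, rather than relying on the TTL, frees your team's resource quota for other concurrent test runs.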

Setting TTL

To help you manage costs, compatibility matrix clusters have a Time To Live (TTL), set with the --ttl flag. By default, the TTL is one hour; you can configure it to a minimum of 10 minutes and a maximum of 48 hours. When the TTL expires, the cluster is automatically deleted. The TTL countdown does not begin until the cluster reaches the Ready state.

To delete the cluster before the TTL expires, use the replicated cluster rm command with the cluster ID.
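For example, the following creates a cluster with a four-hour TTL and then removes it before that TTL expires. The cluster ID shown is a placeholder; use the ID returned by cluster create or shown by cluster ls:

```shell
# Create a kind cluster that is deleted automatically after 4 hours.
replicated cluster create --distribution kind --version 1.27.0 --ttl 4h

# Remove the cluster early. abc12345 is a placeholder for the ID
# returned by cluster create (or listed by replicated cluster ls).
replicated cluster rm abc12345
```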

For more information about the replicated cluster commands, see the replicated CLI reference.

Test Script Recommendations

Incorporating code tests into your CI/CD workflows is important for ensuring that developers receive quick feedback and can make updates in small iterations. Replicated recommends that you create and run all of the following test types as part of your CI/CD workflows:

  • Application Testing: Traditional application testing includes unit, integration, and end-to-end tests. These tests are critical for application reliability, and the compatibility matrix is designed to incorporate and use your application testing.

  • Performance Testing: Performance testing is used to benchmark your application to ensure it can handle the expected load and scale gracefully. Test your application under a range of workloads and scenarios to identify any bottlenecks or performance issues. Make sure to optimize your application for different Kubernetes distributions and configurations by creating all of the environments you need to test in.

  • Smoke Testing: Using a single, conformant Kubernetes distribution to test basic functionality of your application with default (or standard) configuration values is a quick way to get feedback if something is likely to be broken for all or most customers. Replicated also recommends that you include each Kubernetes version that you intend to support in your smoke tests.

  • Compatibility Testing: Because applications run on various Kubernetes distributions and configurations, it is important to test compatibility across different environments. The compatibility matrix provides this infrastructure.

  • Canary Testing: Before releasing to all customers, consider deploying your application to a small subset of your customer base as a canary release. This lets you monitor the application's performance and stability in real-world environments, while minimizing the impact of potential issues. The compatibility matrix enables canary testing by simulating exact (or near) customer environments and configurations to test your application with.
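As one hedged sketch of compatibility testing in CI, a job can loop over the distribution and Kubernetes version pairs you intend to support, provisioning a matrix cluster for each. The distribution/version pairs and the test step below are placeholders; substitute the matrix and test suite you actually use:

```shell
#!/bin/sh
# Sketch: run the same tests across several distribution/version
# combinations. The pairs and the test step are placeholders.
set -e

for target in "kind 1.26.0" "kind 1.27.0" "eks 1.27"; do
  # Split the pair into distribution and version.
  set -- $target
  distribution=$1
  version=$2

  echo "Testing on $distribution $version"
  replicated cluster create \
    --name "ci-$distribution-$version" \
    --distribution "$distribution" \
    --version "$version" \
    --ttl 1h

  # Run your test suite against the new cluster here, then clean up
  # with 'replicated cluster rm <id>' once the tests finish.
done
```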