
Configuring Namespace Access and Memory Limit

This topic describes how to configure namespace access and the memory limit for Velero.

Overview

The Replicated admin console requires access to the namespace where Velero is installed. If your admin console is running with minimal role-based access control (RBAC) privileges, you must enable the admin console to access Velero.

Additionally, if the application uses a large amount of memory, you can configure the default memory limit to help ensure that Velero runs successfully with snapshots.

Configure Namespace Access

This section applies only to existing cluster installations (online and air gap) where the admin console is running with minimal role-based access control (RBAC) privileges.

Run the following command to enable the admin console to access the Velero namespace:

kubectl kots velero ensure-permissions --namespace ADMIN_CONSOLE_NAMESPACE --velero-namespace VELERO_NAMESPACE

Replace:

  • ADMIN_CONSOLE_NAMESPACE with the namespace on the cluster where the admin console is running.
  • VELERO_NAMESPACE with the namespace on the cluster where Velero is installed.
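
For example, if the admin console runs in the default namespace and Velero is installed in the velero namespace (example values only; use the namespaces from your own cluster), the command looks like this:

kubectl kots velero ensure-permissions --namespace default --velero-namespace velero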

For more information, see velero ensure-permissions in the kots CLI documentation. For more information about RBAC privileges for the admin console, see Kubernetes RBAC.

Configure Memory Limit

This section applies to all online and air gap installations.

Velero sets default limits for the velero Pod and the node-agent (restic) Pod during installation. There is a known issue with restic that causes high memory usage, which can result in failures during backup creation when the Pod reaches the memory limit.

Increase the default memory limit for the node-agent (restic) Pod if your application is particularly large. For more information about configuring Velero resource requests and limits, see Customize resource requests and limits in the Velero documentation.

For example, the following kubectl commands increase the memory limit for the node-agent (restic) daemon set from the default of 1Gi to 2Gi.

Velero 1.10 and later:

kubectl -n velero patch daemonset node-agent -p '{"spec":{"template":{"spec":{"containers":[{"name":"node-agent","resources":{"limits":{"memory":"2Gi"}}}]}}}}'

Velero versions earlier than 1.10:

kubectl -n velero patch daemonset restic -p '{"spec":{"template":{"spec":{"containers":[{"name":"restic","resources":{"limits":{"memory":"2Gi"}}}]}}}}'
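
To confirm that the patch was applied, you can read the limit back from the daemon set. The following example assumes Velero 1.10 or later; for earlier versions, substitute restic for node-agent:

kubectl -n velero get daemonset node-agent -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'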

Alternatively, you can reduce the chance of the node-agent (restic) Pod reaching the memory limit during snapshot creation by lowering the Go garbage collection target percentage (GOGC) on the node-agent (restic) daemon set with one of the following kubectl commands:

Velero 1.10 and later:

kubectl -n velero set env daemonset/node-agent GOGC=1

Velero versions earlier than 1.10:

kubectl -n velero set env daemonset/restic GOGC=1
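
To verify that the variable was set, you can list the environment variables on the daemon set. The following example assumes Velero 1.10 or later; for earlier versions, substitute restic for node-agent:

kubectl -n velero set env daemonset/node-agent --list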

Additional Resources