From 7920b689ccf640e49083b37810311a5a2ec53442 Mon Sep 17 00:00:00 2001 From: martyav Date: Tue, 17 Sep 2024 16:15:24 -0400 Subject: [PATCH 01/30] Fix order of headings part 2 - next 30 commits fixed communicating-with-downstream-user-clusters.md --- .../communicating-with-downstream-user-clusters.md | 8 ++++---- .../communicating-with-downstream-user-clusters.md | 8 ++++---- .../communicating-with-downstream-user-clusters.md | 8 ++++---- .../communicating-with-downstream-user-clusters.md | 8 ++++---- .../communicating-with-downstream-user-clusters.md | 8 ++++---- .../communicating-with-downstream-user-clusters.md | 8 ++++---- 6 files changed, 24 insertions(+), 24 deletions(-) diff --git a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index ebdbd0526ebd..e3dd9cb475eb 100644 --- a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https:// By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
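The proxied access described in `communicating-with-downstream-user-clusters.md` can be exercised directly with the kubeconfig that Rancher generates. A minimal sketch, assuming the file has been downloaded to the working directory under the default name the page mentions (the filename comes from the page; the download location is an assumption here):

```
# Point kubectl at the Rancher-generated kubeconfig; requests are proxied
# through the Rancher authentication proxy to the downstream cluster's API server.
export KUBECONFIG=./kube_config_rancher-cluster.yml
kubectl get pods --all-namespaces
```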
diff --git a/versioned_docs/version-2.0-2.4/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.0-2.4/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 425f992293fb..590a039360a3 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account,](https:/ By default, Rancher generates a [kubeconfig file](../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
diff --git a/versioned_docs/version-2.6/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.6/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 591bcccdd673..b2339687d9ef 100644 --- a/versioned_docs/version-2.6/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.6/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https:// By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
diff --git a/versioned_docs/version-2.7/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.7/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 9797a3777966..71bff2590d2a 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https:// By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
diff --git a/versioned_docs/version-2.8/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.8/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 617236638e53..38a6dc8e5de4 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https:// By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
diff --git a/versioned_docs/version-2.9/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.9/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index ebdbd0526ebd..e3dd9cb475eb 100644 --- a/versioned_docs/version-2.9/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.9/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above: 3. [Node Agents](#3-node-agents) 4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint) -### 1. The Authentication Proxy +## 1. The Authentication Proxy In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy. @@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https:// By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster. -### 2. Cluster Controllers and Cluster Agents +## 2. Cluster Controllers and Cluster Agents Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server. @@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs - Applies the roles and bindings defined in each cluster's global policies - Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health -### 3. Node Agents +## 3. Node Agents If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher. The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots. -### 4. Authorized Cluster Endpoint +## 4. Authorized Cluster Endpoint An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. 
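The cluster-agent and node-agent behavior described in the pages patched above can be spot-checked with `kubectl` against a downstream cluster. A minimal sketch: the component names and the DaemonSet deployment of the node agent come from the page, while the `cattle-system` namespace and the `app=cattle-cluster-agent` label selector are common defaults assumed here, not stated in the page:

```
# Cluster agent pods (the cluster agent opens the tunnel to the cluster controller in Rancher)
kubectl -n cattle-system get pods -l app=cattle-cluster-agent

# Node agents run as a DaemonSet so one is scheduled on every node in a Rancher-launched cluster
kubectl -n cattle-system get daemonset cattle-node-agent

# Recent cluster-agent logs help confirm the tunnel to Rancher is established
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=20
```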
From 3552897924b8bccd91776f20931de7b975df153d Mon Sep 17 00:00:00 2001 From: martyav Date: Tue, 17 Sep 2024 16:18:14 -0400 Subject: [PATCH 02/30] fixed kubernetes-security-best-practices.md --- .../rancher-security/kubernetes-security-best-practices.md | 2 +- .../rancher-security/kubernetes-security-best-practices.md | 2 +- .../rancher-security/kubernetes-security-best-practices.md | 2 +- .../rancher-security/kubernetes-security-best-practices.md | 2 +- .../rancher-security/kubernetes-security-best-practices.md | 2 +- .../rancher-security/kubernetes-security-best-practices.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/reference-guides/rancher-security/kubernetes-security-best-practices.md b/docs/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/docs/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/docs/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. diff --git a/versioned_docs/version-2.5/reference-guides/rancher-security/kubernetes-security-best-practices.md b/versioned_docs/version-2.5/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/versioned_docs/version-2.5/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/versioned_docs/version-2.5/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. 
diff --git a/versioned_docs/version-2.6/reference-guides/rancher-security/kubernetes-security-best-practices.md b/versioned_docs/version-2.6/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/versioned_docs/version-2.6/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/versioned_docs/version-2.6/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. diff --git a/versioned_docs/version-2.7/reference-guides/rancher-security/kubernetes-security-best-practices.md b/versioned_docs/version-2.7/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. diff --git a/versioned_docs/version-2.8/reference-guides/rancher-security/kubernetes-security-best-practices.md b/versioned_docs/version-2.8/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. 
By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. diff --git a/versioned_docs/version-2.9/reference-guides/rancher-security/kubernetes-security-best-practices.md b/versioned_docs/version-2.9/reference-guides/rancher-security/kubernetes-security-best-practices.md index ace8bd95fab6..50f39dcc1dc0 100644 --- a/versioned_docs/version-2.9/reference-guides/rancher-security/kubernetes-security-best-practices.md +++ b/versioned_docs/version-2.9/reference-guides/rancher-security/kubernetes-security-best-practices.md @@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices -### Restricting cloud metadata API access +## Restricting Cloud Metadata API Access Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. From 40901b29f7c7edb112f2d255e9d2f8cb3c656605 Mon Sep 17 00:00:00 2001 From: martyav Date: Tue, 17 Sep 2024 16:25:05 -0400 Subject: [PATCH 03/30] fixed rancher-security.md --- .../rancher-security/rancher-security.md | 18 ++++++++--------- .../rancher-security/rancher-security.md | 10 +++++----- .../rancher-security/rancher-security.md | 15 +++++++------- .../rancher-security/rancher-security.md | 17 ++++++++-------- .../rancher-security/rancher-security.md | 18 ++++++++--------- .../rancher-security/rancher-security.md | 20 +++++++++---------- .../rancher-security/rancher-security.md | 18 ++++++++--------- 7 files changed, 59 insertions(+), 57 deletions(-) diff --git a/docs/reference-guides/rancher-security/rancher-security.md b/docs/reference-guides/rancher-security/rancher-security.md index f6d56c116547..f16699b8ac6a 100644 --- a/docs/reference-guides/rancher-security/rancher-security.md +++ b/docs/reference-guides/rancher-security/rancher-security.md @@ -27,11 +27,11 @@ Security is at the heart of all Rancher features. From integrating with all the On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### NeuVector Integration with Rancher +## NeuVector Integration with Rancher NeuVector is an open-source, container-focused security application that is now integrated into Rancher. 
NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -47,13 +47,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -63,7 +63,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -71,7 +71,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -82,14 +82,14 @@ Results: Please note that new reports are no longer shared or made publicly available. -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. 
-### Rancher Security Best Practices +## Rancher Security Best Practices For recommendations on securing your Rancher Manager deployments, refer to the [Rancher Security Best Practices](rancher-security-best-practices.md) guide. diff --git a/versioned_docs/version-2.0-2.4/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.0-2.4/reference-guides/rancher-security/rancher-security.md index c4dedb25e8db..9ef5e2336eae 100644 --- a/versioned_docs/version-2.0-2.4/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.0-2.4/reference-guides/rancher-security/rancher-security.md @@ -33,7 +33,7 @@ On this page, we provide security-related documentation along with resources to - [Third-party penetration test reports](#third-party-penetration-test-reports) - [Rancher CVEs and resolutions](#rancher-cves-and-resolutions) -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster _Available as of v2.4.0_ @@ -51,7 +51,7 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans.](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -70,7 +70,7 @@ Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes V [Hardening Guide v2.2](rancher-v2.2-hardening-guides/hardening-guide-with-cis-v1.4-benchmark.md) | Rancher v2.2.x | Benchmark v1.4.1 and 1.4.0 | Kubernetes v1.13 [Hardening Guide v2.1](rancher-v2.1-hardening-guides/hardening-guide-with-cis-v1.3-benchmark.md) | Rancher v2.1.x | Benchmark v1.3.0 | Kubernetes v1.11 -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -87,7 +87,7 @@ Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kube [Self Assessment Guide v2.2](rancher-v2.2-hardening-guides/self-assessment-guide-with-cis-v1.4-benchmark.md) | Rancher v2.2.x | Hardening Guide v2.2 | Kubernetes v1.13 | Benchmark v1.4.0 and v1.4.1 [Self Assessment Guide v2.1](rancher-v2.1-hardening-guides/self-assessment-guide-with-cis-v1.3-benchmark.md) | Rancher v2.1.x | Hardening Guide v2.1 | Kubernetes v1.11 | Benchmark 1.3.0 -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -96,6 +96,6 @@ Results: - [Cure53 Pen Test - 7/2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test- 3/2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) -### Rancher CVEs and Resolutions +## Rancher CVEs and Resolutions Rancher is committed to informing the community of security issues in our products. 
For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) diff --git a/versioned_docs/version-2.5/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.5/reference-guides/rancher-security/rancher-security.md index 5ca30e89ffc1..71162bc304fb 100644 --- a/versioned_docs/version-2.5/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.5/reference-guides/rancher-security/rancher-security.md @@ -26,7 +26,8 @@ title: Rancher Security Guides Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability,](../../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md) Rancher makes your Kubernetes clusters even more secure. On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### Running a CIS Security Scan on a Kubernetes Cluster + +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -42,13 +43,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -58,7 +59,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -66,7 +67,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. 
The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -75,10 +76,10 @@ Results: - [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. diff --git a/versioned_docs/version-2.6/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.6/reference-guides/rancher-security/rancher-security.md index 9c982209fc54..78302d0c00d5 100644 --- a/versioned_docs/version-2.6/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.6/reference-guides/rancher-security/rancher-security.md @@ -26,13 +26,14 @@ title: Rancher Security Guides Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md), Rancher makes your Kubernetes clusters even more secure. On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### NeuVector Integration with Rancher + +## NeuVector Integration with Rancher _New in v2.6.5_ NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../../integrations-in-rancher/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -48,13 +49,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. 
We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -64,7 +65,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -72,7 +73,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher 2.x software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -81,10 +82,10 @@ Results: - [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) - [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. diff --git a/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security.md index 1d7e12f113da..37d7a40b58fb 100644 --- a/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.7/reference-guides/rancher-security/rancher-security.md @@ -28,11 +28,11 @@ Security is at the heart of all Rancher features. From integrating with all the On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### NeuVector Integration with Rancher +## NeuVector Integration with Rancher NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. 
Please see the [Rancher docs](../../integrations-in-rancher/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -48,13 +48,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -64,7 +64,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -72,7 +72,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -83,14 +83,14 @@ Results: Please note that new reports are no longer shared or made publicly available. -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. 
-### Rancher Security Best Practices +## Rancher Security Best Practices For recommendations on securing your Rancher Manager deployments, refer to the [Rancher Security Best Practices](rancher-security-best-practices.md) guide. diff --git a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security.md index 795fed1d87e8..b557102c0355 100644 --- a/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.8/reference-guides/rancher-security/rancher-security.md @@ -27,11 +27,11 @@ Security is at the heart of all Rancher features. From integrating with all the On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### NeuVector Integration with Rancher +## NeuVector Integration with Rancher NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -47,13 +47,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -63,7 +63,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -71,7 +71,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. 
-### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -82,18 +82,18 @@ Results: Please note that new reports are no longer shared or made publicly available. -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. -### Rancher Security Best Practices +## Rancher Security Best Practices For recommendations on securing your Rancher Manager deployments, refer to the [Rancher Security Best Practices](rancher-security-best-practices.md) guide. -### Rancher Webhook Hardening +## Rancher Webhook Hardening The Rancher webhook deploys on both the upstream Rancher cluster and all provisioned clusters. For recommendations on hardening the Rancher webhook, see the [Hardening the Rancher Webhook](rancher-webhook-hardening.md) guide. \ No newline at end of file diff --git a/versioned_docs/version-2.9/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.9/reference-guides/rancher-security/rancher-security.md index f6d56c116547..f16699b8ac6a 100644 --- a/versioned_docs/version-2.9/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.9/reference-guides/rancher-security/rancher-security.md @@ -27,11 +27,11 @@ Security is at the heart of all Rancher features. From integrating with all the On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters. -### NeuVector Integration with Rancher +## NeuVector Integration with Rancher NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -### Running a CIS Security Scan on a Kubernetes Cluster +## Running a CIS Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -47,13 +47,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). -### SELinux RPM +## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. 
After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md). -### Rancher Hardening Guide +## Rancher Hardening Guide The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. @@ -63,7 +63,7 @@ The hardening guides provide prescriptive guidance for hardening a production in Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher. -### The CIS Benchmark and Self-Assessment +## The CIS Benchmark and Self-Assessment The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. @@ -71,7 +71,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. -### Third-party Penetration Test Reports +## Third-party Penetration Test Reports Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below. @@ -82,14 +82,14 @@ Results: Please note that new reports are no longer shared or made publicly available. -### Rancher Security Advisories and CVEs +## Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md) -### Kubernetes Security Best Practices +## Kubernetes Security Best Practices For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide. -### Rancher Security Best Practices +## Rancher Security Best Practices For recommendations on securing your Rancher Manager deployments, refer to the [Rancher Security Best Practices](rancher-security-best-practices.md) guide. 
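The SELinux RPM section patched above lends itself to a quick host-side check. A minimal sketch for an RPM-based, SELinux-enforcing node, using the package names listed in the page (`rancher-selinux` and `rke2-selinux`); whether both packages belong on a given node depends on which Rancher products it runs:

```
# Confirm the host is actually enforcing SELinux
getenforce

# Check whether the Rancher and RKE2 SELinux policy packages are installed
rpm -q rancher-selinux rke2-selinux
```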
From 4a6721f6ab8afed2079e3b2e930bd1c7dfedf154 Mon Sep 17 00:00:00 2001 From: martyav Date: Tue, 17 Sep 2024 16:30:17 -0400 Subject: [PATCH 04/30] fixed expired-webhook-certificate-rotation.md --- .../expired-webhook-certificate-rotation.md | 5 +++-- .../expired-webhook-certificate-rotation.md | 5 +++-- .../expired-webhook-certificate-rotation.md | 5 +++-- .../expired-webhook-certificate-rotation.md | 5 +++-- .../expired-webhook-certificate-rotation.md | 5 +++-- .../expired-webhook-certificate-rotation.md | 5 +++-- 6 files changed, 18 insertions(+), 12 deletions(-) diff --git a/docs/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/docs/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 106479c0bb71..fc8e957c4af0 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/docs/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. diff --git a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 601d062984df..fab6d5db2879 100644 --- a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.5.12 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.5.11 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. 
diff --git a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 106479c0bb71..fc8e957c4af0 100644 --- a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 106479c0bb71..fc8e957c4af0 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. 
diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 106479c0bb71..fc8e957c4af0 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. diff --git a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md index 106479c0bb71..fc8e957c4af0 100644 --- a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md +++ b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation.md @@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue: -##### 1. Users with cluster access, run the following commands: +## 1. Users with Cluster Access, Run the Following Commands: + ``` kubectl delete secret -n cattle-system cattle-webhook-tls kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io kubectl delete pod -n cattle-system -l app=rancher-webhook ``` -##### 2. Users with no cluster access via `kubectl`: +## 2. Users with No Cluster Access Via `kubectl`: 1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster. 
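When applying the workaround above, it can be useful to confirm that the webhook certificate really is expired beforehand and that a fresh one was issued afterwards. The commands below are only a sketch: they assume the `cattle-webhook-tls` secret stores the certificate under the conventional `tls.crt` key and that `openssl` is available on the machine running `kubectl`.

```
# Decode the webhook serving certificate and print its expiration date.
# Assumes the certificate is stored under the standard tls.crt key.
kubectl -n cattle-system get secret cattle-webhook-tls \
  -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -enddate

# After deleting the secret and the rancher-webhook pod, verify the pod
# restarted cleanly and the webhook configuration was recreated.
kubectl -n cattle-system get pods -l app=rancher-webhook
kubectl get mutatingwebhookconfigurations rancher.cattle.io
```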
From e6ffc6456c6b6f7082be456570a8c968b3feb99c Mon Sep 17 00:00:00 2001 From: martyav Date: Tue, 17 Sep 2024 17:31:27 -0400 Subject: [PATCH 05/30] fixed other-troubleshooting-tips/rancher-ha.md --- .../other-troubleshooting-tips/rancher-ha.md | 16 ++++++++-------- .../other-troubleshooting-tips/rancher-ha.md | 14 +++++++------- .../other-troubleshooting-tips/rancher-ha.md | 14 +++++++------- .../other-troubleshooting-tips/rancher-ha.md | 14 +++++++------- .../other-troubleshooting-tips/rancher-ha.md | 14 +++++++------- .../other-troubleshooting-tips/rancher-ha.md | 16 ++++++++-------- .../other-troubleshooting-tips/rancher-ha.md | 16 ++++++++-------- 7 files changed, 52 insertions(+), 52 deletions(-) diff --git a/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 68c7ca9d6dd1..25845cdc87d4 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. -#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` Lease in the `kube-system` namespace (in this example, `rancher-dbc7ff869-gvg6k`). 
@@ -87,7 +87,7 @@ NAME HOLDER AGE cattle-controllers rancher-dbc7ff869-gvg6k 6h10m ``` -#### Configuration +### Configuration _Available as of Rancher 2.8.3_ diff --git a/versioned_docs/version-2.0-2.4/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.0-2.4/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 3ce3d3881bc4..2e32d94ec229 100644 --- a/versioned_docs/version-2.0-2.4/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.0-2.4/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. -#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +### Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +### Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` ConfigMap (in this example, `rancher-7dbd7875f7-qbj5k`). diff --git a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 8917f80da4da..fe044bf9d8c6 100644 --- a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. 
-#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod container logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` ConfigMap (in this example, `rancher-7dbd7875f7-qbj5k`). diff --git a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 8917f80da4da..ac27df911561 100644 --- a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. -#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` ConfigMap (in this example, `rancher-7dbd7875f7-qbj5k`). 
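To see which replica currently holds leadership, you can query the resource mentioned above directly. This is a minimal sketch: newer releases record the holder in a `cattle-controllers` Lease in the `kube-system` namespace, while older releases record it in a `cattle-controllers` ConfigMap, where the holder appears in a leader-election annotation.

```
# Newer releases: the holder is shown in the HOLDER column of the Lease.
kubectl -n kube-system get lease cattle-controllers

# Older releases: the holder is recorded on the ConfigMap; look for the
# leader-election annotation in the output.
kubectl -n kube-system get configmap cattle-controllers -o yaml
```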
diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 8917f80da4da..ac27df911561 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. -#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` ConfigMap (in this example, `rancher-7dbd7875f7-qbj5k`). diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 68c7ca9d6dd1..25845cdc87d4 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. 
-#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` Lease in the `kube-system` namespace (in this example, `rancher-dbc7ff869-gvg6k`). @@ -87,7 +87,7 @@ NAME HOLDER AGE cattle-controllers rancher-dbc7ff869-gvg6k 6h10m ``` -#### Configuration +### Configuration _Available as of Rancher 2.8.3_ diff --git a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 68c7ca9d6dd1..25845cdc87d4 100644 --- a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -10,7 +10,7 @@ The commands/steps listed on this page can be used to check your Rancher Kuberne Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml`). -### Check Rancher pods +## Check Rancher Pods Rancher pods are deployed as a Deployment in the `cattle-system` namespace. @@ -31,25 +31,25 @@ rancher-7dbd7875f7-qw7wb 1/1 Running 0 8m x.x.x.x x.x.x. If a pod is unable to run (Status is not **Running**, Ready status is not showing `1/1` or you see a high count of Restarts), check the pod details, logs and namespace events. -#### Pod details +### Pod Details ``` kubectl -n cattle-system describe pods -l app=rancher ``` -#### Pod container logs +### Pod Container Logs ``` kubectl -n cattle-system logs -l app=rancher ``` -#### Namespace events +### Namespace Events ``` kubectl -n cattle-system get events ``` -### Check ingress +## Check Ingress Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (host address(es) it will be routed to). @@ -64,7 +64,7 @@ NAME HOSTS ADDRESS PORTS AGE rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m ``` -### Check ingress controller logs +## Check Ingress Controller Logs When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: @@ -72,7 +72,7 @@ When accessing your configured Rancher FQDN does not show you the UI, check the kubectl -n ingress-nginx logs -l app=ingress-nginx ``` -### Leader election +## Leader Election The leader is determined by a leader election process. 
After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` Lease in the `kube-system` namespace (in this example, `rancher-dbc7ff869-gvg6k`). @@ -87,7 +87,7 @@ NAME HOLDER AGE cattle-controllers rancher-dbc7ff869-gvg6k 6h10m ``` -#### Configuration +### Configuration _Available as of Rancher 2.8.3_ From cc9e0ea135d0f49e3829aa716c63276cf0d586d5 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 12:25:02 -0400 Subject: [PATCH 06/30] fixed other-troubleshooting-tips/networking.md --- .../other-troubleshooting-tips/networking.md | 7 ++++--- .../other-troubleshooting-tips/networking.md | 8 +++++--- .../other-troubleshooting-tips/networking.md | 7 ++++--- .../other-troubleshooting-tips/networking.md | 7 ++++--- 4 files changed, 17 insertions(+), 12 deletions(-) diff --git a/docs/troubleshooting/other-troubleshooting-tips/networking.md b/docs/troubleshooting/other-troubleshooting-tips/networking.md index d0af8a967c3f..92bd7cf56b6e 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/networking.md +++ b/docs/troubleshooting/other-troubleshooting-tips/networking.md @@ -10,11 +10,12 @@ The commands/steps listed on this page can be used to check networking related i Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. -### Double check if all the required ports are opened in your (host) firewall +## Double Check if All the Required Ports are Opened in Your (Host) Firewall Double check if all the [required ports](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall. The overlay network uses UDP in comparison to all other required ports which are TCP. -### Check if overlay network is functioning correctly + +## Check if Overlay Network is Functioning Correctly The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. @@ -98,7 +99,7 @@ The `swiss-army-knife` container does not support Windows nodes. It also [does n 6. You can now clean up the DaemonSet by running `kubectl delete ds/overlaytest`. 
-### Check if MTU is correctly configured on hosts and on peering/tunnel appliances/devices +### Check if MTU is Correctly Configured on Hosts and on Peering/Tunnel Appliances/Devices When the MTU is incorrectly configured (either on hosts running Rancher, nodes in created/imported clusters or on appliances/devices in between), error messages will be logged in Rancher and in the agents, similar to: diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md index 4d938886206a..450617aaaaff 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md @@ -10,10 +10,12 @@ The commands/steps listed on this page can be used to check networking related i Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. -### Double check if all the required ports are opened in your (host) firewall +## Double Check if All the Required Ports are Opened in Your (Host) Firewall Double check if all the [required ports](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall. The overlay network uses UDP in comparison to all other required ports which are TCP. -### Check if overlay network is functioning correctly + + +## Check if Overlay Network is Functioning Correctly The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. @@ -97,7 +99,7 @@ The `swiss-army-knife` container does not support Windows nodes. It also [does n 6. You can now clean up the DaemonSet by running `kubectl delete ds/overlaytest`. -### Check if MTU is correctly configured on hosts and on peering/tunnel appliances/devices +### Check if MTU is Correctly Configured on Hosts and on Peering/Tunnel Appliances/Devices When the MTU is incorrectly configured (either on hosts running Rancher, nodes in created/imported clusters or on appliances/devices in between), error messages will be logged in Rancher and in the agents, similar to: diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md index 4d938886206a..e2eb22a45a6d 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md @@ -10,10 +10,11 @@ The commands/steps listed on this page can be used to check networking related i Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. 
-### Double check if all the required ports are opened in your (host) firewall +## Double check if all the required ports are opened in your (host) firewall Double check if all the [required ports](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall. The overlay network uses UDP in comparison to all other required ports which are TCP. -### Check if overlay network is functioning correctly + +## Check if overlay network is functioning correctly The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. @@ -97,7 +98,7 @@ The `swiss-army-knife` container does not support Windows nodes. It also [does n 6. You can now clean up the DaemonSet by running `kubectl delete ds/overlaytest`. -### Check if MTU is correctly configured on hosts and on peering/tunnel appliances/devices +## Check if MTU is Correctly Configured on Hosts and on Peering/Tunnel Appliances/Devices When the MTU is incorrectly configured (either on hosts running Rancher, nodes in created/imported clusters or on appliances/devices in between), error messages will be logged in Rancher and in the agents, similar to: diff --git a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/networking.md index d67a1cdb7938..7e115bbd9678 100644 --- a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/networking.md @@ -10,11 +10,12 @@ The commands/steps listed on this page can be used to check networking related i Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. -### Double check if all the required ports are opened in your (host) firewall +## Double Check if All the Required Ports are Opened in Your (Host) Firewall Double check if all the [required ports](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall. The overlay network uses UDP in comparison to all other required ports which are TCP. -### Check if overlay network is functioning correctly + +## Check if Overlay Network is Functioning Correctly The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. @@ -98,7 +99,7 @@ The `swiss-army-knife` container does not support Windows nodes. It also [does n 6. You can now clean up the DaemonSet by running `kubectl delete ds/overlaytest`. 
-### Check if MTU is correctly configured on hosts and on peering/tunnel appliances/devices +### Check if MTU is Correctly Configured on Hosts and on Peering/Tunnel Appliances/Devices When the MTU is incorrectly configured (either on hosts running Rancher, nodes in created/imported clusters or on appliances/devices in between), error messages will be logged in Rancher and in the agents, similar to: From 9c3281e07e4ad2823e10359803912991dd149b77 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 12:27:48 -0400 Subject: [PATCH 07/30] fixed user-id-tracking-in-audit-logs.md --- .../user-id-tracking-in-audit-logs.md | 2 +- .../user-id-tracking-in-audit-logs.md | 2 +- .../user-id-tracking-in-audit-logs.md | 2 +- .../user-id-tracking-in-audit-logs.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md b/docs/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md index 6a25ae1565e9..adecdecde120 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md +++ b/docs/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md @@ -20,7 +20,7 @@ Now with this feature, a downstream cluster admin should be able to look at the If the audit logs are shipped off of the cluster, a user of the logging system should be able to identify the user in the external Identity Provider system. A Rancher Admin should now be able to view Rancher audit logs and follow through to the Kubernetes audit log by using the external Identity Provider username. -### Feature Description +## Feature Description - When Kubernetes Audit logs are enabled on the downstream cluster, in each event that is logged, the external Identity Provider's username is now logged for each request, at the "metadata" level. - When Rancher API Audit logs are enabled on the Rancher installation, the external Identity Provider's username is also logged now at the `auditLog.level=1` for each request that hits the Rancher API server, including the login requests. diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md index 6a25ae1565e9..adecdecde120 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md @@ -20,7 +20,7 @@ Now with this feature, a downstream cluster admin should be able to look at the If the audit logs are shipped off of the cluster, a user of the logging system should be able to identify the user in the external Identity Provider system. A Rancher Admin should now be able to view Rancher audit logs and follow through to the Kubernetes audit log by using the external Identity Provider username. -### Feature Description +## Feature Description - When Kubernetes Audit logs are enabled on the downstream cluster, in each event that is logged, the external Identity Provider's username is now logged for each request, at the "metadata" level. - When Rancher API Audit logs are enabled on the Rancher installation, the external Identity Provider's username is also logged now at the `auditLog.level=1` for each request that hits the Rancher API server, including the login requests. 
diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md index 6a25ae1565e9..adecdecde120 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md @@ -20,7 +20,7 @@ Now with this feature, a downstream cluster admin should be able to look at the If the audit logs are shipped off of the cluster, a user of the logging system should be able to identify the user in the external Identity Provider system. A Rancher Admin should now be able to view Rancher audit logs and follow through to the Kubernetes audit log by using the external Identity Provider username. -### Feature Description +## Feature Description - When Kubernetes Audit logs are enabled on the downstream cluster, in each event that is logged, the external Identity Provider's username is now logged for each request, at the "metadata" level. - When Rancher API Audit logs are enabled on the Rancher installation, the external Identity Provider's username is also logged now at the `auditLog.level=1` for each request that hits the Rancher API server, including the login requests. diff --git a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md index 6a25ae1565e9..adecdecde120 100644 --- a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md +++ b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs.md @@ -20,7 +20,7 @@ Now with this feature, a downstream cluster admin should be able to look at the If the audit logs are shipped off of the cluster, a user of the logging system should be able to identify the user in the external Identity Provider system. A Rancher Admin should now be able to view Rancher audit logs and follow through to the Kubernetes audit log by using the external Identity Provider username. -### Feature Description +## Feature Description - When Kubernetes Audit logs are enabled on the downstream cluster, in each event that is logged, the external Identity Provider's username is now logged for each request, at the "metadata" level. - When Rancher API Audit logs are enabled on the Rancher installation, the external Identity Provider's username is also logged now at the `auditLog.level=1` for each request that hits the Rancher API server, including the login requests. 
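With the external Identity Provider username recorded in each audit event, tracing a user's activity can be as simple as searching the audit log for that username. The snippet below is only a sketch: the log path assumes the RKE default (`/var/log/kube-audit/audit-log.json` on a control plane node), and `jane.doe@example.com` is a hypothetical username; adjust both for your environment.

```
# Show the most recent audit events that reference a given IdP username.
# The path and username below are assumptions; adjust them as needed.
grep 'jane.doe@example.com' /var/log/kube-audit/audit-log.json | tail -n 5
```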
From 21ec1b7320d70f4b7ede2a5b6ad55c5ca2fd622d Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 12:41:31 -0400 Subject: [PATCH 08/30] fixed other-troubleshooting-tips/registered-clusters.md --- .../other-troubleshooting-tips/registered-clusters.md | 6 +++--- .../other-troubleshooting-tips/registered-clusters.md | 6 +++--- .../other-troubleshooting-tips/registered-clusters.md | 6 +++--- .../other-troubleshooting-tips/registered-clusters.md | 6 +++--- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/troubleshooting/other-troubleshooting-tips/registered-clusters.md b/docs/troubleshooting/other-troubleshooting-tips/registered-clusters.md index cce0e089621f..f58fc038255b 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/registered-clusters.md +++ b/docs/troubleshooting/other-troubleshooting-tips/registered-clusters.md @@ -10,13 +10,13 @@ The commands/steps listed on this page can be used to check clusters that you ar Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kubeconfig_from_imported_cluster.yml`) -### Rancher agents +## Rancher Agents Communication to the cluster (Kubernetes API via cattle-cluster-agent) and communication to the nodes is done through Rancher agents. If the cattle-cluster-agent cannot connect to the configured `server-url`, the cluster will remain in **Pending** state, showing `Waiting for full cluster configuration`. -#### cattle-node-agent +### cattle-node-agent :::note @@ -49,7 +49,7 @@ Check logging of a specific cattle-node-agent pod or all cattle-node-agent pods: kubectl -n cattle-system logs -l app=cattle-agent ``` -#### cattle-cluster-agent +### cattle-cluster-agent Check if the cattle-cluster-agent pod is present in the cluster, has status **Running** and doesn't have a high count of Restarts: diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/registered-clusters.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/registered-clusters.md index cce0e089621f..f58fc038255b 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/registered-clusters.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/registered-clusters.md @@ -10,13 +10,13 @@ The commands/steps listed on this page can be used to check clusters that you ar Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kubeconfig_from_imported_cluster.yml`) -### Rancher agents +## Rancher Agents Communication to the cluster (Kubernetes API via cattle-cluster-agent) and communication to the nodes is done through Rancher agents. If the cattle-cluster-agent cannot connect to the configured `server-url`, the cluster will remain in **Pending** state, showing `Waiting for full cluster configuration`. 
-#### cattle-node-agent +### cattle-node-agent :::note @@ -49,7 +49,7 @@ Check logging of a specific cattle-node-agent pod or all cattle-node-agent pods: kubectl -n cattle-system logs -l app=cattle-agent ``` -#### cattle-cluster-agent +### cattle-cluster-agent Check if the cattle-cluster-agent pod is present in the cluster, has status **Running** and doesn't have a high count of Restarts: diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/registered-clusters.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/registered-clusters.md index cce0e089621f..f58fc038255b 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/registered-clusters.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/registered-clusters.md @@ -10,13 +10,13 @@ The commands/steps listed on this page can be used to check clusters that you ar Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kubeconfig_from_imported_cluster.yml`) -### Rancher agents +## Rancher Agents Communication to the cluster (Kubernetes API via cattle-cluster-agent) and communication to the nodes is done through Rancher agents. If the cattle-cluster-agent cannot connect to the configured `server-url`, the cluster will remain in **Pending** state, showing `Waiting for full cluster configuration`. -#### cattle-node-agent +### cattle-node-agent :::note @@ -49,7 +49,7 @@ Check logging of a specific cattle-node-agent pod or all cattle-node-agent pods: kubectl -n cattle-system logs -l app=cattle-agent ``` -#### cattle-cluster-agent +### cattle-cluster-agent Check if the cattle-cluster-agent pod is present in the cluster, has status **Running** and doesn't have a high count of Restarts: diff --git a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/registered-clusters.md b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/registered-clusters.md index cce0e089621f..f58fc038255b 100644 --- a/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/registered-clusters.md +++ b/versioned_docs/version-2.9/troubleshooting/other-troubleshooting-tips/registered-clusters.md @@ -10,13 +10,13 @@ The commands/steps listed on this page can be used to check clusters that you ar Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kubeconfig_from_imported_cluster.yml`) -### Rancher agents +## Rancher Agents Communication to the cluster (Kubernetes API via cattle-cluster-agent) and communication to the nodes is done through Rancher agents. If the cattle-cluster-agent cannot connect to the configured `server-url`, the cluster will remain in **Pending** state, showing `Waiting for full cluster configuration`. 
-#### cattle-node-agent +### cattle-node-agent :::note @@ -49,7 +49,7 @@ Check logging of a specific cattle-node-agent pod or all cattle-node-agent pods: kubectl -n cattle-system logs -l app=cattle-agent ``` -#### cattle-cluster-agent +### cattle-cluster-agent Check if the cattle-cluster-agent pod is present in the cluster, has status **Running** and doesn't have a high count of Restarts: From eac601179fa941e664112685183938fd3f1bd485 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 13:19:54 -0400 Subject: [PATCH 09/30] fixed add-users-to-projects --- docs/how-to-guides/new-user-guides/add-users-to-projects.md | 4 ++-- .../how-to-guides/new-user-guides/add-users-to-projects.md | 4 ++-- .../how-to-guides/new-user-guides/add-users-to-projects.md | 4 ++-- .../how-to-guides/new-user-guides/add-users-to-projects.md | 4 ++-- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/add-users-to-projects.md b/docs/how-to-guides/new-user-guides/add-users-to-projects.md index d99e7c181209..d3beb2fb0b59 100644 --- a/docs/how-to-guides/new-user-guides/add-users-to-projects.md +++ b/docs/how-to-guides/new-user-guides/add-users-to-projects.md @@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi ::: -### Adding Members to a New Project +## Adding Members to a New Project You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) -### Adding Members to an Existing Project +## Adding Members to an Existing Project Following project creation, you can add users as project members so that they can access its resources. diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/add-users-to-projects.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/add-users-to-projects.md index d99e7c181209..d3beb2fb0b59 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/add-users-to-projects.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/add-users-to-projects.md @@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi ::: -### Adding Members to a New Project +## Adding Members to a New Project You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) -### Adding Members to an Existing Project +## Adding Members to an Existing Project Following project creation, you can add users as project members so that they can access its resources. diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/add-users-to-projects.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/add-users-to-projects.md index d99e7c181209..d3beb2fb0b59 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/add-users-to-projects.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/add-users-to-projects.md @@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi ::: -### Adding Members to a New Project +## Adding Members to a New Project You can add members to a project as you create it (recommended if possible). 
For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) -### Adding Members to an Existing Project +## Adding Members to an Existing Project Following project creation, you can add users as project members so that they can access its resources. diff --git a/versioned_docs/version-2.9/how-to-guides/new-user-guides/add-users-to-projects.md b/versioned_docs/version-2.9/how-to-guides/new-user-guides/add-users-to-projects.md index d99e7c181209..d3beb2fb0b59 100644 --- a/versioned_docs/version-2.9/how-to-guides/new-user-guides/add-users-to-projects.md +++ b/versioned_docs/version-2.9/how-to-guides/new-user-guides/add-users-to-projects.md @@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi ::: -### Adding Members to a New Project +## Adding Members to a New Project You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) -### Adding Members to an Existing Project +## Adding Members to an Existing Project Following project creation, you can add users as project members so that they can access its resources. From 9bd7ddaf83905a0d8b3784ee33d149dfc3eaa66b Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 14:39:32 -0400 Subject: [PATCH 10/30] fixed prometheus-federator.md --- .../prometheus-federator/prometheus-federator.md | 10 +++++----- .../prometheus-federator/prometheus-federator.md | 12 ++++++------ .../prometheus-federator/prometheus-federator.md | 12 ++++++------ .../prometheus-federator/prometheus-federator.md | 12 ++++++------ 4 files changed, 23 insertions(+), 23 deletions(-) diff --git a/docs/reference-guides/prometheus-federator/prometheus-federator.md b/docs/reference-guides/prometheus-federator/prometheus-federator.md index 5166ab8732f6..dd5f22d93a6b 100644 --- a/docs/reference-guides/prometheus-federator/prometheus-federator.md +++ b/docs/reference-guides/prometheus-federator/prometheus-federator.md @@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. 
-### Configuring the Helm release created by a ProjectHelmChart +## Configuring the Helm release created by a ProjectHelmChart The `spec.values` of this ProjectHelmChart's resources will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). -### Namespaces +## Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co ::: -### Helm Resources (HelmChart, HelmRelease) +## Helm Resources (HelmChart, HelmRelease) On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: @@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi |`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. | --> -### Prometheus Federator on the Local Cluster +## Prometheus Federator on the Local Cluster Prometheus Federator is a resource intensive application. Installing it to the local cluster is possible, but **not recommended**. \ No newline at end of file diff --git a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md index 5166ab8732f6..8f5cd39451b2 100644 --- a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md @@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? 
In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. -### Configuring the Helm release created by a ProjectHelmChart +## Configuring the Helm release created by a ProjectHelmChart The `spec.values` of this ProjectHelmChart's resources will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). -### Namespaces +## Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co ::: -### Helm Resources (HelmChart, HelmRelease) +## Helm Resources (HelmChart, HelmRelease) On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). @@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi |`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. | --> -### Prometheus Federator on the Local Cluster +## Prometheus Federator on the Local Cluster Prometheus Federator is a resource intensive application. Installing it to the local cluster is possible, but **not recommended**. \ No newline at end of file diff --git a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md index 5166ab8732f6..8f5cd39451b2 100644 --- a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md @@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. 
On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. -### Configuring the Helm release created by a ProjectHelmChart +## Configuring the Helm release created by a ProjectHelmChart The `spec.values` of this ProjectHelmChart's resources will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). -### Namespaces +## Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co ::: -### Helm Resources (HelmChart, HelmRelease) +## Helm Resources (HelmChart, HelmRelease) On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). @@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi |`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. 
| --> -### Prometheus Federator on the Local Cluster +## Prometheus Federator on the Local Cluster Prometheus Federator is a resource intensive application. Installing it to the local cluster is possible, but **not recommended**. \ No newline at end of file diff --git a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md index 5166ab8732f6..8f5cd39451b2 100644 --- a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md @@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. -### Configuring the Helm release created by a ProjectHelmChart +## Configuring the Helm release created by a ProjectHelmChart The `spec.values` of this ProjectHelmChart's resources will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either: - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). 
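For illustration, a minimal ProjectHelmChart that overrides a few chart values might look like the sketch below. The resource name, namespace, and the specific value keys are placeholders, not a definitive configuration; the authoritative list of supported keys is the `values.yaml` surfaced in the `monitoring.cattle.io.v1alpha1` ConfigMap described above.

```yaml
# Sketch of a ProjectHelmChart created in a Project Registration Namespace.
# The namespace and the value keys under spec.values are illustrative
# placeholders; consult the chart's values.yaml for the supported options.
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example        # Project Registration Namespace (placeholder)
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values:
    # Passed through as the values.yaml override for rancher-project-monitoring
    alertmanager:
      enabled: true
    prometheus:
      resources:
        limits:
          memory: 2Gi
```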
-### Namespaces +## Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co ::: -### Helm Resources (HelmChart, HelmRelease) +## Helm Resources (HelmChart, HelmRelease) On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn: @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). @@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi |`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. | --> -### Prometheus Federator on the Local Cluster +## Prometheus Federator on the Local Cluster Prometheus Federator is a resource intensive application. Installing it to the local cluster is possible, but **not recommended**. \ No newline at end of file From b978a56e85a1410ff948a7d131fa39d5af88e739 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 14:49:23 -0400 Subject: [PATCH 11/30] fixed advanced-options.md --- .../advanced-options.md | 12 ++++++------ .../advanced-options.md | 12 ++++++------ .../advanced-options.md | 12 ++++++------ .../advanced-options.md | 12 ++++++------ 4 files changed, 24 insertions(+), 24 deletions(-) diff --git a/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md b/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md index 4d410831bf9a..c4dcde046d9b 100644 --- a/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/docs/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -6,7 +6,7 @@ title: Advanced Options for Docker Installs -### Custom CA Certificate +## Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. @@ -30,7 +30,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### API Audit Log +## API Audit Log The API Audit Log records all the user and system transactions made through Rancher server. @@ -49,7 +49,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### TLS settings +## TLS settings To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: @@ -65,7 +65,7 @@ Privileged access is [required.](../../getting-started/installation-and-upgrade/ See [TLS settings](../../getting-started/installation-and-upgrade/installation-references/tls-settings.md) for more information and options. 
-### Air Gap +## Air Gap If you are visiting this page to complete an air gap installation, you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Add `` with your private registry URL in front of `rancher/rancher:latest`. @@ -73,7 +73,7 @@ If you are visiting this page to complete an air gap installation, you must prep /rancher/rancher:latest -### Persistent Data +## Persistent Data Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is being used. The persistent data is at the following path in the container: `/var/lib/rancher`. @@ -89,7 +89,7 @@ docker run -d --restart=unless-stopped \ Privileged access is [required.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher) -### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node +## Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node In the situation where you want to use a single node to run Rancher and to be able to add the same node to a cluster, you have to adjust the host ports mapped for the `rancher/rancher` container. diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md index 4d410831bf9a..c4dcde046d9b 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -6,7 +6,7 @@ title: Advanced Options for Docker Installs -### Custom CA Certificate +## Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. @@ -30,7 +30,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### API Audit Log +## API Audit Log The API Audit Log records all the user and system transactions made through Rancher server. @@ -49,7 +49,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### TLS settings +## TLS settings To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: @@ -65,7 +65,7 @@ Privileged access is [required.](../../getting-started/installation-and-upgrade/ See [TLS settings](../../getting-started/installation-and-upgrade/installation-references/tls-settings.md) for more information and options. -### Air Gap +## Air Gap If you are visiting this page to complete an air gap installation, you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Add `` with your private registry URL in front of `rancher/rancher:latest`. @@ -73,7 +73,7 @@ If you are visiting this page to complete an air gap installation, you must prep /rancher/rancher:latest -### Persistent Data +## Persistent Data Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is being used. The persistent data is at the following path in the container: `/var/lib/rancher`. 
@@ -89,7 +89,7 @@ docker run -d --restart=unless-stopped \ Privileged access is [required.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher) -### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node +## Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node In the situation where you want to use a single node to run Rancher and to be able to add the same node to a cluster, you have to adjust the host ports mapped for the `rancher/rancher` container. diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md index 4d410831bf9a..c4dcde046d9b 100644 --- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -6,7 +6,7 @@ title: Advanced Options for Docker Installs -### Custom CA Certificate +## Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. @@ -30,7 +30,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### API Audit Log +## API Audit Log The API Audit Log records all the user and system transactions made through Rancher server. @@ -49,7 +49,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### TLS settings +## TLS settings To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: @@ -65,7 +65,7 @@ Privileged access is [required.](../../getting-started/installation-and-upgrade/ See [TLS settings](../../getting-started/installation-and-upgrade/installation-references/tls-settings.md) for more information and options. -### Air Gap +## Air Gap If you are visiting this page to complete an air gap installation, you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Add `` with your private registry URL in front of `rancher/rancher:latest`. @@ -73,7 +73,7 @@ If you are visiting this page to complete an air gap installation, you must prep /rancher/rancher:latest -### Persistent Data +## Persistent Data Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is being used. The persistent data is at the following path in the container: `/var/lib/rancher`. @@ -89,7 +89,7 @@ docker run -d --restart=unless-stopped \ Privileged access is [required.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher) -### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node +## Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node In the situation where you want to use a single node to run Rancher and to be able to add the same node to a cluster, you have to adjust the host ports mapped for the `rancher/rancher` container. 
diff --git a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/advanced-options.md b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/advanced-options.md index 4d410831bf9a..c4dcde046d9b 100644 --- a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/advanced-options.md +++ b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/advanced-options.md @@ -6,7 +6,7 @@ title: Advanced Options for Docker Installs -### Custom CA Certificate +## Custom CA Certificate If you want to configure Rancher to use a CA root certificate to be used when validating services, you would start the Rancher container sharing the directory that contains the CA root certificate. @@ -30,7 +30,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### API Audit Log +## API Audit Log The API Audit Log records all the user and system transactions made through Rancher server. @@ -49,7 +49,7 @@ docker run -d --restart=unless-stopped \ rancher/rancher:latest ``` -### TLS settings +## TLS settings To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: @@ -65,7 +65,7 @@ Privileged access is [required.](../../getting-started/installation-and-upgrade/ See [TLS settings](../../getting-started/installation-and-upgrade/installation-references/tls-settings.md) for more information and options. -### Air Gap +## Air Gap If you are visiting this page to complete an air gap installation, you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Add `` with your private registry URL in front of `rancher/rancher:latest`. @@ -73,7 +73,7 @@ If you are visiting this page to complete an air gap installation, you must prep /rancher/rancher:latest -### Persistent Data +## Persistent Data Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is being used. The persistent data is at the following path in the container: `/var/lib/rancher`. @@ -89,7 +89,7 @@ docker run -d --restart=unless-stopped \ Privileged access is [required.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher) -### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node +## Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node In the situation where you want to use a single node to run Rancher and to be able to add the same node to a cluster, you have to adjust the host ports mapped for the `rancher/rancher` container. 
From ac7f51718481018127dda3728fdcb2902abcd48c Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 14:59:23 -0400 Subject: [PATCH 12/30] fixed set-up-existing-storage.md --- .../set-up-existing-storage.md | 10 +++++----- .../set-up-existing-storage.md | 11 +++++------ .../set-up-existing-storage.md | 11 +++++------ .../set-up-existing-storage.md | 11 +++++------ 4 files changed, 20 insertions(+), 23 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md index 4be791f5cc3d..194d8284b3f5 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md @@ -20,12 +20,12 @@ To set up storage, follow these steps: 2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage) 3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. -### 1. Set up persistent storage +## 1. Set up persistent storage Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned. @@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md). -### 2. Add a PersistentVolume that refers to the persistent storage +## 2. Add a PersistentVolume that refers to the persistent storage These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes. @@ -52,7 +52,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku **Result:** Your new persistent volume is created. -### 3. Use the Storage Class for Pods Deployed with a StatefulSet +## 3. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound a PersistentVolume as defined in its PersistentVolumeClaim. 
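As a rough sketch of what that looks like in YAML, the StatefulSet below declares a `volumeClaimTemplates` entry; the resource names, container image, and `storageClassName` are placeholders and must be adjusted to match the PersistentVolume you created.

```yaml
# Illustrative StatefulSet with a volumeClaimTemplate; names, image, and the
# storageClassName are placeholders that must match your own PersistentVolume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-app
spec:
  serviceName: example-app
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.27              # placeholder workload image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: manual         # must match the class on the pre-created PersistentVolume
        resources:
          requests:
            storage: 1Gi
```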
@@ -86,4 +86,4 @@ The following steps describe how to assign persistent storage to an existing wor 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md index 60661aea03b2..17864435e6d0 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md @@ -20,12 +20,12 @@ To set up storage, follow these steps: 2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage) 3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. -### 1. Set up persistent storage +## 1. Set up persistent storage Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned. @@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.](../../../../../integrations-in-rancher/longhorn.md) -### 2. Add a PersistentVolume that refers to the persistent storage +## 2. Add a PersistentVolume that refers to the persistent storage These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes. @@ -51,8 +51,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku **Result:** Your new persistent volume is created. - -### 3. Use the Storage Class for Pods Deployed with a StatefulSet +## 3. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. 
Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound a PersistentVolume as defined in its PersistentVolumeClaim. @@ -86,4 +85,4 @@ The following steps describe how to assign persistent storage to an existing wor 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md index 4be791f5cc3d..3dc4b5945850 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md @@ -20,12 +20,12 @@ To set up storage, follow these steps: 2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage) 3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. -### 1. Set up persistent storage +## 1. Set up persistent storage Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned. @@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md). -### 2. Add a PersistentVolume that refers to the persistent storage +## 2. Add a PersistentVolume that refers to the persistent storage These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes. 
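For reference, a PersistentVolume that maps to pre-provisioned storage might look like the following sketch. An NFS backend is shown purely as an example; the volume plug-in, server, and path are placeholders for whatever storage you actually provisioned.

```yaml
# Sketch of a PersistentVolume pointing at existing storage; the NFS backend,
# server, and path are placeholders for your own storage plug-in and endpoint.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual              # referenced later by the PersistentVolumeClaim
  nfs:
    server: nfs.example.com             # placeholder
    path: /exports/data                 # placeholder
```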
@@ -51,8 +51,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku **Result:** Your new persistent volume is created. - -### 3. Use the Storage Class for Pods Deployed with a StatefulSet +## 3. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound a PersistentVolume as defined in its PersistentVolumeClaim. @@ -86,4 +85,4 @@ The following steps describe how to assign persistent storage to an existing wor 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. diff --git a/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md b/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md index 4be791f5cc3d..3dc4b5945850 100644 --- a/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md +++ b/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage.md @@ -20,12 +20,12 @@ To set up storage, follow these steps: 2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage) 3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. -### 1. Set up persistent storage +## 1. Set up persistent storage Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned. 
@@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md). -### 2. Add a PersistentVolume that refers to the persistent storage +## 2. Add a PersistentVolume that refers to the persistent storage These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes. @@ -51,8 +51,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku **Result:** Your new persistent volume is created. - -### 3. Use the Storage Class for Pods Deployed with a StatefulSet +## 3. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound a PersistentVolume as defined in its PersistentVolumeClaim. @@ -86,4 +85,4 @@ The following steps describe how to assign persistent storage to an existing wor 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. From 49156a110c87c15d39143a3215c7b0f413b18e63 Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 15:07:17 -0400 Subject: [PATCH 13/30] fixed dynamically-provision-new-storage,md --- .../dynamically-provision-new-storage.md | 8 ++++---- .../dynamically-provision-new-storage.md | 6 +++--- .../dynamically-provision-new-storage.md | 6 +++--- .../dynamically-provision-new-storage.md | 6 +++--- 4 files changed, 13 insertions(+), 13 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index a0f3271be5b4..0c89ef1162aa 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps: 1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage) 2. 
[Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required. - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. @@ -42,7 +42,7 @@ hostPath | `host-path` To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) -### 1. Add a storage class and configure it to use your storage +## 1. Add a storage class and configure it to use your storage These steps describe how to set up a storage class at the cluster level. @@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level. For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters). -### 2. Use the Storage Class for Pods Deployed with a StatefulSet +## 2. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim. @@ -88,4 +88,4 @@ To attach the PVC to an existing workload, 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save**. -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. 
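As a concrete sketch, a StorageClass for one of the provisioners listed above might look like the following. The in-tree Amazon EBS provisioner and its `type` parameter are used only as an example; substitute the provisioner and parameters that match your infrastructure.

```yaml
# Illustrative StorageClass; the provisioner and parameters shown are examples
# only and depend entirely on your infrastructure provider.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class
provisioner: kubernetes.io/aws-ebs      # e.g. the in-tree Amazon EBS provisioner
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
```

A workload then requests this class by name (for example, `storageClassName: example-storage-class`) in its PersistentVolumeClaim or in a StatefulSet's volumeClaimTemplate.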
diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 33a07e216c3f..802615e8b0c5 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps: 1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage) 2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required. - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. @@ -42,7 +42,7 @@ hostPath | `host-path` To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) -### 1. Add a storage class and configure it to use your storage +## 1. Add a storage class and configure it to use your storage These steps describe how to set up a storage class at the cluster level. @@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level. For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters). -### 2. Use the Storage Class for Pods Deployed with a StatefulSet +## 2. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim. 
diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index a0f3271be5b4..4d22f3fb4a98 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps: 1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage) 2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required. - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. @@ -42,7 +42,7 @@ hostPath | `host-path` To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) -### 1. Add a storage class and configure it to use your storage +## 1. Add a storage class and configure it to use your storage These steps describe how to set up a storage class at the cluster level. @@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level. For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters). -### 2. Use the Storage Class for Pods Deployed with a StatefulSet +## 2. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim. 
diff --git a/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index c40d3000816a..991f7ac41fff 100644 --- a/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.9/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps: 1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage) 2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset) -### Prerequisites +## Prerequisites - To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required. - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. @@ -42,7 +42,7 @@ hostPath | `host-path` To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) -### 1. Add a storage class and configure it to use your storage +## 1. Add a storage class and configure it to use your storage These steps describe how to set up a storage class at the cluster level. @@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level. For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters). -### 2. Use the Storage Class for Pods Deployed with a StatefulSet +## 2. Use the Storage Class for Pods Deployed with a StatefulSet StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim. 
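To make the claim side concrete, a PersistentVolumeClaim (or the equivalent `volumeClaimTemplates` entry in a StatefulSet) that triggers dynamic provisioning could look like this sketch; the claim name, namespace, requested size, and storage class name are all placeholders.

```yaml
# Illustrative PersistentVolumeClaim; all names and sizes are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-storage-class   # name of the StorageClass added in step 1 (placeholder)
  resources:
    requests:
      storage: 10Gi
```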
From 07b62b044383e1a5923967c5083b71bdf4598d4d Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 15:25:43 -0400 Subject: [PATCH 14/30] fixed gke-private-clusters.md --- .../gke-private-clusters.md | 12 ++++++------ .../gke-private-clusters.md | 12 ++++++------ .../gke-private-clusters.md | 12 ++++++------ .../gke-private-clusters.md | 12 ++++++------ 4 files changed, 24 insertions(+), 24 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md index a020ff66ed2a..4322fb5e2d21 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md @@ -8,11 +8,11 @@ title: Private Clusters In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint". -### Private Nodes +## Private Nodes Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways. -#### Cloud NAT +### Cloud NAT :::caution @@ -22,7 +22,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution. -#### Private registry +### Private Registry :::caution @@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it. 
-### Private Control Plane Endpoint +## Private Control Plane Endpoint If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster. -#### Cloud NAT +### Cloud NAT :::caution @@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. Gaining access to the cluster in order to run this command can be done by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes. -#### Direct access +### Direct Access If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above. diff --git a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md index a020ff66ed2a..4322fb5e2d21 100644 --- a/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md +++ b/versioned_docs/version-2.7/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md @@ -8,11 +8,11 @@ title: Private Clusters In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint". -### Private Nodes +## Private Nodes Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways. -#### Cloud NAT +### Cloud NAT :::caution @@ -22,7 +22,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). 
If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution. -#### Private registry +### Private Registry :::caution @@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it. -### Private Control Plane Endpoint +## Private Control Plane Endpoint If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster. -#### Cloud NAT +### Cloud NAT :::caution @@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. Gaining access to the cluster in order to run this command can be done by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes. -#### Direct access +### Direct Access If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above. diff --git a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md index a020ff66ed2a..4322fb5e2d21 100644 --- a/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md +++ b/versioned_docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md @@ -8,11 +8,11 @@ title: Private Clusters In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. 
Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint". -### Private Nodes +## Private Nodes Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways. -#### Cloud NAT +### Cloud NAT :::caution @@ -22,7 +22,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution. -#### Private registry +### Private Registry :::caution @@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it. -### Private Control Plane Endpoint +## Private Control Plane Endpoint If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster. -#### Cloud NAT +### Cloud NAT :::caution @@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. Gaining access to the cluster in order to run this command can be done by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes. -#### Direct access +### Direct Access If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. 
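The registration step described above (download the kubeconfig for the new cluster, then run the command Rancher displays while the cluster is provisioning) can be sketched roughly as follows. The cluster name, region, project, and import URL are placeholders; the authoritative command is the one shown in the Rancher UI.

```bash
# Run from a host that can reach the private control plane, for example a
# temporary VM in the same VPC or an SSH tunnel through one of the cluster nodes.

# 1. Fetch the kubeconfig for the new cluster (placeholder names).
gcloud container clusters get-credentials my-private-cluster \
  --region us-central1 --project my-project

# 2. Apply the registration manifest provided by Rancher.
#    The URL below only illustrates the shape of that command.
kubectl apply -f https://rancher.example.com/v3/import/<token>.yaml
```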
The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above. diff --git a/versioned_docs/version-2.9/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md b/versioned_docs/version-2.9/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md index a020ff66ed2a..4322fb5e2d21 100644 --- a/versioned_docs/version-2.9/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md +++ b/versioned_docs/version-2.9/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters.md @@ -8,11 +8,11 @@ title: Private Clusters In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint". -### Private Nodes +## Private Nodes Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways. -#### Cloud NAT +### Cloud NAT :::caution @@ -22,7 +22,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution. -#### Private registry +### Private Registry :::caution @@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it. -### Private Control Plane Endpoint +## Private Control Plane Endpoint If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. 
However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster. -#### Cloud NAT +### Cloud NAT :::caution @@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing). As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. Gaining access to the cluster in order to run this command can be done by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes. -#### Direct access +### Direct Access If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above. From 8d94b3ce83934b02787cfaec3335e2b50816159e Mon Sep 17 00:00:00 2001 From: martyav Date: Thu, 19 Sep 2024 15:31:56 -0400 Subject: [PATCH 15/30] fixed node-template-configuration/digital-ocean.md --- .../node-template-configuration/digitalocean.md | 4 ++-- .../node-template-configuration/digitalocean.md | 4 ++-- .../node-template-configuration/digitalocean.md | 4 ++-- .../node-template-configuration/digitalocean.md | 4 ++-- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md index 87b5fccdcfb1..11b7a300a976 100644 --- a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md +++ b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md @@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. -### Droplet Options +## Droplet Options The **Droplet Options** provision your cluster's geographical region and specifications. 
-### Docker Daemon +## Docker Daemon If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: diff --git a/versioned_docs/version-2.7/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md b/versioned_docs/version-2.7/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md index 87b5fccdcfb1..11b7a300a976 100644 --- a/versioned_docs/version-2.7/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md +++ b/versioned_docs/version-2.7/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md @@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. -### Droplet Options +## Droplet Options The **Droplet Options** provision your cluster's geographical region and specifications. -### Docker Daemon +## Docker Daemon If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: diff --git a/versioned_docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md b/versioned_docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md index 87b5fccdcfb1..11b7a300a976 100644 --- a/versioned_docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md +++ b/versioned_docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md @@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. -### Droplet Options +## Droplet Options The **Droplet Options** provision your cluster's geographical region and specifications. -### Docker Daemon +## Docker Daemon If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: diff --git a/versioned_docs/version-2.9/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md b/versioned_docs/version-2.9/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md index 87b5fccdcfb1..11b7a300a976 100644 --- a/versioned_docs/version-2.9/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md +++ b/versioned_docs/version-2.9/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean.md @@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. 
You can use an existing cloud credential or create a new one. -### Droplet Options +## Droplet Options The **Droplet Options** provision your cluster's geographical region and specifications. -### Docker Daemon +## Docker Daemon If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: From 870c79d7f403acdeedbbdcc4c67088efed5ad83c Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 20 Sep 2024 11:13:15 -0400 Subject: [PATCH 16/30] fixed prometheus-operator.md again -- broken admonition in adoc version of file due to ':::note Important:' syntax --- .../prometheus-federator/prometheus-federator.md | 8 ++++---- .../prometheus-federator/prometheus-federator.md | 10 +++++----- .../prometheus-federator/prometheus-federator.md | 12 ++++++------ .../prometheus-federator/prometheus-federator.md | 12 ++++++------ 4 files changed, 21 insertions(+), 21 deletions(-) diff --git a/docs/reference-guides/prometheus-federator/prometheus-federator.md b/docs/reference-guides/prometheus-federator/prometheus-federator.md index dd5f22d93a6b..1c0632934d5f 100644 --- a/docs/reference-guides/prometheus-federator/prometheus-federator.md +++ b/docs/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note Important: +:::note Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -## What is a Project? +### What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). 
**Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note Notes: + :::note - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note Notes: + :::note - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. diff --git a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md index 8f5cd39451b2..1c0632934d5f 100644 --- a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note Important: +:::note Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. 
On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -## What is a Project? +### What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note Notes: + :::note - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. 
**Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note Notes: + :::note - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -## Advanced Helm Project Operator Configuration +### Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). diff --git a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md index 8f5cd39451b2..2b9d900ffa01 100644 --- a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note Important: +:::note Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -## What is a Project? +### What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. 
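Since most of this page revolves around the ProjectHelmChart resource, a minimal example may help anchor the description. This is a hedged sketch only: the project ID (`p-example`), the empty `values` block, and the `helmApiVersion` value are assumptions about a default Rancher and Prometheus Federator setup rather than text taken from this page.

```bash
# Hedged sketch: create a ProjectHelmChart in a Project Registration Namespace.
# The namespace suffix (p-example) and the values block are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values: {}
EOF
```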
@@ -37,7 +37,7 @@ The `spec.values` of this ProjectHelmChart's resources will correspond to the `v - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). -## Namespaces +### Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note Notes: + :::note - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -71,7 +71,7 @@ On deploying a ProjectHelmChart, the Prometheus Federator will automatically cre - A HelmChart CR (managed via an embedded [k3s-io/helm-contoller](https://github.com/k3s-io/helm-controller) in the operator): This custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR. This CR is automatically updated on changes to the ProjectHelmChart (e.g., modifying the values.yaml) or changes to the underlying Project definition (e.g., adding or removing namespaces from a project). -:::note Important: +:::note If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation. 
However, this is generally only accessible by a **Cluster Admin.** @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -## Advanced Helm Project Operator Configuration +### Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). diff --git a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md index 8f5cd39451b2..7701e0fb317d 100644 --- a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note Important: +:::note Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -## What is a Project? +### What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). 
**Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note Notes: + :::note - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note Notes: + :::note - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. @@ -71,7 +71,7 @@ On deploying a ProjectHelmChart, the Prometheus Federator will automatically cre - A HelmChart CR (managed via an embedded [k3s-io/helm-contoller](https://github.com/k3s-io/helm-controller) in the operator): This custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR. This CR is automatically updated on changes to the ProjectHelmChart (e.g., modifying the values.yaml) or changes to the underlying Project definition (e.g., adding or removing namespaces from a project). -:::note Important: +:::note If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation. However, this is generally only accessible by a **Cluster Admin.** @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. 
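The troubleshooting hint above — check the Job that the HelmChart CR creates in the Operator / System namespace — can be approximated with a few `kubectl` commands. The namespace assumes a default install into `cattle-monitoring-system`, and the job name is a placeholder; both are assumptions, not values taken from this page.

```bash
# Assumes the Operator / System namespace is cattle-monitoring-system.
kubectl get projecthelmcharts --all-namespaces
kubectl get helmcharts,helmreleases -n cattle-monitoring-system
kubectl get jobs -n cattle-monitoring-system
kubectl logs -n cattle-monitoring-system job/<helm-operation-job-name>
kubectl get events -n cattle-monitoring-system --sort-by=.lastTimestamp
```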
-## Advanced Helm Project Operator Configuration +### Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). From e0b46a901951ba5705dcbcc940d709bbf2688a8e Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 20 Sep 2024 11:21:27 -0400 Subject: [PATCH 17/30] fixed http-proxy-configuration.md --- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 4936d839dd29..6d6670efbfbd 100644 --- a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note Important: +:::note NO_PROXY must be in uppercase to use network range (CIDR) notation. diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 4936d839dd29..6d6670efbfbd 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note Important: +:::note NO_PROXY must be in uppercase to use network range (CIDR) notation. diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 4936d839dd29..6d6670efbfbd 100644 --- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note Important: +:::note NO_PROXY must be in uppercase to use network range (CIDR) notation. 
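For the proxy variables whose admonition is being fixed here, a typical single-node invocation looks roughly like the following. The proxy address, port, and the extra `NO_PROXY` entries are placeholders; adjust them to your environment.

```bash
# Illustrative single-node Rancher start behind an HTTP proxy.
# Proxy host/port and NO_PROXY entries are placeholders.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e HTTP_PROXY="http://your_proxy_address:8888" \
  -e HTTPS_PROXY="http://your_proxy_address:8888" \
  -e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc" \
  --privileged \
  rancher/rancher:latest
```

The CIDR entry (`10.0.0.0/8`) only takes effect because `NO_PROXY` is uppercase, which is exactly what the note above calls out.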
diff --git a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 4936d839dd29..6d6670efbfbd 100644 --- a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note Important: +:::note NO_PROXY must be in uppercase to use network range (CIDR) notation. From 4bf87b13c46c97777ef384cf481a73cb1a79163d Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 20 Sep 2024 11:24:20 -0400 Subject: [PATCH 18/30] fixed air-gapped-upgrades.md --- .../air-gapped-upgrades.md | 2 +- .../air-gapped-upgrades.md | 2 +- .../air-gapped-upgrades.md | 2 +- .../air-gapped-upgrades.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md index b519ebf2761f..a3b48a0814d4 100644 --- a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md +++ b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md @@ -12,7 +12,7 @@ These instructions assume you have already followed the instructions for a Kuber ::: -### Rancher Helm Upgrade Options +## Rancher Helm Upgrade Options To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. diff --git a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md index 82957c75b8bf..b529207f9381 100644 --- a/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md +++ b/versioned_docs/version-2.7/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md @@ -12,7 +12,7 @@ These instructions assume you have already followed the instructions for a Kuber ::: -### Rancher Helm Upgrade Options +## Rancher Helm Upgrade Options To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. 
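The upgrade options referred to in this patch can be sketched as a `helm upgrade` that repeats the private-registry values used at install time. The hostname, registry, and chart file name are placeholders; `rancherImage`, `systemDefaultRegistry`, and `useBundledSystemChart` are values the Rancher chart exposes for pointing at a private registry, listed here as an illustration rather than copied from the patch.

```bash
# Placeholder values throughout; reuse the same options passed at install time.
helm upgrade rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```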
diff --git a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md index 82957c75b8bf..b529207f9381 100644 --- a/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md +++ b/versioned_docs/version-2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md @@ -12,7 +12,7 @@ These instructions assume you have already followed the instructions for a Kuber ::: -### Rancher Helm Upgrade Options +## Rancher Helm Upgrade Options To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. diff --git a/versioned_docs/version-2.9/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md b/versioned_docs/version-2.9/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md index b519ebf2761f..a3b48a0814d4 100644 --- a/versioned_docs/version-2.9/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md +++ b/versioned_docs/version-2.9/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades.md @@ -12,7 +12,7 @@ These instructions assume you have already followed the instructions for a Kuber ::: -### Rancher Helm Upgrade Options +## Rancher Helm Upgrade Options To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. From a6415414bcde511da4371ce58da92610ceb7f3b5 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 10:38:18 -0400 Subject: [PATCH 19/30] Revert "fixed prometheus-operator.md again -- broken admonition in adoc version of file due to ':::note Important:' syntax" This reverts commit 870c79d7f403acdeedbbdcc4c67088efed5ad83c. 
--- .../prometheus-federator/prometheus-federator.md | 8 ++++---- .../prometheus-federator/prometheus-federator.md | 10 +++++----- .../prometheus-federator/prometheus-federator.md | 12 ++++++------ .../prometheus-federator/prometheus-federator.md | 12 ++++++------ 4 files changed, 21 insertions(+), 21 deletions(-) diff --git a/docs/reference-guides/prometheus-federator/prometheus-federator.md b/docs/reference-guides/prometheus-federator/prometheus-federator.md index 1c0632934d5f..dd5f22d93a6b 100644 --- a/docs/reference-guides/prometheus-federator/prometheus-federator.md +++ b/docs/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note +:::note Important: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note + :::note Notes: - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. 
The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note + :::note Notes: - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. diff --git a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md index 1c0632934d5f..8f5cd39451b2 100644 --- a/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.7/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note +:::note Important: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. 
RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note + :::note Notes: - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note + :::note Notes: - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. 
@@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). diff --git a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md index 2b9d900ffa01..8f5cd39451b2 100644 --- a/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.8/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note +:::note Important: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -37,7 +37,7 @@ The `spec.values` of this ProjectHelmChart's resources will correspond to the `v - View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator). - Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary). 
-### Namespaces +## Namespaces As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for: @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note + :::note Notes: - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -71,7 +71,7 @@ On deploying a ProjectHelmChart, the Prometheus Federator will automatically cre - A HelmChart CR (managed via an embedded [k3s-io/helm-contoller](https://github.com/k3s-io/helm-controller) in the operator): This custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR. This CR is automatically updated on changes to the ProjectHelmChart (e.g., modifying the values.yaml) or changes to the underlying Project definition (e.g., adding or removing namespaces from a project). -:::note +:::note Important: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation. However, this is generally only accessible by a **Cluster Admin.** @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). 
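For readers following the Prometheus Federator hunks above, a minimal ProjectHelmChart created in a Project Registration Namespace might look like the sketch below. This is a hedged illustration only: the `helm.cattle.io/v1alpha1` API group, the namespace name, and the example values are assumptions and are not taken from this patch; only the ProjectHelmChart concept, the `cattle-project-` namespace prefix, and the `monitoring.cattle.io.v1alpha1` identifier appear in the documentation itself.

```yaml
# Hypothetical sketch: a ProjectHelmChart asking Prometheus Federator to deploy
# a Project Monitoring Stack for the project owning this registration namespace.
# The apiVersion, namespace, and values shown here are illustrative assumptions.
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example   # assumed Project Registration Namespace
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values:
    # passed through to the rancher-project-monitoring chart (placeholder values)
    alertmanager:
      enabled: true
```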
diff --git a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md index 7701e0fb317d..8f5cd39451b2 100644 --- a/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md +++ b/versioned_docs/version-2.9/reference-guides/prometheus-federator/prometheus-federator.md @@ -14,7 +14,7 @@ Prometheus Federator, also referred to as Project Monitoring v2, deploys a Helm - Default PrometheusRules and Grafana dashboards based on the collection of community-curated resources from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus/) - Default ServiceMonitors that watch the deployed resources -:::note +:::note Important: Prometheus Federator is designed to be deployed alongside an existing Prometheus Operator deployment in a cluster that has already installed the Prometheus Operator CRDs. @@ -26,7 +26,7 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus 2. On seeing each ProjectHelmChartCR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project--monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**. 3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md). -### What is a Project? +## What is a Project? In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project. @@ -45,7 +45,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 2. **Project Registration Namespace (`cattle-project-`)**: The set of namespaces that the operator watches for ProjectHelmCharts within. The RoleBindings and ClusterRoleBindings that apply to this namespace will also be the source of truth for the auto-assigned RBAC created in the Project Release Namespace. For details, refer to the [RBAC page](rbac.md). **Project Owners (admin), Project Members (edit), and Read-Only Members (view) should have access to this namespace.** - :::note + :::note Notes: - Project Registration Namespaces will be auto-generated by the operator and imported into the Project it is tied to if `.Values.global.cattle.projectLabel` is provided, which is set to `field.cattle.io/projectId` by default. This indicates that a Project Registration Namespace should be created by the operator if at least one namespace is observed with that label. 
The operator will not let these namespaces be deleted unless either all namespaces with that label are gone (e.g., this is the last namespace in that project, in which case the namespace will be marked with the label `"helm.cattle.io/helm-project-operator-orphaned": "true"`, which signals that it can be deleted), or it is no longer watching that project because the project ID was provided under `.Values.helmProjectOperator.otherSystemProjectLabelValues`, which serves as a denylist for Projects. These namespaces will also never be auto-deleted to avoid destroying user data; it is recommended that users clean up these namespaces manually if desired on creating or deleting a project. @@ -55,7 +55,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co 3. **Project Release Namespace (`cattle-project--monitoring`):** The set of namespaces that the operator deploys Project Monitoring Stacks within on behalf of a ProjectHelmChart; the operator will also automatically assign RBAC to Roles created in this namespace by the Project Monitoring Stack based on bindings found in the Project Registration Namespace. **Only Cluster Admins should have access to this namespace; Project Owners (admin), Project Members (edit), and Read-Only Members (view) will be assigned limited access to this namespace by the deployed Helm Chart and Prometheus Federator.** - :::note + :::note Notes: - Project Release Namespaces are automatically deployed and imported into the project whose ID is specified under `.Values.helmProjectOperator.projectReleaseNamespaces.labelValue`, which defaults to the value of `.Values.global.cattle.systemProjectId` if not specified, whenever a ProjectHelmChart is specified in a Project Registration Namespace. @@ -71,7 +71,7 @@ On deploying a ProjectHelmChart, the Prometheus Federator will automatically cre - A HelmChart CR (managed via an embedded [k3s-io/helm-contoller](https://github.com/k3s-io/helm-controller) in the operator): This custom resource automatically creates a Job in the same namespace that triggers a `helm install`, `helm upgrade`, or `helm uninstall` depending on the change applied to the HelmChart CR. This CR is automatically updated on changes to the ProjectHelmChart (e.g., modifying the values.yaml) or changes to the underlying Project definition (e.g., adding or removing namespaces from a project). -:::note +:::note Important: If a ProjectHelmChart is not deploying or updating the underlying Project Monitoring Stack for some reason, the Job created by this resource in the Operator / System namespace should be the first place you check to see if there's something wrong with the Helm operation. However, this is generally only accessible by a **Cluster Admin.** @@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users. -### Advanced Helm Project Operator Configuration +## Advanced Helm Project Operator Configuration For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration). 
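The value paths quoted throughout the Prometheus Federator documentation above can be collected into a single chart-values sketch. Only the key paths come from the text; the project IDs below are placeholders, not real values.

```yaml
# Hedged sketch of prometheus-federator chart values, assembled from the value
# paths quoted in the documentation above; project IDs are placeholders.
global:
  cattle:
    projectLabel: field.cattle.io/projectId   # label used to identify Projects
    systemProjectId: p-system-example         # placeholder system project ID
helmProjectOperator:
  # Projects listed here act as a denylist and never receive a
  # Project Registration Namespace.
  otherSystemProjectLabelValues:
    - p-ignore-example
  projectReleaseNamespaces:
    # Project Release Namespaces are imported into the project with this ID;
    # it defaults to global.cattle.systemProjectId when unset.
    labelValue: p-system-example
```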
From e0a8ee2fec1c96346b0b03bcfb004e9b9a1f2b8e Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 10:38:57 -0400 Subject: [PATCH 20/30] Revert "fixed http-proxy-configuration.md" This reverts commit e0b46a901951ba5705dcbcc940d709bbf2688a8e. --- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- .../single-node-rancher-in-docker/http-proxy-configuration.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 6d6670efbfbd..4936d839dd29 100644 --- a/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/docs/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note +:::note Important: NO_PROXY must be in uppercase to use network range (CIDR) notation. diff --git a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 6d6670efbfbd..4936d839dd29 100644 --- a/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.7/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note +:::note Important: NO_PROXY must be in uppercase to use network range (CIDR) notation. diff --git a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 6d6670efbfbd..4936d839dd29 100644 --- a/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.8/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note +:::note Important: NO_PROXY must be in uppercase to use network range (CIDR) notation. 
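The proxy variables tabulated in the hunks above have to reach the Rancher container's environment. Expressed as a Docker Compose style YAML sketch (Compose is not mentioned in the patch; it is used here only to keep the example declarative), the settings might look like the following, with the proxy address and excluded networks as placeholders.

```yaml
# Illustrative sketch only: the patch documents the HTTP_PROXY / HTTPS_PROXY /
# NO_PROXY variables; the compose framing and example addresses are assumptions.
services:
  rancher:
    image: rancher/rancher:latest
    ports:
      - "80:80"
      - "443:443"
    environment:
      HTTP_PROXY: "http://proxy.example.com:8080"
      HTTPS_PROXY: "http://proxy.example.com:8080"
      # Uppercase NO_PROXY is required for CIDR notation, per the note above.
      NO_PROXY: "localhost,127.0.0.1,10.0.0.0/8,cattle-system.svc,.svc,.cluster.local"
```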
diff --git a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md index 6d6670efbfbd..4936d839dd29 100644 --- a/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md +++ b/versioned_docs/version-2.9/reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md @@ -16,7 +16,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) | | NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) | -:::note +:::note Important: NO_PROXY must be in uppercase to use network range (CIDR) notation. From 3c7ae53ca7b5863cfe7c73673910bd377c68ed75 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 10:48:45 -0400 Subject: [PATCH 21/30] fixed supportconfig.md --- .../cloud-marketplace/supportconfig.md | 6 +++--- .../cloud-marketplace/supportconfig.md | 6 +++--- .../cloud-marketplace/supportconfig.md | 6 +++--- .../cloud-marketplace/supportconfig.md | 6 +++--- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md b/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md index 6eecac1132a7..24d32837e334 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/docs/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -1,5 +1,5 @@ --- -title: Supportconfig bundle +title: Supportconfig Bundle --- @@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu > **Note:** Only admin users can generate/download supportconfig bundles, regardless of method. -### Accessing through Rancher +## Accessing through Rancher First, click on the hamburger menu. Then click the `Get Support` button. @@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button. ![Get Support](/img/generate-support-config.png) -### Accessing without rancher +## Accessing without Rancher First, generate a kubeconfig for the cluster that Rancher is installed on. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md index 6eecac1132a7..24d32837e334 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -1,5 +1,5 @@ --- -title: Supportconfig bundle +title: Supportconfig Bundle --- @@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu > **Note:** Only admin users can generate/download supportconfig bundles, regardless of method. -### Accessing through Rancher +## Accessing through Rancher First, click on the hamburger menu. Then click the `Get Support` button. @@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button. ![Get Support](/img/generate-support-config.png) -### Accessing without rancher +## Accessing without Rancher First, generate a kubeconfig for the cluster that Rancher is installed on. 
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md index 6eecac1132a7..24d32837e334 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -1,5 +1,5 @@ --- -title: Supportconfig bundle +title: Supportconfig Bundle --- @@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu > **Note:** Only admin users can generate/download supportconfig bundles, regardless of method. -### Accessing through Rancher +## Accessing through Rancher First, click on the hamburger menu. Then click the `Get Support` button. @@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button. ![Get Support](/img/generate-support-config.png) -### Accessing without rancher +## Accessing without Rancher First, generate a kubeconfig for the cluster that Rancher is installed on. diff --git a/versioned_docs/version-2.9/integrations-in-rancher/cloud-marketplace/supportconfig.md b/versioned_docs/version-2.9/integrations-in-rancher/cloud-marketplace/supportconfig.md index 6eecac1132a7..24d32837e334 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/cloud-marketplace/supportconfig.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/cloud-marketplace/supportconfig.md @@ -1,5 +1,5 @@ --- -title: Supportconfig bundle +title: Supportconfig Bundle --- @@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu > **Note:** Only admin users can generate/download supportconfig bundles, regardless of method. -### Accessing through Rancher +## Accessing through Rancher First, click on the hamburger menu. Then click the `Get Support` button. @@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button. ![Get Support](/img/generate-support-config.png) -### Accessing without rancher +## Accessing without Rancher First, generate a kubeconfig for the cluster that Rancher is installed on. From 1b842c7a60fb2763d3e97078d0bdf3bc2c898ed1 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 10:54:45 -0400 Subject: [PATCH 22/30] fixed cis-scans/configuration-reference.md --- .../cis-scans/configuration-reference.md | 6 +++--- .../cis-scans/configuration-reference.md | 6 +++--- .../cis-scans/configuration-reference.md | 6 +++--- .../cis-scans/configuration-reference.md | 6 +++--- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/integrations-in-rancher/cis-scans/configuration-reference.md b/docs/integrations-in-rancher/cis-scans/configuration-reference.md index 0403956be56b..3394bc2702b9 100644 --- a/docs/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/docs/integrations-in-rancher/cis-scans/configuration-reference.md @@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure 1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. 1. In the left navigation bar, click **CIS Benchmark**. -### Scans +## Scans A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. 
@@ -31,7 +31,7 @@ spec: scanProfileName: rke-profile-hardened ``` -### Profiles +## Profiles A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. @@ -66,7 +66,7 @@ spec: - "1.1.21" ``` -### Benchmark Versions +## Benchmark Versions A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md index 0403956be56b..3394bc2702b9 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/configuration-reference.md @@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure 1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. 1. In the left navigation bar, click **CIS Benchmark**. -### Scans +## Scans A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. @@ -31,7 +31,7 @@ spec: scanProfileName: rke-profile-hardened ``` -### Profiles +## Profiles A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. @@ -66,7 +66,7 @@ spec: - "1.1.21" ``` -### Benchmark Versions +## Benchmark Versions A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md index 0403956be56b..3394bc2702b9 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/configuration-reference.md @@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure 1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. 1. In the left navigation bar, click **CIS Benchmark**. -### Scans +## Scans A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. @@ -31,7 +31,7 @@ spec: scanProfileName: rke-profile-hardened ``` -### Profiles +## Profiles A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. @@ -66,7 +66,7 @@ spec: - "1.1.21" ``` -### Benchmark Versions +## Benchmark Versions A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. 
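Assembled from the YAML fragments visible in the hunks above, a Profile and a Scan that references it might look like the sketch below. The `cis.cattle.io/v1` API group, the `ClusterScanProfile` / `ClusterScan` kinds, and the benchmark name are assumptions; only `scanProfileName: rke-profile-hardened`, the skipped test `"1.1.21"`, and the Profile-to-benchmark relationship come from the patch text.

```yaml
# Hedged reconstruction; apiVersion, kinds, and benchmarkVersion are assumptions,
# while the remaining values echo the fragments shown in the diff above.
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: rke-profile-hardened
spec:
  benchmarkVersion: rke-cis-1.6-hardened   # assumed benchmark name
  skipTests:
    - "1.1.21"
---
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  scanProfileName: rke-profile-hardened
```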
diff --git a/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/configuration-reference.md index 0403956be56b..3394bc2702b9 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/configuration-reference.md @@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure 1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. 1. In the left navigation bar, click **CIS Benchmark**. -### Scans +## Scans A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. @@ -31,7 +31,7 @@ spec: scanProfileName: rke-profile-hardened ``` -### Profiles +## Profiles A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. @@ -66,7 +66,7 @@ spec: - "1.1.21" ``` -### Benchmark Versions +## Benchmark Versions A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. From e56ee2539063c0fa4c9ac52503b99a6740d0a1ac Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:17:40 -0400 Subject: [PATCH 23/30] fix custom-benchmark.md --- .../integrations-in-rancher/cis-scans/custom-benchmark.md | 8 ++++---- .../integrations-in-rancher/cis-scans/custom-benchmark.md | 8 ++++---- .../integrations-in-rancher/cis-scans/custom-benchmark.md | 8 ++++---- .../integrations-in-rancher/cis-scans/custom-benchmark.md | 8 ++++---- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md b/docs/integrations-in-rancher/cis-scans/custom-benchmark.md index 47853e45c147..4ec353cc60b4 100644 --- a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/docs/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci Follow all the steps below to add a custom Benchmark Version and run a scan using it. -### 1. Prepare the Custom Benchmark Version ConfigMap +## 1. Prepare the Custom Benchmark Version ConfigMap To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. @@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom kubectl create configmap -n foo --from-file= ``` -### 2. Add a Custom Benchmark Version to a Cluster +## 2. Add a Custom Benchmark Version to a Cluster 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. @@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. Add the minimum and maximum Kubernetes version limits applicable, if any. 1. Click **Create**. -### 3. Create a New Profile for the Custom Benchmark Version +## 3. Create a New Profile for the Custom Benchmark Version To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. 
@@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. Choose the Benchmark Version from the dropdown. 1. Click **Create**. -### 4. Run a Scan Using the Custom Benchmark Version +## 4. Run a Scan Using the Custom Benchmark Version Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md index 47853e45c147..4ec353cc60b4 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci Follow all the steps below to add a custom Benchmark Version and run a scan using it. -### 1. Prepare the Custom Benchmark Version ConfigMap +## 1. Prepare the Custom Benchmark Version ConfigMap To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. @@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom kubectl create configmap -n foo --from-file= ``` -### 2. Add a Custom Benchmark Version to a Cluster +## 2. Add a Custom Benchmark Version to a Cluster 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. @@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. Add the minimum and maximum Kubernetes version limits applicable, if any. 1. Click **Create**. -### 3. Create a New Profile for the Custom Benchmark Version +## 3. Create a New Profile for the Custom Benchmark Version To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. @@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. Choose the Benchmark Version from the dropdown. 1. Click **Create**. -### 4. Run a Scan Using the Custom Benchmark Version +## 4. Run a Scan Using the Custom Benchmark Version Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md index 47853e45c147..4ec353cc60b4 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci Follow all the steps below to add a custom Benchmark Version and run a scan using it. -### 1. Prepare the Custom Benchmark Version ConfigMap +## 1. Prepare the Custom Benchmark Version ConfigMap To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. 
@@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom kubectl create configmap -n foo --from-file= ``` -### 2. Add a Custom Benchmark Version to a Cluster +## 2. Add a Custom Benchmark Version to a Cluster 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. @@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. Add the minimum and maximum Kubernetes version limits applicable, if any. 1. Click **Create**. -### 3. Create a New Profile for the Custom Benchmark Version +## 3. Create a New Profile for the Custom Benchmark Version To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. @@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. Choose the Benchmark Version from the dropdown. 1. Click **Create**. -### 4. Run a Scan Using the Custom Benchmark Version +## 4. Run a Scan Using the Custom Benchmark Version Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. diff --git a/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/custom-benchmark.md index 47853e45c147..4ec353cc60b4 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/cis-scans/custom-benchmark.md @@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci Follow all the steps below to add a custom Benchmark Version and run a scan using it. -### 1. Prepare the Custom Benchmark Version ConfigMap +## 1. Prepare the Custom Benchmark Version ConfigMap To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. @@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom kubectl create configmap -n foo --from-file= ``` -### 2. Add a Custom Benchmark Version to a Cluster +## 2. Add a Custom Benchmark Version to a Cluster 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. @@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. Add the minimum and maximum Kubernetes version limits applicable, if any. 1. Click **Create**. -### 3. Create a New Profile for the Custom Benchmark Version +## 3. Create a New Profile for the Custom Benchmark Version To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. @@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. Choose the Benchmark Version from the dropdown. 1. Click **Create**. -### 4. Run a Scan Using the Custom Benchmark Version +## 4. Run a Scan Using the Custom Benchmark Version Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. 
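The `kubectl create configmap` step described in the custom-benchmark patch above can also be written declaratively. In this sketch the benchmark name `foo`, the ConfigMap name, and the file names under `data` are placeholders; the patch only requires that the ConfigMap live in the namespace named after the custom benchmark and carry the benchmark's config files.

```yaml
# Illustrative only: file names under `data` are placeholders for the
# kube-bench config files a custom benchmark "foo" would ship.
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo        # placeholder; the docs elide the actual ConfigMap name
  namespace: foo   # namespace named after the custom benchmark version
data:
  config.yaml: |
    # top-level config for the custom benchmark (placeholder content)
  master.yaml: |
    # test definitions for control plane nodes (placeholder content)
```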
From 25ca84160c1a2047c56699ff0667864282c3eb80 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:20:06 -0400 Subject: [PATCH 24/30] fix harvester/overview.md --- docs/integrations-in-rancher/harvester/overview.md | 6 +++--- .../integrations-in-rancher/harvester/overview.md | 6 +++--- .../integrations-in-rancher/harvester/overview.md | 6 +++--- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/integrations-in-rancher/harvester/overview.md b/docs/integrations-in-rancher/harvester/overview.md index d22afe15965e..edd54a6f5574 100644 --- a/docs/integrations-in-rancher/harvester/overview.md +++ b/docs/integrations-in-rancher/harvester/overview.md @@ -8,7 +8,7 @@ title: Overview Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. -### Feature Flag +## Feature Flag The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md) for more information on feature flags in Rancher. @@ -22,7 +22,7 @@ To navigate to the Harvester cluster, click **☰ > Virtualization Management**. * Users may import a Harvester cluster only on the Virtualization Management page. Importing a cluster on the Cluster Management page is not supported, and a warning will advise you to return to the VM page to do so. -### Harvester Node Driver +## Harvester Node Driver The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node-driver/) is generally available for RKE and RKE2 options in Rancher. The node driver is available whether or not the Harvester feature flag is enabled. Note that the node driver is off by default. Users may create RKE or RKE2 clusters on Harvester only from the Cluster Management page. @@ -30,7 +30,7 @@ Harvester allows `.ISO` images to be uploaded and displayed through the Harveste See [Provisioning Drivers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. -### Port Requirements +## Port Requirements The port requirements for the Harvester cluster can be found [here](https://docs.harvesterhci.io/v1.1/install/requirements#networking). diff --git a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md index d22afe15965e..edd54a6f5574 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/harvester/overview.md @@ -8,7 +8,7 @@ title: Overview Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. 
Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. -### Feature Flag +## Feature Flag The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md) for more information on feature flags in Rancher. @@ -22,7 +22,7 @@ To navigate to the Harvester cluster, click **☰ > Virtualization Management**. * Users may import a Harvester cluster only on the Virtualization Management page. Importing a cluster on the Cluster Management page is not supported, and a warning will advise you to return to the VM page to do so. -### Harvester Node Driver +## Harvester Node Driver The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node-driver/) is generally available for RKE and RKE2 options in Rancher. The node driver is available whether or not the Harvester feature flag is enabled. Note that the node driver is off by default. Users may create RKE or RKE2 clusters on Harvester only from the Cluster Management page. @@ -30,7 +30,7 @@ Harvester allows `.ISO` images to be uploaded and displayed through the Harveste See [Provisioning Drivers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. -### Port Requirements +## Port Requirements The port requirements for the Harvester cluster can be found [here](https://docs.harvesterhci.io/v1.1/install/requirements#networking). diff --git a/versioned_docs/version-2.9/integrations-in-rancher/harvester/overview.md b/versioned_docs/version-2.9/integrations-in-rancher/harvester/overview.md index d22afe15965e..edd54a6f5574 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/harvester/overview.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/harvester/overview.md @@ -8,7 +8,7 @@ title: Overview Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application. -### Feature Flag +## Feature Flag The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md) for more information on feature flags in Rancher. @@ -22,7 +22,7 @@ To navigate to the Harvester cluster, click **☰ > Virtualization Management**. * Users may import a Harvester cluster only on the Virtualization Management page. Importing a cluster on the Cluster Management page is not supported, and a warning will advise you to return to the VM page to do so. 
-### Harvester Node Driver +## Harvester Node Driver The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node-driver/) is generally available for RKE and RKE2 options in Rancher. The node driver is available whether or not the Harvester feature flag is enabled. Note that the node driver is off by default. Users may create RKE or RKE2 clusters on Harvester only from the Cluster Management page. @@ -30,7 +30,7 @@ Harvester allows `.ISO` images to be uploaded and displayed through the Harveste See [Provisioning Drivers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher. -### Port Requirements +## Port Requirements The port requirements for the Harvester cluster can be found [here](https://docs.harvesterhci.io/v1.1/install/requirements#networking). From d30297a7ec83667ec1b16a66d7a67c236c9cf5d1 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:22:53 -0400 Subject: [PATCH 25/30] fix logging-architecture.md --- docs/integrations-in-rancher/logging/logging-architecture.md | 2 +- .../integrations-in-rancher/logging/logging-architecture.md | 2 +- .../integrations-in-rancher/logging/logging-architecture.md | 2 +- .../integrations-in-rancher/logging/logging-architecture.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/integrations-in-rancher/logging/logging-architecture.md b/docs/integrations-in-rancher/logging/logging-architecture.md index f4b716a6c2e0..ec56b8d1ef64 100644 --- a/docs/integrations-in-rancher/logging/logging-architecture.md +++ b/docs/integrations-in-rancher/logging/logging-architecture.md @@ -10,7 +10,7 @@ This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) -### How the Logging Operator Works +## How the Logging Operator Works The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md index f4b716a6c2e0..ec56b8d1ef64 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-architecture.md @@ -10,7 +10,7 @@ This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) -### How the Logging Operator Works +## How the Logging Operator Works The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. 
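The logging-architecture hunks above describe the pipeline that the Logging operator assembles from Fluent Bit collectors. Downstream of that collection step, the operator is typically configured through its own custom resources; a minimal sketch, assuming the upstream Logging operator API group `logging.banzaicloud.io/v1beta1` and an Elasticsearch endpoint that are not part of this patch, might look like:

```yaml
# Hedged sketch of a Flow/Output pair for the Logging operator; the API group,
# kinds, and the elasticsearch endpoint are assumptions, not taken from the patch.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: example-output
  namespace: example-app
spec:
  elasticsearch:
    host: elasticsearch.example.internal
    port: 9200
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: example-flow
  namespace: example-app
spec:
  match:
    - select:
        labels:
          app: example-app
  localOutputRefs:
    - example-output
```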
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md index f4b716a6c2e0..ec56b8d1ef64 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-architecture.md @@ -10,7 +10,7 @@ This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) -### How the Logging Operator Works +## How the Logging Operator Works The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. diff --git a/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-architecture.md b/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-architecture.md index f4b716a6c2e0..ec56b8d1ef64 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-architecture.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-architecture.md @@ -10,7 +10,7 @@ This section summarizes the architecture of the Rancher logging application. For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture) -### How the Logging Operator Works +## How the Logging Operator Works The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. From 0a583cfd9640bf230c4c6f5cebf7496781269051 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:25:29 -0400 Subject: [PATCH 26/30] fix logging-helm-chart-options.md --- .../logging/logging-helm-chart-options.md | 16 ++++++++-------- .../logging/logging-helm-chart-options.md | 14 +++++++------- .../logging/logging-helm-chart-options.md | 12 ++++++------ .../logging/logging-helm-chart-options.md | 14 +++++++------- 4 files changed, 28 insertions(+), 28 deletions(-) diff --git a/docs/integrations-in-rancher/logging/logging-helm-chart-options.md b/docs/integrations-in-rancher/logging/logging-helm-chart-options.md index d68865a3afce..2c1a79e41323 100644 --- a/docs/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/docs/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -6,7 +6,7 @@ title: rancher-logging Helm Chart Options -### Enable/Disable Windows Node Logging +## Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. @@ -21,7 +21,7 @@ Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists whe ::: -### Working with a Custom Docker Root Directory +## Working with a Custom Docker Root Directory If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`. @@ -31,11 +31,11 @@ Note that this only affects Linux nodes. If there are any Windows nodes in the cluster, the change will not be applicable to those nodes. 
-### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints You can add your own `nodeSelector` settings and add `tolerations` for additional taints by editing the logging Helm chart values. For details, see [this page.](taints-and-tolerations.md) -### Enabling the Logging Application to Work with SELinux +## Enabling the Logging Application to Work with SELinux :::note Requirements: @@ -49,7 +49,7 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP Then, when installing the logging application, configure the chart to be SELinux aware by changing `global.seLinux.enabled` to `true` in the `values.yaml`. -### Additional Logging Sources +## Additional Logging Sources By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types. @@ -72,7 +72,7 @@ When enabled, Rancher collects all additional node and control plane logs the pr If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs. -### Systemd Configuration +## Systemd Configuration In Rancher logging, `SystemdLogPath` must be configured for K3s and RKE2 Kubernetes distributions. @@ -87,7 +87,7 @@ K3s and RKE2 Kubernetes distributions log to journald, which is the subsystem of * If `/var/log/journal` exists, then use `/var/log/journal`. * If `/var/log/journal` does not exist, then use `/run/log/journal`. -:::note Notes: +:::note If any value not described above is returned, Rancher Logging will not be able to collect control plane logs. To address this issue, you will need to perform the following actions on every control plane node: @@ -95,4 +95,4 @@ If any value not described above is returned, Rancher Logging will not be able t * Reboot your machine. * Set `systemdLogPath` to `/run/log/journal`. -::: \ No newline at end of file +::: diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md index d68865a3afce..40a2797b34b8 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -6,7 +6,7 @@ title: rancher-logging Helm Chart Options -### Enable/Disable Windows Node Logging +## Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. @@ -21,7 +21,7 @@ Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists whe ::: -### Working with a Custom Docker Root Directory +## Working with a Custom Docker Root Directory If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`. @@ -31,11 +31,11 @@ Note that this only affects Linux nodes. If there are any Windows nodes in the cluster, the change will not be applicable to those nodes. 
-### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints You can add your own `nodeSelector` settings and add `tolerations` for additional taints by editing the logging Helm chart values. For details, see [this page.](taints-and-tolerations.md) -### Enabling the Logging Application to Work with SELinux +## Enabling the Logging Application to Work with SELinux :::note Requirements: @@ -49,7 +49,7 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP Then, when installing the logging application, configure the chart to be SELinux aware by changing `global.seLinux.enabled` to `true` in the `values.yaml`. -### Additional Logging Sources +## Additional Logging Sources By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types. @@ -72,7 +72,7 @@ When enabled, Rancher collects all additional node and control plane logs the pr If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs. -### Systemd Configuration +## Systemd Configuration In Rancher logging, `SystemdLogPath` must be configured for K3s and RKE2 Kubernetes distributions. @@ -87,7 +87,7 @@ K3s and RKE2 Kubernetes distributions log to journald, which is the subsystem of * If `/var/log/journal` exists, then use `/var/log/journal`. * If `/var/log/journal` does not exist, then use `/run/log/journal`. -:::note Notes: +:::note If any value not described above is returned, Rancher Logging will not be able to collect control plane logs. To address this issue, you will need to perform the following actions on every control plane node: diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md index d68865a3afce..8cc39e2bb039 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -6,7 +6,7 @@ title: rancher-logging Helm Chart Options -### Enable/Disable Windows Node Logging +## Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. @@ -21,7 +21,7 @@ Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists whe ::: -### Working with a Custom Docker Root Directory +## Working with a Custom Docker Root Directory If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`. @@ -35,7 +35,7 @@ If there are any Windows nodes in the cluster, the change will not be applicable You can add your own `nodeSelector` settings and add `tolerations` for additional taints by editing the logging Helm chart values. 
For details, see [this page.](taints-and-tolerations.md) -### Enabling the Logging Application to Work with SELinux +## Enabling the Logging Application to Work with SELinux :::note Requirements: @@ -49,7 +49,7 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP Then, when installing the logging application, configure the chart to be SELinux aware by changing `global.seLinux.enabled` to `true` in the `values.yaml`. -### Additional Logging Sources +## Additional Logging Sources By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types. @@ -72,7 +72,7 @@ When enabled, Rancher collects all additional node and control plane logs the pr If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs. -### Systemd Configuration +## Systemd Configuration In Rancher logging, `SystemdLogPath` must be configured for K3s and RKE2 Kubernetes distributions. @@ -87,7 +87,7 @@ K3s and RKE2 Kubernetes distributions log to journald, which is the subsystem of * If `/var/log/journal` exists, then use `/var/log/journal`. * If `/var/log/journal` does not exist, then use `/run/log/journal`. -:::note Notes: +:::note If any value not described above is returned, Rancher Logging will not be able to collect control plane logs. To address this issue, you will need to perform the following actions on every control plane node: diff --git a/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-helm-chart-options.md b/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-helm-chart-options.md index d68865a3afce..40a2797b34b8 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-helm-chart-options.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/logging/logging-helm-chart-options.md @@ -6,7 +6,7 @@ title: rancher-logging Helm Chart Options -### Enable/Disable Windows Node Logging +## Enable/Disable Windows Node Logging You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`. @@ -21,7 +21,7 @@ Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists whe ::: -### Working with a Custom Docker Root Directory +## Working with a Custom Docker Root Directory If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`. @@ -31,11 +31,11 @@ Note that this only affects Linux nodes. If there are any Windows nodes in the cluster, the change will not be applicable to those nodes. -### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints You can add your own `nodeSelector` settings and add `tolerations` for additional taints by editing the logging Helm chart values. 
For details, see [this page.](taints-and-tolerations.md) -### Enabling the Logging Application to Work with SELinux +## Enabling the Logging Application to Work with SELinux :::note Requirements: @@ -49,7 +49,7 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP Then, when installing the logging application, configure the chart to be SELinux aware by changing `global.seLinux.enabled` to `true` in the `values.yaml`. -### Additional Logging Sources +## Additional Logging Sources By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types. @@ -72,7 +72,7 @@ When enabled, Rancher collects all additional node and control plane logs the pr If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs. -### Systemd Configuration +## Systemd Configuration In Rancher logging, `SystemdLogPath` must be configured for K3s and RKE2 Kubernetes distributions. @@ -87,7 +87,7 @@ K3s and RKE2 Kubernetes distributions log to journald, which is the subsystem of * If `/var/log/journal` exists, then use `/var/log/journal`. * If `/var/log/journal` does not exist, then use `/run/log/journal`. -:::note Notes: +:::note If any value not described above is returned, Rancher Logging will not be able to collect control plane logs. To address this issue, you will need to perform the following actions on every control plane node: From 7ed34be460d1249bcfdd4424e12997ef6637f85d Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:32:45 -0400 Subject: [PATCH 27/30] fixed taints-and-tolerations.md --- .../integrations-in-rancher/logging/taints-and-tolerations.md | 4 ++-- .../integrations-in-rancher/logging/taints-and-tolerations.md | 4 ++-- .../integrations-in-rancher/logging/taints-and-tolerations.md | 4 ++-- .../integrations-in-rancher/logging/taints-and-tolerations.md | 4 ++-- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/integrations-in-rancher/logging/taints-and-tolerations.md b/docs/integrations-in-rancher/logging/taints-and-tolerations.md index 327cf554fdaa..0147598e84cb 100644 --- a/docs/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/docs/integrations-in-rancher/logging/taints-and-tolerations.md @@ -20,7 +20,7 @@ Both provide choice for the what node(s) the pod will run on. - [Adding NodeSelector Settings and Tolerations for Custom Taints](#adding-nodeselector-settings-and-tolerations-for-custom-taints) -### Default Implementation in Rancher's Logging Stack +## Default Implementation in Rancher's Logging Stack By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. @@ -47,7 +47,7 @@ In the above example, we ensure that our pod only runs on Linux nodes, and we ad You can do the same with Rancher's existing taints, or with your own custom ones. 
-### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md index 327cf554fdaa..0147598e84cb 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/logging/taints-and-tolerations.md @@ -20,7 +20,7 @@ Both provide choice for the what node(s) the pod will run on. - [Adding NodeSelector Settings and Tolerations for Custom Taints](#adding-nodeselector-settings-and-tolerations-for-custom-taints) -### Default Implementation in Rancher's Logging Stack +## Default Implementation in Rancher's Logging Stack By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. @@ -47,7 +47,7 @@ In the above example, we ensure that our pod only runs on Linux nodes, and we ad You can do the same with Rancher's existing taints, or with your own custom ones. -### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md index 327cf554fdaa..0147598e84cb 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/logging/taints-and-tolerations.md @@ -20,7 +20,7 @@ Both provide choice for the what node(s) the pod will run on. - [Adding NodeSelector Settings and Tolerations for Custom Taints](#adding-nodeselector-settings-and-tolerations-for-custom-taints) -### Default Implementation in Rancher's Logging Stack +## Default Implementation in Rancher's Logging Stack By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. @@ -47,7 +47,7 @@ In the above example, we ensure that our pod only runs on Linux nodes, and we ad You can do the same with Rancher's existing taints, or with your own custom ones. -### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values. 
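The values block that "the following" points to is likewise elided from this diff context. As a hedged illustration only, with `key`, `value`, and the `example` label as placeholders rather than real taint names or node labels, the chart values would take roughly this shape:

```yaml
# Placeholder names: replace key/value/effect and the nodeSelector label
# with the custom taints and node labels actually used in your cluster.
tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
nodeSelector:
  example: "true"
```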
diff --git a/versioned_docs/version-2.9/integrations-in-rancher/logging/taints-and-tolerations.md b/versioned_docs/version-2.9/integrations-in-rancher/logging/taints-and-tolerations.md index 327cf554fdaa..0147598e84cb 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/logging/taints-and-tolerations.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/logging/taints-and-tolerations.md @@ -20,7 +20,7 @@ Both provide choice for the what node(s) the pod will run on. - [Adding NodeSelector Settings and Tolerations for Custom Taints](#adding-nodeselector-settings-and-tolerations-for-custom-taints) -### Default Implementation in Rancher's Logging Stack +## Default Implementation in Rancher's Logging Stack By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. @@ -47,7 +47,7 @@ In the above example, we ensure that our pod only runs on Linux nodes, and we ad You can do the same with Rancher's existing taints, or with your own custom ones. -### Adding NodeSelector Settings and Tolerations for Custom Taints +## Adding NodeSelector Settings and Tolerations for Custom Taints If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values. From 0e6787fb1a52ac795d30f4acdce7e4a2149ac06d Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 11:51:37 -0400 Subject: [PATCH 28/30] fixed longhorn/overview.md --- docs/integrations-in-rancher/longhorn/overview.md | 12 ++++++------ .../integrations-in-rancher/longhorn/overview.md | 12 ++++++------ .../integrations-in-rancher/longhorn/overview.md | 12 ++++++------ 3 files changed, 18 insertions(+), 18 deletions(-) diff --git a/docs/integrations-in-rancher/longhorn/overview.md b/docs/integrations-in-rancher/longhorn/overview.md index db7e4a620761..13a581175d23 100644 --- a/docs/integrations-in-rancher/longhorn/overview.md +++ b/docs/integrations-in-rancher/longhorn/overview.md @@ -25,7 +25,7 @@ With Longhorn, you can: ![Longhorn Dashboard](/img/longhorn-screenshot.png) -### Installing Longhorn with Rancher +## Installing Longhorn with Rancher 1. Fulfill all [Installation Requirements.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements) 1. Go to the cluster where you want to install Longhorn. @@ -37,14 +37,14 @@ With Longhorn, you can: **Result:** Longhorn is deployed in the Kubernetes cluster. -### Accessing Longhorn from the Rancher UI +## Accessing Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed. In the left navigation menu, click **Longhorn**. 1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section. **Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3. -### Uninstalling Longhorn from the Rancher UI +## Uninstalling Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed and click **Apps**. 1. Click **Installed Apps**. @@ -53,15 +53,15 @@ With Longhorn, you can: **Result:** Longhorn is uninstalled. 
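The Longhorn install and uninstall steps above are UI-driven. If you prefer to review or tune chart values on the install screen before deploying, a minimal `values.yaml` sketch such as the one below can be supplied; the two keys shown are taken from the upstream Longhorn chart and should be verified against the chart version you deploy:

```yaml
# Sketch: commonly tuned Longhorn chart values (verify against your chart version).
defaultSettings:
  defaultDataPath: /var/lib/longhorn/   # where replica data is stored on each node
persistence:
  defaultClassReplicaCount: 3           # replicas per volume for the default StorageClass
```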
-### GitHub Repository +## GitHub Repository The Longhorn project is available [here.](https://github.com/longhorn/longhorn) -### Documentation +## Documentation The Longhorn documentation is [here.](https://longhorn.io/docs/) -### Architecture +## Architecture Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. diff --git a/versioned_docs/version-2.8/integrations-in-rancher/longhorn/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/longhorn/overview.md index db7e4a620761..13a581175d23 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/longhorn/overview.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/longhorn/overview.md @@ -25,7 +25,7 @@ With Longhorn, you can: ![Longhorn Dashboard](/img/longhorn-screenshot.png) -### Installing Longhorn with Rancher +## Installing Longhorn with Rancher 1. Fulfill all [Installation Requirements.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements) 1. Go to the cluster where you want to install Longhorn. @@ -37,14 +37,14 @@ With Longhorn, you can: **Result:** Longhorn is deployed in the Kubernetes cluster. -### Accessing Longhorn from the Rancher UI +## Accessing Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed. In the left navigation menu, click **Longhorn**. 1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section. **Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3. -### Uninstalling Longhorn from the Rancher UI +## Uninstalling Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed and click **Apps**. 1. Click **Installed Apps**. @@ -53,15 +53,15 @@ With Longhorn, you can: **Result:** Longhorn is uninstalled. -### GitHub Repository +## GitHub Repository The Longhorn project is available [here.](https://github.com/longhorn/longhorn) -### Documentation +## Documentation The Longhorn documentation is [here.](https://longhorn.io/docs/) -### Architecture +## Architecture Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. diff --git a/versioned_docs/version-2.9/integrations-in-rancher/longhorn/overview.md b/versioned_docs/version-2.9/integrations-in-rancher/longhorn/overview.md index db7e4a620761..13a581175d23 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/longhorn/overview.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/longhorn/overview.md @@ -25,7 +25,7 @@ With Longhorn, you can: ![Longhorn Dashboard](/img/longhorn-screenshot.png) -### Installing Longhorn with Rancher +## Installing Longhorn with Rancher 1. Fulfill all [Installation Requirements.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements) 1. Go to the cluster where you want to install Longhorn. @@ -37,14 +37,14 @@ With Longhorn, you can: **Result:** Longhorn is deployed in the Kubernetes cluster. -### Accessing Longhorn from the Rancher UI +## Accessing Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed. In the left navigation menu, click **Longhorn**. 1. 
On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section. **Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3. -### Uninstalling Longhorn from the Rancher UI +## Uninstalling Longhorn from the Rancher UI 1. Go to the cluster where Longhorn is installed and click **Apps**. 1. Click **Installed Apps**. @@ -53,15 +53,15 @@ With Longhorn, you can: **Result:** Longhorn is uninstalled. -### GitHub Repository +## GitHub Repository The Longhorn project is available [here.](https://github.com/longhorn/longhorn) -### Documentation +## Documentation The Longhorn documentation is [here.](https://longhorn.io/docs/) -### Architecture +## Architecture Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. From 064f8d154ecf084287f53d7499196380eaee41ff Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 12:01:43 -0400 Subject: [PATCH 29/30] fixed monitoring-and-alerting.md --- .../monitoring-and-alerting/monitoring-and-alerting.md | 2 +- .../monitoring-and-alerting/monitoring-and-alerting.md | 2 +- .../monitoring-and-alerting/monitoring-and-alerting.md | 2 +- .../monitoring-and-alerting/monitoring-and-alerting.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md b/docs/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md index da6460a0da70..962a61a3ea9c 100644 --- a/docs/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md +++ b/docs/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md @@ -15,7 +15,7 @@ For information on V1 monitoring and alerting, available in Rancher v2.2 up to v Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster. -### Features +## Features Prometheus lets you view metrics from your Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or Grafana, which is an analytics viewing platform deployed along with Prometheus. diff --git a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md index da6460a0da70..962a61a3ea9c 100644 --- a/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md +++ b/versioned_docs/version-2.7/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md @@ -15,7 +15,7 @@ For information on V1 monitoring and alerting, available in Rancher v2.2 up to v Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster. -### Features +## Features Prometheus lets you view metrics from your Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or Grafana, which is an analytics viewing platform deployed along with Prometheus. 
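As an illustration of how workload metrics reach the Prometheus instance that `rancher-monitoring` deploys, an application can expose a `/metrics` endpoint and be scraped through a `ServiceMonitor`, one of the Prometheus Operator CRDs installed by the chart. Every name below (namespace, labels, port) is a placeholder, not part of the chart:

```yaml
# Sketch: scrape a hypothetical app's metrics with the Prometheus Operator CRDs
# shipped by rancher-monitoring. All names are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: my-app          # must match the labels on the app's Service
  endpoints:
    - port: metrics        # named Service port that serves /metrics
      interval: 30s
```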
diff --git a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md index da6460a0da70..962a61a3ea9c 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md @@ -15,7 +15,7 @@ For information on V1 monitoring and alerting, available in Rancher v2.2 up to v Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster. -### Features +## Features Prometheus lets you view metrics from your Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or Grafana, which is an analytics viewing platform deployed along with Prometheus. diff --git a/versioned_docs/version-2.9/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md b/versioned_docs/version-2.9/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md index da6460a0da70..962a61a3ea9c 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md @@ -15,7 +15,7 @@ For information on V1 monitoring and alerting, available in Rancher v2.2 up to v Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster. -### Features +## Features Prometheus lets you view metrics from your Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or Grafana, which is an analytics viewing platform deployed along with Prometheus. From 05367ada1f34a16c1c1160a0146eb1e4302d9c63 Mon Sep 17 00:00:00 2001 From: martyav Date: Mon, 23 Sep 2024 12:05:04 -0400 Subject: [PATCH 30/30] fixed neuvector.overview.md --- .../neuvector/overview.md | 27 +++++++++---------- .../neuvector/overview.md | 27 +++++++++---------- .../neuvector/overview.md | 27 +++++++++---------- 3 files changed, 39 insertions(+), 42 deletions(-) diff --git a/docs/integrations-in-rancher/neuvector/overview.md b/docs/integrations-in-rancher/neuvector/overview.md index cec0d643afdd..199c51a14f6e 100644 --- a/docs/integrations-in-rancher/neuvector/overview.md +++ b/docs/integrations-in-rancher/neuvector/overview.md @@ -6,13 +6,13 @@ title: Overview -### NeuVector Integration in Rancher +## NeuVector Integration in Rancher [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../../reference-guides/rancher-security/rancher-security.md). NeuVector can be enabled through a Helm chart that may be installed either through **Apps** or through the **Cluster Tools** button in the Rancher UI. 
Once the Helm chart is installed, users can easily [deploy and manage NeuVector clusters within Rancher](https://open-docs.neuvector.com/deploying/rancher#deploy-and-manage-neuvector-through-rancher-apps-marketplace). -### Installing NeuVector with Rancher +## Installing NeuVector with Rancher The Harvester Helm Chart is used to manage access to the NeuVector UI in Rancher where users can navigate directly to deploy and manage their NeuVector clusters. @@ -44,12 +44,12 @@ Some examples are as follows: 1. Click on **Cluster Tools** at the bottom of the left navigation bar. 1. Repeat step 4 above to select your container runtime accordingly, then click **Install** again. -### Accessing NeuVector from the Rancher UI +## Accessing NeuVector from the Rancher UI 1. Navigate to the cluster explorer of the cluster where NeuVector is installed. In the left navigation bar, click **NeuVector**. 1. Click the external link to go to the NeuVector UI. Once the link is selected, users must accept the `END USER LICENSE AGREEMENT` to access the NeuVector UI. -### Uninstalling NeuVector from the Rancher UI +## Uninstalling NeuVector from the Rancher UI **To uninstall from Apps:** @@ -62,15 +62,15 @@ Some examples are as follows: 1. Click **☰ > Cluster Management**. 1. Click on **Cluster Tools** at the bottom-left of the screen, then click on the trash can icon under the NeuVector chart. Select `Delete the CRD associated with this app` if desired, then click **Delete**. -### GitHub Repository +## GitHub Repository The NeuVector project is available [here](https://github.com/neuvector/neuvector). -### Documentation +## Documentation The NeuVector documentation is [here](https://open-docs.neuvector.com/). -### Architecture +## Architecture The NeuVector security solution contains four types of security containers: Controllers, Enforcers, Managers, and Scanners. A special container called an All-in-One is also provided to combine the Controller, Enforcer, and Manager functions all in one container, primarily for Docker-native deployments. There is also an Updater which, when run, will update the CVE database. @@ -91,7 +91,7 @@ The NeuVector security solution contains four types of security containers: Cont To learn more about NeuVector's architecture, please refer [here](https://open-docs.neuvector.com/basics/overview#architecture). -### CPU and Memory Allocations +## CPU and Memory Allocations Below are the minimum recommended computing resources for the NeuVector chart installation in a default deployment. Note that the resource limit is not set. @@ -105,7 +105,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in \* Minimum 1GB of memory total required for Controller, Manager, and Scanner containers combined. 
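The table above lists minimum recommendations, and as noted the chart does not set resource limits. If you want to pin requests or limits yourself, values along the following lines can be passed at install time. The key names are assumptions based on the upstream NeuVector chart layout, and the numbers are placeholders rather than the documented minimums, so check both against the chart's `values.yaml` and the table above:

```yaml
# Sketch/assumption: resource requests for NeuVector components.
# Key names follow the upstream neuvector chart layout; verify before use,
# and substitute the documented minimum recommendations for the placeholder numbers.
controller:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
manager:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
cve:
  scanner:
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
```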
-### Hardened Cluster Support - Calico and Canal +## Hardened Cluster Support - Calico and Canal @@ -162,7 +162,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in -### SELinux-enabled Cluster Support - Calico and Canal +## SELinux-enabled Cluster Support - Calico and Canal To enable SELinux on RKE2 clusters, follow the steps below: @@ -179,12 +179,12 @@ kubectl patch deploy neuvector-scanner-pod -n cattle-neuvector-system --patch '{ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch '{"spec":{"jobTemplate":{"spec":{"template":{"spec":{"securityContext":{"runAsUser": 5400}}}}}}}' ``` -### Cluster Support in an Air-Gapped Environment +## Cluster Support in an Air-Gapped Environment - All NeuVector components are deployable on a cluster in an air-gapped environment without any additional configuration needed. -### Support Limitations +## Support Limitations * Only admins and cluster owners are currently supported. @@ -193,11 +193,10 @@ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch ' * NeuVector is not supported on a Windows cluster. -### Other Limitations +## Other Limitations * Currently, NeuVector feature chart installation fails when a NeuVector partner chart already exists. To work around this issue, uninstall the NeuVector partner chart and reinstall the NeuVector feature chart. * Sometimes when the controllers are not ready, the NeuVector UI is not accessible from the Rancher UI. During this time, controllers will try to restart, and it takes a few minutes for the controllers to be active. * Container runtime is not auto-detected for different cluster types when installing the NeuVector chart. To work around this, you can specify the runtime manually. - diff --git a/versioned_docs/version-2.8/integrations-in-rancher/neuvector/overview.md b/versioned_docs/version-2.8/integrations-in-rancher/neuvector/overview.md index cec0d643afdd..199c51a14f6e 100644 --- a/versioned_docs/version-2.8/integrations-in-rancher/neuvector/overview.md +++ b/versioned_docs/version-2.8/integrations-in-rancher/neuvector/overview.md @@ -6,13 +6,13 @@ title: Overview -### NeuVector Integration in Rancher +## NeuVector Integration in Rancher [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../../reference-guides/rancher-security/rancher-security.md). NeuVector can be enabled through a Helm chart that may be installed either through **Apps** or through the **Cluster Tools** button in the Rancher UI. Once the Helm chart is installed, users can easily [deploy and manage NeuVector clusters within Rancher](https://open-docs.neuvector.com/deploying/rancher#deploy-and-manage-neuvector-through-rancher-apps-marketplace). -### Installing NeuVector with Rancher +## Installing NeuVector with Rancher The Harvester Helm Chart is used to manage access to the NeuVector UI in Rancher where users can navigate directly to deploy and manage their NeuVector clusters. @@ -44,12 +44,12 @@ Some examples are as follows: 1. Click on **Cluster Tools** at the bottom of the left navigation bar. 1. 
Repeat step 4 above to select your container runtime accordingly, then click **Install** again. -### Accessing NeuVector from the Rancher UI +## Accessing NeuVector from the Rancher UI 1. Navigate to the cluster explorer of the cluster where NeuVector is installed. In the left navigation bar, click **NeuVector**. 1. Click the external link to go to the NeuVector UI. Once the link is selected, users must accept the `END USER LICENSE AGREEMENT` to access the NeuVector UI. -### Uninstalling NeuVector from the Rancher UI +## Uninstalling NeuVector from the Rancher UI **To uninstall from Apps:** @@ -62,15 +62,15 @@ Some examples are as follows: 1. Click **☰ > Cluster Management**. 1. Click on **Cluster Tools** at the bottom-left of the screen, then click on the trash can icon under the NeuVector chart. Select `Delete the CRD associated with this app` if desired, then click **Delete**. -### GitHub Repository +## GitHub Repository The NeuVector project is available [here](https://github.com/neuvector/neuvector). -### Documentation +## Documentation The NeuVector documentation is [here](https://open-docs.neuvector.com/). -### Architecture +## Architecture The NeuVector security solution contains four types of security containers: Controllers, Enforcers, Managers, and Scanners. A special container called an All-in-One is also provided to combine the Controller, Enforcer, and Manager functions all in one container, primarily for Docker-native deployments. There is also an Updater which, when run, will update the CVE database. @@ -91,7 +91,7 @@ The NeuVector security solution contains four types of security containers: Cont To learn more about NeuVector's architecture, please refer [here](https://open-docs.neuvector.com/basics/overview#architecture). -### CPU and Memory Allocations +## CPU and Memory Allocations Below are the minimum recommended computing resources for the NeuVector chart installation in a default deployment. Note that the resource limit is not set. @@ -105,7 +105,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in \* Minimum 1GB of memory total required for Controller, Manager, and Scanner containers combined. -### Hardened Cluster Support - Calico and Canal +## Hardened Cluster Support - Calico and Canal @@ -162,7 +162,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in -### SELinux-enabled Cluster Support - Calico and Canal +## SELinux-enabled Cluster Support - Calico and Canal To enable SELinux on RKE2 clusters, follow the steps below: @@ -179,12 +179,12 @@ kubectl patch deploy neuvector-scanner-pod -n cattle-neuvector-system --patch '{ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch '{"spec":{"jobTemplate":{"spec":{"template":{"spec":{"securityContext":{"runAsUser": 5400}}}}}}}' ``` -### Cluster Support in an Air-Gapped Environment +## Cluster Support in an Air-Gapped Environment - All NeuVector components are deployable on a cluster in an air-gapped environment without any additional configuration needed. -### Support Limitations +## Support Limitations * Only admins and cluster owners are currently supported. @@ -193,11 +193,10 @@ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch ' * NeuVector is not supported on a Windows cluster. -### Other Limitations +## Other Limitations * Currently, NeuVector feature chart installation fails when a NeuVector partner chart already exists. 
To work around this issue, uninstall the NeuVector partner chart and reinstall the NeuVector feature chart. * Sometimes when the controllers are not ready, the NeuVector UI is not accessible from the Rancher UI. During this time, controllers will try to restart, and it takes a few minutes for the controllers to be active. * Container runtime is not auto-detected for different cluster types when installing the NeuVector chart. To work around this, you can specify the runtime manually. - diff --git a/versioned_docs/version-2.9/integrations-in-rancher/neuvector/overview.md b/versioned_docs/version-2.9/integrations-in-rancher/neuvector/overview.md index cec0d643afdd..199c51a14f6e 100644 --- a/versioned_docs/version-2.9/integrations-in-rancher/neuvector/overview.md +++ b/versioned_docs/version-2.9/integrations-in-rancher/neuvector/overview.md @@ -6,13 +6,13 @@ title: Overview -### NeuVector Integration in Rancher +## NeuVector Integration in Rancher [NeuVector 5.x](https://open-docs.neuvector.com/) is an open-source container-centric security platform that is integrated with Rancher. NeuVector offers real-time compliance, visibility, and protection for critical applications and data during runtime. NeuVector provides a firewall, container process/file system monitoring, security auditing with CIS benchmarks, and vulnerability scanning. For more information on Rancher security, please see the [security documentation](../../reference-guides/rancher-security/rancher-security.md). NeuVector can be enabled through a Helm chart that may be installed either through **Apps** or through the **Cluster Tools** button in the Rancher UI. Once the Helm chart is installed, users can easily [deploy and manage NeuVector clusters within Rancher](https://open-docs.neuvector.com/deploying/rancher#deploy-and-manage-neuvector-through-rancher-apps-marketplace). -### Installing NeuVector with Rancher +## Installing NeuVector with Rancher The Harvester Helm Chart is used to manage access to the NeuVector UI in Rancher where users can navigate directly to deploy and manage their NeuVector clusters. @@ -44,12 +44,12 @@ Some examples are as follows: 1. Click on **Cluster Tools** at the bottom of the left navigation bar. 1. Repeat step 4 above to select your container runtime accordingly, then click **Install** again. -### Accessing NeuVector from the Rancher UI +## Accessing NeuVector from the Rancher UI 1. Navigate to the cluster explorer of the cluster where NeuVector is installed. In the left navigation bar, click **NeuVector**. 1. Click the external link to go to the NeuVector UI. Once the link is selected, users must accept the `END USER LICENSE AGREEMENT` to access the NeuVector UI. -### Uninstalling NeuVector from the Rancher UI +## Uninstalling NeuVector from the Rancher UI **To uninstall from Apps:** @@ -62,15 +62,15 @@ Some examples are as follows: 1. Click **☰ > Cluster Management**. 1. Click on **Cluster Tools** at the bottom-left of the screen, then click on the trash can icon under the NeuVector chart. Select `Delete the CRD associated with this app` if desired, then click **Delete**. -### GitHub Repository +## GitHub Repository The NeuVector project is available [here](https://github.com/neuvector/neuvector). -### Documentation +## Documentation The NeuVector documentation is [here](https://open-docs.neuvector.com/). -### Architecture +## Architecture The NeuVector security solution contains four types of security containers: Controllers, Enforcers, Managers, and Scanners. 
A special container called an All-in-One is also provided to combine the Controller, Enforcer, and Manager functions all in one container, primarily for Docker-native deployments. There is also an Updater which, when run, will update the CVE database. @@ -91,7 +91,7 @@ The NeuVector security solution contains four types of security containers: Cont To learn more about NeuVector's architecture, please refer [here](https://open-docs.neuvector.com/basics/overview#architecture). -### CPU and Memory Allocations +## CPU and Memory Allocations Below are the minimum recommended computing resources for the NeuVector chart installation in a default deployment. Note that the resource limit is not set. @@ -105,7 +105,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in \* Minimum 1GB of memory total required for Controller, Manager, and Scanner containers combined. -### Hardened Cluster Support - Calico and Canal +## Hardened Cluster Support - Calico and Canal @@ -162,7 +162,7 @@ Below are the minimum recommended computing resources for the NeuVector chart in -### SELinux-enabled Cluster Support - Calico and Canal +## SELinux-enabled Cluster Support - Calico and Canal To enable SELinux on RKE2 clusters, follow the steps below: @@ -179,12 +179,12 @@ kubectl patch deploy neuvector-scanner-pod -n cattle-neuvector-system --patch '{ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch '{"spec":{"jobTemplate":{"spec":{"template":{"spec":{"securityContext":{"runAsUser": 5400}}}}}}}' ``` -### Cluster Support in an Air-Gapped Environment +## Cluster Support in an Air-Gapped Environment - All NeuVector components are deployable on a cluster in an air-gapped environment without any additional configuration needed. -### Support Limitations +## Support Limitations * Only admins and cluster owners are currently supported. @@ -193,11 +193,10 @@ kubectl patch cronjob neuvector-updater-pod -n cattle-neuvector-system --patch ' * NeuVector is not supported on a Windows cluster. -### Other Limitations +## Other Limitations * Currently, NeuVector feature chart installation fails when a NeuVector partner chart already exists. To work around this issue, uninstall the NeuVector partner chart and reinstall the NeuVector feature chart. * Sometimes when the controllers are not ready, the NeuVector UI is not accessible from the Rancher UI. During this time, controllers will try to restart, and it takes a few minutes for the controllers to be active. * Container runtime is not auto-detected for different cluster types when installing the NeuVector chart. To work around this, you can specify the runtime manually. -
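For the last limitation, "specify the runtime manually" means selecting the runtime option in the chart values that matches your nodes. The keys below reflect how older NeuVector chart releases exposed this (for example, a K3s/RKE2 node using the k3s containerd socket); they are an assumption and may differ or be absent in newer chart releases that detect the runtime automatically, so confirm them against the chart's `values.yaml`:

```yaml
# Assumption: runtime keys from older neuvector chart releases; newer charts may
# auto-detect the runtime and no longer expose these. Enable only the block that
# matches your nodes' container runtime.
containerd:
  enabled: false
k3s:
  enabled: true
  runtimePath: /run/k3s/containerd/containerd.sock
```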