Merge pull request #1492 from martyav/fix-order-of-headings-2
Fix order of headings 2
btat authored Sep 23, 2024
2 parents cefe2de + 6c694a4 commit c58f080
Showing 75 changed files with 314 additions and 305 deletions.
4 changes: 2 additions & 2 deletions docs/how-to-guides/new-user-guides/add-users-to-projects.md
@@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi

:::

### Adding Members to a New Project
## Adding Members to a New Project

You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md)

### Adding Members to an Existing Project
## Adding Members to an Existing Project

Following project creation, you can add users as project members so that they can access its resources.

@@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps:
1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage)
2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset)

### Prerequisites
## Prerequisites

- To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required.
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
@@ -42,7 +42,7 @@ hostPath | `host-path`

To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md)

### 1. Add a storage class and configure it to use your storage
## 1. Add a storage class and configure it to use your storage

These steps describe how to set up a storage class at the cluster level.

@@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level.

For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters)
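
For reference, a minimal StorageClass might look like the following sketch. The name and the EBS-specific provisioner and `type` parameter are illustrative; substitute the provisioner and parameters that match your own storage.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class       # illustrative name
provisioner: kubernetes.io/aws-ebs  # in-tree Amazon EBS provisioner; pick the one for your storage
parameters:
  type: gp2                         # valid parameters depend on the chosen provisioner
reclaimPolicy: Delete
allowVolumeExpansion: true
```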

### 2. Use the Storage Class for Pods Deployed with a StatefulSet
## 2. Use the Storage Class for Pods Deployed with a StatefulSet

StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim.
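
A minimal sketch of such a StatefulSet is shown below. It assumes the illustrative StorageClass name `example-storage-class` from the previous step; the workload image, labels, and sizes are placeholders.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset
spec:
  serviceName: example
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:                            # one PersistentVolumeClaim is created per Pod from this template
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: example-storage-class   # the StorageClass created in step 1
        resources:
          requests:
            storage: 1Gi
```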

@@ -88,4 +88,4 @@ To attach the PVC to an existing workload,
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save**.

**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.
@@ -20,20 +20,20 @@ To set up storage, follow these steps:
2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage)
3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset)

### Prerequisites
## Prerequisites

- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference)
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.

### 1. Set up persistent storage
## 1. Set up persistent storage

Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.

The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../../provisioning-storage-examples/vsphere-storage.md) [NFS,](../../provisioning-storage-examples/nfs-storage.md) or Amazon's [EBS.](../../provisioning-storage-examples/persistent-storage-in-amazon-ebs.md)

If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md).

### 2. Add a PersistentVolume that refers to the persistent storage
## 2. Add a PersistentVolume that refers to the persistent storage

These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes.
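
As an illustration, a PersistentVolume that maps to an existing NFS export might look like the sketch below; the server, path, and capacity are placeholders for your own storage.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi                        # should describe the size of the existing volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                                   # maps to storage that already exists; Rancher does not create it
    server: nfs.example.com
    path: /exports/data
```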

@@ -52,7 +52,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku
**Result:** Your new persistent volume is created.


### 3. Use the Storage Class for Pods Deployed with a StatefulSet
## 3. Use the Storage Class for Pods Deployed with a StatefulSet

StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to a PersistentVolume as defined in its PersistentVolumeClaim.
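
The sketch below shows the claim each Pod would receive, assuming a statically provisioned PersistentVolume such as the illustrative `example-pv` above. The same `spec` can be embedded in the StatefulSet's `volumeClaimTemplates`; setting `storageClassName` to an empty string is one common way to disable dynamic provisioning so the claim binds to an existing PV.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""       # empty string: bind to a statically provisioned PV instead of dynamically provisioning one
  resources:
    requests:
      storage: 10Gi          # must be satisfiable by the PV's capacity
```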

@@ -86,4 +86,4 @@ The following steps describe how to assign persistent storage to an existing wor
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.

**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
@@ -95,4 +95,4 @@ If any value not described above is returned, Rancher Logging will not be able t
* Reboot your machine.
* Set `systemdLogPath` to `/run/log/journal`.

:::
:::
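
As a rough illustration only, and assuming `systemdLogPath` is exposed as a top-level chart value as the note above implies, the override might look like:

```yaml
# Hypothetical values.yaml override for the logging chart;
# the exact key placement may differ between chart versions.
systemdLogPath: /run/log/journal
```
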
@@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration

Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.

### Droplet Options
## Droplet Options

The **Droplet Options** set your cluster's geographical region and machine specifications.

### Docker Daemon
## Docker Daemon

If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:

@@ -8,11 +8,11 @@ title: Private Clusters

In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".

### Private Nodes
## Private Nodes

Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways.

#### Cloud NAT
### Cloud NAT

:::caution

@@ -22,7 +22,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).

If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution.

#### Private registry
### Private Registry

:::caution

@@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u

If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it.

### Private Control Plane Endpoint
## Private Control Plane Endpoint

If the cluster exposes a public endpoint, Rancher will be able to reach the cluster and no additional steps are needed. However, if the cluster has no public endpoint, you must take additional steps to ensure that Rancher can access the cluster.

#### Cloud NAT
### Cloud NAT

:::caution

@@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. To gain access to the cluster in order to run this command, create a temporary node or use an existing node in the VPC, or log in to (or create an SSH tunnel through) one of the cluster nodes.

#### Direct access
### Direct Access

If the Rancher server runs in the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need access to a [private registry](#private-registry) to download images, as described above.

@@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus
2. On seeing each ProjectHelmChart CR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-monitoring`)**, based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**.
3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md).

### What is a Project?
## What is a Project?

In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project.

### Configuring the Helm release created by a ProjectHelmChart
## Configuring the Helm release created by a ProjectHelmChart

A ProjectHelmChart's `spec.values` corresponds to the `values.yaml` override supplied to the underlying Helm chart that the operator deploys on the user's behalf (a minimal example sketch follows the list below). To see the underlying chart's `values.yaml` spec, either:

- View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator).
- Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which contains both the `values.yaml` and `questions.yaml` that were used to configure the chart (both are embedded directly into the `prometheus-federator` binary).
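
A minimal ProjectHelmChart sketch is shown below. The namespace, name, and `values` content are placeholders built around an illustrative project ID (`p-example`); treat it as a sketch rather than a definitive manifest.

```yaml
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example             # the Project Registration Namespace
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1   # tells Prometheus Federator to pick up this resource
  values:                                         # passed through as the values.yaml override described above
    prometheus:
      prometheusSpec:
        retention: 7d                             # placeholder override; see the chart's values.yaml for real options
```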

### Namespaces
## Namespaces

As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for:

@@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co

:::

### Helm Resources (HelmChart, HelmRelease)
## Helm Resources (HelmChart, HelmRelease)

On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn:

Expand Down Expand Up @@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi
|`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. |
-->

### Prometheus Federator on the Local Cluster
## Prometheus Federator on the Local Cluster

Prometheus Federator is a resource intensive application. Installing it to the local cluster is possible, but **not recommended**.
@@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above:
3. [Node Agents](#3-node-agents)
4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint)

### 1. The Authentication Proxy
## 1. The Authentication Proxy

In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see
the pods. Bob is authenticated through Rancher's authentication proxy.
@@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https://

By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) grants full access to the cluster.

### 2. Cluster Controllers and Cluster Agents
## 2. Cluster Controllers and Cluster Agents

Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server.

@@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs
- Applies the roles and bindings defined in each cluster's global policies
- Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health

### 3. Node Agents
## 3. Node Agents

If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher.

The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots.

### 4. Authorized Cluster Endpoint
## 4. Authorized Cluster Endpoint

An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy.
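
For illustration, a kubeconfig with ACE enabled typically carries an extra context whose cluster entry points directly at the downstream API server instead of the Rancher proxy. The sketch below is simplified, and every name, URL, and credential in it is a placeholder.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster                 # access through the Rancher authentication proxy
    cluster:
      server: https://rancher.example.com/k8s/clusters/c-m-abcde123
  - name: my-cluster-direct          # authorized cluster endpoint: reaches the downstream API server directly
    cluster:
      server: https://downstream.example.com:6443
      certificate-authority-data: <base64-encoded CA for the downstream cluster>
users:
  - name: my-cluster
    user:
      token: <rancher api token>
contexts:
  - name: my-cluster
    context:
      cluster: my-cluster
      user: my-cluster
  - name: my-cluster-direct
    context:
      cluster: my-cluster-direct
      user: my-cluster
current-context: my-cluster
```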

@@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/kubernetes-security-best-practices"/>
</head>

### Restricting cloud metadata API access
## Restricting Cloud Metadata API Access

Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets.
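
For example, one way to restrict pod access to the metadata API with a network policy is an egress rule that allows all destinations except the link-local metadata address. The sketch below applies to every pod in the `default` namespace and assumes your CNI enforces egress network policies.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
  namespace: default
spec:
  podSelector: {}                    # applies to all pods in this namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # cloud metadata endpoint
```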
