diff --git a/doc/README.md b/doc/README.md index 8e555ad89..66f099210 100644 --- a/doc/README.md +++ b/doc/README.md @@ -1,9 +1,13 @@ # Multicluster Global Hub -This document focuses on the features of the multicluster global hub. +The multicluster global hub is a set of components that enable the management of multiple hub clusters from a single hub cluster. You can complete the following tasks by using the multicluster global hub: + +- Deploy regional hub clusters +- List the managed clusters that are managed by all of the regional hub clusters + +The multicluster global hub is useful when a single hub cluster cannot manage the large number of clusters in a high-scale environment. The multicluster global hub designates multiple managed clusters as multiple regional hub clusters. The global hub cluster manages the regional hub clusters. - [Multicluster Global Hub](#multicluster-global-hub) - - [Overview](#overview) - [Use Cases](./global_hub_use_cases.md) - [Architecture](#architecture) - [Multicluster Global Hub Operator](#multicluster-global-hub-operator) @@ -25,19 +29,9 @@ This document focuses on the features of the multicluster global hub. - [Development preview features](./dev-preview.md) - [Known issues](#known-issues) -## Overview - -The multicluster global hub is to resolve the problem of a single hub cluster in high scale environment. Due to the limitation of the kubernetes, the single hub cluster can not handle the large number of managed clusters. The multicluster global hub is designed to solve this problem by splitting the managed clusters into multiple regional hub clusters. The regional hub clusters are managed by the global hub cluster. - -The multicluster global hub is a set of components that enable the management of multiple clusters from a single hub cluster. It is designed to be deployed on a hub cluster and provides the following features: - -- Deploy the regional hub clusters -- List the managed clusters in all the regional hub clusters -- Manage the policies and applications in all the regional hub clusters - ## Use Cases -For understanding the Use Cases solved by Global Hub proceed to [Use Cases](./global_hub_use_cases.md) +You can read about the use cases for multicluster global hub in [Use Cases](./global_hub_use_cases.md). ## Architecture @@ -45,66 +39,81 @@ For understanding the Use Cases solved by Global Hub proceed to [Use Cases](./gl ### Multicluster Global Hub Operator -Operator is for multicluster global hub. It is used to deploy all required components for multicluster management. The components include multicluster-global-hub-manager in the global hub cluster and multicluster-global-hub-agent in the regional hub clusters. +The Multicluster Global Hub Operator contains the components of multicluster global hub. The Operator deploys all of the required components for global multicluster management. The components include `multicluster-global-hub-manager` in the global hub cluster and `multicluster-global-hub-agent` in the regional hub clusters. -The Operator also leverages the manifestwork to deploy the Advanced Cluster Management for Kubernetes in the managed cluster. So the managed cluster is switched to a standard ACM Hub cluster (regional hub cluster). +The Operator also leverages the `manifestwork` custom resource to deploy the Red Hat Advanced Cluster Management for Kubernetes Operator on the managed cluster. 
After the Red Hat Advanced Cluster Management Operator is deployed on the managed cluster, the managed cluster becomes a standard Red Hat Advanced Cluster Management Hub cluster. This hub cluster is now a regional hub cluster. ### Multicluster Global Hub Manager -The manager is used to persist the data into the postgreSQL. The data is from Kafka transport. The manager is also used to post the data to Kafka transport so that it can be synced to the regional hub clusters. +The Multicluster Global Hub Manager is used to persist the data into the `postgreSQL` database. The data is from Kafka transport. The manager also posts the data to the Kafka transport, so it can be synchronized with the data on the regional hub clusters. ### Multicluster Global Hub Agent -The agent is running in the regional hub clusters. It is responsible to sync-up the data between the global hub cluster and the regional hub clusters. For instance, sync-up the managed clusters' info from the regional hub clusters to the global hub cluster and sync-up the policy or application from the global hub cluster to the regional hub clusters. +The Multicluster Global Hub Agent runs on the regional hub clusters. It synchronizes the data between the global hub cluster and the regional hub clusters. For example, the agent synchronizes the information of the managed clusters from the regional hub clusters with the global hub cluster and synchronizes the policy or application from the global hub cluster and the regional hub clusters. ### Multicluster Global Hub Observability -Grafana runs on the global hub cluster as the main service for Global Hub Observability. The Postgres data collected by the Global Hub Manager services as its default DataSource. By exposing the service via route(`multicluster-global-hub-grafana`), you can access the global hub grafana dashboards just like accessing the openshift console. +Grafana runs on the global hub cluster as the main service for Global Hub Observability. The Postgres data collected by the Global Hub Manager is its default DataSource. By exposing the service using the route called `multicluster-global-hub-grafana`, you can access the global hub Grafana dashboards by accessing the Red Hat OpenShift Container Platform console. ## Workings of Global Hub -To understand how Global Hub functions, proceed [here](how_global_hub_works.md). +To understand how Global Hub functions, see [How global hub works](how_global_hub_works.md). ## Quick Start +The following sections provide the steps to start using the Multicluster Global Hub. + ### Prerequisites #### Dependencies -1. **Red Hat Advanced Cluster Management for Kubernetes (RHACM)** 2.7 or later needs to be installed - [Learn more details about RHACM](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7) +- Red Hat Advanced Cluster Management for Kubernetes verison 2.7 or later must be installed and configured. [Learn more details about Red Hat Advanced Cluster Management](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.8) -2. **Crunchy Postgres for Kubernetes** 5.0 or later needs to be provided +- Storage secret - **Crunchy Postgres for Kubernetes** provide a declarative Postgres solution that automatically manages PostgreSQL clusters. 
- - [Learn more details about Crunchy Postgres for Kubernetes](https://access.crunchydata.com/documentation/postgres-operator/v5/) + Both the global hub manager and Grafana services need a postgres database to collect and display data. The data can be accessed by creating a storage secret, + which contains the following two fields: + + - `database_uri`: Required, the URI user must have the permission to create the global hub database in postgres. + - `ca.crt`: Optional, if your database service has TLS enabled, you can provide the appropriate certificate depending on the SSL mode of the connection. If + the SSL mode is `verify-ca` or `verify-full`, then the `ca.crt` certificate must be provided. + + **Note:** There is a [sample script](https://github.com/stolostron/multicluster-global-hub/tree/main/operator/config/samples/storage) available to install postgres in the `hoh-postgres` namespace and create the secret `storage-secret` in namespace `open-cluster-management` automatically. The client version of kubectl must be version 1.21 or later. - Global hub manager and grafana services need Postgres database to collect and display data. The data can be accessed by creating a storage secret `multicluster-global-hub-storage` in namespace `open-cluster-management`, this secret should contains the following two fields: +- Transport secret - - `database_uri`: Required, the URI user should have the permission to create the global hub database in the postgres. - - `ca.crt`: Optional, if your database service has TLS enabled, you can provide the appropriate certificate depending on the SSL mode of the connection. If the SSL mode is `verify-ca` and `verify-full`, then the `ca.crt` certificate must be provided. + Right now, only Kafka transport is supported. You need to create a secret for the Kafka transport. The secret contains the following fields: - > Note: There is a sample script available [here](https://github.com/stolostron/multicluster-global-hub/tree/main/operator/config/samples/storage)(Note:the client version of kubectl must be v1.21+) to install postgres in `hoh-postgres` namespace and create the secret `multicluster-global-hub-storage` in namespace `open-cluster-management` automatically. + - `bootstrap.servers`: Required, the Kafka bootstrap servers. + - `ca.crt`: Optional, if you use the `KafkaUser` custom resource to configure authentication credentials, see [User authentication](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) in the STRIMZI documentation for the steps to extract the `ca.crt` certificate from the secret. + - `client.crt`: Optional, see [User authentication](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) in the STRIMZI documentation for the steps to extract the `user.crt` certificate from the secret. + - `client.key`: Optional, see [User authentication](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) in the STRIMZI documentation for the steps to extract the `user.key` from the secret. + + **Note:** There is a [sample script](https://github.com/stolostron/multicluster-global-hub/tree/main/operator/config/samples/transport) available to automatically install Kafka in the `kafka` namespace and create the secret `transport-secret` in namespace `open-cluster-management`. -3. 
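As a reference for the storage and transport secrets described above, the following is a minimal sketch of creating them with `oc`. The secret names follow the `multicluster-global-hub-storage` and `multicluster-global-hub-transport` names used elsewhere in this document, and the connection URI, bootstrap server address, and certificate file paths are placeholder assumptions that you must replace with values from your own environment.

```
# Sketch: storage secret with the required database_uri and an optional ca.crt.
# The URI and file path are placeholders for your own postgres instance.
oc create secret generic multicluster-global-hub-storage -n open-cluster-management \
  --from-literal=database_uri='postgres://<user>:<password>@<host>:5432/hoh' \
  --from-file=ca.crt=./postgres-ca.crt

# Sketch: transport secret with the required bootstrap.servers and optional TLS credentials.
# The broker address and certificate paths are placeholders for your own Kafka cluster.
oc create secret generic multicluster-global-hub-transport -n open-cluster-management \
  --from-literal=bootstrap.servers='<kafka-bootstrap-host>:443' \
  --from-file=ca.crt=./kafka-ca.crt \
  --from-file=client.crt=./user.crt \
  --from-file=client.key=./user.key
```

If your database or Kafka cluster does not use TLS, omit the optional `--from-file` arguments.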
**Strimzi** 0.33 or later needs to be provided +- Crunchy Postgres for Kubernetes version 5.0 or later needs to be installed - **Strimzi** provides a way to run Kafka cluster on Kubernetes in various deployment configurations. + Crunchy Postgres for Kubernetes provides a declarative Postgres solution that automatically manages PostgreSQL clusters. - [Learn more details about Strimzi](https://strimzi.io/documentation/) + See [Crunchy Postgres for Kubernetes](https://access.crunchydata.com/documentation/postgres-operator/v5/) for more information about Crunchy Postgres for Kubernetes. - Global hub agent need to sync cluster info and policy info to Kafka transport. And global hub manager persist the Kafka transport data to Postgre database. + Global hub manager and Grafana services need a Postgres database to collect and display data. The data can be accessed by creating a storage secret named `multicluster-global-hub-storage` in the `open-cluster-management` namespace. This secret should contain the following two fields: - So, you need to create a secret `multicluster-global-hub-transport` in global hub cluster namespace `open-cluster-management` for the Kafka transport. The secret contains the following fields: + - `database_uri`: Required: The URI user should have the required permission to create the global hub database in postgres. + - `ca.crt`: Optional: If your database service has TLS enabled, you can provide the appropriate certificate depending on the SSL mode of the connection. If the SSL mode is `verify-ca` or `verify-full`, then the `ca.crt` certificate must be provided. - - `bootstrap.servers`: Required, the Kafka bootstrap servers. - - `ca.crt`: Optional, if you use the `KafkaUser` custom resource to configure authentication credentials, you can follow this [document](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) to get the `ca.crt` certificate from the secret. - - `client.crt`: Optional, you can follow this [document](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) to get the `user.crt` certificate from the secret. - - `client.key`: Optional, you can follow this [document](https://strimzi.io/docs/operators/latest/deploying.html#con-securing-client-authentication-str) to get the `user.key` from the secret. + **Note:** There is a sample script available [here](https://github.com/stolostron/multicluster-global-hub/tree/main/operator/config/samples/storage)(Note: the client version of kubectl must be v1.21+) to install postgres in the `hoh-postgres` namespace and automatically create the secret `multicluster-global-hub-storage` in namespace `open-cluster-management`. + +- Strimzi 0.33 or later needs to be installed + + Strimzi provides a way to run a Kafka cluster on Kubernetes in various deployment configurations. + + See the [Strimzi documentation](https://strimzi.io/documentation/) to learn more about Strimzi. - > Note: There is a sample script available [here](https://github.com/stolostron/multicluster-global-hub/tree/main/operator/config/samples/transport) to install kafka in `kafka` namespace and create the secret `multicluster-global-hub-transport` in namespace `open-cluster-management` automatically. + The global hub agent needs to synchronize cluster information and policy information to the Kafka transport. The global hub manager persists the Kafka transport data to the Postgres database. #### Sizing -1. 
[Sizing your RHACM cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/install/installing#sizing-your-cluster) +1. [Sizing your Red Hat Advanced Cluster Management cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/install/installing#sizing-your-cluster) 2. **Minimum requirements for Crunchy Postgres** @@ -121,13 +130,14 @@ To understand how Global Hub functions, proceed [here](how_global_hub_works.md). #### Network configuration -As regional hub is also managedcluster of global hub in RHACM. So the network configuration in RHACM is necessary. Details see [RHACM Networking](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/networking/networking) + +The regional hub is also a managed cluster of global hub in Red Hat Advanced Cluster Management. The network configuration in Red Hat Advanced Cluster Management is necessary. See [Networking](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html/networking/networking) for Red Hat Advanced Cluster Management networking details. 1. Global hub networking requirements | Direction | Protocol | Connection | Port (if specified) | Source address | Destination address | | ------ | ------ | ------ | ------ |------ | ------ | -|Inbound from user's browsers | HTTPS | User need to access the grafana dashboard | 443 | User's browsers | IP address of grafana route | +|Inbound from browser of the user | HTTPS | User need to access the Grafana dashboard | 443 | Browser of the user | IP address of Grafana route | | Outbound to Kafka Cluster | HTTPS | Global hub manager need to get data from Kafka cluster | 443 | multicluster-global-hub-manager-xxx pod | Kafka route host | | Outbound to Postgres database | HTTPS | Global hub manager need to persist data to Postgres database | 443 | multicluster-global-hub-manager-xxx pod | IP address of Postgres database | @@ -139,66 +149,68 @@ As regional hub is also managedcluster of global hub in RHACM. So the network co ### Installation -#### 1. [Install the multicluster global hub operator on a disconnected environment](./disconnected_environment/README.md) +1. [Install the multicluster global hub operator on a disconnected environment](./disconnected_environment/README.md) -#### 2. Install the multicluster global hub operator from OpenShift console +2. Install the multicluster global hub operator from the Red Hat OpenShift Container Platform console: -1. Log in to the OpenShift console as a user with cluster-admin role. -2. Click the Operators -> OperatorHub icon in the left navigation panel. -3. Search for the `multicluster global hub operator`. -4. Click the `multicluster global hub operator` to start the installation. -5. Click the `Install` button to start the installation when you are ready. -6. Wait for the installation to complete. You can check the status in the `Installed Operators` page. -7. Click the `multicluster global hub operator` to go to the operator page. -8. Click the `multicluster global hub` tab to see the `multicluster global hub` instance. -9. Click the `Create multicluster global hub` button to create the `multicluster global hub` instance. -10. Fill in the required information and click the `Create` button to create the `multicluster global hub` instance. + 1. Log in to the Red Hat OpenShift Container Platform console as a user with the `cluster-admin` role. + 2. 
Click **Operators** > OperatorHub icon in the navigation. + 3. Search for and select the `multicluster global hub operator`. + 4. Click `Install` to start the installation. + 5. After the installation completes, check the status on the *Installed Operators* page. + 6. Click **multicluster global hub operator** to go to the *Operator* page. + 7. Click the *multicluster global hub* tab to see the `multicluster global hub` instance. + 8. Click **Create multicluster global hub** to create the `multicluster global hub` instance. + 9. Enter the required information and click **Create** to create the `multicluster global hub` instance. -> Note: the multicluster global hub is available for x86 platform only right now. -> Note: The policy and application are disabled in RHACM once the multicluster global hub is installed. + **Notes:** + * The multicluster global hub is only available for the x86 platform. + * The policy and application are disabled in Red Hat Advanced Cluster Management after the multicluster global hub is installed. -### Import a regional hub cluster in default mode (tech preview) +### Import a regional hub cluster in default mode (Technology Preview) -It requires to disable the cluster self management in the existing ACM hub cluster. Set `disableHubSelfManagement=true` in the `multiclusterhub` CR to disable importing of the hub cluster as a managed cluster automaticially. +You must disable the cluster self-management in the existing Red Hat Advanced Cluster Management hub cluster. Set `disableHubSelfManagement=true` in the `multiclusterhub` custom resource to disable the automatic importing of the hub cluster as a managed cluster. -After that, follow the [Import cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.7/html-single/clusters/index#importing-a-target-managed-cluster-to-the-hub-cluster) document to import the regional hub cluster. +Import the regional hub cluster by completing the steps in [Import cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.8/html-single/clusters/index#importing-a-target-managed-cluster-to-the-hub-cluster). -Once the regional hub cluster is imported, you can check the global hub agent status to ensure that the agent is running in the regional hub cluster. +After the regional hub cluster is imported, check the global hub agent status to ensure that the agent is running in the regional hub cluster by running the following command: -```bash +``` oc get managedclusteraddon multicluster-global-hub-controller -n ${REGIONAL_HUB_CLUSTER_NAME} ``` -### Access the grafana +### Access the Grafana data -The grafana is exposed through Route, you can use the following command to get the login URL. The authentication method of this URL is same as the openshift console, so you don't have to worry about using another authentication. +The Grafana data is exposed through the route. Run the following command to display the login URL: -```bash +``` oc get route multicluster-global-hub-grafana -n ``` +The authentication method of this URL is same as authenticating to the Red Hat OpenShift Container Platform console. + ### Grafana dashboards -Upon accessing the global hub Grafana, users can begin monitoring the policies that have been configured through the hub cluster environments being managed. From the global hub dashboard, users can identify the compliance status of their system's policies over a selected time range. 
The policy compliance status is updated daily, so users can expect the dashboard to display the status of the current day on the following day. +After accessing the global hub Grafana data, you can begin monitoring the policies that were configured through the hub cluster environments that are managed. From the global hub dashboard, you can identify the compliance status of the policies of the system over a selected time range. The policy compliance status is updated daily, so the dashboard does not display the status of the current day until the following day. ![Global Hub Policy Group Compliancy Overview](./images/global-hub-policy-group-compliancy-overview.gif) -To navigate the global hub dashboards, users can choose to observe and filter the policy data by grouping them either by `policy` or `cluster`. If the user prefers to examine the policy data by `policy` grouping, they should start from the default dashboard called `Global Hub - Policy Group Compliancy Overview`. This dashboard allows users to filter the policy data based on `standard`, `category`, and `control`. Upon selecting a specific point in time on the graph, users will be directed to the `Global Hub - Offending Policies` dashboard, which lists the non-compliant or unknown policies at that particular time. After selecting a target policy, users can view related events and see what has changed by accessing the `Global Hub - What's Changed / Policies` dashboard. +To navigate the global hub dashboards, you can choose to observe and filter the policy data by grouping them either by `policy` or `cluster`. If you prefer to examine the policy data by using the `policy` grouping, you should start from the default dashboard called `Global Hub - Policy Group Compliancy Overview`. This dashboard allows you to filter the policy data based on `standard`, `category`, and `control`. After selecting a specific point in time on the graph, you are directed to the `Global Hub - Offending Policies` dashboard, which lists the non-compliant or unknown policies at that time. After selecting a target policy, you can view related events and see what has changed by accessing the `Global Hub - What's Changed / Policies` dashboard. ![Global Hub Cluster Group Compliancy Overview](./images/global-hub-cluster-group-compliancy-overview.gif) -Similarly, if users prefer to examine the policy data by `cluster` grouping, they should begin from the `Global Hub - Cluster Group Compliancy Overview` dashboard. The navigation flow is identical to the `policy` grouping flow, but users will select filters related to the cluster, such as managed cluster `labels` and `values`. Instead of viewing policy events for all clusters, upon reaching the `Global Hub - What's Changed / Clusters` dashboard, users will be able to view policy events specifically related to an individual cluster. +Similarly, if you want to examine the policy data by `cluster` grouping, begin by using the `Global Hub - Cluster Group Compliancy Overview` dashboard. The navigation flow is identical to the `policy` grouping flow, but you select filters that are related to the cluster, such as managed cluster `labels` and `values`. Instead of viewing policy events for all clusters, after reaching the `Global Hub - What's Changed / Clusters` dashboard, you can view policy events related to an individual cluster. ## Troubleshooting -For common Troubleshooting issues, proceed [here](troubleshooting.md) +For common Troubleshooting issues, see [Troubleshooting](troubleshooting.md). ## Known issues -1. 
If the database is empty, the grafana dashboards will show the error `db query syntax error for {dashboard_name} dashboard`. When you have some data in the database, the error will disappear. Remember the Top level dashboards gets populated only the day after the data starts flowing as explained in [Workings of Global Hub](how_global_hub_works.md) +1. If the database is empty, the Grafana dashboards show the error `db query syntax error for {dashboard_name} dashboard`. The error is resolved when there is data in the database. The top-level dashboards are populated only the day after data collection is started, as explained in [Workings of Global Hub](how_global_hub_works.md) -2. We provide ability to drill down the `Offending Policies` dashboard when you click a datapoint from the `Policy Group Compliancy Overview` dashboard. But the drill down feature is not working for the first datapoint. You can click the second datapoint or after to see the drill down feature is working. The issue is applied to the `Cluster Group Compliancy Overview` dashboard as well. +2. You cannot drill down by selecting the first datapoint from the `Policy Group Compliancy Overview` dashboard. You can drill down the `Offending Policies` dashboard when you click a datapoint from the `Policy Group Compliancy Overview` dashboard, but it is not working for the first datapoint in the list. This issue also applies to the `Cluster Group Compliancy Overview` dashboard. -3. If you detach the regional hub and then rejoin the regional hub, The data (policies/managed clusters) might not be updated in time from the rejoined regional hub. You can fix this problem by restarting the `multicluster-global-hub-manager` pod on global hub. +3. If you detach the regional hub and rejoin it, The data (policies/managed clusters) might not be updated in time from the rejoined regional hub. You can fix this problem by restarting the `multicluster-global-hub-manager` pod on the global hub. -4. For cluster that are never created successfully(clusterclaim id.k8s.io does not exist in the managed cluster), then we will not count this managed cluster in global hub policy compliance database, but it shows in RHACM policy console. +4. A managed cluster that is not created successfully (clusterclaim `id.k8s.io` does not exist in the managed cluster) is not counted in global hub policy compliance database, but shows in the Red Hat Advanced Cluster Management policy console. diff --git a/doc/dev-preview.md b/doc/dev-preview.md index 0a4967a57..6c97dff8b 100644 --- a/doc/dev-preview.md +++ b/doc/dev-preview.md @@ -1,32 +1,45 @@ -### Create a regional hub cluster (dev preview) -Refer to the original [Create cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/multicluster_engine_overview#creating-a-cluster) document to create the managed cluster in the global hub cluster. add labels of `global-hub.open-cluster-management.io/hub-cluster-install: ''` in managedcluster CR and then the new created managed cluster can be switched to be a regional hub cluster automatically. In other words, the latest released RHACM is installed in this managed cluster. You can get the ACM hub information in the cluster overview page. 
+### Create a regional hub cluster (Developer Preview) +Refer to the original [Create cluster](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.8/html/clusters/cluster_mce_overview#creating-a-cluster) document to create the managed cluster in the global hub cluster. Add the label of `global-hub.open-cluster-management.io/hub-cluster-install: ''` to the `managedcluster` custom resource and then the new created managed cluster can automatically be switched to be a regional hub cluster. In other words, the latest version of Red Hat Advanced Cluster Management for Kubernetes is installed in this managed cluster. You can get the Red Hat Advanced Cluster Management hub information in the cluster overview page. + ![cluster overview](cluster_overview.png) -### Import a regional hub cluster in hosted mode (dev preview) -It does not require any changes before importing it. The ACM agent is running in a hosting cluster. -1. Import the cluster from the ACM console, add these annotations to the managedCluster, use the kubeconfig import mode, and disable all add-ons. -``` -import.open-cluster-management.io/klusterlet-deploy-mode: Hosted -import.open-cluster-management.io/hosting-cluster-name: local-cluster -addon.open-cluster-management.io/disable-automatic-installation: "true" -``` -![import hosted cluster](import_hosted_cluster.png) -Click `Next` Button to complete the import process. - -2. Enable work-manager addon after the imported cluster is available. -``` -oc apply -f - < -n open-cluster-management-hub1-addon-workmanager -``` + +### Import a regional hub cluster in hosted mode (Developer Preview) +A regional hub cluster does not require any changes before importing it. The Red Hat Advanced Cluster Management agent is running in a hosting cluster. + +1. Import the cluster from the Red Hat Advanced Cluster Management console, add these annotations to the `managedCluster` custom resource. Use the kubeconfig import mode, and disable all add-ons. + + ``` + import.open-cluster-management.io/klusterlet-deploy-mode: Hosted + import.open-cluster-management.io/hosting-cluster-name: local-cluster + addon.open-cluster-management.io/disable-automatic-installation: "true" + ``` + + ![import hosted cluster](import_hosted_cluster.png) + +2. Click `Next` to complete the import process. + +3. Enable the work-manager addon after the imported cluster is available by creating a file named `work-manager-file` that contains content that is similar to the following example:. + + ``` + apiVersion: addon.open-cluster-management.io/v1alpha1 + kind: ManagedClusterAddOn + metadata: + name: work-manager + namespace: hub1 + annotations: + addon.open-cluster-management.io/hosting-cluster-name: local-cluster + spec: + installNamespace: open-cluster-management-hub1-addon-workmanager + ``` + +4. Apply the file by running the following command: + + ``` + oc apply -f + ``` + +5. 
Create a kubeconfig secret for the work-manager add-on by running the following command: + + ``` + oc create secret generic work-manager-managed-kubeconfig --from-file=kubeconfig= -n open-cluster-management-hub1-addon-workmanager + ``` diff --git a/doc/disconnected_environment/README.md b/doc/disconnected_environment/README.md index 5e82ed3e1..25c91fa99 100644 --- a/doc/disconnected_environment/README.md +++ b/doc/disconnected_environment/README.md @@ -1,70 +1,105 @@ -# Deploy Global Hub Operator on a Disconnected Environment +# Deploying Global Hub Operator in a disconnected environment + +In situations where a network connection is not available, you can deploy the Global Hub Operator in a disconnected environment. ## Prerequisites -- Make sure you have an image registry, and a bastion host that has access to both the Internet and your mirror registry -- Have OLM([Operator Lifecycle Manager](https://docs.openshift.com/container-platform/4.11/operators/understanding/olm/olm-understanding-olm.html)) installed on your cluster -- The Advanced Cluster Management for Kubernetes has been installed on your cluster -- Make sure your user is authorized with cluster-admin permissions +- An image registry and a bastion host that have access to both the Internet and to your mirror registry +- Operator Lifecycle Manager ([OLM](https://docs.openshift.com/container-platform/4.11/operators/understanding/olm/olm-understanding-olm.html)) installed on your cluster +- Red Hat Advanced Cluster Management for Kubernetes version 2.7, or later, installed on your cluster +- A user account with `cluster-admin` permissions ## Mirror Registry -Installing global hub in a disconnected environment involves the use of a mirror image registry. Which ensures your clusters only use container images that satisfy your organizational controls on external content. You can following the following two step to provision the mirror registry for global hub. -- [Creating a mirror registry](https://docs.openshift.com/container-platform/4.11/installing/disconnected_install/installing-mirroring-creating-registry.html#installing-mirroring-creating-registry) -- [Mirroring images for a disconnected installation](https://docs.openshift.com/container-platform/4.11/installing/disconnected_install/installing-mirroring-installation-images.html) +You must use a mirror image registry when installing Multicluster Global Hub in a disconnected environment. The image registry ensures that your clusters only use container images that satisfy your organizational controls on external content. You can complete the following two-step procedure to provision the mirror registry for global hub. +- [Creating a mirror registry](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/installing/disconnected-installation-mirroring#creating-mirror-registry) +- [Mirroring images for a disconnected installation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/installing/disconnected-installation-mirroring#installing-mirroring-installation-images) -## Create ImageContentSourcePolicy +## Create an ImageContentSourcePolicy -In order to have your cluster obtain container images for the global hub operator from your mirror registry, rather than from the internet-hosted registries, you can configure an `ImageContentSourcePolicy` on your disconnected cluster to redirect image references to your mirror registry. 
+You can configure an `ImageContentSourcePolicy` on your disconnected cluster to redirect image references to your mirror registry. This enables you to have your cluster obtain container images for the global hub operator on your mirror registry, rather than from the Internet-hosted registries. **Note**: The ImageContentSourcePolicy can only support the image mirror with image digest. -```bash -$ cat ./doc/disconnected_environment/imagecontentsourcepolicy.yaml -apiVersion: operator.openshift.io/v1alpha1 -kind: ImageContentSourcePolicy -metadata: - name: global-hub-operator-icsp -spec: - repositoryDigestMirrors: - - mirrors: - - ${REGISTRY}//multicluster-globalhub - source: registry.redhat.io/multicluster-globalhub +1. Create a file called `imagecontentsourcepolicy.yaml`: -$ envsubst < ./doc/disconnected-operator/imagecontentsourcepolicy.yaml | kubectl apply -f - -``` + ``` + $ cat ./doc/disconnected_environment/imagecontentsourcepolicy.yaml + ``` + +2. Add content that resembles the following content to the new file: + + ``` + apiVersion: operator.openshift.io/v1alpha1 + kind: ImageContentSourcePolicy + metadata: + name: global-hub-operator-icsp + spec: + repositoryDigestMirrors: + - mirrors: + - ${REGISTRY}//multicluster-globalhub + source: registry.redhat.io/multicluster-globalhub + ``` + +3. Apply `imagecontentsourcepolicy.yaml` by running the following command: + + ``` + envsubst < ./doc/disconnected-operator/imagecontentsourcepolicy.yaml | kubectl apply -f - + ``` ## Configure the image pull secret -If the Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either [provide access to all namespaces in the cluster, or individual target tenant namespaces](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html-single/operators/index#olm-creating-catalog-from-index_olm-managing-custom-catalogs). +If the Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either [provide access to all namespaces in the cluster, or to individual target tenant namespaces](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#olm-creating-catalog-from-index_olm-managing-custom-catalogs). -### Option 1. Configure the globalhub imagepullsecret in an Openshift Cluster +### Option 1. Configure the global hub image pull secret in an OpenShift cluster -**Note**: if you apply this on a pre-existing cluster, it will cause a rolling restart of all nodes. +**Note**: Applying the image pull secret on a pre-existing cluster causes a rolling restart of all of the nodes. -```bash -$ export USER= -$ export PASSWORD= -$ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull_secret.yaml -$ oc registry login --registry=${REGISTRY} --auth-basic="$USER:$PASSWORD" --to=pull_secret.yaml -$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml -$ rm pull_secret.yaml -``` +1. Export the user name from the pull secret: + ``` + export USER= + ``` + +2. Export the password from the pull secret: + ``` + export PASSWORD= + ``` + +3. Copy the pull secret: + ``` + oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull_secret.yaml + ``` + +4. 
Log in using the pull secret: + ``` + oc registry login --registry=${REGISTRY} --auth-basic="$USER:$PASSWORD" --to=pull_secret.yaml + ``` + +5. Specify the global hub image pull secret: + ``` + oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml + ``` + +6. Remove the old pull secret: + ``` + rm pull_secret.yaml + ``` ### Option 2. Configure image pull secret to an individual namespace -```bash -# create the secret in the tenant namespace -$ oc create secret generic \ - -n \ +1. Create the secret in the tenant namespace by running the following command: + ``` + oc create secret generic -n \ --from-file=.dockerconfigjson= \ --type=kubernetes.io/dockerconfigjson + ``` -# link the secret to the service account for your operator/operand -$ oc secrets link -n --for=pull -``` +2. Link the secret to the service account for your operator/operand: + ``` + oc secrets link -n --for=pull + ``` -## Add GlobalHub operator catalog +## Add the GlobalHub operator catalog ### Build the GlobalHub catalog from upstream [Optional] @@ -150,7 +185,7 @@ multicluster-global-hub-operator Community Operators 28m name: multicluster-global-hub-operator namespace: open-cluster-management spec: - channel: release-0.7 + channel: alpha installPlanApproval: Automatic name: multicluster-global-hub-operator source: global-hub-operator-catalog diff --git a/doc/global_hub_use_cases.md b/doc/global_hub_use_cases.md index b131ad9a9..8734253ad 100644 --- a/doc/global_hub_use_cases.md +++ b/doc/global_hub_use_cases.md @@ -1,40 +1,39 @@ # Goal of Global Hub ## Primary Use Cases -As enterprises evolve to have many ACM hubs to manage their fleet, there is a need to be able to look at some subset of data across the fleet from a single pane of glass. This is where the Global Hub View comes in. We start with Global View of Policies as first step of building Global Views. +As enterprises evolve to have many Red Hat Advanced Cluster Management for Kubernetes hubs to manage their fleet, there is a need to be able to look at some subset of data across the fleet on a single pane of glass. The Global Hub View provides this information. We start with Global View of Policies as the first step of building Global Views. ### Use Case 1 | Elements | Description | |---|---| -|Precondition|ACM policies have been set up with correctly defined standards, categories and controls ; ACM policies have been distributed to clusters that are being managed by different ACM hubs using gitops; The compliance results from groups of ACM policies are aggregated for daily review| -|Trigger |Need to provide a report for the last 30 days on corporate controls in place for Production clusters. | -|Success Flow (nothing goes wrong)|Shows the count of compliance states for the last 30 days. In a successful scenario these compliance states are all trending up across time. (this is one line that represents a group of policies). New clusters that are imported might indicate high levels of initial compliance drift which should indicate the decreasing trend over time. An entire new hub region could be brought ‘online’ and again we would expect the compliance drift decreasing as policy controls are enforced.| -|Alternative Flows (something has gone wrong) |A security auditor needs additional details around a specific noncompliance event. 
The ad-hoc query requires drill down into a local-hub which is outside of the global hub itself.| +|Precondition|Red Hat Advanced Cluster Management policies have been set up with correctly defined standards, categories and controls. Red Hat Advanced Cluster Management policies have been distributed to clusters that are being managed by different Red Hat Advanced Cluster Management hubs using gitops. The compliance results from groups of Red Hat Advanced Cluster Management policies are aggregated for daily review.| +|Trigger |Need to provide a report for the last 30 days on corporate controls in place for production clusters. | +|Success Flow (nothing goes wrong)|Shows the count of compliance states for the last 30 days. In a successful scenario, these compliance states are all trending up across time. This is one line that represents a group of policies. New clusters that are imported might indicate high levels of initial compliance drift which should indicate the decreasing trend over time. An entire new hub region could be brought ‘online’, and we would expect the compliance drift to decrease as policy controls are enforced.| +|Alternative Flows (something has gone wrong) |A security auditor needs additional details about a specific noncompliance event. The ad-hoc query requires drill down into a local-hub which is outside of the global hub itself.| ### Use Case 2 |Elements|Description| |---|---| -|Precondition|ACM policies have been set up with correctly defined standards, categories and controls ; ACM policies have been rolled out to clusters that are being managed by different ACM hubs using gitops; The cluster groups are summed up to a daily level.| -|Trigger |Need a report for the last 30 days of compliance for Production clusters against all policies | -|Success Flow (nothing goes wrong)|Shows the count of clusters -which are in production group - with any vulnerabilities for the last 30 days. We can filter this data by different vulnerabilities if needed. In a happy scenario they are all trending down across time. (this is one line that represents a group of clusters).| +|Precondition|Red Hat Advanced Cluster Management policies have been set up with correctly defined standards, categories and controls. Red Hat Advanced Cluster Management policies have been rolled out to clusters that are being managed by different Red Hat Advanced Cluster Management hubs using gitops. The cluster groups are summarized at a daily level.| +|Trigger |Need a report for the last 30 days of compliance for production clusters against all policies | +|Success Flow (nothing goes wrong)|Shows the count of clusters in production group with any vulnerabilities in the last 30 days. We can filter this data by using different vulnerabilities, if needed. In a best case scenario, they are all trending down over time. This is one line that represents a group of clusters.| ### Using Global Hub -1. There are more than one ACM Hub (ACM 2.7 or higher) in the problem domain. -1. Install `another` ACM 2.7 Hub on another cluster and install the multicluster global hub (MCGH) operator on the same cluster. And create the MCGH Custom Resource. -1. After disabling `self-management` in the pre existing ACM Hubs, import them as managed cluster in the newly created ACM hub. -1. The policy data from the various ACM Hubs will start to flow into the MCGH and summary views will start to get populated after midnight (timezone of the cluster in which MCGH is installed ) +1. 
There is more than one Red Hat Advanced Cluster Management hub (Red Hat Advanced Cluster Management version 2.7 or higher) in the problem domain. +1. Install `another` Red Hat Advanced Cluster Management 2.7 hub on another cluster, install the Multicluster Global Hub (MCGH) operator on the same cluster, and create the MCGH custom resource. +1. After disabling `self-management` in the pre-existing Red Hat Advanced Cluster Management hubs, import them as managed clusters in the newly created Red Hat Advanced Cluster Management hub (see the example patch command at the end of this page). +1. The policy data from the various Red Hat Advanced Cluster Management hubs starts to flow into the MCGH, and summary views start to populate after midnight in the timezone of the cluster on which MCGH is installed. ## Non Goals -1. Control the life cycle of the existing ACM hubs that -report to it. - - This can be handled by the ACM hub collocated on the Global Hub cluster which imports the existing ACM hubs as managed clusters. The Global Hub microservices do not participate in this in any shape or form -1. Push Policies or Applications to the existing ACM hubs from the Global Hub. - - This can be handled by regular gitops support of ACM. - -## Relation with ACM -1. Global Hub uses ACM API (ManifestWork) to propagate the `multicluster global hub agent` to the ACM Hub. -1. The `multicluster global hub agent` listens to activity on the ACM hub (by listening to the Kube API Server) and sends relevant events to the Messaging sub-system - which is Kafka. This could be adopted to listen to other activities and propagate events to the messaging sub-system. -1. The microservices that run on the Global Hub side can be installed on any cluster. However, since it uses the ACM Manifestwork API, they are collocated just for convenience. +1. Control the lifecycle of the existing Red Hat Advanced Cluster Management hubs that report to it. + - This can be handled by the Red Hat Advanced Cluster Management hub that is co-located on the Global Hub cluster that imports the existing Red Hat Advanced Cluster Management hubs as managed clusters. The Global Hub microservices are not part of this process. +1. Push Policies or Applications to the existing Red Hat Advanced Cluster Management hubs from the Global Hub. + - This can be handled by regular gitops support of Red Hat Advanced Cluster Management. + +## Relation with Red Hat Advanced Cluster Management +1. Global Hub uses the Red Hat Advanced Cluster Management `ManifestWork` API to propagate the `multicluster global hub agent` to the Red Hat Advanced Cluster Management hub. +1. The `multicluster global hub agent` listens to activity on the Red Hat Advanced Cluster Management hub by listening to the Kube API Server and sends relevant events to the Messaging subsystem. The messaging subsystem is Kafka. You can use this subsystem to listen to other activities and propagate events to the messaging subsystem. +1. The microservices that run on the Global Hub side can be installed on any cluster. Because they use the Red Hat Advanced Cluster Management `ManifestWork` API, they are colocated for convenience. 
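As a reference for the self-management step in the Using Global Hub list above, the following is a minimal sketch of disabling self-management on an existing Red Hat Advanced Cluster Management hub before importing it. The `multiclusterhub` instance name and the `open-cluster-management` namespace are assumptions; use the values from your own hub.

```
# Sketch: disable self-management on an existing hub before importing it as a managed cluster.
# The resource name and namespace may differ in your environment.
oc patch multiclusterhub multiclusterhub -n open-cluster-management \
  --type merge -p '{"spec":{"disableHubSelfManagement":true}}'
```

After the patch is applied, the hub no longer imports itself as the `local-cluster` managed cluster, and you can import it into the global hub.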
diff --git a/doc/how_global_hub_works.md b/doc/how_global_hub_works.md index a265b8482..82577c893 100644 --- a/doc/how_global_hub_works.md +++ b/doc/how_global_hub_works.md @@ -1,79 +1,78 @@ # How does Global Hub Work -Focus here will be on 2 key set of microservices in [architecture](./README.md) of the Multicluster Global Hub: -- Multicluster Global Hub Manager -- Multicluster Global Hub Agent +The Multicluster Global Hub [architecture](./README.md) contains two key sets of microservices that create the high-level summary of the many policies deployed across many clusters: -and how do they concretely create the high level summary of the many policies deployed across many clusters. +- Multicluster Global Hub manager +- Multicluster Global Hub agent ## Scenario -Let us imagine a possible installation scenario. -1. There are 700 managed clusters managed by multiple ACM hubs -1. Each of these managed clusters have around 30 policies deployed on them. -1. There are a total of 100 policies. -1. And a Multicluster Global Hub is added deliver these [Use Cases](./global_hub_use_cases.md) - +The following sections guide you through a possible installation scenario. +1. There are 700 managed clusters managed by multiple Red Hat Advanced Cluster Management for Kubernetes hub clusters. +2. Each of these managed clusters has 30 policies deployed on it. +3. There are a total of 100 policies. +4. A Multicluster Global Hub is added to deliver these [Use Cases](./global_hub_use_cases.md). ### Summarization Process -How to summarize a single line that shows policy compliance across time. From the above, there are 21,000 (700*30) cluster-policy status-es that needs to be summarized. +The summarization process produces a single line that shows policy compliance across time. In this scenario, there are 21,000 (700*30) cluster policy statuses that need to be summarized. #### Key Steps -1. Events for - - policy creation - - policy propapagtion to managed clusters and - - policy compliance for each of the managed cluster - - [flows](#dataflow) into Global Hub +1. The following policy events [flow](#dataflow) into Multicluster Global Hub: - Policy creation - Policy propagation to managed clusters - Policy compliance for each of the managed clusters 1. The raw events are saved in the database. -1. The current status of each policies is also saved in the database. -1. Each night - 00:00:00 hrs as per the clock of the cluster on which multicluster global hub runs - there is a summarization routine that kicks off. It is summarizes `a policy running on a cluster` to be `compliant or non-compliant or pending` for the past day. And this is done for all policies. This calculation is done on the basis of : - - compliance state of the `policy on the cluster` at *end of the previous day* - - changes(aka events) to the `policy on the cluster` *during the previous day* - - compliance state of a cluster is a logical AND of daily summarized status of all polcies on the cluster. +1. The current status of each policy is also saved in the database. +1. A summarization routine starts each night at midnight, according to the clock of the cluster that hosts the Multicluster Global Hub. It summarizes a policy running on a cluster as `compliant`, `non-compliant`, or `pending` for the previous day. This summary is produced for all policies. 
This summary is based on the following input: + - Compliance state of the `policy on the cluster` at *end of the previous day* + - Events on the `policy on the cluster` *during the previous day* + - Compliance state of a cluster is a logical `AND` of the daily summarized status of all policies on the cluster. #### Summarization Rule |State of Policy on a cluster at end of previous day|Events related to the Policy on the cluster during the previous day| Calculated Summarized state for the previous day| |---|---|---| -|compliant| No non-compliant event have come during the day| compliant| -|compliant| Even one Non-compliant event have come during the day| non-compliant| +|compliant| No noncompliant events occurred during the day| compliant| +|compliant| One or more noncompliant events occurred during the day| non-compliant| |non-compliant| does not matter| non-compliant| |pending| does not matter| pending| -In the long run, the desired state is full compliance. If daily variations as captured above continues to persist, it needs to be investigated. And this will also bring out anamolous behaviour if any - that is: the fleet is largely compliant barring a few outliers. - - +In the long run, the desired state is full compliance. Daily variations as captured in the previous table need to be investigated. This investigation also reveals any unexpected behaviour. For example, the fleet is largely compliant, except for a few outliers. #### Dataflow ![DataflowDiagram](architecture/mcgh-data-flow.png) -Note: -- multicluster global hub operator controls the life cycle of the multicluster global hub manager and global hub agent -- kafka and database can run on the Global cluster or outside it -- ACM hub also runs on the multicluster global hub cluster, but does not participate in the daily functioning of the global hub. + +**Notes:** +- Multicluster Global Hub Operator controls the life cycle of the multicluster global hub manager and global hub agent. +- Kafka and the database can run on the Global cluster or outside of it. +- Red Hat Advanced Cluster Management hub also runs on the Multicluster Global Hub cluster, but does not participate in the daily functioning of the global hub. ### Running the Summarization Process manually -Before starting, the first thing you need to know is that the process of this summary consists of two subtasks: -- Insert the cluster policy data of that day from [Materialized View](https://www.postgresql.org/docs/current/rules-materializedviews.html) `local_compliance_view_` to `history.local_compliance`. +The manual summarization process consists of two subtasks: +- Insert the cluster policy data of that day from [Materialized View](https://www.postgresql.org/docs/current/rules-materializedviews.html) `local_compliance_view_` to `history.local_compliance`. - Update the `compliance` and policy flip `frequency` of that day to `history.local_compliance` based on `event.local_policies`. -#### Execution steps +#### Procedure steps -1. Connect to the database +1. Connect to the database. - You can use clients such as pgAdmin, tablePlush, etc. 
to connect to the Global Hub database to execute the SQL statements involved in the next few steps. If your postgres database is installed through [this script](../operator/config/samples/storage/deploy_postgres.sh), you can directly connect to the database on the cluster through the following command. - ```bash + You can use clients such as pgAdmin, tablePlush, etc. to connect to the Global Hub database to execute the SQL statements involved in the next few steps. If your postgres database is installed through [this script](../operator/config/samples/storage/deploy_postgres.sh), you can directly connect to the database on the cluster by running the following command: + ``` kubectl exec -it $(kubectl get pods -n hoh-postgres -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -n hoh-postgres -c database -- psql -d hoh ``` -2. Determine the date that needs to be executed, take `2023-07-06` as an example +2. Determine the date when it needs to be run, such as `2023-07-06`. - If you find on the dashboard that there is no any compliance information on `2023-07-06`, then find the the job failure information of the day after this day, that is `2023-07-07`, in `history.local_compliance_job_log`. In this way, it can be determined that `2023-07-06` is the date we need to manually execute the summary processes. + If you find that there is no compliance information on the dashboard for `2023-07-06`, then find the the job failure information of the day following this day in the `history.local_compliance_job_log`. In this case, it is `2023-07-07`. It can be determined that `2023-07-06` is the date when we need to manually run the summary processes. -3. Check whether the Materialized View `history.local_compliance_view_2023_07_06` exists - ```sql +3. Check whether the Materialized View of `history.local_compliance_view_2023_07_06` exists by running the following command: + ``` select * from history.local_compliance_view_2023_07_06; ``` - - If the view exists, load the view records to `history.local_compliance` - ```sql + - If the view exists, load the view records to `history.local_compliance`: + ``` CREATE OR REPLACE FUNCTION history.insert_local_compliance_job( view_date text ) @@ -94,8 +93,8 @@ Before starting, the first thing you need to know is that the process of this su SELECT history.insert_local_compliance_job('2023_07_06'); ``` - - If the view not exists, inherit the history compliance records of the day before that day, that is `2023_07_05` - ```sql + - If the view not exists, inherit the history compliance records of the day before that day. In this example, it is `2023_07_05`. + ``` CREATE OR REPLACE PROCEDURE history.inherit_local_compliance_job( prev_date TEXT, curr_date TEXT @@ -124,8 +123,8 @@ Before starting, the first thing you need to know is that the process of this su CALL history.inherit_local_compliance_job('2023_07_05', '2023_07_06'); ``` -4. Update the `compliance` and `frequency` information of that day to `history.local_compliance` - ```sql +4. Update the `compliance` and `frequency` information of that day to `history.local_compliance`: + ``` CREATE OR REPLACE FUNCTION history.update_local_compliance_job(start_date_param text, end_date_param text) RETURNS void AS $$ BEGIN @@ -162,7 +161,7 @@ Before starting, the first thing you need to know is that the process of this su -- call the func to update records start with '2023-07-06', end with '2023-07-07' SELECT history.update_local_compliance_job('2023_07_06', '2023_07_07'); ``` -5. 
-5. Once the above steps are successfully executed, you can find the records of that day generated in `history.local_compliance`. Then you can delete the Materialized View `history.local_compliance_view_2023_07_06` safely.
-   ```sql
-   DROP MATERIALIZED VIEW IF EXISTS history.local_compliance_view_2023_07_06;
+5. Find the records of that day generated in `history.local_compliance`. You can then safely delete the Materialized View `history.local_compliance_view_2023_07_06` by running the following command:
+   ```
+   DROP MATERIALIZED VIEW IF EXISTS history.local_compliance_view_2023_07_06;
+   ```
\ No newline at end of file
diff --git a/doc/troubleshooting.md b/doc/troubleshooting.md
index d67f231eb..59365289e 100644
--- a/doc/troubleshooting.md
+++ b/doc/troubleshooting.md
@@ -1,38 +1,65 @@
-## Access to the [provisioned postgres database](../operator/config/samples/storage/deploy_postgres.sh)
+# Troubleshooting
+
+You can run troubleshooting steps to identify issues with your multicluster global hub.
+
+## Access to the provisioned postgres database
+
+Depending on the type of service, there are three ways to access the [provisioned postgres database](../operator/config/samples/storage/deploy_postgres.sh).
+
+* `ClusterIP` service
+  1. Run the following command to determine your postgres connection URI:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "uri" | base64decode}}'
+     ```
+  2. Run the following command to access the database:
+     ```
+     kubectl exec -it $(kubectl get pods -n hoh-postgres -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -c database -n hoh-postgres -- psql -U postgres -d hoh -c "SELECT 1"
+     ```
+
+* `NodePort` service
+  1. Run the following command to change the service type to `NodePort` and set the port to 32432. The host is the node IP:
+     ```
+     kubectl patch postgrescluster hoh -n hoh-postgres -p '{"spec":{"service":{"type":"NodePort", "nodePort": 32432}}}' --type merge
+     ```
+  2. Run the following command to retrieve your username:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "user" | base64decode}}'
+     ```
+  3. Run the following command to retrieve your password:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "password" | base64decode}}'
+     ```
+  4. Run the following command to retrieve your database name:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "dbname" | base64decode}}'
+     ```
+
+* `LoadBalancer` service
+  1. Set the service type to `LoadBalancer` by running the following command:
+     ```
+     kubectl patch postgrescluster hoh -n hoh-postgres -p '{"spec":{"service":{"type":"LoadBalancer"}}}' --type merge
+     ```
+     The default port is 5432.
+  2. Run the following command to get your hostname:
+     ```
+     kubectl get svc -n hoh-postgres hoh-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}'
+     ```
+  3. Run the following command to retrieve your username:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "user" | base64decode}}'
+     ```
+  4. Run the following command to retrieve your password:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "password" | base64decode}}'
+     ```
+  5. Run the following command to retrieve your database name:
+     ```
+     kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "dbname" | base64decode}}'
+     ```

-In combination with the type of service, three ways are provided here to access this database.
-
-1. `ClusterIP`
-```bash
-# postgres connection uri
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "uri" | base64decode}}'
-# sample
-kubectl exec -it $(kubectl get pods -n hoh-postgres -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -c database -n hoh-postgres -- psql -U postgres -d hoh -c "SELECT 1"
-```
-
-2. `NodePort`
-```bash
-# modify the service to NodePort, then the host will be the node IP and set the port to 32432
-kubectl patch postgrescluster hoh -n hoh-postgres -p '{"spec":{"service":{"type":"NodePort", "nodePort": 32432}}}' --type merge
-# user/ password/ database
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "user" | base64decode}}'
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "password" | base64decode}}'
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "dbname" | base64decode}}'
-```
-
-3. `LoadBalancer`
-```bash
-# modify the service to LoadBalancer, default port is 5432
-kubectl patch postgrescluster hoh -n hoh-postgres -p '{"spec":{"service":{"type":"LoadBalancer"}}}' --type merge
-# host/ user/ password/ database
-kubectl get svc -n hoh-postgres hoh-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}'
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "user" | base64decode}}'
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "password" | base64decode}}'
-kubectl get secrets -n hoh-postgres hoh-pguser-postgres -o go-template='{{index (.data) "dbname" | base64decode}}'
-```

 ## Running the must-gather command for troubleshooting

-Run `must-gather` to gather details, logs, and take steps in debugging issues, these debugging information is also useful when opening a support case. The `oc adm must-gather CLI` command collects the information from your cluster that is most likely needed for debugging issues, including:
+Run the `must-gather` command to gather details and logs that help you debug issues. This debugging information is also useful when you open a support request. The `oc adm must-gather` CLI command collects the information from your cluster that is often needed for debugging issues, including:

 1. Resource definitions
 2. Service logs
@@ -40,27 +67,29 @@ Run `must-gather` to gather details, logs, and take steps in debugging issues, t

 ### Prerequisites

 1. Access to the global hub and regional hub clusters as a user with the cluster-admin role.
-2. The OpenShift Container Platform CLI (oc) installed.
+2. The Red Hat OpenShift Container Platform CLI (oc) installed.

 ### Must-gather procedure

-See the following procedure to start using the must-gather command:
+Complete the following procedure to start using the must-gather command:

-1. Learn about the must-gather command and install the prerequisites that you need at [Gathering data about your cluster](https://docs.openshift.com/container-platform/4.8/support/gathering-cluster-data.html?extIdCarryOver=true&sc_cid=701f2000001Css5AAC) in the RedHat OpenShift Container Platform documentation.
+1. Learn about the must-gather command and install the prerequisites that you need at [Gathering data about your cluster](https://docs.openshift.com/container-platform/4.8/support/gathering-cluster-data.html?extIdCarryOver=true&sc_cid=701f2000001Css5AAC) in the Red Hat OpenShift Container Platform documentation.

-2. Log in to your global hub cluster. For the usual use-case, you should run the must-gather while you are logged into your global hub cluster.
+2. Log in to your global hub cluster. Typically, you run the `must-gather` command while you are logged in to your global hub cluster:

-```bash
-oc adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME
-```
+   ```
+   oc adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME
+   ```

-Note: If you want to check your regional hub clusters, run the `must-gather` command on those clusters.
+   If you want to check your regional hub clusters, run the `must-gather` command on those clusters.
+
+3. Optional: If you need the results to be saved in a named directory, run the following command instead of the command in step 2:
+   ```
+   oc adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-dir=SOMENAME ; tar -cvzf SOMENAME.tgz SOMENAME
+   ```
+   The command includes the additions that are required to create a gzipped tarball file.

-Note: If you need the results to be saved in a named directory, then following the must-gather instructions, this can be run. Also added are commands to create a gzipped tarball:
-```bash
-oc adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-dir=SOMENAME ; tar -cvzf SOMENAME.tgz SOMENAME
-```

 ### Information Captured

@@ -84,33 +113,33 @@ oc adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-di

 ## Database Dump and Restore

-In a production environment, no matter how large or small our PostgreSQL database may be, regular back is an essential aspect of database management, it is also used for debugging.
+In a production environment, regular backup of your PostgreSQL database is an essential aspect of database management. Backups are also useful for debugging.

 ### Dump Database for Debugging

-Sometimes we need to dump the tables in global hub database for debugging purpose, postgreSQL provides `pg_dump` command line tool to dump the database. To dump data from localhost database server:
+Sometimes you need to dump the tables in the global hub database for debugging purposes. PostgreSQL provides the `pg_dump` command line tool to dump the contents of the database. To dump data from a localhost database server, run the following command:

-```shell
+```
pg_dump hoh > hoh.sql
 ```

-If we want to dump global hub database located on some remote server with compressed format, we should use command-line options which allows us to control connection details:
+To dump the global hub database that is located on a remote server in a compressed format, use command-line options to control the connection details:

-```shell
+```
pg_dump -h my.host.com -p 5432 -U postgres -F t hoh -f hoh-$(date +%d-%m-%y_%H-%M).tar
 ```

 ### Restore Database from Dump

-To restore a PostgreSQL database, you can use the `psql` or `pg_restore` command line tools. 
The `psql` tool is used to restore plain text files created by `pg_dump`: -```shell +``` psql -h another.host.com -p 5432 -U postgres -d hoh < hoh.sql ``` -Whereas `pg_restore` is used to restore a PostgreSQL database from an archive created by `pg_dump` in one of the non-plain-text formats (custom, tar, or directory): +The `pg_restore` tool is used to restore a PostgreSQL database from an archive created by `pg_dump` in one of the non-plain-text formats (custom, tar, or directory): -```shell +``` pg_restore -h another.host.com -p 5432 -U postgres -d hoh hoh-$(date +%d-%m-%y_%H-%M).tar ```
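+
+To verify a backup without touching the original `hoh` database, you can restore the archive into a scratch database first. The following is a minimal sketch, not part of the official tooling, that assumes the archive was created with the `pg_dump -F t` command shown earlier; the scratch database name `hoh_restore_test` and the archive file name `hoh-backup.tar` are placeholders for your own values:
+
+```
+# Create an empty scratch database on the target server (example name)
+createdb -h another.host.com -p 5432 -U postgres hoh_restore_test
+
+# Restore the tar-format archive into the scratch database;
+# --clean --if-exists drops any objects that already exist before recreating them
+pg_restore -h another.host.com -p 5432 -U postgres -d hoh_restore_test --clean --if-exists hoh-backup.tar
+
+# Spot-check that the compliance history was restored
+psql -h another.host.com -p 5432 -U postgres -d hoh_restore_test -c "SELECT count(*) FROM history.local_compliance;"
+```
+
+If the spot-check returns the expected row count, you can repeat the `pg_restore` command against the real target database.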