workshop-gitops-infra-deploy

This repo is part of the ArgoCD Managing Infrastructure workshop. It deploys the clusters (hub and managed) needed for the workshop, plus the infra setup required to complete the activities.

Create cluster

⚠️ First of all, create a ./pullsecret.txt file containing the pull secret to be used.

This script deploys OCP on AWS, both the hub cluster and the SNO managed clusters. You must specify the following parameters:

sh ocp4-install.sh <cluster_name> <region_aws> <base_domain> <replicas_master> <replicas_worker> <vpc_id|false> <aws_id> <aws_secret> <instance_type> <amount_of_users>

The VPC id is required only if you are deploying on an existing VPC; otherwise specify "false". The amount of users is the number of users to create, one per managed cluster; it is not required if you are not installing the hub cluster.

To deploy the hub cluster:

sh ocp4-install.sh argo-hub <region_aws> <base_domain> 3 3 false <aws_id> <aws_secret> m6i.xlarge <amount_of_users> 

To deploy a SNO managed cluster:

sh ocp4-install.sh sno-1 <region_aws> <base_domain> 1 0 <vpc_id> <aws_id> <aws_secret> m6i.4xlarge

⚠️ It is mandatory to name the hub and SNO clusters argo-hub and sno-x respectively.

You can check your VPC id in the AWS console or by running this command:

aws ec2 describe-vpcs 
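For example, to list only the VPC ids (a convenience query, not part of the workshop scripts):

aws ec2 describe-vpcs --query "Vpcs[].VpcId" --output text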

Deploy and configure ArgoCD

⚠️ You need to install the argocd CLI and yq.

⚠️ It is highly recommended to follow the Declarative setup approach, as it has the latest updates.

This script installs the GitOps operator, deploys an ArgoCD instance and adds the managed clusters. You must specify the number of deployed SNO clusters to be managed by ArgoCD:

sh deploy-gitops.sh <amount_of_sno_clusters>

For example, if you want to add 3 SNO clusters (sno-1, sno-2 and sno-3):

sh deploy-gitops.sh 3

This script also configures ArgoCD RBAC so that the users created in the hub cluster for the SNO managed clusters (user-1, user-2, ...) can only view project-sno-x and the sno-x destination cluster, and therefore can only deploy to the allowed destination within the allowed project.
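As an illustration only (the actual policy is generated by the script), the ArgoCD RBAC entries for user-1 and sno-1 would look roughly like this:

# hypothetical RBAC entries for user-1 / sno-1
p, user-1, projects, get, project-sno-1, allow
p, user-1, applications, *, project-sno-1/*, allow
p, user-1, clusters, get, <sno-1_api_url>, allow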

Declarative setup

You can also deploy and configure GitOps using a declarative approach as defined in this repo.

First install the OpenShift GitOps operator. Then create a setup-sno branch, add your clusters' tokens to the hub-setup/charts/gitops-setup/values.yaml file, set the subdomain and sharding replicas values, and finally create the global-config/bootstrap-a/hub-setup-a.yaml Application on your default instance:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hub-setup
  namespace: openshift-gitops
spec:
  destination:
    namespace: openshift-gitops
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://github.com/romerobu/workshop-gitops-content-deploy.git
    targetRevision: setup-sno
    path: hub-setup/charts/gitops-setup 
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Deploy keycloak

To deploy a Keycloak instance and create the corresponding realms, clients and users, run this script:

sh set-up-keycloak.sh <number_of_clusters> <subdomain | sandboxXXX.opentlc.com>

Bear in mind that you need to update the certificate in your Helm charts repo:

oc -n openshift-ingress-operator get secret router-ca -o jsonpath="{ .data.tls\.crt }" | base64 -d -i 
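For example, to save it to a file before copying it into the chart values (the file name is only an example):

oc -n openshift-ingress-operator get secret router-ca -o jsonpath="{ .data.tls\.crt }" | base64 -d > router-ca.crt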

Deploy FreeIPA

Follow the instructions here to deploy the FreeIPA server.

git clone https://github.com/redhat-cop/helm-charts.git

cd helm-charts/charts
helm dep up ipa
cd ipa/
helm upgrade --install ipa . --namespace=ipa --create-namespace --set app_domain=apps.<domain>

You have to wait for IPA to be fully deployed before running the next commands; verify that the ipa-1-deploy pod has completed.
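For example, you can watch the pods in the ipa namespace until the deploy pod reports Completed:

oc get pods -n ipa -w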

Then expose the ipa service as a NodePort and allow external traffic on AWS by configuring the security groups.

oc expose service ipa  --type=NodePort --name=ipa-nodeport --generator="service/v2" -n ipa
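You can retrieve the node port assigned to the LDAP port (assuming the service name created above and LDAP on port 389):

oc get svc ipa-nodeport -n ipa -o jsonpath='{.spec.ports[?(@.port==389)].nodePort}'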

Make sure a security group rule allows incoming traffic to the LDAP node port (port 389) from origin 10.0.0.0/16. You can test connectivity by running this command from your managed cluster node:

nc -vz <hub_worker_node_ip> <ldap_nodeport>

Create FreeIPA users

To create FreeIPA users, run these commands:

# Login to kerberos
oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd123 | /usr/bin/kinit admin && \
    echo Passw0rd | \
    ipa user-add ldap_admin --first=ldap \
    --last=admin --email=ldap_admin@redhatlabs.dev --password"
    
# Create groups if they don't exist

oc exec -it dc/ipa -n ipa -- \
    sh -c "ipa group-add student --desc 'wrapper group' || true && \
    ipa group-add ocp_admins --desc 'admin openshift group' || true && \
    ipa group-add ocp_devs --desc 'edit openshift group' || true && \
    ipa group-add ocp_viewers --desc 'view openshift group' || true && \
    ipa group-add-member student --groups=ocp_admins --groups=ocp_devs --groups=ocp_viewers || true"

# Add demo users

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add paul --first=paul \
    --last=ipa --email=paulipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_admins --users=paul"

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add henry --first=henry \
    --last=ipa --email=henryipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_devs --users=henry"

oc exec -it dc/ipa -n ipa -- \
    sh -c "echo Passw0rd | \
    ipa user-add mark --first=mark \
    --last=ipa --email=markipa@redhatlabs.dev --password || true && \
    ipa group-add-member ocp_viewers --users=mark"
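You can verify the users and their group membership afterwards, for example:

oc exec -it dc/ipa -n ipa -- sh -c "ipa group-show ocp_admins && ipa group-show ocp_devs && ipa group-show ocp_viewers"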

Deploy vault server

To deploy a Vault server instance:

git clone https://github.com/hashicorp/vault-helm.git

helm repo add hashicorp https://helm.releases.hashicorp.com

oc new-project vault

helm install vault hashicorp/vault \
    --set "global.openshift=true" \
    --set "server.dev.enabled=true" --values values.openshift.yaml
    
oc expose svc vault -n vault

Then you must expose the Vault server so it can be reached from the SNO clusters.

Once the server is deployed and the argocd-vault-plugin is working on the SNO clusters, you must configure the Vault server auth so ArgoCD can authenticate against it.

Follow the instructions here.

# enable kv-v2 engine in Vault
oc exec vault-0 -- vault secrets enable kv-v2

# create a kv-v2 secret # Put your secrets here
oc exec vault-0 -- vault kv put kv-v2/demo password="password123"

oc exec vault-0 -- vault kv get kv-v2/demo

oc rsh vault-0 # Then run the following commands inside the pod

# create policy to enable reading above secret
vault policy write demo - <<EOF # Replace with your app name
path "kv-v2/data/demo" {
  capabilities = ["read"]
}
EOF

vault auth enable approle

vault write auth/approle/role/argocd secret_id_ttl=120h token_num_uses=1000 token_ttl=120h token_max_ttl=120h secret_id_num_uses=4000  token_policies=demo

vault read auth/approle/role/argocd/role-id

vault write -f auth/approle/role/argocd/secret-id

Bear in mind that you need to update this secret on the main and main-day2 branches so users will clone and pull the right credentials.
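For reference, the argocd-vault-plugin reads these credentials from environment variables such as VAULT_ADDR, AVP_TYPE, AVP_AUTH_TYPE, AVP_ROLE_ID and AVP_SECRET_ID. A secret carrying them could look like the sketch below; the name, namespace and values are placeholders, and the real secret lives in the content repo branches mentioned above:

oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: argocd-vault-plugin-credentials   # placeholder name
  namespace: openshift-gitops             # placeholder namespace
stringData:
  VAULT_ADDR: https://<vault_route>       # route exposed above
  AVP_TYPE: vault
  AVP_AUTH_TYPE: approle
  AVP_ROLE_ID: <role_id>                  # from 'vault read auth/approle/role/argocd/role-id'
  AVP_SECRET_ID: <secret_id>              # from 'vault write -f auth/approle/role/argocd/secret-id'
EOF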

Destroy cluster

If you want to delete a cluster, first run these commands to destroy it on AWS:

CLUSTER_NAME=<cluster_name>
openshift-install destroy cluster --dir install/install-dir-$CLUSTER_NAME --log-level info

Then remove it from ArgoCD instance:

# Make sure you are logged in to the hub cluster (this section is not required if the cluster you are deleting is the hub itself)
export KUBECONFIG=./install/install-dir-argo-hub/auth/kubeconfig
# Login to argo server
ARGO_SERVER=$(oc get route -n openshift-operators argocd-server  -o jsonpath='{.spec.host}')
ADMIN_PASSWORD=$(oc get secret argocd-cluster -n openshift-operators  -o jsonpath='{.data.admin\.password}' | base64 -d)
# Remove managed cluster
argocd login $ARGO_SERVER --username admin --password $ADMIN_PASSWORD --insecure
argocd cluster rm $CLUSTER_NAME
# Then remove installation directories
rm -rf ./backup/backup-$CLUSTER_NAME
rm -rf ./install/install-dir-$CLUSTER_NAME
