
Initial commit for Pod Affinity Anti Affinity Scheduling workload #92

Open · wants to merge 1 commit into master
Conversation

wabouhamad
Contributor

Initial commit for the Pod Affinity Anti-Affinity Scheduling workload:

modified:   docs/README.md
new file:   docs/pod-affinity.md
new file:   workloads/files/workload-pod-affinity-script-cm.yml
new file:   workloads/pod-affinity.yml
modified:   workloads/templates/workload-env.yml.j2
new file:   workloads/vars/pod-affinity.yml

@wabouhamad
Contributor Author

@mffiedler @chaitanyaenr please review

@wabouhamad
Contributor Author

@mffiedler @chaitanyaenr please review; I've been running these workloads from Jenkins since OCP 4.2.

The Pod Affinity Anti-Affinity workload playbook is `workloads/pod-affinity.yml`; running it executes the Pod Affinity Anti-Affinity workload on your cluster.
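(For reference, the invocation follows the repo's usual pattern, something like `ansible-playbook -vv -i <your-inventory> workloads/pod-affinity.yml`; check the top-level README for the exact inventory and environment-variable setup.)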

The first step is to deploy a pod labeled `security=s1`, which the scheduler places on one of the worker nodes in the cluster.
The Pod Affinity Anti-Affinity workload's purpose is to validate that the OpenShift cluster can deploy 130 hello-pods with pod affinity to the labeled pod, all landing on the same worker node. The workload then deploys 130 additional hello-pods with anti-affinity to the `security=s1` labeled pod, which are scheduled onto the other worker nodes. Deployed pods have memory and CPU requests, and the goal is to be close to 85% CPU capacity after all pods are deployed.
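For readers skimming the PR, here is a minimal sketch of the pod spec shape that description implies; the pod name, image, and request values are illustrative placeholders, not the actual templates from this PR:

```yaml
# Illustrative only: an affinity pod shaped like the description above.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod-affinity            # hypothetical name
spec:
  affinity:
    podAffinity:                      # use podAntiAffinity for the second batch
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["s1"]
        topologyKey: kubernetes.io/hostname   # co-locate (or spread) per node
  containers:
  - name: hello-pod
    image: openshift/hello-openshift          # placeholder image
    resources:
      requests:
        cpu: 100m      # example values; the workload sizes these to approach
        memory: 64Mi   # ~85% node CPU capacity once all pods are scheduled
```

The anti-affinity batch would be identical except that `podAffinity` becomes `podAntiAffinity`, which pushes those pods onto the other worker nodes.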
Member
Maybe add a pointer to the script and pod spec/template in the svt repo since they are being maintained there. This way, we will know the location in case we need to modify something. On the other hand, it might be a good idea to bake the script and pod templates into the workload itself, like we do for other workloads such as nodevertical, for example: https://github.com/openshift-scale/workloads/blob/master/workloads/templates/workload-nodevertical-script-cm.yml.j2#L75. That way we will have a clear idea of what the workload is doing, and it will be easier to modify if we want to change something. Thoughts?
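For concreteness, the baked-in option would look roughly like the nodevertical template linked above, i.e. a Jinja2 ConfigMap template that embeds the test script; the file name and script body below are hypothetical:

```yaml
# Hypothetical workloads/templates/workload-pod-affinity-script-cm.yml.j2,
# modeled on the nodevertical ConfigMap template; the script body is a stub.
apiVersion: v1
kind: ConfigMap
metadata:
  name: scale-ci-workload-script
data:
  run.sh: |
    #!/bin/sh
    set -eo pipefail
    # pod-affinity / anti-affinity test logic would be embedded here,
    # so the workload is self-contained instead of fetched from svt.
```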
