
Commit

Update doc
cbeauchesne committed Jul 19, 2023
1 parent f087479 commit d81ba82
Showing 4 changed files with 22 additions and 26 deletions.
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ Once your PR is reviewed, you can merge it! :heart:

## Reviewer checklist

* [ ] Check what scenarios are modified. If needed, add the relevant label (`run-parametric-scenario`, `run-profiling-scenario`...). If this PR modifies any system-tests internal, then add the `run-all-scenarios` label ([more info](https://github.com/DataDog/system-tests/blob/main/docs/CI/system-tests-ci.md)).
* [ ] Check what scenarios are modified. If needed, add the relevant label (`run-parametric-scenario`, `run-profiling-scenario`...). If this PR modifies any system-tests internal, then add the `run-all-scenarios` label ([more info](https://github.com/DataDog/system-tests/blob/main/docs/CI/labels.md)).
* [ ] CI is green
* [ ] If not, failing jobs are not related to this change (and you are 100% sure about this statement)
* if any of `build-some-image` label is present
5 changes: 2 additions & 3 deletions docs/CI/README.md
@@ -2,9 +2,9 @@ All information you need to add System Tests in your CI.

## How to integrate in a CI?

You'll need a CI that supports `docker-compose`, and very common UNIX tools.
You'll need a CI with `docker` and `python 3.9` installed, along with very common UNIX tools.

A valid `DD_API_KEY` env var for staging must be set.
A valid `DD_API_KEY` env var for staging must be set.

1. Clone this repo
2. Copy paste your components' build inside `./binaries` (See [documentation](./binaries.md))
@@ -16,4 +16,3 @@ You will find different templates or examples:
* [github actions](./github-actions.md)
* [gitlab CI](./gitlab-ci.md): TODO
* [azure](https://github.com/DataDog/dd-trace-dotnet/blob/master/.azure-pipelines/ultimate-pipeline.yml) (look for `stage: system_tests`)
* jenkins: TODO
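As an illustration, the integration steps above might map onto a GitHub Actions job roughly like the following sketch. The `build.sh`/`run.sh` entry points, the artifact path, and the secret name are assumptions for illustration, not part of this diff:

```yaml
name: system-tests
on: [push]
jobs:
  system-tests:
    runs-on: ubuntu-latest
    steps:
      # 1. Clone the system-tests repo
      - uses: actions/checkout@v3
        with:
          repository: DataDog/system-tests
      # 2. Copy your component's build inside ./binaries (path is hypothetical)
      - name: Install component build
        run: cp ./artifacts/* ./binaries/
      # 3. Build and run the default scenario (entry-point names are assumptions)
      - name: Build
        run: ./build.sh
      - name: Run
        run: ./run.sh
        env:
          DD_API_KEY: ${{ secrets.DD_API_KEY }}
```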
18 changes: 18 additions & 0 deletions docs/CI/labels.md
@@ -0,0 +1,18 @@
By default, on system-tests' own CI, only the default scenario is run. This is a valid setup if:

- you modify only code in the `tests/` folder
- and you modify only classes that do not have any `@scenario` decorator

In any other case, you'll need to add [labels](https://docs.github.com/en/issues/using-labels-and-milestones-to-track-work/managing-labels#applying-labels-to-issues-and-pull-requests) to add other scenarios to the CI workflow. Their names speak for themselves:

- `run-parametric-scenario`
- `run-sampling-scenario`
- `run-profiling-scenario`
- `run-open-telemetry-scenarios`
- `run-libinjection-scenarios`

And if you modify something that could impact all scenarios (or if you have any doubt), the label that runs everything is `run-all-scenarios`. Be patient: the CI will take more than one hour. You can merge your PR once it has been approved, even if the tests have only run on the default scenario.
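Labels can also be applied from the command line rather than the GitHub web UI. The sketch below shows the naming convention and, in a comment, one way to apply a label, assuming the GitHub CLI (`gh`) is installed and authenticated; the PR number `123` is a placeholder:

```shell
# Scenario labels follow the pattern "run-<scenario>-scenario"
# (plural "-scenarios" for groups such as open-telemetry and libinjection).
scenario="parametric"
label="run-${scenario}-scenario"
echo "${label}"

# One way to apply it to a PR, assuming the GitHub CLI (gh) is
# installed and authenticated ("123" is a placeholder PR number):
#   gh pr edit 123 --add-label "${label}"
```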

:warning: Reviewers must pay attention to which labels should be present before approving any PR. If necessary, they must add the appropriate labels before proceeding with the review.

When a PR is merged into the main branch, and during scheduled nightly runs, all tests are executed on all scenarios.
23 changes: 1 addition & 22 deletions docs/CI/system-tests-ci.md
@@ -13,25 +13,4 @@ The System-tests repository contains **one main workflow**: `ci.yml`. It is trig

By default, after some basic test/lint jobs, this pipeline builds all weblogs (67!) in their `prod` (last release of all Datadog components) and `dev` (last commit on main of all Datadog components) versions. Then it runs the DEFAULT scenario on all of them, all in parallel (so 134 jobs); it takes a few minutes to run.

This workflow can validate any system-tests PR, as long as it modifies only the default scenario, which is the most common use case:

- If you modify only code in the `tests/` folder
- and if you modify only classes that do not have any `@scenario` decorator

If you modified anything else, a system based on [GitHub Action labels](https://docs.github.com/en/issues/using-labels-and-milestones-to-track-work/managing-labels#applying-labels-to-issues-and-pull-requests) is used to enable other scenarios.

### Label system

In any other case, you must add labels to your PR to select additional scenarios to be executed. Their names speak for themselves:

- `run-parametric-scenario`
- `run-sampling-scenario`
- `run-profiling-scenario`
- `run-open-telemetry-scenarios`
- `run-libinjection-scenarios`

And if you modify something that could impact all scenarios (or if you have any doubt), the label that runs everything is `run-all-scenarios`. Be patient: the CI will take more than one hour. You can merge your PR once it has been approved, even if the tests have only run on the default scenario.

:warning: Reviewers must pay attention to which labels should be present before approving any PR. If necessary, they must add the appropriate labels before proceeding with the review.

When a PR is merged into the main branch, and during scheduled nightly runs, all tests are executed on all scenarios.
This workflow can validate any system-tests PR, as long as it modifies only the default scenario, which is the most common use case. See more details in the [docs about labels](./labels.md).
