diff --git a/index.html b/index.html
index d67fe67..2e9813a 100644
--- a/index.html
+++ b/index.html
@@ -636,7 +636,7 @@

Examples

Special Thanks#

farcaller/nix-kube-generators is used internally to pull and render Helm charts and some functions are re-exposed in the lib passed to modules in nixidy.

-hall/kubenix project has code generation of nix module options for every standard kubernetes resource. Instead of doing this work in nixidy I simply import their generated resource options.

+hall/kubenix project has code generation of nix module options for every standard kubernetes resource. Instead of doing this work in nixidy I simply import their generated resource options. The resource option generation scripts in nixidy are also a slight modification of kubenix's. Without their work this wouldn't be possible in nixidy.

diff --git a/search/search_index.json b/search/search_index.json
index 3d79ced..1d85406 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"nixidy","text":"

Kubernetes GitOps with nix and Argo CD.

Kind of sounds like Nix CD.

Manage an entire Kubernetes cluster like it's NixOS, with the help of Argo CD.

"},{"location":"#why","title":"Why?","text":"

It's desirable to manage Kubernetes clusters in a declarative way using a git repository as a source of truth for manifests that should be deployed into the cluster. One popular solution that is often used to achieve this goal is Argo CD.

Argo CD has a concept of applications. Each application has an entrypoint somewhere in your git repository that is either a Helm chart, a kustomize application, jsonnet files or just a directory of YAML files. All the resources produced by templating the Helm chart, building the kustomize application or listed in the YAML files in the directory make up the application and are (usually) deployed into a single namespace.

For those reasons these git repositories often need quite elaborate designs once many applications need to be deployed, requiring application sets (a generator for applications) or custom Helm charts just to render all the different applications in the repository.

On top of that it can be quite obscure what exactly will be deployed by just looking at the helm charts (along with all the values overrides, usually set per environment) or the kustomize overlays (of which there are often many, depending on the number of environments/stages) without going in and running helm template or kubectl kustomize yourself.

Having dealt with these design decisions and the pains that come with the different approaches, I'm starting to use The Rendered Manifests Pattern. It's explained in way more detail in the linked blog post, but in short it involves using your CI system to pre-render the helm charts or the kustomize overlays and commit all the rendered manifests to an environment branch (or go through a pull request review where you can review the exact changes to your environment). That way you can point Argo CD at your different directories full of rendered YAML manifests without having to do any helm templating or kustomize rendering.

"},{"location":"#nixos-module-system","title":"NixOS' Module System","text":"

I have been a user and a fan of NixOS for many years, in particular of how its module system recursively merges all configuration options set across many different modules.

I have not been a fan of helm's string templating of a whitespace-sensitive configuration language, or of kustomize's repetition (defining a kustomization.yaml file for each layer that statically lists files to include, some of them JSON patches and some not...).

Therefore I made nixidy as an experiment to see if I could make something better (at least for myself). As all Argo CD applications are defined in a single configuration, it can reference configuration options across applications and automatically generate an App of Apps bootstrapping all of them.
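As an illustrative sketch of that single-configuration property (using the option paths documented in the reference below, with standard NixOS module-system semantics; not a prescribed pattern), one module can read a value set on an application elsewhere in the configuration:

```nix
{config, ...}: {
  applications.argocd.namespace = "argocd";

  # Any module can reference options set anywhere else in the
  # same configuration through `config`.
  nixidy.appOfApps.namespace = config.applications.argocd.namespace;
}
```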

"},{"location":"#getting-started","title":"Getting Started","text":"

Take a look at the getting started guide.

"},{"location":"#examples","title":"Examples","text":""},{"location":"#special-thanks","title":"Special Thanks","text":"

farcaller/nix-kube-generators is used internally to pull and render Helm charts, and some of its functions are re-exposed in the lib passed to modules in nixidy.

The hall/kubenix project has code generation of nix module options for every standard kubernetes resource. Instead of doing this work in nixidy I simply import their generated resource options.

"},{"location":"library/","title":"Library Functions","text":"

The argument lib is passed to each module in nixidy. This is the standard nixpkgs library extended with the following functions.

"},{"location":"library/#libhelmdownloadhelmchart","title":"lib.helm.downloadHelmChart","text":"

Type: downloadHelmChart :: AttrSet -> Derivation

Downloads a helm chart from a helm registry.

This is re-exported directly from farcaller/nix-kube-generators.
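Example (the same invocation used by the lib.helm.getChartValues example below; the resulting derivation is shown only schematically):

```nix
lib.helm.downloadHelmChart {
  repo = "https://argoproj.github.io/argo-helm/";
  chart = "argo-cd";
  version = "5.51.4";
  chartHash = "sha256-LOEJ5mYaHEA0RztDkgM9DGTA0P5eNd0SzSlwJIgpbWY=";
}
=> «derivation»
```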

"},{"location":"library/#libhelmbuildhelmchart","title":"lib.helm.buildHelmChart","text":"

Type: buildHelmChart :: AttrSet -> Derivation

Templates a helm chart with provided values and creates a derivation with the output.

This is re-exported directly from farcaller/nix-kube-generators.
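Example: a sketch assuming conventional nix-kube-generators argument names (name, chart, namespace, values); check the upstream documentation for the exact attribute set it accepts:

```nix
lib.helm.buildHelmChart {
  name = "argocd";
  # Chart derivation, e.g. from lib.helm.downloadHelmChart.
  chart = lib.helm.downloadHelmChart {
    repo = "https://argoproj.github.io/argo-helm/";
    chart = "argo-cd";
    version = "5.51.4";
    chartHash = "sha256-LOEJ5mYaHEA0RztDkgM9DGTA0P5eNd0SzSlwJIgpbWY=";
  };
  namespace = "argocd";
  values.server.replicas = 2;
}
```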

"},{"location":"library/#libhelmgetchartvalues","title":"lib.helm.getChartValues","text":"

Type: getChartValues :: Derivation -> AttrSet

Parses the default values file shipped with the helm chart.

chart

Derivation containing helm chart. Usually output of lib.helm.downloadHelmChart.

Example:

getChartValues (lib.helm.downloadHelmChart {\n    repo = \"https://argoproj.github.io/argo-helm/\";\n    chart = \"argo-cd\";\n    version = \"5.51.4\";\n    chartHash = \"sha256-LOEJ5mYaHEA0RztDkgM9DGTA0P5eNd0SzSlwJIgpbWY=\";\n})\n=> {\n  server.replicas = 1;\n  controller.replicas = 1;\n  # ...\n}\n
"},{"location":"library/#libkustomizebuildkustomization","title":"lib.kustomize.buildKustomization","text":"

Type: buildKustomization :: AttrSet -> Derivation

Builds a kustomization and creates a derivation with the output.

structured function argument

name

Name is only used for derivation name.

src

Derivation containing the kustomization entrypoint and all relative bases that it might reference.

path

Relative path from the base of src to the kustomization folder to render.

namespace

Override namespace in kustomization.yaml.

Example:

buildKustomization {\n  name = \"argocd\";\n  src = pkgs.fetchFromGitHub {\n    owner = \"argoproj\";\n    repo = \"argo-cd\";\n    rev = \"v2.9.3\";\n    hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n  };\n  path = \"manifests/cluster-install\";\n  namespace = \"argocd\";\n}\n=> /nix/store/7i52...7pww-kustomize-argocd\n
"},{"location":"library/#libkubefromyaml","title":"lib.kube.fromYAML","text":"

Type: fromYAML :: String -> [AttrSet]

Parses a YAML document string into a list of attribute sets.

This is re-exported directly from farcaller/nix-kube-generators.

yaml

String with a yaml document.

Example:

fromYAML ''\n  apiVersion: v1\n  kind: Namespace\n  metadata:\n    name: default\n  ---\n  apiVersion: v1\n  kind: Namespace\n  metadata:\n    name: kube-system\n''\n=> [\n  {\n    apiVersion = \"v1\";\n    kind = \"Namespace\";\n    metadata.name = \"default\";\n  }\n  {\n    apiVersion = \"v1\";\n    kind = \"Namespace\";\n    metadata.name = \"kube-system\";\n  }\n]\n
"},{"location":"library/#libkuberemovelabels","title":"lib.kube.removeLabels","text":"

Type: removeLabels :: [String] -> AttrSet -> AttrSet

Removes labels from a Kubernetes manifest.

labels

List of labels that should be removed.

manifest

Kubernetes manifest

Example:

removeLabels [\"helm.sh/chart\"] {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"argocd-cm\";\n    labels = {\n      \"app.kubernetes.io/name\" = \"argocd-cm\";\n      \"helm.sh/chart\" = \"argo-cd-5.51.6\";\n    };\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"argocd-cm\";\n    labels = {\n      \"app.kubernetes.io/name\" = \"argocd-cm\";\n    };\n  };\n}\n
"},{"location":"library/#libkubenamespace","title":"lib.kube.namespace","text":"

Type: namespace :: String -> AttrSet -> AttrSet

Create a Kubernetes namespace manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the namespace manifest to create.

structured function argument

annotations

Optional annotations to add to the namespace manifest. This should be an attribute set.

labels

Optional labels to add to the namespace manifest. This should be an attribute set.

Example:

namespace \"default\" {\n  labels = {\n    \"pod-security.kubernetes.io/enforce\" = \"privileged\";\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Namespace\";\n  metadata = {\n    name = \"default\";\n    labels = {\n      \"pod-security.kubernetes.io/enforce\" = \"privileged\";\n    };\n  };\n}\n
"},{"location":"library/#libkubeconfigmap","title":"lib.kube.configMap","text":"

Type: configMap :: String -> AttrSet -> AttrSet

Create a Kubernetes config map manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the config map manifest to create.

structured function argument

data

Attribute set of data to put in the config map.

namespace

Optional namespace to add to the config map manifest.

annotations

Optional annotations to add to the config map manifest. This should be an attribute set.

labels

Optional labels to add to the config map manifest. This should be an attribute set.

Example:

configMap \"my-config\" {\n  namespace = \"default\";\n  data.\"data.txt\" = \"Hello world!\";\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"my-config\";\n    namespace = \"default\";\n  };\n  data = {\n    \"data.txt\" = \"Hello world!\";\n  };\n}\n
"},{"location":"library/#libkubesecret","title":"lib.kube.secret","text":"

Type: secret :: String -> AttrSet -> AttrSet

Create a Kubernetes secret manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

Danger

Due to the nature of nixidy this resource will be rendered to YAML and stored in cleartext in git.

Using this resource for actual secret data is discouraged.

name

Name of the secret manifest to create.

structured function argument

data

Attribute set of data to put in the secret. Values should be base64 encoded.

stringData

Attribute set of data to put in the secret. Values should be in cleartext.

namespace

Optional namespace to add to the secret manifest.

annotations

Optional annotations to add to the secret manifest. This should be an attribute set.

labels

Optional labels to add to the secret manifest. This should be an attribute set.

Example:

secret \"my-secret\" {\n  namespace = \"default\";\n  stringData.\"data.txt\" = \"Hello world!\";\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Secret\";\n  metadata = {\n    name = \"my-secret\";\n    namespace = \"default\";\n  };\n  stringData = {\n    \"data.txt\" = \"Hello world!\";\n  };\n}\n
"},{"location":"library/#libkubeservice","title":"lib.kube.service","text":"

Type: service :: String -> AttrSet -> AttrSet

Create a Kubernetes service manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the service manifest to create.

structured function argument

type

Type of service to create. Defaults to ClusterIP.

selector

Label selector to match pods that this service should target. This should be an attribute set.

ports

Ports this service should have. This should be an attribute set (see example).

namespace

Optional namespace to add to the service manifest.

annotations

Optional annotations to add to the service manifest. This should be an attribute set.

labels

Optional labels to add to the service manifest. This should be an attribute set.

Example:

service \"nginx\" {\n  namespace = \"default\";\n  selector.app = \"nginx\";\n  ports.http = {\n    port = 80;\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Service\";\n  metadata = {\n    name = \"nginx\";\n    namespace = \"default\";\n  };\n  spec = {\n    type = \"ClusterIP\"; # Default\n    selector.app = \"nginx\";\n    ports = [\n      {\n        name = \"http\";\n        port = 80;\n        protocol = \"TCP\"; # Default\n      }\n    ];\n  };\n}\n
"},{"location":"options/","title":"Configuration Options","text":""},{"location":"options/#applications","title":"applications","text":"

An application is a single Argo CD application that will be rendered by nixidy.

The resources will be rendered into their own directory and an Argo CD application created for it.

Type: attribute set of (submodule)

Default: { }

Example:

{\n  nginx = {\n    namespace = \"nginx\";\n    resources = {\n      deployments = {\n        nginx = {\n          spec = {\n            replicas = 3;\n            selector = {\n              matchLabels = {\n                app = \"nginx\";\n              };\n            };\n            template = {\n              metadata = {\n                labels = {\n                  app = \"nginx\";\n                };\n              };\n              spec = {\n                containers = {\n                  nginx = {\n                    image = \"nginx:1.25.1\";\n                    imagePullPolicy = \"IfNotPresent\";\n                  };\n                };\n                securityContext = {\n                  fsGroup = 1000;\n                };\n              };\n            };\n          };\n        };\n      };\n      services = {\n        nginx = {\n          spec = {\n            ports = {\n              http = {\n                port = 80;\n              };\n            };\n            selector = {\n              app = \"nginx\";\n            };\n          };\n        };\n      };\n    };\n  };\n}\n

Declared by:

"},{"location":"options/#applicationsnamecreatenamespace","title":"applications.<name>.createNamespace","text":"

Whether or not a namespace resource should be automatically created.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#applicationsnamehelmreleases","title":"applications.<name>.helm.releases","text":"

Helm releases to template and add to the rendered application's resources.

Type: attribute set of (submodule)

Default: { }
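Example (adapted from the getting started guide; the chart hash belongs to that specific chart version):

```nix
{
  argocd = {
    chart = lib.helm.downloadHelmChart {
      repo = "https://argoproj.github.io/argo-helm/";
      chart = "argo-cd";
      version = "5.51.6";
      chartHash = "sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=";
    };
    values.server.replicas = 2;
  };
}
```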

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamechart","title":"applications.<name>.helm.releases.<name>.chart","text":"

Derivation containing the helm chart for the release.

Type: package

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnameincludecrds","title":"applications.<name>.helm.releases.<name>.includeCRDs","text":"

Whether or not to include CRDs in the helm release.

Type: boolean

Default: true

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamename","title":"applications.<name>.helm.releases.<name>.name","text":"

Name of the helm release.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamenamespace","title":"applications.<name>.helm.releases.<name>.namespace","text":"

Namespace for the release.

Type: string

Default: config.applications.<name>.namespace

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnametransformer","title":"applications.<name>.helm.releases.<name>.transformer","text":"

Function that will be applied to the list of rendered manifests after the helm templating.

Type: function that evaluates to a(n) list of attribute set of anything

Default: config.nixidy.defaults.helm.transformer

Example: map (lib.kube.removeLabels [\"helm.sh/chart\"])

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamevalues","title":"applications.<name>.helm.releases.<name>.values","text":"

Values to pass to the helm chart when rendering it.

Type: attribute set of anything

Default: { }

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplications","title":"applications.<name>.kustomize.applications","text":"

Kustomize applications to render and add to the rendered application's resources.

Type: attribute set of (submodule)

Default: { }

Example:

{\n  argocd = {\n    namespace = \"argocd\";\n    # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n    # in kustomization.yaml.\n    kustomization = {\n      src = pkgs.fetchFromGitHub {\n        owner = \"argoproj\";\n        repo = \"argo-cd\";\n        rev = \"v2.9.3\";\n        hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n      };\n      path = \"manifests/cluster-install\";\n    };\n  };\n};\n

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamekustomizationpath","title":"applications.<name>.kustomize.applications.<name>.kustomization.path","text":"

Path relative to the base of src to the entrypoint kustomization directory.

Type: string

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamekustomizationsrc","title":"applications.<name>.kustomize.applications.<name>.kustomization.src","text":"

Derivation containing all the kustomize bases and overlays.

Type: package

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamename","title":"applications.<name>.kustomize.applications.<name>.name","text":"

Name of the kustomize application.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamenamespace","title":"applications.<name>.kustomize.applications.<name>.namespace","text":"

Namespace for the kustomize application.

Type: string

Default: config.applications.<name>.namespace

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnametransformer","title":"applications.<name>.kustomize.applications.<name>.transformer","text":"

Function that will be applied to the list of rendered manifests from kustomize.

Type: function that evaluates to a(n) list of attribute set of anything

Default: config.nixidy.defaults.kustomize.transformer

Declared by:

"},{"location":"options/#applicationsnamename","title":"applications.<name>.name","text":"

Name of the application.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamenamespace","title":"applications.<name>.namespace","text":"

Namespace to deploy the application into (defaults to the application name).

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnameoutputpath","title":"applications.<name>.output.path","text":"

Name of the folder that contains all rendered resources for the application. Relative to the root of the repository.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnameproject","title":"applications.<name>.project","text":"

Argo CD project to make the application a part of.

Type: string

Default: \"default\"

Declared by:

"},{"location":"options/#applicationsnameresources","title":"applications.<name>.resources","text":"

Resources for the application.

Type: attribute set

Default: { }

Example:

{\n  deployments = {\n    nginx = {\n      spec = {\n        replicas = 3;\n        selector = {\n          matchLabels = {\n            app = \"nginx\";\n          };\n        };\n        template = {\n          metadata = {\n            labels = {\n              app = \"nginx\";\n            };\n          };\n          spec = {\n            containers = {\n              nginx = {\n                image = \"nginx:1.25.1\";\n                imagePullPolicy = \"IfNotPresent\";\n              };\n            };\n            securityContext = {\n              fsGroup = 1000;\n            };\n          };\n        };\n      };\n    };\n  };\n  services = {\n    nginx = {\n      spec = {\n        ports = {\n          http = {\n            port = 80;\n          };\n        };\n        selector = {\n          app = \"nginx\";\n        };\n      };\n    };\n  };\n}\n

Declared by:

"},{"location":"options/#applicationsnamesyncpolicyautomatedprune","title":"applications.<name>.syncPolicy.automated.prune","text":"

Specifies if resources should be pruned during auto-syncing.

Type: boolean

Default: config.nixidy.defaults.syncPolicy.automated.prune

Declared by:

"},{"location":"options/#applicationsnamesyncpolicyautomatedselfheal","title":"applications.<name>.syncPolicy.automated.selfHeal","text":"

Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected.

Type: boolean

Default: config.nixidy.defaults.syncPolicy.automated.selfHeal

Declared by:

"},{"location":"options/#applicationsnameyamls","title":"applications.<name>.yamls","text":"

List of Kubernetes manifests declared as YAML strings. They will be parsed and added to the application's resources, where they can be overwritten and modified.

Can be useful for reading existing YAML files (e.g. [(builtins.readFile ./deployment.yaml)]).

Type: list of string

Default: [ ]

Example:

[\n  ''\n    apiVersion: v1\n    kind: Namespace\n    metadata:\n      name: default\n''\n]\n

Declared by:

"},{"location":"options/#nixidyappofappsname","title":"nixidy.appOfApps.name","text":"

Name of the application for bootstrapping all other applications (app of apps pattern).

Type: string

Default: \"apps\"

Declared by:

"},{"location":"options/#nixidyappofappsnamespace","title":"nixidy.appOfApps.namespace","text":"

Destination namespace for the generated Argo CD Applications in the app of apps application.

Type: string

Default: \"argocd\"

Declared by:

"},{"location":"options/#nixidycharts","title":"nixidy.charts","text":"

Attrset of derivations containing helm charts. This will be passed as charts to every module.

Type: attribute set of anything

Default: { }

Declared by:

"},{"location":"options/#nixidychartsdir","title":"nixidy.chartsDir","text":"

Path to a directory whose sub-directory structure is used to build a charts attrset. This will be passed as charts to every module.

Type: null or path

Default: null

Declared by:

"},{"location":"options/#nixidydefaultshelmtransformer","title":"nixidy.defaults.helm.transformer","text":"

Function that will be applied to the list of rendered manifests after the helm templating. This option applies to all helm releases in all applications unless explicitly specified there.

Type: function that evaluates to a(n) list of attribute set of anything

Default: res: res

Example: map (lib.kube.removeLabels [\"helm.sh/chart\"])

Declared by:

"},{"location":"options/#nixidydefaultskustomizetransformer","title":"nixidy.defaults.kustomize.transformer","text":"

Function that will be applied to the list of rendered manifests after kustomize rendering. This option applies to all kustomize applications in all nixidy applications unless explicitly specified there.

Type: function that evaluates to a(n) list of attribute set of anything

Default: res: res

Example: map (lib.kube.removeLabels [\"app.kubernetes.io/version\"])

Declared by:

"},{"location":"options/#nixidydefaultssyncpolicyautomatedprune","title":"nixidy.defaults.syncPolicy.automated.prune","text":"

Specifies if resources should be pruned during auto-syncing. This is the default value for all applications if not explicitly set.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#nixidydefaultssyncpolicyautomatedselfheal","title":"nixidy.defaults.syncPolicy.automated.selfHeal","text":"

Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected. This is the default value for all applications if not explicitly set.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#nixidyextrafiles","title":"nixidy.extraFiles","text":"

Extra files to write in the generated stage.

Type: attribute set of (submodule)

Default: { }
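Example (a sketch using only the path and text options documented below; the file name and content are illustrative):

```nix
{
  nixidy.extraFiles."README.md".text = ''
    This branch is generated by nixidy. Do not edit manually.
  '';
}
```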

Declared by:

"},{"location":"options/#nixidyextrafilesnamepath","title":"nixidy.extraFiles.<name>.path","text":"

Path of output file.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#nixidyextrafilesnametext","title":"nixidy.extraFiles.<name>.text","text":"

Text of the output file.

Type: strings concatenated with \"\\n\"

Declared by:

"},{"location":"options/#nixidyresourceimports","title":"nixidy.resourceImports","text":"

List of modules to import for resource definition options.

Type: list of (package or path or function that evaluates to a(n) (attribute set))

Default: [ ]

Declared by:

"},{"location":"options/#nixidytargetbranch","title":"nixidy.target.branch","text":"

The destination branch of the generated applications.

Type: string

Declared by:

"},{"location":"options/#nixidytargetrepository","title":"nixidy.target.repository","text":"

The repository URL to put in all generated applications.

Type: string

Declared by:

"},{"location":"options/#nixidytargetrootpath","title":"nixidy.target.rootPath","text":"

The root path of all generated applications in the repository.

Type: string

Default: \"./\"

Declared by:

"},{"location":"user_guide/getting_started/","title":"Getting Started","text":"

Nixidy only supports Nix flakes, so they need to be enabled.
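If flakes are not already enabled, the usual way (this is upstream Nix configuration, not specific to nixidy) is to add the following line to /etc/nix/nix.conf (or ~/.config/nix/nix.conf):

```
experimental-features = nix-command flakes
```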

"},{"location":"user_guide/getting_started/#initialize-repository","title":"Initialize Repository","text":"

First a flake.nix needs to be created in the root of the repository.

flake.nix
{\n  description = \"My ArgoCD configuration with nixidy.\";\n\n  inputs.nixpkgs.url = \"github:nixos/nixpkgs/nixos-unstable\";\n  inputs.flake-utils.url = \"github:numtide/flake-utils\";\n  inputs.nixidy.url = \"github:arnarg/nixidy\";\n\n  outputs = {\n    self,\n    nixpkgs,\n    flake-utils,\n    nixidy,\n  }: (flake-utils.lib.eachDefaultSystem (system: let\n    pkgs = import nixpkgs {\n      inherit system;\n    };\n  in {\n    # This declares the available nixidy envs.\n    nixidyEnvs = nixidy.lib.mkEnvs {\n      inherit pkgs;\n\n      envs = {\n        # Currently we only have the one dev env.\n        dev.modules = [./env/dev.nix];\n      };\n    };\n\n    # Handy to have nixidy cli available in the local\n    # flake too.\n    packages.nixidy = nixidy.packages.${system}.default;\n\n    # Useful development shell with nixidy in path.\n    # Run `nix develop` to enter.\n    devShells.default = pkgs.mkShell {\n      buildInputs = [nixidy.packages.${system}.default];\n    };\n  }));\n}\n

The flake declares a single nixidy environment called dev. It includes a single nix module found at ./env/dev.nix, so let's create that.

env/dev.nix
{\n  # Set the target repository for the rendered manifests\n  # and applications.\n  # This should be replaced with yours, usually the same\n  # repository as the nixidy definitions.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Set the target branch the rendered manifests for _this_\n  # environment should be pushed to in the repository defined\n  # above.\n  # When using the `mkEnvs` function in flake.nix it will automatically\n  # set this to `\"env/${name}\"`.\n  nixidy.target.branch = \"env/dev\";\n}\n

Now running nix run .#nixidy -- info .#dev (or simply nixidy info .#dev when inside a nix shell via nix develop) prints the same info we just declared above. This verifies that things are set up correctly so far.

>> nix run .#nixidy -- info .#dev\nRepository: https://github.com/arnarg/nixidy-demo.git\nBranch:     env/dev\n

If we now attempt to build this new environment with nix run .#nixidy -- build .#dev we can see that nothing is generated but an empty folder called apps.

>> tree result\nresult\n\u2514\u2500\u2500 apps/\n

This is because we have not declared any applications yet for this environment.

"},{"location":"user_guide/getting_started/#our-first-application","title":"Our first Application","text":"

While nixidy allows you to declare all of an application's resources directly in nix, it would be a waste not to be able to use the Helm charts and Kustomize applications that already exist and are often officially maintained by project maintainers.

The application declaration is very similar whichever option you go with.

Helm / Kustomize

env/dev.nix
{lib, ...}: {\n  # Options explained in the section above.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n  nixidy.target.branch = \"env/dev\";\n\n  # Argo CD application using the Helm chart from argo-helm.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Helm chart with values to template with.\n    helm.releases.argocd = {\n      # Using `downloadHelmChart` we can download\n      # the helm chart using nix.\n      # The value for `chartHash` needs to be updated\n      # with each version.\n      chart = lib.helm.downloadHelmChart {\n        repo = \"https://argoproj.github.io/argo-helm/\";\n        chart = \"argo-cd\";\n        version = \"5.51.6\";\n        chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n      };\n\n      # Specify values to pass to the chart.\n      values = {\n        # Run argocd-server with 2 replicas.\n        # This is an option in the chart's `values.yaml`\n        # usually declared like this:\n        #\n        # server:\n        #   replicas: 2\n        server.replicas = 2;\n      };\n    };\n  };\n}\n
env/dev.nix
{pkgs, ...}: {\n  # Options explained in the section above.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n  nixidy.target.branch = \"env/dev\";\n\n  # Argo CD application using the official kustomize application\n  # from Argo CD git repository.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Kustomize application to render.\n    kustomize.applications.argocd = {\n      # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n      # in kustomization.yaml.\n      kustomization = {\n        src = pkgs.fetchFromGitHub {\n          owner = \"argoproj\";\n          repo = \"argo-cd\";\n          rev = \"v2.9.3\";\n          hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n        };\n        path = \"manifests/cluster-install\";\n      };\n    };\n  };\n}\n

In both cases the following output will be generated:

tree -l result\n\u251c\u2500\u2500 apps\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Application-argocd.yaml\n\u2514\u2500\u2500 argocd\n    \u251c\u2500\u2500 ClusterRole-argocd-application-controller.yaml\n    \u251c\u2500\u2500 ClusterRole-argocd-server.yaml\n    \u251c\u2500\u2500 ClusterRoleBinding-argocd-application-controller.yaml\n    \u251c\u2500\u2500 ClusterRoleBinding-argocd-server.yaml\n    \u251c\u2500\u2500 ConfigMap-argocd-cmd-params-cm.yaml\n    \u2514\u2500\u2500 ...\n

And the automatically generated Argo CD application has the following contents:

apps/Application-argocd.yaml
apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n# This is the name of the application (`applications.argocd`).\nname: argocd\nnamespace: argocd\nspec:\ndestination:\n# This is the destination namespace for the application\n# specified with `applications.argocd.namespace`.\nnamespace: argocd\nserver: https://kubernetes.default.svc\nproject: default\nsource:\n# This is the output path declared for the application with\n# option `applications.output.path` (defaults to the name).\npath: argocd\n# Repository specified in `nixidy.target.repository`.\nrepoURL: https://github.com/arnarg/nixidy-demo.git\n# Branch specified in `nixidy.target.branch`.\ntargetRevision: env/dev\nsyncPolicy:\nautomated:\nprune: false\nselfHeal: false\n

A directory with rendered resources is generated for each application declared with applications.<name>, along with an Argo CD Application resource YAML file in apps/. This makes it possible to bootstrap the entire rendered branch into a cluster by adding a single application pointing at the apps/ folder.

See App of Apps Pattern.

"},{"location":"user_guide/getting_started/#modularizing-the-configuration","title":"Modularizing the Configuration","text":"

So far we've initialized the repository with flake.nix and a single environment with all options set in a single file (env/dev.nix). Next we'll want to add a test environment.

Adding a test environment is as simple as copying env/dev.nix to env/test.nix, changing the target branch and adding it to flake.nix under envs.test.modules. This, however, involves a lot of code duplication, and each environment will need to be maintained completely separately.

Instead we should modularize the configuration into reusable modules that allow slight variations between environments (number of replicas, ingress domain, etc.).

To start this migration, create a modules/default.nix.

HelmKustomize modules/default.nix
{lib, ...}: {\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Argo CD application using the Helm chart from argo-helm.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Helm chart with values to template with.\n    helm.releases.argocd = {\n      # Using `downloadHelmChart` we can download\n      # the helm chart using nix.\n      # The value for `chartHash` needs to be updated\n      # with each version.\n      chart = lib.helm.downloadHelmChart {\n        repo = \"https://argoproj.github.io/argo-helm/\";\n        chart = \"argo-cd\";\n        version = \"5.51.6\";\n        chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n      };\n\n      # Specify values to pass to the chart.\n      values = {\n        # Run argocd-server with 2 replicas.\n        # This is an option in the chart's `values.yaml`\n        # usually declared like this:\n        #\n        # server:\n        #   replicas: 2\n        server.replicas = 2;\n      };\n    };\n  };\n}\n
modules/default.nix
{pkgs, ...}: {\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Argo CD application using the official kustomize application\n  # from Argo CD git repository.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Kustomize application to render.\n    kustomize.applications.argocd = {\n      # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n      # in kustomization.yaml.\n      kustomization = {\n        src = pkgs.fetchFromGitHub {\n          owner = \"argoproj\";\n          repo = \"argo-cd\";\n          rev = \"v2.9.3\";\n          hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n        };\n        path = \"manifests/cluster-install\";\n      };\n    };\n  };\n}\n

And in flake.nix we can now set it to use modules/default.nix as a common module like the following:

flake.nix
{\n  description = \"My ArgoCD configuration with nixidy.\";\n\n  inputs.nixpkgs.url = \"github:nixos/nixpkgs/nixos-unstable\";\n  inputs.flake-utils.url = \"github:numtide/flake-utils\";\n  inputs.nixidy.url = \"github:arnarg/nixidy\";\n\n  outputs = {\n    self,\n    nixpkgs,\n    flake-utils,\n    nixidy,\n  }: (flake-utils.lib.eachDefaultSystem (system: let\n    pkgs = import nixpkgs {\n      inherit system;\n    };\n  in {\n    # This declares the available nixidy envs.\n    nixidyEnvs = nixidy.lib.mkEnvs {\n      inherit pkgs;\n\n      # Modules to include in all envs.\n      modules = [./modules];\n\n      envs = {\n        dev.modules = [./env/dev.nix];\n        test.modules = [./env/test.nix];\n      };\n    };\n  }));\n}\n

Both environment specific files now only declare the target branch:

env/dev.nix
{\n  nixidy.target.branch = \"env/dev\";\n}\n
env/test.nix
{\n  nixidy.target.branch = \"env/test\";\n}\n
"},{"location":"user_guide/getting_started/#abstracting-options-on-top-of-applications","title":"Abstracting Options on top of Applications","text":"

Now we have all common configuration in a module used across all environments. The next step is to add traefik as an ingress controller. Oh! And we also want to create an ingress for the Argo CD Web UI using the ingress controller. Also, come to think of it, we don't want to run 2 replicas of argocd-server in the dev environment, to save on resources.

Reaching these goals is simple enough by overriding the few needed options directly in the env specific configuration, for example:

env/dev.nix
{lib, ...}: {\n  # ...\n\n  applications.argocd.helm.releases.argocd.values = {\n    # Actually we want 1 replica only in dev.\n    server.replicas = lib.mkForce 1;\n  };\n}\n

But this requires knowing the implementation details of the application, introduces tight coupling and makes the argocd application hard to change later.

Instead, the configuration should ideally be broken apart further, with an extra configuration interface created on top. To achieve this we split the common module into more files, one module per application, with a common entrypoint.

"},{"location":"user_guide/getting_started/#traefik","title":"Traefik","text":"

Let's start by creating a module for traefik:

modules/traefik.nix
{\n  lib,\n  config,\n  ...\n}: {\n  options.networking.traefik = with lib; {\n    enable = mkEnableOption \"traefik ingress controller\";\n    # Exposing some options that _could_ be set directly\n    # in the values option below can be useful for discoverability\n    # and being able to reference in other modules\n    ingressClass = {\n      enable = mkOption {\n        type = types.bool;\n        default = true;\n        description = ''\n          Whether or not an ingress class for traefik should be created automatically.\n        '';\n      };\n      name = mkOption {\n        type = types.str;\n        default = \"traefik\";\n        description = ''\n          The name of the ingress class for traefik that should be created automatically.\n        '';\n      };\n    };\n    # To not limit the consumers of this module allowing for\n    # setting the helm values directly is useful in certain\n    # situations\n    values = mkOption {\n      type = types.attrsOf types.anything;\n      default = {};\n      description = ''\n        Value overrides that will be passed to the helm chart.\n      '';\n    };\n  };\n\n  # Only create the application if traefik is enabled\n  config = lib.mkIf config.networking.traefik.enable {\n    applications.traefik = {\n      namespace = \"traefik\";\n      createNamespace = true;\n\n      helm.releases.traefik = {\n        chart = lib.helm.downloadHelmChart {\n          repo = \"https://traefik.github.io/charts/\";\n          chart = \"traefik\";\n          version = \"25.0.0\";\n          chartHash = \"sha256-ua8KnUB6MxY7APqrrzaKKSOLwSjDYkk9tfVkb1bqkVM=\";\n        };\n\n        # Here we merge default values with provided\n        # values from `config.networking.traefik.values`.\n        values = lib.recursiveUpdate {\n          ingressClass = {\n            enabled = config.networking.traefik.ingressClass.enable;\n            name = config.networking.traefik.ingressClass.name;\n          };\n        } 
config.networking.traefik.values;\n      };\n    };\n  };\n}\n

Here we have declared extra configuration options that can be set in other modules. By setting networking.traefik.enable = true; the traefik application will be added, otherwise not. By setting networking.traefik.ingressClass.enable = false; the application will not contain an ingress class for traefik, and so on.
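As a sketch of how a consumer of this module might use these options (the specific values shown here are hypothetical, not defaults from the chart), an environment module could set:

```nix
{
  # Enable the traefik application declared in modules/traefik.nix.
  networking.traefik.enable = true;

  # Use a custom ingress class name instead of the default "traefik".
  networking.traefik.ingressClass.name = "traefik-public";

  # Pass arbitrary extra values straight through to the helm chart
  # (these particular keys are illustrative only).
  networking.traefik.values = {
    deployment.replicas = 2;
  };
}
```

Because the module merges these values with its own defaults via lib.recursiveUpdate, the extra values override only the keys they set.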

"},{"location":"user_guide/getting_started/#argo-cd","title":"Argo CD","text":"

Now let's create a specific module for Argo CD:

modules/argocd.nix
{\n  lib,\n  config,\n  ...\n}: {\n  options.services.argocd = with lib; {\n    enable = mkEnableOption \"argocd\";\n    # Configuration options for the ingress\n    ingress = {\n      enable = mkEnableOption \"argocd ingress\";\n      host = mkOption {\n        type = types.nullOr types.str;\n        default = null;\n        description = ''\n          Hostname to use in the Ingress for argocd-server.\n        '';\n      };\n      ingressClassName = mkOption {\n        type = types.str;\n        default = \"\";\n        description = ''\n          The ingress class to use in the Ingress for argocd-server.\n        '';\n      };\n    };\n    # Configuration option for setting the replicas for\n    # argocd-server\n    replicas = mkOption {\n      type = types.int;\n      default = 2;\n      description = ''\n        Number of replicas of the argocd-server deployment.\n      '';\n    };\n    # To not limit the consumers of this module allowing for\n    # setting the helm values directly is useful in certain\n    # situations\n    values = mkOption {\n      type = types.attrsOf types.anything;\n      default = {};\n      description = ''\n        Value overrides that will be passed to the helm chart.\n      '';\n    };\n  };\n\n  # Only create the application if argocd is enabled\n  config = lib.mkIf config.services.argocd.enable {\n    applications.argocd = {\n      namespace = \"argocd\";\n      createNamespace = true;\n\n      helm.releases.argocd = {\n        chart = lib.helm.downloadHelmChart {\n          repo = \"https://argoproj.github.io/argo-helm/\";\n          chart = \"argo-cd\";\n          version = \"5.51.6\";\n          chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n        };\n\n        # Here we merge default values with provided\n        # values from `config.services.argocd.values`.\n        values = lib.recursiveUpdate {\n          # Set number of replicas by using service option\n          server.replicas = 
config.services.argocd.replicas;\n          # Create an ingress with the configured hostname\n          server.ingress = {\n            enabled = config.services.argocd.ingress.enable;\n            ingressClassName = config.services.argocd.ingress.ingressClassName;\n            hosts =\n              if !isNull config.services.argocd.ingress.host\n              then [config.services.argocd.ingress.host]\n              else [];\n          };\n        } config.services.argocd.values;\n      };\n    };\n  };\n}\n

As with the traefik module, you can now set services.argocd.enable = true; to enable the argocd application and services.argocd.ingress.enable = true; to create an ingress.

"},{"location":"user_guide/getting_started/#putting-it-all-together","title":"Putting it all together","text":"

Now with argocd and traefik declared in their own modules we will need to import them in the base modules/default.nix:

modules/default.nix
{lib, config, ...}: {\n  # Here we import the modules we created above.\n  # This will make all the configuration options\n  # available to other modules.\n  imports = [\n    ./argocd.nix\n    ./traefik.nix\n  ];\n\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Traefik should be enable by default.\n  networking.traefik.enable = lib.mkDefault true;\n\n  # Argo CD should be enabled by default.\n  services.argocd = {\n    enable = lib.mkDefault true;\n\n    ingress = {\n      # An ingress for Argo CD Web UI should\n      # be created if traefik is also enabled.\n      enable = lib.mkDefault config.networking.traefik.enable;\n\n      # The ingress should use Treafik's ingress\n      # class.\n      ingressClassName = lib.mkDefault config.networking.traefik.ingressClass.name;\n    };\n  };\n}\n

This imports the two application modules and sets some defaults using mkDefault (this function sets a value as a default while still allowing other modules to override it). Notably, we have set it up so that the ingress for the Argo CD Web UI is automatically enabled whenever traefik is enabled, which it is by default in this file but can still be disabled in another module.
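As a rough sketch of how these priorities interact (using the standard nixpkgs lib; the values are hypothetical), a later module can override a mkDefault value with a plain assignment, while overriding a normally-set value requires raising the priority with mkForce:

```nix
{lib, ...}: {
  # modules/default.nix sets this with mkDefault, so a plain
  # assignment in this module wins without any special priority.
  networking.traefik.enable = false;

  # A value set *without* mkDefault elsewhere can only be
  # overridden by raising the priority with mkForce.
  services.argocd.replicas = lib.mkForce 1;
}
```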

Now, in order to achieve the goals set out at the beginning of this section, the following options are set in the environments' configurations:

env/dev.nix
{\n  nixidy.target.branch = \"env/dev\";\n\n  # We want to set the hostname for ArgoCD Web UI\n  services.argocd.ingress.host = \"argocd.dev.domain.com\";\n\n  # We only want 1 replica of argocd server\n  services.argocd.replicas = 1;\n}\n
env/test.nix
{\n  nixidy.target.branch = \"env/test\";\n\n  # We want to set the hostname for ArgoCD Web UI\n  services.argocd.ingress.host = \"argocd.test.domain.com\";\n}\n

Now the following manifests are generated:

>> tree -l result\nresult\n\u251c\u2500\u2500 apps\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Application-argocd.yaml\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Application-traefik.yaml\n\u251c\u2500\u2500 argocd\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-application-controller.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-notifications-controller.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-repo-server.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-server.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRoleBinding-argocd-application-controller.yaml\n\u2502   \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 traefik\n    \u251c\u2500\u2500 ClusterRoleBinding-traefik-traefik.yaml\n    \u251c\u2500\u2500 ClusterRole-traefik-traefik.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutes-traefik-containo-us.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutes-traefik-io.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutetcps-traefik-containo-us.yaml\n    \u2514\u2500\u2500 ...\n
"},{"location":"user_guide/github_actions/","title":"GitHub Actions","text":"

Nixidy offers a GitHub Action to build and push an environment to its target branch.

"},{"location":"user_guide/github_actions/#usage","title":"Usage","text":"

In this example, the workflow builds the dev, test and prod environments on every push to main. Realistically, the different environments should be built in different workflows.

name: Generate Kubernetes manifests\n\non:\npush:\nbranches:\n- main\n\njobs:\ngenerate:\nruns-on: ubuntu-latest\nstrategy:\nmatrix:\nenv: [\"dev\", \"test\", \"prod\"]\nsteps:\n- uses: actions/checkout@v4\n\n- uses: cachix/install-nix-action@v20\nwith:\n# This config is required in order to support a nixidy\n# flake repository\nextra_nix_config: |\nextra-experimental-features = nix-command flakes\n\n# This is optional but speeds up consecutive runs\n# by caching nix derivations between github workflows\n# runs\n- uses: DeterminateSystems/magic-nix-cache-action@v2\n\n# Build and push nixidy environment\n- uses: arnarg/nixidy@main\nwith:\nenvironment: ${{matrix.env}}\n
"},{"location":"user_guide/transformers/","title":"Transformers","text":"

Nixidy supports adding transformers to Helm releases and Kustomize applications. A transformer is simply a function that takes a list of Kubernetes manifests as attribute sets and returns a list of the same type ([AttrSet] -> [AttrSet]). It is called after the manifests have been rendered and parsed into nix, but before they're transformed into the nixidy form (<apiVersion>.<kind>.<name>), and can be used to modify the resources.

Transformers can be set globally in nixidy.defaults.helm.transformer for Helm releases and nixidy.defaults.kustomize.transformer for kustomize applications.
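For example (a sketch built from the options named above), a transformer that strips chart labels could be applied to all Helm releases at once instead of per release:

```nix
{lib, ...}: {
  # Applied to every helm release in every application,
  # instead of setting `transformer` on each release.
  nixidy.defaults.helm.transformer = map (lib.kube.removeLabels [
    "app.kubernetes.io/version"
    "helm.sh/chart"
  ]);
}
```

A transformer set directly on a release still takes precedence over this global default.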

"},{"location":"user_guide/transformers/#remove-version-specific-labels","title":"Remove Version Specific Labels","text":"

It's very common for helm charts to add the labels helm.sh/chart and app.kubernetes.io/version to all resources they render. This can produce very big diffs when a chart is updated and nixidy renders and commits the manifests to a git branch. Changes in these labels are rarely relevant and mostly add noise that distracts from the actual changes in the rendered output.

A transformer can be used to filter out these labels.

{\n  applications.argocd.helm.releases.argocd = {\n    # ...\n\n    # Remove the following labels from all manifests\n    transformer = map (lib.kube.removeLabels [\n      \"app.kubernetes.io/version\"\n      \"helm.sh/chart\"\n    ]);\n  }\n}\n

Here we use map to call lib.kube.removeLabels on each manifest in the list, removing the specified labels. The example uses function currying; it is equivalent to manifests: map (m: lib.kube.removeLabels [\"...\"] m) manifests.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"nixidy","text":"

Kubernetes GitOps with nix and Argo CD.

Kind of sounds like Nix CD.

Manage an entire Kubernetes cluster like it's NixOS, with the help of Argo CD.

"},{"location":"#why","title":"Why?","text":"

It's desirable to manage Kubernetes clusters in a declarative way, using a git repository as the source of truth for the manifests that should be deployed into the cluster. One popular solution for achieving this is Argo CD.

Argo CD has a concept of applications. Each application has an entrypoint somewhere in your git repository that is either a Helm chart, a kustomize application, jsonnet files or just a directory of YAML files. All the resources output when templating the helm chart, building the kustomize application, or defined in the directory's YAML files make up the application and are (usually) deployed into a single namespace.

For those reasons these git repositories often need quite elaborate designs once many applications need to be deployed, requiring the use of application sets (a generator for applications) or custom Helm charts just to render all the different applications in the repository.

On top of that, it can be quite obscure what exactly will be deployed by just looking at helm charts (along with all the values overrides, usually set per environment) or kustomize overlays (of which there are often many, depending on the number of environments/stages) without going in and running helm template or kubectl kustomize.

Having dealt with these design decisions and the pains that come with the different approaches, I'm starting to use The Rendered Manifests Pattern. While it's explained in way more detail in the linked blog post, it basically involves using your CI system to pre-render the helm charts or kustomize overlays and commit all the rendered manifests to an environment branch (optionally going through a pull request where you can review the exact changes to your environment). That way you can just point Argo CD at your directories of rendered YAML manifests without doing any helm templating or kustomize rendering.

"},{"location":"#nixos-module-system","title":"NixOS' Module System","text":"

I have been a user and a fan of NixOS for many years, and of how its module system recursively merges all configuration options set across many different modules.

I have not been a fan of helm's string templating of a whitespace-sensitive configuration language, or of kustomize's repetition (defining a kustomization.yaml file for each layer, statically listing files to include, some of which are JSON patches and some of which are not...).

Therefore I made nixidy as an experiment to see if I could make something better (at least for myself). Because all Argo CD applications are defined in a single configuration, they can reference configuration options across applications, and nixidy automatically generates an App of Apps bootstrapping all of them.

"},{"location":"#getting-started","title":"Getting Started","text":"

Take a look at the getting started guide.

"},{"location":"#examples","title":"Examples","text":""},{"location":"#special-thanks","title":"Special Thanks","text":"

farcaller/nix-kube-generators is used internally to pull and render Helm charts and some functions are re-exposed in the lib passed to modules in nixidy.

hall/kubenix project has code generation of nix module options for every standard kubernetes resource. Instead of doing this work in nixidy I simply import their generated resource options. The resource option generation scripts in nixidy are also a slight modification of kubenix's. Without their work this wouldn't be possible in nixidy.

"},{"location":"library/","title":"Library Functions","text":"

The argument lib is passed to each module in nixidy. This is the standard nixpkgs library extended with the following functions.

"},{"location":"library/#libhelmdownloadhelmchart","title":"lib.helm.downloadHelmChart","text":"

Type: downloadHelmChart :: AttrSet -> Derivation

Downloads a helm chart from a helm registry.

This is re-exported directly from farcaller/nix-kube-generators.

"},{"location":"library/#libhelmbuildhelmchart","title":"lib.helm.buildHelmChart","text":"

Type: buildHelmChart :: AttrSet -> Derivation

Templates a helm chart with provided values and creates a derivation with the output.

This is re-exported directly from farcaller/nix-kube-generators.

"},{"location":"library/#libhelmgetchartvalues","title":"lib.helm.getChartValues","text":"

Type: getChartValues :: Derivation -> AttrSet

Parses the default values file shipped with the helm chart.

chart

Derivation containing helm chart. Usually output of lib.helm.downloadHelmChart.

Example:

getChartValues (lib.helm.downloadHelmChart {\n    repo = \"https://argoproj.github.io/argo-helm/\";\n    chart = \"argo-cd\";\n    version = \"5.51.4\";\n    chartHash = \"sha256-LOEJ5mYaHEA0RztDkgM9DGTA0P5eNd0SzSlwJIgpbWY=\";\n})\n=> {\n  server.replicas = 1;\n  controller.replicas = 1;\n  # ...\n}\n
"},{"location":"library/#libkustomizebuildkustomization","title":"lib.kustomize.buildKustomization","text":"

Type: buildKustomization :: AttrSet -> Derivation

Builds a kustomization and creates a derivation with the output.

structured function argument

name

Name is only used for the derivation name.

src

Derivation containing the kustomization entrypoint and all relative bases that it might reference.

path

Relative path from the base of src to the kustomization folder to render.

namespace

Override namespace in kustomization.yaml.

Example:

buildKustomization {\n  name = \"argocd\";\n  src = pkgs.fetchFromGitHub {\n    owner = \"argoproj\";\n    repo = \"argo-cd\";\n    rev = \"v2.9.3\";\n    hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n  };\n  path = \"manifests/cluster-install\";\n  namespace = \"argocd\";\n}\n=> /nix/store/7i52...7pww-kustomize-argocd\n
"},{"location":"library/#libkubefromyaml","title":"lib.kube.fromYAML","text":"

Type: fromYAML :: String -> [AttrSet]

Parses a YAML document string into a list of attribute sets.

This is re-exported directly from farcaller/nix-kube-generators.

yaml

String with a yaml document.

Example:

fromYAML ''\n  apiVersion: v1\n  kind: Namespace\n  metadata:\n    name: default\n  ---\n  apiVersion: v1\n  kind: Namespace\n  metadata:\n    name: kube-system\n''\n=> [\n  {\n    apiVersion = \"v1\";\n    kind = \"Namespace\";\n    metadata.name = \"default\";\n  }\n  {\n    apiVersion = \"v1\";\n    kind = \"Namespace\";\n    metadata.name = \"kube-system\";\n  }\n]\n
"},{"location":"library/#libkuberemovelabels","title":"lib.kube.removeLabels","text":"

Type: removeLabels :: [String] -> AttrSet -> AttrSet

Removes labels from a Kubernetes manifest.

labels

List of labels that should be removed

manifest

Kubernetes manifest

Example:

removeLabels [\"helm.sh/chart\"] {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"argocd-cm\";\n    labels = {\n      \"app.kubernetes.io/name\" = \"argocd-cm\";\n      \"helm.sh/chart\" = \"argo-cd-5.51.6\";\n    };\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"argocd-cm\";\n    labels = {\n      \"app.kubernetes.io/name\" = \"argocd-cm\";\n    };\n  };\n}\n
"},{"location":"library/#libkubenamespace","title":"lib.kube.namespace","text":"

Type: namespace :: String -> AttrSet -> AttrSet

Create a Kubernetes namespace manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the namespace manifest to create.

structured function argument

annotations

Optional annotations to add to the namespace manifest. This should be an attribute set.

labels

Optional labels to add to the namespace manifest. This should be an attribute set.

Example:

namespace \"default\" {\n  labels = {\n    \"pod-security.kubernetes.io/enforce\" = \"privileged\";\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Namespace\";\n  metadata = {\n    name = \"default\";\n    labels = {\n      \"pod-security.kubernetes.io/enforce\" = \"privileged\";\n    };\n  };\n}\n
"},{"location":"library/#libkubeconfigmap","title":"lib.kube.configMap","text":"

Type: configMap :: String -> AttrSet -> AttrSet

Create a Kubernetes config map manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the config map manifest to create.

structured function argument

data

Attribute set of data to put in the config map.

namespace

Optional namespace to add to the config map manifest.

annotations

Optional annotations to add to the config map manifest. This should be an attribute set.

labels

Optional labels to add to the config map manifest. This should be an attribute set.

Example:

configMap \"my-config\" {\n  namespace = \"default\";\n  data.\"data.txt\" = \"Hello world!\";\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"ConfigMap\";\n  metadata = {\n    name = \"my-config\";\n    namespace = \"default\";\n  };\n  data = {\n    \"data.txt\" = \"Hello world!\";\n  };\n}\n
"},{"location":"library/#libkubesecret","title":"lib.kube.secret","text":"

Type: secret :: String -> AttrSet -> AttrSet

Create a Kubernetes secret manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

Danger

Due to the nature of nixidy this resource will be rendered to YAML and stored in cleartext in git.

Using this resource for actual secret data is discouraged.

name

Name of the secret manifest to create.

structured function argument

data

Attribute set of data to put in the secret. Values should be base64 encoded.

stringData

Attribute set of data to put in the secret. Values should be in cleartext.

namespace

Optional namespace to add to the secret manifest.

annotations

Optional annotations to add to the secret manifest. This should be an attribute set.

labels

Optional labels to add to the secret manifest. This should be an attribute set.

Example:

secret \"my-secret\" {\n  namespace = \"default\";\n  stringData.\"data.txt\" = \"Hello world!\";\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Secret\";\n  metadata = {\n    name = \"my-secret\";\n    namespace = \"default\";\n  };\n  stringData = {\n    \"data.txt\" = \"Hello world!\";\n  };\n}\n
"},{"location":"library/#libkubeservice","title":"lib.kube.service","text":"

Type: service :: String -> AttrSet -> AttrSet

Create a Kubernetes service manifest. This creates a manifest in Kubernetes format, so if you want to use it for an application's resources it should then be parsed with lib.resources.fromManifests.

name

Name of the service manifest to create.

structured function argument

type

Type of service to create. Defaults to ClusterIP.

selector

Label selector to match pods that this service should target. This should be an attribute set.

ports

Ports this service should have. This should be an attribute set (see example).

namespace

Optional namespace to add to the service manifest.

annotations

Optional annotations to add to the service manifest. This should be an attribute set.

labels

Optional labels to add to the service manifest. This should be an attribute set.

Example:

service \"nginx\" {\n  namespace = \"default\";\n  selector.app = \"nginx\";\n  ports.http = {\n    port = 80;\n  };\n}\n=> {\n  apiVersion = \"v1\";\n  kind = \"Service\";\n  metadata = {\n    name = \"nginx\";\n    namespace = \"default\";\n  };\n  spec = {\n    type = \"ClusterIP\"; # Default\n    selector.app = \"nginx\";\n    ports = [\n      {\n        name = \"http\";\n        port = 80;\n        protocol = \"TCP\"; # Default\n      }\n    ];\n  };\n}\n
"},{"location":"options/","title":"Configuration Options","text":""},{"location":"options/#applications","title":"applications","text":"

An application is a single Argo CD application that will be rendered by nixidy.

Its resources will be rendered into a dedicated directory and an Argo CD application created for it.

Type: attribute set of (submodule)

Default: { }

Example:

{\n  nginx = {\n    namespace = \"nginx\";\n    resources = {\n      deployments = {\n        nginx = {\n          spec = {\n            replicas = 3;\n            selector = {\n              matchLabels = {\n                app = \"nginx\";\n              };\n            };\n            template = {\n              metadata = {\n                labels = {\n                  app = \"nginx\";\n                };\n              };\n              spec = {\n                containers = {\n                  nginx = {\n                    image = \"nginx:1.25.1\";\n                    imagePullPolicy = \"IfNotPresent\";\n                  };\n                };\n                securityContext = {\n                  fsGroup = 1000;\n                };\n              };\n            };\n          };\n        };\n      };\n      services = {\n        nginx = {\n          spec = {\n            ports = {\n              http = {\n                port = 80;\n              };\n            };\n            selector = {\n              app = \"nginx\";\n            };\n          };\n        };\n      };\n    };\n  };\n}\n

Declared by:

"},{"location":"options/#applicationsnamecreatenamespace","title":"applications.<name>.createNamespace","text":"

Whether or not a namespace resource should be automatically created.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#applicationsnamehelmreleases","title":"applications.<name>.helm.releases","text":"

Helm releases to template and add to the rendered application's resources.

Type: attribute set of (submodule)

Default: { }

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamechart","title":"applications.<name>.helm.releases.<name>.chart","text":"

Derivation containing the helm chart for the release.

Type: package

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnameincludecrds","title":"applications.<name>.helm.releases.<name>.includeCRDs","text":"

Whether or not to include CRDs in the helm release.

Type: boolean

Default: true

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamename","title":"applications.<name>.helm.releases.<name>.name","text":"

Name of the helm release.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamenamespace","title":"applications.<name>.helm.releases.<name>.namespace","text":"

Namespace for the release.

Type: string

Default: config.applications.<name>.namespace

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnametransformer","title":"applications.<name>.helm.releases.<name>.transformer","text":"

Function that will be applied to the list of rendered manifests after the helm templating.

Type: function that evaluates to a(n) list of attribute set of anything

Default: config.nixidy.defaults.helm.transformer

Example: map (lib.kube.removeLabels [\"helm.sh/chart\"])

Declared by:

"},{"location":"options/#applicationsnamehelmreleasesnamevalues","title":"applications.<name>.helm.releases.<name>.values","text":"

Values to pass to the helm chart when rendering it.

Type: attribute set of anything

Default: { }

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplications","title":"applications.<name>.kustomize.applications","text":"

Kustomize applications to render and add to the rendered application's resources.

Type: attribute set of (submodule)

Default: { }

Example:

{\n  argocd = {\n    namespace = \"argocd\";\n    # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n    # in kustomization.yaml.\n    kustomization = {\n      src = pkgs.fetchFromGitHub {\n        owner = \"argoproj\";\n        repo = \"argo-cd\";\n        rev = \"v2.9.3\";\n        hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n      };\n      path = \"manifests/cluster-install\";\n    };\n  };\n};\n

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamekustomizationpath","title":"applications.<name>.kustomize.applications.<name>.kustomization.path","text":"

Path relative to the base of src to the entrypoint kustomization directory.

Type: string

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamekustomizationsrc","title":"applications.<name>.kustomize.applications.<name>.kustomization.src","text":"

Derivation containing all the kustomize bases and overlays.

Type: package

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamename","title":"applications.<name>.kustomize.applications.<name>.name","text":"

Name of the kustomize application.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnamenamespace","title":"applications.<name>.kustomize.applications.<name>.namespace","text":"

Namespace for the kustomize application.

Type: string

Default: config.applications.<name>.namespace

Declared by:

"},{"location":"options/#applicationsnamekustomizeapplicationsnametransformer","title":"applications.<name>.kustomize.applications.<name>.transformer","text":"

Function that will be applied to the list of rendered manifests from kustomize.

Type: function that evaluates to a(n) list of attribute set of anything

Default: config.nixidy.defaults.kustomize.transformer

Declared by:

"},{"location":"options/#applicationsnamename","title":"applications.<name>.name","text":"

Name of the application.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnamenamespace","title":"applications.<name>.namespace","text":"

Namespace to deploy application into (defaults to name).

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnameoutputpath","title":"applications.<name>.output.path","text":"

Name of the folder that contains all rendered resources for the application. Relative to the root of the repository.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#applicationsnameproject","title":"applications.<name>.project","text":"

ArgoCD project to make application a part of.

Type: string

Default: \"default\"

Declared by:

"},{"location":"options/#applicationsnameresources","title":"applications.<name>.resources","text":"

Resources for the application

Type: attribute set

Default: { }

Example:

{\n  deployments = {\n    nginx = {\n      spec = {\n        replicas = 3;\n        selector = {\n          matchLabels = {\n            app = \"nginx\";\n          };\n        };\n        template = {\n          metadata = {\n            labels = {\n              app = \"nginx\";\n            };\n          };\n          spec = {\n            containers = {\n              nginx = {\n                image = \"nginx:1.25.1\";\n                imagePullPolicy = \"IfNotPresent\";\n              };\n            };\n            securityContext = {\n              fsGroup = 1000;\n            };\n          };\n        };\n      };\n    };\n  };\n  services = {\n    nginx = {\n      spec = {\n        ports = {\n          http = {\n            port = 80;\n          };\n        };\n        selector = {\n          app = \"nginx\";\n        };\n      };\n    };\n  };\n}\n

Declared by:

"},{"location":"options/#applicationsnamesyncpolicyautomatedprune","title":"applications.<name>.syncPolicy.automated.prune","text":"

Specifies if resources should be pruned during auto-syncing.

Type: boolean

Default: config.nixidy.defaults.syncPolicy.automated.prune

Declared by:

"},{"location":"options/#applicationsnamesyncpolicyautomatedselfheal","title":"applications.<name>.syncPolicy.automated.selfHeal","text":"

Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected.

Type: boolean

Default: config.nixidy.defaults.syncPolicy.automated.selfHeal

Declared by:

"},{"location":"options/#applicationsnameyamls","title":"applications.<name>.yamls","text":"

List of Kubernetes manifests declared in YAML strings. They will be parsed and added to the application's resources where they can be overwritten and modified.

Can be useful for reading existing YAML files (e.g. [(builtins.readFile ./deployment.yaml)]).

Type: list of string

Default: [ ]

Example:

[\n  ''\n    apiVersion: v1\n    kind: Namespace\n    metadata:\n      name: default\n''\n]\n

Declared by:

"},{"location":"options/#nixidyappofappsname","title":"nixidy.appOfApps.name","text":"

Name of the application for bootstrapping all other applications (app of apps pattern).

Type: string

Default: \"apps\"

Declared by:

"},{"location":"options/#nixidyappofappsnamespace","title":"nixidy.appOfApps.namespace","text":"

Destination namespace for generated Argo CD Applications in the app of apps application.

Type: string

Default: \"argocd\"

Declared by:

"},{"location":"options/#nixidycharts","title":"nixidy.charts","text":"

Attrset of derivations containing helm charts. This will be passed as charts to every module.

Type: attribute set of anything

Default: { }

Declared by:

"},{"location":"options/#nixidychartsdir","title":"nixidy.chartsDir","text":"

Path to a directory containing sub-directory structure that can be used to build a charts attrset. This will be passed as charts to every module.

Type: null or path

Default: null

Declared by:

"},{"location":"options/#nixidydefaultshelmtransformer","title":"nixidy.defaults.helm.transformer","text":"

Function that will be applied to the list of rendered manifests after the helm templating. This option applies to all helm releases in all applications unless explicitly specified there.

Type: function that evaluates to a(n) list of attribute set of anything

Default: res: res

Example: map (lib.kube.removeLabels [\"helm.sh/chart\"])

Declared by:

"},{"location":"options/#nixidydefaultskustomizetransformer","title":"nixidy.defaults.kustomize.transformer","text":"

Function that will be applied to the list of rendered manifests after kustomize rendering. This option applies to all kustomize applications in all nixidy applications unless explicitly specified there.

Type: function that evaluates to a(n) list of attribute set of anything

Default: res: res

Example: map (lib.kube.removeLabels [\"app.kubernetes.io/version\"])

Declared by:

"},{"location":"options/#nixidydefaultssyncpolicyautomatedprune","title":"nixidy.defaults.syncPolicy.automated.prune","text":"

Specifies if resources should be pruned during auto-syncing. This is the default value for all applications if not explicitly set.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#nixidydefaultssyncpolicyautomatedselfheal","title":"nixidy.defaults.syncPolicy.automated.selfHeal","text":"

Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected. This is the default value for all applications if not explicitly set.

Type: boolean

Default: false

Declared by:

"},{"location":"options/#nixidyextrafiles","title":"nixidy.extraFiles","text":"

Extra files to write in the generated stage.

Type: attribute set of (submodule)

Default: { }

Declared by:

"},{"location":"options/#nixidyextrafilesnamepath","title":"nixidy.extraFiles.<name>.path","text":"

Path of output file.

Type: string

Default: \"\u2039name\u203a\"

Declared by:

"},{"location":"options/#nixidyextrafilesnametext","title":"nixidy.extraFiles.<name>.text","text":"

Text of the output file.

Type: strings concatenated with \"\\n\"

Declared by:

"},{"location":"options/#nixidyresourceimports","title":"nixidy.resourceImports","text":"

List of modules to import for resource definition options.

Type: list of (package or path or function that evaluates to a(n) (attribute set))

Default: [ ]

Declared by:

"},{"location":"options/#nixidytargetbranch","title":"nixidy.target.branch","text":"

The destination branch of the generated applications.

Type: string

Declared by:

"},{"location":"options/#nixidytargetrepository","title":"nixidy.target.repository","text":"

The repository URL to put in all generated applications.

Type: string

Declared by:

"},{"location":"options/#nixidytargetrootpath","title":"nixidy.target.rootPath","text":"

The root path of all generated applications in the repository.

Type: string

Default: \"./\"

Declared by:

"},{"location":"user_guide/getting_started/","title":"Getting Started","text":"

Nixidy only supports Nix flakes, so the flakes feature needs to be enabled.
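If flakes are not enabled yet, this typically means adding the following to your nix configuration file (for example ~/.config/nix/nix.conf or /etc/nix/nix.conf; the exact path depends on your setup):

```
experimental-features = nix-command flakes
```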

"},{"location":"user_guide/getting_started/#initialize-repository","title":"Initialize Repository","text":"

First a flake.nix needs to be created in the root of the repository.

flake.nix
{\n  description = \"My ArgoCD configuration with nixidy.\";\n\n  inputs.nixpkgs.url = \"github:nixos/nixpkgs/nixos-unstable\";\n  inputs.flake-utils.url = \"github:numtide/flake-utils\";\n  inputs.nixidy.url = \"github:arnarg/nixidy\";\n\n  outputs = {\n    self,\n    nixpkgs,\n    flake-utils,\n    nixidy,\n  }: (flake-utils.lib.eachDefaultSystem (system: let\n    pkgs = import nixpkgs {\n      inherit system;\n    };\n  in {\n    # This declares the available nixidy envs.\n    nixidyEnvs = nixidy.lib.mkEnvs {\n      inherit pkgs;\n\n      envs = {\n        # Currently we only have the one dev env.\n        dev.modules = [./env/dev.nix];\n      };\n    };\n\n    # Handy to have nixidy cli available in the local\n    # flake too.\n    packages.nixidy = nixidy.packages.${system}.default;\n\n    # Useful development shell with nixidy in path.\n    # Run `nix develop` to enter.\n    devShells.default = pkgs.mkShell {\n      buildInputs = [nixidy.packages.${system}.default];\n    };\n  }));\n}\n

The flake declares a single nixidy environment called dev. It includes a single nix module found at ./env/dev.nix, so let's create that.

env/dev.nix
{\n  # Set the target repository for the rendered manifests\n  # and applications.\n  # This should be replaced with yours, usually the same\n  # repository as the nixidy definitions.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Set the target branch the rendered manifests for _this_\n  # environment should be pushed to in the repository defined\n  # above.\n  # When using the `mkEnvs` function in flake.nix it will automatically\n  # set this to `\"env/${name}\"`.\n  nixidy.target.branch = \"env/dev\";\n}\n

Now running nix run .#nixidy -- info .#dev (or simply nixidy info .#dev inside a nix shell entered with nix develop) prints back the same info we just declared above. This verifies that things are set up correctly so far.

>> nix run .#nixidy -- info .#dev\nRepository: https://github.com/arnarg/nixidy-demo.git\nBranch:     env/dev\n

If we now attempt to build this new environment with nix run .#nixidy -- build .#dev we can see that nothing is generated except an empty folder called apps.

>> tree result\nresult\n\u2514\u2500\u2500 apps/\n

This is because we have not declared any applications yet for this environment.

"},{"location":"user_guide/getting_started/#our-first-application","title":"Our first Application","text":"

While nixidy allows you to declare all of an application's resources directly in nix, it would be a waste not to be able to use the Helm charts and Kustomize applications that already exist and are often officially maintained by the projects themselves.

The application's declaration is very similar whichever option you go with.

HelmKustomize env/dev.nix
{lib, ...}: {\n  # Options explained in the section above.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n  nixidy.target.branch = \"env/dev\";\n\n  # Argo CD application using the Helm chart from argo-helm.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Helm chart with values to template with.\n    helm.releases.argocd = {\n      # Using `downloadHelmChart` we can download\n      # the helm chart using nix.\n      # The value for `chartHash` needs to be updated\n      # with each version.\n      chart = lib.helm.downloadHelmChart {\n        repo = \"https://argoproj.github.io/argo-helm/\";\n        chart = \"argo-cd\";\n        version = \"5.51.6\";\n        chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n      };\n\n      # Specify values to pass to the chart.\n      values = {\n        # Run argocd-server with 2 replicas.\n        # This is an option in the chart's `values.yaml`\n        # usually declared like this:\n        #\n        # server:\n        #   replicas: 2\n        server.replicas = 2;\n      };\n    };\n  };\n}\n
env/dev.nix
{pkgs, ...}: {\n  # Options explained in the section above.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n  nixidy.target.branch = \"env/dev\";\n\n  # Argo CD application using the official kustomize application\n  # from Argo CD git repository.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Kustomize application to render.\n    kustomize.applications.argocd = {\n      # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n      # in kustomization.yaml.\n      kustomization = {\n        src = pkgs.fetchFromGitHub {\n          owner = \"argoproj\";\n          repo = \"argo-cd\";\n          rev = \"v2.9.3\";\n          hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n        };\n        path = \"manifests/cluster-install\";\n      };\n    };\n  };\n}\n

In both cases the following output will be generated:

tree -l result\n\u251c\u2500\u2500 apps\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Application-argocd.yaml\n\u2514\u2500\u2500 argocd\n    \u251c\u2500\u2500 ClusterRole-argocd-application-controller.yaml\n    \u251c\u2500\u2500 ClusterRole-argocd-server.yaml\n    \u251c\u2500\u2500 ClusterRoleBinding-argocd-application-controller.yaml\n    \u251c\u2500\u2500 ClusterRoleBinding-argocd-server.yaml\n    \u251c\u2500\u2500 ConfigMap-argocd-cmd-params-cm.yaml\n    \u2514\u2500\u2500 ...\n

And the contents of the Argo CD application automatically generated is the following:

apps/Application-argocd.yaml
apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n# This is the name of the application (`applications.argocd`).\nname: argocd namespace: argocd\nspec:\ndestination:\n# This is the destination namespace for the application\n# specified with `applications.argocd.namespace`.\nnamespace: argocd\nserver: https://kubernetes.default.svc\nproject: default\nsource:\n# This is the output path declared for the application with\n# option `applications.output.path` (defaults to the name).\npath: argocd\n# Repository specified in `nixidy.target.repository`.\nrepoURL: https://github.com/arnarg/nixidy-demo.git\n# Branch specified in `nixidy.target.branch`.\ntargetRevision: env/dev\nsyncPolicy:\nautomated:\nprune: false\nselfHeal: false\n

A directory with rendered resources is generated for each application declared with applications.<name>, as well as an Argo CD application resource YAML file in apps/. This makes it possible to bootstrap the whole rendered branch into a cluster by adding a single application pointing at the apps/ folder.

See App of Apps Pattern.

"},{"location":"user_guide/getting_started/#modularizing-the-configuration","title":"Modularizing the Configuration","text":"

So far we've initialized the repository with flake.nix and a single environment with all options set in a single file (env/dev.nix). Next we'll want to add a test environment.

Adding a test environment is as simple as copying env/dev.nix to env/test.nix, changing the target branch and adding the new file to flake.nix under envs.test.modules. This, however, involves a lot of code duplication and the environment will need to be maintained completely separately.

Instead we should modularize the configuration into re-usable modules that can allow slight modification between environments (number of replicas, ingress domain, etc.).

To start this migration a modules/default.nix should be created.

HelmKustomize modules/default.nix
{lib, ...}: {\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Argo CD application using the Helm chart from argo-helm.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Helm chart with values to template with.\n    helm.releases.argocd = {\n      # Using `downloadHelmChart` we can download\n      # the helm chart using nix.\n      # The value for `chartHash` needs to be updated\n      # with each version.\n      chart = lib.helm.downloadHelmChart {\n        repo = \"https://argoproj.github.io/argo-helm/\";\n        chart = \"argo-cd\";\n        version = \"5.51.6\";\n        chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n      };\n\n      # Specify values to pass to the chart.\n      values = {\n        # Run argocd-server with 2 replicas.\n        # This is an option in the chart's `values.yaml`\n        # usually declared like this:\n        #\n        # server:\n        #   replicas: 2\n        server.replicas = 2;\n      };\n    };\n  };\n}\n
modules/default.nix
{pkgs, ...}: {\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Argo CD application using the official kustomize application\n  # from Argo CD git repository.\n  applications.argocd = {\n    # Declare the destination namespace for the application.\n    namespace = \"argocd\";\n\n    # Instruct nixidy to automatically create a `Namespace`\n    # manifest in the rendered manifests for namespace\n    # selected above.\n    createNamespace = true;\n\n    # Specify Kustomize application to render.\n    kustomize.applications.argocd = {\n      # Equivalent to `github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3`\n      # in kustomization.yaml.\n      kustomization = {\n        src = pkgs.fetchFromGitHub {\n          owner = \"argoproj\";\n          repo = \"argo-cd\";\n          rev = \"v2.9.3\";\n          hash = \"sha256-GaY4Cw/LlSwy35umbB4epXt6ev8ya19UjHRwhDwilqU=\";\n        };\n        path = \"manifests/cluster-install\";\n      };\n    };\n  };\n}\n

And in flake.nix we can now set it to use modules/default.nix as a common module like the following:

flake.nix
{\n  description = \"My ArgoCD configuration with nixidy.\";\n\n  inputs.nixpkgs.url = \"github:nixos/nixpkgs/nixos-unstable\";\n  inputs.flake-utils.url = \"github:numtide/flake-utils\";\n  inputs.nixidy.url = \"github:arnarg/nixidy\";\n\n  outputs = {\n    self,\n    nixpkgs,\n    flake-utils,\n    nixidy,\n  }: (flake-utils.lib.eachDefaultSystem (system: let\n    pkgs = import nixpkgs {\n      inherit system;\n    };\n  in {\n    # This declares the available nixidy envs.\n    nixidyEnvs = nixidy.lib.mkEnvs {\n      inherit pkgs;\n\n      # Modules to include in all envs.\n      modules = [./modules];\n\n      envs = {\n        dev.modules = [./env/dev.nix];\n        test.modules = [./env/test.nix];\n      };\n    };\n  }));\n}\n

Both environment specific files now only declare the target branch:

env/dev.nix
{\n  nixidy.target.branch = \"env/dev\";\n}\n
env/test.nix
{\n  nixidy.target.branch = \"env/test\";\n}\n
"},{"location":"user_guide/getting_started/#abstracting-options-on-top-of-applications","title":"Abstracting Options on top of Applications","text":"

Now we have all common configuration in a module that is used across all environments and the next step is to also add traefik as an ingress controller. Oh! And we also want to create an ingress for the Argo CD Web UI using the ingress controller. Also, come to think of it, we don't want to run 2 replicas of argocd-server in the dev environment, to save on resources.

Reaching these goals is simple enough by overriding the few needed options directly in the env specific configuration, for example:

env/dev.nix
{lib, ...}: {\n  # ...\n\n  applications.argocd.helm.releases.argocd.values = {\n    # Actually we want 1 replica only in dev.\n    server.replicas = lib.mkForce 1;\n  };\n}\n

But this requires knowing the implementation details of the application and introduces tight coupling, making the argocd application hard to change.

Instead, things should ideally be broken apart further, with an extra configuration interface created on top. To achieve this we break the common module into more files, one module per application, with a common entrypoint.

"},{"location":"user_guide/getting_started/#traefik","title":"Traefik","text":"

Let's start by creating a module for traefik:

modules/traefik.nix
{\n  lib,\n  config,\n  ...\n}: {\n  options.networking.traefik = with lib; {\n    enable = mkEnableOption \"traefik ingress controller\";\n    # Exposing some options that _could_ be set directly\n    # in the values option below can be useful for discoverability\n    # and being able to reference in other modules\n    ingressClass = {\n      enable = mkOption {\n        type = types.bool;\n        default = true;\n        description = ''\n          Whether or not an ingress class for traefik should be created automatically.\n        '';\n      };\n      name = mkOption {\n        type = types.str;\n        default = \"traefik\";\n        description = ''\n          The name of the ingress class for traefik that should be created automatically.\n        '';\n      };\n    };\n    # To not limit the consumers of this module allowing for\n    # setting the helm values directly is useful in certain\n    # situations\n    values = mkOption {\n      type = types.attrsOf types.anything;\n      default = {};\n      description = ''\n        Value overrides that will be passed to the helm chart.\n      '';\n    };\n  };\n\n  # Only create the application if traefik is enabled\n  config = lib.mkIf config.networking.traefik.enable {\n    applications.traefik = {\n      namespace = \"traefik\";\n      createNamespace = true;\n\n      helm.releases.traefik = {\n        chart = lib.helm.downloadHelmChart {\n          repo = \"https://traefik.github.io/charts/\";\n          chart = \"traefik\";\n          version = \"25.0.0\";\n          chartHash = \"sha256-ua8KnUB6MxY7APqrrzaKKSOLwSjDYkk9tfVkb1bqkVM=\";\n        };\n\n        # Here we merge default values with provided\n        # values from `config.networking.traefik.values`.\n        values = lib.recursiveUpdate {\n          ingressClass = {\n            enabled = config.networking.traefik.ingressClass.enable;\n            name = config.networking.traefik.ingressClass.name;\n          };\n        } 
config.networking.traefik.values;\n      };\n    };\n  };\n}\n

Here we have declared extra configuration options that can be set in other modules. By setting networking.traefik.enable = true; the traefik application will be added, otherwise not. By setting networking.traefik.ingressClass.enable = false; the application will not contain an ingress class for traefik, and so on.
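As an illustrative sketch of how another module could consume these options (the replica value here is hypothetical, not part of the guide):

```nix
# env/dev.nix (sketch): enable traefik but opt out of the
# automatically created ingress class, and pass one extra
# helm value through the module's `values` escape hatch.
{
  networking.traefik = {
    enable = true;
    ingressClass.enable = false;
    # Merged on top of the module's default values
    # via `lib.recursiveUpdate` as shown above.
    values.deployment.replicas = 1;
  };
}
```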

"},{"location":"user_guide/getting_started/#argo-cd","title":"Argo CD","text":"

Now let's create a specific module for Argo CD:

modules/argocd.nix
{\n  lib,\n  config,\n  ...\n}: {\n  options.services.argocd = with lib; {\n    enable = mkEnableOption \"argocd\";\n    # Configuration options for the ingress\n    ingress = {\n      enable = mkEnableOption \"argocd ingress\";\n      host = mkOption {\n        type = types.nullOr types.str;\n        default = null;\n        description = ''\n          Hostname to use in the Ingress for argocd-server.\n        '';\n      };\n      ingressClassName = mkOption {\n        type = types.str;\n        default = \"\";\n        description = ''\n          The ingress class to use in the Ingress for argocd-server.\n        '';\n      };\n    };\n    # Configuration option for setting the replicas for\n    # argocd-server\n    replicas = mkOption {\n      type = types.int;\n      default = 2;\n      description = ''\n        Number of replicas of the argocd-server deployment.\n      '';\n    };\n    # To not limit the consumers of this module allowing for\n    # setting the helm values directly is useful in certain\n    # situations\n    values = mkOption {\n      type = types.attrsOf types.anything;\n      default = {};\n      description = ''\n        Value overrides that will be passed to the helm chart.\n      '';\n    };\n  };\n\n  # Only create the application if argocd is enabled\n  config = lib.mkIf config.services.argocd.enable {\n    applications.argocd = {\n      namespace = \"argocd\";\n      createNamespace = true;\n\n      helm.releases.argocd = {\n        chart = lib.helm.downloadHelmChart {\n          repo = \"https://argoproj.github.io/argo-helm/\";\n          chart = \"argo-cd\";\n          version = \"5.51.6\";\n          chartHash = \"sha256-3kRkzOQdYa5JkrBV/+iJK3FP+LDFY1J8L20aPhcEMkY=\";\n        };\n\n        # Here we merge default values with provided\n        # values from `config.services.argocd.values`.\n        values = lib.recursiveUpdate {\n          # Set number of replicas by using service option\n          server.replicas = 
config.services.argocd.replicas;\n          # Create an ingress with the configured hostname\n          server.ingress = {\n            enabled = config.services.argocd.ingress.enable;\n            ingressClassName = config.services.argocd.ingress.ingressClassName;\n            hosts =\n              if !isNull config.services.argocd.ingress.host\n              then [config.services.argocd.ingress.host]\n              else [];\n          };\n        } config.services.argocd.values;\n      };\n    };\n  };\n}\n

Like with the traefik module you can now set services.argocd.enable = true; to enable the argocd application and services.argocd.ingress.enable = true; to create an ingress.

"},{"location":"user_guide/getting_started/#putting-it-all-together","title":"Putting it all together","text":"

Now with argocd and traefik declared in their own modules we will need to import them in the base modules/default.nix:

modules/default.nix
{lib, config, ...}: {\n  # Here we import the modules we created above.\n  # This will make all the configuration options\n  # available to other modules.\n  imports = [\n    ./argocd.nix\n    ./traefik.nix\n  ];\n\n  # This option should be common across all environments so we\n  # can declare it here.\n  nixidy.target.repository = \"https://github.com/arnarg/nixidy-demo.git\";\n\n  # Traefik should be enabled by default.\n  networking.traefik.enable = lib.mkDefault true;\n\n  # Argo CD should be enabled by default.\n  services.argocd = {\n    enable = lib.mkDefault true;\n\n    ingress = {\n      # An ingress for Argo CD Web UI should\n      # be created if traefik is also enabled.\n      enable = lib.mkDefault config.networking.traefik.enable;\n\n      # The ingress should use Traefik's ingress\n      # class.\n      ingressClassName = lib.mkDefault config.networking.traefik.ingressClass.name;\n    };\n  };\n}\n

This will import the two application modules and set some defaults using mkDefault (this function sets the value as a default but still allows overriding it in other modules). Notably, we have set it up so that the ingress for the Argo CD Web UI is automatically enabled whenever traefik is enabled; traefik itself is enabled in this file but can still be disabled in another module.

Now, in order to achieve the goals set out at the beginning of this section, the following options are set in the environments' configurations:

env/dev.nix
{\n  nixidy.target.branch = \"env/dev\";\n\n  # We want to set the hostname for ArgoCD Web UI\n  services.argocd.ingress.host = \"argocd.dev.domain.com\";\n\n  # We only want 1 replica of argocd server\n  services.argocd.replicas = 1;\n}\n
env/test.nix
{\n  nixidy.target.branch = \"env/test\";\n\n  # We want to set the hostname for ArgoCD Web UI\n  services.argocd.ingress.host = \"argocd.test.domain.com\";\n}\n

Now the following manifests are generated:

>> tree -l result\nresult\n\u251c\u2500\u2500 apps\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Application-argocd.yaml\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Application-traefik.yaml\n\u251c\u2500\u2500 argocd\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-application-controller.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-notifications-controller.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-repo-server.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRole-argocd-server.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ClusterRoleBinding-argocd-application-controller.yaml\n\u2502   \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 traefik\n    \u251c\u2500\u2500 ClusterRoleBinding-traefik-traefik.yaml\n    \u251c\u2500\u2500 ClusterRole-traefik-traefik.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutes-traefik-containo-us.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutes-traefik-io.yaml\n    \u251c\u2500\u2500 CustomResourceDefinition-ingressroutetcps-traefik-containo-us.yaml\n    \u2514\u2500\u2500 ...\n
"},{"location":"user_guide/github_actions/","title":"GitHub Actions","text":"

Nixidy offers a GitHub Action to build and push an environment to its target branch.

"},{"location":"user_guide/github_actions/#usage","title":"Usage","text":"

This example builds the environments dev, test and prod on every push to main. Realistically, the different environments should be built in separate workflows.

name: Generate Kubernetes manifests\n\non:\npush:\nbranches:\n- main\n\njobs:\ngenerate:\nruns-on: ubuntu-latest\nstrategy:\nmatrix:\nenv: [\"dev\", \"test\", \"prod\"]\nsteps:\n- uses: actions/checkout@v4\n\n- uses: cachix/install-nix-action@v20\nwith:\n# This config is required in order to support a nixidy\n# flake repository\nextra_nix_config: |\nextra-experimental-features = nix-command flakes\n\n# This is optional but speeds up consecutive runs\n# by caching nix derivations between github workflows\n# runs\n- uses: DeterminateSystems/magic-nix-cache-action@v2\n\n# Build and push nixidy environment\n- uses: arnarg/nixidy@main\nwith:\nenvironment: ${{matrix.env}}\n
"},{"location":"user_guide/transformers/","title":"Transformers","text":"

Nixidy supports adding transformers to Helm releases and Kustomize applications. A transformer is simply a function that takes a list of Kubernetes manifests as attribute sets and returns a list of the same shape ([AttrSet] -> [AttrSet]). It is called after the manifests have been rendered and parsed into nix, but before they're transformed into the nixidy form (<apiVersion>.<kind>.<name>), and can be used to modify the resources.
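As a sketch of the expected shape, here is an inline transformer that drops all ConfigMap resources (purely illustrative):

# A transformer is just a function from a list of manifest\n# attribute sets to a list of manifest attribute sets.\nmanifests: builtins.filter (m: m.kind != \"ConfigMap\") manifests\n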

Transformers can be set globally in nixidy.defaults.helm.transformer for Helm releases and nixidy.defaults.kustomize.transformer for kustomize applications.
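For example, a transformer could be applied to every Helm release at once by setting the global default (a sketch; the added label is just an example):

{\n  # Add a static label to every manifest rendered from any\n  # Helm release in this environment.\n  nixidy.defaults.helm.transformer = map (manifest:\n    lib.recursiveUpdate manifest {\n      metadata.labels.\"example.com/env\" = \"dev\";\n    });\n}\n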

"},{"location":"user_guide/transformers/#remove-version-specific-labels","title":"Remove Version Specific Labels","text":"

It's very common for Helm charts to add the labels helm.sh/chart and app.kubernetes.io/version to every resource they render. This can produce very large diffs when a chart is updated and nixidy renders the manifests and commits them to a git branch. The changes to these labels are rarely relevant and mostly add noise that distracts from the actually relevant changes in the rendered output.

A transformer can be used to filter out these labels.

{\n  applications.argocd.helm.releases.argocd = {\n    # ...\n\n    # Remove the following labels from all manifests\n    transformer = map (lib.kube.removeLabels [\n      \"app.kubernetes.io/version\"\n      \"helm.sh/chart\"\n    ]);\n  }\n}\n

Here we use map to call lib.kube.removeLabels on each manifest in the list, removing the specified labels. The example uses function currying; it is equivalent to manifests: map (m: lib.kube.removeLabels [\"...\"] m) manifests.

"}]} \ No newline at end of file