CD with Jenkins is a push operation: after the CI pipeline has run and the container image has been built and pushed, Jenkins must access the cluster (e.g. via kubectl) and apply the manifests. This has two disadvantages: a) cluster credentials must be stored in Jenkins, and b) there is little insight into whether applying a manifest was successful.
CD with both ArgoCD and Flux is a pull operation: the desired cluster state is declared in Git and continuously monitored to ensure it is met. This keeps a single source of truth (Git) and makes rollbacks easy. It also gives a good separation of concerns, e.g. operations teams can be responsible for merge requests to the infrastructure as code (and the resulting deployment), so developers do not need direct access to the cluster.
The choice between ArgoCD and Flux is less clear cut, but some of the differences to consider include:
Anecdotally, ArgoCD appears to be gaining the most traction. As of 27/07/22, the ArgoCD repository has 10k stars and 2.7k forks, while Flux has 6.9k stars and 1.1k forks. There also appear to be more public adopters of ArgoCD than of Flux (215 vs 65) and more company contributors (89 vs 18).
For sources, see:
The following assumes that the cluster ArgoCD is deployed on sits behind a reverse proxy at location /argocd/. The ArgoCD instance runs in insecure mode (HTTP only) as SSL is terminated at the proxy. See [2] and [3] in other resources for more information.
First create the namespace and grab the manifest file:
$ kubectl create namespace argocd
$ wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Edit the manifest file at the section where the argocd-server command is specified, adding the --insecure and --basehref arguments:
- command:
  - argocd-server
  - --insecure
  - --basehref
  - /argocd/
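The edited manifest can then be applied into the argocd namespace (the filename is the one fetched by wget above):
$ kubectl apply -n argocd -f install.yaml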
Then deploy a new ingress like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd  # same namespace as the argocd-server service it points at
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - path: /argocd(/|$)(.*)
        pathType: Prefix
        backend:
          serviceName: argocd-server
          servicePort: 80
where the location block in the reverse proxy looks something like:
location /argocd {
    proxy_pass http://<internal_cluster_ip>:80/argocd;
    allow all;
}
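As an optional sanity check of the routing (assuming the proxy serves HTTPS externally on the placeholder <hostname>), the UI should then be reachable with something like:
$ curl -kL https://<hostname>/argocd/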
The initial password for the "admin" user can be retrieved from a secret:
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
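With the password to hand, a sketch of logging in with the argocd CLI through the proxy (hostname and password are placeholders; --grpc-web-root-path is passed because the server sits under the /argocd/ sub-path):
$ argocd login <hostname> --grpc-web-root-path argocd --username admin --password <password>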
ArgoCD creates its own custom resource definitions, one of which is the Application. This defines the repository to pull from (source) and the cluster to deploy to (destination), e.g. for deployment to the same cluster that ArgoCD is deployed on:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: <name>
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <repository_url>
    targetRevision: <branch>
    path: /path/to/yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
Note the attributes in the automated section of the manifest: setting selfHeal will overwrite manual changes made to the cluster, and setting prune will sync deletions, i.e. resources removed from Git are deleted from the cluster.
By default ArgoCD polls the Git repository for changes every 3 minutes; for changes to be picked up immediately, a webhook is required.
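Per the ArgoCD webhook documentation the endpoint is /api/webhook, so behind the proxy above the target URL configured in GitLab/GitHub would look like https://<hostname>/argocd/api/webhook. The Application manifest itself is applied like any other resource, and a sync can also be triggered manually with the CLI, e.g. (assuming the CLI login above; the filename and <name> are placeholders):
$ kubectl apply -n argocd -f application.yaml
$ argocd app sync <name>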
For multi-cluster setups, the IaC can be kept either in different branches or in different directories. There are pros/cons to both. See:
Using a separate repository for infrastructure code, with different directories, seems a good balance between maintenance and compartmentalisation of application and infrastructure code. However, as of 28/07/2022, an ArgoCD Application manifest cannot reference a values.yaml outside of the repository that the Helm chart resides in. There is ongoing work to allow this (https://github.com/argoproj/argo-cd/pull/9609) and also a workaround using ApplicationSets.
values.yaml in a different repository than the chart
One of the current limitations of an ArgoCD Application is the requirement for a chart's values.yaml to be in the same package repository as the chart itself. This makes it hard to separate out infrastructure and application code.
This can be worked around by creating an ApplicationSet. An ApplicationSet is essentially a factory for Applications, with generators that can be used to e.g. traverse a list of clusters or traverse YAML files in a Git repository. If multiple targets are found, an Application manifest is built for each.
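To illustrate the generator idea (separate from the Git generator workaround described next), a minimal sketch using the list generator, with placeholder names, URLs and paths:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-per-cluster
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      # one Application is generated per element in this list
      - cluster: dev
        url: https://kubernetes.default.svc
  template:
    metadata:
      name: 'example-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: <repository_url>
        targetRevision: <branch>
        path: /path/to/yaml
      destination:
        server: '{{url}}'
        namespace: default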
The workaround is to use the Git generator (https://argocd-applicationset.readthedocs.io/en/stable/Generators-Git/). This can pull YAML from a Git repository and template it into an Application manifest, e.g.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: rucio-task-manager
spec:
  generators:
  - git:
      repoURL: https://gitlab.com/ska-telescope/src/ska-rucio-prototype.git
      revision: migration-to-gitops
      files:
      - path: 'dev/rucio-task-manager.values.yaml'
  template:
    metadata:
      name: rucio-task-manager
    spec:
      project: default
      source:
        helm:
          values: |-
            {{values}}
        repoURL: https://gitlab.com/api/v4/projects/38166490/packages/helm/stable
        targetRevision: 1.1.1
        chart: rucio-analysis
      destination:
        server: 'https://kubernetes.default.svc'
        namespace: rucio-analysis
      syncPolicy:
        automated:
          selfHeal: true
          prune: true
With this, the YAML file dev/rucio-task-manager.values.yaml in the repository https://gitlab.com/ska-telescope/src/ska-rucio-prototype.git, under the branch migration-to-gitops, will be used to populate the helm.values attribute. Note that the values file must be edited so that its contents sit under a top-level values key, e.g.
values: |
  image:
    repository: registry.gitlab.com/ska-telescope/src/ska-rucio-task-manager
    tag: release-1.29.0
    pullPolicy: Always
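As with an Application, the ApplicationSet manifest is applied into the argocd namespace (the ApplicationSet controller ships with recent ArgoCD releases; the filename is a placeholder):
$ kubectl apply -n argocd -f applicationset.yaml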