
Creating your first CAPD Cluster

If you've followed the Upgrade guide you should have a management cluster ready to roll. Next up we'll configure a CAPI provider, add a cluster template, and use them to create our first CAPD cluster.

1. Configure a CAPI provider

If you have not already done so, install and configure at least one CAPI provider.

See the Cluster API Providers page for more details on providers. Here we'll continue with kind and CAPD as an example.

```bash
# Enable support for `ClusterResourceSet`s for automatically installing CNIs
export EXP_CLUSTER_RESOURCE_SET=true
clusterctl init --infrastructure docker
```
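
As a quick sanity check, you can confirm the controllers came up before moving on; the namespaces below are the defaults that clusterctl creates, so adjust if your setup differs:

```bash
# The CAPI core controller and the Docker (CAPD) infrastructure controller
# should both reach the Running state.
kubectl get pods -n capi-system
kubectl get pods -n capd-system
```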

2. Add a template

See the CAPI Templates page for more details on this topic. Once we load a template, we can use it in the UI to create clusters!

Download the template below to your config repository path, then commit and push to your git origin.

.weave-gitops/apps/capi/templates/capd-template.yaml
```yaml
apiVersion: capi.weave.works/v1alpha1
kind: CAPITemplate
metadata:
  name: cluster-template-development
  namespace: default
spec:
  description: This is the std. CAPD template
  params:
    - name: CLUSTER_NAME
      description: This is used for the cluster naming.
    - name: NAMESPACE
      description: Namespace to create the cluster in.
    - name: KUBERNETES_VERSION
      description: The version of Kubernetes to use.
      options: ["1.19.11", "1.20.7", "1.21.1"]
  resourcetemplates:
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      metadata:
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
        labels:
          cni: calico
          weave.works/capi: bootstrap
      spec:
        clusterNetwork:
          services:
            cidrBlocks:
              - 10.128.0.0/12
          pods:
            cidrBlocks:
              - 192.168.0.0/16
          serviceDomain: cluster.local
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerCluster
          name: "${CLUSTER_NAME}"
          namespace: "${NAMESPACE}"
        controlPlaneRef:
          kind: KubeadmControlPlane
          apiVersion: controlplane.cluster.x-k8s.io/v1beta1
          name: "${CLUSTER_NAME}-control-plane"
          namespace: "${NAMESPACE}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerCluster
      metadata:
        name: "${CLUSTER_NAME}"
        namespace: "${NAMESPACE}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      metadata:
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec:
            extraMounts:
              - containerPath: "/var/run/docker.sock"
                hostPath: "/var/run/docker.sock"
    - kind: KubeadmControlPlane
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      metadata:
        name: "${CLUSTER_NAME}-control-plane"
        namespace: "${NAMESPACE}"
      spec:
        replicas: 1
        machineTemplate:
          infrastructureRef:
            kind: DockerMachineTemplate
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            name: "${CLUSTER_NAME}-control-plane"
            namespace: "${NAMESPACE}"
        kubeadmConfigSpec:
          clusterConfiguration:
            controllerManager:
              extraArgs: { enable-hostpath-provisioner: "true" }
            apiServer:
              certSANs: [localhost, 127.0.0.1]
          initConfiguration:
            nodeRegistration:
              criSocket: /var/run/containerd/containerd.sock
              kubeletExtraArgs:
                # We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
                # kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
                cgroup-driver: cgroupfs
                eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
          joinConfiguration:
            nodeRegistration:
              criSocket: /var/run/containerd/containerd.sock
              kubeletExtraArgs:
                # We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
                # kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
                cgroup-driver: cgroupfs
                eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
        version: "${KUBERNETES_VERSION}"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec: {}
    - apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
      kind: KubeadmConfigTemplate
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        template:
          spec:
            joinConfiguration:
              nodeRegistration:
                kubeletExtraArgs:
                  # We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
                  # kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
                  cgroup-driver: cgroupfs
                  eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: MachineDeployment
      metadata:
        name: "${CLUSTER_NAME}-md-0"
        namespace: "${NAMESPACE}"
      spec:
        clusterName: "${CLUSTER_NAME}"
        replicas: 1
        selector:
          matchLabels:
        template:
          spec:
            clusterName: "${CLUSTER_NAME}"
            version: "${KUBERNETES_VERSION}"
            bootstrap:
              configRef:
                name: "${CLUSTER_NAME}-md-0"
                namespace: "${NAMESPACE}"
                apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
                kind: KubeadmConfigTemplate
            infrastructureRef:
              name: "${CLUSTER_NAME}-md-0"
              namespace: "${NAMESPACE}"
              apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
              kind: DockerMachineTemplate
```
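
Once your config repository has synced to the management cluster, the template should be queryable there. A minimal check, assuming the CRD registers the plural name `capitemplates` (adjust if your installation differs):

```bash
# List CAPITemplate objects in the namespace used in the YAML above.
kubectl get capitemplates.capi.weave.works -n default
```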

Automatically install a CNI with ClusterResourceSets

We can use ClusterResourceSets to automatically install CNIs on a new cluster; here we use Calico as an example.

1. Add a CRS to install a CNI

Create a Calico ConfigMap and a CRS as follows:

.weave-gitops/apps/capi/bootstrap/calico-crs.yaml
```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
  - kind: ConfigMap
    name: calico-crs-configmap
```

The full calico-crs-configmap.yaml is a bit large to display inline here, so make sure to download it to .weave-gitops/apps/capi/bootstrap/calico-crs-configmap.yaml as well.
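
Once both files have synced, you can verify the ClusterResourceSet and its ConfigMap exist on the management cluster; any new cluster carrying the cni: calico label (set by the template above) will then get Calico applied automatically:

```bash
# ClusterResourceSet is a standard Cluster API addons resource.
kubectl get clusterresourcesets -n default
kubectl get configmap calico-crs-configmap -n default
```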

Profiles and clusters

WGE can automatically install profiles onto new clusters.

1. Add a Helm repository

A profile is an enhanced Helm chart. When publishing profiles to Helm repositories, make sure to include the weave.works/profile annotation in Chart.yaml. These annotated profiles will appear in WGE.

```yaml
annotations:
  weave.works/profile: nginx-profile
```
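
For context, here is a minimal, illustrative Chart.yaml carrying that annotation; the chart name, version, and description are placeholders:

```yaml
apiVersion: v2
name: nginx-profile
version: 0.1.0
description: An example profile chart that installs nginx
annotations:
  # This annotation marks the chart as a profile so WGE will surface it.
  weave.works/profile: nginx-profile
```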

Download the profile repository definition below to your config repository path, then commit and push. Make sure to update the url to point to a Helm repository containing your profiles.

.weave-gitops/apps/capi/profiles/profile-repo.yaml
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  creationTimestamp: null
  name: weaveworks-charts
  namespace: wego-system
spec:
  interval: 1m
  url: https://my-org.github.io/profiles
status: {}
```
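
If you have the flux CLI installed, you can confirm the repository index is being fetched once the file has synced (the namespace matches the YAML above):

```bash
flux get sources helm -n wego-system
```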

3. Add a cluster bootstrap config

First, create a secret containing a GitHub personal access token, which the bootstrap job below references as my-pat:

```bash
kubectl create secret generic my-pat --from-literal GITHUB_TOKEN=$GITHUB_TOKEN
```
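
If your config repository lives on GitLab instead, create the token secret that the commented-out GITLAB_TOKEN stanza in the bootstrap config below expects; note that it uses the secret name pat rather than my-pat:

```bash
kubectl create secret generic pat --from-literal GITLAB_TOKEN=$GITLAB_TOKEN
```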

Then download the cluster bootstrap config below to your config repository path and update the GITOPS_REPO variable to point to your cluster's configuration repository.

.weave-gitops/apps/capi/bootstrap/capi-gitops-cluster-bootstrap-config.yaml
```yaml
apiVersion: capi.weave.works/v1alpha1
kind: ClusterBootstrapConfig
metadata:
  name: capi-gitops
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      weave.works/capi: bootstrap
  jobTemplate:
    generateName: "run-gitops-{{ .ObjectMeta.Name }}"
    spec:
      containers:
        - image: ghcr.io/palemtnrider/wego-app:v0.5.0-rc2
          name: gitops-install
          resources: {}
          volumeMounts:
            - name: kubeconfig
              mountPath: "/etc/gitops"
              readOnly: true
          args:
            [
              "install",
              "--override-in-cluster",
              "--app-config-url",
              "$(GITOPS_REPO)",
            ]
          env:
            - name: KUBECONFIG
              value: "/etc/gitops/value"
            - name: GITOPS_REPO
              value: "ssh://git@github.com/my-org/my-management-cluster.git"
            - name: GITHUB_TOKEN
              valueFrom:
                secretKeyRef:
                  name: my-pat
                  key: GITHUB_TOKEN
          # - name: GITLAB_TOKEN # If your GitOps repo is on GitLab, use this instead of GITHUB_TOKEN
          #   valueFrom:
          #     secretKeyRef:
          #       name: pat
          #       key: GITLAB_TOKEN
      restartPolicy: Never
      volumes:
        - name: kubeconfig
          secret:
            secretName: "{{ .ObjectMeta.Name }}-kubeconfig"
```
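
When a new cluster matching the weave.works/capi: bootstrap label becomes ready, WGE runs the job defined above against it. A rough way to check this from the management cluster, assuming the CRD registers the plural name clusterbootstrapconfigs:

```bash
# Plural resource name assumed; adjust if your CRD differs.
kubectl get clusterbootstrapconfigs.capi.weave.works -n default
# The generated job name starts with "run-gitops-<cluster-name>".
kubectl get jobs -n default
```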

Test

You should now be able to create a new cluster from your template and install profiles onto it with a single Pull Request via the WGE UI!
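
Once the pull request is merged and the new cluster objects have been applied, you can watch the cluster come up from the management cluster using standard Cluster API tooling; replace my-first-cluster with whatever name you entered in the UI:

```bash
kubectl get clusters -n default
clusterctl describe cluster my-first-cluster
# Grab a kubeconfig for the new cluster once its control plane is ready.
clusterctl get kubeconfig my-first-cluster > my-first-cluster.kubeconfig
```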