Weave Policy Profile
Overview
The Weave policy profile provides policies to automate the enforcement of best practices and conventions. It ensures the compliance of workloads through a policy agent that provides an admission controller webhook, which stops violating resources from being deployed to a cluster, and runs a daily audit that reports violating resources already deployed.
Policy Sources
Policies are provided in the profile as Custom Resources. The agent reads from the policies deployed on the cluster and runs them during each admission request or when auditing a resource.
Policies are hosted in a policy library, which is usually a git repository. They are fetched in the profile through a kustomize.toolkit.fluxcd.io Kustomization that deploys the policies to the cluster.
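Concretely, this amounts to a Flux GitRepository source plus a Kustomization that points at it. A minimal sketch of the equivalent resources (the names and the flux-system namespace here are illustrative; the profile generates its own):
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: policy-library # illustrative name
  namespace: flux-system
spec:
  interval: 10m
  url: ssh://git@github.com/myorg/policy-library
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: policy-library # illustrative name
  namespace: flux-system
spec:
  interval: 10m
  path: ./ # directory containing the kustomization file
  prune: true
  sourceRef:
    kind: GitRepository
    name: policy-library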
By default, all policies in the specified path are deployed. To specify which policies in a library should be deployed, define a kustomize.config.k8s.io Kustomization file in the repository:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: # specifies the path to each required policy
- policies/ControllerContainerAllowingPrivilegeEscalation/policy.yaml
- policies/ControllerContainerRunningAsRoot/policy.yaml
- policies/ControllerReadOnlyFileSystem/policy.yaml
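For reference, a policy library matching the Kustomization above might be laid out like this (the structure is illustrative):
policy-library/
├── kustomization.yaml
└── policies/
    ├── ControllerContainerAllowingPrivilegeEscalation/
    │   └── policy.yaml
    ├── ControllerContainerRunningAsRoot/
    │   └── policy.yaml
    └── ControllerReadOnlyFileSystem/
        └── policy.yaml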
The profile then needs to be configured with the details required to reach the repository acting as the policy library:
policyLibraryConfig:
  url: ssh://git@github.com/myorg/policy-library # repo url
  branch: agent-profile # which branch to use
  kustomizationPath: ./ # directory containing the kustomization file
  private: true # set to true if a private repo
  secretRef: policy-library-auth # secret containing credentials to authenticate with a private repo
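For a private repository reached over SSH, the referenced secret is a standard Flux Git-authentication secret. One way to create it, assuming the flux CLI is available and that the secret lives in the flux-system namespace (an assumption; adjust to your setup):
flux create secret git policy-library-auth \
  --url=ssh://git@github.com/myorg/policy-library \
  --namespace=flux-system
For SSH URLs this generates a key pair; add the printed public key as a read-only deploy key on the repository.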
Policy Sets
A policy set is a custom resource that gives more control over which policies are used in each scenario. There are cases in which certain policies need to be observed, but denying the requests of violating objects would be disruptive. A policy set allows defining additional filters for each scenario, Audit and Admission, so it is possible to report violations of certain policies without blocking deployments when those policies are not as critical as others.
Policy sets should also be hosted in the policy library. The following definition selects specific policies by their IDs:
apiVersion: pac.weave.works/v2beta1
kind: PolicySet
metadata:
  name: admission-policy-set
spec:
  id: admission-policy-set
  name: admission-policy-set
  filters:
    ids:
      - weave.policies.containers-running-with-privilege-escalation
      - weave.policies.containers-read-only-root-filesystem
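The audit-policy-set referenced in the next snippet can be defined the same way; a sketch reusing one of the policy IDs above:
apiVersion: pac.weave.works/v2beta1
kind: PolicySet
metadata:
  name: audit-policy-set
spec:
  id: audit-policy-set
  name: audit-policy-set
  filters:
    ids:
      - weave.policies.containers-running-with-privilege-escalation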
To make use of these policy sets in the profile:
config:
  AGENT_ADMISSION_POLICY_SET: admission-policy-set # name of policy set to be used for admission
  AGENT_AUDIT_POLICY_SET: audit-policy-set # name of policy set to be used for audit
Policy Validation Sinks
When validating a resource, a validation object is generated that contains information about the status of that validation and metadata about the resource and the policy involved. These objects should be exported to be visible to users as a critical part of the audit flow, but they can also be useful as logs in the admission scenario.
By default, the agent only writes policy validations that violate a policy when performing an audit. To write compliance results as well, the following needs to be specified in the profile:
config:
  AGENT_WRITE_COMPLIANCE: "true"
The agent profile supports multiple methods of exposing this data, and more than one can be used at the same time:
- Text File
- Kubernetes Events
- Notification Controller
With the text file sink, results are dumped into a text file in the agent container as JSON strings. It is important to note that this file is not persistent and will be deleted upon pod restart, so this approach is generally not recommended for production environments.
To enable writing to a text file:
config:
  AGENT_FILESYSTEM_SINK_FILE_PATH: "/path/to/file"
It is possible to make the file persistent; this assumes that a PersistentVolume is already configured on the cluster:
persistence:
  enabled: false # specifies whether to use persistence or not
  claimStorage: 1Gi # claim size
  sinkDir: /path/to # directory to be mounted
  storageClassName: standard # k8s StorageClass name
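When persistence is enabled, the sink file path should sit inside the mounted directory; a combined configuration might look like this (paths and sizes are illustrative):
config:
  AGENT_FILESYSTEM_SINK_FILE_PATH: "/var/sink/results.json" # illustrative path inside the mounted dir
persistence:
  enabled: true
  claimStorage: 1Gi
  sinkDir: /var/sink # mounted directory containing the sink file above
  storageClassName: standard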
To enable writing Kubernetes events:
config:
  AGENT_ENABLE_K8S_EVENTS_SINK: "true"
To enable writing to the Flux notification controller, set its address:
config:
  AGENT_FLUX_NOTIFICATION_SINK_ADDR: "<notification-controller-address>"
Admission Controller Setup
To enable admission control:
config:
  AGENT_ENABLE_ADMISSION: "1"
Enabling the admission controller requires certificates for secure communication between the webhook client and the admission server. The recommended way to achieve this is to install cert-manager and then configure the profile as follows:
useCertManager: true
There is also the option of providing previously generated certificates, although this is not recommended, and it is up to the user to manage them:
certificate: "---" # admission server certificate
key: "---" # admission server private key
caCertificate: "---" # CA bundle to validate the webhook server, used by the client
If the agent webhook cannot be reached or the request fails to complete, the corresponding request is refused. To change that behavior and accept the request in cases of failure, the following needs to be set:
failurePolicy: Ignore
Audit
The audit functionality performs a full scan of the cluster(s) and reports back policy violations. This is usually used for reporting policy violations and for compliance posture analysis against known benchmarks like PCI DSS, CIS, etc.
To enable audit functionality:
config:
  AGENT_ENABLE_AUDIT: "1"
An audit is performed when the agent starts and then at an interval of 23 hours. The results of each scan are published by the registered sinks.
Policy Validation
A policy validation object is the result of validating an entity against a policy. It contains all the necessary information to give the user a clear idea of what caused it:
id: string # identifier for the violation
account_id: string # organization identifier
cluster_id: string # cluster identifier
policy: object # contains related policy data
entity: object # contains related resource data
status: string # Violation or Compliance
message: string # message that summarizes the policy validation
type: string # Admission or Audit
trigger: string # what triggered the validation, e.g. a create request or the initial audit
created_at: string # time at which the validation occurred
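As a concrete illustration, an audit violation might be rendered along these lines (all values, including the contents of the policy and entity objects, are made up for the example):
id: 8d9d6dcd-e849-475f-8d3f-dfbef9023f0f # made-up identifier
account_id: my-org
cluster_id: demo-01
policy:
  id: weave.policies.containers-read-only-root-filesystem
  name: Containers Read Only Root Filesystem
entity:
  kind: Deployment
  name: nginx
  namespace: default
status: Violation
message: Read-only root filesystem is not set in deployment nginx
type: Audit
trigger: Audit
created_at: "2022-06-01T12:00:00Z"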
Managing non-CAPI clusters
Any Kubernetes cluster, whether CAPI or not, can be added to Weave GitOps Enterprise. The only thing needed is a secret containing a valid kubeconfig.
There are two options: use an existing kubeconfig, or create a kubeconfig for a ServiceAccount.
If you already have a kubeconfig stored in a secret in your management cluster, continue below to create a GitopsCluster. If you have a kubeconfig file, you can load it into the cluster like so:
kubectl create secret generic demo-01-kubeconfig \
--from-file=value.yaml=./demo-01-kubeconfig
How to create a kubeconfig secret using a service account
- Create a new service account on the remote cluster:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-01
  namespace: default
- Add RBAC permissions for the service account:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: impersonate-user-groups
subjects:
  - kind: ServiceAccount
    name: demo-01
    namespace: default
roleRef:
  kind: ClusterRole
  name: user-groups-impersonator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user-groups-impersonator
rules:
  - apiGroups: [""]
    resources: ["users", "groups"]
    verbs: ["impersonate"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
This will allow WGE to introspect the cluster for available namespaces.
Once we know which namespaces are available, we can test whether the logged-in user can access them via impersonation.
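You can sanity-check the binding with kubectl auth can-i, impersonating the service account created above; both commands should print yes:
kubectl auth can-i impersonate users \
  --as=system:serviceaccount:default:demo-01
kubectl auth can-i list namespaces \
  --as=system:serviceaccount:default:demo-01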
- Get the token of the service account
First, list the service account token secrets by running the following command:
kubectl get secrets --field-selector type=kubernetes.io/service-account-token
NAME                  TYPE                                  DATA   AGE
default-token-lsjz4   kubernetes.io/service-account-token   3      13d
demo-01-token-gqz7p   kubernetes.io/service-account-token   3      99m
demo-01-token-gqz7p is the secret that holds the token for the demo-01 service account.
To get the token of the service account, run the following command:
TOKEN=$(kubectl get secret demo-01-token-gqz7p -o jsonpath={.data.token} | base64 -d)
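Note that on Kubernetes 1.24 and newer, token secrets are no longer generated automatically for service accounts. In that case, create one yourself before listing secrets (then substitute its name, here demo-01-token, in the command above):
apiVersion: v1
kind: Secret
metadata:
  name: demo-01-token
  annotations:
    kubernetes.io/service-account.name: demo-01
type: kubernetes.io/service-account-token
Alternatively, kubectl create token demo-01 returns a short-lived token without creating a secret.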
- Create a kubeconfig secret
We'll use a helper script to generate the kubeconfig. Save this as static-kubeconfig.sh:
#!/bin/bash

# Require the four inputs that the kubeconfig template below needs.
if [[ -z "$CLUSTER_NAME" ]]; then
    echo "Ensure CLUSTER_NAME has been set"
    exit 1
fi

if [[ -z "$CA_CERTIFICATE" ]]; then
    echo "Ensure CA_CERTIFICATE has been set to the path of the CA certificate"
    exit 1
fi

if [[ -z "$ENDPOINT" ]]; then
    echo "Ensure ENDPOINT has been set"
    exit 1
fi

if [[ -z "$TOKEN" ]]; then
    echo "Ensure TOKEN has been set"
    exit 1
fi

# Base64-encode the CA certificate, stripping newlines so the kubeconfig
# stays valid (GNU base64 wraps long lines by default).
export CLUSTER_CA_CERTIFICATE=$(base64 < "$CA_CERTIFICATE" | tr -d '\n')

envsubst <<EOF
apiVersion: v1
kind: Config
clusters:
- name: $CLUSTER_NAME
  cluster:
    server: https://$ENDPOINT
    certificate-authority-data: $CLUSTER_CA_CERTIFICATE
users:
- name: $CLUSTER_NAME
  user:
    token: $TOKEN
contexts:
- name: $CLUSTER_NAME
  context:
    cluster: $CLUSTER_NAME
    user: $CLUSTER_NAME
current-context: $CLUSTER_NAME
EOF
For the next step, the cluster certificate (CA) is needed. How you get hold of the certificate depends on the cluster. For GKE, you can view it in the GCP Console: Cluster -> Details -> Endpoint -> "Show cluster certificate". You will need to copy the contents of the certificate into the ca.crt file used below.
CLUSTER_NAME=demo-01 \
CA_CERTIFICATE=ca.crt \
ENDPOINT=<control-plane-ip-address> \
TOKEN=<token> ./static-kubeconfig.sh > demo-01-kubeconfig
Replace the following:
- CLUSTER_NAME: the name of your cluster, e.g. demo-01
- ENDPOINT: the API server endpoint, e.g. 34.218.72.31
- CA_CERTIFICATE: the path to the CA certificate file of the cluster
- TOKEN: the token of the service account retrieved in the previous step
Finally, create a secret for the generated kubeconfig:
kubectl create secret generic demo-01-kubeconfig \
--from-file=value.yaml=./demo-01-kubeconfig
Connect a cluster
Make sure you've:
- Added some common RBAC rules into the clusters/bases folder, as described in Getting started.
- Configured the cluster bootstrap controller as described in Getting started.
Create a GitopsCluster
apiVersion: gitops.weave.works/v1alpha1
kind: GitopsCluster
metadata:
  name: demo-01
  namespace: default
  # Signals that this cluster should be bootstrapped.
  labels:
    weave.works/capi: bootstrap
spec:
  secretRef:
    name: demo-01-kubeconfig
When the GitopsCluster appears in the cluster, the Cluster Bootstrap Controller will install Flux on it and, by default, start reconciling the ./clusters/demo-01 path in your management cluster's git repository. To inspect the Applications and Sources running on the new cluster, we need to give permissions to the user accessing the UI. Common RBAC rules like this should be stored in ./clusters/bases. Here we create a Kustomization to add these common resources onto our new cluster:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  creationTimestamp: null
  name: clusters-bases-kustomization
  namespace: flux-system
spec:
  interval: 10m0s
  path: clusters/bases
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
Save these two files into your git repository, then commit and push.
Once Flux has reconciled the cluster, you can inspect your Flux resources via the UI!
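As a sketch of the common RBAC that clusters/bases might contain, something along these lines grants read access for the UI user (the role name and the wego-admin subject are illustrative; adapt them to your users):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: user-read-resources # illustrative name
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user-read-resources
subjects:
  - kind: User
    name: wego-admin # illustrative UI user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: user-read-resources
  apiGroup: rbac.authorization.k8s.io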
Debugging
How to test a kubeconfig secret in a cluster
To test that a kubeconfig secret has been set up correctly, apply the following manifest and check the logs after the job completes:
apiVersion: batch/v1
kind: Job
metadata:
  name: kubectl
spec:
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
        - name: kubectl
          image: bitnami/kubectl
          args:
            [
              "get",
              "pods",
              "-n",
              "kube-system",
              "--kubeconfig",
              "/etc/kubeconfig/value.yaml",
            ]
          volumeMounts:
            - name: kubeconfig
              mountPath: "/etc/kubeconfig"
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: kubeconfig
          secret:
            secretName: demo-01-kubeconfig
            optional: false
In the manifest above, demo-01-kubeconfig is the name of the secret that contains the kubeconfig for the remote cluster.
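Assuming the manifest is saved as kubectl-job.yaml (an illustrative filename), you can run it and inspect the output like so:
kubectl apply -f kubectl-job.yaml
kubectl wait --for=condition=complete job/kubectl
kubectl logs job/kubectl
If the secret is set up correctly, the logs list the kube-system pods of the remote cluster. Note that ttlSecondsAfterFinished: 30 deletes the job shortly after completion, so check the logs promptly.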
Background
- Authentication strategies:
  - X509 client certificates: can be used across different namespaces
  - Service account tokens: limited to a single namespace
- Kubernetes authentication 101 (CNCF blog post)
- Kubernetes authentication (Magalix blog post)