Let's say you have multiple databases (SQL or NoSQL) and your application is running on Kubernetes. Your credentials need to be stored and managed securely. In such situations, a Kubernetes Secret, which is only a Base64-encoded value, is commonly used unless you are running an encrypted etcd storage solution. A Secret is not ideal because it is not truly secured: Base64 is encoding, not encryption, so your credentials could be leaked if the etcd backend were compromised.
Vault from HashiCorp is a secure and reliable solution for storing and managing your secrets if you do not want to depend on Kubernetes Secrets. It also bridges the gap between development and operations teams, since an administrator is not needed to hand secrets around manually.
The above is a sample use case for static secrets.
This blog, the first of two parts, explores managing static secrets in Kubernetes through Vault. In the next part, we will look at dynamic secrets with MongoDB.
Prerequisites are:
Tech Stack:
Vault is an identity-based secrets and encryption management system. A secret can be an API key, encryption key, password, or certificate: anything you want to strictly control access to. Vault provides encryption services guarded by authentication and authorization methods. Using Vault's UI, CLI, or HTTP API, secrets and other sensitive data can be securely stored, and access to them can be tightly controlled (restricted) and audited.
Install Vault using the Helm chart
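If the HashiCorp Helm repository is not already configured on your machine, add it first (these are HashiCorp's published repository name and URL):

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update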
Edit the Helm chart:
We are not going with the gp2 type of PV; instead, we will use DynamoDB as the storage backend for HA.
1. Go to Helm's default repository cache folder under your home directory: ~/.cache/helm/repository
2. Disable the built-in backend storage (dataStorage).
3. Enable HA for high availability.
4. Under "ha", provide the DynamoDB config details as shown below, and enable the UI so we can access the Vault server from the browser.
*The credentials we pass must belong to a user with DynamoDB full access.
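As a rough sketch, the edited section of the chart's values.yaml could look like this (the region, table name, and credentials are placeholders; the listener stanza mirrors the chart's default HA config):

server:
  dataStorage:
    enabled: false            # step 2: disable the default PV-backed storage
  ha:
    enabled: true             # step 3: run Vault in HA mode
    config: |
      ui = true
      listener "tcp" {
        tls_disable     = 1
        address         = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      # step 4: DynamoDB as the HA storage backend
      storage "dynamodb" {
        ha_enabled = "true"
        region     = "<aws-region>"
        table      = "<dynamodb-table-name>"
        access_key = "<access-key-with-dynamodb-full-access>"
        secret_key = "<secret-access-key>"
      }
ui:
  enabled: true               # expose the Vault UI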
helm install vault hashicorp/vault
kubectl get pods -n monitoring | grep vault
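To reach the UI we enabled from a browser, one simple option is a port-forward (assuming the chart's default service name vault and listener port 8200); the UI only becomes usable once Vault is initialized and unsealed in the next steps:

kubectl port-forward svc/vault -n monitoring 8200:8200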
Vault starts uninitialized and in a sealed state. Before initialization, the storage backend is not prepared to receive data.
kubectl exec vault-0 -- vault operator init \
    -key-shares=1 \
    -key-threshold=1 \
    -format=json > cluster-keys.json
The operator init command generates a root key that it splits into key shares: -key-shares=n sets the number of shares the key is split into, and
-key-threshold=x sets the minimum number of shares needed to unseal the Vault server.
-format=json writes the output as JSON, so you can redirect it to a credentials file at any path you like.
Here the output is redirected to a file named cluster-keys.json.
VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
kubectl exec vault-0 -- vault status
kubectl exec -it vault-0 -- /bin/sh
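Inside the pod you normally need to authenticate before enabling a secrets engine. A minimal sketch, assuming you log in with the root token from the root_token field of cluster-keys.json:

vault login <root-token-from-cluster-keys.json>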
vault secrets enable -path=internal kv-v2
vault kv put internal/database/clops-app DATABASE_USERNAME="clops" DATABASE_PASSWORD="random_password" DATABASE_HOST="database_endpoint"
vault kv get internal/database/clops-app
Vault provides a Kubernetes authentication method that lets a client authenticate to the Vault server with a Kubernetes service account token. Every pod is issued such a token when it is created.
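First, enable the Kubernetes auth method inside the Vault pod (a standard Vault CLI command), then configure it with the cluster API server address as shown below:

vault auth enable kubernetes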
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
For a client that wants to read the secret from Vault, the read capability must be granted. We will create a policy in Vault that allows reading the secret path, i.e. internal/data/database/clops-app.
We then need a role on that policy, bound to a service account and namespace.
Write out the policy named internal-app that enables the read capability for secrets at the path internal/data/database/clops-app.
Create a Kubernetes authentication role named internal-app.
vault policy write internal-app - <<EOF
path "internal/data/database/clops-app" {
  capabilities = ["read"]
}
EOF
vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=apis \
    policies=internal-app \
    ttl=24h
The role connects the Kubernetes service account, internal-app, and namespace, apis, with the Vault policy, internal-app. The tokens returned after authentication are valid for 24 hours (we can decide the TTL based on our use case).
The Vault Kubernetes authentication role references a Kubernetes service account named internal-app, which must exist in the apis namespace.
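If that service account does not exist yet, a minimal way to create it (plain kubectl; the name and namespace match the role above):

kubectl create serviceaccount internal-app -n apis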
Here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clops-app
  namespace: apis
  labels:
    app: clops-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clops-app
  template:
    metadata:
      annotations:
        # Vault Agent injector annotations: authenticate with the internal-app role
        # and render the secret into /vault/secrets/database-clops-app.env
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-database-clops-app.env: 'internal/data/database/clops-app'
        vault.hashicorp.com/agent-inject-template-database-clops-app.env: |
          {{- with secret "internal/data/database/clops-app" -}}
          export SPRING_DATA_MONGODB_URI="mongodb+srv://{{ .Data.data.DATABASE_USERNAME }}:{{ .Data.data.DATABASE_PASSWORD }}@{{ .Data.data.DATABASE_HOST }}"
          {{- end -}}
      labels:
        app: clops-app
    spec:
      serviceAccountName: internal-app
      containers:
        - name: clops-app
          image: registry.gitlab.com/cloudifyops/engineering/clops-app:3.1.21
          imagePullPolicy: Always
          command: ["/bin/sh"]
          args:
            ['-c', 'source /vault/secrets/database-clops-app.env && env > /vault/secrets/testenv && java "$MAX_HEAP_SIZE" -jar /opt/clops-app/clops-app.jar']
          ports:
            - name: clops-app
              containerPort: 7400
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: AWS_SECRET_ACCESS_KEY
            - name: GIT_ACCESSTOKEN
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: GIT_ACCESSTOKEN
          volumeMounts:
            - name: logs
              mountPath: /var/log
        # Promtail sidecar that ships the application logs
        - name: promtail-container
          image: grafana/promtail
          args:
            - -config.file=/etc/promtail/promtail.yaml
          volumeMounts:
            - name: logs
              mountPath: /var/log
            - name: promtail-config
              mountPath: /etc/promtail
      volumes:
        - name: logs
          emptyDir: {}
        - name: promtail-config
          configMap:
            name: promtail-config-creol-api
      imagePullSecrets:
        - name: gitlab-auth
The vault.hashicorp.com annotations tell the Vault Agent injector to render the secret into the pod at the /vault/secrets path.
We then source the secret file from there and start the jar file.
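To confirm the injection worked, you can print the rendered file from the application container (assuming the deployment above):

kubectl exec -n apis deploy/clops-app -c clops-app -- cat /vault/secrets/database-clops-app.env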
The logs of the application:
We can see that the connection with MongoDB has been established and the JVM has started.
Bonus tip
If you want to store files that contain sensitive information in Vault, they can also be stored in the KV engine.
Let's say the Promtail config file contains the UID and password of the Loki database; we can store that file at the same Vault path.
vault kv put internal/database/clops-app DATABASE_USERNAME="clops" DATABASE_PASSWORD="random_password" DATABASE_HOST="database_endpoint" PROMTAIL=@promtail.yaml
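To pull the file back out of Vault later, the -field flag of kv get can be used (the field name matches the key used above):

vault kv get -field=PROMTAIL internal/database/clops-app > promtail.yaml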