Migrating HashiCorp Vault from Virtual Machines to OpenShift/Kubernetes, changing the storage backend from Consul to Raft, and making it highly available

The problems with the existing setup:

  • Load balancing was never 100% completed
  • Upgrades were not maintained
  • The original guy who installed it has left
  • No one really understands how it was installed or how it works

What we gain by moving to OpenShift:

  • Kubernetes has internal load balancing built in
  • Upgrades can be done easily using Helm (see the sketch below)
  • All deployment is now done via code (GitOps, hooraa!)

The environments that need migrating:

  • Development
  • QA
  • Pre-Production
  • Production (3 Vaults with no load balancer in front)
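To make the Helm upgrade point concrete, this is roughly what a chart upgrade looks like once everything below is in place (the release name and values file are introduced later in this post, and the chart version is whatever you have tested):

$ helm repo update
$ helm upgrade pinky hashicorp/vault -n takeoverworld-app-dev --values vault.values.yaml --version 0.6.0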

What’s the Plan, Batman!

I wanted the new solution to be simple, so I decided to go with the official HashiCorp Helm charts to deploy the Vaults, using the new Raft storage backend instead of Consul.

  • Extract data from existing Consul, running on a VM
  • Setup Helm and projects/namespaces on OpenShift
  • Deploy new Consul, running on OpenShift
  • Restore data to new Consul, running on OpenShift
  • Convert data from Consul to Raft storage backend format
  • Deploy Highly Available Vault
  • Restore Raft snapshot to new Highly Available Vault
  • Join Rafts Together

Extract data from existing Consul

For this you will need ssh access to the server. I didn’t have root access, which didn’t matter. In this setup Consul and Vault run on the same server, which I will be calling “old-dev-vault”.

$ ssh old-dev-vault
Last login: Thu Jul 16 11:30:50 2020 from 10.36.10.82
$ ss -lntup | grep 8500
tcp LISTEN 0 128 127.0.0.1:8500 *:*
$ /opt/consul/consul snapshot save 14072020.consul.snapshot
Saved and verified snapshot to index 6092857
$ /opt/consul/consul snapshot inspect 14072020.consul.snapshot
ID 15-6092857-1594720624826
Size 518755
Index 6092857
Term 15
Version 1
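The snapshot now lives on old-dev-vault, so copy it down to the machine you will be running oc and helm from. A minimal sketch, assuming the same ssh access used above:

$ scp old-dev-vault:14072020.consul.snapshot .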

Setup Helm and projects/namespaces on OpenShift

This guide assumes you already understand the basics of OpenShift/Kubernetes.

$ oc new-project vault-migrator --display-name="Temporary Restore for Hashicorp Vault" --description="Temporary Restore for Hashicorp Vault"
$ wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
$ tar zxvf helm-v3.2.1-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/
$ rm -rf helm-v3.2.1-linux-amd64.tar.gz linux-amd64/
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
$ helm search repo
NAME             CHART VERSION  APP VERSION  DESCRIPTION
hashicorp/consul 0.23.0         1.8.0        Official HashiCorp Consul Chart
hashicorp/vault  0.6.0          1.4.2        Official HashiCorp Vault Chart
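Chart versions move fast, so if you want to see exactly which chart/app version combinations are available (and pin one with --version at install time rather than taking whatever is latest), helm can list them:

$ helm search repo hashicorp/vault --versions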

Deploy new Consul

We are now going to deploy Consul, using Helm, into the “vault-migrator” project/namespace on OpenShift. Save the following values file as consul-migrator.values.yaml:

global:
  enabled: true
  domain: consul
  image: "consul:1.8.0"
  imageK8S: "hashicorp/consul-k8s:0.17.0"
  imageEnvoy: "envoyproxy/envoy-alpine:v1.14.2"
  datacenter: dc1
  enablePodSecurityPolicies: false
  tls:
    enabled: false
  acls:
    manageSystemACLs: false
  federation:
    enabled: false
    createFederationSecret: false
server:
  enabled: true
  replicas: 1
  bootstrapExpect: 1
  storage: 10Gi
  connect: true
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"
  updatePartition: 0
  disruptionBudget:
    enabled: false
    maxUnavailable: null
externalServers:
  enabled: false
client:
  enabled: true
  join: false
  grpc: true
  exposeGossipPorts: false
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"
$ oc adm policy add-scc-to-user anyuid -z consul-consul-client -n vault-migrator
$ oc adm policy add-scc-to-user anyuid -z consul-consul-server -n vault-migrator
$ helm install consul hashicorp/consul -n vault-migrator --values consul-migrator.values.yaml
NAME: consul
LAST DEPLOYED: Thu Jul 16 15:28:28 2020
NAMESPACE: vault-migrator
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!
$ oc get po -n vault-migrator
NAME                     READY   STATUS    RESTARTS   AGE
consul-consul-server-0 1/1 Running 0 78s
$ oc get pvc -n vault-migrator
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-vault-migrator-consul-consul-server-0 Bound pvc-24296a30-2a3e-4d95-92b0-1badb7c01b92 10Gi RWO thin 2m5s
$ oc project vault-migrator
$ oc exec consul-consul-server-0 -it -- consul version
Consul v1.8.0
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
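Before restoring anything into it, a quick sanity check doesn’t hurt: consul members should show the single server as alive:

$ oc exec consul-consul-server-0 -it -- consul members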

Restore data to new Consul

With the pod up and running, you now need to copy the Consul snapshot onto it. Remember that file “14072020.consul.snapshot”?

$ oc exec consul-consul-server-0 -it -- mkdir /consul/data/snapshots
$ oc cp 14072020.consul.snapshot consul-consul-server-0:/consul/data/snapshots/
$ oc exec consul-consul-server-0 -it -- consul snapshot restore /consul/data/snapshots/14072020.consul.snapshot
Restored snapshot
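The old Vault wrote its data under the vault/ prefix (that is the path = "vault" you will see in the migration config below), so a quick way to confirm the restore worked is to list the keys under it:

$ oc exec consul-consul-server-0 -it -- consul kv get -keys vault/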

Convert data from Consul to Raft storage backend format

To migrate to the Raft storage backend, we will deploy a single Vault pod (still pointing at the new Consul) and then run the migration command from inside that pod. Save the following values file as vault-migrator.values.yaml:

global:
  enabled: true
  tlsDisable: true
  openshift: true
injector:
  enabled: false
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.4.0"
    pullPolicy: IfNotPresent
  agentImage:
    repository: "vault"
    tag: "1.4.2"
  authPath: "auth/kubernetes"
  logLevel: "info"
  logFormat: "standard"
  revokeOnShutdown: false
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
server:
  image:
    repository: "vault"
    tag: "1.4.2"
    pullPolicy: IfNotPresent
  updateStrategyType: "OnDelete"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
  ingress:
    enabled: false
  route:
    enabled: false
  authDelegator:
    enabled: false
  readinessProbe:
    enabled: true
  livenessProbe:
    enabled: false
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  preStopSleepSeconds: 5
  service:
    enabled: true
    type: ClusterIP
    port: 8200
    targetPort: 8200
  dataStorage:
    enabled: true
    size: 10Gi
    accessMode: ReadWriteOnce
  auditStorage:
    enabled: false
  dev:
    enabled: false
  standalone:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault"
        address = "consul-consul-server:8500"
      }
  ha:
    enabled: false
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200
$ helm install vault-migrator hashicorp/vault -n vault-migrator --values vault-migrator.values.yaml
LAST DEPLOYED: Thu Jul 16 15:48:30 2020
NAMESPACE: vault-migrator
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing HashiCorp Vault!
$ oc rsh vault-migrator-0
/ $ vault status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed true
Total Shares 5
Threshold 3
Unseal Progress 0/3
Unseal Nonce n/a
Version 1.4.2
HA Enabled true
storage_source "consul" {
address = "consul-consul-server:8500"
path = "vault"
}
storage_destination "raft" {
path = "/vault/data"
}
cluster_addr = "http://127.0.0.1:8201"
/ $ vault operator migrate -config /tmp/migrate.hcl
...
2020-07-16T14:18:26.995Z [INFO] copied key: path=sys/token/parent/3561bb287dfd3983d4dc9ab66dea603c390d95d5/fbe49dd49c384e2ae6c7c2f1b1eda0d0310972c9
2020-07-16T14:18:27.003Z [INFO] copied key: path=sys/token/parent/3561bb287dfd3983d4dc9ab66dea603c390d95d5/fd93a9c4670a29792886e61e61a0d38dcced1db5
2020-07-16T14:18:27.007Z [INFO] copied key: path=sys/token/parent/3561bb287dfd3983d4dc9ab66dea603c390d95d5/fdea7a4910319f962d902156ae4349795f2a4d1b
2020-07-16T14:18:27.011Z [INFO] copied key: path=sys/token/salt
Success! All of the keys have been migrated.
/ $ ls -1 /vault/data/
lost+found
node-id
raft
vault.db
/ $ vault operator raft list-peers
No raft cluster configuration found
Vault itself is still running against Consul, so edit the ConfigMap and swap the storage stanza over to Raft:

$ oc edit configmaps vault-migrator-config -n vault-migrator

Change:

storage "consul" {
  path = "vault"
  address = "consul-consul-server:8500"
}

to:

storage "raft" {
  path = "/vault/data"
}
Give the ConfigMap change a couple of minutes to propagate into the pod, then delete the pod so it comes back up with the Raft config:

$ sleep 120 ; oc delete po vault-migrator-0
$ oc rsh vault-migrator-0
/ $ vault operator unseal
Unseal Key (will be hidden):
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster n/a
HA Mode standby
Active Node Address <none>
Raft Committed Index 676
Raft Applied Index 676
/ $ vault operator raft list-peers
Node                                    Address           State     Voter
---- ------- ----- -----
b45a5f4e-bfe0-8420-248f-8ed9cea285c2 127.0.0.1:8201 leader true
/ $ vault operator raft snapshot save /tmp/14072020.raft.snapshot
/ $ exit
$ oc cp vault-migrator-0:/tmp/14072020.raft.snapshot 14072020.raft.snapshot

Deploying a Highly Available Vault

Finally! I can get a shiny new Vault in OpenShift. Enough talking! Let’s do this!

$ oc new-project takeoverworld-app-dev --display-name="App to take over the World" --description="Genetically modify 2 lab mice and harness the power of their brains"
The new deployment will have:

  • High Availability enabled
  • Anti-affinity enabled
  • Auditing enabled
  • Raft storage backend

Save the following values file as vault.values.yaml:
global:
  enabled: true
  tlsDisable: true
  openshift: true
injector:
  enabled: true
  resources:
    requests:
      memory: 256Mi
      cpu: 50m
    limits:
      memory: 256Mi
      cpu: 500m
server:
  resources:
    requests:
      memory: 256Mi
      cpu: 50m
    limits:
      memory: 256Mi
      cpu: 500m
  authDelegator:
    enabled: true
  readinessProbe:
    enabled: true
  livenessProbe:
    enabled: false
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  service:
    enabled: true
    type: ClusterIP
  dataStorage:
    enabled: true
    size: 1Gi
    storageClass: null
    accessMode: ReadWriteOnce
  auditStorage:
    enabled: true
    size: 1Gi
    storageClass: null
    accessMode: ReadWriteOnce
  route:
    enabled: false
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: false
      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}
  dev:
    enabled: false
ui:
  enabled: true
  serviceType: "ClusterIP"
$ helm install pinky hashicorp/vault -n takeoverworld-app-dev --values vault.values.yaml
$ oc get po -n takeoverworld-app-dev
NAME                                          READY   STATUS    RESTARTS   AGE
pinky-vault-0 0/1 Running 0 2m32s
pinky-vault-1 0/1 Running 0 2m32s
pinky-vault-2 0/1 Running 0 2m31s
pinky-vault-agent-injector-6d6df4cff5-r9dpp 1/1 Running 0 2m32s
$ oc new-project takeoverworld-app-qa
$ helm install pinky hashicorp/vault -n takeoverworld-app-qa --values vault.values.yaml
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: ClusterRole, namespace: , name: pinky-vault-agent-injector-clusterrole

The chart creates cluster-scoped resources (the injector ClusterRole), so the same release name cannot be reused in a second namespace. Pick a different name:

$ helm install brain hashicorp/vault -n takeoverworld-app-qa --values vault.values.yaml
NAME: brain
LAST DEPLOYED: Tue Jul 21 15:20:16 2020
NAMESPACE: takeoverworld-app-qa
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing HashiCorp Vault!

Restore Raft snapshot to new Highly Available Vault

Now that we have a new running Vault, we can restore the data from the file 14072020.raft.snapshot. Here are the steps:

$ oc cp 14072020.raft.snapshot pinky-vault-0:/tmp/14072020.raft.snapshot
$ oc rsh pinky-vault-0
/ $ vault operator init
Unseal Key 1: eP8UV0T2/I6ewEbJ1zbd3N2aL7pn3GAsUIK11DZhdyns
Unseal Key 2: 7qT1a4oEJLi0Lvh9Z5YLt+CGtkdUlgIdAn901R+mOlON
Unseal Key 3: j7a2vXPT8DAlG/9JsDcgHVRfhtgvWNygYIU3P0csyp9H
Unseal Key 4: QQBc72pwxTGRZ/u1DpdKLAQn5tfY4vmvIfxLn6JAIQQf
Unseal Key 5: rnvZLph8cZqL3+kyU5SVb+2UoK/XXbXcDctu8hPfzGd/
Initial Root Token: s.NCjg9ZbclDTfetGkSG0BvKIu

Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated master key. Without at least 3 key to
reconstruct the master key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
/ $ vault operator unseal
Unseal Key (will be hidden):
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-953f7dab
Cluster ID c2bd40d0-5b94-cdd8-88e6-aa07f50793dc
HA Enabled true
HA Cluster n/a
HA Mode standby
Active Node Address <none>
Raft Committed Index 24
Raft Applied Index 24
/ $ vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token s.NCjg9ZbclDTfetGkSG0BvKIu
token_accessor PKnSprUq4D5X6macj6V2QL1x
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]
/ $ vault operator raft snapshot restore -force /tmp/14072020.raft.snapshot
/ $ exit

The restore brings back the old cluster’s data, including its seal keys, so from here on the pods must be unsealed with the unseal keys from the original Vault. Bounce the pods and unseal:

$ oc delete po --all -n takeoverworld-app-dev
$ oc rsh pinky-vault-0
/ $ vault operator unseal
Unseal Key (will be hidden):
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://vault-migrator-0.vault-migrator-internal:8201
HA Mode standby
Active Node Address http://10.30.18.11:8200
Raft Committed Index 691
Raft Applied Index 691

Join Rafts Together

Time to join the other two pods to the Raft cluster.

/ $ vault operator raft list-peers
Node                                    Address                                    State     Voter
---- ------- ----- -----
68e7d28e-aa44-f2d8-3be0-7e624b9ecf32 pinky-vault-0.pinky-vault-internal:8201 leader true
$ oc exec -ti pinky-vault-1 -- vault operator raft join http://pinky-vault-0.pinky-vault-internal:8200
Key       Value
--- -----
Joined true
$ oc exec -ti pinky-vault-2 -- vault operator raft join http://pinky-vault-0.pinky-vault-internal:8200
Key       Value
--- -----
Joined true
$ oc exec -ti pinky-vault-1 -- vault operator unseal
Unseal Key (will be hidden):
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://pinky-vault-0.pinky-vault-internal:8201
HA Mode standby
Active Node Address http://10.30.6.14:8200
Raft Committed Index 770
Raft Applied Index 770
$ oc exec -ti pinky-vault-0 -- vault operator raft list-peers
Node                                    Address                                    State       Voter
---- ------- ----- -----
68e7d28e-aa44-f2d8-3be0-7e624b9ecf32 pinky-vault-0.pinky-vault-internal:8201 leader true
3f9eaff0-ae8c-3470-0ca9-7e68eeedf6c8 pinky-vault-1.pinky-vault-internal:8201 follower true
$ oc get po -n takeoverworld-app-dev
NAME                                          READY   STATUS    RESTARTS   AGE
pinky-vault-0 1/1 Running 0 39m
pinky-vault-1 1/1 Running 0 39m
pinky-vault-2 0/1 Running 0 39m
pinky-vault-agent-injector-6d6df4cff5-6j7rz 1/1 Running 0 39m
$ oc exec -ti pinky-vault-2 -- vault operator unseal
Unseal Key (will be hidden):
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://pinky-vault-0.pinky-vault-internal:8201
HA Mode standby
Active Node Address http://10.30.6.14:8200
Raft Committed Index 777
Raft Applied Index 777
$ oc exec -ti pinky-vault-0 -- vault operator raft list-peers
Node                                    Address                                    State       Voter
---- ------- ----- -----
68e7d28e-aa44-f2d8-3be0-7e624b9ecf32 pinky-vault-0.pinky-vault-internal:8201 leader true
3f9eaff0-ae8c-3470-0ca9-7e68eeedf6c8 pinky-vault-1.pinky-vault-internal:8201 follower true
c4330b82-8508-73c2-6d0c-f14bdedb3e2e pinky-vault-2.pinky-vault-internal:8201 follower true
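One loose end from the feature list: the values file provisions an audit PVC, but Vault won’t write audit logs until you enable an audit device yourself. A minimal sketch, assuming the chart’s default audit mount of /vault/audit and that you have logged in with the root token first:

$ oc rsh pinky-vault-0
/ $ vault audit enable file file_path=/vault/audit/vault_audit.log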

Creating an external Route

For the peeps who were paying attention: when we created the Helm values file for the highly available Vault deployment, we told it not to create an OpenShift route. When the chart creates the route itself, it sets it up as “passthrough”, which was not working for me, so I disabled it and created an edge route by hand:

$ oc create route edge pinky-vault --service=pinky-vault-active --insecure-policy=Redirect --hostname=pinky-vault.lsd.co.za
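Assuming DNS for pinky-vault.lsd.co.za points at the OpenShift router, the health endpoint is a quick way to confirm the edge route terminates TLS and reaches the active node:

$ curl -s https://pinky-vault.lsd.co.za/v1/sys/health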

Testing out High Availability

Let’s make sure this Vault HA stuff actually works.

$ oc exec -ti pinky-vault-0 -- vault status
Key                     Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://pinky-vault-0.pinky-vault-internal:8201
HA Mode active
Raft Committed Index 812
Raft Applied Index 812
$ oc exec -ti pinky-vault-1 -- vault status
Key                     Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://pinky-vault-0.pinky-vault-internal:8201
HA Mode standby
Active Node Address http://10.30.6.14:8200
Raft Committed Index 815
Raft Applied Index 815
$ oc delete po pinky-vault-0 
pod "pinky-vault-0" deleted
$ oc get po
NAME                                          READY   STATUS    RESTARTS   AGE
pinky-vault-0 0/1 Running 0 2m34s
pinky-vault-1 1/1 Running 0 63m
pinky-vault-2 1/1 Running 0 63m
pinky-vault-agent-injector-6d6df4cff5-6j7rz 1/1 Running 0 63m
$ oc exec -ti pinky-vault-0 -- vault status
Key                Value
--- -----
Seal Type shamir
Initialized true
Sealed true
Total Shares 5
Threshold 3
Unseal Progress 0/3
Unseal Nonce n/a
Version 1.4.2
HA Enabled true
$ oc exec -ti pinky-vault-2 -- vault operator raft list-peers
Node                                    Address                                    State       Voter
---- ------- ----- -----
68e7d28e-aa44-f2d8-3be0-7e624b9ecf32 pinky-vault-0.pinky-vault-internal:8201 follower true
3f9eaff0-ae8c-3470-0ca9-7e68eeedf6c8 pinky-vault-1.pinky-vault-internal:8201 leader true
c4330b82-8508-73c2-6d0c-f14bdedb3e2e pinky-vault-2.pinky-vault-internal:8201 follower true
$ oc exec -ti pinky-vault-1 -- vault status
Key                     Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.4.2
Cluster Name vault-cluster-1acdbff4
Cluster ID 5302c4ed-34a8-cdf4-a5ba-3e0e60df2565
HA Enabled true
HA Cluster https://pinky-vault-1.pinky-vault-internal:8201
HA Mode active
Raft Committed Index 833
Raft Applied Index 833
$ oc exec -ti pinky-vault-0 -- vault operator unseal

Leadership moved to pinky-vault-1 while pinky-vault-0 was down, and once unsealed, pinky-vault-0 carries on as a follower. High availability confirmed!

Cleaning Up

You can clean up the Helm deployments and the temporary vault-migrator project as follows:

$ helm uninstall consul -n vault-migrator
$ helm uninstall vault-migrator -n vault-migrator
$ oc delete pvc data-vault-migrator-consul-consul-server-0 -n vault-migrator
$ oc delete pvc data-vault-migrator-0 -n vault-migrator
$ oc delete project vault-migrator

Troubleshooting

If your Consul pods are not starting up and you are seeing events like these:

LAST SEEN   TYPE      REASON                  OBJECT                                                             MESSAGE
8s Normal SuccessfulCreate statefulset/consul-consul-server create Claim data-vault-migrator-consul-consul-server-0 Pod consul-consul-server-0 in StatefulSet consul-consul-server success
2s Warning FailedCreate statefulset/consul-consul-server create Pod consul-consul-server-0 in StatefulSet consul-consul-server failed error: pods "consul-consul-server-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group]
2s Warning FailedCreate daemonset/consul-consul Error creating: pods "consul-consul-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 8500: Host ports are not allowed to be used spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 8502: Host ports are not allowed to be used]
6s Normal ProvisioningSucceeded persistentvolumeclaim/data-vault-migrator-consul-consul-server-0 Successfully provisioned volume pvc-bbf06b8f-f004-4025-8ec8-39d258c282d5 using kubernetes.io/vsphere-volume
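The fsGroup error on the server StatefulSet is exactly what the two oc adm policy commands from the Consul deploy step address; grant the anyuid SCC to the Consul service accounts in the affected namespace (the client DaemonSet’s hostPort complaint may need a more permissive SCC, which is worth verifying against your cluster’s policies):

$ oc adm policy add-scc-to-user anyuid -z consul-consul-client -n vault-migrator
$ oc adm policy add-scc-to-user anyuid -z consul-consul-server -n vault-migrator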
If the Vault install fails with a resource-quota conflict, uninstall the half-applied release and simply run the install again:

$ helm install pinky hashicorp/vault -n takeoverworld-app-dev --values vault.values.yaml
Error: Operation cannot be fulfilled on resourcequotas "resources": the object has been modified; please apply your changes to the latest version and try again
$ helm uninstall pinky -n takeoverworld-app-dev
$ helm install pinky hashicorp/vault -n takeoverworld-app-dev --values vault.values.yaml
NAME: pinky
LAST DEPLOYED: Tue Jul 21 15:14:41 2020
NAMESPACE: takeoverworld-app-dev
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing HashiCorp Vault!
Finally, the migration command itself can die partway through:

$ vault operator migrate -config migrate.hcl
...
...
...
...
Error migrating: error reading entry: Unexpected response code: 400

What is LSD?

If you saw lsd.co.za in the route hostname and were curious… well, LSD is where I work. I spend my days killing Kubernetes, operating OpenShift, hollering at Helm, vanquishing Vaults and conquering Clouds!
