The Kubernetes Troika
Published on: September 30, 2019

I’m a sysadmin at heart. As a teenager, I had a 16U rack filled with old routers, switches, storage and servers. It was a lot of fun (and a lot of noise). Running my own personal data center taught me a lot about system administration, but exposing an application externally was painful due to all the moving parts. Are the cables plugged in? Is the VM running? Is the VLAN working? Firewall rules? NAS storage? DNS? TLS? All the application-level stuff? It was a lot of fun (and zero Ansible).
Fast forward to 2019 and the only piece of infrastructure I have is a tiny little Kubernetes cluster in GCP. It is a lot of fun (and way less noise).
After dancing with Kubernetes for a while I fell in love with three amazing services – ingress-nginx, external-dns and cert-manager. With them, I can deploy virtually any modern application with an external IP address, DNS entry and TLS certificate very very quickly. K8S just became my favorite platform to play around with other platforms.
Today I’ll show you how to enable these services and make them best friends ✨
List of Stuff You Will Need ™️
Kubernetes is a massive beast full of replaceable parts running anywhere. Some commands and outputs from this article may be different if you are in a different environment, but most should be OK.
Here’s what I’ll need from you:
- A K8S cluster and kubectl. Mine is on GCP. You can get one here. If you dislike Google, make sure your provider supports LoadBalancer Services
- A domain! I’ll use deployeveryday.com
- A DNS provider from this list. I’ll use Cloudflare
- Helm. Here is the installation guide

PS: You might be able to do everything locally with Minikube, but doing things in the Official Internet ™️ is waaaaay more fun! You can send your friends real Internet Addresses ™️ to brag about how much you know 😉
A tiny little app
Let’s start our adventure by deploying httpbin, a service to test the HTTP Request/Response life cycle. Our mission is not about the app itself, but how we can automatically give it external access, DNS and TLS.
Ay, let’s start YAMLing.
Create a file named httpbin.yaml with the content below:
apiVersion: v1
kind: Pod
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  containers:
  - name: httpbin
    image: kennethreitz/httpbin
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - port: 80
And apply it:
$ kubectl apply -f httpbin.yaml
It will create a Pod and a Service.
A Pod is the smallest deployable thing in Kubernetes, containing one or more containers. The first section of the YAML above says:

Sup K8S, could you run the httpbin container and expose its port 80?

The second part is a Service. In our case, it exposes httpbin’s port 80 inside the cluster to other K8S resources, like another Pod. But we can’t stop there.
Wait for the Pod to be in the Running state:
$ kubectl get pod -l app=httpbin -w
We still cannot access the app from the Internet. However, we can create a tunnel from our local machine to the cluster just to take a quick look. I know, Kubernetes is crazy shit.
# Type this in a new terminal and let it open
$ kubectl port-forward httpbin 8080:80
# Back to the main terminal, let's test our app
$ curl localhost:8080/get
You should see some JSON back 🎉 But why localhost if we can Take It To Another Level?
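For reference, here is a trimmed sketch of what httpbin’s /get endpoint returns — your headers, origin and URL will differ:

```json
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.64.1"
  },
  "origin": "127.0.0.1",
  "url": "http://localhost:8080/get"
}
```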
The (NGINX) Ingress
An Ingress allows external access (from the Internet) to internal Services – like the one we just created – with path-based or name-based rules, SSL termination, etc.
Let’s break down this statement.
- allows external access (from the Internet): defines a specification to access Services from outside the cluster
- path-based rules: traffic coming to the address x.x.x.x/foo goes to the Foo Service and traffic coming to the address x.x.x.x/bar goes to the Bar Service
- name-based rules: traffic coming to the hostname foo.example.com goes to Service Foo and traffic coming to the hostname bar.example.com goes to Service Bar
- SSL termination: it does the SSL dance for you, usually when in cahoots with another K8S controller, like cert-manager
If you are still confused, don’t worry, it is a hard concept to grasp. You’ll understand it better once we use it. See the official docs if you’d like more information.
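To make the rules above concrete, here is a hypothetical Ingress combining a name-based rule with two path-based rules. The foo and bar Services and the example.com host are made up for illustration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: foo.example.com # name-based rule
    http:
      paths:
      - path: /foo # foo.example.com/foo -> Service "foo"
        backend:
          serviceName: foo
          servicePort: 80
      - path: /bar # foo.example.com/bar -> Service "bar"
        backend:
          serviceName: bar
          servicePort: 80
```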
BUT, the Ingress just specifies the rules; someone else needs to apply and enforce them. Enter the Ingress Controller, a program that understands the Ingress specification and executes all the shenanigans to make it work. It usually manages the entry point (the external IP address, provided by the cloud provider) as well.
The official docs list the best-known controllers. We will be using ingress-nginx.
Install it with the command below:
# `controller.publishService.enabled=true` is necessary for the DNS automation later
$ helm install stable/nginx-ingress --name ingress-nginx --set controller.publishService.enabled=true --wait
The external IP address takes some time to be created. Watch its status with the command below, looking for the EXTERNAL-IP column. Note down the address; you will use it to access the application in a bit.
$ kubectl get services ingress-nginx-nginx-ingress-controller -w
Now let’s create the Ingress. It uses the NGINX controller to direct traffic to our httpbin application.

Create a file named httpbin-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx # controller class name
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # disable forced SSL for now, we will fix it later
  name: httpbin
spec:
  rules:
  - http:
      paths:
      - path: / # path-based rule
        backend:
          serviceName: httpbin
          servicePort: 80
Apply it:
$ kubectl apply -f httpbin-ingress.yaml
Access your external IP in your browser and 🎉🎊🥳! Our service is up and running in the Real Internet ™️
Names are Better than x.x.x.x
If IP addresses are cool, names are cooler. Wouldn’t it be fascinating to get a DNS name along with the Ingress? Well, External DNS does exactly that.
ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers.
We will install using the external-dns Helm Chart.
This section uses Cloudflare’s DNS. If you are using another provider, follow the instructions from the chart’s values.yaml and the official documentation.
With Cloudflare, I need my account’s email and API key. This article explains how to get them. The API key will be stored in a Kubernetes Secret and the email as a plain-text value inside the Helm chart values.
$ kubectl create secret generic external-dns --from-literal=cloudflare_api_key=<api key>
Create a file named external-dns-values.yaml with the following. Don’t forget to change the cloudflare.email key to your own.
sources:
- ingress
provider: cloudflare
cloudflare:
  # cloudflare's account email
  email: youremail@domain.com
  # disables cloudflare's proxy
  proxied: false
# Creates RBAC account
# Apparently obligatory in the Helm chart, see https://github.com/kubernetes-incubator/external-dns/issues/1202
rbac:
  create: true
And deploy the Helm chart:
$ helm install --name external-dns -f external-dns-values.yaml stable/external-dns --wait
Magic time 🧙‍♀️. Change the httpbin-ingress.yaml file to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx # controller class name
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # disable forced SSL for now, we will fix it later
  name: httpbin
spec:
  rules:
  # use your domain here
  # now this is a name-based rule
  - host: httpbin.deployeveryday.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin
          servicePort: 80
$ kubectl apply -f httpbin-ingress.yaml
And 🎉! A new DNS entry with the specified domain and the Ingress IP should be created.
Here are the logs from External DNS:
$ kubectl logs -l "app.kubernetes.io/name=external-dns,app.kubernetes.io/instance=external-dns"
time="2019-09-25T17:25:20Z" level=info msg="Changing record." action=CREATE record=<your domain> targets=1 ttl=1 type=TXT zone=e919cd53f62fc30c7a25396992ca2472
Verify the DNS propagation with:
$ dig <domain> +short
When the last command returns your IP address, you can access the app via the created DNS!
Be safe with TLS certificates
Without TLS, traffic between the client and the server can be sniffed or tampered with by an attacker, and Google will flag your website as insecure. Nowadays you can get TLS certificates for free with Let’s Encrypt. On top of that, cert-manager generates and renews certificates for our applications automatically, using information from the Ingress and DNS.
cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. It will ensure certificates are valid and up to date periodically, and attempt to renew certificates at an appropriate time before expiry.
Practically speaking, cert-manager can be configured to watch our Ingress and create a certificate for the defined domain with Let’s Encrypt.
The most widely used method to validate the domain is http01, BUT we won’t use it here. You see, to use http01 we need a DNS entry already in place and propagated before starting the certification process. The objective here is to get an external IP address, a DNS entry and a TLS certificate in one shot, without any DNS-propagation waiting time. dns01 solves this issue since it connects directly to our DNS provider to validate the domain.
If you want more details about these methods (and others), please take a look at their docs.
Install cert-manager with these commands, shamelessly copied from the official docs.
# Install the CustomResourceDefinition resources separately
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.10/deploy/manifests/00-crds.yaml
# Create the namespace for cert-manager
$ kubectl create namespace cert-manager
# Label the cert-manager namespace to disable resource validation
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
# Add the Jetstack Helm repository
$ helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
$ helm repo update
# Install the cert-manager Helm chart
$ helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.10.1 \
--wait \
jetstack/cert-manager
Verify the installation with:
$ kubectl get pods --namespace cert-manager -w
Setting up the Issuer
The Issuer contains your Let’s Encrypt “account” and the method used to generate the certificates. The best practice is to create a staging Issuer first to test the waters, due to Let’s Encrypt’s API rate limits. For the sake of this article, we will go straight to production.

The Cloudflare resolver is set under the solvers key. Look at the supported providers if you are using a different one.
Create a file named issuer.yaml with the following YAML:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: youremail@domain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        # Your DNS provider configuration
        cloudflare:
          email: youremail@domain.com
          apiKeySecretRef:
            name: external-dns
            key: cloudflare_api_key
And apply it:
$ kubectl apply -f issuer.yaml
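If you would rather test the waters with staging first, the Issuer is identical except for the name and the ACME server URL — a sketch, assuming the same Cloudflare setup:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging server: generous rate limits, but browsers won't trust its certificates
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: youremail@domain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        cloudflare:
          email: youremail@domain.com
          apiKeySecretRef:
            name: external-dns
            key: cloudflare_api_key
```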
Now let’s tell our Ingress that it wants a certificate. Edit httpbin-ingress.yaml to the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx # controller class name
    certmanager.k8s.io/issuer: letsencrypt-prod # issuer name
  name: httpbin
spec:
  rules:
  # use your domain here
  # now this is a name-based rule
  - host: httpbin.deployeveryday.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin
          servicePort: 80
  tls: # specify we want TLS
  - hosts:
    - httpbin.deployeveryday.com
    secretName: httpbin-tls # secret to store our TLS certificate
$ kubectl apply -f httpbin-ingress.yaml
It takes about 3 minutes for cert-manager to cook a certificate. Watch for the READY column with the command below:
$ kubectl get cert httpbin-tls
Access your domain and 🎉, we have a TLS certificate ready to go 🔒
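Under the hood, cert-manager saw the tls section and the issuer annotation and created a Certificate resource for us. Roughly, it looks like this sketch (reconstructed by hand — inspect the real one with kubectl get cert httpbin-tls -o yaml):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: httpbin-tls
spec:
  secretName: httpbin-tls # where the key pair ends up
  dnsNames:
  - httpbin.deployeveryday.com
  issuerRef:
    name: letsencrypt-prod
    kind: Issuer
```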
All together now, all together now!
You are sitting at your desk, sipping some tea and discussing your evil plans to conquer the world on Reddit. Your colleague approaches you:

– “We need to get this app up and running on the Internet today, with DNS and TLS.”

– “No worries, give me 5 minutes.”

You put your tea down and quickly slam a YAML file named yet-another-service.yaml:
:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  containers:
  - name: kuard
    image: gcr.io/kuar-demo/kuard-amd64:blue
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  selector:
    app: kuard
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/issuer: letsencrypt-prod
  name: kuard
  labels:
    app: kuard
spec:
  rules:
  - host: kuard.deployeveryday.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kuard
          servicePort: 80
  tls:
  - hosts:
    - kuard.deployeveryday.com
    secretName: kuard-tls
$ kubectl apply -f yet-another-service.yaml
$ kubectl get pod,svc,ingress,cert -l app=kuard
“Done”.
Epilogue
This is my favorite Kubernetes setup. With some building blocks, you can transform the abstract container orchestrator into a platform ready to serve your workloads.
However, it can be tricky. There are a bunch of possible combinations of services, providers and configurations. What if you have more than one DNS provider? What if your applications need different rules according to their tier? What about monitoring and observability? I bet our troika can answer all these questions, and they should be evaluated before any hard work starts.
If you have any thoughts, please leave them in the comments below 😌
Thanks ❤️