As a homelabber or self-hoster who has deployed a couple of services and would now like to access them from outside your home network, you may have come to the dilemma of … how?
If you don’t want to read my rambling, you can skip to Setting up.
To make the long trek from your coffee shop / office / airport to your home network, you will need a few things to come together:
- Ownership of a domain
- An authoritative DNS provider
- A static IP or something which will handle routing to the correct IP
- A way to expose your content on your host
My Journey
Bare Metal: Port-forward and Dynamic DNS
This is the most low-tech option and requires the fewest moving parts, with the added spice of opening up your home network to the nasty web. I started off with this as I was a fledgling in the world of self-hosting, with a vague understanding that it was risky.
- Open up ports in your router
- Use a cron job to update your DNS provider of choice with your current IP (a rough sketch follows below)
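For illustration only, the cron half of that might have looked something like the sketch below, using Cloudflare's DNS API as the example provider; the zone ID, record ID, token, and hostname are placeholders for values from your own account:

# crontab entry: push the current public IP to DNS every five minutes
*/5 * * * * /usr/local/bin/update-home-ip.sh

# /usr/local/bin/update-home-ip.sh
#!/bin/sh
IP=$(curl -fsS https://ifconfig.me)
curl -fsS -X PATCH \
  "https://api.cloudflare.com/client/v4/zones/<zone id>/dns_records/<record id>" \
  -H "Authorization: Bearer <api token>" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\"}"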
This had many downfalls. All network requests were hitting bare metal, with no load balancing or caching. The cron that updated DNS was not reactive, so access could be lost for as long as it took for the cron to fire again. Most obviously, I was leaking my IP address and opening up my home network to intrusion.
Moving to the Cloud: Load Balancers and VPCs
During my research, I stumbled across Oracle Cloud Infrastructure's (OCI) generous free tier, which grants one VPC, an application load balancer and a network load balancer, and two compute instances. Excited, I signed up for an account and started clicking through the UI, spinning up a compute instance and a VPC. The VPC panel is where my excitement started to wane: setting up the appropriate ACLs and route tables for secure network traffic across the various subnets was a non-trivial task for a beginner, and I soon found myself slowing to a snail's pace trying to understand the intricacies of VPCs. I was also somewhat concerned that the architecture was overly complicated for what seemed like a fairly simple use case. Hence, the next step.
Cloudflare Tunnels
Not willing to commit to OCI, I came across Cloudflare Tunnels (originally Argo Tunnels), a Cloudflare service that proxies traffic between private hosts and Cloudflare's endpoints while preserving privacy. This seemed ideal for my use case, with minimal components to manage (a price of $0 is also a bonus!).
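Under the hood, a lightweight connector (cloudflared) runs next to your services and only makes outbound connections to Cloudflare's edge, so nothing needs to be exposed on your router. As a rough sketch of running it by hand, with the tunnel name as a placeholder:

# authenticate, create a named tunnel, and start the connector;
# it dials out to Cloudflare's edge rather than listening on open ports
cloudflared tunnel login
cloudflared tunnel create <tunnel name>
cloudflared tunnel run <tunnel name>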
The Implementation
As the target is a Kubernetes cluster, I will be using external-dns, a controller that synchronises exposed Kubernetes services and ingresses with DNS entries in third-party DNS providers. Additionally, external-dns-cloudflare-tunnel-webhook will be deployed alongside it to provide the integration with Cloudflare Tunnels. See more about this setup over at Creating a Webhook Provider for External DNS.
Setting up
Once you have a free Cloudflare account, note down the account ID (at https://dash.cloudflare.com/). Then, set up the following:
- A site which will contain the DNS entries for your services
- A tunnel (take note of the tunnel ID)
- An API token with Cloudflare Tunnel:Edit and DNS:Edit permissions for the relevant site and tunnel (you can sanity-check these with the commands below)
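The site and API token are created in the dashboard. If you created the tunnel with cloudflared as sketched earlier, the tunnel ID can be read back from the CLI, and the token can be verified before wiring it into the cluster:

# list tunnels to confirm the tunnel ID
cloudflared tunnel list

# verify the API token is valid before handing it to the cluster
curl -s -H "Authorization: Bearer <api token>" \
  https://api.cloudflare.com/client/v4/user/tokens/verify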
You can then make these values available to your Kubernetes cluster (via Vault, secrets, etc.). For example:
kubectl create namespace external-dns

kubectl create secret generic external-dns-cloudflare-tunnel \
  --namespace external-dns \
  --from-literal CLOUDFLARE_ACCOUNT_ID=<account id> \
  --from-literal CLOUDFLARE_API_TOKEN=<api token> \
  --from-literal CLOUDFLARE_TUNNEL_ID=<tunnel id>
To install external-dns, use its Helm chart:
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns
helm upgrade external-dns external-dns/external-dns --install --namespace external-dns --values ./values.yaml
The following values configure external-dns to watch Kubernetes services annotated with external-dns.alpha.kubernetes.io/hostname with a domain ending in example.com. It also deploys external-dns-cloudflare-tunnel-webhook as a sidecar which external-dns will communicate with using webhooks.

Example values.yaml:

logLevel: info
logFormat: json
policy: sync
registry: noop
interval: 1h
triggerLoopOnEvent: true
sources: [service]
domainFilters: [example.com]
image:
  pullPolicy: Always
provider:
  name: webhook
  webhook:
    image:
      repository: docker.io/axatol/external-dns-cloudflare-tunnel-webhook
      tag: latest
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 65532
      runAsNonRoot: true
      runAsUser: 65532
    env:
      - name: LOG_LEVEL
        value: debug
      - name: DOMAIN_FILTERS
        value: example.com
      - name: CLOUDFLARE_ACCOUNT_ID
        valueFrom:
          secretKeyRef:
            name: external-dns-cloudflare-tunnel
            key: CLOUDFLARE_ACCOUNT_ID
      - name: CLOUDFLARE_TUNNEL_ID
        valueFrom:
          secretKeyRef:
            name: external-dns-cloudflare-tunnel
            key: CLOUDFLARE_TUNNEL_ID
      - name: CLOUDFLARE_API_TOKEN
        valueFrom:
          secretKeyRef:
            name: external-dns-cloudflare-tunnel
            key: CLOUDFLARE_API_TOKEN
extraArgs:
  - --annotation-filter=external-dns.alpha.kubernetes.io/hostname
You should end up with a Kubernetes cluster that automatically creates, updates, and deletes DNS entries in Cloudflare whenever you make the corresponding change to an annotated service (or ingress, if you add it to the sources list).
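As a quick test, annotating a service is enough to trigger a sync. The whoami workload and hostname below are placeholders, and the target annotation is an assumption about how the webhook learns where the tunnel should route; the exact annotations it expects are covered in the linked webhook post.

# deploy a throwaway workload and expose it as a ClusterIP service
kubectl create deployment whoami --image docker.io/traefik/whoami
kubectl expose deployment whoami --port 80

# the hostname annotation (matching the annotation filter and example.com domain filter)
# marks the service for external-dns; the target is assumed here to be the in-cluster
# URL the tunnel should forward to
kubectl annotate service whoami \
  "external-dns.alpha.kubernetes.io/hostname=whoami.example.com" \
  "external-dns.alpha.kubernetes.io/target=http://whoami.default.svc.cluster.local:80"

With policy: sync, removing the annotations (or the service itself) should see the corresponding record cleaned up on the next sync interval.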