
Kooking Kontainers With Kubernetes: A Recipe for Dual-Stack Deliciousness

If you have a mild allergy to ASCII or YAML you might want to avert your eyes. You’ve been warned.

Now, let's imagine you have a largish server hanging around, not earning its keep. On the other hand, you have a desire to run some CI pipelines on it, and you think Kubernetes is the answer.

You’ve tried ‘kube-spawn’ and ‘minikube’ etc, but they stubbornly allocate just an IPv4 /32 to your container, and, well, your CI job does something ridiculous like bind to ::1, failing miserably. Don’t despair: let’s use Calico with host-local IPAM.
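To see that failure mode outside of any cluster, here's a quick sanity check (a sketch; it assumes python3 is on the PATH) that binds to the IPv6 loopback the way a fussy CI job might:

```shell
# Try to bind an IPv6 socket to ::1 and report success.  Inside a
# v4-only pod this fails with "Cannot assign requested address".
python3 -c 'import socket; s = socket.socket(socket.AF_INET6); s.bind(("::1", 0)); print("ok")'
```

If that prints "ok" inside your pod, your CI job's ::1 bind will work too.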

For the most part the recipe speaks for itself. The ‘awk’ in the calico install is to switch from calico-ipam (single-stack) to host-local with two sets of ranges. Technically Kubernetes doesn’t support dual stack (cloud networking is terrible. Just terrible. It’s all v4 and proxy servers, despite sometimes using advanced things like BGP). But, we’ll fool it!
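For reference, the host-local stanza the awk is aiming for looks roughly like this (an illustrative sketch: the v4 subnet 10.244.0.0/16 is a placeholder for whatever pod CIDR you pass to kubeadm, not a value from the recipe):

```json
{
  "ipam": {
      "type": "host-local",
      "ranges": [
          [ { "subnet": "10.244.0.0/16" } ],
          [ { "subnet": "fc00::/64",
              "rangeStart": "fc00:0:0:0:0:0:0:10",
              "rangeEnd": "fc00:0:0:0:ffff:ffff:ffff:fffe" } ]
      ]
  }
}
```

Two entries in "ranges" is what makes the plugin hand each pod one address per family.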

Well, here’s the recipe. Take one server running Ubuntu 18.04 (it probably works with anything), run the following, sit back and enjoy, then install your gitlab-runner.

rm -rf ~/.kube
sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address --pod-network-cidr 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
until kubectl get nodes; do echo -n .; sleep 1; done; echo              
kubectl apply -f \
kubectl apply -f \
curl -s |\
awk '/calico-ipam/ { print "              \"type\": \"host-local\","
                     print "              \"ranges\": [ [ { \"subnet\": \"\", \"rangeStart\": \"\", \"rangeEnd\": \"\" } ], [ { \"subnet\": \"fc00::/64\", \"rangeStart\": \"fc00:0:0:0:0:0:0:10\", \"rangeEnd\": \"fc00:0:0:0:ffff:ffff:ffff:fffe\" } ] ]"
                     printed = 1 }
     { if (!printed) { print $0 }
       printed = 0 }' > /tmp/calico.yaml
kubectl apply -f /tmp/calico.yaml
kubectl apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes cluster.local {
            pods insecure
        }
        prometheus :9153
        proxy .
        cache 30
    }
EOF
kubectl taint nodes --all
kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
helm init --service-account tiller                
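If you want to convince yourself the awk rewrite does what it claims before pointing it at the real calico.yaml, you can run the same match-and-replace pattern over a toy stanza (the file name and sample content here are made up for the demo):

```shell
# Create a minimal fake CNI stanza containing the calico-ipam line.
cat > /tmp/sample.json <<'EOF'
      "ipam": {
          "type": "calico-ipam"
      },
EOF
# On a matching line, print the replacement and suppress the original;
# every other line passes through untouched.
awk '/calico-ipam/ { print "          \"type\": \"host-local\","
                     printed = 1 }
     { if (!printed) { print $0 }
       printed = 0 }' /tmp/sample.json
```

The output keeps the surrounding braces but swaps the type line for host-local, which is exactly what happens to the real manifest.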
