Kubernetes / Calico the sort of easy way in AWS

February 16, 2017   

Overview

The goal of these blog posts is to set up the latest versions of Kubernetes and Calico as easily as possible, while also helping you learn how the pieces work and hook together.

These blog posts will cover the following versions

  • Kubernetes: v1.5.3
  • Calico: v1.0.2
  • Calico CNI Plugin: v1.5.6

These posts also assume you have a working AWS setup. The cluster here is a three-node setup: one master node and two worker nodes.

Also to make things easier, I will run etcd on the master node.

This guide is also written for Ubuntu 16.04. A lot of the same commands and concepts will work on other distributions, but some things may need to change.

AWS Setup

This assumes you know how to set things up in AWS. Below is the IAM policy I use for the instance profile on all my Kubernetes hosts.

This is just enough to get you going, by the way.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:ModifyInstanceAttribute",
        "elasticloadbalancing:DescribeLoadBalancers"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
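
If you script this with the AWS CLI, a rough sketch of wiring that policy to an instance profile might look like the following. The role, profile, and file names are placeholders of my own, not anything this setup requires:

# policy.json is the document above; ec2-trust.json is a standard EC2 assume-role trust policy
aws iam create-role --role-name k8s-node --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name k8s-node --policy-name k8s-node-policy --policy-document file://policy.json
aws iam create-instance-profile --instance-profile-name k8s-node
aws iam add-role-to-instance-profile --instance-profile-name k8s-node --role-name k8s-node

Launch your instances with that instance profile attached and they will pick up these permissions.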

Master setup

First, let's get etcd running on the master node.

apt-get install etcd

You need to edit the etcd config so etcd listens on the node's private IP and is reachable from the other nodes, not just on localhost. Edit the following file

/etc/default/etcd

Find the following two lines

ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://localhost:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://localhost:4001"

Change them to the master's private IP (use your own address here)

ETCD_LISTEN_CLIENT_URLS="http://172.31.11.29:2379,http://172.31.11.29:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.31.11.29:2379,http://172.31.11.29:4001"

Now restart etcd

service etcd restart
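
To confirm etcd is actually answering on the private IP, you can hit its health endpoint (this is the etcd v2 API shipped with Ubuntu 16.04's etcd package; replace the IP with your own):

curl http://172.31.11.29:2379/health
# should return {"health": "true"}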

Install docker

apt-get install docker.io

Install Calico

wget --quiet https://github.com/projectcalico/calico-containers/releases/download/v1.0.2/calicoctl -O /usr/bin/calicoctl
chmod +x /usr/bin/calicoctl

Run Calico now

calicoctl node run
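
One note: since we bound etcd to the private IP instead of localhost above, calicoctl may not find etcd on its default endpoint. With calicoctl v1.0.x you can point it at etcd via the ETCD_ENDPOINTS environment variable (adjust the IP to your master):

export ETCD_ENDPOINTS=http://172.31.11.29:2379
calicoctl node run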

The Calico node should now be running in Docker. You can verify it via

docker ps

You should see something like

# docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
6ff3449a00ef        calico/node:v1.0.2   "start_runit"       3 seconds ago       Up 3 seconds                            calico-node
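
You can also ask Calico itself how the node is doing. calicoctl has a status subcommand that shows whether Felix is up and which BGP peers it sees (none yet, since this is the only node so far):

calicoctl node status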

Now let's set up the Calico CNI plugin

mkdir -p /opt/cni/bin
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.5.6/calico
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.5.6/calico-ipam
chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
  "name": "calico-k8s-network",
  "type": "calico",
  "etcd_authority": "172.31.11.29:2379",
  "log_level": "debug",
  "debug": true,
  "ipam": {
      "type": "calico-ipam"
  },
  "policy": {
      "type": "k8s",
      "k8s_api_root": "http://172.31.11.29:8080/api/v1/"
  }
}
EOF

Replace 172.31.11.29 with your etcd endpoint(s); 2379 is etcd's default client port.

Now install the standard CNI loopback plugin

wget https://github.com/containernetworking/cni/releases/download/v0.3.0/cni-v0.3.0.tgz
tar -zxvf cni-v0.3.0.tgz
mv loopback /opt/cni/bin/
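
It's worth double checking that all three plugin binaries are in place:

ls -l /opt/cni/bin
# should list calico, calico-ipam and loopback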

Now let's install the kubelet

curl -L https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubelet -o /usr/bin/kubelet
chmod +x /usr/bin/kubelet

Now install kubectl

curl -L https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubectl -o /usr/bin/kubectl
chmod +x /usr/bin/kubectl
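
Quick sanity check that both binaries downloaded cleanly:

kubelet --version
# should print Kubernetes v1.5.3
kubectl version --client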

Now let's set up the kubelet systemd service

cat >/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service calico-node.service

[Service]
ExecStart=/usr/bin/kubelet \
--address=0.0.0.0 \
--port=10250 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.100.0.10 \
--cluster-domain=cluster.local \
--api_servers=http://172.31.11.29:8080 \
--healthz_bind_address=0.0.0.0 \
--healthz_port=10248 \
--logtostderr=true \
--allow_privileged=true \
--v=3 \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Again change 172.31.11.29 to meet your needs.

Now we can reload systemd and start the kubelet

systemctl daemon-reload
service kubelet start

Now the kubelet should be up and running.
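
You can verify it against the healthz port we configured in the unit file:

curl http://localhost:10248/healthz
# should return: ok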

I like running the other Kubernetes components in Docker using hyperkube, so this section covers how that is set up. The kubelet watches the directory we passed with --config above and runs any pod manifests it finds there as static pods.

Create the manifests directory

mkdir -p /etc/kubernetes/manifests

Add the first manifest, for the API server

cat >/etc/kubernetes/manifests/apiserver.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
  labels:
    k8s-app: kube-apiserver
    version: v1.5.3
    kubernetes.io/cluster-service: "true"
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://172.31.11.29:2379
    - --service-cluster-ip-range=10.100.0.0/16
    - --port=8080
    - --address=0.0.0.0
    - --v=3
    - --allow-privileged=true
    - --service-node-port-range=30000-50000
    ports:
    - containerPort: 8080
      hostPort: 8080
      name: local
EOF

Again change 172.31.11.29 to meet your needs.

A few notes on the above file

  • --service-cluster-ip-range: the CIDR range used for Kubernetes services. It must not overlap with any network your containers need to reach, because every Kubernetes service is given a virtual IP from this range. It is also the first piece of how DNS service discovery works in Kubernetes (see the sketch after this list).
  • --v=3: the log level. Increase it for more verbose logging; I generally only turn it up when there is a problem, since a very high level can cause disk IO issues on a decent sized cluster.
  • --service-node-port-range: the range of host ports Kubernetes can allocate when you create a NodePort service (see the sketch after this list).
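
To make those two ranges concrete, here is a hypothetical Service manifest (not something you need to create for this setup) showing where each value has to fall:

apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: NodePort
  # must come from --service-cluster-ip-range (10.100.0.0/16)
  clusterIP: 10.100.0.50
  ports:
  - port: 80
    # must fall inside --service-node-port-range (30000-50000)
    nodePort: 30080
  selector:
    app: example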

Give it a few minutes to download the hyperkube image, and once it's downloaded you'll notice new containers running

# docker ps
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS              PORTS               NAMES
584d1a1b6fbb        gcr.io/google_containers/hyperkube:v1.5.3   "/hyperkube apiserver"   6 hours ago         Up 6 hours                              k8s_kube-apiserver.a9f55048_kube-apiserver-ip-172-31-11-29_kube-system_6224e47d3ddd0aa482fd9d4842998d01_b07efdcc
e034626086eb        gcr.io/google_containers/pause-amd64:3.0    "/pause"                 6 hours ago         Up 6 hours                              k8s_POD.d8dbe16c_kube-apiserver-ip-172-31-11-29_kube-system_6224e47d3ddd0aa482fd9d4842998d01_98ed8d73
6ff3449a00ef        calico/node:v1.0.2                          "start_runit"            8 hours ago         Up 6 hours                              calico-node

One thing you'll notice is a /pause container running. This is a special infrastructure container that runs alongside every Kubernetes pod and holds the shared namespaces, which is what lets the containers in a pod share the same network and volumes.
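
Once the API server container is up, you can point kubectl at it to make sure it answers (again, substitute your master's IP):

kubectl -s http://172.31.11.29:8080 cluster-info
kubectl -s http://172.31.11.29:8080 get namespaces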

Now let's create the scheduler

cat >/etc/kubernetes/manifests/scheduler.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command:
    - /hyperkube
    - scheduler
    - --master=http://172.31.11.29:8080
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF

Change 172.31.11.29 to meet your needs.
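
The scheduler exposes the same healthz endpoint its liveness probe uses, so once the container is up you can check it directly on the master:

curl http://127.0.0.1:10251/healthz
# should return: ok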

Now let's create the controller manager

cat >/etc/kubernetes/manifests/controller-manager.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command:
    - /hyperkube
    - controller-manager
    - --master=http://172.31.11.29:8080
    - --horizontal-pod-autoscaler-sync-period=5s
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
EOF

Change 172.31.11.29 to meet your needs.
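
Same idea for the controller manager, on its own healthz port:

curl http://127.0.0.1:10252/healthz
# should return: ok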

Now let's create the kube-proxy

cat >/etc/kubernetes/manifests/proxy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command:
    - /hyperkube
    - proxy
    - --v=0
    - --master=http://172.31.11.29:8080
    - --proxy-mode=iptables
    - --conntrack-max-per-core=0
    securityContext:
      privileged: true
EOF

Change 172.31.11.29 to meet your needs. You also might not need the --conntrack-max-per-core=0 line; setting it to 0 keeps kube-proxy from trying to change the conntrack sysctls, which fails here because Ubuntu 16.04 mounts /proc read-only for the container under systemd. I'm too lazy to deal with a workaround right now.

You now have a working master server.
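
As a final check, the kubelet should have registered the master with the API server, and the static pods from the manifests directory should show up as mirror pods (substitute your master's IP as before):

kubectl -s http://172.31.11.29:8080 get nodes
kubectl -s http://172.31.11.29:8080 get pods -n kube-system -o wide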


