So first, let's get Docker running on the nodes:
apt-get install docker.io
Install Calico
wget --quiet https://github.com/projectcalico/calico-containers/releases/download/v1.0.2/calicoctl -O /usr/bin/calicoctl
chmod +x /usr/bin/calicoctl
Run Calico now
ETCD_ENDPOINTS=http://172.31.11.29:2379 calicoctl node run
Change 172.31.11.29 to the master IP address.
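That master address shows up again below in the CNI config and the kubelet flags, so one option is to stash it in a shell variable up front. A small sketch; MASTER_IP is just a name I'm using here, not anything calicoctl requires:

```shell
# Set the master address once and reuse it; 172.31.11.29 is this
# tutorial's master, substitute your own.
MASTER_IP=172.31.11.29
ETCD_ENDPOINTS="http://${MASTER_IP}:2379"
echo "$ETCD_ENDPOINTS"
```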
The Calico node should now be running in Docker. You can verify it via
docker ps
You should see something like
# docker ps
CONTAINER ID   IMAGE                COMMAND         CREATED         STATUS         PORTS   NAMES
6ff3449a00ef   calico/node:v1.0.2   "start_runit"   3 seconds ago   Up 3 seconds           calico-node
Now let's set up the Calico CNI parts
mkdir -p /opt/cni/bin
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.5.6/calico
wget -N -P /opt/cni/bin https://github.com/projectcalico/calico-cni/releases/download/v1.5.6/calico-ipam
chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "172.31.11.29:2379",
    "log_level": "debug",
    "debug": true,
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s",
        "k8s_api_root": "http://172.31.11.29:8080/api/v1/"
    }
}
EOF
Replace 172.31.11.29 with your etcd endpoint(s); 2379 is etcd's default client port.
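A malformed CNI config tends to surface later as cryptic pod-networking failures, so it's worth a quick JSON sanity check now. A sketch, assuming python3 is on the node; it validates a copy in /tmp so the example is self-contained, but on a real node you'd point json.tool at /etc/cni/net.d/10-calico.conf:

```shell
# Write a trimmed copy of the CNI config to /tmp and make sure it
# parses as JSON before the kubelet tries to load it.
cat >/tmp/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_authority": "172.31.11.29:2379",
    "ipam": { "type": "calico-ipam" },
    "policy": {
        "type": "k8s",
        "k8s_api_root": "http://172.31.11.29:8080/api/v1/"
    }
}
EOF

python3 -m json.tool /tmp/10-calico.conf >/dev/null \
  && echo "10-calico.conf: valid JSON" \
  || echo "10-calico.conf: invalid JSON"
```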
Now install the standard CNI loopback plugin
wget https://github.com/containernetworking/cni/releases/download/v0.3.0/cni-v0.3.0.tgz
tar -zxvf cni-v0.3.0.tgz
mv loopback /opt/cni/bin/
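A missing or non-executable plugin also shows up later as confusing networking errors, so a quick presence check is cheap insurance. A small sketch; check_cni_plugins is just a helper name I'm using, and it takes the plugin directory as an argument:

```shell
# Check that the expected CNI plugin binaries are present and
# executable in the given directory.
check_cni_plugins() {
  dir="$1"
  missing=0
  for p in calico calico-ipam loopback; do
    if [ -x "$dir/$p" ]; then
      echo "$p: ok"
    else
      echo "$p: missing"
      missing=1
    fi
  done
  return "$missing"
}

check_cni_plugins /opt/cni/bin || echo "some CNI plugins are missing"
```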
Now let's install the kubelet
curl -L https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubelet -o /usr/bin/kubelet
chmod +x /usr/bin/kubelet
Now install kubectl
curl -L https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/kubectl -o /usr/bin/kubectl
chmod +x /usr/bin/kubectl
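If a download URL is wrong you can end up with an HTML error page instead of a binary, and chmod +x will happily make it "executable". A quick sketch that checks the ELF magic bytes of the files we just installed (is_elf is just a helper name I'm using here):

```shell
# Check that a downloaded file is an actual ELF binary rather than an
# HTML error page, by comparing its first four bytes to the ELF magic.
is_elf() {
  [ "$(head -c 4 "$1" 2>/dev/null)" = "$(printf '\177ELF')" ]
}

for b in /usr/bin/kubelet /usr/bin/kubectl; do
  if is_elf "$b"; then
    echo "$b: looks like a binary"
  else
    echo "$b: does not look like an ELF binary"
  fi
done
```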
Now let's set up the kubelet systemd service
cat >/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service calico-node.service

[Service]
ExecStart=/usr/bin/kubelet \
  --address=0.0.0.0 \
  --port=10250 \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.100.0.10 \
  --cluster-domain=cluster.local \
  --api-servers=http://172.31.11.29:8080 \
  --healthz-bind-address=0.0.0.0 \
  --healthz-port=10248 \
  --logtostderr=true \
  --allow-privileged=true \
  --v=3 \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Change 172.31.11.29 to meet your needs.
Now we can reload systemd and start the kubelet
systemctl daemon-reload
systemctl start kubelet
Now the kubelet should be up and running.
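Rather than eyeballing it, you can poll the healthz endpoint we configured on port 10248 above. A sketch, assuming curl is installed; wait_healthz is just a helper name I'm using, taking a URL and a number of attempts:

```shell
# Poll an HTTP health endpoint once a second until it answers 2xx or
# we run out of attempts. Usage: wait_healthz <url> <attempts>
wait_healthz() {
  url="$1"
  attempts="$2"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting on $url"
  return 1
}

wait_healthz http://127.0.0.1:10248/healthz 5 \
  || echo "check 'journalctl -u kubelet' for errors"
```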
I like running the other Kubernetes services in Docker using hyperkube, so this section goes into how that is set up.
Create the manifests directory
mkdir -p /etc/kubernetes/manifests
Now let's create the kube-proxy manifest
cat >/etc/kubernetes/manifests/proxy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.5.3
    command:
    - /hyperkube
    - proxy
    - --v=0
    - --master=http://172.31.11.29:8080
    - --proxy-mode=iptables
    - --conntrack-max-per-core=0
    securityContext:
      privileged: true
EOF
Change 172.31.11.29 to meet your needs. You also might not need the --conntrack-max-per-core=0 line; it's there because Ubuntu 16 sets /proc as read-only for systemd, so kube-proxy can't write its conntrack settings, and setting it to 0 tells it not to try. I'm too lazy to deal with a workaround right now.
Now go back to the master server. You should be able to run the following command and see all of the nodes up and running:
# kubectl get nodes
NAME              STATUS    AGE
ip-172-31-11-29   Ready     7h
ip-172-31-11-67   Ready     22s
ip-172-31-4-22    Ready     22s
Now you are ready to create some Kubernetes services!