Kubernetes / Calico / AWS Setup

January 21, 2016   

This will go over a high-level setup of the Calico network layer for Kubernetes running in AWS. Calico lets you apply Layer 3 firewall-style rules to pods inside Kubernetes. Think of them almost like AWS security groups for pods. You can create rules based on Kubernetes labels and also CIDRs. These rules are translated into iptables rules on the Kubernetes nodes.
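To give a flavor of what those rules look like, here is a hypothetical sketch using the calicoctl 0.x "profile rule" commands. The profile name ("backend"), tag ("frontend"), and CIDR are made-up examples, and the exact syntax may differ in your version (check calicoctl profile --help), so this script only prints the commands rather than running them:

```shell
#!/bin/sh
# Hypothetical sketch of label/CIDR-style rules with calicoctl 0.x.
# "backend", "frontend", and the CIDR are example names, not from this setup.
# We only print the commands here; drop the "run" wrapper to execute them.
run() { echo "+ $*"; }

# Allow inbound traffic into the "backend" profile from pods tagged "frontend":
run calicoctl profile backend rule add inbound allow from tag frontend

# Allow inbound traffic from a trusted CIDR:
run calicoctl profile backend rule add inbound allow from cidr 10.0.0.0/8
```

Behind the scenes these end up as iptables rules on each node, which is why the security-group analogy holds up.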

So let's get into it. I am going to assume you already know how to set up Kubernetes.

So first, download the Calico control app, calicoctl.

As of writing, 0.14.0 is the latest. You can check the following URL to find the latest version:

https://github.com/projectcalico/calico-containers/releases

Run the following commands on all Kubernetes nodes and the master:

wget --quiet https://github.com/projectcalico/calico-docker/releases/download/v0.14.0/calicoctl
chmod +x calicoctl
mv -f calicoctl /usr/bin
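The same steps can be scripted with the release pinned in a variable, which makes upgrades a one-line change. This is a sketch: it only prints the download URL, and you would swap the echo for the commented wget line when running it for real on a node:

```shell
#!/bin/sh
# Sketch of the install steps above, with the release version pinned.
# Only prints the URL; uncomment the last line to actually install.
set -e
VERSION="v0.14.0"
URL="https://github.com/projectcalico/calico-docker/releases/download/${VERSION}/calicoctl"
echo "download: ${URL}"
# wget --quiet "${URL}" && chmod +x calicoctl && mv -f calicoctl /usr/bin
```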

The following assumes your Kubernetes etcd server runs at the following IP:

10.120.0.20

So now that the calicoctl binary is installed everywhere, on the master server we want to create an IP pool for containers to use. I have chosen 192.168.0.0/16. This range should not overlap with your VPC CIDR or the subnets you chose for Kubernetes.

ETCD_AUTHORITY=10.120.0.20:2379 calicoctl pool add 192.168.0.0/16 --nat-outgoing --ipip

The --nat-outgoing and --ipip flags are needed for routing to work, since we can't do BGP peering with the AWS network fabric. Calico instead wraps outgoing packets in an IP-in-IP packet addressed to the destination Kubernetes node.
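One practical consequence of the IP-in-IP wrapping: the outer IPv4 header costs 20 bytes, so the usable MTU inside the tunnel is the link MTU minus 20. A quick sketch of the arithmetic, assuming a standard 1500-byte MTU:

```shell
#!/bin/sh
# IP-in-IP adds a 20-byte outer IPv4 header, so the effective MTU inside
# the tunnel is the link MTU minus 20 (e.g. 1500 -> 1480).
LINK_MTU=1500
IPIP_OVERHEAD=20
echo "tunnel MTU: $((LINK_MTU - IPIP_OVERHEAD))"
```

If you later see fragmentation or stalled connections between pods, lowering the container interface MTU by at least this overhead is a common fix.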

Now run the following command on all Kubernetes nodes:

ETCD_AUTHORITY=10.120.0.20:2379 /usr/bin/calicoctl node --kubernetes --kube-plugin-version=v0.7.0
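Since every calicoctl invocation needs ETCD_AUTHORITY set, a small wrapper script can save retyping it. This is just a convenience sketch using the example etcd address from this post; adjust it for your cluster:

```shell
#!/bin/sh
# Convenience sketch: export ETCD_AUTHORITY once so subsequent calicoctl
# commands in this shell don't need it prefixed each time.
# 10.120.0.20:2379 is the example etcd address used throughout this post.
ETCD_AUTHORITY="10.120.0.20:2379"
export ETCD_AUTHORITY
echo "calicoctl will talk to etcd at ${ETCD_AUTHORITY}"
# e.g. now just run: calicoctl node --kubernetes --kube-plugin-version=v0.7.0
```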

Calico should be running within a few seconds. You can run the following to verify:

docker ps | grep calico

Below is a systemd service file for Calico if you want to use it:

[Unit]
Description=Calico per-node agent
Documentation=https://github.com/projectcalico/calico-docker
Requires=docker.service
After=docker.service

[Service]
# /etc/network-environment should define ETCD_AUTHORITY (e.g. 10.120.0.20:2379)
EnvironmentFile=/etc/network-environment
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/calicoctl node --kubernetes --kube-plugin-version=v0.7.0 --detach=false
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Now, with Calico running on all the nodes, you can run the following on the master to see the status:

calicoctl status

You should see output like the following:

Running felix version 1.3.0rc5

IPv4 BGP status
+---------------+-------------------+-------+------------+-------------+
|  Peer address |     Peer type     | State |   Since    |     Info    |
+---------------+-------------------+-------+------------+-------------+
| 10.120.41.129 | node-to-node mesh |   up  | 2016-01-15 | Established |
|  10.120.42.7  | node-to-node mesh |   up  | 2016-01-05 | Established |
| 10.130.43.170 | node-to-node mesh |   up  | 2016-01-05 | Established |
+---------------+-------------------+-------+------------+-------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+

You should see all the nodes you set up, each with Info showing Established. If you are experiencing issues, check the AWS security groups (the IP-in-IP tunnel uses IP protocol 4, which must be allowed between nodes) along with the Calico logs at /var/log/calico.

Now you need to tell Kubernetes to use the Calico network plugin. This allows Kubernetes to assign the correct IPs to newly created containers.

So you need to add the following flag to the kubelet command (kubelet normalizes flag spellings, so the underscore form --network_plugin is also accepted):

--network-plugin=calico
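If your kubelet runs under systemd, one hypothetical way to wire this in is a drop-in file, assuming your kubelet unit's ExecStart already expands a $KUBELET_NETWORK_ARGS variable (both the path and the variable name here are assumptions; adjust to however your kubelet unit is built):

```ini
# /etc/systemd/system/kubelet.service.d/10-calico.conf  (hypothetical path)
# Assumes the kubelet ExecStart references $KUBELET_NETWORK_ARGS.
[Service]
Environment="KUBELET_NETWORK_ARGS=--network-plugin=calico"
```

After a systemctl daemon-reload and a kubelet restart, the flag should show up in ps aux | grep kubelet.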

Make sure you restart the kubelet and verify it's running with the plugin.

You should now be able to load up some containers in Kubernetes and things should work out nicely.
