This will go over how to get GitLab up and running on AWS using the following toolchain: Terraform, Kops, and Kubernetes.

Kops is a really nice tool that helps you easily spin up a Kubernetes cluster in AWS while giving you a lot of control over how it's spun up.

I used Terraform to pre-create a VPC structure that Kops can build on top of. I made it a bit harder on myself since I also pre-created the subnets for Kubernetes to use. If you don't build those in Terraform, then it's almost plug and play.

Assumptions

I have VPN connectivity into the VPC already, so I don't need Kops to spin up a bastion host. If you need a bastion, make sure you add the --bastion option on the create cluster step.
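
A sketch of what that looks like (combine the flag with the full create command shown later; --bastion requires the private topology):

kops create cluster --topology private --bastion ${CLUSTER_NAME}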

Terraform

I'm not going into much detail on this. I assume you know enough about Terraform to build your VPC and all the needed infrastructure around it.

The big thing is that the subnets need the correct tags.

I have the following for my node and utility subnets.

resource "aws_subnet" "kubernetes_utility" {
  count = 2

  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${cidrsubnet(aws_vpc.vpc.cidr_block, 10, count.index+8)}"
  availability_zone = "${element(data.aws_availability_zones.azs.names, count.index)}"

  tags = {
    Name              = "Kubernetes-Utility-${element(data.aws_availability_zones.azs.names, count.index)}"
    Env               = "${var.env}"
    KubernetesCluster = "CLUSTER_NAME"
  }
}

resource "aws_subnet" "kubernetes_node" {
  count = 2

  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "${cidrsubnet(aws_vpc.vpc.cidr_block, 8, count.index+4)}"
  availability_zone = "${element(data.aws_availability_zones.azs.names, count.index)}"

  tags = {
    Name              = "Kubernetes-Nodes-${element(data.aws_availability_zones.azs.names, count.index)}"
    Env               = "${var.env}"
    KubernetesCluster = "CLUSTER_NAME"
  }
}

Make sure you change CLUSTER_NAME to match the cluster name you set below. It must be an FQDN.

Also, Kubernetes will use the kubernetes_utility subnets for public ELBs, so make sure those route out through an Internet Gateway.
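
A sketch of what that routing can look like in Terraform (the aws_internet_gateway.igw name here is an assumption; adjust to your own resources):

# Send the utility subnets' default route out the Internet Gateway
resource "aws_route_table" "utility" {
  vpc_id = "${aws_vpc.vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }
}

resource "aws_route_table_association" "kubernetes_utility" {
  count          = 2
  subnet_id      = "${element(aws_subnet.kubernetes_utility.*.id, count.index)}"
  route_table_id = "${aws_route_table.utility.id}"
}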

I have a floating EBS volume to store all the git repo data in GitLab. It looks like this:

resource "aws_ebs_volume" "gitlab" {
  availability_zone = "${element(data.aws_availability_zones.azs.names, 0)}"
  size              = 500
  type              = "gp2"

  tags = {
    Name = "GitLab EBS Volume"
    Type = "gitlab"
    AZ   = "${element(data.aws_availability_zones.azs.names, 0)}"
  }
}

The big gotcha is that an EBS volume can't span AZs, so I'm using a data source here to pick an AZ to put it in. Take note of the AZ and the volume ID for later use.
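
To make those easy to grab, you can expose them as Terraform outputs (the output names here are my own):

# Surface the volume ID and AZ so `terraform output` can print them
output "gitlab_ebs_volume_id" {
  value = "${aws_ebs_volume.gitlab.id}"
}

output "gitlab_ebs_volume_az" {
  value = "${aws_ebs_volume.gitlab.availability_zone}"
}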

Kops

You can grab Kops from the following URL

https://github.com/kubernetes/kops

I'm on a Mac and use Homebrew, so I just ran the following command

brew update && brew install kops

Now that Kops is installed, we can create our cluster. I filled in the following environment variables to make life easier.

export KOPS_STATE_STORE=s3://<somes3bucket>
export CLUSTER_NAME=<sharedvpc.mydomain.com>
export VPC_ID=vpc-12345678
export NETWORK_CIDR=10.100.0.0/16
export ROUTE53_ZONE=Z31SO15F6TYF4A

Below are explanations of what these are for:

- KOPS_STATE_STORE: the S3 bucket where Kops keeps its cluster state.
- CLUSTER_NAME: the FQDN of the cluster you're creating.
- VPC_ID: the ID of the VPC you built in Terraform.
- NETWORK_CIDR: the CIDR block of that VPC.
- ROUTE53_ZONE: the ID of the Route53 hosted zone Kops should put DNS records in.
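
If you don't have the hosted zone ID handy, you can look it up with the AWS CLI (assuming your credentials are configured):

aws route53 list-hosted-zones-by-name --dns-name mydomain.com --max-items 1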

With those set, you can run the following command

kops create cluster --node-count 3 \
    --zones us-west-2a,us-west-2b \
    --master-zones us-west-2a \
    --dns-zone=${ROUTE53_ZONE} \
    --node-size t2.medium \
    --master-size t2.small \
    --topology private \
    --networking flannel \
    --vpc=${VPC_ID} \
    --network-cidr=${NETWORK_CIDR} \
    --image 595879546273/CoreOS-stable-1298.5.0-hvm \
    --ssh-public-key ~/yourkey.pub \
    ${CLUSTER_NAME}

Some things to note:

- The --zones and --master-zones values need to match the AZs your subnets live in.
- --topology private keeps the masters and nodes off the public subnets.
- --network-cidr is the CIDR of the shared VPC, so Kops doesn't try to allocate its own.
- The --image here is a CoreOS AMI; use whichever OS image you prefer.

This will generate some preview output. Now you need to edit the cluster spec to make the subnet changes before anything gets built.

So run the following command

kops edit cluster ${CLUSTER_NAME}

This will open the cluster spec in your editor (vi by default), and you want to change the following.

Change the loadBalancer type from Public to Internal if you want to keep your Kubernetes API on an internal ELB.

spec:
  api:
    loadBalancer:
      type: Public

To look like this

spec:
  api:
    loadBalancer:
      type: Internal

Then, if you pre-created your subnets, you want to change everything in the generated subnets section, which will look something like this:

- cidr: 10.115.4.0/24
  name: us-west-2a
  type: Private
  zone: us-west-2a
- cidr: 10.115.5.0/24
  name: us-west-2b
  type: Private
  zone: us-west-2b
- cidr: 10.115.2.0/26
  name: utility-us-west-2a
  type: Utility
  zone: us-west-2a
- cidr: 10.115.2.64/26
  name: utility-us-west-2b
  type: Utility
  zone: us-west-2b

You basically remove each cidr: line and swap it for the id: of your existing subnet, so the section ends up looking like this:

- id: subnet-5f922516
  name: us-west-2a
  type: Private
  zone: us-west-2a
- id: subnet-2ab62b4d
  name: us-west-2b
  type: Private
  zone: us-west-2b
- id: subnet-6ec77627
  name: utility-us-west-2a
  type: Utility
  zone: us-west-2a
- id: subnet-70e77817
  name: utility-us-west-2b
  type: Utility
  zone: us-west-2b
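
If you need to look up the subnet IDs, the AWS CLI can list them per VPC:

aws ec2 describe-subnets --filters "Name=vpc-id,Values=${VPC_ID}" \
    --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' --output table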

Save the file and now you can preview your changes.

kops update cluster ${CLUSTER_NAME}

When you are ok with the changes you can apply them

kops update cluster ${CLUSTER_NAME} --yes

Wait a few moments and your cluster should be up and running.
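
You can confirm that the masters and nodes have all registered with:

kops validate cluster ${CLUSTER_NAME}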

The first thing I like to do is install Heapster and the Kubernetes Dashboard.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.2.0.yaml

View the admin password (you'll need it to log in to the dashboard)

kops get secrets kube --type secret -oplaintext

If your cluster name is kubernetes.domain.com, you should be able to access the dashboard via

https://api.kubernetes.domain.com/ui

GitLab

Let's create the namespace first.

cat <<EOF | kubectl create -f -
kind: Namespace
apiVersion: v1
metadata:
  name: gitlab
EOF

Now let's set up the Redis deployment and service.

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  namespace: gitlab
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2.4
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /data # the official redis image stores its data in /data
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        emptyDir: {}

---

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: gitlab
  labels:
    name: redis
spec:
  selector:
    name: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis
EOF

The only thing you might want to change is the Redis version.

Once it's running, we can check that the endpoint is working.

$ kubectl --namespace=gitlab describe svc redis
Name:			redis
Namespace:		gitlab
Labels:			name=redis
Selector:		name=redis
Type:			ClusterIP
IP:			100.70.200.255
Port:			redis	6379/TCP
Endpoints:		100.96.1.7:6379
Session Affinity:	None
No events.

If you have an IP:port listed in Endpoints, then Redis is working!
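
You can also test it directly by running redis-cli inside the pod:

# grab the pod name from the name=redis label, then ping it
POD=$(kubectl --namespace=gitlab get pods -l name=redis -o jsonpath='{.items[0].metadata.name}')
kubectl --namespace=gitlab exec ${POD} -- redis-cli ping

If that prints PONG, you're in good shape.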

Now let's get GitLab running, starting with the service.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: gitlab
  labels:
    name: gitlab
spec:
  type: ClusterIP
  selector:
    name: gitlab
  ports:
    - name: http
      port: 80
      targetPort: http
EOF

I'm using a type of ClusterIP here since I'll use an ingress controller to get traffic into GitLab.

Now let's get GitLab itself up in Kubernetes. Note the quoted 'EOF' below: it keeps the shell from expanding $ sequences (like the one in the root password) inside the heredoc.

cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab
  namespace: gitlab
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: gitlab
        app: gitlab
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-west-2b
      containers:
      - name: gitlab
        image: sameersbn/gitlab:8.16.3
        imagePullPolicy: Always
        env:
        - name: TZ
          value: America/Los_Angeles
        - name: GITLAB_TIMEZONE
          value: America/Los_Angeles
        - name: DEBUG
          value: "false"

        # generate your own random values for these three key bases
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: P26qS5+Csz50Dkd0DLM2oN9owVBFg0Pb
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: KVaMTKLAIElEp0s4L02c1O9JCP0Rfapb
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: nXJJ358Qnci0yF9qpAsLrF2vImaoFR0b

        - name: GITLAB_ROOT_PASSWORD
          value: "testing$123"
        - name: GITLAB_ROOT_EMAIL
          value: gitlab@domain.com

        - name: GITLAB_HOST
          value: git.domain.com
        - name: GITLAB_PORT
          value: "80"

        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"

        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00" # quote this so YAML treats it as a string, not a number

        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: # fill in your Postgres host (left blank here)
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: # fill in your Postgres password (left blank here)
        - name: DB_NAME
          value: gitlab

        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"

        - name: SMTP_ENABLED
          value: "false"
        - name: SMTP_DOMAIN
          value: ""
        - name: SMTP_HOST
          value: ""
        - name: SMTP_PORT
          value: ""
        - name: SMTP_USER
          value: ""
        - name: SMTP_PASS
          value: ""
        - name: SMTP_STARTTLS
          value: "true"
        - name: SMTP_AUTHENTICATION
          value: login

        - name: IMAP_ENABLED
          value: "false"
        - name: IMAP_HOST
          value: imap.gmail.com
        - name: IMAP_PORT
          value: "993"
        - name: IMAP_USER
          value: mailer@example.com
        - name: IMAP_PASS
          value: password
        - name: IMAP_SSL
          value: "true"
        - name: IMAP_STARTTLS
          value: "false"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /users/sign_in
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 15
        readinessProbe:
          httpGet:
            path: /users/sign_in
            port: 80
          initialDelaySeconds: 60
          timeoutSeconds: 1
      volumes:
      - name: data
        awsElasticBlockStore:
          volumeID: vol-xxxxxxxxxxx # the EBS volume ID from the Terraform step
          fsType: ext4
EOF

You'll want to focus on a few parts: the three GITLAB_SECRETS_* key bases, GITLAB_ROOT_PASSWORD and GITLAB_ROOT_EMAIL, GITLAB_HOST, the DB_* settings for your Postgres database, and the volumeID at the bottom, which must be the volume you created in Terraform.
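
The key bases in this example are throwaways; one quick way to generate your own (any sufficiently long random strings will do):

# prints three 64-character random hex strings
for i in 1 2 3; do openssl rand -hex 32; done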

You also need to pin the pod to a node in a particular AZ, so it always runs in the same AZ as the volume you made.

So look for

spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-west-2b

You want to change the zone name to the zone the EBS volume was created in.
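
You can list your nodes along with their zone labels to see what's available:

kubectl get nodes -L failure-domain.beta.kubernetes.io/zone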

Once that is running, you can take a look at the service for a working endpoint. It takes a few minutes for GitLab to fully spin up and become ready for use.
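
While you wait, you can watch the pod's progress:

kubectl --namespace=gitlab get pods -w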

$ kubectl --namespace=gitlab describe svc gitlab
Name:			gitlab
Namespace:		gitlab
Labels:			name=gitlab
Selector:		name=gitlab
Type:			ClusterIP
IP:			100.65.167.84
Port:			http	80/TCP
Endpoints:		100.96.2.7:80
Session Affinity:	None
No events.

Ingress Controller

Now that GitLab is up and running, it's time to get some traffic into it. For this I'm going to set up two ingress services, one internal and one public. This will create an internal ELB and a public ELB.

So let's create the two services.

cat <<EOF | kubectl create -f -
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-internal
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http

---

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-external
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http

EOF

The only difference other than the names is the service.beta.kubernetes.io/aws-load-balancer-internal annotation. This tells Kubernetes to create an internal ELB.

Now let's create the default backend. This is what the ingress controller sends requests to when they don't match any ingress rule.

cat <<EOF | kubectl create -f -
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

EOF

Now we can create the main ingress controller. The heredoc delimiter is quoted here so the shell doesn't treat $(POD_NAMESPACE) as a command substitution.

cat <<'EOF' | kubectl create -f -
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
EOF

Now we should be able to see the two ELBs that were created.

$ kubectl --namespace=kube-system describe svc ingress-nginx
Name:			ingress-nginx-external
Namespace:		kube-system
Labels:			<none>
Selector:		app=ingress-nginx
Type:			LoadBalancer
IP:			100.64.162.47
LoadBalancer Ingress:	a442435ea0d7811e7a83806413902579-842538784.us-west-2.elb.amazonaws.com
Port:			http	80/TCP
NodePort:		http	31416/TCP
Endpoints:		100.96.1.9:80
Session Affinity:	None
No events.

Name:			ingress-nginx-internal
Namespace:		kube-system
Labels:			<none>
Selector:		app=ingress-nginx
Type:			LoadBalancer
IP:			100.69.212.183
LoadBalancer Ingress:	internal-a415183fa0d7811e7a83806413902579-442086620.us-west-2.elb.amazonaws.com
Port:			http	80/TCP
NodePort:		http	32665/TCP
Endpoints:		100.96.1.9:80
Session Affinity:	None
No events.

Great! I see two services: one has an internal ELB and the other a public ELB. They both have the same endpoint, which goes to my controller pod, meaning the nginx ingress controller is working.

So now we have a working setup. We just need to update DNS. To do this we want to set a hostname on one of the ingress services (internal or external).

For GitLab, we only want VPN traffic to be able to access it, so I'll create the CNAME to the internal ELB.

So I run the following command

kubectl --namespace=kube-system edit svc ingress-nginx-internal

There is a special service that Kops deploys called dns-controller. Its job is to watch services for certain annotations and create Route53 entries from them. So in the annotations list, I want to add something like

    dns.alpha.kubernetes.io/internal: git.int.domain.com

That will look for a Route53 zone called int.domain.com and point git at the ELB the service created.

Once that is done, DNS will resolve git.int.domain.com to the internal ELB, and traffic will reach the nginx ingress controller.
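
The controller also needs an Ingress resource so it knows that hostname maps to the GitLab service. A minimal sketch, assuming the git.int.domain.com hostname from above:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: gitlab
spec:
  rules:
  - host: git.int.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: gitlab
          servicePort: http
EOF

With that in place, GitLab should be reachable at http://git.int.domain.com over the VPN.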