Kubernetes Time ….. on vSphere

03-07-2017

OK. So I’m a great fan of vSphere. You’ve probably gleaned that by now. But I do like Cloud as well, where it makes sense. It’s a great platform for workloads written for it, and in some instances, workloads not written for it.

But Kubernetes and vSphere. And Cloud. Bring it on. This shit is awesome.

So you need a vSphere environment to start with.
And a VM with docker on it. And the rest will kind of get built / deployed along the way.
This was my journey today with this awesomeness.

Stuff you need

  • DHCP. And free leases
  • CentOS ISO (minimal)
  • VMware (for this anyway) – I used vSphere 6.5. Note: the OVA for the VM is VM Hardware 11. If you try this on 5.5 (I did today at work) you will need VM Hardware 10. The way I got around this was to deploy the VM under vSphere 6, use VMware Converter to convert it to an image file, and import it into vCenter 5.5.
  • Hosts.
  • Disk Space
  • Intertube connectivity
  • Time

Build a Centos Box for Docker

First up, get CentOS Minimal. DL the ISO; it’s ~680MB or so. Build a VM with a static IP and standard disk partitioning. Boot it up. SSH to it. PS: if any of that didn’t make sense, then you’re going to struggle with the rest of this.

Let’s update it.

$ yum update

I like ifconfig; if you don’t, ip addr will work. ifconfig is now deprecated BTW 😦

$ yum install net-tools
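If you’d rather not bother with net-tools, here’s a quick sketch of pulling the address out of ip output with awk. The sample line is hard-coded below so you can see exactly what the pattern matches; on a live box you’d pipe ip -4 addr show into it instead.

```shell
# Sample line from `ip -4 addr show` (hard-coded here for illustration);
# on a real box: ip -4 addr show | awk '/inet /{print $2}'
echo '    inet 192.168.0.10/24 brd 192.168.0.255 scope global ens192' \
  | awk '/inet /{print $2}'
# prints 192.168.0.10/24
```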

Install Docker

Time for some yum utils and tools. Docker needs these to function.

$ yum install -y yum-utils device-mapper-persistent-data lvm2

Let’s add the Docker repo

$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Let’s refresh the yum metadata cache

$ sudo yum makecache fast

Now let’s install it. Note: this will take a bit of time. If your lab is as old and shit as mine, it will take forever.

$ yum install docker

OR you can install docker-ce, from the repo you just added; your choice

$ sudo yum install docker-ce

Let’s get Docker to fire up on startup, in the event you reboot your Docker box 🙂

$ sudo systemctl enable docker

Start docker in the meantime

$ sudo systemctl start docker

And there you go. docker is a happening thing.

Let’s check that action out (this verifies everything is OK)

$ docker run hello-world

Your output will have more garbage than this, but in summary you want to see:

Unable to find image 'hello-world:latest' locally
Trying to pull repository docker.io/library/hello-world ... 
latest: Pulling from docker.io/library/hello-world
b04784fba78d: Pull complete 
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Install GOVC

Now you don’t HAVE to do this, but I’m chucking it in here for good measure. On VMware you really need to know the absolute path to your Kubernetes image, so that when you build out your config you tell it the right thing. It bit me in the arse today, so I’m chucking this in here so you don’t get caught out. GOVC 🙂

Pull it down and gunzip it into /usr/local/bin (note: this is the 32-bit build; there is also a govc_linux_amd64.gz release if you want to match a 64-bit OS)

$ curl -L https://github.com/vmware/govmomi/releases/download/v0.15.0/govc_linux_386.gz | gunzip > /usr/local/bin/govc

Make it executable

$ chmod +x /usr/local/bin/govc

Export your vCenter URL and set INSECURE so you don’t get toasted by the stupid self-signed SSL certs in place. Replace (domain), (userid), (password) and (vcIP) with the correct info

$ export GOVC_URL='https://(domain)\(userid):(password)@(vcIP)/sdk'
$ export GOVC_INSECURE=1
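For example, with some made-up values (all hypothetical; substitute your own), the export expands like so. Note the backslash between domain and userid has to be escaped inside double quotes:

```shell
# Hypothetical values - replace with your own environment's details
DOMAIN='vsphere.local'
USERID='administrator'
PASSWORD='VMware1!'
VCIP='192.168.0.50'

# \\ inside double quotes becomes a single backslash in the final URL
export GOVC_URL="https://${DOMAIN}\\${USERID}:${PASSWORD}@${VCIP}/sdk"
export GOVC_INSECURE=1
echo "$GOVC_URL"
# prints https://vsphere.local\administrator:VMware1!@192.168.0.50/sdk
```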

Test things are working

$ govc about
$ govc ls

Obviously follow your nose and govc ls down the tree to find the full path to your Kubernetes image 🙂

Install KubeCTL

Now. You’re going to need this. kubectl. It’s the magic that lets you interact with your Kubernetes cluster, nodes, pods etc. Thanks to Google, just curl it; best to be in your home directory BTW, i.e. ~ (which is /root if you’re logged in as root).

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
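That nested curl looks dense, but the inner call just fetches a plain-text version string from stable.txt, which then gets spliced into the download URL. A sketch of the expansion (the version below is hypothetical; yours will be whatever stable.txt says today):

```shell
# Hypothetical value of what `curl -s .../stable.txt` returns
STABLE='v1.6.7'
# The outer curl then downloads from this constructed URL
URL="https://storage.googleapis.com/kubernetes-release/release/${STABLE}/bin/linux/amd64/kubectl"
echo "$URL"
# prints https://storage.googleapis.com/kubernetes-release/release/v1.6.7/bin/linux/amd64/kubectl
```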

Change permissions so it will execute

$ chmod +x ./kubectl

Shove it in your path so you can execute it without the ./ every time LOL.

$ mv ./kubectl /usr/local/bin/kubectl

One thing I haven’t mentioned: VMware Tools. 🙂 If you’re running this in a VM on VMware, to get the best possible experience out of your CentOS distro you need tools. Previously, under CentOS 6 / RHEL 6 and before, it would be mount-the-ISO time and vmware-install.pl -d; however, we now have open-vm-tools 🙂

$ sudo yum install open-vm-tools

Oh, and you have to reboot. The only time in ALL this work that you have to reboot your Linux machine. #best #os #ever

Install Kubernetes-Anywhere

Now the moment you’ve been waiting for. Get Kubernetes on there. Now this will take a little while too.

$ docker pull cnastorage/kubernetes-anywhere
$ docker run -it --rm --env="PS1=[container]:\w> " --net=host cnastorage/kubernetes-anywhere:latest /bin/bash

You should be prompted with the following

[container]:/opt/kubernetes-anywhere>

Kubernetes OVA/Image

Now, onto vSphere: you need an OVA. Sort of.
The doco out there bangs on about it being an OVA, and don’t get me wrong, you grab an OVA and import it into your vCenter. But when it’s done, it has deployed a VM (as an OVF/OVA does); it’s not a template, it’s a good old standard VM. Don’t get confused. And DON’T POWER IT ON. The scripting that installs the master and nodes will do that for you. Leave it alone.

Copy this URL and deploy it into your vCenter.

Now, for the upcoming config: the VM’s inventory path starts with your datacenter and has /vm/ before the VM name. So, let’s say your datacenter is called PASITO and you’ve deployed the OVA as kubeunit; the path in the upcoming config would then be /PASITO/vm/kubeunit.
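In vSphere inventory terms (and this is the scheme govc uses too), that first path component is the datacenter. A sketch of building the path, with both values hypothetical and matching the example above:

```shell
# Hypothetical names - substitute your own datacenter and VM name
DATACENTER='PASITO'
VM_NAME='kubeunit'
TEMPLATE_PATH="/${DATACENTER}/vm/${VM_NAME}"
echo "$TEMPLATE_PATH"
# prints /PASITO/vm/kubeunit
```

That value is what goes into phase1.vSphere.template later on.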

Also in my lab, my host that I deployed to was in a Cluster called Kube-Cluster. Pretty original I know ….

Working back inside the container on your CentOS Docker machine, your prompt will look something like the following:

[container]:/opt/kubernetes-anywhere>

It’s time to make the config for Kubernetes. Now if it’s your first time (aaaaawwwww) then we start with

$ make deploy

If it’s your second, third, hundredth 😉 time, then it’s

$ make clean
$ make config .config

Obviously your environment will be different, but my config answers were as follows:

*
* Kubernetes Minimal Turnup Configuration
*
*
* Phase 1: Cluster Resource Provisioning
*
number of nodes (phase1.num_nodes) [4] (NEW) 4
kubernetes cluster name (phase1.cluster_name) [kubernetes] (NEW) 
*
* cloud provider: gce, azure or vsphere
*
cloud provider: gce, azure or vsphere (phase1.cloud_provider) [gce] (NEW) vsphere
  *
  * vSphere configuration
  *
  vCenter URL Ex: 10.192.10.30 or myvcenter.io (without https://) (phase1.vSphere.url) [] (NEW) 192.168.0.50
  vCenter port (phase1.vSphere.port) [443] (NEW) 
  vCenter username (phase1.vSphere.username) [] (NEW) administrator@vsphere.local
  vCenter password (phase1.vSphere.password) [] (NEW) VMware1!
  Does host use self-signed cert (phase1.vSphere.insecure) [Y/n/?] (NEW) 
  Datacenter (phase1.vSphere.datacenter) [datacenter] (NEW) PASITO
  Datastore (phase1.vSphere.datastore) [datastore] (NEW) NAS001
  Deploy Kubernetes Cluster on 'host' or 'cluster' (phase1.vSphere.placement) [cluster] (NEW) cluster
    vspherecluster (phase1.vSphere.cluster) [] (NEW) Kube-Cluster
  Do you want to use the resource pool created on the host or cluster? [yes, no] (phase1.vSphere.useresourcepool) [no] (NEW) no
  Number of vCPUs for each VM (phase1.vSphere.vcpu) [1] (NEW) 
  Memory for VM (phase1.vSphere.memory) [2048] (NEW) 
  Network for VM (phase1.vSphere.network) [VM Network] (NEW) 
  Name of the template VM imported from OVA. If Template file is not available at the destination location specify vm path (phase1.vSphere.template) [KubernetesAnywhereTemplatePhotonOS.ova] (NEW) /PASITO/vm/kubeunit
  Flannel Network (phase1.vSphere.flannel_net) [172.1.0.0/16] (NEW) 
*
* Phase 2: Node Bootstrapping
*
installer container (phase2.installer_container) [docker.io/cnastorage/k8s-ignition:v2] (NEW) 
docker registry (phase2.docker_registry) [gcr.io/google-containers] (NEW) 
kubernetes version (phase2.kubernetes_version) [v1.6.5] (NEW) 
bootstrap provider (phase2.provider) [ignition] (NEW) 
*
* Phase 3: Deploying Addons
*
Run the addon manager? (phase3.run_addons) [Y/n/?] (NEW) 
  Run kube-proxy? (phase3.kube_proxy) [Y/n/?] (NEW) 
  Run the dashboard? (phase3.dashboard) [Y/n/?] (NEW) 
  Run heapster? (phase3.heapster) [Y/n/?] (NEW) 
  Run kube-dns? (phase3.kube_dns) [Y/n/?] (NEW) 
  Run weave-net? (phase3.weave_net) [N/y/?] (NEW) N

After finishing that, the deploy takes about 20 minutes or so in my lab. But my lab is old hardware and it’s shit. Did I happen to mention it’s shit? It works though.

Post Deployment

So now all your shit is built, you will (should) have a master and four nodes (node1,2,3,4). Export your kubeconfig.json file ASAP.
Paste it into a text file even; just hang on to it!

[container]:/opt/kubernetes-anywhere> export KUBECONFIG=phase1/vsphere/.tmp/kubeconfig.json

Then, using kubectl, let’s see where the status of things is at:

[container]:/opt/kubernetes-anywhere> kubectl cluster-info

Output will look something like:

[container]:/opt/kubernetes-anywhere> kubectl cluster-info
Kubernetes master is running at https://192.168.0.28
Heapster is running at https://192.168.0.28/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://192.168.0.28/api/v1/proxy/namespaces/kube-system/services/kube-dns

When Things Turn to Shit.

Now. When things turn to shit. And they will. Don’t fret, redo your config and redeploy!

make config .config

Then clean up the temp files and mess created by the last deploy:

make clean

Output will look something like

[container]:/opt/kubernetes-anywhere> make clean
rm -rf .tmp
rm -rf phase3/.tmp
rm -rf phase1/gce/.tmp
rm -rf phase1/azure/.tmp
rm -rf phase1/vsphere/.tmp

Just deploy again

make deploy

DHCP

So your nodes and your master will deploy with DHCP. I’ve not figured out how to define them with static IPs (yet), or if it’s even needed, but if you are embraced by fail for whatever reason on a re-deploy, check that your DHCP scope is not full.

Working With This

OK here goes.

Remember that export of the kubeconfig? If you’ve dropped out of Docker, you need it now…

$ vi kubeconfig.json

Paste that text file in there and :wq!

$ export KUBECONFIG=./kubeconfig.json
$ kubectl cluster-info
[container]:/opt/kubernetes-anywhere> kubectl cluster-info
Kubernetes master is running at https://192.168.0.28
Heapster is running at https://192.168.0.28/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://192.168.0.28/api/v1/proxy/namespaces/kube-system/services/kube-dns

Good. You’re in business.

Let’s deploy BusyBox. Why? Because we can.

$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"

Expected output

If you don't see a command prompt, try pressing enter.
/ #
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ #

Meh. Let’s exit

/ # exit
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running.

Cool. Check it out.

$ kubectl attach busybox -c busybox -i -t

Expected output

If you don't see a command prompt, try pressing enter.

Hmmm. But where’s it running?

$ kubectl get pods --output=wide
NAME      READY     STATUS      RESTARTS   AGE       IP           NODE
busybox   0/1       Completed   1          2m        172.1.49.3   node3

Note: the busybox ‘application’ is running as a pod.

Let’s kill busybox.

$ kubectl delete pod busybox
pod "busybox" deleted

Let’s see

$ kubectl get pods --output=wide
NAME      READY     STATUS        RESTARTS   AGE       IP        NODE
busybox   0/1       Terminating   2          12m           node3

Eventually

$ kubectl get pods --output=wide
No resources found.

Hello-World

So let’s deploy a bit more than BusyBox. Let’s deploy Hello-World 🙂
I can’t take credit for this, as I’m an infra guy not a coder, so the link is here

So:

$ vi deployment.yaml

paste in the text, :wq!

$ kubectl create -f deployment.yaml

Check it

$ kubectl describe deployment nginx-deployment

Let’s see our pods

$ kubectl get pods -l app=nginx

Check out one of your pods

$ kubectl describe pod (your pod name from the previous command)

(I quite like this; the output is useful, especially the messages.)

Let’s update (per the link)

$ rm deployment.yaml
$ vi deployment.yaml

paste in the text from that link, :wq!

$ kubectl apply -f deployment.yaml

Check and see they have been re-deployed

$ kubectl get pods -l app=nginx

Check out one of your pods

kubectl describe pod nginx-deployment-3285060500-6gbpf

Note that in the messages nginx is now at 1.8!

So let’s scale up

$ vi deployment-scale.yaml

paste in the text from that link, :wq!

$ kubectl apply -f deployment-scale.yaml

Check and see they have been re-deployed

$ kubectl get pods -l app=nginx
$ kubectl get pods -l app=nginx
NAME                                READY     STATUS              RESTARTS   AGE
nginx-deployment-3285060500-6gbpf   1/1       Running             0          4m
nginx-deployment-3285060500-76m35   1/1       Running             0          4m
nginx-deployment-3285060500-7ljcq   0/1       ContainerCreating   0          11s
nginx-deployment-3285060500-s7hf2   0/1       ContainerCreating   0          11s

……and running in ~30 seconds. Stunning.

Finally, kill it all off…..

$ kubectl delete deployment nginx-deployment
deployment "nginx-deployment" deleted

Some links if you get stuck along the way :

Github Kubernetes-Anywhere
Docker
Kubernetes Tutorials

04-07-2017

I’ve decided to put dates on here, as they are useful.

null_resource.master (remote-exec): Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 108.177.97.82:443: i/o timeout
null_resource.master (remote-exec): Failed to docker pull hyperkube
null_resource.node3: Creation complete (ID: 3451854390538856113)
null_resource.node5: Creation complete (ID: 6334595443268439532)
Error applying plan:

1 error(s) occurred:

* null_resource.master: 1 error(s) occurred:

* Script exited with non-zero exit status: 1

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Makefile:63: recipe for target 'do' failed
make[1]: *** [do] Error 1
make[1]: Leaving directory '/opt/kubernetes-anywhere'
Makefile:41: recipe for target 'deploy-cluster' failed
make: *** [deploy-cluster] Error 2

An outburst of violence is brewing. That shit in red has pwned me today. Anyway, while trying to deploy this in a secured work environment, I’ve learnt a couple of things:
1. Proxies are a pain in the arse
2. …
100. Proxies are a pain in the arse.

When you fire this up, if you need to go via a proxy, a couple of things:
1. Hope to hell the resource sites are whitelisted on the proxy
2. Run CNTLM as a local proxy to proxy your connection to the proxy. 🙂

I covered CNTLM here

If you run CNTLM: before you run make deploy, fire up your docker container to deploy K8s, i.e.:

$ docker run -it --rm --env="PS1=[container]:\w> " --net=host cnastorage/kubernetes-anywhere:latest /bin/bash

You should be prompted with the following

[container]:/opt/kubernetes-anywhere>

THEN RUN THIS

export HTTP_PROXY=http://127.0.0.1:3128
export HTTPS_PROXY=http://127.0.0.1:3128

Then run make deploy.
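One gotcha worth flagging: some tools only read the lowercase variants of those variables, so it can be worth exporting both (a sketch; 3128 is CNTLM’s default listen port, match whatever is in your cntlm.conf):

```shell
export HTTP_PROXY=http://127.0.0.1:3128
export HTTPS_PROXY=http://127.0.0.1:3128
# belt and braces: some tools only honour the lowercase forms
export http_proxy="$HTTP_PROXY"
export https_proxy="$HTTPS_PROXY"
```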

And you can Pre-Cut your Config

YES. It’s possible

So when you've fired up your docker container, and you're sitting at a prompt
[container]:/opt/kubernetes-anywhere> vi .config

Press i to insert
And paste this in (modified of course)

#
# Automatically generated file; DO NOT EDIT.
# Kubernetes Minimal Turnup Configuration
#

#
# Phase 1: Cluster Resource Provisioning
#
.phase1.num_nodes=4
.phase1.cluster_name="kubernetes"
.phase1.cloud_provider="vsphere"

#
# vSphere configuration
#
.phase1.vSphere.url="(your vCenter URL)"
.phase1.vSphere.port=443
.phase1.vSphere.username="(administrator@vsphere.local or whatever)"
.phase1.vSphere.password="(a secure password)"
.phase1.vSphere.insecure=y
.phase1.vSphere.datacenter="(your datacenter name)"
.phase1.vSphere.datastore="(datastore name where your images will reside)"
.phase1.vSphere.placement="cluster"
.phase1.vSphere.cluster="(name of cluster)"
.phase1.vSphere.useresourcepool="no"
.phase1.vSphere.vcpu=1
.phase1.vSphere.memory=2048
.phase1.vSphere.network="(name of port group)"
.phase1.vSphere.template="(path to Kubernetes OVA)"
.phase1.vSphere.flannel_net="172.1.0.0/16"

#
# Phase 2: Node Bootstrapping (if you're deploying with Docker and K8s v1.6.5 and up, leave this as-is)
#
.phase2.installer_container="docker.io/cnastorage/k8s-ignition:v2"
.phase2.docker_registry="gcr.io/google-containers"
.phase2.kubernetes_version="v1.6.5"
.phase2.provider="ignition"

#
# Phase 3: Deploying Addons (these don't change)
#
.phase3.run_addons=y
.phase3.kube_proxy=y
.phase3.dashboard=y
.phase3.heapster=y
.phase3.kube_dns=y
.phase3.weave_net=n

Now :wq! that thing. Then, when you’ve got that sound:

[container]:/opt/kubernetes-anywhere> make clean
[container]:/opt/kubernetes-anywhere> make deploy

31-07-2017

What FW rules?

So if you’ve got this far and work in a large corporate, you’ve probably found you need a few things opened up before this will actually build. After taking a 2.5GB tcpdump of a run-up in my home lab and throwing it into Wireshark, I came up with the following:

gcr.io
registry-1.docker.io
auth.docker.io
dseasb33srnrn.cloudfront.net
storage.googleapis.com
ssl.gstatic.com

We allowed 80 and 443 to these and, after doing so, I managed to successfully build a Kube cluster in my environment. Thankfully, as our FW supports FQDNs, we didn’t have to individually add IPs like certain Checkpoints did in a previous life…
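If you want a quick pre-flight sanity check before kicking off a build, something like this sketch will tell you whether port 443 on each endpoint is reachable from your build box. check_443 is a little helper defined here (not part of any tool); it leans on bash’s /dev/tcp and coreutils timeout:

```shell
# Helper: succeed if a TCP connection to host:443 opens within 5 seconds
check_443() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/443" 2>/dev/null
}

# Walk the endpoints this build pulls from and report reachability
for host in gcr.io registry-1.docker.io auth.docker.io storage.googleapis.com; do
  if check_443 "$host"; then
    echo "$host: open"
  else
    echo "$host: BLOCKED"
  fi
done
```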

Unauthorized. But it’s running?

14-08-2017

Yep. I too struggled with ‘Unauthorized’ while trying to access the bloody Kubernetes dashboard. Turns out, from the master you cannot access the dashboard directly. K8s needs creds. These are embedded inside your KUBECONFIG export.

So how? Proxy via kubectl. On the machine where you have exported KUBECONFIG, try running the following (obviously use your machine’s IP instead of my .96 below):

kubectl proxy --address=192.168.0.96 --port=443 --accept-hosts='.*' --accept-paths='.*' &

Then hit the K8s dashboard at http://192.168.0.96:443/ui

This worked really well for me.

I hope this helps someone on their journey.
