TSR – The Server Room Show – Episode 45 – Rancher & Heimdall Application Dashboard


Remember, in Episodes 18 and 19 of The Server Room Show we discussed Docker and Kubernetes in detail. If you don't remember, I recommend you go and listen to those two episodes before this one, unless you are already familiar with Docker and Kubernetes and what both of them are for.

In short, Kubernetes is "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker.


Rancher is one platform for Kubernetes management, an enterprise Kubernetes management platform. It is a complete container management platform and a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters, while providing DevOps teams with integrated tools for running containerized workloads.

Rancher is open source software, and it lets you run Kubernetes everywhere: from datacenter to cloud to edge.

Rancher is not the only Kubernetes management platform out there.

There are also Red Hat's OpenShift and VMware's Tanzu.

The problem with vanilla Kubernetes installations is that they lack central visibility, the security practices applied are often inconsistent between Kubernetes clusters, and, to be honest, manually managing one or more Kubernetes clusters can be a complex process.

Kubernetes management platforms try to solve these issues, for example by bringing security policy and user management, along with shared tools and services offered reliably and with easy, consistent access. High availability, load balancing, centralized auditing, and integration with popular CI/CD solutions are just a few features to mention.

Rancher has a thriving community on slack.rancher.io and forums.rancher.com if you need help getting going with it.

So if some of the questions below ever popped into your head regarding the operational challenges of designing your company's Docker / Kubernetes infrastructure, then Rancher could be a great fit for you:

  • How do I deploy consistently across different infrastructures?
  • How do I manage and implement access control across multiple clusters and namespaces?
  • How do I integrate with central authentication systems already in place, like LDAP, Active Directory, RADIUS, etc.?
  • What can I do to monitor my Kubernetes cluster(s)?
  • How do I ensure that security policies are the same and enforced across clusters / namespaces?
Screenshot from Rancher (link in the shownotes)

Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher 2.x exclusively deploys and manages Kubernetes clusters running anywhere, on any provider.

Rancher can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere.

One Rancher server installation can manage thousands of Kubernetes clusters and thousands of nodes from the same user interface.

Rancher adds significant value on top of Kubernetes, first by centralizing authentication and role-based access control (RBAC) for all of the clusters, giving global admins the ability to control cluster access from one location.

It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don’t, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads.

Rancher is a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere.

Another interesting thing to mention is that a standalone Kubernetes installation requires you to fulfill more dependencies than a Rancher + Kubernetes deployment scenario.

The reason is that Rancher only requires the host to have a supported Docker version installed on it, whereas pulling up a vanilla Kubernetes installation calls for more dependencies than simply having Docker installed.

Rancher achieves this by running entirely inside, or on top of, Docker; Rancher then lets you run one or more Kubernetes clusters on top of itself.

You can be up and running quicker this way than by going through a vanilla Kubernetes installation.

For a sandbox environment, to test Rancher out, you can deploy it on a single host which has Docker installed, but for production a three-node cluster is the minimum requirement.
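Since a supported Docker version on the host is the one hard requirement, a minimal pre-flight sketch can confirm Docker is there and note its version before pulling Rancher. (`docker_major_minor` is a tiny helper of my own, not part of Rancher or Docker; check the version it reports against Rancher's support matrix.)

```shell
# Tiny helper: reduce a Docker version string like "19.03.8" to "19.03",
# which is the granularity Rancher's support matrix talks about.
docker_major_minor() {
  echo "$1" | cut -d. -f1,2
}

# Intended usage on the host (requires the docker CLI to be installed):
#   ver=$(sudo docker version --format '{{.Server.Version}}')
#   echo "Docker server version: $ver (major.minor: $(docker_major_minor "$ver"))"
```

The `docker version --format '{{.Server.Version}}'` invocation in the comment is the standard way to get just the server version string out of the Docker CLI.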

How to start with Rancher

Rancher has a great quickstart guide to have you up and running in as little time as possible. ** link is in the shownotes **

You can try it out in a sandbox environment: just grab a host with a supported Docker version installed, such as CentOS or Fedora, and use this one-liner to pull Rancher up inside a Docker container to test it out and play around. (To deploy it to a production environment, do not use this; instead follow the proper production rollout documentation step by step and set it up as at least a three-node cluster to have HA *high availability* and failover support.)

$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Result: Rancher is installed

Once Rancher is up and running, the next step is to log in using the local host's FQDN or IP address:

https://<SERVER_IP> or <FQDN>

On first login it will prompt you to set a password for the default admin account.

Rancher running on a CentOS 8 VM, accessed from my workstation on another subnet (172.35.x.x). *Make sure you allow at least ports 80 and 443 in the firewall's public zone on CentOS.*
Rancher lets you know to make sure the Rancher Server URL is accessible from all hosts you will create…
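On CentOS 8 with firewalld, opening those two ports looks like this (assuming the default public zone; adjust the zone name if your setup differs):

```shell
# Permanently open HTTP and HTTPS in the public zone, then reload firewalld
sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
sudo firewall-cmd --zone=public --permanent --add-port=443/tcp
sudo firewall-cmd --reload
```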

Creating Your Kubernetes Cluster is the first step.

In this example, you can use the versatile Custom option. This option lets you add any Linux host (cloud-hosted VM, on-premise VM, or bare-metal) to be used in a cluster.

Once you click on the Add Cluster button, you are welcomed with a screen where you can click on From existing nodes (custom).

For this exercise only fill out the following details:

Select a cluster name, skip the Member Roles and Cluster Options for now, and click Next.

From the Cluster Options screen select ALL the node options (etcd, Control Plane, Worker) and copy the command shown in Step 2. For this example you need to run it on the machine where you are running Rancher, in a terminal via SSH or by logging in locally.

In my case I had to run this code for this example:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.8 --server --token fp6gk7wldgrhgglldqt7gd275j5f97rn7g6tdgnqd2rwv5snwz4qm8 --ca-checksum 9a31bd4ea0636bb19c8152a47e1f8389d4187d7e9030bec161f190a1f9562455 --etcd --controlplane --worker

Once you have run the command, come back to this window and click Done.

Once you click Done you get back to the main screen, where your cluster will show up with State: Provisioning.
(( It will inform you about what is happening behind the curtains under the Provisioning text. ))

Kubernetes Cluster provisioning after clicking on Done on the previous screen…

You can check from the host machine that it is deploying a good number of additional containers to build out the Kubernetes cluster infrastructure.

[viktormadarasz@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                      NAMES
d67bdef1a64a        rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   31 seconds ago      Up 26 seconds                                                  kube-apiserver
ca34379bebcc        rancher/coreos-etcd:v3.4.3-rancher1   "/usr/local/bin/etcd…"   36 seconds ago      Up 34 seconds                                                  etcd
4ea60c63d367        rancher/rancher-agent:v2.4.8          "run.sh --server htt…"   4 minutes ago       Up 4 minutes                                                   laughing_taussig
b9baeb02c206        rancher/rancher                       "entrypoint.sh"          10 minutes ago      Up 6 minutes        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   zealous_merkle

Depending on when you check sudo docker ps, there can be more, or many more, Docker containers working behind the scenes building out your Kubernetes cluster.
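If you want to watch this progress from the command line, a small sketch can count how many containers from the rancher/ image namespace are running. (`count_rancher_images` is my own helper name, not a Docker feature; the `docker ps --format` invocation in the comment is the standard CLI.)

```shell
# Helper: count image names from the "rancher/" namespace, one per line,
# as emitted by `docker ps --format '{{.Image}}'`.
count_rancher_images() {
  grep -c '^rancher/'
}

# Intended usage on the host (requires the docker CLI):
#   sudo docker ps --format '{{.Image}}' | count_rancher_images
```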

Do not worry if you lose connection to the Rancher server URL at some point during this… it will come back.

… after 21 minutes had passed

My Kubernetes cluster provisioning got stuck at this step (bad TLS certificate); I have also pasted the log from the etcd Docker container below.

[etcd] Failed to bring up Etcd Plane: etcd cluster is unhealthy: hosts [] failed to report healthy. Check etcd container logs on each host for more information

Caused by this error in the etcd Docker container's log:

2020-09-20 18:05:00.365851 I | embed: rejected connection from "" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kube-ca\")", ServerName "")
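To dig through a failing component's log for certificate problems like this one, a quick filter helps. (`filter_tls_errors` is a hypothetical helper of mine; `docker logs` is the real CLI command used to read a container's output.)

```shell
# Helper: keep only log lines mentioning TLS or certificates,
# case-insensitively.
filter_tls_errors() {
  grep -iE 'tls|certificate'
}

# Intended usage on the host (requires the docker CLI; "etcd" is the
# container name shown by `docker ps`):
#   sudo docker logs --tail 200 etcd 2>&1 | filter_tls_errors
```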

Trying to work around the problem in my case

So I went ahead and, instead of the CentOS 8 VM, I tried to run the Rancher deployment script on my Fedora 32 workstation, on the physical machine, on kernel 5.8.

I don't know for what reason, but there it deployed without any error message or complication.

The Kubernetes cluster was up and running.

Kubernetes cluster on Rancher, running on top of Docker on the physical machine under Fedora 32 Linux.
Rancher itself, and the Kubernetes cluster it deploys, run as a bunch of containers in the underlying Docker engine.

Dashboard of the created Kubernetes cluster

One thing I did differently was to tell Rancher, during the initial setup after setting the admin password, that the URL for the server is localhost, and not the IP like in the CentOS 8 VM case, where I gave the local VM's IP as the URL, which I think should also work.

You can change the server-url of Rancher from Settings / Advanced Settings menu

So I went back to the CentOS 8 VM and tried setting localhost instead of the VM's IP as the server's URL.

It worked, and the Kubernetes cluster deployed correctly on the CentOS 8 VM (kernel 4.18.0-193),
even though it is not mentioned on the support matrix as of the date this article was created.

Support Matrix for Rancher

I accessed the control panel of Rancher via its IP because I was accessing it from a different subnet. In Settings / Advanced Settings it has the server-url set to https://localhost.

I went into unsupported territory and experienced odd errors indeed.
Fedora 32 on kernel 5.7

BUT… on kernel 5.7 on Fedora 32 things are strange, and it fails again like it did on the CentOS 8 VM at the beginning, before I switched the server-url from the IP address to localhost…

It could be that neither CentOS 8 nor Fedora being on the support matrix for Rancher is the cause of the odd behaviour described below…

However, on kernel 5.7, Docker on the same system complains first, and the Kubernetes cluster then fails in the same place in Rancher.

This could be something specific to my machine, which I can confirm by rerunning this on a clean Fedora 32 VM install with kernel 5.7; I will update the shownotes with whether it worked or not…

First, Docker complained about cgroups, which I fixed with a temporary fix provided in one of the links in the shownotes; after that, the Kubernetes cluster again failed to deploy itself properly when using the same deployment script as ten minutes earlier on the same box with kernel 5.8.

$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
docker: Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown.

Fixed with:

$ sudo mkdir /sys/fs/cgroup/systemd
$ sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:27:05 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b7f0
  Built:            Wed Mar 11 01:25:01 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Linux fedoraws.lan 5.7.6-201.fc32.x86_64 #1 SMP Mon Jun 29 15:15:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The reason I'm not on kernel 5.8 is that it breaks VMware and VirtualBox, and I use those heavily on this machine. *Both were broken last time I checked.*

Heimdall Application Dashboard

As the name suggests Heimdall Application Dashboard is a dashboard for all your web applications. It doesn’t need to be limited to applications though, you can add links to anything you like.

Heimdall is an elegant solution to organise all your web applications. It’s dedicated to this purpose so you won’t lose your links in a sea of bookmarks.

Why not use it as your browser start page? It even has the ability to include a search bar using either Google, Bing or DuckDuckGo.

Supported applications

You can use the app to link to any site or application, even ones that are not supported; these fall under the category of Generic Apps.

This is one of the benefits of Heimdall: you can add a link to absolutely anything, whether it is intrinsically supported or not. With a generic item, you just fill in the name and background colour, add an icon if you want (if you don't, a default Heimdall icon will be used), and enter the link URL, and it will be added.

If you add any of the Foundation apps, Heimdall will auto-fill the icon for the app and supply a default colour for the tile.

In addition, Enhanced apps allow you to provide details of an app's API, allowing you to view live stats directly on the dashboard. For example, the NZBGet and Sabnzbd Enhanced apps will display the queue size and download speed while something is downloading.

Supported applications are recognized by the title of the application as entered in the title field when adding an application. For example, to add a link to pfSense, begin by typing “p” in the title field and then select “pfSense” from the list of supported applications.

On the Heimdall Application Database site you can see a list of supported Foundation and Enhanced apps, as well as the applications requested for support.

You can try out Heimdall on the Kubernetes cluster we created in the first part of this episode using Rancher:

  • Click on Global, select the Kubernetes cluster you created earlier, and click on the Default namespace.
  • Click on the Deploy button in the top right corner.
  • Choose a name for your pod and leave it as a scalable deployment of 1 pod. In the Docker image field, specify the image name you would use after a normal docker pull command, which in the case of Heimdall is "linuxserver/heimdall" *you can check https://hub.docker.com/r/linuxserver/heimdall/ for the same info*.
  • Click on Add Port to be able to reach the Heimdall web GUI on port 80 of the pod you are about to create from outside the Kubernetes cluster. For this, set the port type to HostPort and specify a listening port on which the host running the Kubernetes cluster should forward port 80 of the pod; in this example I used port 8082.

Click on Launch to Deploy the Pod
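For reference, and as a rough hand-written sketch only (not something Rancher generated; the names and the 8082 host port are just the examples from above), the UI steps correspond to a Kubernetes manifest along these lines:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heimdall            # the pod name chosen in the Deploy form
  namespace: default        # the Default namespace clicked in the UI
spec:
  replicas: 1               # "Scalable deployment of 1 pod"
  selector:
    matchLabels:
      app: heimdall
  template:
    metadata:
      labels:
        app: heimdall
    spec:
      containers:
      - name: heimdall
        image: linuxserver/heimdall   # the Docker image field
        ports:
        - containerPort: 80
          hostPort: 8082              # the HostPort mapping from the Add Port step
```

A hostPort ties the pod's port 80 to port 8082 on whichever node the pod lands on, which matches what the HostPort option in the form does.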

Navigate to http://<IP or FQDN of your Kubernetes cluster>:<exposed port>
In my example that is the IP of my CentOS 8 VM server, on top of which runs Docker, in which runs Rancher, which runs the Kubernetes cluster where my Heimdall pod sits, exposing its port 80 to the underlying host and to external connections via port 8082.

Migrating From Docker to Kubernetes Cluster

Here is a great article explaining a three-piece service migration from Docker, using a Docker Compose file, to a Kubernetes cluster.

Deployment to Kubernetes clusters is more complicated than deployment with Docker Compose. However, Kubernetes is one of the most widely used orchestration tools for deploying containers into production environments, thanks to its flexibility, reliability, and features.
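To give a flavour of what such a migration looks like (a made-up single-service example, not the one from the article), the Compose service in the comment maps onto a Kubernetes Deployment for the container plus a Service for the port exposure:

```yaml
# A made-up Compose-style service...
# docker-compose.yml:
#   services:
#     web:
#       image: nginx:alpine
#       ports: ["8080:80"]
#
# ...becomes a Deployment plus a Service in Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort          # exposes the port outside the cluster,
  selector:               # roughly what the Compose port mapping did
    app: web
  ports:
  - port: 80
    targetPort: 80
```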

It is easy to follow and makes the concept easy to grasp.






Author: viktormadarasz

IT OnSite Analyst for a big multinational company