All posts by viktormadarasz

About viktormadarasz

IT OnSite Analyst for a big multinational company

TSR – The Server Room Show – Episode 48 – Alternatives to Google Analytics

What is Google Analytics

Google Analytics is a web analytics service offered by Google that tracks and reports website traffic, currently as a platform inside the Google Marketing Platform brand. Google launched the service in November 2005 after acquiring Urchin.

As of 2019, Google Analytics is the most widely used web analytics service on the web. Google Analytics provides an SDK that allows gathering usage data from iOS and Android apps, known as Google Analytics for Mobile Apps. Google Analytics can be blocked by browsers, browser extensions, firewalls and other means.

Google Analytics is used to track website activity such as session duration, pages per session, bounce rate, etc., of individuals using the site, along with information on the source of the traffic. It can be integrated with Google Ads, with which users can create and review online campaigns by tracking landing page quality and conversions (goals). Goals might include sales, lead generation, viewing a specific page, or downloading a particular file.

Google Analytics' approach is to show high-level, dashboard-type data for the casual user, and more in-depth data further into the report set. Google Analytics analysis can identify poorly performing pages with techniques such as funnel visualization, show where visitors came from (referrers), how long they stayed on the website, and their geographical position. It also provides more advanced features, including custom visitor segmentation. Google Analytics e-commerce reporting can track sales activity and performance. The e-commerce reports show a site's transactions, revenue, and many other commerce-related metrics.

On September 29, 2011, Google Analytics launched Real Time analytics, enabling a user to have insight about visitors currently on the site. A user can have 100 site profiles. Each profile generally corresponds to one website. It is limited to sites which have traffic of fewer than 5 million pageviews per month (roughly 2 pageviews per second) unless the site is linked to a Google Ads campaign. Google Analytics includes Google Website Optimizer, rebranded as Google Analytics Content Experiments. Google Analytics' Cohort analysis helps in understanding the behaviour of component groups of users apart from your user population. It is beneficial to marketers and analysts for successful implementation of a marketing strategy.

Funny fact: I remember that on one of the early websites I made back in the late '90s, in very simple HTML, I used a simple snippet I found, of course, with the typical visitor counter at the bottom of the page that incremented with each refresh... I must say the world has come a long way: from a simple counter you can't really base real trends on, to a full arsenal of different analytics tools that can help you make good business decisions regarding your audience and the trends among your audience/visitors.

Google Analytics dashboard

Why do we need metrics like these?

It can be said that a successful website is built in equal parts great content and a solid understanding of your audience. While your content may be first-class, if you don’t know where your traffic is coming from (and the topics your audience is interested in), you’re missing half of the formula.

Google Analytics enables you to find answers to these questions by analyzing your website traffic. That way, you can improve your site based on your visitors’ actions.

There are many books out there about Google Analytics and every bit of its feature set; if you are seriously interested, I suggest you check out some of the books mentioned in the show notes.

Open Source alternatives:

Matomo (formerly Piwik)

Matomo does most of what Google Analytics does, and chances are it offers the features that you need.

Those features include metrics on the number of visitors hitting your site, data on where they come from (both on the web and geographically), the pages from which they leave, and the ability to track search engine referrals. Matomo also offers many reports, and you can customize the dashboard to view the metrics that you want to see.

To make your life easier, Matomo integrates with more than 65 content management, e-commerce, and online forum systems, including WordPress, Magento, Joomla, and vBulletin, using plugins. For any others, you can simply add a tracking code to a page on your site.

You can test-drive Matomo or use a hosted version.

The best part is that it is relatively easy to install if you decide to self-host it yourself on-premises.

It requires a MySQL or MariaDB database for the backend, a web server like nginx or Apache, and PHP. For tracking, you add a JavaScript-based tracking code to the <head> section of your website(s), unless you run a web engine like Drupal, WordPress or any of the other 65 supported content management, e-commerce or forum systems, which can do it via plugins instead.
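
To make this concrete, here is roughly what that tracking snippet looks like. This is a sketch based on the generic code a Matomo instance generates; the matomo.example.com URL and the site ID of 1 are placeholders for your own instance's values:

    <!-- Matomo tracking code: paste into the <head> of every page to be tracked -->
    <script>
      var _paq = window._paq = window._paq || [];
      _paq.push(['trackPageView']);      // count this page view
      _paq.push(['enableLinkTracking']); // also track outbound link clicks
      (function() {
        var u = "//matomo.example.com/"; // placeholder: your Matomo instance URL
        _paq.push(['setTrackerUrl', u + 'matomo.php']);
        _paq.push(['setSiteId', '1']);   // placeholder: your site's ID in Matomo
        var d = document, g = d.createElement('script'),
            s = d.getElementsByTagName('script')[0];
        g.async = true; g.src = u + 'matomo.js';
        s.parentNode.insertBefore(g, s);
      })();
    </script>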

If your site receives more than a couple of hundred visitors per day, it is advised to set up an auto-archiving cron task so that Matomo calculates your reports periodically. Once the cron task is set up and the timeout value increased, the Matomo dashboard will load very quickly, as the reports will be pre-processed by the core:archive command triggered by cron.

If you do not set up the cron task, Matomo will recalculate your statistics every time you visit a Matomo report, which will slow Matomo down and increase the load on your database.
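
For reference, that auto-archiving cron task looks something like the sketch below; the install path, PHP binary location, log file and the www-data user are assumptions that depend on your particular setup:

    # /etc/cron.d/matomo-archive: pre-process Matomo reports every hour
    5 * * * * www-data /usr/bin/php /var/www/matomo/console core:archive \
        --url=https://matomo.example.com/ > /var/log/matomo-archive.log 2>&1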

Matomo dashboard

Open Web Analytics

A close second to Matomo in the open source web analytics stakes is Open Web Analytics. In fact, it includes key features that either rival Google Analytics or leave it in the dust.

In addition to the usual raft of analytics and reporting functions, Open Web Analytics tracks where on a page, and on what elements, visitors click; provides heat maps that show where on a page visitors interact the most; and even does e-commerce tracking.

Open Web Analytics has a WordPress plugin and can integrate with MediaWiki using a plugin. Or you can add a snippet of JavaScript or PHP code to your web pages to enable tracking.

Before you download the Open Web Analytics package, you can give the demo a try to see if it's right for you.

Open Web Analytics dashboard

AWStats

Web server log files provide a rich vein of information about visitors to your site, but tapping into that vein isn't always easy. That's where AWStats comes to the rescue. While it lacks a modern look and feel, AWStats more than makes up for that with the breadth of data it can present.

That information includes the number of unique visitors, how long those visitors stay on the site, the operating system and web browsers they use, the size of a visitor’s screen, and the search engines and search terms people use to find your site. AWStats can also tell you the number of times your site is bookmarked, track the pages where visitors enter and exit your sites, and keep a tally of the most popular pages on your site.

These features only scratch the surface of AWStats's capabilities. It also works with FTP and email logs, as well as syslog files. AWStats can give you deep insight into what's happening on your website using data that stays under your control.

AWStats dashboard

Countly

Countly bills itself as a “secure web analytics” platform. While I can’t vouch for its security, Countly does a solid job of collecting and presenting data about your site and its visitors.

Heavily targeting marketing organizations, Countly tracks data that is important to marketers. That information includes site visitors’ transactions, as well as which campaigns and sources led visitors to your site. You can also create metrics that are specific to your business. Countly doesn’t forgo basic web analytics; it also keeps track of the number of visitors on your site, where they’re from, which pages they visited, and more.

You can use the hosted version of Countly or grab the source code from GitHub and self-host the application. And yes, there are differences between the hosted and self-hosted versions of Countly.

Countly dashboard

Plausible

Plausible is a newer kid on the open source analytics tools block. It's lean, it's fast, and it collects only a small amount of information: the number of unique visitors, the top pages they visited, the number of page views, the bounce rate, and referrers. Plausible is simple and very focused.

What sets Plausible apart from its competitors is its heavy focus on privacy. The project creators state that the tool doesn’t collect or store any information about visitors to your website, which is particularly attractive if privacy is important to you. You can read more about that here.

There's a demo instance that you can check out. After that, you can either self-host Plausible or sign up for a paid, hosted account.

Plausible dashboard

Closed Source alternatives:

Woopra

A cloud-based analytics tool with a free tier available (500K actions per month).

An action is any event tracked in Woopra, such as pageviews, downloads, property changes and any custom actions you configure. Also, every time a property is changed, for example when a user updates their email address or joins a segment, it will be listed as an action in the customer profile and counted towards your action quota.

As far as I could see, it offers no option for self-hosted or on-premises installations, which only hurts, as the next tier after Free is the one for 999 euros per month (for 5 million actions and a lot of Pro features).

Woopra dashboard

FoxMetrics ( I like the name)


They offer free plans for startups, minority-owned businesses, educational institutions, and non-profits.

StatCounter

A free tier is available to analyze the latest 500 page views, and affordable tiers let you analyze more; for example, analyzing the latest 100,000 page views costs only 7 euros a month.

Clicky

Why look for alternatives to Google Analytics, and why do I look at metrics myself?

It is good to have alternatives... for everything in life... mostly true.

A different look, more metrics, new or previously untapped data, visually more pleasing dashboards, or simply another interpretation of the same pile of information: another look, or a fresh view, even on the same data, can help you interpret trends and make the best possible decision when it comes to your content and audience. It can help you see whether what you are trying to implement or introduce is well received, or perhaps making your visitors furrow their eyebrows...

The more, and the more versatile, views or angles you have on the same data, the better you can outdo your competition and steer your business forward, which will make people think you have a sixth sense or a magic crystal ball.

To use my own example: the metrics that are important to me are the ones that tell me as much as possible about the visitors to my website. Where do they come from, what device do they use, how long do they stay, which pages do they spend time on or exit from? Is their traffic direct, did they use specific keywords in Google to find me, or do they come from the social channels where I tend to run ads and promote my show (Mastodon, Twitter, Instagram, Pinterest; also paid ads), and so on.

These metrics, collected together with the listening statistics I get from podcast platforms, help me see a couple of things.

As I host my shows and material in English

Do I reach the proper audience and get the most visits / plays? My goal is mostly to reach a native English-speaking audience; hence I use English instead of, for example, Hungarian or Spanish.

Devices used to visit my website (desktop or mobile)

I get most of my traffic from mobile devices, compared to desktop browsers and tablets, so it is important that the website and content are presented properly on this target platform (mobile phones), which requires constant checking to make sure everything renders correctly, improving the experience each time.

The Source of the Traffic

  • Direct: Any traffic where the referrer or source is unknown
  • Email: Traffic from email marketing that has been properly tagged with an email parameter (see the tagging example after this list)
  • Organic: Traffic from search engine results that is earned, not paid
  • Paid search: Traffic from search engine results that is the result of paid advertising via Google AdWords or another paid search platform
  • Referral: Traffic that occurs when a user finds you through a site other than a major search engine
  • Social: Traffic from a social network, such as Facebook, LinkedIn, Twitter, or Instagram
  • Other: If traffic does not fit into another source or has been tagged as “Other” via a URL parameter, it will be bucketed into “Other” traffic
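
To make that bucketing work for your own campaigns, you tag the links you share with UTM query parameters, which analytics tools read to attribute the visit to a source, medium and campaign. A hypothetical example (the domain, path and values are placeholders):

    https://example.com/episode-48/?utm_source=twitter&utm_medium=social&utm_campaign=episode-48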

These are just some of the metrics I personally care about. Because imagine I spend time, effort and money (Pinterest ads and even Instagram) to promote the show, all in English, and my metrics showed me I am most popular in non-English-speaking countries and most of my traffic is organic, coming from search engine results.

It would mean I am spending money on promoting my site on social platforms and producing content in English when perhaps I should either promote elsewhere, or look into why I am successful where my stats say I am, or, the opposite, into what keeps me from being popular where I want to be.

Links

https://www.searchenginejournal.com/google-analytics-alternatives/347638/

https://opensource.com/article/18/1/top-5-open-source-analytics-tools

https://bookauthority.org/books/new-google-analytics-ebooks

https://www.capterra.com/sem-compare/web-analytics-software?gclid=EAIaIQobChMI_dX54uC07AIVV_lRCh1zHwz0EAAYAiAAEgJFcPD_BwE

https://www.smartbugmedia.com/blog/what-is-the-difference-between-direct-and-organic-search-traffic-sources

TSR – The Server Room Show – Episode 49 – Professional Audio Production on Linux

Prologue

Creating musical melodies with a computer goes back as far as the early 1950s in Australia, with a computer originally named the CSIR Mark 1, which was programmed to play popular musical melodies. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded.

What is a DAW?

A digital audio workstation is an electronic device or application software used for recording, editing and producing audio files.

Some of the main functions of Digital Audio Workstations are:

Producing: From creating simple house loops and beats to full, expansive tracks, DAWs allow you to produce and fine-tune the creation and layering of these projects, usually with the help of VST plugins.

Tracking: During recording sessions, bands or orchestras are often recorded into multiple tracks at once. This allows the producer or engineer to capture the real-time sound of a group performance. These tracked performances can then be edited and mixed to perfection.

Mixing: This is the process of balancing and blending all the individual elements of a track by adjusting equalization/FX levels and parameters so that the track sounds as good as possible.

Live Performance: The process of building and editing tracks in real-time. Some DAWs are specifically designed for this process, as you’ll see below.

Composing: DAWs can also be used for the process of composing and constructing film/TV/game scores.

History

The first DAWs were conceived in the late 70s and early 80s. Soundstream, which developed the first digital recorder in 1977, developed what is considered the first DAW. Bringing together a minicomputer, disk drive, video display, and the software to run it all was the easy part.

Finding inexpensive storage and fast enough processing and disk speeds to run a commercially viable DAW proved the main challenge for the ensuing years. But as the home computer market exploded with the likes of Apple, Atari, and Commodore Amiga in the late 1980s, size and speed concerns were no longer an issue.

By the late 1980s, a number of consumer-level computers such as the MSX (Yamaha CX5M), Apple Macintosh, Atari ST and Commodore Amiga began to have enough power to handle digital audio editing. Engineers used Macromedia’s Soundedit, with Microdeal’s Replay Professional and Digidesign’s “Sound Tools” and “Sound Designer” to edit audio samples for sampling keyboards like the E-mu Emulator II and the Akai S900. Soon, people began to use them for simple two-track audio editing and audio mastering.

In 1989, Sonic Solutions released the first professional (48 kHz at 24 bit) disk-based nonlinear audio editing system. The Macintosh IIfx-based Sonic System, based on research done earlier at George Lucas’ Sprocket Systems, featured complete CD premastering, with integrated control of Sony’s industry-standard U-matic tape-based digital audio editor.

In 1994, a company in California named OSC produced a 4-track editing-recorder application called DECK that ran on Digidesign's hardware system, which was used in the production of The Residents' "Freakshow" LP.

Many major recording studios finally “went digital” after Digidesign introduced its Pro Tools software in 1991, modeled after the traditional method and signal flow in most analog recording devices. At this time, most DAWs were Apple Mac based (e.g., Pro Tools, Studer Dyaxis, Sonic Solutions). Around 1992, the first Windows-based DAWs started to emerge from companies such as Innovative Quality Software (IQS) (now SAWStudio), Soundscape Digital Technology, SADiE, Echo Digital Audio, and Spectral Synthesis. All the systems at this point used dedicated hardware for their audio processing.

In 1993, the German company Steinberg released Cubase Audio on Atari Falcon 030. This version brought DSP built-in effects with 8-track audio recording & playback using only native hardware. The first Windows-based software-only product, introduced in 1993, was Samplitude (which already existed in 1992 as an audio editor for the Commodore Amiga).

In 1996, Steinberg introduced a revamped Cubase (which was originally launched in 1989 as a MIDI sequencing software for the Atari ST computer, later developed for Mac and Windows PC platforms, but had no audio capabilities until 1993’s Cubase Audio) which could record and play back up to 32 tracks of digital audio on an Apple Macintosh without the need of any external DSP hardware. Cubase not only modeled a tape-like interface for recording and editing, but, in addition, using VST also developed by Steinberg, modeled the entire mixing desk and effects rack common in analog studios. This revolutionized the DAW world, both in features and price tag, and was quickly imitated by most other contemporary DAW systems.

Electronic music became the megalith it is today and anybody who wanted to make music in their bedroom, basement, or local park bench could do just that. Whether they should is still a point that’s up for debate.

Working with Audio on Linux – DAWs and Audio Applications


Linux, which came into existence at the end of 1991 in the form of the Linux kernel, is obviously late to the audio and music scene, but it is definitely trying to catch up.

Audacity

Audacity

Audacity is one of the most basic yet capable audio editors available for Linux. It is a free and open-source cross-platform tool. A lot of you must already know about it.

It has improved a lot when compared to the time when it started trending. I do recall that I utilized it to “try” making karaokes by removing the voice from an audio file. Well, you can still do it – but it depends.

Features:

It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments.

  • Live audio recording through a microphone or a mixer
  • Export/Import capability supporting multiple formats and multiple files at the same time
  • Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins
  • Easy editing with cut, paste, delete and copy functions.
  • Spectrogram view mode for analyzing frequencies

LMMS

LMMS

LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features.

You can mix sounds, arrange them, or create them using VST instruments, which it does support. It also comes baked in with some samples, presets, VST instruments, and effects to get you started. In addition, you get a spectrum analyzer for some advanced audio editing.

Features:

  • Note playback via MIDI
  • VST Instrument support
  • Native multi-sample support
  • Built-in compressor, limiter, delay, reverb, distortion and bass enhancer

Ardour

Ardour

Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks. The multichannel tracks can also be routed to different mixer strips for ease of editing and recording.

You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well.

Features:

  • Non-linear editing
  • Vertical window stacking for easy navigation
  • Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing



Mixxx

Mixxx

If you want to mix and record something while having a virtual DJ tool, Mixxx would be a perfect fit. You can detect the BPM and key of a track, and use the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux!

It supports custom DJ equipment as well. So, if you have DJ hardware or a MIDI controller, you can record your live mixes using this tool.

Features

  • Broadcast and record DJ mixes of your songs
  • Ability to connect your equipment and perform live
  • Key detection and BPM detection



Rosegarden

Rosegarden: music software for Linux

Rosegarden is yet another impressive audio editor for Linux which is free and open source. It is neither a fully featured DAW nor a basic audio editing tool. It is a mixture of both with some scaled down functionalities.

I wouldn’t recommend this for professionals but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed.

Features:

  • Music notation editing
  • Recording, Mixing, and samples



Cecilia

Cecilia - ear-bending sonics - LinuxLinks
Cecilia

Cecilia is not an ordinary audio editor application. It is meant for sound designers, or for those in the process of becoming one. It is technically an audio signal processing environment that lets you create ear-bending sound.

You get built-in modules and plugins for sound effects and synthesis. It is tailored for a specific use; if that is what you were looking for, look no further!

Features:

  • Modules to achieve more: UltimateGrainer (state-of-the-art granulation processing), RandomAccumulator (variable-speed recording accumulator), UpDistoRes (distortion with upsampling and a resonant lowpass filter)
  • Automatic Saving of modulations



Davinci Resolve 16

Davinci Resolve on Linux


DaVinci Resolve (originally known as da Vinci Resolve) is a color correction and non-linear video editing (NLE) application for macOS, Windows, and Linux, originally developed by da Vinci Systems, and now developed by Blackmagic Design following its acquisition in 2009.

In addition to the commercial version of the software (known as DaVinci Resolve Studio), Blackmagic Design also distributes a free edition with reduced functionality, simply named DaVinci Resolve (formerly known as DaVinci Resolve Lite).


Renoise

Renoise


Renoise is a Digital Audio Workstation (DAW) with a refreshing twist. It lets you record, compose, edit, process and render production-quality audio using a tracker-based approach.

In a tracker, the music runs from top to bottom in an easily understood grid known as a pattern. Several patterns arranged in a certain order make up a song. Step-editing in a pattern grid lends itself well to a fast and immediate workflow. On top of this, Renoise features a wide range of modern features: dozens of built-in audio processors, alongside support for all commonly used virtual instrument and effect plug-in formats. And the software can be extended too: with scripting, you can use any of your MIDI or OSC controllers to control it in exactly the way you want.


Harrison Mixbus

*90 USD license with Free Demo Option*

Special offer – Get Mixbus for $19 instead of $89
https://harrisonconsoles.com/site/specials.html

Harrison Mixbus

Harrison Mixbus is a digital audio workstation (DAW) available for Microsoft Windows, Mac OS X and Linux operating systems; version 1 was released in 2009.

Mixbus provides a modern DAW model incorporating a "traditional" analog mixing workflow. It includes built-in proprietary analog-modeled processing, based on Harrison's 32-series and MR-series analog music consoles.

Mixbus is based on Ardour, the open source DAW, but is sold and marketed commercially by Harrison Audio Consoles.

It comes in two versions:

Mixbus ($79) and Mixbus 32c ($299).

There is a specials offer on their page: you can get Mixbus for $19, just like I did, and even get their XT-ME Mastering Equalizer plugin for only $9 instead of $109. The link to the special offer is in the show notes.

Mixbus is a full-featured digital audio workstation for recording, editing, mixing, and mastering your music.

Engineered and supported by Harrison, and developed in collaboration with an open-source community, Mixbus represents the finest sound quality of any DAW at a great price.

Mixbus 32c improves on the Mixbus platform with an exact emulation of the original Harrison 32C parametric four-band sweepable EQ, and 4 additional stereo summing buses.


Reaper

REAPER (an acronym for Rapid Environment for Audio Production, Engineering, and Recording) is a digital audio workstation and MIDI sequencer software created by Cockos. The current version is available for Microsoft Windows (XP and newer) and macOS (10.5 and newer) – beta versions are also available for Linux. REAPER acts as a host to most industry-standard plug-in formats (such as VST and AU) and can import all commonly used media formats, including video. REAPER and its included plug-ins are available in 32-bit and 64-bit format.

REAPER provides a free, fully functional 60-day evaluation period. For further use two licenses are available, a commercial and a discounted one. They are identical in features and differ only in price and target audience, with the discount license being offered for private use, schools and small businesses. Any paid license includes the current version with all of its future updates and a free upgrade to the next major version and all of its subsequent updates, when they are released. Any license is valid for all configurations (x64 and x86) and allows for multiple installations, as long as it is run on one computer at a time.

Extensive customization opportunities are provided through the use of ReaScript (edit, run and debug scripts within REAPER) and user-created themes and functionality extensions.

ReaScript can be used to create anything from advanced macros to full-featured REAPER extensions. ReaScripts can be written in EEL2 (JSFX script), Lua and Python. SWS / S&M is a popular, open-source extension to REAPER, providing workflow enhancements and advanced tempo/groove manipulation functionality.
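
To give a taste of what a ReaScript looks like, here is a minimal sketch in Python, one of the three supported languages. It assumes the reaper_python module that REAPER exposes to Python ReaScripts, and it would be run from REAPER's Actions list rather than from a standalone interpreter:

    # Minimal ReaScript sketch (Python): report how many tracks the project has.
    from reaper_python import *  # provided by REAPER to Python ReaScripts

    track_count = RPR_CountTracks(0)  # 0 = the currently open project
    RPR_ShowConsoleMsg("This project has %d tracks\n" % track_count)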

REAPER’s interface can be customized with user-built themes. Each previous version’s default theme is included with REAPER and theming allows for complete overhauls of the GUI. REAPER has been translated into multiple languages and downloadable language packs are available. Users as well as developers can create language packs for REAPER.

Reaper comes with a variety of commonly used audio production effects. They include tools such as ReaEQ, ReaVerb, ReaGate, ReaDelay, ReaPitch and ReaComp. The included Rea-plug-ins are also available as a separate download for users of other DAWs, as the ReaPlugs VST FX Suite.

Also included are hundreds of JSFX plug-ins ranging from standard effects to specific applications for MIDI and audio. JSFX scripts are text files, which when loaded into REAPER (exactly like a VST or other plug-in) become full-featured plug-ins ranging from simple audio effects (e.g. delay, distortion, compression) to instruments (synths, samplers) and other special-purpose tools (drum triggering, surround panning). All JSFX plug-ins are editable in any text editor and thus are fully user-customizable.

REAPER includes no third-party software, but is fully compatible with all versions of the VST standard (currently VST3) and thus works with the vast majority of both free and commercial plug-ins available. REAPER x64 can also run 32-bit plug-ins alongside 64-bit processes.

While not a dedicated video editor, REAPER can be used to cut and trim video files and to edit or replace the audio within. Common video effects such as fades, wipes and cross-fades are available. REAPER aligns video files in a project, as it would an audio track, and the video part of a file can be viewed in separate video window while working on the project.

Reaper on Linux

Two Very Important Applications for Linux When It Comes to Audio Routing

Jack

Jack is a low-latency-capable audio and MIDI server designed for pro audio use. It enables all Jack-capable applications to connect to each other (a command-line routing example follows below). It is included in Linux distributions geared towards audio production, like AV Linux and Ubuntu Studio; Ubuntu Studio even has its own Ubuntu Studio Controls (see screenshot later) to interact with Jack:

(I left a great article in the show notes for starting out with Jack under Linux the easy way.)

  • provides low latency (less than 5 milliseconds with the right hardware)
  • allows multiple audio devices to be used at once
  • recognizes hotplugged USB audio devices
Qjackctl main window, also with connection window showing
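
To give a feel for that routing from the command line, here is a minimal sketch using the jack_lsp and jack_connect example clients that ship with Jack; the recorder's port name below is illustrative, since real names depend on the applications you run:

    # List every port currently registered with the Jack server
    jack_lsp

    # Patch the first hardware capture channel into a recording app's input
    jack_connect system:capture_1 my_recorder:in_1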

Carla

Carla is a virtual audio rack and patchbay, otherwise known as a plugin host, that can use audio plugins normally used in a DAW such as Ardour, as if it were a rack of audio hardware. Some of its features include:

  • Saving virtual racks and connections
  • Interacting with several plugin types, including LADSPA, LV2, DSSI, and VST.
  • Has a plugin bridge that utilizes WINE to use plugins compiled for Windows devices (experimental, not installed by default).
Carla

Linux Distributions Dedicated to Audio Production on Linux


AVLinux

AV Linux



AV Linux is a Linux-based operating system aimed at multimedia content creators. Available for the i386 and x86-64 architectures, with a kernel customised for maximum performance and low-latency audio production, it has been recommended as a supported Linux platform for Harrison Mixbus.

AV Linux is built on top of Debian.
AV Linux is bundled with software for both everyday use and media production.

Preinstalled audio software includes: Ardour, Audacity, Calf Studio Gear, Carla, Guitarix, Hydrogen and MuseScore.

Ubuntu Studio 20

Ubuntu Studio 20

KX.Studio

Not a Linux distribution as such, but a repository for Debian/Ubuntu based distros, plus a custom set of their own applications and plugins for working with audio on Linux (it also has some apps for Windows).

Carla, for example, is one of KX.Studio's applications.

PLUGINS

VST and Other Plugin Formats under Linux

LADSPA (Linux Audio Developers Simple Plugin API) was released in 2000, just a year after Steinberg released VST 2.0 in 1999, the most famous plugin format.

LADSPA plugins are only effects processors.

No fancy GUI, just a simple generic one.

DSSI (Disposable Soft Synth Interface) followed in 2004; it is sometimes referred to as LADSPA-for-instruments.

LV2 – LADSPA v2

Combines the best of both and can serve as the replacement for both DSSI and LADSPA.

Vamp

Vamp plugins do not make or modify audio or MIDI data. They analyze sound and extract its features. Audacity and Mixxx, for example, use them to analyze the tempo and key of songs.

VSTs under Linux

VSTs that can be used on Linux come in two flavors:

  • Natively compiled VST plug-ins, also known as LinuxVSTs. These are plug-ins that are compiled, or can be compiled, on Linux systems with the help of either the Steinberg header files from the VST SDK or the Vestige header (the open source equivalent of the Steinberg headers).
  • VSTs compiled for Windows. These can be used with the help of Wine and any host that supports Windows VSTs.

Personally, one of the things I miss is more commercial plugins being available under Linux as well. Many commercial plugins are developed only for Windows/Mac; I would assume most of the time because of the larger user base and all the fancy proprietary licensing and other protection methods involved (Waves plugins, Plugin Alliance, etc.).

Links

https://www.recordingconnection.com/what-are-digital-audio-workstations-daw/
https://en.wikipedia.org/wiki/Computer_music

Best Youtube Linux Channel related to Audio production:
https://www.youtube.com/channel/UCAYKj_peyESIMDp5LtHlH2A

Talk on Plugin Formats * Filipe Coelho *
https://www.youtube.com/watch?v=1DJ5aEU-JkA

https://harrisonconsoles.com/site/index.html

https://wiki.linuxaudio.org/wiki/vst_support_and_commercial_apps

https://forum.renoise.com/t/list-of-freeware-and-commercial-linux-vst-i/47242

https://www.reaper.fm/reaplugs/

https://libremusicproduction.com/articles/demystifying-jack-%E2%80%93-beginners-guide-getting-started-jack.html

https://kx.studio/

TSR – The Server Room Show – Episode 47 – Remote Management Tools

Prologue

Today it is about the system and network administrators. Especially the ones who would do everything remotely, from the comfort of their own chair and desk, preferably using their own computer, and just do what needs to be done or dealt with in the company infrastructure or network. I know this because I am one of those people.

Solutions from remote management and support tools to IPMI and managed PDUs: all and everything that helps you be far away yet work just as efficiently, from your own comfort, as if you were there.

In-Band Vs Out of Band Management

In-Band management is the ability to administer resources or network devices via the corporate LAN, while Out-of-Band management is a solution that provides a secure, dedicated, alternate access method into an IT network infrastructure to administer connected devices and IT assets without using the corporate LAN.

Hardware and software solutions exist for both In-Band and Out-of-Band management, to help a system or network administrator achieve what he/she has to, either from inside the corporate LAN, working from the corporate office or on site, or remotely, working from home or from halfway across the country or continent.

Money invested in these solutions is fruitful in the long term: a technician has to travel less often and can work securely from a remote location without being put in harm's way unnecessarily, not to mention situations where there is simply no way to get to the location to deal with an emergency, e.g. at 2 am, when the closest technician lives 1.5 hours away by regular commute.

As we will see, some of these built-in OOB or In-Band solutions come as standard on some devices and optional on others, or even require a completely separate appliance dedicated to a given task or purpose.

When it comes to software, some of it dates as far back as the late 1960s and the 1970s.

Some solutions, both hardware and software, can be used for both In-Band and Out-of-Band (OOB) access or management, while others are more suited or dedicated to one approach or the other.

OOB management mostly serves emergency operations and maintenance, while In-Band management, by the nature of having direct network access to the resources via the corporate LAN, is more suited to normal business hours, when it is also possible for a technician to walk up to the machine or server if he/she has to.


Software Solutions

Some of these you already know; maybe you even use them on a daily basis and just never consciously thought of them as tools in the toolbox of remote management solutions.

Telnet / SSH

Telnet is an application protocol used on the Internet or local area network to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP).

Telnet was developed in 1969 and became one of the first Internet standards. The name stands for “teletype-network”

Historically, Telnet provided access to a command-line interface on a remote host. However, because of serious security concerns when using Telnet over an open network such as the Internet, its use for this purpose has waned significantly in favor of SSH.

The term telnet is also used to refer to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms. Telnet is also used as a verb. To telnet means to establish a connection using the Telnet protocol, either with a command line client or with a graphical interface. For example, a common directive might be: “To change your password, telnet into the server, log in and run the passwd command.” In most cases, a user would be telnetting into a Unix-like server system or a network device (such as a router).

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line login and remote command execution, but any network service can be secured with SSH.

SSH provides a secure channel over an unsecured network by using a client–server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22. SSH is generally used to access Unix-like operating systems, but it can also be used on Microsoft Windows. Windows 10 uses OpenSSH as its default SSH client and SSH server.

Despite popular misconception, SSH is not an implementation of Telnet with cryptography provided by the Secure Sockets Layer (SSL).

SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rsh and the related rlogin and rexec protocols. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing them to read, modify and selectively suppress the contents of SSH sessions.

SSH can also be run using SCTP rather than TCP as the connection oriented transport layer protocol. The IANA has assigned TCP port 22, UDP port 22 and SCTP port 22 for this protocol.
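
A few everyday SSH invocations, for illustration; the host and user names are placeholders:

    # Interactive login to a remote host (TCP port 22 by default)
    ssh admin@server.example.com

    # Run a single command remotely instead of opening a shell
    ssh admin@server.example.com uptime

    # Tunnel local port 8080 to a web UI reachable only from the remote host
    ssh -L 8080:localhost:80 admin@server.example.com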

VNC / RDP

VNC

In computing, Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse events from one computer to another, relaying the graphical-screen updates back in the other direction, over a network.

VNC is platform-independent – there are clients and servers for many GUI-based operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one’s work computer from one’s home computer, or vice versa.

VNC was originally developed at the Olivetti & Oracle Research Lab in Cambridge, United Kingdom. The original VNC source code and many modern derivatives are open source under the GNU General Public License.
VNC in KDE 3.1

There are a number of variants of VNC which offer their own particular functionality; e.g., some optimised for Microsoft Windows, or offering file transfer (not part of VNC proper), etc. Many are compatible (without their added features) with VNC proper in the sense that a viewer of one flavour can connect with a server of another; others are based on VNC code but not compatible with standard VNC.

VNC and RFB are registered trademarks of RealVNC Ltd. in the US and some other countries.

RDP

Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft which provides a user with a graphical interface to connect to another computer over a network connection. The user employs RDP client software for this purpose, while the other computer must run RDP server software.

Clients exist for most versions of Microsoft Windows (including Windows Mobile), Linux, Unix, macOS, iOS, Android, and other operating systems. RDP servers are built into Windows operating systems; an RDP server for Unix and OS X also exists. By default, the server listens on TCP port 3389 and UDP port 3389.

Microsoft currently refers to their official RDP client software as Remote Desktop Connection, formerly “Terminal Services Client”.
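
For illustration, connecting from a Linux workstation with common clients might look like this; the host names are placeholders, vncviewer is the TigerVNC/RealVNC client and xfreerdp is the FreeRDP client:

    # VNC: connect to display :1 on a remote machine
    vncviewer vncserver.example.com:1

    # RDP: connect to a Windows host with FreeRDP
    xfreerdp /v:winserver.example.com:3389 /u:admin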




Hardware Solutions


iLO – DRAC – ILOM management interfaces

All three of the above solutions are silicon-based (a custom chip by the manufacturer), built in either as standard or as an option on their server products. These special hardware chips, with an RJ45 port and embedded software, offer complete access to manage, troubleshoot and interact with the server, e.g. to set it up from zero without any OS installed and while powered down, just by connecting power and network.

Servers can be racked with minimal effort and configuration (only power and a network cable need to be plugged into these management interfaces), and a system administrator can remotely power on the server, boot an ISO image of an operating system like a virtual CD to install it, and see and interact with the server just as if he/she had the keyboard, mouse and monitor plugged in locally, including access to the BIOS and other management consoles on the server, and all console messages, even pre-boot.
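
These management controllers typically also implement the vendor-neutral IPMI interface, so as a sketch, remote power control and console access can look like the following with the standard ipmitool utility; the BMC address and credentials are placeholders:

    # Query and cycle server power through the BMC, over the network
    ipmitool -I lanplus -H bmc.example.com -U admin -P 'secret' chassis power status
    ipmitool -I lanplus -H bmc.example.com -U admin -P 'secret' chassis power cycle

    # Attach to the Serial-over-LAN console to watch pre-boot messages
    ipmitool -I lanplus -H bmc.example.com -U admin -P 'secret' sol activate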

Console Servers

Console servers are dedicated 19″ rackmount 1U or 2U purpose-built devices, most of the time with a proprietary embedded operating system (lately many of these are being taken over by embedded Linux OSes such as BusyBox-based ones).

They enable secure remote console management of any device with a serial or USB console port, including Cisco routers, switches and firewalls, servers, PBXs and more.

A single-purpose-built hardware solution that provides a secure alternate route to monitor IT, networking, security and power devices from multiple vendors.

While software management tools can be used for performance monitoring and some remote troubleshooting, they only work when the network is up.

A console server ensures that the on-site infrastructure is accessible even during network outages.

They can be used to reconfigure, reboot and reimage remotely across the internet or WANs. Disruption and downtime are minimized by providing better visibility of the physical environment and the physical status of equipment. This ensures business continuity through improved uptime and efficiency.

Normally, console servers provide various ways to securely access on-site infrastructure, such as a 4G/LTE modem, Wi-Fi, a V.92 modem, or dual redundant uplinks in the form of copper and SFP fiber network access ports.

I have two older models myself.

console server example
Various routes provided for secure OOB access.


KVM over IP

Remote Server Access (KVM Over IP) products are a new breed of non-intrusive, hardware-based solutions which allow you both in-band and out-of-band network access to all the servers connected to your KVM switch. Utilizing advanced security, and regardless of operating system, these KVM Over IP products allow you to remotely control all your servers/CPUs, including pre-boot functions such as editing CMOS settings and power-cycling your servers. KVM Over IP products allow you access via your internal LAN/WAN, and connectivity via the Internet or dial-in access via ISDN or standard 56K modems. Access to the IP KVMs is secured with military-grade network security.

Utilizing all these advanced features in conjunction is critical for remote maintenance, support, and failure recovery of data center devices.


KVM Over IP Solution Diagram

KVM Over IP (Out-Of-Band)


Most KVM Over IP devices offer remote out-of-band access from anywhere in the world using a web browser or an alternative protocol. KVM Over IP devices can be wired to a single server or computer with a KVM Over IP gateway, or to a KVM switch with multiple sources that can easily be switched between.

IP KVM Application

Networked KVM (In-Band)

Another type of IP KVM product is known as Desktop over IP. Desktop over IP is similar to a KVM extender solution, but is routed via the internal LAN/WLAN network to provide a true desktop experience in a point-to-point or point-to-multipoint configuration. This type of solution is very popular in the broadcast market, clean rooms, secure computing environments and many other solutions that require high resolution, USB peripheral flexibility or environments that you cannot simply run a Cat5 or fiber cable.

Desktop Over IP Application

Web Browser Access

Most IP KVMs allow local (in-band) and remote (out-of-band) operators the ability to monitor and access their servers, storage, and network devices over the network using a web browser (Java or JavaScript / HTTPS; IPv4, IPv6). Web-based control methods employ high-specification security techniques to ensure that only authorized users may gain access.

VNC Viewer Access

RealVNC (Virtual Network Computing) software was devised to enable users to access and control remote computers. An IP KVM switch with the RealVNC protocol embedded into the security layer provides the benefits of both hardware- and software-based solutions: universal compatibility, superior graphical performance, and reliable BIOS-level access, together with encryption to assure the safety of your enterprise.

Serial Console Access (CLI – Command Line Only)

A lot of IP KVMs feature RS232, DB-15, Ethernet, or USB based Serial ports for managing external devices such as servers, switches, and IP routers through a command line interface (CLI). Serial Console access allows for text-based administrative tasks such as accessing the BIOS or boot loader, the kernel, the init system, or the system logger. Serial control requires very little IP bandwidth and can be especially effective in low bandwidth applications.
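
When you are on the console server itself (or on site with a laptop), attaching to such a serial port is a one-liner. A sketch with two common tools, where the device path and baud rate depend on your adapter and target:

    # Attach to a serial console through a USB-to-serial adapter
    screen /dev/ttyUSB0 9600

    # The same with minicom, specifying device and baud rate explicitly
    minicom -D /dev/ttyUSB0 -b 115200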


Remote Power

A last-resort option when a system hangs or needs a forced reboot and none of the other means or management interfaces discussed previously respond.

There are many types of PDUs and ATSes (Automatic Transfer Switches):

  • Basic
  • Metered
  • Monitored
  • Switched
  • Switched Metered-by-Outlet

  • Metered ATS
  • Switched ATS


Cyberpower PDU83102
Switched Metered-by-Outlet PDU


Links



https://www.perle.com/supportfiles/out-of-band-management.shtml

https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol

https://drwetter.eu/talks/oob-management,sagehh.pdf

https://www.perle.com/products/console-server.shtml

https://opengear.com/products/cm7100-console-server/

TSR – The Server Room Show – Episode XX – #WFH The IT Industry are Ready for it But are Businesses Ready for Working from Home?

Prologue

OOB management and remote support solutions have long existed for those system and network administrators who do not wish to go in, or perhaps sit in a chilled datacenter, to do what needs to be done or to support what needs to be dealt with.

We did discuss those technologies in a previous episode already.

But this time I want to talk about the rest of the workers: those other workers, men and women, whose jobs would perfectly allow them to conduct their daily tasks and responsibilities without the need to be hands-on or in person at a given specific location, aka the corporate office.

WFH – Working From Home – The IT Industry is Ready Are The Companies too?

With lockdowns all over the world in 2020 and restrictions imposed on our everyday lives, #WorkingFromHome became a new phenomenon for many companies. Will working from home be the new normal? Will we go back to the offices, or will companies cut and optimize costs by letting everyone who can do their job from home do so, saving the time and money spent on commuting, reducing unnecessary travel back and forth, and attending in-person meetings with clients only when necessary (perhaps in meeting or business centers where you pay per hour for a full-fledged service, like a conference center)? Of course, jobs where hands-on, physical presence is required will still be done on site and on location.

But for the rest of the jobs, which could shift to WFH, both employee and employer could save a BIG chunk of money by reducing office space and monthly costs... OK, you would need to start paying for your own coffee from now on 🙂

The IT industry is ready, with remote working solutions, online meetings, and teamwork and collaboration offerings/software already available.

Count in VDI, as we discussed in a previous episode, and with thin clients if you wish, and you have low-cost hardware in the hands of your employees who can work from home.

TSR – The Server Room Show – Episode 46 – VDI & Thin Clients

The Three Types of Client Virtualization

Presentation Virtualization

Think of RDP or VNC technologies, or Microsoft Terminal Server / Citrix MetaFrame, where all the running applications live, run, and consume RAM and CPU on the remote server, while you, the user, interact with them through a presented window, or shell, to use a better word, like a VNC window or an RDP connection.

VDI

Virtual Desktop Infrastructure, the topic of today, so let's skip this for now.

Application Virtualization

A case where individual applications can run on the client machine without ever being installed on it, while consuming resources on it and running as if installed natively. The application runs in a sandbox or on top of an abstraction layer, which even allows various versions of the same application to be executed at the same time, e.g. Office 2003 and Office 2019 side by side, without causing any compatibility or other issues on the client itself. Wine, the Windows compatibility layer for Linux, is in fact very similar, and if you ask me, I consider it a form of application virtualization, as it fits the example nearly perfectly, with the exception that apps in Wine do install locally into a specific folder when you run/install them. It does, however, fulfill the function of allowing an application to run on a foreign client, or on top of an OS, where it would otherwise not be possible natively (Windows apps on Linux or vice versa).

What is Virtual Desktop Infrastructure?

Virtual desktop infrastructure or VDI is a technology that refers to the use of virtual machines to provide and manage virtual desktops. VDI hosts desktop environments on a centralized server and deploys them to end-users on request. 

In VDI, a hypervisor segments servers into virtual machines that in turn host virtual desktops, which users access remotely from their devices. Users can access these virtual desktops from any device or location, and all processing is done on the host server. Users connect to their desktop instances through a connection broker, which is a software-based gateway that acts as an intermediary between the user and the server.

VDI can be either persistent or nonpersistent. Each type offers different benefits:

  • With persistent VDI, a user connects to the same desktop each time, and users are able to personalize the desktop for their needs since changes are saved even after the connection is reset. In other words, desktops in a persistent VDI environment act exactly like a personal physical desktop. 
  • In contrast, nonpersistent VDI, where users connect to generic desktops and no changes are saved, is usually simpler and cheaper, since there is no need to maintain customized desktops between sessions. Nonpersistent VDI is often used in organizations with a lot of task workers, or employees who perform a limited set of repetitive tasks and don’t need a customized desktop.

Why VDI?

VDI offers a number of advantages, such as user mobility, ease of access, flexibility and greater security. In the past, its high-performance requirements made it costly and challenging to deploy on legacy systems, which posed a barrier for many businesses. However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers a solution that provides scalability and high performance at a lower cost.

What are the benefits of VDI?

Although VDI’s complexity means that it isn’t necessarily the right choice for every organization, it offers a number of benefits for organizations that do use it. Some of these benefits include: 

  • Remote access: VDI users can connect to their virtual desktop from any location or device, making it easy for employees to access all their files and applications and work remotely from anywhere in the world.
  • Cost savings: Since processing is done on the server, the hardware requirements for end devices are much lower. Users can access their virtual desktops from older devices, thin clients, or even tablets, reducing the need for IT to purchase new and expensive hardware. 
  • Security: In a VDI environment, data lives on the server rather than the end client device. This serves to protect data if an endpoint device is ever stolen or compromised.
  • Centralized management: VDI’s centralized format allows IT to easily patch, update or configure all the virtual desktops in a system.

What is VDI used for?

Although VDI can be used in all sorts of environments, there are a number of use cases that are uniquely suited for VDI, including:

  • Remote work: Since VDI makes virtual desktops easy to deploy and update from a centralized location, an increasing number of companies are implementing it for remote workers.
  • Bring your own device (BYOD): VDI is an ideal solution for environments that allow or require employees to use their own devices. Since processing is done on a centralized server, VDI allows the use of a wider range of devices. It also offers better security, since data lives on the server and is not retained on the end client device.
  • Task or shift work: Nonpersistent VDI is particularly well suited to organizations such as call centers that have a large number of employees who use the same software to perform limited tasks. 

What is the difference between VDI and desktop virtualization?

Desktop virtualization is a generic term for any technology that separates a desktop environment from the hardware used to access it. VDI is a type of desktop virtualization, but desktop virtualization can also be implemented in different ways, such as remote desktop services (RDS), where users connect to a shared desktop that runs on a remote server.

What is the difference between VDI and virtual machines (VMs)?

Virtual machines are the technology that powers VDI. VMs are software “machines” created by partitioning a physical server into multiple virtual servers through the use of a hypervisor. (This process is also known as server virtualization.) Virtual machines can be used for a number of applications, one of which is running a virtual desktop in a VDI environment.

What is Virtual Desktop?

Virtual desktops are preconfigured images of operating systems and applications in which the desktop environment is separated from the physical device used to access it. Users can access their virtual desktops remotely over a network. Any endpoint device, such as a laptop, smartphone or tablet, can be used to access a virtual desktop. The virtual desktop provider installs client software on the endpoint device, and the user then interacts with that software on the device. 

A virtual desktop looks and feels like a physical workstation. The user experience is often even better than a physical workstation because powerful resources, such as storage and back-end databases, are readily available. Users may or may not be able to save changes or permanently install applications, depending on how the virtual desktop is configured. Users experience their desktop exactly the same way every time they log in, no matter which device they are logging into it from.

Types of virtual desktops

There are a few different types of virtual desktops and desktop virtualization technologies. Desktop virtualization means that you run a virtual machine on your desktop computer; think KVM, VirtualBox, VMware, Vagrant. Meanwhile, virtual desktop infrastructure (VDI) is a data center technology that supplies hosted desktop images to remote users. With host-based virtual machines, one virtual machine is allocated to each individual user at login. With persistent desktop technology, that user connects to the same VM each time they log in, which allows for desktop personalization. Host-based machines can also be physical machines hosting an operating system that remote users log into.

A virtual machine can also be client-based, where the operating system is executed locally on the endpoint. The advantage of this type of virtual desktop is that a network connection is not required for the user to access the desktop.

Virtual desktop infrastructure (VDI) refers to a type of desktop virtualization that allows desktop workstation or server operating systems to run on virtual machines that are hosted on a hypervisor in on-premises servers. The user experiences the operating system and applications on an endpoint device, just as if they were running locally. With desktops as a service (DaaS), a service provider hosts VDI workloads out of the cloud and provides apps and support for enterprise users.

How does a virtual desktop work?

Virtual desktop providers abstract the operating system from a computer’s hardware with virtualization software. Instead of running on the hardware, the operating system, applications and data run on a virtual machine. An organization may host the virtual machine on premises. It is also common to run a virtual desktop on cloud-based virtual machines. Previously, only one user could access a virtual desktop from a single operating system. The technology has evolved to allow many users to share an operating system that is running multiple desktops.

IT administrators can choose to purchase virtual desktop thin clients for their VDI, or repurpose older or even obsolete PCs by using them as virtual desktop endpoints, which can save money. However, any money saved on physical infrastructure costs may need to be quickly reallocated to software licensing fees for virtual desktops. 

A virtual desktop infrastructure provides the option for users to bring their own device, which can again save IT departments money. This flexibility makes virtual desktops ideal for seasonal work or organizations that employ contractors for temporary work on big projects. Virtual desktops also work well for salespeople who travel frequently because their desktop is the same and they have access to all the same files and applications no matter where they are working.

What is the purpose of a virtual desktop?

A virtual desktop allows users to access their desktop and applications from anywhere on any kind of endpoint device, while IT organizations can deploy and manage these desktops from a centrally located data center.

Many organizations move to a virtual desktop environment because virtual desktops are usually centrally managed, which eliminates the need for updates and app installations on individual machines. Also, endpoint machines can be less powerful, since most computing happens in the data center.

How do you use virtual desktops?

Virtual desktops are as easy to use as physical desktops. Users simply log in to their desktop from their chosen device and connect via the network to a remotely located virtual machine that presents the desktop on the endpoint device. Users can interact with applications on a virtual desktop in the same way that they would on a physical desktop. Users may or may not be able to personalize or save data locally on a virtual desktop, depending on which desktop virtualization technology they are using.



How We used Virtual Desktop Infrastructure backed by VMware Horizon at work in the past

We used the VMware Horizon product to serve persistent VDIs (always accessing the same virtual machine image/clone), with the possibility to customize and keep things there, like documents on the desktop or links in the web browser.

We might have used a Citrix backend previously, as some documentation I saw hinted at Citrix, and I do not think that the two environments can mix and match.

Two-factor authentication with Microsoft Authenticator, which also tied into Azure and our AD credentials, was mandatory; it was pretty much SSO (Single Sign-On) everywhere with 2FA as the default.


Thin Clients


Thin Clients – My Thin Clients (a physical Fujitsu one and a Virtual/VM one I use)

Both software and hardware offerings exist (for example, Unicorn Software's eLux can run in a VM just as well as on physical hardware).


What is a Thin Client / Zero Client?

In computer networking, a thin client is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a fat client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server or server farm. The server-side infrastructure uses cloud computing software such as application virtualization, hosted shared desktop (HSD) or desktop virtualization (VDI). This combination forms what is known as a cloud-based system, where desktop resources are centralized at one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security.

  • Example of hardware resource optimization: Cabling, bussing and I/O can be minimized while idle memory and processing power can be applied to user sessions that most need it.
  • Example of reduced software maintenance: Software patching and operating system (OS) migrations can be applied, tested and activated for all users in one instance to accelerate roll-out and improve administrative efficiency.
  • Example of improved security: Software assets are centralized and easily firewalled, monitored and protected. Sensitive data is uncompromised in cases of desktop loss or theft.

Thin client hardware generally supports common peripherals, such as keyboards, mice, monitors, jacks for sound peripherals, and open ports for USB devices (e.g., printer, flash drive, webcam). Some thin clients include (legacy) serial or parallel ports to support older devices, such as receipt printers, scales or time clocks. Thin client software typically consists of a graphical user interface (GUI), cloud access agents (e.g., RDP, ICA, PCoIP), a local web browser, terminal emulators (in some cases), and a basic set of local utilities.

Zero Clients

A zero client, also referred to as an ultra-thin client, contains no moving parts and centralizes all processing and storage to just what is running on the server. As a result, it requires no local drivers to install, no patch management, and no local operating system licensing fees or updates. The device consumes very little power and is tamper-resistant and completely incapable of storing any data locally, providing a more secure endpoint. While a traditional thin client is streamlined for multi-protocol client-server communication, a zero client has a highly tuned onboard processor specifically designed for one possible protocol (PCoIP, HDX, RemoteFX, DDP). A zero client makes use of very lightweight firmware that merely initializes network communication through a basic GUI (graphical user interface), decodes display information received from the server, and sends local input back to the host. A device with such simple functionality has less demand for complex hardware or silicon, and therefore becomes less prone to obsolescence. Another key benefit of the zero client model is that its lightweight firmware represents an ultra-small attack surface, making it more secure than a thin client. Further, the local firmware is so simple that it requires very little setup or ongoing administration. It's the ultimate in desktop simplification, but the trade-off is flexibility. Most mainstream zero clients are optimized for one communication protocol only. This limits the number of host environments that a zero client can provide its users with access to.

Web Clients

Some examples of web thin clients are Chromebooks and Chromeboxes.

Web clients only provide a web browser, and rely on web apps to provide general-purpose computing functionality. However, note that web applications may use web storage to store some data locally, e.g. for “offline mode”, and they can perform significant processing tasks as well. Rich Internet Applications for instance may cross the boundary, and HTML5 web apps can leverage browsers as run-time environments through the use of a cache manifest or so-called “packaged apps” (in Firefox OS and Google Chrome).

Examples of web thin clients include Chromebooks and Chromeboxes (which run Chrome OS) and phones running Firefox OS. Chromebooks and Chromeboxes also have the capability of remote desktop using the free Chrome Remote Desktop browser extension, which means that, besides being web thin clients, they can also be used as ultra-thin clients (see above) to access PC or Mac applications that do not run on the Chromebook directly. Indeed, they can be used as a web thin client and an ultra-thin client simultaneously, with the user switching between web browser and PC or Mac application windows with a click.

Chromebooks are also able to store user documents locally – though, with the exception of media files (which have a dedicated player application to play them), all such files can only be opened and processed with web applications, since traditional desktop applications cannot be installed in Chrome OS.

Providers

  • Popular providers of zero clients include Wyse (Xenith), IGEL Technology, 10ZiG, Teradici and vCloudPoint.
  • Thin client hardware comes from Fujitsu, HP, Wyse, Dell and others, plus open-source hardware like Openthinclient.
  • ClearCube
  • Windows Thin PC, a thin-client OS based on Windows 7 (x86), still supported until the end of 2021.
  • Unicorn Software – eLux (I run it in a VM and it works perfectly).


PXE Boot for those thin clients

In computing, the Preboot eXecution Environment (PXE, most often pronounced as pixie) specification describes a standardized client-server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as DHCP and TFTP.

The concept behind PXE originated in the early days of protocols like BOOTP/DHCP/TFTP, and as of 2015 it forms part of the Unified Extensible Firmware Interface (UEFI) standard. In modern data centers, PXE is the most frequent choice[1] for operating system booting, installation and deployment.

The PXE environment relies on a combination of industry-standard Internet protocols, namely UDP/IP, DHCP and TFTP. These protocols have been selected because they are easily implemented in the client’s NIC firmware, resulting in standardized small-footprint PXE ROMs. Standardization, small size of PXE firmware images and their low use of resources are some of the primary design goals, allowing the client side of the PXE standard to be identically implemented on a wide variety of systems, ranging from powerful client computers to resource-limited single-board computers (SBC) and system-on-a-chip (SoC) computers.

DHCP is used to provide the appropriate client network parameters and specifically the location (IP address) of the TFTP server hosting, ready for download, the network bootstrap program (NBP) and complementary files. To initiate a PXE bootstrap session, the DHCP component of the client's PXE firmware broadcasts a DHCPDISCOVER packet containing PXE-specific options to port 67/UDP (the DHCP server port); it asks for the required network configuration and network booting parameters. The PXE-specific options identify the initiated DHCP transaction as a PXE transaction. Standard (non-PXE-enabled) DHCP servers will be able to answer with a regular DHCPOFFER carrying networking information (i.e. IP address) but not the PXE-specific parameters. A PXE client will not be able to boot if it only receives an answer from a non-PXE-enabled DHCP server.

After parsing a PXE-enabled DHCP server's DHCPOFFER, the client will be able to set its own network IP address, IP mask, etc., and to point to the network-located booting resources, based on the received TFTP server IP address and the name of the NBP. The client next transfers the NBP into its own random-access memory (RAM) using TFTP, possibly verifies it (i.e. UEFI Secure Boot), and finally boots from it. NBPs are just the first link in the boot chain process, and they generally request via TFTP a small set of complementary files in order to get a minimalistic OS executive running (i.e. Windows PE, or a basic Linux kernel+initrd). The small OS executive loads its own network drivers and TCP/IP stack. At this point, the remaining instructions required to boot or install a full OS are provided not over TFTP, but using a robust transfer protocol (such as HTTP, CIFS, or NFS).
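
To make the DHCP/TFTP interplay concrete, here is a hypothetical minimal PXE setup using dnsmasq in proxy-DHCP mode alongside an existing DHCP server; the subnet, TFTP root and boot file name are assumptions for this sketch:

# Hypothetical sketch: dnsmasq answering PXE clients as proxy DHCP + TFTP.
# Assumes the real DHCP server already serves 192.168.1.0/24 and that
# pxelinux.0 plus its support files live under /srv/tftp.
sudo dnsmasq --no-daemon \
  --dhcp-range=192.168.1.0,proxy \
  --enable-tftp --tftp-root=/srv/tftp \
  --pxe-service=x86PC,"Network Boot",pxelinux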

PXE acceptance since v2.1 has been ubiquitous; today it is virtually impossible to find a network card without PXE firmware on it. The availability of inexpensive Gigabit Ethernet hardware (NICs, switches, routers, etc.) has made PXE the fastest method available for installing an operating system on a client when competing against the classic CD, DVD, and USB flash drive alternatives.

Over the years several major projects have included PXE support, including:

  • All the major Linux distributions.
  • HP OpenVMS on Itanium hardware.
  • Microsoft Remote Installation Services (RIS)
  • Microsoft Windows Deployment Services (WDS)
  • Microsoft Deployment Toolkit (MDT)
  • Microsoft System Center Configuration Manager (SCCM)

In regard to NBP development, there are several projects implementing boot managers able to offer extended boot menu features, scripting capabilities, etc.:

  • Syslinux PXELINUX
  • gPXE/iPXE

All the above-mentioned projects, when they are able to boot/install more than one OS, work under a “Boot Manager – Boot Loader” paradigm. The initial NBP is a Boot Manager able to retrieve its own configuration and deploy a menu of booting options. The user selects a booting option and an OS dependent Boot Loader is downloaded and run in order to continue with the selected specific booting procedure.
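
As a small, hypothetical example of the "Boot Manager" half of that paradigm, this is roughly what a minimal PXELINUX menu dropped into the TFTP root could look like (file names and labels are made up):

# Hypothetical: write a minimal PXELINUX boot menu into the TFTP root
sudo tee /srv/tftp/pxelinux.cfg/default <<'EOF' > /dev/null
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100
LABEL linux
  MENU LABEL Install Linux
  KERNEL vmlinuz
  APPEND initrd=initrd.img
EOF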



PXE Boot over WAN?

2Pint Software's iPXE Anywhere claims it can PXE boot over the cloud or WAN... interesting, but I would like to find open-source solutions which work and are well documented.

I saw some posts about people trying to set this up over WAN without much success.
I'm sure that with cloud offerings, or with some of the infrastructure parts running in the cloud, it is easier to do now than before; or maybe that is just a blind assumption on my part.

——-


Perhaps this is as good a place as any to mention Desktop as a Service (DaaS) vs. VDI (hinted at previously).

DaaS is a form of Virtual Desktop Infrastructure (VDI), hosted in the cloud. With VDI, an organization deploys virtual desktops from its own on-premises data centers. In-house IT teams are responsible for deploying the virtual desktops as well as purchasing, managing, and upgrading the infrastructure.

DaaS is essentially the same thing but the infrastructure is cloud-based. Organizations that subscribe to a DaaS solution don’t need to manage their own hardware.

DaaS providers manage the VDI deployment, as well as the maintenance, security, upgrades, data backup, and storage. And the customer manages the applications and desktop images. DaaS is a good choice for organizations that don’t want to invest in and manage their own on-premises VDI solution.

So, in a few words, DaaS can be a great solution *the right form of Virtual Desktop Infrastructure* when You want to cross the internet or move the whole infrastructure to the cloud, instead of doing it on premises with internal IT teams on Your own intranet/network.


Open Source Vs Commercial Offerings

VMware Horizon (on premises)

Citrix Virtual Apps and Desktops (used to be called XenApp)

Microsoft Windows Virtual Desktop backed by Azure VMs (use Windows on any device)

Amazon Workspaces (cloud)

Parallels RAS

SoftOnNet

flexVDI

FOSS-Cloud

Links

https://www.goodfirms.co/blog/best-free-open-source-virtual-desktop-infrastructure-software

https://www.zdnet.com/article/desktop-virtualization-vs-virtual-desktop-infrastructure/

https://openthinclient.com/en/

https://openthinclient.com/en/shop/hardware/

https://thinstation.github.io/thinstation/

http://rpitc.blogspot.com/

https://superuser.com/questions/1237099/how-to-pxe-boot-over-wan

https://docs.microsoft.com/en-us/troubleshoot/mem/configmgr/boot-from-pxe-server

https://netboot.xyz/

http://www.softonnet.com/eng/technologies/desktop-virtualization

https://betawiki.net/wiki/Windows_Thin_PC

https://unicorn-software.com

http://undeadly.org/cgi?action=article&sid=20121026064602

igel.com

TSR – The Server Room Show – Episode 45 – Rancher & Heimdall Application Dashboard

Prologue


Remember, in Episodes 18 and 19 of The Server Room Show we discussed Docker and Kubernetes in detail. If You don't remember, I recommend You go and listen to those two episodes before You listen to this one, unless You are familiar with Docker and Kubernetes and what both of them are for.

In short, Kubernetes is “a platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.

Rancher

Rancher is one platform for Kubernetes management, an Enterprise Kubernetes Management Platform. It is a complete container management platform and a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters, while providing DevOps teams with integrated tools for running containerized workloads.

Rancher is open source software, and it lets you run Kubernetes everywhere, from the datacenter to the cloud to the edge.

Rancher is not the only Kubernetes management platform out there.

There are Red Hat's OpenShift and VMware's Tanzu, for example.

The problem with vanilla Kubernetes installations is that they lack central visibility, the security practices applied are most of the time inconsistent between various Kubernetes clusters, and, to be honest, manually managing one or more Kubernetes clusters can be a complex process.

Kubernetes management platforms try to solve these issues, e.g. by bringing security policy & user management and shared tools & services, with a high level of reliability and easy, consistent access to those shared tools and services. High availability, load balancing, centralized audit, and integration with popular CI/CD solutions are just a few features to mention.

Rancher has a thriving community on slack.rancher.io and forums.rancher.com if You need help to get going with it.

So if some of the questions below ever popped into your head regarding operational challenges when designing your company's Docker/Kubernetes infrastructure, then Rancher could probably be a great fit for You:

  • How do I deploy consistently across different infrastructures?
  • How do I manage and implement access control across multiple clusters and namespaces?
  • How do I integrate with central authentication systems already in place, like LDAP, Active Directory, RADIUS, etc.?
  • What can I do to monitor my Kubernetes cluster(s)?
  • How do I ensure that security policies are the same and enforced across clusters/namespaces?
Screenshot from Rancher (link in the shownotes)

Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher 2.x exclusively deploys and manages Kubernetes clusters running anywhere, on any provider.

Rancher can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere.

One Rancher server installation can manage thousands of Kubernetes clusters and thousands of nodes from the same user interface.

Rancher adds significant value on top of Kubernetes, first by centralizing authentication and role-based access control (RBAC) for all of the clusters, giving global admins the ability to control cluster access from one location.

It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don’t, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads.

Rancher is a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere.

Another interesting thing to mention is that for a standalone Kubernetes installation you would need to fulfill more dependencies than for a Rancher + Kubernetes deployment scenario.

The reason is that Rancher only requires the host to have a supported Docker version installed on it, whereas pulling up a vanilla Kubernetes installation calls for more dependencies than simply having Docker installed.

Rancher achieves this by running entirely inside of, or on top of, Docker; Rancher then lets you run one or more Kubernetes clusters on top of it.

You can be up and running quicker this way than by going through a vanilla Kubernetes installation.

For a sandbox environment, and to test Rancher out, you can deploy it on a single host which has Docker installed, but for production a three-node cluster is the minimum requirement.

How to start with Rancher

Rancher has a great quickstart guide to have you up and running in the shortest time possible. ** link is in the shownotes **

You can try it out in a sandbox environment: just grab a host with a supported Docker version installed, like CentOS or Fedora, and use this one-liner to pull Rancher up inside a Docker container to test it out and play around. (To deploy it to a production environment, do not use this; follow the proper step-by-step production rollout documentation instead and set it up as at least a three-node cluster to have HA *high availability* and failover support.)

$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Result: Rancher is installed

Once Rancher is up and running, the next step is to log in using the local host's FQDN or IP address:

https://<SERVER_IP> or <FQDN>

On first logon it will prompt You to set a password for the default admin account.

Rancher running on a CentOS 8 VM accessed from my workstation on another subnet 172.35.x.x *make sure You allow at least ports 80 and 443 in the firewall public zone on CentOS*
Rancher lets You know to make sure the Rancher Server URL is accessible from all hosts you will create…
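
On CentOS that firewall step looks roughly like this, assuming the stock firewalld setup (a sketch):

# Open HTTP and HTTPS in firewalld's public zone so Rancher is reachable
sudo firewall-cmd --zone=public --permanent --add-service=http
sudo firewall-cmd --zone=public --permanent --add-service=https
sudo firewall-cmd --reload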

Creating Your Kubernetes Cluster is the first step.

In this example, you can use the versatile Custom option. This option lets you add any Linux host (cloud-hosted VM, on-premise VM, or bare-metal) to be used in a cluster.

Once You click on the Add Cluster button You are welcomed with this screen where You can click on From existing nodes (custom)

For this exercise only fill out the following details:

Select a cluster name, skip the Member Roles and Cluster Options for now, and click Next


From the Cluster Options screen, select ALL the Node Options (etcd, Control Plane, Worker) and copy the command shown in Step 2. For this example, You need to run it on the machine where You are running Rancher, using the terminal via SSH or by logging in locally.

In my case I had to run this command for this example:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.8 --server https://172.19.19.7 --token fp6gk7wldgrhgglldqt7gd275j5f97rn7g6tdgnqd2rwv5snwz4qm8 --ca-checksum 9a31bd4ea0636bb19c8152a47e1f8389d4187d7e9030bec161f190a1f9562455 --etcd --controlplane --worker



Once You have run the command, come back to this window and click Done

Once You click Done You get back to the Main screen where Your Cluster will show up with State: Provisioning
(( it will inform you about what is happening behind the curtains under the text provisioning ))

Kubernetes Cluster provisioning after clicking on Done on the previous screen…

You can check from the host machine that it is deploying a good number of other containers to build out the Kubernetes cluster infrastructure.

[viktormadarasz@localhost ~]$ sudo docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                      NAMES
d67bdef1a64a        rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   31 seconds ago      Up 26 seconds                                                  kube-apiserver
ca34379bebcc        rancher/coreos-etcd:v3.4.3-rancher1   "/usr/local/bin/etcd…"   36 seconds ago      Up 34 seconds                                                  etcd
4ea60c63d367        rancher/rancher-agent:v2.4.8          "run.sh --server htt…"   4 minutes ago       Up 4 minutes                                                   laughing_taussig
b9baeb02c206        rancher/rancher                       "entrypoint.sh"          10 minutes ago      Up 6 minutes        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   zealous_merkle

Depending on when you run sudo docker ps, there can be more, or many more, Docker containers working behind the scenes building out your Kubernetes cluster.

Do not worry if You lose connection to the Rancher server URL at some point during this... it will come back

…after 21 minutes had passed

My Kubernetes cluster provisioning got stuck at this step (bad TLS certificate); I also pasted the relevant entry from the etcd Docker container log. — I will continue from here —


[etcd] Failed to bring up Etcd Plane: etcd cluster is unhealthy: hosts [172.19.19.7] failed to report healthy. Check etcd container logs on each host for more information
Caused by this error in the etcd Docker container log:
2020-09-20 18:05:00.365851 I | embed: rejected connection from "172.19.19.7:53764" (error "tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kube-ca\")", ServerName "")

Trying to work around the problem in my case

So I went ahead and, instead of the CentOS 8 VM, tried to run the Rancher deployment script on my Fedora 32 Workstation, on the physical machine, on kernel 5.8.

And I don't know for what reason, but it deployed without any error message or complication.

The Kubernetes cluster is / was up and running

Kubernetes cluster on Rancher, running on top of Docker on the physical machine under Fedora 32 Linux
Rancher itself and the Kubernetes cluster it deploys run as a bunch of containers in the underlying Docker engine.


Dashboard of the created Kubernetes cluster


One thing I did differently was to tell Rancher, during the initial setup after setting the admin password, that the URL for the server is localhost and not the IP, like I did in the CentOS 8 VM case, where I gave the IP of the local VM as the URL, which I think should work.

You can change the server-url of Rancher from Settings / Advanced Settings menu



So I went back and tried, in the CentOS 8 VM, setting localhost instead of the IP of the VM as the server's URL.

It worked, and the Kubernetes cluster deployed correctly on the CentOS 8 VM with kernel 4.18.0-193,
even though it's not mentioned on the support matrix as of the date when this article was created.

Support Matrix for Rancher



I accessed the control panel of Rancher via IP because I was accessing it from a different subnet. In Settings / Advanced Settings it has the server-url set to https://localhost


I went into unsupported territory and experienced odd errors indeed
Fedora 32 on Kernel 5.7

BUT… on kernel 5.7 on Fedora 32, things are strange and it fails again, like it did on the CentOS 8 VM in the beginning, before I switched the server-url from the IP address to localhost…

It can be that neither CentOS 8 nor Fedora being on the support matrix for Rancher is a cause of the odd behaviour experienced below…

However, on kernel 5.7, Docker on the same system indeed complains first, and the Kubernetes cluster fails at the same place with Rancher

This could be something specific to my machine, which I can confirm by doing a clean install of Fedora 32 in a VM with kernel 5.7, rerunning this, and updating the shownotes on whether it worked or not…

First Docker complained about cgroups, which I fixed with a temporary fix provided in one of the links in the shownotes, and afterwards the Kubernetes cluster again failed to deploy itself properly when using the same deployment script that had worked ten minutes earlier on the same box with kernel 5.8

 viktormadarasz  ~  sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
5535f662ad763b3cd414f73d94a070322a9519afa1dccf92cbd2fa65d986bf18
docker: Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown.
 
Fixed with:

viktormadarasz  ~   sudo mkdir /sys/fs/cgroup/systemd
viktormadarasz  ~   sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:27:05 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b7f0
  Built:            Wed Mar 11 01:25:01 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Linux fedoraws.lan 5.7.6-201.fc32.x86_64 #1 SMP Mon Jun 29 15:15:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The reason I'm not on kernel 5.8 is that it breaks VMware and VirtualBox, and I use those heavily on this machine *both were broken last time I checked…*

Heimdall Application Dashboard

As the name suggests Heimdall Application Dashboard is a dashboard for all your web applications. It doesn’t need to be limited to applications though, you can add links to anything you like.

Heimdall is an elegant solution to organise all your web applications. It’s dedicated to this purpose so you won’t lose your links in a sea of bookmarks.

Why not use it as your browser start page? It even has the ability to include a search bar using either Google, Bing or DuckDuckGo.

Supported applications

You can use the app to link to any site or application, even ones that are not supported; these fall under the category of Generic Apps.

This is one of the benefits of Heimdall: you can add a link to absolutely anything, whether it's intrinsically supported or not. With a generic item, you just fill in the name and background colour, add an icon if you want (if you don't, a default Heimdall icon will be used), and enter the link URL, and it will be added.

If You add any Foundation apps, Heimdall will auto-fill the icon for the app and supply a default colour for the tile.

In addition, Enhanced apps allow you to provide details for an app's API, allowing you to view live stats directly on the dashboard. For example, the NZBGet and Sabnzbd Enhanced apps will display the queue size and download speed while something is downloading.

Supported applications are recognized by the title of the application as entered in the title field when adding an application. For example, to add a link to pfSense, begin by typing “p” in the title field and then select “pfSense” from the list of supported applications.

On the Heimdall Application Database site You can see a list of supported Foundation and Enhanced apps, just as you can consult the list of applications requested to be supported.

You can try out Heimdall on the Kubernetes cluster We created in the first part of this episode using Rancher

Click on Global / Select Your Kubernetes Cluster You Created earlier and Click on Default namespace
Click on the Deploy button on the top right corner
Choose a name for Your pod, leave it on Scalable deployment of 1 pod, and in the Docker image field specify the image name you would use after a normal docker pull command, which in the case of Heimdall is “linuxserver/heimdall” *you can check https://hub.docker.com/r/linuxserver/heimdall/ for the same info*
Click on Add Port to be able to reach the Heimdall web GUI on port 80 of the Pod You are about to create from outside/external to the Kubernetes cluster. For this, set the port type to HostPort and specify a listening port on which the host running the Kubernetes cluster should expose port 80 of the Pod; in this example I used port 8082

Click on Launch to Deploy the Pod

Navigate to http://IP or FQDN of Your Kubernetes Cluster:Port-Exposed
In my example it's http://172.19.19.7:8082, the IP of my CentOS 8 VM server, on top of which runs Docker, in which runs Rancher, which runs the Kubernetes cluster where my Heimdall Pod sits and exposes its port 80 to the underlying host and to external connections via port 8082
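
If You prefer the command line over the Rancher UI, a rough kubectl equivalent of the steps above could look like the sketch below; note it exposes the pod through a NodePort instead of the HostPort used in the walkthrough, and the deployment name is made up:

# Hypothetical kubectl equivalent of the Rancher UI deployment above
kubectl create deployment heimdall --image=linuxserver/heimdall
# Expose port 80 of the pod; NodePort assigns a high port on every node
kubectl expose deployment heimdall --type=NodePort --port=80
kubectl get svc heimdall   # shows the node port that was assigned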


Migrating From Docker to Kubernetes Cluster

Here is a great article explaining the migration of a three-piece service from Docker, using a Docker Compose file, to a Kubernetes cluster.

Deployment to Kubernetes clusters is more complicated than deployment using Docker Compose. However, Kubernetes is one of the most-used orchestration tools for deploying containers into production environments, due to its flexibility, reliability, and features.

It is easy to follow and makes the concept easy to grasp.

https://medium.com/better-programming/how-to-migrate-from-docker-compose-to-kubernetes-b57eb229beb2

Links

https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/

https://apps.heimdall.site/

https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/

TSR – The Server Room Show – Episode 44 – Sophos XG Firewall and Intercept X Endpoint Management

What is a Next Gen Firewall?

A next-generation firewall (NGFW) is a part of the third generation of firewall technology, combining a traditional firewall with other network device filtering functions, such as an application firewall using in-line deep packet inspection (DPI) or an intrusion prevention system (IPS). Other techniques might also be employed, such as TLS/SSL encrypted traffic inspection, website filtering, QoS/bandwidth management, antivirus inspection and third-party identity management integration (i.e. LDAP, RADIUS, Active Directory).

Next-generation firewall vs. traditional firewall

NGFWs include the typical functions of traditional firewalls such as packet filtering, network- and port-address translation (NAT), stateful inspection, and virtual private network (VPN) support. The goal of next-generation firewalls is to include more layers of the OSI model, improving filtering of network traffic that is dependent on the packet contents.

NGFWs perform deeper inspection compared to stateful inspection performed by the first- and second-generation firewalls. NGFWs use a more thorough inspection style, checking packet payloads and matching signatures for harmful activities such as exploitable attacks and malware.

Evolution of next-generation firewalls

NGFWs brought improved detection of encrypted applications and intrusion prevention services. Modern threats like web-based malware attacks, targeted attacks, application-layer attacks, and more have had a significantly negative effect on the threat landscape. In fact, more than 80% of all new malware and intrusion attempts exploit weaknesses in applications, as opposed to weaknesses in networking components and services.

Stateful firewalls with simple packet filtering capabilities were efficient at blocking unwanted applications, as most applications met the port-protocol expectations. Administrators could promptly prevent an unsafe application from being accessed by users by blocking the associated ports and protocols. But blocking a web application that uses port 80 by closing the port would also mean complications for the entire HTTP protocol.

Protection based on ports, protocols and IP addresses is no longer reliable or viable. This has led to the development of an identity-based security approach, which takes organizations a step ahead of conventional security appliances that bind security to IP addresses.

NGFWs offer administrators a deeper awareness of and control over individual applications, along with deeper inspection capabilities by the firewall. Administrators can create very granular “allow/deny” rules for controlling use of websites and applications in the network.


Sophos XG Firewall

The Sophos firewall product exists as both a software and a hardware offering.
You can run the engine on a VM or on hardware of Your choice, but You can also choose to go with their own hardware firewalls, which use tried and tested components to make sure You get the most out of their firewall engine.


Sophos claims the XG Firewall offers the world's best visibility, protection and response.

Their product is NSS Labs Recommended; what NSS Labs does is test security products from around the world, pretty much all kinds of security products, as I saw on their website.

Also, Gartner and the SC Awards spoke highly of Sophos products. Sophos offers it as the ultimate firewall solution.


Enterprise protection where Visibility, Protection and Response are key
The Best Protection to Stop Unknown Threats Dead


IPS: a high-performance Intrusion Prevention System that tries to stop unknown threats. With SophosLabs Threat Intelligence integration, Sophos analyzes and tries to stop zero-day threats before they get onto Your network.

Performance to fully protect Your network

Extreme TLS inspection

Extremely Fast, Effective, and Transparent.

80% of the traffic passing through your firewall is encrypted. Most organizations are completely blind to this traffic. Why? Because TLS Inspection kills their firewall performance. But not anymore.

XG Firewall’s Xstream TLS Inspection solves this problem once and for all. You can now fully enable TLS Inspection without compromising on performance, protection, privacy, and the end user experience.

  • Native support for TLS 1.3 and all modern cipher suites
  • Powerful policy tools to balance privacy, protection, and performance
  • Unique at-a-glance visibility and one-click error handling via the Control Center

SD-WAN Evolved

Unprecedented clarity, connectivity, and control.

XG Firewall evolves SD-WAN with unique capabilities that provide unprecedented clarity and control over your connectivity needs.

Synchronized SD-WAN

Leverages the 100% application visibility and control that Synchronized Security provides to make reliable SD-WAN path selection and routing decisions.

SD-RED Branch Office Connectivity

Our zero-touch branch office edge devices make SD-WAN deployments simple, easy, and secure.

Flexible Connectivity Options

No other firewall offers as many modular and flexible connectivity solutions as XG Firewall, with a full range of wireless, cellular, copper, and fiber options.

Powerful Management and Seamless Scalability

Group Firewall Management
Central Firewall Reporting
Plug and Play High Availability

Designed to Fit Your Network

XG Firewall offers a powerful and modular line of hardware appliance models as well as software, virtual, and cloud deployment options to fit any network.

XG Series Appliances


XG Firewall offers a full range of top-performing hardware appliances with modular connectivity options for all your LAN, WAN, and wireless needs including Wi-Fi, cellular, copper, and fiber interfaces.

Software, Virtual, Cloud


XG Firewall is also available as a software appliance, supports all the popular virtualization platforms, and is available on both Azure and Amazon Web Services to protect and connect your public, private, and hybrid cloud networks.

SD-WAN


Our unique zero-touch SD-RED edge devices make extending your secure network to remote and branch locations and industrial control system (ICS) devices simple and easy. Flexible SD-WAN and VPN connectivity options ensure you meet your WAN reliability and quality goals.



Hardware Offerings:

Sophos offers a diverse portfolio of hardware appliances running the Sophos XG Firewall product.
Depending on Your budget and needs, You can go from a small 500-euro appliance, one of the smallest, to bigger but still desktop-size modular units, or go up to rack equipment in 1U or 2U form factors.

XG 86 and XG 86w (with wireless module): the cheapest and smallest of the firewall hardware Sophos offers.
XG 125 and XG 125w (with wireless): a model I could imagine in my homelab, or in charge of protecting the whole home network itself as my No. 1 firewall appliance. Prices for the appliance-only unit I saw were around 900-1000 US dollars.
XG 230 Rev 2: if money is not a problem 🙂 around 2000 euros appliance-only, I would put this in my server rack without a doubt. Gigabit-and-beyond performance for nearly all applications *firewall, NGFW, IPsec VPN, IPS, threat protection* except Xstream SSL decryption.



A brief comparison table



Product Highlights of Hardware Appliances

  • All features supported on every XG 1xx model and most on XG 86
  • Every model available with optional integrated 802.11ac Wi-Fi
  • 2nd power supply option for all XG 1xx models
  • Expansion bay on all XG 125/135 models for 3G/4G module
  • Optional 2nd Wi-Fi radio module on 135w model
  • SFP port, e.g. for optional DSL modem, on all XG 1xx appliances



Endpoint Management Product:
Intercept X Endpoint protection features:

Endpoint Detection and Response:




Intercept X detects and investigates suspicious activity with AI-driven analysis. Unlike other EDR tools, it adds expertise, not headcount, by replicating the skills of hard-to-find analysts.


Anti-Ransomware


Today’s ransomware attacks often combine multiple advanced techniques with real-time hacking. To minimize your risk of falling victim you need advanced protection that monitors and secures the whole attack chain. Sophos Intercept X gives you advanced protection technologies that disrupt the whole attack chain including deep learning that predictively prevents attacks, and CryptoGuard which rolls back the unauthorized encryption of files in seconds.

Deep Learning Technology

By integrating deep learning, an advanced form of machine learning, Intercept X is changing endpoint security from a reactive to a predictive approach to protect against both known and never-seen-before threats. While many products claim to use machine learning, not all machine learning is created equally. Deep learning has consistently outperformed other machine learning models for malware detection.

Exploit Prevention

Exploit prevention stops the techniques used in file-less, malware-less, and exploit-based attacks. While there are millions of pieces of malware in existence, and thousands of software vulnerabilities waiting to be exploited, there are only a handful of exploit techniques attackers rely on as part of the attack chain – and by taking away the key tools hackers love to use, Intercept X stops zero-day attacks before they can get started.

Managed Threat Response

Sophos Managed Threat Response (MTR) provides 24/7 threat hunting, detection, and response capabilities delivered by an expert team as a fully-managed service. Sophos MTR fuses machine learning technology and expert analysis for improved threat hunting and detection, deeper investigation of alerts, and targeted actions to eliminate threats with speed and precision. Unlike other services, the Sophos MTR team goes beyond simply notifying you of attacks or suspicious behaviors, and takes targeted actions on your behalf to neutralize even the most sophisticated and complex threats.

Active Adversary Mitigations

Intercept X utilizes a range of mitigations, including credential theft prevention, code cave utilization detection, and APC protection, against techniques that attackers use to gain a presence and remain undetected on victim networks. As attackers have increasingly focused on techniques beyond malware in order to move around systems and networks as a legitimate user, Intercept X detects and prevents this behavior in order to prevent attackers from completing their mission.


Sophos’s Synchronized Security Product

Synchronized Security is the cybersecurity system where Sophos endpoint, network, mobile, Wi-Fi, email, and encryption products work together, sharing information in real time and responding automatically to incidents:

  • Isolate infected endpoints, blocking lateral movement
  • Restrict Wi-Fi for non-compliant mobile devices
  • Scan endpoints on detection of compromised mailboxes
  • Revoke encryption keys if a threat is detected
  • Identify all apps on the network

Everything is managed through a single, web-based management console, so you can see and control all your security in one place.

Links

https://www.nsslabs.com

https://secure2.sophos.com/en-us/security-news-trends/reports/gartner/magic-quadrant-utm.aspx

https://news.sophos.com/en-us/2019/06/07/synchronized-security-awarded-best-threat-intelligence-technology/

https://www.sophos.com/

TSR – The Server Room Show – Episode 43 – OpenBSD

OpenBSD

OpenBSD is a 4.4BSD-based UNIX-like operating system built from the ground up to emphasize portability, standardization, correctness, proactive security and integrated cryptography. OpenSSH, the popular software, comes from OpenBSD.

Why might you want to use it? Some interesting things to mention…

  • OpenBSD runs on many different hardware platforms.
  • OpenBSD is thought of as the most secure UNIX-like operating system by many security professionals, as a result of the never-ending comprehensive source code audit.
  • OpenBSD is a full-featured UNIX-like operating system available in source and binary form at no charge.
  • OpenBSD integrates cutting-edge security technology suitable for building firewalls and private network services in a distributed environment.
  • OpenBSD benefits from strong ongoing development in many areas, offering opportunities to work with emerging technologies and an international community of developers and end users.
  • OpenBSD attempts to minimize the need for customization and tweaking. For the vast majority of users, OpenBSD just works on their hardware for their application.
  • OpenBSD runs on a lot of different architectures although less than NetBSD does 🙂
  • It is very well documented and has mailing lists in place for those who want to get involved.
  • OpenBSD has gone through heavy and continual security auditing to ensure the quality and security of the code.
  • OpenBSD does not support journaling filesystems. Instead we use the soft updates feature of the Fast File System (FFS).
  • OpenBSD comes with Packet Filter (PF). This means that Network Address Translation, queuing, and filtering are handled through pfctl(8), pf(4) and pf.conf(5); see the short pfctl sketch after this list.
  • OpenBSD’s default shell is ksh, which is based on the public domain Korn shell. Shells such as bash and many others can be added from packages.
  • Devices are named by driver, not by type. In other words, there are no eth0 and eth1 devices. It would be em0 for an Intel PRO/1000 Ethernet card, bge0 for a Broadcom BCM57xx or BCM590x Ethernet device, ral0 for a RaLink wireless device, etc.
  • OpenBSD/i386, amd64, and several other platforms use a two-layer disk partitioning system, where the first layer is the fdisk BIOS-visible partition and the second is the disklabel.
  • Some other operating systems encourage you to customize your kernel for your machine. OpenBSD users are encouraged to simply use the standard GENERIC kernel provided and tested by the developers.
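
As a tiny taste of PF, mentioned in the list above, here is a hypothetical minimal ruleset and the pfctl commands to validate, load and inspect it; the policy itself is made up for illustration:

# Hypothetical minimal /etc/pf.conf (block everything in, allow everything out):
#   block in all
#   pass out all keep state

pfctl -nf /etc/pf.conf   # parse and validate the ruleset without loading it
pfctl -f /etc/pf.conf    # load the ruleset
pfctl -s rules           # show the currently loaded rules
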
rc and init

rc is the command script that is invoked by init(8) when the system starts up. It performs system housekeeping chores and starts up system daemons.

In Unix-based computer operating systems, init (short for initialization) is the first process started during booting of the computer system. Init is a daemon process that continues running until the system is shut down. It is the direct or indirect ancestor of all other processes and automatically adopts all orphaned processes. Init is started by the kernel during the booting process; a kernel panic will occur if the kernel is unable to start it. Init is typically assigned process identifier 1. In Unix systems such as System III and System V, the design of init has diverged from the functionality provided by the init in Research Unix and its BSD derivatives. Up until recently, most Linux distributions employed a traditional init that is somewhat compatible with System V, while some distributions such as Slackware use BSD-style startup scripts, and others such as Gentoo have their own customized versions.

Since then, several additional init implementations have been created, attempting to address design limitations in the traditional versions. These include launchd, the Service Management Facility, systemd, Runit and OpenRC.

Additionally, rc is intricately tied to the netstart(8) script, which runs commands and daemons pertaining to the network. rc is also used to execute any rc.d(8) scripts defined in rc.conf.local(8). The rc.securelevel, rc.firsttime, and rc.local scripts hold commands which are pertinent only to a specific site.

All of these startup scripts are controlled to some extent by variables defined in rc.conf(8), which specify which daemons and services to run.


Before init(8) starts rc, it sets the process priority, umask, and resource limits according to the “daemon” login class as described in login.conf(5). It then starts rc and attempts to execute the sequence of commands therein.
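
In practice, on a modern OpenBSD system You manage which daemons rc starts through rcctl(8), which maintains rc.conf.local(8) for You. A small sketch, using httpd as an example daemon:

# Hypothetical example: enable and start a daemon via rcctl(8), which
# writes the corresponding variables into /etc/rc.conf.local for you
rcctl enable httpd    # mark httpd to be started at boot
rcctl start httpd     # start it right now
rcctl check httpd     # verify that it is running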

OpenBSD as a Desktop Operating System — Daily Driver
Installation of OpenBSD 6.7
Running xenodm as root to bring up the logon manager
Logged in as normal user to the fresh OpenBSD 6.7 installation
Networking and DNS resolution works fine. top is running on the right terminal window
OpenBSD as a Firewall/Router

I found this great article about OpenBSD as a firewall that I want to talk about.
https://dzone.com/articles/high-availability-routerfirewall-using-openbsd-car

In this example, two small appliances running OpenBSD serve as R1 and R2 in a home network scenario: one PC Engines APU4C4 and an older Soekris net5501. They are set up in failover mode using CARP and pfsync.

Example Network Topology from https://dzone.com & Chad Gross
  • All three switches are unmanaged switches.
  • Both R1 and R2 hand out DHCP addresses from the same pool, but split: R1 in the range of .151-.250 and R2 in the range of .100-.150.
  • vr0 and em0 are the WAN interfaces of R1 and R2, respectively, receiving IPs assigned via DHCP from the ISP *or the ISP's router, perhaps*.



R1 and R2 have the pfsync service running, keeping them in sync, on the vr1 and em1 interfaces.

R1 and R2 have the pflow service running on the vr2 and em2 interfaces.


CARP and pfsync

CARP is the Common Address Redundancy Protocol. Its primary purpose is to allow multiple hosts on the same network segment to share an IP address. CARP is a secure, free alternative to the Virtual Router Redundancy Protocol (VRRP) and the Hot Standby Router Protocol (HSRP).

CARP works by allowing a group of hosts on the same network segment to share an IP address. This group of hosts is referred to as a “redundancy group.” The redundancy group is assigned an IP address that is shared amongst the group members. Within the group, one host is designated the “master” and the rest as “backups.” The master host is the one that currently “holds” the shared IP; it responds to any traffic or ARP requests directed towards it. Each host may belong to more than one redundancy group at a time.

One common use for CARP is to create a group of redundant firewalls. The virtual IP that is assigned to the redundancy group is configured on client machines as the default gateway. In the event that the master firewall suffers a failure or is taken offline, the IP will move to one of the backup firewalls and service will continue unaffected.

CARP supports IPv4 and IPv6.
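
A hypothetical sketch of what joining a redundancy group looks like on OpenBSD with ifconfig(8); the vhid, passphrase and address are made up, and a persistent setup would go into /etc/hostname.carp0 instead:

# Hypothetical: share 192.168.1.1 in redundancy group vhid 1 over em0.
# Run on the master; backups use the same vhid/pass plus "advskew 100".
ifconfig carp0 create
ifconfig carp0 inet 192.168.1.1 netmask 255.255.255.0 vhid 1 pass mekmitasdigoat carpdev em0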

The pfsync(4) network interface exposes certain changes made to the pf(4) state table. By monitoring this device using tcpdump(8), state table changes can be observed in real time. In addition, the pfsync(4) interface can send these state change messages out on the network so that other nodes running PF can merge the changes into their own state tables. Likewise, pfsync(4) can also listen on the network for incoming messages.

By default, pfsync(4) does not send or receive state table updates on the network; however, updates can still be monitored using tcpdump(8) or other such tools on the local machine.

When pfsync(4) is set up to send and receive updates on the network, the default behavior is to multicast updates out on the local network. All updates are sent without authentication. Best common practice is either:

  • Connect the two nodes that will be exchanging updates back-to-back using a crossover cable and use that interface as the syncdev (see below).
  • Use the ifconfig(8) syncpeer option (see below) so that updates are unicast directly to the peer, then configure ipsec(4) between the hosts to secure the pfsync(4) traffic.

When updates are being sent and received on the network, pfsync packets should be passed in the filter ruleset:

pass on $sync_if proto pfsync

$sync_if should be the physical interface that pfsync(4) is communicating over.
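
Putting the pieces together, here is a minimal sketch of R1’s side of a dedicated back-to-back sync link. The interface names follow the topology above, and the 10.10.10.0/24 sync network is an assumption for illustration:

# /etc/hostname.vr1 -- the back-to-back sync network between R1 and R2
inet 10.10.10.1 255.255.255.0

# /etc/hostname.pfsync0 -- replicate state changes over that interface
up syncdev vr1

# pf.conf -- pass pfsync on the sync interface and CARP wherever it runs
pass on vr1 proto pfsync
pass proto carp

R2 mirrors this on em1 with its own sync address (for example 10.10.10.2).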

Links

http://www.troubleshooters.com/linux/pf/index.htm

https://www.openbsd.org/faq/pf/filter.html

https://www.openbsd.org/faq/pf/

https://dzone.com/articles/high-availability-routerfirewall-using-openbsd-car

https://en.wikipedia.org/wiki/Init

https://man.openbsd.org/rc.8

https://www.openbsd.org/papers/eurobsd-firewalls-2002.pdf

https://bsd.cat/es/

TSR – The Server Room Show – Shownotes – Episode 42 – Analytics and Interactive Visualization Solutions

Intro

While preparing this article/episode for today I came across the below dilemma, which I could summarize as:

Most monitoring software is not so great at visually presenting the metrics and data it acquires, but some analytics and visualization solutions make a near-perfect monitoring solution.

Viktor Madarasz – while preparing this article for this episode

What I am trying to say is that monitoring software like the ones we discussed in previous episodes (Nagios, Zabbix and OpenNMS) does not ace visualizing the acquired metrics and data in the most beautiful form possible, which makes us couple a monitoring tool like OpenNMS with Grafana *an analytics and visualization tool I will talk about today* to achieve what we want. Surprisingly enough, some of these analytics and visualization layers/tools are getting better and better at including functions from monitoring software, such as alarms.

Therefore I had a bit of a hard time drawing a line, with some of these tools and many others which nearly made it onto the list, between where a data visualization and analytics software ends and a monitoring software begins. This line seems fuzzier each time I look at it.

For the moment, monitoring software sits more on the monitoring and alarm-handling end of the spectrum and less on the presentation and visualization of the acquired metrics/data, but analytics and visualization tools are becoming more and more hybrid, trying to exist in both worlds.

Grafana
Out of the Box experience ….

Grafana is a multi-platform open source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. It is expandable through a plug-in system. End users can create complex monitoring dashboards using interactive query builders.

As a visualization tool, Grafana is a popular component in monitoring stacks often used in combination with time series databases such as InfluxDB, Prometheus and Graphite; monitoring platforms such as Sensu, Icinga, Zabbix, Netdata, and PRTG; SIEMs (security information and event management) such as Elasticsearch and Splunk; and other data sources.

What is a time series database?

A time series database (TSDB) is a software system that is optimized for storing and serving time series through associated pairs of time(s) and value(s). In some fields, time series may be called profiles, curves, traces or trends. Several early time series databases are associated with industrial applications which could efficiently store measured values from sensory equipment (also referred to as data historians), but they are now used in support of a much wider range of applications.

In many cases, the repositories of time-series data will utilize compression algorithms to manage the data efficiently. Although it is possible to store time-series data in many different database types, the design of these systems with time as a key index is distinctly different from relational databases which reduce discrete relationships through referential models.

A time series database typically separates the set of fixed, discrete characteristics from its dynamic, continuous values into sets of points or ‘tags.’ An example is the storage of CPU utilization for performance monitoring: the fixed characteristics would include the name ‘CPU Utilization’, the units of measure ‘%’ and a range ‘0 to 1’; the dynamic values would store the utilization percentage and a timestamp. The separation is intended to efficiently store and index data for application purposes which can search through the set of points differently than the time-indexed values.
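
As a concrete illustration using InfluxDB’s line protocol (the measurement, tag and field names here are made up), the fixed characteristics map naturally to tags while each dynamic value is stored as a timestamped field:

# measurement,tag_set field_set timestamp (nanoseconds since the epoch)
cpu_utilization,host=server01,unit=percent value=0.64 1613321400000000000
cpu_utilization,host=server01,unit=percent value=0.71 1613321410000000000

Queries can then filter by the tags (which host? which unit?) independently of scanning through the time-indexed values.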

The databases vary significantly in their features, but most will enable features to create, read, update and delete the time-value pairs as well as the points to which they are associated. Additional features for calculations, interpolation, filtering, and analysis are commonly found, but are not commonly equivalent.

In the example below I used Grafana + InfluxDB + Telegraf to monitor the localhost for basic metrics, as seen on the screenshot. This combination is also known as the TIG Stack (Telegraf, InfluxDB and Grafana).

Grafana is an open source data visualization and monitoring suite. It offers support for Graphite, Elasticsearch, Prometheus, InfluxDB, and many more databases. The tool provides a beautiful dashboard and metric analytics, with the ability to manage and create your own dashboards for your apps or infrastructure performance monitoring.

Telegraf is an agent for collecting, processing, aggregating, and writing metrics. It supports various output plugins such as InfluxDB, Graphite, Kafka, OpenTSDB, etc.

InfluxDB is an open-source time series database written in Go. It is optimized for fast, high-availability storage and is used as a data store for any use case involving large amounts of time-stamped data, including DevOps monitoring, log data, application metrics, IoT sensor data, and real-time analytics.
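
A minimal sketch of the Telegraf side of such a setup (the InfluxDB URL and database name are assumptions for illustration) could look like this in telegraf.conf:

# telegraf.conf -- collect basic host metrics and write them to InfluxDB
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"

# built-in input plugins for the basic localhost metrics
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
[[inputs.system]]

Grafana then points at InfluxDB as a data source, and the dashboard queries the collected measurements.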

TIG Stack monitoring the localhost’s basic metrics
Kibana
Kibana + Elasticsearch showing Sample Data Out of the box…

Kibana is similar in many ways to Grafana, but one key difference is that when it comes to data sources it can only work with Elasticsearch. This can be a deal breaker for those who wish to work with data sources other than Elasticsearch.

Grafana is designed for analyzing and visualizing metrics such as system CPU, memory, disk and I/O utilization. Grafana does not allow full-text data querying. Kibana, on the other hand, runs on top of Elasticsearch and is used primarily for analyzing log messages.

Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

Kibana also provides a presentation tool, referred to as Canvas, that allows users to create slide decks that pull live data directly from Elasticsearch.

What is Elasticsearch?

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java. Following an open-core business model, parts of the software are licensed under various open-source licenses (mostly the Apache License) while other parts fall under the proprietary (source-available) Elastic License.

Shay Banon created the precursor to Elasticsearch, called Compass, in 2004. While thinking about the third version of Compass he realized that it would be necessary to rewrite big parts of Compass to “create a scalable search solution”. So he created “a solution built from the ground up to be distributed” and used a common interface, JSON over HTTP, suitable for programming languages other than Java as well. Shay Banon released the first version of Elasticsearch in February 2010.

Features of Elasticsearch

Elasticsearch can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy. “Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have zero or more replicas. Each node hosts one or more shards, and acts as a coordinator to delegate operations to the correct shard(s). Rebalancing and routing are done automatically”. Related data is often stored in the same index, which consists of one or more primary shards, and zero or more replica shards. Once an index has been created, the number of primary shards cannot be changed.
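
As a small sketch (the index name and counts are arbitrary), the shard layout is fixed at index creation time; the replica count can be raised or lowered later, but the primary shard count cannot:

curl -X PUT "localhost:9200/my-index" -H 'Content-Type: application/json' -d'
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'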

Elasticsearch is developed alongside a data collection and log-parsing engine called Logstash, an analytics and visualisation platform called Kibana, and Beats, a collection of lightweight data shippers. The four products are designed for use as an integrated solution, referred to as the “Elastic Stack” (formerly the “ELK stack”).

Elasticsearch uses Lucene (a free and open source search engine from the Apache Software Foundation) and tries to make all its features available through the JSON and Java APIs. It supports faceting and percolating, which can be useful for notifying clients when new documents match registered queries. Another feature is called “gateway” and handles the long-term persistence of the index; for example, an index can be recovered from the gateway in the event of a server crash. Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL datastore, but it lacks distributed transactions.
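
The JSON-over-HTTP interface and the real-time GET behaviour can be sketched like this (the index name, document ID and fields are made up):

# index a document ...
curl -X PUT "localhost:9200/logs/_doc/1" -H 'Content-Type: application/json' -d'{"message": "disk full on server01", "level": "error"}'

# ... and fetch it back immediately by ID, without waiting for an index refresh
curl -X GET "localhost:9200/logs/_doc/1"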

On 20 May 2019, Elastic made the core security features of the Elastic Stack available free of charge, including TLS for encrypted communications, file and native realm for creating and managing users, and role-based access control for controlling user access to cluster APIs and indexes. The corresponding source code is available under the “Elastic License”, a source-available license. In addition, Elasticsearch now offers SIEM (Security Information and Event Management) and Machine Learning as part of its offered services.


The combination of Elasticsearch, Logstash, and Kibana, referred to as the “Elastic Stack” (formerly the “ELK stack”), is available as a product or service. Logstash provides an input stream to Elasticsearch for storage and search, and Kibana accesses the data for visualizations such as dashboards. Elastic also provides “Beats” packages which can be configured to provide pre-made Kibana visualizations and dashboards about various database and application technologies.

Grafana Loki

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Loki is one of the available Datasources in Grafana.

Loki as a Data Source Option under Grafana

In certain scenarios, Grafana’s Loki can offer an alternative to Elasticsearch that can be inserted into current workflows.
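
Because only the labels are indexed, a Loki query first selects a log stream by its labels and then filters the raw content; a LogQL query typed into Grafana’s Explore view might look like this (the label names and values are made up):

{job="nginx", env="production"} |= "error"

The label matcher narrows down the streams cheaply via the index, while the |= filter greps through the matching log lines.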

Graphite
Graphite running in a Docker instance, exposed on port 80

Graphite is a free open-source software (FOSS) tool that monitors and graphs numeric time-series data such as the performance of computer systems. Graphite was developed by Orbitz Worldwide, Inc and released as open-source software in 2008.

Graphite collects, stores, and displays time-series data in real time.

The tool has three main components:

  • Carbon - a Twisted daemon that listens for time-series data
  • Whisper - a simple database library for storing time-series data (similar in design to RRD)
  • Graphite webapp - a Django webapp that renders graphs on demand using the Cairo library
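
Feeding Carbon is refreshingly simple: it listens for plaintext "metric-path value timestamp" lines, by default on TCP port 2003. A quick sketch from a shell (the metric path is made up):

echo "servers.host01.load.shortterm 1.02 $(date +%s)" | nc localhost 2003

Whisper stores the value in its fixed-size database, and the webapp can graph it immediately.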

Graphite is used in production by companies such as Ford Motor Company, Booking.com, GitHub, Etsy, The Washington Post and Electronic Arts.

Links

Grafana Step by Step for beginners:
https://www.youtube.com/watch?v=4qpI4T6_bUw&t=64s

Grafana
https://grafana.com/

Elasticsearch
https://www.elastic.co

Elasticsearch concepts
https://logz.io/blog/10-elasticsearch-concepts/

Kibana
https://www.elastic.co/kibana

Graphite
https://graphiteapp.org/

Grafana Loki
https://www.youtube.com/watch?v=1obKa6UhlkY

How to deploy TIG Stack
https://www.howtoforge.com/tutorial/how-to-install-tig-stack-telegraf-influxdb-and-grafana-on-ubuntu-1804/

Comparing Grafana Kibana Graphite
https://stackshare.io/stackups/grafana-vs-graphite-vs-kibana