Dally Rhythms – 2020.03.29

Tracklist

  • Lane 8 – Loving You (Lane 8 Rework), A minor, 123 bpm
  • Arnav S – No Strings Attached (Original Mix), C major, 120 bpm
  • Laid Back – Beautiful Day (Banzai Republic, Trentemoller Remix), C major, 124 bpm
  • Rober Sanchez – Shine (Original Mix), C major, 123 bpm
  • Ilario Alicante – Xyxy (Original Mix), F major, 127 bpm
  • Pablo Acenso – In A Box (Dubmotional Mix), D minor, 128 bpm
  • Nadja Lind – Drifting Elements (Chris Lattner Remix), F major, 122 bpm
  • Hollen – Jobless (Original Club Mix), F major, 126 bpm
  • The Rumours – The Real Shit (Chris Main Remix), F major, 126 bpm
  • Double think – Last Word (Dub Version), Eb minor, 123 bpm
  • Face Off – Let It All Down (JazzyFunk Remix), Eb minor, 122 bpm
  • Bruno Be & Dado Prisco – Uncle Jack (Andrey Exx & Hot Hotels Remix), Eb minor, 120 bpm

Available for download in the archives.

MTT 161 / Withdraw

Finishing out the springtime dub techno and moving into a little electro this week. My traditional end-of-the-month slow set.

Tracklist:

  1. Rings Around Saturn / World Interior 00:00
  2. Ohrwert / Neutral 03:10
  3. Iwata / Stereotype 05:55
  4. Brk / III 09:48
  5. Mark Broom / Look And Stare 14:48
  6. Paperclip People / Remake (Basic Reshape) 18:09
  7. Fingers In The Noise / Empty Whisky Bottle 22:23
  8. Chronolux / Sun Spots 27:19
  9. Am.Light / Breathing 32:11
  10. Plant43 / Amphibious Architecture 35:15
  11. Electronome / V = for Viewlexx! 39:11
  12. Lowfish / Cancel, Continue 42:00
  13. Silicon / The Wash 46:55
  14. Marco Bernardi / Complete Direction 49:52
  15. Martin Nonstatic / Sense Of Life 54:45

And here’s a link to the recording: https://archives.anonradio.net/202003270600_cev.mp3 with thanks to aNONradio for hosting the archives.

About half of this mix is a tour through free dub techno netlabels – Deepindub (track 7), Dewtone (tracks 9 and 15), Energostatic (tracks 3 and 4), Recycled Plastics (track 2), and Thinner (track 8) all feature here. My favorites of those are the two from Energostatic Records: Stereotype by Iwata and III by Brk. Dewtone and Recycled Plastics appear to be the only active labels from that list.

Track 6 Remake (Basic Reshape) by Paperclip People (AKA Carl Craig) is of course a Basic Channel remix. The copy played here is from the Basic Reshape 12″, Basic Channel catalog number BC-BR. I believe I bought my copy at Platinum Records a few years after its release. I’ve linked to Hardwax in the tracklist above, where that same release is still available as both vinyl and digital.

Tracks 13 The Wash and 14 Complete Direction are both from the Dutch electro label Frustrated Funk. The Wash is of course by Heath Brunner, head of V-Max Records, and not any of the other artists named Silicon. Bernardi’s Complete Direction is from his Welcome To My World EP, and is the slow track on that otherwise fast EP.

I’m glad I picked this month for dub techno, as it’s been a tough one otherwise. I’ll be back next week with something fast again. Something weird.

MTT 160 / Involuntary Recall

More techno, dub techno, and a little house. Music to space out to while you consider your next move.

Tracklist:

  1. Audiokonstrukte / Some Miles 00:00
  2. Zzzzra / Sumo 03:29
  3. James Ulibarri / Buried In Snow 06:48
  4. Unknown / Untitled 07 [UKNWN02 03] 11:18
  5. Roberto Figus / The Natural 16:59
  6. Overcast Sound / Snow Melt 20:51
  7. Marco Bernardi / Keep On Looking 25:11
  8. Bipolardepth / Untitled [ML056 02] 29:35
  9. Simoncino / Baila Baiana 34:07
  10. Mikron / Ghost Node 36:54
  11. JGarrett / Vaporizer (Jonah Sharp Remix) 41:56
  12. Zwart Licht Kommando / Flare (Acid Mix) 46:09
  13. Sinuosity / Alfanuosity 50:00
  14. Tones On Tail / You, The Night, And The Music 55:21
  15. Newworldromantic / Spirit 58:01

And here’s a link to the recording: https://archives.anonradio.net/202003200600_cev.mp3 with thanks to aNONradio for hosting the archives.

Two more tracks from Motorlab feature this week: Sinuosity’s Alfanuosity and Bipolardepth’s Untitled from ML056 Soundtest. Soundtest is particularly good and cohesive, with Bipolardepth’s track being in my opinion the best on the release.

Mikron’s Ghost Node, from their album Severance on CPU Records, is my current favorite from this list. Ghost Node has a peculiar moody disco vibe, something I associate with old Legowelt or Orgue Electronique, old Bunker, Clone, or Crème. Severance is easily my favorite album of 2019, so go check it out (please).

All these tracks are good of course (I mean, I like ’em), even if the selection doesn’t quite hang together right. It was a fun and welcome distraction on a troubling Thursday evening.

I’ll be live again on the 26th with something slower; blog post to follow as usual.

Dally Rhythms – 2020.03.22

Tracklist

  • Modeselektor – Kalif Storch, Gb minor, 125 bpm
  • Orbital – Lush 3-3 (Underworld Mix), Gb minor, 130 bpm
  • Ava Mea – In The End (Original Mix), Gb minor, 128 bpm
  • Jonas Rathsman – Complex featuring Josef Salvat (Original Mix), Gb minor, 122 bpm
  • Basan, Fred Leone – All Night (Club Mix), E minor, 122 bpm
  • Ivan Roudyk – Deep Dark Symphony (Original Mix), E minor, 122 bpm
  • Dance Bridge – You Giving Me More (Original mix), E minor, 122 bpm
  • Chad Tyson – Don’t give A Damn! (Original Mix), E minor, 121 bpm
  • Daniel Camarillo – Button (Steve Sai Remix), A minor, 124 bpm
  • Murr, Rosina – Dive Into The Deepest (Maceo Plex Remix), A minor, 123 bpm
  • Jadeck – Ayalon Highway (Yuriy From Russia Remix), A minor, 125 bpm
  • Beneefit – Hallucinations featuring Jdashit (J-Soul Remix), A minor, 126 bpm
  • Anja Schneider And Sebo K – Rancho Relaxo, A minor, 122 bpm

Available for download in the archives.

TSR – The Server Room – Shownotes – Episode 18-19

Topic: Microservices, Docker and Kubernetes

Monolithic Architecture Vs Microservices Architecture

Microservices separate business logic into functions. Instead of one big program (the monolithic approach), several smaller applications (microservices) are used. They communicate with each other through well-defined APIs, usually over HTTP.

A monolithic architecture is a tiered, layered approach: one big, single application containing the UI, the core business logic, and the data access layer, all communicating with one single giant database where everything is stored.

In a microservices architecture, when a user interacts with the UI it triggers independent microservices, each using a shared database or one of its own, and each fulfilling one specific function or element of the business logic or the data access layer.
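To make this concrete, here is a minimal sketch of what one such microservice could look like, using only Python’s standard library; the “invoice” service, its port, and its endpoint are made-up examples for illustration, not part of any specific stack discussed here.

    # Hypothetical single-purpose "invoice" microservice exposing a small HTTP API.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class InvoiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /invoice/42 returns a small JSON document for order 42
            if self.path.startswith("/invoice/"):
                order_id = self.path.rsplit("/", 1)[-1]
                body = json.dumps({"order": order_id, "status": "ready"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Other microservices (or the UI) call this over plain HTTP
        # instead of calling a function inside a shared monolith.
        HTTPServer(("0.0.0.0", 5000), InvoiceHandler).serve_forever()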


Microservices are popular today.

The Advantages of using Microservices:

  • Microservices can be programmed in any programming language best suited for the purpose, as long as they can use a common API to communicate with each other and/or with the UI. As discussed, HTTP is the most commonly used.
  • Works well with smaller teams, each assigned to and responsible for a single function (microservice), instead of one big team working on an overly complex single application that does and contains everything.
  • Easier fault isolation: the microservices architecture breaks a single monolithic application into a set of small, single-purpose functions, making it easier to troubleshoot what went wrong and where. It also provides a certain kind of fault tolerance, since a failure in one microservice does not always affect the functionality or user experience of the application as a whole, as long as the failed service is not part of some critical core functionality, and depending on how strongly the other microservices depend on the one that failed.

    If, for example, the failed microservice is the one that prints the invoice once the user has completed an order via the UI, this single fault, inconvenient as it may be, does not make the application as a whole non-functional; it only partially affects its functionality. Other microservices could also offer an alternative, for example a service that emails the invoice generated for the order, so while the invoice-printing microservice is down the user still has another way to achieve the same or a similar result.

    One microservice can also sometimes take over another microservice’s function on failure, providing even more fault tolerance.
  • Pairs well with container architectures: you can containerize a single microservice.
  • Dynamically scalable, both up and down. In the era of cloud computing, where on-demand pricing and instances are expensive, this lets companies save a lot of money.

The Disadvantages of using Microservices:

  • Complex networking: going from one logically laid-out single application to some number of microservices, with or without their own databases, and a sometimes confusing web of communication between them, does not make troubleshooting the simplest task unless you have grown familiar with the design and inner workflows of the application (experience as an application support engineer for that specific application).

    Imagine tracing the flow through all the steps, from start to finish, across such a complex web of networking…
  • Requires extra knowledge of and familiarity with topics such as the architecture itself (how you will build it out piece by piece), containerization, and container orchestration (Kubernetes, for example).
  • Overhead: the knowledge mentioned above, plus overhead on the infrastructure side, as databases and servers need to be spun up instantly.

Docker

What is Docker?

As we saw in the episodes on virtualization (Episodes 14-15), Docker is a form of OS-level virtualization, delivered as a PaaS (platform as a service) that ships software in packages called containers.

Docker is an application build and deployment tool. It is based on the idea that you can package your code together with its dependencies into a deployable unit called a container. Containers have been around for quite some time.

Some say they were developed by Sun Microsystems and released as part of Solaris 10 in 2005 as Zones; others say BSD jails were the first container technology.

Another explanation: Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.

Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system; only the libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that the software will always run the same, regardless of where it is deployed.

The software that hosts the containers is called Docker Engine

With Docker and containerization there are no more issues of “the application ran fine on my computer but it doesn’t on client X’s machine”: as long as the client has Docker, the application will run just the same way it did on the developer’s machine, because the whole thing, with all its necessary libraries and bits and pieces, comes packaged in that container.

Dockerfile

  • Describes the build process for an image:
    e.g. my app will be based on Node: copy my files here, run npm install, and then npm run when the image is run
  • Can be run to automatically create an image
  • Contains all the commands necessary to build the image and run your application
An example Dockerfile might use Alpine Linux as the base, add Python and pip with some modules, copy/pull in the src files required for the application to run, specify the port number to expose, and eventually run the application.
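A rough sketch of such a Dockerfile might look like the following; the file names, port, and Alpine version are placeholders rather than a real project:

    # Hypothetical Dockerfile: Alpine base, Python + pip, app source, exposed port, run command
    FROM alpine:3.17
    RUN apk add --no-cache python3 py3-pip     # add Python and pip on top of the Alpine base
    COPY ./src /app                            # copy/pull in the application source files
    WORKDIR /app
    RUN pip3 install -r requirements.txt       # install the Python modules the app needs
    EXPOSE 5000                                # port number the application will listen on
    CMD ["python3", "app.py"]                  # run the application when the container starts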

A Docker container is the runtime instance of a Docker image built from a Dockerfile.
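Turning that Dockerfile into a running container comes down to two commands (the image name and port mapping below are just examples):

    docker build -t myapp:1.0 .            # build an image from the Dockerfile in the current directory
    docker run -d -p 5000:5000 myapp:1.0   # start a container, a runtime instance of that image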

Components


The Docker software as a service offering consists of three components:

Software: The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface that allows users to interact with Docker daemons.


Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.

  • A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
  • A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
  • A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.

Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download (“pull”) images for use or upload (“push”) images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
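From the Docker CLI, working with a registry looks roughly like this; the myuser/myapp repository name is a placeholder:

    docker pull alpine:3.17                  # download ("pull") an image from Docker Hub
    docker tag myapp:1.0 myuser/myapp:1.0    # name the local image for a repository on the registry
    docker login                             # authenticate against the registry
    docker push myuser/myapp:1.0             # upload ("push") the image you built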

Tools

Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application’s services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container. The docker-compose.yml file is used to define an application’s services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more.
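As a rough illustration, a docker-compose.yml for a small two-service application could look like the sketch below; the service names, image, and ports are invented for the example:

    version: "3.8"
    services:
      web:
        build: ./web               # build option: directory containing the Dockerfile
        command: python3 app.py    # command option: overrides the image's default command
        ports:
          - "5000:5000"
        depends_on:
          - db
      db:
        image: postgres:15         # a service can also run a ready-made image from a registry
        environment:
          POSTGRES_PASSWORD: example

Running docker-compose up then builds and starts both containers with a single command.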


Docker Swarm is, in simple words, Docker’s own open-source container orchestration platform (we will see more about container orchestration below).

Container Orchestration

What is Container Orchestration

Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration. 

Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security. 

Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.

Some popular options are Kubernetes, Docker Swarm, and Apache Mesos.

Managing the lifecycle of containers with orchestration also supports DevOps teams, who integrate it into CI/CD workflows (continuous integration / continuous delivery or deployment). Along with application programming interfaces (APIs) and DevOps teams, containerized microservices are the foundation for cloud-native applications.


What is container orchestration used for?

Use container orchestration to automate and manage tasks such as:

  • Provisioning and deployment
  • Configuration and scheduling 
  • Resource allocation
  • Container availability 
  • Scaling or removing containers based on balancing workloads across your infrastructure
  • Load balancing and traffic routing 
  • Monitoring container health
  • Configuring applications based on the container in which they will run
  • Keeping interactions between containers secure

Kubernetes

What is Kubernetes?

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

A Kubernetes cluster can be deployed on either physical or virtual machines.

Kubernetes Architecture

(Diagram: complete Kubernetes architecture and process.)

Kubernetes Architecture has the following main components:

  • Master nodes
  • Worker/Slave nodes

Kubernetes Cluster is a set of node machines for running containerized applications.

Master Node

The master node is responsible for the management of the Kubernetes cluster. It is mainly the entry point for all administrative tasks. There can be more than one master node in the cluster to provide fault tolerance.

(Diagram: Kubernetes master node components, from the Edureka Kubernetes tutorial.)

As you can see in the above diagram, the master node has various components like API Server, Controller Manager, Scheduler and ETCD.

  • API Server: The API server is the entry point for all the REST commands used to control the cluster.
  • Controller Manager: Is a daemon that regulates the Kubernetes cluster, and manages different non-terminating control loops.
  • Scheduler: The scheduler schedules the tasks to slave nodes. It stores the resource usage information for each slave node.
  • ETCD: ETCD is a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service discovery.
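In practice, all of these components are driven through the API server, usually via the kubectl client. A few typical commands, assuming a working cluster and kubeconfig:

    kubectl get nodes                    # list the nodes the scheduler can place work on
    kubectl get pods --all-namespaces    # list pods across the cluster via the API server
    kubectl describe node <node-name>    # show conditions and resource usage for one node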

Worker/Slave nodes

Worker nodes contain all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.

(Diagram: Kubernetes worker node components, from the Edureka Kubernetes tutorial.)

As you can see in the above diagram, the worker node has various components like Docker Container, Kubelet, Kube-proxy, and Pods.

  • Docker Container: Docker runs on each of the worker nodes, and runs the configured pods
  • Kubelet: Kubelet gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
  • Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on a single worker node
  • Pods: A pod is one or more containers that logically run together on nodes. 
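For reference, a minimal Pod manifest looks roughly like this (the name and image are placeholders); the kubelet on a worker node is what ensures the described container ends up running:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      containers:
        - name: myapp
          image: myuser/myapp:1.0      # image pulled from a registry
          ports:
            - containerPort: 5000

Submitting it with kubectl apply -f pod.yaml hands it to the API server, which schedules it onto a worker node.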

Minikube

To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
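The basic Minikube workflow, as a quick sketch:

    minikube start       # create the local VM and bootstrap a single-node cluster
    minikube status      # check that the cluster components are running
    kubectl get nodes    # the cluster is usable through the normal kubectl client
    minikube stop        # stop the cluster VM
    minikube delete      # remove it entirely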

https://kubernetes.io/docs/setup/learning-environment/minikube/

https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/

Kubernetes cluster with Fluentd collecting and forwarding logs to Elasticsearch, which stores them in its log database, ready to be queried via Kibana.

https://www.learnitguide.net/2018/08/what-is-kubernetes-learn-kubernetes.html

https://www.edureka.co/blog/kubernetes-tutorial/

Contact Information

Telegram Chat Group
https://t.me/tsrpodcast

Episodes in FLAC
https://tsr-podcast.viktormadarasz.com/flac-remastered-episodes

Video Recording of the Live Show:
https://video.hardlimit.com/accounts/serverroomshow/video-channels

Email
viktormadarasz@sdf.org

VOIP // PSTN
261414@sanjose2.voip.ms
+1 910 665 9191

Dally Rhythms – 2020.03.15

Tracklist

  • Shane Blackshaw – Nobody Knows (Andre Sobota Dub), Gb minor, 124 bpm
  • Andain – You Once Told Me (Moonbeam Remix), Gb minor, 130 bpm
  • Beton – Directions featuring Wevie Stonder (The Cirez D Edit), Gb minor, 124 bpm
  • Del Prado, Ross – Telecasting (With Gianni Bini Re-Touch), Gb minor, 124 bpm
  • GotSome – Bassline featuring Get Along Gang (Chocolate Puma Remix), A minor, 125 bpm
  • Matt Fear – Groove With It (Original mix), A minor, 120 bpm
  • Pete Bellis, Naya Kouti – You Can Feel It (Original Mix), A minor, 123 bpm
  • Frantzvaag – Once Before, C major, 121 bpm
  • Nopopstar, Max Vertigo – Stranger (Original Mix), Ab minor, 120 bpm
  • Lissat & Voltaxx – Sunglasses At Night (Nosta Remix), Ab minor, 120 bpm
  • Enki Nyxx – Taste More Flavour (Original Mix), Ab minor, 120 bpm
  • Jonas Rathsman – Tobago (Original Mix), Ab minor, 120 bpm
  • Artur Antonov, Meleshkin – Come With Me (Original Mix), Db minor, 121 bpm
  • Dry, Bolinger – Feel The Bass (Original Mix), Db minor, 120 bpm

Available for download in the archives.

MTT 159 / Brain Dive

Techno, dub techno, and a little 2000s minimal for a rainy week.

Tracklist:

  1. Sinuosity / Untitled [Cycle23 B] 00:00
  2. Maurizio / Domina (Maurizio Mix) 02:50
  3. Das Kraftfuttermischwerk / Flieg Mit Mir, Flieg 08:56
  4. Ben Businovski / T-Test (Two-Tailed Mix) 11:25
  5. Bryan Zentz / Merciful Dub 14:27
  6. Pan/Tone / Funky Martini 19:47
  7. Andy Vaz / Untitled [1-1 A1] 24:13
  8. Soultek / Analogueheart (Ghost Hacked By Area) 26:44
  9. Pheek & Benfay / Leaving Bern 30:42
  10. Marko Fürstenberg / Reactor3 35:39
  11. Bipolardepth / Event Two 38:25
  12. Bandulu / Downstairs Somewhere 43:38
  13. Underworld / Surfboy 45:31
  14. Maurizio / Domina (Maurizio Mix) 51:40
  15. Model 500 / Starlight (M 69 Original Mix) 55:15
  16. Auto Kinetic / Motorcity 56:50

And here’s the recording: https://archives.anonradio.net/202003130600_cev.mp3 with thanks to aNONradio for hosting the archives.

Tracks 3, 9, and 10 are all from the famous netlabel Thinner. Pheek & Benfay’s Leaving Bern is my favorite of those three. Pheek‘s two releases on Thinner, Tabisuru Kokoro (thn043) and Consortium (thn076), are worth checking out if you’re into the style exemplified by Leaving Bern. Both are available at files.scene.org. Thinner’s catalog is also available at archive.org, if that’s more convenient.

The reason I assembled this mix, the track I really wanted to play, is the classic Domina by Maurizio. The particular copy featured here is from the Tresor 3 compilation. It’s probably my favorite of the M-Series (though M-4 is also very good). Hardwax has M-3 Domina available as both 12″ and a digital download if you don’t already have it.

And that’s it. I think I’ll do some more dub techno four-on-the-floor stuff next week, but slower. Check back for that if you’re into it.

TSR – The Server Room – Shownotes – Episode 17

Prologue

We previously talked about Linux, but we cannot go forward without mentioning Unix, which inspired and had a fundamental impact on Linux as well as on many of the other Unix-like operating systems we know and use today, as we will see in a short while.

Episodes like this one, and the ones made on Linux, virtualization, or routers and networking, do not carry any deep technical knowledge; their mission is to build common ground, some basic and general knowledge that further discussions and topics can build upon and refer back to, much like a framework or indeed a foundation.

Unix

Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, development starting in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.

Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), IBM (AIX), and Sun Microsystems (Solaris). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation (SCO) in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS).

Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy”: the operating system provides a set of simple tools that each performs a limited, well-defined function with a unified filesystem (the Unix filesystem) as the main means of communication, and a shell scripting and command language (the Unix shell) to combine the tools to perform complex workflows. Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, thus allowing Unix to reach numerous platforms.

Overview

Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues.

At first, Unix was not designed to be portable or for multi-tasking. Later, Unix gradually gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the “Unix philosophy”. Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as “the idea that the power of a system comes more from the relationships among programs than from the programs themselves”.

In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well, as I/O was generally linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, as well as network sockets to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse.

By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.

Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.

The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common “low-level” tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being a priority realm where most application programs operate.

History

The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name.

The new operating system was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service (pronounced “eunuchs”), as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that “no one can remember” the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan.

The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting. The first port to another platform was made five years later (1978), for the Interdata 8/32.

Bell Labs produced several versions of Unix that are collectively referred to as “Research Unix”. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois at Urbana–Champaign Department of Computer Science. UIUC graduate student Greg Chesson, who had worked on the UNIX kernel at Bell Labs, was instrumental in negotiating the terms of the license.

During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups, including Sequent, HP-UX, Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors.

In the 1990s, Unix and Unix-like systems grew in popularity as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, which was later renamed macOS.

Unix operating systems are widely used in modern servers, workstations, and mobile devices.

Standards – POSIX

In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. Starting in 1998, the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification.

In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4’s Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture.

The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux.

Impact

The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language. Although this followed the lead of Multics and Burroughs, it was Unix that popularized the idea.

Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple “stream of bytes” model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms.

Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC’s RSX-11M’s “group, user” hierarchy evolved into VMS directories, CP/M’s volumes evolved into MS-DOS 2.0+ subdirectories, and HP’s MPE group.account hierarchy and IBM’s SSP and OS/400 library systems were folded into broader POSIX file systems.

Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM’s JCL). Since the shell and OS commands were “just another program”, the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix’s innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell.

A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. There were no “binary” editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike “record-based” file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could be easily combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP.
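As a small illustration of that idea, the classic word-frequency pipeline strings several tiny text tools into an ad hoc program, with plain newline-delimited text flowing between them (report.txt stands in for any text file):

    tr -cs 'A-Za-z' '\n' < report.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head

Each tool does one small job (split into words, lowercase, sort, count, rank, trim) and knows nothing about the others; the pipe is the only interface.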

Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming.

Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy.

The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity, and which formed the basis for implementations on many other platforms.

The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983.

Free Unix and Unix-like variants

In 1983, Richard Stallman announced the GNU (short for “GNU’s Not Unix”) project, an ambitious effort to create a free software Unix-like system; “free” in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project’s own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the kernel Linux as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core utilities – have gone on to play central roles in other free Unix systems as well.

Linux distributions, consisting of the Linux kernel and large collections of compatible software have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian GNU/Linux, Ubuntu, Linux Mint, Mandriva Linux, Slackware Linux, Arch Linux and Gentoo.

A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD.

Linux and BSD are increasingly filling the market needs traditionally served by proprietary Unix operating systems, as well as expanding into new markets such as the consumer desktop and mobile and embedded devices. Because of the modular design of the Unix model, sharing components is relatively common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and some systems also include GNU utilities in their distributions.

In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD operating systems are a continuation of the basis of the Unix design, and are derivatives of Unix:

I think the Linux phenomenon is quite delightful, because it draws so strongly on the basis that Unix provided. Linux seems to be among the healthiest of the direct Unix derivatives, though there are also the various BSD systems as well as the more official offerings from the workstation and mainframe manufacturers.

In the same interview, he states that he views both Unix and Linux as “the continuation of ideas that were started by Ken and me and many others, many years ago”.

OpenSolaris was the open-source counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. As of 2014, illumos remains the only active open-source System V derivative.

Branding

In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group) and in 1995 sold the related business operations to Santa Cruz Operation (SCO). Whether Novell also sold the copyrights to the actual software was the subject of a federal lawsuit in 2006, SCO v. Novell, which Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case. Unix vendor SCO Group Inc. accused Novell of slander of title.

The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as “UNIX” (others are called “Unix-like”).

By decree of The Open Group, the term “UNIX” refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group’s Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system’s vendor pays a substantial certification fee and annual trademark royalties to The Open Group. Systems that have been licensed to use the UNIX trademark include AIX, EulerOS, HP-UX, Inspur K-UX, IRIX, Solaris, Tru64 UNIX (formerly “Digital UNIX”, or OSF/1), macOS, and a part of IBM z/OS. Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant.



(Diagram: evolution of Unix and Unix-like systems.)

Contact Information

Telegram Chat Group
https://t.me/tsrpodcast

Episodes in FLAC
https://tsr-podcast.viktormadarasz.com/flac-remastered-episodes

Video Recording of the Live Show:
https://video.hardlimit.com/accounts/serverroomshow/video-channels

Email
viktormadarasz@sdf.org

VOIP // PSTN
261414@sanjose2.voip.ms
+1 910 665 9191

MTT 158 / To All Patrolling Air Units

Selections from the Megatech Body CD (AKA the Ghost In The Shell PSX soundtrack) and the two Ghost In The Shell Tribute albums from 2004. Fast techno, 130 to 140 BPM.

Tracklist:

  1. Oliver Ho / The Vision 00:00
  2. System 7 vs. Derrick May / Big Sky City 02:24
  3. Dave Angel / Can U Dig It? 07:36
  4. Sam Reeve / Tachikoma Beat Showcase 11:09
  5. Hardfloor / Spook & Spell (Fast Version) 14:36
  6. Brother From Another Planet / Ishikawa Surfs The System 18:32
  7. Joey Beltram / The Vertical 21:58
  8. WestBam / Moonriver 26:59
  9. Bryan Zentz / Interfaced 29:50
  10. Kabuto / No Brain 34:04
  11. Scan X / Higher [Listed on the packaging as “Reflections”] 37:46
  12. Mijk Van Dijk / Soul Survivor 41:58
  13. Shin Nishimura / Disorder 45:40
  14. Q-Hey / Man Machine 48:24
  15. Takkyu Ishino / Ghost In The Shell 51:01
  16. Brother From Another Planet / Section 9 Theme 56:41

and here’s the recording: https://archives.anonradio.net/202003060600_cev.mp3 with thanks to aNONradio for hosting the archives.

This mix is composed entirely of tracks from three compilations. First: the 1997 Megatech Body.CD, the 2-disc limited edition, better known as the soundtrack to the Ghost In The Shell PlayStation video game. Second: the 2004 Ghost In The Shell Tribute Album. Third: the Ghost In The Shell Tribute Album Ver 2.0.0, also from 2004.

Wikipedia tells me that Takkyu Ishino produced the soundtrack album, and it’s a terrific sampling of Detroit, European, and Japanese techno. The selection on the two tribute albums is similar, but updated for 2004. Stand Alone Complex 2nd Gig was airing on TV roughly when the two tribute albums were released, so they may have been tie-in or promotional products.

So, these three albums are amazing. From the artists assembled, to the particular tracks, to the packaging and art direction. Way more hits than misses. A lot of great tracks didn’t make it into this hour: CJ Bolland’s tunes (as BCJ), The Advent’s tunes, Adam Beyer… I tried hard to get Mijk Van Dijk’s Fuchi Koma in, but couldn’t find a spot for it. Maybe next time.

I bought these discs on eBay in ’05 or ’06 and unfortunately mine are bootlegs. Specifically the Ever Anime bootleg of the Megatech Body CD, and the Miya Records bootlegs of the first and second Tribute albums. I’ve wanted to record a mix of these tracks since then, and am happy to have done it now.

Oh, and feel free to fast forward past my airheaded rambling at the beginning of the recording.

Dally Rhythms – 2020.03.08

Tracklist

Urban Chameleon – Sociopath (Original Mix), F minor, 123 bpm
Flex Cop – Nocebo (Original Mix), F minor, 122 bpm
Felten & Constantinne – Sky (Original Mix), F minor, 122 bpm
Deeprise – You Touch (Pete Bellis Remix), F minor, 122 bpm
Abramovsky – Fade Away (Original Mix), E minor, 118 bpm
South Methods – Unexpected (Original Mix), E minor, 120 bpm
Mischa Daniels – Run Away (Deep Sound Effect Remix), E minor, 120 bpm
Eric Volt, Sebatian Voigt, Forrest – Words And Chance (Original Mix), E minor, 118 bpm
Going Deeper, Blackfeel Wite – Behind The Mask (Angelo Fracalanzo Remix), A minor, 120 bpm
Fideles, Brad Sucks – Get Your Fable (Della Zouch Remix), A minor, 121 bpm
King Sound – Butterfly Effect (Felten & Constantinne Remix), A minor, 122 bpm
Davide Piras – Time, A minor, 125 bpm

Available for download in the archives.