About half of this mix is a tour through free dub techno netlabels – Deepindub (track 7), Dewtone (tracks 9 and 15), Energostatic (tracks 3 and 4), Recycled Plastics (track 2), and Thinner (track 8) all feature here. My favorites of those are the two from Energostatic Records: Stereotype by Iwata and III by Brk. Dewtone and Recycled Plastics appear to be the only active labels from that list.
Track 6, Remake (Basic Reshape) by Paperclip People (AKA Carl Craig), is of course a Basic Channel remix. The copy played here is from the Basic Reshape 12″, Basic Channel catalog number BC-BR. I believe I bought my copy at Platinum Records a few years after its release. I’ve linked to Hardwax in the tracklist above, where that same release is still available both on vinyl and digitally.
Two more tracks from Motorlab feature this week: Sinuosity’s Alfanuosity and Bipolardepth’s Untitled from ML056 Soundtest. Soundtest is particularly good and cohesive, with Bipolardepth’s track being in my opinion the best on the release.
Mikron’s Ghost Node, from their album Severance on CPU Records, is my current favorite from this list. Ghost Node has a peculiar moody disco vibe, something I associate with old Legowelt or Orgue Electronique, old Bunker, Clone, or Crème. Severance is easily my favorite album of 2019, so go check it out (please).
All these tracks are good of course (I mean, I like ’em), even if the selection doesn’t quite hang together right. It was a fun and welcome distraction on a troubling Thursday evening.
I’ll be live again on the 26th with something slower; blog post to follow as usual.
Monolithic Architecture vs. Microservices Architecture
Microservices separate business logic functions. Instead of one big program (the monolithic approach), several smaller applications (microservices) are used. They communicate with each other through well-defined APIs – usually HTTP.
A monolithic architecture is a tiered, layered approach: one big, single application – including the UI, the core business logic, and the data access layer – communicating with one single giant database where everything is stored.
In a microservices architecture, when a user interacts with the UI, it triggers independent microservices – using a shared database, or perhaps one of their own – each fulfilling one specific function or element of the business logic or data access layer.
Microservices are popular today.
The Advantages of using Microservices:
Microservices can be programmed in any programming language best fit for the purpose, as long as they are able to use a common API for communicating with each other and/or with the UI. As discussed, HTTP is the most commonly used.
Works well with smaller teams, each assigned to and responsible for a single function (microservice), instead of one big team working on an overly complex single application that does and contains everything.
Easier fault isolation: because the microservices architecture breaks a single monolithic application into a smaller set of single functions, it is easier to troubleshoot what went wrong and where. It also provides a certain kind of fault tolerance, since a failure in one microservice does not always affect the user experience or the functionality of the application as a whole – as long as the failed service is not part of some critical, core functionality, and depending, of course, on how strongly the other microservices depend on the one that failed.
For example, if the microservice that failed is the one that takes care of printing the invoice once the user completes an order via the UI, this single fault – inconvenient as it can be – does not make the application as a whole non-functional; it affects its functionality only partially. Other microservices might also offer an alternative, for example a service to email the invoice generated for the user’s order: while the microservice responsible for printing an order’s invoice is down, the user still has another way to achieve the same or a similar function.
Also, one microservice can sometimes handle another microservice’s function on failure, providing even more fault tolerance.
Pairs well with container architectures: you can containerize a single microservice.
Dynamically scalable, both up and down. In the era of cloud computing, with on-demand pricing where instances are expensive, this feature can save companies a lot of money.
The Disadvantages of using Microservices:
Complex networking: going from one logically laid-out single application to some number of microservices – with or without their own databases, and with a sometimes confusing web of communication between them – does not make troubleshooting the simplest task, unless you have grown familiar with the design and inner workflows of the application (the kind of experience an application support engineer builds for that specific application).
Imagine tracing the flow through the steps from start to finish with such a complex web of networking…
Requires extra knowledge of and familiarity with topics such as the architecture itself (how you will build it out piece by piece), containerization, and container orchestration (Kubernetes, for example).
Overhead: in knowledge (mentioned above) and on the infrastructure side, as databases and servers need to be spun up instantly.
What is Docker?
As we saw in the episodes on virtualization (episodes 14–15), Docker delivers software in packages called containers, using OS-level virtualization, and is offered in the form of a PaaS (platform as a service).
Docker is an application build and deployment tool. It is based on the idea that you can package your code with its dependencies into a deployable unit called a container. Containers have been around for quite some time.
Some say they were developed by Sun Microsystems and released as part of Solaris 10 in 2005 as Zones; others say BSD jails were the first container technology.
Another explanation: Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system – only the libraries and settings required to make the software work. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it is deployed.
The software that hosts the containers is called Docker Engine.
With Docker and containerization, there is no more of the “the application ran fine on my computer, but it doesn’t on client X’s machine” issue: as long as the client machine has Docker, the application will run just the same way it did on the developer’s machine, because the whole thing – with all its necessary libraries and bits and pieces – comes packaged into that container form.
A Dockerfile describes the build process for an image: for example, my app will be based on Node, copy my files here, run npm install, and then npm run when the image is run.
It can be run to automatically create an image.
It contains all the commands necessary to build the image and run your application.
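As a sketch, the Node example above could be written as an actual Dockerfile like this (the base image tag, file layout, and npm scripts are assumptions, not taken from any particular project):

```dockerfile
# Hypothetical Node app; the base image tag is an assumption
FROM node:18-alpine

# Copy the application files into the image
WORKDIR /app
COPY . .

# Install dependencies while building the image
RUN npm install

# Command executed when the image is run as a container
CMD ["npm", "run", "start"]
```

Building and then running it would be something like `docker build -t myapp .` followed by `docker run myapp`.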
A Docker container is the runtime instance of a Docker image, built from a Dockerfile.
The Docker software as a service offering consists of three components:
Software:The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface that allows users to interact with Docker daemons.
Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.
A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.
Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download (“pull”) images for use or upload (“push”) images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application’s services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address a single container. The docker-compose.yml file is used to define an application’s services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more.
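As an illustration, a minimal docker-compose.yml for a hypothetical two-service application might look like this (the service names, images, and ports are all assumptions):

```yaml
# Hypothetical web app plus database; every name here is illustrative
services:
  web:
    build: .             # built from the Dockerfile in the current directory
    ports:
      - "8080:80"        # host:container port mapping
    depends_on:
      - db               # start the database before the web service
  db:
    image: postgres:15   # assumed image tag
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` would then build and start both containers together.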
Docker Swarm is, in simple words, Docker’s open-source container orchestration platform (we will see more about orchestration below).
What is Container Orchestration?
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.
Some popular options are Kubernetes, Docker Swarm, and Apache Mesos.
Use container orchestration to automate and manage tasks such as:
Provisioning and deployment
Configuration and scheduling
Scaling or removing containers based on balancing workloads across your infrastructure
Load balancing and traffic routing
Monitoring container health
Configuring applications based on the container in which they will run
Keeping interactions between containers secure
What is Kubernetes?
Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.
A Kubernetes cluster can be deployed on either physical or virtual machines.
Kubernetes Architecture has the following main components:
Kubernetes Cluster is a set of node machines for running containerized applications.
The master node is responsible for the management of the Kubernetes cluster and is mainly the entry point for all administrative tasks. There can be more than one master node in the cluster, for fault tolerance.
The master node has various components: the API Server, Controller Manager, Scheduler, and etcd.
API Server: The API server is the entry point for all the REST commands used to control the cluster.
Controller Manager: a daemon that regulates the Kubernetes cluster and manages different non-terminating control loops.
Scheduler: the scheduler assigns tasks to the worker nodes. It stores the resource usage information for each worker node.
etcd: a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service discovery.
Worker nodes contain all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
The worker node, in turn, has various components: Docker, Kubelet, Kube-proxy, and Pods.
Docker: Docker runs on each of the worker nodes and runs the configured pods.
Kubelet: Kubelet gets the configuration of a pod from the API server and ensures that the described containers are up and running.
Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on a single worker node.
Pods: A pod is one or more containers that logically run together on nodes.
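To make the pod concept concrete, here is a sketch of a minimal pod manifest (the names and image are assumptions); it would be submitted to the API server with `kubectl apply -f pod.yaml`:

```yaml
# Minimal single-container pod; all names here are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25    # assumed image tag
      ports:
        - containerPort: 80
```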
To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
Tracks 3, 9, and 10 are all from the famous netlabel Thinner. Pheek & Benfay’s Leaving Bern is my favorite of those three. Pheek’s two releases on Thinner, Tabisuru Kokoro (thn043) and Consortium (thn076), are worth checking out if you’re into the style exemplified by Leaving Bern. Both are available at files.scene.org. Thinner’s catalog is also available at archive.org, if that’s more convenient.
The reason I assembled this mix, the track I really wanted to play, is the classic Domina by Maurizio. The particular copy featured here is from the Tresor 3 compilation. It’s probably my favorite of the M-Series (though M-4 is also very good). Hardwax has M-3 Domina available as both a 12″ and a digital download if you don’t already have it.
And that’s it. I think I’ll do some more dub techno four-on-the-floor stuff next week, but slower. Check back for that if you’re into it.
We previously talked about Linux, but we cannot go forward without mentioning Unix, which inspired and had a fundamental impact on Linux as well as many other Unix-like operating systems we know and use today, as we will see in a short while.
Episodes like this one – and the ones on Linux, virtualization, or routers and networking, for example – do not carry any deep technical knowledge or information; their mission is to build a common ground, some basic and general knowledge upon which further discussions and topics can build and refer back to, much like a framework or indeed a foundation.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), IBM (AIX), and Sun Microsystems (Solaris). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation (SCO) in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS).
Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy”: the operating system provides a set of simple tools that each performs a limited, well-defined function with a unified filesystem (the Unix filesystem) as the main means of communication, and a shell scripting and command language (the Unix shell) to combine the tools to perform complex workflows. Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, thus allowing Unix to reach numerous platforms.
Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues.
At first, Unix was not designed to be portable or for multi-tasking. Later, Unix gradually gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the “Unix philosophy”. Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as “the idea that the power of a system comes more from the relationships among programs than from the programs themselves”.
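The “small programs strung together with pipes” idea is easy to demonstrate. Here is a word-frequency count built entirely from single-purpose tools, each doing one simple job:

```shell
# Split words onto separate lines, sort them, count duplicates,
# then list the most frequent word first.
printf 'apple banana apple\n' | tr ' ' '\n' | sort | uniq -c | sort -rn
# First output line shows a count of 2 for "apple"
```

None of these programs knows anything about word frequencies; the power comes from how they are combined.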
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well, as I/O was generally linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, as well as network sockets to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse.
By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.
Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common “low-level” tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being the unprivileged realm where most application programs operate.
The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name.
The new operating system was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service (pronounced “eunuchs”), as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that “no one can remember” the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan.
The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still contained much PDP-11-dependent code and was not suitable for porting. The first port to another platform was made five years later (1978), for the Interdata 8/32.
Bell Labs produced several versions of Unix that are collectively referred to as “Research Unix”. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois at Urbana–Champaign Department of Computer Science. UIUC graduate student Greg Chesson, who had worked on the UNIX kernel at Bell Labs, was instrumental in negotiating the terms of the license.
During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups, including Sequent, HP-UX, Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors.
In the 1990s, Unix and Unix-like systems grew in popularity as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, which was later renamed macOS.
Unix operating systems are widely used in modern servers, workstations, and mobile devices.
Standards – POSIX
In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. Starting in 1998, the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification.
In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4’s Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture.
The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux.
The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language. Although this followed the lead of Multics and Burroughs, it was Unix that popularized the idea.
Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple “stream of bytes” model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms.
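The “simple byte arrays” model is easy to see from the shell: a device such as /dev/null sits in the same file hierarchy as ordinary data, and writing to it uses exactly the same redirection syntax as writing to a regular file:

```shell
# /dev/null is a character device, yet it behaves like any other file
test -c /dev/null && echo "character device"
# Writing to the device uses the same mechanism as writing a regular file
printf 'discard me' > /dev/null
```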
Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC’s RSX-11M’s “group, user” hierarchy evolved into VMS directories, CP/M’s volumes evolved into MS-DOS 2.0+ subdirectories, and HP’s MPE group.account hierarchy and IBM’s SSP and OS/400 library systems were folded into broader POSIX file systems.
Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM’s JCL). Since the shell and OS commands were “just another program”, the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix’s innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell.
A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. There were no “binary” editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike “record-based” file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could be easily combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP.
Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming.
Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy.
The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity, and which formed the basis for implementations on many other platforms.
The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983.
Free Unix and Unix-like variants
In 1983, Richard Stallman announced the GNU (short for “GNU’s Not Unix”) project, an ambitious effort to create a free software Unix-like system; “free” in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project’s own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the kernel Linux as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core utilities – have gone on to play central roles in other free Unix systems as well.
Linux distributions, consisting of the Linux kernel and large collections of compatible software have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian GNU/Linux, Ubuntu, Linux Mint, Mandriva Linux, Slackware Linux, Arch Linux and Gentoo.
A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD.
Linux and BSD are increasingly filling the market needs traditionally served by proprietary Unix operating systems, as well as expanding into new markets such as the consumer desktop and mobile and embedded devices. Because of the modular design of the Unix model, sharing components is relatively common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and some systems also include GNU utilities in their distributions.
In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD operating systems are a continuation of the basis of the Unix design, and are derivatives of Unix:
I think the Linux phenomenon is quite delightful, because it draws so strongly on the basis that Unix provided. Linux seems to be among the healthiest of the direct Unix derivatives, though there are also the various BSD systems as well as the more official offerings from the workstation and mainframe manufacturers.
In the same interview, he states that he views both Unix and Linux as “the continuation of ideas that were started by Ken and me and many others, many years ago”.
OpenSolaris was the open-source counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. As of 2014, illumos remains the only active open-source System V derivative.
In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group) and in 1995 sold the related business operations to Santa Cruz Operation (SCO). Whether Novell also sold the copyrights to the actual software was the subject of a federal lawsuit in 2006, SCO v. Novell, which Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case. Unix vendor SCO Group Inc. accused Novell of slander of title.
The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as “UNIX” (others are called “Unix-like”).
By decree of The Open Group, the term “UNIX” refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group’s Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system’s vendor pays a substantial certification fee and annual trademark royalties to The Open Group. Systems that have been licensed to use the UNIX trademark include AIX, EulerOS, HP-UX, Inspur K-UX, IRIX, Solaris, Tru64 UNIX (formerly “Digital UNIX”, or OSF/1), macOS, and a part of IBM z/OS. Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant.
Wikipedia tells me that Takkyu Ishino produced the soundtrack album, and it’s a terrific sampling of Detroit, European, and Japanese techno. The selection on the two tribute albums is similar, but updated for 2004. Stand Alone Complex 2nd Gig was airing on TV roughly when the two tribute albums were released, so they may have been tie-in or promotional products.
So, these three albums are amazing. From the artists assembled, to the particular tracks, to the packaging and art direction. Way more hits than misses. A lot of great tracks didn’t make it into this hour: CJ Bolland’s tunes (as BCJ), The Advent’s tunes, Adam Beyer… I tried hard to get Mijk Van Dijk’s Fuchi Koma in, but couldn’t find a spot for it. Maybe next time.
I bought these discs on eBay in ’05 or ’06 and unfortunately mine are bootlegs. Specifically the Ever Anime bootleg of the Megatech Body CD, and the Miya Records bootlegs of the first and second Tribute albums. I’ve wanted to record a mix of these tracks since then, and am happy to have done it now.
Oh, and feel free to fast forward past my airheaded rambling at the beginning of the recording.