This mix came together when a good friend of mine gifted me a digital copy of Aril Brikha‘s excellent 2007 album Ex Machina. I predictably love the broken-beat tunes: Lady 707, Leaving Me, and Gres. Ex Machina is on Peacefrog Records, who have a bandcamp page up with a few dozen classic releases, very much worth checking out. (Classics like John Beltran’s Ten Days Of Blue, the source of track #15 in this mix).
And last I’d like to highlight the oddity that is Spectre by BCJ, better known as CJ Bolland. Bolland used the alias BCJ only a few times, according to Discogs. The track itself is from the 1997 Megatech Body.CD, which I covered back in MTT 158. I need to do some research into Bolland’s music, as I don’t know much about it, and it appears he has more breakbeat-y tracks like Spectre.
Another good one this week. A few long tracks, a few classics, a few oddities. Kind of bouncy. I’ll be back next week with something slower, as usual.
Three tracks from the mid-2000s electro label Satamile feature in this mix: #7 Getgetto by E.M.S., #8 All Torque by Silicon Scally, and #9 Freezie’s Gonna Funk Ya by Freezie Freekie. I picked up all three of those on vinyl in the mid-2000s in local shops. I’m a fan of Satamile; I miss their style of break-y electro, and there doesn’t seem to be much of it being released right now.
Two old Electron Industries tracks feature, right after the Satamile block: #10 10ft Bass by label head The Octagon Man (AKA J. Saul Kane) and #11 Phaze Test (String Phase) by Eon. Phaze Test in particular is excellent; both sides are raucous electro workouts. The 30 Beat Cycle mix on the B-side is exactly as the title suggests: 28 beats in straight 4/4, then an extra two beats to mess with you. Fun and challenging to play in a set.
This was a good one. A bunch of old vinyl and a few new(ish)-to-me pieces from Kerr Knoll, M3taN01a, and Robert Cosmic. Fun times. I’ve run out of ideas for new shows, so I’ve got no idea what to do next week, but I’ll do my best to make it good.
Balena is a complete set of tools for building, deploying, and managing fleets of connected Linux devices. We provide infrastructure for fleet owners so they can focus on developing their applications and growing their fleets with as little friction as possible.
Their final thoughts, from their website:
Balena has been built to bring a cloud microservices workflow to the world of edge devices and remove friction for fleet owners wherever possible. Our aim is to make IoT applications as easy to deploy as web applications (or even easier!). Along the way, we’ve discovered and assimilated interesting ideas from both the cloud and embedded worlds, and even invented ideas that only apply to the new paradigm balena represents.
The balenaFin is a Raspberry Pi Compute Module carrier board that can run all the software that the Raspberry Pi can run, but hardened for deployment in the field. Even better, it’s offered at an accessible price point relative to other professional boards.
Track four Black Electric is from the only release by that band, their 2000 self-titled EP on Puzzlebox. Colors and Come To Me, A2 and B1, are also great; I’m sure I’ll play them here eventually. There’s a bunch of Puzzlebox on Beatport but I couldn’t find this one, unfortunately.
A couple of old tracker mods snuck in, specifically track 13 Arsa Bamk by Dune (best known as Brothomstates) and track 14 Honeynut Loops by Chavez. The latter of those is from the netlabel Milk, which I plan to write a post about here (eventually).
This is a pretty scatterbrained selection, and it could’ve used one or two more tracks to break up the long ones, but I’m happy. Good for a few hours of planning. I’ll be back next week with… not sure. Something slower.
Distributed systems are all around us: the Google search engine, the Amazon platform, Netflix, online gaming, money transfer and banking.
The best-known form is the client-server model.
A distributed system is a collection of separate, independent software or hardware components, referred to as nodes, that are linked together by a network and work together coherently, coordinating and communicating through message passing or events to fulfill one end goal. The nodes of the system can be unstructured or highly structured, depending on the requirements. The system’s complexity is hidden from the end user or computer, and it appears as a single system or computer.
A simpler explanation: a bunch of independent computers that work together to solve a problem.
Characteristics of a Distributed System
No Shared Clock: Each computer in a distributed system has its own internal clock to supply the value of the current time to local processes.
No Shared Memory: Each computer has its own local memory (as can be seen in figure (c) when comparing parallel vs. distributed systems).
Resource Sharing: Resource sharing means that the existing resources in a distributed system can be accessed remotely across multiple computers in the system. Computers in distributed systems share resources like hardware (disks and printers), software (files, windows and data objects) and data. Hardware resources are shared for reductions in cost and convenience; data is shared for consistency and exchange of information.
Concurrency: Concurrency is the property of a system that multiple activities are executed at the same time. In a distributed system, this concurrent execution takes place in different components running on multiple machines, and these activities may interact with one another. Concurrency reduces the latency and increases the throughput of the distributed system.
Fault Tolerance (High Availability): In a distributed system, hardware, software, or the network can fail. The system must be designed in such a way that it remains available even after something has failed.
Scalability: Scalability is mainly concerned with how the distributed system handles growth as the number of users increases. Mostly we scale a distributed system by adding more computers to the network. Components should not need to be changed when we scale the system; they should be designed to be scalable from the start.
Heterogeneity and Loose Coupling: In distributed systems, components can vary in networks, computer hardware, operating systems, programming languages, and implementations by different developers.
Transparency: Distributed systems should be perceived by users and application programmers as a whole rather than as a collection of cooperating components. Transparency can be of various types, like access, location, concurrency, replication, etc.
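To make the concurrency characteristic concrete, here is a minimal single-machine sketch in Python (standard library only; the function and task names are illustrative, not from any particular system): several independent activities run at the same time, which lowers overall latency compared with running them one after another.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    """Simulate an I/O-bound activity, e.g. a call to a remote node."""
    time.sleep(0.1)
    return f"response-{request_id}"

start = time.monotonic()
# Execute five activities concurrently instead of sequentially.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(handle_request, range(5)))
elapsed = time.monotonic() - start

print(results)
print(elapsed)  # well under the ~0.5 s a strictly serial run would take
```

In a real distributed system the activities would run on different machines rather than in threads, but the effect on latency and throughput is the same in spirit.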
Parallel and Distributed Computing
Distributed systems are groups of networked computers which share a common goal for their work. The terms “concurrent computing“, “parallel computing“, and “distributed computing” have a lot of overlap, and no clear distinction exists between them.
The same system may be characterized both as “parallel” and “distributed”; the processors in a typical distributed system run concurrently in parallel.
Parallel computing may be seen as a particular tightly coupled form of distributed computing and distributed computing may be seen as a loosely coupled form of parallel computing.
Nevertheless, it is possible to roughly classify concurrent systems as “parallel” or “distributed” using the following criteria:
In parallel computing, all processors may have access to a shared memory to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
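The second criterion can be illustrated with a small Python sketch (standard library only; it uses the POSIX `fork` start method to keep the example short, and the function names are mine): two processes have no shared memory, so the only way to exchange information is to pass messages over a channel.

```python
from multiprocessing import get_context

def worker(conn):
    # This process has its own private memory; the only way to
    # exchange information with the parent is to pass messages.
    msg = conn.recv()        # receive a message from the parent
    conn.send(msg * 2)       # reply with a derived value
    conn.close()

ctx = get_context("fork")    # POSIX-only start method, keeps the sketch simple
parent_conn, child_conn = ctx.Pipe()
p = ctx.Process(target=worker, args=(child_conn,))
p.start()
parent_conn.send(21)         # message from parent to child
result = parent_conn.recv()  # message from child to parent
p.join()
print(result)                # 42
```

Replacing the pipe with a network socket turns this into true distributed computing; the shape of the program does not change.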
The situation is further complicated by the traditional uses of the terms parallel algorithm and distributed algorithm, which do not quite match the above definitions of parallel and distributed systems.
Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
Client–server: architectures where smart clients contact the server for data, then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change.
Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the Bitcoin network.
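The client–server pattern at the top of the list can be sketched in a few lines of Python (standard library sockets; the port is OS-assigned and the echo "service" is a placeholder for real business logic):

```python
import socket
import threading

def serve_once(server_sock):
    """Server side: accept one client, process its request, send a reply."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())  # the service's "logic"

# The server listens on an OS-assigned localhost port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

# Client side: contact the server, send a request, receive the result.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024).decode()

print(reply)  # echo: hello
```

Here both ends live in one process for brevity; in a real deployment the server runs on a different machine and the client finds it by hostname.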
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes.
Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship.
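One common shape of this coordination — a master handing tasks to workers and collecting replies — can be sketched with in-process queues standing in for the network (Python standard library; a real system would pass these messages over sockets or a message broker):

```python
import threading
import queue

task_queue = queue.Queue()    # messages from master to workers
result_queue = queue.Queue()  # messages from workers back to master

def worker():
    while True:
        n = task_queue.get()
        if n is None:                # sentinel: no more work
            break
        result_queue.put(n * n)      # send the result back as a message

# The master starts the workers and distributes the tasks.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for n in range(10):
    task_queue.put(n)
for _ in workers:
    task_queue.put(None)             # one shutdown sentinel per worker
for w in workers:
    w.join()

results = sorted(result_queue.get() for _ in range(10))
print(results)                       # squares of 0..9
```

The sentinel trick is one simple way to signal shutdown over a message channel; real protocols use explicit control messages for the same purpose.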
A database-centric architecture, in particular, lets processes coordinate without direct messaging: they communicate by reading and writing a shared, possibly networked, database. This enables distributed computing functions both within and beyond the boundaries of that database.
Reasons for using distributed systems and distributed computing may include:
The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.
There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
Examples of distributed systems and applications of distributed computing include the systems mentioned at the start: web search engines, e-commerce platforms, media streaming, online gaming, and money transfer and banking.
Distributed computing is a field of computer science that studies distributed systems.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
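That divide-into-tasks idea reads, in miniature, like this Python sketch (standard library only, using the POSIX `fork` start method; the problem — summing squares — is just a placeholder for a real workload):

```python
from multiprocessing import get_context

def solve_task(chunk):
    """Each task is solved independently by one worker process."""
    return sum(x * x for x in chunk)

data = list(range(1_000))
# Divide the problem into four independent tasks.
chunks = [data[i::4] for i in range(4)]

ctx = get_context("fork")  # POSIX-only start method, keeps the sketch short
with ctx.Pool(processes=4) as pool:
    partial_sums = pool.map(solve_task, chunks)  # results return as messages

total = sum(partial_sums)
print(total)  # equals sum(x*x for x in range(1000))
```

Each worker here is a separate OS process with its own memory, so the tasks and results really do travel as messages — the same structure a cluster of machines would use, just without the network.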