Tracks 12, 13, and 14 are all from releases on CPU Records. I’m still working through a recent large purchase of music from their shop, and they continue to be excellent. Tracks 8, 9, and 16 are taken from Solar One Music’s second and third compilations. Solar One recently celebrated their 50th release, a sweet electro-techno collaboration between The Exaltics and Paris The Black Fu called We Exist.
My current favorite here is Recall (Live) by スムー, from the TUMA 3.0 compilation. I love the interplay between the drums (kick on 14, snare on 22) and the high ringing synth. The whole comp is interesting, and it’s listed as name-your-price, so check it out.
I’ve got my regular show with Megaphysics at the arcade next week, so expect something low-energy (and low-effort) next show. Thanks all for listening, and be sure to check the tracklist links above if you’re interested in the music played here.
Computers today have a lot of power (CPU, RAM, storage, GPU), but is this power being used efficiently?
The answer is no. Computers/servers (in the sense of one server per application or task) are underutilised, and the electricity they take up is wasted. Virtualization helps to solve this problem by creating a virtualization layer between the hardware components and the user.
This enables the creation of virtual machines: virtual computers, multiple of which can run on the same physical computer/server, each of them thinking it is running on its own dedicated hardware/machine/computer.
Hypervisor: Host and Guest Machines
A hypervisor (or virtual machine monitor, VMM) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super-. The term dates to circa 1970; in the earlier CP/CMS (1967) system, the term Control Program was used instead. CP/CMS is the predecessor of IBM z/VM.
Type I or native or bare-metal Hypervisors: These hypervisors run directly on the host’s hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM’s z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V and VMware ESXi (formerly ESX).
Type II or hosted Hypervisors: These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.
The distinction between these two types is not always clear. For instance, Linux’s Kernel-based Virtual Machine (KVM) and FreeBSD’s bhyve are kernel modules that effectively convert the host operating system into a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.
The first hypervisors providing full virtualization were the test tool SIMMON and IBM’s one-off research CP-40 system, which began production use in January 1967, and became the first version of IBM’s CP/CMS operating system. CP-40 ran on a S/360-40 that was modified at the IBM Cambridge Scientific Center to support dynamic address translation, a feature that enabled virtualization. Prior to this time, computer hardware had only been virtualized to the extent needed to allow multiple user applications to run concurrently, such as in CTSS and IBM M44/44X. With CP-40, the hardware’s supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.
Programmers soon implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM first shipped this machine in 1966; it included page-translation-table hardware for virtual memory, and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that its “official” operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 into the early 1970s, in source code form without support.
CP/CMS formed part of IBM’s attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowed beta or experimental versions of operating systems—or even of new hardware—to be deployed and debugged, without jeopardizing the stable main production system, and without requiring costly additional development systems.
IBM announced its System/370 series in 1970 without the virtual memory feature needed for virtualization, but added it in the August 1972 Advanced Function announcement.
Virtualization has been featured in all successor systems (all modern-day IBM mainframes, such as the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line). The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, and not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM’s “other” mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux on IBM Z.
As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG (“Diagnose”, opcode x’83’) instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the “host” operating system). When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require altering or extending the system’s virtualization of SVC.
In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions (LPAR). (This is a form of paravirtualization, as we will see in the next section, Types of Virtualization.)
Types of Virtualization
Full Virtualization
a. Full Software Virtualization – Software Assisted (BT, Binary Translation)
b. Full Software Virtualization – Hardware Assisted (VT, Virtualization Technology)
Operating System Virtualization (Containerisation)
Hybrid Virtualization (Hardware Virtualized with PV drivers)
Full Software Virtualization – Software Assisted (BT, Binary Translation)
VirtualBox (32-bit guests)
VMware Workstation (32-bit guests)
VMware Server (formerly VMware GSX Server), a Type II hypervisor installed on top of Linux or Windows
In full software virtualization, all the hardware is simulated by a software program. Each device driver and process in the guest OS “believes” it is running on actual hardware, even though the underlying hardware is really a software program. Software virtualization even fools the OS into thinking that it is running on hardware.

One of the advantages of full software virtualization is that you can run any OS on it. It doesn’t matter if the OS in question understands the underlying host hardware or not. Thus, older OSs and specialty OSs can run in this environment. The architecture is very flexible because you don’t need a special understanding of the OS or hardware. The OS hardware subsystem discovers the hardware in the normal fashion. It believes the hardware is really hardware. The hardware types and features that it discovers are usually fairly generic and might not be as full-featured as actual hardware devices, though the system is functional.

Another advantage of full software virtualization is that you don’t need to purchase any additional hardware. With hardware-assisted software virtualization, you need to purchase hardware that supports advanced VM technology. Although this technology is included in most systems available today, some older hardware does not have this capability. To use this older hardware as a virtual host, you must use either full software virtualization or paravirtualization.
NOTE: Only hardware-assisted software virtualization requires advanced VM hardware features; full software virtualization does not. Oracle VM and VMware ESX work on older hardware that does not have any special CPU features. This type of virtualization is also known as emulation.
Unfortunately, full software virtualization adds overhead. This overhead translates into extra instructions and CPU time on the host, resulting in a slower system and higher CPU usage. With full software virtualization, the CPU instruction calls are trapped by the Virtual Machine Monitor (VMM) and then emulated in a software program. Therefore, every hardware instruction that would normally be handled by the hardware itself is now handled by a program.

For example, when the disk device driver makes an I/O call to the “virtual disk,” the software in the VM system intercepts it, then processes it, and finally makes an I/O to the real underlying disk. The number of instructions to perform an I/O is greatly increased.

With networking, even more overhead is incurred, since a network switch is simulated in software. Depending on the amount of network activity, the overhead can be quite high. In fact, with severely overloaded host systems, you could possibly see network delays from the virtual switch itself. This is why sizing is so important.
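To get an intuition for that overhead, here is a toy Python sketch. It is purely illustrative (a real VMM is vastly more complex and works at the instruction level), but it shows the shape of the problem: every privileged "guest instruction" costs several host-side steps.

```python
# Toy illustration of trap-and-emulate overhead (not a real VMM).
# Each privileged "guest instruction" is intercepted by the monitor
# and serviced by a multi-step software routine on the host.

class ToyVMM:
    def __init__(self):
        self.host_steps = 0          # host-side work performed
        self.virtual_disk = {}       # the "disk" is just a dict

    def trap(self, instruction, *args):
        """Intercept a privileged guest instruction and emulate it."""
        if instruction == "DISK_WRITE":
            sector, data = args
            self.host_steps += 1     # decode the trapped instruction
            self.host_steps += 1     # translate guest -> host address
            self.virtual_disk[sector] = data
            self.host_steps += 1     # perform the backing I/O
        else:
            raise NotImplementedError(instruction)

vmm = ToyVMM()
# One guest I/O instruction costs several host steps:
vmm.trap("DISK_WRITE", 0, b"boot")
print(vmm.host_steps)  # 3 host steps for 1 guest instruction
```

Multiply that amplification by every disk block and network packet a busy guest produces, and the sizing concern above becomes clear.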
Full Software Virtualization – Hardware Assisted (VT, Virtualization Technology), also called Hardware-Assisted Software Virtualization
Hardware-assisted software virtualization is available with CPU chips with built-in virtualization support. Recently, with the introduction of the Intel VT and AMD-V technology, this virtualization type has become commoditized. This technology was first introduced on the IBM System/370 computer. It is similar to software virtualization, with the exception that some hardware functions are accelerated and assisted by hardware technology. Similar to software virtualization, the hardware instructions are trapped and processed, but this time using hardware in the virtualization components of the CPU chip.

By using hardware-assisted software virtualization, you get the benefits of software virtualization, such as the ability to use any OS without modifying it, and, at the same time, achieve better performance. Because of virtualization’s importance, significant effort is going into providing more support for hardware-assisted software virtualization. Hardware-assisted virtualization also supports any operating system. Using hardware-assisted software virtualization, Oracle VM lets you install and run Linux and Solaris x86-based OSs as well as Microsoft Windows. With other virtualization techniques, Oracle VM only allows Linux OSs. This technique also makes migrating from VMware systems to Oracle VM easier.

As mentioned earlier, both Intel and AMD are committed to support for hardware-assisted software virtualization. They both introduced virtualization technology around the mid-2005–2006 period, which is not that long ago, and their support has improved the functionality and performance of virtualization. Intel and AMD do not yet fully support paravirtualization. Hardware-assisted software virtualization components are changing at a very fast pace, however, with new features and functionality being introduced continually.
NOTE: Hardware-assisted virtualization is really the long-term virtualization solution. Applications such as Xen will be mainly used for management.
Intel supports virtualization via its VT-x technology. The Intel VT-x technology is now part of many Intel chipsets, including the Pentium, Xeon, and Core processor families. The VT-x extensions support an Input/Output Memory Management Unit (IOMMU) that allows virtualized systems to access I/O devices directly. Ethernet and graphics devices can now have their DMA and interrupts directly mapped via the hardware. In the latest versions of the Intel VT technology, extended page tables have been added to allow direct translation from guest virtual addresses to physical addresses.
AMD supports virtualization via the AMD-V technology. The AMD-V technology includes a rapid virtualization indexing technology to accelerate virtualization. This technology is designed to assist with the virtual-to-physical translation of pages in a virtualized environment. Because this operation is one of the most common, optimizing this function greatly enhances performance. AMD virtualization products are available on both the Opteron and Athlon processor families.

The virtual machine that uses the hardware-assisted software virtualization model has become known as the Hardware Virtual Machine, or HVM. This terminology will be used throughout the rest of this chapter and refers to the fully software-virtualized model with hardware assist.
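On Linux you can check whether your CPU advertises these extensions by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in /proc/cpuinfo. A small sketch, with the parsing split into a function so it can also be exercised on sample text:

```python
# Check for hardware virtualization support on Linux by looking for
# the "vmx" (Intel VT-x) or "svm" (AMD-V) flag in /proc/cpuinfo.

def virt_support(cpuinfo_text):
    """Return 'vmx', 'svm', or None based on the cpuinfo flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("virtualization flag:", virt_support(f.read()))
    except FileNotFoundError:
        print("no /proc/cpuinfo here (not Linux)")
```

If neither flag shows up, the machine can still do full software virtualization or paravirtualization, just not the hardware-assisted kind.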
Here is the list of enterprise software which supports hardware-assisted full virtualization and falls under hypervisor type 1 (bare metal):
VMware ESXi /ESX
KVM (Type II, but behaves as a Type I hypervisor)
The following virtualization products fall under hypervisor type 2 (hosted):
VMware Workstation (64-bit guests only)
VirtualBox (64-bit guests only)
VMware Server (retired)
End of Part I (Episode 14) … Start of Part II (Episode 15)
In paravirtualization, the guest OS is aware of and interfaces with the underlying host OS. A paravirtualized kernel in the guest understands the underlying host technology and takes advantage of that fact. Because the host OS is not faking the guest 100 percent, the amount of resources needed for virtualization is greatly reduced. In addition, paravirtualized device drivers for the guest can interface with the host system, reducing overhead. The idea behind paravirtualization is to reduce both the complexity and overhead involved in virtualization. By paravirtualizing both the host and guest operating system, very expensive functions are offloaded from the guest to the host OS. The guest essentially makes special system calls that allow these functions to run within the host OS.

When using a system such as Oracle VM, the host operating system acts in much the same way as a guest operating system. The hardware device drivers interface with a layer known as the hypervisor. The hypervisor, which is also known as the Virtual Machine Monitor (VMM), was mentioned earlier in this chapter. There are two types of hypervisor: the type 1 hypervisor runs directly on the host hardware; the type 2 or hosted hypervisor runs in software.
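The difference between a trapped privileged instruction and a paravirtual "hypercall" can be sketched in a toy way (illustrative only; the step counts are made up, real savings depend on the workload): the paravirtualized guest skips the trap-and-decode work because it asks the host directly.

```python
# Toy comparison: trapped I/O (full virtualization) vs a direct
# hypercall (paravirtualization). Step counts are illustrative only.

class Host:
    def __init__(self):
        self.steps = 0
        self.disk = {}

    def trapped_io(self, sector, data):
        """Full virtualization: the VMM must catch and decode the
        privileged instruction before it can service it."""
        self.steps += 1            # take the trap
        self.steps += 1            # decode the faulting instruction
        self.steps += 1            # translate guest -> host address
        self.disk[sector] = data
        self.steps += 1            # do the real I/O

    def hypercall_io(self, sector, data):
        """Paravirtualization: the guest kernel knows it is a guest
        and calls the host API directly, skipping trap/decode."""
        self.steps += 1            # enter the hypervisor
        self.disk[sector] = data
        self.steps += 1            # do the real I/O

host = Host()
host.trapped_io(0, b"a")
full_cost = host.steps
host.steps = 0
host.hypercall_io(1, b"b")
pv_cost = host.steps
print("trapped:", full_cost, "hypercall:", pv_cost)
```

This is exactly why paravirtualized device drivers pay off most on I/O-heavy workloads: the fixed trap/decode cost is paid on every operation.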
Examples of Paravirtualization: (I added IBM’s and HP’s Integrity solutions as I think they belong here, but correct me if I am wrong)
Oracle VM for SPARC (LDOM)
Oracle VM for X86 (OVM)
IBM PowerVM (AIX)
HP Integrity VSE
In hardware-assisted full virtualization, guest operating systems are unmodified, which involves many VM traps and thus high CPU overhead that limits scalability. Paravirtualization is a complex method where the guest kernel needs to be modified to inject the API. Considering these issues, engineers have come up with hybrid virtualization: a combination of both full and paravirtualization. The virtual machine uses paravirtualization for specific hardware drivers (where there is a bottleneck with full virtualization, especially with I/O- and memory-intensive workloads), and full virtualization for other features. The following products support hybrid virtualization.
Oracle VM for x86
The following diagram will help you to understand how VMware supports both full virtualization and hybrid virtualization. RDMA (Remote Direct Memory Access) uses the paravirtual driver to bypass the VMkernel in hardware-assisted full virtualization.
Operating System – OS Level Virtualization
Operating system-level virtualization is widely used. It is also known as “containerization”. The host operating system kernel allows multiple user spaces, aka instances. In OS-level virtualization, unlike other virtualization technologies, there is very little or no overhead, since it uses the host operating system kernel for execution. Oracle Solaris Zones is one of the famous containers in the enterprise market. Here is a list of other containers.
Linux Containers (LXC)
Which ones do I use?
In My Homelab
VMware ESXi 6.7 on my servers (HP DL380 G6 and some others)
Proxmox based on Debian on a 3-server cluster (small Lenovo M93p machines with 16GB RAM, with some central vSAN or similar storage for them to use for storing VMs and being able to do guest VM migrations, etc.), installed as a type II hypervisor but behaving as a type I on top of Debian
VMware Fusion and Workstation on my Mac and Linux machines (even on a Windows laptop)
VirtualBox in some special cases, to run some older OSs which tend to run better in VirtualBox (OS/2 or Windows NT)
QEMU for some special cases like HP-UX 11.11i 32-bit and a patched AIX 7.2 or AIX 5.1
KVM on my Fedora 31 workstation (Fujitsu R940)
On My Production Cloud (Hetzner Dedicated Server)
Proxmox based on Debian, with a batch of public IPv4 and IPv6 addresses for the VMs to be reachable when and if needed.
I have a VM running on Proxmox which acts as a Docker host, and that’s where I do things with Docker and containerization. (I do want to have, one day, a small Kubernetes cluster, maybe built from Raspberry Pis or perhaps installed inside my Proxmox or ESXi in my homelab, to tinker with.)
Proxmox Virtual Environment (Proxmox VE; short PVE) is an open-source server virtualization environment. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows deployment and management of virtual machines and containers. Proxmox VE includes a Web console and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based with LXC (starting from version 4.0, replacing the OpenVZ used in versions up to and including 3.4), and full virtualization with KVM. It comes with a bare-metal installer and includes a Web-based management interface.
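As a hedged sketch of what that REST API looks like in practice: Proxmox VE listens on port 8006 and accepts API-token authentication. The hostname and token below are placeholders, and the request is only constructed here, not actually sent.

```python
# Sketch of a Proxmox VE REST API call using an API token.
# "pve.example.com" and the token value are placeholders; adjust
# them for your own cluster. The request is built but not sent.
import urllib.request

PVE_HOST = "pve.example.com"   # hypothetical hostname
TOKEN = "root@pam!monitor=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

def list_nodes_request(host, token):
    """Build a GET /api2/json/nodes request (lists cluster nodes)."""
    url = f"https://{host}:8006/api2/json/nodes"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"PVEAPIToken={token}")
    return req

req = list_nodes_request(PVE_HOST, TOKEN)
print(req.full_url)
# To actually send it: urllib.request.urlopen(req) -- that needs a
# valid certificate (or a custom SSL context) and a real token.
```

Third-party dashboards and automation tools talk to Proxmox through exactly this kind of endpoint.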
The name Proxmox itself has no meaning, and was chosen because the domain name was available.
Development of Proxmox VE started when Dietmar and Martin Maurer, two Linux developers, found out OpenVZ had no backup tool and no management GUI. KVM was appearing at the same time in Linux, and was added shortly afterwards. The first public release took place in April 2008, and the platform quickly gained traction. It was one of the few platforms providing out-of-the-box support for container and full virtualization, managed with a Web GUI similar to commercial offerings.
Proxmox VE is a powerful open-source server virtualization platform that manages two virtualization technologies – KVM (Kernel-based Virtual Machine) for virtual machines and LXC for containers – with a single web-based interface. It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.
Server Virtualization supporting Kernel-based Virtual Machine (KVM) and Container-based virtualization with Linux Containers (LXC).
Proxmox VE can be clustered across multiple server nodes.
Since version 2.0, Proxmox VE offers a high availability option for clusters based on the Corosync communication stack. Individual virtual servers can be configured for high availability, using the Red Hat cluster suite. If a Proxmox node becomes unavailable or fails, the virtual servers can be automatically moved to another node and restarted. The database- and FUSE-based Proxmox Cluster filesystem (pmxcfs) makes it possible to perform the configuration of each cluster node via the Corosync communication stack.
At least since 2012, in an HA cluster, live virtual machines can be moved from one physical host to another without downtime. KVM and OpenVZ live migration has been supported since Proxmox VE 1.0, released on 29 October 2008.
Red Hat OpenShift: OpenShift is a family of containerization software developed by Red Hat. Its flagship product is the OpenShift Container Platform, an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.
Red Hat OpenStack: OpenStack is a free, open-standard cloud computing platform, mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds, where virtual servers and other resources are made available to users. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users manage it either through a web-based dashboard, through command-line tools, or through RESTful web services.
When You Simulate/Emulate a Whole Computer from the 80s or 90s (we could see this as a form of Virtualization if We want to)
The most recent track here is Close The Brain by Scape One from his January 2020 release Click Click Drone. I absolutely love this album, and can’t recommend it enough. 8 new spare stripped-down throwback electro tracks by the prolific Mr. Baggaley. So prolific, in fact, he already has a new album out called Cosmic Trax, the first song of which sounds like a Jive Rhythm Trax tribute. Automatic thumbs-up for that, will have to pick it up.
There’s a lot of old favorites in this hour. Ectomorph, I-F’s Theme From PACK, Umwelt’s Secret Of A Black World. New favorites too, particularly N-ter’s remix of Maar by Ivna Ji. And now I’m out of things to say. I think I’ll do something slower next show, more from Adam Jay’s new LP, more from Scape One’s Click Click Drone. Thanks all for listening, and if you like these tunes then please support the artists through the links above. Catch you next time.
A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network.
Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. However, nowadays, with apps and add-ons in the case of Synology and QNAP, or jails (as we will see) in the case of FreeNAS, it is possible to extend the services offered by the NAS.
For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser
A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used
For example, FreeNAS or NAS4Free, both open source NAS solutions designed for commodity PC hardware, are implemented as a stripped-down version of FreeBSD.
NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID.
NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB (Server Message Block) (used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). NAS units rarely limit clients to a single protocol.
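For reference, these file-sharing protocols ride on well-known TCP ports, which is handy to know when opening firewall rules for a NAS. A quick reference as a small sketch:

```python
# Default TCP ports for the common NAS file-sharing protocols
# mentioned above (quick reference for firewall rules).
NAS_PROTOCOL_PORTS = {
    "NFS": 2049,   # Network File System (UNIX/Linux)
    "SMB": 445,    # Server Message Block (modern direct-SMB port)
    "AFP": 548,    # Apple Filing Protocol (older macOS)
    "NCP": 524,    # NetWare Core Protocol (Novell NetWare / OES)
}

for proto, port in sorted(NAS_PROTOCOL_PORTS.items()):
    print(f"{proto}: TCP {port}")
```

Older SMB deployments also use NetBIOS ports (137–139), but modern clients speak SMB directly over 445.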
Off-the-shelf or ready-made consumer NAS (Synology, QNAP, Asustor)
I have a QNAP TS-453A NAS and a DIY whitebox made with Xpenology for the moment, and also a VM with FreeNAS to do some testing with ZFS and vdevs in a virtual environment.
In my opinion (not counting the price of the hard drives, as those are the same in both DIY and off-the-shelf diskless configurations), You can get more for Your money if You build something on Your own; however, You might need to give up some of the pros coming with an off-the-shelf (brand) solution like Synology or QNAP:
Pros for Off the Shelf:
Unpack, add disks, and it is ready to go
Nice, mostly easy-to-navigate web interface to set things up and have it up and running
Support from Manufacturer
Additional apps to extend functionality (also apps for Your mobile phone or tablet, all those Synology DS-series apps or QNAP apps as well)
Apart from the hard drives (if You purchased it in a diskless configuration), one single warranty for the whole box
low power consumption
Smaller size / tighter integrated tiny boxes
Cons for Off the Shelf:
Restricted when it comes to what You can get (WYSIWYG)
Expensive compared to DIY/whitebox solutions; You get less for the same amount of money
Absolutely NO or limited expandability (the ones offering additional drive bays via expansion units connected via cable cost nearly as much as another NAS unit)
Mostly proprietary motherboards; You can not simply swap them out
Can’t replace or upgrade bits and pieces of the HW later (except RAM, and some offer a PCIe slot for an expansion card or 10Gbit networking, but You are limited in what You can do)
You do not have total control
Can be complicated to recover data if Your NAS dies (however, the most-used filesystems are Btrfs, ext4 and SHR (Synology Hybrid RAID), which is a software RAID solution)
Pros for Whitebox / DIY
You have total control (HW, SW , All the components)
You can expand it later as You go
Cheaper compared to Consumer/SMB NAS solutions
You can tailor it to Your needs 100% no need to compromise
Cons for Whitebox / DIY
Need to invest time and effort in planning and sourcing the components and building one for Yourself
No Support except enthusiast forums or Bulletin Boards and Documentation of the SW and HW You selected
Normally a bit bigger and consuming more power than consumer/SMB NAS offerings, but You can really get on par with what You could have got already boxed and prepared
Can get more for Your money. More CPU, RAM, Harddrive bays everything.
My experience with Off the shelf NAS QNAP
I have a QNAP TS-453A with 4x 10TB Western Digital Gold datacenter hard drives in a RAID 10 configuration (2x RAID 1), giving me a total of 18TB usable space.
I never had any issue with it, except the 2x times it happened that it did not want to boot up fully from a sleep/power-off state… since then I keep it on all the time, and I always have backups made of the important stuff there (actually, how best to do that on my end is still in the works… planning, planning).
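The roughly 18TB figure above checks out once you account for RAID 10 mirroring and the decimal-vs-binary unit difference: drives are sold in decimal terabytes, but operating systems usually report binary tebibytes (TiB). A quick back-of-the-envelope sketch:

```python
# Why 4 x 10 TB in RAID 10 shows up as roughly "18 TB": mirroring
# halves the raw capacity to 20 decimal terabytes, and the OS then
# reports that in binary tebibytes (TiB).
drives = 4
drive_tb = 10                    # decimal terabytes, as marketed
raw_bytes = drives * drive_tb * 10**12

usable_bytes = raw_bytes / 2     # RAID 10 = striped mirrors
usable_tib = usable_bytes / 2**40

print(f"usable: {usable_bytes / 10**12:.0f} TB = {usable_tib:.1f} TiB")
```

So 20 decimal TB of usable RAID 10 space is about 18.2 TiB as reported by the NAS, before any filesystem overhead.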
DIY / Whitebox builds
I have 1x 2U server put together from parts and bits and pieces, which has been in a testing phase for nearly 6 months now with 2x SSDs, trying out Xpenology.
Xpenology might not be everyone’s cup of tea out there, as legally it is very similar to what a Hackintosh is (one day I will make an episode on Hackintosh as well, as I use one to stream and edit this podcast).
Xpenology allows You to run Synology’s DiskStation Manager, the software which practically runs on every consumer-grade Synology box, and lets You boot and install it on ordinary whitebox/DIY hardware You put together, while maintaining all the pros of Synology.
You can install the same apps and use all the features as if it were a proper Synology box. It is normally a few versions or updates behind (which does not bother me in the least), and I never had any issue with it.
I started to use it as I was interested in Synology’s NAS offerings just to see how it differs from QNAP
Actually, I plan to either use it as is and expand the hard drive configuration to 4x hard drives, using either a SATA card or HBA card which Synology DiskStation Manager would accept (I did some digging, and I think I found one which would work in this case, as the motherboard I used had only 3x SATA slots I could use 🙁 ), or just transfer it all to a proper FreeNAS box and continue like that.
FreeNAS I have tinkered with in a virtual machine, playing around with it, setting it up and checking its settings, but I have not run it on real dedicated hardware so far (one day I will).
Component selection for DIY Whitebox
Most of the time You have to do this a bit backwards, meaning many times You need to decide first on the SW You are going to use, and afterwards look for recommendations or build guides regarding that SW.
Some, like FreeNAS, have great documentation and forums where You can find more information than You will ever need.
For some others, You have to go through third-party forums and places like Reddit to put together bits and pieces and see what will be the best fit for Your SW.
Whenever I can I like to stick to server or workstation grade components, Xeon CPUs, Intel CPUs for power consumption and compatibility, ECC RAM , good quality power supplies.
I also prefer 19-inch rackmount sizes, preferably 2U with good airflow (1U tends to be way louder), just because I can rack them nicely in my server rack; otherwise I’d go for something small and more restrictive.
Hard drives and Storage Technology
Do not cheap out on the Hard drives. Building the nicest NAS with awesome SW and HW and then putting shitty Hard drives inside is like putting the cheapest wheels on the best car or motorcycle out there
When it comes to storage technology I like both Raid and ZFS
Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word “RAID” followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.
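The capacity side of those trade-offs is easy to compute. Here is a small sketch (ignoring filesystem and metadata overhead) of usable space per common RAID level, given n identical drives:

```python
# Usable capacity for the common RAID levels, given n identical
# drives of a given size (ignores filesystem/metadata overhead).
def raid_usable(level, n, drive_size):
    if level == "RAID0":
        return n * drive_size            # striping, no redundancy
    if level == "RAID1":
        return drive_size                # everything mirrored
    if level == "RAID5":
        return (n - 1) * drive_size      # one drive's worth of parity
    if level == "RAID6":
        return (n - 2) * drive_size      # two drives' worth of parity
    if level == "RAID10":
        return n * drive_size // 2       # striped mirrors
    raise ValueError(level)

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, raid_usable(level, 4, 10), "TB usable from 4 x 10 TB")
```

The redundancy you give up in capacity is what buys the protection against drive failures and unrecoverable read errors described above.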
In 2010, Sun Microsystems was acquired by Oracle and ZFS became a registered trademark belonging to Oracle. Oracle stopped releasing updated source code for new OpenSolaris and ZFS development, effectively reverting Oracle’s ZFS to closed source. In response, the illumos project was founded, to maintain and enhance the existing open source Solaris, and in 2013 OpenZFS was founded to coordinate the development of open source ZFS. OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems. In 2017, one analyst described OpenZFS as “the only proven Open Source data-validating enterprise file system”
As of 2019, OpenZFS (on some platforms such as FreeBSD) is gradually being pivoted to be based upon ZFS on Linux, which has developed faster than other variants of OpenZFS and contains new features not yet ported to those other versions.
Software options for DIY/Whitebox
FreeNAS (iXsystems is its hardware brand, if you want a box built to run FreeNAS and supported at enterprise level as well)
The first new track in this mix is Fragmented Simulation by Adam Jay, from his album Inoperable Data on Detroit Underground. If electro is some combination of techno and hiphop, then Inoperable Data is tilted more to the techno end. It reminds me of The Advent’s electro tracks on Kombination Research, but much brighter. I like it; I’ll be playing more from it in the coming weeks, once I’ve had time to absorb the tracks.
Aloka’s Concave, Convex is also relatively new to me. It’s the first release on the forward-thinking Typeless Records, famous for their two Phalanx compilations. I’ve seen people talking about XT3 by Aloka recently; if you like that track, check out Concave, Convex.
That’s it for notes. Next show will be lower tempo. Maybe something suitably icy, given my local weather.
How will IoT and 5G networks make the world a better, smarter, more efficient, and safer place for everyone?
What is IoT?
When I was searching the internet for the best and most accurate explanation of what IoT is, I found many confusing, sometimes overly complicated or overly simplified explanations. To my knowledge and understanding, this is how I would sum up what IoT is:
IoT is more than just smart/connected devices, like a washing machine you can control from your smartphone while outside the house. In fact, as we will see, such devices are just one of the building blocks of what we refer to as IoT, and the flow I just described (a washing machine controlled via smartphone from outside) is just one of the many applications of IoT and one of the ways we can all benefit from it.
IoT (Internet of Things) is a system of interrelated computing devices, mechanical and digital machines, sensors, and objects (even animals or people) that are provided with UIDs (unique identifiers) and the ability to transfer data over a network without requiring human-to-computer interaction.
Based on this, we can see three important keywords or key identifiers regarding IoT:
– Uniquely identifiable devices/objects/sensors (end devices) –> this reminds me of IPv6, where everything can have its own routable IP address because the address space is so vast.
– The ability to transfer data over a network, whether the internet or even a local network (e.g. if my smart-home devices only operate on my own LAN, sending and receiving messages among themselves only when I’m on my home network) (network fabric).
– Self-sufficient, requiring no human interaction; managed and brought together via an IoT platform.
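The three attributes above can be sketched in a few lines. All names here are hypothetical (this is not a real IoT SDK), and the "network" is stood in by a plain function call:

```python
# Toy sketch of the three IoT key attributes:
# 1) a uniquely identifiable device, 2) data transfer over a network,
# 3) no human in the loop.
import json
import uuid

class SensorDevice:
    """A hypothetical end device with its own unique identifier."""
    def __init__(self):
        self.uid = str(uuid.uuid4())   # 1) uniquely identifiable

    def read(self) -> dict:
        # A real device would read actual hardware here.
        return {"uid": self.uid, "temperature_c": 21.5}

def send_over_network(payload: dict) -> str:
    # 2) data transfer; a real device would use e.g. MQTT or HTTP
    # over the LAN or the internet. Here we just serialize it.
    return json.dumps(payload)

device = SensorDevice()
message = send_over_network(device.read())  # 3) runs with no human interaction
```

The point is only that the device identifies itself in every message it sends, with no person involved in the exchange.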
Explanation of an IoT platform:
An IoT platform is a multi-layer technology that enables straightforward provisioning, management, and automation of connected devices within the Internet of Things universe. It basically connects your hardware, however diverse, to the cloud by using flexible connectivity options, enterprise-grade security mechanisms, and broad data processing powers. For developers, an IoT platform provides a set of ready-to-use features that greatly speed up development of applications for connected devices as well as take care of scalability and cross-device compatibility.
Thus, an IoT platform can wear different hats depending on how you look at it. It is commonly referred to as middleware when we talk about how it connects remote devices to user applications (or other devices) and manages all the interactions between the hardware and the application layers. It is also known as a cloud enablement platform or IoT enablement platform to pinpoint its major business value, that is, empowering standard devices with cloud-based applications and services. Finally, under the name of the IoT application enablement platform, it shifts the focus to being a key tool for IoT developers.
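The middleware role described above can be sketched as a tiny message broker that sits between devices and applications, routing each device's messages to whatever application handlers have subscribed. All names are hypothetical; this is not a real platform API:

```python
# Minimal sketch of an IoT platform's middleware role: devices report data
# to the platform, which fans it out to subscribed application handlers.
from collections import defaultdict
from typing import Callable

class TinyIoTPlatform:
    def __init__(self):
        # device uid -> list of application callbacks
        self.handlers: defaultdict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, device_uid: str, handler: Callable[[dict], None]) -> None:
        """An application registers interest in one device's data."""
        self.handlers[device_uid].append(handler)

    def ingest(self, device_uid: str, payload: dict) -> None:
        """A device reports data; the platform routes it to applications."""
        for handler in self.handlers[device_uid]:
            handler(payload)

received = []
platform = TinyIoTPlatform()
platform.subscribe("washer-01", received.append)       # application side
platform.ingest("washer-01", {"state": "cycle_done"})  # device side
```

A real platform adds the parts this sketch omits: authentication, device provisioning, persistence, and cross-protocol connectivity.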
Now that we have seen a simple, easy-to-understand explanation of IoT, its three main key attributes (in my opinion), and the role an IoT platform plays in bringing this all together, we can discuss IoT further.
Fleet Management: business leaders look for real-time fleet information so that they can make intelligent decisions in real time and reap the business benefits. Fleet-management technology is gradually being adopted, bringing improvements in operational efficiency, maintenance cost, fuel consumption, regulatory compliance, and accident-response speed. GPS tracking, geo-fencing, customized dashboards, and real-time business decisions are some of the key features fleet management offers.
Public Transit Management: tracking the real-time location of a vehicle and knowing when it will arrive at a particular stop was always a challenge. With IoT in transportation, real-time vehicle tracking is possible: the tracked data is sent to an engineer or to a central system, and from there to an Internet-enabled mobile device. IoT has addressed many of the challenges public transit systems faced, and because vehicles can easily be tracked in real time, it enables re-routing features that help people make alternate arrangements.
Smart Inventory Management: IoT in transportation enables smart inventory management, providing real-time information across warehouses, distribution, and production centers, which reduces inventory cost and improves predictive maintenance. Smart inventory management systems have lowered inventory costs and reduced inventory-management errors, and the quality and depth of data from IoT sensors and systems have strengthened inventory management overall.
Optimal Asset Utilization: IoT in transportation enables asset tracking, which keeps track of physical assets and their information, such as location and status. With Biz4Intellia, an end-to-end IoT solution provider, one can track the real-time location of a truck and know how much load is on its trailer. The latitude and longitude of an asset can also be determined, and advanced analytics track devices like sensors and axles, reporting on each device’s threshold and tolerance.
Geo-Fencing: IoT in the transportation industry has come up with an advanced form of GPS tracking, i.e. geo-fencing, which matches the location of an asset or device against the coordinates of a particular area and can trigger automated tasks. The transportation industry benefits greatly from geo-fencing: it lets you receive alerts when a driver deviates from the prescribed path, which can delay delivery or cause accidental loss. This technology has replaced paper logs with a digital, cloud-based monitoring system that reports real-time data about the truck. Increased transparency and accountability have made IoT in transportation more cost-effective and time-efficient, and the Internet of Things is predicted to cut emissions from trucks as well.
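A circular geo-fence check of the kind described above can be sketched with the haversine great-circle distance. The coordinates and radius below are made up for illustration:

```python
# Circular geo-fence sketch: alert when a GPS fix falls outside a radius
# around a reference point (e.g. a depot or a waypoint on the route).
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def inside_fence(lat: float, lon: float,
                 fence_lat: float, fence_lon: float,
                 radius_km: float) -> bool:
    return haversine_km(lat, lon, fence_lat, fence_lon) <= radius_km

# Hypothetical depot fence: 5 km around (42.33, -83.05).
# A fix roughly 60 km north of the depot should trigger an alert.
if not inside_fence(42.90, -83.05, 42.33, -83.05, 5.0):
    print("driver deviated from the prescribed path")
```

Production systems often use polygonal fences and a point-in-polygon test instead of a single radius, but the alerting logic is the same.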
How will IoT and 5G change everything for the better?
For the future of IoT, both residential and industrial, 5G is a huge leap forward compared to 4G.
IoT will become smarter, faster, and more reliable with 5G. Lower power consumption, coupled with higher data-transfer rates, lower latency, and the ability to connect more devices, are massive improvements that 5G brings to IoT.
Connected Cars and Autonomous vehicles will become a reality together with Smart/Connected Cities and Smart Transportation.
True unlimited home internet was not practical with 4G; with 5G it can become a reality.
100 BPM beats for a tired Thursday night. This mix includes one F-bomb at the 30 minute mark, during Viktor Vaughn (MF Doom)’s rap. I guess that makes this one PG-13. Consider your surroundings when listening. (In other words: may or may not be appropriate for work. Depends on where you work.)
Two tracks by Mikron feature in this mix: #1 Embers and #12 Aldergrove. Both are from the excellent album Severance on CPU Records. Severance is a nicely varied album, slower in tempo than is typical for electro, more relaxed. It works well as an album, too: I’m happy listening to it straight through, beginning to end. Thumbs up for Severance, give it a listen at the link above.
Another standout here is Muto Love by Takeshi Muto from the Lily Of The Valley comp on Schematic. The rest of that comp is good too, and is definitely worth picking up. I’m sure I’ll be playing parts of it over the coming months.
Not too much more to say about this one; all these tracks are good. If something strikes your fancy here consider supporting the artist(s) at the links above. I’ll be back next week with something fast.