New Raspberry Pi with 8GB of RAM, and a Beta of the 64-bit Raspberry Pi OS (the New Name for Raspbian)
Do you really need that much RAM?
The short answer is that, right now, the 8GB capacity makes the most sense for users with very specialized needs: running data-intensive server loads or using virtual machines. As our tests show, it’s pretty difficult to use more than 4GB of RAM on Raspberry Pi, even if you’re a heavy multitasker.
As part of this announcement, the Raspberry Pi Foundation has decided to change its official operating system’s name from Raspbian to Raspberry Pi OS. Up until now, Raspberry Pi OS has only been available in 32-bit form, which means that it can’t allow a single process to use more than 4GB of RAM, though it can use all 8GB when it is spread across multiple processes (each browser tab is a separate process, for example).
However, the organization is working on a 64-bit version of Raspberry Pi OS, which is already available in public beta. A 64-bit operating system allows 64-bit apps that can use more than 4GB in a single process. It could also lead to more caching and better performance overall.
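A process can check at runtime which of these worlds it lives in. A minimal Python sketch (assuming a stock CPython build; on 32-bit Raspberry Pi OS the interpreter itself is a 32-bit process even on 64-bit-capable hardware):

```python
import platform
import sys

# On a 32-bit build sys.maxsize is 2**31 - 1; on a 64-bit build it is 2**63 - 1.
is_64bit = sys.maxsize > 2**32

print("machine:", platform.machine())   # e.g. 'armv7l' (32-bit) or 'aarch64' (64-bit)
print("64-bit process:", is_64bit)
# Only a 64-bit process can address more than 4GB on its own;
# a 32-bit OS can still use all 8GB spread across multiple processes.
```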
Automotive Things: DIY or Off-the-Shelf Solutions
Is it better to buy an off-the-shelf multimedia solution for your car, with navigation, a rearview camera, etc.? Or is it better to tinker and build a DIY solution, for the same amount of money or sometimes less, from Raspberry Pis and matching components that does everything you need?
It runs on top of Raspbian Linux… autolaunches as a fullscreen app.
As far as I can see, OpenAuto Pro accepts creating shortcuts for external applications to be called/launched, so perhaps the DAB+ module could be controlled under Linux in a way that is a bit more user-friendly and intuitive… perhaps by calling a script file instead.
This is OpenAuto; the last commit on GitHub was two years ago 🙁 https://www.youtube.com/watch?v=9sTOMI1qTiA
One workaround is to run Android on the Raspberry Pi (e.g. LineageOS), then use a DAB+ controller Android app for the DAB+ unit and corresponding Android apps for the rest, like the navigation, the rearview camera, etc.
Unfortunately, LineageOS 16 (Android 9) graphics performance is not ready for multimedia or gaming use, so I don't know how well navigation apps would run in this case.
One system which has pretty much everything I'm looking for is the i-carus system https://i-carus.com/
It does not seem to have Bluetooth audio passthrough, and I don't see how it could interact with the Monkey DAB+ module mentioned above…
All you need in car
Multimedia center supporting all audio and video formats
Full HD car DVR camera
OBD-II Engine diagnostics and data reading
Wireless Networks: 3G, 4G, Wi-Fi, Bluetooth
Since iCarus is based on a Raspberry Pi Linux computer, you get almost unlimited opportunities for extending the functionality of the system by adding external hardware and sensors or creating your own software.
The ICR Board (connects directly to your car's radio connector)
The heart of the iCarus Car PC. Connect your Raspberry Pi (or any other compatible single board computer) to the ICR board and build your highly customized Car PC system.
Just connect the iCarus Car PC to your car's radio connector directly (if your car uses a standard ISO-10487 connector) or via a harness adaptor.
A separate ICR board is a suitable solution for makers building a Car PC in their own housing.
This system raises two important questions, for me at least:
Would it be possible for i-carus to interact with and control a Monkey DAB+ board? (It works from Android and from Raspbian Linux by default.) https://www.monkeyboard.org/products/85-developmentboard/80-dab-fm-digital-radio-development-board https://www.monkeyboard.org/tutorials/78-interfacing/87-raspberry-pi-linux-dab-fm-digital-radio
Can it be used as an interface for Android Auto once a compatible smartphone is connected via USB or wirelessly?
Can additional apps run on top or in parallel, like the OpenAuto project, which provides similar functions to i-carus (so the system can be booted into either i-carus or OpenAuto), making it a versatile screen not limited to i-carus software? https://bluewavestudio.io/index.php/bluewave-shop/openauto-pro-detail
Commercial Offerings Out There
I'm mentioning only one example, as there are as many of these as you can imagine, and they come in all shapes and sizes.
It ticks pretty much all the boxes I need it to.
What else can a Raspberry Pi be used for in a car?
What does Open Source Software mean? What about software which uses open source code/contributions? Does it give back to the sources it takes from?
The question of licensing: the BSD (Berkeley) license and the GPL v2 and v3 licenses
Software can be open source, respect users' freedom/privacy, and cost money, all at the same time. But can they actually make money off that?
What stops an individual from getting the open source code of a paid, freedom-respecting program, forking it and building it on their own machine (perhaps with just a slight modification of a color or something minuscule to make it differ a tidbit from the original), using it "as their own" for free (as in no cost), and crippling this company's source of income with that move?
What, therefore, can motivate companies to open source the code of paid applications (to show they respect freedom, or otherwise) if it could cripple their income stream?
On that same thought… who would continue to pay for Microsoft Office, Visio, and Project (not debating how good they are or aren't, or whether any valid alternative exists) if Microsoft just uploaded their source to GitHub and people could fork/build them on their own machines and use them for free, as in no cost? Can Microsoft be blamed for not doing this?
IMHO I believe in open source apps, but at the same time I believe certain apps need to stay closed source to maintain their leading edge on the market, to drive them even further in development, and to make the software better by not telling everyone how it's done…
Should Coca-Cola share the recipe of Coca-Cola and let people make it at home (if possible), crippling its own revenue source and resulting in thousands upon thousands of jobs lost worldwide? It's the same with closed source software. (Some of it respects your freedom and wants to do you no harm, while others, like Apple, for sure want to control you, tied to a leash; look at all the controlling and limiting functions of iPhones and Macs, but that could be another topic.) Those software companies with closed source intellectual property, like Microsoft, VMware, Veeam, and the list goes on, apart from being called evil for not opening up their source code to the masses, also create thousands of jobs.
If most of us had access to the source code, we'd fork/build it ourselves and never pay a cent for the software again… people would lose their jobs, and go work where?
Open Source Licensing (FOSS Licenses)
Apart from this original or basic dilemma, there are another two things: the different open source licenses, and the Free Software Foundation, where "Free" does not mean no cost but free as in the freedom of the users and respect for their privacy.
It's a big pool of mess if you ask me. More than 80 open source licenses exist, mostly in two big categories: permissive and copyleft licenses.
A permissive license is simple and is the most basic type of open source license: It allows you to do whatever you want with the software as long as you abide by the notice requirements. Permissive licenses provide the software as-is, with no warranties. So permissive licenses can be summarized as follows:
Do whatever you want with the code
Use at your own risk
Acknowledge the author/contributor
Copyleft licenses add requirements to the permissive license. In addition to the requirements listed above, copyleft licenses also require that:
If you distribute binaries, you must make the source code for those binaries available
The source code must be available under the same copyleft terms under which you got the code
You cannot place additional restrictions on the licensee’s exercise of the license
The table below categorizes popular open source licenses under the permissive and copyleft frameworks. The copyleft licenses are also listed in descending order of strength, from strongest at the top to weakest at the bottom. "Strength" refers to the degree to which surrounding software may need to be subject to the same copyleft requirements. For example, GPL is strong because it requires that any program that contains GPL code must contain only GPL code. LGPL is weaker because it allows dynamic linking to other proprietary code without subjecting that linked code to the same GPL requirements. The weakest copyleft licenses, EPL and MPL, allow any kind of integration with other code, as long as the EPL or MPL code is in its own file.
Permissive: BSD (Berkeley Software Distribution), MIT, Apache 2
Copyleft (strongest to weakest): Affero GPL (AGPL), GPL, Lesser GPL (LGPL), Mozilla Public License (MPL), Eclipse Public License (EPL), Common Development and Distribution License (CDDL)
When a company wants to include an open source element in their closed source software ((regardless of whether it's nice of them to keep their own code closed while using bits and pieces of open source software, free as in no cost, to make their closed source software better…)), some open source licenses make this easy and effortless, while others make it near impossible to use without certain limitations, or perhaps without losing the possibility to copyright the resulting application as a whole…
It's very complex and confusing for me, to be honest, but nevertheless an interesting topic.
Top open source questions
When I advise clients on open source licensing, the four most common questions they ask are:
What is “distribution?”
How do open source licenses affect patent rights in software?
What is the “notice” requirement and how do I comply?
What is a “derivative work” and, related, does incorporating GPL code into my proprietary code cause the proprietary code to be licensed under GPL?
The short answers to these questions appear below:
What is “distribution?” In simple terms, distribution refers to transferring a copy of a copyrighted work (such as software) from one legal person to another. The concept of distribution matters because the requirements of open source licenses are triggered only when software is distributed. Thus, a person who does not distribute software cannot violate an open source license’s terms. And because “legal person” includes a corporation, there is no distribution—and therefore no risk of violating a license’s terms—if software is merely transferred between employees of the same company.
Today, distribution can be a thornier question for businesses that deploy software through the Internet, cloud, or a SaaS model. Does allowing users to interact with a software application over the Internet qualify as distribution? For most open source licenses, the answer is no. Indeed, GPLv3 uses the term “convey” rather than “distribute,” precisely to clarify that SaaS use does not trigger any license requirements. But the Affero GPL (AGPL) license is one exception that takes a different approach. AGPL’s requirements (which are the same as GPL) are triggered once software is modified and made available for use and interaction over a network.
How do open source licenses affect patent rights in software? Some open source licenses (e.g., Apache 2, GPLv3) include express patent license provisions, which grant recipients a license to any patents covering the software product. Other open source licenses (e.g., BSD, MIT, GPLv2) are mum on patent licenses. Nonetheless, for these licenses, courts may use the doctrine of “implied license” to find that recipients are still licensed and protected from any patent infringement allegation arising from using the licensed software product. By doing this, courts prevent licensors from taking “two bites at the apple” and suing for patent infringement for using the very software they have licensed. In sum, unless expressly stated otherwise, open source licenses limit the author’s ability to sue license-abiding recipients for alleged patent infringement.
What is the “notice” requirement and how do I comply? The notice requirement means that a distributor of open source software must inform recipients that certain open source software, which is available under the noticed license, is included in the software being delivered to the recipient. Open source licenses each have their own specific notice requirements. Commonly, these requirements include providing entire copies of applicable licenses and acknowledging authors and contributors. A best practice is to deliver the source code covered by the license up front because full copies of licenses are typically included as text files in the source code package. Another best practice is to follow the GPL’s notice requirements because they are considered among the most stringent. Thus, complying with GPL’s notice requirements will usually ensure compliance with other applicable open source licenses’ notice requirements.
Derivative works and the myth of viral GPL: A common concern of clients is that by incorporating code licensed under GPL (or similar copyleft license) into their proprietary code, the proprietary code will be “infected” or “contaminated” and become licensed under GPL (i.e., the proprietary code is effectively converted into GPL code) or forced into the public domain. This concern causes some to view GPL as viral and discourages them from using GPL code because they are worried that any derivative works that incorporate GPL code will also be licensed under GPL.
These concerns are largely unfounded. It is true that under GPL, all code in a single program must either be subject to GPL or not subject to GPL. So if a developer were to combine GPL code with proprietary code and redistribute that combination, it would violate the GPL. But the likely worst-case consequence of this violation is that the author of the GPL code could exercise their right to bring a claim for copyright infringement. The remedy for copyright infringement is either damages (money) or an injunction (stop using the GPL code). Critically, copyright law supports no remedy that would force the offending developer to license their proprietary code under GPL or to put that code into the public domain. Combining GPL code with proprietary code does not, therefore, "infect" the proprietary code or convert it into GPL code.
After moving to Fedora 32, my GNOME desktop started to behave slower and slower every day, to the point where seconds started to pass between a click and an action. I first noticed it in Brave (a Chromium-based browser) when changing tabs, or just using it in general… then in VLC/MPV: video I knew from before played back without issues started to lag, with slight frame drops, ever so slightly noticeable at first.
I blamed Brave and my RPM Fusion NVIDIA drivers (I just can't get on with the nouveau driver, I never had much luck with it, whereas the closed source driver always works for me as expected). I did not get any further investigating those two, not even by switching back to Firefox or trying other video settings for mpv/VLC. (I use Xorg/X11; I'll wait until Wayland becomes standard before I switch 🙂)
At this point I started to think the issue came either from GNOME in Fedora 32 or perhaps from some of the GNOME extensions I was using: GSConnect for my cellphone connection, and a kind of tiling window manager extension.
Nevertheless, I took the DE I always use on systems other than my main one for a test drive, and I quite like it: XFCE. I use it in a multi-monitor setup (4 screens) and it gives me no problems with my NVIDIA NVS 510 GPU.
I can certainly live without those fancy GNOME extensions for GSConnect and the tiling window manager thingy, which only worked 50-50 most of the time anyway. The same goes for most of the things I liked in GNOME, like virtual workspaces, which I do use a lot: I always have a lot of windows open and apps running at once, and I enjoy being able to separate things into separate workspaces. Nevertheless, XFCE can run GTK-based apps (the ones from GNOME, for example), so you don't have to live completely without the things you were used to. I was a long-time GNOME user, and honestly I don't mind giving it up for the snappiness of XFCE, sacrificing only a handful of features I either have viable alternatives for, or was hardly using enough in the first place for them to become a habit I'd miss.
My main machine is nowhere near weak (though I'd love to add a second 6-core Xeon CPU, or switch them both out for 8-core ones, and add more RAM): a machine with 32GB of ECC DDR4 RAM, an E5-2600 v4 1.6 GHz 6-core Xeon, an SSD main drive, and plenty of additional HDDs to store my things on before I move them out to the NAS on the network cannot be called slow. The GPU is an NVIDIA NVS 510, natively driving the 4 screens I have. It's not a box for gaming, but I never wanted it to be one.
Before I switched to XFCE, when the problems started with GNOME, I saw it was taxing my CPU to the realm of 50-65% utilization, and I'm not sure that was normal (hence my suspicion of the GNOME extensions, the NVIDIA driver, or whatnot).
Now on XFCE, with tons of things open and my VM running among other things, I'm not really climbing above the occasional 19% CPU load ((screenshot attached to the shownotes)). Normally I'd have a couple more things open, including my work connection via VMware Horizon to a Windows 10 VDI when working from home, plus/minus 5 things tops, depending on what I need to do.
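For a quick, desktop-agnostic sanity check of numbers like these, Python's standard library exposes the same load figures `top` shows. A minimal sketch (Linux/Unix only):

```python
import os

cores = os.cpu_count()
# Average number of runnable processes over the last 1, 5, and 15 minutes
load1, load5, load15 = os.getloadavg()

# A 1-minute load equal to the core count means the CPU is fully busy;
# if the desktop feels sluggish while load sits well below that,
# the bottleneck is probably not raw CPU.
print(f"{cores} cores, load averages: {load1:.2f} {load5:.2f} {load15:.2f}")
print(f"approx utilization: {100 * load1 / cores:.0f}%")
```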
I am always open to trying out lightweight window managers or DEs (desktop environments). And while they may call GNOME the king of Linux desktop environments, as always, there are definitely a lot of options out there to try. Most importantly, whenever you experience a sluggish response from your computer, or it does not perform as it used to, remember: it's not always that it's old and needs replacing with the latest, greatest, most expensive, most powerful machine out there, because as we discussed previously, we hardly really need that.
Perhaps try disabling some of those fancy features and extensions in your system and see if it improves. Or even better, give other desktop environments and window managers a try. GNOME and KDE are not the only ones out there (but they are definitely the most resource-heavy).
You might get surprised: your machine responds again with the agility and speed you are used to from before… and the best of it: without spending a dime.
We talked about ARM chips and architecture back in other episodes, when we covered single board computers in Episode 03.
Today I want to talk about two ARM-based devices I came across: one laptop and one desktop.
Let's look at ARM's history first.
The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine (ARM) architecture in the 1980s to use in its personal computers. Its first ARM-based products were co-processor modules for the MOS Technology 6502-based BBC Micro series of computers.
The official Acorn RISC Machine project started in October 1983. Acorn chose VLSI Technology as the silicon partner, as they were already a source of ROMs and custom chips for Acorn. Wilson and Furber led the design, implementing it with efficiency principles similar to the 6502's. A key design goal was achieving low-latency input/output (interrupt) handling like the 6502's. The 6502's memory access architecture had let developers produce fast machines without costly direct memory access (DMA) hardware. The first prototype silicon, ARM1, arrived in 1985, with the production ARM2 following.
Acorn Computers itself has a rich and troubled history worth reading up on on its own, or even watching the BBC movie Micro Men; I linked it in the shownotes on YouTube. You can also read about ARM's history in more detail using the links in the shownotes.
The next major step toward success for ARM came when Apple and Acorn began to collaborate on developing the ARM, and it was decided that this would best be achieved by a separate company.
The bulk of the Advanced Research and Development section of Acorn that had developed the ARM CPU formed the basis of ARM Ltd. when that company was spun off in November 1990. This is the ARM we are more familiar with.
Both its 32-bit and 64-bit ARM CPUs have a wide variety of support in operating systems (embedded, mobile OS, desktop OS, server OS).
ARM vs the Competition, and Where and Why ARM Is a Threat to Intel and x86
While ARM CPUs first appeared in the Acorn Archimedes, a desktop computer, today’s systems include mostly embedded systems, including all types of phones. Systems, like iPhone and Android smartphones, frequently include many chips, from many different providers, that include one or more licensed Arm cores, in addition to those in the main Arm-based processor. Arm’s core designs are also used in chips that support all the most common network-related technologies.
Arm’s main CPU competitors in servers include Intel and AMD. Intel competed with Arm-based chips in mobile, but Arm no longer has any competition in that space to speak of (however, vendors of actual Arm-based chips compete within that space). Arm’s main GPU competitors include mobile GPUs from Imagination Technologies (PowerVR), Qualcomm (Adreno) and, increasingly, Nvidia and Intel. Despite competing in GPUs, Qualcomm and Nvidia have combined their GPUs with Arm-licensed CPUs.
The ARM CPUs/architecture definitely have the edge in embedded systems and power efficiency.
Apple is also thinking about switching its software stack to a different architecture a third time, ARM in this case.
(Motorola 68k – IBM PowerPC – Intel – ARM)
Pre- 1994 – Motorola 68k based CPUs
1994 – Transition to PowerPC ( starting with the Power Macintosh 6100 )
2005 – Transition to Intel begins (starting with the MacBook Pro and iMac from the same year)
2021 – Apple plans to sell Macs with its own chips.
Could it be a direct shift to the ARM architecture, perhaps with Apple-customised ARM chips?
ARM and Apple had met in the past: Acorn Computers and Apple created ARM Ltd. in 1990, as mentioned, and Apple also used an ARM chip ((the ARM 610 RISC)) in the Apple Newton, which many look at as the predecessor of the modern iPad.
In 1999, Apple's stake in ARM dropped to around 14%.
As of today, the exact percentage of Apple's shareholding in ARM Ltd. is not known; I couldn't google it up. But Apple predicts that switching to ARM CPUs will reduce its CPU costs by around 40-60%, which makes me believe they still hold some stock in ARM Ltd. I might be wrong, however.
He has a long list of publications and an extensive presentation/teaching history.
And on top of all of the above… he is a great person and a lot of fun to talk to.
George’s extensive set of work on Open Source (extracted from his website)
FreeBSD, the premier open source operating system, at the heart of many of the systems that run the Internet.
PTPd, the Precision Time Protocol Daemon, a BSD licensed implementation of the IEEE-1588 protocols, used to closely synchronize LAN connected hosts.
PCS, the Packet Construction Set, an easily extensible Python library used to write network testing and validation tools.
Packet Debugger, a tool, based on PCS, for interactively working with packet streams such as those collected with tcpdump.
Conductor is a system for controlling distributed systems during tests. It is meant to replace testing by hand with multiple ssh sessions or depending on a ton of random shell scripts to execute network based tests with multiple clients.
To Know More About FreeBSD You can check the below links (( including the FreeBSD Journal which is a magazine from the FreeBSD Foundation ))
Balena is a complete set of tools for building, deploying, and managing fleets of connected Linux devices. We provide infrastructure for fleet owners so they can focus on developing their applications and growing their fleets with as little friction as possible.
Their Final thoughts from their website:
Balena has been built to bring a cloud microservices workflow to the world of edge devices and remove friction for fleet owners wherever possible. Our aim is to make IoT applications as easy to deploy as web applications (or even easier!). Along the way, we’ve discovered and assimilated interesting ideas from both the cloud and embedded worlds, and even invented ideas that only apply to the new paradigm balena represents.
The balenaFin is a Raspberry Pi Compute Module carrier board that can run all the software that the Raspberry Pi can run, but hardened for deployment in the field. Even better, it’s offered at an accessible price point relative to other professional boards.
Distributed systems: they are all around us: the Google search engine, the Amazon platform, Netflix, online gaming, money transfer/banking.
The best-known form of it is the client-server model.
A distributed system is a collection of separate/independent software or hardware components, referred to as nodes, that are linked together by a network and work together coherently, coordinating and communicating through message passing or events to fulfill one end goal. The nodes of the system can be unstructured or highly structured, depending on the requirements. The system's complexity is hidden from the end user or computer, and it appears as a single system/computer.
A simpler explanation: a bunch of independent computers which work together to solve a problem.
Characteristics of a Distributed System
No Shared Clock: Each computer in a distributed system has its own internal clock to supply the value of the current time to local processes.
No Shared Memory: Each computer has its own local memory. ((See figure (c) comparing parallel vs distributed systems.))
Resource Sharing: Resource sharing means that the existing resources in a distributed system can be accessed remotely across the multiple computers in the system. Computers in distributed systems share resources like hardware (disks and printers), software (files, windows and data objects) and data. Hardware resources are shared for reductions in cost and for convenience. Data is shared for consistency and the exchange of information.
Concurrency: Concurrency is a property of a system representing the fact that multiple activities are executed at the same time. The concurrent execution of activities takes place in different components running on multiple machines as part of a distributed system, and these activities may interact with one another. Concurrency reduces latency and increases the throughput of the distributed system.
Fault Tolerance / High Availability: In a distributed system, hardware, software, the network, anything can fail. The system must be designed in such a way that it stays available all the time, even after something has failed.
Scalability: Scalability is mainly concerned with how the distributed system handles growth as the number of users increases. Mostly we scale a distributed system by adding more computers to the network. Components should not need to be changed when we scale the system; they should be designed to be scalable.
Heterogeneity and Loose Coupling: In distributed systems, components can vary in their networks, computer hardware, operating systems, programming languages and implementations by different developers.
Transparency: Distributed systems should be perceived by users and application programmers as a whole rather than as a collection of cooperating components. Transparency can be of various types: access, location, concurrency, replication, etc.
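The "no shared clock" characteristic has a classic workaround: logical (Lamport) clocks let nodes order events without synchronized physical clocks, by attaching a counter to every message. A minimal sketch, not tied to any particular framework:

```python
class LamportClock:
    """Logical clock: orders events without any shared physical clock."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time  # timestamp carried by the outgoing message

    def receive(self, msg_time):
        # Jump ahead of the sender's clock if it is ahead of ours
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.local_event()        # a.time == 1
t = a.send()           # a.time == 2, message carries timestamp 2
b.receive(t)           # b.time == max(0, 2) + 1 == 3
print(a.time, b.time)  # 2 3
```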
Parallel and Distributed Computing
Distributed systems are groups of networked computers which share a common goal for their work. The terms “concurrent computing“, “parallel computing“, and “distributed computing” have a lot of overlap, and no clear distinction exists between them.
The same system may be characterized both as “parallel” and “distributed”; the processors in a typical distributed system run concurrently in parallel.
Parallel computing may be seen as a particular tightly coupled form of distributed computing and distributed computing may be seen as a loosely coupled form of parallel computing.
Nevertheless, it is possible to roughly classify concurrent systems as “parallel” or “distributed” using the following criteria:
In parallel computing, all processors may have access to a shared memory to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems.
Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change.
Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes.
Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship.
Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database.
Reasons for using distributed systems and distributed computing may include:
The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.
There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
Examples of distributed systems and applications of distributed computing include the following:
Distributed computing is a field of computer science that studies distributed systems.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
Monolithic Architecture Vs Microservices Architecture
Microservices separate business logic functions. Instead of one big program (the monolith approach), several smaller applications (microservices) are used. They communicate with each other through well-defined APIs – usually HTTP.
A monolithic architecture is a tiered, layered approach: one big single application containing the UI, the core business logic, and the data access layer, all communicating with one single giant database where everything is stored.
In a microservices architecture, when a user interacts with the UI it triggers independent microservices – using a shared database, or perhaps each with a database of its own – each fulfilling one specific function or element of the business logic or data access layer.
Microservices are the popular approach today.
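As a rough sketch of the idea, the hypothetical "price lookup" microservice below exposes one small business function behind an HTTP API, and a second piece of code (another service, or the UI) consumes it over plain HTTP. The service name, data, and JSON shape are all illustrative assumptions:

```python
# One tiny microservice: a price-lookup function behind an HTTP API,
# plus a client (standing in for another service or the UI) calling it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"apple": 3, "pear": 5}   # this service's own small data store

class PriceService(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.lstrip("/")                     # e.g. GET /apple
        body = json.dumps({"item": item, "price": PRICES.get(item)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PriceService)      # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or the UI) consumes the well-defined API over HTTP:
url = f"http://127.0.0.1:{server.server_port}/apple"
reply = json.loads(urllib.request.urlopen(url).read())
print(reply)   # -> {'item': 'apple', 'price': 3}
server.shutdown()
```

A real deployment would run each such service in its own container and route between them, but the contract – a small function behind an HTTP API – is the essence.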
The Advantages of using Microservices:
Microservices can be programmed in any programming language best fit for the purpose, as long as they share a common API for communicating with each other and/or with the UI. As discussed, HTTP is the most commonly used.
Works well with smaller teams, each assigned to and responsible for a single function (microservice), instead of one big team working on an overly complex single application that does and contains everything.
Easier fault isolation: the microservices architecture breaks a single monolithic application into a smaller set of single functions, making it easier to troubleshoot what went wrong and where. It also provides a certain kind of fault tolerance: a failure in one microservice does not always affect the functionality or user experience of the application as a whole, as long as the failing service is not part of some critical core functionality – depending, of course, on how strongly the other microservices depend on the one that failed.
If, for example, the failing microservice is the one that prints the invoice once the user completes an order via the UI, this single fault – inconvenient as it is – does not make the application as a whole unusable; it only partially affects its functionality. Other microservices might even offer an alternative, such as emailing the generated invoice, so while the invoice-printing microservice is down the user still has another way to achieve the same or a similar result.
One microservice can sometimes even take over another's function on failure, providing additional fault tolerance.
Pairs well with container architectures: a single microservice can be containerized on its own.
Dynamically scalable, both up and down. In the era of cloud computing, where on-demand pricing and instances are expensive, this feature can save companies a lot of money.
The Disadvantages of using Microservices:
Complex networking: going from one logically laid out single application to X number of microservices, with or without their own databases, and a sometimes confusing web of communication between them, makes troubleshooting not always the simplest – unless you have grown familiar with the design and inner workflows of the application (for instance as an application support engineer for that specific application).
Imagine tracing a request from start to finish through such a complex web of networking.
Requires extra knowledge of and familiarity with topics such as the architecture itself (how you will build it out piece by piece), containerization, and container orchestration (Kubernetes, for example).
Overhead: the knowledge mentioned above, plus overhead on the infrastructure side, as databases and servers need to be spun up instantly.
What is Docker?
As we saw in the episodes on virtualization (Episodes 14-15), Docker delivers software in packages called containers using OS-level virtualization, often offered in the form of a PaaS (platform as a service).
Docker is an application build and deployment tool. It is based on the idea that you can package your code with its dependencies into a deployable unit called a container. Containers have been around for quite some time.
Some say they were developed by Sun Microsystems and released as part of Solaris 10 in 2005 as Zones; others say BSD jails were the first container technology.
Another definition: Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system – only the libraries and settings required to make the software work. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it is deployed.
The software that hosts the containers is called Docker Engine
With Docker and containerization there is no more “the application ran fine on my computer but it doesn’t on client X’s machine”: as long as the client machine has Docker, the application will run just the way it did on the developer’s machine, because the whole thing – with all its necessary libraries and bits and pieces – comes packaged in container form.
The Dockerfile:
Describes the build process for an image, e.g. “my app will be based on node; copy my files here; run npm install, then npm run when the image is run”
Can be run to automatically create an image
Contains all the commands necessary to build the image and run your application
Docker Container is the runtime instance of a Docker Image built from a Dockerfile
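Following the node example above, a hypothetical Dockerfile might look roughly like this (the base image tag and npm scripts are illustrative assumptions, not a definitive recipe):

```dockerfile
# Hypothetical Dockerfile for the node app described above
FROM node:18-alpine           # base image the app builds on
WORKDIR /app
COPY package*.json ./         # copy dependency manifests first (better layer caching)
RUN npm install               # install dependencies into the image
COPY . .                      # copy the application source
CMD ["npm", "run", "start"]   # command executed when the container starts
```

Building it with `docker build -t myapp .` produces an image; `docker run myapp` then starts a container, the runtime instance of that image.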
The Docker software as a service offering consists of three components:
Software:The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface that allows users to interact with Docker daemons.
Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.
A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.
Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download (“pull”) images for use or upload (“push”) images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application’s services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container. The docker-compose.yml file is used to define an application’s services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more.
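A hypothetical docker-compose.yml along those lines, defining a web service built from a local Dockerfile plus a database (service names, ports, and images are illustrative assumptions), might look like:

```yaml
# Hypothetical docker-compose.yml: both services start together
# with a single `docker-compose up`.
version: "3.8"
services:
  web:
    build: .              # build option: use the Dockerfile in this directory
    command: npm start    # command option: override the image's default command
    ports:
      - "8080:3000"       # host port 8080 -> container port 3000
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # demo-only credential
```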
Docker Swarm is, in simple words, Docker’s open-source container orchestration platform (we will see more about container orchestration below).
What is Container Orchestration
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.
Some popular options are Kubernetes, Docker Swarm, and Apache Mesos.
Use container orchestration to automate and manage tasks such as:
Provisioning and deployment
Configuration and scheduling
Scaling or removing containers based on balancing workloads across your infrastructure
Load balancing and traffic routing
Monitoring container health
Configuring applications based on the container in which they will run
Keeping interactions between containers secure
What is Kubernetes?
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.
A Kubernetes cluster can be deployed on either physical or virtual machines.
Kubernetes Architecture has the following main components:
Kubernetes Cluster is a set of node machines for running containerized applications.
The master node is responsible for the management of the Kubernetes cluster. It is mainly the entry point for all administrative tasks. There can be more than one master node in the cluster for fault tolerance.
As you can see in the above diagram, the master node has various components like API Server, Controller Manager, Scheduler and ETCD.
API Server: The API server is the entry point for all the REST commands used to control the cluster.
Controller Manager: Is a daemon that regulates the Kubernetes cluster, and manages different non-terminating control loops.
Scheduler: The scheduler schedules the tasks to slave nodes. It stores the resource usage information for each slave node.
ETCD: ETCD is a simple, distributed, consistent key-value store. It’s mainly used for shared configuration and service discovery.
Worker nodes contain all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
As you can see in the above diagram, the worker node has various components like Docker Container, Kubelet, Kube-proxy, and Pods.
Docker Container: Docker runs on each of the worker nodes, and runs the configured pods.
Kubelet: Kubelet gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on a single worker node.
Pods: A pod is one or more containers that logically run together on nodes.
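A pod is described to the cluster as a small YAML manifest submitted to the API server; the scheduler then places it on a worker node, where the kubelet keeps its containers running. A hypothetical single-container pod (names and image are illustrative) could look like this:

```yaml
# Hypothetical pod manifest, applied with e.g. `kubectl apply -f pod.yaml`
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25       # illustrative container image
      ports:
        - containerPort: 80
```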
To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.
We previously talked about Linux, but we cannot go forward without mentioning Unix, which inspired and had a fundamental impact on Linux as well as many other Unix-like operating systems we know and use today, as we will see in a short while.
Episodes like this one – and the ones on Linux, virtualization, or routers and networking – do not carry deep technical detail; their mission is to build a common ground, some basic and general knowledge upon which further discussions and topics can build and refer back to, much like a framework or, indeed, a foundation.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, development starting in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.
Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), IBM (AIX), and Sun Microsystems (Solaris). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation (SCO) in 1995.The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS).
Unix systems are characterized by a modular design that is sometimes called the “Unix philosophy”: the operating system provides a set of simple tools that each performs a limited, well-defined function with a unified filesystem (the Unix filesystem) as the main means of communication, and a shell scripting and command language (the Unix shell) to combine the tools to perform complex workflows. Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, thus allowing Unix to reach numerous platforms.
Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues.
At first, Unix was not designed to be portable or for multi-tasking. Later, Unix gradually gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the “Unix philosophy”. Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as “the idea that the power of a system comes more from the relationships among programs than from the programs themselves”.
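The “small programs strung together through pipes” idea can be shown in a couple of lines of shell. The file name and contents here are made up for illustration; each tool in the pipeline does exactly one job:

```shell
# Count the three most common lines in a file using only standard tools.
printf 'apple\npear\napple\nplum\napple\npear\n' > fruit.txt

# sort orders the lines, uniq -c counts adjacent duplicates,
# sort -rn ranks the counts, head trims the output.
sort fruit.txt | uniq -c | sort -rn | head -3
```

No single program knows about “counting common lines”; the power comes from the relationships among the programs, exactly as Kernighan and Pike describe.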
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well, as I/O was generally linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, as well as network sockets to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse.
By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.
Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common “low-level” tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being a priority realm where most application programs operate.
The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name.
The new operating system was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service (pronounced “eunuchs”), as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that “no one can remember” the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan.
The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still contained much PDP-11-dependent code and was not suitable for porting. The first port to another platform was made five years later (1978), for the Interdata 8/32.
Bell Labs produced several versions of Unix that are collectively referred to as “Research Unix”. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois at Urbana–Champaign Department of Computer Science. UIUC graduate student Greg Chesson, who had worked on the UNIX kernel at Bell Labs, was instrumental in negotiating the terms of the license.
During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups and vendors, producing systems and variants including Sequent’s DYNIX, HP-UX, Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors.
In the 1990s, Unix and Unix-like systems grew in popularity as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, which was later renamed macOS.
Unix operating systems are widely used in modern servers, workstations, and mobile devices.
Standards – POSIX
In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. Starting in 1998, the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification.
In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4’s Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture.
The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux.
The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language. Although this followed the lead of Multics and Burroughs, it was Unix that popularized the idea.
Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple “stream of bytes” model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms.
Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC’s RSX-11M’s “group, user” hierarchy evolved into VMS directories, CP/M’s volumes evolved into MS-DOS 2.0+ subdirectories, and HP’s MPE group.account hierarchy and IBM’s SSP and OS/400 library systems were folded into broader POSIX file systems.
Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM’s JCL). Since the shell and OS commands were “just another program”, the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix’s innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell.
A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. There were no “binary” editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike “record-based” file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could be easily combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP.
Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming.
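As a small illustration of how far that regular-expression syntax has traveled, Python’s re module understands a direct descendant of the notation popularized by Unix tools such as grep and sed (the log string below is made up):

```python
# Unix-style regular expressions in a modern language.
import re

log = "error 404 on /index.html; error 500 on /api/users"
codes = re.findall(r"error (\d{3})", log)   # capture each three-digit code
print(codes)   # -> ['404', '500']
```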
Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a “software tools” movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy.
The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity, and which formed the basis for implementations on many other platforms.
The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983.
Free Unix and Unix-like variants
In 1983, Richard Stallman announced the GNU (short for “GNU’s Not Unix”) project, an ambitious effort to create a free software Unix-like system; “free” in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project’s own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the kernel Linux as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core utilities – have gone on to play central roles in other free Unix systems as well.
Linux distributions, consisting of the Linux kernel and large collections of compatible software, have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian GNU/Linux, Ubuntu, Linux Mint, Mandriva Linux, Slackware Linux, Arch Linux and Gentoo.
A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD.
Linux and BSD are increasingly filling the market needs traditionally served by proprietary Unix operating systems, as well as expanding into new markets such as the consumer desktop and mobile and embedded devices. Because of the modular design of the Unix model, sharing components is relatively common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and some systems also include GNU utilities in their distributions.
In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD operating systems are a continuation of the basis of the Unix design, and are derivatives of Unix:
I think the Linux phenomenon is quite delightful, because it draws so strongly on the basis that Unix provided. Linux seems to be among the healthiest of the direct Unix derivatives, though there are also the various BSD systems as well as the more official offerings from the workstation and mainframe manufacturers.
In the same interview, he states that he views both Unix and Linux as “the continuation of ideas that were started by Ken and me and many others, many years ago”.
OpenSolaris was the open-source counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. As of 2014, illumos remains the only active open-source System V derivative.
In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group) and in 1995 sold the related business operations to Santa Cruz Operation (SCO). Whether Novell also sold the copyrights to the actual software was the subject of a federal lawsuit in 2006, SCO v. Novell, which Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case. Unix vendor SCO Group Inc. accused Novell of slander of title.
The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as “UNIX” (others are called “Unix-like”).
By decree of The Open Group, the term “UNIX” refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group’s Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system’s vendor pays a substantial certification fee and annual trademark royalties to The Open Group. Systems that have been licensed to use the UNIX trademark include AIX, EulerOS, HP-UX, Inspur K-UX, IRIX, Solaris, Tru64 UNIX (formerly “Digital UNIX”, or OSF/1), macOS, and a part of IBM z/OS. Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant.