All posts by viktormadarasz

About viktormadarasz

IT OnSite Analyst for a big multinational company

TSR – The Server Room Show – Episode 37 – Networking Basics – Part I.

The Foundation of Networking:

Switches, routers and wireless access points (plus the physically laid cables, copper or fiber, together with some networking standards and protocols)


Switches are the foundation of most business networks. A switch acts as a controller, connecting computers, printers, and servers to a network in a building or a campus.

Switches allow devices on your network to communicate with each other, as well as with other networks, creating a network of shared resources. Through information sharing and resource allocation, switches save money and increase productivity.

There are two basic types of switches to choose from as part of your networking basics: on-premises and cloud-managed.

  • A managed on-premises switch lets you configure and monitor your LAN, giving you tighter control of your network traffic.
  • A cloud-managed switch can simplify your network management. You get a simple user interface, multisite full-stack management, and automatic updates delivered directly to the switch.


Routers connect multiple networks together. They also connect computers on those networks to the Internet. Routers enable all networked computers to share a single Internet connection, which saves money. 

A router acts as a dispatcher. It analyzes data being sent across a network, chooses the best route for the data to travel, and sends it on its way.

Routers connect your business to the world, protect information from security threats, and can even decide which computers receive priority over others. 

Beyond those basic networking functions, routers come with additional features to make networking easier or more secure. Depending on your security needs, for example, you can choose a router with a firewall, a virtual private network (VPN), or an Internet Protocol (IP) communications system.

Wireless Access Points

An access point allows devices to connect to the wireless network without cables. A wireless network makes it easy to bring new devices online and provides flexible support to mobile workers.

An access point acts like an amplifier for your network. While a router provides the bandwidth, an access point extends that bandwidth so that the network can support many devices, and those devices can access the network from farther away.

But an access point does more than simply extend Wi-Fi. It can also give useful data about the devices on the network, provide proactive security, and serve many other practical purposes.

*Access points support different IEEE standards. Each standard is an amendment that was ratified over time. The standards operate on varying frequencies, deliver different bandwidth, and support different numbers of channels.

Wireless Networking

To create your wireless network, you can choose between three types of deployment: centralized, converged and cloud-based.

1. Centralized deployment

The most common type of wireless network system, centralized deployments are traditionally used in campuses where buildings and networks are in close proximity. This deployment consolidates the wireless network, which makes upgrades easier and facilitates advanced wireless functionality. Controllers are based on-premises and are installed in a centralized location.  

2. Converged deployment

For small campuses or branch offices, converged deployments offer consistency in wireless and wired connections. This deployment converges wired and wireless on one network device—an access switch—and performs the dual role of both switch and wireless controller.

3. Cloud-based deployment

This system uses the cloud to manage network devices deployed on-premises at different locations. The solution requires Cisco Meraki cloud-managed devices, which provide full visibility of the network through their dashboards. 

In an enterprise or campus network, these different building blocks are easy to identify, along with other devices that serve different functions in the network (firewalls, intrusion detection/prevention systems, load balancers, etc.)

What does it look like in a home network?

However, if you look at your home network, more often than not these different building blocks are fused into a single device, which is definitely true if you use only your ISP-provided equipment.

For example, my ISP, Movistar, supplies one single box they call the HGU (Home Gateway Unit), which includes five different functions in one device:

  • Router (its WAN port is essentially the fibre port, and the LAN ports, Eth1-Eth4, are bridged together as a single bridge. It also handles VLAN tagging, identifying different VLANs for different traffic: Internet, voice and IPTV)
  • Switch (4 Gigabit ports on the device in the photo)
  • Wireless access point (Wi-Fi 5, 802.11ac, on this one in particular; newer models come with Wi-Fi 6, 802.11ax)
  • ATA adapter converting an analog phone connection to IP telephony (the grey connector labeled Tel). You can plug in an old analog phone and it will work just fine; I tried it myself
  • Media converter, converting the fibre optic to copper RJ45 (previously Movistar used a separate ONT (Optical Network Terminal) box which handled this conversion and gave you an RJ45 copper port to connect to your router's WAN port; see the attached photo)
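The VLAN tagging described above can be sketched on any Linux box with iproute2. Note that the interface name and the VLAN IDs below are purely illustrative placeholders, not Movistar's actual values:

```shell
# Hypothetical sketch: one tagged subinterface per service on the WAN
# port, the way a home gateway separates its traffic types.
# "eth0" and VLAN IDs 10/20/30 are made-up placeholders.
ip link add link eth0 name eth0.10 type vlan id 10   # Internet
ip link add link eth0 name eth0.20 type vlan id 20   # Voice
ip link add link eth0 name eth0.30 type vlan id 30   # IPTV
ip link set eth0.10 up
ip link set eth0.20 up
ip link set eth0.30 up
```

Each subinterface then sends and receives Ethernet frames carrying the matching 802.1Q tag, which is how one physical uplink can carry Internet, voice and IPTV side by side.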

Smart WiFi Router. Wireless router (HGU) - Movistar
How to install your Movistar Smart WiFi Router, quick and easy

In my case the ONT box is a separate unit, connected to my Ubiquiti EdgeRouter 5 PoE; the wireless access points are separate devices as well, plus additional switches, both managed and unmanaged.

Movistar fibre optic 7
Fibre Cable Comes into This From the Wall Jack
ETH1 connects from this to ETH0 on my Edgerouter 5 POE Box shown below
Ubiquiti EdgeRouter PoE 5 5x Gigabit Ethernet-Lisconet
Edgerouter 5 POE Router eth0 WAN
Two of these serve as 802.11ac access points: Ubiquiti UniFi AP AC Pro
(they can be powered from a PoE port on the Ubiquiti EdgeRouter 5 PoE, as with the first antenna, or from a switch with PoE ports, or with a PoE injector, as with the second antenna)
The antennas advertise the same SSIDs, and devices can roam between them without disconnecting or losing the connection.
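Roaming like this generally only requires each access point to broadcast identical SSID and security settings on non-overlapping channels. As a rough sketch, assuming Linux-based APs running hostapd (the file path and every value here are hypothetical placeholders, not my actual setup):

```
# /etc/hostapd/hostapd.conf -- identical on every AP except the channel
interface=wlan0
ssid=HomeNet             # same SSID on all APs
hw_mode=a                # 5 GHz band
channel=36               # use a different channel per AP, e.g. 36 and 149
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme  # same passphrase on all APs (placeholder value)
```

Clients then treat the radios as one network and reassociate with whichever signal is stronger; on UniFi gear the controller pushes the equivalent settings to every AP for you.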

What are the pros and cons of using my ISP-provided equipment (unless I absolutely must)?


Pros:

  • Most of the time everything is packaged up for you in one single box
  • Plug and play: you don't need to do anything at all, it is ready to go as soon as you plug it in (configured to work optimally and efficiently, with all settings taken care of for you)
  • If anything happens or you experience any trouble, you can just call your ISP and have them fix it remotely or via a visit to your home
  • Everything it is supposed to come with should work accordingly (IPTV service, telephone calls, etc.)


Cons:

  • A one-box-many-functions device is rarely as good as a single box for a single function. It is more cost efficient, but it comes at the price of losing features that a dedicated single-function box has, and more often than not a single-function box outperforms an all-in-one by huge leaps in both performance and endurance
  • Outside a home setting (say 3-5 users max), once you reach 6 or more users, as in a smaller office or small business, these all-in-one devices just cannot keep up with demand (not in routing, not in switching, not as a wireless access point)
  • There is no scalability or expandability with these all-in-one devices (the number of available ports, for example)
  • They are not modular, nor do they easily allow a modular setup. I cannot just add another 24-port switch with a stacking cable to an existing 24-port switch and have it copy my port, gateway and route configuration so that I am up and running in no time with 24 additional ports. The same goes for adding an additional wireless access point if I need to cover another floor or an area of the building with poor coverage
  • You have no full control over what goes on inside the box or what flows in and out. The options you can change are few and far between

I think when you weigh these pros and cons, what also comes into account is the size of your home.

If it is small enough for one wireless access point to cover it all, your number of users is 5 or fewer, and you need no more than 3-4 wired devices, whose cables can run up to this single box without a problem, then I would say stay where you are and spend the money on something else.

However, if you are like me, who loves cables, and whatever can be plugged in via cable will be plugged in with a cable, the ports offered on these all-in-one boxes are used up quite quickly.

If you factor in the size of the lot or house and the areas to cover, and not only cables and wall ports but also wireless coverage has to extend beyond the reach of this single box, then you are already in a scenario that needs an additional, separate switch and wireless access point.

I like to have several wall ports available in each room, and I also like to keep separate clients/devices apart from one another: IP telephony/VoIP on its own VLAN with a dedicated wall port, versus a wall port dedicated to a single client or PC plugged in to browse the web and access the network in general, versus a dedicated port for a NAS device which moves files back and forth to homelab servers during backups and shares a Plex media catalog with dedicated clients, and so on.
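Keeping a VoIP wall port and a NAS port on separate VLANs like this can be sketched with Linux bridge VLAN filtering; every interface name and VLAN ID below is a made-up placeholder for illustration, not my actual configuration:

```shell
# Hypothetical per-port VLAN separation on a Linux-based switch/bridge.
# Each switch port is an untagged member of exactly one VLAN.
ip link add name br0 type bridge vlan_filtering 1
ip link set eth1 master br0
ip link set eth2 master br0
bridge vlan add dev eth1 vid 10 pvid untagged   # wall port for the VoIP phone
bridge vlan add dev eth2 vid 20 pvid untagged   # wall port for the NAS
```

A commercial managed switch does the same thing through its web UI or CLI: you assign each port an untagged (access) VLAN so the devices behind different wall ports never see each other's traffic at layer 2.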



Cisco: Networking Basics

Cisco: Cloud Managed Switches

TSR – The Server Room Show – Shownotes – Episode 36

Working in the IT sector in Spain, and things I think are important when you are looking for a new job or a change in your career

The below is strictly my opinion, based on what I saw during the years I worked in various IT roles in Spain.

In Spain the IT job market is divided into two main groups, plus effectively a third group which exists in other countries, I guess, but is completely lacking here in Spain.

1. Direct hire with the company
(Whether it is a big multinational or a small business makes no difference.) You will be placed on the company's payroll, wear the same uniform as the others, and have the same benefits from day one. Long story short, if you apply to IT role X at company Y, that is what you are going to be: an employee of company Y, with all the bells and whistles that come with it, pro and contra.

2. Being hired by an intermediate consulting company

(That is what they are called here in Spain, or you could call them IT job agencies to be very specific.) These are companies which find the perfect candidate for the required role, take care of the hiring and selection process, and employ the individual on their own payroll, eventually selling or renting you out to the end client who needs the role or services performed (their numbers can only be compared to the stars in the night sky).

More often than not, comparing the amount you get paid with the amount of money the client company pays the consulting company FOR YOU, I have seen the difference be as much as double, and not in favour of YOUR salary.

You will be working from day one for company Y, and maybe you even get to wear their uniform, if there is one, but you will not be on the company's payroll, nor enjoy the same benefits as your colleagues, who are often hired directly by company Y, and more often than not you will definitely not enjoy the same salary as your peers.

It can be that you sit with your colleagues in the same office space, all doing the same job, even wearing the same uniform, sharing the workload and the same job functions and responsibilities, yet remaining unequal in the rest: salary, benefits and other compensation and perks, sometimes even shifts or working hours (weekends, on call, etc.)

On rare occasions it can happen that, after a few months, if it is beneficial for the company you are lending your services to, so to say, it decides to take you over and offers you a place as a full-time employee, just as if you had been directly hired from day one. But this is rare and far between; I would not even say it happens in 20% of the cases I saw.

More often than not you will be hired for a project or for a fixed amount of time, sometimes prolongable, long term or short term, but definitely not with an open-ended contract (an indefinite contract, as we call it here in Spain: one with no written end date on your contract with the company you work for; the consulting company, remember).

What is completely missing in Spain, as far as I saw in 12 years living here: freelancing, and freelancing in IT to be exact. The huge number of consulting companies out there have completely sucked out the opportunity for anyone to go solo, contracting out their services as a one-man shop, independent contractor or autonomous worker, as it is often called here. There is just no chance for companies, even smaller ones, to go to THEM versus going to these intermediate IT consulting companies, which have the resources and capacity to find any unicorn talent in a matter of days, versus the chance of your specialty or focus being exactly what they need.

What is the problem with this?

In its simplest form, obviously, being directly hired by the company you work for, with all the perks and benefits coming to you directly, and enjoying the same conditions and salary as your peers, is the best deal out there.

When you work through an intermediary or middle man (an IT job agency), it takes from your earnings and hard work by paying you as little as possible and selling or lending you to company Y for as much as possible, pocketing the difference. What is most beneficial for THEM is obviously not the most beneficial for you, and that is in direct correlation with your salary.

Frankly, it is someone else making money on YOUR work while doing nothing in exchange for it. It is like having an agent, as in sports or music, but this time the agent makes the big money and you are left with peanuts.

It is like being the manager of a rock band, but hey, at least rock bands earn a LOT of money, so who cares if the manager gets off easy for the little or nothing they do 🙂 No offense to any rock managers at all; I am sure you all work hard and deserve all the money you make.

Another important issue is that the types of contract they offer (a contract for a specific project, for example, with a maximum possible contract life of 3-4 years, after which you ought to be offered a permanent, open-ended contract with no end date on it, as discussed, or temporary contracts with a well-defined end date, perhaps prolongable for another 3 or 6 months a couple of times) are worth NOTHING, and I put emphasis on the word NOTHING here for a reason.

Imagine you work in Spain, with one of these contracts for a given project with no clear end date (but, per the type of contract, a maximum duration of 3-4 years), as an IT systems administrator for company Y via IT consulting company X.

After a year or so, when you feel happy and comfortable, you decide to take your savings and try to get a loan or mortgage to buy a house or a car.

For getting any of these, this type of contract is worth zero. No matter whether you have the down payment, or entry amount, or whatever you want to call it in the country you live in, these contract types are treated like trash compared to a permanent or indefinite contract in the eyes of the banks and lenders.

A permanent contract with a company, with your name on it, is what counts, no matter whether it is with a flower shop or a big multinational, or even with a consulting company, if they were to offer you one, which they usually will not, as it is not in their interest. I have to say that in some cases, let us say 10% of them, intermediate companies (for example an official service partner for HP, just to mention one) do offer a permanent contract to their employees. But that is rare and far between.

It is not very fair, is it? You do the same job as your colleagues most of the time; you just happened to be the unlucky one hired via an intermediate consulting company instead of by the company directly.

Cherry on top: in Spain, to rent for exorbitant amounts per month (since you could not get a loan or mortgage for your own house, as described in the step above), more often than not landlords look down on these types of contracts, and many times they insist, in the ad or via the agency hired to rent the place out, on specifically rejecting anyone without a permanent contract. Let us say that is the case for 80% of rentals; the nicer the apartment or the district, the more problems you can encounter if your type of contract is not the desirable one. And that is even before they specify, as they often do, a quite high net income the applicant must reach per month, either individually or as the sum of the group, if, say, 3 people are looking to rent house X for 1000 euros a month.

Another important piece of damage caused to the IT sector itself:

Big companies thereby have the opportunity to further filter out and push away whatever jobs are left in the country, on top of the ones they have already outsourced, in many cases to developing countries, for their obviously very low wages and modern-day-slavery conditions.

What I mean by this is that whatever strange, special or rare job or role is left in the company which has not been outsourced overseas, and would normally require a pretty fee to keep a person on the payroll (e.g. AIX administrators, systems administrators and the like), can now be kept out of the company entirely, so to say, by making those roles available through third-party IT consulting companies, with the benefit of both worlds: the work gets done, but the company does not have to worry about the person, who is not on its payroll. It is easier to make demands of, and hold responsible, the individual and/or the third party providing the resource than if they had that person in-house.

When you look at these alienated, strange, special or otherwise undesired jobs offered through IT job agencies, you will see two main things:

A high degree of specialty in a given topic (e.g. AIX administrator), with many years of experience, along with a long list of other high-profile experience and knowledge marked as required rather than merely desired from the candidate

Paired with a relatively low salary compared to the desired qualities, experience and know-how, plus, many times, very unfavorable working hours: shift work, even night shifts or weekends, with on-call availability expected.

Personally, to stick with myself and my own example for a second: if I want to become a Linux administrator in Spain:

* I have to accept rotating shifts, on-call availability, even night shifts

* Relatively Low Salary

* No option to be hired directly with the End Client / Company

* I have to have a long list of required experience and know-how in a lot of things about and around the job itself, so exhaustive that I often think that if I had that many years in the field, in that job, plus all the experience and know-how required, I probably would not be looking for a job, or would never be out of one, so to speak.

* I would be alienated from my colleagues, as I would not be one of them from the same company but the guy hired through an external agency / IT consulting company

The above happens to nearly all of the jobs I would see myself doing and would love to have: systems administrator, Linux admin, or something special such as UNIX/Solaris-related work when it comes to servers and similar.

Final Conclusion

Today it is better to be just the guy who connects mouse and keyboard as an IT on-site analyst, but directly hired by the company, than to be some big shot doing something you love and enjoy, yet contracted through intermediate job agencies sucking the life out of you with all those things described above working against you (low salary, type of contract, etc.)

Many times, what you would love to do as a daily job does not pay as well, or come with the same perks, benefits and job security, as something else does, no matter that the alternative is not as attractive.

And a funny post from LinkedIn (a joke, but still true to what I was trying to explain):

TSR – The Server Room – Shownotes – Episode 38 – 39 The One/s That Got Away – Commodore, Amiga , MorphOS

The One/s That Got Away: Commodore, Amiga, MorphOS

Brief History of Commodore and Amiga

A Sum Up History of Commodore International

  • Founded by Jack Tramiel, a Polish-Jewish immigrant, in Toronto, Ontario, Canada in 1954
  • Started out as a Typewriter manufacturer after signing a deal with a Czechoslovakian company to manufacture under their license/design
  • By the late 1950s Japanese typewriters forced North American typewriter companies out of business, but Tramiel instead turned to adding machines
  • In 1962 Commodore went public on the NYSE under the name Commodore International Limited
  • In the late 1960s history repeated itself when Japanese firms started producing and exporting adding machines
  • The company's main investor, Irving Gould, suggested that Tramiel travel to Japan to learn how to compete. Instead, Tramiel returned with the new idea of producing electronic calculators, which were just coming on the market
  • By the early 1970s Commodore had a profitable calculator line and was one of the more popular brands producing both consumer and scientific/programmable calculators
  • In 1975 Texas Instruments, the main supplier of calculator parts, entered the market directly and put out a line of machines priced at less than Commodore's cost for the parts
  • In early 1976 Commodore, to compete, used an infusion of cash from its main investor Irving Gould to purchase several second-source chip suppliers, including MOS Technology, Inc., in order to assure its supply.
    The condition of this purchase was that MOS's chip designer, Chuck Peddle, join Commodore directly as head of engineering.
    *+ Interesting fact: through the 1970s Commodore also produced numerous peripherals and consumer electronics products, such as the Chessmate, a chess computer based around a MOS 6504 chip, released in 1978
  • After Chuck Peddle took over as head of engineering at Commodore, he convinced Tramiel that calculators were already a dead end and that they should turn their attention to home computers
  • Peddle packaged his single-board computer design in a metal case, initially using calculator keys (later a full QWERTY keyboard), a monochrome monitor and a tape recorder for program and data storage, to produce the Commodore PET
  • From the Commodore PET's 1977 debut, Commodore would be a computer company
  • In 1980 Commodore launched production for European market in Braunschweig (Germany)
  • By 1980 Commodore was one of the three largest microcomputer companies and the largest in the Common Market.
  • However, by mid-1981 its US market share was less than 5%, and US computer magazines rarely discussed Commodore products. BYTE stated of the business computer market that "the lack of a marketing strategy by Commodore, as well as its past nonchalant (cool as a cucumber, unconcerned) attitude
    toward the encouragement and development of good software has hurt its credibility especially in comparison to the other systems on the market"
  • Also, CBM (Commodore Business Machines) was widely recognized to be unhelpful, as stated by the author of the book Programming the PET/CBM (1982)
  • Commodore reemphasized the US market with the VIC-20. The PET line was used primarily in schools, where its tough all-metal construction and its ability to share printers and disk drives on a simple local area network were an advantage, but it did not compete well in the home setting, where graphics and sound were important.
    This was addressed by the VIC-20 in 1981
  • The VIC-20 cost US $299, was sold in retail stores, and Commodore ran an aggressive advertising campaign featuring William Shatner asking consumers, "Why buy just a video game?"
  • The strategy worked, and the VIC-20 became the first computer to ship more than a million units; a total of 2.5 million units were sold over the machine's lifetime
  • In another promotion aimed at schools (and as a way of getting rid of unsold inventory), some PET models labeled "Teacher's PET" were given away as part of a "buy 2, get 1 free" promotion
  • In 1982 Commodore introduced the Commodore 64 as the successor to the VIC-20. It possessed remarkable sound and graphics for its time and is often credited with starting the computer demo scene
  • The C64's initial price was US $595, which compared to the VIC-20 was high, but it was still much less expensive than any other 64K computer on the market
  • In 1983 Tramiel decided to focus on market share and cut the prices of the VIC-20 and C64 dramatically, starting what would be called the "home computer war"
  • Other manufacturers such as TI and Atari (every one except Apple) responded, and there was an all-out price war
  • By the end of this conflict Commodore had shipped somewhere around 22 million C64s, making it the best-selling computer of all time
  • Commodore's board of directors was as impacted as anyone else by the price spiral and decided they wanted out. An internal power struggle resulted: in January 1984 Tramiel resigned due to intense disagreement with the chairman of the board,
    and Irving Gould replaced Tramiel with Marshall F. Smith, a steel executive who had no experience in computers or consumer marketing
  • Tramiel founded a new company, Tramel Technology, and hired away a number of Commodore engineers to begin work on a next-generation computer design
  • In 1984 Commodore purchased a small company called Amiga Corporation for $25 million, and it became a subsidiary of Commodore called Commodore-Amiga, Inc.
    This brought Commodore the new 32-bit computer design initially codenamed Lorraine, in the works since 1979 (the company had been called Hi-Toro from 1980 to 1981, and the design was later dubbed the Amiga under Amiga, Inc. in early 1982)
  • There were three unsuccessful attempts by Jay Miner and company to release the Amiga: in 1982, 1983, and once more after Commodore bought Amiga in 1984, after which it was released only to the local public.
    Then in 1985 Commodore re-released it to the world, at a cost of $1,000-$1,300
  • Tramiel had beaten Commodore to the punch: his design was 95% complete by June 1984. In July 1984 he bought the consumer side of Atari, Inc. from Warner Communications, which allowed him to strike back and
    release the Atari ST in early 1985 for about $800. Technology-wise the Atari was almost the Amiga's equal, and it was out sooner
  • Throughout the life of the ST and Amiga platforms, a ferocious Atari-Commodore rivalry raged. While this rivalry was in many ways a holdover from the days when the Commodore 64 had first challenged the Atari 800 (among others)
    in a series of scathing television commercials, the events leading to the launch of the ST and Amiga only served to further alienate fans of each computer, who fought vitriolic holy wars on the question of which platform was superior
  • This was reflected in sales numbers for the two platforms until the release of the Amiga 500 in 1987, which led the Amiga sales to exceed the ST by about 1.5 to 1 despite reaching the market later.
  • However, the battle was in vain, as neither platform captured a significant share of the world computer market and only the Apple Macintosh would survive the industry-wide shift to Microsoft Windows running on PC clones.
  • There were horror stories in the industry about Commodore's dealings with dealers and customers alike. The poor treatment was not helped by the fact that new models were introduced which were incompatible with existing ones
  • In 1987 the Amiga 2000 was introduced, and Commodore started to favour authorized dealers over the toy stores and discount outlets it had used previously
  • Software developers also disliked the Commodore platform; at the 1987 Comdex, an informal InfoWorld survey found that none of the developers present planned to write for Commodore platforms.
    This of course did not help Commodore's plan to establish the Amiga as a business platform, which was their intention
  • Commodore faced problems when marketing the Amiga as a serious business computer, as it was still seen as a company making cheap computers like the C64 and VIC-20
  • By the late 1980s the personal computer market had become dominated by the IBM PC and Apple Macintosh platforms
  • As early as 1986 the mainstream press was predicting Commodore´s demise
  • Commodore failed to update the Amiga to keep pace as the PC platform advanced. PCs fitted with high-color VGA graphics cards and Sound Blaster sound cards had finally caught up with the Amiga's performance, and Commodore began to fade from the
    consumer market
  • Although the Amiga was originally conceived as a gaming machine, Commodore had always emphasized the Amiga's potential for professional applications. But the Amiga's high-performance sound and graphics were irrelevant for most of the day's MS-DOS-based routine business word-processing and data-processing requirements,
    and the machine could not successfully compete with PCs in a business market that was rapidly undergoing commoditization. Commodore introduced a range of PC-compatible systems designed by its German division, and while the Commodore name was better known in the US than some of its competition, the systems' price and specs were only average
  • In 1992, the A600 replaced the A500. It removed the numeric keypad, the Zorro expansion slot and other functionality, but added IDE, PCMCIA and a theoretically cost-reduced design. Designed as the Amiga 300, a non-expandable model to sell for less than the Amiga 500, the 600 was forced to become a replacement for the 500 due to the unexpectedly high cost of manufacture. Productivity developers increasingly moved to the PC and Macintosh, while the console wars took over the gaming market. David Pleasance, managing director of Commodore UK, described the A600 as a "complete and utter screw-up"
  • In 1992, Commodore released the Amiga 1200 and Amiga 4000 computers, which featured an improved graphics chipset, the AGA. The advent of PC games using 3D graphics, such as Doom and Wolfenstein 3D, spelled the end of the Amiga as a gaming platform, compounded by mismanagement
  • By 1994, only the operations in Germany and the United Kingdom were still profitable. Commodore declared bankruptcy on April 29, 1994, and ceased to exist

An interesting note for those interested: you can read more about the top-secret deal Amiga and Atari had before Commodore purchased Amiga in 1984 by following the links in the shownotes.

Commodore PR-100 3q Calculator

Amiga 500

End of Part I.

Models of Commodore Computers

For a more complete history of Commodore, below is a 7-video playlist from the YouTube channel The 8-Bit Guy

Operating Systems


Commodore BASIC, also known as PET BASIC or CBM-BASIC, is the dialect of the BASIC programming language used in Commodore International‘s 8-bit home computer line, stretching from the PET of 1977 to the C128 of 1985.

The core is based on 6502 Microsoft BASIC, and as such it shares many characteristics with other 6502 BASICs of the time, such as Applesoft BASIC. Commodore licensed BASIC from Microsoft in 1977 on a “pay once, no royalties” basis after Jack Tramiel turned down Bill Gates‘ offer of a $3 per unit fee, stating, “I’m already married,” and would pay no more than $25,000 for a perpetual license.

The original PET version was very similar to the original Microsoft implementation, with few modifications. BASIC 2.0 on the C64 was also similar, and was also seen on some C128s and other models. Later PETs featured BASIC 4.0, similar to the original but with a number of added commands for working with floppy disks. BASIC 3.5 was the first to really deviate, adding a number of commands for graphics and sound support on the C16 and Plus/4. Several later versions were based on 3.5, but saw little use. The last, BASIC 10.0, was part of the unreleased Commodore 65.

Commodore took the source code of the flat-fee BASIC and further developed it internally for all their other 8-bit home computers. It was not until the Commodore 128 (with V7.0) that a Microsoft copyright notice was displayed. However, Microsoft had built an easter egg into the version 2 or “upgrade” Commodore BASIC that proved its provenance: typing the (obscure) command WAIT 6502, 1 would result in Microsoft! appearing on the screen. (The easter egg was well obfuscated: the message did not show up in any disassembly of the interpreter.)

The popular Commodore 64 came with BASIC v2.0 in ROM despite the computer being released after the PET/CBM series that had version 4.0 because the 64 was intended as a home computer, while the PET/CBM series were targeted at business and educational use where their built-in programming language was presumed to be more heavily used. This saved manufacturing costs, as the V2 fit into smaller ROMs.



AmigaOS is a family of proprietary native operating systems of the Amiga and AmigaOne personal computers. It was developed first by Commodore International and introduced with the launch of the first Amiga, the Amiga 1000, in 1985. Early versions of AmigaOS required the Motorola 68000 series of 16-bit and 32-bit microprocessors. Later versions were developed by Haage & Partner (AmigaOS 3.5 and 3.9) and then Hyperion Entertainment (AmigaOS 4.0-4.1). A PowerPC microprocessor is required for the most recent release, AmigaOS 4.

AmigaOS is a single-user operating system based on a preemptive multitasking kernel, called Exec.

It includes an abstraction of the Amiga’s hardware, a disk operating system called AmigaDOS, a windowing system API called Intuition and a desktop file manager called Workbench.

The Amiga intellectual property is fragmented between Amiga Inc., Cloanto, and Hyperion Entertainment. The copyrights for works created up to 1993 are owned by Cloanto. In 2001, Amiga Inc. contracted AmigaOS 4 development to Hyperion Entertainment and, in 2009, granted Hyperion an exclusive, perpetual, worldwide license to AmigaOS 3.1 in order to develop and market AmigaOS 4 and subsequent versions.

On December 29, 2015, the AmigaOS 3.1 source code leaked to the web; this was confirmed by the rights holder, Hyperion Entertainment.

Influence on Other Operating Systems

AROS Research Operating System (AROS) implements the AmigaOS API in a portable open-source operating system. Although not binary-compatible with AmigaOS (unless running on 68k), users have reported it to be highly source-code-compatible.

MorphOS is a PowerPC-native operating system which also runs on some Amiga hardware. It implements the AmigaOS API and provides binary compatibility with “OS-friendly” AmigaOS applications (that is, those applications which do not access the native, legacy Amiga hardware directly, just as under AmigaOS 4.x when it is not executed on real Amiga models).

pOS was a multiplatform closed-source operating system with source code-level compatibility with existing Amiga software.

BeOS also features a centralized datatype structure, similar to MacOS Easy Open, after old Amiga developers requested that Be adopt the Amiga datatype service. It allows the entire OS to recognize all kinds of files (text, music, videos, documents, etc.) with standard file descriptors. The datatype system provides the entire system and any productivity tools with standard loaders and savers for these files, without the need to embed multiple file-loading capabilities into any single program.

AtheOS was inspired by AmigaOS, and was originally intended to be a clone of it. Syllable is a fork of AtheOS, and includes some AmigaOS- and BeOS-like qualities.

FriendUP is a cloud-based meta operating system. It has many former Commodore and Amiga developers and employees working on the project. The operating system retains several AmigaOS-like features, including DOS drivers, mount lists, a TRIPOS-based CLI and screen dragging.

Finally, the operating system of the 3DO Interactive Multiplayer bore a very strong resemblance to AmigaOS and was developed by RJ Mical, the creator of the Amiga’s Intuition user interface.

Amiga Unix

Bundled with the Amiga 3000UX, Commodore’s Unix was one of the first ports of SVR4 to the 68k architecture. The Amiga A3000UX model even got the attention of Sun Microsystems, though ultimately nothing came of it.

Unlike Apple’s A/UX, Amiga Unix contained no compatibility layer to allow AmigaOS applications to run under Unix. With few native applications available to take advantage of the Amiga’s significant multimedia capabilities, it failed to find a niche in the quite-competitive Unix workstation market of the early 1990s. The A3000UX’s price tag of $4,998 (equivalent to $9,382 in 2019) was also not very attractive compared to other Unix workstations at the time, such as the NeXTstation ($5,000 for a base system, with a full API and many times the number of applications available), the SGI Indigo (starting at $8,000), or the Personal DECstation 5000 Model 25 (starting at $5,000). Sun, HP, and IBM had similarly priced systems. The A3000UX’s 68030 was noticeably underpowered compared to most of its RISC-based competitors.

Unlike typical commercial Unix distributions of the time, Amiga Unix included the source code to the vendor-specific enhancements and platform-dependent device drivers (essentially any part that wasn’t owned by AT&T), allowing interested users to study or enhance those parts of the system. However, this source code was subject to the same license terms as the binary part of the system; it was not free software. Amiga Unix also incorporated and depended upon many open-source components, such as the GNU C Compiler and the X Window System, and included their source code.

Like many other proprietary Unix variants with small market shares, Amiga Unix vanished into the mists of computer history when its vendor, Commodore, went out of business. Today, Unix-like operating systems such as Minix, NetBSD, and Linux are available for the Amiga platform.

Emulation Options for Commodore

There are lots of options when it comes to emulating the Commodore 64, 128 and VIC-20 platforms.

From emulating a Commodore 64 and its derivatives online, to running one on a Raspberry Pi or the platform of your choice, there are tons of options out there.

  • VICE is available for nearly every platform out there. It emulates the C64, the C64DTV, the C128, the VIC-20, practically all PET models, the Plus/4 and the CBM-II (aka C610/C510). An extra emulator is provided for the C64 expanded with the CMD SuperCPU.
  • Hoxs64 (Windows Only)
  • Online

Emulation Options for Amiga

AmiKit XE

Cloanto Amiga Forever

I was able to run AmigaOS 4.1 Final Edition Update 6 Classic with Cloanto Amiga Forever emulating an Amiga 4000. Networking and sound work fine.



You can even run it on a Raspberry Pi.

MorphOS (Hardware)

Hardware Compatibility (complete link in the shownotes)


  • AmigaOne 500
  • AmigaOne X5000
  • Apple eMac
  • Apple iBook G4
  • Apple Mac Mini G4
  • Apple PowerBook G4
  • Apple PowerMac Cube
  • Apple PowerMac G4
  • Apple PowerMac G5
  • Genesi Efika Open Client
  • Genesi Open Desktop Workstation


  • ACube Sam460cr
  • ACube Sam460ex
  • A-EON X5000
  • bplan Pegasos I
  • bplan Pegasos II
  • bplan Efika

AmigaOne X5000


The World’s First Multimedia PC

MorphOS 3.12 on a G4 PowerBook

Amiga OS 4.1 Final Edition for Classic Computers

SAM 460ex FlexATX Motherboard w/ SoC AMCC 460ex CPU


Amiga OS 3.9

AmigaOne X5000 (MorphOS and AmigaOS 4.1 with Enhancer Software)

AmigaOne X1000



MorphOS Hardware Compatibility

Amiga Forever


Amiga OS 3.1.4

Amiga OS 4.1 FE

AmigaOS Version History

AmiKit XE

Amikit Store

Home Computers


Amiga Atari Deal

Atari ST

Commodore Basic

Amiga Unix

Amiga Unix Files for FS-UAE / WinUAE


Amiga 3000UX


Hoxs64 (Windows Only)

TSR – The Server Room – Shownotes – Episode 34 – 35

PCem, 86box, DOSBOX, Qemu, Basilisk and others …

Emulation of forgotten realms

PCem / 86Box

86Box is a fork of PCem. Both are low-level PC emulators which emulate a whole computer, from an early 8088-based AMI XT clone up to a Socket 7 Pentium from 1995. They also emulate a number of graphics and sound cards, and include some interesting models like the IBM PS/1 and PS/2. I left a link to the list of currently emulated systems.

To achieve all this, you need the corresponding ROM files for the emulated systems (BIOS ROMs) and peripherals (e.g. graphics and sound cards). I left a link to a fairly up-to-date ROM package to use with PCem.

While not everything and not every combination works 100% and without flaws, it is a well-rounded and perfectly usable emulator of a whole system/computer. Sometimes you may need to tune some settings, or choose another combination of HW components that better suits the SW you want to use, and once in a while a piece of SW you wish to run on the emulated computer, e.g. under MS-DOS 3.21, just does not play 100% well with the emulation itself. But as a whole it is pretty usable, and you can spend hours and hours putting together your dream machine and feel as if it were the 80s and 90s again. And looking at the prices of real HW from that era, if you are even able to find a good, working example of the HW of your choice: as this emulator is free, it is definitely the cheaper option and might just be enough for what you need it for.

PCem has a forum on its website where you can get help with troubleshooting or look for answers about what works and what does not.

Personally, I wanted to run AIX 1.3 on an emulated IBM PS/2, but the installer gets a kernel panic doing nothing more than booting the boot disc. The last time I tried, it was still failing, or maybe it’s just me who couldn’t make it work; that’s a possibility as well. Sometimes I can make it boot, sometimes not 🙂 The IBM PS/2s are strange machines, in the good sense of the word.

I probably need to spend more time on PCem to try to make AIX boot and run on the IBM PS/2.

One thing I miss from PCem is LPT or COM port capture or redirection, i.e. an implementation of printing support.

(Note that in 86Box, LPT printer emulation is included in the form of a Generic Text Printer / Generic PostScript Compatible Printer.)

It would then be possible to capture the output of, for example, a DOS application which can only print via LPT1, through an application under Windows like Printfil (I left a link in the shownotes which explains examples/scenarios where Printfil can help capture printouts from these old DOS apps). I’m sure that under Linux/BSD there are other apps which would do the same and could capture the data coming out of those LPT or COM ports, if only 86Box/PCem would hand it out of the emulated machine. It would be awesome.
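As a partial workaround on the 86Box side, its Generic PostScript Compatible Printer drops each print job into a folder on the host as a .ps file, which you can then batch-convert yourself. A minimal sketch, assuming Ghostscript is installed; the directory below is an assumption, so point it at wherever 86Box actually stores its printer output on your system:

```shell
#!/bin/sh
# Batch-convert captured PostScript print jobs to PDF.
# PRINTDIR is an assumed location -- adjust to your 86Box setup.
PRINTDIR="${PRINTDIR:-$HOME/86box/printer}"
mkdir -p "$PRINTDIR"
for job in "$PRINTDIR"/*.ps; do
    [ -e "$job" ] || continue              # no jobs waiting
    command -v ps2pdf >/dev/null || break  # Ghostscript not installed
    ps2pdf "$job" "${job%.ps}.pdf"         # one PDF per captured job
done
```

Under Linux/BSD this plays the same role Printfil plays under Windows: it picks up whatever the emulated printer wrote and turns it into something you can actually keep or print.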

PCem emulating an AMI 486 DX2/66 clone, installing MS-DOS 6.22 Spanish


DOSBox / vDos / vDosPlus / TameDOS / DOSBox SVN Daum / DOSBox Mega Build

Sometimes all you need is a DOS prompt to run some applications, with networking and options to mount folders from your local machine, and you do not want to spend time emulating the whole computer around the chosen operating system, as is the case with 86Box/PCem.

Regarding the LPT1 discussion above, the situation is similar to PCem/86Box: some of these DOSBox variants handle it as well (vDosPlus, if I remember correctly, does LPT1 redirection to a file, or to an app like Printfil under Windows, to give an example). So if it is a must in your workflow, or for the app you intend to use, one of these may be a better fit for you than the whole-system emulation of 86Box/PCem, where, as discussed, the LPT port is not handled in the emulated environment.

Update: I added this screenshot to show that, for example, under vDosPlus, using the Professional Write 3.0 application, I was able to print via LPT1 and catch the output with the Printfil application when I spent some time with it.

vDosPlus running Professional Write 3.0 under Windows 10, showing a generated PDF file captured from Professional Write 3.0 via LPT1 with the help of Printfil.

I have to say that most of the apps I tried under these variants of DOSBox worked with great success and joy; it is very quick to get up and running and to start using some of your old favourite DOS applications.

There are still a lot of companies out there using legacy apps, hence the demand for legacy hardware, operating systems and architectures, as well as for their cost-effective emulation as an alternative (as we will see further along in this episode) where and when possible. Many companies cannot move away from legacy HW/SW/architectures, often because of the costs involved, or because it would be a mammoth undertaking of a project resulting in even more costs, or sometimes because of the good old rule: if it is not broken, do not fix it.

DOSBOX running Microsoft FoxPro
vDOSPlus running Wordstar 4.0
vDOSPlus running Wordperfect 6.0



Qemu can emulate a plethora of architectures, and you can run a lot of different operating systems on it.
Some run so slowly that you will probably never want to run them again. (I definitely had both success and pain following a step-by-step guide to install 32-bit HP-UX under Qemu on an emulated PA-RISC CPU, where just the text-only install took more than 4 hours. The system does boot afterwards and works, but painfully slowly. The good thing is that Qemu keeps getting better.)

As another example, I have AIX 7.2 installing and booting fine under Qemu emulating the Power architecture. You have to use a specific Qemu build, older than the current version, and the AIX 7.2 ISO has to be specially prepped, with the required QEMU drivers injected into it (the network and disc drivers, if I’m not mistaken). If you have all that, the baked installer ISO will boot and install, and the system can power up afterwards at a relatively normal speed (the install takes about 20 minutes and a boot-up around 5 minutes; networking works and is relatively easy to set up).
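For the record, a hedged sketch of the general shape of such a Qemu invocation; the exact Qemu build, the machine options and the prepped ISO all come from the step-by-step guide linked in the shownotes, and the file names here are placeholders:

```shell
qemu-system-ppc64 \
  -machine pseries -cpu POWER8 -m 4096 \
  -drive file=aix72-disk.qcow2,if=virtio \
  -cdrom aix72-prepped.iso \
  -boot d \
  -serial mon:stdio
```

The -drive line is why the ISO needs those injected drivers: the guest sees virtio devices that a stock AIX installer does not know about.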

Even better examples are the older SunOS operating systems; some of them can be a bit sluggish, but they are definitely worth trying out and playing with.

All of the above (the AIX, HP-UX and SunOS step-by-steps for Qemu) you can find via the links in the shownotes, at astrobaby’s website; there is always something interesting to read there each time I visit his blog.

Qemu running PA-RISC 32 bit HP-UX 11.11
Courtesy of

Basilisk II

Basilisk II is an open-source 68k Macintosh emulator. That is, it allows you to run 68k MacOS software on your computer, even if you are using a different operating system. However, you still need a copy of MacOS and a Macintosh ROM image to use Basilisk II. (Google can help.)

It emulates either a Mac Classic (which runs MacOS 0.x through 7.5) or a Mac II series machine (which runs MacOS 7.x, 8.0 and 8.1), depending on the ROM being used.

I left a good number of links to tutorials and videos on how to set up and configure Basilisk II. Most of these tutorials are either for Windows or for Mac OS X, but you can still pick up hints and important gotchas even if you are on a Linux/BSD system.
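On Unix-like systems, Basilisk II reads its settings from a plain-text prefs file at ~/.basilisk_ii_prefs, which is what most of those tutorials end up editing one way or another. A minimal illustrative sketch; all paths and sizes here are made-up placeholders (rom points at your Macintosh ROM image, disk at a bootable disk image, screen sets a windowed 800x600 display, ramsize is the Mac’s RAM in bytes, and extfs shares a host folder into the emulated Mac):

```
rom /path/to/mac.rom
disk /path/to/system753.img
screen win/800/600
ramsize 33554432
extfs /path/to/shared
```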

It came in handy for me, as I prepared a separate Windows 7 VM under VirtualBox to try out Basilisk II and SheepShaver on this FreeBSD laptop (the Lenovo X220).

Basilisk II running System 7.5.3


(The name is a pun on ShapeShifter, a 68k Mac emulator for AmigaOS that was obsoleted by Basilisk II.)

SheepShaver originally appeared for BeOS in 1998 as a commercial application (first as shareware, then via the now long-defunct BeDepot). Due to the demise of Be, it was re-released in 2002 as open-source software under the GPL.

Tutorials which explain Basilisk II can most of the time be used for SheepShaver as well, and vice versa.

SheepShaver running Mac OS 9

End of Episode 34

Start of Episode 35

Mini vMac

Similar to Basilisk II, it can run the classic 68k Mac operating system and apps.

It emulates a Macintosh Plus.

I left a great video in the Links which shows how to set it up.

Mini vMac running System Software 7.5.5


PearPC is another PowerPC emulator (what it emulates is similar to a PowerMac G4, but about 500 times slower than the original hardware).

You can use it to run newer-than-classic Mac OS operating systems which were compatible with a PowerMac G4, for example Mac OS X 10.2.8 Jaguar.

As per the website, both the OpenBSD and NetBSD PPC builds crash on boot, which is sad. I definitely have to try it myself; maybe newer builds do work?

PearPC running Mac OS X Installer


SimH is the mecca when it comes to legacy systems like the DEC VAX, the PDP series and the HP 3000 Series III, to mention a few.

SimH could be worth its own episode, just like mainframes.

Tons of videos and tutorials on how to run different operating systems on the various architectures and hardware emulated by SimH can be found with Google.

I added my favourite one, from Stephen’s machine room, with step-by-step instructions for a SimH and OpenVMS installation; it also explains how to add hobbyist license keys.
(Remember the sad truth: it will all be over on December 31st, 2021.)
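On the SimH side, such an install is normally driven by a small script of simulator commands. A hedged sketch of what a MicroVAX 3900 configuration for an OpenVMS install might look like; the disk type, memory size and file names are illustrative, so follow the linked walkthrough for the real values:

```
; vax.ini -- illustrative SimH MicroVAX 3900 setup
set cpu 64m
set rq0 ra92
attach rq0 vms.dsk
set rq3 cdrom
attach rq3 vms_install.iso
boot cpu
```

You then run it as `vax vax.ini` and drive the rest from the emulated console.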

SimH running VAX/VMS V4.7


VirtualBox, apart from being free, can be great when it comes to older, but not so old, operating systems like DOS, OS/2, AIX 1.3 PS/2 (as mentioned above when I talked about PCem/86Box), Windows NT or even Windows 3.11. These operating systems I had more success and joy running in VirtualBox than in VMware, for example.

It is a great alternative to VMware Workstation, which is a paid application; as mentioned, if some older operating system does not install or work properly under VMware, give it a try under VirtualBox and you might get lucky.

Virtualbox running OS/2 Warp

Update: I linked a YouTube video in the shownotes where AIX 1.3 boots and installs fine, with networking, on VirtualBox, which is good news. However, I only got as far as booting from the two SCSI floppies; once it asks for the install floppy, I get a kernel panic and it reboots. Reading the comments below the video, I see I need to use patched discs with the proper injected drivers to make it work with VirtualBox. As I do not have those floppy images, I need to obtain them first before I can continue with this.

Update 2: While I could not install it myself, for the reasons explained just above, I left a link to a blog where you can download a VirtualBox VDI HDD image and two floppies, with which you can create a virtual machine under VirtualBox and boot an already-installed system.

This is how far I got in VirtualBox trying to install AIX 1.3 PS/2.
The ready-to-use VDI VirtualBox HDD with the boot floppies works just fine 🙂

UNIX under the Hood


Another very awesome emulator, this time emulating Big Iron from IBM.

Hercules is an open source software implementation of the mainframe System/370 and ESA/390 architectures, in addition to the latest 64-bit z/Architecture. Hercules runs under Linux, Windows, Solaris, FreeBSD, and Mac OS X.
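To give a feel for how it works, Hercules is driven by a configuration file that declares the machine and its devices, each device being an address, a type and a backing host file. A hedged, minimal sketch; the device addresses and file names are illustrative, so see the Hercules documentation for a real configuration:

```
# hercules.cnf -- illustrative minimal configuration
ARCHMODE  ESA/390
MAINSIZE  64
NUMCPU    1
CNSLPORT  3270
# address  type  backing file
000C       3505  reader.txt
000E       1403  printer.txt
0120       3390  dasd/system.cckd
```

The 1403 line is the classic line printer captured to a plain host file, and a tn3270 client connected to the CNSLPORT acts as the operator console.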

I left in the shownotes my favourite Mainframe related Youtube channel moshix. Have a look at it if You are interested in the topic. moshix has an incredible and extensive knowledge of mainframes and anything surrounding or running on them…. He could be a perfect candidate for a future interview at The Server Room Show. 🙂

There is also an additional printing app which exists to do what we discussed above for DOSBox.

Hercules Emulator running


Commercial Offerings

EmuVM / AlphaVM

Their AlphaVM-Basic is their offering aimed at hobbyists. When I asked them for a quote back in 2019, they told me the first-year license is 400 euros, with an annual prolongation cost afterwards of 100 euros.

It emulates an Alpha system with one CPU and up to 1GB RAM, and supports OpenVMS and Tru64/Digital Unix systems. Too bad the last batch of issued OpenVMS hobbyist licenses will expire on the 31st of December 2021 🙁

Stromasys Charon
HP 3000, Sun SPARC, Alpha, VAX – they cover nearly everything … and they charge accordingly.

I talked about them in previous episodes. They hold the holy grail when it comes to legacy architectures.

Charon-AXP / AXP-Plus – Alpha architecture
Charon- HPA – HP 3000 PA-Risc
Charon-PDP – PDP
Charon-VAX – VAX

  • Charon-SSP/4M – Sun-4m/V8 1/4
  • Charon-SSP/4U – Sun-4u/V9 1/24
  •  Charon-SSP/4V – Sun-4v/V9 1/64

IBM zPDT – IBM system z Personal Development Tool

I left a YouTube video from 2012 which explains zPDT a bit. It uses a USB hardware key for authentication, and the license fee costs around 8,000 to 9,000 euros per year. It includes and comes with all the IBM z software, OS and additional applications; a lot of stuff, but also a lot of money, aimed at serious devs and their companies.

Imagine it like Hercules on steroids, with all the latest and greatest from IBM on the z/OS and application side, working together under Linux. Does anything get more awesome than this?

I never managed to get my hands on it, nor to see it running in person. One day, I’m sure.


Of course nothing beats real hardware: the touch and feel, the smell and the noise of it.
If you can get your hands on it, or you already have it and nothing stands in your way, go for it.
It is the real deal, the true way, the way it was meant to be.

For the rest of us unlucky poor folks, we can try to emulate it and get a grasp of that greatness, with its quirks and caveats (mostly losing out on speed and features). Our only advantage is that it definitely won’t raise our electricity bill, nor will it take up extra space in our rooms.

Because remember, when they say that it is like the original: it is just not the same.



Setting Up Basilisk II (Windows)

Basilisk II – How to Install Mac OS System 7.5.3

Printing from Sheepshaver and Basilisk II

Getting Online in Basilisk II (Windows)

Essential 68k Mac Software for Basilisk II

Sheepshaver (abandoned since 2006)




Mini vMac

How to Install Mac OS System 1.0 – 7.5 in Mini vMac and Run Classic 68k Applications

vDOSPlus and Wordstar




WordPerfect for DOS Updated

EmuVM / AlphaVM


SimH, OpenVMS, VAX, Hobbyist License

OpenVMS Hobbyist License Registration


WinWorldPC Library of Abandonware Software

A Blog to get some ideas about virtualization – emulation of older systems amongst other topics

Emaculation Forum about Mac Emulation

Macintosh Quadra and Mac OS 8 in a form of Electron App * Win / Mac / Linux*

End of an Era – HP Ends OpenVMS Hobbyist Licenses in 2021

Printfil – print software for various scenarios

Printfil and DOS Emulation / Printing

Moshix Youtube channel .. Everything about Mainframes

Hercules emulator

IBM zPDT Youtube video

IBM Master the Mainframe Program (online, free)
Probably the closest one can get to an IBM Mainframe for Free

Young Guy Who Bought a Mainframe for His Home (Awesome)

Betaarchive – Beta and Abandonware Forum

OpenVMS moving towards x86 in a form of Virtual Machine through Virtualbox

PCem List of Emulated Systems

Everything You Ever Wanted to Know About IBM PS/2

AIX PS2 Install images:

AIX 1.3 PS2 (x86) on Virtualbox

AIX 1.3 PS2 (x86) on Virtualbox Ready to use HDD Image and Boot Floppy

AIX in the world of today (Video)

TSR – The Server Room – Shownotes – Episode 33

Venturing to FreeBSD on a Lenovo X220
(PCem, Virtualbox, Bhyve, XFCE, Webcamd and permissions, etc)

an episode about an undertaking with FreeBSD on a Lenovo X220

Setting up video driver for Intel was Easy

Setting up the video card driver as per the FreeBSD Handbook was straightforward; I followed the steps and had it working after installing Xorg and Gnome as one of the first steps.

Graphics tearing on Gnome even after optimization tricks shared by others

Even after some optimization tricks other people shared, I still had tearing related to GPU performance or something, so I decided to try XFCE before digging deeper into the issue.


Surprisingly enough, on XFCE everything was smooth and tearing was a non-issue; it could be that the laptop performs better on XFCE as a less resource-hungry DE, or it simply fits better... but that is where I stayed.

XFCE works very smoothly and uses few resources; it is very light on this roughly 8-year-old laptop with 8 GB of DDR3 RAM.

With the help of apps like Albert, I have a workaround for Gnome’s application overview (I think that is what it is called), where you could see your open apps and type in the search bar to find an app and launch it. Albert lets me search the same indexed apps and launch any of them (plus Google search, etc., like Alfred on the Mac), and XFCE’s window-list icon on my panel gives me, with one click, a list of the open apps; it also shows which workspace each one is open on, and I can switch to the selected one with a single click. With these two workarounds I can use XFCE the same way I was using Gnome’s overview screen, and I don’t feel like I’m losing any functionality at all. I’m sure there is even some way to make Albert do this too (to search my open apps and switch to them that way); I just have not figured out how yet.

While XFCE definitely has fewer plugins and modules available for its panels than Gnome does, probably because the developer base is smaller (as is the user base, for sure), I like it more and more, and I did not regret recently switching from Gnome to XFCE on my main workstation as well.

VLC player

When I first installed the VLC player, I did so from the binary repositories, and as I often use Samba shares to keep my files, including TV series and movies, I quickly realized that each time I tried to drag and drop a multimedia file from the file browser into VLC, I got an error.

The same file played perfectly fine when I had it on my local machine.

After some googling around, I figured it had to do with Samba credentials and/or a missing Samba feature in VLC (where the manuals told me to check under Preferences, I was missing the menu option). However, my username and password on the local machine under BSD are the same as on the NAS/Samba share; I know that is probably not best practice, but just bear with me.

I uninstalled the binary package and reinstalled from the ports tree, where indeed Samba support was among the configurable build options that could be selected.

The build (of VLC, and of some other ports as well) kept giving me an error message about a Perl version mismatch, although I had updated the system to the latest build with freebsd-update fetch and freebsd-update install. After some googling I eventually resolved this: someone recommended doing a separate update of the ports tree/ports collection itself using the portsnap fetch, portsnap update and portsnap extract commands, and that fixed it; VLC and the other previously failing apps built correctly this time.
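Put together, the sequence that got things building again looks roughly like this; a sketch, to be run as root on FreeBSD (multimedia/vlc is the usual location of the VLC port, and the exact name of the Samba option in the config menu may vary between port revisions):

```shell
# keep the base system current
freebsd-update fetch install
# refresh the ports tree itself: fetch + extract on first use,
# fetch + update on later runs
portsnap fetch update
# rebuild VLC from ports with the Samba option ticked
cd /usr/ports/multimedia/vlc
make config
make install clean
```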

And of course, once VLC was built from ports with the Samba feature baked in, my issue was resolved at once.

Package management

When it comes to packages, you have choices fairly similar to what you are used to from Linux OSes.
You can install binary packages from the repository with pkg install, you can build things from source, or you can use the system most similar (for me) to Slackware’s SlackBuilds: the ports tree. It is a mix of source code plus specialized build scripts for make and whatnot, giving an easy one-step build process on FreeBSD, similar to SlackBuilds: just a make install clean, with a make config before that to configure the build options, if any exist for your app.


Regarding virtualization, I tried, with more or less success, to make VirtualBox and PCem run (both from binary packages and from the ports tree), and also bhyve with a Debian 9.9 ISO, if I recall correctly.

In the case of VirtualBox I had varied success; I’m sure it was down to my insufficient googling or my inability to find all the configuration steps I needed.

I managed to get far enough with VirtualBox that I am able to spin up a virtual machine (I tried to install a Windows 7 guest, just to try PCem under Windows there). I get booted into the installer, which eventually finds the VirtualBox hard drive, and the installation starts, but at around 5–10% it gives some error, either about not being able to communicate with the source ISO/CD-ROM file or with the target; it is unclear to me why, or which part fails. It also becomes quite unresponsive, even hangs, and I am unable to kill the running VM most of the time. In both cases I did install VirtualBox and the kernel package or module, including the guest-additions module or package. So that is as far as my steps into VirtualBox went. (I never had an issue with this on Linux, so I guess it comes from my own inability and lack of knowledge to configure and set this up properly on a BSD system. I hope that will change with time.)

In the case of PCem: the ports tree builds a newer version than the binary package, if I recall correctly. However, in both cases I spent 2–3 days figuring out where to put the ROMs, as I found no written instructions for it anywhere. Eventually, you need to place the roms folder in /usr/local/bin (the default location, where the pcem binary is placed as well), and/or use the provided pcem.cfg file in the same path to specify roms and nvr paths different from the defaults. Also make sure the folder’s permissions (chmod) are set properly, so that if you run pcem as a non-root user it can still access the roms folder.
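The permissions part can be illustrated with plain POSIX tools. A small runnable sketch, using a temporary directory instead of /usr/local/bin and an empty placeholder file instead of a real ROM image:

```shell
#!/bin/sh
# Stage a PCem-style roms folder and make it readable by any user,
# so pcem run as a non-root user can traverse and read it.
ROMDIR="${TMPDIR:-/tmp}/pcem-demo/roms"
mkdir -p "$ROMDIR/ibmxt"                 # one subfolder per emulated machine
: > "$ROMDIR/ibmxt/placeholder.rom"      # stands in for a real ROM dump
chmod -R a+rX "$ROMDIR"                  # read for everyone; traverse on dirs
ls -lR "$ROMDIR"
```

For the real thing, swap ROMDIR for the path next to the pcem binary (or whatever pcem.cfg points at) and drop in actual ROM dumps.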

The bad thing is that, for some reason, with the various ROMs/configs I tried I get a lot of segmentation faults on the console, which break/close the app. I was not able to build a single 286 machine with a simple HDD and install e.g. MS-DOS 3.20 on it, in order to put the GEM Desktop and GEM Write suite on top. (Hence I tried the VirtualBox approach with Windows 7 to run PCem there, as the same setup works just fine under Windows.)

I also tried bhyve, with the little info I could find on how to use it, and with varied success again, thanks to my inability and inexperience when it comes to a BSD system. I managed to spin up a Debian 9.9 VM installation, but afterwards, when it came to the reboot step, I got stuck and could not figure out how to boot it up again. (I even made start and stop scripts for it, as mentioned in the steps outlined in the links.)
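For reference, a hedged sketch of what booting an already-installed Debian guest under bhyve looks like, following the general shape of the scripts linked in the shownotes; the VM name, device map, tap device and image path are all placeholders:

```shell
#!/bin/sh
# Linux guests need grub2-bhyve to load the guest's GRUB first.
grub-bhyve -m device.map -r hd0,msdos1 -M 2048M debianvm
# Run the guest: 2 vCPUs, 2 GB RAM, virtio disk and NIC, serial console.
bhyve -A -H -P -c 2 -m 2048M \
  -s 0:0,hostbridge \
  -s 1:0,lpc \
  -s 2:0,virtio-net,tap0 \
  -s 3:0,virtio-blk,/vm/debian.img \
  -l com1,stdio \
  debianvm
# Free the VM instance after the guest powers off.
bhyvectl --destroy --vm=debianvm
```

The gotcha after a guest reboot is exactly that last step: the instance has to be destroyed and the whole load-and-run sequence repeated, which is what those start/stop scripts wrap.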

Some screenshots of the Virtualbox behaviour:


The same ISO installs just fine using the same HW settings (RAM, HDD size, etc.) on Linux under VirtualBox. The versions of VirtualBox on FreeBSD and Linux differ, however: Linux has the latest 6.1 version, while FreeBSD has version 5.x.

The VirtualBox VM installed from the ISO and prepared under Linux boots and works fine under FreeBSD in VirtualBox.

The Lenovo X220’s built-in webcam and the webcamd daemon

It took me a few days, some googling and some Mastodon questions to figure this one out, and I have it working 80% the way I want it.

To use the built-in webcam on my Lenovo X220, I have to use a daemon called webcamd.
It can list and recognize the built-in USB interfaces, like the webcam in the Lenovo X220, and create a /dev/video device for the webcam on FreeBSD, so that applications like Firefox (with Jitsi Meet video meetings, for example) and the Webcamoid app can find and use it. (The Iridium Chromium-based browser is for some reason unable to find it when I use the Jitsi Meet website to hold a meeting, while other online webcam-checker websites do work.)

The issue I faced, which I figured out from reading a post by someone with the same Lenovo X220, was that for the webcamd daemon to work, a kernel module called cuse.ko (cuse4bsd) needs to be loaded. Second, once the kernel module is loaded, the webcamd daemon has to be run (so it creates those /dev/video interfaces mentioned previously), and no matter what I tried from my normal user, which is a member of the wheel group, it refused to start unless run from the root account.

Once I figured this one out… interestingly enough, apps like Firefox with the Jitsi Meet video meeting website or the Webcamoid app were still not finding my webcam, though the webcamd daemon was running and the /dev/video interface was created.

After some googling and asking around on Mastodon, including a certain Black Cat from Miami I know (literally a cat, not a person referred to with the slang word), it turned out that any user who wants to access/use the webcam has to be added to some additional groups beyond just wheel… some posts I found mentioned, apart from operator, groups like video and media as well, if I'm not mistaken.

Once I added my normal user to these groups and started the webcamd daemon from root (the kernel module seems to load fine from /boot/loader.conf), it creates the /dev/video interface, and if I now start Webcamoid or Firefox with the Jitsi Meet video chat website, it is able to find and use the integrated webcam. This is what I call 80% working the way I want it; as a service it works 100%, meaning it does what it has to. It's just at 80% convenience for now… perhaps some .sh scripts could take care of the rest, and then it would be a near out-of-the-box experience.

My main issue to iron out remains virtualization. I'd like to fix VirtualBox and be able to use bhyve more. Regarding PCem, I can accept it if it is something that works best on Linux/Windows, since I can always spin up a VM and just use it there, though being able to use it on FreeBSD would be best…

Apart from everything I outlined above, the only thing I had missed was putting webcamd_enable="YES" into /etc/rc.conf. Now after a reboot I don't need to do anything, and the webcam is up and ready to work whenever I need it 🙂 100% resolved.
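Put together, the whole webcam setup above can be captured in a few commands (run as root). The extra group names are the ones from the posts mentioned earlier, and `yourusername` is a placeholder, so verify the groups actually exist on your system:

```shell
# Load the cuse kernel module now and at every boot
kldload cuse
echo 'cuse_load="YES"' >> /boot/loader.conf

# Start webcamd at boot so /dev/video* gets created automatically
sysrc webcamd_enable="YES"
service webcamd start

# Let a regular user access the webcam device nodes
# (operator plus, per some posts, video and media)
for g in operator video media; do
    pw groupmod "$g" -m yourusername
done
```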

What am I missing?

The only thing I miss, and found no way around, is a piece of software I need to connect to my VDI at work, called VMware Horizon Client (also known as Horizon View or VMware View Client; all those names are commonly used). It has a native Linux build and works just fine on Linux (on my Fedora box), but on BSD and in the ports tree it seems to be nonexistent.

I have no idea whether the Linux compatibility layer on FreeBSD would be able to make this piece of software work.

This missing piece of software is one of the main reasons why I cannot migrate my daily driver / main workstation to FreeBSD for the moment (as I am confident that the rest of the issues I currently face can be solved with proper knowledge and experience).

Some Closing Thoughts:

As you can see from my example, BSD is NOT Linux. While a lot of knowledge from the Linux world can come in handy when you find yourself at a BSD prompt, it is still a completely different beast. And we have not even mentioned other still commercially available UNIX variants like Solaris, AIX or HP-UX, or VMS and the OpenVMS derived from it.

It must also be said that while Linux has a bigger userbase out there (I am not referring to enterprise users or companies, but normal, general folks), Linux also has a much bigger pile of cash, in the form of donations from BIG corporations through the Linux Foundation: AT&T, Intel, Microsoft, Fujitsu, Google, Samsung, NEC and Huawei, just to name a few of their Platinum donors. They also have a lot of Gold and Silver members too.

The annual membership fees:

  • Platinum: $500K+
  • Gold: $100K
  • Silver: $5K to $20K

This means Linux in general has a lot more to work with… A LOT MORE. They can afford to pay developers full time, can attract talent more easily (with money it's always easier), and can pretty much bridge certain gaps with money when they have to (e.g. if they need feature X implemented in Linux, they can pay a full-time developer to work on it as a contract or project-based job).

This of course eventually results in more traction and weight in the open source community. They can organize more and bigger events, and afford top-dollar panelists or speakers from enterprises, who may even be keen to participate once they have paid the big annual fees mentioned above, to push their ideas or to have a say in what's happening and what's trending in the open source and, respectively, Linux community.

BSD on the other hand, specifically FreeBSD through the FreeBSD Foundation: at the moment of writing this article, they have around 355 donors showing on their website, with a raised amount of 327,000 US dollars towards a goal of 1,250,000 US dollars. That's the equivalent of 2.5 Platinum membership fees of the Linux Foundation.

In light of this, I must say it's very impressive what FreeBSD achieves with help and support coming pretty much only from the open source / BSD community.

Also, regarding their organizational structure: FreeBSD is not a one-man show with a couple of lieutenants, like Linux with Linus Torvalds in charge and a few key people around him. FreeBSD in this sense feels to me more like a community, and less like the Corleone family that Linux resembles most of the time.

I'm sure that if more of us donated to the BSD foundations, BSD could be more and better than it is now in terms of available software, and perhaps even broader hardware support out of the box (or maybe that's just me guessing).

But one thing is true for sure… with money, and a lot of it, it's easier to change the world.


Using bhyve on FreeBSD: a link I found which got me up and running, 80% more or less

bhyve in the FreeBSD Handbook

Interesting article on VMS Vs UNIX

BSD Vs Linux from the FreeBSD Handbook

FreeBSD Foundation

The Linux Foundation




FuryBSD (based on FreeBSD)

TSR – The Server Room Show – Shownotes – Episode 30 – 31 – 32


This episode gives credit to, and relies heavily/completely on, the presentations, notes and speeches of (and fully copyrighted by) Dr. Marshall Kirk McKusick, a renowned computer scientist known for his extensive work on BSD UNIX, from the 1980s to FreeBSD in the present day.

  Much of the information in these shownotes
  can be found in the chapter on Berkeley UNIX
  (pages 31-46) in the book:

  Open Sources:  Voices from the
  Open Source Revolution

If you are interested in more, as these episodes and shownotes are a very stripped-down version of Dr. Marshall Kirk McKusick's notes and presentations, please consider purchasing his DVD of the History of Berkeley Software Distributions, from which this episode excerpts its facts, figures and historical recounting.

It is a nearly 4-hour, 2-part presentation recounting the history of BSD from the early days, plus another lecture about modern FreeBSD.

It is great fun to watch and own and definitely a better recount of the events with many fun facts and personality as I could ever do myself.

Parts which I left out from the History can be found on Kirk McKusick‘s History of BSD DVD release.

History of BSD

The Atlas Supervisor

One of the early time-sharing and modern operating systems of its time, from the 1960s, out of Manchester University's Manchester Project.

It ran on the Atlas and Atlas-2 Titan computers

It set up many of the principles of what time sharing would actually mean

After World War II, many of the scientific minds came over to the United States, as there were far more research opportunities, even into the 1960s.

A Big Project gets started in the US – Multics

A joint project between MIT, General Electric and Bell Labs

  • GE was going to provide the special hardware *at the time it was thought special hardware was needed to do time sharing properly*, which included rings of protection, an idea that evolved into today's supervisor mode and user mode
  • MIT provided academic input
  • Bell Labs provided the industry support; Kirk McKusick describes this as Bell Labs providing, he believes, much of the funding of the project

The Project was called Multics

It ran on the GE 645 *considered a mainframe at the time; GE is no longer in the computer industry*

There were of course issues around Multics. It was driven by several groups *MIT, GE, Bell Labs*

Many of the participants had good ideas as to what time sharing should be. They would come up with an idea, prototype it, and once they were convinced it was workable… they moved on to some other idea… with the result that Multics never really became a finished product or an operational system in any real sense of the word.

This resulted in Bell Labs eventually abandoning the project: they wanted a working time-sharing system, and they just got tired of waiting for it to be finished.

The Bell Labs people who returned from the Multics project were stuck using GECOS *General Electric Comprehensive Operating Supervisor*, originally a batch-processing OS for mainframes. It was a letdown after interactive time sharing… as this one required punch-card input. 🙁

Two of these guys, Dennis Ritchie and Ken Thompson, were not about to keep working on the GECOS system. They had already seen what an OS like Multics could be *a time-sharing OS*.

They started to work on an abandoned PDP-7 (late 1969), a minicomputer from Digital Equipment Corporation.

They got to the point where they had developed what we would today call an embedded system for the PDP-7, so they could run games on it. They decided they needed a more powerful computer to do what they wanted.

The Deal

The Computer Science Department of Bell Labs had nowhere near the budget and structure it has today, so they approached the Legal Department, which had a bigger budget (1971).

They struck a deal with them about the following:

  • Legal Department would purchase a PDP-11/20 machine
  • The Computer Science Department would write a program to help the Legal Department with text processing.

This led to the development of roff, the UNIX version of Multics's runoff program… later rewritten as nroff.

UNIX and C Programming language

They started to work on what eventually became UNIX * Ken Thompson started working on UNIX while Dennis Ritchie was working on the C Language*

Originally written in Assembly language but later rewritten in C language

They gave their first talk about the operating system at the ACM (Association for Computing Machinery) Symposium on Operating Systems Principles in 1973.

They were discussing UNIX Version 4 at the conference (presumably the C-language version, but there is no accurate confirmation of this; I have reached out to Ken Thompson for confirmation and will update this article if it ever comes).

UNIX struck a nerve with academics. It looked cool, and it gave them light at the end of the tunnel: a way to break free from mainframes run by the universities' central authorities and their dumb terminals, governed by the computer administrators.

Buying a PDP-11/20 for an academic department seemed like a realistic goal, even with a slight budget stretch, versus ever dreaming of owning a mainframe their department could govern.

Present at this first conference was Bob Fabry, a now-retired professor of UC Berkeley, who wanted UNIX and all those utilities; luckily for him, Ken Thompson was an alumnus of UC Berkeley, which gave him the inside track. As a result, the Computer Science Department of UC Berkeley, outside of Bell Labs, got UNIX Version 4, which shortly after was upgraded to Version 5.

The Berkeley Computer Science department was not happy running on PDP-11/20. They wanted to run on a newly available PDP-11/45.

They did not have the money to buy one on their own, so they made a deal with the Math and Statistics Departments around 1975.

But there came a compromise: the Math and Stats departments wanted nothing to do with this new, experimental UNIX system. They wanted to use a "real" operating system called RSTS, which was DEC's operating system for the PDP-11.

This resulted in rotating timeframes (3x 8-hour periods):

  • noon to 8:00pm
  • 8:00pm to 4:00am
  • 4:00am to noon

UNIX was run in rotating timeframes:

  • noon to 8:00pm one day
  • 8:00pm to 4:00am the next day
  • 4:00am to noon the day after that

RSTS ran for the remaining 16 hours of each day

UNIX users were chasing the UNIX time slot around the clock until exhaustion then they would get caught up on their sleep and start chasing the UNIX time slot again.

So many people wanted to run UNIX that around 1974 the Computer Science Department of Berkeley bought a PDP-11/40, which ran the then-latest Version 5 of UNIX 24 hours a day, and it worked well, minus about one hour a day for panics, crashes and reboots…

Professors wanted to teach classes on the UNIX machine. It was better than having the graduate students punching cards, but the PDP-11/40 did not have the power to support such a use case. It did, however, open up the possibility of using "instructional use" funds from the state of California to purchase a PDP-11/70, the latest machine from DEC at the time, with a one-MIPS processor (in 1975).

They purchased the machine with two disk drives, but it had only a single disk controller.
In theory, UNIX was able to issue two seek commands, one to each of the two disk drives; the seeks could overlap in time, and on whichever disk the seek completed first, UNIX could start the transfer. A transfer for the other drive would be started later, since only one transfer could occur at a time.

Previously, at Bell Labs, they had not stress-tested parallel seeks. This computer at Berkeley was bigger, with a load average of 40, which stress-tested this code path for them plenty.

As a result, the system experienced strange hangs and panics.
Ken Thompson had written the relevant disk software, so Berkeley called him for help.

He dialed in from New Jersey, and Berkeley connected him to the machine via a 300-baud acoustic coupler modem (probably the first instance of remote debugging in UNIX).

Ken Thompson figured out the issue quickly, and doing so piqued his interest; as he was due a sabbatical, he decided to spend it at Berkeley over the 1975-1976 school year.

Bill Joy, Ken Thompson and the “50 Changes” Tape

At the same time Bill Joy matriculated at Berkeley (Fall, 1975)

Bill Joy was interested in programming language research and Pascal had just arrived on the scene and was about to take over the world.

Earlier, Ken Thompson had hacked together a Pascal system: a byte-code interpreter (a compiler for a subset of Pascal which compiled down to byte-codes).

Bill Joy worked on this and turned it into a real Pascal compiler (not a full native-code compiler though; it still compiled Pascal down to byte-codes, which were then interpreted). He added better error handling and recovery, which allowed students to address a number of errors at once.

This allowed Bill to learn UNIX, and it also allowed him to work with Ken Thompson, who was administrating the system while also actively developing the kernel.

Bill also learned other important things, e.g. that it's a good idea to do dumps (the dump(8) command is used to make a backup of a file system): disks do fail occasionally, and people like to be able to recover their files.

Bill stepped into the role of system administrator after Ken's sabbatical ended.

After Ken Thompson returned to Bell Labs from his sabbatical, he put together the "50 changes" tape: 50 patches to apply to your UNIX kernel to make it work better.

Bill Joy obtained the tape and applied the patches

He then rebuilt the kernel (this is how he learned to build kernels, etc.).
There was no autoconfiguration or autorestart or anything like that back then.

The kernel had to be compiled exactly for the hardware it was to run on.

You had to keep two kernels around:

  • one compiled for your actual hardware
  • one compiled for one less disk than you had, which you would boot if you lost a disk

Bill did a lot of editing while doing this work. He found that the ed editor had a lot of deficiencies, so he picked up the em editor, written by George Coulouris of Queen Mary College, London.

Bill Joy had a rule (his first rule): never spend a lot of time coming up with a good idea when You can steal a better one.


End of Part I. – Episode 30.

Start of Part II. – Episode 31.


Bill Joy then hacked on the em editor and came up with the ex line oriented editor.

Berkeley at that time did not have any screen oriented terminals * They had ADM-3 line-oriented dumb terminals*

He was very good at deciding what the problem was and finding the shortest path to solve it.

Bill now had two interesting software systems He had put together: the Pascal interpreter system and the ex editor.

He combined these into the Berkeley Software Distribution, shortened to BSD, later known as 1BSD. He distributed about 30 copies of it on tape, starting in February 1977.

He was very good at multitasking: being a grad student, doing distributions, sending out mail, answering the phone, hacking on code…

Berkeley after a while obtained some ADM-3A screen-addressable terminals, meaning you could move the cursor anywhere you wanted on the screen. Bill took his ex line-oriented editor and converted it into a screen editor. And that is how the vi editor was born.

However, he did not want to maintain three different versions of his vi editor to handle ADM-3As, ADM-3s and DEC printing terminals, so he came up with "termcap" (terminal capabilities).
He isolated the terminal-specific characteristics into strings and dynamically configured the editor based on the terminal you were using. On the downside, this allowed terminal manufacturers to create any strange screen addressing they wanted, knowing they could write a termcap entry to make it work.

As a result, the termcap file grew into a monster.

Bill eventually stopped working on vi; it was never actually finished, just good enough for what he and the people around him needed from an editor.

He did not like the Bourne shell (written by Stephen R. Bourne while at Bell Labs) and the way it worked; he wanted something more C-like, so he started working on the C shell. Bill was, after all, a graduate student in programming languages.

Meanwhile, he did an update to the original Berkeley Software Distribution, calling it 2BSD.
He shipped about 75 copies, starting in June 1978.

2BSD was the last distribution done by Bill Joy for the PDP-11

There was a continued interest in PDP-11s, so other people started backporting the work Bill was doing on the VAX to PDP-11s which led to the releases such as 2.1BSD, 2.2BSD, etc. eventually leading to the 2.11BSD release which still ran on PDP-11s well into the 1990s.

  • By 1978 the Berkeley Computer Science Department had begun to make a name for itself, which as per Kirk McKusick was thanks to Bill Joy and his work. As a result they had enough grant money to buy the latest machine from DEC, a VAX 11/780; delivered in 1978, it was one of the first ones off the assembly line.

    They wanted to run UNIX on this machine. They obtained a copy of UNIX/32V, a quick-and-dirty port of UNIX Version 7 from the PDP-11 (no virtual memory, no paging). Limited, but it got the VAX up on UNIX.
  • The PDP-11 was limited to 128K text + data
  • The VAX was limited to the amount of physical memory on the machine, minus the memory dedicated to the kernel
  • the department's VAX 11/780 was fully loaded with 2 MB of memory,
    so a user process was theoretically limited to about 1.5 MB

The people who had spent a lot of money on the VAX did so because they wanted to run vaxima, a differential-equation solver written in LISP.

in those days:

  • when you typed “lisp”, as soon as you got a prompt,
    you were using a megabyte
  • as soon as you typed anything, you were up to two
  • as soon as you started doing anything, memory usage
    went up from there

Unfortunately, as this machine had only 2 megabytes of memory, users could not do anything useful in LISP. They needed virtual memory, and the VAX/VMS operating system had it.

Luckily for them, Bill Joy came to save the day, telling them he could get virtual memory working in UNIX on the VAX, and that he could do it over the four-week Christmas break.

He was given until the first day of classes in January; otherwise DEC's VAX/VMS would be used.

Remember the first rule of Bill Joy?
never spend a lot of time coming up with a good idea when You can steal a better one.

Ozalp Babaoglu had hacked together a virtual memory system for his PhD thesis. Bill took this and hacked it into UNIX, unconcerned by the fact that Ozalp's virtual memory system was not finished.

UNIX/32V and virtual VAX/UNIX, or VMUNIX (Virtual Memory UNIX), alternated being up and down over the course of the break. VMUNIX was up more and more as the weeks went on, and it was up for good by January 18th.

Things were a bit rocky for the next couple of weeks. LISP now worked, but slowly; however, that was not the fault of UNIX. Users were trying to solve differential equations using 10 MB of virtual memory on 2 MB of real/physical memory.

With this newfound success came the necessity to port all of the utilities from 2BSD to the VAX

By this time Kirk McKusick shared an office with Bill Joy and was being drawn more and more into the work on BSD.

Once all the utilities were ported, Bill Joy and his office peers (Peter Kessler, Robert Henry, Kirk McKusick and a few others around the department) had their first complete system:
a kernel, the utilities, everything you needed to load onto bare-metal hardware and just run it.
They released it as 3BSD.

As a reminder: from 1984 to 1996, Bell Laboratories was called AT&T Bell Laboratories.

Bill Joy was still a graduate student, still hacking code and doing distributions, but now a new problem was rising on the horizon: the 3BSD distribution also included 32V code (UNIX/32V from AT&T Bell Labs).

Previously the distribution had included only code written at Berkeley; now that it included 32V code, licensing became necessary.

Bill Joy took all the calls which came into the shared office.

License verification was just a verbal confirmation.

About 100 copies of 3BSD went out under these terms, starting in December 1979.

These 100 copies meant about 100 different organizations, with each copy deployed on multiple systems: major universities, a few companies and research groups.

As 3BSD became more widely distributed, a lot more people began to get involved in it.
Initially a 32V license cost $99, which later rose to around $10,000.

(I included two links to original documents scanned from 1983-1984 showing UNIX System V license prices; it is interesting to see the astronomical rise those UNIX license fees went through.)

This is how 3BSD and licensing worked in this scenario:

– The organization would buy a VAX
– purchase a 32V license for it
– and then run 3BSD on their system

As a result of the wide distribution, more people started hacking on the code resulting in more contribution to come in.

Auto-reboot, also known as auto-restart, became available around this time: when a machine crashed, it could automatically reboot itself.

Previously, when the machine halted, someone had to restart it from the console. Auto-reboot was made possible in part by the first versions of fsck, which meant that after a crash there was no longer any need for a computer guru to manually run filesystem check and cleanup commands. It finally became possible to operate 24/7, which meant the systems could go into heavy production.

One of the early LISP systems was coming out around this time.

Also, thanks to a program called delivermail, the predecessor of sendmail (both created by Eric Allman), it became possible for the first time to have email travel between two different machines.

When Eric Allman rewrote delivermail into sendmail, he decided that you shouldn't have to recompile your whole email delivery agent every time you wanted to change something; delivermail had everything compiled into it.

This resulted in the sendmail configuration file, so you could reconfigure it on the fly without recompiling the whole thing over and over again. He also made it work with the SMTP Internet protocol.

All the changes mentioned above, plus improvements to the file system and some other bits and pieces, were gathered together and released as 4BSD, starting in October 1980.

About 150 copies of 4BSD were shipped; however, not all of them had to be handled by Bill Joy.

The University noticed that something big was happening out of this graduate student's office.
This resulted in the University's lawyers starting to sniff around, and they decided that license verification was not rigorous enough 🙂 who would have thought 🙂

An administrative assistant was hired to do license verification.

Many of the above activities became possible because a new source of funding became available, actually driven by VAX/VMS: the funding was coming from DARPA (Defense Advanced Research Projects Agency).

DARPA, as an arm of the Department of Defense, was coordinating the research needs of the various branches of the military (Army, Air Force, Navy, Marines, etc.). It was funded by these various branches, and it was supposed to determine which projects were worthy of funding and which branch of the military should fund which project.

Projects were supposed to have some bearing on military readiness.

The situation with the military was:

– they had many projects going on
– they were running on different hardware
– they were using different operating systems
– they were using different programming languages including assembly language
– they were all using legacy, proprietary systems

The result was that nobody could share results or programs with one another.

DARPA was going to decide on:

  • a certain computer
  • a certain operating system
  • they would get everyone they were funding to use this system
  • so that when one group developed something, it could be shared with the other groups/branches

The machine with the best price/performance for the size of grants they were dealing with was the VAX.

The only remaining question was: Which operating system to use? VMS or UNIX?

VMS was the initial favorite: it was written by the vendor, it was supported by the vendor, and clearly you could get support if you had issues. However, the researchers wanted UNIX.

David Kashtan decided to resolve the VMS vs. UNIX matter.
He was then at Stanford Research Institute, and as a DARPA grant recipient he was interested in helping to select the hardware and OS that would become the basis for the standard.

He wrote the famous VMS paper where he described the results of benchmarks he ran on both VMS and Berkeley UNIX systems.

These benchmarks showed severe performance problems with the UNIX system.

He developed micro-benchmarks:

  • how fast can you do getpid()?
  • how fast can you write a byte back and forth over a pipe between two processes?
    (to measure context-switch times)
  • etc.

He ran these micro benchmarks on VMS and on UNIX.

Performance was better on VMS in all cases

So, because of the performance measurements and vendor support, he concluded they should standardize on VMS.

Bill Joy received a copy of David Kashtan's paper.
He decided to speed up the benchmark cases in UNIX.

He did a lot of tuning:
Created specialized code, optimized the assembly language, etc.

He was able to make all the benchmarks run faster on UNIX than on VMS, except for context switching.

Through surreptitious means, Bill and his associates were able to determine how VMS did context switching, which resulted in UNIX doing the context-switching benchmark at the same speed as VMS.

Bill Joy then wrote a rebuttal paper. He trashed the validity of the benchmarks, but then showed that UNIX was as fast or faster in all the benchmark cases anyway.

As a result, DARPA funding of work at Berkeley began around June 1980.

Berkeley was commissioned to ship the work they had done to make UNIX so fast (the performance tuning and code optimizations). As these had been made as proofs of concept, some time was needed to make them production-ready.

Autoconfiguration had to be added so that BSD recipients did not have to know how to build a custom kernel.

Bill Joy added additional things, including autoconfiguration written by Robert Elz at the University of Melbourne, Australia. The system could look at the hardware present and configure itself, so a specific configuration no longer had to be compiled into the kernel.

The performance tuning and autoconfiguration came out as the next Berkeley release 4.1BSD

People at AT&T were concerned that a name like 5BSD could be confused with System V, hence the switch to using 4.x as the release naming.

Around 400 copies of 4.1BSD were shipped starting in June, 1981

At the time AT&T was shipping the following:

  • UNIX Version 7
  • UNIX/32V
  • Programmers Workbench (PWB)
  • System III Created in early 1981 which was a combination of Version 7, 32V, PWB, USG (UNIX Support Group) Tools

None of these offerings ran particularly well on the VAX, none had virtual memory

Therefore AT&T would sell a System III license, which they got very good money for, and the buyer would then get a tape from Berkeley with 4.1BSD to run on the VAX.

DARPA was pleased that Berkeley shipped their release (4.1BSD) in only a year, even though it had been promised in four months; that was still very fast compared to other DARPA contractors.

As a result, they decided to award another contract to Berkeley, this time to get networking into UNIX. UNIX already had networking of a sort with UUCP (Unix-to-Unix Copy), but DARPA had bigger plans.

DARPA did not trust a university / academics to do something as critical and complex as implementing a protocol stack like TCP/IP.

DARPA decided Berkeley would design:

  • an API for accessing the networking
  • create device drivers for Ethernet controllers

To implement the TCP/IP protocol stack, DARPA contracted BBN (Bolt, Beranek and Newman).

DARPA set up a steering committee to oversee the development.

In response to the DARPA contract Bill Joy wrote the 4.2BSD architecture manual.

It took him a couple of weeks, and it described everything that would be in 4.2BSD (including things like mmap, which was not implemented for another eight years).

Everything was to be implemented in one year.

The manual included the networking interface, with function prototypes of the socket interface (accept(), connect(), etc.) at the level of man pages. It gave descriptions of how the system calls would be used.

Bill then asked BBN for a copy of their code so he could test his interface.

Rob Gurwitz from BBN provided a pre-release version of their networking code to Bill.

The environment where Rob Gurwitz developed BBN's networking code consisted mainly of VAX 11/750s and 56 Kbit lines. His network code saturated a 56K line using 100% of the CPU on a VAX 11/750.

At Berkeley, however, they had 3 Mbit Ethernet, the latest and greatest from Xerox PARC.

Doing bulk data transfer between two machines connected with this 3 Mbit line using BBN's code, the throughput was only 56K.

This resulted in Bill Joy completely rewriting the BBN code. As a result, he was able to saturate the 3 Mbit Ethernet using only 90% of the CPU.

Many people wanted to start testing the interface, so this system was informally distributed as 4.1aBSD starting in April 1982. It was basically 4.1BSD plus the networking code; actually an alpha release, but it proliferated widely.

At this point Sam Leffler joined the group. He came from a company that had a networking product and held an advanced degree in computer science.

He saw deficiencies in the networking interface which he corrected.

Summer was approaching, and Kirk McKusick asked Bill if he could work on some project during the summer while also working on his thesis.

Without getting into details (for the little nuances and anecdotes on how the FFS filesystem came alive, you have to watch Kirk's DVD on the History of BSD), this little side project of Kirk's turned into 18 months of work on FFS (the Fast File System), which took Bill's original prototype filesystem to a release-ready filesystem and went into 4.1b BSD in the form of test distributions in June 1982.

As a result of the new filesystem (FFS) and the 4.1b release, Bill Joy funded Kirk's trip to the USENIX conference in Boston out of the DARPA funds, where Kirk went on stage for one of the first times and talked about the filesystem. As his many later lectures and presentations show, he never really stopped since.

The team at Berkeley had put together their UNIX system.

Bill Joy and Sam Leffler had revised the networking and the IPC Code (inter-process communication)

Bill suddenly became interested in separating the code into machine-dependent and machine-independent parts.

Later on he shared with Kirk that he was thinking of going to a startup called Sun Microsystems.

The idea was the following:

They were going to take commodity Motorola 68000 microprocessors and run BSD UNIX on them.

As a side note: a number of small companies were already selling 68000-based boxes, but running a variant of UNIX from the Santa Cruz Operation (SCO).

After Bill arrived, Sun was able to ship BSD, and as per him, BSD UNIX “will be so much better”.

The marketing pitch was:

  • Open systems
  • Commodity hardware
  • Commodity software
  • If your vendor is not satisfactory, you can go to whomever is cheaper or better
  • People will buy into this
  • Sun will sell a lot of systems

Bill tried to convince Kirk to join as well. He could get a single-digit employee number and great stock options at Sun.

In the end, Kirk McKusick did not join Sun; he had his own reasoning behind it, which he explains in detail in his presentation on the History of BSD.

After 18 months Kirk finished his PhD; by then Sun was a big company.
He was, however, the first consultant Bill hired at Sun, brought in to port the Pascal compiler, which was written in assembly language. Kirk completed the work together with Peter Kessler.

Personally, I would have loved to see Kirk McKusick join Sun.

Perhaps there is an alternate universe out there where it happened, where Sun thrived through the 2000s, an alternate universe where Oracle does not exist and where Sun, DEC and Compaq are still the big names in IT.


End of Part II. – Episode 31.

Start of Part III. – Episode 32.


Bill Joy decided to do a test release called 4.1c just as he was leaving Berkeley for Sun.
It contained FFS (the Fast File System), the networking code, and the new signal work written by Sam Leffler. The release came out in April 1983.

The initial copy went to Sun Microsystems; later, copies went to other users.

There was talk of a 4.1d release, but it never happened. It was supposed to be the VM (virtual memory) release with mmap. Bill Joy was committed to doing this release before he fully transitioned to Sun, but he was gone before it could happen, as was Bob Fabry, the professor who was running the project.

This is the end of the Bill Joy era of the Berkeley Software Distribution.

There was significant pressure to release a system; it had already been a couple of years since 4.1BSD had come out.

Mike Karels joined the project around June 1983, coming from molecular biology.

Sam Leffler and Mike Karels, with some help from the rest of the team, put together the 4.2BSD release, which shipped in August 1983.

(AT&T System V was coming out in about the same time frame)

About 1,000 copies of the 4.2BSD release were distributed, one copy per site. A site represented all the machines at a major university or all the machines in a large corporation; in other words, it represented a huge number of machines.

4.2BSD was a very important release, a high point of the Berkeley distributions in terms of the commercial world.

At that moment you could have:

  • System V with UUCP
  • OR 4.2BSD with TCP/IP
  • System V with around 30 KBytes/second of filesystem bandwidth
  • OR 4.2BSD with 400 KBytes/second of filesystem bandwidth

There was such high demand for 4.2BSD that the Berkeley team could not make release tapes fast enough. AT&T was also in no hurry to verify licenses, which slowed the whole process down as well.

The first seven 4.2BSD tapes had to go to seven designated DARPA recipients.
One of them was BBN (Bolt, Beranek and Newman).

—– TCPIPWars ——
The story would continue with the TCP/IP Wars, which I will leave out, picking up again after they happened.
—– TCPIPWars ——

Berkeley issued the 4.3BSD release in June 1986.
Bill Joy had fully left by then.
The team at Berkeley started to restaff the project.

Kirk McKusick joined the project in January, 1985

Keith Bostic joined in October 1986.
Keith had one requirement for taking the job:

he needed to be allowed to finish his port of 4.2BSD to the PDP-11, which would become the 2.11 distribution.

Kirk and Mike agreed, as long as he did it in his spare time.

Keith Bostic’s job was divided into three parts:

  • 1/3 of the time he would answer the phone and provide technical support to their users; if it was something interesting he would work with them, particularly if they had a bug fix in hand
  • 1/3 of his time he dealt with bug report emails
  • 1/3 of his time he was to do software development
  • And, as said, he worked on the 2.11 project for the PDP-11 in his spare time.

He finished the 2.11 release in late 1988 (about two years after he joined the team).

The next release after 4.3BSD was called 4.3-Tahoe (there was debate at the time about whether it should be called 4.4BSD); it was released in September 1988.

Mike Karels was of the opinion that 4.4BSD should have a certain set of features, and since this release did not have them, it could not be called 4.4BSD.

The primary purpose of this release was to support a second architecture.
(Some partitioning of the code had already been done at the time Bill Joy was leaving, as mentioned previously.)
This was the first full-fledged port to a new architecture: a machine from Computer Consoles Inc. (CCI), the 6/32, which had an architecture similar to the VAX but was about 5x faster, in the size of a large deskside machine.

Meanwhile, System V (AT&T) and the commercial OS vendors continued putting out ever more complex systems, including networking, virtual memory, etc., but the effect was that the price of these operating systems, and of the machines shipped with them, kept increasing.

Kirk McKusick notes that the only real commercial UNIX at that time was still System V.

Many users wanted the BSD code from Berkeley but could not afford the $250,000 AT&T source license that would have allowed them to pull out the TCP/IP stack and use it in some embedded application.

This raised the question:

Since the TCP/IP stack code was developed at Berkeley, could it not be released separately?
[DARPA placed no restriction on code releases, other than requiring that the code be distributed to other DARPA contractors]

All DARPA contractors had UNIX licenses so it was not an issue.

The socket interface and the TCP/IP code were done entirely outside of AT&T.

Therefore they contained NO AT&T proprietary code, so people who wished to incorporate them into products should not have to buy an AT&T license.

The above led to the release of “Networking Release 1” (“Net1” or “Net/1”) in June 1989, which included the following code extracted from the BSD release:

  • network device drivers
  • TCP/IP stack
  • sockets
  • ftp
  • telnet
  • the “R” commands: rcp, rlogin, rsh, etc., which were quick hacks created by Bill Joy to test the networking code, originally meant to be temporary until the real commands (ftp, telnet, etc.) could be written; the R commands nevertheless persisted for two decades.
    They used trusted ports below 1024, which could only be opened by processes running as root.
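The trusted-port rule the R commands leaned on is still enforced by Unix kernels today: only root may bind a TCP port below 1024, so a connection arriving *from* a low port proved the remote client had root privileges. A small sketch of that kernel rule (the port numbers are just illustrative):

```python
import socket

def can_bind(port):
    """Try to bind a TCP socket to the given local port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except OSError:          # PermissionError for low ports as non-root
        return False
    finally:
        s.close()

high_ok = can_bind(0)        # port 0: kernel assigns an unprivileged port
low_ok = can_bind(513)       # rlogin's port; succeeds only when run as root
```

Run as an ordinary user, the low bind is refused while the ephemeral bind succeeds, which is exactly the asymmetry the rlogin trust model depended on.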

A 9-track tape of this code cost $1,000; however, it was also available for free over anonymous UUCP. Many people bought the tape anyway: they wanted the piece of paper saying the code was freely redistributable.

The Berkeley team got back to their main project, continuing to put in all the features that were supposed to be in the final 4.4BSD release.

They decided to do another interim release, as a fair amount of time had passed since the last one.

4.3-Reno was the name planned for the next release, partly because Reno is a gambling capital and it was something of a gamble to run this distribution, since not much release engineering had been done.

It contained a number of major new systems:

a new virtual memory system (the previous one had been developed by Bill Joy and Ozalp Babaoglu back in 1979)

Kirk McKusick was put in charge of the new VM system; work started in 1988, and it first became operational in late 1990.

He had two candidates for the new VM system that both looked very good.

Sun had done a VM system that was written straight to the architecture manual interface
(the 4.2BSD architecture manual, written by Bill Joy in response to the DARPA contract, as mentioned previously).

CMU (Carnegie Mellon University) had done the Mach project; its VM system was part of a microkernel project but could be used without taking the microkernel part. DARPA favoured this option (Mach).

For the Berkeley team to use the Sun VM system, Sun had to release it. They had a discussion about this around 1988, and even though every major player at Sun, including Bill Joy and Scott McNealy (co-founder and CEO of Sun Microsystems), along with the technical people and their managers, agreed it was a good idea, the board of directors and the lawyers advised against it (Sun's stockholders could take legal action against the company for giving away its property for free).

As a result of the Sun decision, the Berkeley team went with the Mach VM system.
Mike Hibler at the University of Utah did the work of integrating the Mach VM system into BSD, and then Kirk McKusick made minor changes.

At the same time the Berkeley team wanted to put NFS into BSD (the Network File System, originally developed by Sun Microsystems).

NFS was taking over, thanks to Sun's skillful marketing.

Sun had placed the specification into the public domain.

Rick Macklem at the University of Guelph in Ontario, Canada did the work; over the long winter Rick had written NFS.

The 4.3-Reno release started shipping in November 1990 and included the following:

  • the new VM system (Kirk McKusick notes this did not come in the initial 4.3-Reno release but very soon after, in an updated 4.3-Reno release)
  • NFS
  • the new vnode work: updates to the vnode interface to support the addition of mmap, written by Kirk McKusick

Meanwhile the release of Net/1 led to a desire to release more of the BSD software freely.

Calls requesting this were coming in to Keith Bostic, who brought it up every week in the Berkeley team's weekly meeting.

Kirk and Mike pointed out impediments to the release:

  • it was not just a kernel issue
  • there were many utilities
  • there was the C library
  • all of which were riddled with 32V code

It would be a mammoth undertaking to sort all this out.

Kirk and Mike told Keith that he could work on this in his spare time if he wanted to.

They figured this would be the last they would ever hear of the issue.

At the next USENIX conference, Keith gave a presentation:

he put up a list of utilities and said, “I need people to write these utilities”: contribute them to Berkeley and get recognized for your accomplishments.

People started to rewrite utilities from the simple to the relatively complex (cat, od, head, tail); then someone rewrote troff, which provided evidence that the BSD contributors were serious.

Keith was busy integrating the contributed utilities and rewriting parts of the C library. One day he ambled into one of the Berkeley team meetings: he had the C library rewrite mostly done and about half of the utilities finished.

He asked Kirk and Mike how the kernel was coming.

Kirk and Mike realized they couldn't get the user community to rewrite the kernel.

Kirk, Mike, and Keith built an inverted database from the 32V source code.
They then went through the BSD source code line by line, looking each line up in the inverted database to see what matched.

When the code inspection effort was over, they found that only about six files were contaminated
(note that trivial similarities had already been rewritten as part of this effort).
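The mechanics of that comparison are easy to picture. A toy sketch of the approach, with invented file names and sample lines, assuming the inverted index simply maps each normalized source line to the files containing it:

```python
from collections import defaultdict

def build_inverted_index(files):
    """Map each whitespace-normalized source line to the files it appears in."""
    index = defaultdict(set)
    for name, text in files.items():
        for line in text.splitlines():
            key = " ".join(line.split())   # normalize whitespace
            if key:
                index[key].add(name)
    return index

def contaminated(candidate_files, reference_index):
    """Return the candidate files containing any line also in the reference."""
    hits = set()
    for name, text in candidate_files.items():
        for line in text.splitlines():
            key = " ".join(line.split())
            if key and key in reference_index:
                hits.add(name)
    return hits

# Invented example data: one shared line marks a file as contaminated.
v32 = {"kern/sys.c": "register int i;\nfor (i = 0; i < n; i++)\n"}
bsd = {"kern/new.c": "int i;\nfor (i = 0; i < n; i++)\n",
       "lib/fresh.c": 'printf("rewritten");\n'}

index = build_inverted_index(v32)
matches = contaminated(bsd, index)
```

The real effort of course involved judgment calls about what counted as a meaningful match, but the index-then-scan structure is the same.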

They thought that they could rewrite the six files.

But then they decided it would be better to release the software without them.

Without these six files the software would be broken (it wouldn't even compile), so, hopefully, AT&T wouldn't even notice.

The Berkeley team felt the need to talk to the higher-ups at the University; they felt they shouldn't do this release on their own. They talked to the University lawyers about the license, and to streamline the process they reused the Net/1 license with a name change to Net/2.

The Berkeley team talked to the head of the Computer Science department, and the matter was escalated up to higher levels within the University, to the Office of the President of the University of California.

(the Office of the President oversees all the UC campuses)

Auditors were brought in, and the Berkeley team spent three days being audited.

Permission was finally granted from a very high level of the University.

The Net/2 software was finally released in June 1991.

The software was well received; many people bought it as before, even though it could be downloaded for free via ftp.

By early 1992, several groups had figured out how to rewrite the six missing files.

Berkeley Software Design, Inc. (BSDI) was shipping an alpha version of their product by January 1992.

Bill Jolitz had also rewritten the files and released a system called 386/BSD.

About this time, Mike Karels decided to go work at BSDI (Berkeley Software Design, Inc., later BSDi).

Note: Kirk McKusick was an angel investor in BSDi.

BSDI had been shipping a product; they ran ads with the phone number 1-800-ITS-UNIX.

AT&T sent a cease-and-desist letter to BSDI:
stop shipping the product, or get a license from USL
(UNIX System Laboratories, a majority-owned subsidiary of AT&T, about 80% owned by AT&T).

BSDI was a four-person startup.

USL filed a lawsuit against BSDI, the details of which I will leave out.

—– Lawsuit ——
The story would continue with the lawsuit, which I will leave out, picking up again after it happened.
—– Lawsuit ——

4.4BSD releases:

The original intent was to release two versions of 4.4BSD:

4.4BSD, also referred to as 4.4BSD-Encumbered, which had everything and required an AT&T license, and 4.4BSD-Lite, which had only freely redistributable code.

Eventually 4.4BSD-Encumbered (4.4BSD) was released in June 1993, the team being tired of waiting for the resolution of the lawsuit.

4.4BSD-Lite was released in June 1994.

Keith Bostic and Kirk McKusick continued working at the University, putting in bug fixes and enhancements for 4.4BSD and 4.4BSD-Lite as they were received.

As a result, they released 4.4BSD-Lite2 in June 1995.
It was the last BSD distribution that came out of Berkeley.


From the article by Charles Babcock of InformationWeek, published on the 8th of November, 2006:

The single Greatest Piece of Software Ever, with the broadest impact on the world, was BSD 4.3. Other Unixes were bigger commercial successes. But as the cumulative accomplishment of the BSD systems, 4.3 represented an unmatched peak of innovation. BSD 4.3 represents the single biggest theoretical undergirder of the Internet. Moreover, the passion that surrounds Linux and open source code is a direct offshoot of the ideas that created BSD: a love for the power of computing and a belief that it should be a freely available extension of man’s intellectual powers–a force that changes his place in the universe.


Atlas Supervisor:

Dennis Ritchie:

Ken Thompson:

DVD order of Dr Marshall Kirk McKusick

Order RunBSD Stickers

Wikipedia Entry on Kirk McKusick:

Kirk McKusick Homepage:

BSDTalk Interview with Kirk McKusick


Vi Editor:

Bill Joy:

Ted Talk:

Bob Fabry:

Sam Leffler:

A talk with
Wikipedia entry



DEC Alpha:

Ozalp Babaoglu

Wikipedia Entry on the History of Berkeley Software Distribution:

Bell labs on wikipedia:

UNIX License Fees (1983-1984)

Franz LISP:

Sendmail and Delivermail of Eric Allman:

UUCP – Unix-to-Unix Copy

Context Switching



4.2BSD Networking Implementation Notes:

4.3BSD Virtual Memory Management (with some comments on 3BSD)


Harris / Tahoe Platform:

Keith Bostic:

What’s The Greatest Software Ever Written?


4.4BSD Lite Release 2: last Unix operating system from Berkeley
(tons of documentation links on the Github page)

SIMH Compatible Tapes of BSD Releases:

The UNIX Heritage Society:

Run 32V 3BSD and 4.0BSD under SIMH:

Install 4.3BSD on SIMH:


Marshall Kirk McKusick, George V. Neville-Neil, Robert N.M. Watson. The Design and Implementation of the FreeBSD Operating System, 2nd ed., Addison-Wesley Professional, 2014. ISBN-13: 978-0321968975, ISBN-10: 0321968972

McKusick, Marshall Kirk; Neville-Neil, George V. The Design and Implementation of the FreeBSD Operating System, 1st Edition, Addison-Wesley Professional,2004. ISBN-13: 9780201702453 ISBN-10: 0201702452

McKusick, Marshall Kirk; Bostic, Keith; Karels, Michael J.; Quarterman, John S. The Design and Implementation of the 4.4 BSD Operating System, Addison-Wesley,1996. ISBN-13: 9780201549799 ISBN-10: 0201549794

Negus, Christopher; Caen, Francois BSD UNIX Toolbox: 1000+ Commands for FreeBSD, OpenBSD and NetBSD, Wiley, 2008. ISBN-13: 9780470376034 ISBN-10: 0470376031

Lucas, Michael W. Absolute FreeBSD, 3rd Edition: The Complete Guide to FreeBSD, No Starch Press, 2018,ISBN-13: 9781593278922 ISBN-10: 1593278926

Approved Reading from the Computer Science Department of Carnegie Mellon


ZFS – An Introduction of the Implementation of ZFS by Dr Marshall Kirk McKusick

TSR – The Server Room – Shownotes – Episode 29

Visio's dominance in the workplace… Why no native Linux version, if MS is so committed to and in love with Linux? And no, browser-only versions do not count for me!!!

More diagram software compared; I started based on this article:

yEd or its commercial offering Graphity (the difference between the two? good question)

Interesting… The commercial offering for teams of 10 or fewer can go with a $10 license per year (no need to renew after the end of the first year if not wanted), and it includes the user limits below and all the apps. Very interesting, even for a one-man team like myself. I have to test drive this to see if it's good for the diagram needs where the yEd Graph Editor is not enough and I need the commercial offering, Graphity.



Visual Paradigm Community Edition – I think the most full-fledged

TSR – The Server Room – Shownotes – Episode 28

New Raspberry Pi with 8GB, and a beta of a 64-bit OS; Raspberry Pi OS changes its name from Raspbian

Do You really need that much RAM?

The short answer is that, right now, the 8GB capacity makes the most sense for users with very specialized needs: running data-intensive server loads or using virtual machines. As our tests show, it’s pretty difficult to use more than 4GB of RAM on Raspberry Pi, even if you’re a heavy multitasker. 

As part of this announcement, the Raspberry Pi Foundation has decided to change its official operating system’s name from Raspbian to Raspberry Pi OS. Up until now, Raspberry Pi OS has only been available in 32-bit form, which means that it can’t allow a single process to use more than 4GB of RAM, though it can use all 8GB when it is  spread across multiple processes (each browser tab is a separate process, for example). 

However, the organization is working on a 64-bit version of Raspberry Pi OS, which is already available in public beta. A 64-bit operating system allows 64-bit apps that can use more than 4GB in a single process. It could also lead to more caching and better performance overall.  
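The 4GB-per-process ceiling follows directly from pointer width: a 32-bit pointer can only name 2^32 distinct byte addresses. A quick check of the arithmetic:

```python
# With n-bit pointers, a process can address at most 2**n bytes.
max_bytes_32 = 2 ** 32          # 32-bit address space
max_bytes_64 = 2 ** 64          # 64-bit address space

gib = 2 ** 30                   # one GiB in bytes
limit_gib_32 = max_bytes_32 // gib   # per-process ceiling on a 32-bit OS
```

That is why the 32-bit OS can hand the full 8GB to *several* processes at once, but never more than 4GB to any single one, while a 64-bit OS removes the ceiling for practical purposes.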

Automotive things: DIY or Off The Shelf Solutions

Is it better to buy an off-the-shelf multimedia solution for your car with navigation, a rearview camera, etc., or is it better to tinker and build a DIY solution, for the same amount of money or sometimes less, from Raspberry Pis and matching components that does everything you need?

DAB+ FM Radio Module

How can it be made to work with a Raspberry Pi 4 head unit?

The same board can be controlled from an Android tablet or phone, or from a Raspberry Pi or other Linux machine.

Can it be made to work with OpenAuto?

OpenAuto runs on top of Raspbian Linux as an autolaunched fullscreen app.

As far as I can see, OpenAuto Pro accepts creating shortcuts for external applications to be called/launched, so perhaps the DAB+ module can be controlled under Linux in a way that is a bit more user friendly and intuitive, perhaps by calling a script file instead.

This is OpenAuto: the last commit on GitHub was two years ago 🙁

One workaround is to run Android on the Raspberry Pi, e.g. LineageOS, then use a DAB+ controller Android app for the DAB+ unit and corresponding Android apps for the rest, like navigation, the rearview camera, etc.

Unfortunately, LineageOS 16 (Android 9) graphics performance is not ready for multimedia or gaming use, so I don't know how well navigation apps would run in this case.

i-Carus System

One system which has pretty much all I look for and want is the i-Carus system.

It does not seem to have Bluetooth audio passthrough, and I don't see how it could interact with the Monkey DAB+ module mentioned previously.


All you need in car

  • Multimedia center supporting all audio and video formats
  • FM Radio
  • Internet Radio
  • GPS navigation
  • Full HD car DVR camera
  • OBD-II Engine diagnostics and data reading
  • Wireless Networks: 3G, 4G, Wi-Fi, Bluetooth

Extendable Platform

Since iCarus is based on a Raspberry Pi Linux computer, you get almost unlimited opportunities for extending the functionality of the system by adding external hardware and sensors or creating your own software.

The ICR Board
(connects directly to your car's radio connector)


The heart of the iCarus Car PC.
Connect your Raspberry Pi (or any other compatible single-board computer) to the ICR board and build your highly customized Car PC system.

Just connect the iCarus Car PC directly to your car's radio connector (in case your car uses a standard ISO-10487 connector) or via a harness adaptor.

A separate ICR board is a suitable solution for makers building a Car PC in their own housing.

This system raises two important questions for me at least:

  • Would it be possible for i-Carus to interact with and control a Monkey DAB+ board (which works from Android and from Raspbian Linux by default)? Could it be used as an interface for Android Auto once a compatible smartphone is connected via USB or wirelessly?

  • Can additional apps run on top or in parallel, like the OpenAuto project, which offers similar functions to i-Carus (so the unit can be booted into either i-Carus or OpenAuto)? That would make it a versatile screen, not limited to the i-Carus software.

Commercial Offerings Out There

Mentioning only one example, as there are as many as you can imagine and they come in all shapes and sizes.

Sony XAV-AX8050D: an 8.95" (22.7 cm) DAB multimedia receiver with Bluetooth®
Ready to accept a rearview camera
Android Auto / Apple CarPlay
530 euros where I live

It ticks pretty much all the boxes I need it to.

What else can a Raspberry Pi be used for in a car?