SolarWinds is a multinational company with over 3,000 employees and 300,000 clients worldwide, a major IT firm that provides software for entities ranging from Fortune 500 companies to the US Government.
SolarWinds' main product, the Orion Platform, is a powerful, scalable infrastructure monitoring and management platform designed to simplify IT administration for on-premises, hybrid, and software-as-a-service (SaaS) environments in a single pane of glass.
The SolarWinds hack and why it is such a big deal (or is it?): the SUNBURST and SUPERNOVA attacks
They call it the most serious cyber attack against an enterprise software giant ever.
Reuters first reported that SolarWinds was the subject of a massive cybersecurity attack that spread to the company’s clients.
The breach went undetected for months and could have exposed data in the highest reaches of government, including the US military and the White House.
As always, US officials think it was the Russians behind it (aren't they behind everything?? But do we have enough Russians to be behind everything? 🙂)
Whoever was behind this hack was able to use it to spy on private companies, like the elite cybersecurity firm FireEye, and the US Government, including the Department of Homeland Security and the Treasury Department.
Earlier in 2020, hackers secretly broke into Texas-based SolarWinds' systems and added malicious code to the company's software. The system, called "Orion," is widely used by companies to manage IT resources; SolarWinds has 33,000 customers that use Orion.
Most software providers regularly send out updates to their systems, whether it’s fixing a bug or adding new features. SolarWinds is no exception. Beginning as early as March, SolarWinds unwittingly sent out software updates to its customers that included the hacked code.
The code created a backdoor to customer’s information technology systems, which hackers then used to install even more malware that helped them spy on companies and organizations.
The attack used a backdoor in a SolarWinds library; when an update to SolarWinds occurred, the malicious code would go unnoticed thanks to the trusted certificate. In November 2019, a security researcher notified SolarWinds that their FTP server had a weak password of "solarwinds123", warning that "any hacker could upload malicious [files]" that would then be distributed to SolarWinds customers.
The New York Times reported that SolarWinds did not employ a chief information security officer and that employee passwords had been posted on GitHub in 2019. Other sources, however, estimate that the leak through SolarWinds' public GitHub repo had been going on since 2018 (leaked FTP credentials and a weak FTP password). Security researcher Vinoth Kumar alerted the company in 2019 about the FTP password leak and warned that anyone could access the SolarWinds update server by using the password "solarwinds123".
On December 15, 2020, SolarWinds reported the breach to the Securities and Exchange Commission. However, SolarWinds continued to distribute malware-infected updates, and did not immediately revoke the compromised digital certificate used to sign them.
On December 16, 2020, German IT news portal Heise.de reported that SolarWinds had for some time been encouraging customers to disable anti-malware tools before installing SolarWinds products.
On December 17, 2020, SolarWinds said they would revoke the compromised certificates by December 21, 2020.
On December 19, 2020, Microsoft said that its investigations into supply chain attacks at SolarWinds had found evidence of an attempted supply chain attack distinct from the attack in which SUNBURST malware was inserted into Orion binaries (see previous section). This second attack has been dubbed SUPERNOVA
Security researchers from Palo Alto Networks said the SUPERNOVA malware was implemented stealthily. SUPERNOVA comprises a very small number of changes to the Orion source code, implementing a web shell that acts as a remote access tool. The shell is assembled in-memory during SUPERNOVA execution, thus minimizing its forensic footprint.
Unlike SUNBURST, SUPERNOVA does not possess a digital signature. This is among the reasons why it is thought to have originated with a different group than the one responsible for SUNBURST.
Insider trading investigation
SolarWinds' share price fell 25% within days of the SUNBURST breach becoming public knowledge and 40% within a week. Insiders at the company had sold approximately $280 million in stock shortly before the breach became publicly known, which was months after the attack had started. A spokesperson said that those who sold the stock had not been aware of the breach at the time.
Just a good opportunity for some other companies
Microsoft (Azure) and a Spanish startup called Artica (about 40 employees and around 400 clients), which has its own product in the systems monitoring market, stand to benefit. Many of SolarWinds' clients want a way out and are looking at alternative offers such as Artica's monitoring solution (a customized Pandora FMS), or are trying to prevent a similar issue by moving their infrastructure to the cloud with Microsoft's Azure.
It probably offers a great opportunity for many of SolarWinds' competitors as well, including OpenNMS, whose founder and CEO Tarus Balog will sit down with me to chat about, among other things, this particular event on the next episode (63) of The Server Room Show.
A database is an organized collection of data, generally stored and accessed electronically from a computer system. Where databases are more complex, they are often developed using formal design and modeling techniques, which I will talk about a bit later.
The database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a “database system”. Often the term “database” is also used to loosely refer to any of the DBMS, the database system or an application associated with the database.
Computer scientists may classify database-management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, referred to as NoSQL because they use different query languages.
Terminology and Overview
Formally, a “database” refers to a set of related data and the way it is organized. Access to this data is usually provided by a “database management system” (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term “database” is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
Data definition – Creation, modification and removal of definitions that define the organization of the data.
Update – Insertion, modification, and deletion of the actual data.
Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
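The first three functional groups can be sketched with SQLite via Python's built-in sqlite3 module (the table and column names below are invented purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Data definition: create (and later alter or drop) the schema.
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Update: insert, modify and delete the actual data.
cur.execute("INSERT INTO employee (name, salary) VALUES ('Alice', 50000)")
cur.execute("UPDATE employee SET salary = 55000 WHERE name = 'Alice'")

# Retrieval: ask for the data in a directly usable form.
row = cur.execute("SELECT name, salary FROM employee").fetchone()
print(row)  # ('Alice', 55000.0)

# Administration touches things like integrity checking and recovery;
# SQLite exposes a small slice of this through PRAGMA commands.
status = cur.execute("PRAGMA integrity_check").fetchone()
print(status)  # ('ok',)
```

The administration group (user management, concurrency control, recovery) is mostly the province of full client-server DBMSs; the PRAGMA above is only a small taste of it.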
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid 1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure:
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMS. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term “post-relational” and also the development of hybrid object-relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key-value stores and document-oriented databases. A competing “next generation” known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
1960s, navigational DBMS
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term “data-base” in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However CODASYL databases were complex and required significant training and effort to produce useful applications.
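The three CODASYL access methods can be illustrated with a toy Python sketch (this is not real CODASYL, just an analogy: a dict stands in for hashed CALC-key lookup, and a stored pointer field stands in for set navigation):

```python
# Two toy records; "next_in_dept" plays the role of a CODASYL set pointer.
records = [
    {"key": "C42", "name": "Smith", "next_in_dept": "C77"},
    {"key": "C77", "name": "Jones", "next_in_dept": None},
]

# 1. CALC key: direct lookup, typically implemented by hashing.
by_key = {r["key"]: r for r in records}
rec = by_key["C42"]

# 2. Set navigation: follow the pointer chain from one record to the next.
chain = []
while rec is not None:
    chain.append(rec["name"])
    nxt = rec["next_in_dept"]
    rec = by_key[nxt] if nxt else None

# 3. Sequential scan: read every record in stored order.
all_names = [r["name"] for r in records]

print(chain)      # ['Smith', 'Jones']
print(all_names)  # ['Smith', 'Jones']
```

The application, not the DBMS, decides which path to follow, which is exactly what made CODASYL programming laborious.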
IBM also had their own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use today; its last stable release was in 2017.
1970s, relational DBMS
Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a “search” facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd’s idea was to organise the data as a number of “tables“, each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each “fact” was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
In the relational model, records are “linked” using virtual keys not stored in the database but defined as needed between the data contained in the records.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a “repeating group” within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
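That normalized design can be sketched in a few lines of SQL, here run through Python's sqlite3 (table and column names are invented for illustration). Note how cross-references use the logical primary key, never a disk address, and how a LEFT JOIN still returns users whose optional rows were never created:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE user    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE address (user_id INTEGER REFERENCES user(id), city TEXT);
CREATE TABLE phone   (user_id INTEGER REFERENCES user(id), number TEXT);
INSERT INTO user    VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO address VALUES (1, 'Austin');     -- Bob provided no address
INSERT INTO phone   VALUES (1, '555-0100');   -- Bob provided no phone
""")

rows = cur.execute("""
    SELECT u.name, a.city, p.number
    FROM user u
    LEFT JOIN address a ON a.user_id = u.id
    LEFT JOIN phone   p ON p.user_id = u.id
    ORDER BY u.id
""").fetchall()
print(rows)  # [('Alice', 'Austin', '555-0100'), ('Bob', None, None)]
```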
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
Codd’s paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a “language” for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
IBM started working on a prototype system loosely based on Codd’s concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large “chunk”. Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd’s ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it wasn't until Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd’s paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: “dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation.” dBASE was one of the top selling software titles in the 1980s and early 1990s.
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person’s data were in a database, that person’s attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields. The term “object-relational impedance mismatch” described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.
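What an ORM automates can be shown by doing the translation by hand: flattening an in-memory object into a table row and reassembling it again (the Person class and its fields here are purely illustrative):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    phone: str
    age: int

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (name TEXT, phone TEXT, age INTEGER)")

def save(p: Person) -> None:
    # Object -> row: the object's attributes are flattened into columns.
    con.execute("INSERT INTO person VALUES (?, ?, ?)", (p.name, p.phone, p.age))

def load(name: str) -> Person:
    # Row -> object: the columns are reassembled into an object.
    row = con.execute("SELECT name, phone, age FROM person WHERE name = ?",
                      (name,)).fetchone()
    return Person(*row)

save(Person("Ada", "555-0199", 36))
print(load("Ada"))  # Person(name='Ada', phone='555-0199', age=36)
```

An ORM library generates this boilerplate for you; the mismatch shows up once objects hold nested collections or inheritance hierarchies that do not map cleanly onto flat rows.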
2000s, NoSQL and NewSQL
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
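A toy model makes the trade-off concrete (this is a deliberately simplified sketch, not any real NoSQL database): a write is acknowledged by one replica immediately, propagated to the other only later, so a read in between can return stale data.

```python
replica_a: dict = {}
replica_b: dict = {}
pending = []  # updates not yet propagated to the other replica

def write(key, value):
    replica_a[key] = value        # acknowledged right away: available
    pending.append((key, value))  # propagation happens later

def sync():
    while pending:
        key, value = pending.pop(0)
        replica_b[key] = value    # replicas converge eventually

write("x", 1)
print(replica_b.get("x"))  # None -> stale read before propagation
sync()
print(replica_b.get("x"))  # 1    -> replicas have converged
```

Between the write and the sync the system stays available even if the link between replicas is down; the price is that the two replicas briefly disagree.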
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
I will talk about calculators like Texas Instruments or HP Prime and NumWorks, emulated or running on your computer or even on your Android phone. I think they are a very useful addition to the built-in calculators, even if you are not a numbers person like myself.
Remark on this topic: you should dump your own .rom files and flash/boot files from devices you legally own. Sharing ROM files and other bits and pieces, unless shared directly from the manufacturer's website, is illegal in most countries.
This is the one which started the whole topic. My colleague showed me one the other day and up until then I did not know anything about it.
On their website they state:
The graphing calculator that makes everybody a math person.
NumWorks is a French calculator manufacturer that has produced two models of calculator. Both are source-available graphing calculators and have their hardware and software designs available under a Creative Commons license. Its first calculator, the N0100, was released on August 29, 2017 in France and the United States and is geared towards high school classrooms and students. The calculators use Python as their programming language, rather than a proprietary language (e.g. TI-BASIC used by Texas Instruments calculators).
The calculator was specifically designed to be modded, with 3D models for 3D printing, firmware/operating system source code, schematics, and board layout details available to the public under a Creative Commons license. The software on the calculator is updated on a monthly cycle. Updates can be downloaded to the calculator from the NumWorks website using WebUSB, or by building the operating system directly from source.
The NumWorks calculator also includes an “exam mode” which removes all Python programs, resets all apps, and disables certain features. It can be disabled by plugging the calculator into a power source and selecting disable on the popup that appears.
On March 22, 2019, NumWorks released an app for iOS and Android. It features the same functionality as the physical calculator except it does not have data persistence.
On their website they mention among its features:
Code in Python
Accurate math (fractions, roots, trigonometry...)
Exam approved (SAT, ACT)
Functions (trace colored graphs using the Functions app; you can read function values in a table and retrieve the derivatives)
Probability (computing probabilities has never been easier: simply fill in the required information and compute; a graphical display helps you visualize your computations)
Equations, Calculations, Sequences, Regression, Statistics
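To give a feel for the on-calculator Python, here is the kind of short script you could type in: tabulating a function and approximating its derivative numerically, roughly what the Functions app does for you graphically (the function f here is just an arbitrary example):

```python
from math import sqrt, sin

def f(x):
    return sqrt(x) * sin(x)

def derivative(g, x, h=1e-6):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

# Print a small table of x, f(x), f'(x).
for x in [1, 2, 3]:
    print(x, round(f(x), 4), round(derivative(f, x), 4))
```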
It also has a downloadable online version (you can download the JS file and run it yourself), completely free. How awesome is that?
The HP 300s+ calculator app from HP works perfectly under Wine.
The HP Prime Graphing Calculator runs correctly under Wine for me. I used the 64-bit installer (HP_Prime_Virtual_Calculator_x64_2020_01_16.exe), but the older 32-bit installer (HP_Prime_Virtual_Calculator_2018_10_16.exe) works fine too.
Emu71, Emu48/Emu48+, Emu42 and Emu28 Emulators
Emu71 is an emulator for the HP 71B calculator.
The HP48 emulator Emu48 was originally created by Sébastien Carlier and is published under the GPL. The latest version of Emu48 can emulate an HP38G, HP39G, HP40G, HP48SX, HP48GX, HP49G and HP50g as well.
Emu42 is an emulator for the Pioneer series calculators HP10B, HP14B, HP17B, HP17BII, HP20S, HP21S, HP27S, HP32SII and HP42S and for the Clamshell series calculators HP19BII and HP28S. It is based on the sources of the famous HP calculator emulator Emu48 and is published under the GPL.
EMU28 is an emulator for the Clamshell series calculators HP-18C and HP-28C. It is also based on the sources of Emu48 and is published under the GPL.
The Virtual TI app works just fine under Wine for me.
Wabbitemu I can't get to work under Wine no matter which settings I select, though it works fine under Windows.
CEmu is a third-party TI-84 Plus CE / TI-83 Premium CE calculator emulator, focused on developer features. CEmu works natively on Windows, macOS, and Linux. For performance and portability, the core is programmed in C and its customizable GUI in C++ with Qt.
I tried it with a TI-84 Plus CE ROM and it works just fine. I could not try the TI-83 Premium CE ROM as I don't have it.
TI-NSpire on Linux – Firebird
This project is currently the community TI-Nspire emulator, originally created by Goplat. It supports the emulation of the Touchpad, TPad CAS, CX and CX CAS calculators on Android, iOS, Linux, macOS and Windows.
Installed under Linux, I kept getting segmentation faults when trying to emulate a TI-Nspire calculator. However, with the Windows release of Firebird under Wine I had no problems at all: following the documented steps, I was up and running in no time.
Notable mentions on Android (TI emulators and HP emulators)
On Android you can have tons of emulators which work fine.
For Texas Instruments there is Wabbitemu: once you supply it with the correct ROM files and figure out which scaling or settings work best for your phone, it is a pleasure to use. For example, on my Note 10+ I had to turn off the immersive mode option so that it shows the bottom and top bars of the phone; with that option on, the clicks were slightly off and I had to aim a bit below what I wanted to tap, but turning it off put them dead center.
HP emulators on Android, sold directly by HP:
HP 12C Platinum Calculator (18 euros in the Play Store)
HP 12C Financial Calculator (17 euros in the Play Store)
CentOS Linux 8, as a rebuild of RHEL 8, will end at the end of 2021. CentOS Stream continues after that date, serving as the upstream (development) branch of Red Hat Enterprise Linux.
CentOS was one of the most popular server distributions in the world. It was an open source fork of Red Hat Enterprise Linux (RHEL) and provided the goodness of RHEL without the cost associated with RHEL.
Many companies were able to hire IT professionals and/or geeks to handle their infrastructure on a RHEL binary-compatible (bit-for-bit compatible) OS such as CentOS, and pay what they would otherwise spend on RHEL subscriptions and contracts to those individuals as a salary instead.
They got the best of both worlds: the stability and security of a RHEL distribution, and a lower cost of ownership via hiring their own IT staff to do the 24/7 maintenance and upkeep versus doing it through RHEL contracts and subscription models.
Many big corporations were still forced for one reason or another to go with RHEL subscriptions and contracts (government and mission-critical systems, where these support contracts provide a great safety net for executives to point the finger at and demand liability from a third party if anything goes wrong), but smaller shops could get away with hiring their own staff to maintain and run everything on CentOS instead, as it was exactly RHEL without the logos, trademarks, contracts and subscriptions mentioned above.
Red Hat already made a similar move in the past, as you will see just now.
CentOS was not started by Red Hat. It was a community project since the beginning. After Red Hat started sponsoring the development, the trademark and ownership of CentOS was transferred to Red Hat in 2014, around 10 years after its creation.
Warren Togami began Fedora Linux in 2002 as an undergraduate project at the University of Hawaii, intended to provide a single repository for well-tested third-party software packages so that non-Red Hat software would be easier to find, develop, and use. The key difference between Fedora Linux and Red Hat Linux was that Fedora's repository development would be collaborative with the global volunteer community. The Fedora Project was founded in 2003, when Red Hat Linux was discontinued, as a result of a merger between the Red Hat Linux (RHL) and Fedora Linux projects. It is sponsored primarily by Red Hat, but Red Hat employees make up only 35% of project contributors, and most of the over 2,000 contributors are unaffiliated members of the community.
Red Hat sells subscriptions for the support, training, and integration services that help customers in using their open-source software products. Customers pay one set price for unlimited access to services such as Red Hat Network and up to 24/7 support.
In September 2014, however, CEO Jim Whitehurst announced that Red Hat was “in the midst of a major shift from client-server to cloud-mobile”.
Rich Bynum, a member of Red Hat’s legal team, attributes Linux’s success and rapid development partially to open-source business models, including Red Hat’s.
Red Hat Rebuilds
Originally, Red Hat’s enterprise product, then known as Red Hat Linux, was made freely available to anybody who wished to download it, while Red Hat made money from support.
Red Hat then moved towards splitting its product line into Red Hat Enterprise Linux, designed to be stable and with long-term support for enterprise users, and Fedora as the community distribution and project sponsored by Red Hat. The use of trademarks prevents verbatim copying of Red Hat Enterprise Linux.
Since Red Hat Enterprise Linux is based completely on free and open source software, Red Hat makes available the complete source code to its enterprise distribution through its FTP site to anybody who wants it.
Accordingly, several groups have taken this source code and compiled their own versions of Red Hat Enterprise Linux, typically with the only changes being the removal of any references to Red Hat's trademarks and pointing the update systems to non-Red Hat servers. Groups which have undertaken this include CentOS, Oracle Linux, Scientific Linux, White Box Enterprise Linux, StartCom Enterprise Linux, Pie Box Enterprise Linux, X/OS, Lineox, and Bull's XBAS for high-performance computing.
All provide a free mechanism for applying updates without paying a service fee to the distributor.
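The de-branding step the rebuilds perform can be sketched as a simple string-substitution pass over spec and repo files. A minimal sketch; the file contents, package names, and mirror URL below are illustrative placeholders, not the actual scripts or URLs any rebuild project uses:

```python
# Conceptual sketch of a rebuild's "de-branding" pass: strip trademarked
# strings and point the update system at non-Red Hat mirrors.
# All names and URLs here are hypothetical examples.

REBRANDING_RULES = {
    "Red Hat Enterprise Linux": "Example Rebuild Linux",   # trademark removal
    "redhat-logos": "example-logos",                       # branding package swap
    "https://cdn.redhat.com/content/": "https://mirror.example.org/content/",  # update source
}

def debrand(text: str) -> str:
    """Apply each substitution rule to a config or spec file's text."""
    for old, new in REBRANDING_RULES.items():
        text = text.replace(old, new)
    return text

sample_repo = (
    "name=Red Hat Enterprise Linux 8 BaseOS\n"
    "baseurl=https://cdn.redhat.com/content/baseos/\n"
)
print(debrand(sample_repo))
```

Real rebuild projects automate exactly this kind of pass over the released SRPMs before recompiling, which is why the result stays binary compatible.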
Rebuilds of Red Hat Enterprise Linux are free but do not get any commercial support or consulting services from Red Hat and lack any software, hardware or security certifications. Also, the rebuilds do not get access to Red Hat services like Red Hat Network.
Unusually, Red Hat took steps to obfuscate its changes to the Linux kernel for RHEL 6.0: it stopped publicly providing the patch files for its changes in the source tarball and released only the finished product in source form. Speculation suggested that the move was made to hurt Oracle's competing rebuild and support services, which further modify the distribution. The practice still complies with the GNU GPL, since source code is defined as "[the] preferred form of the work for making modifications to it", and the distribution still meets this definition. Red Hat's CTO Brian Stevens later confirmed the change, stating that certain information (such as patch information) would now only be provided to paying customers, to make the Red Hat product more competitive against the growing number of companies offering support for products based on RHEL. CentOS developers had no objections to the change, since they do not make any changes to the kernel beyond what is provided by Red Hat. Their competitor Oracle announced in November 2012 that it was releasing a RedPatch service, which allows public view of the RHEL kernel changes, broken down by patch.
IBM’s Takeover of Red Hat
On October 28, 2018, IBM announced its intent to acquire Red Hat for US$34 billion, in one of its largest-ever acquisitions. Red Hat now operates as part of IBM's Hybrid Cloud division.
Six months later, on May 3, 2019, the US Department of Justice concluded its review of IBM’s proposed Red Hat acquisition and according to Steven J. Vaughan-Nichols “essentially approved the IBM/Red Hat deal”. The acquisition was closed on July 9, 2019.
What might be the next thing IBM takes away?
Personally, I think it will be the still-free Developer Subscription for Red Hat Enterprise Linux. This license allows non-production, development-only use of the latest version of Red Hat Enterprise Linux and many additional Red Hat products for free, without support but with access to updates and security patches.
The program started in 2016 as a no-cost Red Hat Enterprise Linux developer subscription, available as part of the Red Hat Developer Program. Offered as a self-supported, development-only subscription, the Red Hat Enterprise Linux Developer Suite provides a stable development platform for building enterprise applications across cloud, physical, virtual, and container-centric infrastructures.
With CentOS Stream, the stability we knew and got used to from CentOS goes out the door.
Focus shifts from CentOS Linux, the rebuild of Red Hat Enterprise Linux (RHEL), to CentOS Stream, which tracks just ahead of the current RHEL release.
CentOS Linux 8, as a rebuild of RHEL 8, will end at the end of 2021.
After that, the rolling-release CentOS Stream becomes the identity of the CentOS project. There will be no CentOS Linux 9 based on RHEL 9 in the future.
CentOS Linux 7 will continue its lifecycle and reach end of life in 2024.
It has to be pointed out that not everyone was a free rider. Small businesses and startups nearly always lack the funds required to have proper IT departments and procedures, and to do things well out of the box the way a bigger company with the budget for it does.
For these companies, CentOS plus IT geeks or system administrators was a great fit. They were able to standardize on a well-known, proven platform: secure, stable, tried and trusted everywhere you go in the business world, yet free to use and effectively the same as its brand-name equivalent RHEL, just without the subscriptions, support, and logos.
It was a great path for these companies to do things right from the start, building on standards and platforms they could grow with later. When the moment came that they could afford, and needed, RHEL with its support and subscription, they were ready to switch over from day one without an issue, and without the need to change or redo anything.
Now these small businesses and companies will most likely never become RHEL customers. They will build out everything from day one on some free platform X, and when the time comes to migrate to something with support, RHEL might not even be on their radar; maybe they will stick with Oracle Linux, CloudLinux (paid, or the free version coming in 2021), or Ubuntu or Debian.
There were and are also people like myself who do not need the support or subscription. I am happy to fix things myself if something breaks, after intensively googling every corner of the internet to figure out what is happening and why, perhaps losing all hope in my abilities in the meantime 🙂. I, among others, was happy to use a #free OS binary compatible with Red Hat Enterprise Linux, knowing that everything I learned there could be transferred to real-world experience at the workplace with RHEL, and that eventually, if I wanted to, I could use the same knowledge to obtain a paid Red Hat certification for some more leverage as an IT professional. Probably for the same reason, I always used and recommended CentOS over Red Hat for the scenarios where it was appropriate, and I have used Fedora up until today.
As Red Hat Enterprise Linux is built entirely on open source, I think every company that does the same should always offer a community version of its product, which normally means without the support. There are a lot of examples of this model (pfSense, XCP-ng, Alfresco, MySQL, Automation Anywhere, Visual Paradigm, Veeam Backup and Replication, and so on).
Developers have no time to maintain a community edition that differs from the main product, so it must be a 100% binary equivalent of the paid (subscription) product, just without the support part.
Alternatives to CentOS (100% binary compatible)
CloudLinux (paid, $14 to $18 USD per month, not bad IMHO)
Oracle Linux (free)
Red Hat Enterprise Linux (paid)
Rocky Linux (not backed by a corporation; community-built from day one, from the ground up)
CloudLinux's free Project Lenix (top-down approach: backed by a corporation that wants to build a community around it; a free CentOS replacement coming in 2021)
Ubuntu / Debian? (not RPM-based, but alternatives as Linux distributions) Slackware, perhaps?
I always wanted to make an episode, even a two-part one, on Sun Microsystems. Unfortunately, when Sun Microsystems was at its height, and later when things turned bad after the dot-com bubble burst in 2000, I was going through elementary and then high school, finally getting my GCSE or high school diploma in 2002. There were a lot of things on my mind at 19 or 20 years old, but I can tell you none of them were Sun Microsystems.
I have used computers from the very early age of 6, and it was love at first sight. Thanks to that, I have never had to look for another hobby ever since.
Somehow, at that time in Hungary, I do not recall ever hearing of Sun workstations or Sun Microsystems, nor do I recall seeing any ad or any of their machines anywhere. I do remember using StarOffice under Linux before it became OpenOffice (Sun acquired Star Division in 1999, then open-sourced the suite in 2000 and formed openoffice.org), which, as you will see, was one of the many contributions of Sun Microsystems to the open source community.
It was later in life that I found out more about Sun Microsystems: all the things they brought to the world, the things they stood for, the machines they made. I learned more and more of their history with time.
I also purchased a Sun T5220 with an UltraSPARC T2 for my homelab. It dates from November 2007, just a few years before the acquisition of Sun Microsystems by Oracle closed on January 27, 2010.
I would really like to get my hands on a Sun Ultra 45 workstation, one of the last SPARC workstations made by Sun, but their prices on eBay are astronomical… So if anyone has one in mint condition waiting for a new home, please get in touch with me by email.
Enough of me talking about myself. Let's dive into the history of Sun Microsystems.
History of Sun Microsystems
If you recall me mentioning Sun Microsystems before, you are not mistaken: in the History of BSD episodes I mentioned that Bill Joy decided to leave BSD behind to help found a new startup called Sun Microsystems.
In 1982, Scott McNealy was approached by fellow Stanford alumnus Vinod Khosla to help provide the necessary organizational and business leadership for Sun Microsystems. Sun, along with companies such as Apple Inc., Silicon Graphics, 3Com, and Oracle Corporation, was part of a wave of successful startup companies in California's Silicon Valley during the early and mid-1980s.
On February 24, 1982, Scott McNealy, Andy Bechtolsheim, and Vinod Khosla, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution (BSD), joined soon after and is counted as one of the original founders.
The name “Sun” was derived from co-founder Andy Bechtolsheim’s original Stanford University Network (SUN) computer project, the SUN workstation.
Sun was profitable from its first quarter in July 1982.
In 1984, McNealy took over the CEO role from Khosla, who ultimately would leave the company in 1985. On April 24, 2006, McNealy stepped down as CEO after serving in that position for 22 years, and turned the job over to Jonathan Schwartz.
McNealy is one of the few CEOs of a major corporation to have had a tenure of over twenty years.
The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit (MMU) to support the Unix operating system with virtual memory support. He built the first examples from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses.
For the first decade of Sun’s history, the company positioned its products as technical workstations, competing successfully as a low-cost vendor during the Workstation Wars of the 1980s. It then shifted its hardware product line to emphasize servers and storage. High-level telecom control systems such as Operational Support Systems service predominantly used Sun equipment.
Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations (later Sun Worldwide). The symbol was changed in 2007 to JAVA; Sun stated that the brand awareness associated with its Java platform better represented the company's current strategy.
Sun Microsystems workstations and servers went through a few changes during the years just like Apple did (Motorola 68k to PowerPC to Intel x86 to Apple Silicon)
Motorola-based systems
Initially, Sun used CPUs from the Motorola 68000 family, through the Sun-1 to Sun-3 computers. The Sun-1 employed a 68000 CPU, and the Sun-2 series a 68010. The Sun-3 series was based on the 68020, with the later Sun-3x using the 68030.
By 1983 Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC’s VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which typically used it to build Multibus-based systems running Unix from UniSoft.
SPARC-based systems
In 1987, the company began using SPARC, a RISC processor architecture of its own design, in its computer systems, starting with the Sun-4 line. SPARC was initially a 32-bit architecture (SPARC V7) until the introduction of the SPARC V9 architecture in 1995, which added 64-bit extensions.
Sun has developed several generations of SPARC-based computer systems, including the SPARCstation, Ultra, and Sun Blade series of workstations, and the SPARCserver, Netra, Enterprise, and Sun Fire line of servers.
In the early 1990s the company began to extend its product line to include large-scale symmetric multiprocessing servers, starting with the four-processor SPARCserver 600MP. This was followed by the 8-processor SPARCserver 1000 and 20-processor SPARCcenter 2000, which were based on work done in conjunction with Xerox PARC. In 1995 the company introduced Sun Ultra series machines that were equipped with the first 64-bit implementation of SPARC processors (UltraSPARC). In the late 1990s the transformation of the product line in favor of large 64-bit SMP systems was accelerated by the acquisition of the Cray Business Systems Division from Silicon Graphics. Their 32-bit, 64-processor Cray Superserver 6400, related to the SPARCcenter, led to the 64-bit Sun Enterprise 10000 high-end server (otherwise known as Starfire).
In September 2004, Sun made available systems with UltraSPARC IV, the first multi-core SPARC processor. It was followed by the UltraSPARC IV+ in September 2005, and revisions with higher clock speeds in 2007. These CPUs were used in the most powerful, enterprise-class, high-end CC-NUMA servers developed by Sun, such as the Sun Fire E25K.
In November 2005 Sun launched the UltraSPARC T1, notable for its ability to concurrently run 32 threads of execution on 8 processor cores. Its intent was to drive more efficient use of CPU resources, which is of particular importance in data centers, where there is an increasing need to reduce power and air conditioning demands, much of which comes from the heat generated by CPUs. The T1 was followed in 2007 by the UltraSPARC T2, which extended the number of threads per core from 4 to 8. Sun has open sourced the design specifications of both the T1 and T2 processors via the OpenSPARC project.
In 2006, Sun ventured into the blade server (high density rack-mounted systems) market with the Sun Blade (distinct from the Sun Blade workstation).
In April 2007 Sun released the SPARC Enterprise server products, jointly designed by Sun and Fujitsu and based on Fujitsu SPARC64 VI and later processors. The M-class SPARC Enterprise systems include high-end reliability and availability features. Later T-series servers have also been badged SPARC Enterprise rather than Sun Fire.
In April 2008 Sun released servers with the UltraSPARC T2 Plus, an SMP-capable version of the UltraSPARC T2, available in 2- or 4-processor configurations. It was the first CoolThreads CPU with multi-processor capability, and it made it possible to build standard rack-mounted servers that could simultaneously process up to a massive 256 CPU threads in hardware (the Sun SPARC Enterprise T5440), which is considered a record in the industry.
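The CoolThreads numbers quoted above are just sockets times cores times threads per core. A trivial sketch of the arithmetic, with socket, core, and thread figures taken from the text:

```python
# The thread counts behind the UltraSPARC T1/T2/T2 Plus figures:
# total hardware threads = sockets x cores per socket x threads per core.

def hw_threads(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Total hardware threads visible to the OS."""
    return sockets * cores_per_socket * threads_per_core

t1    = hw_threads(1, 8, 4)   # UltraSPARC T1: 8 cores, 4 threads each
t2    = hw_threads(1, 8, 8)   # UltraSPARC T2: 8 cores, 8 threads each
t5440 = hw_threads(4, 8, 8)   # SPARC Enterprise T5440: four T2 Plus sockets

print(t1, t2, t5440)  # 32 64 256
```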
Since 2010, all further development of Sun machines based on the SPARC architecture (including the new SPARC T-series servers and the SPARC T3 and T4 chips) has been done as part of Oracle Corporation's hardware division.
x86-based systems
In the late 1980s, Sun also marketed an Intel 80386-based machine, the Sun386i; this was designed to be a hybrid system, running SunOS but at the same time supporting DOS applications. This only remained on the market for a brief time. A follow-up “486i” upgrade was announced but only a few prototype units were ever manufactured.
Sun’s brief first foray into x86 systems ended in the early 1990s, as it decided to concentrate on SPARC and retire the last Motorola systems and 386i products, a move dubbed by McNealy as “all the wood behind one arrowhead”. Even so, Sun kept its hand in the x86 world, as a release of Solaris for PC compatibles began shipping in 1993.
In 1997 Sun acquired Diba, Inc., followed later by the acquisition of Cobalt Networks in 2000, with the aim of building network appliances (single function computers meant for consumers). Sun also marketed a Network Computer (a term popularized and eventually trademarked by Oracle); the JavaStation was a diskless system designed to run Java applications.
Although none of these business initiatives were particularly successful, the Cobalt purchase gave Sun a toehold for its return to the x86 hardware market. In 2002, Sun introduced its first general purpose x86 system, the LX50, based in part on previous Cobalt system expertise. This was also Sun’s first system announced to support Linux as well as Solaris.
In 2003, Sun announced a strategic alliance with AMD to produce x86/x64 servers based on AMD’s Opteron processor; this was followed shortly by Sun’s acquisition of Kealia, a startup founded by original Sun founder Andy Bechtolsheim, which had been focusing on high-performance AMD-based servers.
The following year, Sun launched the Opteron-based Sun Fire V20z and V40z servers, and the Java Workstation W1100z and W2100z workstations.
On September 12, 2005, Sun unveiled a new range of Opteron-based servers: the Sun Fire X2100, X4100 and X4200 servers. These were designed from scratch by a team led by Bechtolsheim to address heat and power consumption issues commonly faced in data centers. In July 2006, the Sun Fire X4500 and X4600 systems were introduced, extending a line of x64 systems that support not only Solaris, but also Linux and Microsoft Windows.
On January 22, 2007, Sun announced a broad strategic alliance with Intel. Intel endorsed Solaris as a mainstream operating system and as its mission critical Unix for its Xeon processor-based systems, and contributed engineering resources to OpenSolaris. Sun began using the Intel Xeon processor in its x64 server line, starting with the Sun Blade X6250 server module introduced in June 2007.
On May 5, 2008, AMD announced that its Operating System Research Center (OSRC) had expanded its focus to include optimization of Sun's OpenSolaris and xVM virtualization products for AMD-based processors.
Although Sun was initially known as a hardware company, its software history began with its founding in 1982; co-founder Bill Joy was one of the leading Unix developers of the time, having contributed the vi editor, the C shell, and significant work developing TCP/IP and the BSD Unix OS. Sun later developed software such as the Java programming language and acquired software such as StarOffice, VirtualBox and MySQL.
Sun used community-based and open-source licensing of its major technologies, and for its support of its products with other open source technologies. GNOME-based desktop software called Java Desktop System (originally code-named “Madhatter”) was distributed for the Solaris operating system, and at one point for Linux. Sun supported its Java Enterprise System (a middleware stack) on Linux. It released the source code for Solaris under the open-source Common Development and Distribution License, via the OpenSolaris community. Sun’s positioning includes a commitment to indemnify users of some software from intellectual property disputes concerning that software. It offers support services on a variety of pricing bases, including per-employee and per-socket.
A 2006 report prepared for the EU by UNU-MERIT stated that Sun was the largest corporate contributor to open source movements in the world. According to this report, Sun’s open source contributions exceed the combined total of the next five largest commercial contributors.
Operating systems – SunOS / Solaris Operating System
Sun is best known for its Unix systems, which have a reputation for system stability and a consistent design philosophy.
Sun’s first workstation shipped with UniSoft V7 Unix. Later in 1982 Sun began providing SunOS, a customized 4.1BSD Unix, as the operating system for its workstations.
In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4).
On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing (ONC) functionality.
Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, the Solaris name is used almost exclusively to refer only to the releases based on SVR4-derived SunOS 5.0 and later.
For releases based on SunOS 5, the SunOS minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the 2. was dropped from the release name, so Solaris 7 incorporates SunOS 5.7, and the latest release SunOS 5.11 forms the core of Solaris 11.4.
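The numbering rule above fits in a few lines of code. A small sketch (the function name is mine; the rule simply encodes the mapping described in the text, and the SunOS string is what `uname -r` reports on a Solaris system):

```python
# Solaris marketing version -> underlying SunOS version.
# Rule: Solaris 2.x corresponds to SunOS 5.x; from Solaris 7 onward,
# Solaris N corresponds to SunOS 5.N. Minor releases such as Solaris 11.4
# still sit on SunOS 5.11.

def sunos_version(solaris_release: str) -> str:
    """Return the SunOS release string for a given Solaris release name."""
    if solaris_release.startswith("2."):
        minor = solaris_release.split(".", 1)[1]
        return f"5.{minor}"
    major = solaris_release.split(".", 1)[0]
    return f"5.{major}"

print(sunos_version("2.4"))   # 5.4
print(sunos_version("7"))     # 5.7
print(sunos_version("11.4"))  # 5.11
```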
Although SunSoft stated in its initial Solaris 2 press release their intent to eventually support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about 6 months after the SPARC version, as a desktop and uniprocessor workgroup server operating system. It included the Wabi emulator to support Windows applications.
From 1992 Sun also sold Interactive Unix, an operating system it acquired when it bought Interactive Systems Corporation from Eastman Kodak Company. This was a popular Unix variant for the PC platform and a major competitor to market leader SCO UNIX. Sun’s focus on Interactive Unix diminished in favor of Solaris on both SPARC and x86 systems; it was dropped as a product in 2001.
By the mid-1990s, the ensuing Unix wars had largely subsided, AT&T had sold off their Unix interests, and the relationship between the two companies was significantly reduced.
In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base.
Sun dropped the Solaris 2.x version numbering scheme after the Solaris 2.6 release (1997); the following version was branded Solaris 7. This was the first 64-bit release, intended for the new UltraSPARC CPUs based on the SPARC V9 architecture. Within the next four years, the successors Solaris 8 and Solaris 9 were released in 2000 and 2002 respectively.
Following several years of difficult competition and loss of server market share to competitors’ Linux-based systems, Sun began to include Linux as part of its strategy in 2002. Sun supported both Red Hat Enterprise Linux and SUSE Linux Enterprise Server on its x64 systems; companies such as Canonical Ltd., Wind River Systems and MontaVista also supported their versions of Linux on Sun’s SPARC-based systems.
In 2004, after having cultivated a reputation as one of Microsoft’s most vocal antagonists, Sun entered into a joint relationship with them, resolving various legal entanglements between the two companies and receiving US$1.95 billion in settlement payments from them. Sun supported Microsoft Windows on its x64 systems, and announced other collaborative agreements with Microsoft, including plans to support each other’s virtualization environments.
In 2005, the company released Solaris 10. The new version included a large number of enhancements to the operating system, as well as very novel features, previously unseen in the industry. Solaris 10 update releases continued through the next 8 years, the last release from Sun Microsystems being Solaris 10 10/09. The following updates were released by Oracle under the new license agreement; the final release is Solaris 10 1/13.
Previously, Sun offered a separate variant of Solaris called Trusted Solaris, which included augmented security features such as multilevel security and a least privilege access model. Solaris 10 included many of the same capabilities as Trusted Solaris at the time of its initial release; Solaris 10 11/06 included Solaris Trusted Extensions, which give it the remaining capabilities needed to make it the functional successor to Trusted Solaris.
After Solaris 10 was released, its source code was opened under the CDDL free software license and developed in the open with the contributing OpenSolaris community, through SXCE (which used SVR4 .pkg packaging) and supported OpenSolaris releases (which used IPS). Following the acquisition of Sun by Oracle, OpenSolaris continued to be developed in the open as illumos, with illumos distributions.
Oracle Corporation continued to develop OpenSolaris into the next Solaris release, changing the license back to proprietary, and released it as Oracle Solaris 11 in November 2011.
Features introduced in each Solaris release / Version history

Solaris 1.x: SunOS 4 rebranded as Solaris 1 for marketing purposes.
Solaris 2.0: Preliminary release (primarily available to developers only); support for only the sun4c architecture. First appearance of NIS+.
Solaris 2.1: Support for the sun4 and sun4m architectures added; first Solaris x86 release. First Solaris 2 release to support SMP.
Solaris 2.2: SPARC-only release. First to support the sun4d architecture. First to support multithreading libraries (UI threads API in libthread).
Solaris 2.3: SPARC-only release. OpenWindows 3.3 switches from NeWS to Display PostScript and drops SunView support. Support added for the autofs and CacheFS filesystems.
Solaris 2.4: First unified SPARC/x86 release. Includes OSF/Motif runtime support.
Solaris 2.5: First to support UltraSPARC and include CDE, NFSv3 and NFS/TCP. Dropped sun4 (VMEbus) support. POSIX.1c-1995 pthreads added. Doors added but undocumented.
Solaris 2.5.1: The only Solaris release that supports PowerPC; Ultra Enterprise support added; user and group IDs (uid_t, gid_t) expanded to 32 bits; also included processor sets and early resource management technologies.
Solaris 2.6: Includes Kerberos 5, PAM, TrueType fonts, WebNFS, large file support, and enhanced procfs. SPARCserver 600MP series support dropped.
Solaris 7: The first 64-bit UltraSPARC release. Added native support for file system metadata logging (UFS logging). Dropped MCA support on the x86 platform. Sun dropped the "2." prefix in the Solaris version number, leaving "Solaris 7". Last update was Solaris 7 11/99.
Solaris 8: Includes Multipath I/O, Solstice DiskSuite, IPMP, first support for IPv6 and IPsec (manual keying only), and the mdb Modular Debugger. Introduced Role-Based Access Control (RBAC); sun4c support removed. Last update is Solaris 8 2/04.
Solaris 9 (SPARC: May 28, 2002; x86: January 10, 2003): iPlanet Directory Server, Resource Manager, extended file attributes, IKE IPsec keying, and Linux compatibility added; OpenWindows dropped, sun4d support removed. Most current update is Solaris 9 9/05 HW.
Solaris 10 (SPARC and x86: January 31, 2005; open source under CDDL before the Oracle acquisition in March 2010, closed source after): Includes x86-64 (AMD64/Intel 64) support, DTrace (Dynamic Tracing), Solaris Containers, the Service Management Facility (SMF) which replaces init.d scripts, and NFSv4. Least privilege security model. Support for sun4m and UltraSPARC I processors removed. Support for EISA-based PCs removed. Adds the Java Desktop System (based on GNOME) as the default desktop.
Solaris 10 1/06 (known internally as “U1”) added the GRUB bootloader for x86 systems, iSCSI Initiator support and fcinfo command-line tool.
Solaris 10 6/06 (“U2”) added the ZFS filesystem.
Solaris 10 11/06 (“U3”) added Solaris Trusted Extensions and Logical Domains (sun4v).
Solaris 10 8/07 (“U4”) added Samba Active Directory support, IP Instances (part of the OpenSolaris Network Virtualization and Resource Control project), iSCSI Target support and Solaris Containers for Linux Applications (based on branded zones), enhanced version of the Resource Capping Daemon (rcapd).
Solaris 10 5/08 (“U5”) added CPU capping for Solaris Containers, performance improvements, SpeedStep support for Intel processors and PowerNow! support for AMD processors.
Solaris 10 10/08 (“U6”) added boot from ZFS and can use ZFS as its root file system. Solaris 10 10/08 also includes virtualization enhancements including the ability for a Solaris Container to automatically update its environment when moved from one system to another, Logical Domains support for dynamically reconfigurable disk and network I/O, and paravirtualization support when Solaris 10 is used as a guest OS in Xen-based environments such as Sun xVM Server.
Solaris 10 5/09 (“U7”) added performance and power management support for Intel Nehalem processors, container cloning using ZFS cloned file systems, and performance enhancements for ZFS on solid-state drives.
Solaris 10 10/09 (“U8”) added user and group level ZFS quotas, ZFS cache devices and nss_ldap shadowAccount Support, improvements to patching performance.
Solaris 10 9/10 (“U9”) added physical to zone migration, ZFS triple parity RAID-Z and Oracle Solaris Auto Registration
Solaris 10 8/11 (“U10”) added ZFS speedups and new features, Oracle Database optimization, faster reboot on SPARC system.
OpenSolaris was based on Solaris, which was originally released by Sun in 1991. Solaris is a version of UNIX System V Release 4 (SVR4), jointly developed by Sun and AT&T to merge features from several existing Unix systems. It was licensed by Sun from Novell to replace SunOS.
Planning for OpenSolaris started in early 2004. A pilot program was formed in September 2004 with 18 non-Sun community members and ran for 9 months growing to 145 external participants. Sun submitted the CDDL (Common Development and Distribution License) to the OSI, which approved it on January 14, 2005.
The first part of the Solaris code base to be open sourced was the Solaris Dynamic Tracing facility (commonly known as DTrace), a tool that aids in the analysis, debugging, and tuning of applications and systems. DTrace was released under the CDDL on January 25, 2005, on the newly launched opensolaris.org website. The bulk of the Solaris system code was released on June 14, 2005. There remains some system code that is not open sourced, and is available only as pre-compiled binary files.
In 2003, an addition to the Solaris development process was initiated. Under the program name Software Express for Solaris (or just Solaris Express), a binary release based on the current development basis was made available for download on a monthly basis, allowing anyone to try out new features and test the quality and stability of the OS as it progressed to the release of the next official Solaris version. A later change to this program introduced a quarterly release model with support available, renamed Solaris Express Developer Edition (SXDE).
Initially, Sun’s Solaris Express program provided a distribution based on the OpenSolaris code in combination with software found only in Solaris releases. The first independent distribution was released on June 17, 2005.
The Solaris Express Community Edition (SXCE) was intended specifically for OpenSolaris developers.
On March 19, 2007, Sun announced that it had hired Ian Murdock, founder of Debian, to head Project Indiana, an effort to produce a complete OpenSolaris distribution, with GNOME and userland tools from GNU, plus a network-based package management system. The new distribution was planned to refresh the user experience, and would become the successor to Solaris Express as the basis for future releases of Solaris.
The announced Project Indiana had several goals, including providing an open source binary distribution of the OpenSolaris project, replacing SXDE. The first release of this distribution was OpenSolaris 2008.05.
On May 5, 2008, OpenSolaris 2008.05 was released in a format that could be booted as a Live CD or installed directly. It uses the GNOME desktop environment as the primary user interface. The later OpenSolaris 2008.11 release included a GUI for ZFS’ snapshotting capabilities, known as Time Slider, that provides functionality similar to macOS’s Time Machine.
In December 2008, Sun Microsystems and Toshiba America Information Systems announced plans to distribute Toshiba laptops pre-installed with OpenSolaris. On April 1, 2009, the Tecra M10 and Portégé R600 came preinstalled with OpenSolaris 2008.11 release and several supplemental software packages.
On June 1, 2009, OpenSolaris 2009.06 was released, with support for the SPARC platform.
On January 6, 2010, it was announced that the Solaris Express program would be closed, while an OpenSolaris binary release was scheduled to be released on March 26, 2010. The OpenSolaris 2010.03 release never appeared.
SXCE releases terminated with build 130 and OpenSolaris releases terminated with build 134 a few weeks later. The next release of OpenSolaris based on build 134 was due in March 2010, but it was never fully released, though the packages were made available on the package repository.
Instead, Oracle renamed the binary distribution Solaris 11 Express, changed the license terms and released build 151a as 2010.11 in November 2010.
There are a few forks based on OpenSolaris, such as: BeleniX, EON ZFS Storage, Illumos, Jaris OS, MartUX, MilaX, Nexenta OS, NexentaStor, OpenIndiana, OpenSXCE, SchilliX, SmartOS, StormOS.
On September 14, 2010, OpenIndiana was formally launched at the JISC Centre in London. While OpenIndiana is a fork in the technical sense, it is a continuation of OpenSolaris in spirit: the project intends to deliver a System V family operating system which is binary-compatible with the Oracle products Solaris 11 and Solaris 11 Express. However, rather than being based around the OS/Net consolidation like OpenSolaris was, OpenIndiana became a distribution based on illumos (the first release is still based around OS/Net). The project uses the same IPS package management system as OpenSolaris.
illumos is a partly free and open-source Unix operating system. It is based on OpenSolaris, which was based on System V Release 4 (SVR4) and the Berkeley Software Distribution (BSD). illumos comprises a kernel, device drivers, system libraries, and utility software for system administration. This core is now the base for many different open-sourced illumos distributions, in a similar way in which the Linux kernel is used in different Linux distributions.
OpenIndiana is a free and open-source Unix operating system derived from OpenSolaris and based on illumos. Developers forked OpenSolaris after Oracle Corporation discontinued it, in order to continue development and distribution of the source code. OpenIndiana is named after Project Indiana, the development codename at Sun Microsystems for OpenSolaris.
List of Open Source Contributions of Sun Microsystems
Sun had many open source initiatives and products. Almost all of its software was open source, as were some of its hardware designs. Here’s a decent list of the products (I’m sure I left out more than a few):
Operating Systems: OpenSolaris, Open HA Cluster, Java Desktop Linux
Early releases of Solaris used OpenWindows as the standard desktop environment. In Solaris 2.0 to 2.2, OpenWindows supported both NeWS and X applications, and provided backward compatibility for SunView applications from Sun’s older desktop environment. NeWS allowed applications to be built in an object-oriented way using PostScript, a common printing language released in 1982. The X Window System originated from MIT’s Project Athena in 1984 and allowed for the display of an application to be disconnected from the machine where the application was running, separated by a network connection. Sun’s original bundled SunView application suite was ported to X.
Sun later dropped support for legacy SunView applications and NeWS with OpenWindows 3.3, which shipped with Solaris 2.3, and switched to X11R5 with Display PostScript support. The graphical look and feel remained based upon OPEN LOOK. OpenWindows 3.6.2 was the last release, under Solaris 8. The OPEN LOOK Window Manager (olwm) and other OPEN LOOK specific applications were dropped in Solaris 9, but support libraries were still bundled, providing long-term binary backwards compatibility with existing applications. The OPEN LOOK Virtual Window Manager (olvwm) can still be downloaded for Solaris from sunfreeware and works on releases as recent as Solaris 10. The Common Desktop Environment (CDE) was open sourced in August 2012.
Sun and other Unix vendors created an industry alliance to standardize Unix desktops. As a member of the Common Open Software Environment (COSE) initiative, Sun helped co-develop the Common Desktop Environment (CDE). This was an initiative to create a standard Unix desktop environment. Each vendor contributed different components: Hewlett-Packard contributed the window manager, IBM provided the file manager, and Sun provided the e-mail and calendar facilities as well as drag-and-drop support (ToolTalk). This new desktop environment was based upon the Motif look and feel and the old OPEN LOOK desktop environment was considered legacy. CDE unified Unix desktops across multiple open system vendors. CDE was available as an unbundled add-on for Solaris 2.4 and 2.5, and was included in Solaris 2.6 through 10.
In 2001, Sun issued a preview release of the open-source desktop environment GNOME 1.4, based on the GTK+ toolkit, for Solaris 8. Solaris 9 8/03 introduced GNOME 2.0 as an alternative to CDE. Solaris 10 includes Sun’s Java Desktop System (JDS), which is based on GNOME and comes with a large set of applications, including StarOffice, Sun’s office suite. Sun describes JDS as a “major component” of Solaris 10. The Java Desktop System is not included in Solaris 11 which instead ships with a stock version of GNOME. Likewise, CDE applications are no longer included in Solaris 11, but many libraries remain for binary backwards compatibility.
The open source desktop environments KDE and Xfce, along with numerous other window managers, also compile and run on recent versions of Solaris.
Sun had been investing in a new desktop environment called Project Looking Glass since 2003. The project has been inactive since late 2006.
The Sun Ultra series is a discontinued line of workstation and server computers developed and sold by Sun Microsystems, comprising two distinct generations. The original line was introduced in 1995 and discontinued in 2001. This generation was partially replaced by the Sun Blade in 2000, and that line was itself replaced by the Sun Java Workstation—an AMD Opteron system—in 2004. In 2005, in sync with the transition to x86-64-architecture processors, the Ultra brand was revived with the launch of the Ultra 20 and Ultra 40, albeit to some confusion, since they were no longer based on UltraSPARC processors.
The original Ultra workstations and the Ultra Enterprise (later, “Sun Enterprise”) servers were UltraSPARC-based systems produced from 1995 to 2001, replacing the earlier SPARCstation and SPARCcenter/SPARCserver series respectively. This introduced the 64-bit UltraSPARC processor and in later versions, lower-cost PC-derived technology, such as the PCI and ATA buses (the initial Ultra 1 and 2 models retained the SBus of their predecessors). The original Ultra range were sold during the dot com boom, and became one of the biggest selling series of computers ever developed by Sun Microsystems, with many companies and organisations—including Sun itself—relying on Sun Ultra products for years after their successor products were released.
The Ultra brand was revived in 2005 with the launch of the Ultra 20 and Ultra 40 with x86-64-architecture.
x64-based Ultra systems remained in the Sun portfolio for five more years; the last one, the Intel Xeon-based Ultra 27, was retired in June 2010, thereby concluding the history of Sun as a workstation vendor.
The SPARC-based Ultra 3 Mobile Workstation laptop was released in 2005 as well, but it would prove to be a short-lived design and was retired the next year. Its release did not coincide with the rest of the line as most of the brand had already moved on to x86.
Additionally, new Ultra 25 and Ultra 45 desktop UltraSPARC IIIi-based systems were introduced in 2006.
In October 2008, Sun discontinued all these, effectively ending the production of SPARC architecture workstations.
The original Ultra/Enterprise series itself was later replaced by the Sun Blade workstation and Sun Fire server ranges.
Very simply put, a text editor is a type of computer program that edits plain text.
An integrated development environment (IDE) is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger. Some IDEs contain the necessary compiler, interpreter, or both, while others do not.
The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.
Notable Text Editors (some old and some new)
ed / ex
ed is a line editor for Unix and Unix-like operating systems. It was one of the first parts of the Unix operating system to be developed, in August 1969. It remains part of the POSIX and Open Group standards for Unix-based operating systems, alongside the more sophisticated full-screen editor vi.
The ed text editor was one of the first three key elements of the Unix operating system—assembler, editor, and shell—developed by Ken Thompson in August 1969 on a PDP-7 at AT&T Bell Labs. Many features of ed came from the qed text editor developed at Thompson’s alma mater, the University of California, Berkeley. Thompson was very familiar with qed, and had reimplemented it on the CTSS and Multics systems. Thompson’s versions of qed were notable as the first to implement regular expressions. Regular expressions are also implemented in ed, though their implementation is considerably less general than that in qed.
Dennis M. Ritchie produced what Doug McIlroy later described as the “definitive” ed and aspects of ed went on to influence ex, which in turn spawned vi.
ex, short for EXtended, is a line editor for Unix systems originally written by Bill Joy in 1976, beginning with an earlier program written by Charles Haley.
ex is heavily based on the text editor ed. The first versions of ex were modifications of a text editor called em (named “editor for mortals” because its creator, George Coulouris, considered the cryptic commands of ed suitable only for “immortals”). Coulouris developed em at Queen Mary College in England and showed it to various people at Berkeley in the summer of 1976, including Bill Joy, who was very impressed with it. em was a modified ed with added features that were useful on high-speed terminals. The earlier versions of ex also included features from the modified ed in use at UCLA, and the ideas of Bill Joy and Charles Haley, who implemented most of the modifications to em that resulted in these early versions of ex.
The original Unix editor, distributed with the Bell Labs versions of the operating system in the 1970s, was the rather user-unfriendly ed. George Coulouris of Queen Mary College, London, which had installed Unix in 1973, developed an improved version called em in 1975 that could take advantage of video terminals. While visiting Berkeley, Coulouris presented his program to Bill Joy, who modified it to be less demanding on the processor; Joy’s version became ex and got included in the Berkeley Software Distribution.
ex was eventually given a full-screen visual interface (adding to its command line oriented operation), thereby becoming the vi text editor. In recent times, ex is implemented as a personality of the vi program; most variants of vi still have an “ex mode”, which is invoked using the command ex, or from within vi for one command by typing the : (colon) character. Although there is overlap between ex and vi functionality, some things can only be done with ex commands, so it remains useful when using vi.
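The same commands work both ways: typed after “:” inside vi, or fed to ex non-interactively. A minimal sketch, assuming an ex implementation is installed (on most systems today it is the one shipped with vim); the file name and contents are illustrative:

```shell
# Create a throwaway file to edit (contents are made up).
printf 'old line one\nold line two\n' > /tmp/ex-demo.txt

# Run ex in script (-s) mode, reading commands from stdin.
# %s/old/new/g substitutes on every line; wq writes and quits.
# From within vi you would type the same commands after ":".
ex -s /tmp/ex-demo.txt <<'EOF'
%s/old/new/g
wq
EOF

cat /tmp/ex-demo.txt
```

This batch use of ex is one of the “things that can only be done with ex commands”: vi’s screen mode has no equivalent way to apply an edit script to a file unattended.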
Interesting fact: The non-interactive Unix command grep was inspired by a common special use of qed and later ed, where the command g/re/p means globally search for the regular expression re and print the lines containing it. The Unix stream editor, sed implemented many of the scripting features of qed that were not supported by ed on Unix.
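That lineage is easy to see side by side. A small sketch (file name and contents are illustrative) comparing grep with sed, which kept the same “search for re, print matching lines” command syntax:

```shell
# The g/re/p idiom: globally search for the regular expression re
# and print the lines containing it. grep does exactly this; sed
# retained the /re/ addressing syntax, so "-n /re/p" behaves the
# same way.
printf 'foo\nbar\nfoobar\n' > /tmp/grep-demo.txt

grep 'foo' /tmp/grep-demo.txt        # prints foo and foobar
sed -n '/foo/p' /tmp/grep-demo.txt   # the same g/re/p idea in sed
```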
What is a line editor?
In computing, a line editor is a text editor in which each editing command applies to one or more complete lines of text designated by the user. Line editors predate screen-based text editors and originated in an era when a computer operator typically interacted with a teleprinter (essentially a printer with a keyboard), with no video display, and no ability to move a cursor interactively within a document. Line editors were also a feature of many home computers, avoiding the need for a more memory-intensive full-screen editor.
Line editors are limited to typewriter keyboard text-oriented input and output methods. Most edits are a line-at-a-time. Typing, editing, and document display do not occur simultaneously. Typically, typing does not enter text directly into the document. Instead, users modify the document text by entering these commands on a text-only terminal. Commands and text, and corresponding output from the editor, will scroll up from the bottom of the screen in the order that they are entered or printed to the screen. Although the commands typically indicate the line(s) they modify, displaying the edited text within the context of larger portions of the document requires a separate command.
Line editors keep a reference to the ‘current line’, to which the entered commands usually are applied. In contrast, modern screen-based editors allow the user to interactively and directly navigate, select, and modify portions of the document. Generally, line numbers or a search-based context (especially when making changes within lines) are used to specify which part of the document is to be edited or displayed.
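That model — an address selecting one or more lines, followed by a command applied to them — survives unchanged in the stream editor sed. A minimal sketch (file name and contents are illustrative):

```shell
# A line editor addresses lines, then applies a command to them.
# sed keeps exactly that model: "2p" means line 2, print; "2d"
# means line 2, delete; "3s/.../.../" substitutes only on line 3.
printf 'alpha\nbeta\ngamma\n' > /tmp/line-demo.txt

sed -n '2p' /tmp/line-demo.txt            # prints only: beta
sed '2d' /tmp/line-demo.txt               # prints: alpha, gamma
sed '3s/gamma/delta/' /tmp/line-demo.txt  # prints: alpha, beta, delta
```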
Early line editors included Colossal Typewriter, Expensive Typewriter and QED. All three pre-dated the advent of UNIX; the former two ran on DEC PDP-1s, while the latter was a Unisys product. Numerous line editors are included with UNIX and Linux: ed is considered the standard UNIX editor, while ex extends it and has more features, and sed was written for pattern-based text editing as part of a shell script. GNU Readline is a line editor implemented as a library that is incorporated in many programs, such as Bash. For the first 10 years of the IBM PC, the only editor provided in DOS was the Edlin line editor.
vi is a screen-oriented text editor originally created for the Unix operating system. The portable subset of the behavior of vi and programs based on it, and the ex editor language supported within these programs, is described by (and thus standardized by) the Single Unix Specification and POSIX.
The original code for vi was written by Bill Joy in 1976, as the visual mode for a line editor called ex that Joy had written with Chuck Haley. Bill Joy’s ex 1.1 was released as part of the first Berkeley Software Distribution (BSD) Unix release in March 1978. It was not until version 2.0 of ex, released as part of Second BSD in May 1979, that the editor was installed under the name “vi” (which took users straight into ex’s visual mode), the name by which it is known today. Some current implementations of vi can trace their source code ancestry to Bill Joy; others are completely new, largely compatible reimplementations.
The name “vi” is derived from the shortest unambiguous abbreviation for the ex command visual, which switches the ex line editor to visual mode.
In addition to various non-free software variants of vi distributed with proprietary implementations of Unix, vi was open sourced with OpenSolaris, and several free and open source software vi clones exist.
Many of the ideas in ex’s visual mode (a.k.a. vi) were taken from other software that existed at the time. According to Bill Joy inspiration for vi’s visual mode came from the Bravo editor, which was a bimodal editor. In an interview about vi’s origins, Joy said:
A lot of the ideas for the screen editing mode were stolen from a Bravo manual I surreptitiously looked at and copied. Dot is really the double-escape from Bravo, the redo command. Most of the stuff was stolen. There were some things stolen from ed—we got a manual page for the Toronto version of ed, which I think Rob Pike had something to do with. We took some of the regular expression extensions out of that
Joy used a Lear Siegler ADM-3A terminal. On this terminal, the Escape key was at the location now occupied by the Tab key on the widely used IBM PC keyboard (on the left side of the alphabetic part of the keyboard, one row above the middle row). This made it a convenient choice for switching vi modes. Also, the keys h,j,k,l served double duty as cursor movement keys and were inscribed with arrows, which is why vi uses them in that way. The ADM-3A had no other cursor keys. Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think.
Bill Joy explains in the previously mentioned interview by Jim Joyce that he very nearly finished implementing a multiple-window mode for vi. Here is a quote regarding that from the interview:
What actually happened was that I was in the process of adding multiwindows to vi when we installed our VAX, which would have been in December of ’78. We didn’t have any backups and the tape drive broke. I continued to work even without being able to do backups. And then the source code got scrunched and I didn’t have a complete listing. I had almost rewritten all of the display code for windows, and that was when I gave up. After that, I went back to the previous version and just documented the code, finished the manual and closed it off. If that scrunch had not happened, vi would have multiple windows, and I might have put in some programmability—but I don’t know.
Over the years since its creation, vi became the de facto standard Unix editor and a hacker favorite outside of MIT until the rise of Emacs after about 1984.
Emacs or EMACS (Editor MACroS) is a family of text editors that are characterized by their extensibility. The manual for the most widely used variant, GNU Emacs, describes it as “the extensible, customizable, self-documenting, real-time display editor”. Development of the first Emacs began in the mid-1970s, and work on its direct descendant, GNU Emacs, continues actively as of 2020.
The original EMACS was written in 1976 by David A. Moon and Guy L. Steele Jr. as a set of Editor MACroS for the TECO editor. It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS.
Emacs development began during the 1970s at the MIT AI Lab, whose PDP-6 and PDP-10 computers used the Incompatible Timesharing System (ITS) operating system that featured a default line editor known as Tape Editor and Corrector (TECO). Unlike most modern text editors, TECO used separate modes in which the user would either add text, edit existing text, or display the document. One could not place characters directly into a document by typing them into TECO, but would instead enter a character (‘i’) in the TECO command language telling it to switch to input mode, enter the required characters, during which time the edited text was not displayed on the screen, and finally enter a character (<esc>) to switch the editor back to command mode. (A similar technique was used to allow overtyping.) This behavior is similar to that of the program ed.
Richard Stallman visited the Stanford AI Lab in 1972 or 1974 and saw the lab’s E editor, written by Fred Wright. He was impressed by the editor’s intuitive WYSIWYG (What You See Is What You Get) behavior, which has since become the default behavior of most modern text editors. He returned to MIT, where Carl Mikkelsen, a hacker at the AI Lab, had added to TECO a combined display/editing mode called Control-R that allowed the screen display to be updated each time the user entered a keystroke. Stallman reimplemented this mode to run efficiently and then added a macro feature to the TECO display-editing mode that allowed the user to redefine any keystroke to run a TECO program.
E had another feature that TECO lacked: random-access editing. TECO was a page-sequential editor that was designed for editing paper tape on the PDP-1 and typically allowed editing on only one page at a time, in the order of the pages in the file. Instead of adopting E’s approach of structuring the file for page-random access on disk, Stallman modified TECO to handle large buffers more efficiently and changed its file-management method to read, edit, and write the entire file as a single buffer. Almost all modern editors use this approach.
The new version of TECO quickly became popular at the AI Lab and soon accumulated a large collection of custom macros whose names often ended in MAC or MACS, which stood for macro. Two years later, Guy Steele took on the project of unifying the diverse macros into a single set. Steele and Stallman’s finished implementation included facilities for extending and documenting the new macro set. The resulting system was called EMACS, which stood for Editing MACroS or, alternatively, E with MACroS. Stallman picked the name Emacs “because <E> was not in use as an abbreviation on ITS at the time.” An apocryphal hacker koan alleges that the program was named after Emack & Bolio’s, a popular Cambridge ice cream store. The first operational EMACS system existed in late 1976.
In the following years, programmers wrote a variety of Emacs-like editors for other computer systems.
James Gosling, who would later invent NeWS and the Java programming language, wrote Gosling Emacs in 1981. The first Emacs-like editor to run on Unix, Gosling Emacs was written in C and used Mocklisp, a language with Lisp-like syntax, as an extension language.
The most popular, and most ported, version of Emacs is GNU Emacs, which was created by Richard Stallman for the GNU Project.
In 1976, Stallman wrote the first Emacs (“Editor MACroS”), and in 1984, began work on GNU Emacs, to produce a free software alternative to the proprietary Gosling Emacs. GNU Emacs was initially based on Gosling Emacs, but Stallman’s replacement of its Mocklisp interpreter with a true Lisp interpreter required that nearly all of its code be rewritten. This became the first program released by the nascent GNU Project. GNU Emacs is written in C and provides Emacs Lisp, also implemented in C, as an extension language. Version 13, the first public release, was made on March 20, 1985. The first widely distributed version of GNU Emacs was version 15.34, released later in 1985. Early versions of GNU Emacs were numbered as “1.x.x,” with the initial digit denoting the version of the C core. The “1” was dropped after version 1.12, as it was thought that the major number would never change, and thus the major version skipped from “1” to “13”. A new third version number was added to represent changes made by user sites. In the current numbering scheme, a number with two components signifies a release version, with development versions having three components.
GNU Emacs was later ported to the Unix operating system. It offered more features than Gosling Emacs, in particular a full-featured Lisp as its extension language, and soon replaced Gosling Emacs as the de facto Unix Emacs editor.
Emacs has over 10,000 built-in commands and its user interface allows the user to combine these commands into macros to automate work. Implementations of Emacs typically feature a dialect of the Lisp programming language that provides a deep extension capability, allowing users and developers to write new commands and applications for the editor. Extensions have been written to manage email, files, outlines, and RSS feeds as well as clones of ELIZA, Pong, Conway’s Life, Snake and Tetris.
Emacs is, along with vi, one of the two main contenders in the traditional editor wars of Unix culture. Emacs is among the oldest free and open source projects still under development.
GNU nano is a text editor for Unix-like computing systems or operating environments using a command line interface. It emulates the Pico text editor, part of the Pine email client, and also provides additional functionality. Unlike Pico, nano is licensed under the GNU General Public License (GPL). Released as free software by Chris Allegretta in 1999, nano became part of the GNU Project in 2001.
GNU nano was first created in 1999 with the name TIP (a recursive acronym for TIP Isn’t Pico), by Chris Allegretta. His motivation was to create a free software replacement for Pico, which was not distributed under a free software license. The name was changed to nano on 10 January 2000 to avoid a naming conflict with the existing Unix utility tip. The name comes from the system of SI prefixes, in which nano is 1000 times larger than pico. In February 2001, nano became a part of the GNU Project.
GNU nano implements several features that Pico lacks, including syntax highlighting, line numbers, regular expression search and replace, line-by-line scrolling, multiple buffers, indenting groups of lines, rebindable key support and the undoing and redoing of edit changes.
GNU nano, like Pico, is keyboard-oriented, controlled with control keys. For example, Ctrl+O saves the current file; Ctrl+W goes to the search menu. GNU nano puts a two-line “shortcut bar” at the bottom of the screen, listing many of the commands available in the current context. For a complete list, Ctrl+G gets the help screen.
Unlike Pico, nano uses meta keys to toggle its behavior. For example, Meta+S toggles smooth scrolling mode on and off. Almost all features that can be selected from the command line can be dynamically toggled. On keyboards without the meta key it is often mapped to the escape key, Esc, such that in order to simulate, say, Meta+S one has to press the Esc key, then release it, and then press the S key.
GNU nano can also use pointer devices, such as a mouse, to activate functions that are on the shortcut bar, as well as position the cursor.
Atom is a free and open-source text and source-code editor developed by GitHub, built on web technologies. Like most other configurable text editors, Atom enables users to install third-party packages and themes to customize the features and looks of the editor. Packages can be installed, managed, and published via Atom’s package manager, apm. Syntax highlighting support for languages beyond the defaults can be installed through packages, as can the auto-complete function.
Atom’s default packages can apply syntax highlighting for many common programming languages and file formats.
Sublime Text is a shareware cross-platform source code editor with a Python application programming interface (API). It natively supports many programming languages and markup languages, and functions can be added by users with plugins, typically community-built and maintained under free-software licenses.
Column selection and multi-select editing
This feature allows users to select entire columns at once or place more than one cursor in text, which allows for simultaneous editing. All cursors then behave as if each of them was the only one in the text. Commands like move by character, move by line, text selection, move by words, move by subwords (CamelCase, hyphen or underscore delimited), move to beginning/end of line, etc., affect all cursors independently, allowing one to edit slightly complex repetitive structures quickly without the need to use macros or regex.
Auto-completion
Sublime Text will offer to complete entries as the user is typing, depending on the language being used. It also auto-completes variables created by the user.
Syntax highlighting and high-contrast display
The dark background in Sublime Text is intended to reduce eyestrain and increase the amount of contrast with the text. Syntax highlighting also makes the syntax of the language easier to read.
In-editor code building
This feature allows users to run code for certain languages from within the editor, which eliminates the need to switch out to the command line and back again. This function can also be set to build the code automatically every time the file is saved.
Snippets
This feature allows users to save blocks of frequently used code and assign keywords to them. The user can then type the keyword and press tab to paste the block of code whenever they require it.
Goto Anything
This feature is a tool that allows users to switch between open, recent or project files and also navigate to symbols within them.
Sublime Text has a number of features in addition to these including:
Auto-save, which attempts to prevent users from losing their work
Customizable key assignments, a navigational tool which allows users to assign hotkeys to their choice of options in both the menus and the toolbar
Find as you type, which begins looking for the text being entered as the user types, without requiring a separate dialog box
A spell check function that corrects as you type
Repeat the last action
A wide selection of editing commands, including indenting and unindenting, paragraph reformatting and line joining
Package Control is a third-party package manager for Sublime Text which allows the user to find, install, upgrade and remove plug-ins, usually without restarting Sublime Text. The package manager keeps installed packages up-to-date with an auto-upgrade feature and downloads packages from GitHub, BitBucket and a custom JSON-encoded channel/repository system. It also handles updating packages cloned from GitHub and BitBucket via Git and Hg, as well as providing commands for enabling and disabling packages. The package manager also includes a command to bundle any package directory into a .sublime-package file.
Notable third-party packages include:
SublimeCodeIntel – Features include Jump to Symbol Definition, Function Call Tool-Tips.
Sublime Goto Documentation – Opens relevant documentation for the highlighted function
Bracket Highlighter – Enhances the basic highlights Sublime Text provides for bracket pairs
Sublime dpaste – Sends selected text to the dpaste.com service
Side Bar Enhancements – Enhancements to the Sublime Text 2 sidebar with new options for deleting, opening, moving, creating, editing, and finding files
ColorSublime – Expands the number of Themes available from the standard 22 to over 250 community-submitted color schemes
WordPress – Adds autocompletion and Snippets for the blogging platform WordPress
Git – Integrates Git functionality into Sublime Text
MCEdit (a user-friendly text editor written for Midnight Commander)
MCEdit is part of the Midnight Commander package, a very popular multiplatform Norton Commander clone which also runs under Linux.
mcedit’s features include syntax highlighting for many languages, macros, code snippets, simple integration with external tools, automatic indentation, mouse support, a clipboard and the ability to work in both ASCII and hex modes.
XEDIT is a visual editor for VM/CMS (a family of IBM virtual machine operating systems used on IBM mainframes: System/370, System/390, zSeries, System z and compatible systems, including the Hercules emulator for personal computers) using block-mode IBM 3270 terminals. (Line-mode terminals are also supported.)
It is not a Unix or Linux text editor, but as you will see it influenced editors created for other platforms such as DOS and Unix/Linux.
XEDIT is much more line-oriented than modern PC and Unix editors. For example, XEDIT supports automatic line numbers, and many of the commands operate on blocks of lines. A pair of features allows selective line and column editing. The ALL command, for example, hides all lines not matching the described pattern, and the COL (Column) command allows hiding the columns not specified. Hence changing, for example, the word NO to YES as it appears only in columns 24 through 28, and only on lines containing the word FLEXIBLE, is doable.
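Based on the description above, a hedged sketch of what such a session might look like on the XEDIT command line (the exact column-restriction syntax varies between XEDIT versions and its clones, so treat this as illustrative rather than verbatim):

```
all /FLEXIBLE/       hide every line that does not contain FLEXIBLE
set zone 24 28       restrict edits to columns 24 through 28
c/NO/YES/ * *        change NO to YES on all remaining visible lines
all                  drop the filter and show every line again
```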
Another feature is a command line which allows the user to type arbitrary editor commands. Because IBM 3270 terminals do not transmit data to the computer until certain special keys are pressed (such as ↵ Enter, a program function key (PFK), or a program access key (PAK)), XEDIT is less interactive than many PC and Unix editors. For example, continuous spell-checking as the user types is problematic.
When PCs and Unix computers began to supplant IBM 3270 terminals, some users wanted text editors that resembled the XEDIT they were accustomed to. To fill this need, several developers provided similar programs:
KEDIT by Mansfield Software Group, Inc., was the first XEDIT clone. Although originally released in 1983, the first major release was version 3.53 for DOS, released in 1985. By 1990, KEDIT 4.0 had a version supporting OS/2, and included the ALL command.
The last version for DOS and OS/2 was KEDIT 5.0p4. KeditW (for Windows) is at version 1.6.1 dated December 2012.
KEDIT 1.6 supports syntax highlighting for various languages including C#, COBOL, FORTRAN, HTML, Java, Pascal, and xBase defined in the .kld file format.
KEDIT supports a built-in Rexx-subset called KEXX. Mansfield Software created the first non-IBM implementation of Rexx (Personal Rexx) in 1985.
In December 2012 Mansfield Software released 1.6.1 to provide compatibility with Windows 8 and extended support to at least June 2015. These 32-bit versions also work in the 64-bit versions of Windows 7 and Vista, but do not directly support Unicode.
SEDIT (first released in 1989) is another implementation on both Windows and Unix, which supports a variant of Rexx language called S/REXX (announced in 1994).
The Hessling Editor
The Hessling Editor (THE) is an open source text editor first released in August 1992. For more than ten years it has been written and maintained by Mark Hessling, who along with being the original author of THE is also a maintainer of Regina, an open source REXX interpreter that has been ported to most Unix platforms.
At the 1993 REXX conference in La Jolla, California, Hessling discussed why he created a new text editor. Here is a quote from that conference about the history of The Hessling Editor:
Work began on THE in 1990 after my then workplace purchased a Sun workstation. This then meant that I was using DOS, VMS and Unix. This meant using 3 different editors. Having used XEDIT for a considerable period of time prior to 1990, I was keen to continue using an editor with the same capability and power. After receiving a copy of LED (Lewis Editor) from Pierre Lewis in Canada, I found that the only way to be able to use the same XEDIT-like editor on a variety of operating systems, was to write my own. Pierre assured me that writing an editor was wonderful for one’s character. The original intention of THE was to provide me with an editor I was happy with and had all the features of XEDIT and KEDIT that I used frequently. Once I had achieved this goal, I decided to make THE available to anyone who also had a need for a multi-platform XEDIT-like editor. THE 1.0 was released to the public in August 1991. Since then, I then began to add features that I still found lacking and that other users requested. This work resulted in THE 1.1 which is publically released at this Symposium.
Proceedings of the REXX Symposium for Developers and Users, May 18-20, 1993, La Jolla, California: https://www.slac.stanford.edu/pubs/slacreports/reports01/slac-r-422.pdf
THE's notable features include:
Provision of both a GUI interface and a command line interface, and the ability to edit a text file using either one or both
Availability of folding which can be controlled in various sophisticated ways (keyword based, indent based, etc.)
The use of REXX as macro language
Folding is controlled by the “all” command. It permits one to display and work on only those lines in a file that contain a given pattern. For example, the command: all /string/ will display only the lines that include “string”; any global changes one makes on this slice (for example replace string command) will be reflected in the file. (In most cases this is a more convenient way to make global changes in the file.) In order to restore visibility of all lines one needs to enter: all (without a target string).
Similar to XEDIT, THE uses IBM's REXX as its macro language, which makes THE highly configurable and versatile. This provides the ability to create powerful extensions to the editor and/or customize it to specialized needs. For example, one can create edit commands that allow one to manipulate columns of text (e.g. copy/move or insert/delete a column of text within a file). With REXX, one can also integrate OS commands or external functions into an edit session. Since version 3.0, THE also has user-configurable syntax highlighting.
While THE and XEDIT are not GUI editors, THE has its own syntax-highlighting language definition format, the .tld file, comparable with KEDIT's .kld format.
Interesting fact: two of my favourite multi-OS terminal emulators (Windows, Mac, Linux), SecureCRT and ZOC Terminal, both support REXX scripting.
By definition, a media server is a device that simply stores and shares media. This definition is vague, and can allow several different devices to be called media servers. It may be a NAS drive, a home theater PC running Windows XP Media Center Edition, MediaPortal or MythTV, or a commercial web server that hosts media for a large web site. In a home setting, a media server acts as an aggregator of information: video, audio, photos, books, etc. These different types of media (whether they originated on DVD, CD, digital camera, or in physical form) are stored on the media server’s hard drive. Access to these is then available from a central location. It may also be used to run special applications that allow the user(s) to access the media from a remote location via the internet.
Initially released in 2002 as Xbox Media Player, renamed Xbox Media Center from 2003, and today known as Kodi, it is one of the best-known and most widely used media center solutions, or multi-platform home theater PC (HTPC) applications.
It is highly customizable, which is one of its most important features. You can customize its look via skins and extend its capabilities via plugins beyond what you might imagine.
It supports Live TV via TV tuner cards or as an IPTV player, and can do DVR (Digital Video Recording) and EPG functionality. The IPTV player and EPG capabilities are still missing from its No. 1 rival Plex, and they are features I would love to see Plex acquire in the near future.
* in theory there is a plugin to add some sort of IPTV player capability with m3u playlist support to Plex, but I could never actually make it work myself… a feature like that should be built into the core of the application, together with EPG support *
Kodi can also handle your music library and photo collection like many of the other alternatives we will see today, but its real power is not in what the other alternatives also do but in the one thing they do not, and that single feature makes Kodi the perfect choice which cannot be replaced with anything else.
One of the main issues for me with Kodi is that out of the box it is a standalone application and not a client-server one like many of the alternative options we will see today: Plex, Emby, Jellyfin, etc.
However, with additional work and configuration, Kodi's catalog (Library Sharing) can be shared with another Kodi client as a UPnP server, and the same goes for a centralized database for multiple clients; see this link to know more (the article is from 2015 but it should still be relevant today).
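For the centralized-database route, Kodi reads an advancedsettings.xml file from its userdata folder on each client; here is a minimal sketch pointing the video library at a shared MySQL/MariaDB server (the host address and credentials below are example values, not defaults):

```xml
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <!-- example address of the machine running the shared MySQL/MariaDB server -->
    <host>192.168.1.10</host>
    <port>3306</port>
    <!-- example credentials; create this user on the database server first -->
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
</advancedsettings>
```

The same file dropped into every client's userdata folder makes them share one library, including watched status.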
Kodi's real power is in its addon-based, extensible nature through plugins, which is also why it receives a lot of negative press for being a safe haven for illegally streamed Live TV content and movies/TV shows * pretty much like the Popcorn Time application *
Via its third-party addons Kodi can be extended with a broad range of content (live sports, live TV channels (IPTV), movies and TV shows (VOD)) and also with legal content from the likes of Spotify and Tidal or Amazon Prime and Netflix.
However, it is a laborious job to constantly stay on top of which addons/sources work and which have gone defunct; for this you will need to read websites, forums or other sources of information to keep up with the latest * also, what works today might not work tomorrow *
Some third-party addons are not free but paid, and they are mainly coupled with IPTV subscriptions, which I am not sure anyone would need given the tons of free IPTV sources and other free addons out there providing you with the same.
* as always, an IPTV service can stop working from one day to the next and there is not a thing you can do about it… therefore the IPTV services or links I tried in the past were from reputable sources and nearly always on a pay-monthly basis, so if the service goes away at any moment I can also cut my losses *
Plex, initially released in 2008, is a client-server media player system based on the XBMC source code.
The Plex Media Server application can be installed on Windows, macOS, Linux, FreeBSD and the Nvidia Shield TV, and it is also available on QNAP and Synology NAS devices * I personally run it on a QNAP NAS connected directly to a Smart TV via HDMI *
Unfortunately, Plex killed plugin support back in 2019 * I think it did not want to end up where Kodi is, with constant negative news around illegal and pirated content streamed via third-party plugins it would have no control over *
The server desktop application organizes video (movies & TV shows), music and photos from your hard drive or network storage folders and also from online services (podcasts; from 2019 Plex also started to offer free ad-supported video on demand with TV shows and movies from distributors such as Crackle, Warner Bros, MGM, Endemol Shine Group, Lionsgate and Legendary).
It also offers free Live TV channels, some of which are quite great, to be honest.
If you upgrade to a Plex Pass (4.99 a month) you can add a compatible HDTV tuner and access even more Live TV in your area, and also do DVR (Digital Video Recording) to your hard drive.
Plex offers a comprehensive and personalised news experience, featuring the most reputable and trustworthy news sources worldwide, through the Plex News hub integrated into Plex.
My only issue with Plex is the lack of built-in IPTV support in the application. Before 2019, when Plex had some kind of a plugin system, it was possible, though I never made it work myself; the latest way to make it work * I did not try it myself * is to install an additional component in a Docker container… see the link here.
I honestly think IPTV support in this day and age should be part of Plex, and if I ever look for alternatives, that would be the only reason.
Starting from 4.99 euros a month you can have a long list of additional perks in Plex; my favorite is the skip-intro feature in TV series, just like when you watch your TV series on Netflix 🙂
Skip Intro
Photo albums
Plex DVR
Lyrics
Mobile Sync (download movies, TV shows, music and photos for offline viewing)
Parental Controls
PlexAmp (beautiful Plex music player: build radios from your collection, parametric EQ, fades, loudness leveling)
Discount on Tidal subscription
For me Plex is the best alternative and the one I have used for many years.
I keep thinking about building a small Plex server with more transcoding power connected directly to my Smart TV, indexing content from the QNAP NAS, or just going the Nvidia Shield TV box route and using that as my Plex Media Server.
Or I could build a Plex server myself with the now inexpensive (230 euros) GeForce GTX 960 4GB, which has H.265 support and can stream to multiple clients even when it needs to transcode from 4K to 1080p or 720p at high bitrates. I left a link in the shownotes where you can look up Nvidia GPUs and their Plex hardware transcoding performance.
* right now both the content and the Plex server live on the QNAP NAS, which connects via the QNAP's HDMI out to the TV, but actually I use it most of the time with the Plex app on the Samsung Smart TV, which goes through the network anyway… *
You can also mount network folders on a Linux server or an Nvidia Shield Pro TV box and make those available to Plex Media Server.
OSMC (Open Source Media Center) is another good-looking Kodi alternative that you can find among the score of media centers. It's based on the same Kodi project but brings a new and modern user interface that is best suited for TVs and larger screens. Similar to Kodi, OSMC is also open source and brings the identical tabbed-layout UI.
However, the UI elements are quite polished and clean as opposed to Kodi. And the best part is that you can even use some of the popular Kodi addons on OSMC. OSMC offers its own app store where you can discover new addons and plugins to get content of your preference.
OSMC can play almost all the major media formats out there with a powerful built-in transcoder. Apart from online content, you can also use OSMC as your media center just like Kodi. You can manage your library of movies, TV shows, music, pictures and more.
Best of all, OSMC scrapes movie posters, synopses, and other relevant information from the web for you, so the media player on OSMC feels much more cohesive and in control than Kodi, which is an added advantage.
OSMC can be installed on a variety of devices, or you can purchase their purpose-built device from their web shop.
Personally, I think I would spend a bit more and get the Nvidia Shield TV box; however, I tried Stremio on my Raspberry Pi and it worked very well, I must add.
The only issue with MediaPortal is that it is a Windows-only application 🙁
* I used it in the past under Windows to handle my then-small TV series collection and watch them… I nearly bought a remote control compatible with it (MCE remote compatible); it was around 25 euros, and I never bought it because I could not afford it at the time *
– I watched the entire series Six Feet Under for the first time on MediaPortal –
It has a great interface mainly based on Kodi, can be themed/skinned, offers Live TV & PVR functionality, music, radio, movies and TV shows, and it has integration with remote controls… I nearly bought one, remember?
One of the things I love in Emby is that it has IPTV support built in, the feature I miss most from Plex Media Server.
However, the same as with Plex, a premium subscription is required to get certain premium features, for example Live TV and DVR. It is called Emby Premiere and costs the same: 4.99 per month or 54 a year (lifetime 119).
Its Premiere features are pretty close to or identical to Plex Pass:
Offline Media
DVR feature
Free client apps
Cover Art
Cinema mode (trailers, custom intros)
Cloud Sync
Emby Theater app for TV
Convert content to streaming-friendly format
Folder Sync (sync your media to folders and external hard drives for easy backup, archiving, and converting)
Podcasts
Backup and Restore Server Configuration
Smart Home integration (Amazon Echo and Google Home)
JRiver Media Center (originally JRiver Media Jukebox)
It is pretty much the only application here with no free tier, though it has a free trial. It is a paid application.
It is also one of the oldest, initially released in 1998, written entirely in C++, and it exists for Windows, Mac OS X and Linux.
It's a multimedia application (a jukebox, as its original name suggests) most similar to the now-disappearing iTunes application, which uses most of the screen to display a potentially very large library of files.
It can rip and burn CDs and supports static and dynamic playlists (very jukebox-like).
It allows access to its library via the network as a TiVo, UPnP and DLNA server, and the central machine can also act as a library server sharing its content with up to 5 clients.
It has web service integrations, though Netflix is now deprecated 🙁
Some of the service plugins are:
Audible
Amazon Music
CD Baby
Hulu
MediaNet
Youtube
HDTracks
Digitally Imported
Radio Tunes
Podcasts
Shoutcast Server
Last.fm
Movies, TV shows, music, and Live TV & DVR: that is what Jellyfin is, or as it defines itself on its website.
It has clients for nearly all platforms (Samsung Tizen is on the way). The server can run on Windows, Mac OS X, Linux, a Docker container, or in portable form on anything with a .NET Core runtime.
What is very refreshing to see is that while it is simple in its offering compared to Plex or Emby, having only movies, TV shows, music and Live TV & DVR, if that is eventually all you need, you might not need to look elsewhere. Another great thing is that there are no free and premium tiers. One package, and it's free.
Sure, if you need the bells and whistles found in Plex, Emby or MediaPortal, like radio (MediaPortal), podcasts or IPTV, or curated news and ad-sponsored movies and TV shows (Plex and/or Emby), you might need to pay a monthly fee of 4.99 or go for a lifetime pass on the chosen platform the next time they run a discount or Black Friday deal on those passes, as those are usually worth it.
Stremio is a modern media center that’s a one-stop solution for your video entertainment. You discover, watch and organize video content from easy to install addons.
Movies, TV shows, live TV or web channels – find all this on Stremio.
Stremio is available for all devices Windows/Mac/Linux and Android / iOS
You can install official and community addons and additional sources to add more content, like Youtube, Netflix and Amazon Prime.
I really like it, as it brings together many of my favorite sources… Netflix and Popcorn Time, for example, amongst others.
Stremio really deserves your attention, as it is great at bringing many sources together effortlessly for you, the viewer.
Started in 2002, MythTV is a free open-source digital video recorder (DVR). It also allows you to organize and manage your video library, and it provides functionality similar to Plex * though Plex is much more streamlined and perfected towards that goal, whereas MythTV is more focused on watching, pausing, rewinding and recording Live TV, also offering EPG support (pretty much like modern set-top boxes or IPTV streaming boxes) *
Addons and Extras
MCE Compatible Remote Controllers
( they look beautiful)
TV Tuner Addons (I still have a TDT-compatible PCIe card somewhere)
The SiliconDust HDHomeRun is definitely the best one I have seen out there.
It comes in two versions, DVB-T (2 tuners or 4 tuners) and DVB-C (4 tuners) — make sure you know the signal type you have in your country or region for 100% compatibility.
You can receive free local tv via antenna in the DVB-T version or Cable TV Subscription via the DVB-C models and stream it to clients in your existing home network ( tablets, phones, etc.)
These boxes are NOT compatible with the popular IPTV type of services (for that you have Kodi or other apps anyway, and you can also get an IPTV-compatible streaming box if you wish to have a separate physical box for it).
Android TV Boxes
The Nvidia Shield or my Xiaomi Mi Box S are the ones I would recommend. I am seriously thinking about an Nvidia Shield Pro TV box to see its performance as a Plex server.
Smart TVs
I had the best experience with Samsung Smart TVs. I had an LG TV in the past (still do), but I was not lucky with its smart features, hence the Xiaomi Mi Box S Android TV box, and a Google Chromecast before that.
Mobile Phones and Tablets
I always prefer and recommend flagship brands and devices for the best experience (Samsung phones, Samsung S7 tablets, etc.). I have had no issues on Samsung mobile phones and tablets.
Laptops and Computers
If your expectations are not 4K content being streamed down to your device, you could be surprised that even an older 8-10 year old laptop can handle 720p content easily and without an issue.
Most of the apps shown here today have clients for major platforms (Linux, Mac OS X , Windows) and if none of those works for You then Web Player or Browser based playback is possible most of the time.
GitHub started back in 2008, developed in Ruby on Rails. GitHub provides hosting for software development and version control using Git. It offers the distributed version control and source code management (SCM) functionality of Git, plus its own features: access control and several collaboration features such as bug tracking, feature requests, task management, continuous integration and wikis for every project.
GitHub offers unlimited private repositories to all plans, including free accounts. Starting from April 15, 2020, the free plan allows unlimited collaborators, but restricts private repositories to 2,000 actions minutes per month.
Since 2012 Microsoft has been a prominent user of the GitHub service, using it to host open-source projects and development tools such as .NET Core, Chakra Core, MSBuild, PowerShell, PowerToys, Visual Studio Code, Windows Calculator and Windows Terminal.
Microsoft ended up buying GitHub in 2018, and it now operates as a subsidiary of Microsoft.
GitHub & Microsoft made it to the news recently: on October 23, 2020, the RIAA issued a DMCA takedown notice to GitHub to take down youtube-dl and its forks.
And here we are in the present, with me talking about GitHub alternatives today. But why is it such a problem, you might ask?
Well, if the very fact that it is owned and operated by Microsoft was not enough for you, then there is this recent takedown of youtube-dl and its forks, youtube-dl being essentially a command-line tool for downloading Youtube videos.
The RIAA claimed it is a tool used to illegally download/copy copyrighted music videos hosted on Youtube, amongst other reasoning for their claim.
This kind of reasoning is the same as when people blame firearms for crimes committed by mentally unstable individuals using any kind of firearm.
The fault, as always, lies with the individual and the use case he/she applies a tool to, be it youtube-dl or a firearm. You can use it, just like anything else in this world, to do bad or to do good.
Sure, you can use it to download/copy material off Youtube that you are not supposed to or have no right to, but many people, including myself, use it to back up/archive their own Youtube videos. I archive the videos I host on this podcast's Youtube channel using the command line, which is quick, easy and efficient.
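For that kind of archiving, the command looks roughly like this (the channel URL is a placeholder; the flags shown are standard youtube-dl options):

```shell
# Placeholder channel URL; substitute your own channel or playlist.
# --download-archive keeps a list of already-fetched video IDs,
# so re-running the command only downloads new uploads.
youtube-dl --download-archive archive.txt \
  -o "%(upload_date)s - %(title)s.%(ext)s" \
  "https://www.youtube.com/channel/CHANNEL_ID"
```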
Just like a firearm, youtube-dl can be used for good or bad, but following this logic and argument we should ban all wheeled motor vehicles, because there are people who use them to make car bombs, or to grab someone's purse on the street from a motorbike/scooter and get away quickly.
Of course, this event rubbed many people the wrong way, including myself, and it made me look more into alternatives to the GitHub platform, even though I was never an avid user of GitHub or other development and version control systems.
Many people voiced that perhaps the RIAA and Microsoft deserve each other, but perhaps we deserve something better than them. I could not agree more.
Git Vs Github vs SVN
Git is a distributed version-control system for tracking changes in source code during software development. It is designed for coordinating work among programmers, but it can be used to track changes in any set of files. Its goals include speed, data integrity, and support for distributed, non-linear workflows.
Git was created by Linus Torvalds in 2005 for development of the Linux kernel, with other kernel developers contributing to its initial development after many developers of the Linux kernel gave up access to BitKeeper, a proprietary source-control management (SCM) system that they had been using to maintain the project since 2002.
As with most other distributed version-control systems, and unlike most client–server systems, every Git directory on every computer is a full-fledged repository with complete history and full version-tracking abilities, independent of network access or a central server.
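A quick sketch of what that means in practice: a repository created entirely offline still carries its complete history, and git log works without any server:

```shell
# Everything below is local; no network or central server is involved.
mkdir demo && cd demo
git init -q
git config user.email "you@example.com"   # example identity for the demo commit
git config user.name "Example User"
echo "hello" > file.txt
git add file.txt
git commit -q -m "first commit"
git log --oneline   # the full history is read from the local .git directory
```

Cloning that directory (git clone demo demo-copy) gives the copy the same full history, which is exactly the distributed property described above.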
GitHub is a cloud service for managing Git repositories online, with additional features, as I mentioned at the beginning of this episode.
Apache Subversion (often abbreviated SVN, after its command name svn) is a software versioning and revision control system distributed as open source under the Apache License. Software developers use Subversion to maintain current and historical versions of files such as source code, web pages, and documentation. Its goal is to be a mostly compatible successor to the widely used Concurrent Versions System (CVS).
While Git and SVN are both enterprise version control systems (VCS) that help with workflow and project management in coding, they do have their differences. The difference between Git and SVN version control systems is that Git is a distributed version control system, whereas SVN is a centralized version control system.
In Subversion or SVN, you are checking out a single version of the repository. With SVN, your data is stored on a central server.
This means that Subversion allows you to store a record of the changes made to a project, but that history is stored on a central server.
Unlike Git, which is distributed, you need to have constant access to an SVN repository to push changes. These changes are saved as the developer implements them.
In addition, instead of having a copy of a project’s history on your local machine, you only have a copy of the code itself. In other words, to see how a project has evolved, you need to reference the central version of the codebase.
Alternatives to Github
GitLab is a web-based DevOps lifecycle tool that provides a Git-repository manager providing wiki, issue-tracking and continuous integration and deployment pipeline features, using an open-source license, developed by GitLab Inc. The software was created by Ukrainian developers Dmitriy Zaporozhets and Valery Sizov
The product was originally named GitLab and was fully free and open-source software distributed under the MIT License.
In July 2013 the product was split into two distinct versions: GitLab CE: Community Edition and GitLab EE: Enterprise Edition. At that time, the license of both remained the same, being both free and open-source software distributed under the MIT License.
In January 2017, a database administrator accidentally deleted the production database in the aftermath of a cyber attack. Six hours' worth of issue and merge request data was lost. The recovery process was live-streamed on YouTube.
GitLab runs GitLab.com on a freemium model and offers a subscription service.
Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial (from launch until 1 July 2020) or Git (since October 2011) revision control systems. Bitbucket offers both commercial plans and free accounts. It offers free accounts with an unlimited number of private repositories (which can have up to five users in the case of free accounts) as of September 2010.
Open-source projects with more than five members can request a community license to remain free.
It integrates with other Atlassian products such as Jira
SourceForge is a web-based service that offers software developers a centralized online location to control and manage free and open-source software projects. It provides a source code repository, bug tracking, mirroring of downloads for load balancing, a wiki for documentation, developer and user mailing lists, user-support forums, user-written reviews and ratings, a news bulletin, micro-blog for publishing project updates, and other features.
SourceForge was one of the first to offer this service free of charge to open-source projects. Since 2012, the website has run on Apache Allura software. SourceForge offers free access to hosting and tools for developers of free and open-source software.
As of September 2020, the SourceForge repository claimed to host more than 502,000 projects and had more than 3.7 million registered users.
Launchpad is a web application and website that allows users to develop and maintain software, particularly open-source software. It is developed and maintained by Canonical Ltd.
It has several parts:
Answers: a community support site and knowledge base.
Blueprints: a system for tracking new features.
Bugs: a bug tracker that allows bugs to be tracked in multiple contexts (e.g. in an Ubuntu package, as an upstream, or in remote bug trackers).
Code: source code hosting, with support for the Bazaar and Git version control systems.
Translations: a site for localizing applications into different human languages.
Launchpad has good support for Git. You can host or import Git repositories on Launchpad. And this is entirely free.
Google Cloud Source Repositories
You can get free unlimited private repositories to organize your code in a way that works best for you. It can mirror code from GitHub or Bitbucket repositories to get powerful code search, code browsing, and diagnostics capabilities.
It integrates with other services from Google; for example, you can deploy changes directly from branches or tags in your repository to App Engine.
(Google App Engine is a Platform as a Service and cloud computing platform for developing and hosting web applications in Google-managed data centers. Applications are sandboxed and run across multiple servers.)
You can also use the Cloud Build service to automatically build and test an image when changes are pushed to Cloud Source Repositories.
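The basic flow with the gcloud CLI looks roughly like this (it assumes the CLI is installed and authenticated against a project; my-repo is a placeholder name):

```shell
# Placeholder repository name; requires an authenticated gcloud CLI
# and an active Google Cloud project.
gcloud source repos create my-repo    # create the hosted repository
gcloud source repos clone my-repo     # clone it locally via gcloud's credential helper
cd my-repo
# from here on it is plain git: add, commit and push as usual
git push origin master
```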
It has a free tier: up to 5 users, with a total of 50 GB of storage and a 50 GB egress traffic limit per month.
AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud.
AWS CodeCommit is a similar alternative to Google Cloud Source Repositories.
Just like the Google Cloud Platform, AWS also provides a free tier that does not end when the trial ends. So, it’s free forever if your usage is within the free tier limits as mentioned in their official documentation.
You can have 5 users and 50 GB of storage for free to start with. If you want to add more users, you can do it for $1 per extra user for the resources you already have.
Phabricator is an all-in-one tool that lets you host code and discuss/plan to keep working on a project without needing to utilize separate applications for communication/collaboration.
You can audit source codes, manage tasks, manage a workboard, note things down, and do a lot of things.
Phabricator lets you self-host or opt for the paid hosting solution offered.
My favourite, and the one I personally self-host. Gogs is a completely self-hosted solution for hosting your code.
Also, it is a very lightweight option that can also run on a Raspberry Pi.
You can also run it on a cheap, low-powered $5 VPS from Linode or DigitalOcean, for example, or from Contabo for the German data center lovers 🙂 I have a dedicated server in a German data center, so I know what I am talking about.. 🙂
It is very easy to have it up and running in a matter of 5-10 minutes.
Gitea is an open-source forge software package for hosting software development version control using Git as well as other collaborative features like bug tracking, wikis and code review. It supports self-hosting but also provides a free public first-party instance hosted on DiDi’s cloud. It is a fork of Gogs and is written in Go. Gitea can be hosted on all platforms supported by Go including Linux, macOS, and Windows. The project is funded on Open Collective.
Apache Allura is an open-source forge software for managing source code repositories, bug reports, discussions, wiki pages, blogs and more for any number of individual projects. Allura graduated from incubation with the Apache Software Foundation in March 2013.
Allura can manage any number of projects, including groups of projects known as Neighborhoods, as well as sub-projects under individual projects. Allura also has a modular design to support tools attached to neighborhoods or individual projects. Allura comes packaged with many tools, and additional external and third-party tools can be installed. There are tools to manage version control for source code repositories, ticket tracking, discussions, wiki pages, blogs and more.
Allura can also export project data, as well as import data from a variety of sources, such as Trac, Google Code, GitHub, and, of course, Allura itself.
Google Analytics is a web analytics service offered by Google that tracks and reports website traffic, currently as a platform inside the Google Marketing Platform brand. Google launched the service in November 2005 after acquiring Urchin.
As of 2019, Google Analytics is the most widely used web analytics service on the web. Google Analytics provides an SDK that allows gathering usage data from iOS and Android apps, known as Google Analytics for Mobile Apps. Google Analytics can be blocked by browsers, browser extensions, firewalls and other means.
Google Analytics is used to track website activity such as session duration, pages per session, bounce rate etc. of individuals using the site, along with information on the source of the traffic. It can be integrated with Google Ads, with which users can create and review online campaigns by tracking landing page quality and conversions (goals). Goals might include sales, lead generation, viewing a specific page, or downloading a particular file.
Google Analytics’ approach is to show high-level, dashboard-type data for the casual user, and more in-depth data further into the report set. Google Analytics analysis can identify poorly performing pages with techniques such as funnel visualization, show where visitors came from (referrers), how long they stayed on the website and their geographical position. It also provides more advanced features, including custom visitor segmentation. Google Analytics e-commerce reporting can track sales activity and performance. The e-commerce reports show a site’s transactions, revenue, and many other commerce-related metrics.
On September 29, 2011, Google Analytics launched Real Time analytics, enabling a user to have insight about visitors currently on the site. A user can have 100 site profiles. Each profile generally corresponds to one website. It is limited to sites which have traffic of fewer than 5 million pageviews per month (roughly 2 pageviews per second) unless the site is linked to a Google Ads campaign. Google Analytics includes Google Website Optimizer, rebranded as Google Analytics Content Experiments. Google Analytics’ cohort analysis helps in understanding the behaviour of component groups of users apart from the user population as a whole. It is beneficial to marketers and analysts for successful implementation of a marketing strategy.
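To make the cohort idea concrete, here is a minimal sketch in Python. The sample data and the `retention` helper are entirely invented for illustration; this is not the Google Analytics API, just the underlying idea of grouping users by when they signed up and checking who came back.

```python
# Invented sample data: (user, signup_week, set of weeks the user was active)
users = [
    ("a", 1, {1, 2, 3}),
    ("b", 1, {1}),
    ("c", 2, {2, 3}),
    ("d", 2, {2}),
]

def retention(users, cohort_week, later_week):
    """Fraction of the users who signed up in cohort_week
    that were active again in later_week."""
    cohort = [u for u in users if u[1] == cohort_week]
    returned = [u for u in cohort if later_week in u[2]]
    return len(returned) / len(cohort)

print(retention(users, 1, 2))  # 0.5 -> half of the week-1 signups came back in week 2
```

Real cohort reports just repeat this calculation for every cohort and every following week, producing the familiar retention triangle.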
Fun fact: I remember that on one of the early websites I made back in the late 90s, in very simple HTML, I used some simple code I found (of course) that put a typical visitor counter at the bottom of the page, which incremented with each refresh… I must say the world has come a long way: from a simple counter you can’t really base real trends on, to a full arsenal of different analytics tools that can help you make good business decisions regarding your audience and its trends.
Why do we need metrics like these?
It can be said that a successful website is built in equal parts great content and a solid understanding of your audience. While your content may be first-class, if you don’t know where your traffic is coming from (and the topics your audience is interested in), you’re missing half of the formula.
Google Analytics enables you to find answers to these questions by analyzing your website traffic. That way, you can improve your site based on your visitors’ actions.
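As a rough illustration of how such answers fall out of raw traffic data, here is a hedged Python sketch of three of the basic metrics mentioned above. The session records are invented sample data, and real analytics tools derive them from tracking events, not from hand-written dictionaries.

```python
# Invented sample sessions: the pages viewed and the time spent, per visit
sessions = [
    {"pages": ["/", "/blog", "/contact"], "seconds": 180},
    {"pages": ["/"], "seconds": 10},          # a "bounce": single-page visit
    {"pages": ["/", "/blog"], "seconds": 95},
]

# Bounce rate: share of sessions that viewed exactly one page
bounce_rate = sum(1 for s in sessions if len(s["pages"]) == 1) / len(sessions)

# Pages per session and average session duration
pages_per_session = sum(len(s["pages"]) for s in sessions) / len(sessions)
avg_duration = sum(s["seconds"] for s in sessions) / len(sessions)

print(round(bounce_rate, 2), pages_per_session, avg_duration)
```

Every analytics dashboard in this article is, at its core, computing aggregates like these over millions of such records.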
There are many books out there about Google Analytics and every bit of its features, and I suggest that if you are seriously interested, you check out some of the books mentioned in the shownotes.
Open Source alternatives:
Matomo does most of what Google Analytics does, and chances are it offers the features that you need.
Those features include metrics on the number of visitors hitting your site, data on where they come from (both on the web and geographically), the pages from which they leave, and the ability to track search engine referrals. Matomo also offers many reports, and you can customize the dashboard to view the metrics that you want to see.
To make your life easier, Matomo integrates with more than 65 content management, e-commerce, and online forum systems, including WordPress, Magento, Joomla, and vBulletin, using plugins. For any others, you can simply add a tracking code to a page on your site.
The best part is that it is relatively easy to install if you decide to self-host it yourself on-premises.
If your site receives a couple of hundred visitors per day, it is advised to set up an auto-archiving cron task so that Matomo calculates your reports periodically. Once the cron job is set up and the timeout value increased, the Matomo dashboard will load very quickly, as the reports will be pre-processed by the core:archive command triggered by cron.
If you do not set up the cron job, Matomo will recalculate your statistics every time you visit a Matomo report, which will slow Matomo down and increase the load on your database.
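For reference, a typical cron entry for this looks something like the following. The user, PHP path, install path and URL here are placeholders, so adjust them to your own setup and check Matomo’s archiving documentation for the details.

```
# Run the Matomo report archiver every hour (example paths, adjust to your install)
5 * * * * www-data /usr/bin/php /var/www/matomo/console core:archive --url=https://example.org/ >/dev/null 2>&1
```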
Open Web Analytics
A close second to Matomo in the open source web analytics stakes is Open Web Analytics. In fact, it includes key features that either rival Google Analytics or leave it in the dust.
In addition to the usual raft of analytics and reporting functions, Open Web Analytics tracks where on a page, and on what elements, visitors click; provides heat maps that show where on a page visitors interact the most; and even does e-commerce tracking.
Web server log files provide a rich vein of information about visitors to your site, but tapping into that vein isn’t always easy. That’s where AWStats comes to the rescue. While it lacks the most modern look and feel, AWStats more than makes up for that with the breadth of data it can present.
That information includes the number of unique visitors, how long those visitors stay on the site, the operating system and web browsers they use, the size of a visitor’s screen, and the search engines and search terms people use to find your site. AWStats can also tell you the number of times your site is bookmarked, track the pages where visitors enter and exit your sites, and keep a tally of the most popular pages on your site.
These features only scratch the surface of AWStats’s capabilities. It also works with FTP and email logs, as well as syslog files. AWStats can give you deep insight into what’s happening on your website using data that stays under your control.
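To give a feel for what log-file analyzers like AWStats actually do, here is a rough Python sketch that parses lines in the common “combined” log format and tallies unique visitor IPs. The regex and sample lines are illustrative only, not AWStats’s actual parser.

```python
import re

# One pattern for the Apache/nginx "combined" log format
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Invented sample log lines
lines = [
    '203.0.113.7 - - [10/Oct/2020:13:55:36 +0000] "GET / HTTP/1.1" 200 512 "https://duckduckgo.com/" "Mozilla/5.0"',
    '203.0.113.7 - - [10/Oct/2020:13:56:01 +0000] "GET /blog HTTP/1.1" 200 2048 "-" "Mozilla/5.0"',
    '198.51.100.4 - - [10/Oct/2020:14:02:11 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/7.68.0"',
]

hits = [LOG_RE.match(l).groupdict() for l in lines]
unique_visitors = {h["ip"] for h in hits}
print(len(unique_visitors))  # 2
```

From the same parsed fields you can derive most of what AWStats reports: browsers and operating systems from the user agent, search engines from the referrer, entry and exit pages from paths ordered by time.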
Countly bills itself as a “secure web analytics” platform. While I can’t vouch for its security, Countly does a solid job of collecting and presenting data about your site and its visitors.
Heavily targeting marketing organizations, Countly tracks data that is important to marketers. That information includes site visitors’ transactions, as well as which campaigns and sources led visitors to your site. You can also create metrics that are specific to your business. Countly doesn’t forgo basic web analytics; it also keeps track of the number of visitors on your site, where they’re from, which pages they visited, and more.
Plausible is a newer kid on the open source analytics tools block. It’s lean, it’s fast, and only collects a small amount of information — that includes numbers of unique visitors and the top pages they visited, the number of page views, the bounce rate, and referrers. Plausible is simple and very focused.
What sets Plausible apart from its competitors is its heavy focus on privacy. The project creators state that the tool doesn’t collect or store any information about visitors to your website, which is particularly attractive if privacy is important to you. You can read more about that here.
Woopra is a cloud-based analytics tool with a free tier available (500K actions per month)
An action is any event tracked in Woopra such as pageviews, downloads, property changes and any custom actions you configure. Also, every time a property is changed, for example, a user updated their email address or joins a segment, it will be listed as an action in the customer profile and counted towards your action quota.
As far as I could see, it offers no option for self-hosting or on-premises installation, which only hurts, as the next tier after Free is the one for 999 Euros per month (for 5 million actions and a lot of Pro features..)
FoxMetrics ( I like the name)
They offer free plans for startups, minority-owned, education, and non-profits.
There is a free tier to analyze the latest 500 page views, and affordable paid tiers to analyze more; for example, analyzing the latest 100,000 page views only costs 7 Euros a month.
Why look for alternatives to Google Analytics, and why do I look at metrics myself?
it is good to have alternatives.. For Everything in life .. Mostly True..
A different look, more metrics, new or previously untapped data, visually more pleasing dashboards, or simply another interpretation of the same pile of information: another look, or even a fresh view of the same data, can help you interpret trends and make the best possible decision when it comes to your content and audience. It helps you see whether what you are trying to implement or introduce is well received, or perhaps makes your audience furrow their eyebrows…
The more, and the more versatile, views or angles you have on the same data, the better you can do against your competition and steer your business forward, which will make them think you have some seventh sense or a magic crystal ball.
To use my own example: the metrics that are important to me are the ones that tell me as much as possible about the visitors to my website. Where they come from, what device they use, how long they stay, which pages they spend time on or exit from, whether their traffic is direct, whether they used specific keywords in Google to find me, or whether they come from the social channels where I tend to run ads and promote my show (Mastodon, Twitter, Instagram, Pinterest – also paid ads), and so on.
These metrics, collected together with the listening statistics I get from podcast platforms, help me to see a couple of things.
As I host my shows and material in English: do I reach the proper audience and get the most visits / number of plays? My goal is mostly to reach a native English-speaking audience, hence I use English instead of, for example, Hungarian or Spanish.
Devices used to visit my website ( Desktop or Mobile)
I get most of my traffic from mobile devices, compared to desktop browsers and tablets, so it is important that the website and content are presented properly on this target platform (mobile phones). This requires constantly checking and making sure everything is correct, and improving the experience each time.
The Source of the Traffic
Direct: Any traffic where the referrer or source is unknown
Email: Traffic from email marketing that has been properly tagged with an email parameter
Organic: Traffic from search engine results that is earned, not paid
Paid search: Traffic from search engine results that is the result of paid advertising via Google AdWords or another paid search platform
Referral: Traffic that occurs when a user finds you through a site other than a major search engine
Social: Traffic from a social network, such as Facebook, LinkedIn, Twitter, or Instagram
Other: If traffic does not fit into another source or has been tagged as “Other” via a URL parameter, it will be bucketed into “Other” traffic
These are just some of the metrics I personally care about. Because imagine: I spend time, effort, and money (Pinterest and even Instagram ads) to promote the show, all in English, and my metrics show me that I am most popular in non-English-speaking countries and that most of my traffic is organic, coming from search engine results.
That would mean I am spending money on promoting my site on social platforms and producing content in English, when perhaps I should either promote elsewhere, or look into the reason why I am successful where my stats say I am, or the opposite: what keeps me from being popular where I want to be.
Creating musical melodies with a computer goes back as far as the early 1950s in Australia, with a computer originally named CSIR Mark 1, which was programmed to play popular musical melodies. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded.
What is a DAW?
A digital audio workstation is an electronic device or application software used for recording, editing and producing audio files.
Some of the main functions of Digital Audio Workstations are:
Producing: From creating simple house loops and beats to full, expansive tracks, DAWs allow you to produce and finetune the creation and layering of these projects, usually with the help of VST plugins.
Tracking: During recording sessions, bands or orchestras are often recorded into multiple tracks at once. This allows the producer or engineer to capture the real-time sound of a group performance. These tracked performances can then be edited and mixed to perfection.
Mixing: This is the process of balancing and blending all the individual elements of a track by adjusting equalization/FX levels and parameters so that the track sounds as good as possible.
Live Performance: The process of building and editing tracks in real-time. Some DAWs are specifically designed for this process, as you’ll see below.
Composing: DAWs can also be used for the process of composing and constructing film/TV/game scores.
The first DAWs were conceived in the late 70s and early 80s. Soundstream, which developed the first digital recorder in 1977, developed what is considered the first DAW. Bringing together a minicomputer, disk drive, video display, and the software to run it all was the easy part.
Finding inexpensive storage and fast enough processing and disk speeds to run a commercially viable DAW proved the main challenge for the ensuing years. But as the home computer market exploded with the likes of Apple, Atari, and Commodore Amiga in the late 1980s, size and speed concerns were no longer an issue.
By the late 1980s, a number of consumer-level computers such as the MSX (Yamaha CX5M), Apple Macintosh, Atari ST and Commodore Amiga began to have enough power to handle digital audio editing. Engineers used Macromedia’s Soundedit, with Microdeal’s Replay Professional and Digidesign’s “Sound Tools” and “Sound Designer” to edit audio samples for sampling keyboards like the E-mu Emulator II and the Akai S900. Soon, people began to use them for simple two-track audio editing and audio mastering.
In 1989, Sonic Solutions released the first professional (48 kHz at 24 bit) disk-based nonlinear audio editing system. The Macintosh IIfx-based Sonic System, based on research done earlier at George Lucas’ Sprocket Systems, featured complete CD premastering, with integrated control of Sony’s industry-standard U-matic tape-based digital audio editor.
In 1994, a company in California named OSC produced a 4-track editing-recorder application called DECK that ran on Digidesign’s hardware system, which was used in the production of The Residents’ “Freakshow” [LP].
Many major recording studios finally “went digital” after Digidesign introduced its Pro Tools software in 1991, modeled after the traditional method and signal flow in most analog recording devices. At this time, most DAWs were Apple Mac based (e.g., Pro Tools, Studer Dyaxis, Sonic Solutions). Around 1992, the first Windows-based DAWs started to emerge from companies such as Innovative Quality Software (IQS) (now SAWStudio), Soundscape Digital Technology, SADiE, Echo Digital Audio, and Spectral Synthesis. All the systems at this point used dedicated hardware for their audio processing.
In 1993, the German company Steinberg released Cubase Audio on Atari Falcon 030. This version brought DSP built-in effects with 8-track audio recording & playback using only native hardware. The first Windows-based software-only product, introduced in 1993, was Samplitude (which already existed in 1992 as an audio editor for the Commodore Amiga).
In 1996, Steinberg introduced a revamped Cubase (which was originally launched in 1989 as a MIDI sequencing software for the Atari ST computer, later developed for Mac and Windows PC platforms, but had no audio capabilities until 1993’s Cubase Audio) which could record and play back up to 32 tracks of digital audio on an Apple Macintosh without the need of any external DSP hardware. Cubase not only modeled a tape-like interface for recording and editing, but, in addition, using VST also developed by Steinberg, modeled the entire mixing desk and effects rack common in analog studios. This revolutionized the DAW world, both in features and price tag, and was quickly imitated by most other contemporary DAW systems.
Electronic music became the megalith it is today and anybody who wanted to make music in their bedroom, basement, or local park bench could do just that. Whether they should is still a point that’s up for debate.
Working with Audio on Linux – DAWs and Audio Applications
Linux, which came into existence at the end of 1991 in the form of the Linux kernel, is obviously late to the audio and music scene, but it is definitely trying to catch up.
Audacity is one of the most basic yet capable audio editors available for Linux. It is a free and open-source cross-platform tool. A lot of you must already know about it.
It has improved a lot compared to the time when it started trending. I recall that I used it to “try” making karaoke tracks by removing the vocals from an audio file. Well, you can still do it – but it depends.
It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments.
Live audio recording through a microphone or a mixer
Export/Import capability supporting multiple formats and multiple files at the same time
Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins
Easy editing with cut, paste, delete and copy functions.
Spectrogram view mode for analyzing frequencies
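The spectrogram view in that list is built on a Fourier transform: turning a window of samples into per-frequency magnitudes. Here is a toy, stdlib-only illustration using a naive O(N²) DFT on a synthetic sine wave; real editors use FFTs and sliding windows, so treat this purely as a sketch of the idea.

```python
import cmath
import math

N = 64
# A pure sine tone whose frequency falls exactly into DFT bin 8
signal = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]

def dft_magnitudes(x):
    """Naive discrete Fourier transform; returns magnitudes of the
    first half of the spectrum (the rest mirrors it for real input)."""
    n_samples = len(x)
    return [
        abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                for n in range(n_samples)))
        for k in range(n_samples // 2)
    ]

mags = dft_magnitudes(signal)
print(mags.index(max(mags)))  # 8 -> the dominant frequency bin
```

A spectrogram repeats this for many short, overlapping windows over time and paints each column of magnitudes as colors, which is exactly the view Audacity shows.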
LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features.
You can mix sounds, arrange them, or create them using VST instruments – yes, it does support them. It also comes baked in with some samples, presets, VST instruments, and effects to get you started. In addition, you get a spectrum analyzer for some advanced audio editing.
Note playback via MIDI
VST Instrument support
Native multi-sample support
Built-in compressor, limiter, delay, reverb, distortion and bass enhancer
Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks. The multichannel tracks can also be routed to different mixer tapes for the ease of editing and recording.
You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well.
Vertical window stacking for easy navigation
Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing
If you want to mix and record something while being able to have a virtual DJ tool, Mixxx would be a perfect tool. You get to know the BPM, key, and utilize the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux!
It supports custom DJ equipment as well. So, if you have one or a MIDI – you can record your live mixes using this tool.
Broadcast and record DJ Mixes of your song
Ability to connect your equipment and perform live
Key detection and BPM detection
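The BPM half of that feature can be sketched very simply. Assuming a list of already-detected beat onset times (onset detection itself is the hard part, and Mixxx’s real analyzer is far more involved), the tempo is just 60 divided by the average gap between beats:

```python
# Invented onset times in seconds: one beat every 0.5 s
onsets = [0.0, 0.5, 1.0, 1.5, 2.0]

# Inter-onset intervals, then beats-per-minute from their average
gaps = [b - a for a, b in zip(onsets, onsets[1:])]
bpm = 60 / (sum(gaps) / len(gaps))
print(bpm)  # 120.0
```

Master sync then only has to time-stretch one track so that its BPM and beat phase line up with the other deck.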
Rosegarden is yet another impressive audio editor for Linux that is free and open source. It is neither a fully featured DAW nor a basic audio editing tool, but a mixture of both with some scaled-down functionality.
I wouldn’t recommend this for professionals but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed.
Music notation editing
Recording, Mixing, and samples
Cecilia is not an ordinary audio editor application. It is meant to be used by sound designers, or if you are just in the process of becoming one. It is technically an audio signal processing environment that lets you create ear-bending sounds.
You get built-in modules and plugins for sound effects and synthesis. It is tailored for a specific use – if that is what you were looking for, look no further!
Modules to achieve more (UltimateGrainer – A state-of-the-art granulation processing, RandomAccumulator – Variable speed recording accumulator, UpDistoRes – Distortion with upsampling and resonant lowpass filter)
Automatic Saving of modulations
Davinci Resolve 16
DaVinci Resolve (originally known as da Vinci Resolve) is a color correction and non-linear video editing (NLE) application for macOS, Windows, and Linux, originally developed by da Vinci Systems, and now developed by Blackmagic Design following its acquisition in 2009.
In addition to the commercial version of the software (known as DaVinci Resolve Studio), Blackmagic Design also distributes a free edition, with reduced functionality, simply named DaVinci Resolve (formerly known as DaVinci Resolve Lite)
Renoise is a Digital Audio Workstation (DAW) with a refreshing twist. It lets you record, compose, edit, process and render production-quality audio using a tracker-based approach.
In a tracker, the music runs from top to bottom in an easily understood grid known as a pattern. Several patterns arranged in a certain order make up a song. Step-editing in a pattern grid lends itself well to a fast and immediate workflow. On top of this, Renoise features a wide range of modern features: dozens of built-in audio processors, alongside support for all commonly used virtual instrument and effect plug-in formats. And the software can be extended too: with scripting, you can use any of your MIDI or OSC controllers to control it in exactly the way you want.
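The tracker model described above can be miniaturized into a few lines of Python. The note names and patterns here are hypothetical, just to show the shape of the data: patterns are grids of note rows, and a song is an ordered list of patterns played top to bottom.

```python
# Each pattern is a grid: rows of (track 1, track 2) note cells; "---" = empty
pattern_a = [["C-4", "---"], ["---", "E-4"], ["G-4", "---"], ["---", "---"]]
pattern_b = [["A-3", "---"], ["---", "---"], ["A-3", "C-4"], ["---", "---"]]

# A song is an arrangement of patterns in playback order
song_order = [pattern_a, pattern_a, pattern_b]

total_rows = sum(len(p) for p in song_order)
print(total_rows)  # 12 rows, played top to bottom, pattern by pattern
```

Step-editing is fast precisely because each cell sits at a fixed row and track position, so entering a note is a single keystroke at a grid coordinate.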
Harrison Mixbus is a digital audio workstation (DAW) available for Microsoft Windows, Mac OS X and Linux operating systems; version 1 was released in 2009.
Mixbus provides a modern DAW model incorporating a “traditional” analog mixing workflow. It includes built in proprietary analog modeled processing, based on Harrison’s 32-series and MR-series analog music consoles.
Mixbus is based on Ardour, the open source DAW, but is sold and marketed commercially by Harrison Audio Consoles
It comes in two versions:
Mixbus ($79) and Mixbus 32c ($299). There is a Specials offer on their page: you can get Mixbus for $19, just like I did, and even get their XT-ME Mastering Equalizer plugin for only $9 instead of $109. The link for the special offer is in the shownotes.
Mixbus is a full-featured digital audio workstation for recording, editing, mixing, and mastering your music.
Engineered and supported by Harrison; and developed in collaboration with an open-source community; Mixbus represents the finest sound quality of any DAW at a great price.
Mixbus 32c improves on the Mixbus platform with an exact emulation of the original Harrison 32C parametric four-band sweepable EQ, and 4 additional stereo summing buses.
REAPER (an acronym for Rapid Environment for Audio Production, Engineering, and Recording) is a digital audio workstation and MIDI sequencer software created by Cockos. The current version is available for Microsoft Windows (XP and newer) and macOS (10.5 and newer) – beta versions are also available for Linux. REAPER acts as a host to most industry-standard plug-in formats (such as VST and AU) and can import all commonly used media formats, including video. REAPER and its included plug-ins are available in 32-bit and 64-bit format.
REAPER provides a free, fully functional 60-day evaluation period. For further use two licenses are available – a commercial and a discounted one. They are identical in features and differ only in price and target audience, with the discount license being offered for private use, schools and small businesses. Any paid license includes the current version with all of its future updates and a free upgrade to the next major version and all of its subsequent updates, when they are released. Any license is valid for all configurations (x64 and x86) and allows for multiple installations, as long it is being run on one computer at a time.
Extensive customization opportunities are provided through the use of ReaScript (edit, run and debug scripts within REAPER) and user-created themes and functionality extensions.
ReaScript can be used to create anything from advanced macros to full-featured REAPER extensions. ReaScripts can be written in EEL2 (JSFX script), Lua and Python. SWS / S&M is a popular, open-source extension to REAPER, providing workflow enhancements and advanced tempo/groove manipulation functionality.
REAPER’s interface can be customized with user-built themes. Each previous version’s default theme is included with REAPER and theming allows for complete overhauls of the GUI. REAPER has been translated into multiple languages and downloadable language packs are available. Users as well as developers can create language packs for REAPER.
Reaper comes with a variety of commonly used audio production effects. They include tools such as ReaEQ, ReaVerb, ReaGate, ReaDelay, ReaPitch and ReaComp. The included Rea-plug-ins are also available as a separate download for users of other DAWs, as the ReaPlugs VST FX Suite.
Also included are hundreds of JSFX plug-ins ranging from standard effects to specific applications for MIDI and audio. JSFX scripts are text files, which when loaded into REAPER (exactly like a VST or other plug-in) become full-featured plug-ins ranging from simple audio effects (e.g. delay, distortion, compression) to instruments (synths, samplers) and other special purpose tools (drum triggering, surround panning). All JSFX plug-ins are editable in any text editor and thus are fully user customizable.
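To show just how plain-text these are, here is a minimal, illustrative JSFX gain sketch (my own example, not one of REAPER’s bundled plug-ins): a `desc:` header, a slider definition, and code sections that run when the slider changes and on every audio sample.

```
desc:Gain example (illustrative JSFX sketch)

slider1:0<-24,24,0.1>Gain (dB)

@slider
// convert the dB slider value to a linear amplitude factor
amp = 10 ^ (slider1 / 20);

@sample
// scale both channels of the current sample
spl0 *= amp;
spl1 *= amp;
```

Saving a text file like this into REAPER’s Effects folder is all it takes for it to show up in the FX browser like any other plug-in.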
REAPER includes no third-party software, but is fully compatible with all versions of the VST standard (currently VST3) and thus works with the vast majority of both free and commercial plug-ins available. REAPER x64 can also run 32-bit plug-ins alongside 64-bit processes.
While not a dedicated video editor, REAPER can be used to cut and trim video files and to edit or replace the audio within. Common video effects such as fades, wipes and cross-fades are available. REAPER aligns video files in a project, as it would an audio track, and the video part of a file can be viewed in separate video window while working on the project.
Two Very Important Applications for Linux when it comes to Audio Routing
JACK is a low-latency-capable audio and MIDI server, designed for pro audio use. It enables all JACK-capable applications to connect to each other. It is included in Linux distributions geared towards audio production, like AV Linux and Ubuntu Studio. Ubuntu Studio even has its own Ubuntu Studio Controls (see screenshot later) to interact with JACK:
(( I left a great article in the shownotes for starting out with Jack under Linux the easy way ))
provides low latency (less than 5 milliseconds with the right hardware)
allows multiple audio devices to be used at once
recognizes hotplugged USB audio devices
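Where a figure like “under 5 milliseconds” comes from: JACK’s buffering latency is roughly the frames per period times the number of periods, divided by the sample rate (these are the buffer settings you pick in tools like Ubuntu Studio Controls or qjackctl; real round-trip latency also adds converter and driver overhead, so treat this as an approximation).

```python
def jack_latency_ms(frames_per_period, periods, sample_rate):
    """Approximate JACK buffering latency in milliseconds."""
    return frames_per_period * periods / sample_rate * 1000

print(round(jack_latency_ms(64, 2, 48000), 2))   # 2.67 ms -- comfortably "low latency"
print(round(jack_latency_ms(256, 2, 44100), 2))  # 11.61 ms -- noticeable when monitoring live
```

This is also why lowering the buffer size is the usual first step when a performer complains about lag, at the cost of more CPU pressure and a higher risk of xruns.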
Carla is a virtual audio rack and patchbay, otherwise known as a plugin host, that can use audio plugins normally used in a DAW such as Ardour as if they were a rack of audio hardware. Some of its features include:
Saving virtual racks and connections
Interacting with several plugins types, including LADSPA, LV2, DSSI, and VST.
Has a plugin bridge that utilizes WINE to use plugins compiled for Windows devices (experimental, not installed by default).
Linux Distributions Dedicated to Audio Production on Linux
AV Linux is a Linux-based operating system aimed at multimedia content creators. Available for the i386 and x86-64 architectures with a kernel customised for maximum performance and low-latency audio production, it has been recommended as a supported Linux platform for Harrison Mixbus.
AV Linux is built on top of Debian. AV Linux is bundled with software for both everyday use and media production.
Preinstalled audio software includes: Ardour, Audacity, Calf Studio Gear, Carla, Guitarix, Hydrogen and MuseScore.
Ubuntu Studio 20
KX Studio is not a Linux distribution as such, but a repository for Debian/Ubuntu-based distros and a custom set of their own applications and plugins for working with audio on Linux (it also has some apps for Windows).
Carla, for example, is one of KX Studio’s applications.
VST under Linux and Other Plugins for / under Linux
LADSPA (Linux Audio Developers Simple Plugin API) was released in 2000, just a year after Steinberg released VST 2.0 in 1999, which is the most famous plugin standard.
LADSPA plugins are only effects processors.
No fancy GUI, just a simple generic one
DSSI (Disposable Soft Synth Interface), from 2004, is sometimes referred to as LADSPA-for-instruments
LV2 – LADSPA v2
It combines the best of both and can serve as the replacement for both DSSI and LADSPA.
VAMP plugins do not make or modify audio or MIDI data. They analyze sound and extract its features. Audacity and Mixxx, for example, use them to analyze the tempo and key of songs.
VSTs under Linux
VSTs that can be used on Linux come in two flavors:
Natively compiled VST plug-ins, also known as LinuxVSTs. These are plug-ins that are compiled, or can be compiled, on Linux systems with the help of either the Steinberg header files from the VST SDK or the Vestige header (the open-source equivalent of the Steinberg headers).
VSTs compiled for Windows. These can be used with the help of Wine and any host that supports Windows VSTs.
Personally, one of the things I miss is more commercial plugins being available under Linux as well. Many commercial plugins are developed only for Windows / Mac — I would assume most of the time because of the larger user base and all the fancy proprietary licensing and other protection methods involved (Waves plugins, Plugin Alliance, etc.)