In this article, and hopefully more to come, I will share interesting examples of hardware that has been certified by the Open Source Hardware Association (OSHWA).
As an introduction to this series, I’ll start with a little background.
What is open source hardware?
Open source hardware is hardware that is, well, open source. The Open Source Hardware Association maintains a formal definition of open source hardware, but fundamentally, open source hardware is about two types of freedom. The first is freedom of information: Does a user have the information required to understand, replicate, and build upon the hardware? The second is freedom from legal barriers: Will legal barriers (such as intellectual property rights) prevent a user who is trying to understand, replicate, and build upon the hardware from doing so? True open source hardware is open to everyone to do with as they see fit.
Throughout the evolution of software tools, there is a tension between generalization and partial specialization. A tool’s broader adoption is a form of natural selection: it survives by filling a given need, or role, better than its competition. This premise is embodied in the central tenets of the Unix philosophy:
Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
Expect the output of every program to become the input to another, as yet unknown, program.
The domain of configuration management tooling is rife with examples of not heeding this lesson (e.g., Terraform, Puppet, Chef, Ansible, Juju, SaltStack), where a drive toward generality has given way to partially specialized tools, fragmenting the ecosystem. This pattern has not gone unnoticed by the Kubernetes cluster lifecycle special interest group (SIG), whose objective is to simplify the creation, configuration, upgrade, downgrade, and teardown of Kubernetes clusters and their components. Therefore, one of the primary design principles for any subproject the SIG endorses is: Where possible, tools should be composable to solve a higher-order set of problems.
In this post, we will outline the history and motivations behind the creation of the Cluster API, a specialized toolset that brings declarative, Kubernetes-style APIs to cluster creation, configuration, and management in the Kubernetes ecosystem. Cluster API is not meant to supplant existing tools; rather, it serves as a partial specialization that can be composed with them.
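To make that composability concrete, here is a minimal sketch, assuming a management cluster that already has the Cluster API controllers installed (the manifest file name is hypothetical); the declared cluster becomes just another Kubernetes object that standard tooling can act on:

kubectl apply -f my-workload-cluster.yaml   # declare the desired cluster as a manifest
kubectl get clusters                        # inspect it like any other Kubernetes resource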
Most of us appreciate it when our compiler lets us know we made a mistake. Finding coding errors early lets us correct them before they embarrass us in a code review or, worse, turn into bugs that impact our customers. Beyond the compulsory errors, many projects enable additional diagnostics with the -Wall and -Wextra command-line options, and some even promote those warnings to errors via -Werror as a first line of defense. But not every warning necessarily means the code is buggy, and, conversely, the absence of warnings for a piece of code is no guarantee that there are no bugs lurking in it.
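As a minimal illustration (the file and program names here are hypothetical), enabling those options on an ordinary compile looks like this:

gcc -Wall -Wextra -Werror -o demo demo.c   # extra diagnostics, promoted to hard errors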
In this article, I would like to shed light on the trade-offs involved in GCC’s implementation choices. Besides illuminating the underlying issues for GCC contributors interested in implementing new warnings or improving existing ones, I hope it will help calibrate GCC users’ expectations about what kinds of problems can be detected and with what efficacy. A better understanding of the challenges should also reduce the frustration that the limitations of the available solutions can sometimes cause. (See part 2 to learn more about middle-end warnings.)
I’ve been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.
Well, that didn’t happen. In fact, Linux Mandrake didn’t even stand the test of time. It was rebranded as Mandriva, which retained a following but eventually ground to a halt in 2011. The company disbanded, sending all those star developers off to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of OpenMandriva, as well as another distribution called Mageia Linux.
Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS), but it’s never faltered. As of this writing, Mageia sits at number 26 on the Distrowatch Page Hit Ranking chart, and its current release is 6.1.
What Sets Mageia Apart?
This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If you’ve seen one KDE, GNOME, or Xfce distribution, you’ve seen them all, right? Anyone who’s used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It’s not about what you do with the desktop; it’s how you put everything together to improve the user experience.
Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution of presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it’s slightly askew from what you might expect. As with most modern distributions, you boot up the live instance and click the Install icon (Figure 1).
Figure 1: Installing Mageia from the Live instance.
Once you’ve launched the installation app, it’s fairly straightforward, although not quite as simple as in some other versions of Linux. New users might hesitate when presented with the partition choice between Use free space and Custom disk partition (remember, I’m talking about new users here). This type of user might prefer slightly simpler wording. Consider this: What if you were presented (at the partition section) with two choices:
Basic Install
Custom Install
The Basic Install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom Install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.
The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.
Figure 2: Configuring the Mageia bootloader.
The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, the field could be mistaken for the root user password. It’s not. If you don’t want to password protect GRUB2, leave it blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could confuse new users.
Figure 3: Advanced bootloader options can be configured here.
Click Finish and the installation will complete. You might have noticed the absence of user configuration or root password options. With the first stage of the installation complete, reboot the machine and remove the installer media; when the machine comes back up, you’ll be prompted to configure both the root user password and a standard user account (Figure 4).
Figure 4: Configuring your users.
And that’s all there is to the Mageia installation.
Welcome to Mageia
Once you log into Mageia, you’ll be greeted by something every Linux distribution should use—a welcome app (Figure 5).
Figure 5: The Mageia welcome app is a new user’s best friend.
From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it’s important information for users to have at the ready.
Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can install and update software, configure media sources for installation, set the update frequency, manage and configure hardware, configure network devices (e.g., VPNs and proxies), configure system services, view logs, open an administrator console, create network shares, and much more. It’s as close to openSUSE’s YaST tool as you’ll find without using SUSE or openSUSE.
Figure 6: The Mageia Control Center is an outstanding system management tool.
Beyond those two tools, you’ll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you’d be hard pressed to find another tool you need to install to get your work done. It’s that complete a distribution.
Target Audience
Pinning down the Mageia Linux target audience is tough. If new users can get past the somewhat confusing installation (which isn’t really that challenging, just slightly misleading), using Mageia Linux is a dream.
The slick, barely modified KDE desktop, combined with the welcome app and Control Center, makes for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the wording in the installer, Mageia Linux could be one of the greatest new-user Linux experiences available. Until then, new users should make sure they understand what they’re getting into with the installation portion of this take on the Linux platform.
Kernel development is truly impossible to keep track of. The main mailing list alone is vast beyond belief. Then there are all the side lists and IRC channels, not to mention all the corporate mailing lists dedicated to kernel development that never see the light of day. In some ways, kernel development has become fundamentally mysterious.
Once in a while, some lunatic decides to try to reach back into the past and study as much of the corpus of kernel discussion as he or she can find. One such person is Joey Pabalinas, who recently wanted to gather everything together in Maildir format, so he could do searches, calculate statistics, generate pseudo-hacker AI bots and whatnot.
He couldn’t find any existing giant corpus, so he tried to create his own by piecing together mail archived on various sites. It turned out to be more than a million separate files, which was too much to host on either GitHub or GitLab.
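To give a sense of what such a corpus enables, here is a hypothetical sketch, assuming the mail lands in a standard Maildir layout under ~/lkml (the path and search term are illustrative), with one message per file:

grep -rl "mutex_lock" ~/lkml/cur | wc -l   # count archived messages mentioning mutex_lock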
The first Open Networking Summit was held in October 2011 at Stanford University and described as “a premier event about OpenFlow and Software-Defined Networking (SDN)”. Here we are seven and a half years later, and I’m constantly amazed at both how far we’ve come since then and how quickly a traditionally slow-moving industry like telecommunications is embracing change and innovation powered by open source. Coming out of the ONS Summit in Amsterdam last fall, Network World described open source networking as the “new norm,” and indeed, open platforms have become de facto standards in networking.
Like the technology, ONS as an event is constantly evolving to meet industry needs and is designed to help you take advantage of this revolution in networking. The theme of this year’s event is “Enabling Collaborative Development & Innovation,” and we’re pursuing it by exploring collaborative development and innovation across the ecosystem for enterprises, service providers, and cloud providers in key areas such as SDN, NFV, VNF, CNF/cloud native networking, orchestration, and automation across cloud, core network, edge, access, IoT services, and more.
A unique aspect of ONS is that it facilitates deep technical discussions in parallel with exciting keynotes, industry, and business discussions in an integrated program. The latest innovations from the networking project communities including LF Networking (ONAP, OpenDaylight, OPNFV, Tungsten Fabric) are well represented in the program, and in features and add-ons such as the LFN Unconference Track and LFN Networking Demos. A variety of event experiences ensure that attendees have ample opportunities to meet and engage with each other in sessions, the expo hall, and during social events.
New this year is a track structure built to cover the key topics in depth, meeting the needs of CIOs, CTOs, and architects as well as developers, sysadmins, NetOps, and DevOps teams.
The ONS Schedule is now live — find the sessions and tutorials that will help you learn how to participate in the open source communities and ecosystems that will make a difference in your networking career. And if you need help convincing your boss, this will help you make the case.
The standard price expires March 17, so hurry up and register today! Be sure to check out the Day Passes and Hall Passes available as well.
Last month I wrote about combining a series of Unix commands using pipes. But there are times when you don’t even need pipes to turn a carefully chosen series of commands into a powerful and convenient home-grown utility. …
Take the echo command, for example, which simply repeats whatever text is entered after it. I’d just never found it particularly useful, since it always seemed to be more trouble than it was worth. Sure, echo was handy for adding decorations to output:
echo "--------------------------" ; date ; echo "--------------------------"
--------------------------
Thu Feb 28 01:25:46 UTC 2019
--------------------------
But if you have to type in all those decorations in the first place, you’re not really saving any time.
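One way around that, as a minimal sketch, is to type the decorations just once inside a shell function (the name fancydate is made up for this example) and reuse it from then on:

# Print the date framed by decoration lines, without retyping them.
fancydate() {
  echo "--------------------------"
  date
  echo "--------------------------"
}

After defining it (or adding it to your ~/.bashrc), typing fancydate produces the decorated output above.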
What I’d really wanted (instead of echo) was a command to drop me back into that one deep-down subdirectory where I was doing most of my work. Something that was shorter than
cd ~/subdirectory/subdirectory/subdirectory/subdirectory/subdirectory
Yes, there’s a command that lets you change back to your last-used directory.
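One such command, in Bash and other POSIX shells, is the builtin cd -, which returns you to your previous working directory (it is equivalent to cd "$OLDPWD"):

cd ~/subdirectory/subdirectory/subdirectory/subdirectory/subdirectory
cd /tmp   # wander off somewhere else
cd -      # jump straight back to the deep subdirectory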
In the last few years, we have witnessed the unprecedented growth of open source in all industries—from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.
As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, and so on; all of these are true. To that list, I’d like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one form or another on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in its products or services and how it interacts with proprietary components, all of which is necessary to assess the value creation of the company in relation to open source software.
Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the OpenChain project: Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions. The ebook addresses a basic question: How does one evaluate the open source practices of an organization that is an acquisition target? It answers that question by offering a path for evaluating these practices, along with checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for due diligence, walks through the audit process, and provides general recommended practices for ensuring open source compliance.
It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches that achieve the same results. Appropriately, an organization will adapt its open source approach based on the nature and amount of the open source it uses, the licenses that apply to that software, the kinds of products it distributes or services it offers, and the design of those products or services themselves.
If you are involved in assessing the open source and compliance practices of organizations, are part of an M&A transaction focused on open source due diligence, or simply want a deeper understanding of defining, implementing, and improving open source compliance programs within your organization, this ebook is a must-read. Download the Brief.
At the Open Source Leadership Summit, the Linux Foundation announced the formation of CommunityBridge, a new platform created for open source developers.
Making the announcement on stage, Jim Zemlin, executive director of the Linux Foundation, said the foundation would match funding from any organization that donates to CommunityBridge projects.
Following up on that announcement, Microsoft-owned GitHub said it would donate $100,000 to CommunityBridge and invited maintainers of CommunityBridge projects to take part in GitHub’s maintainer program.
The Linux Foundation’s matching commitment is capped at a total of $500,000 across all contributors.
People like to make fun of JavaScript. “It’s not a real language! Who, in their right mind, would use it on a server?” The answers: it is a real language, and one of the most popular of all. Still, for years the open source JavaScript world had been divided into two camps: the JS Foundation and the Node.js Foundation. This was a bit, well, silly. Now, the two are coming together to form one organization: the OpenJS Foundation.
At the Open Source Leadership Summit in Half Moon Bay, CA, the Linux Foundation announced the long-anticipated news that the two JavaScript powers were merging. The newly formed OpenJS Foundation’s mission is to support the growth of JavaScript and related web technologies by providing a neutral organization to host and sustain projects and to fund development activities. It comprises 31 open source JavaScript projects, including Appium, Dojo, jQuery, Node.js, and webpack.