
Mageia Linux Is a Modern Throwback to the Underdog Days

I’ve been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.

Well, that didn’t happen. In fact, Linux Mandrake didn’t even stand the test of time. It was rebranded as Mandriva, which retained some popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of OpenMandriva, as well as another distribution called Mageia Linux.

Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it’s never faltered. As of this writing, Mageia is listed as number 26 on the Distrowatch Page Hit Ranking chart and is enjoying release number 6.1.

What Sets Mageia Apart?

This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If you’ve seen one KDE, GNOME, or Xfce distribution, you’ve seen them all, right? Anyone who’s used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It’s not about what you do with the desktop; it’s how you put everything together to improve the user experience.

Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it’s slightly different from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).

Figure 1: Installing Mageia from the Live instance.

Once you’ve launched the installation app, it’s fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when presented with the partition choice between Use free space or Custom disk partition (remember, I’m talking about new users here). This type of user might prefer simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:

  • Basic Install

  • Custom Install

The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.

The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.

Figure 2: Configuring the Mageia bootloader.

The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, it could be confused with the root user password. It’s not. If you don’t want to password protect GRUB2, leave this blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.

Figure 3: Advanced bootloader options can be configured here.

Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) you’ll then be prompted to configure both the root user password and a standard user account (Figure 4).

Figure 4: Configuring your users.

And that’s all there is to the Mageia installation.

Welcome to Mageia

Once you log into Mageia, you’ll be greeted by something every Linux distribution should use—a welcome app (Figure 5).

Figure 5: The Mageia welcome app is a new user’s best friend.

From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it’s important information for users to have at the ready.

Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can install and update software, configure media sources for installation, set the update frequency, manage and configure hardware, configure network devices (e.g., VPNs, proxies, and more), configure system services, view logs, open an administrator console, create network shares, and much more. This is as close to the openSUSE YaST tool as you’ll find (without using either SUSE or openSUSE).

Figure 6: The Mageia Control Center is an outstanding system management tool.

Beyond those two tools, you’ll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you’d be hard pressed to find another tool you need to install to get your work done. It’s that complete a distribution.

Target Audience

Pinning down the Mageia Linux target audience is tough. If new users can get past the somewhat confusing installation (which isn’t really that challenging, just slightly misleading), using Mageia Linux is a dream.

The slick, barely modified KDE desktop, combined with the welcome app and control center make for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage on the installation, Mageia Linux could be one of the greatest new user Linux experiences available. Until then, new users should make sure they understand what they’re getting into with the installation portion of this take on the Linux platform.

Chasing Linux Kernel Archives

Kernel development is truly impossible to keep track of. The main mailing list alone is vast beyond belief. Then there are all the side lists and IRC channels, not to mention all the corporate mailing lists dedicated to kernel development that never see the light of day. In some ways, kernel development has become fundamentally mysterious.

Once in a while, some lunatic decides to try to reach back into the past and study as much of the corpus of kernel discussion as he or she can find. One such person is Joey Pabalinas, who recently wanted to gather everything together in Maildir format, so he could do searches, calculate statistics, generate pseudo-hacker AI bots and whatnot.

He couldn’t find any existing giant corpus, so he tried to create his own by piecing together mail archived on various sites. It turned out to be more than a million separate files, which was too much to host on either GitHub or GitLab.

Read more at Linux Journal

ONS Evolution: Cloud, Edge, and Technical Content for Carriers and Enterprise

The first Open Networking Summit was held in October 2011 at Stanford University and described as “a premier event about OpenFlow and Software-Defined Networking (SDN).” Here we are seven and a half years later, and I’m constantly amazed at both how far we’ve come since then and how quickly a traditionally slow-moving industry like telecommunications is embracing change and innovation powered by open source. Coming out of the ONS Summit in Amsterdam last fall, Network World described open source networking as the “new norm,” and indeed, open platforms have become de facto standards in networking.

Like the technology, ONS as an event is constantly evolving to meet industry needs and is designed to help you take advantage of this revolution in networking. The theme of this year’s event is “Enabling Collaborative Development & Innovation” and we’re doing this by exploring collaborative development and innovation across the ecosystem for enterprises, service providers and cloud providers in key areas like SDN, NFV, VNF, CNF/Cloud Native Networking, Orchestration, Automation of Cloud, Core Network, Edge, Access, IoT services, and more.

A unique aspect of ONS is that it facilitates deep technical discussions in parallel with exciting keynotes, industry, and business discussions in an integrated program. The latest innovations from the networking project communities including LF Networking (ONAP, OpenDaylight, OPNFV, Tungsten Fabric) are well represented in the program, and in features and add-ons such as the LFN Unconference Track and LFN Networking Demos. A variety of event experiences ensure that attendees have ample opportunities to meet and engage with each other in sessions, the expo hall, and during social events.

New this year is a track structure built to cover the key topics in depth to meet the needs of both CIOs/CTOs/architects and developers, sysadmins, NetOps, and DevOps teams:

The ONS Schedule is now live — find the sessions and tutorials that will help you learn how to participate in the open source communities and ecosystems that will make a difference in your networking career. And if you need help convincing your boss, this will help you make the case.

The standard price expires March 17, so hurry up and register today! Be sure to check out the Day Passes and Hall Passes available as well.

I hope to see you there!

This article originally appeared at the Linux Foundation.

Tutorial: Tap the Hidden Power of Your Bash Command History

Last month I wrote about combining a series of Unix commands using pipes. But there are times where you don’t even need pipes to turn a carefully-chosen series of commands into a powerful and convenient home-grown utility. …

The echo command repeats whatever text is entered after it, for example. I’d just never found it particularly useful, since it always seemed to be more trouble than it was worth. Sure, echo was handy for adding decorations to output:

echo "--------------------------" ; date ; echo "--------------------------"
--------------------------
Thu Feb 28 01:25:46 UTC 2019
--------------------------

But if you have to type in all those decorations in the first place, you’re not really saving any time.
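One way to make the decorations pay off is to wrap them in a shell function you define once (in ~/.bashrc, say) and then invoke by name. The `banner_date` name below is my own, a minimal sketch rather than anything from the article:

```shell
# Hypothetical helper: print a date banner without retyping the dashes.
banner_date() {
  echo "--------------------------"
  date
  echo "--------------------------"
}

banner_date   # three lines: dashes, the current date, dashes
```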

What I’d really wanted (instead of echo) was a command to drop me back into that one deep-down subdirectory where I was doing most of my work. Something that was shorter than

cd ~/subdirectory/subdirectory/subdirectory/subdirectory/subdirectory

Yes, there’s a command that lets you change back to your last-used directory.

cd -
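That command is `cd -`, which jumps back to the previous working directory (bash keeps it in $OLDPWD), so you can bounce between two deep paths with two keystrokes. A quick sketch, using /tmp as a stand-in for that deep-down work directory:

```shell
# `cd -` toggles between the current and previous working directories.
cd /tmp       # the deep-down subdirectory stands in here
cd /          # wander off somewhere else
cd -          # back to /tmp; also echoes the directory name
```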

Read more at The New Stack

Open Source is Eating the Startup Ecosystem: A Guide for Assessing the Value Creation of Startups

In the last few years, we have witnessed the unprecedented growth of open source in all industries, from the increased adoption of open source software in products and services, to the extensive growth in open source contributions and the releasing of proprietary technologies under an open source license. It has been an incredible experience to be a part of.

As many have stated, Open Source is the New Normal, Open Source is Eating the World, Open Source is Eating Software, etc., all of which are true statements. To that extent, I’d like to add one more maxim: Open Source is Eating the Startup Ecosystem. It is almost impossible to find a technology startup today that does not rely in one form or another on open source software to boot up its operation and develop its product offering. As a result, we are operating in a space where open source due diligence is now a mandatory exercise in every M&A transaction. These exercises evaluate the open source practices of an organization and scope out all open source software used in product(s)/service(s) and how it interacts with proprietary components—all of which is necessary to assess the value creation of the company in relation to open source software.

Being intimately involved in this space has allowed me to observe, learn, and apply many open source best practices. I decided to chronicle these learnings in an ebook as a contribution to the OpenChain project: Assessment of Open Source Practices as part of Due Diligence in Merger and Acquisition Transactions. This ebook addresses the basic question: How does one evaluate open source practices in a given organization that is an acquisition target? We address this question by offering a path to evaluate these practices along with appropriate checklists for reference. Essentially, it explains how the acquirer and the target company can prepare for this due diligence, offers an explanation of the audit process, and provides general recommended practices for ensuring open source compliance.

It is important to note that not every organization will see a need to implement every practice we recommend. Some organizations will find alternative practices or implementation approaches to achieve the same results. Appropriately, an organization will adapt its open source approach based upon the nature and amount of the open source it uses, the licenses that apply to the open source it uses, the kinds of products it distributes or services it offers, and the design of the products or services themselves.

If you are involved in assessing the open source and compliance practices of organizations, or involved in an M&A transaction focusing on open source due diligence, or simply want a deeper understanding of defining, implementing, and improving open source compliance programs within your organization, this ebook is a must-read. Download the Brief.

This article originally appeared at the Linux Foundation.

Linux Foundation Announces CommunityBridge Platform for Open Source Developers

At the Open Source Leadership Summit, the Linux Foundation announced the formation of CommunityBridge, which is a new platform created for open source developers.

In making the announcement, Jim Zemlin, executive director of the Linux Foundation, said on stage at the conference that the Linux Foundation would match funding for any organization that donated funds to CommunityBridge projects.

Following up on those announcements, Microsoft-owned GitHub said it would donate $100,000 to CommunityBridge and invited maintainers of CommunityBridge projects to take part in GitHub’s maintainer program.

The Linux Foundation will match any organization that contributes money to CommunityBridge projects up to a total of $500,000 across all of the contributors.

Read more at Fierce Telecom

JS Foundation and Node.js Foundation Join Forces

People like to make fun of JavaScript. “It’s not a real language! Who, in their right mind, would use it on a server?” The reply: it is a real language, and one of the most popular languages of all. For years, the enterprise server side had been divided into two camps: the JS Foundation and the Node.js Foundation. This was a bit, well, silly. Now, the two are coming together to form one organization: the OpenJS Foundation.

At the Open Source Leadership Summit in Half Moon Bay, CA, the Linux Foundation announced the long-anticipated news that the two JavaScript powers were merging. The newly formed OpenJS Foundation’s mission is to support the growth of JavaScript and related web technologies by providing a neutral organization to host and sustain projects, and fund development activities. It’s made up of 31 open-source JavaScript projects including Appium, Dojo, jQuery, Node.js, and webpack.

Read more at ZDNet

Shuah Khan Becomes the Third Linux Foundation Fellow

Programmers love to write code. But what about debugging it, writing test suites, and tracking down security bugs? Not so much. To help address these problems in Linux, Shuah Khan, a noted Linux kernel developer, is becoming — after Linus Torvalds and Greg Kroah-Hartman — the third Linux Foundation Fellow.

Khan, who grew up in India, picked up her computer science master’s degree in operating systems and graphics. After working at AT&T Bell Labs and Lucent, she spent over 13 years at HPE, where she worked on open-source projects. While there, she decided: “I really wanted to contribute to Linux kernel, and I started looking for ways to get involved.”

She started to work in 2011 on the mainstreaming of Android code back into Linux in her spare time. Unlike some people, she found the Linux kernel developer community to be very welcoming. “I thought that it’s the right place, the right fit for me,” Khan told ZDNet.

Read more at ZDNet

Mozilla Releases Iodide, an Open Source Browser Tool for Publishing Dynamic Data Science

Mozilla wants to make it easier to create, view, and replicate data visualizations on the web, and toward that end, it today unveiled Iodide, an “experimental tool” meant to help scientists and engineers write and share interactive documents using an iterative workflow. It’s currently in alpha, and available from GitHub in open source.

“In the last ten years, there has been an explosion of interest in ‘scientific computing’ and ‘data science’: that is, the application of computation to answer questions and analyze data in the natural and social sciences,” Brendan Colloran, staff data scientist at Mozilla, wrote in a blog post. “To address these needs, we’ve seen a renaissance in programming languages, tools, and techniques that help scientists and researchers explore and understand data and scientific concepts, and to communicate their findings. But to date, very few tools have focused on helping scientists gain unfiltered access to the full communication potential of modern web browsers.”

Read more at VentureBeat

CHIPS Alliance to Create Open Chip Design Tools for RISC-V and Beyond

The Linux Foundation and several major RISC-V development firms have launched an LF-hosted CHIPS Alliance with a mission “to host and curate high-quality open source code relevant to the design of silicon devices.” The founding members — Esperanto Technologies, Google, SiFive, and Western Digital — are all involved in RISC-V projects.  

On the same day that the CHIPS Alliance was announced, Intel and other companies, including Google, launched a Compute Express Link (CXL) consortium that will open source and develop Intel’s CXL interconnect. CXL shares many traits and goals of the OmniXtend protocol that Western Digital is contributing to CHIPS (see farther below).

The CHIPS Alliance aims to “foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and Internet of Things (IoT) applications.” This “independent entity” will enable “companies and individuals to collaborate and contribute resources to make open source CPU chip and system-on-a-chip (SoC) design more accessible to the market,” says the project.

This announcement follows a collaboration between the RISC-V Foundation and the Linux Foundation formed last November to accelerate development for the open source RISC-V ISA, starting with RISC-V starter guides for Linux and Zephyr. The CHIPS Alliance is more focused on developing open source VLSI chip design building blocks for semiconductor vendors.

The CHIPS Alliance will follow Linux Foundation style governance practices and include the usual Board of Directors, Technical Steering Committee, and community contributors “who will work collectively to manage the project.” Initial plans call for establishing a curation process aimed at providing the chip community with access to high-quality, enterprise-grade hardware.

A testimonial quote by Zvonimir Bandic, senior director of next-generation platforms architecture at Western Digital, offers a few clues about the project’s plans: “The CHIPS Alliance will provide access to an open source silicon solution that can democratize key memory and storage interfaces and enable revolutionary new data-centric architectures. It paves the way for a new generation of compute devices and intelligent accelerators that are close to the memory and can transform how data is moved, shared, and consumed across a wide range of applications.”

Both the AI-focused Esperanto and SiFive, which has led the charge on Linux-driven RISC-V devices with its Freedom U540 SoC and upcoming U74 and U74-MC designs, are exclusively focused on RISC-V. Western Digital, which is contributing its RISC-V based SweRV core to the project, has pledged to produce 1 billion of SiFive’s RISC-V cores. All but Esperanto have committed to contribute specific technology to the project (see farther below).

Notably missing from the CHIPS founders list is Microchip, whose Microsemi unit announced a Linux-friendly PolarFire SoC, based in part on SiFive’s U54-MC cores. The PolarFire SoC is billed as the world’s first RISC-V FPGA SoC.

Although not included as a founding member, the RISC-V Foundation appears to be behind the CHIPS Alliance, as is evident from this quote from Martin Fink, interim CEO of the RISC-V Foundation and VP and CTO of Western Digital: “With the creation of the CHIPS Alliance, we are expecting to fast-track silicon innovation through the open source community.”

With the exploding popularity of RISC-V, the RISC-V Foundation may have decided it has too much on its plate right now to tackle the projects the CHIPS Alliance is planning. For example, the Foundation is attempting to crack down on the growing fragmentation of RISC-V designs. A recent article in Semiconductor Engineering reports on the topic and the RISC-V Compliance Task Group.

Although the official CHIPS Alliance mission statements do not mention RISC-V, the initiative appears to be an extension of the RISC-V ecosystem. So far, there have been few open-ISA alternatives to RISC-V. In December, however, Wave Computing announced plans to follow in RISC-V’s path by offering its MIPS ISA as open source code without royalties or proprietary licensing. As noted in a Bit-Tech.net report on the CHIPS Alliance, there are also various open source chip projects that cover somewhat similar ground, including the FOSSi (Free and Open Source Silicon) Foundation, LibreCores, and OpenCores.

Contributions from Google, SiFive, and Western Digital

Google plans to contribute to the CHIPS Alliance a Universal Verification Methodology (UVM) based instruction stream generator environment for RISC-V cores. The configurable UVM environment will provide “highly stressful instruction sequences that can verify architectural and micro-architectural corner-cases of designs,” says the CHIPS Alliance.

SiFive will contribute and continue to improve its RocketChip (or Rocket-Chip) SoC generator, including the initial version of the TileLink coherent interconnect fabric. SiFive will also continue to contribute to the Scala-based Chisel open-source hardware construction language and the FIRRTL “intermediate representation specification and transformation toolkit” for writing circuit-level transformations. SiFive will also continue to contribute to and maintain the Diplomacy SoC parameter negotiation framework.

As noted, Western Digital will contribute its 9-stage, dual-issue, 32-bit SweRV core, which recently appeared on GitHub. It will also contribute a SweRV test bench and a SweRV instruction set simulator. Additional contributions will include the specification and early implementations of the OmniXtend cache coherence protocol.

Intel launches CXL interconnect consortium

Western Digital’s OmniXtend is similar to the high-speed Compute Express Link (CXL) CPU interconnect that Intel is open sourcing. On Monday, Intel, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, and Microsoft announced a CXL consortium to help develop the PCIe Gen 5-based CXL into an industry standard. Intel intends to incorporate CXL into its processors starting in 2021 to link the CPU with memory and various accelerator chips.

The CXL group competes with a Cache Coherent Interconnect for Accelerators (CCIX) consortium founded in 2016 by AMD, Arm, IBM, and Xilinx. It similarly adds cache coherency atop a PCIe foundation to improve interconnect performance. By contrast, OmniXtend is based on Ethernet PHY technology. While the CXL and CCIX groups are focused only on interconnects, the CHIPS Alliance has a far more ambitious agenda, according to an interesting EETimes story on the CHIPS Alliance, CXL, and CCIX.