
openSUSE Leap Is Now 99.9% Enterprise Distribution

Two years ago, when openSUSE decided to move the base of openSUSE Leap to SUSE Linux Enterprise (SLE), it was entering uncharted territory. SLE is a tightly controlled enterprise ship that runs on mission-critical systems. openSUSE, on the other hand, has been a community-driven project that, despite sponsorship from SUSE, is relatively independent.

It became clear, though, that moving to the SLE source code would solve many problems for both members of the SUSE family. SLE would get a platform from which it could borrow the latest fully tested packages, and openSUSE Leap would get an enterprise-grade code base with which to move into CentOS and Ubuntu territory. SLE and openSUSE created a symbiotic relationship in which each pulls content from the other.

Moving closer

“Initially when we moved the base, our utopian vision was to have a 30-30-30 split from SLE, Tumbleweed and openSUSE into Leap,” said Richard Brown, openSUSE chairman.  

“The first version of openSUSE Leap (42.1) didn’t have that equilibrium, and there was too much replacement of SLES components from the community. With 42.2, we moved closer: there was enough SLE and enough Tumbleweed, and we inherited what we wanted from 42.1. But with the upcoming 42.3 release, we are exactly where we wanted to be. The base comprises SLE, so you have a fully enterprise-grade base, and then you have fast-moving components on top of it that come from Tumbleweed, which allow you to stay updated on a very stable system. The way I look at it, the upcoming release of Leap is 99.9% enterprise-grade software; it’s our CentOS, just better and broader with the addition of integrated community packages,” he said.

Leap has essentially created a community platform for the developers and sysadmins who run SUSE Linux Enterprise Server (SLES) in their datacenters. The strategy of moving the code base to SLE has worked: openSUSE Leap has been a success so far, and even companies like IBM now contribute directly to Leap because they know it’s the best and most open way to get things into SLES. Fujitsu is shipping Tumbleweed and Leap to its users, according to Brown.

Changing mission statement

Initially, openSUSE’s mission statement was to “encourage use of Linux & Open Source everywhere.” But that’s no longer the heart and soul of openSUSE. The project has evolved beyond just a Linux distribution and now caters to a different audience: developers and sysadmins. So, the openSUSE board members drafted a new mission statement: “Openly engineered tools to change your world.” The statement is not final yet; once it has been discussed with the community and everyone is on board, it may become official.

“We work in the open, and we share our opinions, which change over time as we learn more or as things improve. We work on everything openly. What we do essentially is engineering: we help build packages, we help test them, and we help deliver them. We care about the process,” said Brown. “At the same time, everything that we do is a tool. openQA is a testing tool, OBS is a packaging tool, YaST is a system management tool; even our distributions, Leap and Tumbleweed, are tools.”

openSUSE in Windows land

Microsoft is now bringing openSUSE to Windows users through its WSL (Windows Subsystem for Linux) initiative. Microsoft and the openSUSE project have finalized all the “paperwork,” and Rich Turner of Microsoft confirmed that openSUSE for Windows is in the works.

Brown said there will be two members of the SUSE family in the Windows Store: Leap 42.2 and SLES 12. This means users will be able to install and run command-line utilities from both of these platforms. Although Leap will be available for free, SLES is subscription-based; however, SUSE has started a SUSE Developer Program that offers a one-year free subscription to SLES. Thus, developers have access to thousands of packages, tools, and utilities through either platform.

Many free software advocates may wonder whether this will affect the Linux user base: if developers can access Linux utilities from within Windows, there may be no need to install a Linux desktop anymore. “We are a project that creates tools, and it doesn’t matter which platform runs those tools. You can use them on openSUSE or Windows. The idea is to help more people use our tools and get work done,” said Brown. “I think it will actually increase the reach of Linux, because users who would never have installed Linux will now be able to use these tools. Windows has a much larger market share than Linux, and those users will now have access to Linux tools.”

Incubating new ideas

As openSUSE evolves into a project that offers tools, Brown said they are also contemplating a new effort called openSUSE Incubator. Since OBS allows developers to create packages and collaborate, questions naturally arise over time about the quality of those projects.

“How do we ensure that these projects that are available through OBS are of openSUSE quality?” asked Brown. There is already a model that answers that question: the Apache Incubator, where the Apache Software Foundation incubates new projects.

openSUSE will look at projects that are not yet up to its standards and mark them as Incubator projects. The idea is to create a fertile, nurturing environment that enables developers to bring their projects to openSUSE and watch them grow. As part of the Incubator, projects will get access to the OBS build service and unlimited bandwidth from the openSUSE mirrors, and they will be hosted on openSUSE infrastructure where users can consume them directly.

However, that doesn’t mean anyone can simply “dump” their project into the openSUSE Incubator. Brown is working on some basic guidelines to ensure that incubated projects at least share openSUSE’s principles of openness and have a few maintainers. Projects will have the option to use openSUSE branding, but Brown stresses that, despite being part of the openSUSE Incubator, they will remain independent when it comes to branding. Many open source projects could benefit from something like the openSUSE Incubator.

Conclusion

Overall, the openSUSE community is heading in the right direction as our computing world changes. Instead of sticking to the operating system alone, it is expanding its reach and catering to what developers and sysadmins need.


Pivoting To Understand Quicksort [Part 2]

This is the second installment in a two-part series on Quicksort. If you haven’t read Part 1 of this series, I recommend checking that out first!

In part 1 of this series, we walked through how the quicksort algorithm works at a high level. In case you need a quick refresher, this algorithm has two important aspects: a pivot element, and two partitions around the pivot.

We’ll remember that quicksort functions by choosing a pivot point (remember, this is only somewhat random!) and sorting the remaining elements so that items smaller than the pivot go to the left of, or in front of, the pivot, and items larger than the pivot go to the right of, or behind, the pivot. These two halves become the partitions, and the algorithm recursively calls itself on both of them until the entire list is divided down into single-item lists. Then, it combines them all back together again.
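
To make that description concrete, here is a minimal quicksort sketch in Python. It is not code from the original series; the function name and the simple last-element pivot choice are just illustrative, but it follows the pivot-and-partition structure described above.

    def quicksort(items):
        # Base case: single-item (or empty) lists are already sorted.
        if len(items) <= 1:
            return items
        # Pick a pivot (here simply the last element) ...
        pivot = items[-1]
        rest = items[:-1]
        # ... and partition the remaining elements around it.
        smaller = [x for x in rest if x <= pivot]  # in front of the pivot
        larger = [x for x in rest if x > pivot]    # behind the pivot
        # Recurse on both partitions, then combine everything back together.
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]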

Read more at Dev.to

Site Reliability Engineering for Cloud-Native Operations

Developers want to change things as soon as they can, while operations teams remain apprehensive that changes will break stuff. To reconcile these two drives, Google forged the path of site reliability engineering (SRE), an emerging practice for maintaining complex computing systems that need to run with high reliability. As the founder of Google’s SRE Team, Ben Treynor put it: SRE is “what happens when a software engineer is tasked with what used to be called operations.”

SRE dates back to 2003, when Treynor joined Google to manage a team of engineers running a production environment. The practice proved to be a success, and the company now has 1,500 engineers working in SRE. Apple, Oracle, Microsoft, Twitter, Dropbox, IBM, and Amazon have all implemented their own SRE teams as well.

Read more at The New Stack

Digital Signatures in Package Management

Serious distributions try to protect their repositories cryptographically against tampering and transmission errors. Arch Linux, Debian, Fedora, openSUSE, and Ubuntu all take different, complex, but conceptually similar approaches.

Many distributions develop, test, build, and distribute their software via a heterogeneous zoo of servers, mirrors, and workstations that makes central management and protection of the end product almost impossible. In terms of personnel, distributions also depend on the collaboration of a severely limited number of international helpers. This technical and human diversity opens a massive door for external and internal attackers who seek to infect popular distribution packages with malware. During updates, hundreds of thousands of Linux machines would then download and install the poisoned software with root privileges. The damage could hardly be greater.

The danger is less abstract than some might think. Repeatedly in the past, projects have had to take down one or more servers after hacker attacks. The motivation of (at least) all the major distributions to protect themselves from planted packages is correspondingly large and boils down to two actions: one simple and one cryptographic.

Read more at Linux Pro Magazine

Linux Owns Supercomputing

The US is falling behind in the supercomputer race, but no matter where a supercomputer is running, one thing remains true: It’s running Linux.

In the latest Top500 supercomputer list, which was revealed in June 2017, 498 out of 500 supercomputers were running Linux. The remaining two ran Unix.

These were a pair of Chinese IBM POWER computers running AIX near the bottom of the list. These machines came in at 493 and 494. Since the November 2016 Top500, these supercomputers have dropped by over 100 places. At this rate, Linux will score a clean sweep in the next biannual Top500 competition.

Read more at ZDNet

18 Open Source Translation Tools to Localize Your Project

Localization plays a central role in the ability to customize an open source project to suit the needs of users around the world. Besides coding, language translation is one of the main ways people around the world contribute to and engage with open source projects.

There are tools specific to the language services industry (surprised to hear that’s a thing?) that enable a smooth localization process with a high level of quality. Categories that localization tools fall into include:

  • Computer-assisted translation (CAT) tools
  • Machine translation (MT) engines
  • Translation management systems (TMS)
  • Terminology management tools
  • Localization automation tools

Read more at OpenSource.com

Hyperledger Fabric Blockchain Publishes Software Release Candidate

The developers behind the open-source Hyperledger Fabric blockchain project have issued the software’s official release candidate.

Announced yesterday via the project’s public mailing list, the release effectively moves the software, one of several separate enterprise distributed ledger code bases being incubated under the Hyperledger umbrella, one step closer to a formal version 1.0 launch.

Jonathan Levi, founder of blockchain startup HACERA and a release manager for the project, sought to portray the announcement as a call to action to those seeking to leverage the software, framing it as evidence that Fabric is “getting serious” and moving steadily and pragmatically toward launch.

Read more at CoinDesk

Building Images with Dockerfile and Docker Hub

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and reviewed some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we have created a file called Dockerfile, and in it we have used different instructions, like FROM, RUN, EXPOSE, and CMD.
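
The excerpt doesn’t show the file’s exact contents, but a minimal Dockerfile for an nginx image using those four instructions might look roughly like this (the base image and package choices are only an assumption for illustration):

    # Start from a base image
    FROM ubuntu:16.04

    # Install the nginx package inside the image
    RUN apt-get update && apt-get install -y nginx

    # Document the port the web server listens on
    EXPOSE 80

    # Run nginx in the foreground when a container starts
    CMD ["nginx", "-g", "daemon off;"]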

To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we ask Docker to look in the current folder for the Dockerfile and build the image.
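
For example, run from inside the nginx folder (the image name here is just a placeholder):

    # -t names the image; "." tells Docker to use the current folder as the build context
    docker build -t <username>/nginx .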

On the Docker Hub, we also see the repositories — for example, for nginx, redis, and busybox. For a given repository, you can have different tags, which will define the individual image. On the repository page, we can also see a respective Dockerfile, from which an image is created — for example, you can see the Dockerfile of the nginx image. 

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker registry, which by default is Docker Hub.
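
Assuming the image was tagged with your Docker Hub username, as above, the login-and-push step looks like this:

    # Authenticate against Docker Hub, then push the tagged image
    docker login
    docker image push <username>/nginx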

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build containing the Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process starts on our Docker Hub account.
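
For instance, any committed change to the linked repository is enough to trigger a build; a trivial edit like the following will do (the GitHub username is a placeholder):

    git clone https://github.com/<username>/docker-automated-build.git
    cd docker-automated-build
    # Any change works; here we just append a comment to the Dockerfile
    echo "# trigger automated build" >> Dockerfile
    git commit -am "Trigger Docker Hub automated build"
    git push origin master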

Our image build is initially queued; it will be scheduled eventually, and the image will be created. After that, anybody will be able to download it.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

What Can Developers Learn from Being On Call?

We often talk about being on call as being a bad thing. For example, the night before I wrote this my phone woke me up in the middle of the night because something went wrong on a computer. That’s no fun! I was grumpy.

In this post, though, we’re going to talk about what you can learn from being on call and how it can make you a better software engineer! And to learn from being on call, you don’t necessarily need to get woken up in the middle of the night. By “being on call”, here, I mean “being responsible for your code when it breaks”. It could mean waking up to issues that happened overnight and needing to fix them during your workday!

Everything in here is synthesized from an amazing Twitter thread by Charity Majors where she asked “How has being on call made you a better engineer?”: https://twitter.com/mipsytipsy/status/847508734188191745

Read more at Julia Evans Blog

Making the Internet Archive’s Full Text Search Faster

The Internet Archive is a nonprofit digital library based in San Francisco. It provides free public access to collections of digitized materials, including websites, books, documents, papers, newspapers, music, video and software.

This article describes how we made the full-text organic search faster — without scaling horizontally — allowing our users to search in just a few seconds across our collection of 35 million documents containing books, magazines, newspapers, scientific papers, patents, and much more.

Read more at Medium