
Building a Trusted Open Source Software Supply Chain With OpenChain

There are many examples of collaboration all around us that stretch far beyond the type of collaboration we see in open source projects. In preparation for her keynote at LinuxCon Europe, Jilayne Lovejoy, Principal Open Source Counsel at ARM, watched a TED talk by Rodney Mullen and was inspired by how he talked about collaboration within the skateboarding community, comparing skateboarders to hackers in the open source community.

Lovejoy says, “You’d think the people in this room had invented the whole concept of collaboration, but you can actually find examples of collaboration all around us, like in the way skateboarding evolved from freestyle to street skating by adapting to a new environment.” She talks about how the values underpinning collaboration are inherently compelling: “It’s about being motivated by the respect from your peers, the satisfaction of creating something others can use, and being part of a community that you helped build, where you can see other people contributing to that and taking it to the next level.”

However, within her own profession, lawyers don’t tend to work in a collaborative atmosphere. Even between people who work in open source, there are other things, like training materials and internal company policies, that we don’t always think to develop collaboratively with other people outside of our teams. 

OpenChain

Lovejoy asks, “How can we take advantage of collaboration and apply it to making software move through the supply chain with less friction, and to building trust? What if we had a collaborative group to solve this, to help define what the processes look like? Enter OpenChain. OpenChain is a new Linux Foundation collaborative project with a vision of a software supply chain where free and open source software is delivered with trusted and consistent compliance information.”

There are three key areas within the OpenChain project:

  • Specification: Organized into six goals, the specification describes an effective FOSS compliance program, with requirements and the rationale for why each is important. The first version of the specification was released at LinuxCon Europe.
  • Curriculum: The initial set of training materials is available now, and work has begun on a teacher’s guide to accompany them.
  • Conformance: This will contain a way to self-certify that you’ve met the requirements of the specification.

Lovejoy wants you or someone from your company to participate! 

“OpenChain is run like the other collaborative projects. Anyone can join. Anyone can participate. All the work is done in the open. Some of the things we’ll be working on and need help with includes working on the specification. We’ve got the first version out, but of course, we’re always going to make improvements and there’ll be other versions. Also, the curriculum slides I mentioned, we have the first version out, we’ll be working on those, … the teacher’s guide to go with those, the conformance questions, website issues and so forth and so on. My question to all of you is this. If someone from your company isn’t already following or contributing to OpenChain, who’s it going to be? When you go back to your office after spending time in this lovely city, who are you going to go have a chat with to get involved with OpenChain to make doing software business easier for all of us so we can focus on the more fun, challenging, and differentiating aspects of all of our jobs?”

Watch the entire talk to learn more about how you can contribute to OpenChain.


5 systemd Tools You Should Start Using Now

Once you get over systemd’s rude departure from the plain-text, script-laden System V of yore, it turns out to be quite nifty and comes with an equally nifty toolbox. In this article, we’ll be looking at four of those tools, plus one you’re probably already familiar with but haven’t used in the way you will see here.

So, without further ado…

coredumpctl

You can use this tool, as the name implies, to retrieve coredumps from systemd’s journal.

By running:

coredumpctl

you will get all coredumps in a summarized list. This list may go back weeks or even months.

Figure 1: coredumpctl lists all coredumps registered in the journal.

By using

coredumpctl dump filter

you get more detailed output about the last coredump that matches the filter. So,

coredumpctl dump 1758

will show all the details of the last coredump with PID 1758. Because systemd’s journal spans more than one session (mine goes back to May, for example), it is conceivable that there are several unrelated coredumps from processes with the same PID.

Figure 2: The dump modifier allows you to extract much more detail from the coredump.

Likewise, if you filter using the name of the executable, for example, with:

coredumpctl dump chrome

you will see only the latest coredump for chrome. This makes sense, because it is probably the one you want and the most relevant to your current problem.

You can filter coredumps using the PID (as shown above), the name of the executable (also shown above), the path to the executable (it must contain at least one slash, as in /usr/bin/name_of_executable), or one or several of journalctl’s general predicates. An example of the latter would be:

coredumpctl dump _PID=1758

which would be the same as the coredumpctl dump 1758 we saw above.
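Filters can also be combined, and dumps can be saved for later. A hedged sketch (the executable name and output file name here are just illustrative choices, not anything mandated by coredumpctl):

```shell
# List rather than dump, restricted to one executable:
coredumpctl list chrome

# Save the most recent matching coredump to a file instead of
# printing it, so you can inspect or archive it later:
coredumpctl dump chrome --output=core.chrome
```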

Another, more interesting example of using journalctl predicates would be to use a coredump’s timestamp:

coredumpctl dump _SOURCE_REALTIME_TIMESTAMP=1463932674907840

For a list of all journalctl’s predicates, have a look at the JOURNAL FIELDS section in man systemd.directives.

If instead of using the dump option, you use

coredumpctl gdb 1758

you will get all the details of the coredump and you will open the GNU debugger (gdb) so you can start debugging right away.
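A typical debugging session might look like the sketch below (the PID is the hypothetical one from the earlier examples, and the gdb commands shown in comments are standard gdb commands, not anything coredumpctl-specific):

```shell
# Open the coredump for PID 1758 directly in the GNU debugger:
coredumpctl gdb 1758

# Once inside gdb, the usual commands apply, for example:
#   (gdb) bt          # print the backtrace of the crashed thread
#   (gdb) info locals # inspect local variables in the current frame
#   (gdb) quit        # leave the debugger
```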

bootctl

Just in case you missed the memo: systemd-boot, not GRUB, may now be in charge of booting your system. Yes! That is yet another thing systemd has gobbled down its hungry maw, at least on many modern machines with UEFI firmware.

Although learning how to configure a boot manager from scratch goes beyond the scope of this post (if you are really interested, this article may prove helpful), when you have done your custom configuration, you will need to use bootctl to get it installed.

(If you’re a Linux newbie, fear not: you will probably never have to do any of what is covered in this section. Your distro will do it for you. This is for Linux control freaks, aka Arch users, who can’t resist messing with every single aspect of their system.)

You need to be root (or invoke the command with sudo) to use bootctl. This may be the first indication that you should treat this command with respect: Misusing bootctl can render your system unbootable, so be careful.

A harmless way of leveraging bootctl is to use it to check the boot status of your machine. Note that, unless /boot points directly to a FAT-formatted EFI partition, you will have to specify the path to the EFI boot partition manually using the --path= option. On my openSUSE system, for example, I have to run:

bootctl --path=/boot/efi

This will list all the boot options and their variables. You can see what my boot looks like in Figure 3. This is the default behavior and is the same as bootctl --path=/boot/efi status.

Figure 3: The bootctl tool allows you to view and manipulate the boot manager settings.

The output shows where the boot binary is stored (ESP:) and each of the bootable options.

If you’ve built your own boot manager framework, you can install it with:

bootctl --path=/boot/path/to/efi install

This generates the systemd-boot binary, stores it in /boot/path/to/efi/EFI/Boot, and adds an entry for it at the top of the boot order list.

If you have a newer version than the one installed in the EFI partition, you can update your systemd-boot with:

bootctl --path=/boot/path/to/efi update

You can remove systemd-boot from your EFI partition with:

bootctl --path=/boot/path/to/efi remove

Needless to say, be careful with this last one.

systemd-cgtop

Similar to the classic top tool that tells you which process is hogging your resources, systemd-cgtop tells you which cgroup is eating up most of your CPU cycles and memory.

If you are not familiar with control groups — cgroups for short — they provide a way of partitioning off resources for groups of users and tasks. You can, for example, use cgroups to set the limits of CPU and memory usage on a machine shared between two different groups of users and the applications they use. There is a complete explanation with examples on how to use and implement cgroups here.
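To make that concrete: systemd exposes cgroup resource controls as unit properties, which you can change at runtime with systemctl. A minimal sketch, assuming a service named example.service exists on your system (the service name and the limits are made up for illustration):

```shell
# Cap the service at roughly a quarter of one CPU and 512 MB of RAM.
# (On older systemd versions the memory property is MemoryLimit
# rather than MemoryMax.)
sudo systemctl set-property example.service CPUQuota=25% MemoryMax=512M

# Verify that the memory limit took effect:
systemctl show example.service -p MemoryMax
```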

systemd relies heavily on cgroups to control its services, and systemd-cgtop is how you check that none of the groups is getting out of hand. And, if one is, you can then kill the whole group without having to hunt down each of the processes in the group and kill them individually.

Look at Figure 4. What you see there is the very image of a sane and happy system. Nothing is hogging resources, and only a few of the cgroups are registering any activity at all. But I could, for example, get rid of the auditd service if it were misbehaving. As it is not essential to keep the system running, I can do this with:

systemctl kill auditd.service

And… poof! It’s gone!

Figure 4: systemd-cgtop tells you how your cgroups are behaving.

In this case, auditd.service has only two tasks associated with it but, as you can see, some groups have literally hundreds, especially those used for end users, so using systemctl to act on whole cgroups is very convenient.
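By default, systemctl kill sends SIGTERM to every process in the unit’s cgroup, but both the signal and which processes receive it can be changed (auditd.service here is just the example service from above):

```shell
# Send SIGKILL instead of the default SIGTERM to the whole cgroup:
sudo systemctl kill --signal=SIGKILL auditd.service

# Only signal the unit's main process, leaving the rest of the
# cgroup untouched:
sudo systemctl kill --kill-who=main auditd.service
```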

By the way, if you want to see the processes within a given cgroup, try this:

systemd-cgls /cgroup.name

For example, try this:

systemd-cgls /system.slice/NetworkManager.service

And you’ll see all the processes working under the NetworkManager sub-cgroup.
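Run without arguments, systemd-cgls shows the whole cgroup tree, and you can point it at any slice as well (user.slice is a standard slice on systemd systems):

```shell
# The entire cgroup hierarchy, with processes grouped under their units:
systemd-cgls

# Just the cgroups belonging to logged-in users:
systemd-cgls /user.slice
```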

Conclusion

This was just a taste of the tools systemd offers for system administration. Not only are there many more (we’ll be looking at a new batch in a future article), but the options and combinations you can use with these commands make them much more powerful than they seem at first glance.

If you would like to delve more deeply into systemd, use:

man systemd.index

to get an overview of all the man pages related to systemd.

Advance your career in Linux System Administration! Check out the online Essentials of System Administration course from The Linux Foundation — also offered in Spanish and Portuguese.

Top 10 Tech Predictions For 2017 From IDC

IDC today released its 10 IT industry predictions for 2017 in a webcast with Frank Gens, IDC’s senior vice president and chief analyst. The predictions cover many trends driving success today and in the future: how the entire global economy will be reshaped by digital transformation, the transition of all enterprises from being “digital immigrants” to being “digital natives,” the scaling up of innovation accelerators, the emergence of “the 4th platform” (a new set of technologies that will become mainstream in ten years), drastic changes in how enterprises connect to their customers, and the ecosystem becoming as important for business success as IP.

Here are IDC’s ten predictions:

Read more at Forbes

The Company of the Future

In the process of eating the world, software had traditional organizational structures for lunch. Analogies, methods, and tactics that originated in the IT world have a major influence on general business thinking (as they should; the two are increasingly the same thing). Today, we talk about ‘new operating systems for organizations’, organisations are understood as networks, agile management is all the buzz, and every new company wants to be a lean startup, create an MVP, and iterate from there.

Conversely, looking at new developments in technology can often give a hint at the future of business at large. I see three developments that have the potential to influence our company of the future in a major way.

  • Microservices
  • Blockchain
  • Industry 4.0

While this might read like a list of keynote topics at any major tech conference in 2016, let’s look further than the average trend report.

Read more at Thomas Euler’s Blog

Trireme Open-Source Security Project Debuts for Kubernetes, Docker

Network isolation isn’t the only way to secure application containers anymore, so Aporeto has unveiled a new security model for containers running in Docker or as part of a Kubernetes cluster.

Dimitri Stiliadis co-founded software-defined networking (SDN) vendor Nuage Networks in 2011 in a bid to help organizations improve agility and security via network isolation. In the container world, however, network isolation alone isn’t always enough to provide security, which is why Stiliadis founded Aporeto in August 2015. On Nov. 1, Aporeto announced its open-source Trireme project, providing a new security model for containers running in Docker or as part of a Kubernetes cluster.

Read more at eWeek

5 Reasons to Opt for a Linux Rolling Distro vs. a Standard Release

There are a lot of reasons I recommend Ubuntu to Linux newbies. It’s well supported, reasonably stable, and easy to use. But I prefer to roll with Arch Linux myself. It has several compelling attributes, but one of its biggest pluses is that Arch is a rolling-release distribution.

What?

If you’re using Linux for the first time, there’s a pretty good chance your OS is what’s called a “versioned release” distribution. Ubuntu, Fedora, Debian, and Mint all release numbered versions of their respective operating systems. By contrast, a rolling-release distribution eschews versions altogether. Here are a few of the things you can expect from a rolling release.

Read more at PCWorld

What Is the Linux Kernel?

So Linux is 25 years old now. The Linux kernel was created in 1991 by Linus Torvalds, then a 21-year-old computer science student at the University of Helsinki, Finland. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet…

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386 (486) AT clones. This has been brewing since April, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

Read more at LinuxAndUbuntu

Apcera Platform Primes Containers for Enterprise Deployment

Apcera today is launching what it claims is the first enterprise-grade container management platform. The idea is to provide a turnkey package that includes all the functions necessary for running containers — functions such as orchestration and networking, along with aspects such as security.

It would be like turning “containers” and their environment into a single product, packaged nicely and wrapped up with a bow. Something parallel is happening in OpenStack and cloud management, where startups such as Platform9 and ZeroStack are finding ways to figuratively shrink-wrap the cloud into an all-inclusive offering.

Here’s the tradeoff. Apcera made things simpler for the enterprise by selecting pieces of the environment ahead of time — orchestration, for example. There’s still a lot of flexibility to choose things like software stacks, but “we answered all the dependency questions for you,” says Josh Ellithorpe, Apcera’s lead architect.

Read more at SDxCentral

It’s Finally Legal To Hack Your Own Devices (Even Your Car)

Last Friday, a new exemption to the decades-old law known as the Digital Millennium Copyright Act quietly kicked in, carving out protections for Americans to hack their own devices without fear that the DMCA’s ban on circumventing protections on copyrighted systems would allow manufacturers to sue them. One exemption, crucially, will allow new forms of security research on those consumer devices. Another allows for the digital repair of vehicles. Together, the security community and DIYers are hoping those protections, which were enacted by the Library of Congress’s Copyright Office in October of 2015 but delayed a full year, will spark a new era of benevolent hacking for both research and repair.

Read more at WIRED

Web Pioneer Tries to Incubate a Second Digital Revolution

Brian Behlendorf knows it’s a cliché for veteran technologists like himself to argue that society could be run much better if we just had the right software. He believes it anyway.

“I’ve been as frustrated as anybody in technology about how broken the world seems,” he says. “Corruption or bureaucracy or inefficiency are in some ways technology problems. Couldn’t this just be fixed?” he asks.

This summer Behlendorf made a bet that a technology has appeared that can solve some of those apparently human problems. Leaving a comfortable job as a venture capitalist working for early Facebook investor and billionaire Peter Thiel, he now leads the Hyperledger Project, a nonprofit in San Francisco created to support open-source development of blockchains, a type of database that underpins the digital currency Bitcoin by verifying and recording transactions.

Read more at MIT Technology Review