Sometimes when we think about open source, we focus on the code and forget that there are other equally important ways to contribute. Nithya Ruff, Senior Director, Open Source Practice at Comcast, knows that contributions can come in many forms. “Contribution can come in the form of code or in the form of financial support for projects. It also comes in the form of evangelizing open source; it comes in the form of sharing good practices with others,” she said.
Comcast, however, does contribute code. When I sat down with Ruff at Open Source Summit to learn more, she made it clear that Comcast isn’t just a consumer; it contributes a great deal to open source. “One way we contribute is that when we consume a project and a fix or enhancement is needed, we fix it and contribute back.” The company has made roughly 150 such contributions this year alone.
Comcast also releases its own software as open source. “We have created things internally to solve our own problems, but we realized they could solve someone else’s problem, too. So, we released such internal projects as open source,” said Ruff.
Bio-Linux was introduced and detailed in a Nature Biotechnology paper in July 2006. The distribution was a group effort by the Natural Environment Research Council in the UK. As the creators and authors point out, the analysis demands of high-throughput “-omic” (genomic, proteomic, metabolomic) science have necessitated the development of integrated computing solutions to analyze the resultant mountains of experimental data.
From this need, Bio-Linux was born. The distribution, according to its creators, serves as a “free bioinformatics workstation platform that can be installed on anything from a laptop to a large server.” The current distro version, Bio-Linux 8, is built on an Ubuntu 14.04 LTS base. Thus, the general look and feel of Bio-Linux is similar to that of Ubuntu.
In my own work as a research immunologist, I can attest to both the need for and the success of the integrated software approach in Bio-Linux’s design and development. Bio-Linux functions as a true turnkey solution to the data pipeline requirements of modern science. As the website mentions, Bio-Linux includes more than 250 pre-installed software packages, many of which are specific to the requirements of bioinformatic data analysis.
We recently hosted a webinar about deploying Hyperledger Fabric on Kubernetes. It was taught by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli from AID:Tech.
The webinar contained detailed, step-by-step instructions showing exactly how to deploy Hyperledger Fabric on Kubernetes. For those who prefer reading to watching, we have prepared a condensed transcript with screenshots that will take you through the process, adapted to recent updates in the Helm charts for the Orderers and Peers.
Are you ready? Let’s dive in!
What we will build
Fabric CA
First, we will deploy a Fabric Certificate Authority (CA) backed by a PostgreSQL database for managing identities.
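A minimal sketch of this step, assuming Helm 2 syntax from the time of the webinar and the hlf-ca chart that was published in the (now archived) Helm stable repository; the release name, namespace, and values file below are purely illustrative:

    # Deploy a Fabric CA backed by PostgreSQL (configuration lives in the values file)
    helm install stable/hlf-ca --name ca --namespace blockchain -f ca-values.yaml

    # Confirm the CA pod comes up before enrolling any identities
    kubectl get pods -n blockchain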
Fabric Orderer
Then, we will deploy an ordering service of several Fabric ordering nodes communicating and establishing consensus over an Apache Kafka cluster. The Fabric Ordering service provides consensus for development (solo) and production (Kafka) networks.
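A sketch of the ordering-service step under the same assumptions (the incubator/kafka and stable/hlf-ord charts of that era; release names and values files are again illustrative):

    # A Kafka cluster for the ordering nodes to establish consensus over
    helm install incubator/kafka --name kafka-hlf --namespace blockchain

    # Several ordering nodes, each pointed at the Kafka brokers through its values file
    helm install stable/hlf-ord --name ord1 --namespace blockchain -f ord1-values.yaml
    helm install stable/hlf-ord --name ord2 --namespace blockchain -f ord2-values.yaml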
Fabric Peer
Finally, we will deploy several Peers and connect them with a channel. We will bind them to a CouchDB database.
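A sketch of this final step, again with illustrative chart, release, service, and file names; the channel itself is created and joined with the standard peer CLI from inside a peer pod:

    # A CouchDB state database and a peer bound to it
    helm install stable/hlf-couchdb --name cdb-peer1 --namespace blockchain -f cdb-peer1-values.yaml
    helm install stable/hlf-peer --name peer1 --namespace blockchain -f peer1-values.yaml

    # Create the channel on the orderer and join the peer to it
    # (PEER_POD, the orderer service name, paths, and TLS flags depend on your setup)
    kubectl exec -n blockchain $PEER_POD -- peer channel create \
        -o ord1-hlf-ord:7050 -c mychannel -f /hl_config/channel/mychannel.tx
    kubectl exec -n blockchain $PEER_POD -- peer channel join -b mychannel.block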
The sudo command is very handy when you need to run occasional commands with superuser power, but you can sometimes run into problems when it doesn’t do everything you expect. Say you want to add an important message at the end of some log file and you try something like this:
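(With /var/log/syslog standing in as the log file, purely for illustration:)

    $ echo "Important note about tonight's upgrade" >> /var/log/syslog
    -bash: /var/log/syslog: Permission denied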
OK, it looks like you need to employ some extra privilege. In general, you can’t write to a system log file with your user account. Let’s try that again with sudo.
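Perhaps surprisingly, simply prefixing the command with sudo fails in exactly the same way, because the >> redirection is handled by your unprivileged shell before sudo ever runs. A common workaround is to let a privileged tee do the writing:

    $ sudo echo "Important note about tonight's upgrade" >> /var/log/syslog
    -bash: /var/log/syslog: Permission denied

    $ echo "Important note about tonight's upgrade" | sudo tee -a /var/log/syslog
    Important note about tonight's upgrade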
…The sudo command is meant to allow you to easily deploy superuser access on an as-needed basis, but also to endow users with very limited privileged access when that’s all that is required.
More than 45,000 Internet routers have been compromised by a newly discovered campaign that’s designed to open networks to attacks by EternalBlue, the potent exploit that was developed by, and then stolen from, the National Security Agency and leaked to the Internet at large, researchers said Wednesday.
The new attack exploits routers with vulnerable implementations of Universal Plug and Play to force connected devices to open ports 139 and 445, content delivery network Akamai said in a blog post. As a result, almost 2 million computers, phones, and other network devices connected to the routers are reachable from the Internet on those ports. While Internet scans don’t reveal precisely what happens to the connected devices once they’re exposed, Akamai said the ports—which are instrumental for the spread of EternalBlue and its Linux cousin EternalRed—provide a strong hint of the attackers’ intentions.
The attacks are a new instance of a mass exploit the same researchers documented in April. They called it UPnProxy because it exploits Universal Plug and Play—often abbreviated as UPnP—to turn vulnerable routers into proxies that disguise the origins of spam, DDoSes, and botnets.
The most fundamental question you have to ask is what kind of terminal you are talking to. The answer is in the $TERM environment variable, and if you ask your shell to print that variable, you’ll see what it thinks you are using: echo $TERM.
For example, a common type is vt100 or linux (the console). This is usually set by the system, and you shouldn’t change it unless you have a good reason. In theory, of course, you could manually process this information, but it would be daunting to have to figure out all the ways to do something like clear the screen on the many terminals Linux understands.
That’s why there’s a terminfo database. On my system, there are 43 files in /lib/terminfo (not counting the directories) for terminals with names like dumb, cygwin, ansi, sun, and xterm. Looking at the files isn’t very useful since they are in a binary format, but the infocmp program can reverse the compilation process. Here’s part of the result of running infocmp and asking for the definition of a vt100:
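(What follows is a representative excerpt; the exact capabilities and padding values vary between ncurses versions, so treat it as a sketch rather than a verbatim dump.)

    $ infocmp vt100
    vt100|vt100-am|dec vt100 (w/advanced video),
            am, msgr, xenl, xon,
            cols#80, it#8, lines#24,
            bel=^G, clear=\E[H\E[J, cr=\r, cub1=^H, cud1=\n,
            ...

Each entry maps a capability name to a flag, a number, or the escape sequence that accomplishes it on this terminal. In a script, you normally wouldn’t parse these entries yourself; the tput utility looks up the right capability for whatever $TERM is currently set to:

    tput clear      # clear the screen for the current terminal type
    tput cup 5 20   # move the cursor to row 5, column 20
    tput setaf 1    # switch the foreground color to red
    tput sgr0       # reset all attributes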
Arm’s open source EBBR (Embedded Base Boot Requirements) specification is heading for its v1.0 release in December. Within a year or two, the loosely defined EBBR standard should make it easier for Linux distros to support standardized bootup on major embedded hardware platforms.
EBBR is not a new technology, but rather a requirements document based on existing standards that defines firmware behavior for embedded systems. Its goal is to ensure interoperability between embedded hardware, firmware projects, and the OS.
The spec is based largely on the desktop-oriented Unified Extensible Firmware Interface (UEFI) spec and incorporates work that was already underway in the U-Boot community. EBBR is designed initially for Arm Linux boards loaded with the industry standard U-Boot or UEFI’s TianoCore bootloaders. EBBR also draws upon Arm’s Linaro-hosted Trusted Firmware-A project, which supplies a preferred reference implementation of Arm specifications for easier porting of firmware to modern hardware.
EBBR is currently “working fine” on the Raspberry Pi and has been successfully tested on several Linux hacker boards, including the Ultra96 and MacchiatoBIN, said Arm’s Grant Likely. Despite the Arm Linux focus, EBBR is already in the process of supporting multiple hardware and OS architectures. The FreeBSD project has joined the effort, and the RISC-V project has shown interest. Additional bootloaders will also be supported.
Why EBBR now?
The UEFI standard emerged when desktop software vendors struggled to support a growing number of PC platforms. Yet, it never translated well to embedded, which lacks the uniformity of the PC platform, as well as the “economic incentives toward standardization that work on the PC level,” said Likely. Unlike the desktop world, a single embedded firm typically develops a custom software stack to run on a custom-built device “all bound together.”
In recent years, however, several trends have pushed the industry toward a greater interest in embedded standards like EBBR. “We have a lot of SBCs now, and embedded software is getting more complicated,” said Likely. “Fifteen years ago, we could probably get by with the Linux kernel, BusyBox, and a bit of custom code on top. But with IoT, we’re expected to have things like network stacks, secure updates, and revision control. You might have multiple hardware platforms to support.”
As a result, there’s a growing interest in using pre-canned distros to offload the growing burden of software maintenance. “There’s starting to be an economic incentive to boot an embedded system on all the major distros and all the major SBCs,” said Likely. “We’ve been duplicating a lot of the same boot setup work.”
The problem is that bootloaders like U-Boot behave differently on different hardware, “with slightly different boot scripts and Device Tree setup on each board,” he continued. “It’s impossible for Linux distros to support more than a couple of boards. The growing number of board-specific images also poses problems. Not only is it a problem of the boot flow — getting the firmware on the system into the OS — but of how to tell the OS what’s on the board.”
As the maintainer of the Linux Device Tree, Likely is an expert on the latter issue. “Device Tree gives us the separation of the board description and the OS, but the next step is to make that part of the platform,” he explained. “So far we haven’t had a standard pre-boot environment in embedded. But if you had a standard boot flow and the machine could describe itself to the OS, the distros would be able to support it. They wouldn’t need to create a custom kernel, and they could do kernel and security updates in a standard way.”
Some embedded developers have expressed skepticism about EBBR’s dependence on the “bloated” UEFI code, with fears that it will slow down boot time. Yet, Likely claimed that the EBBR implementation adds “insignificant” overhead.
EBBR also differs from UEFI in that it’s written with consideration of embedded constraints and real-world practices. “Standard UEFI says if your storage is on eMMC, then firmware can’t be loaded there, but EBBR is flexible enough to account for doing both on the same media,” said Likely. “We support limited hardware access at runtime and limited variable access to deal with things like the mechanics of device partitioning.” In addition, the spec is flexible enough that “you can still do your custom thing within the standard’s framework without breaking the flow.”
EBBR v1.0 and beyond
When Arm released its initial universal boot proposal, “nobody was interested,” said Likely. The company returned with a second EBBR proposal that was launched as an open source project with a CC-BY-SA license and a GitHub page. Major Linux distro projects started taking interest.
Arm is counting on the distro projects to pressure semiconductor and board-makers to get onboard. Already, several chipmakers including ST, TI, and Xilinx have shown interest. “Any board that is supported in U-Boot mainline should work,” said Likely. “Distros will start insisting on it, and it will probably be a requirement in a year or two.”
The upcoming v1.0 release will be available in server images for Fedora, SUSE, Debian, and FreeBSD that will boot unmodified on mainline U-Boot. The spec initially runs on 32- and 64-bit Arm devices and supports both ACPI power management and Linux Device Tree. It requires Arm’s PSCI (Power State Coordination Interface) technology on 64-bit Arm devices. The v1.0 spec provides storage guidance and runtime services guidance, and it may include a QEMU model.
Future releases will look at secure boot, capsule updates, more embedded use cases, better UEFI compliance, and improved non-Linux representation, said Likely. Other goals include security features and a standard testing platform.
“These are all solvable problems,” said Likely. “There are no technical barriers to boot standardization.”
The cloud native landscape can be complicated and confusing. Its myriad open source projects are supported by the constant contributions of a vibrant and expansive community. The Cloud Native Computing Foundation (CNCF) has a landscape map that shows the full extent of cloud native solutions, many of which fall under its umbrella.
The CNCF Mission
The CNCF fosters this landscape of open source projects by helping provide end-user communities with viable options for building cloud native applications. By encouraging projects to collaborate with each other, the CNCF hopes to enable fully fledged technology stacks composed solely of CNCF member projects. This is one way that organizations can own their destinies in the cloud.
CNCF Processes
A total of twenty-five projects have followed Kubernetes and been adopted by the CNCF. In order to join, projects must be selected and then elected with a supermajority by the Technical Oversight Committee (TOC). The voting process is aided by a healthy community of TOC contributors, who are representatives from CNCF member companies, myself included. Member projects join the Sandbox, Incubation, or Graduation phase depending on their level of code maturity.
Below, I’ve grouped projects into thirteen categories: orchestration, app development, monitoring, logging, tracing, container registries, storage and databases, runtimes, service discovery, service meshes, service proxy, security, and streaming and messaging. I’ve provided information that can hopefully help companies or individuals evaluate what each project does, how it’s evolved over time, and how it integrates with other CNCF projects.
Ownership of a popular npm package, event-stream, was transferred by the original author to a malicious user, right9ctrl. The package receives over 1.5 million weekly downloads and is depended on by nearly 1,600 other packages. The malicious user was able to gain the trust of the original author by making a series of meaningful contributions to the package. The malicious user first published the package on September 4, 2018.
The malicious user modified event-stream to then depend on a malicious package, flatmap-stream. This package was specifically crafted for the purposes of this attack. That package contains a fairly simple index.js file, as well as a minified index.min.js file. The two files on GitHub appear innocent enough. However, in the published npm package, the minified version of the file has additional code injected into it. There is no requirement that code being uploaded in an npm module is equivalent to the code stored publicly in a git repository.
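One way to check what actually ships in a published package is to pull the tarball straight from the registry and diff it against the repository. A small sketch, using placeholder package and repository names:

    # Fetch the exact tarball the registry serves (npm pack downloads it without installing)
    npm pack some-package@1.2.3          # hypothetical package and version
    tar -xzf some-package-1.2.3.tgz      # the contents unpack into ./package/

    # Compare the published files against a checkout of the project's repository
    git clone https://example.com/some-org/some-package.git repo-copy
    diff -r package repo-copy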
The addition of the malicious package to the list of event-stream dependencies came to light on November 20th and is documented heavily in dominictarr/event-stream#116. The issue was opened more than two months after the compromised package was published. One of the many benefits of open source software is that code can be audited by many different developers. However, this isn’t a silver bullet. An example of this is OpenSSL, an open source project that receives some of the highest scrutiny yet is still affected by serious vulnerabilities such as Heartbleed.
Machine learning and statistics are playing a pivotal role in finding the truth in human rights cases around the world – and serving as a voice for victims, Patrick Ball, director of Research for the Human Rights Data Analysis Group, told the audience at Open Source Summit Europe.
Ball began his keynote, “Digital Echoes: Understanding Mass Violence with Data and Statistics,” with background on his career, which started in 1991 in El Salvador, building databases. While working with truth commissions from El Salvador to South Africa to East Timor, with international criminal tribunals as well as local groups searching for lost family members, he said, “one of the things that we work with every single time is trying to figure out what the truth means.”
In the course of the work, “we’re always facing people who apologize for mass violence. They tell us grotesque lies that they use to attempt to excuse this violence. They deny that it happened. They blame the victims. This is common, of course, in our world today.”
Human rights campaigns “speak with the moral voice of the victims,” Ball said, which is why it is critical that the statistics, including machine learning, are accurate.
He gave three examples of where statistics and machine learning proved useful, and where they failed.