In the mid-70s I heard about floppy drives, but they were expensive, exotic equipment. I didn’t know that IBM had decided as early as 1967 that tape drives, while fine for backups, simply weren’t good enough to load software on mainframes. So it was that Alan Shugart assigned David L. Noble to lead the development of “a reliable and inexpensive system for loading microcode into IBM System/370 mainframes” using a process called Initial Control Program Load (ICPL). From this project came the first 8-inch floppy disk.
Oh, yes, before the 5.25-inch drives many of you remember was the 8-inch floppy. By 1978, I was using those on mainframes. Later I would use them on dedicated cataloging PCs at the Online Computer Library Center.
Linux systems can provide more help with your schedule than just reminding you what day today is. You have a lot of options for displaying calendars — some that are likely to prove helpful and others that just might boggle your mind.
date
To begin, you probably know that you can show the current date with the date command.
$ date
Mon Mar 26 08:01:41 EDT 2018
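The date command also accepts format strings and arbitrary instants. A quick sketch (GNU date assumed, as on most Linux systems):

```shell
# ISO-8601 date (e.g. 2018-03-26)
date +%F
# A custom format: weekday, month, day
date +'%A, %B %d'
# Show an arbitrary instant: the Unix epoch, in UTC
date -u -d @0
```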
cal and ncal
You can show the entire month with the cal command. With no arguments, cal displays the current month and, by default, highlights the current day by reversing the foreground and background colors.
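A few illustrative invocations (the exact options available depend on which cal/ncal package your distribution ships):

```shell
# Current month, with today highlighted
cal
# A specific month and year
cal 3 2018
# ncal's sideways layout, with week numbers
ncal -w
```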
Our journey through the history of IT infrastructure starts with the centralised mainframe era kicked off by IBM in the 1960s and advances through to the cloud-based, server-less world we now occupy. In between, we’ve seen the eras of personal computers, client/server computing and web-based enterprise computing, all of which have transformed the way businesses operate.
The personal computing era, for example, was driven by the proliferation of PCs and desktop productivity software tools such as spreadsheets and word processors in the early 1980s, which appealed to personal and corporate users alike.
This was followed by the rise of powerful server computers linked to ‘clients’ – i.e. desktop and laptop PCs – to provide users with a variety of capabilities in the client/server age of the late 1980s. The enterprise computing era of the 1990s came next, driven by the need to integrate disparate networks and applications into a single infrastructure amid the growth of the World Wide Web.
What are the pitfalls of running Java or JVM-based applications in containers? In this article, Jörg Schad and Ken Sipe discuss the challenges and solutions.
Even as of the Java 9 release, the Java Virtual Machine is not fully aware of the isolation mechanisms that containers are built around. This can lead to unexpected behavior between different environments (e.g., test vs. production). To avoid this behavior, one should consider overriding some default parameters (which are usually derived from the memory and processors available on the node) to match the container limits.
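As an illustrative sketch of such overrides (the image names, limits, and flag values here are assumptions, not from the article), a JVM on JDK 8u191+ or JDK 10+ can be told to size itself from the container's cgroup limits rather than the host's resources:

```shell
# Run a memory- and CPU-limited container (Docker assumed; limits illustrative)
# and let the JVM read the cgroup limits directly:
docker run --rm --memory=512m --cpus=2 openjdk:11 \
  java -XX:+UseContainerSupport \
       -XX:MaxRAMPercentage=75.0 \
       -version
# On older JVMs without container support, override the defaults by hand:
docker run --rm --memory=512m --cpus=2 openjdk:8 \
  java -Xmx384m -XX:ParallelGCThreads=2 -version
```

Without such flags, an older JVM sizes its heap and GC threads from the whole node, and the kernel may OOM-kill the container when the heap grows past the cgroup limit.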
This is a really wonderful study with far-reaching implications that could even impact company strategies in some cases. It starts with a simple question: “how can we improve the state of the art in deep learning?” We have three main lines of attack:
As I wrote above, the syslog-ng application is an enhanced logging daemon with a focus on portability and central log collection. Daemon means syslog-ng is an application running continuously in the background; in this case, it’s collecting log messages.
While Linux testing for many of today’s applications is limited to x86_64 machines, syslog-ng also works on many BSD and commercial UNIX variants. What is even more important from the embedded/IoT standpoint is that it runs on many different CPU architectures, including 32- and 64-bit ARM, PowerPC, MIPS, and more. (Sometimes I learn about new architectures just by reading about how syslog-ng is used.)
Why is central collection of logs such a big deal? One reason is ease of use, as it creates a single place to check instead of tens or thousands of devices.
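As a sketch of what central collection looks like in practice, here is a minimal syslog-ng client/server configuration pair; the hostname, port, source name, and file path are illustrative assumptions, not from the article:

```
# Client side: forward local logs to a central server
# (assumes a source s_local, e.g. one defined with system() and internal())
destination d_central { network("logserver.example.com" port(514) transport("tcp")); };
log { source(s_local); destination(d_central); };

# Server side: accept remote logs and write one file per sending host
source s_net { network(ip("0.0.0.0") port(514) transport("tcp")); };
destination d_hosts { file("/var/log/remote/${HOST}.log"); };
log { source(s_net); destination(d_hosts); };
```

The `${HOST}` macro is what makes the single collection point manageable: each of those tens or thousands of devices lands in its own file.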
The important role that open source will play in distributing compute power to the edge is coming into clearer focus here this week, with multiple initiatives and some significant contributions from major industry players.
The Open Networking Foundation kicked things off with its announcement of a strategic shift that will put major operators in charge of developing reference designs for edge SDN platforms for network operators, with the intent of moving open source technologies forward faster on that front. The Linux Foundation on Tuesday announced broader support for its Akraino Edge Stack open source community, including 13 new members and a major open source contribution from one of them, Intel Corp. (Nasdaq: INTC). (See ONF Operators Take Charge of Edge SDN.)
In the Intel keynote Tuesday afternoon, Melissa Evers-Hood, senior director of cloud and edge software for Intel’s Open Source Technology Center, explained Intel’s decision to open source its Wind River Titanium Cloud portfolio of technologies as well as Intel’s Network Edge Virtualization Software Development Kit. Wind River Titanium Cloud is Intel’s NFV Infrastructure, based on OpenStack.
“The Civil Infrastructure Platform is the most conservative of The Linux Foundation projects,” began Yoshitake Kobayashi at the recent Embedded Linux Conference in Portland. Yet, if any eyelids started fluttering shut in anticipation of an afternoon nap, they quickly opened when he added: “It may also be the most important to the future of civilization.”
The Linux Foundation launched the Civil Infrastructure Platform (CIP) project in April 2016 to develop base layer, open source industrial-grade software for civil infrastructure projects, starting with a 10-year Super Long-Term Support (SLTS) Linux kernel built around the LTS kernel. CIP expects to add other similarly reusable software building blocks that meet the safety and reliability requirements of industrial and civil infrastructure. CIP supports electrical and power grids, water and sewage facilities, oil and gas plants, and rail, shipping and transportation systems, among other applications.
“Our civilization’s infrastructure already runs on Linux,” said Kobayashi, a CIP contributor and Senior Manager of Open Source Technology at Toshiba’s Software Development and Engineering Center. “Our power plants run on Linux. If they stop working, it’s serious.”
CIP’s Open Source Base Layer (OSBL) may not be disruptive technology, but its aim is to more quickly and affordably bring disruptive tech into projects whose lifespans extend for a half century or more. The goal is to reduce development and testing time so that the latest clean energy equipment, IoT monitors, AI edge computing, and smart city technology can come online more quickly and be updated in a timely manner.
With standardization, open source licensing, and greater reuse of software, CIP plans to reduce duplication of effort and project costs, as well as ease maintenance and improve reliability. “We can provide the stability needed by infrastructure by using Linux,” said Kobayashi.
In many ways, CIP is like The Linux Foundation’s Automotive Grade Linux project in that it’s trying to more quickly introduce the latest technologies into a traditional industry with long lead times. In this case, however, the development times and product and maintenance lifespans can last decades.
Kobayashi explained that a power plant has a life cycle of 25-60 years. The technology takes 3-5 years to develop, plus up to four years for customer-specific extensions, 6-8 years of supply time, and 15+ years of hardware maintenance after the last shipment.
“Things change a lot in 60 years, such as IoT, which requires security management and industrial grade devices,” said Kobayashi. Yet, bringing these technologies online is slowed by rampant duplication of effort. “In civil infrastructure, you typically have many companies doing industrial grade development and long-time support even if their business areas are quite similar. There’s a lot of duplication.”
In his talk, Kobayashi gave an overview of CIP’s first two years and shared plans for the future. CIP’s founding members – Codethink, Hitachi, Plat’Home, Siemens, and Toshiba – have since been joined by Renesas, which last October announced that the Linux stack for its Arm-based RZ/G SoCs had been upgraded to use CIP’s 10-year SLTS kernel. In December, CIP was joined by Moxa.
Upstream first, backport later
Unlike AGL, CIP is not developing and maintaining a full Linux distribution. CIP’s Open Source Base Layer (OSBL) is aligned closely with Debian, but it’s also designed to be usable with other Linux distributions.
Kobayashi emphasized that CIP is working closely with the upstream community. “We created a kernel maintenance policy where the most important principle is ‘upstream first.’ All features have to be in the upstream kernel before backporting to the CIP kernel.” Kobayashi added that out-of-tree drivers are unsupported by CIP.
The CIP project initially focused on the SLTS kernel, maintained by Codethink’s Ben Hutchings. New builds have come every 4-6 weeks, adding features such as security patch management.
The most recent build, Linux 4.4.120-cip20, released March 9 and based on linux-stable, adds Meltdown and Spectre fixes as well as backported patches, such as support for the Renesas RZ/G SoCs and the Siemens IoT2000 gateway. It also includes Kernel Self-Protection Project features, including address space layout randomization (ASLR) for user-space processes, GCC’s Undefined Behavior Sanitizer (UBSAN), and faster page poisoning.
Over the last year, the project has focused on real-time support. The first CIP SLTS real-time kernel was released in early January based on Stable RT Linux with PREEMPT-RT. The problem here is that Real Time Linux is not yet fully upstream. “We need it immediately, so we are trying to help the RTL project by becoming a Gold member,” said Kobayashi.
More recent projects have included the creation of an environment for testing kernels called Board at Desk (B@D), based on KernelCI and LAVA. The current focus is kernel testing, but CIP plans to eventually test the entire OSBL platform.
CIP is also developing a CIP Core implementation with minimal filesystem images for SLTS that is designed for creating and testing installable images. The project is currently defining component versions for its CIP Core package, which “is difficult because you have to go upstream,” says Kobayashi. “We decided to use Debian as the primary reference distribution, so CIP Core package components will be selected from Debian packages. We have begun to support the Debian-LTS project at the Platinum level.”
CIP has created a build environment for CIP Core based on Debian’s native-build system. The environment supports a Renesas RZ/G based iwg20m, which appears to be another name for iWave’s iW-RainboW-G20M-Qseven module. Other targets include the BeagleBone Black, Intel’s Cyclone V FPGA SoC, and QEMUx86.
The main challenge with aligning CIP with Debian is that Debian-LTS “is only five years but we need 10 years,” said Kobayashi. In addition, while CIP supports both Debian’s native-build and cross-build technology, Debian does not currently support cross-building. However, a Debian-cross (CrossToolchains) project is under development.
Next up: Cybersecurity and Y2038 protections
The CIP SLTS kernel and OSBL platform will have a major release every 2-3 years, so a new release can be expected in 2019. Potential additions include support for the ISA/IEC-62443 cybersecurity standard for industrial automation and control. “We think we can help developers gain certification, but we are not planning to develop procedures or certification schemes,” said Kobayashi.
CIP is also planning workarounds for the Y2038 bug. A Y2K-like computer clock crisis could occur in January 2038, when signed 32-bit time counters, which tally seconds elapsed since the Unix epoch of Jan. 1, 1970, overflow and wrap back toward 1970.
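The rollover itself is easy to demonstrate from the shell (GNU date assumed): a signed 32-bit second counter tops out at 2147483647.

```shell
# The largest instant a signed 32-bit time_t can represent:
date -u -d @2147483647
# → Tue Jan 19 03:14:07 UTC 2038
# One second later, a 32-bit counter overflows; 64-bit date handles it fine:
date -u -d @2147483648
```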
The v2 release will also add some functional safety and software update code, and potentially add support for The Linux Foundation’s EdgeX Foundry IoT edge computing middleware standard. The main issue here, says Kobayashi, is that unlike EdgeX, CIP’s OSBL does not support Java. Debian does, however, so there may be a fix.
Kobayashi concluded by emphasizing that “kernel version alignment is important” to CIP. At the Open Source Summit Japan (June 20-22), CIP is hosting an F2F meeting with participants from LTS/LTSI, AGL, and Debian.
The slide deck and 47-minute video of “Civil Infrastructure Platform: Industrial Grade Open Source Base-Layer” are now available.
This Minikube tutorial enables admins to work with Kubernetes without additional equipment, software or a significant setup time investment. A home lab also isolates new technology from live production infrastructure.
Follow the installation steps here, run kubectl commands in the Kubernetes lab and then access the application workloads within it.
A Minikube Kubernetes cluster, complete with workload containers, is prebuilt and runs inside a single VM on the user’s computer. Minikube runs on Linux, Windows and macOS and can use a variety of hypervisors for its VM.
Minikube kubectl command lines run directly on the home lab computer, and Kubernetes-run applications are accessible there as well.
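As a sketch of the workflow described above (the driver, deployment name, and image are illustrative assumptions; older Minikube releases use --vm-driver instead of --driver):

```shell
# Boot the single-node cluster VM on the lab machine
minikube start --driver=virtualbox
# Confirm the lone "minikube" node is Ready
kubectl get nodes
# Run a workload and expose it outside the cluster
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80
# Print the local URL where the application is reachable
minikube service hello --url
```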
Blockchain technology — which encompasses smart contracts and distributed ledgers — can be used to record promises, trades, and transactions of many types. Countless organizations, ranging from IBM to Wells Fargo and the London Stock Exchange Group, are partnering to drive the technology forward, and The Linux Foundation’s Hyperledger Project is an open source collaborative effort aimed at advancing cross-industry blockchain technologies. Recently, the project announced the arrival of Hyperledger Sawtooth 1.0, a major milestone for the Hyperledger community, making it the second blockchain framework to reach production-ready status.
In conjunction with the release, Brian Behlendorf, Executive Director, Hyperledger, and Dan Middleton, Intel’s Head of Technology, Blockchain and Distributed Ledger Program, hosted a webinar, titled “Hyperledger Sawtooth v1.0: Market Significance & Technical Overview.” The webinar is now available as a video replay (registration required).