
Kubernetes 101: How to Get Started with Container Orchestration

With Kubernetes, life as a developer is a whole lot simpler. It started life as an open source project at Google, and today it is one of the fastest growing container automation systems around. Kubernetes has a learning curve, but once you're past it, it proves to be a highly effective orchestration engine.

Kubernetes is also known as K8s, and it might just be the greatest thing to hit the DevOps scene in the last few years. With the right skills, Kubernetes can give the development process a major boost, automating updates and managing apps and services with little worry about downtime. So, how can beginners get started with Kubernetes, and why should they even want to? This intro guide breaks down everything you need to know about Kubernetes, so you can hit the ground running. …

Your first step in getting started with Kubernetes is to create a cluster so you can deploy an app. The cluster needs to include a master and one or more nodes. To start, run a cluster on a local machine; Minikube is the perfect environment for your initial development and testing.
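If you want to try it immediately, a first session might look something like this (a minimal sketch, assuming minikube and kubectl are already installed; the deployment name and image are just examples):

minikube start                                  # spin up a local single-node cluster
kubectl get nodes                               # confirm the node reports Ready
kubectl create deployment hello --image=nginx   # deploy a sample app
kubectl expose deployment hello --type=NodePort --port=80
kubectl get pods                                # watch the pod come up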

Read more at Jaxenter

Working with Linux File Links

In this article by Oliver Pelz, the author of Fundamentals of Linux, you’ll take a look at what Linux file links are and how to work with them.

Connecting a filename to the actual data is managed by the filesystem using a table or data structure called a file allocation table. In the Linux filesystem, an Inode is the actual entry point to a specific file’s data on the hard disk. To simplify, you can just consider that the Inode represents the actual data of a file. The filesystem ensures that every normal file, upon creation, has one link entry in its allocation table connecting the actual filename to the Inode on the hard disk. Such a link is called a hard link, so the original filename-to-Inode relationship is itself a hard link. Now, the cool thing about the Linux filesystem is that you can create additional hard links to an existing Inode, which is like having alternative names for a file.

One drawback of hard links is that you cannot tell a hard link apart from the original filename; both point to the same Inode. This can cause surprising side effects: if you change the file’s content through one name, the content seen through every other hard link changes as well.
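You can watch this happen in a quick shell session (a minimal sketch; the filenames are just examples):

echo hello > original.txt
ln original.txt hardlink.txt        # add a second hard link to the same Inode
ls -li original.txt hardlink.txt    # same Inode number for both, link count is 2
echo changed > original.txt         # write through the original name
cat hardlink.txt                    # prints "changed"; both names share one Inode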

Read more at LinuxTechLab

Virtme: The Kernel Developer’s Best Friend

When working on the Linux Kernel, testing via QEMU is pretty common. Here’s a look at virtme, a QEMU wrapper that uses the host system instead of a virtual disk, making QEMU much easier to work with.

By Ezequiel Garcia, Senior Software Engineer at Collabora.

When working on the Linux Kernel, testing via QEMU is pretty common. Many virtual drivers have recently been merged that are useful for testing either the kernel core code or your applications. These virtual drivers make QEMU even more attractive.

However, QEMU can be hard to set up, which discourages some developers: all you wanted was to run a test, and suddenly you are reading through QEMU man pages, trying to find the right combination of arguments. We have blogged about QEMU’s strengths before, so this time we want to take a somewhat different path and explore virtme, which is basically a QEMU wrapper. Quoting virtme’s own readme:

“Virtme is a set of simple tools to run a virtualized Linux kernel that uses the host Linux distribution or a simple rootfs instead of a whole disk image. Virtme is tiny, easy to use, and makes testing kernel changes quite simple.”

The tool was written by Andy Lutomirski. See more details on the readme.

We really enjoy using this tool, and we’ve found it’s not too well known. So, we’ve decided to spread the word and put together a curated list of zero-to-kernel steps:

Installing virtme

Instead of using Andy Lutomirski’s upstream repo, we are going to use Ezequiel’s. This version of virtme simply adds some extra sugar.

git clone https://github.com/ezequielgarcia/virtme.git
cd virtme
sudo ./setup.py install

Now get your favourite kernel tree

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Configure it for virtme

virtme comes with some handy tools to produce a suitable kernel configuration, making the config process much easier. From inside the kernel tree, run:

virtme-configkernel --defconfig

Enable the drivers you need

For instance, let’s enable the vim2m driver, a virtual video4linux memory-to-memory (mem2mem) driver.

CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2=y
CONFIG_V4L2_MEM2MEM_DEV=y
CONFIG_V4L_TEST_DRIVERS=y
CONFIG_VIDEO_VIM2M=y
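If you’d rather not hunt for each of these in menuconfig, the kernel’s own scripts/config helper can flip them on from the command line (a sketch; run it from the top of the kernel tree, then refresh the configuration):

./scripts/config -e MEDIA_SUPPORT -e MEDIA_CAMERA_SUPPORT -e VIDEO_DEV \
    -e VIDEO_V4L2 -e V4L2_MEM2MEM_DEV -e V4L_TEST_DRIVERS -e VIDEO_VIM2M
make olddefconfig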

Build the kernel

make -j4

Run virtme

sudo virtme-run --kimg arch/x86_64/boot/bzImage

or just:

sudo virtme-run --kdir .

Extra sugar

Running scripts at boot

One of the extras is --script-dir, which lets you run scripts at boot. This can be used to run all your tests at boot, providing a quick way to test kernel changes.
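For example, a test run could look something like this (a sketch, assuming --script-dir takes a directory of executable scripts; the script name and its contents are just illustrations):

mkdir tests
printf '#!/bin/sh\nv4l2-ctl --list-devices\n' > tests/01-vim2m.sh
chmod +x tests/01-vim2m.sh
sudo virtme-run --kdir . --script-dir tests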

Continue reading on Collabora’s blog.

At the Crossroads of Open Source and Open Standards

This piece is the first in a series from speakers and sponsors at the Linux Foundation’s Node+JS Interactive (formerly JS Interactive) conference, taking place October 10-12, 2018, at the Vancouver Convention Centre in Vancouver, Canada. The program will cover a broad spectrum of the JavaScript ecosystem, including Node.js, frameworks, best practices, and stories from successful end users.

A new crop of high-value open source software projects stands ready to make a big impact in enterprise production, but structural issues like governance, IPR, and long-term maintenance plague OSS communities at every turn. Meanwhile, facing significant pressures from open source software and the industry groups that support them, standards development organizations are fighting harder than ever to retain members and publish innovative standards. What can these two vastly different philosophies learn from each other, and can they do it in time to ensure they remain relevant for the next 10 years?

Read more at The New Stack

Blockchain Training Takes Off

At major business schools ranging from Berkeley to Wharton, students are flocking to classes on blockchain and cryptocurrency. As CNBC recently reported: “According to a new survey of 675 U.S. undergraduate students by cryptocurrency exchange Coinbase and Qriously, 9 percent of students have already taken a class related to blockchain or cryptocurrency and 26 percent want to take one.”

College course offerings include “Blockchain, Cryptocurrency, and Distributed Ledger Technology” taught by Kevin Werbach and engineering professor David Crosbie at the University of Pennsylvania; and “Blockchain and CryptoEconomics,” taught by computer science professor Dawn Song at the University of California at Berkeley.

Meanwhile, job postings related to blockchain and Hyperledger are taking off, and knowledge in these areas is translating into opportunity. Careers website Glassdoor lists thousands of job posts related to blockchain.

Effectively, blockchain is becoming part of the required lingua franca for those entering the world of business, and for many others besides. Outside of the big business schools, there are many learning resources worth knowing about, including these courses offered by The Linux Foundation:

Hyperledger Fabric Fundamentals (LFD271)

Teaches the fundamental concepts of blockchain and distributed ledger technologies.

Blockchain for Business – An Introduction to Hyperledger Technologies (LFS171)

A primer to blockchain and distributed ledger technologies. Learn how to start building blockchain applications with Hyperledger frameworks.

“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” said The Linux Foundation’s Clyde Seepersad, General Manager, Training & Certification, in introducing the course Blockchain: Understanding its Uses and Implications. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.”

Aside from full courses, webinars focusing on blockchain technology offer chances to see how individual technologies work and how industry segments are being influenced by blockchain. On Wednesday, September 26, at 9 a.m. Pacific, you can tune into “A Hitchhiker’s Guide to Deploying Hyperledger Fabric on Kubernetes,” a free webinar presented by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli of AID:Tech. It’s ideal for DevOps workers and others interested in the increasingly popular Hyperledger Fabric platform.

Conferences also provide good learning opportunities. The Open FinTech Forum in New York City, coming up October 10 and 11, will be a great chance to hear about the latest distributed ledger deployments, use cases, trends, and predictions of blockchain adoption. Panel discussions are scheduled to cover:

  • Distributed Ledger Technology Deployments & Use Cases in Financial Services

  • Enterprise Blockchain Adoption – Trends and Predictions

  • Blockchain Based Compliance Management Systems

Taking advantage of these opportunities to learn about blockchain makes more sense than ever.


Why the Future of Data Storage is (Still) Magnetic Tape

Studies show [PDF] that the amount of data being recorded is increasing at 30 to 40 percent per year. At the same time, the capacity of modern hard drives, which store most of this data, is increasing at less than half that rate. Fortunately, much of this information doesn’t need to be accessed instantly. And for such things, magnetic tape is the perfect solution. …

Indeed, much of the world’s data is still kept on tape, including data for basic science, such as particle physics and radio astronomy, human heritage and national archives, major motion pictures, banking, insurance, oil exploration, and more. There is even a cadre of people (including me, trained in materials science, engineering, or physics) whose job it is to keep improving tape storage.

Tape has survived for as long as it has for one fundamental reason: It’s cheap. And it’s getting cheaper all the time. But will that always be the case?

You might expect that if the ability to cram ever more data onto magnetic disks is diminishing, so too must this be true for tape, which uses the same basic technology but is even older. The surprising reality is that for tape, this scaling up in capacity is showing no signs of slowing. Indeed, it should continue for many more years at its historical rate of about 33 percent per year, meaning that you can expect a doubling in capacity roughly every two to three years. Think of it as a Moore’s Law for magnetic tape.
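That doubling time follows from the growth rate itself: at 33 percent per year, capacity grows by a factor of 1.33 annually, and since 1.33^2.4 ≈ 2 (that is, log 2 / log 1.33 ≈ 2.4), capacity doubles roughly every two and a half years.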

Read more at IEEE Spectrum

Linux on Windows 10: Running Ubuntu VMs Just Got a Lot Easier, Says Microsoft

Ubuntu maintainer Canonical and Microsoft have teamed up to release an optimized Ubuntu Desktop image that’s available through Microsoft’s Hyper-V gallery.

The Ubuntu Desktop image should deliver a better experience when running as a guest on a Windows 10 Pro host, according to Canonical. The optimized version is the Ubuntu Desktop 18.04.1 LTS release, also known as Bionic Beaver.

Microsoft’s work with Canonical was prompted by users who wanted a “first-class experience” on Linux virtual machines (VMs) as well as Windows VMs. To achieve this goal, Microsoft worked with the developers of XRDP, an open-source implementation of Microsoft’s Remote Desktop Protocol (RDP) for Linux.

Read more at ZDNet

Learn more about Linux on Windows here.

A Deep Dive Into Data Lakes

In the age of Big Data, we’ve had to come up with new terms to describe large-scale data storage. We have databases, data warehouses and now data lakes.

While they all contain data, these terms describe different ways of storing and using that data. Before we discuss data lakes and why they are important, let’s examine how they differ from databases and data warehouses.

Let’s start here: A data warehouse is not a database. Although you could argue that they’re both relational data systems, they serve different purposes. Data warehousing allows you to pull data together from a number of different sources for analysis and reporting. Data warehouses store vast amounts of historical data, optimized for complex queries across all the data types being pulled together.

Data lakes are centralized storage and data repositories that allow you to work with a variety of different types of data. The cool thing here is that you don’t need to structure the data and it can be imported “as-is.” This allows you to work with raw data and run analytics, data visualization, big data processing, machine learning tools, AI, and much more. This level of data agility can actually give you some pretty cool competitive advantages.


Read more at Datacenter Frontier

The (Awesome) Economics of Open Source

By lowering barriers to innovation, open source is superior to proprietary solutions for enabling continued positive economic growth. …

Successful open source software companies “discover” markets where transaction costs far outweigh all other costs, outcompete the proprietary alternatives for all the good reasons that even the economic nay-sayers already concede (e.g., open source is simply a better development model to create and maintain higher-quality, more rapidly innovative software than the finite limits of proprietary software), and then—and this is the important bit—help clients achieve strategic objectives using open source as a platform for their own innovation. With open source, better/faster/cheaper by itself is available for the low, low price of zero dollars.

As an open source company, we don’t cry about that. Instead, we look at how open source might create a new inflection point that fundamentally changes the economics of existing markets or how it might create entirely new and more valuable markets.

Read more at OpenSource.com 

How IBM Is Using Open Source for a Greater Good

Dr. Angel Diaz is the face of open source at IBM as Vice President of Developer Technology, Open Source & Advocacy. At the recent Open Source Summit in Vancouver, we spoke with Diaz about the importance of open source at IBM and how it’s changing the world around us.

LF: What’s the importance of open source in the modern economy?

Angel Diaz: We are living in a technology-fueled business renaissance — cloud, data, artificial intelligence, and the redefinition of the transaction. There is constant democratization of technology. This democratization allows us as computer scientists to innovate at higher orders of the stack. You don’t have to worry about compute, storage, and network; you get that in the cloud, for example. But what has been driving that democratization? Open source.

Open source has been the fuel, the innovation engine, the skills engine, the level playing field that allows us as a society to build more, to build faster and move forward and the rate and pace of that is increasing.

What’s really nice about that is we are doing it in a controlled way, with open governance, and leveraging all the work that we do in consortia such as The Linux Foundation.

Read more at The Linux Foundation