
How Linux Can Save Small Businesses (And Old Hardware)

Many small businesses with tight budgets are facing a tough choice: Stick with obsolete systems and remain vulnerable to hackers, or spend a lot to install new gear. David Gewirtz shows how Linux can help you preserve your investment while staying safe and secure.

Linux is much more secure than Windows. As my colleague Steven J. Vaughan-Nichols pointed out, Linux was “designed to work in environments where other users are on the same box.” As a result, Linux has some intrinsic security built in, simply as a matter of its user-based permission model.

The installed base of end-user Linux distros is considerably smaller than that of Windows, so it’s not as big a target for hackers. And since there are so many variations out there, it’s hard to predict which version a given machine is running. That makes it harder for an attacker to dig in and find a vulnerability.

Read more at ZDNet

This Week in Open Source News: Open Cloud Report, No Dirty Cow Patch for Android, & More

This week in Linux and OSS news, The Linux Foundation’s annual Guide to the Open Cloud lists top open source cloud projects and trends, no Dirty Cow bug patch for Android this month, and more! Stay informed and engaged in open source news with this weekly digest!

1) The Linux Foundation releases its 2016 Guide to the Open Cloud report.

Linux Foundation Provides Insights Into the Open Cloud – SD Times

2) Though Linux users got a fix for “Dirty Cow,” Android users might not be so lucky.

Fix for Critical Android Rooting Bug is a No-Show in November Patch Release – Ars Technica

3) Jeff Garzik, CEO and co-founder of Bloq, to join The Linux Foundation’s Board of Directors.

Linux Foundation Appoints Jeff Garzik to Board of Directors – Bitcoins.net

4) “More than 80 leading finance and technology organizations, including IBM, have joined The Linux Foundation Hyperledger, a project aimed at creating an enterprise-grade blockchain framework.”

How Blockchain Will Change Your Life – The Wall Street Journal

5) Munich, Germany is famous for rejecting Windows in favor of Linux but now faces proposals to make Windows 10 and Office available across the council.

City that Swapped Windows for Linux Debates Proposed Windows 10 Move – ZDNet

GitHub Enterprise 2.8 Adds New Workflow Options

The big changes rolled out for GitHub Enterprise 2.8 may seem familiar, but don’t say GitHub is running out of ideas. Instead, the company is adding tools to GitHub Enterprise that enterprises may already know, rather than expanding functionality exclusive to GitHub.

Some new pieces, like the Reviews or Projects functions, will likely draw users because of their tight integration with the product or because they provide the equivalent of a third-party option. But others, like Jupyter support, appeal because they open up GitHub Enterprise to use cases that didn’t exist before or would have been difficult to implement.

Read more at InfoWorld

 

Build a VR App in 15 Minutes with Linux

In 15 minutes, you can develop a virtual reality application and run it in a web browser, on a VR headset, or with Google Daydream. The key is A-Frame, an open source toolkit built by the Mozilla VR Team.

Test it

Open this link using Chrome or Firefox on your mobile phone.

Put your phone into Google Cardboard and stare at a menu square to switch the 360-degree scene.
Read more at OpenSource.com

Fluentd Joins the Cloud Native Computing Foundation

Fluentd, the open source data/log collector, has just joined the Cloud Native Computing Foundation. This is a big step forward for the whole Cloud Native ecosystem and for the project itself: thousands of users and companies will continue to benefit from this standard, scalable logging solution, with better integrations, documentation, and features.

For more details, please read the original article on the CNCF blog.

The Future of IoT: Containers Aim to Solve Security Crisis

Despite growing security threats, the Internet of Things hype shows no sign of abating. Feeling the FOMO, companies are busily rearranging their roadmaps for IoT. The transition to IoT runs even deeper and broader than the mobile revolution. Everything gets swallowed in the IoT maw, including smartphones, which are often our windows on the IoT world, and sometimes our hubs or sensor endpoints.

New IoT focused processors and embedded boards continue to reshape the tech landscape. Since our Linux and Open Source Hardware for IoT story in September, we’ve seen Intel Atom E3900 “Apollo Lake” SoCs aimed at IoT gateways, as well as new Samsung Artik modules, including a Linux-driven, 64-bit Artik7 COM for gateways and an RTOS-ready, Cortex-M4 Artik0. ARM announced Cortex-M23 and Cortex-M33 cores for IoT endpoints featuring ARMv8-M and TrustZone security.

Security is a selling point for these products, and for good reason. The Mirai botnet that recently attacked the Dyn service and blacked out much of the U.S. Internet for a day brought Linux-based IoT to the forefront — and not in a good way. Just as IoT devices can be turned to the dark side via DDoS, the devices and their owners can also be victimized directly by malicious attacks.

The Dyn attack reinforced the view that IoT will more confidently move forward in more controlled and protected industrial environments rather than the home. It’s not that consumer IoT security technology is unavailable, but unless products are designed for security from scratch, as are many of the solutions in our smart home hub story, security adds cost and complexity.

In this final, future-looking segment of our IoT series, we look at two Linux-based, Docker-oriented container technologies that are being proposed as solutions to IoT security. Containers might also help solve the ongoing issues of development complexity and barriers to interoperability that we explored in our story on IoT frameworks.

We spoke with Canonical’s Oliver Ries, VP Engineering Ubuntu Client Platform about his company’s Ubuntu Core and its Docker-friendly, container-like Snaps package management technology. We also interviewed Resin.io CEO and co-founder Alexandros Marinos about his company’s new Docker-based ResinOS for IoT.

Ubuntu Core Snaps to

Canonical’s IoT-oriented Snappy Ubuntu Core version of Ubuntu is built around a container-like snap package management mechanism, and offers app store support. The snaps technology was recently released on its own for other Linux distributions. On November 3, Canonical released Ubuntu Core 16, which improves white label app store and update control services.

The Snap mechanism offers automatic updates, and helps block unauthorized updates. Using transactional systems management, snaps ensure that updates either deploy as intended or not at all. In Ubuntu Core, security is further strengthened with AppArmor, and the fact that all application files are kept in separate silos, and are read-only.
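The “deploy as intended or not at all” property can be sketched in a few lines of plain Python: stage the new version on disk, then commit it with a single atomic rename, so a crash or error mid-update leaves the old version untouched. This is an illustration of the idea, not snapd’s actual mechanism.

```python
import os
import tempfile

def transactional_update(path, new_contents):
    """Replace the file at `path` atomically: after this call the file
    holds either the old contents or the new ones, never a mix."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Stage the new version in the same directory, so the final
    # os.replace() is an atomic rename within one filesystem.
    fd, staging = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(new_contents)
        os.replace(staging, path)  # the commit point: atomic swap
    except Exception:
        os.unlink(staging)  # roll back by discarding the staged copy
        raise
```

If writing the new version fails partway through, the staged copy is discarded and the deployed file is left exactly as it was, which is the essence of a transactional update.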

Ubuntu Core, which was part of our recent survey of open source IoT OSes, now runs on Gumstix boards, Erle Robotics drones, Dell Edge Gateways, the Nextcloud Box, LimeSDR, the Mycroft home hub, Intel’s Joule, and SBCs compliant with Linaro’s 96Boards spec. Canonical is also collaborating with the Linaro IoT and Embedded (LITE) Segment Group on its 96Boards IoT Edition. Initially, 96Boards IE is focused on Zephyr-driven Cortex-M4 boards like Seeed’s BLE Carbon, but it will expand to gateway boards that can run Ubuntu Core.

“Ubuntu Core and snaps have relevance from edge to gateway to the cloud,” says Canonical’s Ries. “The ability to run snap packages on any major distribution, including Ubuntu Server and Ubuntu for Cloud, allows us to provide a coherent experience. Snaps can be upgraded in a failsafe manner using transactional updates, which is important in an IoT world moving to continuous updates for security, bug fixes, or new features.”

Security and reliability are key points of emphasis, says Ries. “Snaps can run completely isolated from one another and from the OS, making it possible for two applications to securely run on a single gateway,” he says. “Snaps are read-only and authenticated, guaranteeing the integrity of the code.”
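The integrity guarantee Ries mentions boils down to refusing any package whose contents no longer match what the publisher shipped. Here is a minimal sketch of that check, using a plain SHA-256 digest in place of snapd’s actual assertion and signature format, which is more involved:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest of a package payload."""
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, expected_digest: str) -> bool:
    """Accept the package only if its digest matches the published one."""
    return sha256_digest(data) == expected_digest

# A publisher ships the package together with its digest...
package = b"example application payload"
published = sha256_digest(package)

# ...and the device checks the payload before installing it.
assert verify_package(package, published)
assert not verify_package(b"tampered payload", published)
```

A real system signs the digest with the publisher’s key rather than trusting a bare hash, but the refuse-on-mismatch logic is the same.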

Ries also touts the technology for reducing development time. “Snap packages allow a developer to deliver the same binary package to any platform that supports it, thereby cutting down on development and testing costs, deployment time, and update speed,” says Ries. “With snap packages, the developer is in full control of the lifecycle, and can update immediately. Snap packages provide all required dependencies, so developers can choose which components they use.”

ResinOS: Docker for IoT

Resin.io, which makes the commercial IoT framework of the same name, recently spun off the framework’s Yocto Linux-based ResinOS 2.0 as an open source project. Whereas Ubuntu Core runs Docker container engines within snap packages, ResinOS runs Docker on the host. The minimalist ResinOS abstracts the complexity of working with Yocto code, enabling developers to quickly deploy Docker containers.

Like the Linux-based CoreOS, ResinOS integrates systemd control services and a networking stack, enabling secure rollouts of updated applications over a heterogeneous network. However, it’s designed to run on resource-constrained devices such as ARM hacker boards, whereas CoreOS and other Docker-oriented OSes like the Red Hat-based Project Atomic are currently x86-only and prefer a resource-rich server platform. ResinOS can run on 20 Linux devices and counting, including the Raspberry Pi, BeagleBone, and Odroid-C1.

“We believe that Linux containers are even more important for embedded than for the cloud,” says Resin.io’s Marinos. “In the cloud, containers represent an optimization over previous processes, but in embedded they represent the long-delayed arrival of generic virtualization.”

When applied to IoT, full enterprise virtual machines have performance issues and restrictions on direct hardware access, says Marinos. Mobile VMs like OSGi and Android’s Dalvik can be used for IoT, but they require Java, among other limitations.

Using Docker may seem natural for enterprise developers, but how do you convince embedded hackers to move to an entirely new paradigm? “Rather than transferring practices from the cloud wholesale, ResinOS is optimized for embedded,” answers Marinos. In addition, he says, containers are better than typical IoT technologies at containing failure. “If there’s a software defect, the host OS can remain functional and even connected. To recover, you can either restart the container or push an update. The ability to update a device without rebooting it further removes failure opportunities.”

According to Marinos, other benefits accrue from better alignment with the cloud, such as access to a broader set of developers. Containers provide “a uniform paradigm across data center and edge, and a way to easily transfer technology, workflows, infrastructure, and even applications to the edge,” he adds.

The inherent security benefits in containers are being augmented with other technologies, says Marinos. “As the Docker community pushes to implement signed images and attestation, these naturally transfer to ResinOS,” he says. “Similar benefits accrue when the Linux kernel is hardened to improve container security, or gains the ability to better manage resources consumed by a container.”

Containers also fit in well with open source IoT frameworks, says Marinos. “Linux containers are easy to use in combination with an almost endless variety of protocols, applications, languages and libraries,” says Marinos. “Resin.io has participated in the AllSeen Alliance, and we have worked with partners who use IoTivity and Thread.”

Future IoT: Smarter Gateways and Endpoints

Marinos and Canonical’s Ries agree on several future trends in IoT. First, the original conception of IoT, in which MCU-based endpoints communicate directly with the cloud for processing, is quickly being replaced with a fog computing architecture. That calls for more intelligent gateways that do a lot more than aggregate data and translate between ZigBee and WiFi.

Second, gateways and smart edge devices are increasingly running multiple apps. Third, many of these devices will provide onboard analytics, which we’re seeing in the latest smart home hubs. Finally, rich media will soon become part of the IoT mix.

“Intelligent gateways are taking over a lot of the processing and control functions that were originally envisioned for the cloud,” says Marinos. “Accordingly, we’re seeing an increased push for containerization, so feature- and security-related improvements can be deployed with a cloud-like workflow. The decentralization is driven by factors such as the mobile data crunch, an evolving legal framework, and various physical limitations.”

Platforms like Ubuntu Core are enabling an “explosion of software becoming available for gateways,” says Canonical’s Ries. “The ability to run multiple applications on a single device is appealing both for users annoyed with the multitude of single-function devices, and for device owners, who can now generate ongoing software revenues.”

It’s not only gateways: endpoints are getting smarter, too. “Reading a lot of IoT coverage, you get the impression that all endpoints run on microcontrollers,” says Marinos. “But we were surprised by the large number of Linux endpoints out there, like digital signage, drones, and industrial machinery, that perform tasks rather than operate as an intermediary. We call this the shadow IoT.”

Canonical’s Ries agrees that a single-minded focus on minimalist technology misses out on the emerging IoT landscape. “The notion of ‘lightweight’ is very short lived in an industry that’s developing as fast as IoT,” says Ries. “Today’s premium consumer hardware will be powering endpoints in a matter of months.”

While much of the IoT world will remain lightweight and “headless,” with sensors like accelerometers and temperature sensors communicating in whisper thin data streams, many of the newer IoT applications use rich media. “Media input/output is simply another type of peripheral,” says Marinos. “There’s always the issue of multiple containers competing for a limited resource, but it’s not much different than with sensor or Bluetooth antenna access.”

Ries sees a trend of “increasing smartness at the edge” in both industrial and home gateways. “We are seeing a large uptick in AI, machine learning, computer vision, and context awareness,” says Ries. “Why run face detection software in the cloud and incur delays and bandwidth and computing costs, when the same software could run at the edge?”

As we explored in our opening story of this IoT series, there are IoT issues related to security such as loss of privacy and the tradeoffs from living in a surveillance culture. There are also questions about the wisdom of relinquishing one’s decisions to AI agents that may be controlled by someone else. These won’t be fully solved by containers, snaps, or any other technology.

Perhaps we’d be happier if Alexa handled the details of our lives while we sweat the big stuff, and maybe there’s a way to balance privacy and utility. For now, we’re still exploring, and that’s all for the good.

Read the previous articles in the series:

Who Needs the Internet of Things?

21 Open Source Projects for IoT

Linux and Open Source Hardware for IoT

Smart Linux Home Hubs Mix IoT with AI

Open Source Operating Systems for IoT

Learn more about embedded Linux through The Linux Foundation’s Embedded Linux Development with Yocto Project course.

 

Enterprise Linux Showdown: Ubuntu Linux

Canonical’s Ubuntu Linux is the newcomer in the enterprise Linux space. Its first release was in 2004; the other two enterprise Linux distributions in this series, SUSE and Red Hat, were born in 1992 and 1993. In its short life Ubuntu has attracted supporters and detractors, generated considerable controversy and excitement, and given the Linux world a much-needed injection of energy.

One of the primary differentiators between Ubuntu, RHEL, and SUSE is that Ubuntu unashamedly and boldly promotes its desktop version. RHEL and SUSE soft-pedal their desktop editions. Not Canonical. Desktop Ubuntu has been front and center from the beginning.

Ubuntu is based on Debian Linux, which is the #1 Linux distribution in size and influence. Debian supports several times more packages than any other distribution, and its family tree is by far the largest. Take a look at the Distribution Timeline for Debian; its largest descendants, Knoppix and Ubuntu, have sizable family trees of their own. Debian is a solid base to build on.

Crashing the Party

Mark Shuttleworth founded Canonical and Ubuntu with $10 million of his own money. Ubuntu is a word from the Nguni language with a complex meaning that doesn’t translate well into English: we are all connected, correct behavior that flows from our connectedness, humanity to others, I am what I am because of who we all are. These were radical concepts in the Linux world, which was rather different back then. It was rough-and-tumble, allegedly ruled by meritocracy with the best code rising to the top, but in reality sizable swaths of it were personality-driven, cliquish, and hostile to newcomers. (And old-timers, and random passers-by.) The famous 2006 FLOSSPOLS study (see D16 – Gender: Integrated Report of Findings) claimed that 98.5% of FOSS contributors were men, while the proprietary software world had 28% participation by women. When the study asked why such a difference exists, the answer it found was a hostile environment.

It’s a fascinating report that is worth reading to see how far Linux and FOSS have come in terms of community and community values. Once upon a time the claim of “meritocracy” trumped everything, and too bad for anyone who couldn’t take the heat or develop magnificent skills on their own. Now we’re seeing more of the Apache Foundation philosophy of “community before code.”

The Ubuntu community was key in pushing “community before code” into the Linux mainstream. Ubuntu got a whole new generation of people excited about Linux, and excited about contributing to Linux and FOSS. It launched a rowdy and far-reaching conversation about codes of conduct, the great value of diversity, and treating each other with civility and respect. Doubtless one can point fingers at numerous shortcomings in Ubuntu’s performance in these arenas, but there is no doubt that they were responsible for bringing these issues to the forefront and catalyzing large-scale change.

Kick in the Pants

Ubuntu ignited excitement about desktop Linux by taking Debian Linux and putting a user-friendly face on it. Many had tried to do this: Libranet, Corel, Lindows…but only Ubuntu became dominant and survived. It held the #1 spot on Distrowatch for quite a few years and has always been in the top five. So, how did Ubuntu succeed where so many others did not?

1. Super-easy installation. Pop in your installation media, answer a couple of questions, and in a few minutes you have a nice new Ubuntu Linux system to play with.

2. Free Ubuntu disks. You could order free installation CDs through Canonical’s ShipIt program, and copy and redistribute them. This was discontinued in 2011, but those first few years gave them huge exposure.

3. Live CDs. Live Linux CD/DVDs have been around for a long time, like Yggdrasil Linux, Peter Anvin’s SuperRescue CD, and Knoppix. Ubuntu made running a live CD easy and pretty, and benefited from better hardware than the older generations had.

4. Mark Shuttleworth and Jono Bacon, Ubuntu’s community manager. In the early years you couldn’t go online or peruse print Linux magazines without tripping over these two. They were the first faces of Ubuntu, and they were very good at it.

Ubuntu’s Unique Features

Ubuntu has all the usual products: servers, cloud, containers, microservices, Internet of Things, certified hardware, management tools, paid support, training, partnerships with key vendors, and all the things that enterprise users want.

These features set Ubuntu apart:

Ubuntu has a number of variants: desktop, server (which installs without a graphical interface like a proper server should), and different desktop flavors. These are the official Flavours, as they are called:

  • Edubuntu — Ubuntu for education
  • Ubuntu GNOME — Ubuntu with the GNOME desktop environment
  • Kubuntu — Ubuntu with the K Desktop environment
  • Ubuntu Kylin — Ubuntu localised for China
  • Lubuntu — Ubuntu that uses LXDE
  • Mythbuntu — Designed for creating a home theatre PC with MythTV
  • Ubuntu Studio — Designed for multimedia editing and creation
  • Xubuntu — Ubuntu with the XFCE desktop environment
  • Ubuntu MATE — Ubuntu with the MATE desktop environment

Each one has its own website and community. Ubuntu itself ships with the Unity desktop, but here’s the cool and unique deal: when you download and install any one *buntu, you effectively get all of them. Don’t like Unity? Then install xubuntu-desktop, ubuntu-mate-desktop, kubuntu-desktop, etc., to get a different Flavour; the base system is the same for all of them.

RHEL and SUSE both draw a wide line between the enterprise versions and their free community versions, Fedora Linux and openSUSE. Ubuntu does not do this; there is not an enterprise Ubuntu and a separate free community-supported Ubuntu. It’s all the same. Instead, Ubuntu has a different lifecycle of standard and long-term releases, which you can read about below.

The other feature that sets Ubuntu apart is that it’s the easiest of the enterprise Linuxes to get and use. Just download and use it: no time bombs, no registration, no muss, no fuss. Server, desktop, and Ubuntu Core for Raspberry Pi and other small devices are all a click away.

The desktop download has a small nag screen that asks some survey questions and asks for a donation. You can be a Scrooge and easily bypass this if you wish.

Support

The free community support options are quite good, thanks to Ubuntu’s large and engaged community. This includes good documentation, Ask Ubuntu, Ubuntu Forums, IRC channels, and Launchpad.

Ubuntu’s paid support follows the familiar model: get the software for free, pay for support. Support pricing is the lowest of the three enterprise Linuxes.

Lifecycles

Ubuntu has two release tracks: long-term support (LTS) and standard releases. LTS releases are supported for five years, both server and desktop; standard releases are supported for nine months. Releases are time-based, with a new release every six months. Ubuntu tends to ship fresher software versions, and when a release reaches end-of-life, that’s it: no more updates, no security fixes. Upgrades to new releases are usually reliable, and the Landscape systems management tool gives you complete control of all of your systems.
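Since Ubuntu encodes the release date in the YY.MM version number, the support window is easy to compute. A quick back-of-the-envelope sketch (illustrative only; actual end-of-life dates are announced by Canonical and occasionally adjusted):

```python
from datetime import date

def support_end(version: str, lts: bool) -> date:
    """Approximate end-of-support month for an Ubuntu release named
    YY.MM: five years for LTS releases, nine months for standard ones."""
    yy, mm = (int(part) for part in version.split("."))
    year, month = 2000 + yy, mm
    months = 60 if lts else 9  # 5 years vs. 9 months of support
    total = year * 12 + (month - 1) + months  # count months from year 0
    return date(total // 12, total % 12 + 1, 1)

print(support_end("16.04", lts=True))   # 2021-04-01
print(support_end("16.10", lts=False))  # 2017-07-01
```

So 16.04 LTS is covered into 2021, while the standard 16.10 release runs out of support in mid-2017.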

See Enterprise Linux Showdown: Red Hat Enterprise Linux for an RHEL comparison, and come back next week for a look at SUSE Linux Enterprise.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Re-Imagining the Container Stack to Optimize Space and Speed

The line between containers and VMs can be thought of as a continuum, according to Graham Whaley, Sr. Software Engineer at Intel. In his keynote at LinuxCon Europe, he talked about the spectrum from fully featured, accelerated, secure VMs all the way down to the barest minimum lightweight container. He says, “Some people want one end of the spectrum, really time-critical, other people want security. We don’t have that today. What we’d like is a continuous choice of features. … That’s something we’re trying to enable.”

Whaley also talked about two VM myths:

  • VMs don’t have to be big: “I’ve seen embedded systems running hypervisors with tiny amounts of RAM. Admittedly, that VM may not have the features that you want to run a container, but it’s not actually that far off. Containers don’t require that many features at the bottom end.”
  • And they aren’t always slow: “Along with, or parallel with, ‘I’m big,’ comes slow. If you’re not that big, it’s pretty hard to be slow, if you’re very very small. Yeah, VMs don’t have to be this humongous behemoth that you can’t really use in your container space because they’re just too slow. That’s a legacy thing. We can move beyond that.”

Whaley talked about “re-imagining what we can do in the whole container cloud stack” with the goal of making a tenfold improvement in performance. The key to getting this kind of performance improvement is to take a fresh look without assuming that you need something equivalent to a self-contained PC. Their approach is to “throw that away, start again, and pick out the pieces we need from the VM.” By being very selective and only including the bare minimum of what is needed along with also using some new technologies that increase performance, they are getting sub-50-millisecond boot times with around 50MB per container instance overhead.

Whaley wraps it up with more information about next steps and where you can go to participate: “We do continue to optimize space and speed. Really, we want to look for that next tenfold improvement. What’s that next leap of faith, that change of architecture? We are redefining what’s possible. We’re an open project. … The code is available now on GitHub. We have an IRC channel and mailing list. Come to the web site where there’s Clear Linux, Clear Containers and Ciao.”

Watch the entire video to learn more about the Clear Linux project and Intel’s approach to improving performance of containers and VMs.

LinuxCon Europe videos

Keynote: Blurring the Lines: The Continuum Between Containers and VMs

Graham Whaley, Sr. Software Engineer at Intel, says there is a continuum of features and benefits across the container/VM spectrum, and you should be able to choose which point on that continuum best suits you.

Docker and Machine Learning Top the Tech Trends for ‘17

AR, microservices, and autonomous team structures tipped to surge in the coming year.

With 2017 fast approaching, technology trends that will keep gathering steam in the new year range from augmented and virtual reality to machine intelligence, Docker, and microservices, according to technology consulting firm ThoughtWorks.

In its latest semi-annual Technology Radar report, ThoughtWorks calls out four IT themes growing in prominence:

  • Virtual reality (VR) and its cousin, augmented reality (AR)
  • Docker as process, PaaS as machine, microservices architecture as programming model
  • Intelligent empowerment
  • The holistic effect of team structure

Read more at TechCentral