What? Another Linux vulnerability? Nope. Other operating systems may be easy malware marks, but Linux continues to resist malware.
Intel, Red Hat Working on Enabling Wayland Support in Gnome
Open Source is all about collaboration and contribution and two leading communities are working towards making Wayland a reality.
PengPod Claims It Will Transform PC & Tablet World
PengPod, a low-volume Linux tablet vendor, today released the PengPod 1040, a tablet it claims will “transform the PC and Tablet world by merging both elegantly together to fit any lifestyle.” But will it really pan out?
IDF 2013: Intel Unveils Solid-State Drive Pro 1500 Series for Business PCs
The company is touting the improved security and easy IT manageability of its latest enterprise SSDs.
Brandon Philips: How the CoreOS Linux Distro Uses Cgroups
CoreOS is a new Linux distribution for servers aimed at giving all data centers the same automation capabilities and efficiencies as those seen in the massive server farms run by Google or Facebook. Their technology, combined with the upstart package manager Docker, is popularizing the idea that the Linux operating system itself can serve as a hypervisor. Lending credibility to the approach is Linux kernel developer Greg Kroah-Hartman, a CoreOS advisor.
“Kroah-Hartman says he’s been wanting to build something like CoreOS for over half a decade,” writes Cade Metz in a recent Wired article featuring CoreOS.
At the heart of this functionality is cgroup — the Linux kernel subsystem that allows process containers for resource partitioning. CoreOS CTO Brandon Philips will speak at LinuxCon and CloudOpen in New Orleans next week about cgroup.
“Until recently the only true way to isolate Linux apps from each other was with a hypervisor like KVM,” Philips said. “With containers we get that isolation and ability to programmatically turn on and off virtual machines. But, they come online faster and use far fewer resources.”
Here he previews his talk and discusses the benefits of CoreOS for sysadmins and developers; how CoreOS uses cgroups and systemd; how the cgroup redesign might affect CoreOS; and poses his questions for Linux kernel developers in advance of their panel discussion at LinuxCon and CloudOpen.
What is CoreOS in a nutshell, for those who are unfamiliar (or didn’t read the Wired article)?
It’s a new Linux operating system focused on creating a Linux that’s more tuned for people with up to hundreds of thousands of servers. A lot of the problems they encounter are those that Facebook or Google have already solved. They have lots of machines and developers and need to be really efficient and automate tasks. CoreOS does things a lot differently than other Linux distros.
For example: /etc is where configuration files are stored on a Linux machine. We have created etcd, a daemon that runs across all machines to share configuration data. It has a simple API and is operationally simple.
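The etcd registry Philips describes is driven over plain HTTP. As a rough sketch (assuming an etcd daemon on localhost and its early default port 4001; key paths and the default port have changed across etcd versions), setting and reading a shared key from any machine in the cluster looks something like this:

```shell
# Write a configuration value into the cluster-wide registry
# (assumes a running etcd daemon; 4001 was the early default port).
curl -L -X PUT http://127.0.0.1:4001/v2/keys/services/web/port -d value="8080"

# Any machine in the cluster can now read the same key and gets a
# JSON response containing the value.
curl -L http://127.0.0.1:4001/v2/keys/services/web/port
```

The point of the design is that this replaces copying static files under /etc to every host: one write, and every member of the cluster sees it.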
What are the benefits of CoreOS for a sysadmin? For an application developer?
For a lot of teams already doing distributed systems, this isn’t new to them. But having a dynamic registry running across your machines instead of running a bunch of static config files means you can just write into the service registry and all the machines in the cluster have access to that.
How does CoreOS utilize cgroups and systemd?
Really the fundamental difference between CoreOS and what people are used to is that we don’t have a package manager. Instead we use containers. What makes that possible are cgroups and namespaces. Cgroups can meter and limit the hardware resources each individual container is able to use. And with cgroups we can run production and development software at the same time, because dev workloads can run at a much lower priority. We can safely deploy containers across machines that aren’t necessarily production.
How will the cgroup redesign affect what you’re trying to accomplish with CoreOS?
In the long run it will probably help us. CoreOS is built on top of systemd, and essentially the big change in cgroups is moving away from a file system that anybody can manipulate directly, toward a single gateway API. The current implementation of that gateway is systemd, and we’ve bought into systemd. We had no intention of using anything other than systemd.
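In the systemd-as-gateway model, a service declares its resource limits in its unit file and systemd performs the cgroup manipulation on its behalf, rather than the admin writing into /sys/fs/cgroup by hand. A hypothetical unit of that era might look like this (the service name and paths are invented for illustration; CPUShares= and MemoryLimit= were systemd's resource-control directives at the time):

```ini
# /etc/systemd/system/dev-worker.service  (hypothetical example)
[Unit]
Description=Low-priority development worker

[Service]
ExecStart=/usr/bin/dev-worker
CPUShares=128
MemoryLimit=512M
```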
Why do you need a new operating system to accomplish this for application servers? Why not build within another distro?
A big piece that gave us pause and led us to create our own distro is that we wanted to do updates quite differently. Taking inspiration from Chrome OS, we have two file systems, A and B. While a Linux system is running on A, you can make offline changes in the B system. As soon as you’re ready to upgrade, you reboot the machine. This double-buffered update gives you a couple of advantages. Updates are completely atomic; you never want a server to be in a state where things are unknown. A classic package manager modifies files all over your system while you’re doing your upgrade, which means many daemons could be in an unstable state at any point.
You want to make sure code is up to date and control rollouts so they don’t happen during some critical time for the application. It shouldn’t be something a sysadmin should worry about every day. We wanted to make that a core piece of the distro.
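The A/B scheme can be sketched as an atomic flip between two complete trees: the inactive one is prepared offline, then a single rename switches which tree is used next. This toy version substitutes plain directories and a symlink for CoreOS's real partitions and bootloader flag:

```shell
# Toy sketch of a double-buffered (A/B) update. Real CoreOS flips
# between two partitions via the bootloader; directories stand in here.
mkdir -p ab-demo/A ab-demo/B
echo "v1" > ab-demo/A/release
ln -s A ab-demo/current                     # system "boots" from A

echo "v2" > ab-demo/B/release               # stage the update offline in B
ln -s B ab-demo/current.new
mv -T ab-demo/current.new ab-demo/current   # rename(2) is atomic: flip to B

cat ab-demo/current/release                 # prints v2
```

At no point does a reader of `ab-demo/current` see a half-updated tree, which is the "never in an unknown state" property Philips contrasts with in-place package upgrades.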
If you could ask the kernel developers on stage at LinuxCon anything, what would it be?
I really think dbus is an interesting thing happening within both kernel and user space. Cgroups started as a file system, and the kernel developers realized a file system isn’t going to cut it. A lot of the new cgroups functionality is exposed via dbus in its first implementation. There’s talk of adding dbus into the kernel. I’m interested to hear where people think file systems today aren’t cutting it and where userspace dbus-enabled daemons might be useful for future APIs.
Also, now that we are having to invent new APIs and syscalls that haven’t existed in other Unix-like systems, how do you feel about versioning APIs/ABIs or providing API previews to application/library developers to make sure we are on the right path with a given design?
Can you give us a preview of your talk at LinuxCon?
My talk won’t be about CoreOS at all; that’s just the product I’m working on. It’s about familiarizing people with, and giving an overview of, the technologies that have made it possible to think of Linux servers as a hypervisor for containers, and what functionality that gives you.
Until recently the only true way to isolate Linux apps from each other was with a hypervisor like KVM. With containers we get that isolation and ability to programmatically turn on and off virtual machines. But, they come online faster and use far fewer resources. I’ll give people practical examples of how to use cgroups to monitor apps or isolate them or use namespaces to increase security and isolate things from the network and the file system. It’s more of a practical guide to these newer APIs and functionality.
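Namespace membership, like cgroup membership, is visible to any process under /proc, and the unshare(1) tool from util-linux demonstrates the network and filesystem isolation mentioned above. A small sketch (the unshare invocation itself normally needs root, so it is shown as a comment):

```shell
# Each process's namespaces are exposed as links under /proc (unprivileged).
ls -l /proc/self/ns

# With root, unshare(1) starts a command in fresh namespaces, e.g. an
# isolated network stack and a private mount table:
#   unshare --net --mount -- ip addr   # sees only a lone, down loopback
```

A process launched that way cannot see the host's network interfaces or later mount changes, which is the kind of practical isolation recipe the talk promises.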
PostgreSQL: The Other Big Open-Source Database Has a New Release
The PostgreSQL development team has announced the release of PostgreSQL 9.3, the latest version of the world’s leading open source relational database system.
Linux Leads Self-Driving Car Movement
As Linux continues to gain headway in in-vehicle infotainment (IVI), it’s already finding its way into early designs for self-driving cars. Google’s autonomous car computers run Linux, as do prototypes from GM and Volkswagen. Meanwhile, Cohda Wireless is leading a new wave of vehicle-to-vehicle (V2V) sensor products with its Linux-driven MK2 radio.
The market is potentially huge. On Aug. 21, Navigant Research projected that sales of autonomous vehicles will surpass 95 million units by 2035, representing 75 percent of all light-duty vehicle sales.
Google’s Ubuntu-based self-driving Toyota Prius has led the way, logging over 500,000 miles of autonomous driving with no accidents caused by the computer, says Google. A year ago, its success led California to pass a law allowing autonomous cars to operate by 2015, assuming safety certification. Florida and Nevada have also legalized autonomous vehicles for testing, but as with California, only with a human driver present. Other states are also prepping legislation.
This week at the Frankfurt Car Show, Google is expected to announce a partnership with IBM and auto-components manufacturer Continental AG to build autonomous driving systems, according to Frankfurter Allgemeine Zeitung. It’s unclear whether the partners will build an autonomous system that can be added to existing cars, or as Jessica Lessin speculates in her Aug. 23 blog analysis, possibly a self-driving car designed from scratch by Google. Such a vehicle might be considered the Nexus of the automotive world.
Continental has already racked up thousands of miles of testing in Nevada using a VW Passat modified with its own autonomous technology. Strategy Analytics’ Mark Fitzgerald told Linux.com that Continental’s system “likely” uses Linux technology.
Lessin also reported that Google is negotiating on a partnership to provide driverless cars as part of a “robo-taxi” service. The Independent says the plan likely includes Google-funded Uber, which offers a popular taxi-hailing and car-sharing service. In July, Uber announced plans to purchase 2,500 driverless cars from Google.
Automakers Refine Prototypes
Google’s success has accelerated autonomous research at automakers including GM, Mercedes-Benz, Toyota, Volkswagen/Audi, and Volvo. On Aug. 27, Nissan became the first automaker to announce a firm timeline for bringing an autonomous car to market. The company said it will introduce its first autonomous vehicles by 2020, and will offer autonomous functionality across the model range within two vehicle generations.
Like most manufacturers, Nissan is keeping its technology under wraps. Nissan has used Windows Embedded Automotive in its Nissan Leaf IVI system, but the company is also a member of the GENIVI Alliance, which is standardizing IVI technology based on Linux. Its Nissan Research Center Silicon Valley (NRC-SV) is collaborating with Renault, which is working on Android-based IVI systems, as well as research institutions including Stanford University and MIT. Last year, MIT unveiled a Linux-based semi-autonomous driving system built into a Kawasaki Mule.
Volkswagen has had a longer collaboration with Stanford on Linux-based autonomous technology. Their Stanley vehicle won the 2005 DARPA Grand Challenge, followed by Junior, which was a runner-up in the 2007 DARPA Urban Challenge. Junior continues to be developed, but Volkswagen is also experimenting with Solaris on a recent Audi-based prototype.
GM and Carnegie-Mellon University have long collaborated on developing tuxified autonomous cars in DARPA events. The GM-CMU Autonomous Driving Collaborative Research Lab has developed a Linux-based SAFER [PDF] fault-tolerant platform that protects against processor and task failures for the numerous distributed embedded systems found in self-driving cars. The system is built into a modified Cadillac SRX, a model that also offers the Linux-based CUE IVI system.
Toyota, meanwhile, is doing self-driving research on a Lexus LS in its TRINA project. The OS was not revealed, but Toyota signed an agreement with Microsoft to work on a Windows Azure platform for next-generation telematics. On the other hand, Toyota is a founding member of the Linux Foundation’s Automotive Grade Linux working group, which is exploring autonomous technologies in addition to IVI. The company also recently unveiled the Toyota Lexus IS, which includes a Linux-based IVI system.
Autonomous vs. V2X
One emerging question is whether self-driving cars will be fully self-sufficient, like the Google car and most other prototypes, or depend in part on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications. These “V2X” systems, which typically adopt dedicated short-range communication (DSRC) radios that use the 5.9GHz 802.11p standard, offer always-on wireless connections with other cars or wireless road infrastructure to better coordinate driving for accident avoidance and optimized acceleration and braking. The technology can augment either autonomous or semi-autonomous technology, such as the adaptive cruise control that is already appearing in luxury cars.
In January, NXP Semiconductors and Cisco, which has a separate deal with Continental on security for autonomous cars, announced a dual investment in Australian automotive technology firm Cohda Wireless related to V2V. The partnership involves Cohda’s Cohda MK2 WAVE-DSRC Radio, a Linux-based, GPS-equipped 802.11p radio for continual V2X sensing. While V2V requires similar wireless equipment in other cars or road equipment, it offers the advantage of “seeing” around corners, letting the car anticipate interactions at intersections.
Cohda is equipping about half of the vehicles involved in V2V research, “including 1,500 vehicles in the 2,800 vehicle Safety Pilot trial happening in Michigan right now,” Cohda CEO Paul Gray told Linux.com. (The Ann Arbor Safety Pilot trial was recently extended by the National Highway Traffic Safety Administration.)
“V2V can extend the horizon of what the autonomous vehicle is aware of,” Gray added. “Even the Google car will use V2V to supplement their sensors once there are other vehicles on the road fitted with V2V.”
Strategy Analytics’ Fitzgerald, however, is more skeptical about the technology. “Slow V2X implementation and the lack of standards globally will reduce the ability for V2X to help on the self-driving vehicle front,” he told Linux.com.
Challenges to Development
Self-driving technology has come a long way since the DARPA Challenges. The Google technology still costs about $100,000, not counting the Prius itself. The lidar (light detection and ranging) laser range finder – the distinctive bubble atop the Google cars – represents over half the cost. Some prototypes are trying to make do without it, depending solely on less expensive radar, which can see farther but is not as precise. Meanwhile, prices are dropping quickly on other components, including computers, stereo cameras, ultrasonics, and sensors.
Google’s Sergey Brin has predicted self-driving cars would be available “for everyone” by 2017. That’s about three years too early, according to Navigant, which agrees with Nissan’s 2020 projection. Others say that vehicles built from scratch for autonomous operation won’t be available until late in the 2020s.
Availability doesn’t necessarily mean widespread usage, however. The obstacle is not the technology, which is solid enough for California to start testing freeway “autopilot lanes” with 120 mile-an-hour speed limits. According to Navigant, the challenge will be to overcome a wall of “liability and legislation.”
Legislatures, courts, and insurance companies will not only need to know who to blame after a crash – they need to be convinced that driverless cars are as safe as Google claims, i.e. capable of reducing the over 34,000 automotive deaths per year in the U.S. by at least half. Implementation could be further delayed by unions, which are expected to fight hard against the existential threat of the car-bots in commercial fleets.
According to Navigant, the public will adjust more quickly, especially as they grow accustomed to semi-autonomous features. Once consumers realize that most current prototypes allow humans to take the wheel, fears of losing control should diminish.
Car-buyers may also respond to touted advantages, including the potential mobilization of elderly, handicapped, and intoxicated people, or just normal commuters trying to carve out more productive time.
Autonomous tech could also reduce traffic and improve fuel efficiency thanks to smarter, more coordinated driving. The fuel-efficiency gains may take a while, considering the heavy weight of autonomous vehicles, but more convenient car-sharing services and coordinated driving fleets could also reduce fuel consumption.
Google vs. the Automakers
One of the biggest obstacles to autonomous cars is the automotive industry itself. Google was rebuffed by car manufacturers before it started courting auto-components companies like Continental. In addition to fearing the invasion of Google into their turf (shades of Google TV), automakers have long been resistant to new technologies. Others have suggested that carmakers fear the impact of fewer accidents and easier car-sharing on repairs and car sales.
Even as some U.S. automakers prep autonomous designs of their own, they are already lobbying politicians to limit autonomous technology in favor of semi-autonomous gear. According to The Wall Street Journal, legislation is being proposed that would require a 25-mph speed limit and a foam front bumper for self-driving models.
Strategy Analytics’s Fitzgerald would not speculate on the OS landscape for self-driving cars, but suggested that the technology would evolve out of telematics systems rather than IVI, where Linux is more established. “Self-driving vehicles will use some aspects of an IVI OS, but their safety critical nature will most likely follow OSs that are used in aviation and drones,” said Fitzgerald. This telematics focus, he suggests, means real-time operating systems (RTOSes) like Green Hills Integrity and QNX should have an advantage over Linux and Windows Embedded Automotive.
On the other hand, Google has surprised the automotive industry with the success of a car computer that lacks a fully deterministic, real-time OS. Its Ubuntu build does not even use a real-time Linux kernel, but gets by on SCHED_FIFO and control groups to provide “realtime-ish” response, according to an LWN.net report.
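The SCHED_FIFO mechanism cited in the LWN report is exposed to userspace through sched_setscheduler(2), or from the shell via chrt(1). A small sketch (the realtime invocation itself needs root or CAP_SYS_NICE, and the worker binary named here is hypothetical):

```shell
# List the scheduling policies and priority ranges this kernel supports
# (SCHED_FIFO realtime priorities are typically 1-99).
chrt -m

# With root (or CAP_SYS_NICE), run a latency-sensitive task under
# SCHED_FIFO at priority 50; it then preempts all normal-policy tasks:
#   chrt -f 50 ./sensor_loop     # ./sensor_loop is a hypothetical binary
```

A SCHED_FIFO task runs until it blocks or a higher-priority realtime task appears, which is how a stock kernel can deliver the "realtime-ish" latencies the report describes without a fully deterministic RTOS.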
While most of the Linux prototypes have used x86 chips, ARM chip manufacturers are prepping processors aimed at telematics, including Freescale’s recently tipped i.MX6 Next and i.MX8 processors. Combined with Linux’s progress in IVI systems, and the prototypes from Google, GM, and VW, these developments bode well for a major Linux presence in autonomous vehicles.
The Man Who Would Build a Computer the Size of the Entire Internet
Inside the massive data centers that drive things like Google Search and Gmail and Google Maps, you’ll find tens of thousands of machines, each small enough to hold in your arms. But thanks to a new breed of software that spans this sea of servers, the entire data center operates like a single system: one giant computer that runs any application the company throws at it.
A Google application like Gmail doesn’t run on a particular server or even a select group of servers. It runs on the data center, grabbing computing power from any machine that can spare it. Google calls this “warehouse-scale computing,” and for some, it’s an idea so large they have trouble wrapping their heads around it.
Solomon Hykes isn’t one of them. He aims for something even bigger. With a new open-source software project known as Docker, he wants to build a computer the size of the internet.
Read more at Wired.
Red Hat To Oracle: Have You Tried Free?
Oracle probably isn’t the first company that comes to mind when words like “austerity” are used. Perhaps for that very reason Oracle president Mark Hurd recently took to the blog-o-sphere to argue that going all-in with Oracle, even at a premium price, delivers better value. Ironically, Red Hat CEO Jim Whitehurst used the same LinkedIn blog platform just a few days earlier to paint a very different picture of what customer value looks like.
Hint: It looks a lot like “free.”
Fedora 20 Moves Ahead With Wayland Tech Preview
If all goes according to plan by Red Hat engineers operating in conjunction with Intel, Fedora 20 will be the first tier-one Linux distribution with decent support for Wayland and a usable desktop environment having its own compositor…