
Let’s Encrypt: Every Server on the Internet Should Have a Certificate

The web is not secure. As of August 2016, only 45.5 percent of Firefox page loads are HTTPS, according to Josh Aas, co-founder and executive director of Internet Security Research Group. This number should be 100 percent, he said in his talk called “Let’s Encrypt: A Free, Automated, and Open Certificate Authority” at LinuxCon North America.

Why is HTTPS so important? Because without security, users are not in control of their data and unencrypted traffic can be modified. The web is wonderfully complex and, Aas said, it’s a fool’s errand to try to protect this particular thing or that. Instead, we need to protect everything. That’s why, in the summer of 2012, Aas and his friend and co-worker Eric Rescorla decided to address the problem and began working on what would become the Let’s Encrypt project.

The web is not secure because security is seen as too difficult, said Aas. But, security only involves two main requirements: encryption and authentication. You can’t really have one without the other. The encryption part is relatively easy; the authentication part, however, is hard and requires a certificate. As the two developers explored various options to address this, they realized that any viable solution meant they needed a new Certificate Authority (CA). And, they wanted this CA to be free, automated, open, and global.

These features break down some of the existing obstacles to authentication. For example, making authentication free makes it easy to obtain, automation brings ease of use, reliability, and scalability, and the global factor means anyone can get a certificate.

In explaining the history of the project, Aas said they spent the first couple of years just building the foundation of the project, getting sponsors, and so forth. Their initial backers were Akamai, Mozilla, Cisco, and the EFF; their CA partner was IdenTrust. In April of 2015, however, Let’s Encrypt became a Linux Foundation project, and The Linux Foundation’s organizational development support has allowed the project to focus on their technical operations, Aas said.

Built-in Is Best

Let’s Encrypt works through the ACME protocol, which is “DHCP for certificates,” Aas said. The Boulder software implements ACME, running on the Let’s Encrypt infrastructure, consisting of 42 rack units of hardware between two highly secure sites. Linux is the primary operating system, and there’s a lot of physical and logical redundancy built in.

They issue three types of certificates and have made the process of getting a certificate as simple as possible.

“We want every server on the Internet to have a certificate,” said Aas.

The issuance process involves a series of challenges between the ACME client and ACME server. If you complete all the challenges, you get a cert. The challenges, which are aimed at proving you have control over the domain, include putting a file on your web server, provisioning a virtual host at your domain’s IP address, or provisioning a DNS record for your domain. Additionally, there are three types of clients to use: simple, full-featured, and built-in — the last of which is preferred.
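
Concretely, the file-on-your-web-server variant of these challenges (HTTP-01, as later standardized in RFC 8555) has the client serve a "key authorization" string, the challenge token joined to a digest of the account's public key, at a well-known URL on the domain. A minimal sketch in Python; the account key below is a placeholder for illustration, not a real key:

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """RFC 7638 thumbprint: SHA-256 over the canonical JSON of the
    public key's required members, base64url-encoded without padding.
    Assumes `jwk` already contains only the required members."""
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def key_authorization(token, account_jwk):
    """The string the ACME client must serve at
    http://<domain>/.well-known/acme-challenge/<token>"""
    return token + "." + jwk_thumbprint(account_jwk)

# Placeholder EC public key, for illustration only.
account_jwk = {"kty": "EC", "crv": "P-256", "x": "placeholder-x", "y": "placeholder-y"}
print(key_authorization("some-challenge-token", account_jwk))
```

The ACME server then fetches that URL itself; if the body matches, control of the domain is proven and the certificate is issued.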

“Built-in is the best client experience,” Aas said. “It all just happens for you.”

Currently, Let’s Encrypt certificates have a 90-day lifetime. Shorter lifetimes are important for security, Aas said, because they encourage automation and limit damage in the case of compromise. This is still not ideal, he noted. Revocation is not an option, so if the certificate gets stolen, you’re stuck until it expires. For some people, 90 days is still too long, and shorter lifetimes are something they’re considering. Again, Aas said, “If it’s all automated, it doesn’t matter… It just happens.”
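
That automation can be as simple as a daily job that checks the remaining lifetime and renews once the certificate enters its final stretch. A rough sketch of the decision, assuming a certbot-style 30-day threshold and OpenSSL's notAfter timestamp format:

```python
import datetime

RENEW_THRESHOLD_DAYS = 30  # renew in the final third of the 90-day lifetime

def days_left(not_after, now):
    """Parse an OpenSSL-style notAfter timestamp
    (e.g. 'Jan  1 00:00:00 2026 GMT') and return whole days remaining."""
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry - now).days

def needs_renewal(not_after, now):
    """True once fewer than RENEW_THRESHOLD_DAYS of lifetime remain."""
    return days_left(not_after, now) < RENEW_THRESHOLD_DAYS

print(days_left("Jan  1 00:00:00 2026 GMT", datetime.datetime(2025, 10, 3)))  # 90
```

Run from cron, a check like this makes the 90-day (or shorter) lifetime invisible to the operator, which is exactly Aas's point.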

Additionally, Aas noted that Let’s Encrypt’s policy is not to revoke certificates based on suspicion. “Do you really want CAs to be the content police of the web?” Let’s Encrypt doesn’t want to be in that position; it becomes censorship, he said.

Let’s Encrypt now has 5.3 million active certs, which equates to 8.5 million active domains. And, Aas said, 92 percent of Let’s Encrypt certificates are issued to domains that didn’t have certificates before.

He concluded by saying that we have a chance within 2016 to create a web that is more encrypted than not. You can take the next step by adopting encryption via TLS by default.

Best Practices for Using Open Source in Your DevOps Toolchain

Last week, we hosted another episode of our Continuous Discussion (#c9d9) video podcast, featuring expert panelists discussing the benefits of using open source tools and how DevOps can mitigate risks in quality and security when incorporating open source into your application code, environments and tool chain.

Our expert panel included: Chris Stump, full stack Chicago Ruby on Rails developer who’s big on Docker, Linux, and DevOps, currently working at Airspace Technologies; Eduardo Piairo, database administrator at Celfinet; Moritz Lenz, software engineer, architect, and contributor to the Perl 6 language working at noris network, where he set up their Continuous Delivery pipeline; and our very own Anders Wallgren and Sam Fell.

During the episode, panelists told viewers where they use open source in their code and processes, and discussed the quality, security and legal implications associated with using open source tools, and how DevOps can help.

Open Source – Free as in Beer or Free as in Puppy?

Stump says open source can be both free as in beer and as in puppy: “I think it’s both. If you are new to open source it’s definitely going to feel like it’s free as in puppy because you are in unfamiliar territory and for any one task that you want to do it’s going to feel like there is a million different projects. Once you have your tooling, you know what everyone is using and what is well supported, then it becomes pretty easy and it becomes more like free as in beer.”

Piairo explains that open source isn’t completely free: “In open source you always have a cost, it depends on the size of your team, the complexity of your task and the frequency of change, and every change has a cost. The good thing about open source is you can contribute to the change and take it in your direction.”

Open source allows for a free flow of ideas, explains Lenz: “Some big companies like Google and Facebook open source their own stuff, and they get additional ideas for what to do with their tools and how to improve them. They also get patches. But I think the ideas are the main thing, so open source also allows us a free flow of ideas which you can then use in commercial products.”

Fell expands more on the flow of ideas in open source: “The idea of competing with the potential innovation from a cloud of people is a very difficult thing to do. You will find lots of outside/in ideas and lots of enthusiasm for those ideas.”

Culture is a big part of successful open source, says Wallgren: “As with companies, there are communities that have good open source culture and communities that have bad open source culture – because it’s people. There are open source communities that are open, that are welcoming, that are aware of their own shortcomings and strengths. Then there’s other open source communities that are like ‘Yeah, sorry we don’t really want your contribution,’ and it’s difficult to get things going.”

Where Do You Use Open Source?

Some companies work with dozens of different open source tools, explains Fell: “When we talk to customers or prospects, most of them have about 60 tools in their pipeline – 60 different combinations of things just to move something out of Source Code Repository Land and into Production Land, to help with the various configurations or monitoring that needs to be done.”

There isn’t anywhere Stump can’t use open source in his pipeline: “As a Ruby on Rails developer, which is an open source stack, I pretty much use open source through and through, from the front-end using Angular or React JavaScript frameworks all the way down to the back-end, to a Postgres database with all the Ruby Gems that lie in between them and make our projects run. For servers we definitely use some Debian variants, usually Ubuntu Server; for containerization we stick with Docker.”

Wallgren advises to question what it takes to work in an open source tool: “The thing you have to be concerned about is, what is the cost of ownership for this thing? Is it something that has to grow with me; is it something where if it’s broken it’s a really big problem; or, do I have alternatives? There’s things you have to worry about, but for the most part I use open source just about anywhere. It’s just another tool.”

Lenz uses open source tools for essentially every part of the pipeline: “There are areas where we use it because it’s just the best fit, but wherever open source excels we use it and that’s basically 90% of everything that we do. We use Puppet, we use Ansible, we use all the different test frameworks for Perl and Python and for automating the browser, Selenium, all this good stuff, both inside the product as libraries, and as backing services, for authentication, and then in the pipeline, in the build tool chain, test, deployment, everything… statistics, monitoring, you name it.”

Piairo gives his advice on picking the right open source tools: “As a startup we started with a lot of open source tools, and with the evolution and complexity we started to migrate to commercial tools. We say ‘Try it before you buy it.’ We started to try different combinations, Jenkins, TFS, TFS Build, TFS Release – for testing we use tSQLt, a framework for testing databases.”

Quality Concerns?

There are three main areas to look for when assessing quality in open source tools, per Stump: “You have to know how to find the quality in open source tooling and usually it boils down to: how active is the community, how widely used is that software, and is there a strong leadership team behind that particular project (and do they have good quality control practices). Once you identify a tool that has all those attributes for the thing you are trying to accomplish, I think quality is just on par if not better than a lot of proprietary solutions.”

Open source quality requires individual responsibility, according to Fell: “When you are using open source components as part of your product what exposure do you have from a quality perspective? Not that they are any more or less quality than what you would have if you did it yourself, but it doesn’t abdicate you from taking responsibility for it when it’s there. If there is a quality problem, if it’s open source you can go in and try to fix it, but if they don’t accept your changes then you are stuck.”

Take extra quality precautions if open source is baked into your product, says Piairo: “If the tools support your pipeline, you can better manage the exposure to errors, but if the tool is included in your product then you have to assure that quality is there. Your client will talk to you if some problem happens, not the maker of the tool.”

Even though you have the freedom to change the source code in open source, it’s not an easy task to do, says Wallgren: “Even if you have the source code you still may be kind of screwed because you may not be able to build it, you may not understand it, you may not be able to document it – you now have to go solve a problem that would be nice to have somebody else solve for you.”

Security Concerns?

 

Quality concerns trump security concerns in open source, per Piairo: “My main concern is about quality, as for security – it’s a closed environment so it’s more controlled. We try not to deliver open source to the client, we use it to support our activity.”

Having the right toolchain is important in ensuring open source security, says Lenz: “Last year there was a study by HP Security finding that half of the breaches they investigated were vulnerabilities that had been known for two years or longer. Whether the patch comes out this week or maybe in two weeks is not as relevant; often it is a question of, do I have the toolchain to notice that I have to build a new product and ideally automatically upgrade, build, integrate, test and then release the product.”
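
The toolchain Lenz alludes to can start as a build step that diffs the pinned dependency list against an advisory feed. A toy sketch; the package names, versions, and feed below are invented for illustration (a real pipeline would query an advisory database such as NVD):

```python
# Hypothetical advisory feed: (package, version) pairs with known CVEs.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1f"),  # e.g. a Heartbleed-era build
    ("bash", "4.3.0"),
}

def audit(dependencies):
    """Return the pinned dependencies matching a known advisory, so the
    pipeline can fail the build and trigger an automated upgrade."""
    return sorted(dep for dep in dependencies if dep in KNOWN_VULNERABLE)

deps = [("openssl", "1.0.1f"), ("nginx", "1.11.3")]
flagged = audit(deps)
if flagged:
    print("build blocked, upgrade needed:", flagged)
```

Wired into continuous delivery, the same check that blocks the build can kick off the upgrade-build-test-release cycle Lenz describes.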

Having code easily visible to any and all means flaws can actually be fixed more quickly in open source, says Stump: “I would say that with open source it’s like a double-edged sword, because the code is open so there are more eyeballs on the code to find security flaws, but there are also more eyeballs on the code that can exploit those flaws. But most of the time I think it leads to them getting discovered and patched quickly.”

Transparency is key to addressing security concerns in open source, says Fell: “Transparency is very important. Apple had a security case where the FBI tried to hack into the phone. Up until now Apple had never released the source code for the kernel of the iPhone, and just this last time around when they did their SDK for the developers for iOS 10 they apparently released the source code un-obfuscated so that people could really start to dig in. You ask yourself, will people find exploits? Probably. Will there be transparency around those exploits? Probably.”

More testing needs to be done to ensure security of open source code, advises Wallgren: “Just because somebody finds a problem doesn’t mean it’s automatically going to be updated and patched in all the applications that use the open source platform, because updating is a big deal. The lack of unit testing in open source code is pretty deplorable. It’s stunning how easy some of these things get by and how long bugs can sit there, and the mean time to discovery for bugs is pretty long. Everyone assumes someone else is reading the code, finding the bugs, and fixing them, and is that really true?”

Legal Concerns?

Be prepared to have to share what open source is in your product or code, says Fell: “When you are trying to acquire a company or a solution, a lot of the times they’ll say ‘Tell me what open source components that aren’t your code are within your code.’ It’s not necessarily a black mark against you, but it’s something you need to be aware of.”

Even Electric Cloud clients ask to know what open source or third-party tools we use in our products, explains Wallgren: “This is something a decent portion of our customer community cares about: what third-party tools, not just open source tools, are we using, what are the relevant licenses, are we allowed to ship it. All of those things are concerns of anybody who even buys our software.”

Stump recommends spending more time learning the different open source licenses to help you pick the right tool for your needs: “Open source, I like to think, maybe demands a little more respect rather than just clicking through on the 48 pages of the user license agreement that we are all used to seeing, simply because it’s people’s free time and people aren’t getting paid for it generally and there are a lot of different open source licenses out there. If you work with and deal with open source you need to be aware of the difference between the MIT license, BSD license, GPL, that can affect your tooling choices.”

Lenz reminds us that there are legal implications for all types of software: “Proprietary software can also come at a high legal risk, for example there is software that is licensed by the CPU, by the core count, so when you are in a virtualization environment, can you safely use that software? If yes, for which cores do you pay? Do you pay for the cores that are assigned for the virtual machine? To the whole cluster? There are actually very valid legal reasons not to use some types of proprietary source software.”

Watch the full episode here

 

Open Source Drone Controller Has an FPGA-Enhanced Brain

Aerotenna has launched an open source, $499 OcPoC drone flight controller that runs Linux on an Altera Cyclone V ARM/FPGA SoC. Lawrence, Kansas-based Aerotenna, which bills itself as the leading provider of innovative microwave sensors and flight control systems, describes OcPoC (Octagonal Pilot on Chip) as a ready-to-fly, open source flight control platform. The […]

 

Read more at Hackerboards.com.

Linus Torvalds Reflects on 25 Years of Linux

When Linus Torvalds first announced his new operating system, Linux, on Aug. 25, 1991, it was a “completely personal project,” Torvalds said at LinuxCon today. The kernel totaled 10,000 lines of code that would only run on the same type of hard disk Torvalds himself used because the geometry of the hard disk was hard-coded into the source code. And, he expected only other students to be interested in studying it as a theory.

Those early days were his most memorable, he said, when he was working to solve tough problems and create something out of nothing.

“Even the slightest sign of life makes you go ‘Wow, I really mastered this machine,’” Torvalds told Dirk Hohndel, who interviewed him on stage. “You’re pumped because you got a character on the screen.”

The Linux kernel now supports more than 80 different architectures, Torvalds says, and counts 22 million lines of code with more than 5,000 developers from about 500 companies contributing, according to the latest Linux Kernel Development Report released this week. It is the big, professional project that Torvalds himself didn’t expect in that first public announcement 25 years ago.

These days, Torvalds no longer writes much code. And during the past 15 months, he was responsible for signing off on just 0.2 percent of patches submitted, according to the kernel report. Instead, he’s focused on making sure the development and release process stays on track.

“I can be proud when the release process really works and people get things done and we don’t have a lot of issues,” Torvalds said.

During the past 10 years, the release schedule has stayed remarkably consistent. A new kernel is released every nine to 10 weeks, working at an average rate of 7.8 changes per hour. For the 3.19 to 4.7 releases, the kernel community added nearly 11 files and 4,600 lines of code every day, according to the report.

It has not always been smooth sailing, however. As Torvalds pointed out, “it really did take a while before it turned professional, and some of us still struggle with it at times.”

When Linus Torvalds Almost Quit

Fifteen years ago, when commercial interest in Linux began to increase but the kernel community was still very small, the process started to become unmanageable, Torvalds said. The community decided to switch to the BitKeeper revision control system, which was a lifesaver for Torvalds “because the process before that was such a disaster,” he said.

“That was probably the only time in the history of Linux where I was like, ‘this is not working,’” Torvalds said. “In retrospect that might have been the moment where I just gave up.”

He later created Git to further scale the development process, when BitKeeper became too unwieldy.

Since then, things have run much more smoothly. To be sure, there have been points when Torvalds became so frustrated he considered walking away, he conceded. He would get angry and pledge to take a week off, but he would inevitably be back the next day after taking some time to cool off.

“Power management was such a bummer for so many years. We really struggled with that, where you could just take a random laptop and suspend it and resume it and assume it works,” Torvalds said.

Torvalds’ own mistakes during the 2.4 cycle also created problems with memory management that took a long time and a lot of effort to fix, he said.

For the most part, however, the technical issues have been small compared to the social challenges involved in organizing a project largely consisting of volunteers at first, and then kernel developers paid by companies with competing interests, operating in disparate markets with vastly different computing needs.

“I used to be worried about fragmentation and thought it was inevitable at some point,” Torvalds said.

This is where the GPLv2 (version 2 of the GNU General Public License) — which governs how the software can be copied, distributed, and modified — has been critical to the success of the project. The license’s requirement that changes to the code be made available has been key to avoiding the fragmentation that plagued other open source projects, Torvalds said. Under the GPL, developers can rest assured that their code will remain open and won’t be co-opted by corporate ownership.

“I love the GPL2,” Torvalds said. “It has been one of the defining factors of Linux.”

Today, the newest operating systems such as Zephyr and Fuchsia are being developed for tiny systems designed for the Internet of Things. Torvalds admitted that he does not look at the source code for these projects anymore. He contends that it isn’t helpful for him to look at source code for a project unless he wants to fix it. However, he stated that in order for a project to become big and attract contributors, the license is important.

“Under the GPL… nobody will take advantage of your code, it will remain free,” he said.

LinuxCon: Cloud Native Computing Foundation Expands

Dan Kohn, Executive Director of the CNCF, details what his organization is now doing.

 

Read more at Datamation.

Red Hat Updates its Kernel-based Virtual Machine

Red Hat updated its Kernel-based Virtual Machine (KVM)-powered virtualization platform for both Linux- and Windows-based workloads.

Red Hat Virtualization 4 includes both a high-performing hypervisor (Red Hat Virtualization Host) and a web-based virtualization resource manager (Red Hat Virtualization Manager) for management of an enterprise’s virtualization infrastructure. Specifically, Red Hat Virtualization 4 introduces new and enhanced capabilities around:

  • Performance and extensibility
  • Management and automation
  • Support for OpenStack and Linux containers
  • Security and reliability
  • Centralized networking through an external, third-party API

“While virtualization remains a key underpinning for the modern datacenter, customer needs are rapidly evolving to demand more than simply virtualizing traditional workloads. Modern virtualization platforms need to address these standard scenarios while making way for the emergence of virtualized containers and cloud computing, key aspects that Red Hat Virtualization can address in an open, extensible fashion,” stated Gary Chen, research manager, Software Defined Compute, IDC.

http://www.redhat.com

How DIGIT Created High Availability on the Public Cloud to Keep Its Games Running

The mobile gaming company must deliver a seamless experience for its gamers and allow for spikes in player activity on its Massively Multiplayer Online gaming platform. That’s why the company built a high-availability infrastructure that runs on Amazon Web Services (AWS) and allows them to launch a cluster in less than 5 minutes using Apache Mesos.

“We want to enable developers to iterate fast on their ideas and to be able to deploy new code changes as fast as possible,” say DevOps engineer Emmanuel Rieg and build and release engineer Ross McKinley. “We’re aiming at deploying multiple times a week, whenever a given feature is stable or a bug is fixed.”

Rieg and McKinley will give a talk next week at MesosCon Europe on how they went from a blank canvas AWS account to a fully functional PaaS, to set up their immutable infrastructure.  Here they give a short preview of their talk and share tips for developing on top of Mesos.

Linux.com: Why do you build your applications on AWS?

Emmanuel & Ross: We are very impressed by the diversity of services offered by Amazon. This is coupled with good AWS support in other tools we use. Developer friendliness is really important to us. The ability to run our cluster in an isolated environment (VPC) was a deciding factor.

Linux.com: How do you create high availability on the public cloud?

Emmanuel & Ross: HA is achievable on the public cloud. In our case, we couple redundancy across Availability Zones (AZs) with monitoring and autonomous systems to ensure our games can keep running. Using only one AZ will not ensure HA, as that entire zone could fail for a short time. Each of our applications runs in multiple containers at the same time, all monitored to handle current load. When one container is down, another takes its place. The same applies for all parts of our infrastructure. All services are autoscaling and behind a service discovery system. On top of this, nodes in our cluster are deployed across multiple AZs, each of which is an isolated network with its own NAT gateway. This way we can survive a whole zone going down.
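
The reconciliation they describe can be pictured as a small planner: count the healthy replicas per zone, then relaunch in the least-loaded zones first. This is only an illustrative sketch (zone names and the replica count are made up; in DIGIT's stack the Mesos scheduler and service discovery layer do this work):

```python
from collections import Counter

DESIRED = 3  # healthy replicas we always want running (illustrative)

def plan_replacements(replicas, zones):
    """replicas: list of (container_id, zone, is_healthy) tuples.
    Return the zones in which to launch replacement containers so the
    service is back to DESIRED healthy copies, preferring the zones
    that currently hold the fewest healthy replicas (so no single
    Availability Zone carries the whole service)."""
    healthy_per_zone = Counter(zone for _, zone, ok in replicas if ok)
    missing = DESIRED - sum(healthy_per_zone.values())
    plan = []
    for _ in range(max(0, missing)):
        # Pick the zone currently holding the fewest healthy replicas.
        target = min(zones, key=lambda z: healthy_per_zone[z])
        healthy_per_zone[target] += 1
        plan.append(target)
    return plan

# One container in zone "b" is unhealthy; the planner relaunches there.
print(plan_replacements([("c1", "a", True), ("c2", "b", False), ("c3", "c", True)],
                        ["a", "b", "c"]))  # ['b']
```

Spreading replacements across zones this way is what lets the whole service survive an AZ outage rather than just a single container failure.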


Linux.com: What role does Mesos play in your infrastructure?

Emmanuel & Ross: Mesos is the foundation we use to run all of our environments. This allows us to scale quickly, handle spikes in players gracefully, and enables our tech teams to develop with velocity.

Linux.com: Why is speed (i.e., launching a cluster in under 5 minutes) important to your business?

Emmanuel & Ross: As we use an Immutable Infrastructure, many components can be affected when performing large updates. Keeping the feedback loop short on infrastructure changes enables us to react to problems and deploy fixes with minimal user impact.

We want to enable developers to iterate fast on their ideas and to be able to deploy new code changes as fast as possible. We’re aiming at deploying multiple times a week, whenever a given feature is stable or a bug is fixed. This also enables us to roll back awry deployments.

Linux.com: What is your top tip for creating development environments on top of Mesos?

Emmanuel & Ross: Have a comprehensive monitoring solution, automate everything, and codify your infrastructure.

Good monitoring is the key to a successful development environment. Without monitoring, you’re flying blind and will have a hard time tracking down issues.

A fully automated continuous delivery system for validating and pushing changes makes it easy to ensure that bad practices, like manual intervention and works-of-art, are avoided.

Infrastructure-as-Code is mandatory to prevent servers and infrastructure from becoming a work-of-art which cannot be replicated. Treat your servers as cattle: each one is fully replaceable at any time.

 

Join the Apache Mesos community at MesosCon Europe on Aug. 31 – Sept. 1, 2016! Look forward to 40+ talks from users, developers and maintainers deploying Apache Mesos including Netflix, Apple, Twitter and others. Register now.

Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

 

Be Bold, Be Curious, and Be Open, Advise Outreachy Participants

In Tuesday afternoon’s “Kernel Internship Report and Outreachy Panel” session at LinuxCon North America, interns and mentors involved with the Outreachy program spoke enthusiastically of their experiences with the program. The panel was moderated by Karen M. Sandler, Executive Director of the Software Freedom Conservancy, and organizer of Outreachy.

Sandler provided an overview of the Outreachy program, which offers a paid three-month internship for women and other underrepresented groups to work on a free and open source software project. Helen M Koike Fornazier, a former Outreachy intern and now a Software Engineer at Collabora, described her Linux kernel project involving video4linux, with Laurent Pinchart as her mentor. She wrote a driver, which simulates some media hardware using the Media API.

Although Fornazier’s work didn’t get merged into the mainline kernel, she is still developing it and hopes to get it merged later. Overall, she said, her goals within the project were met. She wrote a driver from scratch and was offered a great opportunity. “Outreachy helped a lot,” Fornazier said, noting that getting a real project to work on was key. “It’s easier than you think,” she added.

Bhaktipriya “Bhakti” Shridhar’s work, mentored by Tejun Heo, involved improving work queue implementation in the Linux kernel and removing 280 legacy workqueue interface users. Shridhar, who heard about Outreachy at school, found the Linux kernel community very supportive and expressed a wish that she could participate again.

“Having a special space for women and newbies is important. All your questions are encouraged and answered,” Shridhar said.

Outreachy allows participants to do only one internship, but many go on to participate in other projects, such as the Google Summer of Code, or form new local groups, according to Sandler. “Our interns take our values and spread the ideas elsewhere,” she said.

Life Changing

Former Outreachy mentor Tiffany Antopolski, who is now a teacher at Mohawk College and volunteer with Kids on Computers, said her involvement with Outreachy provided momentum and helped her discover a love of teaching.

“None of this would have happened without Outreachy,” she said. “In many ways, it changed my life.”

Red Hat engineer and Outreachy mentor Rik van Riel said mentoring seemed like a good way to get more involved with the project. He said it was very satisfying to teach people and help them find the answer. He noted that communication was vital to success.

“Interns need to ask questions,” he said. “If they are quiet, reach out to them.” The community can be intimidating, and you have to make sure participants stay engaged and keep asking questions. “Sometimes you just need to point them in the right direction,” he added.

Antopolski agreed, saying she tells interns, “I’m not here to teach you how to code; I’m here to motivate you.” To be a mentor, she added, you have to be excited about it. “You have to love the project and you have to want to teach people.”

van Riel said participants mainly need to learn how to work within the kernel community. They already know how to program, he said, but they need to learn how to write changelogs, for example, and become familiar with the overall process.

For those considering getting involved, Shridhar’s advice is to be bold, be curious, and be open. “If you find something that you like, pursue it,” said Antopolski.

Currently, Outreachy internships are open internationally to women (cis and trans), trans men, and genderqueer people. Additionally, they are open to residents and nationals of the United States of any gender who are Black/African American, Hispanic/Latin, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander. They are planning to expand the program to other participants in the future.

 

NGINX’s Plan to Create a $1 Billion Business from its Open Source Software

NGINX Inc. has set an ambitious goal for itself: to become a $1 billion company within the next eight to 10 years. It will not be an easy task, especially given that its biggest competitor may be its own well-engineered open source software. For NGINX, the key to success will be winning customers in additional markets.

The open source NGINX project, which began in 2002, is a widely-used high-performance web server and reverse proxy. However, the commercial company, NGINX Inc., created to support the open source project, was founded much later, in 2011, with the first commercial product in 2013.

Read more at The New Stack

Google Fuchsia Eyes Non-Linux Things

Google’s latest operating system project, Fuchsia, may be largely a mystery, but it reinforces a truth that platform vendors are having, grudgingly, to acknowledge: one operating system does not fit all. For a company which has put so much effort into making Android an OS for all purposes, Google has a remarkable number of potentially conflicting platforms, now including Chrome OS, Brillo and Fuchsia.

Even though it looks like an experimental OS for embedded devices, Fuchsia was described by its own Google team as being designed for “modern phones and modern personal computers”, which might be just how Android and Chrome OS would describe themselves too. So is Google hedging its bets, extending Android to cars, homes and wearables while developing alternatives just in case? Or is there a more coordinated master plan at work?

Read more at The Register