
New Collaborative Group to Speed Real-Time Linux

The Linux Foundation’s announcement at LinuxCon this week that it was assuming funding control over the Real-Time Linux (RTL) project gave renewed hope that embedded Linux will complete its 15-year campaign to achieve equivalence with RTOSes in real-time operation. The RTL group is being reinvigorated as a Real-Time Linux Collaborative Project, with better funding, more developers, and closer integration with mainline kernel development.

According to the Linux Foundation, moving RTL under its umbrella “will save the industry millions of dollars in research and development.” The move will also “improve quality of the code through robust upstream kernel test infrastructure,” says the Foundation.

Over the past decade, the RTL project has been overseen, and more recently funded, by the Open Source Automation Development Lab (OSADL), which is continuing as a Gold member of the new collaborative project but will hand funding duties over to the Linux Foundation in January. The RTL project and OSADL have been responsible for maintaining the RT-Preempt (or PREEMPT-RT) patches and for periodically updating them against new mainline kernel releases.

The task is about 90 percent complete, according to Dr. Carsten Emde, longtime General Manager of OSADL. “It’s like building a house,” he explains. “The main components such as the walls, windows, and doors are already in place, or in our case, things like high-resolution timers, interrupt threads, and priority-inheritance mutexes. But then you need all these little bits and pieces such as carpets and wallpaper to finish the job.”
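For readers who haven’t worked with these building blocks, the priority-inheritance mutex Emde mentions is requested through the standard POSIX threads API; the RT patches supply the kernel support that makes it dependable. A minimal, illustrative C sketch of the call sequence (not project code) looks like this:

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        /* PTHREAD_PRIO_INHERIT: a low-priority thread holding the lock is
         * temporarily boosted to the priority of the highest-priority
         * waiter, which prevents unbounded priority inversion. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);
        puts("holding a priority-inheritance mutex");
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }

Build with: cc -o pi-demo pi-demo.c -lpthread. The same code compiles on any Linux system; what the RT patches add is that the kernel’s own internal locks also use priority inheritance, extending this determinism into kernel space.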

According to Emde, real-time Linux is already technologically equivalent to most real-time operating systems – assuming you’re willing to hassle with all the patches. “The goal of the project was to provide a Linux system with a predefined deterministic worst-case latency and nothing else,” says Emde. “This goal is reached today when a kernel is patched, and the same goal will be reached when a future unpatched mainline RT kernel will be used. The only – of course important – difference is that the maintenance work will be much less when we do no longer need to continually adapt off-tree components to mainline.”
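To make “predefined deterministic worst-case latency” concrete: an application on a PREEMPT-RT kernel typically locks its memory, switches to a real-time scheduling class, and paces itself with the high-resolution timer. A minimal sketch, assuming standard POSIX real-time APIs (the priority of 80 and the 1 ms period are arbitrary illustration values):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };

        /* Lock current and future pages so page faults cannot add latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* SCHED_FIFO: run until we block or yield; requires root or
         * CAP_SYS_NICE. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 5; i++) {
            next.tv_nsec += 1000000;              /* 1 ms period */
            if (next.tv_nsec >= 1000000000) {
                next.tv_nsec -= 1000000000;
                next.tv_sec += 1;
            }
            /* Sleeping to an absolute deadline avoids cumulative drift. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            /* ... do the deterministic work of one cycle here ... */
        }
        return 0;
    }

The same program runs on a stock kernel; the point of the RT patches is to bound the worst-case wakeup latency of that clock_nanosleep() call.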

The RTL Collaborative Group will continue under the guidance of Thomas Gleixner, the key maintainer over the past decade. This week, Gleixner was appointed a Linux Foundation Fellow, joining a select group that includes Linux kernel stable maintainer Greg Kroah-Hartman, Yocto Project maintainer Richard Purdie, and Linus Torvalds.

According to Emde, RTL’s secondary maintainer Steven Rostedt of Red Hat, who “maintains older but still maintained kernel versions,” will continue to participate in the project, along with Red Hat’s Ingo Molnár, who was a key developer of RTL but in recent years has played more of an advisory role. Somewhat surprisingly, however, Red Hat is not one of the RTL Collaborative Group’s members. Instead, Google takes the top spot as the lone Platinum member, while Gold members include National Instruments (NI), OSADL, and Texas Instruments (TI). Silver members include Altera, ARM, Intel, and IBM.

The Long Road to Real Time

When Linux first appeared in embedded devices more than 15 years ago, it faced an embedded computing market dominated by RTOSes such as Wind River’s VxWorks, which continue to offer the highly deterministic, hardened kernels required by many industrial, avionics, and transportation applications. Like Microsoft’s already established – and more real-time-capable – Windows CE, Linux faced resistance and outright mockery from potential industrial clients. These desktop-derived operating systems might be okay for lightweight consumer electronics, it was argued, but they lacked the hardened kernels that made RTOSes the choice for devices requiring deterministic task scheduling and split-second reliability.

Improving Linux’s real-time capabilities was an early goal of embedded Linux pioneers such as MontaVista. Over the years, RTL development was accelerated and formalized by various groups, including OSADL, which was founded in 2006, and the Real-Time Linux Foundation (RTLF). When RTLF merged with OSADL in 2009, OSADL and its RTL group took full ownership of the PREEMPT-RT patch maintenance and upstreaming process. OSADL also oversees other automation-related projects, such as Safety Critical Linux.

OSADL’s stewardship over RTL progressed in three stages: advocacy and outreach, testing and quality assessment, and finally, funding. Early on, OSADL’s role was to write articles, make presentations, organize training, and “spread the word” about the advantages of RTL, says Emde.  “To introduce a new technology such as Linux and its community-based development model into the rather conservative automation industry required first of all to build confidence,” he says. “Switching from a proprietary RTOS to Linux means that companies must introduce new strategies and processes in order to interact with a community.”

Later, OSADL moved on to providing technical performance data, establishing a quality assessment and testing center, and providing assistance to its industrial members in open source legal compliance and safety certifications.

As RTL matured – pulling even with the fading Windows CE in real-time capability and increasingly cutting into RTOS market share – rival real-time Linux projects, principally Xenomai, began to integrate with it.

“The success of the RT patches, and the clear prospective that they would eventually be merged completely, has led to a change of focus at Xenomai,” says Emde. “Xenomai 3.0 can be used in combination with the RT patches and provide so-called ‘skins’ that allow you to recycle real-time source code that was written for other systems. They haven’t been completely unified, however, since Xenomai uses a dual kernel approach whereas the RT patches apply only to a single Linux kernel.”

In more recent years, the RTL group’s various funding sources have dropped off, and OSADL took on that role, too. “When the development recently slowed down a bit because of a lack of funding, OSADL started its third milestone by directly funding Thomas Gleixner’s work,” says Emde.

As Emde wrote in an Oct. 5 blog entry, the growing expansion of Real-Time Linux beyond its core industrial base to areas like automotive and telecom suggested that the funding should be expanded as well. “It would not be entirely fair to let the automation industry fund the complete remaining work on its own, since other industries such as telecommunication also rely on the availability of a deterministic Linux kernel,” wrote Emde.

When the Linux Foundation showed interest in expanding its funding role, OSADL decided it would be “much more efficient to have a single funding and control channel,” says Emde. He adds, however, that as a Gold member, OSADL is still participating in the oversight of the project, and will continue its advocacy and quality assurance activities.

Automotive Looks for Real-Time Boost

RTL will continue to see its greatest growth in industrial applications where it will gradually replace RTOS applications, says Emde. Yet, it is also growing quickly in automotive, and will later spread to railway and avionics, he adds.

Indeed, the growing role of Linux in automotive appears to be key to the Linux Foundation’s goals for RTL, with potential collaborations with its Automotive Grade Linux (AGL) workgroup. Automotive may also be the chief motivator for Google’s high-profile participation, speculates Emde. In addition, TI is deeply involved in automotive with its Jacinto processors.

Linux-oriented automotive projects like AGL aim to move Linux beyond in-vehicle infotainment (IVI) into cluster controls and telematics, where RTOSes like QNX dominate. Autonomous vehicles have an even greater need for real-time performance.

Emde notes that OSADL’s SIL2LinuxMP project may play an important role in extending RTL into automotive. SIL2LinuxMP is not an automotive-specific project, but BMW is participating, and automotive is one of its key applications. The project aims to certify the base components required for RTL to run on a single- or multi-core COTS board, defining the bootloader, root filesystem, Linux kernel, and C-library bindings used to access RTL.

Autonomous drones and robots are also ripe for real-time, and Xenomai is already used in many robots, as well as some drones. Yet, RTL’s role will be limited in the wider embedded Linux world of consumer electronics and Internet of Things applications. The main barrier is the latency of wireless communications and the Internet itself.

“Real-time Linux will have a role within machine control and between machines and peripheral devices, but less between remote machines,” says Emde. “Real-time via Internet will probably never be possible.”

Zynq-Based Hacker Board Has FPGA, BT, and WiFi Too

Krtkl’s $60 “Snickerdoodle” SBC is aimed at robots and drones, and runs Linux on an ARM/FPGA Zynq-7000. You get WiFi, BT, 154 GPIOs, and expansion options. The Snickerdoodle appears to be the most affordable single-board computer yet to run on the Xilinx Zynq system-on-chip, which combines dual ARM Cortex-A9 cores with an FPGA subsystem…

Read more at LinuxGizmos

SHA1 Algorithm Securing E-Commerce and Software Could Break by Year’s End

Researchers warn the widely used algorithm should be retired sooner. SHA1, one of the Internet’s most crucial cryptographic algorithms, is so vulnerable to a newly refined attack that it may be broken by real-world hackers in the next three months, an international team of researchers warned Thursday.

SHA1 has long been considered theoretically broken, and all major browsers had already planned to stop accepting SHA1-based signatures starting in January 2017. Now, researchers with Centrum Wiskunde & Informatica in the Netherlands, Inria in France, and Nanyang Technological University in Singapore have released a paper that argues real-world attacks that compromise the algorithm will be possible well before the cut-off date. 

Read more at Ars Technica

Dell in Talks to Buy Data Storage Company EMC

Dell Inc, the world’s third-largest personal computer maker, is in talks to buy data storage company EMC Corp (EMC.N), a person familiar with the matter said, in what could be one of the biggest technology deals ever.

A deal could be an option for EMC, under pressure from activist investor Elliott Management Corp to spin off majority-owned VMware Inc (VMW.N). The terms being discussed were not known, but if the deal goes through it would top Avago Technologies’ (AVGO.O) $37 billion offer for Broadcom (BRCM.O). EMC has a market value of about $50 billion.

Read more at Reuters

IBM Aims Linux OpenPOWER Systems at Intel

IBM and its OpenPOWER partners are launching three flavors of Linux servers in an effort to swipe big data workloads from Intel. The OpenPOWER effort revolves around open hardware designs that run on IBM’s POWER processor. The aim of the group, which includes IBM, Nvidia, Mellanox, Canonical and Wistron, is to offer a counterweight to Intel in the data center. 

According to IBM, the Power Systems LC line can handle big data workloads faster than Intel-based servers. That claim is based on IBM’s internal testing showing that the Linux servers can complete Apache Spark workloads faster…

Read more at ZDNet News

NetBSD 7.0 Released With New ARM Board Support, Lua Kernel Scripting

NetBSD 7.0 was quietly released at the end of September. NetBSD 7.0 is a big release for this BSD operating system: it features Lua kernel scripting support, GCC 4.8.4 as the default compiler, DRM/KMS graphics support, multi-core support for ARM, Raspberry Pi 2 support with SMP, NPF improvements, and a variety of other enhancements.

Read more at Phoronix

From Robotics to Analytics, Why NASA Is Offering Startups Over 1,000 Patents for ‘Free’

Startups could get a major lift from NASA if they can find a technology at the space agency that fits their commercial ambitions.

US space agency NASA is offering startups a license to 15 categories of patented NASA technologies for free. The move follows Google’s offer earlier this year of ‘free’ patents to select startups – and it could be just as valuable given the 1,200 patented technologies available for license under NASA’s new Technology Transfer Program.

NASA hopes the program will make life easier for cash-strapped startups short on intellectual property…

Read more at ZDNet News

Unomi: A Bridge Between Privacy and Digital Marketing

We live in a digital age where personalized experience is becoming a mandatory part of business. Companies are gathering massive amounts of personal data about their users — whether we like it or not — to deliver personalized, enhanced experiences. Companies like Apple, for example, need personal data to deliver news, music, and other services to their paying users. People are being tracked online 24×7, and the ability to link this data to user behavior and then pinpoint the person behind it is a serious privacy problem.

What we need is a mechanism that strikes a balance between privacy and the use of personal data; we need to rebuild users’ trust.

And that’s exactly what Unomi does.

Is Unomi the Answer?

The objective of Unomi is to deliver a software core that can protect customers’ privacy without taking away a valuable resource that helps companies improve their products. Its primary goal is to anonymize personal information, protecting users’ privacy while still giving companies the data they need to improve their services. Unomi is a reference implementation of the OASIS Context Server standard.
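Unomi itself is Java-based, but the underlying idea is easy to illustrate with pseudonymization: replace a direct identifier with a salted one-way hash, so per-visitor statistics still add up while the identity behind them stays hidden. In the minimal C sketch below, the salt, identifier, and (deliberately simple, non-cryptographic) hash are illustrative assumptions, not Unomi’s actual mechanism:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* FNV-1a, a simple non-cryptographic hash used here only for
     * illustration; a real deployment would use a keyed cryptographic
     * hash such as HMAC-SHA-256. */
    static uint64_t fnv1a(const char *s)
    {
        uint64_t h = 14695981039346656037ULL;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void)
    {
        const char *salt  = "per-site-secret";       /* hypothetical salt */
        const char *email = "jane.doe@example.com";  /* direct identifier */

        char buf[256];
        snprintf(buf, sizeof buf, "%s:%s", salt, email);

        /* The pseudonym is stable for the same visitor, so events can be
         * correlated, but it is not reversible without the salt. */
        printf("visitor-%016" PRIx64 "\n", fnv1a(buf));
        return 0;
    }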

A Brief History of Unomi

Unomi was recently accepted as an Apache Software Foundation Incubator project, which is not easy to do. The ASF looks at many factors: sustainability, for example, is extremely important, and an open source project is more likely to be sustainable if several entities back it instead of just one player. In addition, there should be enough support for the project from within the ASF so that it’s solid for the long run.

Apache offers many benefits to open source projects. I asked Rich Bowen, Executive Vice President of the Apache Software Foundation, about the benefits projects receive by becoming part of the ASF. He pointed out that projects benefit from an established infrastructure, governance, and mentorship, along with name recognition and reputation.

“Different projects need different things. Each of the above can be beneficial to any project past a certain size. Apache has a reputation of being trustworthy, from a code provenance/IP perspective, and people know that they can use code from the ASF without worrying about licensing, or patent/copyright/trademark issues. Projects have a full-time technical staff to handle their infrastructure needs. The ASF is heavily populated by people who have a decade or more of Open Source experience, that projects can draw on as they grow and learn. All of these things are there in a culture of collaborative development and peer-review of both code and community,” said Bowen.

Jean-Baptiste Onofré (who works for Talend) is an Apache Incubator Committee member and a mentor for the Unomi project. When asked about the importance of Unomi, he said, “One of the key Unomi features is that it’s an implementation of an OASIS specification, providing a high-performance user profile and event tracking service. It allows companies to own their own data and the way to expose the data. It doesn’t mean the data is physically stored in the company (it could be on a private or public cloud), but they manage the way the data is stored and provided as content.”

Unomi Is Solving the Privacy Equation

The major backer of Unomi is Jahia, an open source User Experience Platform vendor. I talked to the CEO and Co-founder of Jahia, Elie Auvray. Talking about Unomi, he said, “When we started working on the project with Serge Huber, CTO and Co-founder of Jahia, two years ago, the need of that standard and the system were already there and have tremendously expanded since. Data exchange and usage grow exponentially but without the ability of users to control it or to understand it. As a consequence, data privacy seems to be more and more threatened. That’s why we say that it’s time for digital marketing to be more ethical and transparent.”

Unomi creates that balance between privacy and statistics; it builds the trust mentioned earlier. It is the mechanism that enables companies to give their users much-needed control over their data. Unomi becomes the foundation of trust between companies and their customers.

Auvray explained that the objective of the Unomi project is to deliver an engine able to manage massive amounts of data. It provides APIs that allow software vendors to take that engine for their personalization projects and build interfaces that let their customers first understand what type of data is being aggregated and where it is used, and then decide precisely which data they want anonymized.

It’s a win-win situation.

Enforcing Privacy

Europe is extremely protective of the privacy of its citizens, and soon it won’t be at the sole discretion of companies to offer such privacy protection. Auvray said that such policies will be made mandatory by law, as can already be seen in the “right to be forgotten” ruling, under which Google was made to remove URLs from its index. Similar regulations could be brought in to protect privacy.

Auvray added, “The digital right to be forgotten is not new. It’s just becoming mainstream as people start to understand the massive amount of data and the risk behind the fact to not be able to control it.”

The Takers

The potential adopters of Unomi are those players who manage personally identifiable information about their customers. And that is almost everyone – from banks to car dealers, from government agencies to private organizations, from electronics goods manufacturers to service providers.

When I asked, Auvray said, “…ultimately, all companies that manage customer profiles will soon have this requirement.”

Conclusion

Previously, customers had no way to know what data was stored about them or what was done with it; they had no control or say in the matter. That is what makes Unomi one of the most important projects in the modern world.

“Unomi is the first project where companies can aggregate data while respecting the data privacy of people, because we have to allow people to understand and decide what they want to be done with that data and anonymize it as they want,” said Auvray.

Speeding Ahead with ZFS and VirtualBox

In total, I have about 20 virtual hosts I take care of across a few workstations. On one system alone, I keep five running constantly, doing builds and network monitoring. At work, my Ubuntu workstation has two Windows VMs I use regularly. My home workstation has about eight: a mix of Fedora(s) and Windows. A lot of my Windows use is pretty brief: doing test installs, checking web page compatibility, and using TeamViewer. Sometimes a VM goes bonkers and you have to roll back to a previous version of the VM; sometimes VirtualBox’s snapshots are useful for that. On my home workstation, I have some hosts with about 18 snapshots, and they are hard to scroll through… they scroll to the right across the window… How insightful. Chopping out a few snapshots in the middle of that pile is madness. Whatever Copy on Write (COW) de-duplication they end up doing takes “fuhevvuh.” It’s faster to compress all the snapshots into a new VM of one snapshot. (Read the rest at Freedom Penguin)