
Unomi: A Bridge Between Privacy and Digital Marketing

We live in a digital age where personalized experience is becoming a mandatory part of business. Companies are gathering massive amounts of personal data about their users — whether we like it or not — to deliver personalized, enhanced experiences. Companies like Apple, for example, need personal data to deliver news, music, and other services to their paying users. People are being tracked online 24×7, and the ability to link this data to user behavior and then pinpoint the person behind it is a serious privacy problem.

What we need is a mechanism that strikes a balance between user privacy and the use of personal data; we need to rebuild users’ trust.

And that’s exactly what Unomi does.

Is Unomi the Answer?

The objective of Unomi is to deliver a software core with the capabilities to protect customer privacy without taking away a valuable resource that helps companies improve their products. Its primary goal is to anonymize personal information, which protects users’ privacy while giving companies the data they need to improve their services. Unomi is the reference implementation of the OASIS Context Server standard.

A Brief History of Unomi

Unomi was recently accepted as an Apache Software Foundation Incubator project, which is not easy to do. The ASF looks at many factors: the sustainability of a project, for example, is extremely important, and an open source project can be sustainable if there are several entities backing it instead of just one player. In addition, there should be enough support for the project from within the ASF so that it’s solid for the long run.

Apache offers many benefits to open source projects. I asked Rich Bowen, the Executive Vice President of the Apache Software Foundation, about the benefits projects receive by becoming part of the ASF. He pointed out that projects benefit from an established infrastructure, governance, and mentorship, along with name recognition and reputation.

“Different projects need different things. Each of the above can be beneficial to any project past a certain size. Apache has a reputation of being trustworthy, from a code provenance/IP perspective, and people know that they can use code from the ASF without worrying about licensing, or patent/copyright/trademark issues. Projects have a full-time technical staff to handle their infrastructure needs. The ASF is heavily populated by people who have a decade or more of Open Source experience, that projects can draw on as they grow and learn. All of these things are there in a culture of collaborative development and peer-review of both code and community,” said Bowen.

Jean-Baptiste Onofré (who works for Talend) is an Apache Incubator Committee member and a mentor for the Unomi project. When asked about the importance of Unomi he said, “One of the key Unomi features is that it’s an implementation of an OASIS specification, providing high-performance user profile and event tracking service. It allows companies to own their own data and the way to expose the data. It doesn’t mean the data is physically stored in the company (it could be on a private or public cloud), but they manage the way the data is stored and provided as content.”

Unomi Is Solving the Privacy Equation

The major backer of Unomi is Jahia, an open source User Experience Platform vendor. I talked to the CEO and Co-founder of Jahia, Elie Auvray. Talking about Unomi, he said, “When we started working on the project with Serge Huber, CTO and Co-founder of Jahia, two years ago, the need of that standard and the system were already there and have tremendously expanded since. Data exchange and usage grow exponentially but without the ability of users to control it or to understand it. As a consequence, data privacy seems to be more and more threatened. That’s why we say that it’s time for digital marketing to be more ethical and transparent.”

Unomi creates that balance between privacy and statistics; it generates the trust mentioned earlier. It is the mechanism that enables companies to give their users much-needed control over their data. Unomi becomes the foundation of trust between companies and their customers.

Auvray explained that the objective of the Unomi project is to deliver an engine able to manage massive amounts of data. It provides APIs that allow software vendors to take that engine for their personalization projects and build interfaces that let their customers first understand what type of data is being aggregated and where that data is used, and then decide precisely which data they want to anonymize.
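To make the idea concrete, here is a minimal shell sketch of that kind of field-level anonymization: identifying fields are replaced with salted hashes while behavioral data stays usable for analytics. The field names, salt, and hashing scheme below are invented for illustration and are not Unomi’s actual API.

```shell
# Sketch of field-level pseudonymization: PII is replaced with a salted
# hash, while behavioral fields remain usable for statistics.
# All field names and the salt are invented for illustration.
SALT="per-deployment-secret"
anonymize() { printf '%s%s' "$SALT" "$1" | sha256sum | cut -c1-16; }

EMAIL="jane@example.com"   # a field the user chose to anonymize
PAGES_VIEWED=42            # behavioral data the user left intact
ANON_EMAIL=$(anonymize "$EMAIL")
echo "profile: email=$ANON_EMAIL pages_viewed=$PAGES_VIEWED"
```

Because the hash is deterministic, the same (salted) input always maps to the same pseudonym, so aggregate statistics still work without exposing the raw value.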

It’s a win-win situation.

Enforcing Privacy

Europe is extremely protective of its citizens’ privacy. Soon, it won’t be at the sole discretion of companies to offer such privacy protection. Auvray said that such policies will be made mandatory by law, as can already be seen in the “right to be forgotten” ruling, under which Google was made to remove URLs from its index. Similar regulations could be introduced to protect privacy.

Auvray added, “The digital right to be forgotten is not new. It’s just becoming mainstream as people start to understand the massive amount of data and the risk behind the fact to not be able to control it.”

The Takers

The potential adopters of Unomi are those players who manage personally identifiable information about their customers. And that is almost everyone – from banks to car dealers, from government agencies to other organizations, from electronics goods manufacturers to service providers.

When I asked, Auvray said, “…ultimately, all companies that manage customer profiles will soon have this requirement.”

Conclusion

Previously, customers had no way to know what type of data was stored about them or what was done with it; they had no control or say in the matter. And that makes Unomi one of the most important projects in the modern world.

“Unomi is the first project where companies can aggregate data while respecting the data privacy of people, because we have to allow people to understand and decide what they want to be done with that data and anonymize it as they want,” said Auvray.

Speeding Ahead with ZFS and VirtualBox

In total, I have about 20 virtual hosts I take care of across a few workstations. On one system alone, I keep five running constantly, doing builds and network monitoring. At work, my Ubuntu workstation has two Windows VMs I use regularly. My home workstation has about eight: a mix of Fedora(s) and Windows. A lot of my Windows use is pretty brief: doing test installs, doing web page compatibility checking, and using TeamViewer. Sometimes, a VM goes bonkers and you have to roll back to a previous version of the VM, and sometimes VirtualBox’s snapshots are useful for that. On my home workstation, I have some hosts with about 18 snapshots and they are hard to scroll through…they scroll to the right across the window… How insightful. Chopping out a few snapshots in the middle of that pile is madness. Whatever Copy on Write (COW) de-duplication they end up doing takes “fuhevvuh.” It’s faster to compress all the snapshots into a new VM of one snapshot. (Read the rest at Freedom Penguin)
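One way to do that compression is VBoxManage’s clonevm subcommand, which in its default machine mode copies only the current machine state and leaves the snapshot chain behind. A hedged sketch (the VM name is an example, and the command is echoed rather than run, since it needs a VirtualBox install):

```shell
# Flatten a snapshot-heavy VM by cloning only its current state.
# "winbox" is an example VM name; --mode machine skips the snapshot history.
VM="winbox"
CLONE_CMD="VBoxManage clonevm $VM --mode machine --name ${VM}-flat --register"
echo "$CLONE_CMD"   # shown as a dry run; drop the echo to actually clone
```

Once the flattened clone boots cleanly, the original VM and its pile of snapshots can be deleted in one go.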

LinuxCon 2015 Report: Dirk Hohndel Chats with Linus Torvalds

For many LinuxCon attendees, one of the biggest event highlights is the opportunity to rub elbows with the people who actually write the Linux code. The only thing that can top that? Hearing from Linus Torvalds himself, the man who created it 24 years ago and still writes the code to this day.

At this year’s LinuxCon 2015 fireside chat, Linus sat down with Dirk Hohndel, Chief Linux and Open Source Technologist at Intel, to talk about everything from Linux containers to Dublin bus drivers. Here are a few of his most memorable comments from the discussion.

On kernel security:

I’m sure we could do better, but we have a fair amount of tools to do static checking for common patterns–and we haven’t had anyone say this is unfixable, rewrite it all. Don’t get me wrong, security people will always be unhappy. But the kernel poses special challenges, because any bug can be a security bug. We also have to keep in mind that most of the kernel is drivers, a big chunk of the rest is architecture specific, and there are 25 million lines of code. So it’s really hard to have people go over it; we have to rely on automated testing and on tools. There are too many lines in too many obscure places for humans to really check.


On containers:

I enjoy all the buzzwords. And I enjoy not having to care.

On maintainer teams:

We’re getting lots of contributors, but we have more trouble finding maintainers. Probably because the maintainer’s job is to read emails seven days a week. Forever. That’s why we’re pushing for maintainer teams as much as possible. It lessens the steps to becoming a maintainer if you’re not the only one.

On ARM architecture:

I’m happy to see that ARM is making progress. One of these days, I will actually have a machine with ARM. They said it would be this year, but maybe it’ll be next year. 2016 will be the year of the ARM laptop.

On Dublin bus drivers:

They’re coming at you from the wrong side of the road. They’re trying to kill you! If I can survive this trip, I think I can make it a few more years.

On user space versus kernel space:

For a long time, I’ve said that user space should be the most interesting thing. The kernel is just the infrastructure, the roadway. And who’s really interested in the tarmac? It’s only odd and dysfunctional people like me. I’m perfectly happy doing infrastructure, but I’m always surprised that others are interested in it.

On the next Linus project:

I’d hate for there to have to be a next Linus project. When I created Linux and Git, I was in a situation where no one else was providing what I needed. I don’t want to be in that situation again; I’d much rather coast along and be lazy. Anytime I need to start a new project, that’s a failure for the rest of the world.

On the next 25 years of Linux:

Linux did everything I expected it to do in the first six months; everything that came after was about other people solving new and interesting problems. Linux is all these crazy people doing crazy things I never imagined. It’s going to be interesting to see what others will do with it in the next 25 years.

Valve Makes SteamOS 2.0 the Official Distro, Now Based on Debian 8.2

Valve is making SteamOS 2.0 the official version supported by the company, and it looks like it might ship with the Steam Machines after all.

The Valve developers said a while back that they didn’t intend to upgrade to Debian 8 “Jessie” anytime soon, but they now have a branch of the OS that’s using this particular release. They still maintain SteamOS based on Debian 7, which is a little bit strange. What’s interesting is that SteamOS 2.0 “Brewmaster” has been getting quite a few updates in the past few weeks, which suggests that it’s getting ready for launch.

Read more at Softpedia Linux News

Cisco Disrupts $30 Million Browser Plug-In Hacking Operation

Hackers used the Angler toolkit in order to take advantage of vulnerabilities in Flash, Java, and other browser plug-ins.

Cisco has disrupted a major browser-based hacking operation, thought to be worth $30 million to criminals each year. The company said unnamed hackers used the notorious Angler Exploit Kit to take advantage of vulnerabilities in common browser plugins, such as Flash and Java.

As many as 90,000 users were affected each day by the attack.

Read more at ZDNet News

The End of Linearity: git Implementation in Professional Services

Jan Christoph Ebersbach from the Univention Professional Services Team took the time to explain why his department switched from the version control tool SVN to git and what their experiences have been:

“We have now been using git as the version control software for our projects in the Professional Services Team at Univention for a number of weeks and to great success. In this blog article, I want to give you a bit more information about our decision to employ git, report on our initial, recent experiences and provide a perspective of the aspects still requiring work.”

If you work in software development, read the whole article; feedback is welcome!

Software version control system git

Ubuntu Snappy Core 15.04 Now Features Basic Support for UEFI Firmware Updates

On October 6, Canonical’s Michael Vogt had the pleasure of announcing the release and immediate availability for download of a new update for the Snappy Ubuntu Core 15.04 operating system.

The new version marks the sixth update to Snappy Ubuntu Core, a special version of the Ubuntu Linux operating system that uses Snappy packages instead of the traditional ones from the upstream Debian GNU/Linux distribution.

Read more at Softpedia Linux News

Matthew Garrett Leaves Linux Kernel and Forks It

The Linux kernel ecosystem is experiencing some turbulence these days, as a few important developers have quit the project, citing the “toxic” working environment, among other factors.

Sarah Sharp, a long-time Linux kernel developer and coordinator of the Outreachy program, stepped down from all of her functions in the project and said that she would not continue to work… Now, another Linux kernel developer has decided to move away from the project. Matthew Garrett has been in the news a lot this past year, but surprisingly, not for the Linux kernel. He has been a constant critic of Canonical’s IP policy, criticizing the company more than once. In fact, he’s a rather well-known kernel developer,…

Read more at Softpedia Linux News

 

LinuxCon 2015 Report: Shrinking the Security Holes in OSS

Dublin native James Joyce famously wrote that “mistakes are the portals of discovery.” LinuxCon 2015 keynote speaker Leigh Honeywell grabbed hold of the same theme here in Dublin, reminding hundreds of open source professionals that “you’re going to make mistakes; you’re going to introduce security bugs.” The goal, said Honeywell, who works as a senior security engineer at Slack Technologies, shouldn’t be the all-out elimination of these mistakes. Instead, security engineers should strive to make different mistakes next time around.

Evoking our collectively painful memories of the Heartbleed bug, Honeywell discussed the need to think through scenarios in advance, without making futile and frustrating attempts to get security plans exactly right. “There are always going to be a zillion different ways to respond,” she said. “The software that many of you work on is unimaginably, unknowably complex. Any large codebase will end up with dark, scary corners.”

What’s more, said Honeywell, the work of defenders is always harder than the work of attackers. While an attacker just needs to find one bug to succeed, security engineers have to find or at least mitigate all of them. “They only have to be right once. We have to be right over and over again.”

If it sounds hard, that’s because it is. “You think Dungeons & Dragons is nerdy,” she quipped. “Come talk to me after this keynote about tabletop incident response drills.”

So, how can we secure an open future? Given the challenges, is it even possible? The first step, says Honeywell, is to remember that complex systems always fail. Referencing psychologist James Reason’s “Swiss Cheese Model of Accident Causation,” Honeywell called the bugs in open software “the holes in the cheese.” Since they’ll never entirely go away, it’s the job of security engineers to “make the holes slightly smaller, make fewer of them, and make sure they don’t all line up.”

But that doesn’t mean we can’t keep software both open and secure — we just need to approach security failures systemically. To do this, Honeywell’s suggestions included:

Think like an attacker — Ask yourself, “if I had control of this input, how would I use it to get in trouble?”

Trust your gut and ask for help – “If you’ve got bad vibes about a piece of code, say something — ask for a code review or additional testing,” says Honeywell. “And if you do get shot down for raising fears about the safety of some code, that’s useful information, too.”

Embrace a culture of blamelessness — Managers should assume that their people want to write secure code, says Honeywell. And they should create a culture where errors can be addressed without fear of punishment or retribution, but instead with a perspective of learning and growth.

Be polite — When Honeywell shared an image of a puffed-up cat flashing its sharp teeth and asked if anyone who worked in open source communities felt like they were trying to pet that cat, hands raised throughout the auditorium. It shouldn’t be that way, said Honeywell. “Polite conversation leads to more secure software.”

There’s no doubt, concluded Honeywell, that writing secure open software is difficult. One of the primary solutions, however, is actually quite simple. “We’ve got to work together, compassionately and diligently if we are to have any hope of doing it well.”

Project Atomic or: How I Learned to Stop Worrying and Love Containers

Project Atomic is a set of technologies that makes containers easier to develop, configure, deploy, run, administer, and deliver in a wide variety of execution environments. This interconnected set of technologies starts with tools that make it easier to run a single container and continues to tools that help deploy complex multi-container applications.

Many of these projects include the word “atomic” in their name. Therefore, discussions turn into conversations about “atomics” and people can get confused. In this post, I will introduce the main atomics and a few of their friends.

Atomic Host

Containers need an operating system to run on, and that’s Atomic Host. Atomic Host represents a design pattern for distributions to build an environment that is optimized for running Linux containers. This pattern can be implemented by existing distributions, which is critical: it eliminates the need to wrap your head around building a new operating system while developing a container deployment environment at the same time.

Some key advantages of an Atomic Host are:

Built on a trusted distribution: Pulling the components required to support containerization from a trusted distribution, and then layering on additional capabilities for containers, means that the operating system already has:

  • Hardware and software support, including known kernel support and drivers
  • Broad ISV and IHV support
  • Established and familiar ways to get involved, file bugs, submit patches, and get support, often from the same colleagues and communities you are familiar with
  • The ability to reuse existing skills instead of having to learn a whole new operating system

Atomic updates:

  • Single-step — or atomic — upgrades and reversioning of the operating system. This is done via the delivery of an OSTree, or a complete system tree, to the server, which is used to boot the server into a new operating system version.
  • No half-updated systems, and no unpacking RPMs and running scripts on every host.

A streamlined package set: This includes only what is required to build a Docker and Kubernetes environment.

You can find Atomic Host variants of Fedora, CentOS, and Red Hat Enterprise Linux. These distributions use rpm-ostree to implement the Atomic Host pattern, which allows existing and trusted RPMs to be leveraged to construct the OSTrees. It is also optimized for delivering the tree, because it implements what is essentially git for the operating system.
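In day-to-day use, the whole-tree update model boils down to a few rpm-ostree subcommands. A sketch of the typical cycle (the commands are collected and echoed here as a dry run, since they need an actual Atomic Host to execute):

```shell
# Typical atomic-update cycle on an rpm-ostree based host (dry run: the
# commands are echoed rather than executed, since they need an Atomic Host).
STATUS="rpm-ostree status"      # show the booted and pending system trees
UPGRADE="rpm-ostree upgrade"    # download and deploy a complete new tree
ROLLBACK="rpm-ostree rollback"  # make the previous tree the boot default
for cmd in "$STATUS" "$UPGRADE" "$ROLLBACK"; do
    echo "$cmd"
done
```

Because each command swaps in (or back to) an entire tree followed by a reboot, there is never a half-updated system to debug.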

Nulecule and Atomic App

Question: What do you call a containerized application?

Answer: A mess of images, containers, READMEs, and configuration files pretending to be easily deployable. 1990 called and wants its install process back!

Most applications are made of multiple containers. Even a simple web application will typically require a web frontend and a database. Different container environments will connect those applications in different ways. The Nulecule Specification allows a multi-container application to be specified and configured once and then deployed and run in many execution environments. Today, there is support for Docker, Kubernetes, and OpenShift, and more are welcome. It’s worth noting that Nulecule is a made-up word derived from molecule by fictional nuclear plant operator Homer Simpson. Even the specification name has something to do with atomic!
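A Nulecule file describes the application as a graph of components, each pointing at provider-specific artifacts. The sketch below is only illustrative — the application name and artifact paths are invented, so consult the specification for the exact fields:

```yaml
# Hypothetical Nulecule file for a two-container app (names and paths
# are invented for illustration).
specversion: 0.0.2
id: example-web-app
metadata:
  name: Example web app
graph:
  - name: frontend
    artifacts:
      kubernetes:
        - file://artifacts/kubernetes/frontend-pod.yaml
  - name: database
    artifacts:
      kubernetes:
        - file://artifacts/kubernetes/db-pod.yaml
```

The same graph can carry artifacts for several providers side by side, which is how one description deploys to Docker, Kubernetes, or OpenShift.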

A specification is great, but an implementation is needed for it to be useful. Atomic App is a Python-based implementation of the Nulecule specification. It lives inside a container that is run by the application user. The user never runs Atomic App directly, but benefits from the configuration that Atomic App provides.

Atomic Command

In contrast to Atomic App, the atomic command is a tool that makes running containers easier. It provides additional functionality and adds syntactic sugar. For example, using special labels, the atomic command can install, start, and stop containers easily by turning long Docker commands into short ones like atomic run projectatomic/helloapache. The atomic command is available for many distributions and has been tested on Fedora, CentOS, Debian, and Red Hat Enterprise Linux in both standard and Atomic Host (where available) variants.
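Those “special labels” are image metadata that the atomic command expands into full docker invocations. A hedged Dockerfile sketch (the package and commands are illustrative, not the real projectatomic/helloapache definition):

```dockerfile
# Illustrative sketch only. The atomic command substitutes the IMAGE and
# NAME tokens in these labels at run time.
FROM fedora
RUN dnf -y install httpd && dnf clean all
LABEL INSTALL="docker run --rm IMAGE /usr/bin/true"
LABEL RUN="docker run -d -p 80:80 --name NAME IMAGE"
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

With labels like these baked into an image, atomic run <image> executes the RUN label so the user never has to remember the long docker command.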

If you’re using an Atomic Host, the atomic command does double-duty and provides access to host-specific administration, including upgrades.

Atomic Developer Bundle

The Atomic Developer Bundle (ADB) provides a platform for developers on Linux, Windows, and OS X to use when packaging containerized applications. The ADB encourages good packaging patterns and integration with native, PaaS, and IaaS environments. The ADB is a virtual machine that contains all the tools needed to package containerized applications for these environments. Included in the box is a fully functional Kubernetes preconfigured for you to develop against.

Atomic Reactor & OpenShift Build System Client

Atomic Reactor is a command-line addressable, source-to-image builder for Docker containers. Starting with a Git repo, it can resolve all dependencies and build requirements to allow you to build and push a container to a registry easily. Using Atomic Reactor will allow your build chain to be clean and automatable. Look for it to appear in the Atomic Developer Bundle. A similar tool, OpenShift Build System (OSBS) Client, can trigger builds and deployments in OpenShift.

Atomic Enterprise

In between PaaS and IaaS sits a project that also has “atomic” in its name. Atomic Enterprise builds on the power of Atomic Host and embeds the operational enablement technologies of OpenShift into a simple, powerful, and easy-to-approach experience for deploying and scaling applications in containers. Atomic Enterprise is an infrastructure platform designed to run, orchestrate, and scale multi-container applications and services. It provides a scale-out cluster of Atomic Host instances that together form a foundation for delivering traditional and cloud-native applications via containers.

Project Atomic has an “atomic” for every container situation. Individuals experimenting with containers on their laptops can use the atomic command, developers can use the Atomic Developer Bundle, Atomic App, and Nulecule, and operators can use Atomic Reactor and Atomic Enterprise. With all these atomics, I am sure you will find one to love.