
Meet Apache Spot, a New Open-Source Project for Cybersecurity

Hard on the heels of the discovery of the largest known data breach in history, Cloudera and Intel on Wednesday announced that they’ve donated a new open-source project to the Apache Software Foundation with a focus on using big data analytics and machine learning for cybersecurity. 

Originally created by Intel and launched as the Open Network Insight (ONI) project in February, the effort is now called Apache Spot and has been accepted into the ASF Incubator.

Read more at ComputerWorld

 

The Second Wave of Platforms, an Interview with Cloud Foundry’s Sam Ramji

In today’s world of platforms, services are increasingly connected. In the past, PaaS offerings were pretty much isolated. It’s that new connected infrastructure that is driving the growth of Cloud Foundry, the open source, service-oriented platform technology.

Sam Ramji is CEO of Cloud Foundry, which is holding its European event in Frankfurt this week. At the conference, we spoke with Ramji to discuss, among other topics:

  • Europe’s adoption of platform technologies.
  • IoT and the connection to serverless technologies.
  • The maturity of the container ecosystem and the corollary to Cloud Foundry.
  • Cloud Foundry BOSH, the open source tool for release engineering, deployment, lifecycle management, and monitoring of distributed systems. …

Read more at The New Stack

Open Source Is Not to Blame for a Lack of Industry Standards

Carol Wilson wrings her hands over the “boring” nature of open source standardization, declaring that “Open source processes can take the fun out of everything, particularly technology wars.” Putting aside for a minute the irony of expecting standards to ever be anything more than mind-numbingly dull, Wilson’s larger argument misses the point.

The problem with open source standards isn’t that they’re boring; it’s that they’re largely the same as the proprietary standards that preceded them. In practice, this presents no problem at all.

Read more at Tech Republic

Raspberry Pi Foundation Unveils New LXDE-Based Desktop for Raspbian Called PIXEL

Today, September 28, 2016, Raspberry Pi Foundation’s Simon Long proudly unveiled a new desktop environment for the Debian-based Raspbian GNU/Linux operating system for Raspberry Pi devices.

Until today, Raspbian shipped with the well-known and lightweight LXDE desktop environment, which looks pretty much the same as on any other Linux-based distribution out there that is built around LXDE (Lightweight X11 Desktop Environment). But Simon Long, a UX engineer working for the Raspberry Pi Foundation, was hired to make it better and transform it into something more appealing to users.

Read more at Softpedia

Ericsson: The Journey to a DevOps Future in SDN

There are big transformations going on in the world today that are driving rapid changes to the business of networks, said Santiago Rodriguez, VP of Engineering and head of the product development unit SDN & Policy Control at Ericsson, in his keynote Tuesday at the OpenDaylight Summit.

“Society is transforming, the way we do business is transforming, and accordingly the way we build our networks is transforming,” Rodriguez said.

The three pillars of this network transformation are 5G, virtualization, and open source.

5G, the next-generation mobile standard, is billed as the biggest innovation in mobile networking since the first cellular handset. Interestingly, Rodriguez noted that it took 120 years for the fixed (or wired) telephony market to reach 1 billion subscribers. In only the last 20 years, the number of mobile subscriptions has ballooned to more than 10 billion consumer devices and more than 7 billion machines. Concurrently, the number of devices in each home is growing rapidly as well. Count all the smartphones, laptops, tablets, and TVs in the home, add the growing number of IoT devices, and the number of network devices in an average home already exceeds 15 to 20 and is expected to keep growing.

To make things more challenging, the requirements are widely disparate. Some devices, such as home energy sensors, require little power and generate small volumes of data, yet there are millions of them. Other devices and applications, such as telemedicine, require ultra-low latency and extreme availability, and must be highly reliable with full redundancy built in. These disparities in performance characteristics add complexity for service providers and vendors alike, and are forcing both to rethink how networks are built.

Rodriguez described the transformation in virtualization as “SDN-enabled NFV and Cloud Infrastructure.” SDN provides the required connectivity and is used for cross-domain control, orchestration, and management. NFV then supplies virtual network functions, and the cloud scales those functions and enables optimal deployments. The key across virtualization is automation, which he noted is critical to cope with the proliferation of devices.

The third pillar is open source. Ericsson joined OpenDaylight, the open source SDN platform and a Linux Foundation project, at the beginning and has been an active participant in the community. The company has also joined more than 15 other open source initiatives, taking the same approach in each: join early and participate actively. With close to four years of experience with ODL and other open source projects, they’ve learned a few things. Rodriguez highlighted three of them:

  1. Upstream First

  2. There’s a Bigger Picture

  3. The User Matters

Ericsson takes a module-by-module approach when using open source. In some cases, a module may not be required, so it’s dropped; in other cases, a module may not be sufficiently mature, and Ericsson will enhance it internally. They may also look at a module and determine that they can do it better, and lastly, they demand the ability to add their own new modules. It was unclear in which cases they donate the code back to the community. This approach requires what he called “upstream first” development, so they can be confident that future releases of the open source in question don’t render their previous customizations obsolete or redundant.

The “bigger picture” refers to the open source community as a whole. Carrier networks are vast and complex, with numerous features and functions required on an end-to-end basis. No single open source project does it all; hence the importance of Ericsson joining and participating in numerous projects. In many cases, Rodriguez noted, multiple open source projects implement the same standard. Standardization is required to drive interoperability and predictability.

And his third lesson: usability is paramount to success. When determining whether software is usable, Ericsson looks at the quality of the code, the performance, the upgradability, and the robustness. Rodriguez noted that at the outset of an open source project there is a push for many features. When first released, that feature-rich code will not get adopted, because it lacks usability. The key, he noted, is “good enough features” with “good enough usability.” That’s when the technology goes mainstream.

In the OpenDaylight community, Rodriguez noted that users and developers are working closely together in the DevOps tradition. The benefit of this approach is developers get immediate feedback from users and they can then modify their products based on what the user actually needs.

In his keynote, Rodriguez wanted the audience to have three “take-aways” to assist with their journey to a DevOps future.  First, this is happening now.  Ericsson is shipping and deploying products based on ODL to customers around the globe “as we speak.”  Second, this future is based on open source and ODL is part of a bigger ecosystem.  Third, usability is the most important aspect for open source success. He then concluded with a reminder that we are all part of the networked society.

Watch Docker CTO Solomon Hykes and More Talk Live at LinuxCon + ContainerCon Europe

Watch open source leaders, entrepreneurs, developers, and IT operations experts speak live next week, Oct. 4-6, 2016, at LinuxCon and ContainerCon Europe in Berlin. The Linux Foundation will provide live streaming video of all the event’s keynotes for those who can’t attend.

Sign up for the free streaming video.

The keynote speakers will focus on the technologies and trends having the biggest impact on open source development today, including containers, networking and IoT, as well as hardware, cloud applications, and the Linux kernel. See the full agenda of keynotes.

Tune in to the free live video stream at 9 a.m. CET each day to watch keynotes with:

  • Jilayne Lovejoy, Principal Open Source Counsel, ARM

  • Solomon Hykes, Founder, CTO and Chief Product Officer, Docker

  • Brian Behlendorf, Executive Director, Hyperledger Project

  • Christopher Schlaeger, Director Kernel and Operating Systems, Amazon Development Center Germany

  • Dan Kohn, Executive Director, Cloud Native Computing Foundation

  • Brandon Philips, CTO, CoreOS

  • Many more

Can’t catch the live stream next week? Don’t worry—if you register now, we’ll send out the recordings of keynotes after the conference ends!

You can also follow along on Twitter with the hashtag #linuxcon. Share the live streaming of keynotes with your friends and colleagues!

Vendors and Customers Gettin’ Open Sourcey With It

I’ve written extensively about how open source has leveled the playing field between technology vendors and their customers. I’ve also written about how “users” — aka, the customers of vendors — are now driving much of the software innovation in the world by leading several large open source ecosystems. If you’re a technology vendor, this development may frighten you, and for good reason — you grew up believing that you were the one true source of innovation. That is simply no longer the case.

This doesn’t mean that vendors don’t drive any innovation, but rather they must learn to collaborate with their customers and end users on innovation. The vendors that figure this out will run the world. As for those who don’t, well… we all remember what happened to the dinosaurs, right? Basically, if you’re a technology vendor right now, you have a fiduciary duty to work with your customers on open source collaboration. If they’re already open source savvy — great! Time to work with them. And if they’re not open source savvy, this is a great opportunity to enable their inner open source advocate and develop a working collaboration that will benefit both parties extensively. And that’s what I want to focus on in this article: open source enablement of your customers.

Basically, “open source enablement” seems to be about teaching customers how to embrace open source principles, both in terms of internal processes as well as external communities and ecosystems. As I’ve worked with many engineering and product teams over the years, I’ve seen many open source initiatives fail to reach their potential because of ingrained cultural obstacles that usually manifest in the form of corporate inertia that blocks forward progress.

This is where you, good vendor, can lend a hand — assuming you are also not blocked by the same internal obstacles. Open source enablement for customers has to focus on internal processes as much as or more than external participation and collaboration. In fact, I think a lot of companies miss the memo on internal processes because they are blinded by the “sexiness” of external projects and the success they engender. Before you can run, you must learn to walk, and that means taking a good hard look at how your teams work together and ensuring that their processes are optimized for any kind of collaboration, whether internal or external. A good vendor will recognize this and see it for the opportunity that it is. For more on “innersource” principles, I highly recommend taking a good look at the fabulous InnerSource Commons materials assembled and produced by Danese Cooper and her team at PayPal.

Strategically, there are three ways to look at this, all of mostly equal importance, although I might attach a hair of extra weight to #1, below:

1. Keep existing customers on your technology platforms so that they will be ready for a conversation about your broader vision when the time is right — that would be later. If you turn this conversation into one about direct sales, you will lose.  Of course, this is a much easier conversation to have if your platforms are open source. If you need to understand more about that, I’ve written extensively on that subject, as well. Many of your customers probably just use your standard technologies and platforms without understanding how they’re made, what open source components are already inside, and how they got there. They may not even understand what possibilities exist for them to benefit from open source-style collaboration on your platforms.

This is where you sell them on your broader open source vision that includes innersource principles as well as how embracing those principles opens up a gateway to collaborative innovation with you. Do this, and selling the rest of your technology vision becomes a whole lot easier. The win for the customer is that they get the benefits that come from fully embracing the open source way of collaborating and breaking down silos. You, the vendor, benefit because you’re their partner in such activities, opening the door to more and deeper solutions in the future. Go beyond just selling the product and sell the whole vision. This generally applies no matter what your occupation, but open source adds a few wrinkles to the equation that you would do well to master.

2. Expanding your customer base. If you execute fully on #1, above, then you can make a stronger case for adding new customers, in multiple directions. Those that have yet to embrace the open source way and still haven’t become your customers will perhaps have more reasons to adopt your solutions after you are able to demonstrate and document success from the other customers mentioned above. But there’s another group of potential customers: those that have adopted open source software for various workloads. If they already benefit from open source code, what is the benefit to them of buying from you? This is tricky because many of these shops have convinced themselves that they don’t need vendors and are perfectly happy with a “DIY” approach.

If you are able to execute on #1, above, and give those customers a chance to shine, they will become your best advocates to the rest of the world. Now, not only do you have a strong extended portfolio of open source solutions to sell (presumably, right?), but you can then add the idea of being a partner in IT transformation. Demonstrate real increases in productivity that you can point to, and suddenly those DIY-only shops will begin to understand that vendors can help, too. The key is not to pretend that you have all the answers, but rather that you’re a good partner who will help them find the right solution and not abandon the open source aspects of their existing infrastructure. This is what will allow you to expand your customer base to both open source-savvy and unsavvy customers, as well as transform more of the latter into the former.

3. Expanding your ecosystem and, by extension, your influence on the technology world. If you execute on #1 and #2, above, you can readily point to an expanding group of customers who not only buy into the open source way but who have documented their success on your platforms. This means that when they interact with the upstream projects and communities, which they all eventually will, they will do so from the perspective of being your customers and platform adopters.

The more that your customers participate in the upstream world, the more likely that upstream communities will see your platforms as something they need to support for future releases. This would help to counteract any industry trends towards your competitors. After all, if your customers are helping the upstream developers see increased value by supporting your platforms, then you’ll ensure the long-term viability, at a minimum, of your platforms and hopefully accelerate their growth.

So, when I hear anyone talk about “open source enablement” of customers, I actually interpret that to be three related things from a customer’s point of view: open source principles of collaboration (innersourcing), devops and IT transformation, and open source evangelism. If you can be seen as the partner that helps companies execute on those three things, it opens a lot of new doors for you.

Diffs and the Power of the Docker Layering Model

Recently I’ve been working more with the sophisticated tool that is Docker, and it hasn’t escaped me that the foundation of the DevOps world is essentially composed of layer after layer of diffs.

For those readers who aren’t hard-core hackers, a diff in back-in-the-day Unix terms simply means a difference. At a glance, as a Unix utility at least, it seems to have been around since the 1970s. The command simply allows you to compare files or directories so it’s easier to spot any differences between them. All modern-day Linux boffins will attest to the fact that it’s still a highly useful command, which frequently saves the day (if you’re curious, the GNU version can be found here).
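If you fancy a quick play, here’s a minimal sketch (the two files and their contents are invented purely for illustration):

$ printf 'one\ntwo\n' > a.txt
$ printf 'one\n2\n' > b.txt
# The -u flag requests the unified format used by Git and most modern tools
$ diff -u a.txt b.txt

In the output, lines prefixed with “-” were removed and lines prefixed with “+” were added, so you can see at a glance that “two” became “2”.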

Of course, any self-respecting coder will have been using revision control software for years. There are several options available, such as the super-popular Git, partly written by Linus Torvalds himself. Once you have committed (or saved) the first version of a new piece of code, whether it be one line or a thousand lines long, Git’s clever repositories can store each future version you commit as little more than the difference from what came before.

By dealing only with diffs, this process becomes uber-efficient, meaning that restoring previous versions can be done at breakneck speed, and, as you’d imagine, storing even hundreds of thousands of lines of precious code in your repositories is kind to disk space.
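Here’s a minimal sketch of that workflow (the repository and file names are invented for illustration):

$ git init myproject && cd myproject
$ echo 'echo "version one"' > app.sh
$ git add app.sh && git commit -m "First version"
$ echo 'echo "version two"' > app.sh
$ git commit -am "Second version"
# Show precisely what changed between the two commits
$ git diff HEAD~1 HEAD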

The Layering Model

For the uninitiated, somewhat surprisingly, Docker doesn’t work too differently. Its inherent layering model affords Docker images the luxury of being lightweight and exceptionally performant and, to my mind at least, the construction of Docker images is a thing of beauty.

Once a base layer has been decided upon, whether downloaded directly or adjusted to your liking (such as Debian’s), then with a little tweaking it’s perfectly possible to run your customized applications using an unfathomably thin slice of disk space on top of that base layer.

There are no gold stars being handed out for immediately guessing how that might work.

Correct. The intelligent Docker, to all intents and purposes, also uses diffs. Whenever you make a change to an existing image, you’re effectively adding a layer, which simply sits on top of any existing layers. And if you’re generating too many layers to keep track of, a simple way to reduce their number is to chain commands together.

For example, the following two commands, if they weren’t chained together by the two ampersands, would end up as two different layers, because they’re two distinct adjustments to the underlying layer(s):

$ apt-get update && echo "Chris says hello"
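To make the idea concrete, here’s how that might look in a Dockerfile (a sketch, assuming the official Debian base image; ordinarily, each RUN instruction produces a layer of its own):

FROM debian

# As two separate instructions, these would each add a layer on top of the base:
#   RUN apt-get update
#   RUN echo "Chris says hello"

# Chained with &&, they are recorded as a single layer instead:
RUN apt-get update && echo "Chris says hello"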

This layering model dramatically reduces the amount of detail that Docker needs to remember, and of course by that I actually mean save to disk. When there are a few Debian containers residing on a host, Docker simply treats the base layer as a shared dependency and applies the other changes, found within the diffs, as each container is launched. By way of an example, one base layer could serve your web, database, and SMTP servers as three distinct containers, with a few hundred megabytes of diffs in total being the only difference between them.
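You can inspect these layers for yourself (assuming Docker is installed and you’re happy to pull the official Debian image):

$ docker pull debian
# List each layer of the image, the command that created it, and its size
$ docker history debian

Run against one of your own customized images, the same command reveals exactly which thin slices of disk space your changes added on top of the base.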

A story for another day is how Copy-On-Write (COW) works with Docker images — but aside from that complexity, the undeniably excellent layering model employed by Docker is remarkably simple.

Just like the super-slick Git and lightning-fast Docker, the next time you approach a complex problem, I encourage you to flex your lateral thinking muscles before meekly committing to a decision.

Simplicity, after all, is key in this brave new world.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

2.5 and 5 Gigabit Ethernet Now Official Standards

For most of Ethernet’s history, new standards have progressively added more bandwidth, raising the top end of speed. That progression is now changing: the IEEE has ratified the 802.3bz standard, which defines 2.5 Gbps and 5 Gbps Ethernet speeds.

In 2014, multiple groups began efforts to create new mid-tier Ethernet speeds, with the NBASE-T Alliance starting in October 2014 and the MGBASE-T Alliance following in December 2014. While those groups started out on different paths, the final 802.3bz standard represents a unified protocol that is interoperable across multiple vendors.

Read more at Enterprise Networking Planet

Wyoming’s Open Source Enterprise Code Library a Secret No More

NASCIO award-winning project speeds app development, slashes costs.

As described in Wyoming’s NASCIO awards program entry, submitted by Deputy State CIO Meredith Bickell, the project launched in 2013, and its main purpose is to serve as a repository of reusable code modules (or “lego blocks”) that state agencies building applications can use and add to. ETS, the state’s Enterprise Technology Services department, provides internet and enterprise IT services to Wyoming’s executive branch, agencies, boards, and commissions.

The upshot of the code library is that apps can be built faster and less expensively – in some cases reducing costs from hundreds of thousands of dollars to less than a thousand. As you might imagine, plenty of what needs to go into such apps, from secure logins to reporting and notifications, is common across agencies.

Read more at Network World