
How to Get an Open Source Security Badge from CII

Co-authored by Dr. David A. Wheeler

Everybody loves getting badges. Fitbit badges, Stack Overflow badges, Boy Scout merit badges, and even LEED certification are just a few examples that come to mind. A recent 538 article, “Even psychologists love badges,” publicized the value of a badge.

Core Infrastructure Initiative Best Practices

GitHub now has a number of specific badges for things like test coverage and dependency management, so for many developers they’re desirable. IBM has a slew of certifications for security, analytics, cloud and mobility, Watson Health and more. 

Recently, The Linux Foundation joined the trend with the Core Infrastructure Initiative (CII) Best Practices Badges Program.

The free, self-service Best Practices Badges Program was designed with the open source community. It provides criteria and an automated assessment tool for open source projects to demonstrate that they are following security best practices.

It’s a perfect fit for CII, which comprises technology companies, security experts, and developers, all of whom are committed to working collaboratively to identify and fund critical open source projects in need of assistance. The badging project is an attempt to “raise all boats” in security by encouraging projects to follow best practices for OSS development. We believe projects that follow best practices are more likely to be healthy and produce secure software.

Here’s more background on the program and some of the questions we’ve recently been asked.

Q: Why badges?

A: We believe badges encourage projects to follow best practices, to hopefully produce better results. The badges will:

  • Help new projects learn what those practices are (think training in disguise).

  • Help users know which projects are following best practices (so users can prefer such projects).  

  • Act as a signal to users. If a project has achieved a number of badges, it will inspire a certain level of confidence among users that the project is being actively maintained and is more likely to consistently produce good results.

Q: Who gets a badge?  Is this for individuals, projects, sites?

A: The CII best practices badge is for a project, not for an individual.  When you’re selecting OSS, you’re picking the project, knowing that some of the project members may change over time.

Q: Can you tell us a little about the “BadgeApp” web application that implements this?

A: “BadgeApp” is a simple web application that implements the criteria as a form you fill in. It’s OSS, released under the MIT license. All the required libraries are OSS and legal to add; we check this using license_finder.
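
For illustration only, here is a minimal sketch in Python of the idea behind such a license check: compare each dependency’s declared license against an allow-list. (BadgeApp’s real check uses the Ruby license_finder gem; the allow-list and dependency data below are hypothetical.)

```python
# A hypothetical allow-list check, illustrating the idea behind
# license_finder (BadgeApp's real check is a Ruby gem, not this code).
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}  # SPDX IDs

def check_licenses(dependencies):
    """dependencies: mapping of package name -> declared SPDX license ID.
    Returns the packages whose licenses are not on the allow-list."""
    return {name: lic for name, lic in dependencies.items()
            if lic not in ALLOWED_LICENSES}

deps = {"rails": "MIT", "puma": "BSD-3-Clause", "mystery-gem": "UNKNOWN"}
violations = check_licenses(deps)
if violations:
    print("License review needed:", violations)  # {'mystery-gem': 'UNKNOWN'}
```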

Our overall approach is to proactively counter mistakes. Mistakes happen, so we use a variety of tools, an automated test suite, and other processes to counter them. For example, we use RuboCop to lint our Ruby and ESLint to lint our JavaScript. The test suite currently has 94% statement coverage with over 3,000 checked assertions, and our project has a rule that statement coverage must stay at or above 90%.

Please contribute!  See our CONTRIBUTING.md file for more.

Q: What projects already have a badge?

A: Examples of OSS projects that have achieved the badge include the Linux kernel, curl, GitLab, OpenBlox, OpenSSL, Node.js, and Zephyr. We specifically reached out to both smaller projects, like curl, and bigger projects, like the Linux kernel, to make sure that our criteria made sense for many different kinds of projects. The badge is designed to handle both front-end and infrastructure projects.

Q: Can you tell us more about the badging process itself? What does it cost?

A: It doesn’t cost anything to get a badge.  Filling out the web form takes about an hour.  It’s primarily self-assertion, and the advantage of self-assertion systems is that they can scale up.

There are known problems with self-assertion, and we try to counter them. For example, we perform some automation, and in some cases the automation will override unjustified claims. Most importantly, the project’s answers and justifications are completely public, so if someone gives false information, we can fix it and revoke the badge.

Q: How were the criteria created?

A: We developed the criteria, and the web application that implements them, as an open source software project.  The application is under the MIT license; the criteria are dual-licensed under MIT and CC-BY version 3.0 or later.  David A. Wheeler is the project lead, but the work is based on comments from many people.

The criteria were primarily based on reviewing a lot of existing documents about what OSS projects should do. A good example is Karl Fogel’s book Producing Open Source Software, which has lots of good advice. We also preferred to add a criterion only if we could find at least one project that didn’t follow it. After all, if everyone does something without exception, it’d be a waste of time to ask whether your project does it too. We also worked to make sure that our own web application would earn its own badge, which helped steer us away from impractical criteria.

Q: Does the project have to be on GitHub?

A: We intentionally don’t require or forbid any particular services or programming languages.  A lot of people use GitHub, and in those cases we fill in some of the form based on data we extract from GitHub, but you do not have to use GitHub.
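
As a rough illustration of that auto-fill idea (this is not BadgeApp’s actual code, and the form-field names below are made up), a sketch using GitHub’s public REST API might look like this:

```python
# A hedged sketch: fetching public repo metadata from the GitHub REST API
# to pre-fill badge-form fields. The field names here are illustrative only.
import json
import urllib.request

def prefill_from_github(owner, repo):
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return {
        "name": data.get("name"),
        "description": data.get("description"),
        "repo_url": data.get("html_url"),
        # GitHub reports a detected license, handy for a FLOSS-license check.
        "license": (data.get("license") or {}).get("spdx_id"),
    }

print(prefill_from_github("linuxfoundation", "cii-best-practices-badge"))
```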

Q: What does my project need to do to get a badge?

A: Currently there are 66 criteria, and each criterion is in one of three categories: MUST, SHOULD, or SUGGESTED. The MUST (and MUST NOT) criteria are required; 42 of the 66 are MUSTs. The SHOULD (and SHOULD NOT) criteria sometimes have valid reasons for not being followed; 10 of the 66 are SHOULDs. The SUGGESTED criteria often have common valid reasons not to be followed, but we want projects to at least consider them; 14 of the 66 are SUGGESTED. People don’t like admitting that they don’t do something, so we think listing criteria as SUGGESTED is helpful because it nudges people to do them.

To earn a badge, all MUST criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be explicitly marked as met OR unmet (since we want projects to at least actively consider them). You can include justification text in markdown format with almost every criterion. In a few cases, we require URLs in the justification, so that people can learn more about how the project meets the criteria.

We gamify this – as you fill in the form you can see a progress bar go from 0% to 100%.  When you get to 100%, you’ve passed!
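
The passing rule described above is simple enough to sketch in code. Here is a minimal illustration of ours (not BadgeApp’s actual implementation) of how the rule and the progress percentage could be computed:

```python
# A minimal sketch of the passing rule described above (not BadgeApp's code).
MUST, SHOULD, SUGGESTED = "MUST", "SHOULD", "SUGGESTED"

def criterion_passes(category, status, justification=""):
    """status is 'Met', 'Unmet', or '?' (unanswered)."""
    if category == MUST:
        return status == "Met"                      # MUSTs must be met
    if category == SHOULD:
        # SHOULDs may be unmet, but only with a written justification.
        return status == "Met" or (status == "Unmet" and bool(justification))
    # SUGGESTED criteria just need an explicit answer, met or unmet.
    return status in ("Met", "Unmet")

def progress(answers):
    """answers: list of (category, status, justification) tuples."""
    passed = sum(criterion_passes(*a) for a in answers)
    return 100 * passed // len(answers)             # 100% means you passed

answers = [
    (MUST, "Met", ""),
    (SHOULD, "Unmet", "Our project is too small to need this."),
    (SUGGESTED, "Unmet", ""),  # explicitly considered, so it still counts
]
print(progress(answers))  # -> 100
```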

Q: What are the criteria?

A: We’ve grouped the criteria into 6 groups: basics, change control, reporting, quality, security, and analysis.  Each group has a tab in the form.  Here are a few examples of the criteria:

Basics

The software MUST be released as FLOSS. [floss_license]

Change Control

The project MUST have a version-controlled source repository that is publicly readable and has a URL.

Reporting

The project MUST publish the process for reporting vulnerabilities on the project site.

Quality

If the software requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.

The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).

Security

At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.

Analysis

At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.

Q: Are these criteria set in stone for all time?

A: The badge criteria were created through an open source process, and we expect that the list will evolve over time to include new aspects of software development. The criteria themselves are hosted on GitHub, and we actively encourage the security community to get involved in developing them. We expect that over time some of the criteria marked as SUGGESTED will become SHOULDs, some SHOULDs will become MUSTs, and new criteria will be added.


Q: What is the benefit to a project for filling out the form?  Is this just a paperwork exercise? Does it add any real value?

A: It’s not just a paperwork exercise; it adds value.

Project members want their project to produce good results.  Following best practices can help you produce good results – but how do you know that you’re following best practices?  When you’re busy getting specific tasks done, it’s easy to forget to do important things, especially if you don’t have a list to check against.

The process of filling out the form can help your project see whether you’re following best practices or forgetting to do something. We’ve had several cases during our alpha stage where projects tried to fill out the form, found they were missing something, and went back to change their project. For example, one project didn’t explain how to report vulnerabilities, but they agreed that they should. So a project either confirms that it’s following best practices, or realizes it’s missing something important and can then fix it.

There’s also a benefit to potential users.  Users want to use projects that are going to produce good work and be around for a while.  Users can use badges like this “best practices” badge to help them separate well-run projects from poorly-run projects.

Q: Does the Best Practices Badge compete with existing maturity models or anything else that already exists?

A: The Best Practices Badge is the first program specifically focused on criteria for an individual OSS project. It is free and extremely easy to apply for, in part because it uses an interactive web application that tries to automatically fill in information where it can.  

This is quite different from maturity models, which tend to focus on activities done by entire companies.

The BSIMM (pronounced “bee simm”) is short for Building Security In Maturity Model. It is targeted at companies, typically large ones, and not on OSS projects.

OpenSAMM, or just SAMM, is the Software Assurance Maturity Model. Like BSIMM, it focuses on organizations, not specific OSS projects, and on identifying activities that occur within those organizations.

Q: Do the project’s websites have to support HTTPS?

A: Yes, projects have to support HTTPS to get a badge. Our criterion sites_https now says: “The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. You can get free certificates from Let’s Encrypt.” HTTPS doesn’t counter all attacks, of course, but it counters a lot of them, so it’s worth requiring. At one time HTTPS imposed a significant performance cost, but modern CPUs and algorithms have basically eliminated that. It’s time to use HTTPS and TLS.
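
As an illustration (our own sketch, not the badge application’s actual check), verifying that a site supports HTTPS amounts to attempting a TLS handshake with certificate verification:

```python
# A hedged sketch of an HTTPS check: attempt a TLS handshake with full
# certificate-chain and hostname verification, as sites_https expects.
import socket
import ssl

def supports_https(hostname, port=443):
    context = ssl.create_default_context()  # verifies cert chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.version() is not None  # handshake succeeded
    except (OSError, ssl.SSLError):
        return False

print(supports_https("bestpractices.coreinfrastructure.org"))
```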

Q: How do I get started or get involved?

A: If you’re involved in an OSS project, please go get your badge from here:

https://bestpractices.coreinfrastructure.org/

If you want to help improve the criteria or application, you can see our GitHub repo:

https://github.com/linuxfoundation/cii-best-practices-badge

We expect that there will need to be improvements over time, and we’d love your help.

But again, the key is, if you’re involved in an OSS project, please go get your badge:

https://bestpractices.coreinfrastructure.org/

Dr. David A. Wheeler is an expert on developing secure software and on open source software. His works include Software Inspection: An Industry Best Practice, Ada 95: The Lovelace Tutorial, Secure Programming for Linux and Unix HOWTO, Fully Countering Trusting Trust through Diverse Double-Compiling (DDC), Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers!, and How to Evaluate OSS/FS Programs. Dr. Wheeler has a PhD in Information Technology, a Master’s in Computer Science, a certificate in Information Security, and a B.S. in Electronics Engineering, all from George Mason University (GMU). He lives in Northern Virginia.

Emily Ratliff is Sr. Director of Infrastructure Security at The Linux Foundation, where she sets the direction for all CII endeavors, including managing membership growth, grant proposals and funding, and CII tools and services. She brings a wealth of Linux, systems and cloud security experience, and has contributed to the first Common Criteria evaluation of Linux, gaining an in-depth understanding of the risk involved when adding an open source package to a system. She has worked with open standards groups, including the Trusted Computing Group and GlobalPlatform, and has been a Certified Information Systems Security Professional since 2004.


Hyperledger Works on Its Open-Source Footing

Taking a bootstrapped initiative to a healthy open-source project is difficult. But when there are only approximately 100 developers in the world who have a deep understanding of a technology such as blockchain, the difficulty increases dramatically.

Open-source veteran Brian Behlendorf was aware of the challenges when the Linux Foundation tapped him to lead the Hyperledger Project as its executive director in May.

“The job really is to be an independent voice for the project that is not affiliated with one company or another,” he told Markets Media. “It’s also to bring to the party everything that the Linux Foundation knows about running open-source projects. My job is to corral all that towards the purpose of building a great community and a great collection of code.”

Read more at Markets Media

Steady User Growth Characterizes Cloud Foundry Ecosystem

The CF community now includes 173 user groups with 33,400-plus individual members across 105 cities in 48 countries, CEO Sam Ramji said.

Cloud Foundry might be the only PaaS to have its own user conference—a three-day one, at that.

Cloud Foundry is an open source cloud platform as a service originally developed by VMware and now run by Pivotal Software, which is a joint venture owned by EMC, VMware and General Electric. Cloud Foundry was designed and developed by a small team from Google led by Derek Collison and originally was called Project B29.

Read more at eWeek

The Rise of Deep Learning in the Tech Industry

Tech analysts love trending topics. In fact, that’s their job: to forecast and analyze trends. Some years ago we had “Big Data”, more recently “Machine Learning”, and now it’s the time of “Deep Learning”. So let’s dive in and try to understand what’s behind it and what impact it can have on our society.

What’s new?

Neural Network algorithms are the main science behind Deep Learning. They are not new but became more popular in the mid-2000s after Geoffrey Hinton and Ruslan Salakhutdinov published a paper explaining how we could train a many-layered feedforward neural network one layer at a time. The large-scale impact of Deep Learning in Big Tech Companies began around 2010 with speech recognition.

It took around 30 years for the technique to become mainstream: computers were not powerful enough, and companies didn’t have such large amounts of data. When the researcher Yann LeCun experimented with his first algorithms in the ’80s, it took him three days to run them! Deep Learning has only been truly mainstream for about three years. In 2012, ImageNet, a popular challenge for scientists in the field of image recognition, was won for the first time using Deep Learning, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. This result drew a lot of attention to the field in the tech sector.

The technology behind Deep Learning is neural networks stacked together into multiple layers. One of the challenges for the humans who implement them is to understand the exact information extracted by each layer. Each stack of neurons extracts higher-level information, so that at the end the network can recognize very complex patterns. Humans are sometimes skeptical of this model because, even though it’s based on well-known mathematical equations, we know little about why some models work.
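
To make the layering concrete, here is a toy example we’ve added (arbitrary sizes, random untrained weights): a two-layer feedforward network in Python/NumPy, where each layer transforms the previous layer’s output, which is how deeper layers come to represent higher-level patterns.

```python
# A toy illustration of stacked layers (random weights, no training):
# each layer re-represents the previous layer's output.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x = rng.normal(size=(1, 784))                  # e.g., a flattened 28x28 image
W1, b1 = 0.01 * rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = 0.01 * rng.normal(size=(128, 10)), np.zeros(10)

h = relu(x @ W1 + b1)                          # layer 1: low-level features
logits = h @ W2 + b2                           # layer 2: class-level patterns
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 10 classes
print(probs.round(3))
```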

This is only the beginning. There are many challenges to tackle in topics like NLP (Natural Language Processing) and understanding spoken language. One key to this is context: when speech is limited to a small scope (e.g., a legal document or a food recipe), machines can interpret the meaning. For now, much of the nuance and complexity of human language remains difficult for machines (for instance, it’s very hard for a machine to understand a joke).

This is a big turn in history. Before Neural Networks, humans thought they were the best at designing code. Now they need to accept that the machine can beat them even in writing an algorithm. Machines programmed to recognize patterns with Deep Learning beat the old “Rule-Based” algorithms.

This video of a simple machine trained with the DeepMind algorithm is a very good illustration of the superior “intelligence” of the machine. The computer learns to win the game and at the end discovers tricks that nobody found before. It’s no longer about Brute-Force algorithms, but about the replication of complex human behavior. For instance, the same DeepMind team (recently bought by Google) won the game of Go against the best European player, something that no computer could do before Deep Learning.

Applications

A well-known application of Deep Learning is face recognition. Google Photos, for instance, is a very good example of this technology. It can even recognize your face from 20 years ago! To simplify, we could say that the first layer of neurons can recognize a circle, the second an iris, and the third an eye. If the computer has been trained well enough, it can recognize abstract entities like a face with a good probability.

After videos, speech, and translation, Google now uses Deep Learning for search, its core business. The ranking no longer relies only on human-designed algorithms (like the well-known PageRank); thanks to RankBrain, a Deep Learning algorithm, Google’s results are now more accurate and precise.

Of course, one of the trending topics in Deep Learning is the autonomous car. The National Highway Traffic Safety Administration said the Artificial Intelligence system piloting a self-driving car could be considered the driver under federal law. This is a major step toward ultimately winning approval for autonomous vehicles on the roads.

Many tech companies have recently understood the benefits that new A.I. techniques can bring. Facebook, Google, Apple, Microsoft, IBM and many others are building Deep Learning teams to tackle these challenges.

Facebook hired Yann LeCun to head its new A.I. lab and one year later hired Vladimir Vapnik, a main developer of the Vapnik–Chervonenkis theory of statistical learning. Apple recently bought three startups in Deep Learning as well. Google, as we underlined before, hired an amazing crew including Geoffrey Hinton. Finally, Baidu hired Andrew Ng, one of the most famous teachers and scientists in Machine Learning, to head its new research lab.

The battle is starting, and we don’t know yet who will win this Deep Learning fight. The main question is how it will impact our daily life. Will we become as powerful as James Bond, with a personal version of Moneypenny in our pocket (Facebook’s virtual assistant “M”)? Will we all lose our jobs and be replaced by machines? Maybe both?

What kind of future will appear?

The super-intelligence of connected machines, which humans may not be able to fully understand, could become a potential threat tomorrow. Stephen Hawking, Bill Gates, and Elon Musk have warned us about it.

“I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” said Bill Gates.

Indeed, it’s certain that we will save many lives and reduce boring, repetitive tasks with Deep Learning, but it could also have a huge negative impact on our society. As we have all seen in recent sci-fi movies, artificial (super-)intelligence could be used to destroy things or manipulate humans. One way to limit this potential threat is to open source the code, so that the whole community can be aware of the algorithms and know the state of the art. TensorFlow and OpenAI are good examples of this idea.

“Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” reads OpenAI’s founding manifesto.

One of the other consequences we fear most is the end of many jobs. Because each major technological innovation spreads across the whole economy, it’s certain that many sectors will be impacted by the exponential growth of such technologies. As economist Joseph Schumpeter taught us, it will also probably create many jobs in other sectors (mainly services and on-demand jobs). Maybe “this time will be different” and we will need new social institutions to take care of this. New economic ideas like Basic Income could be an interesting way to decrease the shock caused by the invasion of Deep Learning everywhere. Some institutions are already prepared to experiment with it.

Today, each AI is built with data from Internet sources like Google searches or Facebook feeds. But in the near future, AIs could be built with data from our personal devices. We don’t yet know which applications will emerge. We can be sure, as Andy Rubin, the co-founder of Android, has stated, that Deep Learning will become easier and cheaper to implement, so that every piece of software or hardware will be able to run its own intelligent algorithms.

Deep Learning is on its way to becoming a commodity….

This article was contributed by a student at Holberton School and should be used for educational purposes only.

Rancher & Vapor IO Perform New Tricks With Apache Mesos

Rancher Labs and Vapor IO are announcing moves related to Apache Mesos, the open source container orchestration platform. Rancher is adding Mesos support to its container management environment (also named Rancher), while Vapor IO is bringing its data center management software into Mesosphere‘s DC/OS. 

The announcements are coming out Wednesday morning at MesosCon North America, being held in Denver. Rancher already supports three types of container scheduling: Kubernetes, Docker Swarm from Docker Inc., and the startup’s own Cattle.

Read more at SDx Central

Linux Foundation Backs HPE’s Open Source Switch OS

OpenSwitch, the operating system for data center network switches that Hewlett-Packard Enterprise launched last year as an open source project together with a number of other networking heavyweights, has become an official Linux Foundation project, the foundation announced today.

The foundation provides infrastructure and management resources for the open source projects it accepts, as well as exposure to open source developers who may be more inclined to contribute because of the organization’s pedigree. It hosts some of the most influential open source infrastructure projects, such as Cloud Foundry, OpenDaylight, and the Xen Project.

Read more at Data Center Knowledge

Bring Networking Projects Under A Common Umbrella, Urges Cisco’s Dave Ward

As a “networking guy,” Cisco CTO of Engineering and Chief Architect Dave Ward finds it frustrating that today, although somebody can fire up an application and ask for CPU, RAM and storage, they can’t even ask for bandwidth. They have very simple networking primitives all the way up to the PaaS (Platform as a Service) layer.

Developers shouldn’t have to “keep the whole stack in their head,” said Ward, in his Collaboration Summit keynote. “From that developer’s point of view, I want to be able to fire up my workload, and I just want it to work.”

In his wide-ranging talk, titled “Umbrellas Aren’t Just for When It’s Raining,” Ward offered his thoughts on points including “building network projects in the stack so developers don’t have to know or care what’s going on.” A “no-stack developer” wants all of the controllers, analytics, orchestration, and service chaining just to work.

Ward’s goal is for the infrastructure to just do what a developer needs to have happen… thereby “creating a no-stack developer environment in which intent can be driven directly into the network.”

Ward discussed various open source projects that have sprung up in the past two years, and he said, “The Linux Foundation has done an outstandingly good job of pulling together communities that fill certain niches and certain functionality inside this stack.”

“The Linux Foundation has proven itself to be a perfect place for us to collaborate,” said Ward, with more than a dozen network projects, millions of lines of code under management, and many corporate sponsors and developers working together on multiple projects.

“I’m trying to catalyze, through this talk, a conversation about how to take all the projects we have and pull them together under an umbrella,” said Ward.

Toward that end, he says, “We could continue with ‘Let a thousand flowers bloom, and let a thousand communities rise’ and continue the way we are currently operating now.” But, suggests Ward, it would be good to have some planning around how to allocate resources: what’s the key focus, what needs to be built inside that architecture, and then align the cost.

Ward says, “It’s time to talk about creating a networking umbrella over all these foundations and projects.” Ward clarifies that he is talking about “the actual mechanism by how we can do this with an understanding of the governance structures, not the technical structures.” This could get the industry to the point where it could fill in and complete all the pieces that are necessary for orchestration, config, provisioning, and resources.

At minimum, urged Ward, “If we can’t get an umbrella architecture, we know a lot of the places that we need to fill in and have to work as an industry to create communities around those projects to get the job done.”

Watch Dave Ward’s full keynote, below:

https://www.youtube.com/watch?v=eEckX2hn4y4


Cumulus Linux 3.0 NOS Now in the Wild

Cumulus Networks is touting a bunch of heavyweights as supporting the latest iteration of its white-box Linux.

On board for the launch of the Cumulus Linux 3.0 network operating system are Dell, EdgeCore Networks, Mellanox, Penguin Computing, and Supermicro.

For Cumulus, one of the biggest aspects of the launch is that version 3.0 is 100 Gbps Ethernet-capable, something it reckons will be important for the data centre market.

Four of the products already certified in its hardware compatibility list target that space: Dell’s Z9100, Penguin’s 3200CP and Supermicro’s SSE-C3632 (all using Broadcom Tomahawk silicon), and Mellanox’s own-silicon SN2700.

Read more at The Register

HPE Targets DevOps and Agile with New Application Lifecycle Management Software

On Wednesday, Hewlett Packard Enterprise (HPE) announced the general availability of ALM Octane, its cloud-based application lifecycle management offering geared toward making customers’ DevOps processes more efficient.

The platform works with common toolsets and frameworks, such as Jenkins, Git, and Gherkin, while also providing insights to developers and application testers. This could potentially help enterprises deliver applications more quickly, without having to cut corners in the vetting process.

“HPE ALM Octane is specifically designed for Agile and DevOps-ready teams, bringing a cloud-first approach that’s accessible anytime and anywhere, bolstered by big data-style analytics to help deliver speed, quality, and scale across all modes of IT,” said Raffi Margaliot, senior vice president and general manager of application delivery management for HPE.

Read more at TechRepublic

Samba Server installation on Ubuntu 16.04

This guide explains the installation and configuration of a Samba server on Ubuntu 16.04 with anonymous and secured Samba shares. Samba is an Open Source/Free Software suite that provides seamless file and print services to SMB/CIFS clients. Samba is freely available, unlike other SMB/CIFS implementations, and allows for interoperability between Linux/Unix servers and Windows-based clients.