
The Linux Foundation’s Clyde Seepersad to Host Training Q&A on Twitter

On Friday, April 28, The Linux Foundation will continue its new series of Twitter chats with leaders at the organization. This monthly activity, entitled #AskLF, gives the open source community a chance to ask upper management questions about The Linux Foundation’s strategies and offerings.

#AskLF aims to increase access to the bright minds and community organizers within The Linux Foundation. While there are many opportunities to interact with staff at Linux Foundation global events, which bring together over 25,000 open source influencers, a live Twitter Q&A will give participants a direct line of communication to the designated hosts.

The second host (following Arpit Joshipura’s chat last month) will be Clyde Seepersad, the General Manager of Training and Certification since 2013. His #AskLF session will take place in the midst of many new training initiatives at the organization, including a new Inclusive Speaker Orientation and a Kubernetes Fundamentals course. @linuxfoundation followers are encouraged to ask Seepersad questions related to Linux Foundation courses, certifications, job prospects in the open source industry, and recent training developments.

Sample questions might include:

  • I’m new to open source but I want to work in the industry. How can a Linux Foundation Certification help me?

  • What are The Linux Foundation Training team’s support offerings like?

  • How will a Linux Foundation certification give me an advantage over other candidates with competitors’ certifications?

Here’s how you can participate in this month’s #AskLF:

  • Follow @linuxfoundation on Twitter: Hosts will take over The Linux Foundation’s account during the session.

  • Save the date: April 28, 2017 at 10 a.m. PT.

  • Use the hashtag #AskLF: Ask Clyde your questions while he hosts, and spread the news of #AskLF with your Twitter community.

  • Be a n00b!: If you’ve been considering beginning an open source training journey, don’t be afraid to ask Clyde basic questions about The Linux Foundation’s methods, recommendations, or subjects covered. No inquiry is too basic!

More dates and details for future #AskLF sessions to come! We’ll see you on Twitter, April 28th at 10 a.m. PT.

More information on Linux Foundation Training can be found in the training blog via Linux.com:

https://www.linux.com/learn/training

Hear Clyde’s thoughts on why Linux Foundation certifications give you a competitive advantage in this on-demand webinar:

No More Excuses: Why You Need to Get Certified Now

*Note: Unlike Reddit-style AMAs, #AskLF is not focused on general topics that might pertain to the host’s personal life. To participate, please focus your questions on open source training and Clyde Seepersad’s career.

Keeping State and Networking in Kubernetes

In the previous installments of this series, we learned a lot of neat things about Kubernetes: that it descends from Google’s once-secret Borg project, how its architecture fits together, and why it is a good choice for your datacenter. Now we’ll learn how Kubernetes keeps state with etcd, and how ordinary Linux networking ties everything together.

Key-Value Stores

Kubernetes needs a persistence layer to track the state of the cluster over time. Traditionally, this could be implemented with a relational database. However, in a highly scalable system, a relational database (e.g., MySQL or PostgreSQL) becomes a single point of failure. Distributed key-value stores are, by design, made to run on multiple nodes. Data is replicated among the nodes with strong consistency, so that when individual nodes fail, the data store as a whole does not. ZooKeeper, Consul, and etcd are all examples of distributed key-value stores.
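
As a quick illustration of the key-value model, here is what basic reads and writes look like with etcdctl using the etcd v2 API (the same client and API used below); the /example keys and values are just placeholders:

$ etcdctl set /example/message "hello"
hello

$ etcdctl get /example/message
hello

$ etcdctl ls /example
/example/message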

Kubernetes uses etcd. etcd can be run on a single node, though this provides no fault-tolerance. etcd uses a leader election algorithm to provide strong consistency of the stored state among the nodes.
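
If you are curious about the state of the cluster and which member is currently the leader, etcdctl (v2) can report both; the member ID and URLs below are illustrative output from a single-node setup:

$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

$ etcdctl member list
8e9e05c52164694d: name=master peerURLs=http://127.0.0.1:2380 clientURLs=http://127.0.0.1:2379 isLeader=true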

In a test setup on the master node, we also run a single-node etcd key-value store. We can check its contents with the etcdctl command and see what Kubernetes is storing in it:

$ systemctl -a | grep etcd
etcd2.service    loaded    active    running    etcd2

$ etcdctl ls /registry
/registry/ranges
/registry/namespaces
/registry/serviceaccounts
/registry/controllers
/registry/secrets
/registry/pods
/registry/deployments
/registry/services
/registry/events
/registry/minions
/registry/replicasets

This gives you a sneak peek at some of the Kubernetes resources.
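
You can drill further into any of these directories to see how Kubernetes stores its objects; the namespaces and pod name below are hypothetical, and on a setup like this the stored values come back as JSON:

$ etcdctl ls /registry/pods
/registry/pods/default
/registry/pods/kube-system

$ etcdctl ls /registry/pods/default
/registry/pods/default/nginx-demo

$ etcdctl get /registry/pods/default/nginx-demo
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-demo","namespace":"default", ...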

Networking Setup

Getting all the previous components running is a common task for system administrators who are used to configuration management. But to get a fully functional Kubernetes cluster, the network must be set up properly as well.

If you have deployed virtual machines (VMs) based on IaaS solutions, this will sound familiar. Containers running on each node attach to a Linux bridge. This bridge is configured with a specific subnet from which containers get their IP addresses, and that subnet is routed to all the other nodes. In essence, you need to treat a container just like a VM: all the containers started on any node need to be able to reach each other.
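
On a node, you can inspect this setup with standard Linux tooling. A minimal sketch, assuming the bridge is called cbr0 and this node’s container subnet is 10.244.1.0/24 (names and addresses are illustrative):

$ brctl show
bridge name     bridge id               STP enabled     interfaces
cbr0            8000.0a580a000001       no              veth1a2b3c

$ ip route
default via 192.168.1.1 dev eth0
10.244.1.0/24 dev cbr0  proto kernel  scope link  src 10.244.1.1
10.244.2.0/24 via 192.168.1.12 dev eth0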

You can find a detailed explanation of this model in the Kubernetes Cluster Networking documentation. The only caveat is that in Kubernetes the smallest compute unit is not a container, but what we call a pod. A pod is a group of co-located containers that share the same IP address.
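
You can see the one-IP-per-pod model directly with kubectl; the pod names, IPs, and node names below are illustrative (note that the two containers in redis-demo share a single IP):

$ kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-demo    1/1       Running   0          5m        10.244.1.8   node-1
redis-demo    2/2       Running   0          5m        10.244.2.4   node-2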

Kubernetes expects this network configuration to be available. It is not created automatically, so you have to set it up. You can configure your physical network, or use a software-defined overlay such as Weave, Flannel, or Calico.
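
If you configure the physical network yourself rather than use an overlay, the essence is simply to route each node’s pod subnet to that node. A hand-rolled sketch for a two-node cluster (all addresses are illustrative; an overlay such as Flannel automates exactly this kind of bookkeeping):

$ ip route add 10.244.2.0/24 via 192.168.1.12    # on node-1: reach node-2's pod subnet
$ ip route add 10.244.1.0/24 via 192.168.1.11    # on node-2: reach node-1's pod subnet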

Tim Hockin, one of the lead Kubernetes developers, has created a useful slide deck, Illustrated Guide To Kubernetes Networking, to help you understand Kubernetes networking.

Download the sample chapter of the Kubernetes Fundamentals course now.


The Cloud Foundry Approach to Container Storage and Security

Recently, The New Stack published an article titled “Containers and Storage: Why We Aren’t There Yet” covering a talk by IBM’s James Bottomley at the Linux Foundation’s Vault conference in March. Both the talk and the article focus on one of the central problems we’ve been working to address in the Cloud Foundry Foundation’s Diego Persistence project team, so we thought it would be a good idea to highlight the features we’ve added to mitigate it. Cloud Foundry also does significantly better on container security than what the article suggests is the current state of the art, so we’ll cover that here as well.

As the article puts it:

Right now, a major roadblock to stateful storage of containers is the inability, under current Linux-y architectures, to reconcile the file system user ID (fsuid), used by external storage systems, with the user IDs (uids) created within containers. They can not be reconciled in any way that can be both safe and maintainable without loss of coherence of either the system or the system administrator.
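
To make the quoted problem concrete: when a container runs in a user namespace, the uids inside it are shifted relative to the host, so a file written as root inside the container shows up under an unrelated uid on the external storage. A rough, simplified illustration, assuming a hypothetical /mnt/shared-volume mount; the mapping values are only an example:

$ cat /proc/self/uid_map      # inside a user-namespaced container
         0     100000      65536

$ touch /mnt/shared-volume/report.txt    # created as uid 0 inside the container

$ ls -ln /mnt/shared-volume/report.txt   # seen from the host
-rw-r--r-- 1 100000 100000 0 Apr 21 10:00 /mnt/shared-volume/report.txt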

Read more at The New Stack

Google’s New Chip Is a Stepping Stone to Quantum Computing Supremacy

John Martinis has given himself just a few months to reach a milestone in the history of computing.

He’s leader of the Google research group working on building astonishingly powerful computer chips that manipulate data using the quirks of quantum physics. By the end of this year, Martinis says, his team will build a device that achieves “quantum supremacy,” meaning it can perform a particular calculation that’s beyond the reach of any conventional computer. Proof will come from a kind of drag race between Google’s chip and one of the world’s largest supercomputers.

“We think we’re ready to do this experiment. It’s something we can do now,” says Martinis.

The Story of Getting SSH Port 22

The SSH (Secure Shell) port is 22. It is not a coincidence. This is a story I (Tatu Ylonen) haven’t told before. I wrote the initial version of SSH in spring 1995. It was a time when telnet and FTP were widely used.

Anyway, I designed SSH to replace both telnet (port 23) and ftp (port 21). Port 22 was free. It was conveniently between the ports for telnet and ftp. I figured having that port number might be one of those small things that would give some aura of credibility. But how could I get that port number? I had never allocated one, but I knew somebody who had allocated a port.
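
That neighborhood of port numbers is still visible today in /etc/services on a typical Linux system (exact entries and comments vary by distribution):

$ grep -E '^(ftp|ssh|telnet)[[:space:]]' /etc/services
ftp             21/tcp
ssh             22/tcp
telnet          23/tcp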

The basic process for port allocation was fairly simple at that time. The Internet was smaller, and we were in the very early stages of the Internet boom. Port numbers were allocated by IANA (the Internet Assigned Numbers Authority). At the time, that meant the esteemed Internet pioneers Jon Postel and Joyce K. Reynolds. Among other things, Jon had been the editor of such minor protocol standards as IP (RFC 791), ICMP (RFC 792), and TCP (RFC 793). Some of you may have heard of them.

To me Jon felt outright scary, having authored all the main Internet RFCs!

Anyway, just before announcing ssh-1.0 in July 1995, I sent this e-mail to IANA:

Read more at SSH

Protect Your Management Interfaces

When it comes to architecture design, one area that is often not given due consideration is the protection of the management interfaces used by administrators or operators to configure their infrastructure. These are the interfaces used to perform privileged actions on systems, and as such they’re a valuable prize for an attacker who wants to gain total control of your system.

There are a wide variety of management interfaces for different technologies. These include more traditional management interfaces (such as consoles and remote desktops), browser-based admin interfaces to configure infrastructure, and web-based interfaces to configure many cloud services.

This blog focuses on the more traditional management interfaces for managing servers and network infrastructure. Some of the points will be equally applicable to protecting cloud-based services too, and we’ll follow up with a blog that covers protecting the management interfaces of cloud services at a later date.
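
As one concrete example of the principle, administrators often restrict a server’s SSH management interface so that it is reachable only from a dedicated admin network. A minimal iptables sketch, assuming a hypothetical management subnet of 10.0.100.0/24:

$ iptables -A INPUT -p tcp --dport 22 -s 10.0.100.0/24 -j ACCEPT   # allow SSH from the admin subnet
$ iptables -A INPUT -p tcp --dport 22 -j DROP                      # drop SSH from everywhere else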

Read more at NCSC

An Aerospace Engineer Drags a Stodgy Industry Toward Open Source

More than a decade ago, software engineer Ryan Melton spent his evenings, after workdays at Ball Aerospace, trying to learn to use a 3-D modeling program. After a few weeks, for all his effort, he could make … rectangles that moved. Still, it was a good start. Melton showed his spinning digital shapes to Ball, a company that makes spacecraft and spacecraft parts, and got the go-ahead he’d been looking for: He could try to use the software to model a gimbal, the piece on a satellite that lets the satellite point.

Melton wanted to build the program to save himself time and learn something new. “It was something I needed for me,” he says. But his work morphed into a software project called Cosmos, a “command and control” system that sends instructions to satellites and displays data from their parts and pieces. Ball used it for some 50 flight projects and on-the-ground test systems. And in 2014, Melton decided Cosmos should share its light with the world.

Read more at Wired

Assimilate Go Programming with Open Source Books

Go is a compiled, statically typed programming language that makes it easy to build simple, reliable, and efficient software. It’s a general-purpose language with modern features, clean syntax, and a robust, well-documented standard library, making it a good candidate to learn as your first programming language. While it borrows ideas from other languages such as Algol and C, it has a very different character. It’s sometimes described as a simple language.
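
If you want a feel for the language before picking up one of the books, the classic first program takes only a few lines; this assumes the Go toolchain is installed:

$ cat > hello.go <<'EOF'
package main

import "fmt"

func main() {
    fmt.Println("Hello from Go")
}
EOF

$ go run hello.go
Hello from Go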

Read more at: https://www.ossblog.org/assimilate-go-programming-open-source-books/

ShellCheck – A Tool That Shows Warnings and Suggestions for Shell Scripts

ShellCheck is a static analysis tool that shows warnings and suggestions about bad code in bash/sh shell scripts. It can be used in several ways. The simplest way to give ShellCheck a go is the web interface at https://www.shellcheck.net, where you paste your shell script into an online editor (Ace, a standalone code editor written in JavaScript) for instant feedback; the online version is always synchronized to the latest git commit.

Alternatively, you can install it on your machine and run it from the terminal, integrate it with your text editor, or add it to your build or test suites.
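
For a taste of the feedback it gives, here is ShellCheck run against a small script with two common mistakes (the exact warning wording may differ between versions):

$ cat test.sh
#!/bin/sh
for f in $(ls *.txt); do
  echo $f
done

$ shellcheck test.sh

In test.sh line 2:
for f in $(ls *.txt); do
         ^-- SC2045: Iterating over ls output is fragile. Use globs.

In test.sh line 3:
  echo $f
       ^-- SC2086: Double quote to prevent globbing and word splitting.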

There are three things ShellCheck does primarily:

Read more at Tecmint

Your CEO’s Obliviousness about Open Source is Endangering Your Business

By Jeff Luszcz, Vice President of Product Management at Flexera Software

The consequences are easily recognizable; you remember the lucrative software product whose vendor was compelled to shelve it. You probably also remember the insidious software vulnerability that harmed millions of unsuspecting users (Heartbleed, anyone?).

But what caused these issues? It is what happens when an open source component is integrated into a commercial software product in violation of its open source license, or when it contains a previously unknown vulnerability. As technology evolves, open source security and compliance risks are reaching a critical point that, if not addressed, will threaten the entire software supply chain.

So who is responsible if you are not prepared, and your company is affected? The CEO.

Up to 50 percent of the code in commercial software can be defined as open source, and the majority of software engineers use open source to accelerate their work. The problem is that most engineers do not track what they are using. They also tend not to grasp the legal ramifications of using that code, nor the potential vulnerability risk they are inviting when adopting it.

The most startling aspect of it all is that most software executives are not aware of this risk either. If you do not know what open source software is being used, you cannot guarantee that the appropriate methods and automation are established to reduce OSS security and compliance risk. It is therefore imperative for software CEOs to confer with CTOs, security officers, and engineers to gain a thorough understanding of the key areas of their open source compliance and security operation.

An Open Source Explosion

The way we build software products has changed greatly in the last 10 years. It used to be the case that even the CEO would be highly aware of the third-party dependencies their company had on the outside world. The dependencies were often commercial in nature and required non-disclosure agreements (NDAs), contracts, payments, and other highly visible activities related to acquiring and licensing the technology. Then slowly at first, and with an ever-increasing pace seemingly overnight, the open source world exploded with millions of extremely high-quality, easy-to-acquire components free of licensing fees. The open source model, combined with fast, always-on networks and the social effects of open source use, created a perfect environment for hundreds, and even thousands, of OSS components to be brought in and added to a software product.

In some cases, there is more open source than homegrown, proprietary code in a company’s product. Unfortunately, most companies, while taking advantage of open source in order to create products faster, are not respecting the open source licenses associated with the software components they use. What is sometimes surprising to CEOs and other executives is that while open source is free of cost, it is not free of obligations. These obligations run the gamut from passing along a copyright statement or a copy of the license text, to providing the entire source code for the company’s product. Data shows that most companies are aware of only a small percentage of the open source they depend on. If you do not know what you are using, it is impossible to comply with the obligations specified in the licenses. Additionally, software can have bugs or vulnerabilities that may affect your product, and without keeping track of what you are using, it is easy to fall far behind on the upgrades or patches that fix them.
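
Tooling can take much of the pain out of that tracking. As one hypothetical illustration for a Node.js product, the package manager itself can at least enumerate the direct third-party components being shipped and flag those that have fallen behind; dedicated scanners are still needed for full license and vulnerability analysis:

$ npm ls --depth=0     # list the direct third-party packages the product depends on
$ npm outdated         # show which of those packages have newer releases available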

As with most business processes, if they are not seen as important or required by senior leadership they will not get done.  Open source license compliance is not an optional part of using open source, but it has been treated that way by most of the tech industry.  It is important for CEOs and other business leaders to show the importance of respecting the legal and security obligations that are part and parcel of using open source.

Basic Training

The technical debt that exists around open source compliance requires a multi-pronged effort to create a climate, and a set of expectations, in which compliance processes are actually followed. The most important element is education. Everyone in the company should be aware of the basics of open source licensing and compliance, because open source is used in practically every business process and job at a modern tech company. Graphic designers are using open source artwork, IT is installing and maintaining open source applications, and marketing is editing and creating content based on existing open source content, among countless other examples. When employees are trained on the basics and know that compliance is expected, the remaining job becomes much easier. They will be more mindful in their choices, better able to respect the open source content they use, and, in many cases, able to give back to the community in an acceptable manner.

Senior Leadership Mandates

After education comes time management: building open source compliance and vulnerability management into the technical and business processes that a company creates and follows. If no time or resources are provided to comply with open source obligations, it is no surprise to see those obligations ignored. While the potential legal, security, and reputational risks should be enough to bake this into your processes, compliance often happens only after senior leadership mandates that the time be set aside.

Open a Review Board

It is also recommended that an internal team of open source experts be assembled as part of an Open Source Review Board.  This team should be made up of technical, legal and business representatives who can help create policies and act as a clearing house for open source and third-party usage questions.

Questions that Need Answering

CEOs should make it a priority to see how easy it is to check, from the outside, for signs of proper open source license and security compliance. Is it easy to find the third-party license notices for your products? Is it easy to get any source code distributions required by your use of copyleft-style open source licenses? Do you have a process and plan for upgrading and patching your products in response to open source and third-party software vulnerabilities? Have you asked for spot checks of your compliance documents to confirm that the processes are being respected? Are you providing material support to the open source projects you are using? Have you required your commercial vendors to provide open source disclosures and compliance documents?

These important questions are a useful way to gauge how responsible your company is about proper open source practices.  By asking these questions and demanding suppliers comply with these policies, companies can best protect themselves and their customers from the disadvantages of being a laggard when it comes to technology.