
Google Zero-Trust Security Framework Goes Beyond Passwords

With a sprawling workforce, a wide range of devices running on multiple platforms, and a growing reliance on cloud infrastructure and applications, the idea of the corporate network as the castle and security defenses as walls and moats protecting the perimeter doesn’t really work anymore. That is why, over the past year, Google has been talking about BeyondCorp, the zero-trust, perimeterless security framework it uses to secure access for its 61,000 employees and their devices.

The core premise of BeyondCorp is that traffic originating from within the enterprise’s network is not automatically more trustworthy than traffic that originates externally.

Read more at InfoWorld

OpenStack for Research Computing

In this video from the Switzerland HPC Conference, Stig Telfer from StackHPC presents: OpenStack for Research Computing. OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

“This talk will present the motivating factors for considering OpenStack for the management of research computing infrastructure. Stig will give an overview of the differences in design criteria between cloud, HPC and data analytics, and how these differences can be mitigated through architectural and configuration choices of an OpenStack private cloud…”

Read more at insideHPC

NASA’s 10 Coding Rules for Writing Safety-Critical Programs

Large and complex software projects generally use some sort of coding standard and guidelines. These guidelines establish the ground rules to be followed while writing software:

a) How should the code be structured?

b) Which language features should or should not be used?

To be effective, the set of rules has to be small and specific enough to be easily understood and remembered.

The world’s top programmers working for NASA follow a set of guidelines for developing safety-critical code. In fact, many organizations, including NASA’s Jet Propulsion Laboratory (JPL), focus on code written in the C programming language.

Read more at RankRed

Receiving an AES67 Stream with GStreamer

GStreamer is great for all kinds of multimedia applications, but did you know it could also be used to create studio grade professional audio applications?

Written by Olivier Crete, Multimedia Lead at Collabora.

For example, with GStreamer you can easily receive an AES67 stream, the standard that allows interoperability between different IP-based audio networking systems and transfers of live audio between professional-grade systems.

Figure 1. AES67 at the NAB Show in Las Vegas, April 22-27.

Receiving an AES67 stream requires two main components, the first being the reception of the media itself. AES67 is simple because it’s just a stream of RTP packets containing uncompressed PCM data. This means it can be received with a simple pipeline, such as “udpsrc ! rtpjitterbuffer latency=5 ! rtpL24depay ! …”. Not much more is needed: this pipeline will receive the stream and introduce 5 ms of latency, which, as long as the network is uncongested, should already sound great.
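
Fleshing that pipeline out into a runnable command might look like the following. This is only a sketch: the UDP port and the caps (48 kHz, 24-bit, two channels are typical AES67 parameters) are assumptions here and must match whatever the sender actually streams.

# receive L24 RTP audio on port 5004 and play it on the default audio output
$ gst-launch-1.0 udpsrc port=5004 \
      caps="application/x-rtp, media=audio, clock-rate=48000, encoding-name=L24, channels=2" ! \
      rtpjitterbuffer latency=5 ! rtpL24depay ! \
      audioconvert ! autoaudiosink

The latency=5 setting keeps the 5 ms jitter buffer from the pipeline above; on a congested or lossy network you may need to raise it.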

The second component is clock synchronization, one of the most important aspects of pro audio. The goal is for the sender and the receiver of the audio to use the same clock, so that no glitches are introduced by a clock running too fast or too slow. The standard used for this is the Precision Time Protocol version 2 (PTP), defined by the IEEE 1588-2008 standard. While there are a number of free implementations that can be used as master or slave PTP clocks, GStreamer provides the GstPTPClock class, which can act as a slave that synchronizes itself to a PTP clock master on the network.
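
If your network does not already have a PTP grandmaster, one of those free implementations can provide one for testing. As a minimal sketch, assuming the linuxptp package and a network interface named eth0 (both assumptions), the following starts ptp4l in the foreground; with no better clock present it will announce itself as master, and a GStreamer receiver using GstPTPClock can then slave to it:

# run a PTP (IEEE 1588-2008) daemon on eth0 and log messages to stdout
$ sudo ptp4l -i eth0 -m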

Continue reading on Collabora’s blog.

On Multi-Cloud Tradeoffs and the Paradox of Openness

In any technology adoption decision organisations are faced with a balancing act between openness and convenience. Open standards and open source, while in theory driving commoditization and lower cost, also create associated management overheads. Choice comes at a cost. Managing heterogeneous networks is generally more complicated, and therefore resource intensive, than managing homogenous ones, which explains why in every tech wave the best packager wins and wins big – they make decisions on behalf of the user which serve to increase convenience and manageability at the individual or organisational level.

One of the key reasons that Web Scale companies can do what they do, managing huge networks at scale, is aggressive control of hardware, software, networks and system images …

Read more at RedMonk

Linux Foundation Launches EdgeX Foundry for IoT Edge Interoperability

There is a new internet of things (IoT) project launching at the Linux Foundation today—EdgeX Foundry. Dell is contributing its Fuse IoT code base as the initial code for EdgeX Foundry, providing an open framework for IoT interoperability.

“We lack a common framework for building edge IoT solutions—above individual devices and sensors but below the connection to the cloud,” Philip DesAutels, senior director of IoT at the Linux Foundation, told eWEEK. “That means every development that gets done today is bespoke, and that means it is fragile, costly and immobile.”

DesAutels said that with its common framework, EdgeX aims to help solve some of the current challenges of IoT deployment. EdgeX provides developers with a plug-and-play infrastructure to create edge solutions.

Read more at eWeek

LC3 2017 Features Open Source Experts in SDN, Cloud, DevOps, and More

Developers, architects, sysadmins, DevOps experts, business leaders, and other professionals will gather in China June 19-20 to discuss the latest open source technology and trends at LinuxCon + ContainerCon + CloudOpen China 2017 (LC3).

This event — held for the first time in Beijing, China — features three conferences in one, with more than 100 conference sessions focusing on topics such as:

  • Kubernetes

  • Cloud Native & Containers

  • Linux

  • Blockchain

  • Networking & Orchestration

  • IoT & Embedded Linux

  • Professional Open Source

In a special keynote presentation, Linus Torvalds, Creator of Linux and Git, will chat with Dirk Hohndel, VP, Chief Open Source Officer, VMware.

Other keynote speakers include:

  • Madam Yang Zhiqiang, Deputy General Manager, China Mobile Research Institute

  • Jonathan Bryce, Executive Director, OpenStack Foundation

  • Dave Ward, ‎CTO of Engineering and Chief Architect, Cisco Systems

  • Dr. Sanqi Li, CTO of Product & Solutions, Huawei

With more than half of the speakers coming from outside of China, there is no better place to learn from leading open source experts from China and around the world.

Session highlights include:

  • Adoption and Localization of Kubernetes in China – Jiayao (Julia) Han, Caicloud

  • There is NO Open Source Business Model – Stephen Walli, Docker Inc.

  • Releasing a Linux Distribution In the Age of DevOps – Brian Stinson, The CentOS Project

  • The Business Reality of Building Open Source: What We Learned from OVS and OVN – Justin Pettit, VMware & Ben Pfaff, Open vSwitch Project

  • Challenge and Practice of SDN in Large Scale Data Centers – Jiang, Alibaba Cloud

  • Hardening Your IoT Endpoints: A Preventive Toolkit – Rabimba Karanjai, Almaden Research Center

At LC3, attendees can expect to learn about the newest and most interesting open source technologies as well as how to collaborate and lead in the open source community.

You can view the full schedule here.

Take advantage of early bird pricing now and save $60 USD / 415 RMB through April 27. Register now!

The Linux Foundation’s Clyde Seepersad to Host Training Q&A on Twitter

On Friday, April 28, The Linux Foundation will continue its new series of Twitter chats with leaders at the organization. This monthly activity, entitled #AskLF, gives the open source community a chance to ask upper management at The Linux Foundation questions about the organization’s strategies and offerings.

Clyde Seepersad
#AskLF aims to increase access to the bright minds and community organizers within The Linux Foundation. While there are many opportunities to interact with staff at Linux Foundation global events, which bring together over 25,000 open source influencers, a live Twitter Q&A will give participants a direct line of communication to the designated hosts.

The second host (following Arpit Joshipura’s chat last month) will be Clyde Seepersad, the General Manager of Training and Certification since 2013. His #AskLF session will take place in the midst of many new training initiatives at the organization, including a new Inclusive Speaker Orientation and a Kubernetes Fundamentals course. @linuxfoundation followers are encouraged to ask Seepersad questions related to Linux Foundation courses, certifications, job prospects in the open source industry, and recent training developments.

Sample questions might include:

  • I’m new to open source but I want to work in the industry. How can a Linux Foundation Certification help me?

  • What are The Linux Foundation Training team’s support offerings like?

  • How will a Linux Foundation certification give me an advantage over other candidates with competitors’ certifications?

Here’s how you can participate in this month’s #AskLF:

  • Follow @linuxfoundation on Twitter: Hosts will take over The Linux Foundation’s account during the session.

  • Save the date: April 28, 2017 at 10 a.m. PT.

  • Use the hashtag #AskLF: Ask Clyde your questions while he hosts, and spread the news of #AskLF with your Twitter community.

  • Be a n00b!: If you’ve been considering beginning an open source training journey, don’t be afraid to ask Clyde basic questions about The Linux Foundation’s methods, recommendations, or subjects covered. No inquiry is too basic!

More dates and details for future #AskLF sessions to come! We’ll see you on Twitter, April 28th at 10 a.m. PT.

More information on Linux Foundation Training can be found in the training blog via Linux.com:

https://www.linux.com/learn/training

Hear Clyde’s thoughts on why Linux Foundation certifications give you a competitive advantage in this on-demand webinar:

No More Excuses: Why You Need to Get Certified Now

*Note: Unlike Reddit-style AMAs, #AskLF is not focused on general topics that might pertain to the host’s personal life. To participate, please focus your questions on open source training and Clyde Seepersad’s career.

Keeping State and Networking in Kubernetes

In our previous installments of this series (see below), we learned a lot of neat things about Kubernetes: that it is descended from Google’s secret Borg project, how its architecture fits together, and why it is a good choice for your datacenter. Now we’ll learn how Kubernetes keeps state with etcd, and how normal Linux networking ties everything together.

Key-Value Stores

Kubernetes needs a persistence layer to track the state of the cluster over time. Traditionally, this could be implemented with a relational database. However, in a highly scalable system, a relational database (e.g., MySQL or PostgreSQL) becomes a single point of failure. Distributed key-value stores are, by design, made to run on multiple nodes. Data is replicated among the nodes with strong consistency, so the data store keeps working even when individual nodes fail. ZooKeeper, Consul, and etcd are all examples of distributed key-value stores.

Kubernetes uses etcd. etcd can be run on a single node, though this provides no fault tolerance. When clustered, etcd uses a leader-election algorithm (Raft) to provide strong consistency of the stored state among the nodes.

In a test setup, we also run a single-node etcd key-value store on the master node. We can check its contents with the etcdctl command and see what Kubernetes is storing in it:

$ systemctl -a | grep etcd
etcd2.service    loaded    active    running    etcd2

$ etcdctl ls /registry
/registry/ranges
/registry/namespaces
/registry/serviceaccounts
/registry/controllers
/registry/secrets
/registry/pods
/registry/deployments
/registry/services
/registry/events
/registry/minions
/registry/replicasets

This gives you a sneak peek at some of the Kubernetes resources.
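
You can drill further into these prefixes with the same tool. The exact keys and the JSON stored under them depend on the Kubernetes version, so treat the paths below as an illustration rather than a stable interface:

# list the namespace keys, then dump the object stored for the default namespace
$ etcdctl ls /registry/namespaces
$ etcdctl get /registry/namespaces/default

The get command prints the JSON-serialized API object that the API server persisted for the default namespace.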

Networking Setup

Getting all the previous components running is a common task for system administrators who are used to configuration management. But to get a fully functional Kubernetes cluster, the network must be set up properly as well.

If you have deployed virtual machines (VMs) on IaaS solutions, this will sound familiar. The containers running on each node attach to a Linux bridge. This bridge is configured to hand out IP addresses from a specific subnet, and that subnet is routed to all the other nodes. In essence, you need to treat a container just like a VM: all the containers started on any node need to be able to reach each other.

You can find a detailed explanation of this model in the Cluster Networking documentation. The only caveat is that in Kubernetes the lowest compute unit is not a container but what we call a pod: a group of co-located containers that share the same IP address.
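
You can see this model on a running cluster with kubectl (assuming it is already configured to talk to your cluster): every pod, wherever it is scheduled, gets a single IP address shared by all of its containers.

# show each pod together with its IP address and the node it runs on
$ kubectl get pods --all-namespaces -o wide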

Kubernetes expects this network configuration to be available. It is not created automatically, so you have to set it up. You can configure your physical network, or use a software-defined overlay such as Weave, Flannel, or Calico.
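
As one example of the overlay approach, flannel reads its configuration from etcd and carves a per-node subnet out of a larger cluster-wide range. A minimal sketch, assuming flannel’s default key prefix and an example 10.244.0.0/16 pod network (adapt both to your environment):

# store the cluster-wide pod network configuration where flanneld expects it
$ etcdctl set /coreos.com/network/config \
      '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'

Each node’s flanneld then allocates its own subnet from this range and sets up the VXLAN overlay so that pod traffic is routed between nodes.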

Tim Hockin, one of the lead Kubernetes developers, has created a useful slide deck, Illustrated Guide To Kubernetes Networking, to help you understand Kubernetes networking.

Download the sample chapter now.

Kubernetes Fundamentals

You may enjoy the previous entries in this series.

The Cloud Foundry Approach to Container Storage and Security

Recently, The New Stack published an article titled “Containers and Storage: Why We Aren’t There Yet” covering a talk from IBM’s James Bottomley at the Linux Foundation’s Vault conference in March. Both the talk and the article focused on one of the central problems we’ve been working to address in the Cloud Foundry Foundation’s Diego Persistence project team, so we thought it would be a good idea to highlight the features we’ve added to mitigate it. Cloud Foundry also does significantly better on the container security front than what the article suggests is the current state of the art, so we’ll cover that here as well.

As the article puts it:

Right now, a major roadblock to stateful storage of containers is the inability, under current Linux-y architectures, to reconcile the file system user ID (fsuid), used by external storage systems, with the user IDs (uids) created within containers. They can not be reconciled in any way that can be both safe and maintainable without loss of coherence of either the system or the system administrator.

Read more at The New Stack