Every great computer security team has a synergistic collection of skilled professionals who work well together to meet common goals. The team may debate a solution, but once a decision is made, everyone works hard to execute with no hard feelings. Good teams expect constant change and disruption. They know whatever it is they are trying to accomplish will likely be harder than anticipated.
When I encounter successful teams, distinct roles emerge among the group. Different organizations require different mixes of players, but these archetypes pop up again and again.
The world has reached a key moment in the history of how we work. We have entered a new business environment, shaped by rapidly changing technological variables that create an entirely new economic landscape. The exponential growth of our interconnected world forces us to see it anew. The 21st century demands a different mindset now that the rules of the game have fundamentally changed.
In this game, it is no longer enough to optimize an organization’s efficiency around a stable set of known variables. Instead, there is a pressing need to adapt as fast as possible to increasingly complex working conditions. Efficiency has to make way for engagement and adaptability. The organizations that know how to fully engage their employees, and especially those who are natives of this information-rich, densely interconnected 21st-century world, are the ones that thrive.
NodeSource primes its enterprise-oriented NSolid Node.js distro for Docker containers.
NodeSource is releasing a distribution of its enterprise-level, commercially supported NSolid Node.js runtime that works with Docker-friendly Alpine Linux. NSolid for Alpine Linux is intended to take advantage of Alpine’s small footprint and security capabilities, said Joe McCann, NodeSource CEO.
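As a rough sketch of what that enables, an Alpine-based NSolid image could serve as the base layer of an application container. The image name and tag below are assumptions for illustration only; check NodeSource’s published Docker images for the actual names.

# Hypothetical Dockerfile; "nodesource/nsolid:alpine" is an assumed image name.
FROM nodesource/nsolid:alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer.
COPY package.json .
RUN npm install --production
# Copy the application code and start it; NSolid is positioned as a drop-in Node.js runtime.
COPY . .
CMD ["node", "server.js"]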
RESTful services have been popular for quite some time now. They are widely used, primarily because of their performance, ease of use, and ease of maintenance. Swagger is a popular framework for documenting your RESTful Web APIs. You need some way to document your RESTful services so that consumers know the endpoints and the different data models used in the request and response payloads. This article presents a discussion on how we can use Swagger to document our Web APIs easily.
What is Swagger? Why is it needed?
Swagger is a framework that can be used for describing and visualizing your RESTful APIs. It provides a simple yet powerful way to represent your RESTful APIs so that the developers using them can understand the endpoints and the request and response payloads much better. The success of your API largely depends on good documentation, because documentation is what helps developers understand how to consume the API. This is exactly where Swagger comes to the rescue.
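As a minimal sketch, here is what a Swagger 2.0 definition might look like for a hypothetical orders endpoint; the resource names and fields are made up purely for illustration:

swagger: "2.0"
info:
  title: Orders API          # hypothetical API used only for illustration
  version: "1.0"
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order by its identifier
      parameters:
        - name: id
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: The requested order
        "404":
          description: No order exists with the given id

Tools such as Swagger UI can render a definition like this as interactive documentation, letting consumers browse the endpoints and try requests directly from the browser.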
As part of its goal to cultivate more diverse thoughts and opinions in open source, the April Women in Open Source webinar will discuss why publishing your own research, technical work and industry commentary is a smart move for your career and incredibly beneficial to the industry at large.
In this webinar, learn how to get started, what topics to write about, and how to contribute to magazines, journals, and newer publishing platforms like Medium. “Why and How To Publish Your Work and Opinions” will be held Thursday, April 27, 2017, at 9 a.m. Pacific Time.
Designed to share both inspirational ideas and practical tips the community can immediately put into action, the webinar will provide examples of women in open source who have successfully published their technical work and viewpoints, as well as identify influential publications to target. So mark your calendars!
Register today for this free webinar, brought to you by Women in Open Source.
As the community manager and an editor for Opensource.com, Rikki helps grow and oversee a community of moderators, contributors, and participants. Opensource.com attracts more than 1 million pageviews each month, with articles contributed by the open source community and community moderators.
Libby oversees content strategy for The Linux Foundation, including Linux.com and its newsletter, managing a team of freelance writers and editors. In addition, she writes and edits content for the site.
For news on future Women in Open Source events and initiatives, join the Women in Open Source email list and Slack channel. Please send a request to join via email to sconway@linuxfoundation.org.
In our first three installments in this series, we learned what Kubernetes is, why it’s a good choice for your datacenter, and how it descended from the secret Google Borg project. Now we’re going to learn what makes up a Kubernetes cluster.
A Kubernetes cluster is made up of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (physical or virtual) by using minikube.
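For example, a throwaway single-node cluster can be brought up and inspected like this (a sketch that assumes minikube and kubectl are already installed):

$ minikube start        # boots a local VM and starts all the Kubernetes components on it
$ kubectl get nodes     # the single minikube node should report a Ready status
$ minikube stop         # shut the test cluster down when finished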
Kubernetes has six main components that form a functioning cluster:
API server
Scheduler
Controller manager
kubelet
kube-proxy
etcd
Each of these components can run as standard Linux processes, or they can run as Docker containers.
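A quick way to see several of these pieces in action is to ask the API server for their health (a sketch; it assumes kubectl is already pointed at the cluster, and the exact output varies by version):

$ kubectl get componentstatuses   # reports the health of the scheduler, controller manager, and etcd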
Figure 1: Kubernetes Architectural Overview (by Steve Watt, Red Hat).
The Master Node
The master node runs the API server, the scheduler, and the controller manager. For example, on one of the Kubernetes master nodes that we started on a CoreOS instance, we see the following systemd unit files:
core@master ~ $ systemctl -a | grep kube
kube-apiserver.service loaded active running Kubernetes API Server
kube-controller-manager.service loaded active running Kubernetes Controller Manager
kube-scheduler.service loaded active running Kubernetes Scheduler
The API server exposes a highly configurable REST interface to all of the Kubernetes resources.
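For example, if the API server’s insecure local port is enabled (it commonly listens on port 8080 on the master in setups like the one above, though that is an assumption about this particular configuration), its resources can be browsed with plain HTTP:

core@master ~ $ curl http://127.0.0.1:8080/api/v1/nodes                     # list the nodes registered with the cluster
core@master ~ $ curl http://127.0.0.1:8080/api/v1/namespaces/default/pods   # list the pods in the default namespace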
The Scheduler’s main responsibility is to place containers on nodes in the cluster according to various policies, metrics, and resource requirements. It is also configurable via command-line flags.
Finally, the Controller Manager is responsible for reconciling the state of the cluster with the desired state, as specified via the API. In effect, it is a control loop that performs actions based on the observed state of the cluster and the desired state.
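To make that concrete, here is a minimal sketch of declaring a desired state: a ReplicationController asking for three copies of a (hypothetical) nginx pod. If a pod dies, the controller manager notices that the observed replica count no longer matches the desired count and starts a replacement.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx            # illustrative name
spec:
  replicas: 3            # desired state: three running copies of this pod
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx     # public nginx image from Docker Hub

Submitting this with kubectl create -f nginx-rc.yaml hands the desired state to the API server; from then on the controller manager and scheduler work together to keep three pods running.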
The master components support a multi-master, highly available setup. The schedulers and controller managers can elect a leader, while the API servers can be fronted by a load balancer.
Worker Nodes
All the worker nodes run the kubelet, kube-proxy, and the Docker engine.
The kubelet interacts with the underlying Docker engine to bring up containers as needed. The kube-proxy is in charge of managing network connectivity to the containers.
core@node-1 ~ $ systemctl -a | grep kube
kube-kubelet.service loaded active running Kubernetes Kubelet
kube-proxy.service loaded active running Kubernetes Proxy
core@node-1 ~ $ systemctl -a | grep docker
docker.service loaded active running Docker Application Container Engine
docker.socket loaded active running Docker Socket for the API
As a side note, you can also run rkt from CoreOS as an alternative to the Docker engine. It is likely that Kubernetes will support additional container runtimes in the future.
Next week we’ll learn about networking and maintaining a persistence layer with etcd.
Serverless computing and Docker are fast turning into seatmates. Where you find one, you’ll find the other.
Case in point: Hyper.sh, a container hosting service that uses custom hypervisor technology to run containers on bare metal, has introduced Func, a Docker-centric spin on serverless computing.
No one likes to admit it, but most of what has passed for IT security in the enterprise has historically been rudimentary at best. Most organizations physically segmented their networks behind a series of firewalls deployed at the edge of the network. The trouble is that once malware gets past the firewall, it can move laterally almost anywhere in the data center.
With the rise of network virtualization, a new approach to microsegmenting networks is now possible. The new approach uses microsegmentation to prevent malware from moving laterally and generating East-West traffic across the data center. Instead of a physical instance of a firewall, there is now a virtual instance of a firewall that is simpler to provision and update.
Over the past two and a half years, I’ve led a project at IBM that deployed a new set of tools to help improve the company’s product development efforts. What is the benefit of providing better tools to employees? A first answer is that it helps increase employee productivity. While this is true and part of the answer, it is much too narrow. The broader answer is that giving employees great tools is an excellent way to concretely affect positive culture change.
In this article I’ll summarize what the team did and what I learned.
Intel has cut funding for an effort it launched two years ago with Rackspace to encourage the use of OpenStack software technology by big business customers that want more flexible and cheaper data center infrastructure.
The two companies announced the joint effort, called the OpenStack Innovation Center, in July 2015. A source close to the effort said initial funding was supposed to last through 2018, but Intel pulled it early.
Intel and Rackspace disclosed the decision internally on Tuesday, the source said.