
KubeCon: CoreOS Tectonic, Open Source Kubernetes Tools from Oracle, Kasten, and More


The Cloud Native Computing Foundation kicked off its KubeCon + CloudNativeCon North America conference, dedicated to Kubernetes and cloud native technologies, in Austin, Texas today with the announcement of 31 new members, including AppsCode, CA, Datadog, Grafana Labs, InfluxData, HPE, and Kasten.

“KubeCon + CloudNativeCon is the polestar for practitioners of Kubernetes and other cloud native technologies. We are bringing together the core developers, end users, vendors and other contributors who are building the infrastructure for the next decade of computing,” said Dan Kohn, executive director of the Cloud Native Computing Foundation (CNCF).

A number of companies made announcements surrounding Kubernetes and cloud-native technology. Here’s a rundown of the biggest news:

Oracle announces new open source Kubernetes Tools

Oracle is open sourcing the Fn project Kubernetes Installer and Global Multi-Cluster Management solution. 

Read more at SDTimes

Linux Then, and Why You Should Learn It Now

The booming popularity of Linux happened around the same time as the rise of the web. The server world, once proprietary, eventually fell in love with Linux just the same way networking did. But for years after it began growing in popularity, it remained in the background. It powered some of the largest servers, but couldn’t find success on personal devices. That all changed with Google’s release of Android in 2008, and just like that, Linux found its way not only onto phones but onto other consumer devices.

The same shift from proprietary to open is happening in networking. Specialized hardware that came from one of the “big 3” networking vendors isn’t so necessary anymore. What used to require this specialized hardware can now be done (with horsepower to spare) using off-the-shelf hardware, with Intel CPUs, and with the Linux operating system. Linux unifies the stack, and knowing it is useful for both the network and the rest of the rack. With Linux, networking is far more affordable, more scalable, easier to learn, and more adaptable to the needs of the business.

Read more at Network World

How Kubernetes Deployments Work

This contributed article is part of a series from members of the Cloud Native Computing Foundation (CNCF) about CNCF's KubeCon + CloudNativeCon, taking place this week in Austin, Dec. 6 – 8.

We’ve written quite a few blog posts about the Kubernetes container orchestration engine and how to deploy to Kubernetes already, but none cover how Kubernetes Deployments work in detail.

With Kubernetes Deployments, you “describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate,” the Kubernetes Deployment documentation states. In this blog, we’ll first explain how Deployments work from a high-level perspective, then get our hands dirty by creating a Deployment and seeing how it relates to ReplicaSet and Pod objects.
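The “desired state at a controlled rate” idea can be illustrated with a toy control loop. This is a hypothetical sketch, not the real Deployment controller or the Kubernetes API: a declared replica count is the desired state, and a loop nudges the actual count toward it one step at a time.

```python
# Toy sketch of the reconciliation idea behind Deployments (hypothetical
# code, not the real controller): a desired replica count is declared,
# and a control loop moves the actual state toward it at a controlled rate.

def reconcile(actual_replicas: int, desired_replicas: int, max_step: int = 1) -> int:
    """Return the next actual replica count, stepping toward the desired count."""
    if actual_replicas < desired_replicas:
        return min(actual_replicas + max_step, desired_replicas)
    if actual_replicas > desired_replicas:
        return max(actual_replicas - max_step, desired_replicas)
    return actual_replicas  # already converged

def converge(actual: int, desired: int) -> list[int]:
    """Run the loop to convergence, recording each intermediate state."""
    states = [actual]
    while actual != desired:
        actual = reconcile(actual, desired)
        states.append(actual)
    return states

print(converge(2, 5))  # scales up one replica at a time: [2, 3, 4, 5]
```

The real controller reconciles ReplicaSets rather than bare counts, but the declarative shape is the same: you state *what* you want, and the loop works out *how* to get there.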

Read more at The New Stack

Deployment Strategies Defined

Let’s talk about deployments. This topic used to be considered an uninteresting implementation detail, but it is becoming a fundamental element of modern systems. I feel like everyone understands its importance and is working to build solutions around it, but we are missing some structure and definition. People use different terms for the same meaning, or the same term for different meanings, which leads others to reinvent the wheel when trying to solve their own problems. We need a common understanding of this topic in order to build better tools, make better decisions, and simplify communication with each other.

This post is my attempt to list and define those common deployment strategies, which I call:

  • Reckless Deployment
  • Rolling Upgrade
  • Blue/Green Deployment
  • Canary Deployment
  • Versioned Deployment

There are probably other names and terms you expected to see on this list. I’d argue that those “missing” terms can be seen as variants of these primary strategies.
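As a concrete illustration of one strategy from the list, here is a minimal sketch of a rolling upgrade. The code is hypothetical and tied to no particular tool: it simply replaces running instances in small batches, so part of the fleet stays on the old version until the rollout completes.

```python
# Illustrative sketch of a rolling upgrade (hypothetical, tool-agnostic):
# instances are replaced in batches, so some capacity remains on the old
# version at every point until the rollout finishes.

def rolling_upgrade(instances: list[str], new_version: str, batch_size: int = 2):
    """Yield the fleet state after each batch is upgraded."""
    fleet = list(instances)
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            fleet[i] = new_version
        yield list(fleet)

fleet = ["v1"] * 5
for state in rolling_upgrade(fleet, "v2", batch_size=2):
    print(state)
# ['v2', 'v2', 'v1', 'v1', 'v1']
# ['v2', 'v2', 'v2', 'v2', 'v1']
# ['v2', 'v2', 'v2', 'v2', 'v2']
```

A blue/green deployment, by contrast, would stand up a full second fleet on the new version and flip traffic over in one step; the batch loop above is exactly what it avoids.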

And one final note before we begin: this post is about definitions, methodologies and approaches and not about how to implement them in any technology stack. A technical tutorial will follow in another post. Stay tuned!

Read more at Itay as a Service

Five Edge Data Center Myths

With mobile and last-mile bandwidth coming at a premium and modern applications needing low-latency connections, compute is moving from centralized data centers to the edge of the network. But there are a lot of myths about edge data centers. Here’s what organizations typically get wrong, according to Uptime Institute’s CTO Chris Brown:

Myth 1: Edge computing is a way to make cheap servers good enough

The old branch office model of local servers won’t work for the edge; an edge data center isn’t just a local data center. “An edge data center is a collection of IT assets that has been moved closer to the end user that is ultimately served from a large data center somewhere.”

Read more at Data Center Knowledge

Linux Kernel Developer: Kees Cook

Security is paramount these days for any computer system, including those running on Linux. Thus, part of the ongoing Linux development work involves hardening the kernel against attack, according to the recent Linux Kernel Development Report.

Here, Kees Cook, Software Engineer at Google, answers a few questions about his work on the kernel.

Linux Foundation: What role do you play in the community and what subsystem(s) do you work on?

Kees Cook: Recently, I organized the Kernel Self-Protection Project (KSPP), which has helped focus lots of other developers to work together to harden the kernel against attack. I’m also the maintainer of seccomp, pstore, LKDTM, and gcc-plugin subsystems, and a co-maintainer of sysctl.

Read more at The Linux Foundation

GDPR: 7 Steps to Compliance

The General Data Protection Regulation (GDPR) will come into effect on May 25, 2018.

GDPR is a groundbreaking overhaul of rules first implemented two decades earlier, when the internet’s impact was a mere fraction of what it is today. For consumers, the new rules promise greater data protection. For businesses, however, they will require significant changes, as the cost of running afoul of them can be stiff. Here are a few steps to ensure your business is in compliance.

Find Help

Because any new set of regulations can be confusing and disrupt business, there are plenty of entities offering support. Consulting companies can provide the guidance businesses need to ensure they meet new demands. However, it’s important to seek out help early on, as demand might outstrip supply for compliance consultation. Furthermore, the sheer complexity of the GDPR means businesses might not be able to find a single entity for the entire process, so hiring multiple consultants might be essential.

Read more at TechNative

Growing Your Tech Stack: When to Say No

Someone on your team has an exciting suggestion, a new technology to introduce. But is it a good idea?

It is often easier to see the immediate benefits than the immediate risks or the long-term anything. This article looks at questions to ask and precautions to take when implementing new technologies in the development and running of software.

First, recognize that different technologies carry different risks. Ask yourself — what’s the worst that could happen? and what pain is inevitable? Also ask — what’s the best that could happen? And finally, how can I implement this with minimum danger?

The biggest distinguishing factor in the risk profile is where in the technology stack this new idea falls:

Read more at Codeship

7 Habits of Highly Successful Site Reliability Engineers

So we decided to look at some of the characteristics and habits common to highly successful SREs. As in most development and operations roles, first-class technical chops are obviously critical. For SREs, those specific skills might depend on how a particular organization defines or approaches the role: the Google approach to Site Reliability Engineering might require more software engineering and coding experience, whereas another organization might place a higher value on ops or QA skills. But as we found when we looked at what makes dev and ops practitioners successful, what sets the “great” apart from the “good enough” is often a combination of habits and traits that complement technical expertise.

Habit 1: You analyze every change in the context of the (much) bigger picture

Successful software developers understand how their code helps drive the overall business. SREs have their own version of this trait.

Read more at New Relic

Kubernetes Node

A Kubernetes Node is a logical collection of IT resources that supports one or more containers. Nodes contain the services necessary to run Pods (the Kubernetes unit grouping one or more containers), communicate with master components, configure networking, and run assigned workloads. A Node can host one or multiple Pods. Each Kubernetes Node runs the services needed to create the runtime environment and support Pods; these components include Docker, kube-proxy, and the kubelet.
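The Node/Pod relationship described above can be sketched as a simple data model. The names and the capacity check are hypothetical, for illustration only; this is not the real Kubernetes API, which tracks richer resource requests and limits.

```python
# Minimal sketch of the Node/Pod relationship (hypothetical names, not the
# real Kubernetes API): a Node is a pool of resources that can host one or
# more Pods, each Pod being a group of containers.

from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    containers: list[str]

@dataclass
class Node:
    name: str
    cpu_capacity: int                      # illustrative: free CPU cores
    pods: list[Pod] = field(default_factory=list)

    def schedule(self, pod: Pod, cpu_request: int = 1) -> bool:
        """Place the pod on this node if enough capacity remains."""
        if cpu_request <= self.cpu_capacity:
            self.pods.append(pod)
            self.cpu_capacity -= cpu_request
            return True
        return False

node = Node("worker-1", cpu_capacity=2)
node.schedule(Pod("web", ["nginx"]))
node.schedule(Pod("cache", ["redis"]))
print([p.name for p in node.pods])               # ['web', 'cache']
print(node.schedule(Pod("extra", ["busybox"])))  # no capacity left: False
```

In a real cluster, the scheduler makes this placement decision across many Nodes, and the kubelet on each Node then runs the assigned Pods.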

Kubernetes choreographs the deployment and scaling of applications in containers, rather than the deployment and scaling of the underlying hardware. Nodes are collections of resources defined by the hosting infrastructure, whether on a cloud provider or as physical or virtual machines (VMs).

Read more at TechTarget