
Linkerd Project Joins the Cloud Native Computing Foundation

Today, the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee (TOC) voted to accept Linkerd as its fifth hosted project, alongside Kubernetes, Prometheus, OpenTracing, and Fluentd. You can find more information about the project on its GitHub page.

As with every project accepted by the CNCF — and by extension, The Linux Foundation — Linkerd is another great example of how open source technologies, both new and more established, are driving and participating in the transformation of enterprise IT.

Linkerd is an open source, resilient service mesh for cloud-native applications. Created by Buoyant founders William Morgan and Oliver Gould in 2015, Linkerd builds on Finagle, the scalable microservice library that powers companies like Twitter, SoundCloud, Pinterest, and ING. Linkerd brings scalable, production-tested reliability to cloud-native applications in the form of a service mesh: a dedicated infrastructure layer for service communication that adds resilience, visibility, and control to applications without requiring complex application integration.

“As companies continue the move to cloud native deployment models, they are grappling with a new set of challenges running large scale production environments with complex service interactions,” said Fintan Ryan, Industry Analyst at Redmonk. “The service mesh concept in Linkerd provided a consistent abstraction layer for these challenges, allowing developers to deliver on the promise of microservices and cloud native applications at scale. In bringing Linkerd under the auspices of CNCF, Buoyant are providing an important building block for the wider cloud native community to use with confidence.”

Enabling Resilient and Responsive Microservice Architectures

Linkerd enables a consistent, uniform layer of visibility and control across services and adds features critical for reliability at scale, including latency-aware load balancing, connection pooling, automatic retries and circuit breaking. As a service mesh, Linkerd also provides transparent TLS encryption, distributed tracing and request-level routing. These features combine to make applications scalable, performant, and resilient. Linkerd integrates directly with orchestrated environments such as Kubernetes (example) and DC/OS (demo), and supports a variety of service discovery systems such as ZooKeeper, Consul, and etcd. It recently added HTTP/2 and gRPC support and can provide metrics in Prometheus format.
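The article doesn’t include configuration, but for flavor, this is roughly what a minimal Linkerd 1.x configuration looks like: a router that accepts traffic on a local port and resolves service names through a discovery backend via a dtab routing rule. The Consul namer, datacenter name, and port below are illustrative assumptions, not details taken from this announcement.

```yaml
# Illustrative Linkerd 1.x config sketch: one HTTP router that
# resolves service names through Consul and load-balances requests.
namers:
- kind: io.l5d.consul      # service discovery backend (assumed here)
  host: localhost
  port: 8500

routers:
- protocol: http
  dtab: |
    /svc => /#/io.l5d.consul/dc1;   # map /svc/<name> to Consul services
  servers:
  - ip: 0.0.0.0
    port: 4140               # applications send traffic through this port
```

Applications then talk to `localhost:4140` instead of to each other directly, which is what lets Linkerd add retries, load balancing, and metrics without any application changes.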


“The service mesh is becoming a critical part of building scalable, reliable cloud native applications,” said William Morgan, CEO of Buoyant and co-creator of Linkerd. “Our experience at Twitter showed that, in the face of unpredictable traffic, unreliable hardware, and a rapid pace of production iteration, uptime and site reliability for large microservice applications is a function of how the services that comprise that application communicate. Linkerd allows operators to manage that communication at scale, improving application reliability without tying it to a particular set of libraries or implementations.”

Companies around the world use Linkerd in production to power their software infrastructure, including Monzo, Zooz, Foresee, Olark, Houghton Mifflin Harcourt, the National Center for Biotechnology Information, Douban, and more. It’s also featured as a default part of cloud-native distributions such as Apprenda’s Kismatic Enterprise Toolkit and StackPointCloud.

Notable Milestones:

  • 29 releases

  • 28 contributors and 400 Slack members

  • 1,370 GitHub stars

“Linkerd was built based on real world developer experiences in solving problems found when building large production systems at web scale companies like Twitter and Google,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “It brings this expertise to the masses, allowing a greater number of companies to benefit from microservices. I’m thrilled to have Linkerd as a CNCF inception project and for them to share their knowledge of building a cloud native service mesh with scalable observability systems to the wider CNCF community.”

As CNCF’s first inception-level project under the CNCF Graduation Criteria v1.0, Linkerd will receive mentoring from the TOC, priority access to the CNCF Community Cluster, and international awareness at CNCF events like CloudNativeCon/KubeCon Europe. The CNCF Graduation Criteria, recently voted in by the TOC, assign every CNCF project a maturity level of inception, incubating, or graduated, which allows CNCF to review projects at each maturity level and advance the development of cloud native technology and services.

For more on Linkerd, listen to an interview with Alex Williams of The New Stack and Morgan here, or stay tuned for Morgan’s upcoming blog post on the project’s roots and why Linkerd joined CNCF.

Open Source Software Strategies for Enterprise IT

Enterprises using open source code in infrastructure must understand both the risks and benefits of community-developed software. Professional open source management is a discipline that focuses on minimizing risk and delivering the benefits of open source software as efficiently as possible.

For successful open source management, enterprises must adopt clear strategies, well-defined policies, and efficient processes. Nobody gets all this right the first time, so it’s also important to review and audit your policies for continuous improvement. Additionally, successful open source initiatives for enterprise IT must provide real ROI in acquisition, integration, and management.

To examine these concepts in detail, The Linux Foundation is hosting a free webinar called “Open Source Strategy for Enterprise IT” on Thursday, Jan. 26 at 10:00 a.m. Pacific time. In this webinar, presented by Bill Weinberg, Sr. Director and Analyst, Open Source Strategy, and Greg Olson, Sr. Director, Open Source Consulting Services, you will learn about:

  • The elements of enterprise-level open source strategy

  • Using OSS as a secret weapon for innovation and differentiation

  • Current and new use cases for OSS

  • Attracting and retaining talent with OSS use and contribution

  • OSS security and compliance in the enterprise context

In a previous webinar (called “When Open Source Becomes Mission Critical”), Weinberg and Olson covered other topics related to managing open source software and talked specifically about the risks of under-management. Such risks include legal risks from license non-compliance, operational risks involving the ability of the software to meet enterprise needs over time, and security risks from vulnerabilities that companies must stay on top of.

Weinberg said the moral here is that managing open source software shouldn’t be an afterthought; it should be part and parcel of using and integrating open source software.

Learn more in the free webinar “Open Source Strategy for Enterprise IT,” on Thursday, January 26, at 10:00 a.m. Pacific time. Register Now>>

Linux Security Threats: The 7 Classes of Attackers

Start exploring Linux Security Fundamentals by downloading the free sample chapter today. DOWNLOAD NOW

Organizations today are facing a worldwide security workforce shortage — and hurting for it, according to a 2016 report from Intel Security and the Center for Strategic and International Studies (CSIS).

“Eighty-two percent of surveyed respondents admitted to a shortage of cybersecurity skills, with 71 percent of respondents citing this shortage as responsible for direct and measurable damage to organizations whose lack of talent makes them more desirable hacking targets,” according to Intel Security.

It’s important and valuable for Linux sysadmins to stay one step ahead of malicious hackers by fortifying their security skills. Regardless of your skill level or experience, there’s always more to learn to further expand your awareness of security issues and preventative measures.

The Linux Foundation’s online Linux Security Fundamentals course is intended for anyone involved with any security-related task, at any level. You’ll learn how to assess your current security needs, evaluate your current security readiness, and implement security options as required.

In this new tutorial series, we’ll give you a sneak preview of the third session in the course on Threats and Risk Assessment. Or you can download the entire chapter now.

By the end of the series, you should be able to:

  • Differentiate among the classes of attackers

  • Discuss the types of attacks

  • Explain the trade-offs in security, including likelihood, asset value, and business impact

  • Install and try common security tools: tcpdump, Wireshark, and nmap
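The tools in that last bullet are easier to appreciate once you see how little machinery a basic scan requires. As a sketch (not part of the course material), the core idea behind nmap’s TCP connect scan can be reproduced in a few lines of Python:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Report which of the given TCP ports accept a connection.

    This mirrors the idea behind nmap's TCP connect scan (-sT):
    attempt a full three-way handshake against each port.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an error
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

The real nmap does far more, of course: SYN (half-open) scans via raw sockets, service and OS fingerprinting, and timing controls, which is exactly why it is worth learning as a dedicated tool. Only scan hosts you are authorized to test.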

The 7 Classes of Attackers

In dealing with threats and risk assessment, it helps to distinguish seven different classes of attackers.

A white hat hacker

Breaks security for non-malicious reasons, perhaps to test their own security system or while working for a security company which makes security software. The term “white hat” in Internet slang refers to an ethical hacker. This classification also includes individuals who perform penetration tests and vulnerability assessments within a contractual agreement.

A black hat hacker

Violates computer security to be malicious or for personal gain. Black hat hackers form the stereotypical, illegal hacking groups often portrayed in popular culture. Black hat hackers break into secure networks to destroy data or make the network unusable for those who are authorized to use the network.

A script kiddie (also known as a skid or skiddie)

A non-expert who breaks into computer systems by using prepackaged automated tools written by others, usually with little understanding of the underlying concept.

Hacktivist

Utilizes technology to announce a social, ideological, religious, or political message. In general, most hacktivism involves website defacement or denial-of-service attacks.

Nation state

Refers to intelligence agencies and cyber warfare operatives of nation states.

Organized crime

Refers to criminal activities carried out for profit.

Bots

Automated software tools that are available for use by any type of hacker.

Attack Sources

An attack can be perpetrated by an insider or from outside the organization.

An inside attack is an attack initiated by an entity inside the security perimeter (an insider), i.e., an entity that is authorized to access system resources but uses them in a way not approved by those who granted the authorization.

An outside attack is initiated from outside the perimeter, by an unauthorized or illegitimate user of the system (an outsider). On the Internet, potential outside attackers range from amateur pranksters to organized criminals, international terrorists, and hostile governments.

A resource (either physical or logical), called an asset, can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can compromise the confidentiality, integrity, or availability of resources (potentially different from the vulnerable one) belonging to the organization and other involved parties (customers, suppliers).

In part 2 of this series, we’ll cover the types of attacks you can expect. And later we’ll discuss the business trade-offs associated with common security measures.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark

Linux Security Fundamentals Part 6: Introduction to nmap

Multi-Cloud Mesos at Adobe

Building your IT infrastructure is often a complicated dance of negotiating conflicting needs. Engineers have ideas of cool things they want to build. Operations wants stability, security, easy maintenance, and scalability. Users want things to work with no backtalk. In his talk at MesosCon Asia 2016, Frans van Rooyen of Adobe shares his team’s experiences with adapting Apache Mesos and DC/OS to support deploying infrastructure to multiple diverse clouds.

Adobe had several problems to solve: they wanted to make it easier for engineers to write and deploy new code without having to be cloud experts, and they wanted to solve conflicts between operations and engineering. Engineering wants AWS because “It’s agile, I don’t have to create tickets, and I have infrastructure as code. So I can actually call the infrastructure programmatically and build it; that’s what I like and want to do it that way.” Operations wants the local data center instead of a public cloud because “It’s secure, it’s cheaper, and we have more control.”

The various public clouds, such as Azure and AWS, have their own ways of doing things, and everyone has their own experience and preferences. Adobe’s solution was to abstract out the details of deploying to specific clouds so engineers could write simple spec files and let the new abstraction layer handle the details of deployment. Van Rooyen says, “Then suddenly all you care about is where do I need to run my stuff to run most effectively. The two main things that come up are latency and data governance. So if you’re thinking about where do I need to run my stuff, where do I need to run my container as an engineer, in my spec file I can say, ‘It’s very latency sensitive. It needs to be in a location over in Europe.’ Because of that, Operations can now take that requirement and run that in the appropriate cloud… Operations and Engineering don’t battle. Engineering doesn’t care because they know their container’s going to go and run where it needs to run most.”
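The talk doesn’t show the spec file format itself, but a hypothetical spec along the lines described (every field name here is invented for illustration) would declare requirements rather than a target cloud, leaving the cloud and region choice to the abstraction layer:

```yaml
# Hypothetical deployment spec: the engineer states requirements;
# the abstraction layer picks the cloud and region that satisfy them.
service: checkout-api
container: registry.example.com/checkout-api:1.4.2
requirements:
  latency_sensitive: true
  data_governance: eu        # data must stay in Europe
  instances: 6
```

With requirements expressed this way, the same spec can be satisfied by AWS, Azure, or a local data center, which is what dissolves the operations-versus-engineering standoff described above.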

Another goal was to standardize as much as possible. But because the various cloud APIs are not standard, it is very difficult to build a single tool to deploy to all clouds. Cloud technologies are fast-moving targets, so maintaining a single deployment tool would require constant maintenance. Van Rooyen discusses some of the cloud-specific tools they use: the Azure Container Service engine is open source and freely available. Terraform, by HashiCorp, is a multi-cloud tool. Troposphere lets you write code that, when run, generates an AWS CloudFormation template.

Van Rooyen says, “So what’s the end result? What do we get when this is all done? Once again, through that story, we had input, input went into infrastructure, infrastructure stood up a cluster, and now we have this…We were able to provision those clusters in multiple clouds in a standard way and get the same output. The end result, which is a cluster. An end point or a platform that we can now deploy code to.”

Watch van Rooyen’s complete presentation (below) to learn more about the software used, hardware considerations, and important architectural details.

Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>

Multi-Cloud Mesos at Adobe

In this presentation, Frans van Rooyen shares the journey of Adobe’s digital marketing team as they were tasked with building out Mesos in both public and private clouds.

GitHub Bug Bounty Program Offers Bonus Rewards

GitHub celebrates the third anniversary of its Bug Bounty program, with bonus rewards for security disclosures, as the program continues to help the popular code development platform stay secure. 

In January 2014, GitHub, the popular code-hosting platform, first launched a bug bounty program, rewarding security researchers for responsibly disclosing software vulnerabilities. Now, in January 2017, GitHub is celebrating the third anniversary of its bug bounty program with bonus rewards for the top submissions made in January and February.

Read more at eWeek

 

Continuous Integration Certification

Continuous Integration is a popular technique in software development. At conferences many developers talk about how they use it, and Continuous Integration tools are common in most development organizations. But we all know that any decent technique needs a certification program — and fortunately one does exist. Developed by one of the foremost experts in continuous delivery and devops, it’s known for being remarkably rapid to administer, yet very insightful for its results. Although it’s quite mature, it isn’t as well known as it should be, so as a fan of the technique I think it’s important for me to share this certification program with my readers. Are you ready to be certified for Continuous Integration? And how will you deal with the shocking truth that taking the test will reveal?

Read more at Martin Fowler

Writing SELinux Modules

SELinux struggles to cast off its image as difficult to maintain and a cause of potential application problems. Yet in recent years, much has changed for the better, especially with regard to usability. For example, modules have replaced its monolithic set of rules. If you want to develop a new SELinux module, three files are typically necessary.

Three Files for an SELinux Module

A type enforcement (.te) file stores the actual ruleset. To a large extent, it consists of m4 macros, or interfaces. For example, if you want to access a particular service’s resources, such as its logfiles, the service provides a corresponding interface for this purpose. If you want your own application to access these resources, you can draw on the service’s interface without having to deal with the logfile details. For example, you do not need to know the logfile’s security label, because the interface abstracts the access.
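As a concrete illustration, a minimal .te file for a hypothetical myapp daemon might look like the following. The module name and types are invented for this sketch; the interface macros shown (init_daemon_domain, logging_send_syslog_msg) are drawn from the SELinux reference policy.

```
policy_module(myapp, 1.0.0)

# Declare a domain for the daemon and a type for its executable.
type myapp_t;
type myapp_exec_t;
init_daemon_domain(myapp_t, myapp_exec_t)

# Use an interface instead of raw allow rules: this lets myapp_t
# send messages to syslog without knowing syslog's security labels.
logging_send_syslog_msg(myapp_t)
```

This is exactly the abstraction the paragraph above describes: the interface hides the labels and low-level rules behind a single macro call.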

Read more at ADMIN Magazine

Understanding Unikernels

When we describe a typical operating system kernel on a typical machine (be it physical or virtual), we are normally talking about a distinct piece of software which runs in a separate processor mode (kernel mode) and address space from the rest of the software running on that machine. This operating system kernel generally provides critical low-level functions which are leveraged by the other software installed on the box. The kernel is generally a generic piece of code which is trivially tailored (if at all) to the application software stack it is supporting on the machine. This generic kernel normally provides a wide range of rich functions, many of which may be unneeded by the particular applications it is being asked to support.

In fact, if you look at the total software stack on most machines today, it is often difficult to figure out just what application will be run on that machine. You are likely to find a wide swath of hundreds, if not thousands, of low-level utilities, plus multiple databases, a web server or two, and a number of specialized application programs. The machine may actually be charged with running a single application, or it may be intended to run dozens simultaneously. Careful analysis of the startup scripts will yield hints as to the final solution set which will be run on the machine, but it is far from certain, as a suitably privileged user may elect to invoke any of a number of applications present on the box.

Read more at BSD Mag

IPv6 Transition: A Quick Guide

Despite the much-anticipated depletion of public IPv4 addresses, adoption of network address translation (NAT) has led most enterprises to continue using IPv4 both internally and at the internet edge. But as companies refresh their networks and IoT begins to pick up steam, many network administrators are finally making the choice to incorporate IPv6 in their network in some capacity. Here are some fundamentals when it comes to an IPv6 transition.

How to read an IPv6 address

By far the most important skill in an IPv6 transition is simply understanding how to read an IPv6 address. While IPv4 and IPv6 addresses accomplish the same goal, they look drastically different. An IPv6 address is 128 bits long, compared to just 32 bits for an IPv4 address. While IPv6 addresses use the same mask structure as IPv4 to differentiate the host bits from the network bits, it’s on a 128-bit scale.
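One low-effort way to get comfortable with the notation is Python’s standard ipaddress module. This short sketch (not from the original article) expands and compresses an address and shows how a /64 prefix splits the network bits from the host bits:

```python
import ipaddress

# A fully written-out IPv6 address…
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1  (the longest run of zero groups collapses to '::')
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001

# A /64 prefix: the first 64 bits name the network,
# leaving 64 bits (2**64 addresses) for hosts.
net = ipaddress.IPv6Network("2001:db8::/64")
print(net.num_addresses == 2**64)  # True
```

The `2001:db8::/32` range used here is the block reserved for documentation, so examples like this never collide with real networks.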

Read more at Network Computing