
Vulnerability Remediation – You Only Have 4 Options

In my previous post, I wrote about a simple process for triaging vulnerabilities across applications. Once you have the issues prioritized, the vulnerability remediation process is pretty straightforward. You don’t have a lot of options; either remediate the issue, ignore it, or apply other measures (compensating controls) to mitigate the risk posed by the vulnerability.

1. Rip and Replace

This is the most common approach taken. Essentially, you are going to fix the problem by “amputating” the vulnerable component and replacing it with a component that fixes the vulnerability (either directly or by using a different open source project).

Read more at BlackDuck

Scalable Microservices with gRPC, Kubernetes, and Docker by Sandeep Dinesh, Google

https://www.youtube.com/watch?v=xsIwYL-N4vI&list=PLfMzBWSH11xYaaHMalNKqcEurBH8LstB8

Together, Kubernetes and gRPC provide a comprehensive solution to the complexities involved in deploying a massive number of microservices to a cluster.

Welcoming FRRouting to The Linux Foundation

One of the most exciting parts of being in this industry over the past couple of decades has been witnessing the transformative impact that open source software has had on IT in general and specifically on networking. Contributions to various open source projects have fundamentally helped bring the reliability and economics of web-scale IT to organizations of all sizes. I am happy to report the community has taken yet another step forward with FRRouting.

FRRouting (FRR) is an IP routing protocol suite for Unix and Linux platforms which includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP, and the community is working to make this the best routing protocol stack available.

FRR is rooted in the Quagga project and includes the fundamentals that made Quagga so popular as well as a ton of recent enhancements that greatly improve on that foundation.  

Here’s a bird’s eye view of some things the team has been busy working on:

  • 32-bit route tags were added to BGP and OSPFv2/v3, improving route policy maintenance and increasing interoperability in multivendor environments;

  • Update-groups and next-hop tracking enable BGP to scale to ever-larger environments;

  • BGP add-path provides users with the ability to advertise service reachability in richly connected networks;

  • The addition of RFC 5549 to BGP provides IPv4 connectivity over IPv6 native infrastructure, enabling customers to build IPv6-centric networks;

  • Virtual routing and forwarding (VRF) enables BGP users to operate isolated routing domains such as those used by web application infrastructures, hosting providers, and Internet service providers;

  • EVPN Type 5 routes allow customers with Layer 2 data centers to exchange subnet information using BGP EVPN;

  • PIM-SM and MSDP enable enterprise applications that rely on IP multicast to use FRR;

  • Static LSPs along with LDP enable architects to use MPLS to engineer network data flow;

  • An overhaul of the CLI infrastructure and new unit test infrastructure improve the ongoing development and quality of FRR;

  • Enabling IETF NVO3 network virtualization control allows users to build standards-based, interoperable network virtualization overlays.

The protocol additions above are augmented by Snapcraft packaging and support for JSON output, both of which improve the operationalization of FRR.
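The JSON output mentioned above makes FRR's state easy to consume from scripts. As a minimal sketch, the snippet below filters a BGP summary for established peers; the JSON shape and field names here are illustrative assumptions modeled loosely on what a command like `vtysh -c "show ip bgp summary json"` can emit, not a guaranteed schema:

```python
import json

# Illustrative sample shaped like a BGP summary in JSON; the exact key
# names ("ipv4Unicast", "peers", "state", "prefixReceivedCount") are
# assumptions for the sake of the example.
sample = """
{
  "ipv4Unicast": {
    "peers": {
      "10.0.0.2": {"state": "Established", "prefixReceivedCount": 120},
      "10.0.0.3": {"state": "Active", "prefixReceivedCount": 0}
    }
  }
}
"""

def established_peers(summary_json):
    """Return the peer addresses that have reached the Established state."""
    peers = json.loads(summary_json)["ipv4Unicast"]["peers"]
    return sorted(ip for ip, p in peers.items() if p["state"] == "Established")

print(established_peers(sample))  # ['10.0.0.2']
```

Parsing structured JSON like this is far more robust than scraping the human-oriented CLI tables, which is exactly why JSON output matters operationally.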

Pretty cool stuff, huh? The contributors designed FRR to streamline the routing protocol stack and to make engineers’ lives that much easier. Businesses can use FRR for connecting hosts, virtual machines, and containers to the network; advertising network service endpoints; network switching and routing; and Internet access/peering routers.

Contributors from 6WIND, Architecture Technology Corporation, Big Switch Networks, Cumulus Networks, LabN Consulting, NetDEF (OpenSourceRouting), Orange, Volta Networks, and other companies have been working on integrating their advancements and want to invite you to participate in the FRRouting community to help shape the future of networking.

Deploying Microservices to a Cluster with gRPC and Kubernetes

Microservices follow the UNIX philosophy of writing short, compact programs that do one thing and do it well, and they bring a lot of advantages (e.g., continuous deployment, decentralization, scalability, polyglot development, maintainability, robustness, and security). Even so, getting thousands of microservices up and running on a cluster, and correctly communicating with each other and the outside world, is challenging. In this talk from Node.js Interactive, Sandeep Dinesh, a Developer Advocate at Google Cloud, describes how you can successfully deploy microservices to a cluster using technologies that Google developed: Kubernetes and gRPC.

To address the issues mentioned above, Google first developed Borg and Stubby. Borg was Google’s internal cluster scheduler. When Google decided to use containers 10 years ago, the field was new, so the company wrote its own tooling. Borg ended up scheduling every single application at Google, from small side projects to Google Search. Stubby, Google’s RPC framework, was used for communication between different services.

However, instead of putting Borg and Stubby on GitHub as open source projects, Google chose to write new frameworks from scratch in the open, with the open source community. The reason, according to Dinesh, is that both Borg and Stubby were terribly written and so tied to Google’s internal infrastructure as to be unusable by the outside world.

That is how Kubernetes, the successor of Borg, and gRPC, a saner incarnation of Stubby, came to be.

Kubernetes

A common scenario while developing a microservice is to have your Docker container running your code on your local machine. Everything is fine until it is time to put it into production and you want to deploy your service on a cluster. That’s when complications arise: You have to ssh into a machine, run Docker, keep it up with nohup, etc., all of which is complicated and error-prone. The only thing you gain, according to Dinesh, is that you have made your development a little bit easier.

Kubernetes offers a solution in that it manages and orchestrates the containers on the cluster for you. You do not have to deal with machines anymore. Instead you interact with the cluster and the Kubernetes API.

It works like this: You dockerize your app and pass it on to Kubernetes in what’s called a Replication Controller. You tell Kubernetes that you need, say, four instances of your dockerized app running at the same time, and Kubernetes manages everything automatically. You don’t have to worry about on which machines your apps run. If one instance of your microservices crashes, Kubernetes will spin it back up. If a node in the cluster goes offline, Kubernetes automatically distributes the work to other nodes.
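A minimal sketch of what "tell Kubernetes you need four instances" looks like in practice is a ReplicationController manifest, expressed here as a plain Python dict; the app name, image, and port are placeholders, not taken from the talk:

```python
import json

# A minimal ReplicationController manifest built as a plain dict.
# "my-app" and "my-app:1.0" are placeholder names.
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "my-app"},
    "spec": {
        "replicas": 4,  # Kubernetes keeps exactly four copies running
        "selector": {"app": "my-app"},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "my-app",
                        "image": "my-app:1.0",
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# Serialized, this could be fed to `kubectl create -f -`.
print(json.dumps(rc, indent=2))
```

The key line is `"replicas": 4`: you declare the desired state, and Kubernetes continuously reconciles reality against it, restarting crashed instances or rescheduling them when a node disappears.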

With random pods and containers spinning up on random computers, you need a layer on top that can route traffic to the correct Docker container on the correct machine. That is where Kubernetes’ services come into play. A Kubernetes service has a static IP address and a DNS host name that route to a dynamic number of containers running on the system. It doesn’t matter if you are sending traffic to one app or a thousand — everything goes through your one service, which distributes it to the containers in your cluster transparently.
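As a sketch, a service of this kind can be expressed as a manifest-shaped dict; the names and ports below are placeholders. The important point is that the selector, not a fixed list of machines, decides which containers receive the traffic:

```python
import json

# A minimal Kubernetes Service manifest as a plain dict.
# "my-app" and the ports are placeholder values.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "my-app"},
    "spec": {
        # Route to whichever pods currently carry this label,
        # wherever in the cluster they happen to be running.
        "selector": {"app": "my-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(json.dumps(service, indent=2))
```

Because the service matches pods by label, containers can come and go freely while clients keep using one stable name and address.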

Taken together, the dockerized app embedded in its replication controller, along with its service, is what makes up one microservice in Kubernetes. You can run multiple microservices on one cluster, scale certain microservices up or down independently, or roll out a new version of one microservice, and, again, it will not affect the other microservices.

gRPC

When you have multiple microservices running, communication between them becomes the most important part of your framework. According to Martin Fowler, the biggest issue in changing a monolith into microservices lies in changing the communication pattern.

Communication between microservices is done with Remote Procedure Calls (RPCs), and Google handles on the order of 10^10 RPCs per second. To help developers manage their RPCs, Google created gRPC.

gRPC supports multiple languages, including Python, C/C++, PHP, Java, Ruby and, of course, Node.js, and it uses Protocol Buffers v3 to encapsulate data sent from microservice to microservice. Protocol Buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data. In many ways it is similar to XML, but smaller, faster, and simpler, according to Google. “Protocol Buffers” is technically an interface definition language (IDL) that allows you to define your data once and generate interfaces for any language. It implements a data model for structured requests and responses, and your data can be encoded into a wire format, a compact binary representation for quick network transmission.
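A Protocol Buffers definition of the kind gRPC consumes looks like the following minimal sketch; the message and service names here are invented for illustration, not taken from the talk:

```proto
syntax = "proto3";

// Hypothetical service definition; all names are illustrative.
message GreetRequest {
  string name = 1;
}

message GreetReply {
  string message = 1;
}

service Greeter {
  // A simple unary RPC; gRPC also supports client-streaming,
  // server-streaming, and bidirectional-streaming methods.
  rpc Greet (GreetRequest) returns (GreetReply);
}
```

Running a definition like this through `protoc` with the gRPC plugin generates typed client stubs and server skeletons for each supported language, which is what makes polyglot microservice development practical.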

gRPC also uses HTTP/2, which is much faster than HTTP/1.1. HTTP/2 supports multiplexing, opening a single TCP connection and sending all requests over it; HTTP/1.1 opens a new connection for every request, which adds a lot of overhead. HTTP/2 also supports bidirectional streaming, which means you do not have to resort to polling, sockets, or Server-Sent Events, because it lets you stream in both directions over that same single TCP connection easily. Finally, HTTP/2 supports flow control, allowing you to handle congestion issues on your network should they occur.

Together, Kubernetes and gRPC provide a comprehensive solution to the complexities involved in deploying a massive number of microservices to a cluster.

Watch the complete presentation below:

https://www.youtube.com/watch?v=xsIwYL-N4vI&list=PLfMzBWSH11xYaaHMalNKqcEurBH8LstB8

If you’re interested in speaking at or attending Node.js Interactive North America 2017 – happening October 4-6 in Vancouver, Canada – please subscribe to the Node.js community newsletter to keep abreast of dates and deadlines.

Why Choose Kubernetes to Manage Containerized Applications?

We’re learning about Kubernetes in this series, and why it is a good choice for managing your containerized applications. In part 1, we talked about what Kubernetes does, and its architecture. Now we’ll compare Kubernetes to competing container managers.

One Key Piece of the Puzzle

As we discussed in part 1, managing containers at scale and running a distributed application requires a large, complex infrastructure. You need a continuous integration pipeline and a cluster of physical servers. You need automated systems management for testing and verifying container images, launching and managing containers, performing rolling updates and rollbacks, and network self-discovery, plus mechanisms to manage persistent services in an ephemeral environment.

Figure 1: Kubernetes manages several important tasks.

Kubernetes is just one piece of this puzzle. But it is a very important piece that manages several important tasks (Figure 1). It tracks the state of the cluster, creates and manages networking rules, controls which nodes your containers run on, and monitors the containers. It is an API server, a scheduler, and a controller. That is why it is called “Production-Grade Container Orchestration”: Kubernetes is like the conductor of a manic orchestra, with a large cast of players that constantly come and go.

Other Solutions

Kubernetes is a mature and feature-rich solution for managing containerized applications. It is not the only container orchestrator, and there are four others that you might be familiar with.

Docker Swarm is the Docker Inc. solution, based on SwarmKit and embedded with the Docker Engine.

Apache Mesos is a datacenter scheduler, which runs containers through the use of frameworks such as Marathon.

Nomad from HashiCorp, the makers of Vagrant and Consul, schedules tasks defined in Jobs. It includes a Docker driver for defining a running container as a task.

Rancher is a container orchestrator-agnostic system that provides a single interface for managing applications. It supports Mesos, Swarm, Kubernetes, and its native system, Cattle.

Similarities with Mesos

At a high level, there is little that separates Kubernetes from other clustering systems: a central manager exposes an API, a scheduler places the workloads on a set of nodes, and the state of the cluster is stored in a persistent layer.

For example, if you compare Kubernetes with Mesos, you will see a lot of similarities. In Kubernetes, however, the persistence layer is implemented with etcd, whereas Mesos uses ZooKeeper.

You could also consider systems like OpenStack and CloudStack. Think about what runs on their head node, and what runs on their worker nodes. How do they keep state? How do they handle networking? If you are familiar with those systems, Kubernetes will not seem that different. What really sets Kubernetes apart is its fault-tolerance, self-discovery, and scaling, and it is purely API-driven.

In our next blog, we’ll learn how Google’s Borg inspired the modern datacenter and trace Kubernetes’ beginnings in Borg.

Download the sample chapter now.

Kubernetes Fundamentals

Linux Kernel Holds Key for Advanced Container Networking

Networking has always been one of the most persistent headaches when working with containers. Even Kubernetes—fast becoming the technology of choice for container orchestration—has limitations in how it implements networking. Tricky stuff like network security is, well, even trickier.

Now an open source project named Cilium, which is partly sponsored by Google, is attempting to provide a new networking methodology for containers based on technology used in the Linux kernel. Its goal is to give containers better network security and a simpler model for networking.

Read more at InfoWorld

Attack of the Killer Microseconds

The computer systems we use today make it easy for programmers to mitigate event latencies in the nanosecond and millisecond time scales (such as DRAM accesses at tens or hundreds of nanoseconds and disk I/Os at a few milliseconds) but significantly lack support for microsecond (μs)-scale events. This oversight is quickly becoming a serious problem for programming warehouse-scale computers, where efficient handling of microsecond-scale events is becoming paramount for a new breed of low-latency I/O devices ranging from datacenter networking to emerging memories (see the first sidebar “Is the Microsecond Getting Enough Respect?”).

Read more at ACM

Get Certified on 2017’s Hottest Tech Skills!

Certifications are an important way to showcase your tech skills to your employer. IT certifications are in high demand due to advances in technologies such as cloud computing and Big Data. At some companies, being certified gives you an edge over other candidates, and in some cases you may be offered slightly higher pay.

  • PL/SQL Developer

SQL is used across all databases and is necessary in virtually every project that needs to store data. PL/SQL is a specialty programming language used for coding inside the Oracle database. It fills that role supremely well, but the skill is only useful for a small subset of tasks (database logic) and only at organizations that need, and can afford, Oracle databases. The daily responsibilities of an SQL or PL/SQL developer may include writing code, queries, and functions to manipulate the data and structure of a database using the Structured Query Language.

The advantages of becoming an Oracle Certified PL/SQL Developer include:

  • Better pay and billing rates;

  • Greater recognition in the market;

  • More opportunities if you are interested in working as a freelancer;

  • Preference from customers and clients who want certified candidates for their projects.

An SQL developer may also be responsible for querying a database to generate reports or aggregate data for use by others. PL/SQL developers work specifically with Oracle databases, while other SQL developers may work with Microsoft SQL Server, MySQL, or one of many other database systems. Required skills include SQL programming and strong logical and analytical ability.
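To illustrate the kind of reporting query described above, here is a minimal sketch using Python’s standard-library sqlite3 module as a stand-in for an Oracle database; the table and data are invented, and PL/SQL itself (packages, stored procedures) runs only inside Oracle:

```python
import sqlite3

# Illustrative only: sqlite3 stands in for an Oracle database here,
# and the "orders" table and its rows are invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)],
)

# The aggregate-and-report pattern that many SQL developer tasks boil down to.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 200.0), ('EMEA', 200.0)]
```

The SQL itself is portable; what a PL/SQL certification adds on top is Oracle-specific procedural logic around queries like this one.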

  • Cloud/SaaS

Cloud computing is poised to change the way people do business and communicate over the Internet. It is considered to be the next big revolution in the IT industry, so it goes without saying that if you are a certified and trained cloud computing professional, you will be in great demand. You will also get to learn a lot. Here’s an overview of the top categories of cloud computing certifications:

Cloud Administrator Certifications — Top-notch credentials oriented to the everyday operations, troubleshooting and configuration of cloud technologies.

Cloud Developer Certifications — A handful of credentials of specific value to IT experts seeking to ply their trade in the cloud.

Cloud Architect Certifications — This certification is best for those of you with design skills and goals, whether for developing enterprise-level private clouds from the infrastructure or developing cloud storage solutions.

The certifications on these lists cover private and public cloud technologies, a broad expanse that includes:

  • SaaS – Software as a Service

  • PaaS – Platform as a Service

  • IaaS – Infrastructure as a Service

  • Oracle Enterprise Resource Planning Cloud

The Certificate in Enterprise Resource Planning (ERP) with Oracle will instruct you in Oracle Enterprise Resource Planning software, an integrated multi-module application that supports business processes. Oracle is one of the top ERP vendors, and the skills gained will make you more valuable in the current marketplace. Oracle certification is also valuable to hiring managers who want to distinguish certified candidates for critical IT positions.

This program enables students to become skilled in Oracle Supply Chain and prepares them for the Oracle Supply Chain Certified Professional Consultant examination. Students who complete this certification will have the ability to implement and support eBusiness Supply Chain applications.

  • Web development

Java is a widely used computer programming language. Web developers use it to create applications found across the Internet on multiple platforms, such as PCs and smartphones. Web developers design and create websites, maintain client websites, troubleshoot problems, and write code for Java-enabled websites. Oracle divides its certifications into several levels.

Oracle Certified Associate (OCA) is the entry-level exam in the Oracle certification path. The OCA exam tests the fundamentals and basics of the technology. Candidates who achieve this certification without any work experience are expected to have shown knowledge of the fundamentals and to perform satisfactorily under supervision.

Oracle Certified Professional (OCP) is the second level in the Oracle certification path. This examination tests your in-depth knowledge of the technology. But it is still not the final level: Oracle Certified Master (OCM) and Oracle Certified Expert (OCE) are the more advanced levels of Oracle certification.

  • Big Data

The field of big data, analytics, and business intelligence is extremely popular, and the number of certifications is ticking up accordingly. IT experts with big data and related certifications are growing in demand. Big data system administrators manage, store, and transfer large sets of data, making them accessible for analysis. Data analytics is the practice of examining raw data to draw conclusions and recognize patterns.

Koenig Solutions has innovative solutions for the training market. It teaches all the above subjects in great detail and makes students ready for the industry.

Microservices With Continuous Delivery Using Docker and Jenkins

Docker, microservices, and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automating the testing, building, and deployment process seems particularly important. Docker is an excellent solution for microservices because it can create and run isolated containers for each service.

Today, I’m going to show you how to create a basic Continuous Delivery pipeline for sample microservices using a popular software automation tool: Jenkins.
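A pipeline of the kind described can be sketched as a declarative Jenkinsfile; the stage layout, image name, registry, and test command below are illustrative assumptions, not details from the article:

```groovy
// A minimal declarative Jenkinsfile sketch. The registry host,
// image name, and test script are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t registry.example.com/my-service:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the service's test suite inside the freshly built image.
                sh 'docker run --rm registry.example.com/my-service:${BUILD_NUMBER} ./run-tests.sh'
            }
        }
        stage('Push') {
            steps {
                sh 'docker push registry.example.com/my-service:${BUILD_NUMBER}'
            }
        }
    }
}
```

Tagging each image with the Jenkins build number keeps every pipeline run traceable to a deployable artifact, which is the backbone of a Continuous Delivery workflow.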

Read more at DZone

CNCF Accepts Both Docker’s containerd and CoreOS’ rkt as Incubation Projects

In a unanimous voting process that concluded Wednesday during KubeCon in Berlin, The Cloud Native Computing Foundation’s Technical Oversight Committee approved Docker Inc.’s motion to donate containerd — the current incarnation of its core container runtime — as an official CNCF incubation project. In the same meeting, the TOC also voted unanimously to adopt CoreOS’ rkt container runtime.

“Container orchestrators need community-driven container runtimes,” reads a formal statement from CNCF Executive Director Dan Kohn Wednesday, “and we are excited to have containerd which is used today by everyone running Docker.  Becoming a part of CNCF unlocks new opportunities for broader collaboration within the ecosystem.”

Read more at The New Stack