One of the most exciting parts of being in this industry over the past couple of decades has been witnessing the transformative impact that open source software has had on IT in general and specifically on networking. Contributions to various open source projects have fundamentally helped bring the reliability and economics of web-scale IT to organizations of all sizes. I am happy to report the community has taken yet another step forward with FRRouting.
FRRouting (FRR) is an IP routing protocol suite for Unix and Linux platforms that includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP, and the community is working to make it the best routing protocol stack available.
FRR is rooted in the Quagga project and includes the fundamentals that made Quagga so popular as well as a ton of recent enhancements that greatly improve on that foundation.
Here’s a bird’s eye view of some things the team has been busy working on:
32-bit route tags were added to BGP and OSPFv2/v3, improving route policy maintenance and increasing interoperability in multivendor environments;
Update-groups and nexthop tracking enable BGP to scale to ever-larger environments;
BGP add-path provides users with the ability to advertise service reachability in richly connected networks;
The addition of RFC 5549 to BGP provides IPv4 connectivity over native IPv6 infrastructure, enabling customers to build IPv6-centric networks;
Virtual routing and forwarding (VRF) enables BGP users to operate isolated routing domains such as those used by web application infrastructures, hosting providers, and Internet Service Providers;
EVPN Type 5 routes allow customers with Layer 2 data centers to exchange subnet information using BGP EVPN;
PIM-SM and MSDP enable enterprise applications that rely on IP multicast to use FRR;
Static LSPs along with LDP enable architects to use MPLS to engineer network data flow;
An overhaul of the CLI infrastructure and new unit test infrastructure improve the ongoing development and quality of FRR;
Enabling IETF NVO3 network virtualization control allows users to build standards-based, interoperable network virtualization overlays.
The protocol additions above are augmented by SnapCraft packaging and support for JSON outputs, both of which improve the operationalization of FRR.
Pretty cool stuff, huh? The contributors designed FRR to streamline the routing protocol stack and to make engineers’ lives that much easier. Businesses can use FRR for connecting hosts, virtual machines, and containers to the network; advertising network service endpoints; network switching and routing; and Internet access/peering routers.
Contributors from 6WIND, Architecture Technology Corporation, Big Switch Networks, Cumulus Networks, LabN Consulting, NetDEF (OpenSourceRouting), Orange, Volta Networks, and other companies have been working on integrating their advancements and want to invite you to participate in the FRRouting community to help shape the future of networking.
Although it is true that microservices follow the UNIX philosophy of writing short compact programs that do one thing and do it well, and that they bring a lot of advantages to a framework (e.g., continuous deployment, decentralization, scalability, polyglot development, maintainability, robustness, security, etc.), getting thousands of microservices up and running on a cluster and correctly communicating with each other and the outside world is challenging. In this talk from Node.js Interactive, Sandeep Dinesh — a Developer Advocate at Google Cloud — describes how you can successfully deploy microservices to a cluster using technologies that Google developed: Kubernetes and gRPC.
To address the issues mentioned above, Google first developed Borg and Stubby. Borg was Google’s internal cluster manager and scheduler. When Google decided to use containers 10 years ago, this was a new field, so the company wrote its own tooling. Borg ended up scheduling every single application at Google, from small side projects to Google Search. Stubby, Google’s RPC framework, was used for communication between different services.
However, instead of putting Borg and Stubby on GitHub as open source projects, Google chose to write new frameworks from scratch in the open, with the open source community. The reason for this is that both Borg and Stubby were terribly written, according to Dinesh, and they were so tied to Google’s internal infrastructure as to be unusable by the outside world.
That is how Kubernetes, the successor of Borg, and gRPC, a saner incarnation of Stubby, came to be.
Kubernetes
A common scenario while developing a microservice is to have your Docker container running your code on your local machine. Everything is fine until it is time to put it into production and you want to deploy your service on a cluster. That’s when complications arise: You have to ssh into a machine, run Docker, keep it up with nohup, etc., all of which is complicated and error-prone. The only thing you gain, according to Dinesh, is that you have made your development a little bit easier.
Kubernetes offers a solution in that it manages and orchestrates the containers on the cluster for you. You do not have to deal with machines anymore. Instead you interact with the cluster and the Kubernetes API.
It works like this: You dockerize your app and pass it on to Kubernetes in what’s called a Replication Controller. You tell Kubernetes that you need, say, four instances of your dockerized app running at the same time, and Kubernetes manages everything automatically. You don’t have to worry about which machines your apps run on. If one instance of your microservice crashes, Kubernetes will spin it back up. If a node in the cluster goes offline, Kubernetes automatically distributes the work to other nodes.
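To make that concrete, here is a rough sketch of what the interaction looks like from the command line (the app name, manifest file, and replica counts are hypothetical):
kubectl create -f my-app-rc.yaml        # replication controller requesting 4 replicas of the dockerized app
kubectl get pods                        # Kubernetes has scheduled 4 instances somewhere on the cluster
kubectl scale rc my-app --replicas=8    # scale up without touching a single machine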
With random pods and containers spinning up on random computers, you need a layer on top that can route traffic to the correct Docker container on the correct machine. That is where Kubernetes’ services come into play. A Kubernetes service has a static IP address and a DNS host name that will route to a dynamic number of containers running on the system. It doesn’t matter if you are sending traffic to one app or a thousand — everything goes through your one service that distributes it to the containers in your cluster transparently.
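As a hedged sketch (again with hypothetical names and ports), putting a service in front of those containers can be as simple as:
kubectl expose rc my-app --name=my-app-svc --port=80 --target-port=8080   # stable IP and DNS name in front of the pods
kubectl get svc my-app-svc                                                # shows the address clients actually talk to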
All of that taken together (the dockerized app embedded in its replication controller, along with its service) is what makes up one microservice in Kubernetes. Obviously, you can run multiple microservices on one cluster, you can scale certain microservices up or down independently, or you can roll out a new version of a microservice, and, again, it will not affect other microservices.
gRPC
When you have multiple microservices running, communication between them becomes the most important part of your framework. According to Martin Fowler, the biggest issue in changing a monolith into microservices lies in changing the communication pattern.
Communication between microservices is done with Remote Procedure Calls (RPCs), and Google handles on the order of 10^10 RPCs per second. To help developers manage their RPCs, Google created gRPC.
gRPC supports multiple languages, including Python, C/C++, PHP, Java, Ruby and, of course, Node.js; and uses Protocol Buffers v3 to encapsulate data sent from microservice to microservice. Protocol Buffers are Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data. In many ways it is similar to XML, but smaller, faster, and simpler according to Google. “Protocol Buffers” is technically an interface definition language (or IDL) that allows you to define your data once and generate interfaces for any language. It implements a data model for structured request and response, and your data can be compressed into a wire format, a binary format for quick network transmission.
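As a minimal illustration (the file, service, and message names below are invented, and this assumes protoc is installed), you define your data once in a .proto file and then generate bindings for whichever languages your microservices use:
cat > greeter.proto <<'EOF'
syntax = "proto3";
// Hypothetical service: one RPC defined once, usable from any supported language.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}
message HelloRequest { string name = 1; }
message HelloReply   { string message = 1; }
EOF
protoc --python_out=. greeter.proto   # generates the message classes; gRPC plugins additionally generate the service stubs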
gRPC also uses HTTP/2, which is much faster than HTTP/1.1. HTTP/2 supports multiplexing, opening a single TCP connection and sending everything over it. HTTP/1.1 opens a new connection every single time it has to send a request, which adds a lot of overhead. HTTP/2 also supports bidirectional streaming, which means you do not have to do polling, sockets, or Server-Sent Events, because it allows you to do bidirectional streaming on the same single TCP connection easily. Finally, HTTP/2 supports flow control, allowing you to solve congestion issues on your network should they occur.
Together, Kubernetes and gRPC provide a comprehensive solution to the complexities involved in deploying a massive number of microservices to a cluster.
If you’re interested in speaking at or attending Node.js Interactive North America 2017 – happening October 4-6 in Vancouver, Canada – please subscribe to the Node.js community newsletter to keep abreast of dates and deadlines.
We’re learning about Kubernetes in this series, and why it is a good choice for managing your containerized applications. In part 1, we talked about what Kubernetes does, and its architecture. Now we’ll compare Kubernetes to competing container managers.
One Key Piece of the Puzzle
As we discussed in part 1, managing containers at scale and building a distributed applications infrastructure requires building a large complex infrastructure. You need a continuous integration pipeline and a cluster of physical servers. You need automated systems management for testing and verifying container images, launching and managing containers, performing rolling updates and rollbacks, network self-discovery, and mechanisms to manage persistent services in an ephemeral environment.
Figure 1: Kubernetes manages several important tasks.
Kubernetes is just one piece of this puzzle. But it is a very important piece that manages several important tasks (Figure 1). It tracks the state of the cluster, creates and manages networking rules, controls which nodes your containers run on, and monitors the containers. It is an API server, scheduler, and controller. That is why it is called “Production-Grade Container Orchestration,” because Kubernetes is like the conductor of a manic orchestra, with a large cast of players that constantly come and go.
Other Solutions
Kubernetes is a mature and feature-rich solution for managing containerized applications. It is not the only container orchestrator, and there are four others that you might be familiar with.
Docker Swarm is the Docker Inc. solution, based on SwarmKit and embedded with the Docker Engine.
Apache Mesos is a datacenter scheduler, which runs containers through the use of frameworks such as Marathon.
Nomad from HashiCorp, the makers of Vagrant and Consul, schedules tasks defined in Jobs. It includes a Docker driver for defining a running container as a task.
Rancher is a container orchestrator-agnostic system that provides a single interface for managing applications. It supports Mesos, Swarm, Kubernetes, and its native system, Cattle.
Similarities with Mesos
At a high level, there is nothing different between Kubernetes and other clustering systems. A central manager exposes an API, a scheduler places the workloads on a set of nodes, and the state of the cluster is stored in a persistent layer.
For example, if you compare Kubernetes with Mesos, you will see a lot of similarities. In Kubernetes, however, the persistence layer is implemented with etcd, whereas Mesos uses ZooKeeper.
You could also consider systems like OpenStack and CloudStack. Think about what runs on their head node, and what runs on their worker nodes. How do they keep state? How do they handle networking? If you are familiar with those systems, Kubernetes will not seem that different. What really sets Kubernetes apart is its fault-tolerance, self-discovery, and scaling, and it is purely API-driven.
In our next blog, we’ll learn how Google’s Borg inspired the modern datacenter, and we’ll look at Kubernetes’ beginnings as Google Borg.
Networking has always been one of the most persistent headaches when working with containers. Even Kubernetes—fast becoming the technology of choice for container orchestration—has limitations in how it implements networking. Tricky stuff like network security is, well, even trickier.
Now an open source project named Cilium, which is partly sponsored by Google, is attempting to provide a new networking methodology for containers based on technology used in the Linux kernel. Its goal is to give containers better network security and a simpler model for networking.
The computer systems we use today make it easy for programmers to mitigate event latencies in the nanosecond and millisecond time scales (such as DRAM accesses at tens or hundreds of nanoseconds and disk I/Os at a few milliseconds) but significantly lack support for microsecond (μs)-scale events. This oversight is quickly becoming a serious problem for programming warehouse-scale computers, where efficient handling of microsecond-scale events is becoming paramount for a new breed of low-latency I/O devices ranging from datacenter networking to emerging memories (see the first sidebar “Is the Microsecond Getting Enough Respect?”).
Certifications are important for showcasing your tech skills to your employer. IT certifications are in high demand due to advancements in technologies like cloud computing, Big Data, etc. There are companies where being certified gives you an edge over other candidates. And in some cases, you may be offered slightly higher pay.
PL/SQL Developer
SQL is used across all databases and is useful and necessary in every project that needs to store data (i.e., 99% of all projects). PL/SQL is a specialty programming language used for coding inside the Oracle database. It fills that role supremely well, but its skills are only useful for a small subset of tasks (database logic) and only with a few organizations (large organizations that need and can afford expensive Oracle databases). The daily responsibilities of an SQL or PL/SQL developer may include writing code, queries, and functions to manipulate the data and structure of a database using the Structured Query Language.
Following are the advantages of being an Oracle Certified PL/SQL Developer:
You will get better pay and billing rates.
You will gain better recognition in the market.
You can get more opportunities if you are interested in working as a freelancer.
Some customers/clients prefer certified candidates for their projects.
An SQL developer may also be responsible for querying a database to generate reports or aggregate data for use by others. PL/SQL developers work specifically with Oracle databases, while other SQL developers may work with Microsoft SQL Server, MySQL, or one of many other database systems. Required tech skills include knowledge of SQL programming and strong logical and analytical ability.
Cloud/SaaS
Cloud computing is poised to change the way people do business and communicate over the Internet. It is considered to be the next big revolution in the IT industry. So, it goes without saying that if you are a certified and trained cloud computing professional, you will be in great demand. You will also get to learn a lot. Here’s a list of the top cloud computing certification categories:
Cloud Administrator Certifications — Top-notch credentials oriented to the everyday operations, troubleshooting and configuration of cloud technologies.
Cloud Developer Certifications — A handful of credentials of specific value to IT experts seeking to ply their trade in the cloud.
Cloud Architect Certifications — This certification is best for those of you with design skills and goals, whether for developing enterprise-level private clouds from the infrastructure or developing cloud storage solutions.
The certifications on these lists cover private and public cloud technologies, a broad expanse that includes:
SaaS – Software as a Service
PaaS – Platform as a Service
IaaS – Infrastructure as a Service
Oracle Enterprise Resource Planning Cloud
The Certificate in Enterprise Resource Planning (ERP) with Oracle will instruct you in Oracle Enterprise Resource Planning software, an integrated multi-module application that supports business processes. Oracle is one of the top ERP vendors, and the skills gained will make you more valuable in the current marketplace. Oracle certification is also valuable to hiring managers who want to distinguish certified candidates for critical IT expert positions.
This program enables students to become skilled in Oracle Supply Chain and sets them up for the Oracle Supply Chain Certified Professional Consultant examination. Students who complete this certification will have the ability to implement and support eBusiness Supply Chain applications.
Web development
Java is a widely used computer programming language. Web developers use it to create applications found across the Internet via multiple platforms, for example PCs and smartphones. Web developers design and create websites, maintain client websites, troubleshoot problems, and write code for Java-enabled websites. Oracle has certifications divided into two different levels.
Oracle Certified Associate (OCA) is the entry-level exam in the Oracle certification path. The OCA exam tests the fundamentals of the technology. Candidates who achieve this certification without any work experience are expected to have shown knowledge of the fundamentals and to perform satisfactorily under supervision.
Oracle Certified Professional (OCP) is the second level in the Oracle certification path. This examination tests your in-depth knowledge of the technology. But it is still not the final level: Oracle Certified Master (OCM) and Oracle Certified Expert (OCE) are the more advanced levels of Oracle certification.
Big Data
The field of big data, analytics and business intelligence is extremely popular, and the number of certifications is ticking up accordingly. Furthermore, IT experts with big data and related certifications are growing in demand. Big data system administrators manage, store and transfer large sets of data, making it accessible for analysis. Data analytics is the practice of examining the raw data to draw conclusions and recognize patterns.
Koenig Solutions has innovative solutions for the training market. It teaches all the above subjects in great detail and makes students ready for the industry.
Docker, microservices, and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automating the testing, building, and deployment process is particularly important. Docker is an excellent fit for microservices because it can create and run isolated containers for each service.
Today, I’m going to show you how to create a basic Continuous Delivery pipeline for a sample microservice using a popular software automation tool: Jenkins.
“Container orchestrators need community-driven container runtimes,” reads a formal statement from CNCF Executive Director Dan Kohn Wednesday, “and we are excited to have containerd which is used today by everyone running Docker. Becoming a part of CNCF unlocks new opportunities for broader collaboration within the ecosystem.”
This week in open source and Linux news, Cloud Foundry releases its new certification program for developers, Google creates a new home-base for its open source initiatives, and more! Read on to stay in the open source loop!
1) Cloud Foundry launches “the world’s largest cloud-native developer certification initiative.”
3) Hyperledger Executive Director Brian Behlendorf talks about the “possibilities blockchain offers for transparent, efficient and quickly executed transactions” in this interview.
Most modern Linux distributions enjoy standard repositories that include most of the software you’ll need to successfully run your Linux server or desktop. Should a package come up missing, more than likely you’ll find a repository you can add, so that the installation can be managed with the built-in package manager. This should be considered a best practice. Why? Because it’s important for the integrity of the platform to ensure the package manager is aware of installed software. When that is the case, packages can easily be updated (to fix vulnerabilities and the like). Another reason to install from repositories is that dependencies are easily met. When installing from source, you can sometimes find yourself trapped in a convoluted dependency nightmare.
Fortunately, repositories have become so inclusive that it is rare you will ever need to install a package by any other means. However, you may find, on occasion, a reason to install from source. Reasons could include:
A package that is not found in any repository
A package developed in-house
You need to install a package with custom dependencies or options
When you do have to install from source, there are certain things you will need to know. Let’s walk through the process of installing Audacity from source on Ubuntu 16.10 (with the help of build-dep). Although this can be easily installed from repositories, it serves as a fine illustration for installing from source.
First things first
Installing from source used to be very common and also quite simple. You would download the source file, unpack it (with either zip or tar), change into the newly created directory, and then issue the commands:
./configure
make
make install
That still works for applications built with autoconf/automake. As long as your distribution met the necessary dependencies (which were almost always outlined in a README file within the source), the package would install and could be used. Although some source installations are still that simple, things are now a bit more complicated.
Another glitch in the modern system is that Ubuntu doesn’t ship with all the necessary tools to build from source. To solve that issue, you must first install autoconf with the command:
sudo apt-get install autoconf
Depending upon which version of Ubuntu you installed, you may also have to install the build-essential package (which includes the gcc/g++ compilers and libraries as well as a few other necessary utilities). It can be installed with the command:
sudo apt-get install build-essential
The build-dep functionality itself is built into apt-get, as you’ll see below.
For the likes of Fedora, a similar installation would be:
sudo yum install yum-utils
The above command installs the yum-utils package, which provides the yum-builddep tool.
Installing from source with build-dep
One way to install from source, but avoid the dependency nightmare, is to first work with the build-dep tool. Say you want to install Audacity using build-dep; the first thing you must do is uncomment the deb-src listings in /etc/apt/sources.list. Open that file in your favorite editor and uncomment the two deb-src listings by removing the leading # characters (Figure 1); a one-liner that does the same is shown after the figure.
Figure 1: Configuring apt so it can use build-dep.
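If you would rather not open an editor, a sed one-liner should accomplish the same thing (a sketch that assumes the deb-src entries are commented out with a leading “# ”, as on a stock Ubuntu install):
sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list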
Save and close that file. Now run sudo apt-get update to update apt. Once that is done, you’re ready to build Audacity from source. Here’s what you must do. The first step is to use apt to install the necessary dependencies for Audacity. This is taken care of with the command:
sudo apt-get build-dep audacity
Allow that command to finish. The next step is to download the source package with the command:
sudo apt-get source audacity
In your current working directory, you should see a new directory called audacity-XXX (where XXX is the release number). Change into that directory. At this point, you can now issue the old tried and true:
./configure
make
sudo make install
Audacity should now be installed and ready to use.
If the installation fails, you might have to revert to using the dpkg tool like so:
sudo dpkg-buildpackage -b -uc -us
The options above are as follows:
-b – build binary
-uc – do not sign the .changes file
-us – do not sign the source package
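If the build succeeds, dpkg-buildpackage writes the resulting .deb files to the parent directory; the exact file names will vary by version, but installing them is then just a matter of:
sudo dpkg -i ../audacity*.deb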
Why might a source package fail to install? Beyond not having all of the necessary dependencies, the answer very well might lie in the ./configure command.
The magic of configure
That configure command does have some magic hidden within. Most often you can run the ./configure command without any arguments. However, there are times you might want (or be required) to issue the command such that it configures the software to meet certain needs. Fortunately, the configure script itself can help us here. If you issue the command ./configure --help (from within the application source directory you’ve downloaded), you will be presented with a list of configuration options that can be used (Figure 2), specific to that package.
Figure 2: Options available for the source installation of Audacity.
These options can sometimes mean the difference between an application installing or not. Every application you attempt to install will display different options for the ./configure command, so make sure to issue ./configure --help before issuing ./configure. Possible configuration options include:
--prefix=PREFIX (install architecture-independent files in a non-standard location, such as --prefix=/opt)
--build=BUILD (configure for a specific system architecture)
--host=HOST (the architecture of the system you want the file to run on, so you can compile the application on one machine and run it on another)
--disable-FEATURE (this allows you to disable specific features of an application)
--enable-FEATURE (this allows you to enable specific features of an application)
--with-PACKAGE=yes (use a specific PACKAGE)
--without-PACKAGE (do not use a specific PACKAGE)
As I mentioned, every piece of software to be installed will offer different configuration options. Once you’ve decided on your options, you would then run the ./configure command (with all options). Once the configure script completes, follow it up with make and then make install to complete the installation.
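For example, a hedged sketch of installing Audacity into a non-standard location (the prefix shown here is purely illustrative) would look like:
./configure --prefix=/opt/audacity
make
sudo make install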
Using Git
Let’s walk through another example, this time with the help of Git. As Ubuntu doesn’t ship with Git installed, we’ll first have to install it with the command:
sudo apt-get install git
Once this is installed, let’s pull down the source for the Clementine audio player with the command:
git clone https://github.com/clementine-player/Clementine.git
With the source downloaded, change into the newly created directory with the command cd Clementine. At this point, run the following commands to build the player from source:
cd bin
cmake ..
make -j8
sudo make install
That’s it. You should now have a working install of Clementine (so long as you’ve met the necessary dependencies). If the installation complains about dependencies, you can scan back through the output to find out what all needs to be installed. In the case of Clementine, you could always pick up the dependencies with the command:
sudo apt-get build-dep clementine
And there you go
That, my friends, is your introduction to installing from source. You should now have not only a better understanding of how such an installation is handled, but also why so many opt to bypass installing from source and go straight to their distribution’s package manager. Dependency nightmares and a lack of consistency in steps help to make the likes of apt, dpkg, yum, zypper, and dnf all the more appealing.
Yes, installing from source offers far more flexibility, but that flexibility comes at the price of simplicity.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.