
Kubernetes Graduates to Full-Fledged, Open-Source Program

At the Open Source Leadership Summit (OSLS), the Cloud Native Computing Foundation (CNCF), which sustains and integrates open-source, cloud native technologies such as Prometheus and containerd, and Chen Goldberg, Google Cloud’s director of engineering, announced that Kubernetes is the first project to graduate from the CNCF.

In her OSLS keynote speech, Goldberg explained there were numerous reasons Kubernetes had become successful. One of the most important, she said, was this: “For Google, open-source software is part of the strategy, it’s not a side-gig. From the start Kubernetes upstream was also investing in Google Kubernetes Engine (GKE) and vice-versa. It’s about helping users.” And, “Community before product, community before company. And value diversity in that community,” Goldberg added.

Read more at ZDNet

Register Now to Save $600 for Open Networking Summit — Price Increases Sunday, March 11

ONS is the epicenter of idea exchange, decision making and project mapping across the open networking ecosystem. Attend this year, and join 2,000 architects, developers, and thought leaders to pave the future of networking integration, acceleration and deployment.


You have 3 days left to save $605 on registration. Register by end of day on Saturday, March 10!


Read more at The Linux Foundation

Cilium 1.0.0-rc4 Released: Transparently Secure Container Network Connectivity Utilising Linux BPF

Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. Cilium 1.0.0-rc4 has recently been released; it includes the Cloud Native Computing Foundation (CNCF)-hosted Envoy configured as the default HTTP/gRPC proxy, the addition of a simple health overview for connectivity and other errors, and an improved, scalable kvstore interaction layer.

Microservices applications tend to be highly dynamic, and this presents both a challenge and an opportunity in terms of securing connectivity between microservices. Modern approaches to overcoming this issue have coalesced around the CNCF-hosted Container Network Interface (CNI) and the increasingly popular “service mesh” technologies, such as Istio and Conduit. According to the Cilium documentation, traditional Linux network security approaches (such as iptables) filter on IP address and TCP/UDP ports. However, the highly volatile life cycle of containers and their IP addresses causes these approaches to struggle to scale alongside the application, because the large number of load-balancing tables and access-control lists must be updated continually.
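To make the contrast concrete, here is the kind of rule a traditional IP/port-based firewall relies on (the address 10.0.0.5 and port 8080 below are purely illustrative); every time a container is rescheduled and receives a new IP, rules like this have to be rewritten:

iptables -A INPUT -p tcp -s 10.0.0.5 --dport 8080 -j ACCEPT

Cilium instead attaches BPF programs that enforce policy based on service identity derived from labels rather than on addresses, so policies survive IP churn.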

Read more at InfoQ

Automated Provisioning in Kubernetes

When deploying applications in a Kubernetes cluster, certain types of services are commonly required. Many applications require a database, a storage service, a message broker, identity management, and so on. You have enough work on your hands containerizing your own application. Wouldn’t it be handy if those other services were ready and available for use inside the cluster?

The Service Catalog

Don’t get stuck deploying and managing those other services yourself; let the Service Catalog do it for you. The Kubernetes Service Catalog README states:

“The end-goal of the service-catalog project is to provide a way for Kubernetes users to consume services from brokers and easily configure their applications to use those services, without needing detailed knowledge about how those services are created or managed.”
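As a rough sketch of how this looks from the command line (assuming the Service Catalog API server and at least one broker are already installed in your cluster), the catalog exposes its own resource types that you can inspect with kubectl:

kubectl get clusterservicebrokers   # brokers registered with the cluster
kubectl get clusterserviceclasses   # services those brokers offer
kubectl get serviceinstances        # instances provisioned in the current namespace
kubectl get servicebindings         # bindings that expose credentials to your applications

Provisioning a new service instance and binding it to your application is then a matter of creating ServiceInstance and ServiceBinding resources, as described in the project documentation.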

Read more at OpenSource.com

Tutorial: Tracing Python Flask requests with OpenTracing

A transaction trace is a GPS system for web performance: it paints a rich picture of the flow of a web request through your code.

So, why doesn’t everybody trace? I believe there are two reasons:

  1. Complex instrumentation: Adding in-app tracing instrumentation is more involved than calling logger.info() for logging or statsD_client.incr() for metrics.
  2. Vendor lock-in: You aren’t committing to a vendor when you log and record metrics: you can easily swap out different services to aggregate your logs and store your metrics. Even though APM vendor tracing libraries are remarkably similar, there hasn’t been a vendor-neutral standard for tracing. Adding complex, vendor-specific instrumentation can feel like a deeper commitment than one desires.

OpenTracing, a vendor-neutral tracing API

Enter OpenTracing, a vendor-neutral open standard for distributed tracing. OpenTracing loosens the chains on tracing instrumentation: if we trace our method calls via OpenTracing APIs, we can swap out our tracing vendors just like logging and metrics!

Read more at Scout

Submit a Proposal to Speak at OS Summit Japan and Automotive Linux Summit By March 18

Open Source Summit Japan and Automotive Linux Summit 2018 are once again co-located and will be held June 20-22 at the Tokyo Conference Center Ariake in Tokyo. Both events offer participants the opportunity to learn about the latest projects, technologies, and developments taking place across the open source ecosystem, and specifically in the Automotive Linux arena.

The deadline to submit a proposal is just 3 weeks away on Sunday, March 18, 2018. Don’t miss the opportunity to educate and influence hundreds of technologists and open source professionals by speaking at one of these events.

Read more at The Linux Foundation

Kubernetes Ingress: NodePort, Load Balancers, and Ingress Controllers

A fundamental requirement for a cloud application is some way to expose that application to your end users. This article will introduce the three general strategies in Kubernetes for exposing your application to your end users, and cover the various tradeoffs of each approach. I’ll then explore some of the more sophisticated requirements of an ingress strategy. Finally, I’ll give some guidelines on how to pick your Kubernetes ingress strategy.

Ingress in Kubernetes

In Kubernetes, there are three general approaches to exposing your application.

  • Using a Kubernetes service of type NodePort, which exposes the application on a port across each of your nodes (see the sketch after this list)
  • Using a Kubernetes service of type LoadBalancer, which creates an external load balancer that points to a Kubernetes service in your cluster
  • Using a Kubernetes Ingress resource
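As a minimal illustration of the first two approaches (the Deployment name my-app and port 80 are hypothetical), a Service of either type can be created imperatively with kubectl; an Ingress, by contrast, is defined as its own resource and also requires an ingress controller running in the cluster:

kubectl expose deployment my-app --type=NodePort --port=80
kubectl expose deployment my-app --type=LoadBalancer --port=80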

Read more on Medium

Most Useful Linux Commands You Can Run in Windows 10

In the previous articles of this series, we talked about getting started with WSL on Windows 10. In the last article of the series, we will talk about some of the widely used Linux commands on Windows 10.

Before we dive further into the topic, let’s make it clear who this is for. This article is meant for greenhorn developers who use Windows 10 machines but want to learn about Linux as it’s the dominant platform in the cloud, whether it be Azure, AWS, or private cloud. In a nutshell, it’s intended for Windows 10 users who are new to Linux.

Which commands you need will depend on your own workload; your mileage may vary from mine. The goal of the article is to get you comfortable with Linux in Windows 10. Also bear in mind that WSL doesn’t provide access to hardware components like sound cards or the GPU. Officially. But Linux users never take no for an answer. Many users have managed not only to gain access to sound cards and the GPU, but also to run desktop Linux apps on Windows. That’s not the scope of this article, though; we may talk about it at some point, but not today.

Here are a few tasks to get started.

How to keep your Linux system up to date

Since you are running Linux inside Windows, you are stripped of much of the security that a standalone Linux system offers. In addition, if you don’t keep your Linux system patched, you will expose your Windows machine to threats. Always keep your Linux machines up to date.

WSL officially supports openSUSE, SUSE Linux Enterprise, and Ubuntu. You can install other distributions as well, but I can get all of my work done with any of these, as all I need is access to some basic Linux utilities.

Update openSUSE Leap:

sudo zypper up

If you want a system upgrade, you can do that after running the above command:

sudo zypper dup

Update Ubuntu machine:

sudo apt-get update

sudo apt-get dist-upgrade

You are safe and secure. Since updates on Linux systems are incremental, I run system updates on a daily basis. It’s mostly a few KB or a few MB of updates without any downtime, unlike Windows 10 updates where you need to reboot your system.

Managing files and folders

Once your system is updated, we can look at some mundane, or not-so-mundane, tasks.

The second most important task is to manage your local and remote files using Linux. I must admit that as much as I prefer GUI apps, there are certain tasks where the terminal offers more value and reliability. Try moving 1TB of files using the Explorer app. Good luck. I always use the rsync command to transfer large batches of files. The good news is that with rsync, if you do stop it in the middle, you can resume from where you left off.

Although you can use the cp or mv commands to copy or move files, I prefer rsync, as it offers more flexibility than the others, and learning it will also help you transfer files between remote machines. There are three basic tasks that I mostly perform. In the commands below, -a preserves permissions and timestamps, -v prints what is being transferred, -z compresses data in transit, and -P shows progress and lets you resume interrupted transfers.

Copy entire directory using rsync:

rsync -avzP /source-directory /destination-directory

Move files using rsync:

rsync --remove-source-files -avzP /source-directory /destination-directory

This command will delete files from the source directory after successful copying to the destination directory.

Sync two directories:

I keep a copy of all of my files in more than one location. However, I continue to add and delete files from the primary location. It could become a challenge to keep all other locations synced without using some application dedicated to file sync; rsync simplifies the process. This is the command that you need to keep two directories synced. Keep in mind that it’s a one-way sync: from source to destination.

rsync --delete -avzP /source-directory /destination-directory

The above command deletes files in the destination folder if they are not found in the source folder. In other words, it creates a mirror copy of the source directory.

Automate file backup

Yes, keeping up with backups is a mundane task. In order to keep my drives fully synced, I add a cron job that runs the rsync command at night to keep all directories synced. I do, however, keep one external drive that is synced manually on a weekly basis. I don’t use the --delete flag there, as it may delete files that I might want to keep; I use that flag only when running the command manually.

To create a cron job, open crontab:

crontab -e

I run this at night, when both systems are idle, as moving a huge amount of files can slow your system down. The command below runs at 1 am every morning; you can change it as appropriate:

0 1 * * * rsync -avzP /source-directory /destination-directory

This is the structure for a cron job using crontab:

# m h  dom mon dow   command

Here m = minute, h = hour, dom = day of the month, mon = month, and dow = day of the week.

We are running this command at 1 am every day. You could instead choose to run it only on a certain day of the week or a certain day of the month (so that it runs on the 5th of every month, for example, as shown below). You can read more about crontab here.
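For example, to run the same backup at 1 am on the 5th of every month instead of every day, set the day-of-month field:

0 1 5 * * rsync -avzP /source-directory /destination-directory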

Managing your remote servers

One of the reasons you are running WSL on your system is that you manage Linux systems in the cloud, and WSL provides you with native Linux tools. The first thing you need to do is log into your Linux server remotely using the ssh command.

Let’s say my server is at 192.168.0.112; the dedicated port is 2018 (never use the default port 22); the Linux user on that server is swapnil, and the password is i-wont-tell-you.

ssh -p2018 swapnil@192.168.0.112

It will ask for the password and, eureka, you are logged into your Linux server. Now you can perform any task you want, as you are literally inside that Linux machine. No need to use PuTTY.

You can easily transfer files between your local and remote machines using the rsync command. Depending on whether you are uploading files to the server or downloading them to your local machine, replace the source or destination directory with username@IP-address-of-server:/path-of-directory. Because this server uses a non-standard ssh port, you also pass -e 'ssh -p2018' so that rsync knows how to connect.

So if I want to copy some text files to the home directory of my server, here is the command:

rsync -avzP -e 'ssh -p2018' /source-directory-on-local-machine swapnil@192.168.0.112:/home/swapnil/Documents/

It will copy all files to the Documents directory of my remote server.
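Downloading works the same way, with the remote path as the source and a local directory as the destination (the paths below follow the same hypothetical setup); the -e option again tells rsync to connect over ssh on port 2018:

rsync -avzP -e 'ssh -p2018' swapnil@192.168.0.112:/home/swapnil/Documents/ /destination-directory-on-local-machine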

Conclusion

The idea of this tutorial was to demonstrate that WSL allows you to perform a wide range of Linux-y tasks on your Windows 10 systems. In most cases, it increases productivity and performance. Now, the whole world of Linux is open to you for exploration on your Windows 10 system. Go ahead and explore it. If you have any questions, or if you would like me to cover more areas of WSL, please share your thoughts in the comments below.

Learn more about the Administering Linux on Azure (LFS205) course and sign up here.

Microservices 101

Microservices are an architectural approach to software development based on building an application as a collection of small services. Each service has its own unique and well-defined role, runs in its own process, and communicates via HTTP APIs or messaging. Each microservice can be deployed, upgraded, scaled, and restarted independently of all the sibling services in the application. They are typically orchestrated by an automated system, making it possible to have frequent updates of live applications without affecting the end users.

As a natural approach to optimizing work, we are already comfortable with the concept. Think about it: these days, your average cloud consumer, including adamantly non-technical people, easily and naturally uses multiple cloud products that are, essentially, micro-products and micro-apps. (They don’t call it “The App Store” for nothing.) Meanwhile, an average enterprise organization uses, at minimum, a dozen different software products and integrations: one tool for logging business expenses, another for schedule tracking, another for payroll management. You get the idea.

Read more at The New Stack