
How to Manage Kubernetes Apps with Helm Charts

Helm can make deploying and maintaining Kubernetes-based applications easier, said Amy Chen in her talk at KubeCon + CloudNativeCon. Chen, a Systems Software Engineer at Heptio, began by dissecting the structure of a typical Kubernetes setup, explaining how she often described the basic Docker containers as “baby computers,” in that containers are easy to move around, but they still need the “mommy” computer. However, containers do carry with them all the environmental dependencies for a given application.

Basic Units

On the next level up, there is the pod, the basic unit of Kubernetes. Pods group related containers together, along with other supporting pieces (e.g., databases). They can only be reached from inside the cluster and are assigned IPs from the cluster’s internal network. A pod’s IP is dynamic and can change when, for example, the pod is terminated and spun up again. This makes pod addresses unreliable, an issue that Kubernetes services solve (see below).

A deployment groups replicated Pods together. Here is where the Kubernetes concepts of actual state and desired state come into play. The deployment controller is in charge of reaching the desired state from the actual state. If, for example, you need three pods and one crashes, the deployment controller will spin up another to achieve the desired state.

A service is a group of pods or deployments. While a service has nothing to do with the states described above, it does provide a way of locating deployments. As pods can be terminated and die, their replacements may come up with different IPs, meaning you cannot rely on a pod’s IP as a way to communicate with it. Services define a dependable endpoint that the rest of the system can use to reach the pods behind them. Finally, Ingress routes traffic from the outside world to the internal services.
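You can see this difference from the command line (a quick sketch, not taken from Chen’s talk): each pod shows its own, ephemeral IP, while the service in front of those pods keeps a stable cluster IP.

kubectl get pods -o wide
kubectl get services

The first command lists the pods together with their current IPs; the second shows the stable cluster IP that clients should use instead.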

Although all of these pieces can be managed with Kubernetes’ own kubectl tool to some extent, that approach has several drawbacks: kubectl forces you to run multiple complicated commands that must be executed in a certain order, for example. Also, kubectl makes no provision for version controlling your setup. So, if you change something and then want to go back to your initial setup, with kubectl you have to tear everything down and then build it all up again, entering your original settings all over again.
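For example, bringing up even a small application with kubectl alone typically means applying each manifest yourself, in the right order (a sketch using the manifest names that appear later in this article):

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

Undoing or redoing that work means repeating the same steps by hand.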

Chart your course

That’s where Helm can help, says Chen. Helm uses charts, text configuration files that define a group of manifest files. Helm charts reference a series of templates, yaml documents that define the deployment, services, and ingress. With those in place, using helm install will bring up your service.
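As a rough sketch of that layout (not part of Chen’s demo), the helm create command scaffolds a chart with exactly these pieces; the generated files vary slightly between Helm versions:

helm create mychart

This produces mychart/Chart.yaml, mychart/values.yaml, and a templates/ directory containing deployment.yaml, service.yaml, ingress.yaml, and a few helper templates.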

In the demo phase of her talk, Chen showed how Helm makes it easier to get all the moving parts working. The yaml files used by Helm are easily parsable by humans: just by looking at each section, it is easy to understand what it does.

A line that says replicas: 3 in the deployment.yaml file, for example, will bring up three replicas of your pods in the deployment phase; the containerPort: 80 line tells Helm to expose port 80 on the pod; and so on. The service.yaml and ingress.yaml files are equally simple to understand.

After configuring your setup, you can run helm install and the application will start; Helm also returns data on the deployment so you can check that everything went correctly.
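A minimal sketch, assuming a chart directory named mychart and a release named myapp (Helm 3 syntax; the Helm 2 releases current at the time of the talk took the release name via --name instead):

helm install myapp ./mychart
helm status myapp

helm status reports the state of the release afterwards, so you can check that the install succeeded.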

Helm also lets you use configuration variables. You can create a values.yaml file that contains the values that will be used in the deployment.yaml, service.yaml and ingress.yaml files. This avoids having to edit the configuration files by hand every time you want to change something, making modifications easier and less prone to errors.
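For instance, assuming the chart exposes a value named replicaCount in values.yaml (a hypothetical name), you can point the install at a different values file or override individual values on the command line:

helm install myapp ./mychart -f production-values.yaml
helm install myapp ./mychart --set replicaCount=5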

Helm also allows you to “upgrade” (which, in this context, means modify) a running setup live with the upgrade command. If, for any reason, the upgrade does not turn out the way you want, you can use helm list to see the releases you have deployed, and then helm rollback to go back to the version that worked best.
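Sketched out, that workflow looks like this, again assuming a release named myapp:

helm upgrade myapp ./mychart
helm list
helm rollback myapp 1

helm list shows each release with its current revision number, and helm rollback returns the release to whichever revision you name.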

In conclusion, Helm is not only useful for beginners thanks to its simplified usage, says Chen, but it also provides some extra features that make running and maintaining Kubernetes-based applications more efficient than using just kubectl.

Watch the entire presentation below:

Learn more about Kubernetes at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

Embedded Apprentice Linux Engineer Courses Coming to a Conference Near You

The 2018 conference season — which, let’s admit, lasts from January 1 to December 31 these days — is already in full swing. And, the various conferences and summits held around the world provide meeting grounds for people who create, maintain, govern, and promote open source.  

However, as the conference scene has become much larger and more varied, one thing that is often missing is simple educational instruction in a specific technology. Conferences are wonderful places if you already know what’s going on and want to find out the latest, but learning at the apprentice level about esoteric or deep subjects, such as embedded Linux, is often a much less glamorous affair. Training has largely been limited to classrooms, professional corporate training, or DIY with a good book.

E-ALE

E-ALE, which is short for Embedded Apprentice Linux Engineer, is a new initiative that aims to challenge this state of affairs. This undertaking is the brainchild of a group of embedded Linux professionals who met, logically enough, at a conference — SCaLE in 2017 — and discussed this lack of apprenticeship level training. Afterwards, Behan Webster and Tom King, both Linux consultants and professional trainers with The Linux Foundation, and Jeff Osier-Mixon, open source community strategist at Intel, took the reins to organize a portable educational track consisting of nine 2-hour courses over 3 days.


In this track, which debuts at SCaLE 16x in Pasadena on March 8 and will also be presented at the Embedded Linux Conference in Portland starting March 12, professional trainers each contribute one course and their time in exchange for exposure and the pleasure of mentoring new users in the not-so-dark arts of embedded Linux. The collection of courses is available to up to 50 apprentices. Each can choose to attend only the courses that interest them, so that they can also attend presentations at the rest of the conference. Each individual class hosts approximately 30 students.

The only cost — beyond that of the conference itself — is a small hardware kit, consisting of a Pocket BeagleBoard (ARM Cortex-based development board) along with a BaconBits add-on board provided by QWERTY Embedded Design, GHI, and OSHPark. The kit costs $75 and is required to attend any of the hands-on courses. A laptop is also required (see the E-ALE page for other requirements).

Apprentice level instruction

Note that while these are apprentice-level courses, they are not “beginner” courses in how to use Linux. Students are expected to understand the basics of the Linux operating system, to be familiar with command-line interfaces and the C programming language, and to have some facility with electronics. Classes run for about 2 hours and typically consist of 45 to 60 minutes of instruction, giving a solid high-level grounding in a subject, followed by an hour or more of hands-on time with the hardware, exploring the subject matter. Students can then continue their practice at home, stay in touch with each other and ask further questions through an alumni mailing list, and participate on the E-ALE wiki. See the course descriptions for details on the scope and depth of each course.

And the best part? All of the training materials for the courses are available as Creative Commons documents, free to download after each conference, along with recordings where possible.

With 18 hours of corporate training for the cost of a bit of hardware, students win big with the E-ALE track. Trainers also win, with two hours of high-profile exposure to students who can then take business cards back to their companies and provide personal recommendations for corporate training and consulting. The conferences themselves win by providing venues for high-quality instruction, making them places of learning. And with documentation and real training now provided at events around the world, the entire Linux community wins. How often do you run across a win-win-win-win scenario?

Learn more

If you would like to attend, support, or sponsor E-ALE, visit the website for details and upcoming conferences. The E-ALE track is currently planned in 2018 for SCaLE 16x, Embedded Linux Conference + OpenIoT Summit North America, and Embedded Linux Conference Europe, October 22 in Edinburgh, UK. E-ALE is also exploring the possibility of providing courses on other Linux-based technologies, so stay tuned for more.

Jeff “Jefro” Osier-Mixon has been a fixture in the open source landscape since long before the term “open source” was invented. He is currently a strategist and community manager in Intel’s Open Source Technology Center and a community manager and advisor for a number of Linux Foundation projects.

Serverless Security: What’s Left to Protect?

The cost savings Serverless offers have greatly accelerated its rate of adoption, and many companies are starting to use it in production, coping with less mature dev and monitoring practices to get the monthly bill down. Such a trade-off makes sense when you balance effort vs. reward, but one aspect of it is especially scary – security.

Key Takeaways

  • FaaS takes on the responsibility for “patching” the underlying servers, freeing you from OS patching
  • Denial of Service (DoS) attacks are naturally thwarted by the (presumed) infinite capacity Serverless offers.
  • With serverless, we deploy many small functions that can have their own permissions. However, managing granular permissions for hundreds or thousands of functions is very hard to do.
  • Since the OS is unreachable, attackers will shift their attention to the areas that remain exposed – and first amongst those would be the application itself.
  • Known vulnerabilities in application libraries are just as risky as those in the server dependencies, and the responsibility for addressing vulnerable app libraries falls to you – the function developer.

Read more at InfoQ

Automated Compliance Testing with InSpec

Those who have been involved in converting a home-grown system to one in which strict compliance rules are observed know the pain involved. Whereas previously a laissez-faire atmosphere ruled the day, all of a sudden a rigid structure with many requirements and conditions regulates the administrator’s work, often with far-reaching consequences. The sheer volume of regulations alone can make moving forward difficult. If a quick fix is needed in an emergency, compliance rules often provide for exceptions, but they do need to be replaced by the right solutions looking forward.

InSpec, from the developer of Chef, promises to run compliance tests automatically and regularly on target systems with tests you define in a human-readable language that avoids the need to learn an overly elaborate syntax. InSpec describes itself as a framework for auditing and testing. First and foremost, it’s all about acid testing the existing automated system to determine whether the system and the services running on it are configured in line with policies. The slogan is “Compliance as Code.”
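To give a feel for the workflow (a sketch, with a hypothetical profile directory and hostname), you run a profile of tests against the local machine or point it at a remote system over SSH:

inspec exec ./my-compliance-profile
inspec exec ./my-compliance-profile -t ssh://admin@web01

InSpec then reports which controls passed, failed, or were skipped.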

Having a tool for automated compliance testing come from the same company that also has the Chef automation tool in its portfolio makes a lot of sense.

Read more at ADMIN Magazine

Endless OS Helps Tear Down Linux Wall

The Endless OS community’s goal is to build a global platform for digital literacy. Its EOS Shell desktop is a big factor in making this universal computing platform work. It eliminates the technology barrier that often inhibits Linux newcomers.

Although it looks and feels a lot like an Android shell running on a PC, Endless OS is a fully functional Linux distro designed to be easy to install and very simple to use.

Endless Mobile released this latest version, 3.3.10, on Feb. 10. Its features include automatic updates, improved launch speed for applications, and some Flatpak programs from the Flathub community repository rather than Endless’ custom repository.

Read more at LinuxInsider

Adrian Cockcroft on the Convergence of Cloud Native Computing and AWS

Cloud native computing is transforming cloud architectures and application delivery at organizations of all sizes. Via containers, microservices, and more, it introduces many new efficiencies. One of the world’s leading experts on it, Adrian Cockcroft, Vice President of Cloud Architecture at Amazon Web Services (AWS), focused on cloud native computing within the context of AWS in his keynote address at KubeCon + CloudNativeCon.

In his talk, called “Cloud Native at AWS,” Cockcroft covered topics including Fargate container provisioning, running Kubernetes on AWS, and open source trends at AWS. “Cloud native computing is pay-as-you-go, emphasizing self-service,” he said. “You’re not going to have to invest in a data center and guess at how much capacity you are going to need next year. Through it, you can get very high utilization.”

Watch the complete video at The Linux Foundation

 

How to Use WSL Like a Linux Pro

In the previous tutorial, we learned about setting up WSL on your Windows 10 system. You can perform a lot of Linux command-line tasks in Windows 10 using WSL. Many sysadmin tasks are done inside a terminal, whether it’s a Linux-based system or macOS. Windows 10, however, lacks such capabilities out of the box. You want to run a cron job? No. You want to ssh into your server and then rsync files? No way. How about managing your local files with powerful command-line utilities instead of using slow and unreliable GUI utilities?

In this tutorial, you’ll see how to perform additional tasks beyond managing your servers using WSL – things like mounting USB drives and manipulating files. You need to be running a fully updated Windows 10 and the Linux distro of your choice. I covered these steps in the previous article, so begin there if you need to catch up. Let’s get started.

Keep your Linux system updated

The fact is there is no Linux kernel running under the hood when you run Ubuntu or openSUSE through WSL. Yet, you must keep your distros fully updated to keep your system protected from any new known vulnerabilities. Since only two free community distributions are officially available in the Windows Store, our tutorial will cover only those two: openSUSE and Ubuntu.

Update your Ubuntu system:

$ sudo apt-get update

$ sudo apt-get dist-upgrade

To run updates for openSUSE:

# zypper up

You can also upgrade openSUSE to the latest version with the dup command. But before running the system upgrade, please run updates using the previous command.

# zypper dup

Note: openSUSE defaults to the root user. If you want to perform any non-administrative tasks, please switch to a non-privileged user. You can learn how to create a user on openSUSE in this article.

Manage local files

If you want to use great Linux command-line utilities to manage your local files, you can easily do that with WSL. Unfortunately, WSL doesn’t yet support things like lsblk or mounting raw block devices directly. You can, however, cd to the C drive and manage files:

cd /mnt/c/Users/swapnil/Music

I am now in the Music directory of the C drive.

To mount other drives, partitions, and external USB drives, you will need to create a mount point and then mount that drive.

Open File Explorer and check the mount point of that drive. Let’s assume it’s mounted in Windows as S:

In the Ubuntu/openSUSE terminal, create a mount point for the drive.

sudo mkdir /mnt/s

Now mount the drive:

sudo mount -t drvfs S: /mnt/s

Once mounted, you can access that drive from your distro. Just bear in mind that a distro running under WSL sees only what Windows can see. So you can’t mount ext4 drives that can’t be mounted natively in Windows.

You can now use all those magical Linux commands here. Want to copy or move files from one folder to another? Just run the cp or mv command.

cp /source-folder/source-file.txt /destination-folder/

cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/

If you want to move folders or large files, I would recommend rsync instead of the cp command:

rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/

Yay!

Want to create new directories on Windows drives? Just use the awesome mkdir command.

Want to set up a cron job to automate a task at a certain time? Go ahead and create a cron job with crontab -e. Easy peasy.
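As a purely hypothetical example, this crontab entry would use rsync every night at 2 a.m. to copy a Documents folder to the S: drive mounted earlier (cron itself must be running inside your WSL session for the job to fire):

0 2 * * * rsync -a /mnt/c/Users/swapnil/Documents/ /mnt/s/backup/documents/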

You can also mount network/remote folders in Linux so you can manage them with better tools. All of my drives are plugged into either a Raspberry Pi powered server or a live server, so I simply ssh into that machine and manage the drive. Transferring files between the local machine and remote system can be done using, once again, the rsync command.
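For example, to push a local folder to a hypothetical remote server over SSH:

rsync -avzP /mnt/c/Users/swapnil/Documents/ swapnil@192.168.1.50:/media/backup/documents/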

WSL is now out of beta, and it will continue to get new features. Two features that I am excited about are the lsblk command and the dd command, which would allow me to natively manage my drives and create bootable Linux drives from within Windows. If you are new to the Linux command line, this previous tutorial will help you get started with some of the most basic commands.

Learn more about the Administering Linux on Azure (LFS205) course and sign up here.

6 Days Left to Submit a Proposal to Speak at LinuxCon + ContainerCon + CloudOpen China

Submit a proposal to speak at LinuxCon + ContainerCon + CloudOpen China (LC3), taking place in Beijing this June 25 – 27, and share your expertise with 3,000+ open source technologists, executives and community members.

We’re seeking a wide range of submissions on topics including Open Source Strategy & Governance, Networking & Orchestration, Linux Systems, Cloud Native & Containers, AI and more. Proposals are due Sunday, March 4, 2018. Submit Now.

Read more at The Linux Foundation

SecOps Spends Its Days Monitoring

Developers, Security and Operations: DevSecOps. The operations part of the term usually refers to IT operations. Today, however, we narrow in on SecOps: the analysts who work in security operations centers (SOCs) and cyber incident response teams (CIRTs). The Cyentia Institute’s survey of 160 of these security analysts shows they face some of the same challenges developers and IT operations teams do. They spend more time on monitoring than any other activity, but they would much rather solve problems and “hunt” new threats. SecOps does not like reporting, or something called Shift Ops (the actual details of change control and making sure the team doesn’t burn out). Given the shortage of information security professionals, it is concerning that only 45 percent of respondents said their job experience was meeting their expectations.

Cyentia suggests that automation can reduce the time spent on monitoring, letting analysts focus on intrusion prevention and threat intelligence. 

Read more at The New Stack

How Compilers Work

Compilers translate source code into executable programs and libraries. Inside modern compiler suites, a multistage process analyzes the source code, points out errors, generates intermediate code and tables, rearranges a large amount of data, and adapts the code to the target processor.

Below the surface, the compiler is a black box that handles complex processes requiring a good knowledge of machine theory and formal languages. Given the importance of compilers, it is not surprising that compiler construction is a standard part of the curriculum for computer science students. If you have never been to a college-level lecture on compiler theory – or if you went to the lecture but need a refresher course – this article summarizes the basics.

In simple terms, a compiler goes through three steps: It parses the source code, analyzes it, and synthesizes the finished program (Figure 1).

Figure 1: Rough structure of a compiler: parse code, analyze it, and create an executable program.
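You can watch this multistage process from the shell. With GCC, for example (assuming a source file named hello.c), each step produces the input for the next; these are the compiler driver’s coarse stages rather than the internal parse/analyze/synthesize phases, but the idea of intermediate results flowing between steps is the same:

gcc -E hello.c -o hello.i    # preprocess only
gcc -S hello.i -o hello.s    # parse, analyze, and emit assembly
gcc -c hello.s -o hello.o    # assemble into object code
gcc hello.o -o hello         # link into an executable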

Read more at Linux Pro Magazine