
How to Use Git Version Control System in Linux [Comprehensive Guide]

Version Control (revision control or source control) is a way of recording changes to a file or collection of files over time so that you can recall specific versions later. A version control system (or VCS in short) is a tool that records changes to files on a filesystem.

There are many version control systems out there, but Git is currently the most popular and frequently used, especially for source code management. Version control can actually be used for nearly any type of file on a computer, not only source code.

Version control systems/tools offer several features that allow individuals or a group of people to:

  • create versions of a project.
  • track changes accurately and resolve conflicts.
  • merge changes into a common version.
  • roll back and undo changes to selected files or an entire project.
  • access historical versions of a project to compare changes over time.
  • see who last modified something that might be causing a problem.
  • create a secure offsite backup of a project.
  • use multiple machines to work on a single project and so much more.

A project under a version control system such as Git will have mainly three sections, namely:

  • a repository: a database for recording the state of or changes to your project files. It contains all of the necessary Git metadata and objects for the new project. Note that this is normally what is copied when you clone a repository from another computer on a network or remote server.
  • a working directory or area: stores a copy of the project files which you can work on (making additions, deletions, and other modifications).
  • a staging area: a file (known as the index in Git) within the Git directory that stores information about the changes you are ready to commit (save the state of a file or set of files) to the repository.
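
To see how these three sections interact in practice, here is a minimal command-line sketch (the project and file names are only examples):

git init myproject            # creates the repository (the .git directory)
cd myproject
echo "first draft" > notes.txt   # edit a file in the working directory
git add notes.txt             # move the change into the staging area (the index)
git commit -m "Add notes"     # record the staged snapshot in the repository
git log --oneline             # inspect the history recorded so far

Running git status at any point shows which changes are still only in the working directory and which are already staged for the next commit.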

Read more at Tecmint

Raspbian Linux Distribution Updated, But with One Unexpected Omission

New distribution images for the Raspberry Pi’s Raspbian operating system appeared on their Download page a week or so ago. The dates of the new images are 2018-10-09 for the Raspbian-only version, and 2018-10-11 for the NOOBS (Raspbian and More) version.

In a nutshell, this release includes:

  • a number of changes to the “first-run/startup wizard”, which is not surprising since that was just introduced in the previous release
  • a couple of interesting changes which look to me like they are responses to potential security problems (password changes now work properly if the new password contains shell characters? Hmmm. I wonder if this came up simply because some users were having trouble changing passwords, or because some clever users found they could use this to attack the system? Oh, and who ever thought it was a good idea to display the WiFi password by default?)
  • updates to the Linux kernel (4.14.71) and Pi firmware
  • various other minor updates, bug fixes, new versions and such
  • removed Mathematica
  • Raspberry Pi PoE HAT support

Those last two are the ones that really produced some excitement in the Raspberry Pi community. Just look at that next to last one… so innocent looking… but then go and look at the discussion in the Pi Forums about it.

Read more at ZDNet

On the low adoption of automated testing in FOSS

For projects of any value and significance, having a comprehensive automated test suite is nowadays considered a standard software engineering practice. Why, then, don’t we see more prominent FOSS projects employing this practice?

By Alexandros Frantzis, Senior Software Engineer at Collabora.

A few times in the recent past I’ve been in the unfortunate position of using a prominent Free and Open Source Software (FOSS) program or library, and running into issues of such fundamental nature that made me wonder how those issues even made it into a release.

In all cases, the answer came quickly when I realized that, invariably, the project involved either didn’t have a test suite, or, if it did have one, it was not adequately comprehensive.

I am using the term comprehensive in a very practical, non-extreme way. I understand that it’s often not feasible to test every possible scenario and interaction, but, at the very least, a decent test suite should ensure that under typical circumstances the code delivers all the functionality it promises to.

For projects of any value and significance, having such a comprehensive automated test suite is nowadays considered a standard software engineering practice. Why, then, don’t we see more prominent FOSS projects employing this practice, or, when they do, why is it often employed poorly?

In this post I will highlight some of the reasons that I believe play a role in the low adoption of proper automated testing in FOSS projects, and argue why these reasons may be misguided. I will focus on topics that are especially relevant from a FOSS perspective, omitting considerations which, although important, are not particular to FOSS.

My hope is that by shedding some light on this topic, more FOSS projects will consider employing an automated test suite.

As you can imagine, I am a strong proponent of automating testing, but this doesn’t mean I consider it a silver bullet. I do believe, however, that it is an indispensable tool in the software engineering toolbox, which should only be forsaken after careful consideration.

1. Underestimating the cost of bugs

Most FOSS projects, at least those not supported by some commercial entity, don’t come with any warranty; it’s even stated in the various licenses! The lack of any formal obligations makes it relatively inexpensive, both in terms of time and money, to have the occasional bug in the codebase. This means that there are fewer incentives for the developer to spend extra resources to try to safeguard against bugs. When bugs come up, the developers can decide at their own leisure if and when to fix them and when to release the fixed version. Easy!

At first sight, this may seem like a reasonably pragmatic attitude to have. After all, if fixing bugs is so cheap, is it worth spending extra resources trying to prevent them?

Unfortunately, bugs are only cheap for the developer, not for the users who may depend on the project for important tasks. Users expect the code to work properly and can get frustrated or disappointed if this is not the case, regardless of whether there is any formal warranty. This is even more pronounced when security concerns are involved, for which the cost to users can be devastating.

Of course, lack of formal obligations doesn’t mean that there is no driver for quality in FOSS projects. On the contrary, there is an exceptionally strong driver: professional pride. In FOSS projects the developers are in the spotlight and no (decent) developer wants to be associated with a low-quality, bug-infested codebase. It’s just that, due to the mentality stated above, in many FOSS projects the trade-offs developers make seem to favor a reactive rather than proactive attitude.

2. Overtrusting code reviews

One of the development practices FOSS projects employ ardently is code reviews. Code reviews happen naturally in FOSS projects, even in small ones, since most contributors don’t have commit access to the code repository and the original author has to approve any contributions. In larger projects there are often more structured procedures which involve sending patches to a mailing list or to a dedicated reviewing platform. Unfortunately, in some projects the trust in code reviews is so great that other practices, like automated testing, are forsaken.

There is no question that code reviews are one of the best ways to maintain and improve the quality of a codebase. They can help ensure that code is designed properly, is aligned with the overall architecture, and furthers the long-term goals of the project. They also help catch bugs, but only some of them, some of the time!

The main problem with code reviews is that we, the reviewers, are only human. We humans are great at creative thought, but we are also great at overlooking things, occasionally filling in the gaps with our own unicorns-and-rainbows inspired reality. Another reason is that we tend to focus more on the code changes at a local level, and less on how the code changes affect the system as a whole. This is not an inherent problem with the process itself but rather a limitation of humans performing the process. When a codebase gets large enough, it’s difficult for our brains to keep all the possible states and code paths in mind and check them mentally, even in a codebase that is properly designed.

In theory, the problem of human limitations is offset by the open nature of the code. We even have the so-called Linus’s Law, which states that “given enough eyeballs, all bugs are shallow”. Note the clever use of the indeterminate term “enough”. How many are enough? How about the qualitative aspects of the “eyeballs”?

The reality is that most contributions to big, successful FOSS projects are reviewed on average by a couple of people. Some projects are better, most are worse, but in no case does being FOSS magically lead to a large number of reviewers tirelessly checking code contributions. This limit in the number of reviewers also limits the extent to which code reviews can stand as the only process to ensure quality.

Continue reading on Collabora’s blog.

Understanding Linux Links: Part 1

Along with cp and mv, both of which we talked about at length in the previous installment of this series, links are another way of putting files and directories where you want them to be. The advantage is that links let you have one file or directory show up in several places at the same time.

As noted previously, at the physical disk level, things like files and directories don’t really exist. A filesystem conjures them up for our human convenience. But at the disk level, each filesystem keeps an index (on Linux filesystems, a table of inodes) near the beginning of the partition, with the actual data scattered over the rest of the disk.

Although filesystems differ in the details, the index in the partition containing your data maps where each directory and file starts and ends. It works much like the index of a book: when you load a file from your disk, your operating system looks up the entry in the index, and that entry says where the file starts on the disk and where it finishes. The disk head moves to the start point, reads the data until it reaches the end point and, hey presto: here’s your file.

Hard Links

A hard link is simply another entry in that index that points to an area on the disk that has already been assigned to a file. In other words, a hard link points to data that has already been indexed by another entry. Let’s see how this works.

Open a terminal, create a directory for tests and move into it:

mkdir test_dir
cd test_dir

Create a file by touching it:

touch test.txt

For extra excitement (?), open test.txt in a text editor and add a few words to it.

Now make a hard link by executing:

ln test.txt hardlink_test.txt

Run ls, and you’ll see your directory now contains two files… Or so it would seem. As you read before, what you are really seeing is two names for the exact same file: hardlink_test.txt contains the same content, takes up no extra space on the disk (try it with a large file to test this), and shares the same inode as test.txt:

$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt 
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

ls's -i option shows the inode number of a file. The inode is the chunk of information in the filesystem's index that contains the location of the file's data on the disk, the last time it was modified, and other metadata. If two files share the same inode, they are, for all practical purposes, the same file, regardless of where they are located in the directory tree.

Fluffy Links

Soft links, also known as symlinks, are different: a soft link is really an independent file; it has its own inode and its own little slot on the disk. But it only contains a snippet of data that points the operating system to another file or directory.

You can create a soft link using ln with the -s option:

ln -s test.txt softlink_test.txt

This will create the soft link softlink_test.txt to test.txt in the current directory.

By running ls -li again, you can see the difference between the two different kinds of links:

$ ls -li
total 8 
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt 
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt 
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

hardlink_test.txt and test.txt contain some text and take up the same space *literally*. They also share the same inode number. Meanwhile, softlink_test.txt occupies much less space and has a different inode number, marking it as a different file altogether. Using ls's -l option also shows the file or directory your soft link points to.
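
A quick way to see the practical difference, continuing with the files created above, is to delete the original file and check what each link does (this removes test.txt, so recreate it afterwards if you want to keep experimenting):

rm test.txt
cat hardlink_test.txt      # still prints your text: the data survives while at least one hard link remains
cat softlink_test.txt      # fails with "No such file or directory": the soft link now points at nothing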

Why Use Links?

They are good for applications that come with their own environment. It often happens that your Linux distro does not come with the latest version of an application you need. Take the case of the fabulous Blender 3D design software. Blender allows you to create 3D still images as well as animated films, and who wouldn’t want to have that on their machine? The problem is that the current version of Blender is always at least one version ahead of the one found in any distribution.

Fortunately, Blender provides downloads that run out of the box. Apart from the program itself, these packages come with a complex framework of libraries and dependencies that Blender needs to work. All these bits and pieces come within their own hierarchy of directories.

Every time you want to run Blender, you could cd into the folder you downloaded it to and run:

./blender

But that is inconvenient. It would be better if you could run the blender command from anywhere in your file system, as well as from your desktop command launchers.

The way to do that is to link the blender executable into a bin/ directory. On many systems, you can make the blender command available from anywhere in the file system by linking to it like this:

ln -s /path/to/blender_directory/blender /home/<username>/bin
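
For the linked blender command to work from anywhere, the bin/ directory you link into has to be in your PATH. Many distributions add ~/bin to the PATH automatically when the directory exists (often at your next login); if yours does not, a couple of shell commands along these lines will do it:

mkdir -p ~/bin
export PATH="$HOME/bin:$PATH"    # add this line to ~/.bashrc or ~/.profile to make it permanent
blender                          # the linked executable now runs from any directory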

Another case in which you will need links is for software that needs outdated libraries. If you list your /usr/lib directory with ls -l, you will see a lot of soft-linked files fly by. Take a closer look, and you will see that the links usually have similar names to the original files they are linking to. You may see libblah linking to libblah.so.2, and then, you may even notice that libblah.so.2 links in turn to libblah.so.2.1.0, the original file.

This is because applications often require an older version of a library than the one that is installed. The problem is that, even if the more modern versions are still compatible with the older versions (and usually they are), the program will bork if it doesn’t find the version it is looking for. To solve this problem, distributions often create links so that the picky application believes it has found the older version, when, in reality, it has only found a link and ends up using the more up-to-date version of the library.

Somewhat related is what happens with programs you compile yourself from the source code. Programs you compile yourself often end up installed under /usr/local: the program itself ends up in /usr/local/bin, and it looks for the libraries it needs in the /usr/local/lib directory. But say that your new program needs libblah, but libblah lives in /usr/lib and that’s where all your other programs look for it. You can link it to /usr/local/lib by doing:

ln -s /usr/lib/libblah /usr/local/lib

Or, if you prefer, by cding into /usr/local/lib

cd /usr/local/lib

… and then linking with:

ln -s ../../lib/libblah
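
One extra step is usually worthwhile when you create library links by hand: the dynamic linker keeps a cache of available libraries, so refresh it after adding or changing links in a library directory:

sudo ldconfig    # rebuild the shared library cache so the linker picks up the new link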

There are dozens more cases in which linking proves useful, and you will undoubtedly discover them as you become more proficient in using Linux, but these are the most common. Next time, we’ll look at some linking quirks you need to be aware of.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Take Our Cloud Providers Survey and Enter to Win a Maker Kit

Today’s most dynamic and innovative FOSS projects boast significant involvement by well-known cloud service and solution providers. We are launching a survey to better understand the perception of these solution providers by people engaging in open source communities.

Visible participation and the application of corporate resources have been key drivers of the success of open source software. However, some companies still face challenges:

  • Code consumption with minimal participation in leveraged projects, impacting ability to influence project direction

  • Hiring FOSS maintainers without a strategy or larger commitment to open source, impacting the ability to retain FOSS developers long-term

  • Compliance missteps and not adhering to FOSS license terms.

The experiences open source community members have with different companies impact the perception of those organizations among FOSS community participants. If companies want the trust of FOSS project participants, they must invest in building strategies, engaging communities, participating in projects, and complying with licenses.

Cloud Solutions Providers FOSS Survey

The Linux Foundation has been commissioned to survey FOSS developers and users about their opinions, perceptions, and experiences with six top cloud solution and service providers that deploy open source software. The survey examines respondents’ views of each provider’s reputation, level of project engagement, contribution, community citizenship, and project sponsorship.

By completing this survey, you will be eligible for a drawing for one of ten Maker Hardware kits, complete with case, cables, power supply, and other accessories.  The survey will remain open until 12 a.m. EST on November 18, 2018. 

Take the Survey Now

Drawing Rules

  • At the end of survey period, The Linux Foundation (LF) will randomly choose ten (10) respondents to receive a Maker hardware kit (“prize”).

  • Participants are only eligible to win one prize for this drawing and after winning a first prize will not be entered into any additional prize drawings for this promotion.

  • You must be 18 years or older to participate. Employees, vendors and contractors of The Linux Foundation and their families are not eligible, but LF project participants and employees of member companies are encouraged to complete the survey and enter the drawing.

  • To enter the drawing, you need only complete the contact info (name, email, etc.). Completing the contact info will constitute an “entry”. Any participant submitting multiple entries may be disqualified without notice. The Linux Foundation reserves the right to disqualify any participant if inaccurate or incomplete information is suspected for any reason.

  • There is no cash equivalent and no person other than the winning person may take delivery of the prize(s). The prize may not be exchanged for cash.

  • The drawing is open for participation until 12 a.m. EST on December 10, 2018. Any participant completing a survey after the deadline will not be entered into the drawing. The survey may remain open for participation beyond the drawing deadline.

  • Entries will be pooled together and a winner will be randomly selected. The winner will be notified via email and contacted directly; the winner’s name, city, and state of residence may be posted on our respective social media/marketing outlets (Linux.com, Twitter, Facebook, Google+, etc.). Winners have 30 days to respond to our contact, or a new drawing for the prize will be made.

A Pioneering Scientist Explains Deep Learning

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

The Verge spoke to Sejnowski about how “deep learning” suddenly became everywhere, what it can and cannot do, and the problem of hype.

First, I’d like to ask about definitions. People throw around words like “artificial intelligence” and “neural networks” and “deep learning” and “machine learning” almost interchangeably. But these are different things — can you explain?

AI goes back to 1956 in the United States, where engineers decided they would write a computer program that would try to imitate intelligence. Within AI, a new field grew up called machine learning. Instead of writing a step-by-step program to do something — which is a traditional approach in AI — you collect lots of data about something that you’re trying to understand.

Read more at The Verge

Think Global: How to Overcome Cultural Communication Challenges

In today’s workplace, our colleagues may not be located in the same office, city, or even country. A growing number of tech companies have a global workforce comprised of employees with varied experiences and perspectives. This diversity allows companies to compete in the rapidly evolving technological environment.

But geographically dispersed teams can face challenges. Managing and maintaining high-performing development teams is difficult even when the members are co-located; when team members come from different backgrounds and locations, that makes it even harder. Communication can deteriorate, misunderstandings can happen, and teams may stop trusting each other—all of which can affect the success of the company.

What factors can cause confusion in global communication? In her book, “The Culture Map,” Erin Meyer presents eight scales into which all global cultures fit. We can use these scales to improve our relationships with international colleagues. She identifies the United States as a very low-context culture in the communication scale. In contrast, Japan is identified as a high-context culture.

Read more at OpenSource.com

A Look at Fundamental Linux sed Commands

Linux administrators who want to modify files without overwriting the original have many options, but one of the most efficient tools is the stream editor — or sed.

The stream editor is a default part of most Linux distributions. It enables you to perform text file manipulations in the operating system with Linux sed commands.

Like most Linux applications, sed can process piped input, which makes it an effective scripting tool. You can use it as a basic find-and-replace tool, as in the example command below, which looks for occurrences of one and replaces them with two. The command is closed with a /g, which makes the substitution apply to every match on each line.
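
The command itself is not reproduced in this excerpt; a substitution of that shape looks roughly like this (input.txt is just a placeholder file name):

sed 's/one/two/g' input.txt           # print the edited text to standard output, leaving the file untouched
sed -i.bak 's/one/two/g' input.txt    # edit the file in place, keeping the original as input.txt.bak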

Read more at TechTarget

Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)

Part 3 had us running our Kr8sswordz Puzzle app, spinning up multiple instances for a load test, and watching Kubernetes gracefully balance numerous requests across the cluster.

Though we set up Jenkins for use with our Hello-Kenzan app in Part 2, we have yet to set up CI/CD hooks for the Kr8sswordz Puzzle app. Part 4 will walk through this setup. We will use a Jenkins 2.0 Pipeline script for the Kr8sswordz Puzzle app, this time with an important difference: triggering builds based on an update to the forked Git repo. The walkthrough will simulate updating our application with a feature change by pushing the code to Git, which will trigger the Jenkins build process to kick off. In a real-world lifecycle, this automation enables developers to simply push code to a specific branch and have their app build, push, and deploy to a specific environment.

Read the previous articles in the series:
 


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating a Kr8sswordz Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Parts 1, 2, and 3 so that you have all the components we previously built in Kubernetes (to do so quickly, you can run the automated scripts detailed below). For this tutorial we are assuming that Minikube is still up and running with all the pods from Part 3.

We are ready to create a new pipeline specifically for the puzzle service. This will allow us to quickly re-deploy the service as a part of CI/CD.

1. Enter the following terminal command to open the Jenkins UI in a web browser. Log in to Jenkins using the username and password you previously set up.

minikube service jenkins

2. We’ll want to create a new pipeline for the puzzle service that we previously deployed. On the left in Jenkins, click New Item.



For simplicity we’re only going to create a pipeline for the puzzle service, but we’ve provided Jenkinsfiles for all the rest of the services so that the application can be fully CI/CD capable.

3. Enter the item name as Puzzle-Service, click Pipeline, and click OK.


4. Under the Build Triggers section, select Poll SCM. For the Schedule, enter the string H/5 * * * * which will poll the Git repo every 5 minutes for changes.


5. In the Pipeline section, change the following.

  a. Definition: Pipeline script from SCM

  b. SCM: Git

  c. Repository URL: Enter the URL for your forked Git repository

  d. Script Path: applications/puzzle/Jenkinsfile



Remember how in Part 3 we had to manually replace the $BUILD_TAG env var with the git commit ID? The Kubernetes Continuous Deploy plugin we’re using in Jenkins will automatically find variables in K8s manifest files ($VARIABLE or ${VARIABLE}) and replace them with environment variables pre-configured for the pipeline in the Jenkinsfile. Variable substitution is functionality that Kubernetes itself lacks as of v1.11.0; the Kubernetes CD plugin, as a third-party tool, provides it for us.
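
As a rough sketch of what that substitution amounts to (the manifest path and registry name below are hypothetical, not the tutorial's actual files), you can preview the effect by hand with envsubst, which performs the same kind of $VARIABLE replacement:

BUILD_TAG=jenkins-Puzzle-Service-7 envsubst '$BUILD_TAG' < k8s/deployment.yaml | grep image:
# prints something like:   image: myregistry/puzzle:jenkins-Puzzle-Service-7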

6. When you are finished, click Save. On the left, click Build Now to run the new pipeline. This will rebuild the image from the registry, and redeploy the puzzle pod. You should see it successfully run through the build, push, and deploy steps in a few minutes.

Our Puzzle-Service pipeline is now set up to poll the Git repo for changes every 5 minutes and kick off a build if changes are detected.

Pushing a Feature Update Through the Pipeline

Now let’s make a single change that will trigger our pipeline and rebuild the puzzle-service.

On our current Kr8sswordz Puzzle app, hits against the puzzle services show up as white in the UI when pressing Reload or performing a Load Test:


However, you may have seen that the same white hit does not light up when clicking the Submit button. We are going to remedy this with an update to the code.  

7. In a terminal, run the following command to open the Kr8sswordz Puzzle app in your browser. (If you don’t see the puzzle, you might need to refresh the page.)

minikube service kr8sswordz

8. Spin up several instances of the puzzle service by moving the slider to the right and clicking Scale. For reference, click Submit and notice that the white hit does not register on the puzzle services.


If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.

9. Edit applications/puzzle/common/models/crossword.js in your favorite text editor, or edit it in nano using the commands below.

cd ~/kubernetes-ci-cd
nano applications/puzzle/common/models/crossword.js

You’ll see the following commented section on lines 42-43:

// Part 4: Uncomment the next line to enable puzzle pod highlighting when clicking the Submit button
//fireHit();

Uncomment line 43 by deleting the forward slashes, then save the file. (In nano, press Ctrl+X to close the file, type Y to confirm saving the changes, and press Enter to confirm the file name and write the changes.)

10. Commit and push the change to your forked Git repo (you may need to enter your GitHub credentials):

git commit -am "Enabled hit highlighting on Submit"
git push

11. In Jenkins, open up the Puzzle-Service pipeline and wait until it triggers a build. Polling runs every 5 minutes, so if it doesn’t trigger right away, give it some time.

12. After it triggers, observe how the puzzle services disappear in the Kr8sswordz Puzzle app, and how new ones take their place.

13. Try clicking Submit to test that hits now register as white.

If you see one of the puzzle instances light up, it means you’ve successfully set up a CI/CD pipeline that automatically builds, pushes, and deploys code changes to a pod in Kubernetes. It’s okay—go ahead and bask in the glory for a minute.


You’ve completed Part 4 and finished Kenzan’s blog series on CI/CD with Kubernetes!

From a development perspective, it’s worth mentioning a few things that might be done differently in a real-world scenario with our pipeline:

  • You would likely have separate repositories for each of the services that compose the Kr8sswordz Puzzle to enforce separation for microservice develop/build/deploy. Here we’ve combined all services in one repo for ease of use with the tutorial.

  • You would also set up individual pipelines for the monitor-scale and kr8sswordz services. Jenkinsfiles for these services are actually included in the repository, though for the purpose of the tutorial we’ve kept things simple with a single pipeline to demonstrate CI/CD.

  • You would likely set up separate pipelines for each deployment environment, such as Dev, QA, Stage, and Prod environments. For triggering builds for these environments, you could use different Git branches that represent the environments you push code to. (For example, dev branch > deploy to Dev, master branch > deploy to QA, etc.)

  • Though easy to set up, the SCM polling operation is somewhat resource intensive as it requires Jenkins to scan the entire repo for changes. An alternative is to use the Jenkins GitHub plugin on your Jenkins server, which lets GitHub notify Jenkins of new commits via webhooks instead of Jenkins having to poll.

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.  

  1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
b. sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd

b. npm install

3. Start the script

npm run part1 (or part2, part3, part4 of the blog series) 

4. Press Enter to proceed running each command.

Going Deeper

Building the Kr8sswordz Puzzle app has shown us some pretty cool continuous integration and container management patterns:

  • How infrastructure such as Jenkins or image repositories can run as pods in Kubernetes.

  • How Kubernetes handles scaling, load balancing, and automatic healing of pods.

  • How Jenkins 2.0 Pipeline scripts can be used to automatically run on a Git commit to build the container image, push it to a repository, and deploy it as a pod in Kubernetes.

If you are interested in going deeper into the CI/CD Pipeline process with deployment tools like Spinnaker, see Kenzan’s paper Image is Everything: Continuous Delivery with Kubernetes and Spinnaker.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on Microservices for multiple client teams.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

How To Install and Use Docker Compose on CentOS 7

Docker Compose is a tool that allows you to define and run multi-container Docker applications.

With Compose, you define the application’s services, networks and volumes in a single YAML file, then spin up your application with a single command.
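
As a minimal sketch of that workflow (the service and image below are only examples, not part of the tutorial):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
docker-compose up -d    # pull the image and start every service defined in the file
docker-compose down     # stop and remove the containers and network it created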

Compose can be used for different purposes such as single host application deployments, automated testing and local development.

This tutorial walks you through installing the latest version of Docker Compose on CentOS 7. We will also cover the basic Docker Compose concepts and commands.

Read more at Linuxize