
Most Useful Linux Commands You Can Run in Windows 10

In the previous articles of this series, we talked about getting started with WSL on Windows 10. In this, the final article of the series, we will talk about some widely used Linux commands you can run on Windows 10.

Before we dive further into the topic, let’s make it clear who this is for. This article is meant for greenhorn developers who use Windows 10 machines but want to learn about Linux, as it’s the dominant platform in the cloud, whether that’s Azure, AWS, or a private cloud. In a nutshell, it’s intended for Windows 10 users who are new to Linux.

Which commands you need will depend on your own workload; your mileage may vary from mine. The goal of this article is to get you comfortable with Linux on Windows 10. Also bear in mind that WSL doesn’t officially provide access to hardware components such as sound cards or the GPU. But Linux users never take no for an answer: many have managed not only to gain access to sound cards and GPUs but also to run desktop Linux apps on Windows. That’s beyond the scope of this article, though. We may cover it at some point, but not today.

Here are a few tasks to get started.

How to keep your Linux system up to date

Since you are running Linux inside of Windows, you don’t get all of the security protections that a native Linux system offers. In addition, if you don’t keep your Linux system patched, its vulnerabilities can expose your Windows machine to threats. Always keep your Linux machines up to date.

WSL officially supports openSUSE, SUSE Linux Enterprise, and Ubuntu. You can install other distributions as well, but I can get all of my work done with any of these, as all I need is access to some basic Linux utilities.

Update openSUSE Leap:

sudo zypper up

If you want a system upgrade, you can do that after running the above command:

sudo zypper dup

Update Ubuntu machine:

sudo apt-get update

sudo apt-get dist-upgrade

With that, you are safe and secure. Since updates on Linux systems are incremental, I run them daily. They are mostly a few KB or a few MB of updates with no downtime, unlike Windows 10 updates, which require a reboot.
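If you prefer to run the two Ubuntu steps as a single command, here is a minimal sketch (the -y flag is my addition; it simply answers the confirmation prompts for you):

sudo apt-get update && sudo apt-get dist-upgrade -y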

Managing files and folders

Once your system is updated, we can look at some mundane, or not-so-mundane, tasks.

The second most important task is to manage your local and remote files using Linux. I must admit that, as much as I prefer GUI apps, there are certain tasks where the terminal offers more value and reliability. Try moving 1TB of files using the Explorer app. Good luck. I always use the rsync command to transfer large batches of files. The good news is that, with rsync, if you stop a transfer in the middle, you can resume from where you left off.

Although you can use the cp or mv commands to copy or move files, I prefer rsync because it offers more flexibility than the others, and learning it will also help you transfer files between remote machines. There are three basic tasks that I mostly perform.

Copy entire directory using rsync:

rsync -avzP /source-directory /destination-directory

Move files using rsync:

rsync --remove-source-files -avzP /source-directory /destination-directory

This command deletes files from the source directory after they have been successfully copied to the destination directory.

Sync two directories:

I keep a copy of all of my files in more than one location, but I continue to add and delete files from the primary location. Keeping every other location in sync without a dedicated file-sync application could become a challenge; rsync simplifies the process. This is the command you need to keep two directories synced. Keep in mind that it’s a one-way sync, from source to destination.

rsync --delete -avzP /source-directory /destination-directory

The above command deletes files in the destination directory if they are not found in the source directory. In other words, it creates a mirror copy of the source directory.

Automate file backup

Yes, keeping up with backups is a mundane task. To keep my drives fully synced, I add a cron job that runs the rsync command at night. I do, however, keep one external drive that I sync manually on a weekly basis. I don’t use the --delete flag in the automated job, as it may remove files that I wanted to keep; I use that flag only when running rsync manually.

To create a cron job, open crontab:

crontab -e

I run this at night, when both systems are idle, as moving a huge number of files can slow your system down. The command runs at 1 am every day; you can change the schedule as appropriate:

0 1 * * * rsync -avzP /source-directory /destination-directory

This is the structure for a cron job using crontab:

# m h  dom mon dow   command

Here, m = minute, h = hour, dom = day of the month, mon = month, and dow = day of the week.

We are running this command at 1 am every day. You could instead run it on a certain day of the week or day of the month (so that it runs on the 5th of every month, for example), and so on, as sketched below. You can read more about crontab here.
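For instance, here are two hedged variations of the same job (the source and destination paths are placeholders, just as above):

# Run at 1 am every Sunday
0 1 * * 0 rsync -avzP /source-directory /destination-directory

# Run at 1 am on the 5th of every month
0 1 5 * * rsync -avzP /source-directory /destination-directory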

Managing your remote servers

One of the reasons you are running WSL on your system is to manage Linux systems in the cloud, and WSL provides you with native Linux tools. The first thing you need to do is log in to your remote Linux server using the ssh command.

Let’s say my server is 192.168.0.112; the dedicated port is 2018 (never use the default port 22); the Linux user on that server is swapnil; and the password is i-wont-tell-you.

ssh -p2018 swapnil@192.168.0.112

It will ask for the password and, eureka, you are logged into your Linux server. Now you can perform all the tasks you want, as you are literally inside that Linux machine. No need to use PuTTY.
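If you connect to the same server often, you can save the port and user name in your SSH client configuration so that a plain ssh myserver is enough. Here is a minimal sketch that appends such an entry (the alias myserver is my own placeholder):

cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName 192.168.0.112
    Port 2018
    User swapnil
EOF

After that, ssh myserver logs you in with the right port and user, and tools such as rsync that ride on top of ssh pick up the same settings.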

You can easily transfer files between your local machine and a remote machine using the rsync command. Instead of a local source or destination directory (depending on whether you are uploading files to the server or downloading them to your local machine), you can use username@IP-address-of-server:/path-to-directory.

So if I want to copy some text files to the home directory of my server, here is the command:

rsync -avzP -e 'ssh -p 2018' /source-directory-on-local-machine swapnil@192.168.0.112:/home/swapnil/Documents/

It will copy all files to the Documents directory of my remote server.
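The same pattern works in the other direction. As a sketch, to pull that Documents directory back down to a local folder (the local path here is a placeholder):

rsync -avzP -e 'ssh -p 2018' swapnil@192.168.0.112:/home/swapnil/Documents/ /local-destination-directory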

Conclusion

The idea of this tutorial was to demonstrate that WSL allows you to perform a wide range of Linux-y tasks on your Windows 10 systems. In most cases, it increases productivity and performance. Now, the whole world of Linux is open to you for exploration on your Windows 10 system. Go ahead and explore it. If you have any questions, or if you would like me to cover more areas of WSL, please share your thoughts in the comments below.

Learn more about the Administering Linux on Azure (LFS205) course and sign up here.

Kubernetes Ingress: NodePort, Load Balancers, and Ingress Controllers

A fundamental requirement for cloud applications is some way to expose that application to your end users. This article will introduce the three general strategies in Kubernetes for exposing your application to your end users, and cover the various tradeoffs of each approach. I’ll then explore some of the more sophisticated requirements of an ingress strategy. Finally, I’ll give some guidelines on how to pick your Kubernetes ingress strategy.

Ingress in Kubernetes

In Kubernetes, there are three general approaches to exposing your application.

  • Use a Kubernetes service of type NodePort, which exposes the application on a port across each of your nodes
  • Use a Kubernetes service of type LoadBalancer, which creates an external load balancer that points to a Kubernetes service in your cluster
  • Use a Kubernetes Ingress Resource
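As a rough illustration of the first two approaches, assuming you already have a Deployment named my-app listening on port 8080 (the names and port are placeholders, not part of the article):

# Expose the Deployment on a port of every node
kubectl expose deployment my-app --name=my-app-nodeport --type=NodePort --port=8080

# Or ask the cloud provider for an external load balancer
kubectl expose deployment my-app --name=my-app-lb --type=LoadBalancer --port=8080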

Read more on Medium

Microservices 101

Microservices are an architectural approach to software development based on building an application as a collection of small services. Each service has its own unique and well-defined role, runs in its own process, and communicates via HTTP APIs or messaging. Each microservice can be deployed, upgraded, scaled, and restarted independently of all the sibling services in the application. They are typically orchestrated by an automated system, making it possible to have frequent updates of live applications without affecting the end users.

As a natural approach to optimizing work, we are already comfortable with the concept. Think about it: These days, your average cloud consumer — including adamantly non-technical people — easily and naturally uses multiple cloud products that are, essentially, micro-products and micro-apps. (They don’t call it “The App Store” for nothing). Meanwhile, an average enterprise organization uses, at minimum, a dozen different software products and integrations: one tool for logging business expenses, another for schedule tracking, another for payroll management. You get the idea.

Read more at The New Stack

Building Open Source Security into DevOps

DevOps is a philosophy of IT operations that binds the development of services and their delivery to the core principles of W. Edwards Deming’s points on Quality Management. When applied to software development and IT organizations, Deming’s principles seek to improve the overall quality of software systems as a whole.

This is done in part by decomposing the system into manageable components, which can be owned by teams. These teams have the freedom to quickly resolve any issues that might prevent the system from operating properly.

By creating a sense of pride and ownership in the delivered system, any issues discovered can be quickly resolved. This method increases the overall health of the system, which has led to the rise of Continuous Integration (CI) and Continuous Delivery (CD) as defining attributes of DevOps. 

Read more at Infosecurity

Hands-On: Installing Five Different Linux Distributions on my New HP Laptop

I’ve just picked up a new laptop, and I have to say at first glance, it looks like a real beauty. It’s an HP 15-bs166nz, which I got at one of the large electronics chains here in Switzerland for CHF 649.- (approximately £500 / €560 / $685). That’s supposedly half-price, if you believe their list prices. It’s a bit difficult to judge, really, because HP makes so many different models with similar numbers but very different configurations, but after digging around on this one for a while I decided it is a very good price for this configuration.

I have stayed away from HP laptops for several years now, because of how difficult it was to manage and configure their UEFI firmware to boot Linux. So that will be one of the major things I will be interested in looking at on this one.

Read more at ZDNet

Kali Linux Ethical Hacking OS Is Now Available in the Windows 10 Store for WSL

At the request of the community, Microsoft made it possible to download and install Kali Linux directly from the Windows 10 Store for its Windows Subsystem for Linux (WSL) feature, which needs to be enabled on your Windows 10 machine before you attempt to run Kali Linux.

“We’re excited to announce that you can now download & install Kali Linux via the Windows Store,” said Tara Raj, Program Manager at Microsoft. “Our community expressed great interest in bringing Kali Linux to WSL in response to a blog post on Kali Linux on WSL. We are happy to officially introduce Kali Linux on WSL.”

Read more at Softpedia

License Scanning and Compliance for FOSS Projects: A Free Publication

Modern open source projects rarely consist solely of all new code, written entirely from scratch. More often, they are built from many sources. And, each of these original sources may operate under a particular license – which may also differ from the license that the new project uses.

A new publication, called License Scanning and Compliance Programs for FOSS Projects, aims to clarify and simplify this process. This paper, written by Steve Winslow from The Linux Foundation, describes the benefits of license scanning and compliance for open source projects, together with recommendations for how to incorporate scanning and compliance into a new or existing project.

Read more at The Linux Foundation

The Decentralized Internet Is Here, With Some Glitches

Proponents as varied as privacy activists and marquee venture capitalists talk about the decentralized internet as a kind of digital Garden of Eden that can restore the freedom and good will of the internet’s early days. The argument goes that big tech companies have locked up our data and minds inside stockholder-serving platforms that crush competition and privacy. Ultra-private, socially conscious decentralized apps, sometimes dubbed DApps, will give us back control of our data, and let startups slay giants once more.

“The best entrepreneurs, developers, and investors have become wary of building on top of centralized platforms,” Chris Dixon, a partner with the investment firm Andreessen Horowitz, wrote last month in a kind of manifesto for a more decentralized internet. Tim Berners-Lee, the inventor of the World Wide Web, has similar concerns. Graphite Docs and some other early DApps are far from perfect, but show there’s something to the hype. A life less dependent on cloud giants is possible, if not yet easy.

Read more at Wired

The Engine of HPC and Machine Learning

There is no question right now that if you have a big computing job in either high performance computing (the colloquial name for traditional massively parallel simulation and modeling applications) or machine learning (the set of statistical analysis routines with feedback loops that can perform identification and transformation tasks that used to be solely the realm of humans), then an Nvidia GPU accelerator is the engine of choice to run that work at the best efficiency.

It is usually difficult to make such clean proclamations in the IT industry, with so many different kinds of compute available. But Nvidia is in a unique position, and one that it has earned through more than a decade of intense engineering, where it really does not have effective competition in the compute areas where it plays.

Parallel routines written in C, C++, or Fortran were offloaded from CPUs to GPUs in the first place because the CPUs did not have sufficient memory bandwidth to handle these routines. 

Read more at The Next Platform

Eliminating Storage Failures in the Cloud

With the advent of disk mirroring over 35 years ago, data redundancy has been the basic strategy against data loss. That redundancy was extended in the replicated state machine (RSM) clusters popularized by cloud vendors in the early aughts and widely used today in scale-out systems of all types.

The idea behind an RSM is that many servers running with the same initial state and the same sequence of inputs will produce the same outputs. That output will always be correct and available if a majority of the servers are functional. A consensus algorithm, such as Paxos, ensures that the state machine logs are kept in sync.

At the Usenix FAST ’18 conference, Ramnatthan Alagappan et al. presented the paper Protocol-Aware Recovery for Consensus-Based Storage, which introduced a new approach to correctly recover from RSM storage faults. They call it corruption-tolerant replication, or CTRL.

Read more at ZDNet