
Linux File Server Guide

Linux file servers play an essential role. The ability to share files is a basic expectation with any modern operating system in the workplace. When using one of the popular Linux distributions, you have a few different file sharing options to choose from. Some of them are simple but not that secure. Others are highly secure, yet require some know-how to set up initially.

Once set up on a dedicated machine, you can utilize these file sharing technologies on a dedicated file server. This article will address these technologies and provide some guidance on choosing one option over another.

Samba Linux File Server

Samba is essentially a collection of tools to access networked SMB (Server Message Block) shares. The single biggest advantage of Samba as a file sharing technology is that it’s compatible with all popular operating systems, especially Windows. Set up correctly, Samba works flawlessly between Windows and Linux servers and clients.

An important thing to note about Samba is that it uses the SMB protocol to make file sharing possible. SMB is a protocol native to Windows, whereas Samba merely provides SMB support on Linux. So when considering a file sharing technology for your needs, keep this in mind.
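
For context, Samba shares are defined in /etc/samba/smb.conf. As a minimal sketch (the share name, path, and group below are hypothetical placeholders), a share entry might look like this:

[projects]
   path = /srv/samba/projects
   browseable = yes
   read only = no
   valid users = @projectusers

After editing the file, restart the smbd service (for example, sudo systemctl restart smbd on many systemd-based distributions) for the change to take effect.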

Read more at Datamation

Viewing Linux Logs from the Command Line

Learn how to easily check Linux logs in this article from our archives.

At some point in your career as a Linux administrator, you are going to have to view log files. After all, they are there for one very important reason…to help you troubleshoot an issue. In fact, every seasoned administrator will immediately tell you that the first thing to be done, when a problem arises, is to view the logs.

And there are plenty of logs to be found: logs for the system, logs for the kernel, for package managers, for Xorg, for the boot process, for Apache, for MySQL… For nearly anything you can think of, there is a log file.

Most log files can be found in one convenient location: /var/log. These are all system and service logs, those which you will lean on heavily when there is an issue with your operating system or one of the major services. For desktop app-specific issues, log files will be written to different locations (e.g., Thunderbird writes crash reports to ‘~/.thunderbird/Crash Reports’). Where a desktop application writes its logs depends upon the developer and whether the app allows for custom log configuration.

We are going to focus on system logs, as that is where the heart of Linux troubleshooting lies. And the key question here is: how do you view those log files?

Fortunately there are numerous ways in which you can view your system logs, all quite simply executed from the command line.

/var/log

This is such a crucial folder on your Linux systems. Open up a terminal window and issue the command cd /var/log. Now issue the command ls and you will see the logs housed within this directory (Figure 1).

Figure 1: A listing of log files found in /var/log/.

Now, let’s take a peek into one of those logs.

Viewing logs with less

One of the most important logs contained within /var/log is syslog. This particular log file logs everything except auth-related messages. Say you want to view the contents of that particular log file. To do that, you could quickly issue the command less /var/log/syslog. This command will open the syslog log file to the top. You can then use the arrow keys to scroll down one line at a time, the spacebar to scroll down one page at a time, or the mouse wheel to easily scroll through the file.

The one problem with this method is that syslog can grow fairly large; and, considering what you’re looking for will most likely be at or near the bottom, you might not want to spend the time scrolling a line or a page at a time to reach the end. With syslog open in the less command, you can also hit the [Shift]+[g] combination to immediately go to the end of the log file. The end will be denoted by (END). You can then scroll up with the arrow keys or the scroll wheel to find exactly what you want.

This, of course, isn’t terribly efficient.
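
A slightly faster approach, still using only standard less features, is to have less jump to the end of the file as soon as it opens and then search for what you need. The search term below is just an example:

less +G /var/log/syslog

Once the file is open, typing /error and pressing Enter searches forward for the string “error”, pressing n repeats the search, and ?error searches backward from your current position.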

Viewing logs with dmesg

The dmesg command prints the kernel ring buffer. By default, the command will display all messages from the kernel ring buffer. From the terminal window, issue the command dmesg and the entire kernel ring buffer will print out (Figure 2).

Figure 2: A USB external drive displaying an issue that may need to be explored.

Fortunately, there is a built-in control mechanism that allows you to print out only certain facilities (such as daemon).

Say you want to view log entries for the user facility. To do this, issue the command dmesg --facility=user. If anything has been logged to that facility, it will print out.

Unlike the less command, issuing dmesg will display the full contents of the log and drop you at the end of the output. You can always use your scroll wheel to browse back through your terminal window’s buffer (if applicable), but a better option is to pipe the output of dmesg to the less command like so:

dmesg | less

The above command will print out the contents of dmesg and allow you to scroll through the output just as you did viewing a standard log with the less command.
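
The dmesg command (at least the util-linux version shipped with most modern distributions) supports a few other handy filters as well; for example, limiting output to warnings and errors, or printing human-readable timestamps:

dmesg --level=err,warn
dmesg -T

Piping either of these through less works just as it does above.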

Viewing logs with tail

The tail command is probably one of the handiest tools you have at your disposal for viewing log files. What tail does is output the last part of a file. So, if you issue the command tail /var/log/syslog, it will print out only the last few lines of the syslog file.

But wait, the fun doesn’t end there. The tail command has a very important trick up its sleeve, by way of the -f option. When you issue the command tail -f /var/log/syslog, tail will continue watching the log file and print out the next line written to the file. This means you can follow what is written to syslog, as it happens, within your terminal window (Figure 3).
Figure 3: Following /var/log/syslog using the tail command.

Using tail in this manner is invaluable for troubleshooting issues.

To escape the tail command (when following a file), hit the [Ctrl]+[c] combination.

You can also instruct tail to follow only a specific number of lines. Say you only want to view the last five lines written to syslog; for that, you could issue the command:

tail -f -n 5 /var/log/syslog

The above command would print out the most recent five lines of syslog and then continue following the file, appending each new line as it is written. This is a great way to make the process of following a log file even easier. I strongly recommend not setting this to fewer than four or five lines, as multi-line entries can wind up cut off and you won’t get the full details.
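
A related trick, common practice rather than something specific to this article, is to pipe the followed output through grep so that only the entries you care about are shown; the search string here is an arbitrary example:

tail -f /var/log/syslog | grep -i error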

There are other tools

You’ll find plenty of other commands (and even a few decent GUI tools) to enable the viewing of log files. Look to more, grep, head, cat, multitail, and System Log Viewer to aid you in your quest to troubleshoot systems via log files.
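
For instance, grep can pull matching entries straight out of a log file, and head prints the beginning of one; the search string and line count below are just examples:

grep -i "failed" /var/log/syslog
head -n 20 /var/log/syslog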

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

Eliminating the Product Owner Role

“The Product Owner role no longer exists” I recently announced to an entire department in a large company. A few POs looked a bit shocked and concerned. What would they do instead?

Before I get into who or what would replace the PO role, let me offer a bit of background on this group. Three coaches, including myself, had assessed this group prior to beginning work with them. Our findings were typical:

  • Too much technical debt was slowing development to a crawl
  • There was insufficient clarity on what needed to be built
  • The developers spent little time with their Product Owner
  • The team was scattered around a building, not co-located
  • etc.

When you perform numerous assessments of teams or departments across many industries, you tend to see patterns. The above issues are common. We worked out solutions to these problems eons ago. The challenge is whether people want to embrace change and actually solve their problems. This group apparently was hungry enough to want change….

Chartering is a vital skill I learned from a software industry legend named III. It helps teams and organizations figure out what outcome they’d like to achieve, how they would know they achieved it and who is necessary to help achieve it.

Read more from Joshua Kerievsky on Medium

What Are ‘Mature’ Stateful Applications?

BlueK8s is a new open source Kubernetes initiative from ‘big data workloads’ company BlueData — the project’s direction tells us a little about where containerised cloud-centric applications are heading.

The first open project in the BlueK8s initiative is Kubernetes Director (aka KubeDirector), for deploying and managing distributed ‘stateful applications’ with Kubernetes.

Apps can be stateful or stateless….

A stateful app is a program that saves client data from the activities of one session for use in the next session — the data that is saved is called the application’s state.

Typically, stateless applications are microservices or containerised applications that have no need for long-running [data] persistence and aren’t required to store data.

Read more at TechTarget

What Serverless Architecture Actually Means, and Where Servers Enter the Picture

Serverless architecture is not, despite its name, the elimination of servers from distributed applications. Serverless architecture refers to a kind of illusion, originally made for the sake of developers whose software will be hosted in the public cloud, but which extends to the way people eventually use that software. Its main objective is to make it easier for a software developer to compose code, intended to run on a cloud platform, that performs a clearly-defined job.

If all the jobs on the cloud were, in a sense, “aware” of one another and could leverage each other’s help when they needed it, then the whole business of whose servers are hosting them could become trivial, perhaps irrelevant. And not having to know those details might make these jobs easier for developers to program. Conceivably, much of the work involved in attaining a desired result might already have been done.

“What does serverless mean for us at [Amazon] AWS?” asked Chris Munns, senior developer advocate for serverless at AWS, during a session at the re:Invent 2017 conference. “There’s no servers to manage or provision at all. This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container — anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world.”

Read more at ZDNet

Tips for Success with Open Source Certification

In today’s technology arena, open source is pervasive. The 2018 Open Source Jobs Report found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications. And yet, 87 percent of hiring managers also cite difficulty in finding the right open source skills and expertise. This article is the second in a weekly series on the growing importance of open source certification.

In the first article, we focused on why certification matters now more than ever. Here, we’ll focus on the kinds of certifications that are making a difference, and what is involved in completing necessary training and passing the performance-based exams that lead to certification, with tips from Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation.

Performance-based exams

So, what are the details on getting certified and what are the differences between major types of certification? Most types of open source credentials and certification that you can obtain are performance-based. In many cases, trainees are required to demonstrate their skills directly from the command line.

“You’re going to be asked to do something live on the system, and then at the end, we’re going to evaluate that system to see if you were successful in accomplishing the task,” said Seepersad. This approach obviously differs from multiple choice exams and other tests where candidate answers are put in front of you. Often, certification programs involve online self-paced courses, so you can learn at your own speed, but the exams can be tough and require demonstration of expertise. That’s part of why the certifications that they lead to are valuable.

Certification options

Many people are familiar with the certifications offered by The Linux Foundation, including the Linux Foundation Certified System Administrator (LFCS) and Linux Foundation Certified Engineer (LFCE) certifications. The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.

“Note that there are no prerequisites,” Seepersad said. “What that means is that if you’re an experienced Linux engineer, and you think the LFCE, the certified engineer credential, is the right one for you…, you’re allowed to do what we call ‘challenge the exams.’ If you think you’re ready for the LFCE, you can sign up for the LFCE without having to have gone through and taken and passed the LFCS.”

Seepersad noted that the LFCS credential is great for people starting their careers, and the LFCE credential is valuable for many people who have experience with Linux such as volunteer experience, and now want to demonstrate the breadth and depth of their skills for employers. He also said that the LFCS and LFCE coursework prepares trainees to work with various Linux distributions. Other certification options, such as the Kubernetes Fundamentals and Essentials of OpenStack Administration courses and exams, have also made a difference for many people, as cloud adoption has increased around the world.

Seepersad added that certification can make a difference if you are seeking a promotion. “Being able to show that you’re over the bar in terms of certification at the engineer level can be a great way to get yourself into the consideration set for that next promotion,” he said.

Tips for Success

In terms of practical advice for taking an exam, Seepersad offered a number of tips:

  • Set the date, and don’t procrastinate.

  • Look through the online exam descriptions and get any training needed to be able to show fluency with the required skill sets.

  • Practice on a live Linux system. This can involve downloading a free terminal emulator or other software and actually performing tasks that you will be tested on.

Seepersad also noted some common mistakes that people make when taking their exams. These include spending too long on a small set of questions, wasting too much time looking through documentation and reference tools, and applying changes without testing them in the work environment.

With open source certification playing an increasingly important role in securing a rewarding career, stay tuned for more certification details in this article series, including how to prepare for certification.

Learn more about Linux training and certification.

Xen Project Hypervisor Power Management: Suspend-to-RAM on Arm Architectures

About a year ago, we started a project to lay the foundation for full-scale power management for applications involving the Xen Project Hypervisor on Arm architectures. We intend to make Xen on Arm’s power management the open source reference design for other Arm hypervisors in need of power management capabilities.

Looking at Previous Examples for Initial Approach

We looked at the older ACPI-based power management for Xen on x86, which features CPU idling (cpu-idle), CPU frequency scaling (cpu-freq), and suspend-to-RAM. We also looked at the PSCI platform management and pass-through capabilities of Xen on Arm, which already existed but did not have any power management support. We decided to take a different path than on x86 because we could not rely on ACPI, which is not widespread in the Arm embedded community. Xen on Arm already used PSCI for booting secondary CPUs, system shutdown, restart, and other miscellaneous platform functions; thus, we decided to follow that trend and base our implementation on PSCI.

Among the typical power management features, such as cpu-idle, cpu-freq, suspend-to-RAM, hibernate and others, we concluded that suspend-to-RAM would be the one best suited for our initial targets, systems-on-chips (SoCs). Most SoCs allow the CPU voltage domain to be completely powered off while the processor subsystem is suspended, and the state preserved in the RAM self-refresh mode, thereby significantly cutting the power consumption, often down to just tens of milliwatts.

Our Design Approach

Our solution provides a framework that is well suited for embedded applications. In our suspend-to-RAM approach, each unprivileged guest is given a chance to suspend on its own and to configure its own wake-up devices. At the same time, the privileged guest (Dom0) is considered to be a decision maker for the whole system: it can trigger the suspend of Xen, regardless of the states of the unprivileged guests.

These two features allow for different Xen embedded configurations and use-cases. They make it possible to freeze an unprivileged guest due to an ongoing suspend procedure, or to inform it about the suspend intent, giving it a chance to cooperate and suspend itself. These features are the foundation for higher level coordination mechanisms and use-case specific policies.
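
As a rough illustration of the guest side, and using the generic Linux mechanism rather than anything specific to the Xen work described here, a Linux guest typically initiates its own suspend-to-RAM through the standard sysfs power interface:

echo mem > /sys/power/state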

Solution Support

Our solution relies on the PSCI interface to allow guests to suspend themselves, and to enable the hypervisor to suspend the physical system. It further makes use of EEMI to enable guest notifications when the suspend-to-RAM procedure is initiated. EEMI stands for Embedded Energy Management Interface, and it is used to communicate with the power management controller on Xilinx devices. On the Xilinx Zynq UltraScale+ MPSoC we were able to suspend the whole application subsystem with Linux and Xen and put the MPSoC into its deep-sleep state, where it consumes only 35 mW. Resuming from this state is triggered by a wake-up interrupt that can be owned by either Dom0 or an unprivileged guest.

After the successful implementation of suspend-to-RAM, the logical next step is to introduce CPU frequency scaling and CPU idling based on the aggregate load and performance requirements of all VMs.

While an individual VM may be aware of its own performance need, its utilization level, and the resulting CPU load, this information only applies to the virtual CPUs assigned to the guest. Since the VMs are not aware of the virtual to physical CPU mappings, while also lacking awareness of all the other VMs and their performance needs, a VM is not in a position to make suitable decisions regarding the power and performance states of the SoC.

The hypervisor, on the other hand, is scheduling the virtual CPUs and needs to be aware of their utilization of the physical CPUs. Having this visibility, the hypervisor is well suited to make power management decisions concerning the frequency and idle states of the physical CPUs. In our vision, the hypervisor scheduler will become energy aware and allocate energy consumption slots to guests, rather than time slots.

Currently, our work is focused on testing the new Xen suspend-to-RAM feature on the Xilinx Zynq UltraScale+ MPSoC. We are calling on Xen Project developers to join the Xen power management activity and to implement and test the new feature on other Arm architectures, so we can accelerate the upstreaming effort and the accompanying cleanup.

Authors

Mirela Grujic, Principal Engineer at AGGIOS

Davorin Mista, VP Engineering and Co-Founder at AGGIOS

Stefano Stabellini, Principal Engineer at Xilinx and Xen Project Maintainer

Vojin Zivojnovic, CEO and Co-Founder at AGGIOS

 

Tickets Make Operations Unnecessarily Miserable

IT Operations has always been difficult. There is always too much work to do, not enough time to do it, and frequent interrupts. Moreover, there is the relentless pressure from executives who hold the view that everything takes too long, breaks too often, and costs too much.

In search of improvement, we have repeatedly bet on new tools to improve our work. We’ve cycled through new platforms (e.g., Virtualization, Cloud, Docker, Kubernetes) and new automation (e.g., Puppet, Chef, Ansible). While each comes with its own merits, has the stress and overload on operations fundamentally changed?

Enterprises have also spent the past two decades liberally applying Management frameworks like ITIL and COBIT. Would an average operations engineer say things have gotten better or worse?

In the midst of all of this, there is conventional wisdom that rarely gets questioned.

The first of these is the idea that grouping people by functional role should be the primary driver for org structure. I discussed the problem with this idea extensively in a previous post on silos.

Read more at Rundeck

Converting and Manipulating Image Files on the Linux Command Line

Most of us probably know how wonderful a tool Gimp is for editing images, but have you ever thought about manipulating image files on the command line? If not, let me introduce you to the convert command. It easily converts files from one image format to another and allows you to perform many other image manipulation tasks as well — and in a lot less time than it would take to make these changes using desktop tools.

Let’s look at some simple examples of how you can make it work for you.

Converting files by image type

Converting an image from one format to another is extremely easy with the convert command. Just use a convert command like the one in this example:

$ convert arrow.jpg arrow.png

The arrow.png image should look the same as the original arrow.jpg file, but the file will have the specified file extension and be different in size. 
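
The same command can manipulate images while converting them; for example (the file names here are only placeholders), resizing an image to half its original dimensions:

$ convert arrow.jpg -resize 50% arrow-small.png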

Read more at Network World

Debian 9.5 Released: “Rock Solid” GNU/Linux Distro Arrives With Spectre v2 Fix

Following the fourth point release of Debian 9 “stretch” in March, the developers of the popular GNU/Linux distro have shipped the latest update to its stable distribution. For those who don’t know, Debian 9 is an LTS version that’ll remain supported for 5 years.

As one would expect, this point release doesn’t bring any new features; it focuses on improving an already stable experience by delivering security patches and bug fixes. In case you’re looking for an option that brings new features, you can check out the recently released Linux Mint 19.

Coming back to Debian 9.5, all the security patches shipping with the release have already been published in the form of security advisories, and their references can be found in the official release post.

To be precise, Debian 9.5 was released with 100 security updates and 91 bug fixes spread across different packages.

Read more at FOSSBytes