
Make Peace With Your Processes: Part 1

A fundamental design feature of Unix-like operating systems is that many of a system’s resources are accessible via the filesystem, as a file. For example, the “procfs” pseudo-filesystem offers us access to all kinds of valuable treasures. In this series of articles, I’ll provide an overview of your system processes, explain how to use the “ps” command, and much more.

By querying files present in “/proc” you can quickly find out the intricacies of your network connections, the server’s CPU vendor, and look up a mountain of useful information, such as the command line parameters that were passed to your latest-and-greatest application when it fired up. This is because many of the functions of a server — such as a network connection — are really just another stream of data and in most cases can be presented as a file on your filesystem.
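As a quick, hedged illustration (the exact fields can vary between kernel versions and architectures), the CPU vendor and the kernel’s raw table of current TCP connections can both be read straight out of “/proc” on a typical x86 box:

# grep -m1 'vendor_id' /proc/cpuinfo

# cat /proc/net/tcp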

Let’s jump ahead for a moment in case you’re not too familiar with the “/proc” filesystem. If you knew that your Process ID was 16551, for example, then you could run this command to find out what was sent to the Puppet process to start it up:

# xargs -0 < /proc/16551/cmdline

The output from that command is:

/usr/bin/ruby /usr/bin/puppet agent

As you can see, Puppet’s agent is using the Ruby programming language in this case and the binary “/usr/bin/puppet” is passed the parameter “agent” to run it as an “agent” and not a “master”.
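Sticking with that same PID purely for illustration (it will, of course, be different on your machine), a few of the other standard entries under “/proc” for a process are also worth a quick look; they reveal the binary being run, its working directory and the environment it was started with:

# readlink /proc/16551/exe

# ls -l /proc/16551/cwd

# tr '\0' '\n' < /proc/16551/environ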

The “Everything Is A File” philosophy makes total sense if you think about it. The power harnessed within Unix’s standard tools (usually used for manipulating data held in the more common text files) such as “grep”, “awk” and “sed” is a major strength of the Operating System. But most importantly you can have a system’s components integrate very efficiently and easily if many things are simply files.

If you have ever tried to look into a process running on a Unix-like machine, then you’ll know that the abundance of information can add confusion, rather than assist, if you don’t know where to look. There are all sorts of things to consider when you are eagerly trying to track down a rogue process on a production machine.

In this series, I will attempt to offer a broad insight into how the Process Table can be accessed by the ps command and in combination with “/proc” and “/dev” how it can help you manipulate your systems.

Legacy

There are a few legacy stumbling blocks when it comes to looking up a process on different types of Unix boxes, but thankfully we can rely on the trusty “ps” command to mitigate some of these headaches automatically.

For example, UNIX grouped the “ps” command’s parameters together and prepended a hyphen. BSD, on the other hand, also enjoyed grouping switches together but, for one reason or another, fell out with the hyphen entirely.

Throwing another spanner in the works, however, was good old GNU’s preference, in which its long options used two dashes. Now that you’ve fully committed those confusing differences to memory, let’s assume that the ps command does as much as it can by mixing and matching the aforementioned options in an attempt to keep everyone happy.
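To make that a little more concrete, here is a rough sketch of the three styles side by side. The first two each list every process on the box in their respective dialects, and the third mixes a UNIX-style switch with GNU long options to choose the output columns:

# ps -ef

# ps ax

# ps -e --format pid,ppid,cmd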

Be warned that oddities can occasionally occur, so keep an eye out for them just in case. I’ll try to offer alternative commands as we go along to act as a reminder that not all is to be taken exactly as read. For example, a very common use of the ps command is:

# ps -aux

Note, however, that this is indeed different from:

# ps aux

You might suspect, and would be forgiven for thinking as much, that this is purely to keep everyone on their toes. However, according to the ps command’s manual, this is apparently because POSIX and UNIX insist that “-aux” should cater to processes owned by a user called “x”. If I’m reading the information correctly, then when no user “x” exists, the command is interpreted as “ps aux” instead. I love the last sentence of the manual’s definition and draw your attention to it as a gentle warning: “It is fragile, subject to change, and thus should not be relied upon.”
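If what you are actually after is the process list for one particular user, it is safer to say so explicitly rather than lean on that fallback behaviour. Here is a quick sketch, using root as the example user, with the undashed BSD form alongside for comparison:

# ps -u root -f

# ps aux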

Process Tree

Enough eye strain for a moment; let’s begin by looking at the ps command and what it can help with in relation to querying the Process Table.

For starters (and I won’t be betting the ranch on this statement), it’s relatively safe to assume that upper- and lowercase mean the same thing.

If you’ve never seen the output of a Process Tree, then it might help with understanding “child” processes (and threads), which live under a “parent” process. In this case, the command is simply:

# pstree

The not-so-impossible-to-decipher output from that command is shown in Listing 1 (from a server running good, old “init” rather than “systemd”; as an aside, the init process is always Process ID (PID) number one):

init-+-auditd---{auditd}
     |-certmonger
     |-crond
     |-dbus-daemon---{dbus-daemon}
     |-hald---hald-runner-+-hald-addon-acpi
     |                    `-hald-addon-inpu
     |-apache2---8*[apache2]
     |-master-+-pickup
     |        `-qmgr
     |-6*[mingetty]
     |-oddjobd
     |-rpcbind
     |-rsyslogd---3*[{rsyslogd}]
     |-sshd---sshd---sshd---bash---pstree
     |-udevd---2*[udevd]
Listing 1: Output from the “pstree” command showing parent processes and their children.

You can make your screen output much messier by adding the “-a” switch. Doing so will add command-line arguments (pulled from the /proc filesystem in the same way that our example did earlier). This is very useful, but you might want to do something like “grep” a specific process name from the output, as follows:

# pstree -ap | grep ssh


|-sshd,29076
 |   `-sshd,32365
 |       `-sshd,32370
 |               |-grep,8401 ssh
 |   |-sssd_ssh,1143 --debug-to-files

Listing 2: The command “pstree” showing only SSH processes with command line arguments and PIDs

As you can see from Listing 2, the command we are querying with is itself shown in the output (the line starting with “grep”), so try not to let that trip you up. I’ve added the “-p” switch to display the PIDs, too.
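Incidentally, a common trick to stop grep matching its own entry is to wrap one character of the search term in square brackets; the grep process’s own command line then no longer contains the literal string being searched for:

# pstree -ap | grep '[s]sh'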

One final look at this example is shown in Listing 3. Here, the all-pervasive “-Z” switch offers us any SELinux security context associated with the parent and child detail displayed in our process table tree. That command, for reference, was:


# pstree -aZp | grep ssh


|-sshd,29076,`unconfined_u:system_r:sshd_t:s0-s0:c0.c1023'
 |   `-sshd,32365,`unconfined_u:system_r:sshd_t:s0-s0:c0.c1023'
 |       `-sshd,32370,`unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023'
 |               |-grep,8406,`unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023' ssh
 |   |-sssd_ssh,1143,`system_u:system_r:sssd_t:s0' --debug-to-files

Listing 3: The output now includes SELinux detail, command line arguments and PIDs

In this article, I provided a very brief introduction to the “ps” command. In the next several articles, I’ll show further options, examples, and details for using this powerful tool. Stay tuned.

Read Part 2 here.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

 

Give New Life to Old PCs with Linux

Do you have some old hardware collecting dust in the basement, attic, or garage? Don’t let it go to waste just because it’s not powerful enough to run modern operating systems. Linux can breathe new life into such machines. I have revived many old PCs in this way. For example, I use one as my main file server, another as a family laptop in the living room for quick browsing, and a third one as a media center in the kids’ room. Additionally, I have donated two revived laptops to a cause.

So, don’t let good hardware die of old age.

Before you start work on reviving your old PC with software alone, however, I suggest making a dime-sized investment in hardware. One thing that can vastly affect performance is RAM. Try to get at least 4GB of RAM. We will be using some 32-bit distros for old hardware, and they won’t detect anything more than 3GB anyway. So, 4GB is a great investment for a mere $20. Just make sure it’s compatible with your motherboard.

Another hardware component that makes a dramatic performance improvement is the hard drive. Trust me, those slow hard drives from the early 2000s can really affect performance. I managed to get a decent elementary OS experience from a Dell Mini 1012 that I bought about 6 years ago this way. The machine had a mere 2GB of RAM, but I replaced the hard drive with a Kingston SSD. Now elementary OS flies on it.
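If you are not sure what is already inside the box, both questions can be answered from a terminal before you spend anything. As a quick sketch, the first command below reports installed memory and the second shows each disk’s ROTA column, which is 1 for a spinning drive and 0 for an SSD:

free -h

lsblk -d -o NAME,ROTA,SIZE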

Softer Side of the Equation: The Right OS

Once you’ve made the suggested hardware upgrades, you need to find the distribution that’s right for you. There are many distros designed to run on underpowered devices. But, there is one main problem from my point of view: appearance.

Most lightweight distributions make heavy compromises in the look and feel department that discourage me from using them on my machines, because I want them to look great. Sometimes I win; sometimes I have to give up something. But in the end, it’s just another Linux distro running on my system. That’s what counts.

Elementary OS

OK, elementary OS doesn’t claim to be a lightweight distro. But with a Kingston 120GB SSD, my Dell Inspiron Mini 1012 — with Intel Atom N450, 1.66 GHz processor, and 2GB of RAM — ran phenomenally well with elementary OS. Everything, including WiFi, worked out of the box. So, if you have an old machine and you don’t want to compromise on appearances, then give elementary OS a try.

Figure 1: elementary OS Loki desktop
Just download elementary OS from the official site and write the image onto a flash drive. I do suggest waiting another week for the next release of elementary OS Loki, because it has a newer kernel and thus better hardware support.

I suggest using the simple dd command to write the iso to the USB drive:

sudo dd if=/location_of_elementaryos.iso of=/path_of_usb_flash_drive bs=1M
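Do double-check the target device before running dd, because it will happily overwrite whatever it is pointed at. Run lsblk first to identify the flash drive; in the sketch below I’m assuming the ISO sits in ~/Downloads and the drive shows up as /dev/sdb, but both are only examples and yours will likely differ:

lsblk

sudo dd if=~/Downloads/elementaryos.iso of=/dev/sdb bs=1M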

MATE

If your system can’t boot into elementary OS or you have performance issues, a MATE-based distro would be my second choice. I find Ubuntu MATE to be the best bet on such hardware. This is because Ubuntu MATE offers great out-of-the-box support for a variety of hardware. From a technical point of view, Ubuntu MATE requires a minimum of a Pentium III 750 MHz processor and at least 512MB of RAM. So, if you really can’t upgrade your hardware, then MATE offers a much more polished experience.

Download the ISO image for Ubuntu MATE from the site and write it to a USB drive with the dd command as mentioned above.

Lubuntu or Xubuntu

Xubuntu is another good “lightweight” distro, but if I had to make compromises where even MATE couldn’t run on the hardware, then I would skip Xubuntu and head straight for Lubuntu.

Lubuntu, on paper and in my experience, is far more resource efficient than Xubuntu. These distros are not as visually appealing as MATE, but when you have really old or underpowered hardware, Lubuntu will make it usable. Lubuntu was originally based on LXDE, but the LXDE project has merged with the Razor-qt project to create LXQt. So Lubuntu is going to enjoy the latest Qt technologies (it uses the latest Qt 5 and KDE Frameworks 5) with the typical lightweight experience. The best of both worlds, I would say.

Trisquel Mini

Trisquel Mini is a lightweight version of Trisquel Linux. The mini version is designed to run on underpowered devices, and it also uses LXDE to keep things light. Despite my comment on the look and feel of lightweight distros, Trisquel Mini does offer an appealing experience. So, try it if the other options fail you.

Figure 2: Trisquel Mini.

Puppy Linux

As we move down the “lightweight Linux distro” rabbit hole to find the right distro for our really, really, really old hardware, we come across an extremely light distro: Puppy Linux.

Puppy is so lightweight that it doesn’t even require your system to have a hard drive. The Puppy project goes as far as to recommend that you not install Puppy on a hard drive at all and instead boot it from a CD/DVD or USB drive. The good news is that you can save your files/work directly to the CD/DVD or USB drive and work in a purely portable manner. Just plug in your Puppy drive, do your work, and unplug. Done.

It’s Not Just About the Distro

These distros pick and choose lightweight components – from the desktop environment to the X Window System to applications – to keep things under control. So, when you use low-powered hardware, you should also choose appropriate applications. I won’t use LibreOffice on anything below elementary OS on this list. My preferred word processor would be either AbiWord or Google Docs. And, instead of using Firefox or Chromium, I would choose Midori. The point is, you should look for less resource-intensive applications for the job.
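On the Ubuntu-based distros covered above, for instance, those lighter applications can be pulled in through the usual package manager; the package names below assume the standard Ubuntu repositories:

sudo apt-get install abiword midori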

Go Arch Linux

If you want even tighter control over your low-powered hardware, I suggest going bare bones with Arch Linux and then choosing the lightest components possible for your system. Arch Linux will enable you to carefully handpick whatever component you need. Check out my Arch Linux tutorial that I update regularly for more information.

Conclusion

There is just no need to waste old or underpowered hardware. You can put these machines to good use with Linux. All you need to do is find the right distro for you.

Do We Need a More Open, Private, “Decentralized” Internet?

Is it time to rebuild the Web? That’s what Tim Berners-Lee and other Internet pioneers are now saying in response to concerns about censorship, electronic spying and excessive centralization on the Web.

Last week, Berners-Lee, the guy who played a leading role in creating the Web in 1989, held a conference with other computer scientists in San Francisco at the Decentralized Web Summit. Attendees also included the likes of Mitchell Baker, head of Mozilla, and Brewster Kahle of the Internet Archive.

Their discussions centered around making the Web “open, secure and free of censorship by distributing data, processing, and hosting across millions of computers around the world, with no centralized control,” according to the conference site.

Read more at The VAR Guy

From the Enterprise Service Bus to Microservices

Dealing with legacy is one of the most common areas of conversation we have around “cloud native” and Pivotal Cloud Foundry. I wrote up a basic framing for how to think of legacy applications last year as part of my cloud-native journey series, and in reviewing talks for the upcoming SpringOne Platform conference I’ve noticed that it’ll be one of the topics at that event this August.

Pivotal architect Rohit Kelapure has been working on this topic a lot and has written a white paper on migrating from Enterprise Service Bus (ESB)-based legacy architectures. After working with him on the paper a tad, I had a few questions whose answers I thought would be helpful to share for all those folks who ask about this topic.

Coté: Technology choices often start with the best of intentions. Few people want to make a bad system. What’s driven so many organizations to choose ESBs?

Rohit: ESBs are a response to enterprise needs around service integration, audit, transformation, business impact traceability, composability, data transformation and a central point for governance….

Read more at The New Stack

Chef’s Habitat Puts the Automation in the App

The creators of the popular system-automation framework introduce Habitat, which allows apps to do their own automation on any target platform.

The maker of the configuration management platform Chef, typically mentioned in the same breath as Puppet, Salt, and Ansible, is taking on application automation. Habitat, its new open source project, lets apps take their automation behaviors with them wherever they run.

Habitat doesn’t replace the existing galaxy of container-based application-deployment systems — Docker, Kubernetes, Rocket, Mesosphere, and so on. Instead, it addresses the issues they typically don’t.

Read more at InfoWorld

 

Putting the ‘Micro’ Into Microservices With Raspberry Pi

Microservices replace monolithic systems with distributed systems. Almost by definition, this means an explosion of moving parts. In a demo context, when everything is running on a single laptop, it’s easy to forget that a microservices architecture really is a system with lots of different things trying to communicate with one another over an unreliable network. Even in a ‘real’ system, with virtualization and containers, it’s not always obvious how much complexity is involved in the aggregated system — as long as things work well. After all, the reason the fallacies of distributed computing are known as fallacies is because they’re assumptions we all tend to make.

Read more at DZone

Kubectl vs HTTP API

One of the best things Kubernetes has is its API. However, I’ve seen a few tools that, instead of using the HTTP API, use a wrapper on kubectl. I tweeted about it and a discussion was created around the differences between kubectl and the HTTP API.

One thing that I hope is clear is that kubectl is designed to be used by people and the HTTP API is designed to be used by code. In fact, if you look at the documentation you will see that there’s a list of different APIs and kubectl is under kubectl CLI. This is the list of all the Kubernetes APIs:

  • Kubernetes API
  • Extension API
  • Autoscaling API
  • Batch API
  • kubectl CLI

So, let’s see what these differences are!
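As a small, hedged taste of that difference (assuming a cluster that kubectl can already reach), asking for the pods in the default namespace looks like this from the CLI and from the HTTP API via kubectl proxy:

kubectl get pods --namespace default

kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods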

Read more at k8s.uk

Explanation of “Everything is a File” and Types of Files in Linux

If you are new to Linux, or have used it for a few months, then you must have heard or read statements such as In Linux, everything is a File. Read Also: 5 Useful…


Ubuntu Snappy-Based Package Format Aims to Bridge Linux Divide

Could the transactional mechanism that drives Canonical’s IoT-focused Snappy Ubuntu Core help unify Linux and save it from fragmentation? Today, Canonical announced that the lightweight Snappy’s “snap” mechanism, which two months ago was extended to all Ubuntu users in Ubuntu 16.04, can also work with other Linux distributions. Snap could emerge as a universal Linux package format, enabling a single binary package “to work perfectly and securely on any Linux desktop, server, cloud or device,” says Canonical.

Snap works natively on Arch, Debian, and Fedora, in addition to Ubuntu-based distros like Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Unity, and Xubuntu. It is now being validated on CentOS, Elementary, Gentoo, Mint, openSUSE, RHEL, and OpenWrt.

Snap greatly simplifies third-party Linux app distribution, claims Canonical. ISVs can publish snaps rather than making tough decisions about which distros to support and then struggling to manage diverse package formats and security update mechanisms across multiple distributions.  

The containerized snap technology offers better security than is available with typical package formats such as .deb, says Canonical.  Snaps are isolated from one another to ensure security, and they can be updated or rolled back automatically. Each snap is confined using a range of tailored kernel isolation and security mechanisms and receives only the permissions it needs to operate.

Snaps sit alongside a Linux distro’s native packages and do not infringe on its own update mechanisms for those packages, says Canonical. The snap format is simpler than native internal package formats because it is focused only on applications rather than the core system. “Snaps are essentially self-contained zip files that can be executed very fast in place,” says the company. Stable releases, release candidates, beta versions, and daily builds of a snap can all be published simultaneously, supporting rolling releases.

Snap It Up

While the snap technology could help reduce desktop Linux app fragmentation, much of the focus is on the potentially much larger Internet of Things (IoT) market. Snap won’t solve all the interoperability challenges in Linux-based IoT, but it could go a long way toward unifying the upper application layer.

“We believe snaps address the security risks and manageability challenges associated with deploying and running multiple third party applications on a single IoT Gateway,” stated Jason Shepherd, Director, IoT Strategy and Partnerships, Dell.

Significantly, Samsung has endorsed the snap technology for its Artik embedded boards, which already support Fedora.

Snaps can be based on existing distribution packages, but “are more commonly built from source for optimization and size efficiency,” says Canonical. Snaps are based on snapd, a free software project on GitHub, and snap packages are built using a “snapcraft” tool. A snapcraft.io project site has been established with documentation and step-by-step guides.
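From a user’s point of view, the day-to-day workflow boils down to a handful of commands. The following is just a sketch on a system where snapd is already installed, using Canonical’s public hello-world demo snap:

sudo snap install hello-world

snap list

sudo snap refresh hello-world

sudo snap revert hello-world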

Mark Shuttleworth, founder of Canonical.
The press call was led by Ubuntu creator Mark Shuttleworth, and included reps from Samsung, Dell, and app vendor Mycroft, which is using snap for a voice-controlled smart-home IoT platform. Missing were leaders of other major Linux projects, but testimonials were provided by key contributors to projects such as Arch, Debian, and OpenWrt. There were other testimonials from ISVs such as Mozilla and the Krita Foundation, which is releasing Krita 3.0 in the snap format.

According to Shuttleworth, the “stunning” and “surprising” emergence of snap as a universal package format was not even on his roadmap a few months ago. He said that when he told ISVs that Canonical was extending snap to classic Ubuntu, the response was overwhelming.

Shuttleworth conceded that there are other universal open source packaging solutions available, such as AppImage and the newer Flatpak, but argued that most lack the security and/or transactional nature of snap. “The snap mechanism has sophisticated capabilities in the way it delivers updated versions,” he said. “Snaps are perfectly transactional.”

In response to questions, Shuttleworth said that he could see no reason why the snap mechanism could not be extended to Android. He also said that there was considerable interest among software defined radio (SDR) developers, following the lead of Lime Microsystems’ Snappy Ubuntu Core-based LimeSDR. Other notable Ubuntu Snappy supporters have included Acer, GE, and Microsoft, to name a few.

 

 

Open Source as Part of Your Software Delivery Toolchain in the Enterprise: Perspectives for CIOs

A myriad of point-tools are involved in every organization’s software production. Some of our enterprise customers report using over 50 tools along their pipeline – from code development all the way to releasing into Production. For the majority of development organizations today, these tools are comprised of a mix of commercial and Open Source (OSS) technologies.

Existing open source tools can be found throughout your software Dev and Ops teams – from programming languages, infrastructure and technology stacks, development and test tools, project management and bug tracking, source control management, CI, configuration management, and more – OSS is everywhere.

The proliferation of OSS technologies, libraries and frameworks in recent years has greatly contributed to the advancement of software development, increased developer productivity, and to the flexibility and customizability of the tools landscape to support different use cases and developers’ preferences.

To increase productivity and encourage a culture of autonomy and shared ownership – you want to enable teams to use their tool(s) of choice. That being said, since the advent of Agile development, we see large enterprises wrestle with striking a balance to allow this choice while also retaining a level of management visibility and governance over all the technologies used in the software delivery lifecycle. And this problem gets harder over time – because with every passing day new tools are being created and adopted to solve increasingly fine-grained problems in a unique and valuable way.

Enterprises operating mission-critical applications need this level of control, not only to lower costs (with improved utilization of tools, infrastructure, etc.) or speed cycle times (with streamlined or standardized processes), but – more importantly – as a way to ensure operability, compliance and SLAs.

The tools you’re using can be free, and your process can be faster. But, at the end of the day, no savings on the development side would justify the risks if you’re having trouble managing your applications in Production, or if you’re exposed from a security or regulatory standpoint.

I’d like to address two of the key challenges software executives face with regards to the use of OSS as part of the software development and release process, and how you can address them when adopting OSS, while mitigating possible risks.

Enabling Developers while Ensuring System-Level Management

The realities of software production in large enterprises involve a complex matrix of hundreds or thousands of inter-connected projects, applications, teams and infrastructure nodes. All of them use different OSS tools and work processes – creating management, visibility, scalability and interoperability challenges.

The multitude of point-tools involved also creates a problem of silos of automation. In this situation – each part of the work along the pipeline is carried out by a different tool, and the output of this work has to be exported, analyzed and handed-off to a different team and tool(s) for the next stage in the pipeline. These manual, error-prone handoffs are one of the biggest impediments to enterprise DevOps productivity – they not only slow down your process, but they also introduce risk and increase management overhead.

The fact that your process involves a lot of “best for the task” tools is pretty much a fact of life by now – and with (mostly) good reason. But these silos of automation do not have to be.

Enterprise DevOps initiatives require a unifying approach that coordinates, automates, and manages a disparate set of dozens of tools and processes across the organization. While you want to allow your developers to use the tools they’re used to, you also want to be able to manage the entire end-to-end process of software delivery, maintain the flexibility to include new tools as they are needed, and optimize the whole process across many teams and projects throughout the organization.

This is why enterprises today are opting to integrate their toolchains into an end-to-end DevOps Release Automation platform. To accelerate your pipeline and support better manageability of the entire process, you want a platform that can serve as a layer above (or below) any infrastructure or specific tools/technology and enable centralized management and orchestration of all your tools, environments and apps. This allows for the flexibility to manage the unique tool set of each team has today (or adopts tomorrow), while also tying all the tools together to eliminate silos of automation and provide cross-organization visibility, compliance and control.

Security Risks and Open Source:

Open source is not only prevalent in your toolchain, it’s also in your code and in your infrastructure. Many applications today incorporate OSS components and libraries, or rely on OSS technology stacks. Some estimate that more than a third of software code uses open source components, with some applications relying on as much as 70 percent open source code. As OSS use increases, so do the potential security vulnerabilities and breaches (think Heartbleed, Shellshock and POODLE).

Commercial software is just as likely to include security bugs as OSS code. To mitigate these risks, you need to ensure you have the infrastructure in place to react and fix things quickly to resolve or patch any vulnerability that might come up.

By orchestrating all the tools and automating your end-to-end processes across Dev and Ops, a DevOps Release Automation platform also accelerates your lead time in these cases – so that you can develop, test, and deploy your update more quickly.

In addition, the historical tracking and easy visibility provided by some of these solutions into the state of all your applications, environments, and pipeline stages greatly simplifies your response. When you can easily identify which version of the application is deployed on which environment, and where the compromised bits are located, you can more quickly roll out your update in a faster, more consistent, and repeatable deployment process.

In conclusion:

When managing IT organizations and steering digital transformation in the enterprise, technology leaders need to support proper use of both OSS and commercial technologies as part of their toolchain, while putting the right systems in place to enable enterprise-scale governance and security.

How do you know where OSS technologies are being used in your process, and if there are any inherent risks or major inefficiencies that need to be addressed as a result? Before you can start optimizing, you have to know exactly what your application lifecycle looks like. This holistic process is sometimes hard to encapsulate in large and complex organizations. I often see different stakeholders understanding only a fraction of the overall process, but lacking knowledge of the entire cross-organizational “pathway to production.” CIOs need to work with their teams to capture the end-to-end pipeline and toolchain, from code-commit all the way to production. This mapping is critical to finding the bottlenecks, breakages and inefficiencies that need to be addressed.

Then, work with your teams to pick the tools (whether they be OSS or not) that work best for the problem that you are trying to solve. Consider how you can orchestrate all these tools as part of a centralized platform. By being able to manage, track and provide visibility into all the tools, tasks, environments and data flow across your delivery pipeline, end-to-end DevOps automation supports extensibility and flexibility for different teams, while enabling system-level view and cross-organizational management for complex enterprise pipelines.

Along with cultural change, breaking the “silos of automation” goes a long way towards effectively breaking the silos between Dev and Ops, and unifying your processes towards one – shared – business goal: the faster delivery of valuable software to your end users.

 

This article first appeared on CIO Review magazine.