
Why and How to Set an Open Source Strategy

Open source projects are generally started as a way to scratch one’s own itch, and frankly that’s one of their greatest attributes. Getting code down provides a tangible way to express an idea, showcase a need, and solve a problem. It avoids overthinking and getting a project stuck in analysis paralysis, letting the project pragmatically solve the problem at hand.

Next, a project starts to scale up and attracts many varied users and contributors, with plenty of opinions along the way. That leads to the next big challenge: how does a project start to build a strategic vision? In this article, I’ll describe how to walk through, measure, and define strategies collaboratively, within a community.

Read more at The Linux Foundation

Linux Totally Dominates Supercomputers

Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today it finally happened: All 500 of the world’s fastest supercomputers are running Linux.

The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. …

When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn’t even adopted Tux as its mascot yet. It didn’t take long for Linux to start its march on supercomputing.

Read more at ZDNet

How to Monitor the SRE Golden Signals

Site Reliability Engineering (SRE) and related concepts have become very popular lately, in part due to the famous Google SRE book and to others talking about the “Golden Signals” that you should be monitoring to keep your systems fast and reliable as they scale.

Everyone seems to agree these signals are important, but how do you actually monitor them? No one seems to talk much about this.

These signals are much harder to collect than traditional CPU or RAM metrics, because each service and resource has different metrics, definitions, and, above all, different tools required to gather them. …

This series of articles will walk through the signals and practical methods for a number of common services. First, we’ll talk briefly about the signals themselves, then a bit about how you can use them in your monitoring system.
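To give a flavor of how simple a starting point can be, here is a rough sketch of spot-checking two of the commonly cited signals, latency and errors, for a single HTTP endpoint using nothing more than curl; the URL is a placeholder, and in practice you would feed such measurements into a proper monitoring system rather than a terminal:

$ curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' https://example.com/

The status code gives you a crude error signal and time_total a crude latency signal; the series goes on to cover how to gather these properly for a number of common services.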

Read more at Dev.to

Three Steps to Blend Cloud and Edge Computing on IoT

For years, companies have relied on systems that compute and control from a relatively central location. Even cloud-based systems rely on a single set of software components that churn through data, gather results and serve them back.

The internet of things changes that dynamic. Suddenly, thousands of devices are sharing data, talking to other systems and offering control to thousands of endpoints.

As these networks evolve, they run into new problems created by now-popular computing trends. With big data and smarter networks (mesh networking, IoT, and low-power networking and computing), the older systems cannot handle the influx of information they helped create.

The answer to these problems is a blend of cloud storage and edge computing. To take advantage of both technologies, however, IT professionals must understand how they operate.

Read more at TechTarget

How to Install Firefox Quantum in Linux

Firefox 57 has finally been officially released for all major operating systems: Linux (32/64-bit), macOS, Windows, and Android. The binary packages are now available for download for Linux (POSIX) systems; grab the one you need and enjoy browsing with the newly added features.
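For example, on a 64-bit Linux system you might install the downloaded tarball system-wide along these lines (the exact archive name depends on the version and language you grab, so treat this as a sketch rather than a recipe):

# tar xjf firefox-57.0.tar.bz2 -C /opt/
# ln -sf /opt/firefox/firefox /usr/local/bin/firefox

After that, running firefox from a terminal or a desktop launcher starts the new browser.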

What’s new in Firefox 57

The major new release comes with the following features:

  • A new look, thanks to a new theme, a new Firefox logo, and a new ‘New Tab’ page.
  • A multi-core Rendering Engine that’s GPU efficient.
  • New Add-ons designed for the modern web.
  • Faster page load times with less RAM (according to Mozilla developers, pages should load about twice as fast).
  • Efficient memory management.

The new Firefox also adds a lot of interesting new features on Android. So don’t wait; grab the latest Firefox for Android from the Google Play Store and have fun.

Read more at Tecmint

3 Open Source Alternatives to ArcGIS Desktop

Looking to create a great looking map or perform analysis on geospatial data? Look no further than these open source desktop GIS tools.

If you’ve ever worked with geographic data on the desktop, chances are that you used Esri’s ArcGIS application in at least part of your work. ArcGIS is an incredibly powerful tool, but unfortunately, it’s a proprietary product that is designed for Windows. Linux and Mac users are out of luck unless they want to run ArcGIS in a virtualized environment, and even then, they’re still using a closed source product that can be very expensive to license. While their flagship product is closed source, I would be remiss not to note that Esri has made numerous contributions to the open source community.

Fortunately, GIS users have several open source tools for designing maps and working with spatial data, all available under free and open source licenses and running on a variety of non-Windows operating systems. Let’s take a look at some of the options.

Read more at OpenSource.com

Finding Files with mlocate: Part 2

In the previous article, we discussed some ways to find a specific file out of the thousands that may be present on your filesystems and introduced the locate tool for the job. Here we explain how the important updatedb tool can help.

Well Situated

Incidentally, you might get a little perplexed when trying to look up the manual pages for updatedb and the locate command. Even though the package is actually called mlocate and the binary is /usr/bin/updatedb on my filesystem, you probably want to use some variation of the following man commands to find what you’re looking for:

# man locate
# man updatedb
# man updatedb.conf

Let’s look at the important updatedb command in a little more detail now. It’s worth mentioning that, after installing the locate utility, you will need to initialize your file-list database before doing anything else. You have to do this as the “root” user in order to reach all the relevant areas of your filesystems or the locate command will complain. Initialize or update your database file, whenever you like, with this command:

# updatedb

Obviously, the first time this command is run it may take a little while to complete, but when I’ve installed the locate command afresh I’ve almost always been pleasantly surprised at how quickly it finishes. After a hop, a skip, and a jump, you can immediately query your file database. However, let’s wait a moment before doing that.
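If you are curious exactly how long that initial scan takes on your system, a simple sketch is to wrap the command with the shell’s time keyword:

# time updatedb

On a freshly installed machine with a modest number of files, it usually finishes surprisingly quickly, which backs up the point above.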

We’re dutifully informed by its manual that the database created as a result of running updatedb resides at the following location: /var/lib/mlocate/mlocate.db.
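If you want to check how much space that file list consumes, you can inspect the database file directly or, on versions of mlocate that support the option, ask locate itself for statistics:

# ls -lh /var/lib/mlocate/mlocate.db
# locate -S

The -S (or --statistics) switch reports how many directories and files are held in the database and how much space they occupy.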

If you want to change how updatedb runs, then you do so via its config file; as a reminder, it should live here: /etc/updatedb.conf. Listing 1 shows its contents on my system:

PRUNE_BIND_MOUNTS = "yes"
PRUNEFS = "9p afs anon_inodefs auto autofs bdev binfmt_misc cgroup cifs coda configfs cpuset debugfs devpts ecryptfs exofs fuse fusectl gfs gfs2 hugetlbfs inotifyfs iso9660 jffs2 lustre mqueue ncpfs nfs nfs4 nfsd pipefs proc ramfs rootfs rpc_pipefs securityfs selinuxfs sfs sockfs sysfs tmpfs ubifs udf usbfs"
PRUNENAMES = ".git .hg .svn"
PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/cache/ccache /var/spool/cups /var/spool/squid /var/tmp"

Listing 1: The innards of the file /etc/updatedb.conf, which affects how our database is created.

The first thing my eye is drawn to is the PRUNENAMES option. As you can see, by stringing together a list of directory names, delimited with spaces, you can have them suitably ignored. One caveat is that only directory names can be skipped, and you can’t use wildcards. Skipping all of the otherwise-hidden files in a Git repository (the .git directory) is one example of putting this option to good use.
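As a purely hypothetical sketch, if you also wanted to skip Bazaar metadata and Node.js dependency trees, you could extend the line as follows; the .bzr and node_modules entries are my own illustrative additions, not part of the stock config, and the trailing comment assumes your version of updatedb.conf accepts # comments:

PRUNENAMES = ".git .hg .svn .bzr node_modules"  # .bzr and node_modules are hypothetical additions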

If you need to be more specific then, again using spaces to separate your entries, you can use the PRUNEPATHS option to ignore certain paths. Imagine, for example, that you’re generating a whole host of temporary files overnight which are only valid for one day. You’re aware that this is a special directory of sorts which employs a familiar naming convention for its thousands of files. It would take updatedb a relatively long time to process those subtle changes every night, adding unnecessary stress to your system. The solution, of course, is simply to add that directory to your faithful “ignore” list.
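Continuing that hypothetical scenario, suppose those overnight files land under /srv/nightly-scratch (an invented path used purely for illustration); you could append it to the existing entry like this, again assuming your updatedb.conf version accepts trailing # comments:

PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/cache/ccache /var/spool/cups /var/spool/squid /var/tmp /srv/nightly-scratch"  # /srv/nightly-scratch is a made-up example path

The change takes effect the next time updatedb runs, whether you run it by hand or leave it to the daily cron job your distribution may have installed.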

Well Appointed

As seen in Listing 2, the file /etc/mtab offers not just a list of the more familiar filesystems such as /dev/sda1 but also a number of others that you may not immediately remember.

/dev/sda1 /boot ext4 rw,noexec,nosuid,nodev 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/tmp /var/tmp none rw,noexec,nosuid,nodev,bind 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0

Listing 2: A mashed up example of the innards of the file /etc/mtab.

Some of the filesystems shown in Listing 2 contain ephemeral content, and some belong to pseudo-filesystems, so it is clearly important to ignore their files, if for no other reason than the stress added to your system during each overnight update.

In Listing 1, the PRUNEFS option takes care of this, ditching the filesystems that are not suitable in most cases. As you can see, there are quite a few to consider:

PRUNEFS = "9p afs anon_inodefs auto autofs bdev binfmt_misc cgroup cifs coda configfs cpuset debugfs devpts ecryptfs exofs fuse fusectl gfs gfs2 hugetlbfs inotifyfs iso9660 jffs2 lustre mqueue ncpfs nfs nfs4 nfsd pipefs proc ramfs rootfs rpc_pipefs securityfs selinuxfs sfs sockfs sysfs tmpfs ubifs udf usbfs"

The updatedb.conf manual succinctly informs us of the following information in relation to the PRUNE_BIND_MOUNTS option:

“If PRUNE_BIND_MOUNTS is 1 or yes, bind mounts are not scanned by updatedb(8).  All file systems mounted in the subtree of a bind mount are skipped as well, even if they are not bind mounts.  As an exception, bind mounts of a directory on itself are not skipped.”
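To make that concrete, here is a hypothetical bind mount; both paths are invented for illustration:

# mount --bind /home/chrisbinnie/projects /srv/projects

With PRUNE_BIND_MOUNTS set to “yes”, updatedb indexes the files once under /home/chrisbinnie/projects and skips the duplicate view at /srv/projects, so locate won’t return every file twice.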

Assuming that makes sense, before moving on to some locate command examples, you should note one thing: some versions of the updatedb command can also be told to ignore certain “non-directory files.” However, this does not always apply, so don’t blindly copy and paste config between versions if you use such an option.

In Need of Modernization

As mentioned earlier, there are times when finding a specific file needs to be so quick that it’s at your fingertips before you’ve consciously recalled the command. This is the irrefutable beauty of the locate command.

And, if you’ve ever sat in front of a horrendously slow Windows machine watching the hard disk light flash manically as if it were suffering a conniption due to the indexing service running, then I can assure you that the performance that you’ll receive from the updatedb command will be a welcome relief.

You should bear in mind that, unlike with the find command, there’s no need to remember the base paths where your file might be residing. By that I mean that all of your (hopefully) relevant filesystems are immediately accessed with one simple command, and remembering paths is almost a thing of the past.

In its most simple form, the locate command looks like this:

# locate chrisbinnie.pdf

There’s also no need to escape hidden files that start with a dot or indeed expand a search with an asterisk:

# locate .bash

Listing 3 shows us what has been returned, in an instant, from the many partitions the clever locate command has scanned previously.

/etc/bash_completion.d/yum.bash
/etc/skel/.bash_logout
/etc/skel/.bash_profile
/etc/skel/.bashrc
/home/chrisbinnie/.bash_history
/home/chrisbinnie/.bash_logout
/home/chrisbinnie/.bash_profile
/home/chrisbinnie/.bashrc
/usr/share/doc/git-1.5.1/contrib/completion/git-completion.bash
/usr/share/doc/util-linux-ng-2.16.1/getopt-parse.bash
/usr/share/doc/util-linux-ng-2.16.1/getopt-test.bash

Listing 3: The search results from running the command: “locate .bash”

I suspect that the following usage has altered slightly since back in the day, when the slocate command (or possibly the original locate command) was more popular, but you can receive different results by adding an asterisk to that query, like so:

# locate .bash*

In Listing 4, you can see the difference from Listing 3. Thankfully, the results make more sense now that we can see them together. In this case, the addition of the asterisk is asking the locate command to return files beginning with .bash as opposed to all files containing that string of characters.

/etc/skel/.bash_logout
/etc/skel/.bash_profile
/etc/skel/.bashrc
/home/d609288/.bash_history
/home/d609288/.bash_logout
/home/d609288/.bash_profile
/home/d609288/.bashrc

Listing 4: The search results from running the command: “locate .bash*” with the addition of an asterisk.

Stay tuned for next time when we learn more about the amazing simplicity of using the locate command on a day-to-day basis.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).

What Is OpenHPC?

High performance computing (HPC)—the aggregation of computers into clusters to increase computing speed and power—relies heavily on the software that connects and manages the various nodes in the cluster. Linux is the dominant HPC operating system, and many HPC sites expand upon the operating system’s capabilities with different scientific applications, libraries, and other tools.

As HPC developed, it became apparent that there was considerable duplication and redundancy among the HPC sites compiling HPC software, and dependencies between the different software components sometimes made installations cumbersome. The OpenHPC project was created in response to these issues. OpenHPC is a community-based effort to solve common tasks in HPC environments by providing documentation and building blocks that can be combined by HPC sites according to their needs.

Read more at OpenSource.com

How Enterprise IT Uses Kubernetes to Tame Container Complexity

Running a few standalone containers for development purposes won’t rob your IT team of time or patience: A standards-based container runtime by itself will do the job. But once you scale to a production environment and multiple applications spanning many containers, it’s clear that you need a way to coordinate those containers to deliver the individual services. As containers accumulate, complexity grows. Eventually, you need to take a step back and group containers along with the coordinated services they need, such as networking, security, and telemetry.

That’s why technologies like the open source Kubernetes project are such a big part of the container scene.

Kubernetes automates and orchestrates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. 
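As a minimal sketch of what that orchestration looks like in practice (this assumes a working cluster and kubectl; the deployment name web and the nginx image are purely illustrative), a handful of commands replace a pile of manual container wrangling:

$ kubectl create deployment web --image=nginx
$ kubectl scale deployment web --replicas=3
$ kubectl expose deployment web --port=80

Kubernetes then keeps the three replicas running, replaces them if they fail, and puts a service in front of them to route traffic.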

Read more at EnterprisersProject

DevOps, Agile, and Continuous Delivery: What IT Leaders Need to Know

Enterprises across the globe have implemented the Agile methodology of software development and reaped its benefits in shorter development times. Agile has also helped streamline processes in multilevel software development teams, and the methodology builds in feedback loops that drive the pace of innovation. Over time, DevOps and continuous delivery have emerged as more holistic, upgraded approaches to managing the software development life cycle (SDLC), with a view to improving speed to market, reducing errors, and enhancing quality.

In this guide, we will talk more about Agile, how it grew, and how it extended into DevOps and, ultimately, continuous development.

Read more at TechGenix