
Decreasing Vulnerabilities Seen in Red Hat Linux

As usual there’s been a flurry of activity in the cloud and DevOps security space recently. In case you missed it, a particularly painful flaw was found in Red Hat Enterprise Linux’s DHCP (Dynamic Host Configuration Protocol) service not long ago.

The bug specifically affects RHEL 6 and 7 (and, apparently, Fedora 28 as well). The CVE folks allocated the following ID to it in case you want to look it up: CVE-2018-1111. What’s important to note about this discovery is that DHCP (the service which asks a DHCP server for a (usually) temporary IP address and then binds it to one of the host’s network interfaces) is a sometimes forgotten cornerstone of our current tech stacks. Amazon’s EC2, for example, shouts out to a DHCP server whenever an instance is spun up. As well as asking for an IP address, your servers will usually pick up DNS server details from DHCP responses, too.

A descendant of BOOTP, a similar service from a time gone by, the pervasive DHCP protocol is commonly used on your home network, your mobile network and beyond. According to Red Hat, the bug affects “dhclient” in tandem with the “NetworkManager” daemon, and means that “A malicious DHCP server, or an attacker on the local network able to spoof DHCP responses, could use this flaw to execute arbitrary commands with root privileges on systems using NetworkManager and configured to obtain network configuration using the DHCP protocol.”

At first glance, this vulnerability might make the RHEL naysayers complain that there’s yet another security issue that only affects Red Hat and not other Linux flavors. And, that must therefore mean that the other distributions are better at securing their packages. However, they couldn’t be more wrong.

The commercial model that Red Hat Inc. offers is based around supporting enterprises with its products, on a paid-for basis, along with some consultancy on top for good measure. The company has been very successful, and its products are now in use globally across many mission-critical open-source server estates. Why is this relevant? Well, aside from the fact that the (free) CentOS Linux flavour benefits from the downstream improvements made by Red Hat, so does the community as a whole.

I normally find that it’s hard to know who to believe when a lofty claim is made in the relatively congested Internet giants’ space. However, a report published in November 2017, called “The State of Open Source Security”, shows some evidence that Red Hat’s Linux might be ruling the roost for security currently. Obviously, I can’t make any guarantees of the report’s impartiality.

Commissioned by Snyk, the report states: “Open source library vulnerabilities increased by 53.8% in 2016, while Red Hat Linux vulnerabilities have decreased.” The report is well-constructed and easy to digest and, as a plumb line to what’s going on with security on the Internet in general, it’s a welcome read. It states that there’s been a “65% decrease in Red Hat vulnerabilities since 2012” and in addition to that: “In 2016, 69% of Red Hat Linux vulnerabilities were fixed within a day of their public disclosure, and 90% were fixed within 14 days of their public disclosure”.

The report continues: “Red Hat Linux seems to be finding some level of stability” and “…it does make us optimistic that better security is achievable with a little bit of work”.

The truth, of course, is that every device or service has a vulnerability of some description or another, and, as the report states, “there are a lot of steps involved in the lifecycle of an open source security vulnerability. From discovery through final adoption of fixes, each part of the process is important in its own way, and ultimately plays a role in the overall state of security.” Code auditing is key as well as timely response to vulnerabilities. Check out the report to learn more.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords. In the book, he also shows you how to make your servers invisible, perform penetration testing, and mitigate unwelcome attacks. You can find out more about DevOps, DevSecOps, Containers, and Linux security on his website: https://www.devsecops.cc.

Over 20,000 Container Management Dashboards Are Exposed on the Internet

Even though it’s highly discouraged to expose any kind of management dashboard directly to the internet, there are many users who continue to ignore this recommendation, and it seems that infrastructure admins are no exception.

A recent study by cloud security firm Lacework found over 22,000 publicly exposed container orchestration and API management systems, about 300 of which could be accessed without any credentials and gave attackers full control or remote code execution capability on containers.

Kubernetes dashboards accounted for over 75 percent of the exposed interfaces, followed by Docker Swarm (Portainer and Swarmpit) with 20 percent, Swagger API UI with 3 percent and Mesos Marathon and Red Hat OpenShift with under 1 percent each.

Read more at The New Stack

Open Source at 20: The Ubiquity of Shared Code

“Why is open source important? That’s like asking why is gravity important,” stated Brian Behlendorf, a leading figure in the open-source software movement, and executive director for the blockchain consortium Hyperledger.

While this year marks the 20th anniversary of open source, it is hard to imagine a time before open-source software. Today, it’s difficult to find a solution or piece of software that was created without some open-source components….

Behlendorf recalls first being attracted to the open-source space because he didn’t really trust his own coding abilities, and the idea that there were other developers out there willing to read his code and help him fix it was a “godsend.” “For many people who became programmers after the ‘90s, working publicly, pulling down open-source code, sharing improvements back if you made any or even taking your work and releasing it publicly became the default. It was what was expected,” he said.

However, being able to share and collaborate openly on software wasn’t always possible. OSI’s vice president VM Brasseur describes the early days as the “wild west” where programmers were building tools for programmers, driven by the philosophical belief that “sharing is good and right.”

Read more at SDTimes

​The Linux Mint Desktop Continues to Lead the Rest

You can keep your Windows 10 and macOS High Sierra. For me, Linux Mint 19 is not only the best Linux desktop, it’s the best desktop. Period. End of statement.

Why? Because its traditional windows, icons, menus, and pointers (WIMP) interface is a true pleasure to use. It’s even more fun now, because with Mint 19 and its default Cinnamon 3.8 desktop it’s faster and snappier than ever. The Cinnamon desktop, which uses the GTK 3.22 toolkit, also has a smoother look than previous versions.

Linux, as ever, is more secure out of the box than macOS or Windows, even after you strap on all the anti-virus software. On my Mac and Windows boxes, security is an almost daily job of patches and fixes. On my Mint desktops? I don’t even worry about it, and I have never had a single security problem. I wish I could say the same about the competing operating systems.

Read more at ZDNet

How to Plan Projects in the Open, Without the Stress

How can you define project requirements, engage multiple stakeholders, delegate work, and track progress—all in the open? Here’s one tried-and-true method.

Consider these questions:

  • How do you gather and understand requirements?
  • How do you track those requirements?
  • How do you map them into real work that someone can deliver?
  • How do you clarify when the work should be completed, what success looks like, and who is involved?
  • How do you track all this without bogging down your execs (or your teams) with too much detail?

Watch the video at OpenSource.com to learn more.

Big O Notation? More like Big O-M-G

There are a lot of ways to write algorithms for a specific task. Since there are so many variations, how does someone know which algorithm to use in which situation? One way to look at the problem is to ask yourself: what limitations are present and how can I check if the algorithm fits within these constraints? Big O is used to help identify which algorithms fit inside these constraints and limitations.

Big O notation is a mathematical representation of how an algorithm scales with an increasing number of inputs. In other words: the length of time it takes for a certain algorithm to run or how much space it will consume during computations.

By using Big O notation, developers and engineers can construct the best algorithm that fits within their constraints.

How does Big O work?

So, Big O is used to describe the trend of an algorithm, but what does the language surrounding Big O sound like? In conversation, you would talk about Big O like this: “The time complexity of [a certain algorithm] is O(n)”, pronounced “O of n” (both “O” and “n” are pronounced like the letters). The reason I am using the phrase time complexity is that Big O notations are most commonly referred to as time complexities. In Big O, “n” refers to the number of inputs. There is another side to Big O called space complexity, but I’ll talk about that later in the blog.

Now that you know how to talk about Big O and what it’s used for, let’s look at some examples to help you understand how Big O is calculated. The following examples will go over some of the more common Big O notations you will see (in order): O(n), O(log(n)), O(n^2), O(1).


Time complexity: O(n)


“Could you not?”

Let’s say that someone stole the last cookie out of your cookie jar (“how rude!”), and you really wanted to eat that cookie, so now you have to go out and find it. One way you could go about finding the cookie is to ask your 5 friends, one by one, if they took it. We could represent this as a function in Python3:

>>> def who_has_my_cookie(my_friends):
...     for friend in my_friends:
...         if friend.get('has_cookie'):
...             return friend.get('name')

In this function, you are going to go ask each of your friends if they have your cookie. In this case, the length of my_friends is your n. This function’s run time is dependent on how many friends were present during the abduction. For that reason, this function would have a time complexity of O(n).

Now let’s feed the function some input, and see who took the cookie:

>>> my_friends = [
...     {'name':'Kristen', 'has_cookie':False},
...     {'name':'Robert', 'has_cookie':False},
...     {'name':'Jesse', 'has_cookie':False},
...     {'name':'Naomi', 'has_cookie':False},
...     {'name':'Thomas', 'has_cookie':True}
... ]
>>> cookie_taker = who_has_my_cookie(my_friends)
>>> print(cookie_taker)
Thomas

You might wonder, “But what if the first person you ask has the cookie?” Great question! That particular run would finish after a single check, the best case, also known as constant time, or O(1). Even so, the time complexity for this function would still be O(n).

Why is the correct notation O(n)? In most cases, you won’t get the best case scenario, i.e. finding the cookie with the first person you ask. Big O describes how the function behaves as inputs grow, so it’s best to think of the worst case scenario when dealing with algorithms.
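To make the best case versus worst case distinction concrete, here is a small sketch of my own (not from the original article) that counts how many friends get asked before the cookie turns up:

```python
def comparisons_until_found(friends):
    # Count how many friends we ask before the linear search stops.
    count = 0
    for friend in friends:
        count += 1
        if friend['has_cookie']:
            return count

best_case = [{'has_cookie': True}] + [{'has_cookie': False}] * 4
worst_case = [{'has_cookie': False}] * 4 + [{'has_cookie': True}]

print(comparisons_until_found(best_case))   # 1 question asked: the best case
print(comparisons_until_found(worst_case))  # 5 questions asked: the worst case
```

Big O reports the worst of those two numbers, which is why the function is still labeled O(n).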

Summary of O(n): A loop that could iterate as many times as there are inputs. In the case above, the loop went through 5 iterations, one per friend in my_friends, before finding who stole your cookie.


Time Complexity: O(log(n))

In this next scenario, let’s say that you have all of your friends line up side by side. You then find the middle person and ask them whether the cookie is on their left side or their right side. You then find the middle person on that side and ask them the same thing, using the previous middle person as a boundary to search ever smaller portions of the line. This keeps happening until the person who has the cookie is found. This can be written in Python3:

>>> def search_for_cookie(my_friends, first, last):
...     middle = (last - first) // 2 + first
...     middle_response = my_friends[middle].get('where_is_cookie')
...     if middle_response == 'I have it':
...         return my_friends[middle].get('name')
...     elif middle_response == 'to the right':
...         # Search on the right half
...         return search_for_cookie(my_friends, middle + 1, last)
...     else:
...         # Search on the left half
...         return search_for_cookie(my_friends, first, middle - 1)

In this function, first refers to the first person in the section being looked at — the first time, the section you’ll be looking at is the whole line. last refers to the last person in the section being looked at and middle refers to the person in the middle in between first and last.

This function keeps going until the person who has your cookie has been singled out. In the worst case scenario, the function wouldn’t find out who has the cookie until the final halving. Because each call cuts the remaining search space in half, the function has a time complexity of O(log(n)).

Let’s feed this function some input and see who took your cookie:

>>> my_friends = [
...     {'name':'David', 'where_is_cookie':'to the right'},
...     {'name':'Jacob', 'where_is_cookie':'to the right'},
...     {'name':'Julien', 'where_is_cookie':'to the right'},
...     {'name':'Dimitrios', 'where_is_cookie':'I have it'},
...     {'name':'Sue', 'where_is_cookie':'to the left'}
... ]
>>> cookie_taker = search_for_cookie(my_friends, 0, len(my_friends) - 1)
>>> print(cookie_taker)
Dimitrios

Summary of O(log(n)): When a function cuts the section it’s looking at in half every iteration, you can suspect an O(log(n)) algorithm. Spotting this kind of behavior, or an index that doubles (i = i * 2) each iteration, makes for simple diagnosis. This way of narrowing down who has the cookie is exactly what a binary search does, and the same halving idea sits at the heart of divide-and-conquer algorithms like quick sort and merge sort.
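If you want to convince yourself of that halving behavior, here is a quick sketch (the function name is my own invention) that counts how many times a line of n friends can be cut in half:

```python
def halving_steps(n):
    # Count how many halvings it takes to shrink a group of n
    # candidates down to a single person.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(8))     # 3 halvings for 8 friends
print(halving_steps(1024))  # 10 halvings for 1024 friends
```

Doubling the length of the line only adds one more question, which is what makes O(log(n)) algorithms scale so well.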


Time Complexity: O(n^2)

In this next scenario, let’s say that only one person knows who took your cookie, and only by name. The person who took the cookie won’t fess up to it either, so you have to ask each friend, in turn, about every other friend by name. This can be written in Python3:

>>> def who_has_my_cookie(my_friends):
...     for friend in my_friends:
...         for their_friend in friend.get('friends'):
...             if their_friend.get('has_cookie'):
...                 return their_friend.get('name')

If you imagine yourself in this situation, you can see that it’s going to take a lot longer than any of the other situations, because you need to recite everyone’s name each time you move on to the next friend. Worst case scenario, you need to cycle through each friend and ask them about every other friend. If you look at the loops, you can see that the outer loop has a time complexity of O(n), and the inner loop also has a time complexity of O(n). This means the function has a time complexity of O(n^2), because you multiply the time complexities together: O(n * n).

Let’s feed this function some input and see who took your cookie:

>>> my_friends = [
...     {'name':'Steven', 'friends':[
...         {'name':'Steven', 'has_cookie':False},
...         {'name':'Binita', 'has_cookie':False},
...         {'name':'Linds', 'has_cookie':False},
...         {'name':'Raman', 'has_cookie':False},
...         {'name':'Elaine', 'has_cookie':False}
...         ]
...     },
...     {'name':'Binita', 'friends':[
...         {'name':'Steven', 'has_cookie':False},
...         {'name':'Binita', 'has_cookie':False},
...         {'name':'Linds', 'has_cookie':False},
...         {'name':'Raman', 'has_cookie':False},
...         {'name':'Elaine', 'has_cookie':False}
...         ]
...     },
...     {'name':'Linds', 'friends':[
...         {'name':'Steven', 'has_cookie':False},
...         {'name':'Binita', 'has_cookie':False},
...         {'name':'Linds', 'has_cookie':False},
...         {'name':'Raman', 'has_cookie':False},
...         {'name':'Elaine', 'has_cookie':False}
...         ]
...     },
...     {'name':'Raman', 'friends':[
...         {'name':'Steven', 'has_cookie':False},
...         {'name':'Binita', 'has_cookie':False},
...         {'name':'Linds', 'has_cookie':False},
...         {'name':'Raman', 'has_cookie':False},
...         {'name':'Elaine', 'has_cookie':False}
...         ]
...     },
...     {'name':'Elaine', 'friends':[
...         {'name':'Steven', 'has_cookie':False},
...         {'name':'Binita', 'has_cookie':False},
...         {'name':'Linds', 'has_cookie':False},
...         {'name':'Raman', 'has_cookie':True},
...         {'name':'Elaine', 'has_cookie':False}
...         ]
...     }
... ]
>>> cookie_taker = who_has_my_cookie(my_friends)
>>> print(cookie_taker)
Raman

Summary of O(n^2): There is a loop within a loop, so you might instantly think it’s O(n^2). That’s a fair assumption, but to make sure, double-check that both loops still iterate n times. Some loops within loops turn out to be O(n log(n)).
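As a quick illustration of that last point, here is a hypothetical nested loop (my own sketch, not from the article) whose inner index doubles each pass, so it does roughly n * log2(n) work rather than n * n:

```python
def doubling_inner_ops(n):
    # The outer loop runs n times, but the inner index doubles each
    # pass, so the inner loop only runs about log2(n) times.
    ops = 0
    for _ in range(n):
        j = 1
        while j < n:
            j *= 2
            ops += 1
    return ops

print(doubling_inner_ops(8))  # 8 outer passes * 3 doublings = 24, not 64
```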


Time Complexity: O(1)

In this next scenario, let’s assume that you have a security camera watching your jar of cookies at all times. Once you notice that your cookie is missing, you check the tapes and know exactly who took it. For this reason, you can immediately go to that person and get the cookie back. This can be written in Python3:

>>> def who_has_my_cookie(my_friends, security_camera):
...     return my_friends[security_camera].get('name')

This would be considered a time complexity of O(1) because no matter who took your cookie, it would always result in the same amount of time being spent getting it back.

Let’s feed this function some input and expose the cookie thief:

>>> my_friends = [
...     {'name':'Sid', 'has_cookie':False},
...     {'name':'Guillaume', 'has_cookie':False},
...     {'name':'Lee', 'has_cookie':False},
...     {'name':'Liz', 'has_cookie':False},
...     {'name':'Kevin', 'has_cookie':True}
... ]
>>> cookie_taker = who_has_my_cookie(my_friends, 4)
>>> print(cookie_taker)
Kevin

Summary of O(1): If a function has no loops and performs a fixed amount of work regardless of the size of its input, it’s usually going to have an O(1) time complexity.


There’s one more thing about Big O notations that you need to know: constants in Big O are thrown out. The reasoning behind this is that once the number of inputs becomes large enough, the constants no longer give any meaningful insight into the time complexity.

Nested Loops vs Non-Nested Loops

When there is a nested loop, you multiply the loops together. In the O(n^2) example above, there was a loop within a loop, giving the time complexity O(n^2). If one loop comes right after another (not nested), you add them together rather than multiply. This gives a time complexity of O(n + n), which can be combined to produce O(2n). You now know that Big O notation gets rid of constants, so that reduces down to O(n).

But what if there is another set of inputs that isn’t the same length as n? You can represent that set of inputs as k. Say you have a nested loop with n as the outer loop and k as the inner loop. This boils down to O(n*k). You can’t simplify it to O(n^2), since k is not the same as n, so O(n*k) is the correct notation. The same goes for non-nested loops: O(n + k).
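Those rules can be sketched with a toy operation counter (the function names here are mine, not the article’s):

```python
def nested_ops(n, k):
    # A loop within a loop: the body runs n * k times -> O(n*k),
    # or O(n^2) when both loops share the same input size.
    ops = 0
    for _ in range(n):
        for _ in range(k):
            ops += 1
    return ops

def sequential_ops(n, k):
    # One loop after another: n operations plus k operations
    # -> O(n + k), which collapses to O(2n), then O(n), when k == n.
    ops = 0
    for _ in range(n):
        ops += 1
    for _ in range(k):
        ops += 1
    return ops

print(nested_ops(4, 3))      # 12: nested loops multiply
print(sequential_ops(4, 3))  # 7: sequential loops add
```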


If you are a visual learner, I have included a graph to help you spot which algorithms take more time than others. You can clearly see that the faster-growing notations (n!, 2^n, n^2) are generally going to take a lot longer than, say, log(n).

 

N is computation time and n is number of input elements

Some of the other types of time complexities that exist, but are not described in this blog are:

  1. O(n!) — O of n factorial
  2. O(2^n) — O of 2 to the n
  3. O(n log(n)) — O of n log n
  4. O(sqrt(n)) — O of square root of n

Feel free to look up these more complex Big O notations.
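To get a feel for the exponential end of that list, here is the textbook example usually quoted as O(2^n) (my example, not the author’s): the naive recursive Fibonacci, where each call above the base case spawns two more calls.

```python
def fib(n):
    # The call tree roughly doubles at each level, which is why the
    # naive recursive version is commonly labeled O(2^n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```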

Space Complexity

There is also another side to Big O. Instead of focusing only on time, a different aspect can be inspected: space complexity. Space complexity is a measure of the amount of working storage an algorithm needs during run time.

In this scenario, let’s say that all your friends are in different rooms and you need to keep track of who you’ve talked to so you don’t ask the same person twice. This can be written in Python3:

>>> def who_took_my_cookie(my_friends):
...     spoken_to = []
...     for friend in my_friends:
...         if friend.get('name') not in spoken_to:
...             if friend.get('has_cookie'):
...                 print(spoken_to)
...                 return friend.get('name')
...             else:
...                 spoken_to.append(friend.get('name'))

In this example, the spoken_to list could grow to hold the name of every friend. This means that the space complexity is O(n).

Let’s feed this function some input and see who took your cookie:

>>> my_friends = [
...     {'name':'Kristen', 'has_cookie':False},
...     {'name':'Robert', 'has_cookie':False},
...     {'name':'Jesse', 'has_cookie':False},
...     {'name':'Naomi', 'has_cookie':False},
...     {'name':'Thomas', 'has_cookie':True}
... ]
>>> cookie_taker = who_took_my_cookie(my_friends)
['Kristen', 'Robert', 'Jesse', 'Naomi']
>>> print(cookie_taker)
Thomas

Summary of Space Complexity: Space complexity can have as many notations as time complexity; O(n^2) space and so on also exist and can be calculated using the same general principles. Be careful, though: the time and space complexities of a given function may differ from each other.
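For contrast, here is a hypothetical rewrite of the search above that needs no bookkeeping at all: iterating the list in order already guarantees nobody is asked twice, so the working storage stays at O(1) while the time complexity remains O(n).

```python
def who_took_my_cookie_o1_space(my_friends):
    # No spoken_to list: walking the list in order never revisits a
    # friend, so extra storage doesn't grow with the input.
    for friend in my_friends:
        if friend.get('has_cookie'):
            return friend.get('name')
    return None

friends = [{'name': 'Kristen', 'has_cookie': False},
           {'name': 'Thomas', 'has_cookie': True}]
print(who_took_my_cookie_o1_space(friends))  # Thomas
```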

By combining time complexity with space complexity, you can begin to compare algorithms and find which one will work best for your constraints.

Battle of the Algorithms


“Which algorithm will win?!”

Now that you know what time and space complexities are and how to calculate them, let’s talk about some cases where certain algorithms may be better than others.

A simple example with clearly defined constraints is a self-driving car. The amount of storage in the car is finite, meaning that there is only so much memory that can be used. Because of this, algorithms with low space complexity are favored. But fast time complexity is also favored in this case, which is why self-driving cars are so difficult to implement.

Now let’s examine something like a cellphone. The same storage limitations are in place, but because the cellphone isn’t driving around on the street, some sacrifices in time complexity can be made in favor of less space complexity.


Now that you have the tools to start computing the time and space complexity of the algorithms you read and write, you can determine which algorithm is best for your constraints. That determination must take into consideration the space available for working memory, the amount of input to be parsed, and what your function is doing. Happy Coding!

Finding IT Professionals With SDN Skills Remains a Struggle

With any major technology transition there is a gap between the time when the innovation is created and when there is a critical mass of IT professionals available with training and knowledge on how to deploy that state-of-the-art technology.

Although software-defined networking (SDN) and network functions virtualization (NFV) technology has been around for quite some time, network engineers are still reluctant to adopt the technology.

Plus, there are many different technologies that make up the SDN/NFV ecosystem, which means there are a number of different technologies that require training, said Eric Hanselman, chief analyst for 451 Research. Without a dominant set of SDN and NFV technologies, network administrators are unsure what combination of technologies they need to master. And this comes at a time when network services are becoming more disaggregated, Hanselman noted.

Plus, most companies are conservative when making major changes to their networks because if something goes wrong there is the potential for a ‘blast radius,’ which could become a huge problem, Hanselman said. “The entire scope of the endeavor can be huge,” he added.

Read more at SDxCentral

​Google Doubles Down on Linux and Open Source

Google couldn’t exist without Linux and open-source software. While you may not think of Google as a Linux company in the same way as you do Canonical, Red Hat, or SUSE, it wouldn’t be the search and advertising giant it is today without Linux. So, it makes sense that Google is moving up from its Silver membership in The Linux Foundation to the Platinum level.

With this jump in status, Google gets a seat on the Foundation’s board of directors. This position will be filled by Sarah Novotny, the head of open source strategy for Google Cloud Platform. Novotny is a self-confessed geek. She has a long history of bridging the gap between the business world and the tech world. Before coming to Google, where she also heads the Kubernetes community, she was head of developer relations at NGINX and program chair for the O’Reilly Open Source Convention (OSCon).

Read more at ZDNet

Linux netstat Command Tutorial for Beginners (8 Examples)

If you are new to Linux command line, and are interested in learning networking stuff, there are many command line utilities that you should be aware of. One such tool is netstat. In this article, we will discuss this command using some easy to understand examples.

But before we do that, it’s worth mentioning that all these examples have been tested on an Ubuntu 16.04 LTS machine.

Linux netstat command

The netstat command lets you print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. Following is its syntax:

netstat [OPTIONS]

Read more at HowtoForge

4MLinux: More Than Just Another Lightweight Distro

I don’t want to get up on yet another “Here’s another lightweight Linux distribution to revive your aging hardware” soapbox. So many distributions make that promise, and most of them do an outstanding job of fulfilling their mission statement. Also, many of those distributions are fairly similar: They offer a small footprint, work with 32-bit systems, and install a minimal amount of software dedicated to the task of helping you get your work done as best a lightweight operating system can do.

But then there’s 4MLinux. This particular take on the lightweight Linux distribution is a different beast altogether. First and foremost, 4MLinux doesn’t include a package manager. That’s right: the only way you can install packages on this distribution is to build them from source (unless you install from the limited selection of packages in the Extensions menu; more on that in a bit). That, of course, can lead to a dependency nightmare. But if you really give it some thought, it could be a serious plus, especially if you’re looking for a distribution that could be considered an ideal desktop for end users with specific use cases. If those users only need to work with a web browser, 4MLinux allows that while preventing them from installing other applications.

What’s in a name?

The name 4MLinux comes from a strong focus on the following “Ms”:

  • Maintenance — 4M can be used as a system rescue Live CD.

  • Multimedia — 4M offers full support for a large number of image, audio and video formats.

  • Miniserver — 4M includes DNS, FTP, HTTP, MySQL, NFS, Proxy, SMTP, SSH, and Telnet servers.

  • Mystery — 4M includes a collection of classic Linux games.

It is the inclusion of servers that makes 4MLinux stand out from the lightweight competition. On top of that, the distribution goes out of its way to make managing these servers pretty simple (more on that in a bit).

Let’s install 4MLinux and take a look at what it has to offer.

Installation

The installation of 4MLinux is a bit more complicated than many other distributions in its class. When you boot the live CD/USB, you must first create a partition it can be installed on. To do that, click the start button > Maintenance > Partitions > GParted. I installed 4MLinux as a VirtualBox VM. In order to do that, I had to first create a partition table on the virtual drive. Do this by clicking Device > Create Partition Table (Figure 1).

Figure 1: Creating a partition table on the VirtualBox virtual drive.

Once the partition table has been created, click the New button and create a partition that can house 4MLinux (Figure 2).

Figure 2: Creating a new partition for 4M Linux.

With the partition created, go to Start > 4MLinux > Installer. This will open a terminal window. When prompted, hit the Enter key on your keyboard and then select the partition to use for the installation (Figure 3).

Figure 3: Most likely, the partition you want will be sda1.

You will then need to answer two questions:

  • Is 4MLinux to be the only operating system on your PC? (y/n)

  • Do you wish to start the installation right now? (y/n)

After you type “y” to start the installation, 4MLinux will install (a process that takes less than 5 minutes). When you see “Done” presented, close the window and reboot the system. Upon reboot, you will be prompted (in a text-only window) to create a root password (Figure 4).

Figure 4: Creating a root password.

Creating a standard user

Naturally, you don’t want to run 4MLinux as the root user. Of course, you won’t find a GUI tool for creating users, so open the terminal window and issue the command adduser USERNAME (where USERNAME is the name of the user). After typing (and verifying) a password for the new user, you can log out and log back in as that user.

The Desktop

4MLinux employs Joe’s Window Manager (JWM). It’s actually quite a lovely little desktop, one that includes all the features you’d want for easy interaction. There’s a panel, a start menu, launchers, desktop icons, a dock, a system tray, a mouse menu (Figure 5), and, for good measure, Conky (the lightweight, themeable system monitor).

Figure 5: The 4MLinux desktop includes a handy mouse menu that is accessible from anywhere on the desktop.

Applications

Out of the box, you won’t find much in the way of productivity tools. And even though there isn’t a package manager, you can install a few tools, by way of the Start > Extensions menu. Click on that menu and you’ll see entries like LibreOffice, AbiWord, GIMP, Gnumeric (Figure 6), Thunderbird, Firefox, Opera, Skype, Dropbox, FileZilla, VLC, Audacious, VirtualBox, Wine, Java, and more.

Figure 6: Installing the available apps in 4MLinux.

Click on one of those entries and a terminal window will open, asking if you want to continue. Type y and hit Enter. The package will be downloaded and installed. Once installed, the package is started from the same Extensions menu entry used for installation.

Miniserver

Let’s talk about that Miniserver menu entry. If you open that menu, you’ll find entries for StartAll, StopAll, Settings, Tests, and MiscTools. If you click the StartAll entry, all of the included servers will start. Once they’ve started, click on Miniserver > Settings > LAMP. This will open up the default web browser (Chromium) to a page allowing you to select from LAMP Admin, Webmin, or Help. Out of the box, Webmin is not ready to be used. Before you can log into that powerful tool, you must first open up a terminal window and issue the command webmin. This will download and install Webmin on the machine. You’ll have to answer a few questions (Figure 7) and create a password for the Webmin admin user.

Figure 7: Installing Webmin on 4MLinux.

Once you’ve installed Webmin, you can click Start > Miniserver > Settings > LAMP and, when presented with the available links, click the Webmin entry. You’ll be required to log in with the user admin and the password you created during the Webmin installation. You can now administer your LAMP server (and quite a bit more) from within an all-too familiar web GUI.

Is 4MLinux for You?

4MLinux is a very intriguing distribution that’s really hard to classify. It offers the best of a few worlds and misses out on others (such as a traditional package manager). It’s one of those distributions that might never wind up being your go-to, but it is certainly one you should experience first hand. And, who knows, you might find this unique flavor of Linux just right for you.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.