There are a lot of ways to write an algorithm for a specific task. Since there are so many variations, how does someone know which algorithm to use in which situation? One way to look at the problem is to ask yourself: what limitations are present, and how can I check whether an algorithm fits within those constraints? Big O is used to help identify which algorithms fit inside these constraints and limitations.
Big O notation is a mathematical representation of how an algorithm scales with an increasing number of inputs. In other words, it describes how long a certain algorithm takes to run, or how much space it consumes, as the input grows.
By using Big O notation, developers and engineers can construct the best algorithm that fits within their constraints.
How does Big O work?
So, Big O is used to describe the trend of an algorithm, but what does the language surrounding Big O sound like? In conversation, you would talk about Big O like this: “The time complexity of [a certain algorithm] is O(n)”, pronounced “O of n” (both “O” and “n” are pronounced like the letters). The reason I am using the phrase time complexity is that Big O notations are most commonly referred to as time complexities. In Big O, “n” refers to the number of inputs. There is another side to Big O called space complexity, but I’ll talk about that later in the blog.
Now that you know how to talk about Big O and what it’s used for, let’s look at some examples to help you understand how Big O is calculated. The following examples will go over some of the more common Big O notations you will see (in order): O(n), O(log(n)), O(n^2), O(1).
Time complexity: O(n)
“Could you not?”
Let’s say that someone stole the last cookie out of your cookie jar (“how rude!”), and you really wanted to eat that cookie, so now you have to go out and find it. One way you could go about finding the cookie is to ask your 5 friends, one by one, if they took it. We could represent this as a function in Python3:
>>> def who_has_my_cookie(my_friends):
...     for friend in my_friends:
...         if friend.get('has_cookie'):
...             return friend.get('name')
In this function, you ask each of your friends, one by one, whether they have your cookie. In this case, the length of my_friends is your n. The function’s run time depends on how many friends were present during the abduction. For that reason, this function has a time complexity of O(n).
Now let’s feed the function some input, and see who took the cookie:
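The post never shows the my_friends list itself; one plausible shape (the names here are assumptions, chosen to match the output below) is a list of dicts with a has_cookie flag:

```python
# Hypothetical input: one dict per friend; only the thief's entry
# has 'has_cookie' set to True.
my_friends = [
    {'name': 'David', 'has_cookie': False},
    {'name': 'Jacob', 'has_cookie': False},
    {'name': 'Julien', 'has_cookie': False},
    {'name': 'Sue', 'has_cookie': False},
    {'name': 'Thomas', 'has_cookie': True},
]
```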
>>> cookie_taker = who_has_my_cookie(my_friends)
>>> print(cookie_taker)
Thomas
You might wonder, “But what if the first person you ask has the cookie?” Great question! That would be the best case, which takes constant time, also written O(1). Even so, the time complexity for this problem is still O(n).
Why is the correct notation O(n)? In most cases you won’t get the best case scenario, i.e. finding the cookie on the first person. This is why it’s best to think of the worst case scenario when dealing with algorithms: Big O conventionally describes an algorithm’s worst-case behavior.
Summary of O(n): A single loop that can iterate as many times as there are inputs. In the case above, the loop could run up to 5 iterations, one for each of the 5 friends in my_friends, before finding who stole your cookie.
Time Complexity: O(log(n))
In this next scenario, let’s say that you have all of your friends line up side by side. You find the middle person and ask them whether the cookie is to their left or their right. You then find the middle person on that side and ask them the same thing, using the previous middle person as a boundary so you search ever smaller portions of the line. This keeps happening until the person who has the cookie is found. This can be written in Python3:
>>> def search_for_cookie(my_friends, first, last):
...     middle = (last - first) // 2 + first
...     middle_response = my_friends[middle].get('where_is_cookie')
...     if middle_response == 'I have it':
...         return my_friends[middle].get('name')
...     elif middle_response == 'to the right':
...         # Search the right half
...         return search_for_cookie(my_friends, middle + 1, last)
...     else:
...         # Search the left half
...         return search_for_cookie(my_friends, first, middle - 1)
In this function, first refers to the first person in the section being looked at (on the first call, that section is the whole line), last refers to the last person in the section, and middle refers to the person halfway between first and last.
This function keeps going until the person who has your cookie has been singled out. In the worst case, the function doesn’t find out who has the cookie until the last recursive call, when only one person remains. Because each call halves the section being searched, the function has a time complexity of O(log(n)).
Let’s feed this function some input and see who took your cookie:
>>> my_friends = [
... {'name':'David', 'where_is_cookie':'to the right'},
... {'name':'Jacob', 'where_is_cookie':'to the right'},
... {'name':'Julien', 'where_is_cookie':'to the right'},
... {'name':'Dimitrios', 'where_is_cookie':'I have it'},
... {'name':'Sue', 'where_is_cookie':'to the left'}
... ]
>>> cookie_taker = search_for_cookie(my_friends, 0, len(my_friends) - 1)
>>> print(cookie_taker)
Dimitrios
Summary of O(log(n)): When a function cuts the section you’re looking at in half every iteration, you can suspect an O(log(n)) algorithm. Loops whose counter doubles each iteration (i = i * 2) are another giveaway. This way of narrowing down who has the cookie is typical of divide-and-conquer algorithms like quick sort, merge sort, and binary search.
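The doubling pattern mentioned above can be sketched as a plain loop. The helper below is hypothetical (not from the original post), but it shows why a counter that doubles each pass only runs about log2(n) times:

```python
def halving_steps(n):
    # Count how many doublings it takes for i to reach n.
    # i covers 1, 2, 4, 8, ... so the loop body runs roughly
    # log2(n) times: the hallmark of O(log(n)).
    steps = 0
    i = 1
    while i < n:
        i = i * 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10: over a thousand elements, only ten steps
```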
Time Complexity: O(n^2)
In this next scenario, let’s say that only one person knows who took your cookie, and only by name. The person who took the cookie won’t fess up either, so for each friend you talk to, you have to name off every other friend. This can be written in Python3:
>>> def who_has_my_cookie(my_friends):
...     for friend in my_friends:
...         for their_friend in friend.get('friends'):
...             if their_friend.get('has_cookie'):
...                 return their_friend.get('name')
If you imagine yourself in this situation, you can see that it’s going to take a lot longer than any of the other situations, because you have to recite everyone’s name each time you move to the next friend. Worst case, you need to cycle through each friend and ask them about every other friend. If you look at the loops, the outer loop has a time complexity of O(n), and the inner loop also has a time complexity of O(n). Because nested loops multiply their time complexities together, O(n * n), the function has a time complexity of O(n^2).
Let’s feed this function some input and see who took your cookie:
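The input isn’t shown in the post, so here is a sketch: it restates the function and feeds it a made-up friends-of-friends structure (all names are assumptions):

```python
def who_has_my_cookie(my_friends):
    # For each friend (n iterations), name off every other friend
    # (another n iterations): n * n checks in the worst case.
    for friend in my_friends:
        for their_friend in friend.get('friends'):
            if their_friend.get('has_cookie'):
                return their_friend.get('name')

# Hypothetical input: each friend only recognizes the thief when
# asked about them by name.
kevin = {'name': 'Kevin', 'has_cookie': True}
sue = {'name': 'Sue', 'has_cookie': False}
my_friends = [
    {'name': 'David', 'friends': [sue, sue]},
    {'name': 'Jacob', 'friends': [sue, kevin]},
]
print(who_has_my_cookie(my_friends))  # Kevin
```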
Summary of O(n^2): There is a loop within a loop, so you could instantly think it would be O(n^2). That’s usually a correct assumption, but to make sure, double check that both loops still iterate n times each. Some loops within loops turn out to be O(n log(n)).
Time Complexity: O(1)
In this next scenario, let’s assume that you have a security camera watching your jar of cookies at all times. Once you notice that your cookie is missing, you check the tapes and know exactly who took it. For this reason, you can go straight to that person and get the cookie back. This can be written in Python3:
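The code block appears to be missing from the post. A plausible sketch, consistent with the two-argument call shown below (the index parameter is an assumption), is a single list lookup:

```python
def who_has_my_cookie(my_friends, cookie_taker_index):
    # One direct lookup, no loops: the same amount of work no
    # matter how many friends there are, i.e. O(1).
    return my_friends[cookie_taker_index].get('name')

# Hypothetical input: the camera footage points at friend #4.
my_friends = [{'name': n} for n in
              ['David', 'Jacob', 'Julien', 'Sue', 'Kevin']]
print(who_has_my_cookie(my_friends, 4))  # Kevin
```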
This would be considered a time complexity of O(1) because no matter who took your cookie, it would always result in the same amount of time being spent getting it back.
Let’s feed this function some input and expose the cookie thief:
>>> cookie_taker = who_has_my_cookie(my_friends, 4)
>>> print(cookie_taker)
Kevin
Summary of O(1): If a function has no loops and performs only a fixed amount of work based on the input you give it, it’s usually going to be an O(1) time complexity.
There’s one more thing about Big O notation that you need to know: constants in Big O are thrown out. The reasoning is that once the number of inputs becomes large enough, constants no longer give any meaningful insight into the time complexity.
Nested Loops vs Non-Nested Loops
When there is a nested loop, you multiply the loops together. In the O(n^2) example above, there was a loop within a loop, giving the time complexity O(n^2). If one loop comes right after another (not nested), you add them together rather than multiply. This gives a time complexity of O(n + n), which can be combined into O(2n). You know now that Big O notation gets rid of constants, so that reduces down to O(n).
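As a quick sketch (hypothetical helpers, not from the original post), counting the steps each pattern performs makes the multiply-versus-add rule concrete:

```python
def nested_work(n):
    # A loop inside a loop: n iterations of n iterations,
    # so n * n total steps, i.e. O(n^2).
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

def sequential_work(n):
    # One loop after another: n + n = 2n steps. Big O drops the
    # constant 2, leaving O(n).
    count = 0
    for _ in range(n):
        count += 1
    for _ in range(n):
        count += 1
    return count

print(nested_work(10), sequential_work(10))  # 100 20
```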
But what if there is another set of inputs that isn’t the same length as n? You can represent that set of inputs as k. Say you have a nested loop with n as the outer loop and k as the inner loop. This boils down to O(n*k). You can’t simplify it to O(n^2) since k is not the same as n, so O(n*k) is the correct notation. The same goes for non-nested loops: O(n + k).
If you are a visual learner, I have included a graph to help you spot which algorithms take more time than others. You can clearly see that the factorial, exponential, and polynomial curves (n!, 2^n, n^2) are generally going to take a lot longer than, say, log(n).
N is computation time and n is number of input elements
Some of the other time complexities that exist, but are not described in this blog, include O(n log(n)), O(2^n), and O(n!). Feel free to look up these more complex Big O notations.
Space Complexity
There is also another side to Big O. Instead of focusing only on time, a different aspect can be inspected: space complexity. Space complexity is a measure of the amount of working storage an algorithm needs during run time.
In this scenario, let’s say that all your friends are in different rooms and you need to keep track of who you’ve talked to so you don’t ask the same person twice. This can be written in Python3:
>>> def who_took_my_cookie(my_friends):
...     spoken_to = []
...     for friend in my_friends:
...         if friend.get('name') not in spoken_to:
...             if friend.get('has_cookie'):
...                 print(spoken_to)
...                 return friend.get('name')
...             else:
...                 spoken_to.append(friend.get('name'))
In this example, the spoken_to list could grow as long as the total number of friends. This means that the space complexity is O(n).
Let’s feed this function some input and see who took your cookie:
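The input isn’t shown in the post; the sketch below restates the function in runnable form and feeds it a made-up list (names are assumptions):

```python
def who_took_my_cookie(my_friends):
    # spoken_to records everyone already asked; in the worst case
    # it grows to n entries, which is why the space cost is O(n).
    spoken_to = []
    for friend in my_friends:
        if friend.get('name') not in spoken_to:
            if friend.get('has_cookie'):
                print(spoken_to)
                return friend.get('name')
            spoken_to.append(friend.get('name'))

# Hypothetical input: the last friend asked has the cookie, so
# spoken_to ends up holding everyone questioned before them.
my_friends = [
    {'name': 'David', 'has_cookie': False},
    {'name': 'Jacob', 'has_cookie': False},
    {'name': 'Sue', 'has_cookie': True},
]
print(who_took_my_cookie(my_friends))  # prints ['David', 'Jacob'], then Sue
```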
Summary of Space Complexity: Space complexity can take any of the notations that time complexity can. This means that O(n^2) and the rest also exist and can be calculated using the same general principles. Be careful though: the time and space complexities of a given function may differ from each other.
By combining time complexity with space complexity, you can begin to compare algorithms and find which one will work best for your constraints.
Battle of the Algorithms
“Which algorithm will win?!”
Now that you know what time and space complexities are and how to calculate them, let’s talk about some cases where certain algorithms may be better than others.
A simple example with clearly defined constraints is a self-driving car. The amount of storage in the car is finite, meaning there is only so much memory that can be used. Because of this, algorithms with low space complexity are often favored. But fast time complexities are also needed, which is part of why self-driving cars are so difficult to implement.
Now let’s examine something like a cellphone. The same storage limitations are in place, but because the cellphone isn’t driving around on the street, some sacrifices in time complexity can be made in favor of less space complexity.
Now that you have the tools to start computing the time and space complexity of the algorithms you read and write, determine which algorithm is best for your constraints. Determining which algorithm to use must take into consideration the space available for working memory, the amount of input to be parsed, and what your function is doing. Happy Coding!
This article was produced in partnership with Holberton School and originally appeared on Medium.
With any major technology transition there is a gap between the time when the innovation is created and when there is a critical mass of IT professionals available with training and knowledge on how to deploy that state-of-the-art technology.
Plus, there are many different technologies that make up the SDN/NFV ecosystem, which means there are a number of different technologies that require training, said Eric Hanselman, chief analyst for 451 Research. Without a dominant set of SDN and NFV technologies, network administrators are unsure what combination of technologies they need to master. And this comes at a time when network services are becoming more disaggregated, Hanselman noted.
Plus, most companies are conservative when making major changes to their networks because if something goes wrong there is the potential for a ‘blast radius,’ which could become a huge problem, Hanselman said. “The entire scope of the endeavor can be huge,” he added.
Google couldn’t exist without Linux and open-source software. While you may not think of Google as a Linux company in the same way as you do Canonical, Red Hat, or SUSE, it wouldn’t be the search and advertising giant it is today without Linux. So, it makes sense that Google is moving up from its Silver membership in The Linux Foundation, to the Platinum level.
If you are new to Linux command line, and are interested in learning networking stuff, there are many command line utilities that you should be aware of. One such tool is netstat. In this article, we will discuss this command using some easy to understand examples.
But before we do that, it’s worth mentioning that all these examples have been tested on an Ubuntu 16.04 LTS machine.
Linux netstat command
The netstat command lets you print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. Following is its syntax:
I don’t want to get up on yet another “Here’s another lightweight Linux distribution to revive your aging hardware” soapbox. So many distributions make that promise, and most of them do an outstanding job of fulfilling their mission statement. Also, many of those distributions are fairly similar: They offer a small footprint, work with 32-bit systems, and install a minimal amount of software dedicated to the task of helping you get your work done as best a lightweight operating system can do.
But then there’s 4MLinux. This particular take on the lightweight Linux distribution is a different beast altogether. First and foremost, 4MLinux doesn’t include a package manager. That’s right: the only way to install packages on this distribution is from source (unless you install the limited number of packages available from within the Extensions menu; more on that in a bit). That, of course, can lead to a dependency nightmare. But if you really give it some thought, it could be a serious plus, especially if you’re looking for a distribution that could be considered an ideal desktop for end users with specific use cases. If those users only need to work with a web browser, 4MLinux allows that while preventing them from installing other applications.
What’s in a name?
The name 4MLinux comes from a strong focus on the following “Ms”:
Maintenance — 4M can be used as a system rescue Live CD.
Multimedia — 4M offers full support for a large number of image, audio and video formats.
Miniserver — 4M includes DNS, FTP, HTTP, MySQL, NFS, Proxy, SMTP, SSH, and Telnet servers.
Mystery — 4M includes a collection of classic Linux games.
It is the inclusion of servers that makes 4MLinux stand out above the lightweight competition. On top of that, the distribution goes out of its way to make the managing of these servers pretty simple (more on that in a bit).
Let’s install 4MLinux and take a look at what it has to offer.
Installation
The installation of 4MLinux is a bit more complicated than many other distributions in its class. When you boot the live CD/USB, you must first create a partition it can be installed on. To do that, click the start button > Maintenance > Partitions > GParted. I installed 4MLinux as a VirtualBox VM, so I first had to create a partition table on the virtual drive. Do this by clicking Device > Create Partition Table (Figure 1).
Figure 1: Creating a partition table on the VirtualBox virtual drive.
Once the partition table has been created, click the New button and create a partition that can house 4MLinux (Figure 2).
Figure 2: Creating a new partition for 4M Linux.
With the partition created, go to Start > 4MLinux > Installer. This will open a terminal window. When prompted, hit the Enter key on your keyboard and then select the partition to use for the installation (Figure 3).
Figure 3: Most likely, the partition you want will be sda1.
You will then need to answer two questions:
Is 4MLinux to be the only operating system in your PC y/n
Do you wish to start the installation right now y/n
After you type “y” to start the installation, 4MLinux will install (a process that takes less than 5 minutes). When you see “Done” presented, close the window and reboot the system. Upon reboot, you will be prompted (in a text-only window) to create a root password (Figure 4).
Figure 4: Creating a root password.
Creating a standard user
Naturally, you don’t want to run 4MLinux as the root user. You won’t find a GUI tool for creating a standard user, so open a terminal window and issue the command adduser USERNAME (where USERNAME is the name of the user). After typing (and verifying) a password for the new user, you can log out and log back in as that user.
The Desktop
4MLinux employs Joe’s Window Manager (JWM). It’s actually quite a lovely little desktop, one that includes all the features you’d want for easy interaction. There’s a panel, a start menu, launchers, desktop icons, a dock, a system tray, a mouse menu (Figure 5), and, for good measure, Conky (the lightweight, themeable system monitor).
Figure 5: The 4MLinux desktop includes a handy mouse menu that is accessible from anywhere on the desktop.
Applications
Out of the box, you won’t find much in the way of productivity tools. And even though there isn’t a package manager, you can install a few tools, by way of the Start > Extensions menu. Click on that menu and you’ll see entries like LibreOffice, AbiWord, GIMP, Gnumeric (Figure 6), Thunderbird, Firefox, Opera, Skype, Dropbox, FileZilla, VLC, Audacious, VirtualBox, Wine, Java, and more.
Figure 6: Installing the available apps in 4MLinux.
Click on one of those entries and a terminal window will open, asking if you want to continue. Type y and hit Enter. The package will be downloaded and installed. Once installed, the package is started from the same Extensions menu entry used for installation.
Miniserver
Let’s talk about that Miniserver menu entry. If you open that menu, you’ll find entries for StartAll, StopAll, Settings, Tests, and MiscTools. If you click the StartAll entry, all of the included servers will start. Once they’ve started, click on Miniserver > Settings > LAMP. This will open up the default web browser (Chromium) to a page allowing you to select from LAMP Admin, Webmin, or Help. Out of the box, Webmin is not ready to be used. Before you can log into that powerful tool, you must first open up a terminal window and issue the command webmin. This will download and install Webmin on the machine. You’ll have to answer a few questions (Figure 7) and create a password for the Webmin admin user.
Figure 7: Installing Webmin on 4MLinux.
Once you’ve installed Webmin, you can click Start > Miniserver > Settings > LAMP and, when presented with the available links, click the Webmin entry. You’ll be required to log in with the user admin and the password you created during the Webmin installation. You can now administer your LAMP server (and quite a bit more) from within an all-too familiar web GUI.
Is 4MLinux for You?
4MLinux is a very intriguing distribution that’s really hard to classify. It offers the best of a few worlds and misses out on the best of others (such as a traditional package manager). It’s one of those distributions that might never wind up your go-to, but it is certainly one you need to experience first hand. And, who knows, you might find this unique flavor of Linux just right for you.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
The Raspberry Pi 3 Model B+ won our 2018 reader survey as the most popular community-backed, Linux/Android hacker board under $200. The survey, run in partnership with LinuxGizmos.com, asked readers to select their favorite boards from among 116 community-backed SBCs that run Linux or Android and sell for under $200. All 116 SBCs are summarized in LinuxGizmos’ recently updated hacker board catalog and feature comparison spreadsheet. (Downloadable versions of the spreadsheet may be found here.)
The sample of 683 fell far short of the 1,705 survey respondents for the June 2017 survey and the 1,721 voters in 2015, but it beat out the 473 total for the 2016 survey. Considering the modest sample, the survey may not be highly representative of market share, but it’s still indicative of enthusiasm.
To rank the boards, we used Borda Count scoring, in which we tripled the number of first choices, then doubled the number of second place selections, and added the two results to the unadjusted third-choice amount. The top 10 boards are shown in the chart below:
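The Borda weighting just described amounts to a simple weighted sum; here is a minimal sketch (the vote counts below are invented for illustration, not the survey’s actual tallies):

```python
def borda_score(first, second, third):
    # Triple the first-place votes, double the second-place votes,
    # and add the raw third-place count.
    return 3 * first + 2 * second + third

print(borda_score(100, 50, 25))  # 425
```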
The Raspberry Pi 3 Model B+, which builds upon the RPi 3 Model B design with a faster Broadcom SoC, faster WiFi and Ethernet, and optional PoE, earned a Borda score of 811. This was about twice the score of the second place (414) UDOO X86, which is one of the most powerful of the handful of x86 based hacker boards.
There was a big drop to the next three boards, including the Odroid-XU4 (278), the Raspberry Pi 3 Model B (277), which was last year’s runaway winner, and the tiny Raspberry Pi Zero W/WH (255). The remainder of the top 10 list includes the venerable sixth place BeagleBone Black (126) and the Raspberry Pi like Asus Tinker Board (112) and Odroid-C2 (95). The Raspberry Pi Zero (85) came in 9th and the BeagleBone Black Wireless (67) was 10th.
If you prefer a top 10 list based on first choice only, the top 7 rankings would stay the same, but the Odroid-C2, RPi Zero, and BB Black Wireless would drop lower. This would make way for the UP Squared, Odroid-N1, RockPro64, Orange Pi Zero H2+/Zero Plus 2 H3/Zero Plus 2 H5, and the Rock64, all of which had higher first-pick scores.
Although the small size of the sample makes it difficult to read too much into the rankings, two trends seem clear: First, SBCs with Raspberry Pi like dimensions and 40-pin expansion connectors continue to do well. Second, it’s tough for a new board to break into the top ranks, at least among LinuxGizmos readers. Seven of the top 10 Borda-ranked boards were also in last year’s top 10, and the RPi 3 B+ was the only top 10 board that was not available a year ago.
A few newcomers did, however, break into our 11-20 ranked group, including the RockPro64, Orange Pi Zero H2+, and DragonBoard 820c. The top 10 list also includes a board that will never see the light of day. We included the Odroid-N1 with the expectation that it would ship on time in June, but Hardkernel’s Odroid project just announced that the Rockchip RK3399 based SBC has been cancelled in favor of an Odroid-N2 model with an unnamed new SoC, due within 5-6 months.
Reader buying priorities and goals
In addition to asking survey participants to list their favorite boards, we asked about buying criteria and intended applications. The year-to-year consistency we’re seeing in the answers suggests that a 683 sample may be more significant than we thought. In ranking buying criteria, for example, the rankings were very similar. High-quality open source software again led the list as the top priority, while networking/wireless I/O swapped with community ecosystem for second and third places.
When asked about intended applications, home automation was again on top, but the previous second-ranked education category dropped several levels. Home multimedia and special function servers advanced to second and third place, and data acquisition and control also jumped considerably, suggesting a growing role for hacker boards in industrial settings.
In a separate question about more general usage, the maker/hobbyist segment once again led the way, but by a smaller margin. The other three categories increased, with the research and commercial categories seeing the largest gains.
Fifteen randomly selected survey participants will receive a free hacker board in the coming weeks. Many thanks to all the participants who voted, as well as the vendors who donated boards.
The open source Kubernetes container orchestration project this week shipped its second release of the year, with improvements to networking, storage and more.
Kubernetes, under direction of The Linux Foundation, focuses on automating deployment, scaling and management of containerized applications.
Kubernetes 1.11, announced yesterday (June 27), includes several new features, with project backers especially highlighting two features that have graduated to general availability: IPVS-based In-Cluster Load Balancing and CoreDNS as a cluster DNS add-on option. The release team said these result in increased scalability and flexibility.
“IPVS (IP Virtual Server) provides high-performance in-kernel load balancing, with a simpler programming interface than iptables,” the team said in a blog post.
Eighteen more Chromebooks are getting support for Linux apps on Chrome OS, with laptops based on Intel’s Apollo Lake architecture now able to run the applications, via XDA Developers.
That list includes computers from Lenovo (Thinkpad 11e Chromebook), Acer (Chromebook Spin 11 and Chromebook 15), Asus (Chromebook Flip), and Dell (Dell Chromebook 11) — check the full list at XDA’s site to see if your machine is included.
Previously, Linux apps worked on the Google Pixelbook and Samsung Chromebook Plus, but support for the Apollo Lake machines should open it up to a much wider range of users — and more importantly, to a much wider range of laptop price points.
The internet began as a system for applying addresses to servers. Now it’s a means for giving names to services, and distributing those services across the planet. SDN gave rise to a new way of implementing computing on the broadest scale of infrastructure, and has become, for better or worse, the whole point of networking.
The phrase software-defined networking (SDN) was coined when it was necessary to distinguish the concept from the hardware-based variety. Since that time, “SDN” has come to mean the type of dynamic configuration that takes place whenever software-based services in a data center network are made accessible through an Internet Protocol (IP) address. More to the point, SDN is networking now….
Much of the logic for SDN is moved inside the servers’ central processor, as just another user function. Some of it is moved inside simple switch and router appliances, where the software consists of open-source operating systems and open-source controllers. Yet all of these phenomena are side effects of SDN, not the purpose. These changes happen because the real purpose of SDN is to move networking logic to a place where it can be more directly controlled and managed, and even more importantly, changed to suit the dynamics of variable workloads.
THE BASIS OF SDN
Here are SDN’s principal architectural tenets:
The flow of user data is separated from the flow of control instructions. In a physical network, data packets that belong to an application take the same route as internal instructions the network components need to coordinate their actions. As SDN engineers put it, the control plane is separated from the data plane.