The highly popular PHP 5.x branch will stop receiving security updates at the end of the year.
According to statistics from W3Techs, roughly 78.9 percent of all Internet sites today run on PHP. But on December 31, 2018, security support for PHP 5.6.x will officially cease, marking the end of all support for any version of the ancient PHP 5.x branch.
This means that, starting next year, the roughly 62 percent of all Internet sites still running a PHP 5.x version will stop receiving security updates for their server and website’s underlying technology, exposing hundreds of millions of websites, if not more, to serious security risks.
Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons that particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well beyond what most other platforms can do. Just ask any enterprise business why they use Linux and open source.
But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which will exclude some high-quality, proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools.
Top
We’ll first start with the obvious. The top command is a great place to start when you need to monitor which processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of a running Linux system. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see which processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.
Figure 1: Top running on Elementary OS.
There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.
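If you want top to open with the process list already sorted by memory use, recent procps-ng builds accept a sort field on the command line (a minimal sketch; exact flag support can vary between top implementations):

    # Launch top sorted by memory consumption instead of CPU
    top -o %MEM

While top is running, pressing M or P re-sorts the list by memory or CPU, respectively.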
Glances
If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).
Figure 2: The glances monitor displaying docker stats along with all the other information it offers.
You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.
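On a Debian- or Ubuntu-based system, for example, installing and launching Glances might look like this (a minimal sketch; swap in dnf, zypper, or pacman on other distributions):

    # Install glances from the standard repositories
    sudo apt-get install glances
    # Start the monitor, then press 2 to toggle the sidebar
    glances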
GNOME System Monitor
If you’re not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which serves as a front end for the top tool. If you prefer a GUI, you can’t beat this app.
With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).
Figure 3: GNOME System Monitor in action.
You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).
Figure 4: The GNOME System Monitor Resources tab in action.
If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.
Nagios
If you’re looking for an enterprise-grade network monitoring system, look no further than Nagios. But don’t think Nagios is limited to only monitoring network traffic. This system has over 5,000 different add-ons that can be installed to expand the system to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution, and although the install isn’t quite as difficult as with some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5).
Figure 5: With Nagios you can even start and stop services.
Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time. The one caveat with Nagios is that you must manually define any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation includes sample configuration files (found in /usr/local/nagios/etc/objects), which you can use to create configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers).
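As a rough sketch of what such a file can look like (the host name and address here are hypothetical, and the linux-server template comes from the sample object configuration), a minimal remote-host definition would resemble the following; after saving it, verify the configuration before restarting Nagios:

    # /usr/local/nagios/etc/servers/remote-web-01.cfg (hypothetical host)
    define host {
        use         linux-server    ; template provided by the sample configs
        host_name   remote-web-01
        alias       Remote web server
        address     192.168.1.50
    }

    # Check the full configuration for errors
    sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Remember that nagios.cfg must contain a cfg_dir (or cfg_file) entry pointing at the directory holding these server definitions.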
Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.
There’s More Where That Came From
We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
Practical techniques to ensure developers can actually do the things you want them to do using your API.
In the previous chapters, we gave an overview of various approaches for transmitting data via your web API. Now that you’re familiar with the landscape of transport and have an understanding of how to choose between various patterns and frameworks, we want to provide some tactical best practices to help your developers get the most out of your API.
Designing for Real-Life Use Cases
When designing an API, it’s best to make decisions that are grounded in specific, real-life use cases. Let’s dig into this idea a bit more. Think about the developers who are using your API. What tasks should they be able to complete with your API? What types of apps should developers be able to build? For some companies, this is as targeted as “developers should be able to charge customer credit cards.” For other companies, the answer can be more open-ended: “developers should be able to create a full suite of interactive consumer-quality applications.”
Test automation tools are still not widely used. Only 16 percent of performance test cases are executed with test automation tools, and security tests are automated at roughly the same rate, according to the World Quality Report (WQR) 2018-2019, which surveyed 1,700 IT decision makers (ITDMs) at companies with more than a thousand employees. Although QA and testing roles have been adapting to agile development practices, the majority of tests are still performed manually.
With the recent advances in machine learning technology, it is only a matter of time before developers can expect to run full diagnostics and information retrieval on their own source code. This can include autocompletion, auto-generated user tests, more robust linters, automated code reviews, and more. I recently reviewed a new product in this sphere — the source{d} Engine. source{d} offers a suite of applications that use machine learning on code for source code analysis and assisted code reviews. Chief among them is the source{d} Engine, now in public beta; it uses a suite of open source tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source code analysis. Some key uses of the source{d} Engine include language identification, parsing code into abstract syntax trees, and running SQL queries against your source code, such as:
What are the top repositories in a codebase based on number of commits?
What is the most recent commit message in a given repository?
Who are the most prolific contributors in a repository?
Because source{d} Engine uses both agnostic language analysis and standard SQL queries, the information available feels infinite.
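For instance, the contributor question above can be answered with a query along these lines (a sketch assuming the gitbase schema the engine exposes; table and column names such as commits and commit_author_name may differ between versions):

    -- Top ten contributors by number of commits (hypothetical sketch)
    SELECT commit_author_name,
           COUNT(*) AS commit_count
    FROM commits
    GROUP BY commit_author_name
    ORDER BY commit_count DESC
    LIMIT 10;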
Figure 1: Basic database structure.
From minute one, using source{d} Engine was an easy, efficient process. I ran source{d} Engine chiefly on a virtual machine running Ubuntu 14.04 but also installed it on MacOS and Ubuntu 16.04 for comparison purposes. On all three, the install was completely painless, although the Ubuntu versions seemed to run slightly faster. The source{d} Engine documentation is accurate and thorough. It correctly warned me that initializing the engine for the first time would take a fair amount of time, so I was prepared for the wait. I did have to debug a few errors, all relating to my having a previous SQL instance running, so some more thorough troubleshooting documentation might be warranted.
Figure 2: Listing the top contributor of a given repository.
It’s simple to move between codebases using the commands scrd kill and scrd init. I wanted to explore many use cases, so I picked a wide variety of codebases to test on, ranging from one with a single contributor and only five commits to one with 10 contributors, thousands of lines of code, and hundreds of commits. source{d} Engine worked phenomenally with all of them, although it is easier to see the benefits in a larger codebase.
Figure 3: Listing all commits from a repository — not so easy in a bigger codebase, but fantastic when there are only eight!
My favorite queries to run were those pertaining to commits. I am not a fan of the way GitHub organizes commit history, so I find myself coming back to source{d} Engine again and again when I want commit history-related information. I’m also very impressed with the Universal Abstract Syntax Tree (UAST) concept. A UAST is a normalized form of an abstract syntax tree (AST) — a structural representation of source code used for code analysis. Unlike ASTs, UASTs are language agnostic and do not rely on any specific programming language. The UAST format enables further analysis and can be used with any tools in a standard, open style.
My only complaint is the (obvious and understandable) reliance on a base level of SQL knowledge. Because I was already very familiar with SQL, I was able to quickly use the source{d} Engine and create my own queries. However, if I had been shakier on the basics, I would’ve appreciated more example queries. Another minor complaint is that support for Python currently appears to cover only Python 2, not Python 3.
Figure 4: Currently supported drivers.
I’m excited to follow the future of source{d} Engine and also source{d} Lookout (now in public alpha), which is the first step toward a suite of true machine-learning-on-code applications. I would love for the documentation of this and other upcoming applications to be more comprehensive, but because they are not fully available yet, just having what’s available already is great.
In general, I’m extremely impressed with the transparency of the company — not only are the future products and applications clearly listed and described, but many internal company documents are also available. This true dedication to open source software is amazing, and I hope more companies follow source{d}’s lead.
Lizzie Turner is a former digital marketing analyst studying full stack software engineering at Holberton School. She is currently looking for her first software engineering role and is particularly passionate about data and analytics. You can find Lizzie on LinkedIn, GitHub, and Twitter.
This article was produced in partnership with Holberton School.
The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.
To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.
Kconfig
The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:
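A handful of the most commonly used targets are shown below (not an exhaustive list; running make help in the kernel source tree prints them all):

    make menuconfig    # ncurses-based menu interface
    make xconfig       # Qt-based graphical interface
    make defconfig     # default configuration for the target architecture
    make oldconfig     # update an existing .config against a newer kernel tree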
Before Microsoft joined, OIN had more than 2,650 community members and owned more than 1,300 global patents and applications. OIN is the largest patent non-aggression community in history and represents a core set of open-source intellectual-property values. Its members include Google, IBM, Red Hat, and SUSE. The OIN patent license and member cross-licenses are available royalty-free to anyone who joins the OIN community.
LinuxBoot is an open source alternative to proprietary UEFI firmware. It was released last year and is now increasingly preferred by leading hardware manufacturers as default firmware. Last year, LinuxBoot was warmly welcomed into the open source family by The Linux Foundation.
The project began in January 2017 as an initiative by Ron Minnich, author of LinuxBIOS and lead of coreboot at Google.
One category that often gets overlooked in the discussion of Linux computers is the market for HDMI dongle devices that plug into your TV to stream, mirror, or cast content from your laptop or mobile device. Yesterday, Google announced an extensively leaked third-gen version of its market-leading, Linux-powered Chromecast device. The latest Chromecast has a new design and Google Home support, and it’s claimed to have a 15 percent faster processor with support for 1080p@60 video. However, the rumored addition of Bluetooth did not materialize.
Here, we look at a similar Linux-based HDMI dongle device that launched this morning with a somewhat different feature set and market focus. The Airtame 2 is the first hardware overhaul since the original Airtame generated $1.3 million on Indiegogo in 2013. The new version quadruples the RAM, improves the Fedora Linux firmware, and advances to dual-band 802.11a/b/g/n/ac, which is now known as WiFi 5 in the new Wi-Fi Alliance naming scheme that accompanied its recent WiFi 6 (ax) announcement.
In its first year, Copenhagen, Denmark-based Airtame struggled to fulfill its Indiegogo orders and almost collapsed in the process. Yet, the company went on to find success and recently surpassed 100,000 device shipments. With a growing focus on enterprise and educational markets, Airtame upgraded its software with cloud device management features, and expanded its media sources beyond cross-platform desktops to Android and iOS devices.
The key difference with Chromecast is that Airtame supports mirroring to multiple devices at once, as long as your video is coming from a laptop or desktop rather than a mobile device. Chromecast also requires the Chrome browser, and it lacks cloud-based device management features.
Combined with Chromecast’s dominance of the low-end entertainment segment, thanks in part to its $35 price tag, Airtame’s advantages led the company to focus more on the enterprise, signage, and educational markets. Unfortunately, the Airtame 2 price went up by $100 to $399 per device.
Airtame 2 extends its enterprise trajectory by “re-imagining how to turn blank screens into smart, collaborative displays,” says the company. Airtame recently released four Homescreen apps, providing “simple app integrations for better team collaboration and digital signage.” These deployments are controlled via Airtame Cloud, which was launched in early 2017. The cloud service enables enterprise and educational customers to monitor their Airtame devices, perform bulk updates, and add updated content directly from the cloud.
Four times the RAM, five times the WiFi performance
The Airtame 2 offers the same basic functionality as the Airtame 1, but it adds a number of performance benefits. It moves from the DualLite version of the NXP i.MX6 to the similarly dual-core, Cortex-A9 Dual model. This has the same 1GHz clock rate, but with a more advanced Vivante GC2000 GPU. Output resolution via the HDMI 1.4b port stays the same at 1920×1080, but you now get a 60fps frame rate instead of 30fps. As before, you can plug into VGA or DVI ports using adapters.
More importantly for performance, the Airtame 2 quadruples the RAM to 2GB. In place of an SD card slot, the firmware is stored on onboard eMMC.
The new Cypress (Broadcom) CYW89342 RSDB WiFi 5 chip is about five times faster than the original’s Qualcomm WiFi 4 (802.11n) chip, which also provided dual-band MIMO 2.4GHz/5.2GHz WiFi. The Airtame 2 has twice the range, at up to 20 meters, which is helpful for its enterprise and educational customers.
Other hardware improvements include a smaller, 77.9 x 13.5mm footprint, a Kensington Lock input, an LED, and a magnetic wall mount. A USB Type-C port replaces the power-only micro-USB OTG, adding support for HDMI, USB host, and Ethernet.
As before, there’s also a micro-USB host port that, with the help of an adapter, supports Ethernet and Power-over-Ethernet (PoE). Ethernet can run simultaneously with WiFi and can improve throughput and reliability, says Airtame. We saw no mention of the new product’s latency, but on the previous Airtame, WiFi streaming latency was one second with audio.
Once again, iOS 9 devices can mirror video using AirPlay. However, Android (4.2.2) devices are limited to the display of static images and PDF files, including non-animated PowerPoint presentations. Desktop support, which includes a special optimization for Chromebooks, covers Windows 10/7, Ubuntu 15.05, and Mac OS X 10.12.
DNS security is a decades-old issue that shows no signs of being fully resolved. Here’s a quick overview of some of the problems with proposed solutions and the best way to move forward.
…After many years of availability, DNSSEC has yet to attain significant adoption, even though any security expert you might ask recognizes its value. As with any public key infrastructure, DNSSEC is complicated. You must follow a lot of rules carefully, although some network services providers are trying to make things easier.
But DNSSEC does not encrypt the communications between the DNS client and server. Using the information in your DNS requests, an attacker between you and your DNS server could determine which sites you are attempting to communicate with just by reading packets on the network.
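You can see this for yourself with a packet capture (a minimal sketch; requires root, and you may need to name your network interface explicitly):

    # Watch plaintext DNS queries and responses leaving your machine
    sudo tcpdump -n port 53

Every hostname you look up appears in the capture in cleartext, which is exactly the information an on-path attacker can collect.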
So despite the best efforts of various Internet groups, DNS remains insecure. Too many roadblocks exist that prevent the Internet-wide adoption of a DNS security solution. But it is time to revisit these concerns.