One category that often gets overlooked in the discussion of Linux computers is the market for HDMI dongle devices that plug into your TV to stream, mirror, or cast content from your laptop or mobile device. Yesterday, Google announced an extensively leaked third-gen version of its market-leading, Linux-powered Chromecast device. The latest Chromecast has a new design and Google Home support, and it is claimed to have a 15 percent faster processor with support for 1080p@60 video. However, the rumored addition of Bluetooth did not materialize.
Here, we look at a similar Linux-based HDMI dongle device that launched this morning with a somewhat different feature set and market focus. The Airtame 2 is the first hardware overhaul since the original Airtame generated $1.3 million on Indiegogo in 2013. The new version quadruples the RAM, improves the Fedora Linux firmware, and advances to dual-band 802.11a/b/g/n/ac, which is now known as WiFi 5 in the new Wi-Fi Alliance naming scheme that accompanied its recent WiFi 6 (ax) announcement.
In its first year, Copenhagen, Denmark-based Airtame struggled to fulfill its Indiegogo orders and almost collapsed in the process. Yet, the company went on to find success and recently surpassed 100,000 device shipments. With a growing focus on enterprise and educational markets, Airtame upgraded its software with cloud device management features, and expanded its media sources beyond cross-platform desktops to Android and iOS devices.
The key difference from Chromecast is that Airtame supports mirroring to multiple devices at once, as long as your video is coming from a laptop or desktop rather than a mobile device. Chromecast also requires the Chrome browser, and it lacks cloud-based device management features.
Combined with Chromecast’s dominance of the low-end entertainment segment, thanks in part to its $35 price tag, Airtame’s advantages led the company to focus more on the enterprise, signage, and educational markets. Unfortunately, the Airtame 2 price went up by $100 to $399 per device.
Airtame 2 extends its enterprise trajectory by “re-imagining how to turn blank screens into smart, collaborative displays,” says the company. Airtame recently released four Homescreen apps, providing “simple app integrations for better team collaboration and digital signage.” These deployments are controlled via Airtame Cloud, which was launched in early 2017. The cloud service enables enterprise and educational customers to monitor their Airtame devices, perform bulk updates, and add updated content directly from the cloud.
Four times the RAM, five times the WiFi performance
The Airtame 2 offers the same basic functionality as the Airtame 1, but it adds a number of performance benefits. It moves from the DualLite version of the NXP i.MX6 to the similarly dual-core, Cortex-A9 Dual model. This has the same 1GHz clock rate, but with a more advanced Vivante GC2000 GPU. Output resolution via the HDMI 1.4b port stays the same at 1920×1080, but you now get a 60fps frame rate instead of 30fps. As before, you can plug into VGA or DVI ports using adapters.
More importantly for performance, the Airtame 2 quadruples the RAM to 2GB. In place of an SD card slot, the firmware is stored on onboard eMMC.
The new Cypress (Broadcom) CYW89342 RSDB WiFi 5 chip is about five times faster than the original’s Qualcomm WiFi 4 (802.11n) chip, which also provided dual-band MIMO 2.4GHz/5.2GHz WiFi. The Airtame 2 has twice the range, at up to 20 meters, which is helpful for its enterprise and educational customers.
Other hardware improvements include a smaller, 77.9 x 13.5mm footprint, a Kensington lock slot, an LED, and a magnetic wall mount. A USB Type-C port replaces the power-only micro-USB OTG port, adding support for HDMI, USB host, and Ethernet.
As before, there’s also a micro-USB host port that, with the help of an adapter, supports Ethernet and Power-over-Ethernet (PoE). Ethernet can run simultaneously with WiFi and can improve throughput and reliability, says Airtame. We saw no mention of the new product’s latency, but on the previous Airtame, WiFi streaming latency was one second with audio.
Once again, iOS 9 devices can mirror video using AirPlay. However, Android (4.2.2) devices are limited to the display of static images and PDF files, including non-animated PowerPoint presentations. Desktop support, which also includes a special optimization for Chromebooks, covers Windows 10/7, Ubuntu 15.04, and Mac OS X 10.12.
DNS security is a decades-old issue that shows no signs of being fully resolved. Here’s a quick overview of some of the problems with proposed solutions and the best way to move forward.
…After many years of availability, DNSSEC has yet to attain significant adoption, even though any security expert you might ask recognizes its value. As with any public key infrastructure, DNSSEC is complicated. You must follow a lot of rules carefully, although some network services providers are trying to make things easier.
But DNSSEC does not encrypt the communications between the DNS client and server. Using the information in your DNS requests, an attacker between you and your DNS server could determine which sites you are attempting to communicate with just by reading packets on the network.
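To make the exposure concrete, here is a minimal sketch in C (illustrative only, not taken from any tool mentioned here) that assembles a plain DNS query for example.com in memory and dumps its bytes. The queried name shows up as readable text, which is exactly what an on-path observer sees when the query travels unencrypted over UDP port 53.

```c
/* Illustrative sketch: build a minimal DNS query and show that the queried
 * hostname is plainly visible in the packet bytes. No network access needed. */
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint8_t pkt[512];
    size_t off = 0;

    /* 12-byte DNS header: arbitrary ID, RD flag set, one question */
    const uint8_t header[12] = {0x12, 0x34, 0x01, 0x00, 0x00, 0x01,
                                0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
    memcpy(pkt, header, sizeof header);
    off = sizeof header;

    /* QNAME: each label is length-prefixed; a zero byte terminates the name */
    const char *labels[] = {"example", "com", NULL};
    for (int i = 0; labels[i] != NULL; i++) {
        size_t len = strlen(labels[i]);
        pkt[off++] = (uint8_t)len;
        memcpy(pkt + off, labels[i], len);
        off += len;
    }
    pkt[off++] = 0x00;                    /* end of QNAME */
    pkt[off++] = 0x00; pkt[off++] = 0x01; /* QTYPE  = A  */
    pkt[off++] = 0x00; pkt[off++] = 0x01; /* QCLASS = IN */

    /* Dump the query as an eavesdropper on the network path would see it */
    for (size_t i = 0; i < off; i++)
        printf("%02x%c", pkt[i], (i + 1) % 16 == 0 ? '\n' : ' ');
    printf("\nASCII view: ");
    for (size_t i = 0; i < off; i++)
        putchar(isprint(pkt[i]) ? pkt[i] : '.');
    putchar('\n');
    return 0;
}
```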
So, despite the best efforts of various Internet groups, DNS remains insecure. Too many roadblocks prevent the Internet-wide adoption of a DNS security solution. But it is time to revisit the concerns.
In the previous article I gave you tips for how to receive feedback, especially in the context of your first free and open source project contribution. Now it’s time to talk about the other side of that same coin: providing feedback.
If I tell you that something you did in your contribution is “stupid” or “naive,” how would you feel? You’d probably be angry, hurt, or both, and rightfully so. These are mean-spirited words that, when directed at people, can cut like knives. Words matter, and they matter a great deal. Therefore, put as much thought into the words you use when leaving feedback for a contribution as you do into any other form of contribution you make to the project. As you compose your feedback, ask yourself, “How would I feel if someone said this to me? Is there some way someone might take this another way, a less helpful way?” If the answer to that last question has even a chance of being yes, backtrack and rewrite your feedback. It’s better to spend a little time rewriting now than to spend a lot of time apologizing later.
The Ovum Decision Matrix Research Report discusses the impact of two major shifts in cloud adoption:
The growing impact of Shadow IT in enterprises.
The need to migrate workloads to the cloud.
We also see a clear third shift: the need to develop Cloud Native applications for new business areas: applications that were born in the cloud, and use all-cloud resources.
These trends have created the need for greater environment visibility and control across hybrid infrastructure. As the Ovum report points out, the duality of this situation is that cloud-native workloads need to be managed in a similar manner to VMs on private clouds.
This key requirement has created the need for greater visibility and control over all the environments in use, whether private or public, and whether the infrastructure runs VMs, containers, serverless functions, or legacy bare-metal applications.
The market for multicloud and hybrid cloud management is still evolving, and many of the vendors come from the virtualization management space. While this seems a sensible evolution, the challenge is that the new cloud-native workloads (those already in the cloud) do not look like or operate in the same way as VMs. The difference between these two paradigms needs to be abstracted away from both developers and infrastructure teams. Established vendors are struggling to balance this new world with VM-centric infrastructure.
So what are the key lessons we’ve learned over the years, working with customers on enabling them to effectively manage their complex, hybrid environments?
Software is useless if computers can’t run it. Even the most talented developer is at the mercy of the compiler when it comes to run-time performance; if you don’t have a reliable compiler toolchain, you can’t build anything serious. The GNU Compiler Collection (GCC) provides a robust, mature, and high-performance partner to help you get the most out of your software. With decades of development by thousands of people, GCC is one of the most respected compilers in the world. If you are building applications and not using GCC, you are missing out on the best possible solution.
GCC is the “de facto-standard open source compiler today” [1] according to LLVM.org, and the foundation used to build complete systems, from the kernel upwards. GCC supports over 60 hardware platforms, including ARM, Intel, AMD, IBM POWER, SPARC, HP PA-RISC, and IBM Z, as well as a variety of operating environments, including GNU, Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS. It offers highly compliant C/C++ compilers and support for popular C libraries, such as GNU C Library (glibc), Newlib, musl, and the C libraries included with various BSD operating systems, as well as front ends for the Fortran, Ada, and Go languages. GCC also functions as a cross compiler, creating executable code for a platform other than the one on which the compiler is running. GCC is the core component of the tightly integrated GNU toolchain, produced by the GNU Project, which includes glibc, Binutils, and the GNU Debugger (GDB).
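As a rough illustration of that cross-compilation workflow, the sketch below pairs a trivial C program with a native GCC invocation and an example cross build. The arm-linux-gnueabihf-gcc prefix is just one common toolchain name, not a requirement; the exact prefix depends on the GCC cross toolchain you install.

```c
/* hello.c: a minimal program for trying native and cross compilation.
 *
 * Native build (host):    gcc -O2 -o hello hello.c
 * Cross build (example):  arm-linux-gnueabihf-gcc -O2 -o hello-arm hello.c
 *
 * The cross-compiler name above is one common prefix; substitute whichever
 * GCC cross toolchain you actually have installed. */
#include <stdio.h>

int main(void) {
#ifdef __arm__
    const char *target = "an ARM target";   /* defined by GCC when targeting ARM */
#else
    const char *target = "the build host";
#endif
    printf("Hello from GCC on %s\n", target);
    return 0;
}
```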
“My all-time favorite GNU tool is GCC, the GNU Compiler Collection. At a time when developer tools were expensive, GCC was the second GNU tool and the one that enabled a community to write and build all the others. This tool single-handedly changed the industry and led to the creation of the free software movement, since a good, free compiler is a prerequisite to a community creating software.” —Dave Neary, Open Source and Standards team at Red Hat. [2]
Optimizing Linux
As the default compiler for the Linux kernel source, GCC delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. GCC is a standard component of popular Linux distributions, such as Arch Linux, CentOS, Debian, Fedora, openSUSE, and Ubuntu, where it routinely compiles supporting system components. This includes the default libraries used by Linux (such as libc, libm, libintl, libssh, libssl, libcrypto, libexpat, libpthread, and ncurses), which depend on GCC to provide correctness and performance and are used by applications and system utilities to access Linux kernel features. Many of the application packages included with a distribution are also built with GCC, such as Python, Perl, Ruby, nginx, Apache HTTP Server, OpenStack, Docker, and OpenShift. This combination of kernel, libraries, and application software translates into a large volume of code built with GCC for each Linux distribution. For the openSUSE distribution, nearly 100% of native code is built by GCC, including 6,135 source packages producing 5,705 shared libraries and 38,927 executables. This amounts to about 24,540 source packages compiled weekly. [3]
The base version of GCC included in Linux distributions is used to create the kernel and libraries that define the system Application Binary Interface (ABI). User space developers have the option of downloading the latest stable version of GCC to gain access to advanced features, performance optimizations, and improvements in usability. Linux distributions offer installation instructions or prebuilt toolchains for deploying the latest version of GCC along with other GNU tools that help to enhance developer productivity and improve deployment time.
Optimizing the Internet
GCC is one of the most widely adopted core compilers for embedded systems, enabling the development of software for the growing world of IoT devices. GCC offers a number of extensions that make it well suited for embedded systems software development, including fine-grained control using compiler built-ins, #pragmas, inline assembly, and application-focused command-line options. GCC supports a broad base of embedded architectures, including ARM, AMCC, AVR, Blackfin, MIPS, RISC-V, Renesas Electronics V850, and NXP and Freescale Power-based processors, producing efficient, high-quality code. The cross-compilation capability offered by GCC is critical to this community, and prebuilt cross-compilation toolchains [4] are a major requirement. For example, the GNU ARM Embedded toolchains are integrated and validated packages featuring the Arm Embedded GCC compiler, libraries, and other tools necessary for bare-metal software development. These toolchains are available for cross-compilation on Windows, Linux, and macOS host operating systems and target the popular ARM Cortex-R and Cortex-M processors, which have shipped in tens of billions of internet-capable devices. [5]
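The short sketch below illustrates a few of these extensions: a packed structure attribute, the __builtin_expect branch hint, and an inline-assembly compiler barrier. It is a generic illustration rather than code from any particular embedded project.

```c
/* Illustrative use of common GCC extensions for embedded-style code. */
#include <stdio.h>
#include <stdint.h>

/* Lay the struct out exactly as the wire/hardware format requires (4 bytes) */
struct __attribute__((packed)) sensor_frame {
    uint8_t  id;
    uint16_t value;
    uint8_t  checksum;
};

/* Hint to the optimizer that the error path is rarely taken */
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Inline assembly used here only as a compiler barrier; this form is valid on
 * any target, whereas real firmware often uses target-specific instructions. */
static inline void compiler_barrier(void) {
    __asm__ volatile("" ::: "memory");
}

static int process_frame(const struct sensor_frame *f) {
    if (unlikely(f == NULL))
        return -1;
    compiler_barrier();   /* keep this access ordered relative to other I/O */
    return f->id + f->value;
}

int main(void) {
    struct sensor_frame f = { .id = 1, .value = 42, .checksum = 0 };
    printf("sizeof(sensor_frame) = %zu, result = %d\n",
           sizeof(struct sensor_frame), process_frame(&f));
    return 0;
}
```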
GCC empowers cloud computing, providing a reliable development platform for software that needs to directly manage computing resources, such as database and web serving engines and backup and security software. GCC is fully compliant with C++11 and C++14 and offers experimental support for C++17 and C++2a [6], creating performant object code with solid debugging information. Some examples of applications that utilize GCC include MySQL Database Management System, which requires GCC for Linux [7]; the Apache HTTP Server, which recommends using GCC [8]; and Bacula, an enterprise-ready network backup tool that requires GCC. [9]
Optimizing Everything
For the research and development of the scientific codes used in High Performance Computing (HPC), GCC offers mature C, C++, and Fortran front ends as well as support for the OpenMP and OpenACC APIs for directive-based parallel programming. Because GCC offers portability across computing environments, it enables code to be more easily targeted and tested across a variety of new and legacy client and server platforms. GCC offers full support for OpenMP 4.0 in its C, C++, and Fortran compilers and full support for OpenMP 4.5 in its C and C++ compilers. For OpenACC, GCC supports most of the 2.5 specification along with performance optimizations, and it is the only non-commercial, non-academic compiler to provide OpenACC support.
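For readers unfamiliar with the directive-based style, here is a minimal OpenMP example in C (a hypothetical saxpy.c, not code from the study cited below) that GCC parallelizes when built with the -fopenmp flag, for example: gcc -O2 -fopenmp -o saxpy saxpy.c.

```c
/* saxpy.c: a minimal OpenMP illustration. Build with: gcc -O2 -fopenmp saxpy.c */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    const float a = 2.5f;

    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* GCC distributes the iterations of this loop across threads */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f (up to %d threads)\n", y[N - 1], omp_get_max_threads());
    return 0;
}
```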
Code performance is an important consideration for this community, and GCC offers a solid performance base. A November 2017 paper published by Colfax Research evaluates C++ compilers for the speed of compiled code parallelized with OpenMP 4.x directives and for compilation speed. Figure 1 plots the relative performance of the computational kernels when compiled by the different compilers and run with a single thread. The performance values are normalized so that the performance of G++ is equal to 1.0.
Figure 1. Relative performance of each kernel as compiled by the different compilers (single-threaded; higher is better).
The paper summarizes: “the GNU compiler also does very well in our tests. G++ produces the second fastest code in three out of six cases and is amongst the fastest compiler in terms of compile time.” [10]
Who Is Using GCC?
In JetBrains’ 2018 State of Developer Ecosystem Survey of 6,000 developers, GCC is regularly used by 66% of C++ programmers and 73% of C programmers. [11] Here is a quick summary of the benefits of GCC that make it so popular with the developer community.
For developers required to write code for a variety of new and legacy computing platforms and operating environments, GCC delivers support for the broadest range of hardware and operating environments. Compilers offered by hardware vendors focus mainly on support for their own products, and other open source compilers are much more limited in the hardware and operating systems they support. [12]
There is a wide variety of GCC-based prebuilt toolchains, which has particular appeal to embedded systems developers. These include the GNU ARM Embedded toolchains and the 138 pre-compiled cross-compiler toolchains available on the Bootlin website. [13] While other open source compilers, such as Clang/LLVM, can replace GCC in existing cross-compiling toolchains, those toolchains would need to be completely rebuilt by the developer. [14]
GCC delivers trusted, stable performance to application developers from a mature compiler platform. The GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC article provides results of 49 benchmarks run across the four tested compilers at three optimization levels. GCC 8.2 RC1 using the “-O3 -march=native” level came in first 34% of the time, while at the same optimization level LLVM Clang 6.0 came in second with wins 20% of the time. [15]
GCC delivers improved diagnostics for compile-time debugging [16] and accurate and useful information for runtime debugging. GCC is tightly integrated with GDB, a mature and feature-complete tool that offers ‘non-stop’ debugging, which can stop a single thread at a breakpoint while the others continue to run.
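As a simple way to see that behavior, the sketch below is a small two-thread program (an illustrative exercise, not a workflow prescribed in this article); its comments show a debug build with -g and the GDB commands that enable non-stop mode, so that only the thread hitting the breakpoint is stopped.

```c
/* worker.c: a two-thread program for experimenting with GDB non-stop mode.
 *
 * Build with debug info:   gcc -g -O0 -pthread -o worker worker.c
 * Then in GDB:
 *     (gdb) set pagination off
 *     (gdb) set non-stop on
 *     (gdb) break tick
 *     (gdb) run
 * Only the thread that reaches the breakpoint stops; the other keeps running. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *tick(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 5; i++) {
        printf("%s: tick %d\n", name, i);
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, tick, "worker-a");
    pthread_create(&b, NULL, tick, "worker-b");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```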
GCC is a well-supported platform with an active, committed community that supports the current and two previous releases. With releases scheduled yearly, this provides two years of support for each version.
GCC: Continuing to Optimize Linux, the Internet, and Everything
GCC continues to move forward as a world-class compiler. The most current version of GCC is 8.2, which was released in July 2018 and added hardware support for upcoming Intel CPUs and more ARM CPUs, as well as improved performance for AMD’s Zen CPUs. Initial C17 support has been added, along with initial work towards C++2a. Diagnostics continue to be enhanced, with improved locations, location ranges, and fix-it hints, particularly in the C++ front end. A blog post written by David Malcolm of Red Hat in March 2018 provides an overview of usability improvements in GCC 8. [17]
New hardware platforms continue to rely on the GCC toolchain for software development, such as RISC-V, a free and open ISA that is of interest to the machine learning, Artificial Intelligence (AI), and IoT market segments. GCC continues to be a critical component in the continuing development of Linux systems. The Clear Linux Project for Intel Architecture, an emerging distribution built for cloud, client, and IoT use cases, provides a good example of how GCC compiler technology is being used and improved to boost the performance and security of a Linux-based system. GCC is also being used for application development for Microsoft’s Azure Sphere, a Linux-based operating system for IoT applications that initially supports the ARM-based MediaTek MT3620 processor. In terms of developing the next generation of programmers, GCC is also a core component of the Windows toolchain for the Raspberry Pi, the low-cost embedded board running Debian-based GNU/Linux that is used to promote the teaching of basic computer science in schools and developing countries.
GCC was first released on March 22, 1987 by Richard Stallman, the founder of the GNU Project, and was considered a significant breakthrough since it was the first portable ANSI C optimizing compiler released as free software. GCC is maintained by a community of programmers from all over the world under the direction of a steering committee that ensures broad, representative oversight of the project. GCC’s community approach is one of its strengths, resulting in a large and diverse community of developers and users who contribute to and provide support for the project. According to Open Hub, GCC “is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.” [18]
There has been a lot of discussion about the licensing of GCC, most of which confuses rather than enlightens. GCC is distributed under the GNU General Public License version 3 or later with the Runtime Library Exception. This is a copyleft license, which means that derivative work can only be distributed under the same license terms. GPLv3 is intended to protect GCC from being made proprietary and requires that changes to GCC code are made available freely and openly. To the ‘end user’ the compiler is just the same as any other; using GCC makes no difference to any licensing choices you might make for your own code. [19]
3. Information provided by SUSE based on recent build statistics. There are other source packages in openSUSE that do not generate an executable image and these are not included in the counts.
Margaret Lewis is a technology consultant who previously served as Director of Software Planning at AMD and an Associate Director at the Maui High Performance Computing Center.
Thanks to the GCC Steering Committee and GNU Cauldron participants for their support and feedback on early drafts of this paper.
End-user computing (EUC) is changing quickly, and dramatically. In my work, I hear every day just how vital it is that organizations deliver better security, manageability, and user experience. This is creating increasing pressure on the status quo of operating systems for end-user devices: Windows. And Windows simply can’t keep up with the requirements.
What you may not know is that this pressure is also giving rapid rise to the broad use of Linux on endpoint devices. In fact, according to a new IDC InfoBrief, Linux is the only endpoint operating system (OS) growing at a global level. (Full disclosure: IGEL sponsored the report.) While Windows market share remains flat, at 39% in 2015 and 2017, Linux has grown from 30% in 2015 to 35% in 2017, worldwide. And the trend is accelerating.
So, what is it about Linux that makes it so attractive for endpoint devices? Consider these factors:
Serverless architecture is not needed to use Function as a Service (FaaS). In fact, 54 percent of FaaS users without production deployments say their organization does not utilize a serverless architecture, according to a survey conducted in August 2018. In comparison, 96 percent of organizations with FaaS broadly deployed say they use a serverless architecture. Our upcoming Guide to Serverless Technologies covers serverless architecture, technology and computing, and will include the complete study results.
The survey defined FaaS as typically providing event-driven computing where developers run and manage application code with functions that are triggered by events or HTTP requests. Serverless architecture broadly describes an application design that incorporates third-party Backend as a Service (BaaS) services, and/or that includes custom code run in managed environments on a FaaS platform. In many ways, serverless architecture looks similar to other application designs focused on events and microservices.
If you’re the type of person who uses the word “vuln” as a shorthand for code vulnerabilities, you should check out the presentation from the recent Linux Security Summit called “Security in Zephyr and Fuchsia.” In the talk, two researchers from the National Security Agency discuss their contributions to the nascent security stacks of two open source OS projects: Zephyr and Fuchsia.
In this presentation, Microsoft’s Ryan Fairfax explained how to fit an entire Linux stack into 4 MiB of RAM. Yet the hard part, according to Fairfax, was not so much the kernel modification as the development of the rest of the stack. This includes the custom Linux Security Module, which coordinates with the Cortex-M4’s proprietary Pluton security code using a mailbox-based protocol.
In this article, Swapnil Bhartiya interviewed Linux kernel maintainer Greg Kroah-Hartman about how the kernel community is hardening Linux against vulnerabilities. You can see excerpts from their talk in the accompanying video.
The next Linux Security Summit Europe, coming up October 25-26 in Edinburgh, offers more essential security information, with refereed presentations, discussion sessions, subsystem updates, and more. There’s still time to register and attend! Check out the full schedule and stay tuned for more coverage.
“It is very surprising to people just how much open source software is used in modern software development,” observed Jeff Luszcz, Vice President of Product Management at US software specialist Flexera, during a recent Automotive World webinar. According to Flexera research, more than 50% of all code written today is open source. “If we look at any software product ten to 20 years ago, OSS was just filling in the gaps, but my experience is that 100% of IT organisations are using OSS today.”
Digital trends
Software content has become a differentiator for automakers, who had once relied on factors such as drivability or exterior design to sell their cars. Today, software-based features are often the focal point for advertising campaigns.
This software is derived from a multitude of different sources, and rarely originates from a single developer. Many vehicles today leverage Linux, an open source operating system. “Every device in a modern vehicle contains anywhere between 80 to 100 million lines of code. Some vendors have ten or more Linux computers running at the same time, and while these computers may have very similar software stacks, they will likely have been put together by different development teams, and have different reasons for existing,” added Luszcz.
Open source networking has become the ‘new norm,’ and many at the recent Open Networking Summit Europe said they’re seeing it play out in the industry.
If you weren’t in Amsterdam for Open Networking Summit Europe 2018, you missed an extremely exciting conference. This Linux Foundation event drew more than 700 networking, development, and operations leaders and enterprise users from open source service providers, cloud companies, and more.
Chief among the conference themes was the idea that open source networking is the “new norm,” with many vendors attesting to how this theme is playing out in the IT industry. Dan Kohn, who leads the Linux Foundation’s Cloud Native Computing Foundation, cites cost savings, improved resilience, and higher development velocity for both bug fixes and the rollout of new features as drivers of this change. Arpit Joshipura, General Manager of Networking at The Linux Foundation, used the term “open-sourcification” in his keynote.