
Best Design Practices to Get the Most out of Your API

Practical techniques to ensure developers can actually do the things you want them to do using your API.

In the previous chapters, we gave an overview of various approaches for transmitting data via your web API. Now that you’re familiar with the landscape of transport and have an understanding of how to choose between various patterns and frameworks, we want to provide some tactical best practices to help your developers get the most out of your API.

Designing for Real-Life Use Cases

When designing an API, it’s best to make decisions that are grounded in specific, real-life use cases. Let’s dig into this idea a bit more. Think about the developers who are using your API. What tasks should they be able to complete with your API? What types of apps should developers be able to build? For some companies, this is as targeted as “developers should be able to charge customer credit cards.” For other companies, the answer can be more open-ended: “developers should be able to create a full suite of interactive consumer-quality applications.”
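To make this concrete, consider the "charge customer credit cards" use case. A use-case-driven API might expose a single, purpose-built endpoint that completes the whole task in one call. The Python sketch below is purely illustrative: the endpoint, field names, and identifiers are hypothetical, not taken from any real payments API.

```python
import requests

# Hypothetical task-oriented endpoint: one request completes the entire
# "charge a customer's card" use case.
response = requests.post(
    "https://api.example.com/v1/charges",              # invented endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
    json={
        "amount": 1999,         # amount in cents
        "currency": "usd",
        "customer": "cus_123",  # invented customer identifier
    },
)
response.raise_for_status()
print(response.json())
```

The design point is that the developer's goal maps directly onto one API call, rather than onto a sequence of lower-level operations they must orchestrate themselves.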

Read more at O’Reilly

Add It Up: Test Automation Is Not a Tooling Story

Test automation tools are still not widely used. Only 16 percent of performance test cases are executed with test automation tools, and security tests are automated at roughly the same rate, according to the World Quality Report (WQR) 2018-2019, which surveyed 1,700 IT decision makers (ITDMs) at companies with more than a thousand employees. Although QA and testing roles have been adapting to agile development practices, the majority of tests are still performed manually.

Read more at The New Stack

source{d} Engine: A Simple, Elegant Way to Analyze Your Code

With the recent advances in machine learning technology, it is only a matter of time before developers can expect to run full diagnostics and information retrieval on their own source code. This can include autocompletion, auto-generated user tests, more robust linters, automated code reviews, and more. I recently reviewed a new product in this sphere: the source{d} Engine.
source{d} offers a suite of applications that use machine learning on code to perform source code analysis and assisted code reviews. Chief among them is the source{d} Engine, now in public beta; it uses a suite of open source tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source code analysis. Some key uses of the source{d} Engine include language identification, parsing code into abstract syntax trees, and performing SQL queries on your source code, such as:

  • What are the top repositories in a codebase based on number of commits?

  • What is the most recent commit message in a given repository?

  • Who are the most prolific contributors in a repository?

Because source{d} Engine combines language-agnostic analysis with standard SQL queries, the range of questions you can ask of a codebase feels nearly unlimited; a sample query is sketched below.
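To give a flavor of these queries, here is a minimal sketch answering the last question above. The source{d} Engine exposes a MySQL-compatible SQL interface, so this example connects with the pymysql library; the host, port, credentials, and any database selection are assumptions about a default local setup, so adjust them to match your own.

```python
import pymysql

# Connect to the MySQL-compatible endpoint exposed by the engine.
# Host, port, user, and password here are assumed defaults; depending on
# your setup you may also need to select a database (conn.select_db(...)).
conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", password="")

try:
    with conn.cursor() as cur:
        # gitbase exposes git history as SQL tables such as `commits`.
        cur.execute(
            """
            SELECT commit_author_name, COUNT(*) AS commit_count
            FROM commits
            GROUP BY commit_author_name
            ORDER BY commit_count DESC
            LIMIT 10
            """
        )
        for author, count in cur.fetchall():
            print(f"{author}: {count} commits")
finally:
    conn.close()
```

The same pattern works for the other questions; only the SQL text changes.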

Figure 1: Basic database structure.

From minute one, using source{d} Engine was an easy, efficient process. I ran source{d} Engine chiefly on a virtual machine running Ubuntu 14.04 but also installed it on MacOS and Ubuntu 16.04 for comparison. On all three, installation was completely painless, although the Ubuntu versions seemed to run slightly faster. The source{d} Engine documentation is accurate and thorough; it correctly warned me that initializing the engine for the first time would take a fair amount of time, so I was prepared for the wait. I did have to debug a few errors, all relating to a previous SQL instance I had running, so more thorough troubleshooting documentation might be warranted.

Figure 2: Listing the top contributor of a given repository.

It’s simple to move between codebases using the commands srcd kill and srcd init. I wanted to explore many use cases, so I tested on a wide variety of codebases, ranging from one with a single contributor and only 5 commits to one with 10 contributors, thousands of lines of code, and hundreds of commits. source{d} Engine worked phenomenally with all of them, although the benefits are easier to see in a larger codebase.

Figure 3: Listing all commits from a repository — not so easy in a bigger codebase, but fantastic when there are only eight!

My favorite queries to run were those pertaining to commits. I am not a fan of the way GitHub organizes commit history, so I find myself coming back to source{d} Engine again and again when I want commit-history information. I’m also very impressed with the Universal Abstract Syntax Tree (UAST) concept. A UAST is a normalized form of an abstract syntax tree (AST), the structural representation of source code used for code analysis. Unlike ASTs, UASTs are language-agnostic: the same normalized node types can describe code written in any supported language. The UAST format enables further analysis and can be consumed by any tool in a standard, open way.
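For readers who want to see a UAST first-hand, Babelfish (one of the open source tools mentioned above) ships a Python client. The sketch below assumes a Babelfish server is already running on its default gRPC port 9432 and that the bblfsh package's client API matches what is shown; treat it as a starting point rather than definitive usage.

```python
from bblfsh import BblfshClient

# Connect to a running Babelfish server (default gRPC port 9432 assumed).
client = BblfshClient("0.0.0.0:9432")

# Parse a source file into a language-agnostic UAST.
response = client.parse("example.py")

# The UAST is a tree of normalized nodes that downstream tools can walk
# or query, regardless of the language the original file was written in.
print(response.uast)
```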

My only complaint is the (obvious and understandable) reliance on a base level of SQL knowledge. Because I was already very familiar with SQL, I was able to quickly pick up the source{d} Engine and create my own queries. However, if I had been shakier on the basics, I would’ve appreciated more example queries. Another minor complaint is that Python support currently appears to cover only Python 2, not Python 3.

Figure 4: Currently supported drivers.

I’m excited to follow the future of source{d} Engine and also source{d} Lookout (now in public alpha), the first step toward a suite of true machine-learning-on-code applications. I would love for the documentation of these upcoming applications to be more comprehensive, but since they are not fully available yet, what’s already there is a good start.

In general, I’m extremely impressed with the transparency of the company: not only are future products and applications clearly listed and described, but many internal company documents are also publicly available. This genuine dedication to open source software is admirable, and I hope more companies follow source{d}’s lead.

Lizzie Turner is a former digital marketing analyst studying full stack software engineering at Holberton School. She is currently looking for her first software engineering role and is particularly passionate about data and analytics. You can find Lizzie on LinkedIn, GitHub, and Twitter.

Exploring the Linux Kernel: The Secrets of Kconfig/kbuild

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.

Kconfig

The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets.

Read more at OpenSource.com

Microsoft Open-Sources Its Patent Portfolio

By joining the Open Invention Network, Microsoft is offering its entire patent portfolio to all of the open-source patent consortium’s members.

Several years ago, I said the one thing Microsoft has to do — to convince everyone in open source that it’s truly an open-source supporter — is stop using its patents against Android vendors. Now, it’s joined the Open Invention Network (OIN), an open-source patent consortium. Microsoft has essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members.

Before Microsoft joined, OIN had more than 2,650 community members and owned more than 1,300 global patents and applications. OIN is the largest patent non-aggression community in history and represents a core set of open-source intellectual-property values. Its members include Google, IBM, Red Hat, and SUSE. The OIN patent license and member cross-licenses are available royalty-free to anyone who joins the OIN community.

Read more at ZDNet

LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI

LinuxBoot is an open source alternative to proprietary UEFI firmware. It was released last year and is increasingly preferred by leading hardware manufacturers as default firmware. Last year, LinuxBoot was warmly welcomed into the open source family by The Linux Foundation.

The project was started in January 2017 as an initiative by Ron Minnich, author of LinuxBIOS and lead of coreboot at Google.

Google, Facebook, Horizon Computing Solutions, and Two Sigma collaborated to develop the LinuxBoot project (formerly called NERF) for Linux-based server machines.

Read more at It’sFOSS

Linux-Based Airtame 2 Offers an Enterprise Alternative to Chromecast

One category that often gets overlooked in the discussion of Linux computers is the market for HDMI dongle devices that plug into your TV to stream, mirror, or cast content from your laptop or mobile device. Yesterday, Google announced an extensively leaked third-gen version of its market-leading, Linux-powered Chromecast device. The latest Chromecast has a new design and Google Home support, and it is claimed to have a 15 percent faster processor with support for 1080p@60 video. However, the rumored addition of Bluetooth did not materialize.

Here, we look at a similar Linux-based HDMI dongle device that launched this morning with a somewhat different feature set and market focus. The Airtame 2 is the first hardware overhaul since the original Airtame generated $1.3 million on Indiegogo in 2013. The new version quadruples the RAM, improves the Fedora Linux firmware, and advances to dual-band 802.11a/b/g/n/ac, which is now known as WiFi 5 in the new Wi-Fi Alliance naming scheme that accompanied its recent WiFi 6 (ax) announcement.

In its first year, Copenhagen, Denmark-based Airtame struggled to fulfill its Indiegogo orders and almost collapsed in the process. Yet the company went on to find success and recently surpassed 100,000 device shipments. With a growing focus on enterprise and educational markets, Airtame upgraded its software with cloud device management features and expanded its media sources beyond cross-platform desktops to Android and iOS devices.

The key difference from Chromecast is that Airtame supports mirroring to multiple devices at once, as long as your video is coming from a laptop or desktop rather than a mobile device. Chromecast also requires the Chrome browser, and it lacks cloud-based device management features.

Combined with Chromecast’s dominance of the low-end entertainment segment, thanks in part to its $35 price tag, Airtame’s advantages led the company to focus more on the enterprise, signage, and educational markets. Unfortunately, the Airtame 2 price went up by $100 to $399 per device.

Airtame 2 extends its enterprise trajectory by “re-imagining how to turn blank screens into smart, collaborative displays,” says the company. Airtame recently released four Homescreen apps, providing “simple app integrations for better team collaboration and digital signage.” These deployments are controlled via Airtame Cloud, which was launched in early 2017. The cloud service enables enterprise and educational customers to monitor their Airtame devices, perform bulk updates, and add updated content directly from the cloud.

Four times the RAM, five times the WiFi performance

The Airtame 2 offers the same basic functionality as the Airtame 1, but it adds a number of performance benefits. It moves from the DualLite version of the NXP i.MX6 to the similarly dual-core, Cortex-A9 Dual model. This has the same 1GHz clock rate, but with a more advanced Vivante GC2000 GPU. Output resolution via the HDMI 1.4b port stays the same at 1920×1080, but you now get a 60fps frame rate instead of 30fps. As before, you can plug into VGA or DVI ports using adapters.

More importantly for performance, the Airtame 2 quadruples the RAM to 2GB. In place of an SD card slot, the firmware is stored on onboard eMMC.

The new Cypress (Broadcom) CYW89342 RSDB WiFi 5 chip is about five times faster than the original’s Qualcomm WiFi 4 (802.11n) chip, which also provided dual-band MIMO 2.4GHz/5.2GHz WiFi. The Airtame 2 has twice the range, at up to 20 meters, which is helpful for its enterprise and educational customers.

Other hardware improvements include a smaller, 77.9 x 13.5mm footprint, a Kensington Lock input, an LED, and a magnetic wall mount. A USB Type-C port replaces the power-only micro-USB OTG, adding support for HDMI, USB host, and Ethernet.

As before, there’s also a micro-USB host port that, with the help of an adapter, supports Ethernet and Power-over-Ethernet (PoE). Ethernet can run simultaneously with WiFi and can improve throughput and reliability, says Airtame. We saw no mention of the new product’s latency, but on the previous Airtame, WiFi streaming latency was one second with audio.

Once again, iOS 9 devices can mirror video using AirPlay. However, Android (4.2.2) devices are limited to displaying static images and PDF files, including non-animated PowerPoint presentations. Desktop support covers Windows 10/7, Ubuntu 15.05, and Mac OS X 10.12, with a special optimization for Chromebooks.


DNS Security Still an Issue

DNS security is a decades-old issue that shows no signs of being fully resolved. Here’s a quick overview of some of the problems, proposed solutions, and the best way to move forward.

After many years of availability, DNSSEC has yet to attain significant adoption, even though any security expert you ask recognizes its value. As with any public key infrastructure, DNSSEC is complicated: you must follow a lot of rules carefully, although some network service providers are trying to make things easier.

But DNSSEC does not encrypt the communications between the DNS client and the server. An attacker sitting between you and your DNS server can determine which sites you are attempting to reach simply by reading the packets on the network.
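To see how exposed a plain DNS lookup is, here is a minimal sketch using the dnspython library (a tool chosen for illustration; the article does not prescribe one). The query and the response travel as unencrypted UDP, so anyone on the network path can read the hostname being resolved.

```python
import dns.message
import dns.query

# Build a standard A-record query; nothing in the message is encrypted.
query = dns.message.make_query("example.com", "A")

# Send it over plain UDP to a public resolver (8.8.8.8 for illustration).
# A passive observer on the path sees "example.com" in cleartext, even if
# the zone is signed with DNSSEC.
response = dns.query.udp(query, "8.8.8.8", timeout=5)

for answer in response.answer:
    print(answer)
```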

So despite best efforts of various Internet groups, DNS remains insecure. Too many roadblocks exist that prevent the Internet-wide adoption of a DNS security solution. But it is time to revisit the concerns.

Read more at HPE

4 Best Practices for Giving Open Source Code Feedback

In the previous article I gave you tips for how to receive feedback, especially in the context of your first free and open source project contribution. Now it’s time to talk about the other side of that same coin: providing feedback.

If I tell you that something you did in your contribution is “stupid” or “naive,” how would you feel? You’d probably be angry, hurt, or both, and rightfully so. These are mean-spirited words that, when directed at people, can cut like knives. Words matter, and they matter a great deal. Therefore, put as much thought into the words you use when leaving feedback as you do into any other contribution you make to the project. As you compose your feedback, ask yourself, “How would I feel if someone said this to me? Might someone take this another, less helpful way?” If the answer to that last question has even a chance of being yes, backtrack and rewrite your feedback. It’s better to spend a little time rewriting now than a lot of time apologizing later.

Read more at OpenSource.com

Cloud Management: The Good, The Bad, and The Ugly – Part 2: 5 Key Capabilities for CMPs

See part 1 of this series here.

The Ovum Decision Matrix Research Report discusses the impact of two major shifts in cloud adoption:

  1. The growing impact of Shadow IT in enterprises.
  2. The need to migrate workloads to the cloud.

We also see a clear third shift: the need to develop cloud-native applications for new business areas; these are applications that were born in the cloud and use cloud resources exclusively.

These trends have created the need for greater environment visibility and control across hybrid infrastructure. As the Ovum report points out, the duality of this situation is that cloud-native workloads need to be managed in much the same way as VMs on private clouds.

This key requirement spans all the environments in use, whether private or public, and whether the infrastructure runs VMs, containers, serverless functions, or legacy bare-metal applications.

The market for multicloud and hybrid cloud management is still evolving, and many of the vendors come from the virtualization management space. While this seems a sensible evolution, the challenge is that the new cloud-native workloads (those already in the cloud) do not look or operate like VMs. The difference between these two paradigms needs to be abstracted away from both developers and infrastructure teams. Established vendors are struggling to balance this new world with VM-centric infrastructure.

So what are the key lessons we’ve learned over the years, working with customers on enabling them to effectively manage their complex, hybrid environments?

 

Read the full post on Platform9.