This post is a super simple introduction to CUDA, the popular parallel computing platform and programming model from NVIDIA. I wrote a previous “Easy Introduction” to CUDA in 2013 that has been very popular over the years. But CUDA programming has gotten easier, and GPUs have gotten much faster, so it’s time for an updated (and even easier) introduction.
CUDA C++ is just one of the ways you can create massively parallel applications with CUDA. It lets you use the powerful C++ programming language to develop high performance algorithms accelerated by thousands of parallel threads running on GPUs. Many developers have accelerated their computation- and bandwidth-hungry applications this way, including the libraries and frameworks that underpin the ongoing revolution in artificial intelligence known as Deep Learning.
Serverless computing is fast becoming the hottest trend in the channel since the cloud. What is the open source ecosystem doing to keep pace with the serverless trend, and why does it matter? Here’s a look.
Serverless computing is a paradigm in which developers deploy and run code on demand, without having to maintain a backend server at all. The term is a little misleading; serverless computing does not mean there is no server involved. A server still runs your code, but you don’t have to think about the server when deploying it.
When it comes to deploying apps, serverless computing offers some key advantages. It eliminates the need to set up and maintain a virtual server in the cloud.
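To make the model concrete, here is a minimal sketch of what such a function might look like on a Node.js serverless platform such as AWS Lambda; the event shape and response format are illustrative:
// A self-contained function: the platform provisions, runs, and scales it on demand.
// There is no server process to configure; the provider invokes the handler per request.
exports.handler = function(event, context, callback){
    var name = (event && event.name) || "world";
    // Return a response to the caller; no listening socket, no daemon, no VM to manage.
    callback(null, { statusCode: 200, body: "Hello, " + name + "!" });
};
You deploy just this function; everything beneath it is the provider’s problem.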
The terminal emulator is a venerable but essential tool for computer users. Much of Linux’s power comes from the command line: the Linux shell can do a great deal, and a terminal emulator puts that power on the desktop. So many terminal emulators are available for Linux that the choice can be bewildering.
The terminal window gives users access to a console and all its applications, such as command-line interfaces (CLIs) and text-based user interface software. Even with the sophistication of modern desktop environments, packed with administrative tools, utilities, and productivity software all sporting attractive graphical user interfaces, some tasks are still best performed at the command line.
Executives, experts, analysts, and leaders in open source at some of the world’s largest and most successful companies will speak at the invitation-only Open Source Leadership Summit next month in Lake Tahoe, The Linux Foundation has announced.
AT&T, Cloud Foundry Foundation, Goldman Sachs, Google, IBM, IDC, Leading Edge Forum, Mozilla, and VMware are among the many organizations that will share insights on how to start, build, participate in, and advance open source strategy and development.
The event, set to take place Feb. 14-16, will feature keynotes by Camille Fournier, former CTO of Rent the Runway and author of O’Reilly’s forthcoming book The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change; Dan Lyons, New York Times best-selling author of Disrupted; Donna Dillenberger, IBM Fellow at the Watson Research Center; and entrepreneur William Hurley, aka ‘whurley,’ whose retirement savings startup Honest Dollar was acquired last year by Goldman Sachs.
Other featured keynotes include:
Katharina Borchert, Chief Innovation Officer, and Patrick Finch, Strategy Director, both of Mozilla, will discuss community innovation.
Al Gillen, GVP of Software Development and Open Source at IDC, will provide an analysis of open source in 2017 and beyond.
Abby Kearns, Executive Director of Cloud Foundry Foundation, will share how cross-foundation collaboration is a win for open source.
Chris Rice, SVP of Domain 2.0 Design and Architecture at AT&T Labs, will talk about the future of networking and orchestration.
And more.
Open Source Leadership Summit is where open source leaders and visionaries come together, share best practices, and get the latest information on open source for business advantage. This conference is the place to be if your business is among the many companies across diverse industries that are discovering the strategic benefits of using open source software and participating in its development.
The event will also feature more than 50 educational sessions covering best practices, the future of open source, leadership strategy, open source project updates, compliance and standards, professional open source management, and more. Attendees can also take part in Open Spaces unconference sessions, pre- and post-summit activities, and evening events geared toward small-group collaboration and networking. See the full schedule.
Open source collaboration is a strong economic force that’s transforming diverse industries. No area of technology is untouched. Open Source Leadership Summit is the place to learn how this transformation is happening and how your company can be involved and benefit from it.
How mature is your organization’s open source software management? Take our short sample POSMA (Professional Open Source Management Assessment) survey for a ballpark score. Take the survey!
The Linux Foundation’s new Kubernetes training course is now available for developers and system administrators who want to learn container orchestration using this popular open source tool.
Kubernetes is quickly becoming the de facto standard to operate containerized applications at scale in the data center. As its popularity surges, so does demand for IT practitioners skilled in Kubernetes.
“Kubernetes is rapidly maturing in development tests and trials and within production settings, where its use has nearly tripled in the last eight months,” said Dan Kohn, executive director, the Cloud Native Computing Foundation.
Kubernetes Fundamentals (LFS258) is a self-paced, online course that teaches students how to use Kubernetes to manage their application infrastructure.
Students will learn the fundamentals needed to understand Kubernetes and quickly get up to speed to start building distributed applications that are scalable, fault-tolerant, and simple to manage.
The course distills key principles, such as pods, deployments, ReplicaSets, and services, and gives students enough information to start using Kubernetes on their own. And it’s designed to work with a wide range of Linux distributions, so students will be able to apply the concepts learned regardless of their distribution of choice.
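To give a flavor of those primitives, here is a minimal sketch (not part of the course materials) that lists the pods running in a namespace using the official Kubernetes JavaScript client, @kubernetes/client-node; the namespace is illustrative, and call signatures vary between releases of the client:
// Minimal sketch: connect with the local kubeconfig and list pods in a namespace.
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();                          // reads ~/.kube/config, as kubectl does

const core = kc.makeApiClient(k8s.CoreV1Api);  // client for core resources such as pods

// List every pod in the "default" namespace and print its name and phase.
core.listNamespacedPod('default')
    .then((res) => {
        res.body.items.forEach((pod) => {
            console.log(pod.metadata.name, pod.status.phase);
        });
    })
    .catch((err) => console.error(err));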
LFS258 will also help prepare those planning to take the Kubernetes certification exam, which launches later this year. Updates to the course are planned ahead of the exam’s launch and will be designed specifically to assist with exam preparation.
The course, which has been available for pre-registration since November, can be started immediately. The $199 course fee provides unlimited access to all course content and labs for one year. Sign up now!
Brent Roman describes the Environmental Sampler Processor (ESP), which performs a variety of chemical and genetic assays on samples it takes directly from its position moored 2 to 30 meters underwater. This Linux-controlled “lab in a can” was developed to identify health hazards, such as toxic algae blooms, in hours rather than days or weeks.
In the past few years, several cases of non-compliance with open source licenses have made their way to the public eye. The legal disposition of those cases has lessons to teach open source professionals. Here are my four top takeaways, gleaned from the years I’ve worked in open source.
1. Ensure Compliance Prior to Product Shipment/Service Launch
The most important lesson of non-compliance cases has been that the companies involved ultimately had to comply with the terms of the license(s) in question, and the costs of addressing the problem after the fact have categorically exceeded those of basic compliance. It is therefore simply smart to ensure compliance before a product ships or a service launches.
It is important to acknowledge that compliance is not just a legal department exercise. All facets of the company must be involved in ensuring proper compliance and contributing to correct open source consumption and, when necessary, redistribution.
This involvement includes establishing and maintaining consistent compliance policies and procedures as well as ensuring that the licenses of all the software components in use (proprietary, third-party, and open source) can co-exist before shipment or deployment.
To that effect, companies need to implement an end-to-end open source management infrastructure that will allow them to:
• Identify all open source used in products, presented in services, and/or used internally
• Perform architectural reviews to verify if and how open source license obligations are extending to proprietary and third-party software components
• Collect the applicable open source licenses for review by the legal department
• Develop open source use and distribution policies and procedures
• Mitigate risks through architecture design and engineering practices
2. Non-Compliance is Expensive
Most of the public cases related to non-compliance have involved GPL source code. Those disputes reached a settlement agreement that included one or more of these terms:
• Take necessary action to become compliant
• Appoint a Compliance Officer to monitor and ensure compliance
• Notify previous recipients of the product that the product contains open source software and inform them of their rights with respect to that software
• Publish licensing notice on company website
• Provide additional notices in product publications
• Make available the source code including any modifications applied to it (specific to the GPL/LGPL family of licenses)
• Cease binary distribution of the open source software in question until the complete corresponding source code has been released, or make that source available to the specific clients affected by the non-compliance
• In some cases, pay an undisclosed amount of financial consideration to the plaintiffs
Furthermore, the companies whose compliance has been successfully challenged have incurred costs that included:
• Discovery and diligence costs in response to the compliance inquiry, where the company had to investigate the allegation and perform due diligence on the source code in question
• Outside and in-house legal costs
• Damage to brand, reputation, and credibility
In almost all cases, the failure to comply with open source license obligations has also resulted in public embarrassment, negative press, and damaged relations with the open source community.
3. Relationships Matter
Companies using open source software in their commercial products would do well to develop and maintain good relationships with the members of the open source communities that create and sustain the code they consume. The communities of open source projects expect companies to honor the licenses of the open source software they include in their products. Taking steps in this direction, combined with an open and honest relationship, is very valuable.
4. Training is Important
Training is an essential building block in a compliance program, to ensure that employees have a good understanding of the policies governing the use of open source software. All personnel involved with software need to understand the company’s policies and procedures. Companies often provide such education through formal and informal training sessions.
Serverless applications abstract the app from the underlying infrastructure. And that changes the IT team’s approach to application performance monitoring.
Serverless applications aren’t for everyone, as they make monitoring more difficult. While scaling and cost savings may be worth it for some developers, serverless apps come with higher test requirements and different monitoring strategies than traditional applications. The best way to ensure serverless applications function as intended is to have consistent back-end tests. While this may not anticipate every scenario, it is a good way to prevent any sort of regression and guarantee that code is operating within expectations in production.
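As a rough illustration of that testing strategy, the sketch below invokes a serverless handler directly with a known event and asserts on the response, so a regression fails before deployment; the module path and event shape are hypothetical:
// Hypothetical back-end regression test for a serverless function.
const assert = require('assert');
const handler = require('./index').handler;   // the function under test (illustrative path)

handler({ name: "smoke-test" }, {}, function(err, res){
    // The function should succeed and respond as it did before the change.
    assert.strictEqual(err, null);
    assert.strictEqual(res.statusCode, 200);
    assert.ok(res.body.indexOf("smoke-test") !== -1);
    console.log("regression test passed");
});
Running the handler in-process like this catches behavioral regressions cheaply; it complements, rather than replaces, monitoring of the deployed function.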
At last October’s Embedded Linux Conference Europe, Brent Roman, an embedded software engineer at the Monterey Bay Aquarium Research Institute (MBARI), described the two-decade evolution of MBARI’s Linux-controlled Environmental Sampler Processor (ESP). Roman’s lessons in reducing power consumption on the remotely deployed, sensor-driven device are applicable to a wide range of remote Internet of Things projects. The take-home lesson: it’s not only about saving power on the device but also about the communications links.
The ESP is designed to quickly identify potential health hazards, such as toxic algae blooms. The microbiological analysis device is anchored offshore and moored at a position between two and 30 meters underwater where algae is typically found. It then performs chemical and genetic assays on water samples. Results are transferred over cable using RS-232 to a 2G modem mounted on a float, which transmits the data back to shore. There are 25 ESP units in existence at oceanographic labs around the world, with four currently deployed.
Over the years, Roman and his team have tried to reduce power consumption to extend the duration between costly site visits. Initially, the batteries lasted only about a month, but over time, this was extended to up to six months.
Before the ESP, the only way to determine the quantity and types of algae in Monterey Bay — and whether they were secreting neurotoxins — was to take samples from a boat and then return to shore for study. The analysis took several days from the point of sampling, by which time the ecosystem, including humans, might already be at risk. The ESP’s wirelessly connected automated sampling system provides a much earlier warning while also enabling more extensive oceanographic research.
Development on the ESP began in 1996, with initial testing in 2002. The key innovation was a system that coordinates the handling of identically sized filter pucks stored on a carousel. “Some pucks filter raw water, some preserve samples, and others facilitate processing for genetic identification,” said Roman. “We built a robot that acts as a jukebox for shifting these pucks around.”
The data transmitted back to headquarters consists of monochrome images of the samples. “Fiducial marks allow us to correlate spots with specific algae and other species such as deep water bacteria and human pathogens,” said Roman.
The ESP unit consists of 10 small servo motors to control the jukebox, plus eight rotary valves and 20 solenoid valves. It runs entirely on 360 D-cell batteries. “D cells are really energy dense, with as much energy per Watt as Li-Ion, but are much safer, cheaper, and more recyclable,” said Roman. “But the total energy budget was still 6kWh.”
The original system launched in 2002 ran Linux 2.4 on a Technologic Systems TS-7200 computer-on-module with a 200MHz ARM9 processor, 64MB RAM, and 16MB NOR flash. To extend battery life, Roman first focused on reducing the power load of a separate microcontroller that drove the servo motors.
“When we started building the system in 2002 we discovered that the microcontrollers that were designed for DC servos required buses like CAN and RS-485 with quiescent power draw in the Watt range,” said Roman. “With 10 servos, we would have a 10W quiescent load, which would quickly blow our energy budget. So we developed our own microcontroller using a multi-master I2C bus that we got down to 70mW for every two motors, or less than half a Watt for motor control.”
Even then, the ESP’s total 3W idle power draw limited the device to 70 days between recharges, well short of the initial 180-day goal. At first, 70 days was plenty. “We were lucky if we lasted a few weeks before something jammed or leaked and we had to go out and repair it,” said Roman. “But after a few years it became more reliable, and we needed longer battery life.”
The core problem was that the device used considerable power keeping the system partially awake. “We have to wait for the algae to come to the ESP, which means waiting until the sensors tell us we should take a sample,” said Roman. “Also, if the scientists spot a possible algae bloom in a satellite photo, they may want to radio the device to fire off a sample. The waiting game was killing us.”
In 2014, MBARI updated the system to run Linux 2.6 on a lower-power PC/104 carrier board designed in-house. The board integrated a more efficient Embedded Artists LPC3141 module with a 270MHz ARM9 CPU, 64MB RAM, and 256MB NAND.
The ESP design remained the same. An I2C serial bus links to the servo controllers, which are turned off during idle. Three RS-232 links connect to the sensors, and also communicate with the float’s 500mW cell modem. “RS-232 uses very little power and you can run it beyond recommended limits at up to 20 meters,” said Roman.
In 2014, when they mounted the more power-efficient LPC3141-based carrier as a drop-in replacement, the computer’s idle draw fell from almost 2.5W to 0.25W. Overall, the ESP system dropped from 3W to 1W idle power, which extended battery life to 205 days, or almost seven months.
The ESP enters rougher water
Monterey Bay is sufficiently sheltered to permit mooring the ESP about 10 meters below the surface. In more exposed ocean locations, however, the device needs to sit deeper to avoid “being pummeled by the waves,” said Roman.
MBARI has collaborated with other oceanographic research institutions to modify the device accordingly. Woods Hole Oceanographic Institution (WHOI), for example, began deploying ESPs off the coast of Maine. “WHOI needed a larger mooring about 25 meters below the surface,” said Roman. “The problem was that the algae were still way up above it, so they used a stretch rubber hose to pump the water down to the ESP.”
The greater distance required a switch from RS-232 to DSL, which boosted idle power draw to more than 8W. “Even when we retrofitted these units with the lower power CPU board, they only dropped from 8W to 6W, or only 60 days duration,” said Roman.
The Scripps Institution of Oceanography in La Jolla, California had a similar problem, as it was launching the ESP in the exposed coastal waters of the Pacific. Scripps similarly opted for a stretch hose, but used the more power-efficient RS-422 instead of DSL. RS-422 signals travel farther than RS-232, supporting both the 10-meter stretch hose and the 65-meter link to the float.
RS-422 draws more current than RS-232, however, limiting the Scripps units to 85 days. Roman considered suspending the system image to RAM. However, since the CPU was already very power efficient, “the energy you’re using to keep the RAM image refreshed is a fairly big part of the total, so we would have only gained 15 days,” said Roman. He also considered suspending to disk, but decided against it due to flash wear issues, and the fact that the Linux 2.6 ARM kernel they used did not support disk hibernation.
Ultimately, tradeoffs in functionality were required. “For Scripps, we made the whole system power on based on time rather than sensor input, so we could shut down the power until it received a radio command,” said Roman.
Due to the need to keep the radio on, even this yielded only enough power for 140 days. Roman dug into the AT command set of the 2G modems and found a deep-sleep standby option that essentially uses the modems as pagers. The solution reduced power from 500mW to 100mW for the modem, or 200mW overall.
The University of Washington came up with an entirely different solution to enable a deeper ESP mooring. “Rather than using an expensive stretch hose, they tried a 40-meter Cat5 cable to the surface, enabling an Ethernet connection that was more than 100 times faster than RS-232,” said Roman. This setup required a computer at both ends, however, as well as a cellular router that added 2-3 Watts.
Roman then came up with the idea to run USB signals over Cat5, avoiding the need for additional computers and routers while still enabling high-bandwidth communications. For this deployment, he used an Icron 1850 Cat5 USB extender, which he says works reliably at over 50 meters. The extender adds 400mW, plus another 150mW for the hub on the ESP.
Roman also described future plans to add energy harvesting to recharge the batteries. So far, putting a solar panel on the float seems to be the best solution due to the ease of maintenance. The downside to a solar panel is that the wind can more easily tip over the float. A larger float might help.
In summarizing all these projects, Roman concluded that reducing power consumption was a more complex problem than they had imagined. “We worried a lot about active consumption, but should have spent more time on passive, which we finally addressed,” said Roman. “But the real lesson was how important it was to look at the communications power consumption.”
For additional details, watch the full video below:
Embedded Linux Conference + OpenIoT Summit North America will be held on February 21-23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.
Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>
Keen.IO is a cloud platform that provides data collection and analysis services via a REST API. Keen offers SDKs for many programming languages, including browser-side JavaScript and Node.js.
In this tutorial, we use Tibbit #30 (an ambient humidity and temperature meter) to collect environmental data and store it on the Keen.IO server.
A very simple in-browser application fetches data from Keen.IO and displays charts in the browser window (the charts are rendered from data received from Keen.IO; they are not static images).
Data collection and representation are separated into the device.js and server.js applications.
Clone the repository and install the dependencies:
git clone https://github.com/tibbotech/keenio-tutorial.git
cd keenio-tutorial
npm install .
Launch the app:
node device
The comments in the code explain how it works:
// requires Keen.IO client module
const Keen = require('keen.io');

// requires Tibbo's humidity/temperature meter and sets it up to work with I2C line 4
const humTempMeter = require('@tibbo-tps/tibbit-30').init("S5");

// Binds the client to your account
const client = Keen.configure({
    projectId: "57066...........fe6a1279",
    writeKey: "0d2b95d4aa686e8274aa40604109d59c5..............4501378b3c193c3286608"
});

// Every minute..
setInterval(function(){
    // ..reads the environmental data from the meter..
    var data = humTempMeter.getData();

    // ..checks out if everything's correct..
    if(data.status === 1){
        var payload = {
            hum: data.humidity,
            temp: data.temperature
        };
        // ..and submits them to your event collection.
        client.addEvent("humtemp", payload, function(err){
            if(err !== null){
                console.log(err);
            }
        });
    }
}, 60000);
Web Interface
Installation
The web interface application can be installed on your PC or a remote server, or executed on the same LTPS device (as a separate process).
Install the application (skip if running on the same LTPS):
git clone https://github.com/tibbotech/keenio-tutorial.git
cd keenio-tutorial
npm install .
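The tutorial’s own server.js and browser code then render the charts. As a rough sketch of the browser side, assuming the keen-js 3.x browser SDK, a page could query the humtemp collection written by device.js and draw a chart; the read key placeholder, element ID, and chart options here are illustrative:
// Queries Keen.IO for the average temperature per hour over the last day
// and renders it as a line chart (keen-js 3.x style API).
var client = new Keen({
    projectId: "YOUR_PROJECT_ID",   // same project as device.js
    readKey: "YOUR_READ_KEY"        // a read key, not the write key device.js uses
});

Keen.ready(function(){
    var query = new Keen.Query("average", {
        eventCollection: "humtemp",   // the collection device.js writes to
        targetProperty: "temp",
        timeframe: "this_24_hours",
        interval: "hourly"
    });
    // Draws the query result into <div id="temp-chart"></div> on the page.
    client.draw(query, document.getElementById("temp-chart"), {
        chartType: "linechart",
        title: "Temperature, last 24 hours"
    });
});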