What happens when the way we buy, sell and pay for things changes, perhaps even removing the need for banks or currency exchange bureaus? That’s the radical promise of a world powered by cryptocurrencies like Bitcoin and Ethereum. We’re not there yet, but in this sparky talk, digital currency researcher Neha Narula describes the collective fiction of money and paints a picture of a very different-looking future.
Kurt Kremitzki, who is in his final year of studying biological and agricultural engineering at Texas A&M, visited a Mayan community in the Yucatan this spring to help design irrigation systems. He was one of 14 aspiring IT professionals to receive a 2016 Linux Foundation Training (LiFT) scholarship, announced last month.
Kurt was inspired to take the project a step further when he realized that a system of Raspberry Pis with cell phone connectivity and open source software could create an automated irrigation system based on weather reports and sensor readings. He is now working with a local university in Mexico to develop such a system, which is just the first step in his dream of using technology to find new ways to meet the world’s growing food needs.
LiFT scholarship recipient Kurt Kremitzki
Linux.com: How did you learn Linux?
Kurt Kremitzki: I was introduced to Linux in the era of Red Hat Linux 9, but I thought that *was* Linux, and when “Enterprise” was added to the name I stopped using it. Several years ago, I picked up Ubuntu and started using it full time. More recently, besides using it at home, I applied the Linux knowledge I have, along with the Raspberry Pi, in a robotics competition hosted by the American Society of Agricultural & Biological Engineers in New Orleans last year. When a similar competition was assigned in an introductory Control Theory class I took last semester, the professor had me assist the TA in teaching my classmates basic Linux skills and Python programming for a simple maze-following project.
Linux.com: Why did you become a developer?
Kurt: Having graduated high school at 16, I chose to explore my talent for working with computers by studying Computer Science, but I found that studying it for its own sake was uninspiring. I didn’t finish the degree, though I ended up with a job as a developer anyway, until several years ago, when I decided to go back to school for Biological & Agricultural Engineering, where I could use my computer skills to solve pressing challenges, like the need to feed almost 10 billion people by 2050.
Linux.com: How do you use Linux now?
Kurt: This spring, I visited an impoverished Mayan community in the Yucatan to assist in the design and repair of backyard irrigation systems. I was inspired to work further with them, and one particular way I want to use Linux to make a difference involves my (hopeful) senior design project plans.
When I visited that community, the potential benefit of Linux, and in particular something like the Raspberry Pi, was obvious. Although water is abundant, knowledge about agriculture has largely been lost as a result of the near-slavery conditions of the hacienda system, so a simple base of a Raspberry Pi and cell phone network connectivity could serve both as an educational platform and as the heart of an irrigation automation system. However, since technical knowledge in the village is limited, my team and I would have to work with the local university to (a) prepare open source teaching tools on how to use and repair our (also open source) irrigation automation system and (b) come up with an extremely resilient system that is easy to repair (e.g., creating a simple, dedicated SD card flasher from another Raspberry Pi and a button).
Using wirelessly gathered sensor data and local weather readings, the irrigation system could use water efficiently and also serve as a guide for planting and harvesting, making the best use of two of our most precious resources: time and the free energy of the sun. The local university has been working on backyard irrigation systems with small Mayan villages for the last six years, and there is tremendous potential to expand this program, both for the Mayan villagers and for the students at the university.
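To make the idea concrete, here is a minimal sketch, assuming hypothetical sensor, forecast, and valve functions, of the kind of control loop such a Raspberry Pi system might run; the thresholds and data sources are illustrative placeholders, not part of Kurt’s actual design:

```python
import time

SOIL_MOISTURE_THRESHOLD = 30.0   # percent; hypothetical calibration value
CHECK_INTERVAL_SECONDS = 15 * 60

def read_soil_moisture():
    """Placeholder for a wirelessly gathered soil moisture reading (0-100%)."""
    return 25.0

def rain_expected_today():
    """Placeholder for a weather forecast fetched over the cell network."""
    return False

def set_valve(open_valve):
    """Placeholder for driving an irrigation valve, e.g., via a GPIO pin."""
    print("Valve", "open" if open_valve else "closed")

while True:
    moisture = read_soil_moisture()
    # Irrigate only when the soil is dry and no rain is forecast, so water
    # isn't wasted on days when nature will do the job anyway.
    set_valve(moisture < SOIL_MOISTURE_THRESHOLD and not rain_expected_today())
    time.sleep(CHECK_INTERVAL_SECONDS)
```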
Linux.com: How can Linux help solve the problem of food scarcity?
Kurt: Closely related to Linux and the notion of open source software is the idea of empowerment. One of the most pressing issues in my field is the need to feed 9-10 billion people by the year 2050, and because of inefficiencies in our global food system, that means an increase in production of 70 to 100 percent. We may need to double our global production with no new cropland to be found (in fact, existing cropland is being eaten up by cities) and less water available.
Solving food scarcity with Linux and open source.
One of the only ways I foresee this being done is with the help of Linux and open source tools, since no one person can possibly tackle a problem that large. And even when solutions are found, they should not be like the “cathedrals” seen in agriculture today: large, “black box” tractors that farmers have neither the right to repair nor the ability to understand, even though the system is essential to their livelihood.
Instead, new developments around Linux, such as nascent drone/UAV technology and Automotive Grade Linux, along with the general ethos of collaboration, will be essential. Linux and its associated tools and ecosystem will be pivotal in tackling the challenges of tomorrow and in empowering people across the world to unlock the full potential of their computing resources to advance mankind, whether in the agricultural sector or otherwise.
Linux.com: How do you plan to use your LiFT scholarship?
Kurt: Although I have quite a bit of experience using Linux as a programmer, there are gaps in my knowledge, as it’s mostly the result of searching for the solution to problems as I come across them. As a developer-turned-biosystems engineering student, I’ve realized this isn’t enough. The problems in my field, while vast and staggering in scope, are about 95 percent human and 5 percent technical. By seeking out formal training, I can cover the gaps in my knowledge, make myself more employable once I graduate, and most importantly, I can spend less time worrying about how technology works, and more time worrying about how technology can help solve human problems.
Linux.com: How will the scholarship help you achieve your dream of helping to solve the world’s looming food scarcity crisis?
Kurt: The estimated doubling of food production that will be needed to feed the world in 2050 is likely not going to come from the corn and soybean fields of Illinois or Iowa. Trying to get more productivity from that style of farming is a little like getting blood from a stone. Instead, some of the places most likely to contribute will be mountainous regions and small villages of China, Latin America, and Africa, where huge tractors and industrial farming practices don’t make physical or economic sense.
Advances in things like agricultural drones have huge potential to empower subsistence farmers; Linux and The Linux Foundation are already forging ahead in that field with work like the Dronecode Project, an open source UAV platform. Besides drones, bringing the wealth of the world’s knowledge in the form of Internet connectivity will have a huge impact for rural farmers’ productivity and for the happiness of rural people in general. Large strides are being made in this domain as well with projects like Rhizomatica in Mexico and Guifi.net in Spain where Linux is once again front and center.
There’s no shortage of work to be done to make the world a better place; Linux and the open source philosophy behind it are among the best force multipliers for making things happen. With Linux, I can have complete control of computing resources from the physical layer to the presentation layer; I can choose to make and use technology that will help the most people.
At the Linux Security Summit last month, Google developer Kees Cook shared the current workings of the Kernel Self-Protection Project (KSPP). The project, he said, goes beyond user space and even beyond kernel integrity. The idea is to implement changes to help the kernel protect itself.
To understand the importance of the project, Cook said, we need to think about the multitude of devices running Linux, such as servers, laptops, cars, phones, and then consider that the vast majority of these devices are running old software, which contains bugs. Some of these devices have very long lifetimes, but the lifetime of a bug can be longer still.
In 2010, Jon Corbet researched security flaws and found that the average time between introduction and fix was about 5 years. Cook’s own analysis of the Ubuntu CVE tracker for the kernel from 2011 through 2016 showed the following counts of vulnerabilities and their average lifetimes:
Critical: 2 @ 3.3 years
High: 24 @ 6.4 years
Medium: 334 @ 5.2 years
Low: 186 @ 5.0 years
These risks, Cook said, are not just theoretical. “Attackers are watching commits, and they are better at finding bugs than we are.” To better protect devices, we need to build in protection from the start.
And just because you have no open bugs in your bug tracker does not mean everything’s fine. The important thing, he said, is to look at where problems were introduced and try to gauge what bugs are in the system now.
Cook used the analogy of the 1960s US car industry. At that time, cars were designed to run — not to fail. And, when they did fail, crashes were disastrous. The car industry had to figure out how to handle crashes safely, just as Linux needs to handle failures safely.
Cook further noted that user space is becoming more difficult to attack, which makes the kernel more attractive. And, whether we are comfortable with the idea or not, lives now depend on Linux.
Many people are working on excellent technologies that revolve around the kernel protecting user space, Cook said, but the developers working under the KSPP umbrella are focused on the kernel protecting the kernel from attack. The project aims to eliminate exploitation targets and methods, eliminate information leaks, and eliminate anything that assists attackers, even if doing so makes development more difficult.
Toward this end, killing bugs is nice, but killing bug classes is better. If we can stop an entire kind of bug from happening, we absolutely should do so, Cook said. He then described several bug classes, such as stack overflow, integer over/underflow, heap overflow, format string injection, kernel pointer leaks (exposing kernel addresses), uninitialized variables, and use-after-free (which is related to integer over/underflow, as when a reference count wraps), and discussed possible mitigation approaches.
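As a rough, userspace-only illustration of how an integer overflow can feed a use-after-free, consider a 32-bit reference count that wraps to zero; the Python below simulates the wraparound and a saturating counter as one possible mitigation (a conceptual sketch, not kernel code or anything from Cook’s talk):

```python
UINT32_MAX = 0xFFFFFFFF

def buggy_increment(refcount):
    # Unchecked increment: 0xFFFFFFFF + 1 wraps to 0 in 32-bit arithmetic.
    # A count of 0 would let the object be freed while it is still in use.
    return (refcount + 1) & UINT32_MAX

def saturating_increment(refcount):
    # Mitigation: pin the counter at its maximum instead of wrapping,
    # trading a bounded memory leak for a use-after-free.
    return refcount if refcount == UINT32_MAX else refcount + 1

print(buggy_increment(UINT32_MAX))       # 0 -> object looks unreferenced
print(saturating_increment(UINT32_MAX))  # 4294967295 -> counter stays pinned
```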
He then gave some examples of exploits and ways to mitigate them as well. In the case of the “Finding the Kernel” exploit, he said, possible mitigation approaches could be hiding symbols and kernel pointers, introducing runtime randomization of kernel functions, or implementing per-build structure layout randomization.
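A quick way to see the “hiding kernel pointers” mitigation from the outside is to check whether /proc/kallsyms exposes real addresses to an unprivileged user; on Linux systems where the kptr_restrict sysctl is in effect, the addresses read back as all zeros. A small, illustrative Python check might look like this:

```python
# Read a few symbol entries from /proc/kallsyms and report whether the
# kernel addresses are masked (all zeros) for the current user.
with open("/proc/kallsyms") as f:
    entries = [next(f).split() for _ in range(5)]

for addr, _sym_type, name in (entry[:3] for entry in entries):
    masked = set(addr) == {"0"}
    print(f"{name}: {'masked' if masked else 'exposed at ' + addr}")
```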
For the “Userspace Execution” exploit, mitigations include hardware segmentation, emulated memory segmentation via page table swap, and compiler instrumentation to set the high bit on function calls. Cook mentioned that the page-table-swap emulation is still needed for x86, in case anyone can help out.
He then detailed the changes that have been added to kernel versions 4.3 through 4.7 and described some changes that he expects to be added into 4.8 and even beyond into 4.9.
Cook admitted that the biggest challenge for the project is culture — and that the process requires persistence and patience on both sides. There are also technical challenges and, he said, collaboration plays a big role in overcoming those.
“Even if you have fantastic code, if you can’t describe why it’s needed, how it helps things, then really documenting these changes can be a big challenge.” And, he said, people need to understand they’re not writing for the kernel, they’re writing for the kernel developers.
“Other people are maintaining your code, other people need to understand your code, and other people are not necessarily familiar with what you’re doing, so making it understandable is critical.”
Watch the complete presentation below:
Kees Cook provides a quick overview of what the Kernel Self-Protection Project is trying to protect Linux against, as well as the state of the art in available technologies.
What if I told you that you could have your OpenStack Cloud environment set up before you have to stop for lunch? Would you be surprised? Could you do that today? In most cases, I am betting your answer would be “Not possible,” not even on your best day. Not to worry: the solution is here, and it’s called the QuickStart Cloud Installer (QCI). Let’s take a look at the background of where this cloud tool came from, how it evolved, and where it is headed.
Security is always a hot-button issue, and one the folks at the OPNFV project take seriously. In fact, the project — an integrated open platform for facilitating NFV deployments — is among a handful of open source organizations to recently earn a CII Best Practices Badge for security compliance.
The Core Infrastructure Initiative (CII), run by The Linux Foundation, is a multi-million dollar project to fund and support critical elements of the global information infrastructure, and, among other resources, it offers a Best Practices Badge program. The badge program serves as a model for secure open source development: projects that earn the badge must meet strict requirements and criteria and demonstrate a commitment to security.
OPNFV works upstream to leverage a variety of existing code bases from leading open source projects across compute, storage, and networking, filling gaps where needed to meet carrier-grade end-user requirements. The project is also still relatively young (approaching its second birthday), all of which makes earning the best practices badge no small feat. But with security an increasing concern in the telco industry, especially as NFV begins to scale and rapidly transform network infrastructures, the badge was an important step that signals the project’s commitment to security-aware development.
To find out more about the process and what it took for OPNFV to earn the badge, we sat down with members of the OPNFV Security Working Group, including Sona Sarmadi (Security Responsible at Enea Software AB), Luke Hinds (Principal Software Engineer at Red Hat), and Ashlee Young (Distinguished Strategist/Engineer, Standards & Open Source at Huawei).
Why did OPNFV pursue CII certification?
There is no doubt that security is one of the most important features in all software today, including open source and NFV in particular. In fact, security was recently cited as one area the telco industry would like to see OPNFV focus on more moving forward.
During the course of creating the NFV standards, a key discussion point was how we would ensure the code we leveraged from so many open source projects would be secure. CII provided a scope and a framework from which we could approach this topic within OPNFV. Earning the best practices badge is also a very tangible way for us to assure the industry of our commitment to security and quality. It also provides a necessary guideline for project leads to follow to achieve due diligence and ensure their portion of the overall solution is secure. By sharing the responsibility throughout our community, we can all help do our part.
What did you need to do to meet the requirements and what was the hardest part?
The requirements to get the badge are quite extensive, so we had some work to do in order to become compliant. For example, we removed support for crypto algorithms that are no longer considered secure (e.g., MD5) and also updated the OPNFV wiki pages with more specific and clear instructions on how to report security incidents. But probably the hardest part of the process was corralling input from all of the developers in a timely fashion.
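As a generic illustration of that first kind of change (not OPNFV’s actual code), swapping a deprecated digest for a stronger one is often a one-line fix; here is what it might look like in Python:

```python
import hashlib

data = b"artifact-to-verify"

# Before: MD5 is no longer considered collision-resistant.
weak_digest = hashlib.md5(data).hexdigest()

# After: SHA-256 is the commonly recommended replacement.
strong_digest = hashlib.sha256(data).hexdigest()

print("md5:   ", weak_digest)
print("sha256:", strong_digest)
```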
It’s also worth noting that while earning the badge was an exciting challenge in itself, the real challenge will be in following these practices to ensure that a high level of security is maintained, which depends on involvement from everybody in the project, from developers to management. In any environment, security can never be achieved by an isolated security group.
What impact will this have on OPNFV security in general?
Earning the CII badge will have a huge impact on OPNFV’s general approach to building security into the development model (an approach all open source projects should emulate). Statistics show that around 50 percent of vulnerabilities in software are “flaws” (usually design faults, which are hard to fix after the software has been released) and 50 percent are bugs (implementation faults). Following these best practices will hopefully address both design and implementation faults before they become vulnerabilities.
What will the community do moving forward to stay compliant?
To ensure we maintain compliance, the OPNFV Security Working Group is developing a tool to automate checks such as code lint scanning and detection of insecure crypto use. This tool has been made available to our community and to our Project Technical Leads (PTLs), and we are also investigating the best way to incorporate it into our overall continuous integration process.
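The OPNFV tool itself isn’t shown here, but a minimal sketch of that kind of automated check, one that simply flags references to deprecated algorithms in a source tree, might look like the following Python; the file extensions and the algorithm list are assumptions for illustration:

```python
import os
import re
import sys

# Deprecated or weak algorithms to flag; extend as project policy requires.
WEAK_CRYPTO = re.compile(r"\b(md5|sha1|des|rc4)\b", re.IGNORECASE)

def scan_tree(root):
    """Walk a source tree and collect lines that mention weak algorithms."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".py", ".sh", ".yaml", ".yml")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if WEAK_CRYPTO.search(line):
                        findings.append(f"{path}:{lineno}: {line.strip()}")
    return findings

if __name__ == "__main__":
    hits = scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(hits) or "No insecure crypto references found.")
    sys.exit(1 if hits else 0)
```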
What are you most proud of regarding certification?
I’d have to say our collaboration and teamwork. We are a small team with limited time and resources located in different parts of the world, so earning the CII certification was no small feat! Our experience was also a great example of the power of collaborative open source communities in action; whenever I got stuck, there was always someone willing to lend quick feedback.
We reported the other day that the long-term supported Linux 3.14 kernel branch was about to reach end of life, and that one more maintenance version would be released in the next couple of weeks.
Well, it looks like the Linux kernel maintainers have decided that there’s no need to maintain the Linux kernel 3.14 LTS series any longer: earlier today, September 11, 2016, they released that last maintenance update, version 3.14.79, and marked the series as EOL (End of Life).
The average person today is surrounded by a cloud. Smartphones alone connect people to a wide array of content and services. Add the other devices they interact with in the office or in their connected home, and the concept of the user-centric network (UCN), a network created and controlled by the user over connectivity of the user’s choosing, emerges.
…The definition of UCN has yet to be set in stone, as it means slightly different things depending on the angle from which you approach it. In general, though, it consists of improving the user experience independent of device, network, location, and mobility conditions, based on a wide range of context provided by the device and the user. What is certain is that UCN will have a broad and significant impact on new and existing networking architectures, both in how they are developed and in how they are managed.
There’s a new community standard for container metadata in town, with contributors from Puppet, Mesosphere, Microscaling Systems, Container Solutions, Weave, Cloud66, Tigera and others. The release candidate of Label Schema is being launched Friday at Container Camp in London.
The initiative was inspired by Puppet’s Gareth Rushgrove, who pointed out how useful additional metadata about a container can be. Docker already provides the LABEL directive for adding arbitrary key-value pairs to any container image, and Label Schema aims to build on this by agreeing on standards within the community for some common data.
“OpenDaylight fundamentally changed their [the Linux Foundation’s] world,” says Ward. “It’s been wildly successful. It’s the de facto standard open source SDN controller for the industry today.” …Later this month, Ward will be giving a talk at the OpenDaylight Summit where he’ll discuss an “Open Networking Umbrella Architecture” in which open source projects align and fit into a unified strategy for enabling up-stack application development.