Most developers would never call wireless protocols easy to work with: they were originally invented by large corporations and heavily patented, which locked individual developers out of innovating on them. You needed very deep pockets to bring any alternative to market. Fortunately, this is about to change.
Thanks to inexpensive open source software-defined radios (SDRs), innovators will now be able to design their own wireless protocols.
Networking hardware vendor TP-Link today admitted violating US radio frequency rules by selling routers that could operate at power levels higher than their approved limits. In a settlement with the Federal Communications Commission, TP-Link agreed to pay a $200,000 fine, to comply with the rules going forward, and to let customers install open source firmware on its routers.
The open source requirement is a unique one, as it isn’t directly related to TP-Link’s violation. Moreover, FCC rules don’t require router makers to allow loading of third-party, open source firmware. In fact, recent changes to FCC rules made it more difficult for router makers to allow open source software. The TP-Link settlement was announced in the midst of a controversy spurred by those new FCC rules.
A new report from Kaspersky Lab on botnet-assisted DDoS attacks shows steady growth in their numbers during the second quarter of this year.
SYN DDoS, TCP DDoS, and HTTP DDoS remained the most common attack scenarios, but the proportion of attacks using the SYN DDoS method increased 1.4 times compared to the previous quarter, accounting for 76 percent of attacks. This is because the share of attacks from Linux botnets almost doubled (to 70 percent), and Linux bots are the most effective tool for SYN DDoS. This is the first time that Kaspersky DDoS Intelligence has registered such an imbalance between the activities of Linux- and Windows-based DDoS bots.
Clement Lefebvre has announced the availability of Linux Mint 18 “Xfce” edition. Linux Mint 18 is a long term support release which will receive security updates through to the year 2021. The Xfce edition is a lightweight alternative to Linux Mint’s Cinnamon and MATE editions. The new release offers users access to Mint’s X-Apps, forks of GNOME applications which are designed to look and work the same across multiple desktop environments. The new version of Mint also features improvements to the update manager…
Touting the rise of what it’s calling Container 2.0, Mesosphere says it’s time for containers to support real-time, stateful decision making.
To that end, Mesosphere today announced partnerships with three software vendors that have a hand in real-time applications: Confluent, DataStax, and Lightbend. The three companies’ applications are now supported on DC/OS, Mesosphere’s data center operating system.
As containers mature, they’re being used for more complex tasks — distributed applications, in particular. Originally, a container was meant to hold one application and its dependencies, such as libraries. Now, developers are interested in developing distributed applications that will run on multiple containers spread across multiple machines, says Tobi Knaup, Mesosphere’s CTO.
Should the Docker container image format be completely standardized? Or should Docker not be held back from evolving the format ahead of the open specification? This was the topic of a heated Twitter tussle last week between Google evangelist Kelsey Hightower and the creator of Docker itself, Solomon Hykes.
Hightower wants to see the Docker image format completely standardized, so that companies, including Docker, can build additional functionality atop the specification. Hykes, however, balked at full standardization, asserting that the format is still too new and evolving too quickly.
The debate centers on how much of its container technology Docker should donate to the Open Container Initiative (OCI), an effort to build a vendor-neutral container image and runtime specification. The image is the package, or the container itself, which users can fill with their apps. The runtime is the engine that runs the container, providing all the support of the underlying OS.
In this session from the Embedded Linux Conference, Mike Anderson discusses several techniques for improving the robustness of our platforms and hardening them against the myriad of bad actors lurking on the Internet.
It should go without saying that there is no substitute for face to face collaboration. And what is open source if not the ultimate example of collaboration? Open source events provide a wide range of opportunities for the community to connect, and the end result of all of this is good for the community and good for business.
Over the years, and across more than a hundred events, we’ve learned quite a bit about just what makes events so important to the community. Here are some of those reasons:
1. To advance technology. The world has come to understand that open source collaboration moves technology forward. A lot of work can be accomplished over mailing lists and conference calls, but it still slows the process. Time and time again, we hear from all types of technologists, from kernel maintainers to architects, that there is absolutely no substitute for the face time they get at events.
2. To learn how the community works. Not everyone in tech starts in open source, and the open source community is unique. Attending events gives developers, sysadmins, operators, users, executives, and other open source players a firsthand look at how the community operates. There is no better way to immerse yourself.
3. To get motivated. Programmers are often portrayed as people who work very independently, coding for long hours at their computers into the wee hours of the night. While the long coding hours part is probably true, programmers aren’t the lone wolves they are sometimes portrayed as. Everyone wants to feel like they are part of something bigger, part of a community. This is what drives open source. Attendees frequently tell us that the ability to meet in person with like-minded folks to discuss the projects and technologies they are working on is a huge motivation.
4. To connect directly with the maintainers, committers, and key members of projects. One of the biggest benefits of our events, we often hear, is the ability to connect directly with these folks to ask questions and gain knowledge. For example, if a developer wants to start submitting patches to the kernel but needs some guidance on best practices, what better way to find out than to speak directly to one of the kernel maintainers? Being able to engage with these people in person is a huge benefit to the growth of the community.
5. To cross-pollinate. Some of our events gather the developers building technologies together with the operators implementing them, the users benefiting from them, and the business leaders making the decisions. It is incredibly important for these groups to be able to connect, and events provide that opportunity. For a developer to be able to explain value directly to a business leader? For a user to be able to ask questions or propose a new feature directly to a developer? Only the open source community truly allows this level of collaboration, and events are the best place to offer it.
6. To learn about the latest and greatest. Technology moves fast. Every time you turn around there are new open source projects, new technologies, and new advancements. Events provide an unprecedented ability to learn a ton of new information in a short amount of time, with the added benefit of being able to ask the speakers questions in real time and to engage with others to discuss the material and brainstorm right away.
7. To have fun. The open source community works hard and sometimes events can be a bit of an information overload, so attendees appreciate the ability to ‘take 5’ while onsite and have a little fun. 5k fun runs, games, evening events with good beer and company; these elements are appreciated by attendees and contribute to a productive experience.
The list of reasons could go on and on. The fact is, events provide different benefits for different attendees. The overarching point, though, remains the same: Events help further collaboration and the advancement of open source technology. If you’re thinking of attending an event, or contemplating sending some of your team to an event, and weighing all the pros and cons of whether you should go, the answer is: go. The knowledge gained, the relationships made, the questions answered – there is no substitute and everyone benefits.
Until fairly recently, Linux developers have been spared many of the security threats that have bedeviled the Windows world. Yet, when moving from desktops and servers to the embedded Internet of Things, a much higher threat level awaits.
“The basic rules for Linux security are the same whether it’s desktop, server, or embedded, but because IoT devices are typically on all the time, they pose some unique challenges,” said Mike Anderson, CTO and Chief Scientist for The PTR Group, Inc. during an Embedded Linux Conference talk called “Securing Embedded Linux.”
Anderson has been giving similar overviews at ELC since 2007, when attack and defense technologies were less sophisticated and embedded Linux was less of a target. With the increase in attacks on IoT devices, however, the topic is drawing more interest.
“With IoT, Linux is almost invariably involved, typically in the cloud or on border gateways,” said Anderson. Gateways aggregate data from sensor endpoints over low-power wireless radios — typically 802.15.4 protocols like ZigBee — and convert them for upstream routing to the cloud.
Gateways deployed in a “fog” model can limit the vulnerability of IoT endpoints. “The cloud model provides an incredibly large attack surface,” Anderson told the ELC audience. “If they can crack one endpoint, they may have cracked 20,000. In the fog model, the devices behind the gateway router aren’t routable, and you can’t get to them directly from the Internet.”
Even with basic security, a fog model can usually deter “script kiddies,” whom Anderson defines as mischievous amateur hackers who are not necessarily malicious. A much higher security threat comes from professional hackers: black hats who are “in it for the money” and engage in ransomware, credit card theft, or the new business of “malware to order,” said Anderson. White hats, meanwhile, try to stop black hats by detecting vulnerabilities at companies, using tools such as the penetration testing toolkits found in distributions like Kali Linux. The most dangerous security threats come from well-funded, state-sponsored hackers, “typically black hats paid by a government, or maybe industrial spies, trying to launch a coordinated cyberattack.”
By far the most common threat is an insider attack from employees. “It happens all the time — employees go into debt and decide to sell company information,” said Anderson. “Peer review is one of the best ways to stop this. Open source has a major advantage in detecting insider attacks since thousands of people are looking at the code. Ideally, you will also bring in an independent security professional.”
Getting Physical
The first line of defense is physical security, which can involve technologies like fingerprint readers and access cards. Typically, an IoT gateway is not protected in a secure data center with locked doors, guards, and security systems, however, so the device itself needs to be hardened.
At the very least, you should “remove any debugging interfaces and blow the e-fuses,” said Anderson. You can add anti-tamper sensors, specialty screws, and instrumented cases which, if triggered, “set something in permanently-stored memory that lets you know the device has opened up.” Physical access can be further slowed by “potting the device in epoxy so you won’t be able to get access to the motherboard.” However, “dedicated hackers can apply chemicals to melt the epoxy,” he added.
Even if a hacker can’t get inside a device, they can use techniques such as differential power analysis, which analyzes the current drawn by the CPU to differentiate between decoding a zero and a one. “You can extract a 2,048-bit key in about 10 seconds, but you need a radio and power management device sitting right on the target,” said Anderson.
Attackers often use rootkits to look for vulnerabilities in the power-on jump cycle in order to penetrate the boot cycle. This type of intrusion can be countered with one-time programmable memory, smart cards, Intel’s AMT, or trusted platform module (TPM) solutions, said Anderson.
Linux Confidential
Most security solutions revolve around the concept of confidentiality — making sure unauthorized individuals can’t read the data you want protected. Confidentiality solutions differ depending on whether you’re protecting “data in flight” (crossing the network), “data at rest,” or “data in use.”
For data in flight, the main concern is with a man-in-the-middle (MitM) attack, which typically exploits a vulnerability in one of the protocols such as the Address Resolution Protocol (ARP). Linux has tools such as arpwatch that are targeted at catching this kind of ARP spoofing attack.
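To make the idea concrete, here is a minimal sketch of the kind of monitoring arpwatch automates, written in Python with the Scapy packet library (Scapy is an illustrative choice here, not a tool Anderson named). It records the first MAC address seen for each IP and flags any later change, the classic signature of ARP spoofing:

    from scapy.all import ARP, sniff

    ip_to_mac = {}  # last MAC address observed for each IP

    def check_arp(pkt):
        # op == 2 is an ARP reply ("is-at"); spoofers forge these
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in ip_to_mac and ip_to_mac[ip] != mac:
                print(f"ALERT: {ip} moved from {ip_to_mac[ip]} to {mac}")
            ip_to_mac[ip] = mac

    # Requires root privileges; watches ARP traffic indefinitely
    sniff(filter="arp", prn=check_arp, store=False)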
You may be able to detect a MitM using trace routes, but sophisticated middlemen can combat this, said Anderson. A more common solution is to encrypt the networking link using VPNs and the like. Yet even if the MitM attacker can’t acquire the encryption key, they can use operational security (OPSEC) techniques to analyze network traffic patterns and speculate about the content, he added.
With data at rest, sensitive data is encrypted in storage using tools such as eCryptfs or PGP. “But encryption and decryption take time, so it’s best to encrypt particular directories or files,” said Anderson. “Protect the data you have to protect and leave the rest alone.”
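As a concrete illustration of encrypting a particular file, the following Python sketch uses the Fernet recipe from the third-party cryptography package (an illustrative stand-in for the eCryptfs and PGP tools Anderson actually named):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store this key safely, e.g., in a TPM
    cipher = Fernet(key)

    # Encrypt one sensitive file; "secrets.db" is a hypothetical name
    with open("secrets.db", "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open("secrets.db.enc", "wb") as f:
        f.write(ciphertext)

    # Decryption is the mirror image: cipher.decrypt(ciphertext)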
Data-in-use attacks that look for encryption keys appearing in memory represent a “much more difficult problem,” said Anderson. “Many people leave keys in memory because they figure they’ll need them again soon. But a sophisticated hacker can use liquid nitrogen to slow down the decay of RAM, then unplug the DIMM, plug it into another machine, and read the data. Keep the amount of time you use visible keys short: load the key, use it to decrypt the data, and then overwrite it, preferably three times (to remove any residual vestiges of the key).”
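In Python, that “load, use, overwrite” discipline might look like the sketch below. One caveat: Python makes no guarantees about copies of immutable objects, so the key is held in a mutable bytearray; a C implementation would give far stronger control over memory.

    import ctypes
    import secrets

    def wipe(buf: bytearray) -> None:
        # Overwrite the buffer in place, three passes, per Anderson's advice
        addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
        for pattern in (0x00, 0xFF, 0x00):
            ctypes.memset(addr, pattern, len(buf))

    key = bytearray(secrets.token_bytes(32))  # hypothetical session key
    try:
        ...  # load the ciphertext and decrypt it with the key here
    finally:
        wipe(key)  # the key material no longer lingers in this buffer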
Anderson went on to discuss symmetric and asymmetric encryption, such as Diffie-Hellman, in which you encrypt with both a private key and the recipient’s public key. One problem that is “wigging people out” is that in five to 10 years, “quantum computers will be able to crack public key and Diffie-Hellman in real time and break 1024-bit RSA algorithms,” said Anderson.
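To see how the Diffie-Hellman idea plays out in practice, here is a Python sketch using the third-party cryptography package (X25519 is an illustrative curve choice, not one Anderson specified). Each party combines its own private key with the other’s public key and arrives at the same shared secret:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Each side mixes its own private key with the peer's public key...
    alice_shared = alice_priv.exchange(bob_priv.public_key())
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    assert alice_shared == bob_shared  # ...and derives the same secret

    # Stretch the raw shared secret into a symmetric session key
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"example handshake").derive(alice_shared)

An eavesdropper who sees only the two public keys cannot feasibly recover the shared secret today; that is precisely the guarantee a sufficiently large quantum computer would break.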
For higher security requirements, you can implement mandatory access control using SELinux, Smack, or other techniques that use LSM (Linux Security Modules), limiting applications to read-only access. Yet, these approaches take time and money to develop and test.
In addition to the above recommendations, Anderson offered some basic tips for securing embedded Linux devices. These include:
Implement risk assessment to determine the required level of security.
Eliminate all non-essential services and software.
Periodically audit the installed software.
Have regular, monitored software security updates.
Implement two-factor authentication (a minimal TOTP sketch appears after this list).
Use Linux containers to separate secure and non-secure functions.
Implement file control policies or code signing, if your platform supports it.
Know every device on your network and periodically scan for new, unauthorized devices.
Implement both IPv4 and IPv6 firewalls and use software such as Snort for intrusion detection/prevention.
Use VPNs for extended security, and use DTLS, TLS, or AES for temporary link security.
Scan your network ports periodically with nmap, SATAN, SAINT, etc.
Use penetration testers periodically, but make sure they’re legit organizations and not hackers posing as pentesters.
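To put one of these items in concrete terms, the sketch below (referenced from the two-factor authentication item above) computes a time-based one-time password (TOTP, RFC 6238) using only the Python standard library; a production system would use a vetted implementation such as pyotp, and the secret shown is a placeholder:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder shared secret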
“Securing IoT and embedded Linux is a daunting task, and it costs money,” concluded Anderson. “There’s a spectrum between usability and security, and there’s always a compromise. At the very least, make sure you did everything you are legally bound to do.”
Microsoft’s addition of the Bash shell and Ubuntu user space in Windows 10 is a real win for developers working in dynamic, interpreted programming languages everywhere. Dozens of dynamic script interpreters are now immediately available on Windows desktops.
In this article, we’re going to write the classic “hello world” application in several different dynamically executed languages, install any necessary dependencies, and execute our interpreted code.
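To give a flavor of what follows, the Python version is about as small as a program gets; if the interpreter isn’t already present in the Ubuntu user space, sudo apt-get install python3 adds it, and the script runs straight from the Bash prompt:

    #!/usr/bin/env python3
    # hello.py: run with `python3 hello.py` from the Bash shell
    print("Hello, world!")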
If you’d like to follow along with this article and try out these examples, you can grab all of the source code from Git: