
OpenStack Aims to Improve Integration with Cloud Native Technologies

At the OpenStack Summit in Australia, the open-source cloud project announced a series of new initiatives to improve integration with a variety of complementary cloud native technologies.

On the first day of the event, several initiatives designed to improve and promote integration between OpenStack and other open-source cloud efforts were announced. Among them were the Open Infrastructure Integration effort, the launch of the OpenLab testing tools program, the debut of the public cloud passport program, and the formation of a financial services team.

“We’ve really put some focus into the strategy for the OpenStack Foundation for the next five years,” Jonathan Bryce, executive director of the OpenStack Foundation, told eWEEK.

Read more at eWeek

The First Step in Modern Networking isn’t a Sidecar

One of the most important pieces of any modern web application is the network. As applications become more distributed, it becomes crucial to reason about the network and its behavior in order to understand how a system will behave. Service meshes are more and more frequently proposed as a means of tackling this problem. If you’re not familiar with meshes, Matt Klein has a great intro to them, and Christian Posta has a great series on Patterns with Envoy.

Fundamentally, modern apps benefit from networking patterns like meshes for three reasons:

  1. Scale: At the scale of most modern web applications, your traffic is a thing you manage. …

Read more at TurbineLabs.io

Understanding Tracing

Five questions for Bryan Liles on the complexities of tracing, recommended tools and skills, and how to learn more about monitoring.

The first thing that makes tracing complex is understanding how it fits into your application monitoring stack. I like to break down monitoring into metrics, logs, and tracing. Tracing allows you to understand how your application’s components interact with one another and with any potential consumers. Secondly, finding a good toolset that works across a diverse application infrastructure is also complex. This is why I’m hoping to see OpenTracing become more successful, since it provides a good interface based on real-world work at Google and Twitter. Finally, tracing is complex because of the number of components involved. If you’re working on a large microservice-based application, you could have scores of microservices coupled with databases of many types, and other applications as well. Combined with the tracing infrastructure, this leads to a large number of items to consider. OpenTracing helps again by providing standards and clients that simplify integration for developer and operations teams.

Read more at O’Reilly

5 Ways Blockchain Can Accelerate Open Organizations

Blockchain makes running an organization less costly. In the process, it introduces revolutionary degrees of transparency, inclusivity, and adaptability.

In an effort to not only understand blockchain itself, but also to discover the ways adopting it could change our approach to organizing today, I read several books on it. Blockchain Revolution, by father and son collaborators Don Tapscott and Alex Tapscott, is one of the most thoroughly researched I’ve encountered so far…

In the book, the authors raise two particularly interesting issues:

  • the impact of blockchain on organizational formation, and
  • the impact of blockchain on the ways we accomplish certain tasks

Pondering the first issue made me wonder: Why should organizations be formed in the first place, and how would blockchain technology “revolutionize” them according to open organization characteristics? I’ll explore that question in the first part of this two-part book review.

The second issue prompted me to think: How would our approaches to various tried-and-true organizational tasks change with the introduction of blockchain technology? I’ll address that one next time.

Read more at OpenSource.com

Analyzing Docker Container Performance with Native Tools

Containerization is changing how organizations deploy and use software. You can now deploy almost any software reliably with just the docker run command. And with orchestration platforms like Kubernetes and DC/OS, even production deployments are easy to set up.

You may have already experimented with Docker, and have maybe run a few containers. But one thing you might not have much experience with is understanding how Docker containers behave under different loads.

Because Docker containers can look a lot like black boxes from the outside, it isn’t obvious to many people how to get runtime metrics from them and do analysis.

In this post, we will set up a small CrateDB cluster with Docker and then go through some useful Docker commands that let us take a look at performance.
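To give a flavor of what those native tools look like, here is a minimal sketch (the container name is illustrative, and the plain crate image tag is an assumption):

    $ docker run -d --name crate01 crate
    $ docker stats crate01
    $ docker top crate01

docker stats streams live CPU, memory, and I/O figures for the container, while docker top lists the processes running inside it.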

Read more at Crate.io

Tracing Memory Leaks in the NFC Digital Protocol Stack

By Thierry Escande, Senior Software Engineer at Collabora.

Kmemleak (Kernel Memory Leak Detector) allows you to track possible memory leaks inside the Linux kernel. Basically, it tracks dynamically allocated memory blocks in the kernel and reports those that no longer have any reference to them and are therefore impossible to free. You can check the kmemleak page for more details.
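To see the kind of bug kmemleak catches, consider this minimal, hypothetical kernel module: it allocates a buffer in its init function and drops the pointer, so the block can never be freed, and a kmemleak scan would report it as an unreferenced object.

    #include <linux/module.h>
    #include <linux/slab.h>

    static int __init leaky_init(void)
    {
        /* The pointer is not stored anywhere and the buffer is never
         * freed: kmemleak will flag this block as unreferenced. */
        char *buf = kmalloc(64, GFP_KERNEL);

        if (!buf)
            return -ENOMEM;

        return 0;
    }

    static void __exit leaky_exit(void)
    {
    }

    module_init(leaky_init);
    module_exit(leaky_exit);
    MODULE_LICENSE("GPL");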

This post exposes real life use cases that I encountered while working on the NFC Digital Protocol stack.

Enabling kmemleak in the kernel

kmemleak can be enabled in the kernel configuration under Kernel hacking > Memory Debugging.

    [*] Kernel memory leak detector
    (4000) Maximum kmemleak early log entries
    < >   Simple test for the kernel memory leak detector
    [*]   Default kmemleak to off

I usually turn it off by default and enable it on demand by passing kmemleak=on on the kernel command line. If some leaks occur before kmemleak is initialized, you may need to increase the “early log entries” value; I usually set it to 4000.
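For reference, the menu entries above correspond to these configuration symbols in a 4.x-era .config (values matching the settings shown; the symbol names are taken from the kernel’s Kconfig of that time):

    CONFIG_DEBUG_KMEMLEAK=y
    CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
    # CONFIG_DEBUG_KMEMLEAK_TEST is not set
    CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y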

The control interface of kmemleak is a single debugfs file located at /sys/kernel/debug/kmemleak. You can control kmemleak with the following operations:

Trigger a memory scan:

$ echo scan > /sys/kernel/debug/kmemleak

Clear the leaks list:

$ echo clear > /sys/kernel/debug/kmemleak

Check the possible memory leaks by reading the control file:

$ cat /sys/kernel/debug/kmemleak

I will not go deep into the various NFC technologies; the following examples are based on NFC-DEP, the protocol used to connect two NFC devices and make them communicate through standard POSIX sockets. DEP stands for Data Exchange Protocol.

For the purpose of this post I’m using nfctool, a standalone command line tool used to control and monitor NFC devices. nfctool is part of neard, the Linux NFC daemon.

So let’s start with an easy case.

A simple case: leak in a polling loop

An NFC device in target polling mode listens for the different modulation modes used by a peer device in initiator mode. When I first used kmemleak, I was surprised to see possible leaks reported when not a single byte had been exchanged, simply by turning on target poll mode on the nfc0 device:

$ nfctool -d nfc0 -p Target

A few seconds later, after a kmemleak scan using:

$ echo scan > /sys/kernel/debug/kmemleak

the following message appears in the syslog:

[11764.643878] kmemleak: 8 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

OK! Let’s check the kmemleak debugfs file then:

    $ cat /sys/kernel/debug/kmemleak
    unreferenced object 0xffff9be0f8f43a08 (size 8):
      comm "kworker/0:1", pid 41, jiffies 4297830116 (age 16.044s)
      hex dump (first 8 bytes):
        01 fe d3 80 ca 41 f1 a0                          .....A..
      backtrace:
        [] kmemleak_alloc+0x4a/0xa0
        [] kmem_cache_alloc_trace+0xf5/0x1d0
        [] digital_tg_listen_nfcf+0x3b/0x90 [nfc_digital]
        [] digital_wq_poll+0x5d/0x90 [nfc_digital]
        [] process_one_work+0x156/0x3f0
        [] worker_thread+0x4b/0x410
        [] kthread+0x109/0x140
        [] ret_from_fork+0x25/0x30
        [] 0xffffffffffffffff

This gives the call stack where the allocation was actually made. So let’s have a look at digital_tg_listen_nfcf()…

    int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
    {
        int rc;
        u8 *nfcid2;

        rc = digital_tg_config_nfcf(ddev, rf_tech);
        if (rc)
            return rc;

        nfcid2 = kzalloc(NFC_NFCID2_MAXSIZE, GFP_KERNEL);
        if (!nfcid2)
            return -ENOMEM;

        nfcid2[0] = DIGITAL_SENSF_NFCID2_NFC_DEP_B1;
        nfcid2[1] = DIGITAL_SENSF_NFCID2_NFC_DEP_B2;
        get_random_bytes(nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);

        return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, nfcid2);
    }

The only allocation here is the nfcid2 array, passed to digital_tg_listen() as its fourth parameter. It is a user argument that is supposed to be passed back to the callback digital_tg_recv_sensf_req(), either upon reception of a valid frame from the peer device or when a timeout error occurs (nobody on the other side is talking to us). A quick check in digital_tg_recv_sensf_req() shows that the user argument is not used at all and, of course, never released.

As I said, that one was easy. There was no need for the nfcid2 array to be allocated in the first place so the fix was pretty straightforward.

Now digital_tg_listen_nfcf() looks good:

    int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
    {
        int rc;

        rc = digital_tg_config_nfcf(ddev, rf_tech);
        if (rc)
            return rc;

        return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, NULL);
    }

The commit for this fix can be found here.

Another leak-hunting use case involved un-freed socket buffers.
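As a teaser, here is a minimal, hypothetical receive handler in that same spirit (frame_checksum_ok() and process_frame() are illustrative stand-ins, not the actual NFC code): on a bad frame it returns without calling kfree_skb(), leaking the buffer.

    #include <linux/skbuff.h>

    /* Hypothetical helpers standing in for the real protocol code. */
    extern bool frame_checksum_ok(struct sk_buff *skb);
    extern void process_frame(struct sk_buff *skb); /* consumes the skb */

    static int rx_frame(struct sk_buff *skb)
    {
        if (!frame_checksum_ok(skb))
            return -EIO; /* BUG: should call kfree_skb(skb) first */

        process_frame(skb);
        return 0;
    }

A kmemleak scan would report the orphaned sk_buff allocation with a backtrace much like the one shown above.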

Continue reading on Collabora’s blog.

Linux Kernel Developer: Laura Abbott

The recent Linux Kernel Development Report, released by The Linux Foundation, included information about several featured Linux kernel developers. According to the report, roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since 2005, when the adoption of Git made detailed tracking possible. Over the next several weeks, we will be highlighting some of the Linux kernel developers who agreed to answer a few questions about what they do and why they contribute to the kernel.

In this article, we feature Laura Abbott, a Fedora Kernel Engineer at Red Hat.

Read more at The Linux Foundation

The Internet Sees Nearly 30,000 Distinct DoS Attacks Each Day: Study

The incidence of denial-of-service (DoS) attacks has consistently grown over the last few years, “steadily becoming one of the biggest threats to Internet stability and reliability.” Over the last year or so, the emergence of IoT-based botnets — such as Mirai and more recently Reaper, with as yet unknown total capacity — has left security researchers wondering whether a distributed denial-of-service (DDoS) attack could soon take down the entire internet. 

The problem is there is no macroscopic view of the DoS ecosphere. Analyses tend to be by individual research teams examining individual botnets or attacks. Now academics from the University of Twente (Netherlands); UC San Diego (USA); and Saarland University (Germany) have addressed this problem “by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs).”

Read more at Security Week

Linux Kernel 4.14 LTS Delayed Until November 12 as Linus Torvalds Announces 8th RC

Sad news today for those who had hoped to upgrade their GNU/Linux distributions to Linux kernel 4.14 LTS: Linus Torvalds announced a few moments ago the availability for testing of the eighth and final Release Candidate of the next long-term supported Linux kernel series, which will be supported for the next six years.

The release of RC8 delays the final release of Linux kernel 4.14 LTS by a week. Of course, this also means that the merge window for the Linux 4.15 kernel series will be pushed into Thanksgiving week, which isn’t quite what Linus Torvalds expected, as he’ll be on vacation with his family.

Read more at Softpedia

OpenStack’s Next Mission: Bridging the Gaps Between Open Source Projects

OpenStack, the massive open source project that provides large businesses with the software tools to run their data center infrastructure, is now almost eight years old. While it has had its ups and downs, hundreds of enterprises now use it to run their private clouds, and more than two dozen public clouds use the project’s tools. Users now include the likes of AT&T, Walmart, eBay, China Railway, GE Healthcare, SAP, Tencent and the Insurance Australia Group, to name just a few.

“One of the things that’s been happening is that we’re seven years in, and the need for turning every type of infrastructure into programmable infrastructure has been proven out. It’s no longer a debate,” OpenStack COO Mark Collier told me ahead of the project’s semi-annual developer conference this week. OpenStack’s own surveys show that the project’s early adopters, who previously only tested it for their clouds, continue to move their production workflows to the platform, too. “We passed the hype phase,” Collier noted.

Read more at TechCrunch