
How Microsoft Contributes to Kubernetes

Microsoft has recently increased its stake in the Kubernetes community through a variety of actions. For example, it acquired Deis, a company that specializes in Kubernetes container management technologies. And, Microsoft became a member of the Cloud Native Computing Foundation, which is the home of the Kubernetes project.

Microsoft also continues to deepen its engagement with the Kubernetes community through a talented team of engineers. In 2016, Brendan Burns, one of the three co-founders of Kubernetes (along with Joe Beda and Craig McLuckie), left Google and joined Microsoft as a distinguished engineer.

Brendan Burns, Distinguished Engineer at Microsoft

We spoke with Burns at DockerCon Europe to find out more about Microsoft’s engagement with the Kubernetes community. Here is an edited version of that discussion:

Linux.com: First things first, why would an ex-Googler and Kubernetes co-founder join Microsoft?

Brendan Burns: Microsoft is a company with a history that’s unique in the world of computing. Microsoft has long enabled developer productivity. It has helped people who may not have thought of themselves as application builders in the first place, and enabled them to become people who are capable of building applications.

I have seen this with friends I went to college with. They took products like Visual Basic and Access and used them to build businesses or consulting practices. These technologies empowered those people. I think the cloud misses that. There is a gap where it’s hard to build reliable, scalable applications on the cloud.

I think that history of enabling developer productivity combined with a really great public cloud is an incredible opportunity to empower a whole new generation, a broader group of users to build these distributed applications. That’s why I am at Microsoft. I think it’s unique because this combination just doesn’t exist anywhere else in any other company.

Linux.com: What’s your role at Microsoft?

Burns: My role is to lead the teams that focus on containers and open source container orchestration within Microsoft. That includes managing the teams and making sure that we get the right people with the right skills, and it includes helping to set direction. It also involves writing some of the code myself. It’s a mix of everything that you would expect from engineering and technical leadership.

I’m really excited about trying to help Azure chart a direction into this new world and figuring out how to marry all of the skills that brought us really great developer tools like Visual Studio Code, with the skills of someone who is building a distributed application and who knows what it takes to deploy, manage, and operate a distributed application at scale.

I think there are a lot of people who work on development environments and a lot of people who build distributed systems, but there are fewer people who think about how they can come together, and that’s something that I’m pretty excited about as well. So, I’m trying to set that direction.

Linux.com: How is Microsoft consuming Kubernetes?

Burns: There are people who are building systems on top of Kubernetes. In fact, our Azure Container Service itself is deployed on Kubernetes. We also offer it as a service. In my capacity, I focus more on building a service for Azure users. The fact is, as big as Microsoft is, the world of public cloud is way bigger, so I want to build services that are useful and empowering to external users. I hope that by doing that, I build things that are useful for internal users as well.

Linux.com: What kind of engagement does Microsoft have with the Kubernetes community?

Burns: We contribute a lot of code. Some of this code is to make Azure work really well with Kubernetes. Some of it is code like Helm, an upstream open source project that is maintained primarily by Microsoft. Helm makes packaging easy, and it eases the deployment and management of containerized applications on top of Kubernetes.

We recently open sourced a project called Draft that is aimed at the developer side. We are trying to make it extremely easy for a developer, who may not have learned about containers or Kubernetes, to get started with those technologies and go beyond them.

We participate in the leadership of a lot of open source governance and steering committees. Michelle Noorali, a Microsoft engineer on my team, was recently elected to the Kubernetes Steering Committee. I was on the bootstrap steering committee and continue to be on the Kubernetes steering committee. We also have representatives on the boards of the Open Container Initiative and the Cloud Native Computing Foundation, and we contribute to Docker as well. Microsoft’s John Howard is the number-four contributor of all time to the Docker project. So, as you can see, there are a lot of different ways in which Microsoft contributes its expertise and knowledge in this space.

Learn more about Kubernetes in the free Introduction to Kubernetes course from The Linux Foundation.

OpenStack Aims to Improve Integration with Cloud Native Technologies

At the OpenStack Summit in Australia, the open-source cloud effort announced a series of new initiatives to help improve integration across a variety of complementary cloud native technologies.

On the first day of the event, several initiatives designed to help improve and promote integration between OpenStack and other open-source cloud efforts were announced. Among the announcements were the Open Infrastructure Integration effort, the launch of the OpenLab testing tools program, the debut of the public cloud passport program, and the formation of a financial services team.

“We’ve really put some focus into the strategy for the OpenStack Foundation for the next five years,” Jonathan Bryce, executive director of the OpenStack Foundation, told eWEEK.

Read more at eWEEK

The First Step in Modern Networking isn’t a Sidecar

One of the most important pieces of any modern web application is the network. As applications become more distributed, it becomes crucial to reason about the network and its behavior in order to understand how a system will behave. Service meshes are more and more frequently proposed as a means of tackling this problem. If you’re not familiar with meshes, Matt Klein has a great intro to them, and Christian Posta has a great series on Patterns with Envoy.

Fundamentally, modern apps benefit from networking patterns like meshes for three reasons:

  1. Scale: At the scale of most modern web applications, your traffic is a thing you manage. …

Read more at TurbineLabs.io

Understanding Tracing

Five questions for Bryan Liles on the complexities of tracing, recommended tools and skills, and how to learn more about monitoring.

The first thing that makes tracing complex is understanding how it fits into your application monitoring stack. I like to break down monitoring into metrics, logs, and tracing. Tracing allows you to understand how your application’s components interact with themselves and any potential consumers. Secondly, finding a good toolset that works across a diverse application infrastructure is also complex. This is why I’m hoping to see OpenTracing become more successful, since it provides a good interface based on real-world work at Google and Twitter. Finally, tracing is complex because of the number of components involved. If you’re working in a large microservice-based application, you could have scores of microservices coupled with databases of many types and other applications as well. Combined with the tracing infrastructure, this leads to a large number of items to consider. OpenTracing helps again by providing standards and clients that simplify integration for the developer and operations teams.

Read more at O’Reilly

5 Ways Blockchain Can Accelerate Open Organizations

Blockchain makes running an organization less costly. In the process, it introduces revolutionary degrees of transparency, inclusivity, and adaptability.

In an effort to not only understand blockchain itself, but also to discover the ways adopting it could change our approach to organizing today, I read several books on it. Blockchain Revolution, by father and son collaborators Don Tapscott and Alex Tapscott, is one of the most thoroughly researched I’ve encountered so far…

In the book, the authors raise two particularly interesting issues:

  • the impact of blockchain on organizational formation, and
  • the impact of blockchain on the ways we accomplish certain tasks

Pondering the first issue made me wonder: Why should organizations be formed in the first place, and how would blockchain technology “revolutionize” them according to open organization characteristics? I’ll explore that question in the first part of this two-part book review.

The second issue prompted me to think: How would our approaches to various tried-and-true organizational tasks change with the introduction of blockchain technology? I’ll address that one next time.

Read more at OpenSource.com

Analyzing Docker Container Performance with Native Tools

Containerization is changing how organizations deploy and use software. You can now deploy almost any software reliably with just the docker run command. And with orchestration platforms like Kubernetes and DC/OS, even production deployments are easy to set up.

You may have already experimented with Docker, and have maybe run a few containers. But one thing you might not have much experience with is understanding how Docker containers behave under different loads.

Because Docker containers, from the outside, can look a lot like black boxes, it’s not obvious to a lot of people how to go about getting runtime metrics and doing analysis.

In this post, we will set up a small CrateDB cluster with Docker and then go through some useful Docker commands that let us take a look at performance.

Read more at Crate.io

Tracing Memory Leaks in the NFC Digital Protocol Stack

By Thierry Escande, Senior Software Engineer at Collabora.

Kmemleak (Kernel Memory Leak Detector) allows you to track possible memory leaks inside the Linux kernel. Basically, it tracks dynamically allocated memory blocks in the kernel and reports those that no longer have any reference left and are therefore impossible to free. You can check the kmemleak documentation page for more details.

This post exposes real life use cases that I encountered while working on the NFC Digital Protocol stack.

Enabling kmemleak in the kernel

kmemleak can be enabled in the kernel configuration under Kernel hacking > Memory Debugging.

    [*] Kernel memory leak detector
    (4000) Maximum kmemleak early log entries
    < >   Simple test for the kernel memory leak detector
    [*]   Default kmemleak to off

I turn it off by default and enable it on demand by passing kmemleak=on on the kernel command line. If some leaks occur before kmemleak is initialized, you may need to increase the “early log entries” value; I set it to 4000.

The control interface of kmemleak is a single debugfs file located at /sys/kernel/debug/kmemleak. You can control kmemleak with the following operations:

Trigger a memory scan:

$ echo scan > /sys/kernel/debug/kmemleak

Clear the list of current leak suspects:

$ echo clear > /sys/kernel/debug/kmemleak

Check the possible memory leaks by reading the control file:

$ cat /sys/kernel/debug/kmemleak

I will not go into depth on the various NFC technologies; the following examples are based on NFC-DEP, the protocol used to connect two NFC devices and make them communicate through standard POSIX sockets. DEP stands for Data Exchange Protocol.

For the purpose of this post I’m using nfctool, a standalone command line tool used to control and monitor NFC devices. nfctool is part of neard, the Linux NFC daemon.

So let’s start with an easy case.

A simple case: leak in a polling loop

When you put an NFC device in target polling mode, it listens for different modulation modes from a peer device in initiator mode. When I first used kmemleak, I was surprised to see possible leaks reported before even a single byte had been exchanged, simply by turning target poll mode on for the nfc0 device.

$ nfctool -d nfc0 -p Target

A few seconds later, I triggered a kmemleak scan using:

$ echo scan > /sys/kernel/debug/kmemleak

The following message appeared in the syslog:

[11764.643878] kmemleak: 8 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

OK! Let’s check the kmemleak debugfs file then:

    $ cat /sys/kernel/debug/kmemleak
    unreferenced object 0xffff9be0f8f43a08 (size 8):
      comm "kworker/0:1", pid 41, jiffies 4297830116 (age 16.044s)
      hex dump (first 8 bytes):
        01 fe d3 80 ca 41 f1 a0                          .....A..
      backtrace:
        [] kmemleak_alloc+0x4a/0xa0
        [] kmem_cache_alloc_trace+0xf5/0x1d0
        [] digital_tg_listen_nfcf+0x3b/0x90 [nfc_digital]
        [] digital_wq_poll+0x5d/0x90 [nfc_digital]
        [] process_one_work+0x156/0x3f0
        [] worker_thread+0x4b/0x410
        [] kthread+0x109/0x140
        [] ret_from_fork+0x25/0x30
        [] 0xffffffffffffffff

This gives the call stack where the allocation was actually made. So let’s have a look at digital_tg_listen_nfcf()…

    int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
    {
        int rc;
        u8 *nfcid2;

        rc = digital_tg_config_nfcf(ddev, rf_tech);
        if (rc)
            return rc;

        nfcid2 = kzalloc(NFC_NFCID2_MAXSIZE, GFP_KERNEL);
        if (!nfcid2)
            return -ENOMEM;

        nfcid2[0] = DIGITAL_SENSF_NFCID2_NFC_DEP_B1;
        nfcid2[1] = DIGITAL_SENSF_NFCID2_NFC_DEP_B2;
        get_random_bytes(nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);

        return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, nfcid2);
    }

The only allocation here is the nfcid2 array, passed to digital_tg_listen() as its fourth parameter. This is a user argument that is supposed to be passed back to the callback digital_tg_recv_sensf_req(), either upon reception of a valid frame from the peer device or when a timeout error occurs (nobody on the other side is talking to us). A quick check in digital_tg_recv_sensf_req() shows that the user argument is not used at all, and of course never released.

As I said, that one was easy. There was no need for the nfcid2 array to be allocated in the first place so the fix was pretty straightforward.

Now digital_tg_listen_nfcf() looks good:

    int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
    {
        int rc;

        rc = digital_tg_config_nfcf(ddev, rf_tech);
        if (rc)
            return rc;

        return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, NULL);
    }

The commit for this fix can be found here.
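The leak pattern here is easy to reproduce in user space. Below is a hypothetical C sketch (the function names and the `outstanding` counter are illustrative, not taken from the kernel driver) of an allocation handed to a callback as opaque user data that the callback never frees, alongside the fixed variant:

```c
#include <stdlib.h>

/* Count of buffers allocated but never freed, so the leak is observable
 * without a leak detector. */
static int outstanding;

/* Stand-in for digital_tg_recv_sensf_req(): it ignores its user_data
 * argument entirely, so nobody ever frees the buffer. */
static void recv_cb_ignores_arg(void *user_data)
{
    (void)user_data;
}

/* Leaky variant: allocates a small id buffer and hands it to the
 * callback as opaque user data; the callback never frees it. */
static void listen_leaky(void)
{
    unsigned char *id = calloc(8, 1);
    if (!id)
        return;
    id[0] = 0x01;
    id[1] = 0xfe;
    outstanding++;
    recv_cb_ignores_arg(id);   /* last reference dropped: leaked */
}

/* Fixed variant: since the callback never uses the buffer,
 * don't allocate it at all. */
static void listen_fixed(void)
{
    recv_cb_ignores_arg(NULL);
}
```

Every call to `listen_leaky()` leaves one unreachable allocation behind, which is exactly the kind of block kmemleak flags in the kernel; `listen_fixed()` allocates nothing, so there is nothing to leak.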

Another use case for leak hunting involved un-freed socket buffers.

Continue reading on Collabora’s blog.

Linux Kernel Developer: Laura Abbott

The recent Linux Kernel Development Report, released by The Linux Foundation, included information about several featured Linux kernel developers. According to the report, roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since 2005, when the adoption of Git made detailed tracking possible. Over the next several weeks, we will be highlighting some specific Linux kernel developers who agreed to answer a few questions about what they do and why they contribute to the kernel.

In this article, we feature Laura Abbott, a Fedora Kernel Engineer at Red Hat.

Read more at The Linux Foundation

The Internet Sees Nearly 30,000 Distinct DoS Attacks Each Day: Study

The incidence of denial-of-service (DoS) attacks has consistently grown over the last few years, “steadily becoming one of the biggest threats to Internet stability and reliability.” Over the last year or so, the emergence of IoT-based botnets — such as Mirai and more recently Reaper, with as yet unknown total capacity — has left security researchers wondering whether a distributed denial-of-service (DDoS) attack could soon take down the entire internet. 

The problem is there is no macroscopic view of the DoS ecosphere. Analyses tend to be by individual research teams examining individual botnets or attacks. Now academics from the University of Twente (Netherlands); UC San Diego (USA); and Saarland University (Germany) have addressed this problem “by introducing and applying a new framework to enable a macroscopic characterization of attacks, attack targets, and DDoS Protection Services (DPSs).”

Read more at Security Week

Linux Kernel 4.14 LTS Delayed for November 12 as Linus Torvalds Announces 8th RC

Sad news today for those who had hoped to upgrade their GNU/Linux distributions to Linux kernel 4.14 LTS: Linus Torvalds announced a few moments ago the availability for testing of the eighth and final Release Candidate of the next long-term supported Linux kernel series, which will be supported for the next six years.

The release of RC8 delays the final release of Linux kernel 4.14 LTS by a week. Of course, this also means that the merge window for the Linux 4.15 kernel series will be pushed into Thanksgiving week, which isn’t quite what Linus Torvalds expected, as he’ll be on vacation with his family.

Read more at Softpedia