Inclusivity is the quality of an open organization that allows and encourages people to join the organization and feel a connection to it. Practices aimed at enhancing inclusivity are typically those that welcome new participants to the organization and create an environment that makes them want to stay.
When we talk about inclusivity, we should clarify something: Being “inclusive” is not the same as being “diverse.” Diversity is a product of inclusivity; you need to create an inclusive community in order to become a diverse one, not the other way around. The degree to which your open organization is inclusive determines how it adapts to, responds to, and embraces diversity in order to improve itself. Interestingly enough, the best way to know which organizational changes will make your group more inclusive is to interact with the people you want to join your community.
A common misconception among people learning to code for the first time is that programming is primarily about learning the syntax of a programming language: that is, learning how the special symbols, keywords, and characters must be written in the right order for the code to run without errors.
However, focusing only on knowledge of syntax is a bit like preparing to write a novel by studying only grammar and spelling. Grammar and spelling are necessary, but many other layers of skill are needed to write an original, creative novel.
Similarly, to be a developer and write original, creative code, we need other layers of skills in addition to syntax. Here is one way to organize these skills into what I call the four layers of programming skills:
Syntax skills
This is the layer that gets the most attention in the early learning phase. Syntax skills essentially mean knowing how to read and write a programming language: the rules for how different characters and keywords must be used for the code to actually work.
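For instance, here is a tiny, purely illustrative snippet (in C, though any language makes the same point): the syntax layer is what tells you that the include line, the semicolons, and the braces all have to be exactly right before the program will run at all.

```c
#include <stdio.h>

int main(void)
{
    /* Remove the semicolon below, or one of the braces above, and the
     * compiler rejects the entire program: that is the syntax layer. */
    printf("Hello, world\n");
    return 0;
}
```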
It’s not every day you see this group of companies agree on anything, but as Kubernetes has developed into an essential industry tool, each of these companies sees it as a necessity to join the CNCF and support its mission. This is partly driven by customer demand and partly by the desire to simply have a say in how Kubernetes and other related cloud-native technologies are developed.
For those of you who might not be familiar with this organization, it is the part of the Linux Foundation that houses Kubernetes, the open source project originally developed at Google.
If you’re a software developer today, you know how to use open source software, but do you know how and why open source licensing started? A little background will help you understand how and why the licenses work the way they do.
Origins of open source licensing
Technologists today, having grown up in the age of Microsoft Windows and proprietary software, may believe that open source licensing is a recent trend that began in the 1990s. Although open source licensing’s popularity has skyrocketed in the past two decades, in truth, open source was the original model for software licensing, with proprietary licensing coming later.
In fact, the two models for software licensing (open source and proprietary) trace their origins to a common source: the Unix operating system. Unix was developed by AT&T Bell Laboratories in the late 1960s and early 1970s and became one of the first widely used general-purpose operating systems. At that time, AT&T’s market position was so dominant that the US Justice Department issued a consent decree barring AT&T from engaging in commercial activities outside the field of its telephone service, which was AT&T’s primary business. Because of the consent decree, AT&T could not exploit Unix as a commercial product, so Bell Labs gave Unix away in source code form under terms that allowed its modification and redistribution. This led to Unix’s widespread use and popularity among computer scientists in the 1970s and 1980s.
After the US Justice Department lifted the consent decree in 1983, AT&T pivoted to commercialize Unix as a proprietary product and adopted more restrictive licensing terms that allowed Unix to be redistributed only in object code format.
As the chief technology officer of a company specializing in cloud-native storage, I have a firsthand view of the massive transformation happening right now in enterprise IT. In short, two things are happening in parallel that make it radically simpler to build, deploy, and run sophisticated applications.
The first is the move to the cloud. This topic has been discussed so much that I won’t try to add anything new. We all know it’s happening, and we all know that its impact is huge.
The second is the move to cloud-native architectures. Since this is a relatively new development, I want to focus on it in today’s post, and specifically on the importance of pluggable cloud-native architectures. But before diving into how to architect for cloud native, let’s define it.
Unikernels are small and fast and give Docker a run for its money, while still providing stronger isolation, says Florian Schmidt, a researcher at NEC Europe who has developed uniprof, a unikernel performance profiler that can also be used for debugging. Schmidt explained more in his presentation at Xen Summit in Budapest in July.
Most developers think that unikernels are hard to create and debug. This is not entirely true: a unikernel is a single linked binary with a shared address space, which means you can use gdb. That said, developers do lack tools, such as effective profilers, that would help them create and maintain unikernels.
Enter uniprof
uniprof aims to be a performance profiler that requires no changes to the unikernel’s code and adds only minimal overhead while profiling, which means it can be useful even in production environments.
According to Schmidt, you may think that all you need is a stack profiler, something that would capture stack traces at regular intervals. You could then analyze them to figure out which code paths show up especially often, either because they are functions that take a long time to run, or because they are functions that are hit over and over again. This would point you to potential bottlenecks in your code.
A stack profiler for Xen already exists: xenctx is part of the Xen tool suite and is a generic introspection tool for Xen guests. Because it can print a call stack, running it over and over again gives you something like a stack profiler. In fact, this was the starting point for uniprof, says Schmidt.
However, this approach presents several problems, not least of which is that xenctx is slow and can take up to 3ms per trace. This may not seem like much, but it adds up. And very high performance is not simply a nice feature; it’s a necessity. A profiler interrupts the guest constantly: you have to pause the guest, create a stack trace, and then unpause it. You cannot grab a stack trace while the guest is running, or you will encounter race conditions when the guest modifies the stack while you are reading it. High overhead can also skew the results, because it may change the unikernel’s behavior. So you need a low-overhead stack tracer if you are going to use it on production unikernels.
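In outline, such a sampling profiler loops something like the sketch below (a minimal illustration against the libxc API; capture_stack_trace() is a hypothetical placeholder for the register fetch and stack walk described in the next section):

```c
#include <stdint.h>
#include <unistd.h>
#include <xenctrl.h>

/* Hypothetical helper: grab one stack trace from the paused guest. */
extern void capture_stack_trace(xc_interface *xch, uint32_t domid);

/* Pause the guest, capture one trace, unpause, sleep, repeat.
 * The time spent between pause and unpause is the overhead that
 * has to stay small for production use. */
void sample_loop(xc_interface *xch, uint32_t domid,
                 unsigned int interval_us, int nr_samples)
{
    for (int i = 0; i < nr_samples; i++) {
        xc_domain_pause(xch, domid);
        capture_stack_trace(xch, domid);
        xc_domain_unpause(xch, domid);
        usleep(interval_us);
    }
}
```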
Making it Work
For a profiler to work, you need to access the registers to get the instruction pointer, which tells you where you are in the code. Then you need the frame pointer to get the size of the stack frame. Fortunately, this is easy: you can get both with the getvcpucontext() hypercall.
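With libxc, fetching those two registers might look roughly like this (a sketch only; the wrapper name xc_vcpu_getcontext() and the x86_64 union layout are taken from the public libxc headers, but uniprof’s actual code may differ):

```c
#include <stdint.h>
#include <xenctrl.h>

/* Fetch the instruction pointer and frame pointer of vCPU 0 of a
 * (paused) guest via the getvcpucontext hypercall. Error handling
 * is abbreviated for brevity. */
int read_guest_registers(xc_interface *xch, uint32_t domid,
                         uint64_t *rip, uint64_t *rbp)
{
    vcpu_guest_context_any_t ctxt;

    if (xc_vcpu_getcontext(xch, domid, 0 /* vcpu */, &ctxt) < 0)
        return -1;

    /* Assuming a 64-bit x86 guest. */
    *rip = ctxt.x64.user_regs.rip;
    *rbp = ctxt.x64.user_regs.rbp;
    return 0;
}
```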
Then you need to access the stack memory to read the return addresses and the next frame pointers. This is more complicated. You need to read the memory from the guest you are profiling and map its contents into the guest running the profiler. For that, you need address resolution, because the mapping functionality wants machine frame numbers. This is a multi-step, complex process that finally yields a series of memory addresses.
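A rough sketch of that frame-pointer walk is shown below. The helper guest_virt_to_mfn() stands in for the guest-virtual-to-machine-frame translation (on x86, libxc offers xc_translate_foreign_address() for this), and a real implementation such as uniprof would cache the mappings instead of mapping and unmapping a page per frame:

```c
#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

#define PAGE_SIZE 4096

/* Hypothetical helper: translate a guest-virtual address into a
 * machine frame number so the page can be mapped into our domain. */
extern unsigned long guest_virt_to_mfn(xc_interface *xch, uint32_t domid,
                                       uint64_t vaddr);

/* Walk the guest stack: starting from rip/rbp, map the page holding
 * each frame, read the saved frame pointer and return address, and
 * follow the chain. Frames spanning a page boundary are ignored here
 * to keep the sketch short. Returns the number of addresses collected. */
int walk_stack(xc_interface *xch, uint32_t domid,
               uint64_t rip, uint64_t rbp,
               uint64_t *trace, int max_depth)
{
    int depth = 0;

    trace[depth++] = rip;

    while (rbp && depth < max_depth) {
        unsigned long mfn = guest_virt_to_mfn(xch, domid, rbp);
        void *page = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                          PROT_READ, mfn);
        if (!page)
            break;

        uint64_t *frame = (uint64_t *)((char *)page +
                                       (rbp & (PAGE_SIZE - 1)));
        uint64_t next_rbp = frame[0];  /* saved frame pointer */
        uint64_t ret_addr = frame[1];  /* return address */

        munmap(page, PAGE_SIZE);

        if (!ret_addr)
            break;
        trace[depth++] = ret_addr;
        rbp = next_rbp;
    }
    return depth;
}
```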
Finally, you need to resolve these addresses into function names to see what is going on. For that you need a symbol table, which is again thankfully easy to build: all you need to do is extract the symbols from the ELF binary with nm.
Now that you have all these stack traces, you have to analyze them. One tool you can use is flame graphs, which represent the results graphically and let you see the relative run time of each function in your stack traces.
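For example, once the traces have been folded into the one-line-per-stack format that the flamegraph.pl script from Brendan Gregg’s FlameGraph project expects (frames separated by semicolons, followed by a sample count; the conversion step from uniprof’s raw output is assumed here), generating the graph is a single command:

```sh
# folded.txt: one line per unique stack plus how often it was seen, e.g.
#   main;handle_request;parse_headers 1832
#   main;handle_request;send_response 421
./flamegraph.pl folded.txt > unikernel-profile.svg
```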
Performance
Schmidt started his project by modifying xenctx. The original xenctx utility has a huge overhead, which you can reduce by caching memory mappings and virtual-to-machine address translations. This brings the delay down from 3ms per trace to 40µs.
xenctx uses a linear search to resolve symbols. To make the search faster, you can use a binary search. Alternatively, you can avoid doing the resolution altogether while tracing and do it offline, after the tracing. This reduces the overhead further, to 30µs or less.
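As an illustration of the faster lookup (the names and layout here are invented for the sketch), resolving an address means binary-searching a table sorted by address, parsed for example from the output of nm -n, for the last symbol that starts at or before that address:

```c
#include <stddef.h>
#include <stdint.h>

/* One entry per symbol, e.g. parsed from `nm -n unikernel`, so the
 * table is already sorted by start address. */
struct symbol {
    uint64_t addr;
    char     name[64];
};

/* Binary search for the last symbol whose start address is <= addr:
 * O(log n) per lookup instead of xenctx's linear scan. */
const char *resolve_symbol(const struct symbol *table, size_t n,
                           uint64_t addr)
{
    size_t lo = 0, hi = n;

    if (n == 0 || addr < table[0].addr)
        return "<unknown>";

    while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        if (table[mid].addr <= addr)
            lo = mid;
        else
            hi = mid;
    }
    return table[lo].name;
}
```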
Schmidt, however, discovered that by adding some functionality to xenctx and eliminating other parts, he was fundamentally changing the tool, and it seemed logical to create a new tool from scratch.
The original version of uniprof is about 100 times faster than xenctx. Furthermore, Xen 4.7 introduced new low-level libraries, such as libxencall and libxenforeignmemory. Using these libraries instead of the libxc interface used in older versions of Xen reduces latency by a further factor of three: the original version of uniprof took 35µs for each stack trace, while the version that uses libxencall takes only 12µs.
The latest version of uniprof supports both sets of libraries, just in case you are running an older version of Xen. uniprof also fully supports ARM, something xenctx doesn’t do.
If you weren’t able to attend Open Source Summit North America 2017 in Los Angeles, don’t worry! We’ve rounded up the following keynote presentations so you can hear from the experts about the growing impact of open source software.
API documentation is the number one reference for anyone implementing your API, and it can profoundly influence the developer experience. Because it describes what services an application programming interface offers and how to use those services, your documentation will inevitably create an impression about your product—for better or for worse.
In this two-part series I share what I’ve learned about API documentation. This part discusses the basics to help you create good API docs, while in part two, Ten Extras for Great API Documentation, I’ll show you additional ways to improve and fine-tune your documentation.
Know your audience
Knowing who you address with your writing and how you can best support them will help you make decisions about the design, structure, and language of your docs. You will have to know who visits your API documentation and what they want to use it for.
Your API documentation will probably be visited and used by the following audiences.
Kubernetes can be the ultimate local development environment, particularly if you are wrangling a large number of microservices. In this post, we will cover how you can create a local development workflow using Minikube and tools such as Make to iterate fast, without the wait imposed by your continuous integration pipeline. With this workflow, you can code and test changes immediately.
Traditionally, you test your code changes by rebuilding the Docker images (either locally or via your continuous integration pipeline), pushing the images to a Docker registry after a successful build, and then redeploying to your Kubernetes cluster.
Overall development workflow
Here is our simplified development workflow (a minimal Make-based sketch of the local steps follows the list):
Make changes in your code on your local laptop
Build local Docker images and deploy them to Minikube
Test your code changes after deploying into Minikube
If the changes are good, commit them to your version control repository
Your version control system triggers the continuous integration pipeline
Continuous integration builds the Docker images and pushes them to your registry
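For steps 1 through 3, a minimal Makefile sketch might look like the following. The image tag, manifest path, and deployment name are placeholders, and it assumes the deployment’s imagePullPolicy allows the cluster to use a locally built image; the key trick is pointing your Docker client at Minikube’s Docker daemon so no registry push is needed:

```makefile
# Placeholders: adjust the image tag, manifest path, and deployment name.
IMAGE      ?= my-service:dev
MANIFEST   ?= k8s/deployment.yaml
DEPLOYMENT ?= my-service

# Build the image directly inside Minikube's Docker daemon so the
# cluster can run it without pushing to a registry.
build:
	eval $$(minikube docker-env) && docker build -t $(IMAGE) .

# (Re)deploy to the local cluster and wait for the rollout to finish.
deploy: build
	kubectl apply -f $(MANIFEST)
	kubectl rollout restart deployment/$(DEPLOYMENT)
	kubectl rollout status deployment/$(DEPLOYMENT)
```

With something like this in place, the inner loop becomes: edit the code, run make deploy, and test against the pods running in Minikube; the registry push and CI round trip only happen once you commit.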
Abstractions and metadata are the future of architecture in systems engineering, as they were before in software engineering. In many languages, there are abstractions and metadata; however, systems engineering has never adopted this view. Systems were always thought of as too unique for any standard abstractions. Now that we’ve standardized the lower-level abstractions, we’re ready to build new system-level abstractions.
There be dragons
When discussing abstractions, starting with a healthy dose of skepticism is important. Andrew Koenig stated, “Abstraction is selective ignorance.” And Joel Spolsky coined the term “Law of Leaky Abstractions” when he described how all non-trivial abstractions leak details of what they abstract.
Know that you’re choosing to be ignorant of a system when you abstract it. This doesn’t mean everyone is ignorant of the underlying system, but it does mean you’ll have less insight into the system.