
Kubernetes is Transforming Operations in the Enterprise

At many organizations, managing containerized applications at scale is the order of the day (or soon will be). And few open source projects are having the impact in this arena that Kubernetes is.

Above all, Kubernetes is ushering in “operations transformation” and helping organizations make the transition to cloud-native computing, says Craig McLuckie, co-founder and CEO of Heptio and a co-founder of Kubernetes at Google, in a recent free webinar, ‘Getting to Know Kubernetes.’ Kubernetes was created at Google, which donated the open source project to the Cloud Native Computing Foundation.

McLuckie noted that, much as happened with the very first local-area networks and with Linux itself, small groups of upstart staffers at many organizations are driving operational change by adopting Kubernetes.

Read more at The Linux Foundation

Advanced lm-sensors Tips and Tricks on Linux

I’ve been using the lm-sensors tool ever since CPUs became hot enough to melt themselves. It monitors CPU temperature, fan speeds, and motherboard voltages. In this two-part series, I’ll explain some advanced uses of lm-sensors, and look at some of the best graphical interfaces to use with it.

Install and Run

Install lm-sensors, then run it with no options to see what it does.
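
If it isn't already installed, here is a minimal install sketch, assuming a Debian or Ubuntu system like the PC used in this article (the package name is lm-sensors on those distributions):

$ sudo apt-get install lm-sensors

Running sensors with no options then produces output like this: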

$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +37.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:         +35.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:         +37.0°C  (high = +80.0°C, crit = +100.0°C)
Core 2:         +34.0°C  (high = +80.0°C, crit = +100.0°C)
Core 3:         +36.0°C  (high = +80.0°C, crit = +100.0°C)

This is on an Ubuntu PC. My openSUSE Leap system installs it with a working configuration, but Ubuntu needs some additional tweaking. Run sensors-detect to probe for additional sensor chips. The safe method is to accept all of the defaults by pressing the Return key at every question:

$ sudo sensors-detect
# sensors-detect revision 6284 (2015-05-31 14:00:33 +0200)
# Board: ASRock H97M Pro4
# Kernel: 4.4.0-96-generic x86_64
# Processor: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (6/60/3)

This program will help you determine which kernel modules you need
to load to use lm_sensors most effectively. It is generally safe
and recommended to accept the default answers to all questions,
unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors.
Do you want to scan for them? This is totally safe. (YES/no): 

[...]

When it finishes scanning, it will ask you if you want it to modify /etc/modules:

To load everything that is needed, add this to /etc/modules:
#----cut here----
# Chip drivers
coretemp
nct6775
#----cut here----
If you have some drivers built into your kernel, the list above will
contain too many modules. Skip the appropriate ones!

Do you want to add these lines automatically to /etc/modules? (yes/NO)

Before you answer, look in your kernel configuration file to see whether the drivers are built in or are loadable modules. If they are built in, don’t modify /etc/modules; if they are modules, do. This is what loadable modules look like in my /boot/config-4.4.0-96-generic file:

CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_NCT6775=m

If they are built-in to the kernel (statically-compiled, if you prefer the nerdy term) then they look like this:

CONFIG_SENSORS_CORETEMP=y
CONFIG_SENSORS_NCT6775=y
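
A quick way to check which case applies, assuming your distribution ships the running kernel's configuration file under /boot (Ubuntu and openSUSE both do):

$ grep -E 'CONFIG_SENSORS_(CORETEMP|NCT6775)' /boot/config-$(uname -r)

An m means the driver is a loadable module and a y means it is built in; substitute the driver names sensors-detect reported for your own board.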

If they are loadable modules, go ahead and modify /etc/modules, and then manually load the modules, substituting your own module names of course:

$ sudo modprobe -a nct6775 coretemp

Use lsmod to verify they are loaded:

$ lsmod | grep -E 'nct6775|coretemp'
nct6775                57344  0
hwmon_vid              16384  1 nct6775
coretemp               16384  0

Any modules listed in /etc/modules will load at boot. Now let’s see what sensors shows us:

$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +37.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:         +35.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:         +37.0°C  (high = +80.0°C, crit = +100.0°C)
Core 2:         +34.0°C  (high = +80.0°C, crit = +100.0°C)
Core 3:         +36.0°C  (high = +80.0°C, crit = +100.0°C)

nct6776-isa-0290
Adapter: ISA adapter
Vcore:          +0.90 V  (min =  +0.00 V, max =  +1.74 V)
in1:            +1.82 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
AVCC:           +3.39 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
+3.3V:          +3.38 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in4:            +0.95 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in5:            +1.69 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in6:            +0.78 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
3VSB:           +3.42 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
Vbat:           +3.28 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
fan1:             0 RPM  (min =    0 RPM)
fan2:          1004 RPM  (min =    0 RPM)
fan3:             0 RPM  (min =    0 RPM)
fan4:             0 RPM  (min =    0 RPM)
fan5:             0 RPM  (min =    0 RPM)
SYSTIN:         +29.0°C  (high =  +0.0°C, hyst =  +0.0°C)  ALARM  sensor = thermistor
CPUTIN:         +42.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
AUXTIN:         +47.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
PECI Agent 0:   +37.0°C  (high = +80.0°C, hyst = +75.0°C)
                         (crit = +100.0°C)
PCH_CHIP_TEMP:   +0.0°C  
PCH_CPU_TEMP:    +0.0°C  
PCH_MCH_TEMP:    +0.0°C  
intrusion0:    ALARM
intrusion1:    ALARM
beep_enable:   disabled

A feast of information! Much of it is not useful, because the devices either do not exist or are not connected, like most of the fan sensors. On Ubuntu I disabled these in /etc/sensors3.conf with the ignore directive:

ignore fan1
ignore fan3
ignore fan4
ignore fan5

Now when I run sensors the output does not include those (Figure 1). You should be able to put your customizations in files in /etc/sensors.d, but this doesn’t work on my Ubuntu machine.
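
If bare ignore lines like these have no effect on your system, the sensors.conf grammar normally scopes such statements to a chip statement. A sketch, with the wildcard chip name taken from the nct6776-isa-0290 output above:

chip "nct6776-*"
    ignore fan1
    ignore fan3
    ignore fan4
    ignore fan5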

Figure 1: Using lm-sensors to monitor CPU temperature, fan speeds, and motherboard voltages.

What do Those Things Mean?

CPUTIN is CPU temperature index, AUXTIN is auxiliary temperature index, and SYSTIN is system temperature index. These are all sensors on the motherboard: AUXTIN is the power supply temperature sensor, and SYSTIN measures motherboard temperature. Core temperature is different from CPUTIN because it is read from a sensor on the CPU itself.

HYST is short for hysteresis. This is the value at which you want an alarm to turn off. For example, if your alarm temperature is 80°C, set your HYST value to 75°C so the alarm stops once the temperature falls back to 75°C.
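
These limits can be adjusted in the configuration file with set statements. A hedged sketch, assuming the CPUTIN limits on this chip are exposed as temp2_max and temp2_max_hyst (run sensors -u to see the exact feature names your chip uses):

chip "nct6776-*"
    set temp2_max 80
    set temp2_max_hyst 75

Then apply the set statements (writing limits to the chip requires root):

$ sudo sensors -s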

Get the Specs

The basic lm-sensors monitoring of CPU temperatures may be enough for you. However, you can fine-tune lm-sensors for greater accuracy, change labels, and run it as a daemon. For that, you need the spec sheet for your motherboard (which will also help you make sense of your lm-sensors output). Find your exact motherboard model and version by running $ sudo dmidecode -t 2. The kernel driver documentation is also useful; for example, see the kernel documentation for my nct6775 driver.
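
If you only want the short strings rather than the full DMI table, dmidecode can also print individual fields through its -s option (these keyword names come from dmidecode itself):

$ sudo dmidecode -s baseboard-manufacturer
$ sudo dmidecode -s baseboard-product-name
$ sudo dmidecode -s baseboard-version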

Come back next week and we’ll learn even cooler advanced uses of lm-sensors.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The Eye-Opening Power of Cultural Difference

Inclusivity is the quality of an open organization that allows and encourages people to join the organization and feel a connection to it. Practices aimed at enhancing inclusivity are typically those that welcome new participants to the organization and create an environment that makes them want to stay.

When we talk about inclusivity, we should clarify something: Being “inclusive” is not the same as being “diverse.” Diversity is a product of inclusivity; you need to create an inclusive community in order to become a diverse one, not the other way around. The degree to which your open organization is inclusive determines how it adapts to, responds to, and embraces diversity in order to improve itself. Interestingly enough, the best way to know which organizational changes will make your group more inclusive is to interact with the people you want to join your community.

Read more at OpenSource.com

The Four Layers of Programming Skills

When learning how to code for the first time, there’s a common misconception that learning how to code is primarily about learning the syntax of a programming language. That is, learning how the special symbols, keywords, and characters must be written in the right order for the language to run without errors.

However, focusing only on knowledge of syntax is a bit like practicing to write a novel by only studying grammar and spelling. Grammar and spelling are needed to write a novel, but there are many other layers of skills that are needed in order to write an original, creative novel.

Similarly, to be a developer and write original, creative code, we need other layers of skills in addition to syntax. Here is one way to organize these skills into what I call the four layers of programming skills:

Syntax skills

This is the layer that is most often focused on in the early learning phase. Syntax skills essentially means how to read and write a programming language using the rules for how different characters must be used for the code to actually work.

Read more at Dev.to

Kubernetes Gains Momentum as Big-Name Vendors Flock to Cloud Native Computing Foundation

Like a train gaining speed as it leaves the station, the Cloud Native Computing Foundation is quickly gathering momentum, attracting some of the biggest names in tech. In the last month and a half alone, AWS, Oracle, Microsoft, VMware, and Pivotal have all joined.

It’s not every day you see this group of companies agree on anything, but as Kubernetes has developed into an essential industry tool, each of these companies sees it as a necessity to join the CNCF and support its mission. This is partly driven by customer demand and partly by the desire to simply have a say in how Kubernetes and other related cloud-native technologies are developed.

For those of you who might not be familiar with this organization, it is the part of the Linux Foundation that houses Kubernetes, the open source project originally developed at Google. 

Read more at TechCrunch

Open Source Licensing: What Every Technologist Should Know

If you’re a software developer today, you know how to use open source software, but do you know how and why open source licensing started? A little background will help you understand how and why the licenses work the way they do.

Origins of open source licensing

Technologists today, having grown up in the age of Microsoft Windows and proprietary software, may believe that open source licensing is a recent trend that began in the 1990s. Although open source licensing’s popularity has skyrocketed in the past two decades, in truth, open source was the original model for software licensing, with proprietary licensing coming later.

In fact, the two models for software licensing (open source and proprietary) trace their origins to a common source: the Unix operating system. Unix was developed by AT&T Bell Laboratories in the late 1960s and early 1970s and became one of the first widely adopted general-purpose operating systems. At that time, AT&T’s market position was so dominant that the US Justice Department issued a consent decree barring AT&T from engaging in commercial activities outside its primary business of telephone service. Because of the consent decree, AT&T could not exploit Unix as a commercial product, so Bell Labs gave Unix away in source code form under terms that allowed its modification and redistribution. This led to Unix’s widespread use and popularity among computer scientists in the 1970s and 1980s.

After the US Justice Department lifted the consent decree in 1983, AT&T pivoted to commercialize Unix as a proprietary product and adopted more restrictive licensing terms that allowed Unix to be redistributed only in object code format. 

Read more at OpenSource.com

The Cloud-Native Architecture: One Stack, Many Options

As the chief technology officer of a company specializing in cloud-native storage, I have a first-hand view of the massive transformation happening right now in enterprise IT. In short, two things are happening in parallel that make it radically simpler to build, deploy, and run sophisticated applications.

The first is the move to the cloud. This topic has been discussed so much that I won’t try to add anything new. We all know it’s happening, and we all know that its impact is huge.

The second is the move to cloud-native architectures. Since this is a relatively new development, I want to focus on it — and specifically on the importance of pluggable cloud-native architectures — in today’s post. But before diving into how to architect for cloud native, let’s define it.

Read more at The New Stack

uniprof: Transparent Unikernel for Performance Profiling and Debugging

Unikernels are small and fast and give Docker a run for its money, while still offering stronger isolation, says Florian Schmidt, a researcher at NEC Europe who has developed uniprof, a unikernel performance profiler that can also be used for debugging. Schmidt explained more in his presentation at Xen Summit in Budapest in July.

Most developers think that unikernels are hard to create and debug. This is not entirely true: a unikernel is a single linked binary with a single, shared address space, which means you can use gdb. That said, developers do lack tools, such as effective profilers, that would help them create and maintain unikernels.

Enter uniprof

uniprof’s goal is to be a performance profiler that does not require changes to the unikernel’s code. It imposes minimal overhead while profiling, which means it can be useful even in production environments.

According to Schmidt, you may think that all you need is a stack profiler, something that would capture stack traces at regular intervals. You could then analyze them to figure out which code paths show up especially often, either because they are functions that take a long time to run, or because they are functions that are hit over and over again. This would point you to potential bottlenecks in your code.

A stack profiler for Xen already exists: xenctx is part of the Xen tool suite and is a generic introspection tool for Xen guests. As it has the option to print a call stack, you could run it over and over again, and you have something like a stack profiler. In fact, this was the starting point for uniprof, says Schmidt.

However, this approach presents several problems, not least of which is that xenctx is slow and can take up to 3ms per trace. This may not seem like much, but it adds up. And very high performance is not simply a nice feature; it’s a necessity. A profiler interrupts the guest all the time — you have to pause the guest, create a stack trace, and then unpause it. You cannot grab a stack trace while the guest is running, or you will encounter race conditions when the guest modifies the stack while you are reading it. High overhead can also influence the results, because it may change the unikernel’s behavior. So you need a low-overhead stack tracer if you are going to use it on production unikernels.

Making it Work

For a profiler to work, you need to access the registers to get the instruction pointer, which tells you where you are in the code. Then you need the frame pointer to get the size of the stack frame. Fortunately, this is easy: you can get both with the getvcpucontext() hypercall.

Then you need to access the stack memory to read the return addresses and the next frame pointers. This is more complicated. You need to read the memory from the guest you are profiling and map its contents into the guest running the profiler. For that, you need address resolution, because the mapping functionality wants machine frame numbers. This is a complex, multi-step process that finally yields a series of memory addresses.

Finally, you need to resolve these addresses into function names to see what is going on. For that you need a symbol table, which is again thankfully easy to produce: all you need to do is extract the symbols from the ELF binary with nm.
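
A minimal sketch of that step, assuming the unikernel image is a plain ELF binary (the file names here are hypothetical):

$ nm -n my-unikernel.elf > my-unikernel.symbols

The -n flag sorts the symbols numerically by address, which makes address-to-name lookups straightforward.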

Now that you have all these stack traces, you have to analyze them. One tool you can use is flame graphs, which represent the results graphically and let you see the relative run time of each step in your stack traces.
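
A hedged sketch of that step, assuming the captured traces have already been collapsed into the one-line-per-stack “folded” format that Brendan Gregg’s FlameGraph script expects:

$ git clone https://github.com/brendangregg/FlameGraph
$ ./FlameGraph/flamegraph.pl folded-stacks.txt > uniprof-profile.svg

The resulting SVG is interactive: open it in a browser and click a frame to zoom into that part of the call stack.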

Performance

Schmidt started his project by modifying xenctx. The original xenctx utility has a huge overhead, which you can reduce by caching the memory mappings and virtual-to-machine address translations. This cuts the delay from 3ms per trace to 40µs.

xenctx uses a linear search to resolve symbols. To make the search faster, you can use a binary search. Alternatively, you can avoid doing the resolution altogether while tracing and instead do it offline, after the tracing. This reduces the overhead further, to 30µs or less.

Schmidt, however, discovered that by adding some functionality to xenctx and eliminating other parts, he was fundamentally changing the tool, so it seemed logical to create a new tool from scratch.

The original uniprof is about 100 times faster than xenctx. Furthermore, Xen 4.7 introduced new low-level libraries, such as libxencall and libxenforeignmemory. Using these libraries instead of libxc, which older versions of Xen used, reduces latency by a further factor of 3: the original version of uniprof took 35µs for each stack trace, while the version that uses libxencall takes only 12µs.

The latest version of uniprof supports both sets of libraries, just in case you are running an older version of Xen. uniprof also fully supports ARM, something xenctx doesn’t do.


Watch the Keynote Videos from Open Source Summit in Los Angeles

If you weren’t able to attend Open Source Summit North America 2017 in Los Angeles, don’t worry! We’ve rounded up the following keynote presentations so you can hear from the experts about the growing impact of open source software.

Open source software isn’t just growing. It’s accelerating exponentially in terms of its influence on technology and society, and the sheer numbers involved are amazing, according to The Linux Foundation’s Executive Director Jim Zemlin. 

Watch the keynote presentations at The Linux Foundation

The Ten Essentials for Good API Documentation

API documentation is the number one reference for anyone implementing your API, and it can profoundly influence the developer experience. Because it describes what services an application programming interface offers and how to use those services, your documentation will inevitably create an impression about your product—for better or for worse.

In this two-part series I share what I’ve learned about API documentation. This part discusses the basics to help you create good API docs, while in part two, Ten Extras for Great API Documentation, I’ll show you additional ways to improve and fine-tune your documentation. 

Know your audience

Knowing who you address with your writing and how you can best support them will help you make decisions about the design, structure, and language of your docs. You will have to know who visits your API documentation and what they want to use it for. 

Your API documentation will probably be visited and used by the following audiences. 

Read more at A List Apart