
Reasons Kubernetes Is Cool

I will try to explain some reasons I think Kubernetes is interesting without using the words “cloud native”, “orchestration”, “container”, or any Kubernetes-specific terminology :). I’m going to explain this mostly from the perspective of a Kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well.

I’m not going to try to address the question of “should you use Kubernetes for your production systems?” at all; that is a very complicated question (not least because “in production” has totally different requirements depending on what you’re doing).

Kubernetes lets you run code in production without setting up new servers

The first pitch I got for Kubernetes was the following conversation with my partner Kamal:…

Read more at Julia Evans’ blog

Linux Kernel 4.14 LTS Expected to Arrive Early Next Month, RC4 Ready for Testing

A day later than expected, the fourth RC (Release Candidate) build of the upcoming Linux 4.14 LTS kernel series has been announced earlier today by Linus Torvalds, who gives us an insight into the development cycle.

According to Linus Torvalds, things are starting to calm down in the development cycle of Linux kernel 4.14, which will be the next long-term support (LTS) release, and while today’s RC4 milestone is bigger than a Release Candidate should be at this stage, it’s still fairly normal, with the exception of a large watchdog merge.

“In particular, ignoring that core watchdog thing, it’s the usual ‘mostly drivers and arch updates’ … and then the usual random stuff elsewhere,” said Linus Torvalds in the mailing list announcement.

Read more at Softpedia

10 Layers of Linux Container Security

Containers provide an easy way to package applications and deliver them seamlessly from development to test to production. This helps ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), and private or public clouds. These benefits are leading organizations to rapidly adopt containers in order to easily develop and manage the applications that add business value.

Enterprises require strong security, and anyone running essential services in containers will ask, “Are containers secure?” and “Can we trust containers with our applications?”

Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container lifecycle.

Try these 10 key elements to secure different layers of the container solution stack and different stages of the container lifecycle.

Read more at OpenSource.com

This Week in Open Source: Linux Foundation Launches Open Source Networking Event Series, Skype For Linux Keeps Expanding, & More

This week in Linux and open source news, The Linux Foundation kicks off its new Open Source Networking events, Skype for Linux keeps gaining new features, and more.

1) In an effort to drive vendor collaboration, The Linux Foundation is kicking off a new “OSN” event series in Paris, Milan, Stockholm, London, Tel Aviv and Japan.

Linux Foundation to Hold Global Open Source Networking Events, Looks to Foster Local Provider, Vendor Collaboration – FierceTelecom

2) Skype for Linux keeps gaining the features found in the Windows and Mac versions.

Microsoft Closes the Gap Between Skype for Windows and Linux – eWeek

3) EdgeX Foundry is making its first major code release available later this month.

EdgeX’s Barcelona Release Sets Path for Open Source IoT – SDxCentral

4) “Mozilla has announced the latest recipients of its Open Source Support grants, totaling $539,000.”

Mozilla Funds Open Source Projects with Half a Million in Grants – TechCrunch

5) Google researchers have discovered at least three software bugs in a widely used software package that might affect Linux-running devices.

Code-Execution Flaws Threaten Users of Routers, Linux, and Other OSes – Ars Technica

Join Hyperledger at Sibos in Toronto

We’re traveling to Toronto in a few weeks to attend Sibos 2017, Oct 16-19. Under the conference theme of ‘Building for the Future,’ we have a robust program agenda planned that is designed to help attendees learn about permissioned blockchains, distributed ledger technologies and smart contracts, plus the latest innovations coming out of Hyperledger.

There will be a mix of Hyperledger sessions, moderated by Executive Director Brian Behlendorf and others on the team as well as our members, that touch on everything from business standards to security implications to specific use cases of blockchain.

A brief synopsis of the schedule of activities is below. We hope to see you there!

Check out the complete line-up of Hyperledger activities onsite at Sibos here.

Read more at Hyperledger

4 Best Linux Distros for Older Hardware

One of the many great aspects of the Linux operating system is its ability to bring new life to old hardware. This is not only a boon for your bottom line but also an environmentally sound philosophy. Instead of sending that older (still functioning) hardware to the trash heap, give it a second lease on life with the help of Linux. You certainly won’t be doing that with Windows 7, 8, or 10. Linux, on the other hand, offers a good number of options for those wanting to extend the life of their aging machines.

And don’t think these distributions aimed at outdated hardware are short on features. Remember, when that hardware was in its prime, it was capable of running everything you needed. Even though times have changed (and software demands far more power from the supporting hardware), you can still get a full-featured experience from a lightweight distro.

Let’s take a look at four distributions that will make your aging machines relevant again.

Linux Lite

If you’re looking for a distribution that is fully functional, out of the box, Linux Lite might be your ticket. Not only is Linux Lite an ideal distribution for aging hardware, it’s also one of the best distributions for new users. Linux Lite is built upon the latest Ubuntu LTS release and achieves something few other distributions in this category can — it manages to deliver all the tools you need to get your work done. This isn’t a distro that substitutes AbiWord and Gnumeric for LibreOffice (not that there’s anything wrong with those pieces of software).

Linux Lite depends upon the Xfce Desktop Environment (Figure 1) and includes the likes of LibreOffice, Firefox, Thunderbird, VLC, GIMP, GNOME Disks, and much more. With the use of Xfce and the inclusion of a full complement of software, Linux Lite makes for an outstanding distribution for new users working with old hardware. That’s a serious win-win for businesses that want to save costs by distributing old hardware to temp employees and for households that want to hand down hardware to younger members of the family.

Figure 1: The Linux Lite Xfce desktop.

Don’t let the “Lite” moniker fool you; this isn’t some stripped-down operating system. Linux Lite is a full-fledged distribution that just so happens to run well on lesser-powered machines.

The minimum system requirements for Linux Lite are:

  • 700MHz processor

  • 512MB RAM

  • VGA screen capable of 1024×768 resolution

  • DVD drive or USB port (in order to install from the ISO image)

  • At least 5GB of free disk space

Bodhi Linux

Bodhi Linux has always held a special place in my heart. As the Enlightenment desktop was one of the first to pull me away from my beloved AfterStep, it was a breath of fresh air that a distribution was dedicated to keeping that particular desktop relevant. And what a masterful job the developers of Bodhi Linux have done.

Although the Enlightenment desktop isn’t exactly one that will have new users crying to the heavens, “Where have you been all my life?”, it is certainly a fan-favorite for many an old-school Linux user. But don’t think new users will have nothing but trouble with Enlightenment. For standard usage, it’s fairly straightforward. It’s when you want to begin customizing the desktop that you might encounter complexity. But if new users can get into the Enlightenment groove, they will find one of the most flexible desktops available.

Like Linux Lite, Bodhi is built upon the latest Ubuntu LTS release, but makes use of the Moksha Desktop (Figure 2) as its user interface. Moksha is a continuation of the Enlightenment E17 desktop and the Bodhi developers have done an outstanding job of bringing Enlightenment into the modern day (while retaining that which makes Enlightenment special).

Figure 2: The Moksha desktop in Bodhi Linux is elegant and simple.

The one caveat to Bodhi (besides the learning curve of Moksha) is that, out of the box, it doesn’t include much in the way of user-facing applications. You will find the Midori browser, ePad text editor, ePhoto image viewer, and not much more. Fortunately, Bodhi includes its own app store, called Appcenter, where users can easily install any number of software titles.

The minimum system requirements for Bodhi are:

  • 500MHz processor

  • 256MB of RAM

  • 4GB of drive space

The recommended requirements are:

  • 1.0GHz processor

  • 512MB of RAM

  • 10GB of drive space

Puppy Linux

No list of lightweight Linux distributions would be complete without Puppy Linux. Puppy is unique in that it isn’t a single Linux distribution, but a collection of distributions that share the same guiding principles and are built with the same tool (Woof-CE). There are three categories of Puppy Linux:

  • The official Puppy Linux distributions. These are maintained by the Puppy Linux team and are targeted for general purpose.

  • The woof-built Puppy Linux distributions. These are developed to suit specific needs and appearances (while also targeting general purpose).

  • The unofficial derivatives (aka “puplets”). These are remasters, made and maintained by Puppy Linux enthusiasts, that target specific purposes.

Rather than being based on a single parent distribution, Puppy offers releases based on both Ubuntu and Slackware.

As you might expect, the tools offered on the Puppy Linux desktop (Figure 3) lean toward the minimal side of things (AbiWord, Gnumeric, mtPaint, Sylpheed, Pale Moon, etc.). Considering that the Puppy Linux ISO comes in at 224MB, that is understandable. Along with this minimalist take on Linux, Puppy Linux is one of the best at making older hardware feel new again. Puppy can run smoothly and quickly on a 333MHz processor with 256MB of RAM.

Figure 3: The user-friendly world of Puppy Linux.

According to the Puppy Linux developers, Puppy is “grandpa-friendly certified.”

Lubuntu

If you’re looking for an Ubuntu respin that will give life to that aging PC, Lubuntu is a winner. Lubuntu is part of the Ubuntu family and makes use of the LXDE desktop (Figure 4). This aging-PC-friendly distribution includes a selection of lightweight applications that won’t bog down your machine. Like Puppy Linux, Lubuntu is incredibly easy to use and opts for slimmer applications (such as AbiWord and Gnumeric). Lubuntu also includes Firefox (for web browsing) as well as Audacious and GNOME MPlayer for multimedia playback.

Figure 4: The Lubuntu desktop is clean and simple to use.

Lubuntu is a lightweight distribution, but not nearly as lightweight as, say, Puppy Linux. Lubuntu can work on computers up to around ten years old. The minimum requirements for this particular desktop Linux are:

  • CPU: Pentium 4 or Pentium M or AMD K8

  • For local applications, Lubuntu can function with 512MB of RAM. For online usage (YouTube, Google+, Google Drive, and Facebook), 1GB of RAM is recommended.

Lubuntu also includes the Synaptic package manager, so if those base applications aren’t enough, you can always install whatever you need. New users will greatly appreciate the simplicity of the desktop.

There is next to zero learning curve involved with LXDE. Combine the ease of LXDE with the inclusion of lightweight apps, and you cannot go wrong with Lubuntu. If you’re concerned about missing out (by using the likes of AbiWord), rest assured that some of these tools work with more standard formats. Take AbiWord, for instance: it can save as .doc, .rtf, .txt, .epub, .pdf, .odt, and more. What’s best about the included apps is that they are lightning fast and reliable. The default software list included with Lubuntu offers quite a bit more than your average lightweight Linux distribution. You’ll find:

  • Xfburn

  • Mpv Media Player

  • guvcview

  • Audacious

  • GNOME MPlayer

  • PulseAudio Volume Control

  • AbiWord

  • Gnumeric

  • Firefox

  • Pidgin

  • Sylpheed

  • Transmission

  • Document Viewer

  • mtPaint

  • Simple Scan

  • GNOME Disks

  • PCManFM

  • Leafpad

  • Xpad

If you’re looking for an official Ubuntu flavor that can breathe life into that old hardware, Lubuntu is a great call.

The choice is yours

There are quite a number of other lightweight Linux distributions, but the four I’ve listed here offer the most variety, reliability, and capability, all the while performing like champs on older hardware. Give one of these a shot and see if those old desktops can’t be given new life without too much work.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Double Your Development Velocity without Growing Your Team

The Developer Experience team at SendGrid is a small but mighty force of two. We attempt to tackle every problem that we can get our hands on. This often means that some items get left behind. At the outset, we surveyed everything that was going on in our open source libraries, and we quickly realized that we needed to find a way to prioritize what we were going to work on. Luckily, our team lives, organizationally, on the Product Management team, and we had just received a gentle nudge and training on the RICE prioritization framework.

On our company blog, I wrote an article about how employing this framework, using a spreadsheet, helped us double our velocity as a team within the first sprint. Our development velocity doubled because the most impactful things for the time spent are not always the biggest things, but the biggest things tend to attract the most attention due to their size.
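
For readers who haven’t met it, the RICE score is commonly defined as follows (this is the framework’s standard formulation, not a formula taken from the SendGrid post):

\[
\text{RICE score} = \frac{\text{Reach} \times \text{Impact} \times \text{Confidence}}{\text{Effort}}
\]

Candidate work items are then tackled in descending score order, which is how the most impactful items for the time spent float above the merely biggest ones.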

Read more at The Linux Foundation

Review by Many Eyes Does Not Always Prevent Buggy Code

Writing code is hard. Writing secure code is harder—much harder. And before you get there, you need to think about design and architecture. When you’re writing code to implement security functionality, it’s often based on architectures and designs that have been pored over and examined in detail. They may even reflect standards that have gone through worldwide review processes and are generally considered perfect and unbreakable.*

However good those designs and architectures are, though, there’s something about putting things into actual software that’s, well, special. With the exception of software proven to be mathematically correct,** being able to write software that accurately implements the functionality you’re trying to realize is somewhere between a science and an art. This is no surprise to anyone who’s actually written any software, tried to debug software, or divine software’s correctness by stepping through it; however, it’s not the key point of this article.

Read more at OpenSource.com

What You Need to Know: Kubernetes and Swarm

Kubernetes and Docker Swarm are both popular and well-known container orchestration platforms. You don’t need a container orchestrator to run a container, but orchestrators are important for keeping your containers healthy and add enough value that you need to know about them.

This blog post introduces the need for an orchestrator, then chalks up the differences between these two platforms at an operational level.

What has orchestration done for you lately?

Even if you are not using Kubernetes or Swarm for your internal projects, that doesn’t mean you’re not benefitting from their use. For instance, ADP, who provide iHCM and payroll services in the USA, use Docker’s EE product (which is based around Swarm) to run some of their key systems.

Read more at Alex Ellis Blog

Performance Analysis in Linux (Continued): When Performance Really Matters

By Gabriel Krisman Bertazi, Software Engineer at Collabora.

This blog post is based on the talk I gave at the Open Source Summit North America 2017 in Los Angeles. Let me start by thanking my employer Collabora, for sponsoring my trip to LA.

Last time I wrote about Performance Assessment, I discussed how an apparently naive code snippet can hide major performance drawbacks. In that example, the issue was caused by the randomness of the conditional branch direction, triggered by our unsorted vector, which really confused the Branch Predictor inside the processor.
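
If you missed that first post, here is a minimal C sketch of the kind of snippet being described (my reconstruction for illustration, not the author’s original code). Compiled with light optimization (say, -O1), the pass over the sorted vector typically runs several times faster than the pass over the unsorted one, even though both execute the same instructions on the same values:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1u << 20)
#define REPS 100

/* Sum only the elements >= 128. With random data the branch direction is
 * essentially a coin flip, so the branch predictor is wrong about half the
 * time; with sorted data it guesses right almost every time. */
static long conditional_sum(const int *v, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        if (v[i] >= 128)               /* data-dependent branch */
            sum += v[i];
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *v = malloc(N * sizeof *v);
    if (!v)
        return 1;
    for (size_t i = 0; i < N; i++)
        v[i] = rand() % 256;

    long checksum = 0;
    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)     /* unsorted: many mispredictions */
        checksum += conditional_sum(v, N);
    clock_t t1 = clock();

    qsort(v, N, sizeof *v, cmp_int);

    clock_t t2 = clock();
    for (int r = 0; r < REPS; r++)     /* sorted: predictable branch */
        checksum += conditional_sum(v, N);
    clock_t t3 = clock();

    printf("unsorted: %.2fs  sorted: %.2fs  (checksum %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, checksum);
    free(v);
    return 0;
}
```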

An important thing to mention before we start is that performance issues arise in many forms and may have several root causes. While in this series I have focused on processor corner cases, those are in fact a tiny sample of how things can go wrong for performance. Many other factors matter, particularly well-thought-out algorithms and good hardware. Without a well-crafted algorithm, there is no compiler optimization or quick hack that can improve the situation.

In this post, I will show one more example of how easy it is to disrupt performance of a modern CPU, and also offer a quick discussion of why performance matters, as well as a few cases where it shouldn’t matter.

If you have any questions, feel free to start a discussion below in the Comments section, and I will do my best to follow up on your question.

CPU Complexity is continuously rising

Every year, new generations of CPUs and GPUs hit the market carrying an ever-increasing count of transistors inside their enclosures, as shown by the graph below depicting the famous Moore’s law. While the metric is not perfect in itself, it is a fair indication of the steady growth of complexity inside our integrated circuits.

Figure 1: Transistor counts over the years, illustrating Moore’s law. © Wgsimon. Licensed under CC BY-SA 3.0 Unported.

Much of this additional complexity in circuitry comes in the form of specialized hardware logic, whose main goal is to exploit common patterns in data and code in order to maximize a specific performance metric, like execution time or power saving. Mechanisms like data and instruction caches, prefetch units, processor pipelines, and branch predictors are all examples of such hardware. In fact, multiple levels of data and instruction caches are so important for the performance of a system that they are usually advertised prominently when a new processor hits the market.

While all these mechanisms are tailored to provide good performance for common code and data patterns, there are always cases where an oblivious programmer can end up hitting a corner case of such mechanisms, writing code which is not only unable to benefit from them, but which executes far worse than if there were no optimization mechanism at all.
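
As a quick taste of such a corner case, consider the classic loop-interchange example below (a standard illustration of data-cache behavior, not necessarily the example presented in the talk). Both passes compute the same sum over a matrix too large (~64MB) to fit in any CPU cache, yet the column-major walk typically runs several times slower:

```c
#include <stdio.h>
#include <time.h>

#define DIM 4096

static int m[DIM][DIM];                /* ~64MB: far larger than the caches */

int main(void)
{
    long sum = 0;

    /* Fill the matrix so the compiler cannot constant-fold the sums. */
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            m[i][j] = i ^ j;

    /* Row-major walk: consecutive accesses touch consecutive memory, so
     * every cache line fetched is fully used before moving on. */
    clock_t t0 = clock();
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            sum += m[i][j];
    clock_t t1 = clock();

    /* Column-major walk: consecutive accesses are DIM * sizeof(int) bytes
     * apart, so nearly every load misses the data cache. */
    for (int j = 0; j < DIM; j++)
        for (int i = 0; i < DIM; i++)
            sum += m[i][j];
    clock_t t2 = clock();

    printf("row-major: %.2fs  column-major: %.2fs  (sum %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    return 0;
}
```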

As a general rule, compilers are increasingly great at detecting and modifying code to benefit from the CPU architecture, but there will always be cases where they won’t be able to detect bad patterns and modify the code. In those cases, there is no replacement for a capable programmer who understands how the machine is designed, and who can adjust the algorithm to benefit from its design.

When does performance really matter?

The first reaction of an inexperienced developer, after learning about some of the architectural issues that affect performance, might be to start profiling everything they can get their hands on, hunting for the absolute maximum capability of their expensive new hardware. This approach is not only misleading, but an actual waste of time.

In a city that experiences traffic jams every day, there is little point in buying a faster car instead of taking the public bus. In both scenarios, you are going to be stuck in traffic for hours instead of arriving at your destination earlier. The same happens with your programs. Consider an interactive program that performs a task in the background while waiting for user input: there is little point in trying to gain a few cycles by optimizing that task, since the entire system is still limited by the human input, which will always be much, much slower than the machine. In a similar sense, there is little point in trying to speed up the boot time of a machine that almost never reboots, since that cost will be paid only rarely, when a restart is required.

In a very similar sense, the speed-up you gain by recompiling every single program on your computer with the most aggressive compiler optimizations for your machine, as some people like to do, is completely irrelevant, considering that the machine will spend most of its time in an idle state, waiting for the next user input.

What actually makes a difference, and should be the target of every optimization effort, are cases where the workload is so intensive that gaining a few extra cycles very often will result in a real increase in the computing done in the long run. This requires, first of all, that the code being optimized is actually on the critical path of performance, meaning that this part of the code is what is holding the rest of the system back. If that is not the case, the gain will be minimal and the effort will be wasted.

Moving back to the reboot example: in a virtualization environment, where new VMs or containers need to be spawned very quickly and very often to respond to new service requests, it makes a lot of sense to optimize boot time. In that case, every microsecond saved at boot time helps reduce the overall response time of the system.

The corollary of Amdahl’s law states just that. It argues that there is little sense in aggressively optimizing a part of the program that executes only a few times, very quickly, instead of optimizing the part that occupies the largest share of the execution time. In other (famous) words, a 10% speed-up in code that executes 90% of the time is much better for overall performance than a 90% speed-up in code that executes only 10% of the time.

Figure 2: Amdahl’s law.
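
To make the comparison concrete (reading those percentages as speedup factors), Amdahl’s law gives the overall speedup when a fraction p of the execution time is accelerated by a factor s:

\[
S_{\text{overall}} = \frac{1}{(1 - p) + p/s}
\]

A 1.1× speedup of the code that runs 90% of the time yields \(1/(0.1 + 0.9/1.1) \approx 1.09\), while a 1.9× speedup of the code that runs only 10% of the time yields just \(1/(0.9 + 0.1/1.9) \approx 1.05\).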

Continue reading on Collabora’s blog.

Learn more from Gabriel Krisman Bertazi at Open Source Summit Europe, as he presents “Code Detective: How to Investigate Linux Performance Issues” on Monday, October 23.