How to Install LXD Container Hypervisor on Ubuntu 16.04 LTS Server

LXD is LXC on steroids, with strong security in mind. LXD is not a rewrite of LXC; under the hood, it uses LXC through liblxc and its Go binding. The LXD container “hypervisor” runs unmodified Debian, Ubuntu, CentOS, Arch, and other Linux operating systems (“distros”) at incredible speed. In this tutorial, you will learn how to set up LXD on an Ubuntu Linux server and create your first container.

Install LXD

Type the following command:
$ sudo apt install lxd

OR
$ sudo apt-get install lxd
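
The full tutorial then walks through creating your first container. As a hedged sketch of the typical next steps on Ubuntu 16.04 (the container name “my-first-container” is just an illustrative placeholder), you would initialize LXD and then launch a container:

$ sudo lxd init
$ lxc launch ubuntu:16.04 my-first-container
$ lxc list

The lxd init step asks a few storage and networking questions, and lxc launch downloads the image on first use before starting the container.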

Read more at Nix Craft

How the Blockchain Will Radically Transform the Economy

Say hello to the decentralized economy — the blockchain is about to change everything. In this lucid explainer of the complex (and confusing) technology, Bettina Warburg describes how the blockchain will eliminate the need for centralized institutions like banks or governments to facilitate trade, evolving age-old models of commerce and finance into something far more interesting: a distributed, transparent, autonomous system for exchanging value.

Watch the video at TED.com

SUSE Advances its Open-Source Storage System

Besides announcing the next version of its Ceph-powered SUSE Enterprise Storage, SUSE has acquired openATTIC, the open-source Ceph and storage management framework.

SES 4, a software-defined storage system, is powered by Ceph, a distributed object store and file system. Ceph, in turn, relies on a resilient and scalable storage model (RADOS) using clusters of commodity hardware. Along with the RADOS block device (RBD) and the RADOS object gateway (RGW), Ceph provides a POSIX file-system interface: CephFS. RBD and RGW have long been in use for production workloads, but CephFS has been harder to use in the real world.
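
For a flavor of how these components surface at the command line, here is a hedged sketch using the stock Ceph client tools against an already configured cluster (the pool and image names are hypothetical):

$ ceph -s
$ rbd create mypool/myimage --size 1024
$ sudo rbd map mypool/myimage

The first command summarizes cluster health via RADOS; the second carves a 1GB RBD image out of a pool; and the third exposes that image to the local host as a block device.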

Read more at ZDNet

The Linux Foundation Issues Free E-Book on Open Source License Compliance Best Practices

The Linux Foundation today released a free e-book, Open Source Compliance in the Enterprise, that serves as a practical guide for organizations on how best to use open source code and participate in open source communities while complying with the spirit and the letter of open source licensing.

Written by Ibrahim Haddad, Ph.D., vice president of R&D and head of the open source group at Samsung Research America, the new e-book aims to improve understanding of issues related to the licensing, development, and reuse of open source software. Haddad oversees Samsung’s open source strategy and execution as well as internal and external collaborative R&D projects, and he is a former manager at The Linux Foundation.

The book’s nine chapters take readers through the entire process of open source compliance, including an introduction to the topic, a description of how to establish an open source management program at their organization, and an overview of relevant roles. Examples of best practices and compliance checklists are provided to help those responsible for compliance activities create their own processes and policies.

“We frequently hear from organizations contributing to or simply using open source software about the desire to comply, but uncertainty about how best to do so,” said Mike Dolan, VP of strategic programs at The Linux Foundation. “Although it is sometimes viewed as a challenge, with better education on the topic, compliance can be easier for all involved in open source. This ebook, along with other efforts such as our free Compliance Basics for Developers training course, is one way we are working to help close the knowledge gap and make compliance easier for everyone.”

Companies Benefit from Open Source Compliance

As combining and building upon open source software components has become the de facto way for companies to create new products and services, organizations want to know how best to participate in open source communities and how to do so in a legal and responsible way.  

Under this “multi-source development model,” software components can consist of source code originating from any number of different sources and be licensed under different licenses. As a result, the risks that companies previously managed through company-to-company license and agreement negotiations are now managed through robust compliance programs and careful engineering practices.

Open source initiatives and projects provide companies and other organizations with a vehicle to accelerate innovation through collaboration. But important responsibilities come with the benefits of teaming with the open source community: Companies must ensure compliance with the obligations that accompany open source licenses.

“Open source compliance is the process by which users, integrators, and developers of open source observe copyright notices and satisfy license obligations for their open source software components,” according to the book.

It lists several advantages for companies that achieve open source compliance, including:

  • A technical advantage, because compliant software portfolios are easier to service, test, upgrade, and maintain

  • A demonstrable, ongoing pattern of acting in good faith in the event of a compliance challenge

  • Better preparation for a possible acquisition, sale, or new product or service release

  • Verifiable compliance when dealing with OEMs and downstream vendors

To learn more about the benefits of open source compliance and how to achieve it, download the free e-book today!

Microsoft Steps Up Its Commitment to Open Source

Today The Linux Foundation is announcing that we’ve welcomed Microsoft as a Platinum member. I’m honored to join Scott Guthrie, executive VP of Microsoft’s Cloud and Enterprise Group, at the Connect(); developer event in New York and expect to be able to talk more in the coming months about how we’ll intensify our work together for the benefit of the open source community at large.

Microsoft is already a substantial participant in many open source projects and has been involved in open source communities through partnerships and technology contributions for several years. Around 2011 and 2012, the company contributed a large body of device driver code to enable Linux to run as an enlightened guest on Hyper-V. Microsoft has an engineering team dedicated to Linux kernel work, and since that initial contribution, the team has contributed improvements and new features to the driver code for Hyper-V on a consistent basis.

Over the past two years in particular, we’ve seen that engineering team grow and expand the range of Linux kernel areas it’s working on to include kernel improvements that aren’t specifically related to Microsoft products. The company is also an active member of many Linux Foundation projects, including the Node.js Foundation, the R Consortium, OpenDaylight, the Open API Initiative, and the Open Container Initiative. In addition, a year ago we worked with Microsoft to release a Linux certification, Microsoft Certified Solutions Associate Linux on Azure.

The open source community has gained tools and other resources as Microsoft has open sourced .NET Core, contributed to OpenJDK, announced Docker support in Windows Server, announced SQL Server on Linux, added the ability to run native Bash on Ubuntu on Windows, worked with FreeBSD to release an image for Azure, and open sourced Xamarin’s software development kit and PowerShell. The company supports Red Hat, SUSE, Debian, and Ubuntu on Azure. Notably, Microsoft is a top open source contributor on GitHub.

The Linux Foundation isn’t the only open source foundation Microsoft has committed to in 2016: in March, the company joined the Eclipse Foundation. A Microsoft employee has served as the Apache Software Foundation’s president for three years.

Linux Foundation membership underscores what Microsoft has demonstrated time and again: the company is evolving and maturing with the technology industry. Open source has become a dominant force in software development, the de facto way to develop infrastructure software, as individuals and companies have realized that they can solve their own technology challenges and help others at the same time.

Membership is an important step for Microsoft, but it’s perhaps bigger news for the open source community, which will benefit from the company’s sustained contributions. I look forward to updating you over time on progress resulting from this relationship.

Monitoring Network Load With nload: Part 1

On a continually changing network, it is often difficult to spot issues because of the amount of noise generated by expected network traffic. Even when communications are seemingly quiet, a packet sniffer will display screeds of noisy data. That data might be otherwise unseen broadcast traffic being sent to all hosts willing to listen and respond on a local network.

Make no mistake, noise on a network link can cause all sorts of headaches, because it can be impossible to identify trends quickly, especially if a host or the network itself is under attack. Packet sniffers will clearly display more traffic for the busiest connections, which ultimately obscures the activities of less busy hosts.

You may have come across the excellent, nearly real-time network monitoring tool iftop in the past. It uses ncurses to display a variety of highly useful bar graphs in a console and even accommodates regular expressions. An alternative to iftop is a powerful console-based tool called nload. Such network monitoring tools can really save the day if you need to analyze traffic on your networks in a hurry.

Background

In the past, when I’ve been tasked with maintaining critical hosts, I’ve left the likes of iftop and nload running in a console throughout my working day. Spotting real-time spikes is essential if you’re struggling for bandwidth capacity or if you suspect that a specific host might be attacked, given historical attempts.

Thankfully, with nearly real-time graphical interfaces — even displayed over a standard console — there’s little eyestrain involved either. During times of heightened stress, such as when a host was being attacked, I’ve used these tools in one window alongside other console windows. That way, I can simultaneously show the continually changing output of network-sniffing tools in both text and graphical form. I find that running different filters through each tool, and changing them periodically as my field of focus evolves, makes it much easier to dig down into the data that is of most interest. Ultimately, I end up with a clear picture of who is using the network and, most importantly, for what purpose.

Installation

The nload package can be found in a number of software repositories. On Debian derivatives, you can use this command to trigger your package manager’s installation process:

# apt-get install nload

On Red Hat derivatives, use this command:

# yum install nload

In the same way that iftop uses the “ncurses” package to output “graphics” to your console without the need for a GUI, the flexible nload switched to using ncurses, too, back in 2001. Your package manager should take care of any dependencies in this regard, so there’s no extra package installation work involved.

Look and Feel

Once you have a working installation, all you need to do to run the package is use this command:

# nload

The result of such a command is the simple but useful output shown in Figure 1.

Figure 1: The load on the “eth0” interface, with ingress and egress displays.

The ability to split a single console window into two parts, one half for ingress (inbound) traffic and the other for egress (outbound) traffic, is clearly of significant use. The clarity you immediately gain is invaluable, especially if you’re trying to diagnose an attack of some description. Also shown in Figure 1, at the top, are the adjustable options that alter your display on the fly, without stopping nload in its tracks.

If we wanted to focus on only one specific network interface, we could run nload as follows:

# nload eth1

You can, however, add more than one network interface to the same console window for a useful comparison. In this case, we would use the following command:

# nload -m eth0 eth1

As you can see in Figure 2, this can give useful insight into which of our network interfaces are under the highest load, without squinting at the output of a packet-sniffing tool.

Figure 2: More than one network interface displayed at once.

Now that we’ve got an idea of how malleable nload is, let’s look at some of the configuration options available.

Refresh Frequency

Years ago, I remember debating the effectiveness of making your traffic statistics update faster than the default setting. Tools that use the Simple Network Management Protocol (SNMP), such as RRDtool and MRTG (and indeed tools that use the “pcap” library, such as iftop), use averages to populate the display you are presented with. In short, a quicker refresh frequency may lower the accuracy of the output from such tools. If you are interested in the intricacies of such a statement, I encourage you to read more about such matters online.

I mention this for good reason. Before we continue, one caveat is that the “screen refresh” frequency is a different animal altogether from the “averages” used during the collection of statistics. When it comes to nload, however, the two are separated for clarity (unlike in some other applications). The clever author keeps things simple, which is very welcome.

From what I can tell from the manual, the “-a” option changes the period used for measuring “averages,” which (I am guessing) affects the calculations behind the scenes. The refresh frequency, by contrast, governs the “screen display,” in terms of tweaking it to suit your viewing needs, as we’ve just mentioned. Between them, the two settings ultimately determine how most of the real-time statistics are collected and then displayed. To avoid confusion, note that nload refers to the screen refresh period as the “refresh interval.”

The manual goes to some length to explain that lowering the refresh interval to less than the default 500ms is not that wise. It states: “Specifying refresh intervals shorter than about 100 milliseconds makes traffic calculation very unprecise.”

This confirms my experience with other traffic collection software, too. The sophisticated nload goes on to reassure us with this additional comment: “nload tries to balance this out by doing extra time measurements, but this may not always succeed.”

I couldn’t confirm its exact effect on other settings, but within the Options Window (the box seen at the top of Figure 1), there is a “Smoothness of average” setting, which you may have some success in changing to affect your accuracy. Fear not: following some trial and error, you shouldn’t have too many problems now that you’re armed with the terminology and the difference between the two key adjustable parameters.
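
Putting that terminology into practice, here is a hedged example based on my reading of the manual: “-t” sets the refresh interval in milliseconds, while “-a” sets the averaging window in seconds. The following keeps the default 500ms screen refresh but shortens the averaging window from the default 300 seconds to 60:

# nload -t 500 -a 60 eth0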

Battle Hardening

When I’m learning a new tool, I tend to run a few tests between machines and monitor how the tool reacts at both ends of a connection, to avoid a few headaches later. Usually, I will send a small amount of data, stop and start the connection, and then try to saturate the network link (or at least put it under significant stress), repeating this a few times.

Combining that new-found knowledge of how a tool reacts to differing scenarios with a refresh frequency (and, potentially, an averaging period for traffic calculations) tuned to suit your needs is, in my experience, the best way to battle-harden a tool. One such test is sketched below.
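
One hedged way of running such a saturation test, assuming iperf3 is installed on both machines (the hostname below is illustrative): leave nload running on the monitored host in one console, start an iperf3 server there in another, and then drive traffic at it from the remote client to see how the graphs respond.

# nload eth0
$ iperf3 -s
$ iperf3 -c monitored-host -t 30

The final command pushes traffic at the link for 30 seconds, which should be ample time to watch the ingress display react.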

In the next article, I’ll look at some specific examples of launching nload with various options.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book, Linux Server Security: Hack and Defend, teaches you how to launch sophisticated attacks, make your servers invisible, and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Jennifer Cloer on Working with Linus Torvalds, Open Source and Women in Tech

“I’ve had the opportunity to be at the forefront of some of the most disruptive technologies in the history of computing, it’s an amazing and unique experience,” said Jennifer Cloer, former Director of Communications at the Linux Foundation.

I have known Jennifer Cloer from the very early days, even before the Linux Foundation was formed. She is among the most influential women in the tech world, especially in the open source world. I have been planning a series of interviews with the women who made it onto CIO’s list of the most influential women in tech. When I approached Jennifer, I learned about a development in her career that made this story even more interesting: Cloer is moving on from the Linux Foundation and venturing out into a new world of her own.

Read more at CIO.com

Linux Dominates November TOP500 Supercomputer List

The latest semi-annual list of the world’s top 500 supercomputers was released on November 14, showing little change at the top of the list. The China-based Sunway TaihuLight supercomputer, which first claimed the title of the world’s fastest system in June 2016, still reigns supreme.

Sunway TaihuLight was built by the National Research Center of Parallel Computer Engineering and Technology (NRCPC), and it is installed at the National Supercomputing Center in Wuxi, China.

Read more at ServerWatch

Examining ValueObjects

Industry leader Martin Fowler explains the “value” of using ValueObjects, particularly small ones.

When programming, I often find it’s useful to represent things as a compound. A 2D coordinate consists of an x value and y value. An amount of money consists of a number and a currency. A date range consists of start and end dates, which themselves can be compounds of year, month, and day.

As I do this, I run into the question of whether two compound objects are the same. If I have two point objects that both represent the Cartesian coordinates of (2,3), it makes sense to treat them as equal. Objects that are equal due to the value of their properties, in this case their x and y coordinates, are called value objects.

But unless I’m careful when programming, I may not get that behavior in my programs.
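
To make that concrete, here is an illustrative Java sketch (mine, not taken from Fowler’s article): with the default Object.equals(), two points both holding (2,3) compare as unequal because Java checks object identity, so a value object must define equality over its properties itself.

import java.util.Objects;

// A minimal Point value object: equality is based on the
// values of x and y, not on object identity.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Equal objects must share a hash code, or collections
    // such as HashSet and HashMap will misbehave.
    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }
}

With these overrides, new Point(2, 3).equals(new Point(2, 3)) returns true; remove them and the same comparison silently returns false.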

Read more at DZone

How to Avoid Burnout Managing an Open Source Project

Regardless of where you work in the stack, if you work with open source software, there’s likely been a time when you faced burnout or other unhealthy side effects related to your work on open source projects. A few of the talks at OSCON Europe addressed this darker side of working in open source head-on.

Katrina Owen is a developer advocate on the open source team at GitHub, but she is also the creator of Exercism.io, which was the focus of her talk, “The bait and switch of open source.”

Read more at The New Stack