
Bedrock Linux Gathers Disparate Distros under One Umbrella

Want the power of Gentoo, the packages of Arch, and the display manager of Ubuntu in one distribution? An experimental distro could make that possible, though not necessarily easy. Bedrock Linux, an experimental distribution under heavy development, makes it possible to use software from other, mutually incompatible Linux distributions, all under one roof.

Bedrock Linux does this without using virtual machines or containers. Instead, it uses a virtual file system arrangement, allowing each distribution’s software to be installed in parallel and executed alongside one another.

Read more at InfoWorld

How To Succeed at Failure with Microservices

A deep exploration of failure in microservices can help enterprises succeed. After all, failure is feedback. Failure is a temporary state. Leadership author John C. Maxwell, in his book “Failing Forward,” encourages a new definition of failure, seeing it as the price we pay to achieve success.

Speaking at the recent CA Technologies API360 Summit in New York, Ronnie Mitra, co-author of “Microservice Architecture: Aligning Principles, Practices, and Culture,” said missteps in reorienting toward a microservices architecture are to be expected; after all, when you make services small, the system around them becomes more complex.

Mitra says that the essence of microservices is speed and safety at scale and in harmony. He points to three areas that become increasingly complex at scale:

  • when demand increases (there are now lots of users of your app),
  • with distance (code for the app is geographically dispersed across cloud infrastructure) and,
  • amongst organizations (as the business grows, what worked for a 10-person company may no longer work when there are thousands of staff).

Read more at The New Stack

Containerized Security: The Next Evolution of Virtualization?

We in the security industry have gotten into a bad habit of focusing the majority of our attention and marketing dollars on raising awareness of the latest emerging threats and new technologies being developed to detect them. One just has to look at the headlines or spend fifteen minutes walking the show floor at a major security conference to see this trend. However, while we are focusing on what all the bad guys are doing, we’ve taken our eye off the ball of where our infrastructure business is going.

Don’t get me wrong, detecting new targeted attacks is an important priority for security pros, but it’s equally vital to look for technology advancements in the way we do business and what our counterparts are doing in infrastructure, data centers and/or clouds. 

Read more at SecurityWeek

App Dev Silos are DevOps Killers: Start by Tearing Them Down

The road to DevOps can be rocky. Larger enterprises often cite cultural barriers such as the “developers vs. operations” mentality as the biggest obstacles to achieving DevOps, and much has been written about how to break down those barriers.

But while these organizations struggle with the relationship between developers and ops staff, they often neglect the equally tricky relationships among different parts of the development organization itself.

Read more at TechBeacon


Upskill U on Open Source With OpenDaylight

Upskill U kicks off a new series on Open Source this week, starting tomorrow with “Telcos & Open Source 101,” led by Phil Robb, senior technical director at OpenDaylight. Open source plays a pivotal role in driving the migration to virtualization, but in order to fully reap the benefits of increased efficiency and reduced costs, operators must re-think the R&D process and build their own internal open source competencies.

In this course, Robb will examine how service providers can transition from traditional standards processes to using open source software, and compete at higher levels of the software stack.

Read more at LightReading

Analysing Docker Projects on GitHub with BigQuery

Maybe you know that the GitHub public archive can be analyzed with Google BigQuery. That’s 3 TB of data! This has helped people run analyses of language usage and framework popularity.

I wanted to produce similar results for projects using Docker. But what is a project using Docker? For this article, I will consider a Docker project to be any project that has at least one Dockerfile.

So, why don’t we start by counting the number of Dockerfile files?
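As a rough sketch of that first count, one could run a query against the public github_repos dataset through the bq command-line tool; the dataset, table, and column names below are my assumptions, not taken from the article:

  • bq query --use_legacy_sql=false 'SELECT COUNT(*) AS dockerfiles FROM `bigquery-public-data.github_repos.files` WHERE path = "Dockerfile" OR path LIKE "%/Dockerfile"'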

Read more at Java Bien!

Howdy, Ubuntu on Windows! How Fast Is It?

My primary laptop is a Lenovo x250, with an Intel i7-5600U CPU, 16GB of RAM, a 512GB Transcend SSD, and a 2TB Samsung SSD.  Let’s run some benchmarks on the CPU, Memory, Disk, and Network.

We’ll first run each test in Ubuntu running natively on the hardware, and then reboot, and run the same benchmarks on the same machine running Ubuntu on Windows.

We’ll use the utilities sysbench, dd and iperf, as well as compile the Linux kernel to do our benchmarking.  If you want to reproduce these tests, you may need to:

  • sudo apt install sysbench iperf

CPU

To execute our CPU benchmark, we ran the following:

  • sysbench --num-threads=4 --test=cpu run

And, we see almost identical results!  Basically 2.8 seconds to run 10,000 prime-number calculations (the workload of sysbench’s CPU test), in both cases (Ubuntu on Windows was ever so slightly faster, in fact, in these runs).  For CPU-bound workloads, Ubuntu on Windows should perform just as well as Ubuntu running natively on hardware:
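Those figures are just sysbench’s defaults left implicit by the command above; a sketch spelling them out, assuming the legacy sysbench option names used in this post:

  • sysbench --num-threads=4 --test=cpu --cpu-max-prime=10000 --max-requests=10000 run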

[Screenshot: CPU benchmark results]

Memory

To execute our Memory benchmark, we ran the following:

  • sysbench --num-threads=4 --test=memory run

Here, we’re moving 100G of data through memory.  Native Ubuntu was able to move data through memory at 4,253 MB/s, while Ubuntu on Windows managed 2,309 MB/s.  This exposes a bit of a difference in the IO performance of the two systems.  When dealing with heavy IO, Ubuntu on Windows does involve a bit more overhead.
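The 100G figure is also just sysbench’s default transfer size for the memory test; a sketch making it explicit, again assuming the legacy option names:

  • sysbench --num-threads=4 --test=memory --memory-total-size=100G run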

[Screenshot: memory benchmark results]

Disk

For our disk performance test, we ran the following command:

  • dd if=/dev/zero of=testfile bs=1G count=1 oflag=dsync

Basically, we’re writing a 1GB file of zeros, synchronously, to disk.  Interestingly, the native Ubuntu test yields about 147 MB/s average write speed to disk, while the Ubuntu on Windows environment averages 248 MB/s!  How is that possible?  Well, it’s a bit of trickery on Windows’ part.  The flag that we’re passing to the dd command, oflag=dsync, is supposed to guarantee synchronous writes to disk, ensuring that every single byte is in fact written to disk and not cached in a buffer in memory.  While we used that same flag on the Windows test, it seems that Windows doesn’t yet know what to do with it.  It seems pretty clear that the Ubuntu on Windows writes are not entirely synchronous to disk.
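As a quick cross-check (a hypothetical addition, not part of the original runs), one could compare the dsync figure above against a fully buffered write and a write flushed only once at the end:

  • dd if=/dev/zero of=testfile bs=1G count=1
  • dd if=/dev/zero of=testfile bs=1G count=1 conv=fdatasync

If the dsync number lands close to the buffered number, the writes almost certainly aren’t hitting the disk synchronously.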

[Screenshot: disk benchmark results]

Network

Next, we’ll test network throughput.  Specifically, we’re testing TCP bandwidth, using the iperf utility.  I’m running an iperf server on an Ubuntu machine hardwired to a Gigabit network:

  • iperf -s

And we’re going to connect the iperf client from the native Ubuntu machine, and then from the Ubuntu on Windows environment:

  • iperf -c <server-address>

The native Ubuntu machine averaged 935 Mbps, while Ubuntu on Windows averaged 805 Mbps of bandwidth.

[Screenshot: network benchmark results]

Linux Kernel Compilation

Finally, let’s take a tried and true performance benchmark that every Linux developer is familiar with: let’s build the kernel!  The beauty of this sort of test is that it includes lots of CPU number crunching (compilation) as well as tons of disk reads (loading libraries and source files) and disk writes (binary output).  It’s much closer to “real world” use cases than some of the academic benchmarks we’ve run above.

To reproduce these tests, you’ll need to:
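(A minimal sketch; the package list and the kernel tarball are my assumptions rather than the post’s exact commands, and any recent source tree from kernel.org will do for a relative comparison.)

  • sudo apt install build-essential libssl-dev bc
  • tar xf linux-*.tar.xz && cd linux-*/
  • make defconfig
  • time make -j 4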

Note that we’re using the default configuration for the build (defconfig), and we’re telling make to run four parallel jobs (-j 4), since my Intel i7 is dual-core, hyperthreaded.

The native Ubuntu build took 5 minutes and 38 seconds, while the Ubuntu on Windows build took 8 minutes 47 seconds.  I discussed this with the Windows developers and they suggested re-running the tests with Windows Defender disabled (just for testing — don’t do this long term, as Windows Defender detects malware).  I wasn’t able to re-run the test in time to publish this post, but perhaps some of our motivated readers will reproduce the tests here and post their results!

[Screenshot: kernel compilation benchmark results]

Conclusion

The Windows Subsystem for Linux is nothing short of amazing.  Tremendous credit goes to the entire Microsoft team working on this technology.  When coupled with Ubuntu, WSL really shines, bringing tens of thousands of open source utilities directly to the Windows desktop command shell.

From a performance perspective, CPU- and network-bound processes will perform nearly identically in Ubuntu on Windows and in native Ubuntu on bare metal.  For heavily cached disk IO operations, Ubuntu on Windows might even outperform native Ubuntu on bare metal.  But for heavily randomized reads and writes, and memory-heavy operations, Ubuntu on Windows does introduce a bit of overhead that might be noticeable in some developer workloads.

In any case, Ubuntu on Windows is a fantastic bridge between two worlds.  Two worlds that will learn a lot from one another.  I’m proud to live in a new world where those two worlds are one.

Cheers,

Dustin


Huawei Launches Labs to Drive Open Cloud Networks

The Cloud Open Labs are part of the vendor’s All Cloud strategy to make it easier for telco operators to migrate their infrastructures to the cloud.

Huawei is unveiling an interconnected group of laboratories that are designed to help network operators more quickly and easily embrace and deploy cloud computing solutions in their environments.

The Chinese tech giant, whose reach extends from the data center through the network and out to the edge—it’s the world’s third-largest smartphone maker behind Apple and Samsung—this week launched its Cloud Open Labs, comprising four linked facilities that touch on such emerging networking trends as software-defined networking (SDN) and network-functions virtualization (NFV).

Read more at eWeek


Data Center Architecture Lessons From Isaac Newton

The physicist’s third law provides insight into how emerging technologies like containers impact the core network.

Sir Isaac Newton remains our favorite source for axiomatic laws of physics, despite giving us the language of calculus. Particularly relevant for today’s discussion is Newton’s third law as formally stated: “For every action, there is an equal and opposite reaction.”

In the cosmology of the data center, this existentially proves itself in the network whenever there are significant changes in application infrastructure and architectures. As evidence, consider the reaction to first, virtualization, and now, containerization, APIs, and microservice architectures.

Read more at Network Computing

Safety First: The Best Use of the Public Cloud for Analytics Apps and Data

If concerns about data breaches have kept your organization from using the public cloud, read about use cases in which these worries should be a thing of the past.

A survey of European IT executives in 2014 revealed that 72% of businesses didn’t trust cloud vendors to obey data protection laws and regulations, and that 53% of respondents said the likelihood of a data breach increases due to the cloud.

In October 2015, Rob Enderle, president and principal analyst of the Enderle Group and previously Senior Research Fellow for Forrester Research and the Giga Information Group, wrote in a CIO.com post, “Simply stated, you can’t trust the employees of cloud service providers. Frankly, I don’t think we can really trust our own employees anymore either, but at least our capability to monitor them is far greater.”

Read more at TechRepublic