
Google’s Quantum Computer Inches Nearer After Landmark Performance Breakthrough

Google has come up with a new quantum computing technique that could remove key limits on scalability.

Google engineers have found a way to make the company's quantum computers more scalable and capable of solving problems in multiple fields.

According to Nature, Google has created a device that blends analog and digital approaches to deliver enough quantum bits, or qubits, to create a scalable, multi-purpose quantum computer, capable of solving chemistry and physics problems by, for example, simulating molecules at the quantum level.

Read more at ZDNet

Specification Released for NVM Express over Fabrics

Today NVM Express, Inc. announced the release of its NVM Express over Fabrics specification for accessing storage devices and systems over Ethernet, Fibre Channel, InfiniBand, and other network fabrics. NVM Express, Inc. has also recently published Version 1.0 of the NVM Express Management Interface specification.

“Storage technologies are quickly innovating to reduce latency, providing a significant performance improvement for today’s cutting-edge applications. NVM Express (NVMe) is a significant step forward in high-performance, low-latency storage I/O and reduction of I/O stack overheads. NVMe over Fabrics is an essential technology to extend NVMe storage connectivity such that NVMe-enabled hosts can access NVMe-enabled storage anywhere in the datacenter, ensuring that the performance of today’s and tomorrow’s solid state storage technologies is fully unlocked, and that the network itself is not a bottleneck.”

Read more at insideHPC.

How to Record Your Terminal Session on Linux

Recording a terminal session can help someone learn a process, share information in an understandable way, or present a series of commands clearly. Whatever the purpose, there are many times when copy-pasting text from the terminal won't be very helpful, while capturing a video of the process is often impractical. In this quick guide, we will take a look at the easiest way to record and share a terminal session in .gif format.
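The guide's .gif tool aside, a baseline worth knowing is the util-linux `script` command, preinstalled on most distributions, which captures a plain-text transcript of everything shown in the terminal. A minimal sketch (file names are illustrative):

```shell
# Interactively you would run `script typescript.log`, do your work, and
# press Ctrl-D to stop recording. The -c flag records a single command
# non-interactively instead:
script -c 'echo "session recorded"' typescript.log

# The transcript holds everything that appeared on the terminal:
grep "session recorded" typescript.log

# Timing data for later playback can be captured with
#   script -t 2>timing.log typescript.log
# and replayed at original speed with
#   scriptreplay timing.log typescript.log
```

The transcript is plain text, so it can be shared anywhere a .gif cannot.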

 

How to Install, Secure, and Performance-Tune a MariaDB Database Server

A database server is a critical component of the network infrastructure necessary for today's applications. Without the ability to store, retrieve, update, and delete data (when needed), the usefulness and scope of web and…

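Although the full walkthrough lives on the linked site, the standard first steps on a Debian/Ubuntu system (package names assumed; adjust for your distribution) look roughly like this:

```shell
# Install the MariaDB server package:
sudo apt-get update
sudo apt-get install -y mariadb-server

# Interactive hardening: sets the root password, removes anonymous users,
# disables remote root login, and drops the test database:
sudo mysql_secure_installation
```

Performance tuning beyond this point typically means editing the server configuration under /etc/mysql/ and restarting the service.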

How To Install And Use VBoxManage On Ubuntu 16.04 And Its Command-Line Options

VirtualBox comes with a suite of command-line utilities, and you can use the VirtualBox command-line interfaces (CLIs) to manage VMs on a remote headless server. In this tutorial, we will show you how to create and start a VM without the VirtualBox GUI using VBoxManage. VBoxManage is the command-line interface to VirtualBox that you can use to control VirtualBox completely from the command line of your host operating system. VBoxManage supports all the features that the graphical user interface gives you access to, but it supports a lot more than that. It exposes virtually all the features of the virtualization engine, even those that cannot (yet) be accessed from the GUI. You will need to use the command line if you want to use a different user interface than the main GUI or control some of the more advanced and experimental configuration settings for a VM.
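A minimal headless-provisioning sketch, assuming VirtualBox is installed and VBoxManage is on the PATH; the VM name, OS type, and sizes below are illustrative:

```shell
# Create and register a VM (run `VBoxManage list ostypes` for valid types):
VBoxManage createvm --name "testvm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "testvm" --memory 1024 --nic1 nat

# Give it a 10 GB virtual disk on a SATA controller:
VBoxManage createhd --filename testvm.vdi --size 10240
VBoxManage storagectl "testvm" --name "SATA" --add sata
VBoxManage storageattach "testvm" --storagectl "SATA" \
  --port 0 --device 0 --type hdd --medium testvm.vdi

# Boot it with no GUI:
VBoxManage startvm "testvm" --type headless
```

From there, attaching an installer ISO and connecting over VRDP or SSH are the usual next steps.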

Read more at Linuxpitstop.com

How To Install Btrfs Tools And Manage BTRFS Operations

Btrfs is a new copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair, and easy administration. Jointly developed at multiple companies, Btrfs is licensed under the GPL and open for contribution from anyone. Given its rapid ongoing development, Btrfs is currently considered experimental. But according to the wiki maintained by the Btrfs community, many of the current developers and testers of Btrfs run it as their primary file system with very few “unrecoverable” problems. Thus, Linux distributions tend to ship Btrfs as an option but not as the default. Btrfs is not a drop-in successor to the default Ext4 file system used in most Linux distributions, but it can be expected to replace Ext4 in the future.

Read more at Linuxpitstop.com
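A minimal sketch of the operations the linked tutorial covers, assuming a spare block device (here /dev/sdb1, which is illustrative; formatting destroys its contents) and root privileges:

```shell
# Install the user-space tools (Debian/Ubuntu package name assumed):
sudo apt-get install -y btrfs-tools

# Format the spare device and mount it:
sudo mkfs.btrfs /dev/sdb1
sudo mount /dev/sdb1 /mnt

# Typical Btrfs operations: subvolumes, snapshots, and inspection.
sudo btrfs subvolume create /mnt/data
sudo btrfs subvolume snapshot /mnt/data /mnt/data-snap
sudo btrfs filesystem show
```

Snapshots are cheap because of copy-on-write: only blocks changed after the snapshot consume new space.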

Linux Leader Bdale Garbee Touts Potential of HPE’s Newest Open Source Project

The technology landscape is changing very fast. We now carry devices in our pockets that are more powerful than the PCs we had some 20 years ago. This means we are now continuously churning out huge amounts of data that travel between machines: data centers (aka the cloud) and our mobile devices. The cloud/data center as we know it today is destined to change, too, and this evolution is changing market dynamics.

As Peter Levine, general partner at Andreessen Horowitz, said during the Open Networking Summit (ONS), cloud computing is not going to be here forever. It will change, it will morph, and he believes the industry will go from centralized to distributed and then back to centralized. He said that the cloud computing model would disaggregate in the not-too-distant future back to a world of distributed computing.

That metamorphosis means we will need a new class of computing devices, and we may need a new approach toward computers. That is something Hewlett Packard Enterprise (HPE) has been working on. Back in 2014, HPE introduced the concept of The Machine, which took a unique approach by using memristors.

HPE’s page for The Machine explained, “The Machine puts the data first. Instead of processors, we put memory at the core of what we call ‘Memory-Driven Computing.’ Memory-Driven Computing collapses the memory and storage into one vast pool of memory called universal memory. To connect the memory and processing power, we’re using advanced photonic fabric. Using light instead of electricity is key to rapidly accessing any part of the massive memory pool while using much less energy.”

Open Source at the Core

Yesterday, HPE announced that it’s bringing The Machine to the open source world. HPE is inviting the open source community to collaborate on HPE’s largest and most notable research project, which is focused on reinventing the computer architecture on which all computers have been built for the past 60 years.

Bdale Garbee, Linux veteran and HPE Fellow, Office of the CTO at HPE, and a member of The Linux Foundation advisory board, told me in an interview that what’s really incredible about the announcement is that this is the first time a company has gone fully open source with really revolutionary technologies that have the potential to change the world.

“As someone who has been an open source guy for a really long time I am immensely excited. It represents a really different way of thinking about engagement with the open source world much earlier in the life cycle of a corporate research and development initiative than anything I have ever been near in the past,” said Garbee.

The Machine is a major shift from what we know of computing, which also means a totally different approach from a software perspective. Such a transformation is not unprecedented; we have witnessed many similar transitions, most notably the shift from spinning hard drives to solid-state storage.

“What The Machine does with the intersection of very large low-cost, low-powered, fast non-volatile memory and chip level photonics is it allows us to be thinking in terms of the storage in a very memory driven computing model,” said Garbee.

HPE is doing a lot of research internally, but it also wants to engage the larger open source community at a very early stage to find solutions to new problems.

Garbee said that going open source at an early stage allows people to figure out what the differences are in a Memory-Driven Computing model, with a large fabric-attached storage array talking to potentially heterogeneous processing elements over a photonically interconnected fabric.

HPE will release more code as open source to drive community engagement. In conjunction with this announcement, HPE has made a few developer tools available on GitHub. “So one of the things that we are releasing is the Fabric Attached Memory Emulation toolkit that allows users to explore the new architectural paradigm. There are some tools for emulating the performance of systems that are built around this kind of fabric-attached memory,” said Garbee.

The other three tools include:  

  • Fast optimistic engine for data unification services: A completely new database engine that speeds up applications by taking advantage of a large number of CPU cores and non-volatile memory (NVM).

  • Fault-tolerant programming model for non-volatile memory: Adapts existing multi-threaded code to store and use data directly in persistent memory. Provides simple, efficient fault-tolerance in the event of power failures or program crashes.

  • Performance emulation for non-volatile memory bandwidth: A DRAM-based performance emulation platform that leverages features available in commodity hardware to emulate different latency and bandwidth characteristics of future byte-addressable NVM technologies.

HPE said in a statement that these tools enable existing communities to capitalize on how Memory-Driven Computing is leading to breakthroughs in machine learning, graph analytics, event processing, and correlation.

Linux First

Garbee told me that “Linux is the primary operating system that we are targeting with The Machine.”

Garbee recalled that HPE CTO Martin Fink made a fundamental decision a couple of years ago to open up the research agenda and to talk very publicly about what HP was trying to accomplish with the various research initiatives that would come together to form The Machine. The company’s announcement solidifies that commitment to open source.

Fink’s approach towards opening up should not surprise anyone. Fink was the first vice president of Linux and open source at HPE. That was also the time when Garbee served as the open source and Linux chief technologist. Garbee said that Fink very well understands the concept of collaborative development and maintenance of the software that comes from the open source world. “He and I have a long history of engagement, and we ended up influencing each other’s strategic thinking,” said Garbee.

HPE has a team inside of the company that is working very hard on enabling the various technology elements of The Machine. They are also actively engaging with the Linux kernel community working on key areas of development that are needed to support The Machine. “We need better support for large nonvolatile memories that look more like memory and less like storage devices. There will be a number of things coming out as we move forward in the process of enabling The Machine hardware,” said Garbee.

Beyond Linux

Another interesting area for The Machine, beyond Linux, is the database. The Machine is a transition from things that look like storage devices (file systems on rotating media) to something that looks like directly accessible memory. That memory can be mapped into processes in regions as large as the processor and its instruction set architecture allow. This transition calls for a new way of thinking about access to data, to figure out where the bottlenecks are.

“One of our research initiatives has been around developing a faster optimized engine for data unification that ends up looking like a particular kind of database, and we are very interested in having instant feedback from the open source community,” said Garbee. As we start to bring these new capabilities to the marketplace with The Machine, there is an opportunity to again rethink exactly how this stuff should work, he said.

HPE teams have been working with existing database and big database communities. “However, there are some specific chunks of code that have come from something closer to pure research, and we will be releasing that work so the community can have a look and work out what the next steps are going to be,” said Garbee.

The Machine is to date Hewlett Packard Labs’ biggest project, but even bigger than that is HPE’s decision to make it open and powered by Linux. “It’s a change in behavior from anything that I can recall happening in this company before,” said Garbee.

 

Midokura Raises $20M Series B Round for its Network Virtualization Platform

Network virtualization specialist Midokura today announced it has raised a $20 million Series B round with participation from Japanese fintech company Simplex and existing investors like Allen Miner and the Innovation Network Corporation of Japan. With this round, Midokura’s total funding has now hit $44 million.

As enterprises move away from expensive proprietary networking hardware in favor of network virtualization and software-defined networking, Midokura offers a number of services that allow them to make this switch. The company’s efforts mostly focus on the open source OpenStack platform (which you can think of as an open source version of AWS that enterprises can run in their own datacenters). Midokura, like many similar players in this ecosystem, offers both an open source and an enterprise version of its core tools. The paid version, which costs $1,899 per host, includes enterprise support, as well as support for technologies like VMware’s vSphere and the ESXi hypervisor.

Read more at Tech Republic

MapR Shows Off Enterprise-Grade Spark Distribution

At Spark Summit in San Francisco, Calif., this week, Hadoop distribution vendor MapR Technologies announced a new enterprise-grade Apache Spark distribution.

The new distribution, available now in both MapR Converged Community Edition and MapR Converged Enterprise Edition, includes the complete Spark stack, patented features from MapR and key open source projects that complement Spark.

Read more at InfoWorld

Puppet DevOps Comes to the Mainframe

Without DevOps programs such as Puppet, Chef, and Ansible, the cloud wouldn’t be possible. Now Puppet is trying to work its systems management magic on IBM’s z Systems and LinuxONE.

DevOps works by automating server operations. With it, both programmers and administrators can focus on making the most of their hardware’s raw computing power instead of wasting time managing server operations by hand. It has found its greatest success in controlling clusters of commercial off-the-shelf (COTS) x86 computers. Puppet has decided there’s no reason it can’t also be used for mainframes.

So, Puppet has announced a new set of modules for managing IBM z Systems and LinuxONE mainframes and IBM WebSphere programs. This will make it easier for customers to manage their systems and their applications.

Read more at ZDNet