
How to Record Your Terminal Session on Linux

Recording a terminal session can be useful for helping someone learn a process, sharing information in an understandable way, or presenting a series of commands in an orderly manner. Whatever the purpose, there are many occasions when copy-pasting text from the terminal isn’t very helpful, while capturing a full video of the process is overkill and may not always be possible. In this quick guide, we will take a look at the easiest way to record and share a terminal session in .gif format.
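
The excerpt above doesn’t name the tool the full guide uses, so as a hedged illustration only, here is one common way to do it with ttyrec and ttygif (package availability varies by distribution; on some systems ttygif must be built from its GitHub repository):

    # Install the recorder and the GIF converter (assuming both are packaged for your distro)
    sudo apt-get install ttyrec ttygif

    # Start recording; everything displayed in this shell is captured
    ttyrec demo_session

    # ...run the commands you want to demonstrate, then type `exit` to stop recording

    # Convert the recording into an animated GIF (writes tty.gif in the current directory)
    ttygif demo_session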

 

How to Install, Secure, and Tune the Performance of a MariaDB Database Server

A database server is a critical component of the network infrastructure necessary for today’s applications. Without the ability to store, retrieve, update, and delete data (when needed), the usefulness and scope of web and…

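The full article isn’t reproduced in this digest, but as a rough, hedged sketch of the opening steps (package and service names assume a Debian/Ubuntu system), installing and hardening MariaDB typically starts like this:

    # Install the MariaDB server and client packages
    sudo apt-get update
    sudo apt-get install mariadb-server mariadb-client

    # Run the interactive hardening script: set a root password, remove the
    # anonymous user and the test database, and disable remote root login
    sudo mysql_secure_installation

    # Confirm the service is running (the unit may be named mysql on some setups)
    sudo systemctl status mariadb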

How To Install And Use VBoxManage On Ubuntu 16.04 And Its Command-Line Options

VirtualBox comes with a suite of command line utilities, and you can use the VirtualBox command line interfaces (CLIs) to manage VMs on a remote headless server. In this tutorial, we will show you how to create and start a VM without the VirtualBox GUI using VBoxManage. VBoxManage is the command-line interface to VirtualBox that you can use to completely control VirtualBox from the command line of your host operating system. VBoxManage supports all the features that the graphical user interface gives you access to, but it supports a lot more than that. It exposes virtually all the features of the virtualization engine, even those that cannot (yet) be accessed from the GUI. You will need to use the command line if you want to use a different user interface than the main GUI and control some of the more advanced and experimental configuration settings for a VM.
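
As a quick illustration of that workflow (the VM name, memory size, disk size, and ISO path below are placeholders, not values taken from the article), creating and booting a headless VM with VBoxManage looks roughly like this:

    # Create and register a new 64-bit Ubuntu VM (name and settings are examples)
    VBoxManage createvm --name "ubuntu-vm" --ostype Ubuntu_64 --register
    VBoxManage modifyvm "ubuntu-vm" --memory 2048 --cpus 2 --nic1 nat

    # Create a virtual disk and attach it, along with an installer ISO
    VBoxManage createhd --filename ~/vms/ubuntu-vm.vdi --size 20000
    VBoxManage storagectl "ubuntu-vm" --name "SATA" --add sata --controller IntelAhci
    VBoxManage storageattach "ubuntu-vm" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium ~/vms/ubuntu-vm.vdi
    VBoxManage storageattach "ubuntu-vm" --storagectl "SATA" --port 1 --device 0 \
        --type dvddrive --medium ~/isos/ubuntu-16.04-server-amd64.iso

    # Boot the VM with no GUI attached
    VBoxManage startvm "ubuntu-vm" --type headless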

Read more at Linuxpitstop.com

How To Install Btrfs Tools And Manage BTRFS Operations

Btrfs is a new copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair, and easy administration. Jointly developed at multiple companies, Btrfs is licensed under the GPL and open for contribution from anyone. Given its rapid pace of development, Btrfs is currently considered experimental. But according to the wiki maintained by the Btrfs community, many of its current developers and testers run it as their primary file system with very few “unrecoverable” problems. Thus, Linux distributions tend to ship Btrfs as an option but not as the default. Btrfs is not a successor to the default Ext4 file system used in most Linux distributions, but it can be expected to replace Ext4 in the future.

Read more at Linuxpitstop.com
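
Before heading to the full article, here is a brief, hedged sketch of the kind of operations it covers (the device name and mount point are placeholders; formatting a device destroys its data):

    # Install the userspace tools (the package is named btrfs-tools on Ubuntu 16.04)
    sudo apt-get install btrfs-tools

    # Format a spare partition as Btrfs and mount it (/dev/sdb1 is a placeholder)
    sudo mkfs.btrfs /dev/sdb1
    sudo mkdir -p /mnt/btrfs
    sudo mount /dev/sdb1 /mnt/btrfs

    # Create a subvolume and take a read-only snapshot of it
    sudo btrfs subvolume create /mnt/btrfs/data
    sudo btrfs subvolume snapshot -r /mnt/btrfs/data /mnt/btrfs/data-snap

    # Inspect the filesystem and its subvolumes
    sudo btrfs filesystem show /mnt/btrfs
    sudo btrfs subvolume list /mnt/btrfs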

Linux Leader Bdale Garbee Touts Potential of HPE’s Newest Open Source Project

The technology landscape is changing very fast. We now carry devices in our pockets that are more powerful than the PCs we had some 20 years ago. This means we are now continuously churning out huge amounts of data that travel between machines: between data centers (aka the cloud) and our mobile devices. The cloud/data center as we know it today is destined to change, too, and this evolution is changing market dynamics.

As Peter Levine, general partner at Andreessen Horowitz, said during the Open Networking Summit (ONS), cloud computing is not going to be here forever. It will change, it will morph, and he believes the industry will go from centralized to distributed and then back to centralized. He said that the cloud computing model would disaggregate in the not-too-distant future, back to a world of distributed computing.

That metamorphosis means we will need a new class of computing devices, and we may need a new approach toward computers. That is something Hewlett Packard Enterprise (HPE) has been working on. Back in 2014, HPE introduced the concept of The Machine, which took a unique approach by using memristors.

HPE’s page for The Machine explained, “The Machine puts the data first. Instead of processors, we put memory at the core of what we call ‘Memory-Driven Computing.’ Memory-Driven Computing collapses the memory and storage into one vast pool of memory called universal memory. To connect the memory and processing power, we’re using advanced photonic fabric. Using light instead of electricity is key to rapidly accessing any part of the massive memory pool while using much less energy.”

Open Source at the Core

Yesterday, HPE announced that it’s bringing The Machine to the open source world. HPE is inviting the open source community to collaborate on HPE’s largest and most notable research project, which is focused on reinventing the computer architecture on which all computers have been built for the past 60 years.

Bdale Garbee, Linux veteran, HPE Fellow in the Office of the CTO, and a member of The Linux Foundation advisory board, told me in an interview that what’s really incredible about the announcement is that this is the first time a company has gone fully open source with genuinely revolutionary technologies that have the potential to change the world.

“As someone who has been an open source guy for a really long time I am immensely excited. It represents a really different way of thinking about engagement with the open source world much earlier in the life cycle of a corporate research and development initiative than anything I have ever been near in the past,” said Garbee.

The Machine is a major shift from what we know of computing, which also means a totally different approach from a software perspective. Such a transformation is not unprecedented; we have witnessed similar transitions before, most notably the shift from the spinning disks of hard drives to solid-state storage.

“What The Machine does with the intersection of very large low-cost, low-powered, fast non-volatile memory and chip level photonics is it allows us to be thinking in terms of the storage in a very memory driven computing model,” said Garbee.

HPE is doing a lot of research internally, but they also want to engage the larger open source community to get involved at a very early stage to find solutions to new problems.

Garbee said that going open source at an early stage allows people to figure out what the differences are in a Memory-Driven Computing model, with a large fabric-attached storage array talking to potentially heterogeneous processing elements over a photonically interconnected fabric.

HPE will be releasing more and more code as open source for community engagement. In conjunction with this announcement, HPE has made a few developer tools available on GitHub. “So one of the things that we are releasing is the Fabric Attached Memory Emulation toolkit that allows users to explore the new architectural paradigm. There are some tools for emulating the performance of systems that are built around this kind of fabric attached memory,” said Garbee.

The other three tools include:  

  • Fast optimistic engine for data unification services: A completely new database engine that speeds up applications by taking advantage of a large number of CPU cores and non-volatile memory (NVM)

  • Fault-tolerant programming model for non-volatile memory: Adapts existing multi-threaded code to store and use data directly in persistent memory. Provides simple, efficient fault-tolerance in the event of power failures or program crashes.

  • Performance emulation for non-volatile memory bandwidth: A DRAM-based performance emulation platform that leverages features available in commodity hardware to emulate different latency and bandwidth characteristics of future byte-addressable NVM technologies

HPE said in a statement that these tools enable existing communities to capitalize on how Memory-Driven Computing is leading to breakthroughs in machine learning, graph analytics, event processing, and correlation.

Linux First

Garbee told me that “Linux is the primary operating system that we are targeting with The Machine.”

Garbee recalled that HPE CTO Martin Fink made a fundamental decision a couple of years ago to open up the research agenda and to talk very publicly about what HP was trying to accomplish with the various research initiatives that would come together to form The Machine. The company’s announcement solidifies that commitment to open source.

Fink’s approach towards opening up should not surprise anyone. Fink was the first vice president of Linux and open source at HPE. That was also the time when Garbee served as the open source and Linux chief technologist. Garbee said that Fink very well understands the concept of collaborative development and maintenance of the software that comes from the open source world. “He and I have a long history of engagement, and we ended up influencing each other’s strategic thinking,” said Garbee.

HPE has a team inside of the company that is working very hard on enabling the various technology elements of The Machine. They are also actively engaging with the Linux kernel community working on key areas of development that are needed to support The Machine. “We need better support for large nonvolatile memories that look more like memory and less like storage devices. There will be a number of things coming out as we move forward in the process of enabling The Machine hardware,” said Garbee.

Beyond Linux

Another interesting area for The Machine, beyond Linux, is the database. The Machine represents a transition from things that look like storage devices (file systems on rotating media) to something that looks like directly accessible memory, memory that can be mapped in blocks as large as processes and instruction set architectures allow. This transition requires a new way of thinking about access to data and about where the bottlenecks are.

“One of our research initiatives has been around developing a faster optimized engine for data unification that ends up looking like a particular kind of database, and we are very interested in having instant feedback from the open source community,” said Garbee. As we start to bring these new capabilities to the marketplace with The Machine, there is an opportunity to again rethink exactly how this stuff should work, he said.

HPE teams have been working with existing database and big data communities. “However, there are some specific chunks of code that have come from something closer to pure research, and we will be releasing that work so people can have a look and work out what the next steps are going to be,” said Garbee.

The Machine is to date Hewlett Packard Labs’ biggest project, but even bigger than that is HPE’s decision to make it open and powered by Linux. “It’s a change in behavior from anything that I can recall happening in this company before,” said Garbee.

 

Midokura Raises $20M Series B Round for its Network Virtualization Platform

Network virtualization specialist Midokura today announced it has raised a $20 million Series B round with participation from Japanese fintech company Simplex and existing investors like Allen Miner and the Innovation Network Corporation of Japan. With this round, Midokura’s total funding has now hit $44 million.

As enterprises move away from expensive proprietary networking hardware in favor of network virtualization and software-defined networking, Midokura offers a number of services that allow them to make this switch. The company’s efforts mostly focus on the open source OpenStack platform (which you can think of as an open source version of AWS that enterprises can run in their own datacenters). Midokura, like many similar players in this ecosystem, offers both an open source and an enterprise version of its core tools. The paid version, which costs $1,899 per host, includes enterprise support, as well as support for technologies like VMware’s vSphere and the ESXi hypervisor.

Read more at Tech Republic

MapR Shows Off Enterprise-Grade Spark Distribution

At Spark Summit in San Francisco, Calif., this week, Hadoop distribution vendor MapR Technologies announced a new enterprise-grade Apache Spark distribution.

The new distribution, available now in both MapR Converged Community Edition and MapR Converged Enterprise Edition, includes the complete Spark stack, patented features from MapR and key open source projects that complement Spark.

Read more at InfoWorld

Puppet DevOps Comes to the Mainframe

Without DevOps programs such as Puppet, Chef, and Ansible, the cloud wouldn’t be possible. Now Puppet is trying to work its systems management magic on IBM’s z Systems and LinuxONE.

DevOps works by automating server operations. With it, both programmers and administrators can focus on making the most of their hardware’s raw computing power instead of wasting time managing server operations by hand. It has found its greatest success in controlling clusters of commercial off-the-shelf (COTS) x86 computers. Puppet sees no reason it can’t also be used for mainframes.

So, Puppet has announced a new set of modules for managing IBM z Systems and LinuxONE mainframes and IBM WebSphere programs. This will make it easier for customers to manage their systems and their applications.

Read more at ZDNet

ODPi: Test Less, Build More Applications With Hadoop

Testing applications against Hadoop distributions is not fun for either application developers or end users, and it takes up too much precious time.

According to Alan Gates, co-founder of Hortonworks and ODPi member, that’s the issue the Open Data Platform initiative (ODPi) is here to solve: create a single test specification that works across all Hadoop distributions so developers can get back to creating innovative applications and end users can get back to making money, or curing cancer, or sending people into space.

“That’s where ODPi sees itself bringing value,” Gates said. “Specifying not what’s in this software, or writing competitive software, but specifying how this software is installed where it can be used regardless of which distribution it is, how is it configured — all of those questions, which maybe aren’t as exciting as developing new software, but they’re questions you have to answer well in order for people to use your code.”

Gates gave a keynote session about the nonprofit organization at Apache Big Data in Vancouver in May. ODPi, which now has 29 member companies and 35 maintainers, released its first runtime specification on March 31 of this year.

It’s also working on an operations specification, built around Apache Ambari, that is slated to launch in July of this year.

“Those specifications are frankly a little boring,” Gates said. “It’s just [saying things like]: here is how to lay out the directory so people can find the config files; here’s the environment variables that must be set so people know where you put the binaries; don’t move the binaries around on people and don’t take some away; don’t change public APIs; don’t rename .JARs.

“None of this is rocket science — despite the little rocket in our logo — but it’s all very necessary,” Gates said.
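
To make that concrete, here is a purely illustrative sketch, not the actual ODPi specification text, of the kind of layout and environment-variable conventions Gates is describing, using variable names common in Hadoop deployments (all paths are placeholders):

    # Illustrative only: the sort of conventions a runtime spec pins down
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    export HADOOP_HOME=/opt/hadoop                    # where the binaries live; don't move them around
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop    # predictable location for the config files
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin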

Gates sees three main constituencies for whom the ODPi is trying to make the Hadoop ecosystem a better place:

  • End users: “We want [end users] to be able to run ODPi-compliant distributions with ODPi-compliant applications on top and be able to mix and match and not worry about who they bought which piece from.”

  • Application developers and ISVs: “We want them to ‘test once, run anywhere’ and reduce the cost of building the applications. The more applications they build, the faster the ecosystem grows and everyone is happier.”

  • Distribution providers: “We want to give them guidelines on how to install and set up their software so the two groups above get their benefits.”

Gates said the ODPi doesn’t write much code, but any code it does write is contributed back to the Apache Software Foundation’s projects. Ambari and Bigtop have seen the most commits from ODPi, he said.

“We are very committed to making sure all that work that we do feeds back up into the Apache communities and is used by them,” Gates said.  

Watch the complete video below.

https://www.youtube.com/watch?v=mf5KKAsPyJc?list=PLGeM09tlguZQ3ouijqG4r1YIIZYxCKsLp


Real Hackers Don’t Wear Hoodies (Cybercrime is Big Business)

Most people probably have an idea about what a hacker looks like. The image of someone sitting alone at a computer, with their face obscured by a hoodie, staring intently at lines of code in which their particular brand of crime or mischief is rooted, has become widely associated with hackers. You can confirm this by simply doing an image search for “hackers” and seeing what you come up with.

After decades of researching hackers, I’ve decided that this picture is distorting how people need to see today’s threats. It makes some very misleading implications about the adversaries that people and businesses need to focus on. It’s a mistake to take the old “hacker-in-a-hoodie” stereotype and think it applies to the cyber crime and nation-state attacks we’re facing today.

When I see a news article with a stock photo of a hacker-in-a-hoodie, I feel like I’m being led to believe that hackers work in isolation, and that hacking is a hobby one indulges in when not working or studying. My takeaway from this image is that hackers are portrayed as pursuing a casual interest rather than working to achieve goals. But the idea that such unprofessional adversaries are responsible for things like Stuxnet or ransomware is incredibly naïve. Why don’t we see pictures of hackers wearing a suit and tie? Or a uniform?

Hacking is now a marketable skill that’s commodified as products and services, and sold to criminals, companies and governments. Hackers now have their own networks, both technical and social, that they use to buy, sell, and trade hacking services and malicious software. They pool resources and coordinate efforts, giving threats far greater capabilities than any individual hacker could develop on their own. After all, there wouldn’t be an exploit industry enabling cyber attacks if it weren’t for the networks connecting hackers, companies, governments, and other organizations.

Cyber crime has industrialized hacking. It’s created structures for hackers to operate within, and objectives (usually financial) to achieve. We are aware of several organized cyber crime gangs that have made tens of millions of dollars in profit with their attacks. And now, with nation-states becoming increasingly active participants in the threat landscape, we’re only going to see more growth and opportunities in hacking.

In the past year I’ve been speaking about the potential existence of Cyber Crime Unicorns – cyber crime ventures that could be valued at over one billion dollars. I can admit the comparison is problematic because a criminal enterprise could never be valued in the same way as a legitimate business. But comparing today’s hackers with the old stereotypes is even more problematic. The hacker-in-a-hoodie is a great picture of the hobbyist hackers from the past, and it’s still relevant today when discussing hacktivist groups like Anonymous. But the Cyber Crime Unicorn represents the relatively unimpeded growth of cyber crime, which is a far greater threat. Continuing to perpetuate the stereotypes allows the hobbyist hacker threats of history to distract us from the cyber threats of today, and ignoring such misdirection will only cause problems in the future.

I’ll be discussing these topics further, and how they apply to open source systems and to service providers, in my keynote (“Complexity: The Enemy of Security”) at the OPNFV Summit in Berlin on June 22-23. See you in Berlin!

Mikko Hypponen is the Chief Research Officer for F-Secure.