
MAME Is now Free and Open Source Software

The MAME (Multiple Arcade Machine Emulator) project has announced a license change, moving from the old, unique “MAME License” to the GNU GPLv2-or-later for the full codebase, with many individual components available under the 3-clause BSD License. The announcement notes that a considerable effort went into the relicensing process.

Read more at LWN

 

17 Linux grep command examples for data analysis

grep is a command-line search utility that filters the input given to it. It got its name from the ed editor command g/re/p (global / regular expression / print). grep can trim a command's output down to just the information you need, and it becomes a killer tool once you combine it with regular expressions. In this post we will see how to use grep in a basic way and then move on to some advanced and rarely used options. In the next couple of posts we will see what grep can do with the help of regular expressions.

GREP command syntax

grep [options] [searchterm] filename

or

command | grep [options] [searchterm]
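
For instance, the second form filters another command's output through a pipe, as in these quick illustrations (sshd and usb are just example filters; any process name or kernel message will do):

ps -ef | grep sshd
dmesg | grep -i usb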

Before starting with the grep command examples, here is the file I used and its contents.

cat file1.txt

Output:

surendra 31 IBM Chennai
Steve 45 BOA London
Barbi 25 EasyCMDB Christchurch
Max 25 Easy CMDB Christchurch
Nathan 20 Wipro Newyark
David 20 ai Newyark

Searching a single file using grep

Example 1: Search for the word “nathan” in file1.txt

grep nathan file1.txt

You don't get any output, because the word nathan (all lowercase) is not in the file. This shows that grep is a case-sensitive command. If you specifically want Nathan, capitalize the N and try once again.

Example 2: Search for the exact word “Nathan”

root@linuxnix:~# grep Nathan file1.txt
Nathan 20 Wipro Newyark

Example 3: Search for the word regardless of whether it uses capital or small letters, so there is no confusion between nathan and Nathan. The -i option makes grep ignore case.

root@linuxnix:~# grep -i Nathan file1.txt
Nathan 20 Wipro Newyark

Example 4: I suggest you always put your search term in single quotes. This avoids confusing grep. Suppose you want to search for “Easy CMDB” in a file; without single quotes the search will not do what you expect. Try the examples below.

Without quotes:

root@linuxnix:~# grep Easy CMDB file1.txt
grep: CMDB: No such file or directory
file1.txt:Barbi 25 EasyCMDB Christchurch
file1.txt:Max 25 Easy CMDB Christchurch

What did grep do?

If you observe, you got an error stating that there is no file called CMDB. That is true, there is no such file. This output has two issues:
1) grep treated the second word in the command, CMDB, as a file name.
2) grep treated just “Easy” as the search term.

Example 5: Search for the exact term using single quotes.

root@linuxnix:~# grep 'Easy CMDB' file1.txt
Max 25 Easy CMDB Christchurch

You may wonder why single quotes and not double quotes. You can also use double quotes, and in fact you have to when you want to send a bash variable into the search term.

Example 6: Search for the value of a shell variable in a file. My shell variable is NAME1, which is assigned the value Nathan. See the sketch below for how single and double quotes behave.
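
The exact commands are in the full post linked below, but as a rough sketch (reusing the NAME1 variable from above): with single quotes grep receives the literal string $NAME1 and finds nothing, while with double quotes the shell expands the variable before grep sees it.

root@linuxnix:~# NAME1=Nathan
root@linuxnix:~# grep '$NAME1' file1.txt
root@linuxnix:~# grep "$NAME1" file1.txt
Nathan 20 Wipro Newyark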

Read Full Post:  http://www.linuxnix.com/grep-command-usage-linux/

 

 

5 Lightweight Linux For Old Computers

Do you have an old computer? Have you kept your old computer somewhere on a rack? Now is the time to take it out and start using it. In this article I will walk you through a list of 5 lightweight Linux distributions that you can install and use on old computers. All 5 of these distributions require few resources and can therefore run on old desktops or laptops. So without any further delay, let's dive in.

Read At LinuxAndUbuntu

ODPi: The Open Ecosystem of Big Data – Update and Next Steps

As it's been a while since we updated everyone on our progress, we thought it would be appropriate to share what ODPi has been up to over the past several months. In upcoming blogs, we will preview some exciting deliverables that will be coming out at the end of March.

ODPi’s journey to today can be thought of as passing through the following four phases:

  1. Problem Recognition

  2. Industry Coalescence

  3. Getting Organized

  4. Getting to Work

In the rest of this blog, I’ll describe each of these phases.

Problem Recognition

If they’re being honest, Hadoop and Big Data proponents recognize that this technology has not achieved its game-changing business potential.

Gartner puts it well: “Despite considerable hype and reported successes for early adopters, 54 percent of survey respondents report no plans to invest [in Hadoop] at this time, while only 18 percent have plans to invest in Hadoop over the next two years,” said Nick Heudecker, research director at Gartner. “Furthermore, the early adopters don’t appear to be championing for substantial Hadoop adoption over the next 24 months; in fact, there are fewer who plan to begin in the next two years than already have.” – Gartner Survey Highlights Challenges to Hadoop Adoption

The top two factors suppressing demand for Hadoop according to Gartner’s research are a skills gap and unclear business value for Hadoop initiatives. In ODPi’s view, the fragmented nature of the Hadoop ecosystem is a leading cause for businesses’ difficulty extracting value from Hadoop investments. Hadoop, its components, and Hadoop Distros, are all innovating very quickly and in different ways. This diversity, while healthy in many ways, also slows Big Data Ecosystem development and limits adoption.

Specifically, the lack of consistency across major Hadoop distributions means that commercial solutions and ISVs – precisely the entities in the value chain whose solutions deliver business value – must invest disproportionately in multi-distro certification and regression testing. This increases their development costs (not to mention enterprise support costs) and suppresses innovation.

Industry Coalescence

In February 2015, forward-thinking Big Data and Hadoop players at companies as diverse as AltiScale, Capgemini, CenturyLink, EMC, GE, Hortonworks, IBM, Infosys, Pivotal, SAS and VMware decided to work together to address the underachieving industry growth rate through standardization.

Here’s what some of the founding members had to say when they joined:

  • CapGemini: One of the most consistent challenges that we’ve come across is the need to get multiple vendors’ technologies working together. Sometimes this is to get IBM’s data integration or analytics technologies working on another vendor’s distribution, such as Pivotal or Hortonworks, or to run other vendors analytics tools, such as SAS, on top of an IBM Big Data platform.

  • Hortonworks: Some might look at Pivotal and IBM and others as competitors. We have to set those differences aside and focus on the things we can do jointly. That’s what this initiative is about. It just comes from working together and building trust and we’re used to that. It’s really what open source is about. If you look at the Hadoop industry, there are shared name components. There are varying versions of those components that have different capabilities, different protocols and API incompatibilities. What this effort is aimed at is a stable version of those, so that takes the guesswork out of the broader ecosystem.

  • IBM: One desired outcome of the ODP is to bring new big data solutions to the market more quickly. This will be achieved by making it easier for the ecosystem vendors to enable and test on a well-defined common Hadoop core platform.

  • SAS: SAS is not in it to choose sides on Hadoop distribution vendors.  We support all five major distributions — Cloudera, Hortonworks, IBM, MapR and Pivotal — with our applications, and requests continue to pour in for more support of region-specific distributions. SAS will continue our collaboration with all Hadoop vendors.

Anyone else working with multiple distributions of Hadoop will understand the challenges involved. Here are three revealing examples from the last few months, each from a different (unnamed) vendor:

  • Calling an HDFS API to see if an HDFS directory exists. Some don't throw an exception and simply return a null for the directory. Some throw an exception. (A shell-level sketch of this check appears right after this list.)

  • Setting a baseline of Hive 13 so we get access to some new syntax. Try it on one, it works great and we are able to do some really innovative stuff.  Try it on another that says it also has Hive 13, and we get “syntax error”?  

  • Trying to be a good ecosystem citizen, i.e. leveraging the HCAT APIs for accessing shared metadata. All is good. Then we get the latest “dot” release from the vendor, and guess what, they changed the package name of the class used to get the information. Code change necessary.
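
To make the first bullet above concrete, here is a minimal shell sketch of that directory-existence check using the stock HDFS command line; the path /data/incoming is a made-up placeholder, and the inconsistency described in the bullet sits in the Java FileSystem API behind this command, where one distribution returned null and another threw an exception.

# Exit status 0 means the path exists and is a directory
hdfs dfs -test -d /data/incoming && echo "directory exists" || echo "directory missing"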

Getting Organized

In September of 2015, the decision was made to move under The Linux Foundation. The Linux Foundation’s recognized excellence in Open Source governance and community and ecosystem development provided existing and prospective ODPi members with the confidence that the organization would continue to grow while operating openly, equitably and transparently.

Coinciding with the move to the Linux Foundation, several prominent industry players joined ODPi, bringing the total membership to 25 by the end of 2015.

The Linux Foundation facilitated the bylaws and organization of ODPi in order to draw from all of its members' diverse areas of expertise in the development of the specification. Figure 2 depicts ODPi's operating structure.

Getting to Work

With the Release Team and TSC in place, the hard work of defining the ODPi Runtime and Operations Specifications got underway in earnest in Q4 2015.

The Release Team published the draft Runtime Specification on January 21 and has been hard at work since then on finalizing the spec and developing tests and a deployment sandbox, which will be announced at the end of March. We will also be publishing more details on the Spec and tests, so check back soon!

The author: John Mertic has spent his entire career in open source, from contributing to projects such as PHP, to serving as community manager for SugarCRM, to participating in the open source foundations OW2 and OpenSocial. A long-time speaker and author, he now uses his expertise in his role with the Linux Foundation to help nurture and grow large-scale, collaborative open source projects.

Syncsort Delivers Mainframe Hadoop, Spark Data

Syncsort simplifies mainframe big data access for enterprises seeking governance and compliance in Apache Hadoop and Apache Spark data.

Read more at eWeek

This Week in Linux News: The Linux Foundation Advocates for Gender Diversity With New Partnership, SCO Lawsuit Comes to an End, & More.

This week in Linux news, The Linux Foundation advocates for gender diversity at tech industry events with a new Women Who Code partnership, the SCO lawsuit comes to an end due to a lack of money, and more! Get up to date on the latest Linux & OSS stories with this weekly digest:

1) The Linux Foundation partners with Women Who Code on diversity initiatives for 2016 events.

Women In Technology: The Challenge and the Responsibility– Forbes

How the Linux Foundation is Increasing the Woman Force in Open Source– CIO

2) The Linux/Unix code and copyright lawsuit between SCO Group and IBM is nearing an end due to SCO's dwindling funds.

Win for Open Source: SCO Court Case against Linux Hits End of Road– The VAR Guy

3) Ubuntu MATE updates the Ubuntu Pi Flavour Maker to support porting the Ubuntu MATE, Xubuntu, Lubuntu, and Ubuntu Server OSes to the newly released Raspberry Pi 3 Model B.

Anyone Can Now Port Ubuntu Linux for Raspberry Pi 3 with Ubuntu Pi Flavour Maker– Softpedia

4) Linux Mint developers receive criticism for their handling of a recent security breach, but might deserve praise.

Linux Mint: The Right Way to React to a Security Breach– ZDNet

5) The Ubuntu-running Dell XPS Developer Edition will now feature Thunderbolt 3 support with a new update.

Dell is bringing Thunderbolt 3 support to Linux systems– Digital Trends

 

 

How to setup an NFS Server and configure NFS Storage in Proxmox VE

In this tutorial, I will guide you through the installation of an NFS server on CentOS 7; then we will add the NFS share as a storage option in the Proxmox server so that it can be used as backup space for the virtual machines.
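
The full walkthrough is behind the link below; as a rough sketch of the two halves of the setup (the export path /srv/nfs/backups, the subnet, the server IP and the storage ID nfs-backup are all made-up placeholders), it boils down to something like this:

# On the CentOS 7 box: install NFS and export the share
yum install -y nfs-utils
mkdir -p /srv/nfs/backups
echo '/srv/nfs/backups 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra

# On the Proxmox VE host: register the share as backup storage
pvesm add nfs nfs-backup --server 192.168.1.10 --export /srv/nfs/backups --content backup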

Read more at HowtoForge

Docker Steps into Large-Scale Container Orchestration with Conductant Purchase

Containers are revolutionizing software development, with Docker blazing a trail not only for working with containers at scale, but also for how applications are shipped, built, and tested. With the rise of container-based solutions, Docker developed the cluster-management software Docker Swarm for those working with containers in the enterprise.

On Thursday, Docker took the next step in addressing container orchestration at scale, announcing that it has acquired Conductant, a company that was in the process of building out the Apache Aurora container orchestration software. Aurora is based, at least in its general architecture, on the Borg software, built by Google to manage its own container operations. 

…Since Thursday's announcement, Docker has wasted no time in coming up with ideas for how to integrate Aurora into Docker, particularly Docker Swarm. With Aurora and Docker Swarm as part of one's infrastructure, enterprises could manage microservices with the power of both platforms. As such, this will provide stability, ease of deployment, and much more to Docker users.

Read more at The New Stack

Beautiful Manjaro Deepin 16.03 Is Now Based on Linux Kernel 4.1

Manjaro Deepin, one of the latest additions to the Manjaro family, has been upgraded to version 16.03 and is now ready for download.

The Manjaro Linux distribution is based on Arch Linux, and the project has a build for pretty much all of the desktop environments and window managers out there. The Deepin flavor is just the latest one, and it also happens to be one of the most interesting. Deepin Linux took everyone by surprise with its new and revolutionary desktop environment, and the Manjaro community couldn't pass up the opportunity to adopt the new solution for their distro.

Tech Firms Grapple With How to Make Open Source Pay

Issue gains currency as private investors pour money into open-source startups. …Open-source projects underpin services offered by companies such as Facebook Inc., Twitter Inc., and Uber Technologies Inc., and open-source-based operating systems such as Linux power many corporate servers, financial trading platforms, and Android phones. However, companies that offer such software as their primary product have, by and large, found it rough going.

Since most open-source firms don’t have a product to sell, they historically made money by selling technical services—essentially tech support and consulting services—that help companies take advantage of free tools.

Read more at The Wall Street Journal