Although the Linux find command does a fabulous job of searching on the command line, there are situations where a dedicated tool is more convenient. One such case is finding lines in a file that start with a particular word. There exists a command – dubbed look – that does just that.
In this tutorial, we will discuss this command using some easy-to-understand examples. But before we do that, it’s worth mentioning that all examples in this article have been tested on an Ubuntu 18.04 LTS machine.
Linux look command
The look command in Linux displays lines beginning with a given string. Following is its syntax:
look [-bdf] [-t termchar] string [file …]
And here’s what the man page says about the tool:
The look utility displays any lines in file which contain string as a prefix.

If file is not specified, the file /usr/share/dict/words is used, only alphanumeric characters are compared and the case of alphabetic characters is ignored.
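For instance, a minimal session might look like the following. This sketch assumes the look utility is installed (it ships with Ubuntu), and uses a hypothetical /tmp/fruit.txt file created for the example:

```shell
# look performs a binary search, so its input file must be sorted
printf 'apple\napricot\nbanana\nberry\n' > /tmp/fruit.txt

# Print every line that begins with "ap"
look ap /tmp/fruit.txt
# apple
# apricot
```

Run with no file argument (e.g., look lin), the command searches the default /usr/share/dict/words list instead, ignoring case and non-alphanumeric characters as the man page describes.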
Elementary OS has been my distribution of choice for some time now. I find it a perfect blend of usability, elegance, and stability. Out of the box, Elementary doesn’t include a lot of apps, but it does offer plenty of style and all the apps you could want are an AppCenter away. And with the upcoming release, the numbering scheme changes. Named Juno, the next iteration will skip the .5 number and go directly to 5.0. Why? Because Elementary OS is far from a pre-release operating system and the development teams wanted to do away with any possible confusion.
Elementary OS 0.4 (aka Loki) is about as stable a Linux operating system as I have ever used. And although Elementary OS 5.0 promises to be a very natural evolution from 0.4, it is still very much in beta, though ready for testing. Because Juno is based on Ubuntu 18.04, it enjoys a rock-solid base, so the foundation of the OS will already be incredibly stable.
With that in mind, I downloaded 5.0 and spun it up in VirtualBox. The results are as impressive as I assumed they’d be. Let’s get this open source operating system installed and see what it has to offer.
Installation
I’m not going to spend much time explaining the installation of Elementary OS. Why? If you’ve installed any flavor of Linux (or any operating system at all), then you can walk through the installation of this distribution in your sleep. The Elementary OS developers are working, in conjunction with System76, on a new installer. As of the current release of Juno, however, there is no sign of that installer, so you’ll find the same method of installation seen in previous iterations of the platform.
You can run Elementary OS live or install it immediately. Burn the ISO image onto a CD/DVD or USB flash drive and boot it on your machine (or use the ISO image to create a virtual machine). The installer will have you configure your language and keyboard, select the installation type (Figure 1), choose whether to download updates immediately and install third-party media codecs, and then create a user.
Figure 1: The current Elementary OS installer.
Once the installation completes, reboot the machine and log in. Shortly after logging in, you should be prompted that updates are available. I highly recommend running the updates before using the desktop (since this is still in beta, the updates will come often). Now that we’re installed and updated, let’s take a look at some of those new features.
The AppCenter
The Elementary OS AppCenter has been given a slight facelift. Although the previous version was quite serviceable, it seems the designers have taken a nod from GNOME Software (which is a good thing) and added recommendations under the featured titles (Figure 2).
Figure 2: GNOME Software is on the left and the Elementary OS AppCenter is on the right.
Another upcoming feature of the AppCenter is the ability to pay developers “what you want” for apps. The Elementary OS developers are taking a unique approach to apps. Elementary OS first released the AppCenter in May 2017, and by February 2018 they’d processed $1,700.00 worth of payments from just over 750 charges. That means the average price paid for an app purchased from the AppCenter was $2.30. To make things a bit more lucrative for developers (and to try an interesting experiment), Elementary OS will include a HumbleButton for paid apps that allows users to pay what they will. Another change is that paid apps won’t automatically update (if you click the Update All button in the AppCenter). Instead, to update a paid app, you’ll have to make a donation for it (ranging from $0.00 to $10.00, or a custom amount). Hopefully, that change will translate into more developers getting paid for their work.
Aesthetics
The developers and designers of Elementary OS have intentionally held back the details of the user-facing improvements to the UI, so as to offer users a big “reveal” on release day. According to Cassidy James Blædē, there’s actually a good amount of visual detailing and refinement to be found in Juno. So if you download the current beta, you may be surprised to find it looks remarkably like Loki. Fear not: when Juno is finally released, the aesthetic changes will be noticeable. The one item they have released is the newly adopted color palette. The full palette can be viewed here (along with all logo and font information).
Along with the new palette, Juno brings:
A Night Light feature (to make late night staring at the screen a bit less harsh on the eyes).
Latest GTK+ features (which includes some animated panel icons).
Very slight changes to the default theme (icons are a bit brighter and colorful).
App Changes
Because there are so few apps shipped out of the box, you won’t find much in the way of change here. The developers have rebranded the default text editor, Scratch, as Code and even rolled in some basic code editor features. Outside of that, the standard default Elementary apps remain intact:
Mail — for your email needs.
Music — to play your tunes.
Files — serves as your file manager.
Videos — plays all of your videos.
Calendar — schedule your day.
Photos — manage your photos.
Epiphany
At one point, I would have said having Epiphany as the default browser was a big miss. However, Epiphany has come a long way. Case in point: The version of Epiphany shipping with Juno includes the ability to log into your Firefox Account, so it can now sync and share data (Figure 3).
Figure 3: Epiphany is becoming a much more robust web browser.
Another really nifty feature with newer releases of Epiphany is the ability to install a site as a Web Application. What this does is save a site as a launcher in the Elementary OS menu, such that you only need to click the launcher to open the site. When the site opens as an installed app, you will notice the browser window missing a few components (such as the bookmarks and configuration buttons, as well as the tab button/feature). It’s a handy way to gain quick access to specific sites you use frequently.
Figure 4: Installing a site as a web application in Epiphany.
To install a site as a web application, follow these steps:
Open Epiphany.
Navigate to the web site in question.
Click the Epiphany menu button (gear icon in the upper right corner).
Click Install Site as Web Application (Figure 4).
In the resulting popup, give the application a name and click Create.
A bit of clean up and a conclusion
Outside of the above features (and a few more minor details), the rest of the change comes by way of old code cleanup and closing out issues. Thanks to that codebase cleanup, you’ll find a bit of a performance and stability increase over previous releases.
All in all, Elementary OS continues to be my top-rated distribution for new Linux users. It’s incredibly clean, elegant, and user-friendly. Thankfully, the design and development team understand they have something special on their hands and, instead of bringing about new features and radical changes, are set on offering only slight changes and improvements to an already rock solid Linux distribution. So, if you’re looking for something magical and radical in the shift from .4 to 5.0, you might be disappointed. If, however, what you want is nothing more than an improved (and very familiar) experience with Elementary OS, Juno will not disappoint.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
Google’s recent announcement that it had ported its open source TensorFlow machine learning (ML) library for neural networking to the Raspberry Pi was the latest in a series of chess moves from Google and its chief AI rival Nvidia to win the hearts and keyboards of embedded Linux developers. The competition is part of a wider battle with Amazon, Microsoft, Intel, and others to bring cloud analytics to the edge in IoT networks to reduce latency, increase reliability, and improve security.
Thanks to a collaboration with the Raspberry Pi Foundation, the latest TensorFlow 1.9 release can now be installed on Raspberry Pi 2 or 3 SBCs from pre-built binaries using Python’s pip package system. Raspbian 9 users can install it with two simple commands.
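Per the announcement, the two commands run roughly as follows on Raspbian 9 (Stretch). The exact package and wheel names may vary by release, so treat this as a sketch rather than a definitive recipe:

```shell
# Install the ATLAS BLAS library TensorFlow links against,
# then pull the pre-built TensorFlow wheel via pip
sudo apt install libatlas-base-dev
pip3 install tensorflow
```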
Integrating TensorFlow models into an embedded project offers further challenges. Yet, as Google has shown with its AIY Projects kits for the Raspberry Pi, you can add a bit of ML smarts in Raspberry Pi based robots, vision systems, and other embedded gear without a huge learning curve.
The TensorFlow port should be particularly welcome in the Raspberry Pi educational community. As the RPi Foundation’s Eben Upton wrote in a congratulatory tweet about the “massive news,” the TensorFlow port will enable “cool machine-learning educational content.”
TensorFlow was essentially born to run on Linux, but on servers or desktops, not on a modest SBC like the Raspberry Pi. It now runs on all major server and desktop platforms and has been ported to Android and iOS. Yet, the Raspberry Pi was a particularly gnarly challenge, writes Google TensorFlow developer Pete Warden in the announcement. It wasn’t even possible until the Raspberry Pi 2 and 3 came along with faster quad-core processors.
A year ago, Warden and his team managed to cross-compile TensorFlow on the RPi 3, but it was a slow, complicated, crash-prone process. The new ability to install from pre-built binaries now makes it feasible for a much wider group of developers to join the party.
While Google’s AIY Projects was attempting to squeeze a cloud-based platform onto a simple hacker board, its team started with low-cost, cardboard-constructed kits with add-on boards for connecting with Google Cloud-related embedded technologies. These include the AIY Vision Kit for the Raspberry Pi Zero W and WH, which performs TensorFlow-based vision recognition. It incorporates a “VisionBonnet” board with an Intel Myriad 2 neural network accelerator chip. AIY Projects also launched an AIY Voice Kit with the same RPi Zero WH target that lets you build a voice-controlled speaker with Google Assistant support.
As noted in this Hackster.io post about the port from Alasdair Allan, the AIY Vision Kit has struggled to perform well when operating locally. The Voice Kit has done better due to its greater reliance on Google Cloud.
Google’s Edge TPU accelerator
According to a Warden tweet following the announcement, TensorFlow is not currently tapping the potential ML powers of Broadcom’s VideoCore graphical processing unit, as Nvidia does with its more powerful Pascal GPU. He goes on to suggest that there might be potential for developing a special GPU-related port for the single-core Raspberry Pi Zero boardlets, but for now there’s sufficient power on the Pi’s four CPU cores. Speaking of potential hooks to the GPU, he writes: “With quad-core CPUs and Neon on the latest Pi’s, there’s not as big an advantage, though it’s still interesting on Pi Zeroes.”
Another interpretation is that Google is skipping the GPU because it expects Raspberry Pi users and other embedded developers to tap its recently announced, Linux-friendly Edge TPU ML accelerator chip for TensorFlow. The Edge TPU will be offered this fall along with an NXP i.MX8M based Linux development kit and an Edge TPU Accelerator USB dongle that can fit into any Linux computer including the Pi.
The Edge TPU is a lightweight, embedded version of Google’s enterprise-focused Cloud Tensor Processing Unit (Cloud TPU) AI co-processor. In conjunction with a new Cloud IoT Edge stack, the chip is designed to run TensorFlow Lite ML models on Arm Linux- or Android Things-based IoT gateways connected to Google Cloud services.
Nvidia launches industrial TX2i and octa-core Xavier Jetson modules
Nvidia is farther along in its attempt to bring its Pascal/CUDA-related AI technologies to embedded Linux developers. Its Jetson TX1 and TX2 computer-on-modules have found widespread adoption in embedded Linux projects for ML applications. The Jetson TX2 recently appeared in devices including Axiomtek’s eBOX560-900-FL box computer, as well as an upcoming, FPGA-equipped AIR-T Mini-ITX board for AI-enabled SDR applications.
In recent months, Nvidia has begun shipping a Jetson TX2i spin on the TX2 aimed at industrial applications. The TX2i adds -40 to 85°C support, vibration resistance, and a wider humidity range. There’s also support for ECC RAM, a 10-year operating supply lifecycle, and a 3-year warranty.
Like the Jetson TX2, the TX2i provides dual high-end Denver 2 Arm cores, a quad-core, Cortex-A57 block, and a 256-core Pascal GPU with CUDA libraries for running AI and ML algorithms. Like the TX2, the module also supplies 8GB of LPDDR4 RAM, 32GB of eMMC 5.1, and 802.11ac WiFi and Bluetooth.
Existing Jetson carrier boards work with the TX2i. Aetina just announced an ACE-N310 carrier for all the Jetson modules that matches the TX2i’s industrial temperature support and supports six simultaneous HD cameras.
The Jetson TX2 was recently joined by a more powerful new Jetson Xavier module. The Xavier core, which has already been used in Nvidia’s Drive PX Pegasus autonomous car computer board, features 8x ARMv8.2 cores and a high-end, 512-core Nvidia Volta GPU with tensor cores. It also provides 2x NVDLA deep learning engines and a 7-way VLIW vision chip. The Xavier ships with 16GB 256-bit LPDDR4 and 32GB eMMC 5.1.
Google and Nvidia are not alone in their campaigns to bring cloud AI analytics to the edge. For example, Intel’s Movidius 2 neural network accelerator chip is finding widespread adoption. Presumably, however, any future AIY Projects kits will replace the Movidius 2 with the Edge TPU. Although Amazon has yet to reveal a neural accelerator of its own, it is perhaps still the leader in the larger race for IoT edge analytics due to the popularity of its AWS IoT stack and its AWS Greengrass software for local processing of cloud analytics software on Linux devices. Meanwhile, Microsoft is also targeting the IoT space with its Arm Linux-based Azure Sphere distribution and IoT framework. Azure Sphere will initially target lower-power applications running on Cortex-A7 chips. Future versions, however, may be more robust and may include a homegrown AI component.
With all the excitement over neural networks and deep-learning techniques, it’s easy to imagine that the world of computer science consists of little else. Neural networks, after all, have begun to outperform humans in tasks such as object and face recognition and in games such as chess, Go, and various arcade video games.
These networks are based on the way the human brain works. Nothing could have more potential than that, right?
Not quite. An entirely different type of computing has the potential to be significantly more powerful than neural networks and deep learning. This technique is based on the process that created the human brain—evolution. In other words, a sequence of iterative change and selection that produced the most complex and capable machines known to humankind—the eye, the wing, the brain, and so on. The power of evolution is a wonder to behold. …
Evolutionary computing works in an entirely different way than neural networks. The goal is to create computer code that solves a specific problem using an approach that is somewhat counterintuitive.
The conventional way to create code is to write it from first principles with a specific goal in mind.
Evolutionary computing uses a different approach. It starts with code generated entirely at random. And not just one version of it, but lots of versions, sometimes hundreds of thousands of randomly assembled pieces of code.
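The loop described above – generate random candidates, mutate them, keep the fittest, repeat – can be sketched in a few lines of Python. This is an illustrative toy (the target string, alphabet, population size, and generation count are all arbitrary choices for the example), not a production evolutionary-computing framework:

```python
import random

TARGET = "evolve"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Count the positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Randomly replace one character with a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(generations: int = 5000, offspring: int = 10) -> str:
    random.seed(42)  # fixed seed so the example is reproducible
    # Start from code (here, a string) generated entirely at random.
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(generations):
        # Iterative change: produce a batch of mutated copies...
        children = [mutate(parent) for _ in range(offspring)]
        best = max(children, key=fitness)
        # ...and selection: keep a child only if it is at least as fit.
        if fitness(best) >= fitness(parent):
            parent = best
        if parent == TARGET:
            break
    return parent

if __name__ == "__main__":
    print(evolve())
```

Real evolutionary computing evolves programs rather than strings, but the mechanics – random initialization, mutation, a fitness function, and selection pressure over many generations – are the same.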
The founding members include a number of high-powered media and tech companies, including Animal Logic, Blue Sky Studios, Cisco, DreamWorks, Epic Games, Google, Intel, SideFX, Walt Disney Studios and Weta Digital.
According to a survey by the Academy, 84 percent of the industry uses open source software already, mostly for animation and visual effects. The group also found that what’s holding back open source development in the media industry is the siloed nature of the development teams across the different companies in this ecosystem.
Another day, another portmanteau. DevSecOps — an expensive target on AdWords — tries to fit security into the DevOps process. It’s kind of silly because of course companies should be factoring security into their development, particularly when much of DevOps is about enterprises releasing applications faster.
Amazon Web Services’ Senior Solutions Architect Margo Cronin kicked off her talk at the European DevOps Enterprise Summit by saying how personally she doesn’t like the term DevSecOps.
The term DevSecOps “has always struck me like the last kid getting on the bus and there’s no seat available. We are treating security as an afterthought. Security has never been an afterthought with any customer I dealt with — in financial services or now at Amazon Web Services. I feel like the name doesn’t reflect the importance,” Cronin said.
In fact, with the new European regulations of GDPR, she says privacy by design and privacy by default are built right in.
“It nearly mandates you should be doing DevSecOps,” she said.
The denial of service bug had actually been patched in the Linux kernel weeks before news of it was ever announced.
Another day, another bit of security hysteria. This time around, the usually reliable Carnegie Mellon University CERT/CC claimed the Linux kernel’s TCP network stack could be “forced to make very expensive calls to tcp_collapse_ofo_queue() and tcp_prune_ofo_queue() for every incoming packet which can lead to a denial of service (DoS).”
True, this bug, already given the trendy name SegmentSmack, could cause DoS attacks. But it’s already been fixed.
Check Out the Initial Lineup of Blockchain Leaders Speaking at Hyperledger Global Forum.
Attend Hyperledger Global Forum to see real uses of distributed ledger technologies for business and to learn how these innovative technologies run live in production networks across the globe today. Hyperledger Global Forum will cut through the hype and focus on adoption. Attendees will see first-hand how the largest organizations in the world go beyond experimentation to lead blockchain production applications with measurable impact. Make your plans now to attend the premier blockchain event of 2018.
Keynote Speakers Include:
Alexis Gauba, Co-Founder, Mechanism Labs and She(256); R&D, Blockchain at Berkeley; R&D, Thunder Token
Leanne Kemp, Founder & CEO, Everledger
Bruce Schneier, Fellow and Lecturer at the Harvard Kennedy School
When it comes to running and managing open source in the enterprise, experience-driven advice counts for a lot. It is very likely that your organization already runs open source, but many organizations make the mistake of reacting to the open source ecosystem instead of adopting a proactive strategy that is optimized for success. That’s where the free Enterprise Open Source ebook comes in.
This new 45-page ebook from The Linux Foundation provides a practical approach to establishing an open source strategy by outlining the actions your enterprise can take to accelerate its open source efforts. The information is based on more than two decades of professional, enterprise open source usage and development and will be most beneficial to software engineering executives, development managers, compliance experts, and senior engineers involved in enterprise open source activities.
“The availability of enterprise grade open source software is changing the way organizations develop and deliver products,” the book notes. “The combination of a transparent development community and access to public source code enables organizations to think differently about how they procure, implement, test, deploy, and maintain software. This has the potential to offer a wealth of benefits, including reduced development costs, faster product development, higher code quality standards, and more.”
The Linux Foundation Deep Learning Foundation (LF DLF) has announced five new members: Ciena, DiDi, Intel, Orange and Red Hat.
As an umbrella organization of The Linux Foundation itself, the LF DLF supports and sustains open source innovation in Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL).
Deep Learning is defined as an aspect of AI that is concerned with emulating the learning approach that human beings use to gain certain types of knowledge. It can be thought of as a way to automate predictive analytics and is also sometimes known as deep structured learning or hierarchical learning.
It can be supervised, semi-supervised, or unsupervised, and can be used to build architectures such as deep neural networks, deep belief networks, and recurrent neural networks, which have been applied in fields including computer vision and speech recognition. Deep Learning concerns ‘learning data representations’ as opposed to ‘task-specific algorithms’.
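As a concrete (toy) illustration of the supervised case, the following sketch trains a one-hidden-layer network by plain gradient descent to fit the XOR function, whose hidden layer learns exactly the kind of intermediate data representation described above. The layer width, learning rate, and epoch count are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialized.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # learned hidden representation
    return h, sigmoid(h @ W2 + b2)    # prediction

def loss(p):
    return float(np.mean((p - y) ** 2))

initial = loss(forward(X)[1])
lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    # Backpropagate the mean-squared-error gradient through both layers.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

final = loss(forward(X)[1])
```

After training, the loss on the four XOR examples drops below its initial value, showing the network has learned a representation of the inputs that a task-specific linear rule could not capture.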