
ONAP Rolls Out Amsterdam Release

Less than nine months after AT&T and the Linux Foundation merged their open source projects to become the Open Network Automation Platform (ONAP), the group today rolled out its first code release, Amsterdam.

The highly anticipated release, which integrates AT&T’s ECOMP and the Linux Foundation’s Open-O code bases into a common open source orchestration platform, aims to automate the virtualization of network services.

“Some of the components originated from OpenECOMP, some came from Open-O; we removed code that was inefficient, and we’ve added new code,” said Mazin Gilbert, ONAP technical steering committee chair and VP of advanced technology at AT&T Labs. This includes new lifecycle management and multi-cloud interface features, as well as new closed-loop automation management code called CLAMP.

Read more at SDxCentral

The Advantages of Open Source Tools

What is open source? How does open source benefit users? And how do we support open source initiatives? In this article, Kayla Matthews introduces the basics of open source as well as the importance and value of open source tools.

Open source software, applications, and projects are more commonplace than they have ever been. That’s because major organizations and brands have now embraced the development philosophy.

Some of the more renowned examples of open source projects include WordPress, Android, FileZilla, Audacity, GIMP, VLC Media Player, Notepad++, Blender, and, of course, Ubuntu/Linux.

But just what is open source, and how does it differ from closed source projects? What is the inherent value of open source tools and software, and what benefits do they offer? What should you do if you’re an avid supporter of open source?

Read more at Jaxenter

5 Tricks for Using the sudo Command

The sudoers file can provide detailed control over user privileges, but with very little effort, you can still get a lot of benefit from sudo. In this post, we’re going to look at some simple ways to get a lot of value out of the sudo command in Linux.

Trick 1: Nearly effortless sudo usage

The default sudoers file on most Linux distributions makes it very simple to give select users the ability to run commands as root. In fact, you don’t even have to edit the /etc/sudoers file to get started. Instead, you just add the users to the sudo or admin group on the system and you’re done.

Adding users to the sudo or admin group in the /etc/group file gives them permission to run commands using sudo.
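For example, on a Debian-based system, you might add a hypothetical user named jdoe to the sudo group like this (on Red Hat derivatives, the equivalent group is typically wheel):

# usermod -aG sudo jdoe

The user will need to log out and back in before the new group membership takes effect.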

Read more at Network World

Top 10 Moments in 2017 Linux Foundation Events

See the Top 10 moments of 2017 Linux Foundation events, including a conversation with Linus Torvalds, a video created by actor Joseph Gordon-Levitt through his collaborative production company, the Diversity Empowerment Summit, and Automotive Grade Linux in the new Toyota Camry.

And, you can look forward to more exciting events in 2018. Check out the newly released 2018 Events calendar and make plans now to attend or to speak at an upcoming conference.

Read more at The Linux Foundation

Finding Files with mlocate: Part 3

In the previous articles in this short series, we introduced the mlocate (or just locate) command, and then discussed some ways the updatedb tool can be used to help you find that one particular file in a thousand.

You are probably aware of xargs as well as the find command. Our trusty friend locate can also play nicely with the --null option of xargs: using the -0 switch, it outputs all of its results on one line, separated by null characters rather than spaces (which isn’t great if you want to read the output yourself), like this:

# locate -0 .bash
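As a quick sketch of why that’s useful, you could feed those null-delimited results straight into xargs to run a command safely against each match, even if a filename happens to contain spaces:

# locate -0 .bash | xargs -0 ls -l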

An option I like to use (when I remember it, because the locate command rarely needs to be queried twice thanks to its simple syntax) is the -e option.

# locate -e .bash

For the curious, that -e switch means “existing.” In this case, you can use -e to ensure that any files returned by the locate command actually exist on your filesystems at the time of the query.
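If you want to see -e in action, a minimal sketch (using a hypothetical throwaway file in root’s home directory, since /tmp is usually pruned from the database) goes like this:

# touch /root/checkme.txt
# updatedb
# rm /root/checkme.txt
# locate checkme.txt
# locate -e checkme.txt

The first locate still reports /root/checkme.txt from the database, whereas the -e version returns nothing because the file no longer exists.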

It’s almost magical that, even on a slow machine, the modern locate command can query its file database and then check the actual existence of many files in seemingly no time whatsoever. Let’s try a quick test with a file search that’s going to return a zillion results, and use the time command to see how long it takes both with and without the -e option enabled.

I’ll choose files with the compressed .gz extension. Starting with a count, you can see there’s not quite a zillion but a fair number of files ending in .gz on my machine; note the -c for “count”:

# locate -c .gz

7539

This time, we’ll output the list but time it and see the abbreviated results as follows:

# time locate .gz

real    0m0.091s
user    0m0.025s
sys     0m0.012s

That’s pretty swift, but it’s only reading from the overnight-run database. Let’s get it to do a check against those 7,539 files, too, to see if they truly exist and haven’t been deleted or renamed since last night:

# time locate -e .gz

real    0m0.096s
user    0m0.028s
sys     0m0.055s

As you can see, the speed difference is negligible. There’s no point in talking about lightning or blink-and-you-miss-it, because those aren’t suitable yardsticks. Relative to the other indexing service I mentioned previously, let’s just say it’s pretty darned fast.

If you need to move the efficient database file used by the locate command (in my version, it lives at /var/lib/mlocate/mlocate.db), then that’s also easy to do. You may wish to do this, for example, because you’ve generated a massive database file (mine is only 1.1MB, so really quite tiny) that needs to be put onto a faster filesystem.

Incidentally, the mlocate utility appears to have created an slocate group of users on my machine, so don’t be too alarmed if you see something similar, as shown here in a standard file listing:

-rw-r-----. 1 root slocate 1.1M Jan 11 11:11 /var/lib/mlocate/mlocate.db

Back to the matter at hand. If you want to move the database away from its default directory, /var/lib/mlocate, then you can use this command syntax (you’ll have to become the “root” user with sudo -i or su - for at least the first command to work correctly):

# updatedb -o /home/chrisbinnie/my_new.db

# locate -d /home/chrisbinnie/my_new.db SEARCH_TERM

Obviously, replace your database name and path. The SEARCH_TERM element is the fragment of the filename that you’re looking for (wildcards and all).
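For instance, a hypothetical wildcard query against that relocated database might look like this (quoted so that the shell doesn’t expand the pattern first):

# locate -d /home/chrisbinnie/my_new.db "*.conf"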

If you remember, I mentioned earlier that you need to run the updatedb command as the superuser in order to reach all areas of your filesystems.

This next example should cover two useful scenarios in one. According to the manual, you can also create a “private” database for standard users as follows:

# updatedb -l 0 -o DATABASE -U source_directory

Here, the previously seen -o option means that we output our database to a file (obviously called DATABASE in this example). The -l 0 addition controls the “visibility” of the database file. It means (if I’m reading the docs correctly) that my user can read the resulting database directly but, without that option, only the locate command can.

The second useful scenario for this example is that we can create a little database file while specifying exactly which path its top level should be. Have a look at the --database-root option (or its short form, -U source_directory) in our example. If you don’t specify a new root file path, then the whole filesystem(s) is scanned instead.
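Putting those two together, a sketch of building and then querying a private database as an unprivileged user (with a hypothetical projects directory) might look like this:

$ updatedb -l 0 -o /home/chrisbinnie/projects.db -U /home/chrisbinnie/projects
$ locate -d /home/chrisbinnie/projects.db report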

If you want to get clever and chuck a couple of top-level source directories into one command, then you can manage that once you’ve created two separate databases. Very useful for scripting, methinks.

You can achieve that with this command:

# locate -d /home/chrisbinnie/database_one -d /home/chrisbinnie/database_two SEARCH_TERM

The manual dutifully warns, however, that all users who can read the DATABASE file can also get the complete list of files in the subdirectories of the chosen source_directory. So use these commands with some care.

Priced To Sell

Back to the mind-blowing simplicity of the locate command in day-to-day use. Newbies are often confused by case sensitivity on Unix-type systems. Simply use the conventional -i option to ignore case entirely when using the flexible locate command:

# locate -i ChrisBinnie.pdf

If you have a file structure that has a number of symlinks holding it together, then there might be occasion when you want to remove broken symlinks from the search results. You can do that with this command:

# locate -Le chrisbinnie_111111.xml

If you need to limit the search results, then you can use this functionality, in a script for example (similar to the -c option for counting), like so:

# locate -l25 "*.gz"

This command simply stops after outputting the first 25 files it finds. Piped through the grep command, it’s very useful on a super busy system.
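As a quick sketch of that, you might filter the capped results for a hypothetical string like this:

# locate -l25 "*.gz" | grep backup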

Popular Area

We briefly touched upon performance earlier, and I happened to see this nicely written blog entry, where the author discusses thoughts on the trade-offs between the database size becoming unwieldy and the speed at which results are delivered.

What piqued my interest were the comments on how the original locate command was written and what limiting factors were considered during its creation: namely, how disk space isn’t quite so precious any longer, and nor is the speedy delivery of results, even when 700,000 files are involved.

I’m certain that the author(s) of mlocate and its forebears would have something to say in response to that blog post. I suspect that holding onto the file permissions to give us the “secure” and “slocate” functionality in the database might be a fairly big hit in terms of overhead. And, as much as I enjoyed the post, I won’t be writing a Bash script to replace mlocate any time soon. I’m more than happy with the locate command and extol its qualities at every opportunity.

Sold

I hope you’ve acquired enough insight into the superb locate command to prune, tweak, adjust, and tune it to your unique set of requirements. As we’ve seen, it’s fast, convenient, powerful, and efficient. Additionally, you can ignore the “root” user demands and use it within scripts for very specific tasks.

My favorite aspect, however, is when I’m awakened in the middle of the night because of an emergency. It’s not a good look, having to remember the complex find command and typing it slowly with bleary eyes (and managing to add lots of typos):

# find . -type f -name "*.gz"

Instead of that, I can just use the simple locate command:

# locate "*.gz"

As has been said, any fool can create something bigger, bolder, and tougher, but it takes a bit of genius to create something simpler. And, in terms of introducing more people to the venerable Unix-type command line, there’s little argument that the locate command welcomes them with open arms.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).

Why the Open Source Community Needs a Diverse Supply Chain

At this year’s Opensource.com Community Moderators’ meeting in Raleigh, North Carolina, Red Hat CEO Jim Whitehurst made a comment that stuck with me.

“Open source’s supply chain is source code,” he said, “and the people making up that supply chain aren’t very diverse.”

Diversity and inclusivity in the technology industry—and in open source communities more specifically—have received a lot of coverage, both on Opensource.com and elsewhere. One approach to the issue foregrounds arguments about concepts that are more abstract—like human decency, for example.

But the “supply chain” metaphor works, too. And it can be an effective argument for championing greater inclusivity in our open organizations, especially when people dismiss arguments based on appeals to abstract concepts. Open organizations require inclusivity, which is a necessary input to get the diversity that reduces the risk in our supply chain.

Read more at OpenSource.com

Exploring the Linguistics Behind Regular Expressions

Little did I know that learning about Chomsky would drag me down a rabbit hole back to regular expressions, and then magically cast regular expressions into something that fascinated me. What enchanted me about regular expressions was the homonymous linguistic concept that powered them.

I hope to spellbind you, too, with the linguistics behind regular expressions, a backstory unknown to most programmers. Though I won’t teach you how to use regular expressions in any particular programming language, I hope that my linguistic introduction will inspire you to dive deeper into how regular expressions work in your programming language of choice.

To begin, let’s return to Chomsky: what does he have to do with regular expressions? Hell, what does he even have to do with computer science?

Read more at Dev.to

Introducing BuildKit

BuildKit is a new project under the Moby umbrella for building and packaging software using containers. It’s a new codebase meant to replace the internals of the current build features in the Moby Engine.

BuildKit emerged from discussions about improving the build features in Moby Engine. We received a lot of positive feedback for the multi-stage build feature introduced in April, and we had proposals and user requests for many similar additions. But before adding them, we needed to make sure that we had the capabilities to continue adding such features in the future and a solid foundation to extend. It quickly became clear that we would need to redefine most of the fundamentals of how we even define a build operation, and that we needed a clean break from the current codebase.

Read more at Moby Project

Introducing Fn: “Serverless Must Be Open, Community-Driven, and Cloud-Neutral”

Fn, a new open source serverless project, was announced at this year’s JavaOne. There’s no risk of cloud lock-in, and you can write functions in your favorite programming language. “You can make anything, including existing libraries, into a function by packaging it in a Docker container.” We invited Bob Quillin, VP for the Oracle Container Group, to talk about Fn, its best features, next milestones, and more.

JAXenter: Oracle’s Mike Lehmann told us recently that “Oracle sees serverless as a natural next step from where the industry has gone from app server-centric models to containers and microservices and more recently with serverless.” At JavaOne 2017, Mark Cavage discussed Java’s pervasiveness in the cloud and the need to support container-centric microservices and serverless architectures. Why the sudden interest in serverless?

Bob Quillin: Developer efficiency, economics, and ease of use will drive serverless forward. We believe serverless technology will drive a new, more efficient economic model for both development teams and cloud providers, while making a developer’s life that much easier.

Read more at Jaxenter

AT&T Wants White Box Routers with an Open Operating System

AT&T says it’s not enough to deploy white box hardware and to orchestrate its networks with the Open Network Automation Platform (ONAP) software. “Each individual machine also needs its own operating system,” writes Chris Rice, senior vice president of AT&T Labs, Domain 2.0 Architecture, in a blog post. To that end, AT&T announced its newest effort — the Open Architecture for a Disaggregated Network Operating System (dNOS).

“If we want to take full advantage of the benefits of white box routers and other hardware, we need an equally open and flexible operating system for those machines,” writes Rice.

dNOS appears to be in the visionary phase. “Our goal is to start an industry discussion on technical feasibility … and determine suitable vehicles (standards bodies, open source efforts, consortia, etc.) for common specification and architectural realization,” according to an AT&T white paper introducing dNOS.

Read more at SDxCentral