
Announcing dex, an Open Source OpenID Connect Identity Provider from CoreOS

Today we are pleased to announce a new CoreOS open source project called dex: a standards-based identity provider and authentication solution.

Just about every project requires some sort of authentication and user management. Applications need a way for users to log in securely from a variety of platforms such as web, mobile, CLI tools and automated systems. Developers typically use a platform-dependent solution or, just as often, find existing solutions don’t quite address their needs and so they resort to writing their own solution…

Read more at CoreOS

DIRECTORY PERMISSIONS BY EXAMPLE


Can an unprivileged user remove a file that is owned by root:root and to which the user has absolutely no permissions whatsoever?
 
This article will answer that question in the course of exploring directory permissions.

If you are surprised that the answer is “yes,” read on to find out why.

If you already knew that, read on to see if you learn something else.

If you know everything there is to know about directory permissions, read on and correct my mistakes.

 

PRELIMINARIES

An effort has been made to use the terms “regular_file” and “directory_file” because they point to both the similarity and the distinction.  Everyone knows that “everything in Linux is a file.”  Sometimes it is helpful to reinforce that concept.

In the examples, PS1 will display the user name, and the working directory.

The examples will use weak permission settings, with the intent of limiting the scope of the investigation.  If the user is a member of the public, and governed by permissions granted to “other”, we can narrowly focus on a single set of permissions limited to a set of eight possibilities.

Also, in order to keep that narrow focus, directories will generally be owned “root:root”, and the user executing the examples is a non-privileged user.  

 

REMOVING root’s SUPER SECRET, PROTECTED FILE

Non-privileged user “dan” is at the keyboard:

dan_/tmp> id
uid=1000(dan) gid=100(users) groups=100(users)

Consider  /tmp/Test_rm, a directory_file:

dan_/tmp> ls -ld Test_rm
d-------wx 2 root root 4096 Aug 11 08:06 Test_rm

The “x” bit for the public lets “dan” change to this directory_file:

dan_/tmp> cd Test_rm
dan_/tmp/Test_rm>

The “r” bit is not set for the public, so even though “dan” can “cd” to this directory_file, he cannot read it:

dan_/tmp/Test_rm> ls -l
ls: cannot open directory_file .: Permission denied

But since “x” gives “others” access to the directory, “dan” can list a file in the directory, but only if he has pre-knowledge of the file’s name:

dan_/tmp/Test_rm> ls -l do_not_remove_me
-r-------- 1 root root 0 Aug 11 08:06 do_not_remove_me

As a member of the public, “dan” has no permissions on this file.  But he knows the pathname components, and has access to those components by virtue of the “x” bit on directory components of the path.  

Knowing the regular_file name, he tries to remove it, naming the file explicitly:

dan_/tmp/Test_rm> rm  -i  do_not_remove_me
rm: remove write-protected regular empty file ‘do_not_remove_me’? y

dan_/tmp/Test_rm> ls -l do_not_remove_me
ls: cannot access do_not_remove_me: No such file or directory

The listing above indicates that the preceding “rm” was successful, but let’s run the list via “sudo” (since “dan” does not have permission to list it), just to be sure:

dan_/tmp/Test_rm> sudo ls -l
total 4
-rw-r--r-- 1 root root 23 Aug 11 08:43 do_not_edit_this

“rm” worked.  “do_not_remove_me” is gone.  

With no permissions on the regular_file, why was “dan” allowed to remove it?  Because removing a file does not write to the file.  It writes to the file’s directory.  

“w” on the directory_file allows write.  But writing, such as creating or removing a file, also requires directory access.  The ability to “search,” or traverse the directory, was granted via “x”.

Having “r”ead on the directory_file would have made it simpler, because then “dan” could have listed the directory.  

But with advance knowledge of the path components, a user does not need read permission on the directory_file to create a file, a symlink, or to unlink (remove) a file.  You don’t need any permission whatsoever on the file to be removed–you need write and execute on its directory.
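The scenario above can be sketched in a self-contained script.  The paths are hypothetical, and since we cannot chown files to root without privileges, this runs as the directory’s owner; the same mode bits govern “other” in the article’s root-owned examples (and note that root itself bypasses all of these checks):

```shell
#!/bin/sh
# Remove a mode-000 file by holding only "wx" on its directory.
set -e
demo=/tmp/permdemo_rm                # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
touch "$demo/do_not_remove_me"
chmod 000 "$demo/do_not_remove_me"   # no permissions at all on the file
chmod 300 "$demo"                    # -wx for the owner: write + search only
rm -f "$demo/do_not_remove_me"       # succeeds: rm writes the DIRECTORY
chmod 700 "$demo"                    # restore access so we can inspect
ls -l "$demo"                        # the file is gone
```

The key line is the final chmod-free `rm`: it never needed any permission on the file itself, only “wx” on the directory that lists it.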

Here’s what happens if you need wild-card help with the regular_file name, but you don’t have read access to the file’s directory_file:

dan_/tmp/Test_rm>rm -i do_not_edit_th*
rm: cannot remove ‘do_not_edit_th*’: No such file or directory
dan_/tmp/Test_rm>rm -i *
rm: cannot remove ‘*’: No such file or directory

This scenario will trip up a lot of users.
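The wildcard failure is easy to reproduce.  Glob expansion is performed by the shell, which must read the directory’s name list to do it.  A minimal sketch, using a hypothetical path and run as the directory’s owner (root bypasses these checks):

```shell
#!/bin/sh
# Without "r" on a directory, the shell cannot expand wildcards inside it.
demo=/tmp/permdemo_glob              # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
touch "$demo/secret_file"
chmod 100 "$demo"                    # --x only: searchable, but not readable
echo "$demo"/* > /tmp/permdemo_glob_result   # unmatched glob stays literal
cat "$demo/secret_file" && echo "known name is still reachable"
chmod 700 "$demo"                    # restore access for cleanup
cat /tmp/permdemo_glob_result
```

With “x” but no “r”, the literal pattern `/tmp/permdemo_glob/*` is echoed back unexpanded, while the explicitly named file remains reachable.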

Listing another directory_file:

dan_/tmp> ls -ld Test_rm_again
d------r-x 2 root root 4096 Aug 11 11:13 Test_rm_again

Note that the “r” on directory_file “Test_rm_again” will let user “dan” list the directory contents.

Changing to “Test_rm_again”, then listing its contents:

dan_/tmp> cd Test_rm_again/
dan_/tmp/Test_rm_again> ls -l  
total 0
-rw------- 1 terry terry 0 Aug 11 11:13 do_not_remove_this_either

We see that the file named “do_not_remove_this_either”, in directory_file “Test_rm_again”, is owned by unprivileged user “terry” who has denied all permissions to everyone but the owner.

User “terry”, the owner, might conclude that the restricted file permissions protect it from removal:

dan_/tmp/Test_rm_again> rm  -i do_not_remove_this_either
rm: remove write-protected regular empty file ‘do_not_remove_this_either’? y
rm: cannot remove ‘do_not_remove_this_either’: Permission denied

But terry’s conclusion would be mistaken.  “Permission denied” refers to the directory_file, not to the regular_file.  It is NOT the permissions on the regular_file that protect it from removal.  It is directory_file permissions that protect the files within from removal (but not from editing).  
User “dan” cannot remove the file because the absence of “w” on the directory_file prevents “dan” from writing the directory /tmp/Test_rm_again.

Permissions on regular_files are fairly straightforward, but as the above illustrations suggest, a misunderstanding of directory_file permissions muddies the understanding of regular_file permissions, and vice versa.

WHAT IS A DIRECTORY?

A directory is a type of file in Linux that contains a list of other names and their associated inodes.

The list of names refers to other files, which might include:  
   directory_files, regular_files, symlinks, sockets, named pipes, devices.  

Included in the list are the inodes associated with each name.  Information in the member file’s inode includes filetype, permissions, owner, group, size, timestamps.

(Aside:  “ext” filesystems optionally include file type in the directory,
per http://man7.org/linux/man-pages/man5/ext4.5.html
  “filetype  
        This feature enables the storage file type
        information in directory entries.  This feature is
        supported by ext2, ext3, and ext4.”)
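The name-to-inode mapping is directly observable with “ls -i”, which prints the inode number stored for each directory entry, while “stat” reads the metadata that lives in the inode itself.  A small sketch, using a hypothetical path (the `stat -c` format option assumes GNU coreutils):

```shell
#!/bin/sh
# A directory is a list of (name, inode) pairs; metadata lives in the inode.
demo=/tmp/permdemo_inode             # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
touch "$demo/alpha" "$demo/beta"
ls -1i "$demo"                       # name -> inode mapping, read from the directory
stat -c '%n inode=%i mode=%a' "$demo/alpha"   # metadata, read from the inode
ln "$demo/alpha" "$demo/alpha_link"  # hard link: a second name for the same inode
ls -1i "$demo"                       # alpha and alpha_link share an inode number
```

The hard link drives the point home: two directory entries, one inode, one set of permissions and timestamps.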

WHAT ARE DIRECTORY PERMISSION MODES 1, 2 AND 4?

1  x – search   quoting the Linux Programmer’s Manual:  “‘search’ applies for
                directories, and means that entries within the directory
                can be accessed”

                The names in the directory_file are accessible, eg, via
                “cd”, or a pathname (though the file named in the directory
                carries its own permissions, in its own inode).  A name in the
                directory, if known by the user, is accessible even if the
                user is denied permission to read the list from the directory
                (ie, no read perm on the directory_file).

2  w – write    names in the directory_file list can be removed (rm), created
                or changed (mv).  The directory must also be searchable to
                be written.

4  r – read     the directory_file can be read, that is, its name list can be
                displayed via “ls”  (though a file named in the list is not
                necessarily accessible, as it carries its own permissions
                in its inode).

READ PERMISSION ON A DIRECTORY

Understanding what is read from and written to directory_file data, as opposed to what is read from and written to regular_file data, helps with understanding directory permissions.

Directory file data is a list of names mapped to their corresponding inodes.  A directory does not contain file data or metadata for the names that the directory itself contains.  The entry for the directory’s own name is in that directory’s parent directory.  Permission to access a directory AND to write it allows adding or removing entries (files).

Likewise, understanding the distinction between regular_file data and regular file metadata (from the inode), helps in understanding directory permissions.

The inode stores metadata about the file such as permissions, type, timestamps, size, link count.  So permission to write to a file is not the same as the permission to remove that file from its directory.

Some tutorials suggest that both read and execute permission are required to read (list, “ls”) a directory_file.

But “r”ead, and only read, is required to list (“ls”) names in a directory_file, based on the following illustration:

dan_/tmp> ls -ld Read_only
d------r-- 2 root root 4096 Aug 11 09:05 Read_only

Above, we see that the directory_file “/tmp/Read_only” is “read-only,” and readable only for the public.  All permissions are turned off for user (owner) and group, and both write and execute are turned off for other.

(Aside:  turning off permissions does not affect the “root” user.)

Is it accessible?

dan_/tmp> cd Read_only
bash: cd: Read_only: Permission denied

We can’t access it via “cd” because we don’t have search (x) permission.

But can we read it?  Yes.  Sort of:

dan_/tmp> ls -l Read_only/

ls: cannot access Read_only/test.1: Permission denied
ls: cannot access Read_only/test.3: Permission denied
ls: cannot access Read_only/test.2: Permission denied
total 0
-????????? ? ? ? ?            ? test.1
-????????? ? ? ? ?            ? test.2
-????????? ? ? ? ?            ? test.3

“ls” was able to read the file names, “test.1”, “test.2”, and “test.3”, from the  “Read_only” directory_file.  But because the “x” bit is turned off on the directory_file, we can’t go any further.  That is, we can’t traverse the directory to access metadata stored in the inodes of its regular_files.

What we can see in the above listing comes from the directory_file.  And what we cannot see in the above listing (where the question marks are used as placeholders), is information from the inode.  

From the directory_file, we see the block count:

total 0

and the leading dash indicating a regular file, in:
-?????????
 
and the file names:
test.1
test.2
test.3

What could not be retrieved from the inode is shown as question marks:

-?????????   these question marks are in place of the permission bits.
                      (the “-” indicates regular file and comes from the directory_file, in this case)

The next five question marks (following the permission bit placeholders) are in place of link count, owner, group, size, and datestamp.

So “ls” can retrieve names from a read-only directory_file.  But it cannot traverse the directory_file to read the inode information of its contents, absent the “x” bit on the directory_file.
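This behavior can be reproduced with a short script.  The path is hypothetical, and the demo runs as the directory’s owner rather than as a member of the public (root would bypass the checks entirely):

```shell
#!/bin/sh
# With "r" but without "x", the name list is readable but the inodes are not,
# which is why "ls -l" prints "?" in place of the metadata.
demo=/tmp/permdemo_ro                # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
touch "$demo/test.1" "$demo/test.2" "$demo/test.3"
chmod 400 "$demo"                    # r-- only: readable, not searchable
ls "$demo" > /tmp/permdemo_ro_names 2>/dev/null   # the names come through
ls -l "$demo/test.1" 2>/dev/null || echo "metadata denied without x"
chmod 700 "$demo"                    # restore
cat /tmp/permdemo_ro_names
```

A plain “ls” needs only the name list, so it succeeds; “ls -l” has to stat each entry’s inode through the directory, which “r” alone does not permit.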

Lastly, here is a recursive listing, run via sudo, of the “Read_only” directory_file’s contents, followed by a listing of the directory_file itself.  

dan:/tmp>  sudo ls -lR Read_only/
Read_only/:
total 0
-rwxrwxrwx 1 root root 0 Aug 11 09:05 test.1
-rwxrwxrwx 1 root root 0 Aug 11 09:05 test.2
-rwxrwxrwx 1 root root 0 Aug 11 09:05 test.3

dan:/tmp>  ls -ld Read_only/
d------r-- 2 root root 4096 Aug 11 09:05 Read_only/

For owner root, group root, and “other”, permissions are wide open on the three test files.  But the read-only setting on their directory_file is not sufficient for directory traversal.  Full access via these file permissions did not help user dan, because he did not have search “x” for their directory.  (Note that the absence of these same permissions did not prevent “root” from listing them.)

EXECUTE PERMISSION ON A DIRECTORY

e”x”ecute, and only execute, is required to traverse a directory_file.  That is, to “cd” to it, or to see below the directory_file by using it as a component in a pathname.

Consider the following directory_file tree, listed via sudo:

dan_/tmp> sudo  ls -ld  a  a/b  a/b/c  a/b/c/d  a/b/c/d/e
d--------x 3 root root 4096 Aug 10 15:50 a
d--------x 3 root root 4096 Aug 11 13:46 a/b
d------rwx 3 root root 4096 Aug 11 13:46 a/b/c
d------rw- 3 root root 4096 Aug 11 14:03 a/b/c/d
drwxrwxrwx 2 root root 4096 Aug 11 14:04 a/b/c/d/e

Note the absence of “x” for other on “a/b/c/d”, and its effect on the same listing for a non-privileged user (the error output is rearranged to make it more readable):

dan_/tmp> ls -ld  a  a/b  a/b/c  a/b/c/d  a/b/c/d/e
d--------x 3 root root 4096 Aug 10 15:50 a
d--------x 3 root root 4096 Aug 11 13:46 a/b
d------rwx 3 root root 4096 Aug 11 13:46 a/b/c
d------rw- 3 root root 4096 Aug 11 14:03 a/b/c/d
ls: cannot access a/b/c/d/e: Permission denied

The lack of execute on directory_file “a/b/c/d” prevents the listing of its sub-directory_file “e”.   That is, “ls” cannot traverse “a/b/c/d”.  “ls” can retrieve the directory_file name “a/b/c/d” from its parent directory_file, but “ls” cannot traverse “a/b/c/d” to show its sub-directory_file, “e”.  

Contrast that to “x”, and “x” alone, on directories “a” and “a/b”, and “rwx” on “a/b/c”.  “x” on these subdirectories allows those directory_files to be traversed.

But “r”ead access on subdirectory_file “d” will allow the listing of filenames, (in this next example, “d” is not an option to “ls”, but a directory_file argument for “ls” to act on):

dan_/tmp/a/b/c> ls -lR   d
d:
ls: cannot access d/e: Permission denied
ls: cannot access d/abcd.test: Permission denied
total 0
-????????? ? ? ? ?            ? abcd.test
d????????? ? ? ? ?            ? e
ls: cannot open directory d/e: Permission denied

Without the directory_file traversal granted by “x”, no inode data is accessible for the above listing.

This example shows directory_file traversal, and operation of the “x” bit, using bash’s “cd” builtin:

dan_/tmp> cd a && cd b && cd c && cd d && cd e
bash: cd: d: Permission denied
dan_/tmp/a/b/c>

Each successive “cd” is only attempted if the previous “cd” succeeded.  Lack of “x” on the “d” sub-directory_file causes “cd” to fail at that point.  The final “cd” cannot be attempted, and the current working directory becomes   /tmp/a/b/c/

“x”, and only “x”, being required for directory_file traversal is analogous to “x”, and only “x”, being required to execute a binary regular file.
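Traversal by known path can be sketched end to end.  The paths are hypothetical, and the demo runs as the owner of the tree (root bypasses these checks):

```shell
#!/bin/sh
# "x" alone on every component lets a user traverse a path he already knows
# and open the file at the end, even though "ls" is denied at every step.
demo=/tmp/permdemo_walk              # hypothetical path for this demo
rm -rf "$demo" && mkdir -p "$demo/a/b/c"
echo "hello" > "$demo/a/b/c/known_file"
chmod 100 "$demo/a" "$demo/a/b" "$demo/a/b/c"   # --x only on each component
cat "$demo/a/b/c/known_file" > /tmp/permdemo_walk_out   # works: path is known
ls "$demo/a" 2>/dev/null || echo "listing denied"       # fails: no "r"
chmod 700 "$demo/a" "$demo/a/b" "$demo/a/b/c"   # restore
cat /tmp/permdemo_walk_out
```

Each component is consulted only for the “x” bit while resolving the pathname; “r” is never needed unless you have to discover the names.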

WRITE PERMISSION ON A DIRECTORY

“w”rite permission on a directory_file is necessary, but not sufficient, to create a file in that directory.  The same is true for removing a file from that directory_file.  The same is true for creating or removing symlinks in that directory.

Creating or removing a file from a directory_file requires both “w”rite and e”x”ecute permission on the directory.

A user can remove any file, owned by any user/group (including root), with any permissions, or no permissions at all, if that user has “wx” permission on that file’s directory (unless the directory has the sticky bit set, as /tmp itself does, which limits removal to the file’s owner, the directory’s owner, or root).  This is because these operations–creating a file, removing a file, and symlinking to a file–do not write to the file.  These operations write to the file’s directory.

A user does not require “r”ead and/or “w”rite on a file’s directory_file to edit an existing file. But that user does require e”x”ecute on that file’s directory in order to traverse the directory, which is a precondition to editing that file.  

Without “x”, the user cannot traverse the directory to reach the file.  

With “x”, but without “r”, a user can still access the file if the user already knows its name.  

To write to an existing file, a user does not require “w”rite on the existing file’s directory, because writing to an existing file does not write to that file’s directory, it writes to the file.

Likewise, a user does not require “w”rite on an existing file’s directory to change the file’s permissions, because doing so does not write to the file’s directory.  Changing a file’s permissions writes to the file’s inode.
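The file-versus-directory split can be demonstrated in one script.  The path is hypothetical, and it runs as the owner (root bypasses these checks):

```shell
#!/bin/sh
# Appending to an existing file needs "w" on the FILE plus "x" on its
# directory; it does not need "w" on the directory.  Removal is the mirror
# image: it needs "w" (and "x") on the directory, and nothing on the file.
demo=/tmp/permdemo_edit              # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
echo "line1" > "$demo/journal"
chmod 100 "$demo"                    # --x only: no "w" on the directory
echo "line2" >> "$demo/journal"                       # succeeds: writes file data
rm -f "$demo/journal" 2>/dev/null || echo "rm denied" # fails: rm writes the dir
chmod 700 "$demo"                    # restore
cat "$demo/journal"
```

The append lands in the file while the unlink is refused: two “write” operations, two different objects being written.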

Consider again the following directory_file, listed after “r” permission was added for “other” on directory_file “a”:

dan_/tmp> ls -ld  ?  ?/*
d------r-x 3 root root 4096 Aug 13 13:39 a
---------- 1 root root    0 Aug 13 13:39 a/meeting

The above is like showing up un-invited to a secret meeting.  You can navigate your way by listing “/tmp”.  “r” on /tmp means you can discover “a” with the wild-card “?”.   E”x”ecute on “a” means that “a” is accessible.  And “r”ead on “a” means that “meeting” can be read with the wild card “*”.

But once you arrive, you are turned away at the door with no permission to open “meeting”.

Versus this directory tree:

dan_/tmp> ls -ld x x/y x/y/z x/y/z/meeting
d--------x 3 root root 4096 Aug 13 13:54 x
d--------x 3 root root 4096 Aug 13 13:54 x/y
d--------x 2 root root 4096 Aug 13 13:57 x/y/z
-rwxrwxrwx 1 root root    0 Aug 13 13:57 x/y/z/meeting

which is like pin-the-tail-on-the-donkey.  You can traverse a path that you have committed to memory, but you can’t get any help from wildcards along the way.

Absent knowledge of the existence of “x”, “x/y” and “x/y/z”, you could not use “ls” to show you the way (you would need read permission to see the step, and execute permission to take the step).  But you can get there if you already know the path, and on arrival, you have full access to “meeting”.  

 

EXPLAINING SEEMING ODDITIES

vim will appear to magically “write” a read-only file if you have “wx” on the file’s directory.  Of course, it does not write the existing file, but  appears to do so by removing the original, then writing a new file with the same name from its buffer.  “wx” on the directory allows removal of the original, followed by creation of a new file of the same name.

An aside, not related to directory permissions:  vim will not open a “write-only” (-w-) file.  But it will open an empty buffer, and any saves will overwrite the original file.  The only directory permission that is required is search “x”.

Another aside, not directly related to directory permissions:  You don’t need read permission to redirect to a file.  With write-only permission, you can over-write with > redirection, or append with >> redirection.  (Note that write-only is largely symbolic for the file’s owner, who can always chmod the permissions back–chmod is governed by ownership, not by the “w” bit.)
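The write-only behavior is quick to verify.  The path is hypothetical, and the read-denial applies to the owner running the demo (root is exempt from the read check):

```shell
#!/bin/sh
# A write-only (-w-) file accepts > and >> redirection but refuses reads.
# The owner can chmod it back at any time: chmod is governed by ownership,
# not by the "w" bit.
demo=/tmp/permdemo_wo                # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
echo "original" > "$demo/log"
chmod 200 "$demo/log"                # -w-: write-only for the owner
echo "appended" >> "$demo/log"                      # append succeeds
cat "$demo/log" 2>/dev/null || echo "read denied"   # read fails (unless root)
chmod 600 "$demo/log"                # ownership, not "w", lets the owner do this
cat "$demo/log"
```

Both redirections hit the file’s data without ever touching the directory, which is why no directory “w” was needed.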

 

SUMMARY:  DIRECTORY PERMISSIONS FROM 0 to 7

0   ---  the only thing a user can do is read the directory name (but the ability
           to do so depends on read permission on the parent, not the directory
           under consideration).

                ls -ld  Read-Execute  Read-Execute/NO_PERMS
                d------r-x 4 root root 4096 Sep  3 20:57 Read-Execute
                d--------- 2 root root 4096 Sep  3 20:55 Read-Execute/NO_PERMS

1   --x    user can search (traverse) the directory, ie “cd” to it.  This can be
             very useful to give an application access through a directory tree in
             which you don’t want users poking around from their shell sessions.

2   -w-   write-only on a directory has no practical application.  Without
             “x”, the “w” bit cannot even be exercised, since writing a
             directory (creating or removing entries) also requires search
             permission.  Note that in the example below, the owner can still
             chmod the directory–but that ability comes from ownership, not
             from the “w” bit.

                ls -ld _Write_
                d-------w- 2 dan users 4096 Sep  3 21:03 _Write_
                chmod 777 _Write_
                ls -ld _Write_
                drwxrwxrwx 2 dan users 4096 Sep  3 21:03 _Write_

3   -wx   user can create files (including subdirectories), rename files, and
               remove files, in the directory, if he already knows the names of the
               files.  This can be useful if the creation and deletion of files is
               under control of an application, but you need a way to protect users
               from themselves.

4   r--    user can list the names in the directory.  Not practical.  It can only
              encourage snoops who have no business with the data to try harder.

5   r-x   user can list the names in the directory and “cd” to the directory.
             user can edit existing files in the directory (subject to permission
             granted on the file itself), but cannot create, rename, or remove files
             within the directory.  Mode 5 is a very practical setting for directories.

6   rw-  This mode is not practical.  A user with read permission on the directory
             can list the directory’s files, but without “x” the “w” bit is inert:
             the directory cannot be traversed, so nothing in it can be created or
             removed.  In practice this behaves like read-only (4).

7   rwx  User can wreak havoc.   Makes sense for a user’s home directory.
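The whole summary table can be probed mechanically.  This sketch uses a hypothetical path and applies each mode to the owner’s triad, since we cannot chown to root without privileges; run it as an unprivileged user (root would pass every check, so the report would show all “y”):

```shell
#!/bin/sh
# Probe what each directory mode 0-7 permits: list (needs r), enter (needs x),
# and create a file (needs w AND x).
demo=/tmp/permdemo_modes             # hypothetical path for this demo
rm -rf "$demo" && mkdir "$demo"
for mode in 0 1 2 3 4 5 6 7; do
    d="$demo/m$mode"
    mkdir "$d" && touch "$d/seed"
    chmod "${mode}00" "$d"           # apply the mode to the owner's triad
    list=n; enter=n; create=n
    ls "$d"        >/dev/null 2>&1 && list=y
    ( cd "$d" )     2>/dev/null    && enter=y
    touch "$d/new"  2>/dev/null    && create=y
    chmod 700 "$d"                   # restore so cleanup works
    echo "mode $mode: list=$list enter=$enter create=$create"
done | tee /tmp/permdemo_modes_report
```

Run as a normal user, only modes 3 and 7 should report create=y, only modes 1, 3, 5 and 7 enter=y, and only modes 4 through 7 list=y, matching the discussion above.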

 

CONCLUSION

There are many good articles and tutorials discussing file permissions.

There aren’t so many discussing directory permissions, but here is an excellent one authored by Bri Hatch:

http://www.hackinglinuxexposed.com/articles/20030424.html

As one who has learned some hard lessons through lack of understanding, I strongly encourage everyone to set up and work through example scenarios, especially those folks most confident in their skills.   😉

Special Exclusive: Q&A with Joyent CEO Scott Hammond

 

I recently caught up with Joyent CEO Scott Hammond at LinuxCon in Seattle. Joyent has been a leader in supporting the growth and diversity of the Node.js community and was a founding member of the Node.js Foundation. I was interested to learn more about Scott and his work at Joyent, as well as more about the company’s contributions to Linux and open source. Below I include a Q&A with him on these topics. I’ll also be sharing a video interview with Scott a little later this fall.

Can you describe Joyent’s business?

Joyent is a cloud and infrastructure software company.  We are big believers in containers and, along with Google, pioneered running container-based infrastructure at scale.  Containers deliver bare metal performance, workload density, and web scale economics, far beyond what is possible with virtual machines.  

Joyent’s Triton Elastic Infrastructure is the best place to run containers, making container ops simple and scalable with enterprise-grade security, software-defined networking and bare-metal performance. Triton is available for on-premises deployments or through the Joyent Triton Elastic Container Service on the Joyent Public Cloud.

Why is open source so important to the company?

Open source is the only way infrastructure software is being developed today.  No meaningful proprietary infrastructure software has been built in the last 10 years.  Open source has some significant advantages over the proprietary model.  First, we get to engage the user community directly to collaborate on innovation, let them participate in the technical direction, and extend the software to address their unique requirements.  We saw a great example of that recently as someone in the community built an OpenStack Heat template that allows you to use OpenStack to deploy containers directly on Triton instead of a VM.

Open source is also the best model to engage with customers since many have adopted an “open-first” policy where they look first for an open source solution before they evaluate proprietary products.  Due to open access to source code, documentation, expertise, and support, organizations can evaluate, deploy, and utilize open source software without enduring a judo match with an overbearing proprietary sales rep.

Customers have witnessed community development delivering rapid innovation.  Open source also allows them to de-risk their projects, avoid vendor lock-in, and steer clear of budget-crippling license agreements.  You can see the effects of the switch from proprietary to open on the recent quarterly announcements of the large proprietary software companies.

How would you describe Joyent’s open source strategy thus far?

So far, we have utilized open source as a model to innovate quickly and engage with customers and a broad developer community.  SmartOS and Node.js are open source projects we have run for a number of years. In November of last year we went all in when we open sourced two of the systems at Joyent’s core: SmartDataCenter and Manta Object Storage Service. The unifying technology beneath both SmartDataCenter and Manta is OS-based virtualization and we believe open sourcing both systems is a way to broaden the community around the systems and advance the adoption of OS-based virtualization industry-wide.

We’re also getting involved with the larger open source community through initiatives like The Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF). Last month, we joined the newly formed CNCF as a charter member because we believe it is a foundation with a clear mission that aligns with our values: accelerating innovation and adoption of open source, container-based cloud computing.

In addition to those more recent open source milestones for Joyent, we’ve of course been heavily involved in the Node.js project since its inception half a decade ago. I wasn’t at Joyent then, but the team fell in love with Node.js as a new platform on which to build its cloud management software. Joyent really believed in the project, so the company hired Ryan Dahl and became the project steward until the formation of the Node.js Foundation earlier this year.

What about Node.js drove Joyent to get so involved?

We immediately recognized just how important Node.js could become.  It is a low latency, event-driven platform that has broad application in fast growing markets such as robotics, IoT, mobile, and the web. Joyent wanted to make sure Node.js flourished and ended up supporting the project through years of incredible adoption and growth.

What led to the decision to found the Node.js foundation?

Our goals for the project were for it to be a production-grade platform. To ensure that the code was highly performant, highly available, and high quality, we felt it was important to support Ryan Dahl’s wishes to tightly control the project through a BDFL model.  The project became massively popular and attracted a passionate group of developers and tens of thousands of production deployments.  Over the years, the project became a victim of its own success.  The vendor ecosystem that sprung up around Node demanded a neutral playing field so they could monetize Node, the developers insisted on a louder voice in the technical direction, and the customers wanted to de-risk the project.  I feel very strongly that for a project to succeed, the needs of all constituents (developers, users, and vendors) must be balanced. It became pretty clear that the project had transcended the needs of any one company and despite TJ Fontaine’s efforts to relax the constraints of the BDFL model, we needed to move to a new governance model. That’s why I decided to form the Node.js Advisory Board, which brought together a representative group of project constituents to work on governance issues, IP issues, community concerns, etc. We were all trying to avoid a fork, which would ultimately fracture the community, but obviously io.js forked in November. In the end, Joyent and everyone involved with Node.js wanted a single, unified project to succeed and grow under an open governance model. The Foundation gives us that and is the path to a long future.

How has the foundation functioned thus far?

I think we’re moving in a very positive direction. You can see exactly what we’re up to by checking out the public meeting notes and documents. Transparency is a major ingredient of this succeeding and we’re committed to keeping this open. Our mission is to drive widespread adoption and accelerate development of the project. If that is to happen, we need to avoid falling into corporate anti-open source patterns. When deciding to form the foundation, I talked a lot with Jim Zemlin from the Linux Foundation to see how we could set up a foundation that addressed the unique needs of the Node community and let the community dictate the technical direction.  Whereas other foundations have fallen into pay-to-play situations driven by corporate desires, we set up an independent technical committee with good representation from the user community.  I think we got it right and I’m confident the Node.js Foundation is on the path to long-term sustainability — particularly given the reunification with io.js, I think we’re well on our way.

What do you hope to see from Node.js in the next 10 years? What do you think Joyent’s involvement will be in the long run?

Joyent is going to stay very involved. We’ve built our core solutions on Node.js and poured resources into it for years. We plan to stay involved in the Foundation, make technical contributions to the project, and offer Node.js technical support. We’re in it for the long run. In terms of what I hope to see, I am optimistic about increased adoption and significant technical development over the next 10 years. There’s a lot of work ahead, and open governance by itself does not guarantee long-term success. All of us — the vendors, contributors, users — will need to balance our needs and encourage an open ecosystem.

What makes foundations a good model for open source technology? Do you think they will continue to be the preferred model?

Foundations allow for greater collaboration, transparency and accountability.  They also are a neutral structure that provides the best vehicle to balance the needs of the developers, the users, and the vendors. Those are good things for all the reasons I’ve detailed above. But like I’ve pointed out, a foundation does not in and of itself guarantee technological success. As our CTO Bryan Cantrill describes so well, many foundations in the past have underestimated the complexities and restrictions of running a non-profit. Opening up ownership of a project can also lead to the loss of strong leadership. And, finally, some foundations — despite initial intentions — have fallen into the pay-to-play pattern of catering only to the needs of the largest donors.

So yes, I do think foundations are overall a good model for open source technology, but not without reserve. When an open source technology has reached a certain level of popularity and adoption that brings innumerable players and constituents into the fray, only a foundation can provide the necessary neutrality. I think foundations will continue to serve this purpose, but we need to all be diligent about maintaining that neutrality and the ability to think bigger than your own organization. That’s part of the reason we’re so excited about the CNCF. At its core, the new foundation’s goals extend beyond any single technology or the needs of one company. Rather, it’s part of the new era of open source foundations, one in which corporate neutrality, transparency and innovation are the guiding values. We hope this foundation will be a model for open source moving forward.

What’s next for Joyent and open source?

We’re excited to witness the result of open sourcing SmartDataCenter and Manta. Already, we’ve seen organizations using the technologies in innovative ways and we’re committed to supporting open source in the future. Open source is an approach that works, and we’re sticking to it.

We are also excited about the potential impact of foundations like the CNCF.  Foundations have historically been used as a steward for projects.  The CNCF is playing a different role.  It is a steward for a new model of computing.  It brings together a cadre of projects and companies to define use cases, reference architectures, API’s, and PoC’s that will de-risk and accelerate a new model of computing.  We are breaking new ground, and it is rife with challenges, but I am optimistic about the impact we can have.

 

Intel Invests $50 Million in Quantum Computing Effort

Intel is the latest technology giant to invest in quantum computing research. Quantum computing, years away from commercialization, is supposed to be a huge leap forward. Intel said Thursday that it will invest $50 million and provide engineering resources to the Delft University of Technology and TNO, the Dutch Organisation for Applied Research, in an effort to advance quantum computing.

Quantum computing promises multiple breakthroughs and the possibility of new applications. Quantum computers use quantum bits, or qubits,…

Read more at ZDNet News

DevOps: An Introduction

Not too long ago, software development was done a little differently. We programmers would each have our own computer, and we would write code that did the usual things a program should do, such as read and write files, respond to user events, save data to a database, and so on. Most of the code ran on a single computer, except for the database server, which was usually a separate machine. To interact with the database, our code would specify the name or address of the database server along with credentials and other information, and we would call into a library that did the hard work of communicating with the server. So, from the perspective of the code, everything took place locally. We would call a function to get data from a table, and the function would return with the data we asked for. Yes, there were plenty of exceptions, but for many desktop applications, this was the general picture.

The early web added some layers of complexity to this, whereby we wrote code that ran on a server. But things weren't a whole lot different, except that our user interface was the browser. Our code would send HTML to the browser and receive input from the user through page requests and forms. Eventually, more coding took place in the browser through JavaScript, and we started building interactions between the browser and our server code. But on the server end, we would still just interact with the database through our code. And again, from the perspective of our code, it was just our program, the user interface, and most likely a database.

But, there’s something missing from this picture: The hardware. The servers. That’s because our software was pretty straightforward. We would write a program and expect that there’d be enough memory and disk space for the program to run (and issue an error message if there wasn’t). Of course, larger corporations and high-tech organizations always had more going on in terms of servers, but even then, software was rarely distributed, even in the case of central servers. If the server went down, we were hosed.

A Nightmare Was Brewing

This made for a lot of nightmares. The Quality Assurance (QA) team needed fresh computers to install the software on, and it was often a job that both the developer and the tester would do together. And, if the developer needed to run some special tests, he or she would ask a member of the IT staff to find a free computer. Then, he or she would walk to the freezing cold computer room and work in there for a bit trying to get the software up and running. Throughout all this, there was a divide between groups. There were the programmers writing code, and there were the IT people maintaining the hardware. There were database people and other groups. And each group was separate. But the IT people were at the center of it all.

Today software is different. Several years ago, somebody realized that a good way to keep a website going is to create copies of the servers running the website code. Then, if one goes down, users can be routed to another server. But this approach required changes in how we wrote our code. We couldn't just maintain a user's login information on a single computer unless we wanted to force the user to log back in after one server died and another took over. So we had to adjust our code for this and similar situations.
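That adjustment can be illustrated with a toy sketch: keep session state in a shared store rather than in any single server's memory, so any replica can pick up where a failed one left off. (The class names below are hypothetical, standing in for an external store such as Redis or a database.)

```python
# Toy sketch: two replicated "servers" share one session store, so a
# user stays logged in even if the server that handled the login dies.
class SharedSessionStore:
    """Stands in for an external shared store (e.g., Redis)."""
    def __init__(self):
        self._sessions = {}

    def save(self, session_id, data):
        self._sessions[session_id] = data

    def load(self, session_id):
        return self._sessions.get(session_id)


class WebServer:
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def login(self, session_id, user):
        # Session state goes to the shared store, not this server's memory.
        self.store.save(session_id, {"user": user})

    def whoami(self, session_id):
        session = self.store.load(session_id)
        return session["user"] if session else None


store = SharedSessionStore()
server_a = WebServer("a", store)
server_b = WebServer("b", store)

server_a.login("abc123", "alice")
# server_a "dies"; server_b serves the next request without a re-login.
print(server_b.whoami("abc123"))  # → alice
```

The essential design choice is that no server owns the session: state lives behind the replicas, which is what makes failover invisible to the user.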

Gradually, our software grew in size as well. Some of our work moved to other servers. Now we're dealing not only with servers that are copies of each other (replicated), but also with software and programs that are distributed among multiple computers. And our code has to be able to handle this. The problem I mentioned earlier (the time spent in the refrigerated computer room just trying to get the software installed) is still an issue with this distributed and replicated architecture. But now it's much harder. You can no longer just request a spare PC to go test the software on. And QA staff can no longer just wipe a single computer and reinstall the software from a DVD. The installation alone is a headache. What external modules does your app need? How is it distributed among hardware? And then, exactly what hardware is needed?

This situation requires the different groups to work closely together. The IT team who manages the hardware can’t be expected to just know what the developer’s software needs. And the developer can’t be expected to automatically know what hardware is available and how to make use of it.

DevOps to the Rescue

Thus we have a new field where the overlap occurs, which is a combination of developer and operations, called DevOps (see Figure 1 above). This is a field both developers and IT people need to know. But let’s focus today on the developers.

Suppose your program needs to spawn a process that does some special number crunching well suited to running on four separate machines, each with 16 cores, with the code distributed across those 64 cores. And once you have the code written, how will you try it out?

The answer lies in virtualization. With a cloud infrastructure, you can easily provision the hardware that you need, install operating systems on the virtual servers, upload your code, and have at it. Then, when you're finished working, you can shut down the virtual machines, and the resources return to a pool for use by other people. That process works for your testing, but in a live environment, your code might need to do the work of provisioning the virtual servers and uploading the code itself. Thus, your code must now be more aware of the hardware architecture.
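The provision-and-return-to-pool lifecycle described above can be sketched in a few lines. This is a toy model of the concept, not a real cloud API; all names are illustrative.

```python
# Toy model of a cloud resource pool: provisioning consumes capacity,
# and shutting down returns it to the pool for other users.
class ResourcePool:
    def __init__(self, total_vms):
        self.available = total_vms
        self.running = set()

    def provision(self, name):
        """Take one VM's worth of capacity from the pool."""
        if self.available == 0:
            raise RuntimeError("pool exhausted")
        self.available -= 1
        self.running.add(name)
        return name

    def shutdown(self, name):
        """Release the VM; its resources go back to the pool."""
        self.running.discard(name)
        self.available += 1


pool = ResourcePool(total_vms=4)
vm = pool.provision("test-worker-1")
# ... upload code and run your tests against the VM here ...
pool.shutdown(vm)
assert pool.available == 4  # capacity fully returned
```

A real provider (EC2, OpenStack) adds billing, networking, and images on top, but the core contract is the same: capacity is borrowed, used, and given back.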

Developers must know DevOps — in the areas of virtualization and cloud technology, as well as hardware management and configuration. Most organizations have limited IT staff, who can't sit beside the developers and help out. And managing the hardware from within the program requires coding, which is what the developers are there for. The line between hardware and software is blurrier than it used to be.

What to Learn

So where can you, as a programmer, learn the skills of DevOps? The usual places online are great places to start (our own Linux.com and various sites).

As for what to learn, here are some starters:

  1. Learn what virtualization is and how, through software alone, you can provision a virtual computer and install an operating system and a software stack. A great way to learn this is by opening an account on Amazon Web Services and playing around with its EC2 technology. Also, learn why these virtual servers are quite different from the early attempts, whereby a single-core computer would try to run multiple operating systems simultaneously, resulting in a seriously slow system. Today's virtualization uses different approaches, so this isn't a problem, especially now that multi-core computers are mainstream. Learn how block storage devices are used in a virtual setting.

  2. Learn about some of the new open source platforms such as OpenStack. OpenStack is a framework that lets you provision hardware in much the same way you can on Amazon Web Services.

  3. Learn network virtualization (Figure 2). This topic alone can become a career, but the idea is that you have all these virtual servers sitting on physical servers; while those physical servers are interconnected through physical networks, you can create a virtual network whereby your virtual servers connect in other ways, using a separate set of IP addresses in what's called a virtual private network. That way you can, for example, block a database server from being accessed from the outside world, while the virtual web servers are able to access it within the private network.

  4. Now learn how to manage all this, first through a web console, and then through programming, including code that uses a RESTful API. While you're there, learn about the security concerns and how to write code that uses OAuth2 and other forms of authentication and authorization. And learn as much as you can about how to configure SSH.

  5. Learn some configuration management tools. Chef and Puppet are two of the most popular. Learn how to write code in both of these tools, and learn how you can access that code from your own code.
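As a hedged illustration of item 4, here is roughly what code driving a RESTful provisioning API looks like. The URL, token, and payload fields below are made up for illustration; real providers differ in endpoints and OAuth2 flows, so consult their documentation.

```python
import json
import urllib.request

# Hypothetical RESTful provisioning endpoint; substitute your provider's API.
API_URL = "https://cloud.example.com/v2/servers"
TOKEN = "example-oauth2-access-token"  # would come from your OAuth2 flow

# Request body describing the server we want provisioned.
payload = json.dumps({"name": "web-01", "flavor": "m1.small"}).encode()

request = urllib.request.Request(
    API_URL,
    data=payload,
    method="POST",
    headers={
        "Authorization": "Bearer " + TOKEN,  # OAuth2 bearer token
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(request) would actually send it; it is omitted
# here so the sketch stays self-contained and never touches the network.
print(request.get_header("Authorization"))  # → Bearer example-oauth2-access-token
```

The pattern (authenticated HTTPS request, JSON body, POST to a resource collection) is the common shape of cloud management APIs, whatever library you use to issue it.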

Conclusion

The days of being in separate groups are gone. We have to get along and learn more about each other’s fields. Fifteen years ago, I never imagined I would become an expert at installing Linux and configuring ssh. But now I need this as part of my software development job, because I’m writing distributed cloud-based software. It’s now a job requirement and yes, we can all just get along.

Linux Foundation Puts Free Chromebooks in the Hands of its Training Students Throughout September

As students make their way back to the computer lab and professionals dig in post-summer, the Linux Foundation is offering free Chromebooks to individuals who enroll in Linux training during the month of September.

The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux and collaborative development, today announced it will give away one Chromebook to every person who enrolls in Linux Foundation training courses during September. Individual learners are eligible for this offer, which begins today and expires at 11:59 p.m. PT on September 30, 2015. All courses available for enrollment this month are offered through the end of the year, giving students flexibility in scheduling.

Read more at The Linux Foundation

IBM, ARM Link Arms on Internet of Things Analytics

The two firms plan to improve data analysis relating to industrial, health and wearable IoT devices. 

IBM and ARM are joining forces to boost Internet of Things (IoT) device analytics capabilities across the industrial, weather and wearable industries, among others. On Thursday, IBM and ARM announced plans to integrate the IBM Internet of Things (IoT) platform, dubbed IBM IoT Foundation, with ARM technology. Specifically, the platform will now connect ARM mbed users directly to IBM IoT Foundation analytics.

Read more at ZDNet News

RDO Juno DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

 Neutron DVR implements the fip-namespace on every Compute Node where VMs are running, so VMs with floating IPs can forward traffic to the external network without routing it via the Network Node (north-south routing). It also implements L3 routers across the Compute Nodes, so that tenants' intra-VM communication occurs without involving the Network Node (east-west routing). Neutron DVR still provides the legacy SNAT behavior for the default SNAT of all private VMs; the SNAT service is not distributed but centralized, and a service node hosts it.
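For reference, the essential DVR settings look like the following (assuming the standard Juno file layout; verify paths and options against your own deployment):

```ini
# /etc/neutron/neutron.conf on the Controller/Network node
[DEFAULT]
router_distributed = True

# /etc/neutron/l3_agent.ini on the Network node (hosts the centralized SNAT)
[DEFAULT]
agent_mode = dvr_snat

# /etc/neutron/l3_agent.ini on each Compute node
[DEFAULT]
agent_mode = dvr
```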

The complete text may be seen here

CloudRouter Now Live

Want your open-source NetOps? Here it is. The collaborative open-source CloudRouter project has come out of beta. 

CloudRouter has two network operating system flavours – CentOS 7.1 with Java 1.8, or Fedora 22. It ships with ONOS 1.2 Cardinal and OpenDaylight Lithium, and supports Docker, CoreOS, Rkt, OSv or KVM containers. Routing is provided by ExaBGP, BIRD and Quagga, and its base functionality includes support for …

Read more at The Register

First X.Org Server 1.18 Release Candidate Build Brings Almost 300 Improvements

The X.Org Foundation, through Keith Packard, announced the immediate availability for download of the first Release Candidate (RC) build of the X.Org Server 1.18 open-source implementation of the X Window System.

According to the overwhelming changelog, which we've attached at the end of the article for reference, X.Org Server 1.18 Release Candidate 1 is an enormous milestone that adds approximately 300 changes, including new features, under-the-hood improvements, and bug fixes. There are improvements for everything from XWayland, XFree86, xf86Crtc, XQuartz, and Xephyr,…