
Special Exclusive: Q&A with Joyent CEO Scott Hammond


I recently caught up with Joyent CEO Scott Hammond at LinuxCon in Seattle. Joyent has been a leader in supporting the growth and diversity of the Node.js community and was a founding member of the Node.js Foundation. I was interested to learn more about Scott and his work at Joyent, as well as more about the company’s contributions to Linux and open source. Below I include a Q&A with him on these topics. I’ll also be sharing a video interview with Scott a little later this fall.

Can you describe Joyent’s business?

Joyent is a cloud and infrastructure software company.  We are big believers in containers and, along with Google, pioneered running container-based infrastructure at scale.  Containers deliver bare metal performance, workload density, and web scale economics, far beyond what is possible with virtual machines.  

Joyent’s Triton Elastic Infrastructure is the best place to run containers, making container ops simple and scalable with enterprise-grade security, software-defined networking and bare-metal performance. Triton is available for on-premises deployments or through the Joyent Triton Elastic Container Service on the Joyent Public Cloud.

Why is open source so important to the company?

Open source is the only way infrastructure software is being developed today.  No meaningful proprietary infrastructure software has been built in the last 10 years.  Open source has some significant advantages over the proprietary model.  First, we get to engage the user community directly to collaborate on innovation, let them participate in the technical direction, and extend the software to address their unique requirements.  We saw a great example of that recently as someone in the community built an OpenStack Heat template that allows you to use OpenStack to deploy containers directly on Triton instead of a VM.

Open source is also the best model to engage with customers since many have adopted an “open-first” policy where they look first for an open source solution before they evaluate proprietary products.  Due to open access to source code, documentation, expertise, and support, organizations can evaluate, deploy, and utilize open source software without enduring a judo match with an overbearing proprietary sales rep.

Customers have witnessed community development delivering rapid innovation.  Open source also allows them to de-risk their projects, avoid vendor lock-in, and steer clear of budget-crippling license agreements.  You can see the effects of the switch from proprietary to open on the recent quarterly announcements of the large proprietary software companies.

How would you describe Joyent’s open source strategy thus far?

So far, we have utilized open source as a model to innovate quickly and engage with customers and a broad developer community.  SmartOS and Node.js are open source projects we have run for a number of years. In November of last year we went all in when we open sourced two of the systems at Joyent’s core: SmartDataCenter and Manta Object Storage Service. The unifying technology beneath both SmartDataCenter and Manta is OS-based virtualization and we believe open sourcing both systems is a way to broaden the community around the systems and advance the adoption of OS-based virtualization industry-wide.

We’re also getting involved with the larger open source community through initiatives like The Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF). Last month, we joined the newly formed CNCF as a charter member because we believe it is a foundation with a clear mission that aligns with our values: accelerating innovation and adoption of open source, container-based cloud computing.

In addition to those more recent open source milestones for Joyent, we’ve of course been heavily involved in the Node.js project since its inception half a decade ago. I wasn’t at Joyent then, but the team fell in love with Node.js as a new platform on which to build its cloud management software. Joyent really believed in the project, so the company hired Ryan Dahl and became the project steward until the formation of the Node.js Foundation earlier this year.

What about Node.js drove Joyent to get so involved?

We immediately recognized just how important Node.js could become.  It is a low latency, event-driven platform that has broad application in fast growing markets such as robotics, IoT, mobile, and the web. Joyent wanted to make sure Node.js flourished and ended up supporting the project through years of incredible adoption and growth.

What led to the decision to found the Node.js foundation?

Our goals for the project were for it to be a production-grade platform. To ensure that the code was highly performant, highly available, and high quality, we felt it was important to support Ryan Dahl’s wishes to tightly control the project through a BDFL model.  The project became massively popular and attracted a passionate group of developers and tens of thousands of production deployments.  Over the years, the project became a victim of its own success.  The vendor ecosystem that sprung up around Node demanded a neutral playing field so they could monetize Node, the developers insisted on a louder voice in the technical direction, and the customers wanted to de-risk the project.  I feel very strongly that for a project to succeed, the needs of all constituents (developers, users, and vendors) must be balanced. It became pretty clear that the project had transcended the needs of any one company and despite TJ Fontaine’s efforts to relax the constraints of the BDFL model, we needed to move to a new governance model. That’s why I decided to form the Node.js Advisory Board, which brought together a representative group of project constituents to work on governance issues, IP issues, community concerns, etc. We were all trying to avoid a fork, which would ultimately fracture the community, but obviously io.js forked in November. In the end, Joyent and everyone involved with Node.js wanted a single, unified project to succeed and grow under an open governance model. The Foundation gives us that and is the path to a long future.

How has the foundation functioned thus far?

I think we’re moving in a very positive direction. You can see exactly what we’re up to by checking out the public meeting notes and documents. Transparency is a major ingredient in making this succeed and we’re committed to keeping this open. Our mission is to drive widespread adoption and accelerate development of the project. If that is to happen, we need to avoid falling into corporate anti-open source patterns. When deciding to form the foundation, I talked a lot with Jim Zemlin from the Linux Foundation to see how we could set up a foundation that addressed the unique needs of the Node community and let the community dictate the technical direction.  Whereas other foundations have fallen into pay-to-play situations driven by corporate desires, we set up an independent technical committee with good representation from the user community.  I think we got it right and I’m confident the Node.js Foundation is on the path to long-term sustainability — particularly given the reunification with io.js, I think we’re well on our way.

What do you hope to see from Node.js in the next 10 years? What do you think Joyent’s involvement will be in the long run?

Joyent is going to stay very involved. We’ve built our core solutions on Node.js and poured resources into it for years. We plan to stay involved in the Foundation, make technical contributions to the project, and offer Node.js technical support. We’re in it for the long run. In terms of what I hope to see, I am optimistic about increased adoption and significant technical development over the next 10 years. There’s a lot of work ahead, and open governance by itself does not guarantee long-term success. All of us — the vendors, contributors, users — will need to balance our needs and encourage an open ecosystem.

What makes foundations a good model for open source technology? Do you think they will continue to be the preferred model?

Foundations allow for greater collaboration, transparency and accountability.  They also are a neutral structure that provides the best vehicle to balance the needs of the developers, the users, and the vendors. Those are good things for all the reasons I’ve detailed above. But like I’ve pointed out, a foundation does not in and of itself guarantee technological success. As our CTO Bryan Cantrill describes so well, many foundations in the past have underestimated the complexities and restrictions of running a non-profit. Opening up ownership of a project can also lead to the loss of strong leadership. And, finally, some foundations — despite initial intentions — have fallen into the pay-to-play pattern of catering only to the needs of the largest donors.

So yes, I do think foundations are overall a good model for open source technology, but not without reserve. When an open source technology has reached a certain level of popularity and adoption that brings innumerable players and constituents into the fray, only a foundation can provide the necessary neutrality. I think foundations will continue to serve this purpose, but we need to all be diligent about maintaining that neutrality and the ability to think bigger than your own organization. That’s part of the reason we’re so excited about the CNCF. At its core, the new foundation’s goals extend beyond any single technology or the needs of one company. Rather, it’s part of the new era of open source foundations, one in which corporate neutrality, transparency and innovation are the guiding values. We hope this foundation will be a model for open source moving forward.

What’s next for Joyent and open source?

We’re excited to witness the result of open sourcing SmartDataCenter and Manta. Already, we’ve seen organizations using the technologies in innovative ways and we’re committed to supporting open source in the future. Open source is an approach that works, and we’re sticking to it.

We are also excited about the potential impact of foundations like the CNCF.  Foundations have historically been used as stewards for projects.  The CNCF is playing a different role.  It is a steward for a new model of computing.  It brings together a cadre of projects and companies to define use cases, reference architectures, APIs, and PoCs that will de-risk and accelerate a new model of computing.  We are breaking new ground, and it is rife with challenges, but I am optimistic about the impact we can have.


Intel Invests $50 Million in Quantum Computing Effort

Intel is the latest technology giant to invest in quantum computing research. Quantum computing, years away from commercialization, is supposed to be a huge leap forward. Intel said Thursday that it will invest $50 million and provide engineering resources to the Delft University of Technology and TNO, the Dutch Organisation for Applied Research, in an effort to advance quantum computing.

Quantum computing promises multiple breakthroughs and the possibility of new applications. Quantum computers use quantum bits, or qubits,…

Read more at ZDNet News

DevOps: An Introduction

Not too long ago, software development was done a little differently. We programmers would each have our own computer, and we would write code that did the usual things a program should do, such as read and write files, respond to user events, save data to a database, and so on. Most of the code ran on a single computer, except for the database server, which was usually a separate computer. To interact with the database, our code would specify the name or address of the database server along with credentials and other information, and we would call into a library that would do the hard work of communicating with the server. So, from the perspective of the code, everything took place locally. We would call a function to get data from a table, and the function would return with the data we asked for. Yes, there were plenty of exceptions, but for many desktop applications, this was the general picture.

The early web added some layers of complexity to this, whereby we wrote code that ran on a server. But, things weren’t a whole lot different except that our user interface was the browser. Our code would send out HTML to the browser and receive input from the user through page requests and forms. Eventually more coding took place in the browser through JavaScript, and we started building interactions between the browser and our server code. But on the server end, we would still just interact with the database through our code. And again, from the perspective of our code, it was just our program, the user interface, and most likely a database.

But, there’s something missing from this picture: The hardware. The servers. That’s because our software was pretty straightforward. We would write a program and expect that there’d be enough memory and disk space for the program to run (and issue an error message if there wasn’t). Of course, larger corporations and high-tech organizations always had more going on in terms of servers, but even then, software was rarely distributed, even in the case of central servers. If the server went down, we were hosed.

A Nightmare Was Brewing

This made for a lot of nightmares. The Quality Assurance (QA) team needed fresh computers to install the software on, and it was often a job that both the developer and the tester would do together. And, if the developer needed to run some special tests, he or she would ask a member of the IT staff to find a free computer. Then, he or she would walk to the freezing cold computer room and work in there for a bit trying to get the software up and running. Throughout all this, there was a divide between groups. There were the programmers writing code, and there were the IT people maintaining the hardware. There were database people and other groups. And each group was separate. But the IT people were at the center of it all.

Today software is different. Several years ago, somebody realized that a good way to keep a website going is to create copies of the servers running the website code. Then if one goes down, users can be routed to another server. But, this approach required changes in how we wrote our code. We couldn’t just maintain a user’s login information on a single computer unless we wanted to force the user to log back in after one server died and another took over. So we had to adjust our code for this and similar situations.
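The login problem above is worth making concrete. The sketch below is purely illustrative (the `Server` class and the dict-based shared store are made-up stand-ins, not any particular framework or cache): sessions kept in one server’s local memory vanish when that server dies, while sessions kept in a shared store survive failover.

```python
# Sketch: why per-server session state breaks failover.
# All names here are illustrative; a real deployment would use
# something like a shared cache or database, not a plain dict.

class Server:
    def __init__(self, shared_store=None):
        self.local_sessions = {}          # lives and dies with this server
        self.shared_store = shared_store  # stand-in for an external session store

    def login(self, user):
        store = self.shared_store if self.shared_store is not None else self.local_sessions
        store[user] = {"logged_in": True}

    def is_logged_in(self, user):
        store = self.shared_store if self.shared_store is not None else self.local_sessions
        return user in store

# Local sessions: a replacement server knows nothing about the user.
a = Server()
a.login("alice")
b = Server()                    # fresh server after "a" crashes
print(b.is_logged_in("alice"))  # False -> user is forced to log back in

# Shared store: the replacement server still sees the session.
store = {}
a = Server(shared_store=store)
a.login("alice")
b = Server(shared_store=store)
print(b.is_logged_in("alice"))  # True
```

The design point is simply that state the user cares about has to live somewhere that outlasts any single machine.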

Gradually our software grew in size as well. Some of our work moved to other servers. Now we’re dealing not only with servers that are copies of each other (replicated), but with software and programs that are distributed among multiple computers. And our code has to be able to handle this. The problem I described earlier, the time spent in the freezing computer room just trying to get the software installed, is still an issue with this distributed and replicated architecture. But now it’s much harder. You can no longer just request a spare PC to go test the software on. And QA staff can no longer just wipe a single computer and reinstall the software from a DVD. Just the installation alone is a headache. What external modules does your app need? How is it distributed among hardware? And then, exactly what hardware is needed?

This situation requires the different groups to work closely together. The IT team who manages the hardware can’t be expected to just know what the developer’s software needs. And the developer can’t be expected to automatically know what hardware is available and how to make use of it.

DevOps to the Rescue

Thus we have a new field where the overlap occurs, a combination of development and operations, called DevOps (see Figure 1 above). This is a field both developers and IT people need to know. But let’s focus today on the developers.

Suppose your program needs to spawn a process that does some special number crunching, work well suited to four separate machines, each with 16 cores, with the code distributed among those 64 cores. When you have the code written, how will you try it out?

The answer is in virtualization. With a cloud infrastructure, you can easily provision the hardware that you need, install the virtual operating systems on the virtual servers, upload your code, and have at it. Then when you’re finished working, you can shut down the virtual machines, and the resources return to a pool for use by other people. That process works for your testing, but in a live environment, your code might need to do the work itself of provisioning the virtual servers and uploading the code. Thus, your code must now be more aware of the hardware architecture.
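That provision-use-release cycle can be sketched without touching any real cloud. The `ResourcePool` class below is hypothetical, a stand-in for what a provider’s API (EC2, OpenStack, and so on) does for you: capacity is checked out, used, and returned to the pool.

```python
# Sketch of the provision / use / release cycle against a capacity pool.
# Purely illustrative; real code would call a cloud provider's API instead.

class ResourcePool:
    def __init__(self, total_cores):
        self.free_cores = total_cores

    def provision(self, cores):
        if cores > self.free_cores:
            raise RuntimeError("not enough capacity")
        self.free_cores -= cores
        return {"cores": cores, "state": "running"}

    def release(self, vm):
        self.free_cores += vm["cores"]
        vm["state"] = "terminated"

pool = ResourcePool(total_cores=128)

# Provision four 16-core machines for the number-crunching job...
vms = [pool.provision(16) for _ in range(4)]
print(pool.free_cores)   # 64 cores left for everyone else

# ...and when the job is done, the capacity goes back to the pool.
for vm in vms:
    pool.release(vm)
print(pool.free_cores)   # back to 128
```

The same shape appears in live code: provisioning on demand, and, just as importantly, releasing on completion so the pool stays usable.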

Developers must know DevOps — in the areas of virtualization and cloud technology, as well as hardware management and configuration. Most organizations have limited IT staff, who can’t sit beside the developers and help out. And managing the hardware from within the program requires coding, which is what the developers are there for. The line between hardware and software is blurrier than it used to be.
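Before any servers are provisioned, the number-crunching fan-out itself can be tried locally. Here is a minimal sketch using Python’s standard `concurrent.futures`: split the data into chunks, hand each chunk to a worker, and gather the results. (Threads keep the sketch simple and self-contained; a real CPU-bound job would use `ProcessPoolExecutor`, or those four 16-core machines, in exactly the same pattern.)

```python
# Fan a job out across a pool of workers, then gather the results.
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # stand-in for the real number crunching
    return sum(x * x for x in chunk)

def run(data, workers=4):
    # split the data into one chunk per worker
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))

print(run(list(range(1000))))  # same answer as a single-process sum of squares
```

The split/map/reduce shape is what carries over when the workers become separate machines; only the transport changes.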

What to Learn

So where can you, as a programmer, learn the skills of DevOps? The usual online resources are a great place to start (our own Linux.com and various other sites).

As for what to learn, here are some starters:

  1. Learn what virtualization is and how, through software alone, you can provision a virtual computer and install an operating system and a software stack. A great way to learn this is by opening an account on Amazon Web Services and playing around with their EC2 technology. Also, learn why these virtual servers are quite different from the early attempts, whereby one single-core computer would try to run multiple operating systems simultaneously, causing a seriously slow system. Today’s virtualization uses different approaches so this isn’t a problem, especially since multi-core computers are mainstream. Learn how block storage devices are used in a virtual setting.

  2. Learn about some of the new open source standards such as OpenStack. OpenStack is a framework that lets you provision hardware in much the same way you can on Amazon Web Services.

  3. Learn network virtualization (Figure 2). This topic alone can become a career, but the idea is that you have all these virtual servers sitting on physical servers; while those physical servers are interconnected through physical networks, you can actually create a virtual network whereby your virtual servers connect in other ways, using a separate set of IP addresses in what’s called a virtual private network. That way you can, for example, block a database server from being accessed from the outside world, while the virtual web servers are able to access it within the private network.

  4. Now learn how to manage all this, first through a web console, and then through programming, including with code that uses a RESTful API. And, while you’re there, learn about the security concerns and how to write code that uses OAuth2 and other forms of authentication and authorization. Learn, learn, learn as much as you can about how to configure ssh.

  5. Learn some configuration management tools. Chef and Puppet are two of the most popular. Learn how to write code in both of these tools, and learn how you can access that code from your own code.

Conclusion

The days of being in separate groups are gone. We have to get along and learn more about each other’s fields. Fifteen years ago, I never imagined I would become an expert at installing Linux and configuring ssh. But now I need this as part of my software development job, because I’m writing distributed cloud-based software. It’s now a job requirement and yes, we can all just get along.

Linux Foundation Puts Free Chromebooks in the Hands of its Training Students Throughout September

As students make their way back to the computer lab and professionals dig in post-summer, the Linux Foundation is offering free Chromebooks to individuals who enroll in Linux training during the month of September.

The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux and collaborative development, today announced it will give away one Chromebook to every person who enrolls in Linux Foundation training courses during September. Individual learners are eligible for this offer, which begins today and expires at 11:59 p.m. PT on September 30, 2015. All courses available for enrollment this month are offered through the end of the year, giving students flexibility in scheduling.

Read more at The Linux Foundation

IBM, ARM Link Arms on Internet of Things Analytics

The two firms plan to improve data analysis relating to industrial, health and wearable IoT devices. 

IBM and ARM are joining forces to boost Internet of Things (IoT) device analytics capabilities across the industrial, weather and wearable industries, among others. On Thursday, IBM and ARM announced plans to integrate the IBM Internet of Things (IoT) platform, dubbed IBM IoT Foundation, with ARM technology. Specifically, the platform will now connect ARM mbed users directly to IBM IoT Foundation analytics.

Read more at ZDNet News

RDO Juno DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

Neutron DVR implements the fip-namespace on every Compute Node where the VMs are running, so VMs with FloatingIPs can forward traffic to the External Network without routing it via the Network Node (North-South routing). It also implements the L3 Routers across the Compute Nodes, so that tenants’ intra-VM communication occurs without involving the Network Node (East-West routing). Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT for all private VMs; the SNAT service is not distributed but centralized, hosted on a designated service node.

The complete text may be seen here

CloudRouter Now Live

Want your open-source NetOps? Here it is. The collaborative open-source CloudRouter project has come out of beta. 

CloudRouter has two network operating system flavours – CentOS 7.1 with Java 1.8, or Fedora 22. It ships with ONOS 1.2 Cardinal and OpenDaylight Lithium, and supports Docker, CoreOS, Rkt, OSv or KVM containers. Routing is provided by ExaBGP, BIRD and Quagga, and its base functionality includes support for …

Read more at The Register

First X.Org Server 1.18 Release Candidate Build Brings Almost 300 Improvements

The X.Org Foundation, through Keith Packard, announced the immediate availability for download of the first Release Candidate (RC) build towards the X.Org Server 1.18 open-source implementation of the X Window System.

According to the overwhelming changelog, which we’ve attached at the end of the article for reference, X.Org Server 1.18 Release Candidate 1 is an enormous milestone that adds approximately 300 changes, which include the addition of new features, under-the-hood improvements, and bugfixes. There are improvements for everything, from XWayland, XFree86, xf86Crtc, XQuartz, and Xephyr,…

Linux Foundation Sysadmins Open-Source Their IT Policies

The Linux Foundation is no stranger to the world of open source and free software — after all, we are the home of Linux, the world’s most successful free software project. Throughout the Foundation’s history, we have worked not only to promote open-source software, but to spread the collaborative DNA of Linux to new fields in hopes of enabling innovation and access for all.

This is why the Linux Foundation IT staff has decided to release some of its internal IT checklists and policy guides as part of the “Useful IT Policies” open-source project on GitHub (https://github.com/lfit/itpol). Generalized versions of these internal documents are made available under the Creative Commons “Attribution-ShareAlike” license in hopes that other IT teams will fork them, adapt them, tweak them to fit their needs — and hopefully contribute back in the true spirit of open-source collaboration.

We chatted with Konstantin Ryabitsev, Sr. Systems Administrator and head of the Linux Foundation Collaborative IT projects, about the goals and motivations behind the project.

Q: What prompted your decision to open-source these documents?

In doing this, we were inspired by other projects that have taken similar steps, such as when the recently defunct Ada Initiative released their excellent AdaCamp policies under a Creative Commons license. Our team has been producing open-source software and free documentation throughout the years, and extending this approach to our internal documentation was simply the next logical step.

Q: What documents are available?

Thus far, we have released two documents adapted from our internal IT policies. The first is a generalized version of our Linux workstation security checklist, which we hope will be useful to other IT teams. In the document, we go into some detail about the reasoning behind our recommendations and try to provide a rough guideline for what we consider essential steps, what is more of a nice-to-have, and what is likely to be seen as “too paranoid” by most other sysadmins. This is not to say that “paranoid” recommendations shouldn’t be considered — the “paranoid” label is more of a fair warning that implementing these measures will require a significant amount of dedication on the part of the sysadmin in order to be worth the effort.

The second document is a shorter checklist for establishing trusted team communication. It discusses subjects like using OpenPGP or S/MIME for sending and receiving encrypted email, securing IM conversations by using the OTR protocol, and establishing firm policies on what subjects should never be communicated over untrusted channels (passwords, internal policy decisions, security-sensitive details, etc.).

Q: Do other companies provide such checklists and guidelines?

Absolutely, there are quite a number of workstation hardening documents published by reputable IT security companies, which everyone should absolutely consult. We by no means claim that our checklists are better, smarter, or more exhaustive. Based on our experiences as security-minded sysadmins, we think these are useful recommendations that strike a workable balance between security and usability. Other systems administrators should approach these documents just like they do all other open-source resources: they can evaluate them, adapt them, hack on them until they fit their team’s requirements, and hopefully contribute back via patches or feedback. This is why we released them on GitHub under a free license.

Q: Will there be other documents in the future?

Definitely, as time allows — and we invite other teams to contribute theirs. Open-source projects are successful because instead of reinventing the same things over and over again, we draw on the collective works and experiences of other developers who chose to freely share their expertise with others. There is absolutely no reason why we should treat policy documents differently from any other code, or keep them strictly proprietary. We hope that others will choose to freely share their guidelines and policy documents, to everyone’s mutual benefit.

SanDisk and Nexenta Release Open-Source, Flash Software-Defined Storage Array

What do you get when you put open-source software and flash drives together? The first open-source software-defined storage array.

SanDisk is best known for storage. Led by Nithya Ruff, the company’s head of open-source strategy, the company is integrating open source into its storage business. Its latest deal with Nexenta, an open-source software-defined storage leader, pairs NexentaStor with SanDisk’s all-flash InfiniFlash IF100 system and underlines this shift. The new all-flash storage combo provides data-center customers with a full-featured, high-performance system for addressing today’s increasing Big Data challenges.

Read more at ZDNet News