
Learn The Future of Node.js From Industry and Community Experts at Node.js Interactive

With almost five million users a month and adoption across numerous industries, Node.js is a universal platform for web applications, IoT development, enterprise application development, and microservice architectures. Its liberal contribution policies have allowed the project's contributor base to grow at a sustained 100% year over year for the last several years.

To help spread Node.js best practices and take a look at what the future holds for the platform, the Node.js Foundation is hosting Node.js Interactive to bring together developers, enterprise users and the Node.js community for collaboration, education, and more from November 29 – December 2 in Austin.

The conference will feature two full days of talks, workshops and keynotes focused on skill-building and knowledge-sharing in several key areas: performance, DevOps, debugging, security, machine learning, IoT, and more. A few key talks include:

Node.js State of the Union – Rod Vagg

Rod, Technical Steering Director of the Node.js Foundation and Chief Node Officer at NodeSource, will discuss progress Node.js has made in the last year as well as key technologies and focus areas for Node.js Core and the Node.js Project teams in 2017.

Express State of the Union – Doug Wilson

You’ve likely used an app built with Express. With 53+ million downloads in the last two years, Express has become one of the key toolkits for building web applications. Express underpins some of the most significant projects in the Node.js ecosystem, including the popular blogging framework Ghost and Loopback, a Node.js API framework, and is heavily used by enterprises. Doug Wilson, the rock of the Express community, will talk about the progress Express has made over the last year and showcase what is in store for future versions.

JavaScript Will Let Your Site Work Without JavaScript – Sarah Meyer of Buzzfeed

Is your site heavy and slow, especially on mobile devices? You might want to look into isomorphic JavaScript with Node.js. Any industry that demands superior customer satisfaction with its website, which is pretty much every industry, should know about isomorphic JavaScript. It certainly helped at Rent the Runway. Sarah Meyer, now a software engineer at Buzzfeed, will provide an overview of how isomorphic JavaScript can be used to improve users’ experience on web pages and, in the process, make a developer’s life a lot easier.

Shedding Light on the Darknet – Dr. Nwokedi Idika of Google

What does the darknet really mean? Dr. Nwokedi Idika, software engineer at Google, asked this very question, and after multiple interviews he came to the same conclusion: confusion. His presentation will help correct common misconceptions about the darknet, from concepts to technologies.

Slaying Monoliths with Docker and Node.js – Yunong Xiao of Netflix

At the heart of nearly every request from subscribers is Netflix’s data access platform. It enables Netflix’s innovative UIs to communicate efficiently with its bevy of backend services while also maintaining a large and ever growing subscriber base, which currently stands at 75 million members.

How does Netflix continue to grow with this monolithic platform? During Node.js Interactive, Yunong Xiao, senior Node.js software engineer at Netflix.com, will discuss a new container-based data access platform that the Netflix team is building to replace its monolith, and how they are using Node.js to instrument it all.

The third day of the conference will kick off with Code & Learn. If you are interested in learning how to contribute to Node.js, this is the right event for you. Code & Learn provides hands-on workshops and encourages new developers to tackle real problems in the code base – all with live, in-person support from mentors who attend Node.js conferences.

From the afternoon of December 1 through December 2, the Node.js Foundation will host a Collaboration Summit, which will feature un-conference sessions to discuss the present and future direction of Node.js. Have an interest in helping shape the future of Node.js? Join the Collaboration Summit.

View the full schedule to learn more about this marquee event for Node.js developers, companies that rely on Node.js, and vendors. Or register now for Node.js Interactive.

 

Getting Started With Kubernetes Is Easy With Minikube

Getting started with any distributed system that has several components expected to run on different servers can be challenging. Often developers and system administrators want to be able to take a new system for a spin before diving straight into a full-fledged deployment on a cluster.

All-in-one installations are a good remedy to this challenge. They provide a quick way to test a new technology by getting a working setup on a local machine. They can be used by developers in their local workflow and by system administrators to learn the basics of deploying and configuring the various components.

Typically, all-in-one installs are provided as a virtual machine. Vagrant has been a great tool for this. For example, OpenShift Origin provides an all-in-one VM, and OpenStack has DevStack.

To get started with Kubernetes easily, we now have an all-in-one solution: minikube.

Minikube will start a virtual machine locally and run the necessary Kubernetes components. The VM will get configured with Docker and Kubernetes via a single binary called localkube. The end result will be a local Kubernetes endpoint that you can use with the Kubernetes client kubectl.

This is very similar to Docker’s docker-machine, with one main difference: minikube only starts local virtual machines; it does not interact with public cloud providers.

Setup

For clarity, I will skip over the very few requirements (e.g., VirtualBox or another hypervisor) and go straight to the minikube installation.

While you can build minikube from source by grabbing it on GitHub, the easiest route is to download the released binary for your platform (packaging should come shortly). For example, on Linux:


```

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-linux-amd64

$ chmod +x minikube

$ sudo mv minikube /usr/local/bin/

```

With minikube now on your machine, you will be able to create a local Kubernetes endpoint. But if you are totally new to Kubernetes, you should also download the Kubernetes client kubectl. This client will interact with the Kubernetes API to manage your containers.

Get kubectl:


```

$ curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl

$ chmod +x kubectl

$ sudo mv kubectl /usr/local/bin/


```
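Before moving on, a quick sanity check can save some confusion. This short loop is not part of the original instructions, just a convenience: it confirms both binaries are reachable on your PATH and is safe to run on any machine.

```shell
# Confirm that the minikube and kubectl binaries are on the PATH.
# Prints one status line per tool, whether or not it is installed.
for tool in minikube kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found -- check the install steps above"
  fi
done
```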

You are now ready to use minikube. Just typing minikube at your shell prompt will return the usage. The first command to use, however, is the start command. This will boot the virtual machine that will run Kubernetes. Let’s do it:

```

$ minikube start

Starting local Kubernetes cluster...

Kubectl is now configured to use the cluster.

```

You can check the status of your minikube VM with the status command.

```

$ minikube status

Running

```

To verify that kubectl is configured to talk to the minikube VM, try listing the nodes in your Kubernetes cluster. It should return a single node with the name minikubevm.

```

$ kubectl get nodes

NAME         STATUS    AGE

minikubevm   Ready     1h

```

And, to finish checking that everything is running, open the VirtualBox UI; you should see the minikube VM running.

Application Example

You are now all set with this all-in-one local Kubernetes install. You can use it to discover the API, explore the dashboard and start writing your first containerized applications that will easily migrate to a full Kubernetes cluster.

Explaining how an application is containerized and what the various resources in Kubernetes are is beyond the scope of this blog. But, to showcase minikube, we are going to create the canonical guestbook application. This application is made of a Redis cluster and a PHP front end. Containers are created via a Kubernetes resource called a Deployment and exposed to each other via another Kubernetes primitive called a Service.
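To make those two resource types concrete, here is a heavily trimmed sketch of roughly what the front-end portion of that YAML file looks like. This is illustrative only: values such as the image tag are assumptions, and the real guestbook manifest used in the next step carries more configuration.

```yaml
# Sketch: a Deployment runs the containers; a Service exposes them.
apiVersion: extensions/v1beta1   # Deployment API group around Kubernetes 1.3
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                    # three identical PHP front-end pods
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # illustrative image tag
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:                      # routes traffic to pods carrying these labels
    app: guestbook
    tier: frontend
  ports:
  - port: 80
```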

Let’s create the guestbook application on minikube. To do this, we will use the kubectl client and create all required resources by pointing at the YAML file that describes them in the examples folder of the Kubernetes source code.


```

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml

service "redis-master" created

deployment "redis-master" created

service "redis-slave" created

deployment "redis-slave" created

service "frontend" created

deployment "frontend" created

```

The CLI tells you that it created three services and three deployments.

You can now open the Kubernetes dashboard with a single command, and you will see all the resources that have been created; we will discuss them in a later post.

```

$ minikube dashboard

```

Below is a snapshot of the Dashboard.


To access the application front end, the quickest way in a development setup like this one is to run a local proxy, because a default Kubernetes service is not reachable from outside the cluster.

```

$ kubectl proxy

```

The frontend service will now be accessible locally. Open it in your browser and enjoy the guestbook application.

```

$ open http://localhost:8001/api/v1/proxy/namespaces/default/services/frontend

```

Here is a snapshot of the guestbook application:

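The proxy is only one way to reach the front end. Another common option in a local setup is to change the frontend Service to type NodePort, which exposes it on a port of the minikube VM itself. A sketch, assuming the guestbook's labels:

```yaml
# Sketch: the same frontend Service, but reachable on a port of the VM.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort        # Kubernetes assigns a port, typically 30000-32767
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
```

You would then browse to the VM's IP (which minikube ip prints) on the assigned node port.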

Additional minikube commands

If you want to learn what is actually happening inside the minikube VM, you can SSH into it with the minikube ssh command.

Because Kubernetes uses the Docker engine to run the containers, you can also use minikube as a Docker host with the minikube docker-env command.
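In practice, minikube docker-env prints export statements that point your local Docker client at the daemon inside the VM. The guarded sketch below is safe to run even on a machine without minikube; the variable values shown in the comments are examples, not your actual ones.

```shell
# Reuse the Docker daemon inside the minikube VM from the host's docker client.
if command -v minikube >/dev/null 2>&1 && minikube status 2>/dev/null | grep -q Running; then
  # docker-env prints lines such as:
  #   export DOCKER_HOST="tcp://192.168.99.100:2376"
  #   export DOCKER_CERT_PATH="..."
  eval "$(minikube docker-env)"
  docker ps   # now lists the containers Kubernetes itself runs inside the VM
else
  echo "minikube not running here; skipping docker-env demo"
fi
```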

As a developer, you might also be interested in testing different versions of Kubernetes, and Minikube allows you to do this. Check which versions are available with minikube get-k8s-versions and pass the --kubernetes-version flag to minikube start to set a specific version.

Finally, you can stop and delete the minikube VM with intuitive commands like minikube stop and minikube delete.

In conclusion, the minikube binary is by far the easiest and quickest way to take Kubernetes for a spin. It will let you learn the Kubernetes API and its resource objects, as well as how to interact with a cluster using the kubectl client.

Read the next articles in this series: 

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long time open source contributor. Member of the Apache Software Foundation, member of the Kubernetes organization, he is also the author of the O’Reilly Docker cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.

Shining a Light on the Enterprise Appeal of Multi-Cloud Deployments

Rather than settle for the services of a single cloud provider, enterprises are expected, in time, to source off-premise capacity from a variety of suppliers.

According to market watcher Gartner, the trend is being driven by the fact that cloud customers are increasingly aware of the merits and drawbacks of individual providers, which enables them to make informed decisions about where best to run specific workloads.

Mark D’Cunha, product manager at Pivotal, told Computer Weekly at the Cloud Foundry Summit in Frankfurt that parts of the industry have been surprised by the speed at which enterprises are looking to adopt a multi-cloud approach to IT consumption.  

Read more at ComputerWeekly

Coaches, Managers, Collaboration, and Agile: Part III

I started this series writing about the need for coaches in Coaches, Managers, Collaboration, and Agile, Part 1. I continued in Coaches, Managers, Collaboration, and Agile, Part 2, talking about the changed role of managers in Agile. In this part, let me address the role of senior managers in Agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product such as Search, Admin, Diagnostics, the feature sets, etc.). I often saw technical organizations organized into product areas with directors at the top, and some functional directors such as those involved in test and quality and/or performance.

Read more at DZone

How Continuous Integration Can Help You Keep Pace With the Linux Kernel

Written by Tomeu Vizoso, Principal Software Engineer at Collabora.

Almost all of Collabora’s customers use the Linux kernel on their products. Often they will use the exact code as delivered by the SBC vendors and we’ll work with them in other parts of their software stack. But it’s becoming increasingly common for our customers to adapt the kernel sources to the specific needs of their particular products.

A very big problem most of them have is that the kernel version they based their work on is no longer getting security updates because it is already several years old. And the reason companies ship such old kernels is that their trees have been so heavily modified relative to upstream that rebasing them on top of newer mainline releases is expensive and very hard to budget and plan for.
 
To avoid that, we always recommend that our customers stay close to their upstreams, which implies rebasing often on top of new releases (typically long-term support, or LTS, releases). For budgeting that work to be possible, the size of the delta between mainline and the downstream sources needs to be manageable, which is why we recommend contributing back any changes that aren’t strictly specific to their products.
 
But even for those few companies that already have processes in place for upstreaming their changes and are rebasing regularly on top of new LTS releases, keeping up with mainline can substantially disrupt their production schedules. This is in part because new bugs will be present in the new mainline release, and more will be introduced as the downstream changes are applied to the new version.
 
Those companies that are already keeping close to their upstreams typically have advanced QA infrastructure that will detect those bugs long before production, but a long stabilization phase after every rebase can significantly slow product development.
 
To improve this situation and encourage more companies to keep their efforts close to upstream, we at Collabora have been working for several years on continuous integration of FOSS components across a diverse array of hardware. The initial work was sponsored by Bosch for one of their automotive projects, and since the start of 2016 Google has been sponsoring work on continuous integration of the mainline kernel.
 
One of the major efforts to continuously integrate the mainline Linux kernel codebase is kernelci.org, which builds several configurations of different trees and submits boot jobs to several labs around the world, collating the results. This is already of great help in detecting, at a very early stage, any changes that either break the builds or prevent a specific piece of hardware from completing the boot stage.
 
Though kernelci.org can easily detect when an update to a source code repository has introduced a bug, such updates can contain several dozen new commits, and without knowing which specific commit introduced the bug, we cannot identify culprits to notify of the problem. This means that either someone needs to monitor the dashboard for problems, or email notifications are sent to the owners of the repositories, who then have to manually look for suspicious commits before getting in contact with their authors.
 
To address this limitation, Google has asked us to look into improving the existing code for automatic bisection so that it runs as soon as a regression is detected and the likely culprits are notified right away, without any manual intervention.
 
Another area in which kernelci.org is currently lacking is test coverage. Build and boot regressions are very annoying for developers because they negatively impact everybody who works on the affected configurations and hardware, but regressions in peripheral support or other subsystems that aren’t critical during boot can still make rebases much costlier.
 
At Collabora we have had a strong interest in having the DRM subsystem under continuous integration, and some time ago we started an R&D project to make the test suite in IGT generically useful for all DRM drivers. IGT started out i915-specific, but since most of the tests exercise the generic DRM ABI, they can be used to test other drivers with a moderate amount of effort. Early in 2016, Google started sponsoring this work, and as of today submitters of new drivers are using it to validate their code.
 
Another related effort has been the addition to DRM of a generic ABI for retrieving CRCs of frames from different components in the graphics pipeline, so two frames can be compared when we know that they should match. And another one is adding support to IGT for the Chamelium board, which can simulate several display connections and hotplug events.
 
A side-effect of having continuous integration of changes in mainline is that when downstreams are sending back changes to reduce their delta, the risk of introducing regressions is much smaller and their contributions can be accepted faster and with less effort.
 
We believe that improved QA of FOSS components will expand the base of companies that can benefit from involvement in development upstream and are very excited by the changes that this will bring to the industry.
 
If you are an engineer who cares about QA and FOSS, and would like to work with us on projects such as kernelci.org, LAVA, IGT and Chamelium, get in touch!

Provision Bare Metal Servers for OpenStack with Ironic

The day-to-day life of a developer can change drastically from one moment to the next, particularly when working on open source projects. Intel Cloud Software Senior Developer Ruby Loo spends her days working on OpenStack Ironic, the software that allows OpenStack to provision bare metal servers, as one of its core members.

While Loo is employed by Intel, the bulk of her daily interactions take place in the upstream open source community. As patches come in, Loo reviews them alongside other core members, ensuring that OpenStack Ironic’s feature priorities are met. In today’s episode of The New Stack Makers, recorded at OpenStack Summit Barcelona, Loo sat down with TNS Founder Alex Williams to explore her background, the daily tasks of an OpenStack core project member and active open source community participant, and what’s next for OpenStack Ironic.

Read more at The New Stack

China Adopts Cybersecurity Law in Face of Overseas Opposition

China adopted a controversial cyber security law on Monday to counter what Beijing says are growing threats such as hacking and terrorism, but the law triggered concerns among foreign business and rights groups.

The legislation, passed by China’s largely rubber-stamp parliament and set to take effect in June 2017, is an “objective need” of China as a major internet power, a parliament official said.

Overseas critics of the law say it threatens to shut foreign technology companies out of various sectors deemed “critical”, and includes contentious requirements for security reviews and for data to be stored on servers in China.

Read more at Reuters

Top 3 Questions Job Seekers Ask in Open Source

As a recruiter working in the open source world, I love that I interact every day with some of the smartest people around. I get to hear about the cool projects they’re working on and what they think about the industry, and when they are ready for a new challenge. I get to connect them to companies that are quietly changing the world.

But one thing I enjoy most about working with them is their curiosity: they ask questions, and in my conversations, I hear a lot of inquiries about the job search and application process. 

Read more at OpenSource.com

How to Recover a Deleted File in Linux

Did this ever happen to you? You realized that you had mistakenly deleted a file – either through the Del key, or using rm in the command line.

In the first case, you can always go to the Trash, search for the file, and restore it to its original location. But what about the second case? As I am sure you know, the Linux command line does not send removed files anywhere – it REMOVES them. Boom. They’re gone.

Read complete article at Tecmint

Get Trained and Certified on Kubernetes with The Linux Foundation and CNCF

Companies in diverse industries are increasingly building applications designed to run in the cloud at a massive, distributed scale. That means they are also seeking talent with experience deploying and managing such cloud native applications using containers in microservices architectures.

Kubernetes has quickly become the most popular container orchestration tool according to The New Stack, and thus is a hot new area for career development as the demand for IT practitioners skilled in Kubernetes has also surged. Apprenda, which runs its PaaS on top of Kubernetes, reported a spike in Kubernetes job postings this summer, and the need is only growing.

To meet this demand, The Linux Foundation and the Cloud Native Computing Foundation today announced they have partnered to provide training and certification for Kubernetes.  

The Linux Foundation will offer training through a free, massive open online course (MOOC) on edX as well as a self-paced, online course. The MOOC will cover the introductory concepts and skills involved, while the online course will teach the more advanced skills needed to create and configure a real-world working Kubernetes cluster.

The training course will be available soon, and the MOOC and certification program are expected to be available in 2017. The course is open now at the discounted price of $99 (regularly $199) for a limited time. Sign up here to pre-register for the course.

The course curriculum will also be open source and available on GitHub, Dan Kohn, CNCF Executive Director, said in his keynote today at CloudNativeCon in Seattle.

Certification will be offered by Kubernetes Managed Service Providers (KMSP) trained and certified by the CNCF. Nine companies with experience helping enterprises successfully adopt Kubernetes are committing engineers to participate in a CNCF working group that will develop the certification requirements. These early supporters include Apprenda, Canonical, Cisco, Container Solutions, CoreOS, Deis, Huawei, LiveWyer, and Samsung SDS. The companies are also interested in becoming certified KMSPs once the program is available next year.

Kubernetes is a software platform that makes it easier for developers to run containerized applications across diverse cloud infrastructures — from public cloud providers, to on-premise clouds and bare metal. Core functions include scheduling, service discovery, remote storage, autoscaling, and load balancing.

Google originally engineered the software to manage containers on its Borg infrastructure, but open sourced the project in 2014 and donated it earlier this year to the Cloud Native Computing Foundation at The Linux Foundation. It is now one of the most active open source projects on GitHub and has been one of the fastest growing projects of all time with a diverse community of contributors.

“Kubernetes has the opportunity to become the new cloud platform,” said Sam Ghods, a co-founder and Services Architect at Box, in his keynote at CloudNativeCon. “We have the opportunity to do what AWS did for infrastructure but this time in an open, universal, community-driven way.”

With more than 170 user groups worldwide, it’s already easy to hire people who are experts in Kubernetes, said Chen Goldberg, director of engineering for the Container Engine and Kubernetes team at Google, in her keynote at CloudNativeCon.

The training and certification from CNCF and The Linux Foundation will go even further to help develop the pool of Kubernetes talent worldwide.

Pre-register now for the online, self-paced Kubernetes Fundamentals course from The Linux Foundation and pay only $99 ($100 off registration)!