
Node.js Emerging as the Universal Development Framework for a Diversity of Applications

Last year and at the beginning of this year, we asked you, Node.js users, to help us understand where, how and why you are using Node.js. We wanted to see what technologies you are using alongside Node.js, how you are learning Node.js, what Node.js versions you are using, and how these answers differ across the globe.

Thank you to all who participated.

1,405 people from around the world (85+ countries) completed the survey, which was available in English and Mandarin. 67% of respondents were developers, 28% held managerial titles, and 5% listed their position as “other.” Geographic representation of the survey covered: 35% United States and Canada, 41% EMEA, 19% APAC, and 6% Latin and South America.

A wealth of incredible data was collected, revealing:

  • Users span a broad mix of development focus, ways of using Node.js, and deployment locations.
  • There is a large mix of tools and technologies used with Node.js.
  • Experience with Node.js is also varied — although many have been using Node.js less than 2 years.
  • Node.js is a key part of respondents’ toolkits, used for at least half of their development time.

The report also painted a detailed picture of the types of technologies being used with Node.js, language preferences alongside Node.js, and preferred production and development environments for the technology.

“Given developers’ important role in influencing the direction and pace of technology adoption, surveys of large developer communities are always interesting,” said Rachel Stephens, RedMonk Analyst. “This is particularly true when the community surveyed is strategically important like Node.js.”

In September, we will be releasing the interactive infographic of the results, which will allow you to dive deeper into your areas of interest. For the time being, check out our blog on the report below and download the executive summary here.

The Benefits of Node.js Grow with Time No Matter the Environment

Node.js is emerging as a universal development framework for digital transformation with a broad diversity of applications.

With more than 8 million Node.js instances online, three in four users are planning to increase their use of Node.js in the next 12 months. Many are learning Node.js in a foreign language, with China being the second-largest population of Node.js users outside of the United States.

Those who continue to use Node.js over time were more likely to note the increased business impact of the application platform. Key data includes:


Most Node.js users found that the application platform helped improve developer satisfaction and productivity, and benefited from cost savings and increased application performance.


The growth of Node.js within companies is a testament to the platform’s versatility. It is moving beyond being simply an application platform and beginning to be used for rapid experimentation with corporate data, application modernization, and IoT solutions. It is oftentimes the primary focus for developers, with the majority spending their Node.js time on the back end, full stack, and front end, although Node.js use is also beginning to rise in the Ops/DevOps sector and mobile.


Node.js Used Less Than 2 Years, but Growing Rapidly In Businesses

Experience with Node.js varied, although many respondents have been using Node.js for less than 2 years. Given the rapid pace of Node.js adoption, with a growth rate of about 100% year over year, this isn’t surprising.


Companies that were surveyed noted that they were planning to expand their use of Node.js for web applications, enterprise, IoT, embedded systems and big data analytics. Conversely, they are looking to decrease the use of Java, PHP and Ruby.


Large Mix of Technology and Tools Used Alongside Node.js for Digital Transformation

Modernizing systems and processes is a top priority across businesses and verticals. Node.js’ light footprint and componentized nature make it a perfect fit for microservices (both container- and serverless-based), enabling lean software development without gutting legacy infrastructure.

The survey revealed that 47% of respondents are using Node.js for container- and serverless-based architectures across development areas:

  • 50% of respondents are using containers for back-end development.
  • 52% of respondents are using containers for full-stack development.
  • 39% of respondents are using containers for front-end development.
  • 48% of respondents are using containers for another area of development with Node.js.

The use of Node.js expands well beyond containers and cloud-native apps to touch development with databases, front-end framework/libraries, load balancing, message systems and more.


And for developers of all focus areas, Node.js has versatile usage.


Amazon Leads Serverless Technology and Node.js

68% of survey respondents who are using Node.js for serverless are using Amazon Web Services for production. 47% of survey respondents using Node.js and serverless are using Amazon Web Services for development.

Developers who use Node.js and serverless used it across several development areas with the most popular being: back-end, full stack, front-end and DevOps.


The Big Data Benefit

Revenues for big data and business analytics are set to grow to more than $203 billion in 2020. Vendors in this market require distributed systems within their products and rely on Node.js for data analysis.

The survey revealed that big data/business analytics developers and managers are more likely to see major business impacts after integrating Node.js into their infrastructure, with the key benefits being productivity, satisfaction, cost containment, and increased application performance.

Node.js Grows in the Enterprise

With the creation of the long-term support (LTS) plan in 2015, there has been an increase in enterprise development work with Node.js. The LTS plan provides a stable release line for enterprises that prioritize security and stability, and a Current release line for developers looking for the latest updates and experimental features. The survey revealed:

  • 39% of respondents with the developer title are using Node.js for enterprise.
  • 59% of respondents with a manager title are using Node.js for enterprise.
  • 69% of enterprise users plan to increase their use of Node.js over the next 12 months.
  • 47% of enterprise users have been using Node.js for 3+ years and 58% of developers using Node.js have 10+ years in total development experience.

The Long-Term Support versions of Node.js tend to be the most highly sought after in development.


*Node.js 4 and 6 are the LTS versions, best suited for enterprises and others that favor stability and security over new features.

If you want to continue to learn more about Node.js, sign up for our monthly community newsletter, which will continue to pepper you with data for months to come, and also provide you with information on cool new projects using Node.js. The Node.js Foundation will also hold its annual conference from October 4–6 in Vancouver, Canada. Come join us at Node.js Interactive.

*Mark Hinkle, the Node.js Foundation’s Executive Director, will be talking about this data and providing an update on the Foundation during his keynote at NodeSummit.

This article originally appeared on HackerNoon.

Should the ‘KEG’ Stack Replace the LAMP Stack?

For years, the LAMP stack (Linux, Apache, MySQL, PHP/Python/Perl) has been an oasis for developers looking to build modern apps without getting locked into the desert of some big vendor’s ecosystem. It’s a convenient, widely used open-source framework that makes application architecture easy for developers.

Today, if you are not breaking applications down into smaller components that can be independently deployed and scaled with flexibility and resilience to failure, you’re practically toast. Two major trends underscore this shift: First, every layer of the stack is now available “as a service,” enabling developers to outsource many responsibilities once deeply embedded in the stack, and to ultimately ship better products faster.

Read more at The New Stack

7 Steps to Start Your Linux SysAdmin Career


Linux is hot right now. Everybody is looking for Linux talent. Recruiters are knocking down the doors of anybody with Linux experience, and there are tens of thousands of jobs waiting to be filled. But what if you want to take advantage of this trend and you’re new to Linux? How do you get started?

  1. Install Linux  

    It should almost go without saying, but the first key to learning Linux is to install Linux. Both the LFS101x and the LFS201 courses include detailed sections on installing and configuring Linux for the first time.

  2. Take LFS101x

    If you are completely new to Linux, the best place to start is our free LFS101x Introduction to Linux course. This online course is hosted by edX.org, and explores the various tools and techniques commonly used by Linux system administrators and end users to achieve their day-to-day work in a Linux environment. It is designed for experienced computer users who have limited or no previous exposure to Linux, whether they are working in an individual or enterprise environment. This course will give you a good working knowledge of Linux from both a graphical and command line perspective, allowing you to easily navigate through any of the major Linux distributions.

  3. Look into LFS201

    Once you’ve completed LFS101x, you’re ready to start diving into the more complicated tasks in Linux that will be required of you as a professional sysadmin. To gain those skills, you’ll want to take LFS201 Essentials of Linux System Administration. The course gives you in-depth explanations and instructions for each topic, along with plenty of exercises and labs to help you get real, hands-on experience with the subject matter.

    If you would rather have a live instructor teach you or you have an employer who is interested in helping you become a Linux sysadmin, you might also be interested in LFS220 Linux System Administration. This course includes all the same topics as the LFS201 course, but is taught by an expert instructor who can guide you through the labs and answer any questions you have on the topics covered in the course.

  4. Practice!

    Practice makes perfect, and that’s as true for Linux as it is for any musical instrument or sport. Once you’ve installed Linux, use it regularly. Perform key tasks over and over again until you can do them easily without reference material. Learn the ins and outs of the command line as well as the GUI. This practice will ensure that you’ve got the skills and knowledge to be successful as a professional Linux sysadmin.

  5. Get Certified

    After you’ve taken LFS201 or LFS220 and you’ve gotten some practice, you are now ready to get certified as a system administrator. You’ll need this certification because this is how you will prove to employers that you have the necessary skills to be a professional Linux sysadmin.

    There are several Linux certifications on the market today, and all of them have their place. However, most of these certifications are either centered on a specific distro (like Red Hat) or are purely knowledge-based and don’t demonstrate actual skill with Linux. The Linux Foundation Certified System Administrator certification is an excellent alternative for someone looking for a flexible, meaningful entry-level certification.

  6. Get Involved

    At this point you may also want to consider joining up with a local Linux Users Group (or LUG), if there’s one in your area. These groups are usually composed of people of all ages and experience levels, so regardless of where you are at with your Linux experience, you can find people with similar skill levels to bond with, or more advanced Linux users who can help answer questions and point you towards helpful resources. To find out if there’s a LUG near you, try looking on meetup.com, check with a nearby university, or just do a simple Internet search.

    There are also many online communities available to you as you learn Linux. These sites and communities provide help and support to both individuals new to Linux and experienced administrators.

  7. Learn to Love the Documentation

    Last but not least, if you ever get stuck on something within Linux, don’t forget about Linux’s included documentation. Using the commands man (for manual), info and help, you can find information on virtually every aspect of Linux, right from within the operating system. The usefulness of these built-in resources cannot be overstated, and you’ll find yourself using them throughout your career, so you might as well get familiar with them early on.

Interested in learning more about a career in system administration? Check out our free ebook, “Future Proof Your SysAdmin Career.”

20 Linux Commands Every Sysadmin Should Know

In a world bursting with new tools and diverse development environments, it’s practically a necessity for any developer or engineer to learn some basic sysadmin commands. Specific commands and packages can help developers organize, troubleshoot, and optimize their applications and—when things go wrong—provide valuable triage information to operators and sysadmins.

Whether you are a new developer or want to manage your own application, the following 20 basic sysadmin commands can help you better understand your applications. They can also help you describe problems to sysadmins troubleshooting why an application might work locally but not on a remote host. These commands apply to Linux development environments, containers, virtual machines (VMs), and bare metal.

Read more at OpenSource.com

Hands-On With Sparky Linux 5, Powered by Debian

Sparky Linux 5, based on Debian testing (buster), was recently released. I have taken a look at the standard desktop versions (Xfce, LXQt, and MATE), and created my own i3 desktop version.

I mentioned in my recent post about the release of Debian 9 (stretch) that the changes in Debian should soon start filtering through into the Debian-derived distributions. Sure enough, Sparky Linux announced a new release last weekend.

Sparky Linux is one of the few distributions which offers two versions, based on the Debian stable and testing branches. The new release is Sparky Linux 5, based on Debian testing.

The release announcement gives a brief overview, but because this version of Sparky is a rolling release distribution, there are not huge changes from the previous version.

Read more at ZDNet

DevOps Fundamentals, Part 3: Continuous Delivery and Deployment

We’re back with another installment in our preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. In the previous articles, we looked at high-performing organizations and then discussed the value stream. In this article, we move along to Continuous Delivery and Deployment.

Continuous Delivery builds on Continuous Integration; you can’t have Continuous Delivery without it. Let’s consider this definition from Effective DevOps by Jennifer Davis and Katherine Daniels.

Continuous delivery is the process of releasing new software frequently through the use of automated testing and continuous integration…

The authors continue: “It is closely related to CI, and is often thought of as taking CI one step further, so that beyond simply making sure that new changes are able to be integrated without causing regressions to automated tests, continuous delivery means that these changes are able to be deployed.”

In essence, the goal of Continuous Delivery is this: someone checks code into version control, the pipeline runs the build and tests, and if anything fails, the change is kicked back to the developer. You can watch the video below for more details.

This brings us to Continuous Deployment. The difference between delivery and deployment is that with deployment, the release to production is itself automated.

But, again, let’s check the definition from Effective DevOps because the authors are DevOps leaders in every sense of the word.

Continuous deployment is the process of deploying changes to production through the engineering of application deployment that has defined tests and validations to minimize risk. While continuous delivery makes sure that the changes are able to be deployed, continuous deployment means that they get deployed into production.

The key points here are that code actually gets deployed, and that Continuous Deployment encompasses both Continuous Integration and Continuous Delivery.

Teams can also mix and match: some changes are deployed automatically, while others are only delivered and wait for a manual release decision.

The main point is that continuous deployment is all automated. You hit the button. You commit. It is gone.
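The distinction drawn above can be summed up in a toy sketch (the stage names and logic are illustrative, not from the course material):

```javascript
// Toy model of the delivery-vs-deployment distinction.
// Stage names and return values are illustrative.
function runPipeline(change, { autoDeploy }) {
  // Continuous Integration: every checked-in change is built and tested.
  if (!change.testsPass) {
    return { status: 'kicked back' }; // failing changes go back to the developer
  }
  // Continuous Delivery: every passing change is proven deployable.
  if (!autoDeploy) {
    return { status: 'ready to deploy' }; // awaits a manual release decision
  }
  // Continuous Deployment: passing changes go straight to production.
  return { status: 'deployed' };
}

console.log(runPipeline({ testsPass: true }, { autoDeploy: true }).status); // 'deployed'
```

The only code difference between delivery and deployment in this sketch is the autoDeploy flag, which mirrors the point that Continuous Deployment is Continuous Delivery with the final release step automated.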

Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Read more:

DevOps Fundamentals: High-Performing Organizations

DevOps Fundamentals, Part 2: The Value Stream

Containing System Services in Red Hat Enterprise Linux – Part 1

At the 2017 Red Hat Summit, several people asked me, “We normally use full VMs to separate network services like DNS and DHCP; can we use containers instead?” The answer is yes, and here’s an example of how to create a system container in Red Hat Enterprise Linux 7 today.

The Goal

Create a network service that can be updated independently of any other services of the system, yet easily managed and updated from the host.

Let’s explore setting up a BIND server running under systemd in a container. In this part, we’ll look at building our container, as well as managing the BIND configuration and data files.

In Part Two, we’ll look at how systemd on the host integrates with systemd in the container. We’ll explore managing the service in the container, and enabling it as a service on the host.

Creating the Bind Container

To get systemd working inside a container easily, we first need to add two packages on the host: oci-register-machine and oci-systemd-hook. The oci-systemd-hook hook allows us to run systemd in a container without needing to use a privileged container or manually configuring tmpfs and cgroups. The oci-register-machine hook allows us to keep track of the container with the systemd tools like systemctl and machinectl.

Read more at Red Hat blog

8 Things Every Security Pro Should Know About GDPR

In just under one year, enforcement of the European Union’s General Data Protection Regulation (GDPR) will formally begin.

The statute requires any company, or entity, that handles personal data belonging to EU residents to comply with a broad set of requirements for protecting the privacy of that data. Significantly, GDPR vests EU residents with considerable control over their personal data, how it is used, and how it is made available to others. Under the statute, data subjects are the ultimate owners of their personal data, not the organizations that collect or use the data.

Companies that fail to comply with GDPR requirements can be fined between 2% and 4% of their annual global revenues or up to €20 million – which at current rates works out to just under $22.4 million USD – whichever is higher. 
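The “whichever is higher” rule means the ceiling scales with company size. A quick sketch of the upper bound, using the 4% tier and €20 million floor quoted in the article (the function name and example revenues are hypothetical):

```javascript
// Upper bound of a GDPR fine under the 4% tier, per the figures above:
// 4% of annual global revenue or €20 million, whichever is higher.
function maxGdprFineEur(annualGlobalRevenueEur) {
  const percentCap = (annualGlobalRevenueEur * 4) / 100; // 4% of revenue
  const floorEur = 20_000_000;                           // €20 million floor
  return Math.max(percentCap, floorEur);
}

// A company with €100M revenue: 4% is €4M, so the €20M floor applies.
console.log(maxGdprFineEur(100_000_000)); // 20000000
// A company with €1B revenue: 4% is €40M, which exceeds the floor.
console.log(maxGdprFineEur(1_000_000_000)); // 40000000
```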

Read more at Dark Reading

This Week in Scalability: System Backups in the Container Era

“To be clear; I am a believer in decentralization. Having built systems with that I can tell you it is not the magic dust you are looking for.” — Amazon Web Services Chief Technology Officer Werner Vogels

As we gear up to release our next e-book on the Kubernetes open source container orchestration engine (check with us in about a month), we have been reviewing how well K8s has been making its way into the enterprise — the true determinant of whether the software becomes an essential component of “the new stack,” so to speak.

Reviewing our notes from Kubecon 2017, held earlier this year in Berlin, we found some powerful testimonies from both Salesforce and Comcast. Salesforce is using it in a pilot program to power three cloud-native services, with plans to be running 20 services by the end of the year. When the company’s engineers were considering different orchestration options, they immediately appreciated the smarts behind Kubernetes. After all, many had come from other jobs managing large at-scale workloads.

Read more at The New Stack

Serverless Computing May Offer Better Economics Than Virtual Machines

Serverless computing is becoming yet another way for cloud service providers to parse out access to enterprises looking to take advantage of virtualized services. Think containers, only slightly different.

Serverless computing architectures are designed to reduce the amount of overhead associated with offering services in the cloud. This includes the ability for a cloud provider to dynamically manage server resources. In a recent report and accompanying webinar, 451 Research described serverless computing as being similar to function-as-a-service (FaaS). In fact, Owen Rogers, research director for 451 Research’s Digital Economics Unit, said the terms are basically interchangeable.

Read more at SDxCentral